path: root/arch/x86/kernel/ftrace_32.S
Age | Commit message | Author | Files | Lines
2019-10-25 | x86/ftrace: Get rid of function_hook | Borislav Petkov | 1 | -5/+3
History lesson courtesy of Steve:

"When ftrace first was introduced to the kernel, it used gcc's mcount profiling mechanism. The mcount mechanism would add a call to "mcount" at the start of every function but after the stack frame was set up. Later, in gcc 4.6, gcc introduced -mfentry, that would create a call to "__fentry__" instead of "mcount", before the stack frame was set up.

In order to handle both cases, ftrace defined a macro "function_hook" that would be either "mcount" or "__fentry__" depending on which one was being used.

The Linux kernel no longer supports the "mcount" method, thus there's no reason to keep the "function_hook" define around. Simply use "__fentry__", as there is no ambiguity to the name anymore."

Drop it everywhere.

Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jiri Slaby <jslaby@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20191018124800.0a7006bb@gandalf.local.home
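For reference, the construct being removed looked roughly like the sketch below; this is an illustration of the idea rather than the exact hunk, and assumes the x86_32 stub whose body is just a ret:

    /* before: one name covering two possible hooks */
    #ifdef CC_USING_FENTRY
    # define function_hook  __fentry__
    #else
    # define function_hook  mcount
    #endif

    SYM_FUNC_START(function_hook)
        ret
    SYM_FUNC_END(function_hook)

    /* after: mcount support is gone, so name the hook directly */
    SYM_FUNC_START(__fentry__)
        ret
    SYM_FUNC_END(__fentry__)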
2019-10-18 | x86/asm: Replace WEAK uses by SYM_INNER_LABEL_ALIGN | Jiri Slaby | 1 | -1/+1
Use the new SYM_INNER_LABEL_ALIGN for WEAK entries in the middle of x86 assembly functions. And make sure WEAK is not defined for x86 anymore as these were the last users.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191011115108.12392-29-jslaby@suse.cz
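The shape of the conversion can be sketched as below, using ftrace_stub as the example label and the SYM_L_WEAK linkage constant from <linux/linkage.h>; treat it as an illustration, not the verbatim hunk:

    /* before: a weak, aligned label in the middle of a function */
    WEAK(ftrace_stub)

    /* after: the same label spelled with the new annotation macros */
    SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)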
2019-10-18 | x86/asm/32: Change all ENTRY+END to SYM_CODE_* | Jiri Slaby | 1 | -4/+4
Change all assembly code which is marked using END (and not ENDPROC) to the appropriate new markings SYM_CODE_START and SYM_CODE_END. And since the last user of END on X86 is gone now, make sure that END is not defined there.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191011115108.12392-27-jslaby@suse.cz
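The pattern, sketched with a placeholder symbol (the actual symbols converted in ftrace_32.S may differ):

    /* before */
    ENTRY(ftrace_graph_caller)
        /* body unchanged */
    END(ftrace_graph_caller)

    /* after */
    SYM_CODE_START(ftrace_graph_caller)
        /* body unchanged */
    SYM_CODE_END(ftrace_graph_caller)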
2019-10-18 | x86/asm/32: Add ENDs to some functions and relabel with SYM_CODE_* | Jiri Slaby | 1 | -1/+2
All these are functions which are invoked from elsewhere, but they are not typical C functions. So annotate them using the new SYM_CODE_START. None of them were balanced with an END, so mark their ends with SYM_CODE_END, as appropriate.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [hibernate]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Len Brown <len.brown@intel.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pingfan Liu <kernelfans@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Cc: xen-devel@lists.xenproject.org
Link: https://lkml.kernel.org/r/20191011115108.12392-26-jslaby@suse.cz
2019-10-18 | x86/asm/ftrace: Mark function_hook as function | Jiri Slaby | 1 | -2/+2
Relabel function_hook so that it is really marked as a function. It is called from C and has the same expectations towards the stack etc.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191011115108.12392-22-jslaby@suse.cz
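A minimal sketch of what "marked as a function" means in practice, assuming the old ENTRY/END style annotation and a stub body that is a bare ret:

    /* before: plain code-style annotation */
    ENTRY(function_hook)
        ret
    END(function_hook)

    /* after: annotated as a proper, C-callable function */
    SYM_FUNC_START(function_hook)
        ret
    SYM_FUNC_END(function_hook)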
2019-10-18 | x86/asm: Use SYM_INNER_LABEL instead of GLOBAL | Jiri Slaby | 1 | -1/+1
The GLOBAL macro had several meanings and is going away. Convert all the inner function labels marked with GLOBAL to use SYM_INNER_LABEL instead.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191011115108.12392-18-jslaby@suse.cz
2019-06-25 | x86/stackframe/32: Provide consistent pt_regs | Peter Zijlstra | 1 | -33/+42
Currently pt_regs on x86_32 has an oddity in that kernel regs (!user_mode(regs)) are short two entries (esp/ss). This means that any code trying to use them (typically: regs->sp) needs to jump through some unfortunate hoops.

Change the entry code to fix this up and create a full pt_regs frame. This then simplifies various trampolines in ftrace and kprobes, the stack unwinder, ptrace, kdump and kgdb.

Much thanks to Josh for help with the cleanups!

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
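On x86_32, struct pt_regs ends with sp and ss, which are exactly the two entries kernel-mode regs used to be short. A simplified sketch of what a trampoline now does to build a complete frame (fields pushed in reverse struct order; the real code also fixes up the saved sp and ip afterwards):

    pushl   %ss             /* regs->ss    (previously missing for kernel regs) */
    pushl   %esp            /* regs->sp    (previously missing; adjusted later) */
    pushfl                  /* regs->flags */
    pushl   $__KERNEL_CS    /* regs->cs */
    pushl   $0              /* regs->ip    (filled in with the traced ip later) */
    pushl   $0              /* regs->orig_ax */
    pushl   %gs
    pushl   %fs
    pushl   %es
    pushl   %ds
    pushl   %eax
    pushl   %ebp
    pushl   %edi
    pushl   %esi
    pushl   %edx
    pushl   %ecx
    pushl   %ebx            /* regs->bx: lowest field, now a full pt_regs on the stack */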
2019-06-25 | x86/stackframe, x86/ftrace: Add pt_regs frame annotations | Peter Zijlstra | 1 | -0/+3
When CONFIG_FRAME_POINTER is set, we should mark pt_regs frames.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-05-10 | ftrace/x86: Remove mcount support | Steven Rostedt (VMware) | 1 | -31/+5
There are two methods of enabling function tracing in Linux on x86. One is with just "gcc -pg" and the other is "gcc -pg -mfentry". The former will use calls to a special function "mcount" after the frame is set up in all C functions. The latter will add calls to a special function called "fentry" as the very first instruction of all C functions.

At compile time, there is a check to see if gcc supports -mfentry, and if it does, it will use that, because it is more versatile and less error prone for function tracing.

Starting with v4.19, the minimum gcc version supported to build the Linux kernel was raised to 4.6. That also happens to be the first gcc version to support -mfentry. Since gcc 4.6 and beyond on x86 will unconditionally enable -mfentry, mcount will no longer be used as the method for inserting calls into the C functions of the kernel. This means that there is no point in continuing to maintain mcount in x86.

Remove support for using mcount. This makes the code less complex, and will also allow it to be simplified in the future.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
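For illustration, hand-sketched compiler output for a plain C function foo() shows where the two mechanisms place the profiling call:

    # gcc -pg (mcount): the call comes after the frame is set up
    foo:
        pushl   %ebp
        movl    %esp, %ebp
        call    mcount
        # ... function body ...

    # gcc -pg -mfentry: the call is the very first instruction
    foo:
        call    __fentry__
        pushl   %ebp
        movl    %esp, %ebp
        # ... function body ...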
2019-05-10 | ftrace/x86_32: Remove support for non DYNAMIC_FTRACE | Steven Rostedt (VMware) | 1 | -39/+0
When DYNAMIC_FTRACE is enabled in the kernel, all the functions that can be traced by the function tracer have a "nop" placeholder at the start of the function. When function tracing is enabled, the nop is converted into a call to the tracing infrastructure where the functions get traced. This also allows for specifying specific functions to trace, and a lot of infrastructure is built on top of this.

When DYNAMIC_FTRACE is not enabled, all the functions have a call to the ftrace trampoline. A check is made to see if a function pointer is the ftrace_stub or not, and if it is not, it calls the function pointer to trace the code. This adds over 10% overhead to the kernel even when tracing is disabled.

When an architecture supports DYNAMIC_FTRACE there really is no reason to use the static tracing. I have kept non DYNAMIC_FTRACE available in x86 so that the generic code for non DYNAMIC_FTRACE can be tested. There is no reason to support non DYNAMIC_FTRACE for both x86_64 and x86_32. As the non DYNAMIC_FTRACE for x86_32 does not even support fentry, and we want to remove mcount completely, there's no reason to keep non DYNAMIC_FTRACE around for x86_32.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
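The static, always-present hook being removed worked roughly like the sketch below: every traced function calls into it unconditionally and it tests the ftrace_trace_function pointer, which is where the constant overhead comes from (a simplified illustration, not the removed code verbatim):

    __fentry__:
        cmpl    $ftrace_stub, ftrace_trace_function
        jnz     .Ltrace
    ftrace_stub:
        ret
    .Ltrace:
        /* save argument registers, load ip and parent_ip ... */
        call    *ftrace_trace_function
        /* ... restore registers and return */
        ret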
2018-01-12 | x86/retpoline/ftrace: Convert ftrace assembler indirect jumps | David Woodhouse | 1 | -2/+4
Convert all indirect jumps in ftrace assembler code to use non-speculative sequences when CONFIG_RETPOLINE is enabled.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: thomas.lendacky@amd.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/1515707194-20531-8-git-send-email-dwmw@amazon.co.uk
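The conversion pattern is roughly as follows, assuming the JMP_NOSPEC helper from asm/nospec-branch.h; the register choice here is illustrative:

    /* before: a plain indirect jump, which can be speculated */
    jmp     *ftrace_trace_function

    /* after: load the target and jump through the retpoline-safe macro */
    movl    ftrace_trace_function, %ecx
    JMP_NOSPEC %ecx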
2017-11-02 | License cleanup: add SPDX GPL-2.0 license identifier to files with no license | Greg Kroah-Hartman | 1 | -0/+1
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few thousand files.

The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied.

   For non */uapi/* files that summary was:

     SPDX license identifier                              # files
     ---------------------------------------------------|-------
     GPL-2.0                                               11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:

     SPDX license identifier                              # files
     ---------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                         930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

     SPDX license identifier                              # files
     ---------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                         270
     GPL-2.0+ WITH Linux-syscall-note                        169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
     LGPL-2.1+ WITH Linux-syscall-note                        15
     GPL-1.0+ WITH Linux-syscall-note                         14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
     LGPL-2.0+ WITH Linux-syscall-note                         4
     LGPL-2.1 WITH Linux-syscall-note                          3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became the concluded license(s).

 - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.

 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).

 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.

 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation.

Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores,
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct,
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct.

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-04-21 | x86/ftrace: Fix ebp in ftrace_regs_caller that screws up unwinder | Steven Rostedt (VMware) | 1 | -4/+4
Fengguang Wu's zero day bot triggered a stack unwinder dump. This can be easily triggered when CONFIG_FRAME_POINTER is enabled and -mfentry is in use on x86_32:

    # cd /sys/kernel/debug/tracing
    # echo 'p:schedule schedule' > kprobe_events
    # echo stacktrace > events/kprobes/schedule/trigger

This is because the code that implemented fentry in ftrace_regs_caller tried to use the least amount of #ifdefs, and modified ebp when CC_USING_FENTRY was defined to point to the parent ip, as it does when CC_USING_FENTRY is not defined. But when CONFIG_FRAME_POINTER is set, this corrupts the ebp register for this frame while doing the tracing.

NOTE, it does not corrupt ebp in any other way. It is just a bad frame pointer when calling into the tracing infrastructure. The original ebp is restored before returning from the fentry call. But if a stack trace is performed inside the tracing, the unwinder will notice the bad ebp.

Instead of toying with ebp under CC_USING_FENTRY, just slap the parent ip into the second parameter (%edx), and have an #else that does it the original way.

The unwinder will unfortunately miss the function being traced, as the stack frame is not set up yet for it, as it is for x86_64. But fixing that is a bit more complex and did not work before anyway.

This has been tested with and without FRAME_POINTER being set while using -mfentry, as well as using an older compiler that uses mcount.

Analyzed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Fixes: 644e0e8dc76b ("x86/ftrace: Add -mfentry support to x86_32 with DYNAMIC_FTRACE set")
Reported-by: kernel test robot <fengguang.wu@intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lists.01.org/pipermail/lkp/2017-April/006165.html
Link: http://lkml.kernel.org/r/20170420172236.7af7f6e5@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
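Conceptually, the fix loads the parent ip straight into %edx (the second argument register under the kernel's 32-bit regparm(3) convention) instead of routing it through ebp; a simplified sketch, with offsets that are illustrative only and depend on what has already been pushed:

    #ifdef CC_USING_FENTRY
        movl    4(%esp), %edx   /* parent ip: the return address pushed by 'call __fentry__' */
    #else
        movl    4(%ebp), %edx   /* parent ip: read through the caller's frame pointer */
    #endif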
2017-03-24 | x86/ftrace: Use Makefile logic instead of #ifdef for compiling ftrace_*.o | Steven Rostedt (VMware) | 1 | -3/+0
Currently ftrace_32.S and ftrace_64.S are compiled even when CONFIG_FUNCTION_TRACER is not set. This means there's an unnecessary #ifdef to protect the code. Instead of using preprocessor directives, only compile those files when FUNCTION_TRACER is defined.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170316210043.peycxdxktwwn6cid@treble
Link: http://lkml.kernel.org/r/20170323143446.217684991@goodmis.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-03-24 | x86/ftrace: Add -mfentry support to x86_32 with DYNAMIC_FTRACE set | Steven Rostedt (VMware) | 1 | -10/+72
x86_64 has had fentry support for some time. I did not add support to x86_32 as I was unsure if it would be used much in the future. It is still very much used, and there are issues with function graph tracing with gcc playing around with the mcount frames, causing function graph to panic. The fentry code does not have this issue, and is able to cope as there is no frame to mess up.

Note, this only adds support for fentry when DYNAMIC_FTRACE is set. There's really no reason to not have that set, because the performance of the machine drops significantly when it's not enabled. Keep !DYNAMIC_FTRACE around for testing, as there are still some archs that have FTRACE but not DYNAMIC_FTRACE.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170323143446.052202377@goodmis.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-03-24 | x86/ftrace: Clean up ftrace_regs_caller | Steven Rostedt (VMware) | 1 | -19/+21
When ftrace_regs_caller was created, it was designed to preserve flags as much as possible, as it needed to act just like a breakpoint triggered on the same location. But the design is overcomplicated: it treated all operations as if they modified flags, when push, mov and lea do not. This means the code can be simplified by allowing flags to be stored further down.

Making ftrace_regs_caller simpler will also be useful in implementing the fentry logic.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170316135328.36123c3e@gandalf.local.home
Link: http://lkml.kernel.org/r/20170323143445.917292592@goodmis.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
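The flags observation can be illustrated directly, since x86 push, mov and lea leave EFLAGS untouched:

    /* over-cautious: flags saved first, as if every later insn clobbered them */
    pushfl
    pushl   %eax
    pushl   %ecx
    pushl   %edx

    /* equivalent and simpler: defer the pushfl past instructions that do not touch flags */
    pushl   %eax
    pushl   %ecx
    pushl   %edx
    pushfl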
2017-03-24 | x86/ftrace: Add stack frame pointer to ftrace_caller | Steven Rostedt (VMware) | 1 | -2/+10
The function hook ftrace_caller does not create its own stack frame, and this causes the ftrace stack trace to miss the first function when doing stack traces.

    # echo schedule:stacktrace > /sys/kernel/tracing/set_ftrace_filter

Before:

    <idle>-0     [002] .N..    29.865807: <stack trace>
 => cpu_startup_entry
 => start_secondary
 => startup_32_smp
     <...>-7     [001] ....    29.866509: <stack trace>
 => kthread
 => ret_from_fork
     <...>-1     [000] ....    29.865377: <stack trace>
 => poll_schedule_timeout
 => do_select
 => core_sys_select
 => SyS_select
 => do_fast_syscall_32
 => entry_SYSENTER_32

After:

    <idle>-0     [002] .N..    31.234853: <stack trace>
 => do_idle
 => cpu_startup_entry
 => start_secondary
 => startup_32_smp
     <...>-7     [003] ....    31.235140: <stack trace>
 => rcu_gp_kthread
 => kthread
 => ret_from_fork
     <...>-1819  [000] ....    31.264172: <stack trace>
 => schedule_hrtimeout_range
 => poll_schedule_timeout
 => do_sys_poll
 => SyS_ppoll
 => do_fast_syscall_32
 => entry_SYSENTER_32

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170323143445.771707773@goodmis.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
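A heavily simplified sketch of the idea: give ftrace_caller a standard frame-pointer prologue so the unwinder can walk from the tracer back through the traced function (the real code builds the frame so the traced function's ip is visible, which this sketch glosses over):

    ftrace_caller:
        pushl   %ebp
        movl    %esp, %ebp      /* frame for the trampoline itself */
        /* ... save regs, call the ftrace handler ... */
        popl    %ebp
        ret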
2017-03-24 | x86/ftrace: Move the ftrace specific code out of entry_32.S | Steven Rostedt (VMware) | 1 | -0/+175
The function tracing hook code for ftrace is not an entry point from userspace and does not belong in the entry_*.S files. It has already been moved out of entry_64.S. Move it out of entry_32.S into its own ftrace_32.S file.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170323143445.645218946@goodmis.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>