path: root/arch/powerpc/kernel/stacktrace.c
2021-10-01  kprobes: treewide: Make it harder to refer kretprobe_trampoline directly
Author: Masami Hiramatsu  (1 file, -1/+1)

Since there is now kretprobe_trampoline_addr() for referring to the address
of the kretprobe trampoline code, we don't need to access
kretprobe_trampoline directly. Make it harder to refer to by renaming it to
__kretprobe_trampoline().

Link: https://lkml.kernel.org/r/163163045446.489837.14510577516938803097.stgit@devnote2
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
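For illustration, a minimal sketch of the pattern this enables; it assumes
only that kretprobe_trampoline_addr() is available from <linux/kprobes.h>
as introduced by this series, and the helper name is hypothetical:

    #include <linux/kprobes.h>

    /* True if the saved return address is really the kretprobe trampoline. */
    static bool ip_is_kretprobe_trampoline(unsigned long ip)
    {
            return ip == kretprobe_trampoline_addr();
    }

Unwinders that detect the trampoline this way can then report the probed
function's real return address instead of the trampoline itself.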
2021-08-03  powerpc/stacktrace: Include linux/delay.h
Author: Michal Suchanek  (1 file, -0/+1)

Commit 7c6986ade69e ("powerpc/stacktrace: Fix spurious "stale" traces in
raise_backtrace_ipi()") introduced a udelay() call without including the
linux/delay.h header. This may happen to work on master, but the header
that declares the function should be included nonetheless.

Fixes: 7c6986ade69e ("powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()")
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210729180103.15578-1-msuchanek@suse.de
2021-06-25  powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()
Author: Michael Ellerman  (1 file, -6/+20)

In raise_backtrace_ipi() we iterate through the cpumask of CPUs, sending
each an IPI asking them to do a backtrace, but we don't wait for the
backtrace to happen. We then iterate through the CPU mask again, and if any
CPU hasn't done the backtrace and cleared itself from the mask, we print a
trace on its behalf, noting that the trace may be "stale".

This works well enough when a CPU is not responding, because in that case
it doesn't receive the IPI and the sending CPU is left to print the trace.
But when all CPUs are responding we are left with a race between the
sending and receiving CPUs; if the sending CPU wins the race then it will
erroneously print a trace.

This leads to spurious "stale" traces from the sending CPU, which can then
be interleaved messily with the receiving CPU's output. Note the CPU
numbers, eg:

  [ 1658.929157][ C7] rcu: Stack dump where RCU GP kthread last ran:
  [ 1658.929223][ C7] Sending NMI from CPU 7 to CPUs 1:
  [ 1658.929303][ C1] NMI backtrace for cpu 1
  [ 1658.929303][ C7] CPU 1 didn't respond to backtrace IPI, inspecting paca.
  [ 1658.929362][ C1] CPU: 1 PID: 325 Comm: kworker/1:1H Tainted: G W E 5.13.0-rc2+ #46
  [ 1658.929405][ C7] irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 325 (kworker/1:1H)
  [ 1658.929465][ C1] Workqueue: events_highpri test_work_fn [test_lockup]
  [ 1658.929549][ C7] Back trace of paca->saved_r1 (0xc0000000057fb400) (possibly stale):
  [ 1658.929592][ C1] NIP: c00000000002cf50 LR: c008000000820178 CTR: c00000000002cfa0

To fix it, change the logic so that the sending CPU waits 5s for the
receiving CPU to print its trace. If the receiving CPU prints its trace
successfully then the sending CPU just continues, avoiding any spurious
"stale" trace.

This has the added benefit of allowing all CPUs to print their traces in
order and avoids any interleaving of their output.

Fixes: 5cc05910f26e ("powerpc/64s: Wire up arch_trigger_cpumask_backtrace()")
Cc: stable@vger.kernel.org # v4.18+
Reported-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210625140408.3351173-1-mpe@ellerman.id.au
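A sketch of the sending-side wait described above; the 5s budget comes from
the commit text, while the poll interval and helper name are illustrative:

    #include <linux/cpumask.h>
    #include <linux/delay.h>

    /* Poll for up to ~5s; the receiving CPU clears itself from `mask`
     * once it has printed its own trace. */
    static bool wait_for_backtrace(int cpu, cpumask_t *mask)
    {
            int i;

            for (i = 0; i < 5000; i++) {
                    if (!cpumask_test_cpu(cpu, mask))
                            return true;    /* CPU printed its own trace */
                    udelay(1000);
            }
            return false;   /* timed out: print a possibly stale trace */
    }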
2021-06-16  powerpc: make stack walking KASAN-safe
Author: Daniel Axtens  (1 file, -4/+4)

Make our stack-walking code KASAN-safe by using __no_sanitize_address.
Generic code, arm64, s390 and x86 all make accesses unchecked for similar
sorts of reasons: when unwinding a stack, we might touch memory that KASAN
has marked as being out-of-bounds. In ppc64 KASAN development, I hit this
sometimes when checking for an exception frame - because we're checking an
arbitrary offset into the stack frame.

See commit 20955746320e ("s390/kasan: avoid false positives during stack
unwind"), commit bcaf669b4bdb ("arm64: disable kasan when accessing
frame->fp in unwind_frame"), commit 91e08ab0c851 ("x86/dumpstack: Prevent
KASAN false positive warnings") and commit 6e22c8366416 ("tracing, kasan:
Silence Kasan warning in check_stack of stack_tracer").

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210614120907.1952321-1-dja@axtens.net
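The mechanism in sketch form; __no_sanitize_address is the real compiler
attribute, the helper around it is illustrative:

    /* KASAN does not instrument loads in this function, so probing an
     * arbitrary stack offset while unwinding cannot trigger a report. */
    static unsigned long __no_sanitize_address read_backchain(unsigned long sp)
    {
            return *(unsigned long *)sp;
    }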
2021-03-29  powerpc: Fix arch_stack_walk() to have running function as first entry
Author: Christophe Leroy  (1 file, -0/+3)

Other architectures, namely x86, arm64 and riscv at least, include the
running function as the top entry when saving a stack trace with
save_stack_trace_regs(). Functionality like KFENCE expects it. Do the same
on powerpc; it allows KFENCE and other users to properly identify the
faulting function, as depicted below. Before the patch, KFENCE was
identifying finish_task_switch.isra as the faulting function.

  [ 14.937370] ==================================================================
  [ 14.948692] BUG: KFENCE: invalid read in test_invalid_access+0x54/0x108
  [ 14.948692]
  [ 14.956814] Invalid read at 0xdf98800a:
  [ 14.960664]  test_invalid_access+0x54/0x108
  [ 14.964876]  finish_task_switch.isra.0+0x54/0x23c
  [ 14.969606]  kunit_try_run_case+0x5c/0xd0
  [ 14.973658]  kunit_generic_run_threadfn_adapter+0x24/0x30
  [ 14.979079]  kthread+0x15c/0x174
  [ 14.982342]  ret_from_kernel_thread+0x14/0x1c
  [ 14.986731]
  [ 14.988236] CPU: 0 PID: 111 Comm: kunit_try_catch Tainted: G B 5.12.0-rc1-01537-g95f6e2088d7e-dirty #4682
  [ 14.999795] NIP: c016ec2c LR: c02f517c CTR: c016ebd8
  [ 15.004851] REGS: e2449d90 TRAP: 0301 Tainted: G B (5.12.0-rc1-01537-g95f6e2088d7e-dirty)
  [ 15.015274] MSR: 00009032 <EE,ME,IR,DR,RI> CR: 22000004 XER: 00000000
  [ 15.022043] DAR: df98800a DSISR: 20000000
  [ 15.022043] GPR00: c02f517c e2449e50 c1142080 e100dd24 c084b13c 00000008 c084b32b c016ebd8
  [ 15.022043] GPR08: c0850000 df988000 c0d10000 e2449eb0 22000288
  [ 15.040581] NIP [c016ec2c] test_invalid_access+0x54/0x108
  [ 15.046010] LR [c02f517c] kunit_try_run_case+0x5c/0xd0
  [ 15.051181] Call Trace:
  [ 15.053637] [e2449e50] [c005a68c] finish_task_switch.isra.0+0x54/0x23c (unreliable)
  [ 15.061338] [e2449eb0] [c02f517c] kunit_try_run_case+0x5c/0xd0
  [ 15.067215] [e2449ed0] [c02f648c] kunit_generic_run_threadfn_adapter+0x24/0x30
  [ 15.074472] [e2449ef0] [c004e7b0] kthread+0x15c/0x174
  [ 15.079571] [e2449f30] [c001317c] ret_from_kernel_thread+0x14/0x1c
  [ 15.085798] Instruction dump:
  [ 15.088784] 8129d608 38e7ebd8 81020280 911f004c 39000000 995f0024 907f0028 90ff001c
  [ 15.096613] 3949000a 915f0020 3d40c0d1 3d00c085 <8929000a> 3908adb0 812a4b98 3d40c02f
  [ 15.104612] ==================================================================

Fixes: 35de3b1aa168 ("powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/21324f9e2f21d1640c8397b4d1d857a9355a2283.1615881400.git.christophe.leroy@csgroup.eu
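The shape of the fix, as a hedged sketch against the generic ARCH_STACKWALK
interface (see the conversion entry below):

    void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                         struct task_struct *task, struct pt_regs *regs)
    {
            /* Report the running function first, as x86/arm64/riscv do. */
            if (regs && !consume_entry(cookie, regs->nip))
                    return;

            /* ... then walk the saved link registers as before ... */
    }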
2021-03-29  powerpc: Convert stacktrace to generic ARCH_STACKWALK
Author: Christophe Leroy  (1 file, -75/+16)

This patch converts powerpc stacktrace to the generic ARCH_STACKWALK
implemented by commit 214d8ca6ee85 ("stacktrace: Provide common
infrastructure").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/73b36bbb101299760b95ecd2cd3a46554bea8bf9.1615881400.git.christophe.leroy@csgroup.eu
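A hedged sketch of what the converted walker looks like, reusing powerpc's
validate_sp() helper and frame-offset macros; it is simplified (for one,
the real code special-cases task == current) and details differ from the
actual patch:

    #include <linux/stacktrace.h>
    #include <asm/processor.h>
    #include <asm/ptrace.h>

    void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                         struct task_struct *task, struct pt_regs *regs)
    {
            unsigned long sp = regs ? regs->gpr[1] : task->thread.ksp;

            while (validate_sp(sp, task, STACK_FRAME_OVERHEAD)) {
                    unsigned long *stack = (unsigned long *)sp;

                    /* The LR save slot holds this frame's return address. */
                    if (!consume_entry(cookie, stack[STACK_FRAME_LR_SAVE]))
                            return;
                    sp = stack[0];  /* follow the backchain */
            }
    }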
2021-03-29  powerpc: Rename 'tsk' parameter into 'task'
Author: Christophe Leroy  (1 file, -8/+8)

To better match generic code, rename 'tsk' to 'task' in some stacktrace
functions, in preparation for the following patch which converts powerpc to
generic ARCH_STACKWALK.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/117f0200e11961af6c0fdf85c98373e5dcf96a47.1615881400.git.christophe.leroy@csgroup.eu
2021-03-29  powerpc: Activate HAVE_RELIABLE_STACKTRACE for all
Author: Christophe Leroy  (1 file, -2/+0)

CONFIG_HAVE_RELIABLE_STACKTRACE is applicable to all platforms; there is no
reason to limit it to book3s/64le.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/955248c6423cb068c5965923121ba31d4dd2fdde.1615881400.git.christophe.leroy@csgroup.eu
2020-06-09  kernel: rename show_stack_loglvl() => show_stack()
Author: Dmitry Safonov  (1 file, -1/+1)

Now that the last users of show_stack() have been converted to use an
explicit log level, show_stack_loglvl() can drop its redundant suffix and
become once again the well-known show_stack().

Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200418201944.482088-51-dima@arista.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-03-04  powerpc: Rename current_stack_pointer() to current_stack_frame()
Author: Michael Ellerman  (1 file, -3/+3)

current_stack_pointer(), which was called __get_SP(), used to just return
the value in r1. But that caused problems in some cases, so it was turned
into a function in commit bfe9a2cfe91a ("powerpc: Reimplement __get_SP() as
a function not a define").

Because it's a function in a separate compilation unit to all its callers,
it has the effect of causing a stack frame to be created, and then returns
the address of that frame. This is good in some cases like those described
in the above commit, but in other cases it's overkill, we just need to know
what stack page we're on.

On some other arches current_stack_pointer is just a register global giving
the stack pointer, and we'd like to do that too. So rename our
current_stack_pointer() to current_stack_frame() to make that possible.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Link: https://lore.kernel.org/r/20200220115141.2707-1-mpe@ellerman.id.au
2019-09-18  powerpc/ftrace: Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
Author: Naveen N. Rao  (1 file, -1/+1)

This associates entries in the ftrace_ret_stack with corresponding stack
frames, enabling more robust stack unwinding. Also update the only user of
ftrace_graph_ret_addr() to pass the stack pointer.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0224f2d0971b069c678e2ff678cfc2cd1e114cfe.1567707399.git.naveen.n.rao@linux.vnet.ibm.com
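An illustrative call site (ftrace_graph_ret_addr() is real, from
<linux/ftrace.h>; the variable names are mine): with
HAVE_FUNCTION_GRAPH_RET_ADDR_PTR the unwinder also passes the location of
the return address on the stack, so the right ftrace_ret_stack entry can be
matched even across nested or repeated calls:

    #include <linux/ftrace.h>

    /* `stack` points into the frame being unwound; `graph_idx` persists
     * across iterations of the unwind loop. */
    ip = ftrace_graph_ret_addr(current, &graph_idx, ip, stack);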
2019-03-02  powerpc: Remove export of save_stack_trace_tsk_reliable()
Author: Joe Lawrence  (1 file, -1/+0)

As tglx points out, there are no in-tree module users of
save_stack_trace_tsk_reliable() and its x86 counterpart is not exported, so
remove the powerpc symbol export.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-23  powerpc: prep stack walkers for THREAD_INFO_IN_TASK
Author: Christophe Leroy  (1 file, -3/+26)

[text copied from commit 9bbd4c56b0b6 ("arm64: prep stack walkers for THREAD_INFO_IN_TASK")]

When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed
before a task is destroyed. To account for this, the stacks are refcounted,
and when manipulating the stack of another task, it is necessary to get/put
the stack to ensure it isn't freed and/or re-used while we do so.

This patch reworks the powerpc stack walking code to account for this. When
CONFIG_THREAD_INFO_IN_TASK is not selected these perform no refcounting,
and this should only be a structural change that does not affect behaviour.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move try_get_task_stack() below tsk == NULL check in show_stack()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
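The resulting pattern, sketched (try_get_task_stack()/put_task_stack() are
the real <linux/sched/task_stack.h> API; the walker body is elided):

    #include <linux/sched/task_stack.h>

    static void walk_task_stack(struct task_struct *tsk)
    {
            if (!try_get_task_stack(tsk))
                    return;         /* stack already freed: task is exiting */

            /* ... safe to dereference tsk's stack here ... */

            put_task_stack(tsk);
    }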
2019-01-31  powerpc/livepatch: return -ERRNO values in save_stack_trace_tsk_reliable()
Author: Joe Lawrence  (1 file, -7/+7)

To match its x86 counterpart, save_stack_trace_tsk_reliable() should return
-EINVAL in cases where it currently returns 1. No caller currently
differentiates non-zero error codes, but let's keep the arch-specific
implementations consistent.

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-01-31  powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
Author: Joe Lawrence  (1 file, -26/+14)

Mostly cosmetic changes:

- Group common stack pointer code at the top
- Simplify the first frame logic
- Code stackframe iteration into for...loop construct
- Check for trace->nr_entries overflow before adding any into the array

Suggested-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-01-31  powerpc/livepatch: relax reliable stack tracer checks for first-frame
Author: Joe Lawrence  (1 file, -8/+24)

The bottom-most stack frame (the first to be unwound) may be largely
uninitialized, for the "Power Architecture 64-Bit ELF V2 ABI" only requires
its backchain pointer to be set.

The reliable stack tracer should be careful when verifying this frame: skip
checks on STACK_FRAME_LR_SAVE and STACK_FRAME_MARKER offsets that may
contain uninitialized residual data.

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
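A hedged sketch of the relaxed loop (the frame macros and
__kernel_text_address() are real; the structure is illustrative):

    bool firstframe = true;

    while (validate_sp(sp, tsk, STACK_FRAME_OVERHEAD)) {
            unsigned long *stack = (unsigned long *)sp;

            if (!firstframe) {
                    /* An exception marker or a non-text saved LR makes the
                     * trace unreliable. The newest frame is exempt: only
                     * its backchain word is architected. */
                    if (stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER)
                            return false;
                    if (!__kernel_text_address(stack[STACK_FRAME_LR_SAVE]))
                            return false;
            }
            firstframe = false;
            sp = stack[0];
    }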
2018-06-19  powerpc/64s: Fix build failures with CONFIG_NMI_IPI=n
Author: Michael Ellerman  (1 file, -2/+2)

I broke the build when CONFIG_NMI_IPI=n with my recent commit to add
arch_trigger_cpumask_backtrace(), eg:

  stacktrace.c:(.text+0x1b0): undefined reference to `.smp_send_safe_nmi_ipi'

We should rework the CONFIG symbols here in future to avoid these
double-barrelled ifdefs, but for now this fixes the build.

Fixes: 5cc05910f26e ("powerpc/64s: Wire up arch_trigger_cpumask_backtrace()")
Reported-by: Christophe LEROY <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
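The shape of the fix, as I read it (a sketch, not the exact hunk):

    /* Guard the backtrace-IPI code on both symbols, not the platform
     * alone, since it relies on smp_send_safe_nmi_ipi(). */
    #if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI)
    /* arch_trigger_cpumask_backtrace() and its IPI helpers live here. */
    #endif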
2018-06-03  powerpc/stacktrace: Update copyright
Author: Michael Ellerman  (1 file, -6/+4)

This file now has new code in it written by Nick and me, and switches to an
SPDX tag.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
2018-06-03  powerpc/64s: Wire up arch_trigger_cpumask_backtrace()
Author: Michael Ellerman  (1 file, -0/+51)

This allows eg. the RCU stall detector, or the soft/hardlockup detectors,
to trigger a backtrace on all CPUs.

We implement this by sending a "safe" NMI, which will actually only send an
IPI. Unfortunately the generic code prints "NMI", so that's a little
confusing, but we can probably live with it.

If one of the CPUs doesn't respond to the IPI, we then print some info from
its paca and do a backtrace based on its saved_r1.

Example output:

  INFO: rcu_sched detected stalls on CPUs/tasks:
  2-...0: (0 ticks this GP) idle=1be/1/4611686018427387904 softirq=1055/1055 fqs=25735
  (detected by 4, t=58847 jiffies, g=58, c=57, q=1258)
  Sending NMI from CPU 4 to CPUs 2:
  CPU 2 didn't respond to backtrace IPI, inspecting paca.
  irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 3623 (bash)
  Back trace of paca->saved_r1 (0xc0000000e1c83ba0) (possibly stale):
  Call Trace:
  [c0000000e1c83ba0] [0000000000000014] 0x14 (unreliable)
  [c0000000e1c83bc0] [c000000000765798] lkdtm_do_action+0x48/0x80
  [c0000000e1c83bf0] [c000000000765a40] direct_entry+0x110/0x1b0
  [c0000000e1c83c90] [c00000000058e650] full_proxy_write+0x90/0xe0
  [c0000000e1c83ce0] [c0000000003aae3c] __vfs_write+0x6c/0x1f0
  [c0000000e1c83d80] [c0000000003ab214] vfs_write+0xd4/0x240
  [c0000000e1c83dd0] [c0000000003ab5cc] ksys_write+0x6c/0x110
  [c0000000e1c83e30] [c00000000000b860] system_call+0x58/0x6c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
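The wiring itself is small; nmi_trigger_cpumask_backtrace() is the generic
helper from <linux/nmi.h>, and raise_backtrace_ipi() stands in for
powerpc's sender of the "safe" NMI described above (a sketch):

    #include <linux/nmi.h>

    void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
                                        bool exclude_self)
    {
            nmi_trigger_cpumask_backtrace(mask, exclude_self,
                                          raise_backtrace_ipi);
    }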
2018-05-29  powerpc/livepatch: Fix build error with kprobes disabled.
Author: Aneesh Kumar K.V  (1 file, -1/+2)

  arch/powerpc/kernel/stacktrace.c: In function 'save_stack_trace_tsk_reliable':
  arch/powerpc/kernel/stacktrace.c:176:28: error: 'kretprobe_trampoline' undeclared
     if (ip == (unsigned long)kretprobe_trampoline)
                              ^~~~~~~~~~~~~~~~~~~~

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc/livepatch: Implement reliable stack tracing for the consistency model
Author: Torsten Duwe  (1 file, -1/+118)

The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:

  [...] There are several rules that must be adhered to in order to ensure
  reliable and consistent call chain backtracing:

  * Before a function calls any other function, it shall establish its own
    stack frame, whose size shall be a multiple of 16 bytes.

    - In instances where a function's prologue creates a stack frame, the
      back-chain word of the stack frame shall be updated atomically with
      the value of the stack pointer (r1) when a back chain is implemented.
      (This must be supported as default by all ELF V2 ABI-compliant
      environments.)

  [...]

    - The function shall save the link register that contains its return
      address in the LR save doubleword of its caller's stack frame before
      calling another function.

To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE. This
patch may be unnecessarily limited to ppc64le, but OTOH the only user of
this flag so far is livepatching, which is only implemented on PPCs with
64-LE, a.k.a. ELF ABI v2. Feel free to add other ppc variants, but so far
only ppc64le got tested.

This change also implements save_stack_trace_tsk_reliable() for ppc64le
that checks for the above conditions, where possible.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-03-28  powerpc: Make /proc/self/stack always print the current stack
Author: Thadeu Lima de Souza Cascardo  (1 file, -1/+8)

For the current task, the kernel stack would only show where the process
was last rescheduled, if ever. Use the current stack pointer for the
current task instead.

Otherwise, every once in a while, the stacktrace printed when reading
/proc/self/stack would look like the process is running in userspace while
it's not, which some may consider a bug. This is also consistent with some
other architectures, like x86 and arm, at least.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
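The selection logic, sketched (current_stack_pointer() per the 2014 rename
further down this log; thread.ksp is the SP saved at context switch):

    unsigned long sp;

    if (tsk == current)
            sp = current_stack_pointer();   /* the live stack, right now */
    else
            sp = tsk->thread.ksp;           /* last context-switch snapshot */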
2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>
Author: Ingo Molnar  (1 file, -0/+1)

We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just maps to
<linux/sched.h> to make this patch obviously correct and bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-01-11  powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing
Author: Steven Rostedt  (1 file, -0/+7)

It has come to my attention that kprobe event stack tracing does not work
on powerpc. You can see this with the following:

  # cd /sys/kernel/debug/tracing
  # echo stacktrace > trace_options
  # echo 'p kfree' > kprobe_events
  # echo 1 > events/kprobes/enable

This will print the following warning:

  save_stack_trace_regs() not implemented yet.

Although save_stack_trace() (which normal event stack traces use) is
implemented, save_stack_trace_regs(), which kprobe events use, is not. This
is a cheap attempt to implement that function.

Note: this may have issues if a task tries to get a stack trace from
another task with its regs, because it just passes "current" to
save_context_stack(). But it does solve the issue with stack tracing kprobe
events.

Reported-by: Chunyu Hu <chuhu@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
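The described implementation is essentially a one-liner on top of the
file's existing walker (a sketch; save_context_stack() is the file's
internal helper, and the trailing 0 asks it to skip scheduler-internal
entries):

    void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
    {
            /* Walk from the SP captured in the probe's registers. */
            save_context_stack(trace, regs->gpr[1], current, 0);
    }
    EXPORT_SYMBOL_GPL(save_stack_trace_regs);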
2014-10-15  powerpc: Rename __get_SP() to current_stack_pointer()
Author: Anton Blanchard  (1 file, -1/+1)

Michael points out that __get_SP() is a pretty horrible function name.
Let's give it a better name.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-10-15  powerpc: Reimplement __get_SP() as a function not a define
Author: Anton Blanchard  (1 file, -1/+1)

Li Zhong points out an issue with our current __get_SP() implementation. If
ftrace function tracing is enabled (ie -pg profiling using _mcount) we
spill a stack frame on 64bit all the time.

If a function calls __get_SP() and later calls a function that is tail call
optimised, we will pop the stack frame and the value returned by __get_SP()
is no longer valid. An example from Li can be found in save_stack_trace ->
save_context_stack:

  c0000000000432c0 <.save_stack_trace>:
  c0000000000432c0:  mflr  r0
  c0000000000432c4:  std   r0,16(r1)
  c0000000000432c8:  stdu  r1,-128(r1)  <-- stack frame for _mcount
  c0000000000432cc:  std   r3,112(r1)
  c0000000000432d0:  bl    <._mcount>
  c0000000000432d4:  nop
  c0000000000432d8:  mr    r4,r1        <-- __get_SP()
  c0000000000432dc:  ld    r5,632(r13)
  c0000000000432e0:  ld    r3,112(r1)
  c0000000000432e4:  li    r6,1
  c0000000000432e8:  addi  r1,r1,128    <-- pop stack frame
  c0000000000432ec:  ld    r0,16(r1)
  c0000000000432f0:  mtlr  r0
  c0000000000432f4:  b     <.save_context_stack>  <-- tail call optimized

save_context_stack ends up with a stack pointer below the current one, and
it is likely to be scribbled over.

Fix this by making __get_SP() a function which returns the caller's stack
frame. Also replace the inline assembly which grabs the stack pointer in
save_stack_trace and show_stack with __get_SP().

This also fixes an issue with perf_arch_fetch_caller_regs(). It currently
unwinds the stack once, which will skip a valid stack frame on a leaf
function. With the __get_SP() fixes in this patch, we never need to unwind
the stack frame to get to the first interesting frame.

We have to export __get_SP() because perf_arch_fetch_caller_regs() (which
is used in modules) calls it from a header file.

Reported-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
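For contrast, the old define read r1 inline at the point of expansion,
roughly like this (a reconstruction, not the exact pre-patch macro), which
is exactly the value that goes stale once the compiler later pops the
ftrace stack frame:

    /* Old approach (sketch): grab r1 wherever the macro expands. */
    #define __get_SP()      ({unsigned long sp;                     \
                              asm volatile("mr %0,1" : "=r" (sp));  \
                              sp;})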
2011-11-01  powerpc: various straight conversions from module.h --> export.h
Author: Paul Gortmaker  (1 file, -1/+1)

All these files were including module.h just for the basic EXPORT_SYMBOL
infrastructure. We can shift them off to the export.h header, which is a
way smaller footprint, and thus realize some compile time gains.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2008-07-28  powerpc: Removed duplicated include in stacktrace.c
Author: Huang Weiyi  (1 file, -1/+0)

Removed duplicated include file <linux/module.h> in
arch/powerpc/kernel/stacktrace.c.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22  powerpc: Fix support for latencytop
Author: Arnd Bergmann  (1 file, -1/+1)

We need to pass the kernel stack pointer instead of the user space stack
pointer in save_stack_trace_tsk().

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-16  Merge commit 'origin/master'
Author: Benjamin Herrenschmidt  (1 file, -0/+1)

Manual merge of:
  arch/powerpc/Kconfig
  arch/powerpc/kernel/stacktrace.c
  arch/powerpc/mm/slice.c
  arch/ppc/kernel/smp.c
2008-07-15  powerpc: support for latencytop
Author: Arnd Bergmann  (1 file, -10/+27)

Implement save_stack_trace_tsk on powerpc, so that we can run with
latencytop.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
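A reconstruction of the kind of walker this builds on, hedged
(validate_sp(), STACK_FRAME_OVERHEAD, STACK_FRAME_LR_SAVE and
in_sched_functions() are real kernel helpers/macros; details may differ
from the 2008 code):

    static void save_context_stack(struct stack_trace *trace, unsigned long sp,
                                   struct task_struct *tsk, int savesched)
    {
            for (;;) {
                    unsigned long *stack = (unsigned long *)sp;
                    unsigned long newsp, ip;

                    if (!validate_sp(sp, tsk, STACK_FRAME_OVERHEAD))
                            return;

                    newsp = stack[0];                   /* backchain */
                    ip = stack[STACK_FRAME_LR_SAVE];    /* saved LR */

                    /* latencytop wants scheduler frames filtered out */
                    if (savesched || !in_sched_functions(ip)) {
                            if (!trace->skip)
                                    trace->entries[trace->nr_entries++] = ip;
                            else
                                    trace->skip--;
                    }

                    if (trace->nr_entries >= trace->max_entries)
                            return;

                    sp = newsp;
            }
    }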
2008-07-14  generic-ipi: powerpc/generic-ipi tree build failure
Author: Stephen Rothwell  (1 file, -0/+1)

Today's linux-next build (powerpc allmodconfig) failed like this:

  ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

But save_stack_trace is exported in arch/powerpc/kernel/stacktrace.c. I
couldn't figure it out until I noticed these earlier warnings:

  arch/powerpc/kernel/stacktrace.c:47: warning: data definition has no type or storage class
  arch/powerpc/kernel/stacktrace.c:47: warning: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
  arch/powerpc/kernel/stacktrace.c:47: warning: parameter names (without types) in function declaration

I applied the patch below.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linuxppc-dev@ozlabs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-03  stacktrace: export save_stack_trace[_tsk]
Author: Ingo Molnar  (1 file, -0/+1)

Andrew Morton reported this against linux-next:

  ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-24  [POWERPC] Fix new warnings arising from stacktrace patch
Author: Christoph Hellwig  (1 file, -1/+0)

Remove the inclusion of asm-offsets.h from stacktrace.c. It isn't supposed
to be included in C code and it causes problems with multiple definitions
of things.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-18  [POWERPC] Stacktrace support for lockdep
Author: Christoph Hellwig  (1 file, -0/+47)

This adds stacktrace support for powerpc, which will be needed for lockdep.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>