| author | Christophe Leroy <christophe.leroy@c-s.fr> | 2019-01-31 13:08:52 +0300 |
|---|---|---|
| committer | Michael Ellerman <mpe@ellerman.id.au> | 2019-02-23 14:31:40 +0300 |
| commit | 018cce33c5e62dda265df8ae0ddf7f3a3357ad1f (patch) | |
| tree | 2e441439ad12b6bbf76c0e9d00bc6f0a59a24e0d /arch/powerpc/kernel/stacktrace.c | |
| parent | 054860897cd35a4e9cec953ae955b429e31e74f7 (diff) | |
| download | linux-018cce33c5e62dda265df8ae0ddf7f3a3357ad1f.tar.xz | |
powerpc: prep stack walkers for THREAD_INFO_IN_TASK
[text copied from commit 9bbd4c56b0b6
("arm64: prep stack walkers for THREAD_INFO_IN_TASK")]
When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed
before a task is destroyed. To account for this, the stacks are
refcounted, and when manipulating the stack of another task, it is
necessary to get/put the stack to ensure it isn't freed and/or re-used
while we do so.
This patch reworks the powerpc stack walking code to account for this.
When CONFIG_THREAD_INFO_IN_TASK is not selected, these get/put operations
perform no refcounting, so this should only be a structural change that
does not affect behaviour.
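
For illustration, the guarded walk applied by this patch follows the pattern
sketched below. This is a minimal, hypothetical example (the function name
example_walk_task_stack and the elided frame walk are not from the patch); it
only shows how the generic try_get_task_stack()/put_task_stack() helpers
bracket the powerpc stack access.

#include <linux/sched.h>
#include <linux/sched/task_stack.h>

/*
 * Sketch of the pattern introduced by this patch: pin the target
 * task's stack before walking it and drop the reference afterwards.
 * With CONFIG_THREAD_INFO_IN_TASK, try_get_task_stack() takes a
 * reference on the stack and fails if it has already been freed;
 * without it, both helpers are effectively no-ops.
 */
static void example_walk_task_stack(struct task_struct *tsk)
{
	unsigned long sp;

	if (!try_get_task_stack(tsk))
		return;				/* stack already freed */

	if (tsk == current)
		sp = current_stack_pointer();	/* powerpc helper */
	else
		sp = tsk->thread.ksp;		/* last saved kernel SP */

	/* ... walk the stack frames starting at sp ... */

	put_task_stack(tsk);			/* stack may be freed again */
}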
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move try_get_task_stack() below tsk == NULL check in show_stack()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/kernel/stacktrace.c')
-rw-r--r-- | arch/powerpc/kernel/stacktrace.c | 29
1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index cf31ce6c1f53..f958f3bcba04 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -67,12 +67,17 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	unsigned long sp;
 
+	if (!try_get_task_stack(tsk))
+		return;
+
 	if (tsk == current)
 		sp = current_stack_pointer();
 	else
 		sp = tsk->thread.ksp;
 
 	save_context_stack(trace, sp, tsk, 0);
+
+	put_task_stack(tsk);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 
@@ -90,9 +95,8 @@ EXPORT_SYMBOL_GPL(save_stack_trace_regs);
  *
  * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-int
-save_stack_trace_tsk_reliable(struct task_struct *tsk,
-			      struct stack_trace *trace)
+static int __save_stack_trace_tsk_reliable(struct task_struct *tsk,
+					   struct stack_trace *trace)
 {
 	unsigned long sp;
 	unsigned long newsp;
@@ -197,6 +201,25 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 	}
 	return 0;
 }
+
+int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+				  struct stack_trace *trace)
+{
+	int ret;
+
+	/*
+	 * If the task doesn't have a stack (e.g., a zombie), the stack is
+	 * "reliably" empty.
+	 */
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	ret = __save_stack_trace_tsk_reliable(tsk, trace);
+
+	put_task_stack(tsk);
+
+	return ret;
+}
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
 #endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */
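
As a usage sketch (not part of this patch), a consumer of the reliable
interface, such as the livepatch consistency check, would call the new wrapper
roughly as below. The function name example_check_task_stack and the
EXAMPLE_MAX_DEPTH buffer size are illustrative assumptions; only struct
stack_trace and save_stack_trace_tsk_reliable() come from the kernel API of
this era.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>

#define EXAMPLE_MAX_DEPTH	64	/* illustrative buffer size */

/*
 * Hypothetical caller: request a reliable trace of tsk's kernel stack.
 * With this patch, a task whose stack has already been freed yields an
 * empty trace and a 0 ("reliably empty") return value rather than a
 * walk over freed memory; unreliable stacks still return an error.
 */
static int example_check_task_stack(struct task_struct *tsk)
{
	static unsigned long entries[EXAMPLE_MAX_DEPTH];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
	};

	return save_stack_trace_tsk_reliable(tsk, &trace);
}

The static entries[] buffer only keeps the sketch short; a real caller would
provide its own storage and serialise access to it.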