author    | Daniel Thompson <daniel.thompson@linaro.org> | 2015-09-22 19:12:10 +0300
committer | Russell King <rmk+kernel@arm.linux.org.uk>   | 2015-10-03 18:40:51 +0300
commit    | 0768330d46435f324a0b4860c889057524af17c2 (patch)
tree      | 1a7edb29ee41d640e92c67ea7a7ae910e902dd3e /lib/nmi_backtrace.c
parent    | 001bf455d20645190beb98ff4ee450dfea1b7eb2 (diff)
download  | linux-0768330d46435f324a0b4860c889057524af17c2.tar.xz
ARM: 8439/1: Fix backtrace generation when IPI is masked
Currently on ARM, when <SysRq-L> is triggered from an interrupt handler
(e.g. a SysRq issued via UART or keyboard), the main CPU wedges for ten
seconds with interrupts masked before issuing a backtrace for every CPU
except itself.
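For reference, the ten second figure comes from the polling loop in the
generic nmi_trigger_all_cpu_backtrace() helper, which waits for every
targeted CPU to clear itself from the backtrace mask. A paraphrased
sketch of that wait (not the verbatim lib/nmi_backtrace.c source) looks
like this:

	/* Wait up to 10 seconds (10000 x 1ms) for all CPUs to report in */
	for (i = 0; i < 10 * 1000; i++) {
		if (cpumask_empty(to_cpumask(backtrace_mask)))
			break;
		mdelay(1);
	}

When the initiating CPU never services its own backtrace IPI, its bit in
the mask is never cleared and this loop runs to the full timeout.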
The new backtrace code introduced by commit 96f0e00378d4 ("ARM: add
basic support for on-demand backtrace of other CPUs") does not work
correctly when run from an interrupt handler, because IPI_CPU_BACKTRACE
is used to generate the backtrace on all CPUs, but that IPI cannot
preempt the calling context on the CPU that initiated the request.
This can be fixed by detecting that the calling context cannot be
preempted and issuing the backtrace directly in that case. Issuing it
directly leaves us without any pt_regs to pass to nmi_cpu_backtrace(),
so we also modify the generic code to call dump_stack() when its
argument is NULL.
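On the arch side (not visible in this diffstat, which is limited to
lib/nmi_backtrace.c), the idea is that the raise() callback checks
whether it is targeting itself while interrupts are masked and, if so,
generates its own backtrace directly with a NULL pt_regs. The sketch
below is illustrative rather than quoted from the patch; the helper
name raise_nmi() and its exact structure are assumptions:

static void raise_nmi(cpumask_t *mask)
{
	/*
	 * We cannot be preempted by IPI_CPU_BACKTRACE here (interrupts
	 * are masked), so if we are part of the mask produce our own
	 * backtrace directly. Passing NULL makes nmi_cpu_backtrace()
	 * fall back to dump_stack(), and it clears this CPU from the
	 * mask as a side effect, so the wait loop does not time out.
	 */
	if (cpumask_test_cpu(smp_processor_id(), mask) && irqs_disabled())
		nmi_cpu_backtrace(NULL);

	smp_cross_call(mask, IPI_CPU_BACKTRACE);
}

void arch_trigger_all_cpu_backtrace(bool include_self)
{
	nmi_trigger_all_cpu_backtrace(include_self, raise_nmi);
}

This keeps the IPI path for every other CPU while the calling CPU
handles itself synchronously, which is exactly why the generic code
change below has to tolerate regs == NULL.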
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Diffstat (limited to 'lib/nmi_backtrace.c')
-rw-r--r-- | lib/nmi_backtrace.c | 11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
index 88d3d32e5923..6019c53c669e 100644
--- a/lib/nmi_backtrace.c
+++ b/lib/nmi_backtrace.c
@@ -43,6 +43,12 @@ static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
 	printk("%.*s", (end - start) + 1, buf);
 }
 
+/*
+ * When raise() is called it will be passed a pointer to the
+ * backtrace_mask. Architectures that call nmi_cpu_backtrace()
+ * directly from their raise() functions may rely on the mask
+ * they are passed being updated as a side effect of this call.
+ */
 void nmi_trigger_all_cpu_backtrace(bool include_self,
 				   void (*raise)(cpumask_t *mask))
 {
@@ -149,7 +155,10 @@ bool nmi_cpu_backtrace(struct pt_regs *regs)
 		/* Replace printk to write into the NMI seq */
 		this_cpu_write(printk_func, nmi_vprintk);
 		pr_warn("NMI backtrace for cpu %d\n", cpu);
-		show_regs(regs);
+		if (regs)
+			show_regs(regs);
+		else
+			dump_stack();
 		this_cpu_write(printk_func, printk_func_save);
 
 		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));