author     Michael Ellerman <mpe@ellerman.id.au>  2020-05-02 17:33:16 +0300
committer  Michael Ellerman <mpe@ellerman.id.au>  2020-05-04 02:18:06 +0300
commit     0094368e3bb97a710ce163f9c06d290c1c308621
tree       1ba865944451fd670fd0a3e880486c4d04fd72dd
parent     07ad112ab77ab282cf00ef2dbfdd2a9b4ded9e79
powerpc/64s: Fix unrecoverable SLB crashes due to preemption check
Hugh reported that his trusty G5 crashed after a few hours under load
with an "Unrecoverable exception 380".

The crash is in interrupt_return() where we check lazy_irq_pending(),
which calls get_paca() and with CONFIG_DEBUG_PREEMPT=y that goes to
check_preemption_disabled() via debug_smp_processor_id().

As Nick explained on the list:

  Problem is MSR[RI] is cleared here, ready to do the last few things
  for interrupt return where we're not allowed to take any other
  interrupts.

  SLB interrupts can happen just about anywhere aside from kernel
  text, global variables, and stack. When that hits, it appears to be
  unrecoverable due to RI=0.

The problematic access is in preempt_count() which is:

  return READ_ONCE(current_thread_info()->preempt_count);

Because of THREAD_INFO_IN_TASK, current_thread_info() just points to
current, so the access is to somewhere in kernel memory, but not on
the stack or in .data, which means it can cause an SLB miss. If we
take an SLB miss with RI=0 it is fatal.

The easiest solution is to add a version of lazy_irq_pending() that
doesn't do the preemption check and call it from the interrupt return
path.

Fixes: 68b34588e202 ("powerpc/64/sycall: Implement syscall entry/exit logic in C")
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200502143316.929341-1-mpe@ellerman.id.au
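For reference, the new helper pairs with the existing lazy_irq_pending()
in arch/powerpc/include/asm/hw_irq.h. A simplified sketch of that side
of the change (illustrative, not the verbatim header diff):

  /*
   * Under CONFIG_SMP, get_paca() is roughly:
   *     #define get_paca()  ((void) debug_smp_processor_id(), local_paca)
   * so with CONFIG_DEBUG_PREEMPT=y every get_paca() runs
   * check_preemption_disabled(), which reads
   * current_thread_info()->preempt_count -- the access that can take
   * an SLB miss with RI=0.
   */

  static inline bool __lazy_irq_pending(u8 irq_happened)
  {
  	return !!(irq_happened & ~PACA_IRQ_HARD_DIS);
  }

  /*
   * Check if a lazy IRQ is pending. Should be called with IRQs hard
   * disabled. Goes through get_paca(), so it carries the debug check.
   */
  static inline bool lazy_irq_pending(void)
  {
  	return __lazy_irq_pending(get_paca()->irq_happened);
  }

  /*
   * Same test with no debugging checks: reads local_paca directly and
   * never calls debug_smp_processor_id(). Safe for RI-disabled code
   * such as the interrupt return paths patched below.
   */
  static inline bool lazy_irq_pending_nocheck(void)
  {
  	return __lazy_irq_pending(local_paca->irq_happened);
  }

The diff below then switches the three interrupt-return call sites from
lazy_irq_pending() to lazy_irq_pending_nocheck().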
Diffstat (limited to 'arch/powerpc/kernel/syscall_64.c')
 arch/powerpc/kernel/syscall_64.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index c74295a7765b..1fe94dd9de32 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -189,7 +189,7 @@ again:
 
 	/* This pattern matches prep_irq_for_idle */
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		trace_hardirqs_off();
 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
@@ -264,7 +264,7 @@ again:
 
 	trace_hardirqs_on();
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		trace_hardirqs_off();
 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
@@ -334,7 +334,7 @@ again:
 
 	trace_hardirqs_on();
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		irq_soft_mask_set(IRQS_ALL_DISABLED);
 		trace_hardirqs_off();