| author | Harry Yoo (Oracle) <harry@kernel.org> | 2026-04-27 10:09:53 +0300 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2026-05-07 07:14:17 +0300 |
| commit | d66553204a15bdb257d9ef8aca1e12f5fbb910b2 | |
| tree | 9d5eb4ae261f6f4689ee8dae1235f1a71156eeac | |
| parent | a6d57efeaae3f3b3656514f600eac96be713d90e | |
mm/slab: return NULL early from kmalloc_nolock() in NMI on UP
commit 5b31044e649e3e54c2caef135c09b371c2fbcd08 upstream.
On UP kernels (!CONFIG_SMP), spin_trylock() is a no-op that
unconditionally succeeds even when the lock is already held. As a
result, kmalloc_nolock() called from NMI context can re-enter the slab
allocator and acquire n->list_lock that the interrupted context is
already holding, corrupting slab state.
With CONFIG_DEBUG_SPINLOCK on UP, the following BUG is triggered with
the slub_kunit test module:
BUG: spinlock trylock failure on UP on CPU#0, kunit_try_catch/243
[...]
Call Trace:
<NMI>
dump_stack_lvl+0x3f/0x60
do_raw_spin_trylock+0x41/0x50
_raw_spin_trylock+0x24/0x50
get_from_partial_node+0x120/0x4d0
___slab_alloc+0x8a/0x4c0
kmalloc_nolock_noprof+0x164/0x310
[...]
</NMI>
Fix this by returning NULL early when invoked from NMI on a UP kernel.
Link: https://lore.kernel.org/linux-mm/ad_cqe51pvr1WaDg@hyeyoo
Cc: stable@vger.kernel.org
Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().")
Signed-off-by: Harry Yoo (Oracle) <harry@kernel.org>
Link: https://patch.msgid.link/20260427-nolock-api-fix-v2-2-a6b83a92d9a4@kernel.org
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 mm/slub.c | 4 ++++
 1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 90af21126921..e423afa27d1a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5304,6 +5304,10 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
 		return NULL;
 
+	/* On UP, spin_trylock() always succeeds even when it is locked */
+	if (!IS_ENABLED(CONFIG_SMP) && in_nmi())
+		return NULL;
+
 retry:
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
 		return NULL;
