| author | Jann Horn <jannh@google.com> | 2020-03-17 03:28:45 +0300 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2020-03-21 10:15:55 +0300 |
| commit | b31a837d420c0defb088f1b6e39217c83b8d99af (patch) | |
| tree | 7fee3e567ba7a00a1de606d60a9640383eba5fab | |
| parent | 8cf698d2de9927f0d97c30584497b87ec41f0106 (diff) | |
| download | linux-b31a837d420c0defb088f1b6e39217c83b8d99af.tar.xz | |
mm: slub: add missing TID bump in kmem_cache_alloc_bulk()
commit fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8 upstream.
When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
freelist of length M, and N > M > 0, it will first remove the M elements
from the percpu freelist, then call ___slab_alloc() to allocate the next
element and repopulate the percpu freelist. ___slab_alloc() can re-enable
IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
to properly commit the freelist head change.
Fix it by unconditionally bumping c->tid when entering the slowpath.
Cc: stable@vger.kernel.org
Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
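For readers less familiar with SLUB internals, the invariant the patch restores is that the per-CPU freelist head and the transaction id (tid) form a pair: the lockless fastpath samples both, and a change to the head only counts as committed once the tid has been bumped, so that a stale sample can no longer match. Below is a minimal, self-contained userspace sketch of that pairing; the struct and function names (cpu_freelist_model, fastpath_commit) are illustrative stand-ins, not the kernel's, and the real fastpath uses a cmpxchg-double-style per-CPU primitive rather than plain loads and stores.

```c
/*
 * Minimal userspace model of the (freelist head, tid) pairing described
 * in the commit message. Illustrative only: the names here are made up,
 * and plain assignments stand in for the kernel's per-CPU cmpxchg-double.
 */
#include <stdbool.h>
#include <stdio.h>

struct cpu_freelist_model {
	void *freelist;		/* head of the per-CPU object freelist */
	unsigned long tid;	/* transaction id, bumped on every commit */
};

/* Stand-in for next_tid(): just advance the id. */
static unsigned long next_tid(unsigned long tid)
{
	return tid + 1;
}

/*
 * Stand-in for the fastpath commit: install a new head only if neither
 * the head nor the tid changed since they were sampled, and bump the
 * tid as part of the commit so later stale samples cannot match.
 */
static bool fastpath_commit(struct cpu_freelist_model *c,
			    void *seen_head, unsigned long seen_tid,
			    void *new_head)
{
	if (c->freelist != seen_head || c->tid != seen_tid)
		return false;	/* state moved under us: take the slowpath */
	c->freelist = new_head;
	c->tid = next_tid(seen_tid);
	return true;
}

int main(void)
{
	char obj_a, obj_b;
	struct cpu_freelist_model c = { .freelist = &obj_a, .tid = 100 };

	/* A fastpath samples the pair... */
	void *seen_head = c.freelist;
	unsigned long seen_tid = c.tid;

	/*
	 * ...then some other path changes the head. If it also bumps the
	 * tid (as the patch makes the bulk slowpath entry do), the stale
	 * sample is rejected. Without the bump, only the head comparison
	 * protects us, and the head value can reappear later (ABA).
	 */
	c.freelist = &obj_b;
	c.tid = next_tid(c.tid);

	printf("stale commit %s\n",
	       fastpath_commit(&c, seen_head, seen_tid, NULL) ?
	       "succeeded (bad)" : "rejected (good)");
	return 0;
}
```

As the added comment in the patch explains, the bulk loop's fastpath pops defer the tid bump, so once ___slab_alloc() may reenable interrupts the pending freelist head change has to be committed first; that is exactly what the new next_tid() call does.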
-rw-r--r-- | mm/slub.c | 9 |
1 file changed, 9 insertions, 0 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 8eafccf75940..0cb8a21b1be6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3156,6 +3156,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 		if (unlikely(!object)) {
 			/*
+			 * We may have removed an object from c->freelist using
+			 * the fastpath in the previous iteration; in that case,
+			 * c->tid has not been bumped yet.
+			 * Since ___slab_alloc() may reenable interrupts while
+			 * allocating memory, we should bump c->tid now.
+			 */
+			c->tid = next_tid(c->tid);
+
+			/*
 			 * Invoking slow path likely have side-effect
 			 * of re-populating per CPU c->freelist
 			 */
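For orientation, here is a heavily simplified, schematic paraphrase of the loop this hunk patches (reconstructed for illustration only, not the exact upstream source; debug hooks, object-wiping, and error handling are elided). It shows why the slowpath branch needs its own bump: the fastpath pops change c->freelist but defer the tid bump to a single next_tid() call after the whole bulk operation.

```c
/* Schematic paraphrase of kmem_cache_alloc_bulk()'s inner loop. */
local_irq_disable();
c = this_cpu_ptr(s->cpu_slab);

for (i = 0; i < size; i++) {
	void *object = c->freelist;

	if (unlikely(!object)) {
		/* NEW: commit any heads popped below before IRQs may return. */
		c->tid = next_tid(c->tid);

		/* Slowpath: may reenable IRQs while repopulating c->freelist. */
		p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, _RET_IP_, c);
		if (unlikely(!p[i]))
			goto error;
		c = this_cpu_ptr(s->cpu_slab);
		continue;
	}

	/* Fastpath pop: changes c->freelist but defers the tid bump... */
	c->freelist = get_freepointer(s, object);
	p[i] = object;
}
/* ...to a single bump here, once the whole bulk operation is done. */
c->tid = next_tid(c->tid);
local_irq_enable();
```

Note that, per the commit message, the bump is done unconditionally on entering the slowpath branch, even when no object was popped in an earlier iteration; a spare next_tid() bump can only make a concurrent fastpath fall back and retry, so the simpler unconditional form is safe.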