author    | Joonsoo Kim <iamjoonsoo.kim@lge.com> | 2013-06-19 10:33:55 +0400
committer | Pekka Enberg <penberg@kernel.org>    | 2013-07-07 19:46:30 +0400
commit    | 318df36e57c0ca9f2146660d41ff28e8650af423
tree      | 6d6ad368eab1e67f2fd88ea92638e638c04e66d3 /mm
parent    | c17fd13ec0677e61f3692ecb9d4b21f79848fa04
slub: do not put a slab to cpu partial list when cpu_partial is 0
In the free path we don't check the number of cpu partial slabs (cpu_partial), so a slab
can be linked into the cpu partial list even if cpu_partial is 0. To prevent this,
check s->cpu_partial in put_cpu_partial() and return early when it is 0.
Acked-by: Christoph Lameter <cl@linux.com>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 3
1 file changed, 3 insertions, 0 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 5ee6c7cd9fc4..54cc4d544f3c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1954,6 +1954,9 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 	int pages;
 	int pobjects;
 
+	if (!s->cpu_partial)
+		return;
+
 	do {
 		pages = 0;
 		pobjects = 0;
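For illustration, here is a minimal user-space sketch of the behaviour the patch introduces. It is not the kernel implementation: struct kmem_cache_model, struct slab_page, and put_cpu_partial_model() are simplified stand-ins that only model the early return when cpu_partial is 0.

```c
#include <stdio.h>

/* Toy stand-in for a slab page that can sit on a partial list. */
struct slab_page {
	struct slab_page *next;
};

/* Toy stand-in for a kmem_cache with a per-CPU partial list. */
struct kmem_cache_model {
	unsigned int cpu_partial;            /* limit; 0 means "keep nothing" */
	struct slab_page *cpu_partial_list;  /* head of the per-CPU partial list */
};

static void put_cpu_partial_model(struct kmem_cache_model *s, struct slab_page *page)
{
	/* The fix modeled here: with cpu_partial == 0, never link the slab. */
	if (!s->cpu_partial)
		return;

	page->next = s->cpu_partial_list;
	s->cpu_partial_list = page;
}

int main(void)
{
	struct kmem_cache_model cache = { .cpu_partial = 0, .cpu_partial_list = NULL };
	struct slab_page freed_slab = { .next = NULL };

	/* Simulate the free path handing a slab to put_cpu_partial(). */
	put_cpu_partial_model(&cache, &freed_slab);

	printf("cpu_partial_list is %s\n",
	       cache.cpu_partial_list ? "non-empty (old behaviour)" : "empty (patched behaviour)");
	return 0;
}
```

The cpu_partial == 0 case is not hypothetical: SLUB exposes the limit per cache through sysfs (e.g. /sys/kernel/slab/<cache>/cpu_partial), so an administrator can set it to 0, which is exactly the configuration this check respects on the free path.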