| field | value |
|---|---|
| author | Ingo Molnar <mingo@kernel.org>, 2015-10-12 15:52:34 +0300 |
| committer | Ingo Molnar <mingo@kernel.org>, 2015-10-12 15:52:34 +0300 |
| commit | cdbcd239e2e264dc3ef7bc7865bcb8ec0023876f |
| tree | 94f5d2cf92ebb2eee640862cb2beaab6503bf846 /mm/slab.c |
| parent | 6e06780a98f149f131d46c1108d4ae27f05a9357 |
| parent | 7e0abcd6b7ec1452bf4a850fccbae44043c05806 |
Merge branch 'x86/ras' into ras/core, to pick up changes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'mm/slab.c')
| mode | file | changes |
|---|---|---|
| -rw-r--r-- | mm/slab.c | 13 |

1 file changed, 10 insertions, 3 deletions
diff --git a/mm/slab.c b/mm/slab.c
index c77ebe6cc87c..4fcc5dd8d5a6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2190,9 +2190,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 		size += BYTES_PER_WORD;
 	}
 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
-	if (size >= kmalloc_size(INDEX_NODE + 1)
-	    && cachep->object_size > cache_line_size()
-	    && ALIGN(size, cachep->align) < PAGE_SIZE) {
+	/*
+	 * To activate debug pagealloc, off-slab management is necessary
+	 * requirement. In early phase of initialization, small sized slab
+	 * doesn't get initialized so it would not be possible. So, we need
+	 * to check size >= 256. It guarantees that all necessary small
+	 * sized slab is initialized in current slab initialization sequence.
+	 */
+	if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
+	    size >= 256 && cachep->object_size > cache_line_size() &&
+	    ALIGN(size, cachep->align) < PAGE_SIZE) {
 		cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
 		size = PAGE_SIZE;
 	}
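The hunk above only pads objects out to a full page (so debug_pagealloc can place a guard page behind them) when the cache is past early init and large enough for off-slab management. As a rough illustration, the following is a minimal userspace sketch of that gating condition, not the kernel code: PAGE_SIZE, the cache-line size, the fake_cache struct, and the sample sizes are assumptions for illustration, and the plain `size >= 256` test stands in for the kmalloc_size(INDEX_NODE) check, which has no userspace equivalent.

```c
/*
 * Userspace sketch of the condition added by this commit.
 * All constants and the struct below are illustrative assumptions.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE        4096UL
#define CACHE_LINE_SIZE  64UL

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))

struct fake_cache {
	size_t object_size;   /* user-visible object size */
	size_t align;         /* slab alignment */
};

/*
 * Mirrors the new test: pad to a page only when the allocator is past
 * early init, the size is large enough for off-slab management
 * (approximated here by size >= 256), the object spans more than one
 * cache line, and the aligned size still fits inside a single page.
 */
static bool pad_to_page(const struct fake_cache *c, size_t size,
			bool slab_early_init)
{
	return !slab_early_init &&
	       size >= 256 &&
	       c->object_size > CACHE_LINE_SIZE &&
	       ALIGN(size, c->align) < PAGE_SIZE;
}

int main(void)
{
	struct fake_cache c = { .object_size = 300, .align = 64 };
	size_t size = 320;    /* object size plus debug padding */

	if (pad_to_page(&c, size, false))
		printf("pad object from %zu to %lu bytes\n",
		       ALIGN(size, c.align), PAGE_SIZE);
	else
		printf("keep size %zu\n", size);

	return 0;
}
```

The practical effect of the extra `!slab_early_init` and size checks is that the page-sized debug padding is only attempted once the small kmalloc caches needed for off-slab management exist, instead of during the early boot sequence where that setup would fail.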
