| author | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2008-04-18 10:56:17 +0400 |
|---|---|---|
| committer | Paul Mackerras <paulus@samba.org> | 2008-04-24 14:57:33 +0400 |
| commit | f6a616800e68b61807d0f7bb0d5dc70665ef8046 (patch) | |
| tree | 62f8224cb6a7da0bc203de1be8a7f3485f4b583b /arch/powerpc/kernel/process.c | |
| parent | 8c9843e57a7d9d7a090d6467a0f1f3afb8031527 (diff) | |
[POWERPC] Fix kernel stack allocation alignment
The powerpc kernel stacks need to be naturally aligned, as they
contain the thread info at the bottom, which is obtained by
clearing the low bits of the stack pointer.
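For illustration, this is roughly how that lookup works on architectures that keep thread_info at the bottom of the kernel stack. This is a hedged sketch, not code from this patch: `stack_to_thread_info` is a made-up helper; the real powerpc `current_thread_info()` reads the stack pointer register and masks it in much the same way.

```c
/*
 * Hypothetical sketch: with a kernel stack that is naturally aligned to
 * THREAD_SIZE, clearing the low bits of any stack address yields the
 * thread_info sitting at the bottom of that stack.  The trick only
 * works if the alignment guarantee holds.
 */
static inline struct thread_info *stack_to_thread_info(unsigned long sp)
{
	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
```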
However, when using 64K pages, the stack is smaller than a page,
so we use kmalloc to allocate it, but that doesn't provide the
alignment guarantee we need.
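A hedged sketch of the failure mode (`naive_alloc_thread_info` is a hypothetical helper, not something in the tree): kmalloc() only promises a small minimum alignment, not THREAD_SIZE alignment, so a THREAD_SIZE-sized object need not start on a THREAD_SIZE boundary.

```c
#include <linux/slab.h>
#include <linux/thread_info.h>

/*
 * Hypothetical illustration only.  With slub_debug enabled, red zones
 * and tracking metadata shift objects inside the slab, so the returned
 * pointer can have non-zero low bits and the stack-pointer masking
 * trick above then lands in the wrong place.
 */
static struct thread_info *naive_alloc_thread_info(void)
{
	struct thread_info *ti = kmalloc(THREAD_SIZE, GFP_KERNEL);

	/* This check can fire once SLUB debugging is turned on. */
	WARN_ON(ti && ((unsigned long)ti & (THREAD_SIZE - 1)));
	return ti;
}
```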
It appeared to work so far... until SLUB debugging is enabled, which
then returns unaligned pointers. Oops.
This fixes it by using a slab cache with enforced alignment. It
relies on my previous patch that adds a thread_info_cache_init()
callback.
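The hook relies on weak linkage; a minimal sketch of the generic side added by that companion patch (the exact file and call site are an assumption here, not shown by this diff): generic code supplies an empty `__weak` default that is called once during boot, and an architecture overrides it when it needs its own thread_info allocator, as the powerpc code below does.

```c
/*
 * Sketch of the weak-hook pattern: the empty default is used unless an
 * architecture provides its own thread_info_cache_init().
 */
void __init __weak thread_info_cache_init(void)
{
}
```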
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'arch/powerpc/kernel/process.c')
-rw-r--r-- | arch/powerpc/kernel/process.c | 31 |
1 file changed, 31 insertions, 0 deletions
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 703100d5e458..6caad17ea72e 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1033,3 +1033,34 @@ void ppc64_runlatch_off(void)
 	}
 }
 #endif
+
+#if THREAD_SHIFT < PAGE_SHIFT
+
+static struct kmem_cache *thread_info_cache;
+
+struct thread_info *alloc_thread_info(struct task_struct *tsk)
+{
+	struct thread_info *ti;
+
+	ti = kmem_cache_alloc(thread_info_cache, GFP_KERNEL);
+	if (unlikely(ti == NULL))
+		return NULL;
+#ifdef CONFIG_DEBUG_STACK_USAGE
+	memset(ti, 0, THREAD_SIZE);
+#endif
+	return ti;
+}
+
+void free_thread_info(struct thread_info *ti)
+{
+	kmem_cache_free(thread_info_cache, ti);
+}
+
+void thread_info_cache_init(void)
+{
+	thread_info_cache = kmem_cache_create("thread_info", THREAD_SIZE,
+					      THREAD_SIZE, 0, NULL);
+	BUG_ON(thread_info_cache == NULL);
+}
+
+#endif /* THREAD_SHIFT < PAGE_SHIFT */
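The key to the fix is the third argument of kmem_cache_create(name, size, align, flags, ctor): requesting align == THREAD_SIZE makes every object in the "thread_info" cache naturally aligned, so the low-bit masking works regardless of which debug options are enabled. A small hypothetical self-check (not part of the patch) illustrating the restored invariant:

```c
/*
 * Hypothetical check: any stack obtained from the aligned "thread_info"
 * cache has its low THREAD_SIZE bits clear, which is exactly the
 * property the stack-pointer masking relies on.
 */
static void check_thread_stack_alignment(struct thread_info *ti)
{
	BUG_ON((unsigned long)ti & (THREAD_SIZE - 1));
}
```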