author	Tejun Heo <tj@kernel.org>	2014-09-02 22:46:02 +0400
committer	Tejun Heo <tj@kernel.org>	2014-09-02 22:46:02 +0400
commit	b38d08f3181c5025a7ce84646494cc4748492a3b (patch)
tree	3e20c06c7dfe6f8fb301e01c2a4a5dc0b55f911e /mm/percpu-km.c
parent	a63d4ac4ab6094c051a5a240260d16117a7a2f86 (diff)
download	linux-b38d08f3181c5025a7ce84646494cc4748492a3b.tar.xz
percpu: restructure locking
At first, the percpu allocator required a sleepable context for both
alloc and free paths and used pcpu_alloc_mutex to protect everything.
Later, pcpu_lock was introduced to protect the index data structure so
that the free path can be invoked from atomic contexts. The
conversion only updated what's necessary and left most of the
allocation path under pcpu_alloc_mutex.
The percpu allocator is planned to add support for atomic allocation
and this patch restructures locking so that the coverage of
pcpu_alloc_mutex is further reduced.
* pcpu_alloc() now grabs pcpu_alloc_mutex only while creating a new
  chunk and populating the allocated area. Everything else is now
  protected solely by pcpu_lock.
After this change, multiple instances of pcpu_extend_area_map() may
race but the function already implements sufficient synchronization
using pcpu_lock.
This also allows multiple allocators to arrive at new chunk
creation. To avoid creating multiple empty chunks back-to-back, a
new chunk is created iff there is no other empty chunk after
grabbing pcpu_alloc_mutex.
* pcpu_lock is now held while modifying chunk->populated bitmap.
After this, all data structures are protected by pcpu_lock.
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'mm/percpu-km.c')
-rw-r--r--	mm/percpu-km.c	2
1 file changed, 2 insertions, 0 deletions
diff --git a/mm/percpu-km.c b/mm/percpu-km.c
index 67a971b7f745..e662b4947a65 100644
--- a/mm/percpu-km.c
+++ b/mm/percpu-km.c
@@ -68,7 +68,9 @@ static struct pcpu_chunk *pcpu_create_chunk(void)
 	chunk->data = pages;
 	chunk->base_addr = page_address(pages) - pcpu_group_offsets[0];

+	spin_lock_irq(&pcpu_lock);
 	bitmap_fill(chunk->populated, nr_pages);
+	spin_unlock_irq(&pcpu_lock);

 	return chunk;
 }