author    | Mauro Carvalho Chehab <mchehab@s-opensource.com> | 2017-09-29 12:24:10 +0300
committer | Mauro Carvalho Chehab <mchehab@s-opensource.com> | 2017-09-29 12:24:10 +0300
commit    | cf09e3c904bf424f8b6a8203958e09bf7d9bcbc0 (patch)
tree      | 5e9936b3de36aa222b52a9bca366a43d98730ffd /Documentation/core-api
parent    | d5426f4c2ebac8cf05de43988c3fccddbee13d28 (diff)
parent    | e19b205be43d11bff638cad4487008c48d21c103 (diff)
download  | linux-cf09e3c904bf424f8b6a8203958e09bf7d9bcbc0.tar.xz
Merge tag 'v4.14-rc2' into patchwork
Linux 4.14-rc2
* tag 'v4.14-rc2': (12066 commits)
Linux 4.14-rc2
tpm: ibmvtpm: simplify crq initialization and document crq format
tpm: replace msleep() with usleep_range() in TPM 1.2/2.0 generic drivers
Documentation: tpm: add powered-while-suspended binding documentation
tpm: tpm_crb: constify acpi_device_id.
tpm: vtpm: constify vio_device_id
security: fix description of values returned by cap_inode_need_killpriv
x86/asm: Fix inline asm call constraints for Clang
objtool: Handle another GCC stack pointer adjustment bug
inet: fix improper empty comparison
net: use inet6_rcv_saddr to compare sockets
net: set tb->fast_sk_family
net: orphan frags on stand-alone ptype in dev_queue_xmit_nit
MAINTAINERS: update git tree locations for ieee802154 subsystem
SMB3: Don't ignore O_SYNC/O_DSYNC and O_DIRECT flags
SMB3: handle new statx fields
arch: remove unused *_segments() macros/functions
parisc: Unbreak bootloader due to gcc-7 optimizations
parisc: Reintroduce option to gzip-compress the kernel
apparmor: fix apparmorfs DAC access permissions
...
Diffstat (limited to 'Documentation/core-api')
-rw-r--r-- | Documentation/core-api/genalloc.rst   | 144
-rw-r--r-- | Documentation/core-api/index.rst      |   1
-rw-r--r-- | Documentation/core-api/kernel-api.rst |  49
-rw-r--r-- | Documentation/core-api/workqueue.rst  |  10
4 files changed, 201 insertions, 3 deletions
diff --git a/Documentation/core-api/genalloc.rst b/Documentation/core-api/genalloc.rst
new file mode 100644
index 000000000000..6b38a39fab24
--- /dev/null
+++ b/Documentation/core-api/genalloc.rst
@@ -0,0 +1,144 @@
+The genalloc/genpool subsystem
+==============================
+
+There are a number of memory-allocation subsystems in the kernel, each
+aimed at a specific need. Sometimes, however, a kernel developer needs to
+implement a new allocator for a specific range of special-purpose memory;
+often that memory is located on a device somewhere. The author of the
+driver for that device can certainly write a little allocator to get the
+job done, but that is the way to fill the kernel with dozens of poorly
+tested allocators. Back in 2005, Jes Sorensen lifted one of those
+allocators from the sym53c8xx_2 driver and posted_ it as a generic module
+for the creation of ad hoc memory allocators. This code was merged
+for the 2.6.13 release; it has been modified considerably since then.
+
+.. _posted: https://lwn.net/Articles/125842/
+
+Code using this allocator should include <linux/genalloc.h>. The action
+begins with the creation of a pool using one of:
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_create
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: devm_gen_pool_create
+
+A call to :c:func:`gen_pool_create` will create a pool. The granularity of
+allocations is set with min_alloc_order; it is a log-base-2 number like
+those used by the page allocator, but it refers to bytes rather than pages.
+So, if min_alloc_order is passed as 3, then all allocations will be a
+multiple of eight bytes. Increasing min_alloc_order decreases the memory
+required to track the memory in the pool. The nid parameter specifies
+which NUMA node should be used for the allocation of the housekeeping
+structures; it can be -1 if the caller doesn't care.
+
+The "managed" interface :c:func:`devm_gen_pool_create` ties the pool to a
+specific device. Among other things, it will automatically clean up the
+pool when the given device is destroyed.
+
+A pool is shut down with:
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_destroy
+
+It's worth noting that, if there are still allocations outstanding from the
+given pool, this function will take the rather extreme step of invoking
+BUG(), crashing the entire system. You have been warned.
+
+A freshly created pool has no memory to allocate. It is fairly useless in
+that state, so one of the first orders of business is usually to add memory
+to the pool. That can be done with one of:
+
+.. kernel-doc:: include/linux/genalloc.h
+   :functions: gen_pool_add
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_add_virt
+
+A call to :c:func:`gen_pool_add` will place the size bytes of memory
+starting at addr (in the kernel's virtual address space) into the given
+pool, once again using nid as the node ID for ancillary memory allocations.
+The :c:func:`gen_pool_add_virt` variant associates an explicit physical
+address with the memory; this is only necessary if the pool will be used
+for DMA allocations.
+
+The functions for allocating memory from the pool (and putting it back)
+are:
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_alloc
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_dma_alloc
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_free
+
+As one would expect, :c:func:`gen_pool_alloc` will allocate size bytes
+from the given pool. The :c:func:`gen_pool_dma_alloc` variant allocates
+memory for use with DMA operations, returning the associated physical
+address in the space pointed to by dma. This will only work if the memory
+was added with :c:func:`gen_pool_add_virt`. Note that this function
+departs from the usual genpool pattern of using unsigned long values to
+represent kernel addresses; it returns a void * instead.
+
+That all seems relatively simple; indeed, some developers clearly found it
+to be too simple. After all, the interface above provides no control over
+how the allocation functions choose which specific piece of memory to
+return. If that sort of control is needed, the following functions will be
+of interest:
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_alloc_algo
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_set_algo
+
+Allocations with :c:func:`gen_pool_alloc_algo` specify an algorithm to be
+used to choose the memory to be allocated; the default algorithm can be set
+with :c:func:`gen_pool_set_algo`. The data value is passed to the
+algorithm; most ignore it, but it is occasionally needed. One can,
+naturally, write a special-purpose algorithm, but there is a fair set
+already available:
+
+- gen_pool_first_fit is a simple first-fit allocator; this is the default
+  algorithm if none other has been specified.
+
+- gen_pool_first_fit_align forces the allocation to have a specific
+  alignment (passed via data in a genpool_data_align structure).
+
+- gen_pool_first_fit_order_align aligns the allocation to the order of the
+  size. A 60-byte allocation will thus be 64-byte aligned, for example.
+
+- gen_pool_best_fit, as one would expect, is a simple best-fit allocator.
+
+- gen_pool_fixed_alloc allocates at a specific offset (passed in a
+  genpool_data_fixed structure via the data parameter) within the pool.
+  If the indicated memory is not available the allocation fails.
+
+There is a handful of other functions, mostly for purposes like querying
+the space available in the pool or iterating through chunks of memory.
+Most users, however, should not need much beyond what has been described
+above. With luck, wider awareness of this module will help to prevent the
+writing of special-purpose memory allocators in the future.
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_virt_to_phys
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_for_each_chunk
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: addr_in_gen_pool
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_avail
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_size
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: gen_pool_get
+
+.. kernel-doc:: lib/genalloc.c
+   :functions: of_gen_pool_get
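
To make the interface documented in genalloc.rst above concrete, here is a
minimal sketch, not part of this patch, of a hypothetical driver that manages
a block of device-local SRAM with genalloc. The function names, addresses and
sizes are invented for illustration, and error handling is trimmed::

    #include <linux/genalloc.h>
    #include <linux/errno.h>

    /* Hypothetical driver-private pool */
    static struct gen_pool *sram_pool;

    static int sram_pool_init(void *virt, phys_addr_t phys, size_t size)
    {
            int ret;

            /* min_alloc_order = 3: every allocation is a multiple of 8 bytes */
            sram_pool = gen_pool_create(3, -1);
            if (!sram_pool)
                    return -ENOMEM;

            /*
             * Register the memory; the physical address is supplied as well
             * so that gen_pool_dma_alloc() can be used later.
             */
            ret = gen_pool_add_virt(sram_pool, (unsigned long)virt, phys,
                                    size, -1);
            if (ret) {
                    gen_pool_destroy(sram_pool);  /* safe: nothing allocated yet */
                    return ret;
            }
            return 0;
    }

    static void sram_pool_example_use(void)
    {
            struct genpool_data_align align = { .align = 64 };
            unsigned long vaddr;
            dma_addr_t dma_handle;
            void *dma_vaddr;

            /* Plain allocation: returns a kernel virtual address, 0 on failure */
            vaddr = gen_pool_alloc(sram_pool, 256);
            if (vaddr)
                    gen_pool_free(sram_pool, vaddr, 256);

            /* DMA-oriented allocation: physical address comes back in dma_handle */
            dma_vaddr = gen_pool_dma_alloc(sram_pool, 256, &dma_handle);
            if (dma_vaddr)
                    gen_pool_free(sram_pool, (unsigned long)dma_vaddr, 256);

            /* One-off allocation with a specific alignment via the algo interface */
            vaddr = gen_pool_alloc_algo(sram_pool, 128,
                                        gen_pool_first_fit_align, &align);
            if (vaddr)
                    gen_pool_free(sram_pool, vaddr, 128);
    }

Note that gen_pool_alloc() hands back an unsigned long address and returns 0,
not NULL, on failure.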
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 0606be3a3111..d5bbe035316d 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -20,6 +20,7 @@ Core utilities
    genericirq
    flexible-arrays
    librs
+   genalloc
 
 Interfaces for kernel debugging
 ===============================
diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index 17b00914c6ab..8282099e0cbf 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -344,3 +344,52 @@ codecs, and devices with strict requirements for interface clocking.
 
 .. kernel-doc:: include/linux/clk.h
    :internal:
+
+Synchronization Primitives
+==========================
+
+Read-Copy Update (RCU)
+----------------------
+
+.. kernel-doc:: include/linux/rcupdate.h
+   :external:
+
+.. kernel-doc:: include/linux/rcupdate_wait.h
+   :external:
+
+.. kernel-doc:: include/linux/rcutree.h
+   :external:
+
+.. kernel-doc:: kernel/rcu/tree.c
+   :external:
+
+.. kernel-doc:: kernel/rcu/tree_plugin.h
+   :external:
+
+.. kernel-doc:: kernel/rcu/tree_exp.h
+   :external:
+
+.. kernel-doc:: kernel/rcu/update.c
+   :external:
+
+.. kernel-doc:: include/linux/srcu.h
+   :external:
+
+.. kernel-doc:: kernel/rcu/srcutree.c
+   :external:
+
+.. kernel-doc:: include/linux/rculist_bl.h
+   :external:
+
+.. kernel-doc:: include/linux/rculist.h
+   :external:
+
+.. kernel-doc:: include/linux/rculist_nulls.h
+   :external:
+
+.. kernel-doc:: include/linux/rcu_sync.h
+   :external:
+
+.. kernel-doc:: kernel/rcu/sync.c
+   :external:
+
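
The kernel-api.rst hunk above only pulls kernel-doc comments into the Sphinx
build; for orientation, a bare-bones reader/updater pattern using the RCU
primitives being documented might look like the following sketch. The
structure, names and locking assumptions (updates serialized by the caller)
are hypothetical::

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    struct example_cfg {
            int threshold;
    };

    /* Protected pointer; updaters publish with rcu_assign_pointer() */
    static struct example_cfg __rcu *cur_cfg;

    static int example_read_threshold(void)
    {
            struct example_cfg *cfg;
            int val = 0;

            rcu_read_lock();
            cfg = rcu_dereference(cur_cfg);
            if (cfg)
                    val = cfg->threshold;
            rcu_read_unlock();
            return val;
    }

    static int example_update_threshold(int threshold)
    {
            struct example_cfg *new, *old;

            new = kmalloc(sizeof(*new), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;
            new->threshold = threshold;

            /* Caller is assumed to serialize updates, hence the "1" condition */
            old = rcu_dereference_protected(cur_cfg, 1);
            rcu_assign_pointer(cur_cfg, new);
            synchronize_rcu();      /* wait for pre-existing readers to finish */
            kfree(old);
            return 0;
    }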
diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index ffdec94fbca1..3943b5bfa8cf 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -243,11 +243,15 @@ throttling the number of active work items, specifying '0' is recommended.
 
 Some users depend on the strict execution ordering of ST wq. The
-combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` is used to
-achieve this behavior. Work items on such wq are always queued to the
-unbound worker-pools and only one work item can be active at any given
+combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` used to
+achieve this behavior. Work items on such wq were always queued to the
+unbound worker-pools and only one work item could be active at any given
 time thus achieving the same ordering property as ST wq.
 
+In the current implementation the above configuration only guarantees
+ST behavior within a given NUMA node. Instead, alloc_ordered_workqueue()
+should be used to achieve system-wide ST behavior.
+
 
 Example Execution Scenarios
 ===========================
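
Following the recommendation in the workqueue.rst hunk above, a driver that
needs system-wide ordering could be switched to an ordered workqueue roughly
as in this sketch; the names are invented and the work handler is a stub::

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    static struct workqueue_struct *example_wq;
    static struct work_struct example_work;

    static void example_work_fn(struct work_struct *work)
    {
            /* ... do the strictly ordered work here ... */
    }

    static int example_setup(void)
    {
            /*
             * alloc_ordered_workqueue() gives system-wide single-threaded
             * ordering; the older alloc_workqueue(..., WQ_UNBOUND, 1)
             * pattern only orders work items within one NUMA node on
             * current kernels.
             */
            example_wq = alloc_ordered_workqueue("example_wq", 0);
            if (!example_wq)
                    return -ENOMEM;

            INIT_WORK(&example_work, example_work_fn);
            queue_work(example_wq, &example_work);
            return 0;
    }

    static void example_teardown(void)
    {
            destroy_workqueue(example_wq);  /* drains pending work first */
    }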