author     Linus Torvalds <torvalds@linux-foundation.org>    2019-12-02 22:51:02 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>    2019-12-02 22:51:02 +0300
commit     937d6eefc716a9071f0e3bada19200de1bb9d048 (patch)
tree       7b2b8e94d157ddbacc2b0712fd5d20a8b4d79c27 /Documentation/core-api
parent     2c97b5ae83dca56718774e7b4bf9640f05d11867 (diff)
parent     36bb9778fd11173f2dd1484e4f6797365e18c1d8 (diff)
download   linux-937d6eefc716a9071f0e3bada19200de1bb9d048.tar.xz
Merge tag 'docs-5.5a' of git://git.lwn.net/linux
Pull Documentation updates from Jonathan Corbet:
"Here are the main documentation changes for 5.5:
- Various kerneldoc script enhancements.
- More RST conversions; those are slowing down as we run out of
things to convert, but we're a ways from done still.
- Dan's "maintainer profile entry" work landed at last. Now we just
need to get maintainers to fill in the profiles...
- A reworking of the parallel build setup to work better with a
variety of systems (and to not take over huge systems entirely in
particular).
- The MAINTAINERS file is now converted to RST during the build.
Hopefully nobody ever tries to print this thing, or they will need
to load a lot of paper.
- A script and documentation making it easy for maintainers to add
Link: tags at commit time.
Also included is the removal of a bunch of spurious CR characters"
* tag 'docs-5.5a' of git://git.lwn.net/linux: (91 commits)
docs: remove a bunch of stray CRs
docs: fix up the maintainer profile document
libnvdimm, MAINTAINERS: Maintainer Entry Profile
Maintainer Handbook: Maintainer Entry Profile
MAINTAINERS: Reclaim the P: tag for Maintainer Entry Profile
docs, parallelism: Rearrange how jobserver reservations are made
docs, parallelism: Do not leak blocking mode to other readers
docs, parallelism: Fix failure path and add comment
Documentation: Remove bootmem_debug from kernel-parameters.txt
Documentation: security: core.rst: fix warnings
Documentation/process/howto/kokr: Update for 4.x -> 5.x versioning
Documentation/translation: Use Korean for Korean translation title
docs/memory-barriers.txt: Remove remaining references to mmiowb()
docs/memory-barriers.txt/kokr: Update I/O section to be clearer about CPU vs thread
docs/memory-barriers.txt/kokr: Fix style, spacing and grammar in I/O section
Documentation/kokr: Kill all references to mmiowb()
docs/memory-barriers.txt/kokr: Rewrite "KERNEL I/O BARRIER EFFECTS" section
docs: Add initial documentation for devfreq
Documentation: Document how to get links with git am
docs: Add request_irq() documentation
...
Diffstat (limited to 'Documentation/core-api')
-rw-r--r--    Documentation/core-api/genalloc.rst             26
-rw-r--r--    Documentation/core-api/genericirq.rst           52
-rw-r--r--    Documentation/core-api/memory-allocation.rst    50
-rw-r--r--    Documentation/core-api/mm-api.rst                2
-rw-r--r--    Documentation/core-api/printk-formats.rst       14
-rw-r--r--    Documentation/core-api/refcount-vs-atomic.rst   36
6 files changed, 97 insertions, 83 deletions
diff --git a/Documentation/core-api/genalloc.rst b/Documentation/core-api/genalloc.rst
index 6b38a39fab24..098a46f55798 100644
--- a/Documentation/core-api/genalloc.rst
+++ b/Documentation/core-api/genalloc.rst
@@ -23,7 +23,7 @@ begins with the creation of a pool using one of:
 .. kernel-doc:: lib/genalloc.c
    :functions: devm_gen_pool_create
 
-A call to :c:func:`gen_pool_create` will create a pool. The granularity of
+A call to gen_pool_create() will create a pool. The granularity of
 allocations is set with min_alloc_order; it is a log-base-2 number like
 those used by the page allocator, but it refers to bytes rather than pages.
 So, if min_alloc_order is passed as 3, then all allocations will be a
@@ -32,7 +32,7 @@ required to track the memory in the pool.
 The nid parameter specifies which NUMA node should be used for the
 allocation of the housekeeping structures; it can be -1 if the caller
 doesn't care.
 
-The "managed" interface :c:func:`devm_gen_pool_create` ties the pool to a
+The "managed" interface devm_gen_pool_create() ties the pool to a
 specific device. Among other things, it will automatically clean up the
 pool when the given device is destroyed.
 
@@ -53,32 +53,32 @@ to the pool. That can be done with one of:
    :functions: gen_pool_add
 
 .. kernel-doc:: lib/genalloc.c
-   :functions: gen_pool_add_virt
+   :functions: gen_pool_add_owner
 
-A call to :c:func:`gen_pool_add` will place the size bytes of memory
+A call to gen_pool_add() will place the size bytes of memory
 starting at addr (in the kernel's virtual address space) into the given
 pool, once again using nid as the node ID for ancillary memory allocations.
-The :c:func:`gen_pool_add_virt` variant associates an explicit physical
+The gen_pool_add_virt() variant associates an explicit physical
 address with the memory; this is only necessary if the pool will be used
 for DMA allocations.
 
 The functions for allocating memory from the pool (and putting it back)
 are:
 
-.. kernel-doc:: lib/genalloc.c
+.. kernel-doc:: include/linux/genalloc.h
    :functions: gen_pool_alloc
 
 .. kernel-doc:: lib/genalloc.c
    :functions: gen_pool_dma_alloc
 
 .. kernel-doc:: lib/genalloc.c
-   :functions: gen_pool_free
+   :functions: gen_pool_free_owner
 
-As one would expect, :c:func:`gen_pool_alloc` will allocate size< bytes
-from the given pool. The :c:func:`gen_pool_dma_alloc` variant allocates
+As one would expect, gen_pool_alloc() will allocate size< bytes
+from the given pool. The gen_pool_dma_alloc() variant allocates
 memory for use with DMA operations, returning the associated physical
 address in the space pointed to by dma. This will only work if the memory
-was added with :c:func:`gen_pool_add_virt`. Note that this function
+was added with gen_pool_add_virt(). Note that this function
 departs from the usual genpool pattern of using unsigned long values to
 represent kernel addresses; it returns a void * instead.
 
@@ -89,14 +89,14 @@ return. If that sort of control is needed, the following functions will be
 of interest:
 
 .. kernel-doc:: lib/genalloc.c
-   :functions: gen_pool_alloc_algo
+   :functions: gen_pool_alloc_algo_owner
 
 .. kernel-doc:: lib/genalloc.c
    :functions: gen_pool_set_algo
 
-Allocations with :c:func:`gen_pool_alloc_algo` specify an algorithm to be
+Allocations with gen_pool_alloc_algo() specify an algorithm to be
 used to choose the memory to be allocated; the default algorithm can be set
-with :c:func:`gen_pool_set_algo`. The data value is passed to the
+with gen_pool_set_algo(). The data value is passed to the
 algorithm; most ignore it, but it is occasionally needed.
 
 One can, naturally, write a special-purpose algorithm, but there is
 a fair set already available:
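The flow the genalloc document describes — create a pool, hand it backing memory, then carve allocations out of it — looks roughly like this in driver code. A minimal hypothetical sketch, not from the patch; the static `region` buffer and the sizes are invented for illustration::

    #include <linux/genalloc.h>

    #define REGION_SIZE 4096
    static char region[REGION_SIZE];	/* stand-in for device memory */

    static int genalloc_example(void)
    {
            struct gen_pool *pool;
            unsigned long chunk;

            /* min_alloc_order = 3: allocations come in multiples of 8 bytes;
             * nid = -1: no NUMA preference for the housekeeping structures. */
            pool = gen_pool_create(3, -1);
            if (!pool)
                    return -ENOMEM;

            /* Place REGION_SIZE bytes starting at region into the pool. */
            if (gen_pool_add(pool, (unsigned long)region, REGION_SIZE, -1)) {
                    gen_pool_destroy(pool);
                    return -ENOMEM;
            }

            chunk = gen_pool_alloc(pool, 64);	/* take 64 bytes... */
            if (chunk)
                    gen_pool_free(pool, chunk, 64);	/* ...and give them back */

            gen_pool_destroy(pool);
            return 0;
    }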
diff --git a/Documentation/core-api/genericirq.rst b/Documentation/core-api/genericirq.rst
index 4da67b65cecf..8f06d885c310 100644
--- a/Documentation/core-api/genericirq.rst
+++ b/Documentation/core-api/genericirq.rst
@@ -26,7 +26,7 @@ Rationale
 =========
 
 The original implementation of interrupt handling in Linux uses the
-:c:func:`__do_IRQ` super-handler, which is able to deal with every type of
+__do_IRQ() super-handler, which is able to deal with every type of
 interrupt logic.
 
 Originally, Russell King identified different types of handlers to build
@@ -43,7 +43,7 @@ During the implementation we identified another type:
 
 - Fast EOI type
 
-In the SMP world of the :c:func:`__do_IRQ` super-handler another type was
+In the SMP world of the __do_IRQ() super-handler another type was
 identified:
 
 - Per CPU type
@@ -83,7 +83,7 @@ IRQ-flow implementation for 'level type' interrupts and add a
 (sub)architecture specific 'edge type' implementation.
 
 To make the transition to the new model easier and prevent the breakage
-of existing implementations, the :c:func:`__do_IRQ` super-handler is still
+of existing implementations, the __do_IRQ() super-handler is still
 available. This leads to a kind of duality for the time being. Over time
 the new model should be used in more and more architectures, as it
 enables smaller and cleaner IRQ subsystems. It's deprecated for three
@@ -116,7 +116,7 @@ status information and pointers to the interrupt flow
 method and the interrupt chip structure which are assigned to this
 interrupt.
 
 Whenever an interrupt triggers, the low-level architecture code calls
-into the generic interrupt code by calling :c:func:`desc->handle_irq`. This
+into the generic interrupt code by calling desc->handle_irq(). This
 high-level IRQ handling function only uses desc->irq_data.chip
 primitives referenced by the assigned chip descriptor structure.
 
@@ -125,27 +125,29 @@ High-level Driver API
 
 The high-level Driver API consists of following functions:
 
-- :c:func:`request_irq`
+- request_irq()
 
-- :c:func:`free_irq`
+- request_threaded_irq()
 
-- :c:func:`disable_irq`
+- free_irq()
 
-- :c:func:`enable_irq`
+- disable_irq()
 
-- :c:func:`disable_irq_nosync` (SMP only)
+- enable_irq()
 
-- :c:func:`synchronize_irq` (SMP only)
+- disable_irq_nosync() (SMP only)
 
-- :c:func:`irq_set_irq_type`
+- synchronize_irq() (SMP only)
 
-- :c:func:`irq_set_irq_wake`
+- irq_set_irq_type()
 
-- :c:func:`irq_set_handler_data`
+- irq_set_irq_wake()
 
-- :c:func:`irq_set_chip`
+- irq_set_handler_data()
 
-- :c:func:`irq_set_chip_data`
+- irq_set_chip()
+
+- irq_set_chip_data()
 
 See the autogenerated function documentation for details.
@@ -154,19 +156,19 @@ High-level IRQ flow handlers
 
 The generic layer provides a set of pre-defined irq-flow methods:
 
-- :c:func:`handle_level_irq`
+- handle_level_irq()
 
-- :c:func:`handle_edge_irq`
+- handle_edge_irq()
 
-- :c:func:`handle_fasteoi_irq`
+- handle_fasteoi_irq()
 
-- :c:func:`handle_simple_irq`
+- handle_simple_irq()
 
-- :c:func:`handle_percpu_irq`
+- handle_percpu_irq()
 
-- :c:func:`handle_edge_eoi_irq`
+- handle_edge_eoi_irq()
 
-- :c:func:`handle_bad_irq`
+- handle_bad_irq()
 
 The interrupt flow handlers (either pre-defined or architecture
 specific) are assigned to specific interrupts by the architecture either
@@ -325,14 +327,14 @@ Delayed interrupt disable
 
 This per interrupt selectable feature, which was introduced by Russell
 King in the ARM interrupt implementation, does not mask an interrupt at
-the hardware level when :c:func:`disable_irq` is called. The interrupt is kept
+the hardware level when disable_irq() is called. The interrupt is kept
 enabled and is masked in the flow handler when an interrupt event
 happens. This prevents losing edge interrupts on hardware which does
 not store an edge interrupt event while the interrupt is disabled at
 the hardware level. When an interrupt arrives while the IRQ_DISABLED
 flag is set, then the interrupt is masked at the hardware level and the
 IRQ_PENDING bit is set. When the interrupt is re-enabled by
-:c:func:`enable_irq` the pending bit is checked and if it is set, the interrupt
+enable_irq() the pending bit is checked and if it is set, the interrupt
 is resent either via hardware or by a software resend mechanism. (It's
 necessary to enable CONFIG_HARDIRQS_SW_RESEND when you want to use the
 delayed interrupt disable feature and your hardware is not capable
@@ -369,7 +371,7 @@ handler(s) to use these basic units of low-level functionality.
 
 __do_IRQ entry point
 ====================
 
-The original implementation :c:func:`__do_IRQ` was an alternative entry point
+The original implementation __do_IRQ() was an alternative entry point
 for all types of interrupts. It no longer exists.
 
 This handler turned out to be not suitable for all interrupt hardware
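In practice a driver touches only a handful of these calls. A hypothetical sketch of the request/free cycle (the device name, handler, and cookie are invented, not taken from the patch)::

    #include <linux/interrupt.h>

    static irqreturn_t mydev_handler(int irq, void *dev_id)
    {
            /* Acknowledge the device; do only the urgent work here. */
            return IRQ_HANDLED;
    }

    static int mydev_setup_irq(unsigned int irq, void *mydev)
    {
            /* "mydev" appears in /proc/interrupts; the dev_id cookie
             * identifies this device on a shared line and must be
             * passed again to free_irq(). */
            int ret = request_irq(irq, mydev_handler, IRQF_SHARED,
                                  "mydev", mydev);
            if (ret)
                    return ret;

            disable_irq(irq);	/* mask while reprogramming the device */
            enable_irq(irq);	/* unmask again */
            return 0;
    }

    static void mydev_free_irq(unsigned int irq, void *mydev)
    {
            free_irq(irq, mydev);
    }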
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 939e3dfc86e9..4aa82ddd01b8 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -88,10 +88,11 @@ Selecting memory allocator
 ==========================
 
 The most straightforward way to allocate memory is to use a function
-from the :c:func:`kmalloc` family. And, to be on the safe size it's
-best to use routines that set memory to zero, like
-:c:func:`kzalloc`. If you need to allocate memory for an array, there
-are :c:func:`kmalloc_array` and :c:func:`kcalloc` helpers.
+from the kmalloc() family. And, to be on the safe side it's best to use
+routines that set memory to zero, like kzalloc(). If you need to
+allocate memory for an array, there are kmalloc_array() and kcalloc()
+helpers. The helpers struct_size(), array_size() and array3_size() can
+be used to safely calculate object sizes without overflowing.
 
 The maximal size of a chunk that can be allocated with `kmalloc` is
 limited. The actual limit depends on the hardware and the kernel
@@ -102,29 +103,26 @@ The address of a chunk allocated with `kmalloc` is aligned to at least
 ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
 alignment is also guaranteed to be at least the respective size.
 
-For large allocations you can use :c:func:`vmalloc` and
-:c:func:`vzalloc`, or directly request pages from the page
-allocator. The memory allocated by `vmalloc` and related functions is
-not physically contiguous.
+For large allocations you can use vmalloc() and vzalloc(), or directly
+request pages from the page allocator. The memory allocated by `vmalloc`
+and related functions is not physically contiguous.
 
 If you are not sure whether the allocation size is too large for
-`kmalloc`, it is possible to use :c:func:`kvmalloc` and its
-derivatives. It will try to allocate memory with `kmalloc` and if the
-allocation fails it will be retried with `vmalloc`. There are
-restrictions on which GFP flags can be used with `kvmalloc`; please
-see :c:func:`kvmalloc_node` reference documentation. Note that
-`kvmalloc` may return memory that is not physically contiguous.
+`kmalloc`, it is possible to use kvmalloc() and its derivatives. It will
+try to allocate memory with `kmalloc` and if the allocation fails it
+will be retried with `vmalloc`. There are restrictions on which GFP
+flags can be used with `kvmalloc`; please see kvmalloc_node() reference
+documentation. Note that `kvmalloc` may return memory that is not
+physically contiguous.
 
 If you need to allocate many identical objects you can use the slab
-cache allocator. The cache should be set up with
-:c:func:`kmem_cache_create` or :c:func:`kmem_cache_create_usercopy`
-before it can be used. The second function should be used if a part of
-the cache might be copied to the userspace. After the cache is
-created :c:func:`kmem_cache_alloc` and its convenience wrappers can
-allocate memory from that cache.
-
-When the allocated memory is no longer needed it must be freed. You
-can use :c:func:`kvfree` for the memory allocated with `kmalloc`,
-`vmalloc` and `kvmalloc`. The slab caches should be freed with
-:c:func:`kmem_cache_free`. And don't forget to destroy the cache with
-:c:func:`kmem_cache_destroy`.
+cache allocator. The cache should be set up with kmem_cache_create() or
+kmem_cache_create_usercopy() before it can be used. The second function
+should be used if a part of the cache might be copied to the userspace.
+After the cache is created kmem_cache_alloc() and its convenience
+wrappers can allocate memory from that cache.
+
+When the allocated memory is no longer needed it must be freed. You can
+use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
+`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
+don't forget to destroy the cache with kmem_cache_destroy().
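A loose illustration of the decision tree this document lays out — small zeroed object, overflow-safe array, possibly-large buffer, and a slab cache. `struct foo`, the sizes, and the cache name are all hypothetical::

    #include <linux/slab.h>
    #include <linux/mm.h>
    #include <linux/overflow.h>

    struct foo {
            int id;
            char name[32];
    };

    static int alloc_examples(size_t n)
    {
            struct kmem_cache *cache;
            struct foo *obj, *arr, *f;
            void *buf;

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);	/* small, zeroed */
            arr = kcalloc(n, sizeof(*arr), GFP_KERNEL);	/* zeroed array */
            buf = kvmalloc(array_size(n, PAGE_SIZE),	/* vmalloc fallback */
                           GFP_KERNEL);

            if (obj && arr && buf) {
                    /* Many identical objects: use a dedicated slab cache. */
                    cache = kmem_cache_create("foo", sizeof(struct foo),
                                              0, 0, NULL);
                    if (cache) {
                            f = kmem_cache_alloc(cache, GFP_KERNEL);
                            if (f)
                                    kmem_cache_free(cache, f);
                            kmem_cache_destroy(cache);
                    }
            }

            kfree(obj);
            kfree(arr);
            kvfree(buf);	/* correct for kmalloc- and vmalloc-backed memory */
            return 0;
    }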
diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index 128e8a721c1e..be726986ff75 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -11,7 +11,7 @@ User Space Memory Access
 .. kernel-doc:: arch/x86/lib/usercopy_32.c
    :export:
 
-.. kernel-doc:: mm/util.c
+.. kernel-doc:: mm/gup.c
    :functions: get_user_pages_fast
 
 .. _mm-api-gfp-flags:
diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
index ea21dd4b9bad..8ebe46b1af39 100644
--- a/Documentation/core-api/printk-formats.rst
+++ b/Documentation/core-api/printk-formats.rst
@@ -137,6 +137,20 @@ equivalent to %lx (or %lu). %px is preferred because it is more
 uniquely grep'able. If in the future we need to modify the way the kernel
 handles printing pointers we will be better equipped to find the call sites.
 
+Pointer Differences
+-------------------
+
+::
+
+	%td	2560
+	%tx	a00
+
+For printing the pointer differences, use the %t modifier for ptrdiff_t.
+
+Example::
+
+	printk("test: difference between pointers: %td\n", ptr2 - ptr1);
+
 Struct Resources
 ----------------
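To restate the new section's point: the result of subtracting two pointers has type ptrdiff_t, whose width differs between 32- and 64-bit kernels, so it needs the %t length modifier rather than a plain %d. A made-up snippet, not from the patch::

    char buf[4096];
    char *ptr1 = buf, *ptr2 = buf + 2560;

    printk("diff: %td (decimal), %tx (hex)\n", ptr2 - ptr1, ptr2 - ptr1);
    /* prints: diff: 2560 (decimal), a00 (hex) */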
diff --git a/Documentation/core-api/refcount-vs-atomic.rst b/Documentation/core-api/refcount-vs-atomic.rst
index 976e85adffe8..79a009ce11df 100644
--- a/Documentation/core-api/refcount-vs-atomic.rst
+++ b/Documentation/core-api/refcount-vs-atomic.rst
@@ -35,7 +35,7 @@ atomics & refcounters only provide atomicity and
 program order (po) relation (on the same CPU). It guarantees that
 each ``atomic_*()`` and ``refcount_*()`` operation is atomic and
 instructions are executed in program order on a single CPU.
-This is implemented using :c:func:`READ_ONCE`/:c:func:`WRITE_ONCE` and
+This is implemented using READ_ONCE()/WRITE_ONCE() and
 compare-and-swap primitives.
 
 A strong (full) memory ordering guarantees that all prior loads and
@@ -44,7 +44,7 @@ before any po-later instruction is executed on the same CPU.
 It also guarantees that all po-earlier stores on the same CPU
 and all propagated stores from other CPUs must propagate to all
 other CPUs before any po-later instruction is executed on the original
-CPU (A-cumulative property). This is implemented using :c:func:`smp_mb`.
+CPU (A-cumulative property). This is implemented using smp_mb().
 
 A RELEASE memory ordering guarantees that all prior loads and
 stores (all po-earlier instructions) on the same CPU are completed
@@ -52,14 +52,14 @@ before the operation. It also guarantees that all po-earlier
 stores on the same CPU and all propagated stores from other CPUs
 must propagate to all other CPUs before the release operation
 (A-cumulative property). This is implemented using
-:c:func:`smp_store_release`.
+smp_store_release().
 
 An ACQUIRE memory ordering guarantees that all post loads and
 stores (all po-later instructions) on the same CPU are
 completed after the acquire operation. It also guarantees that all
 po-later stores on the same CPU must propagate to all other CPUs
 after the acquire operation executes. This is implemented using
-:c:func:`smp_acquire__after_ctrl_dep`.
+smp_acquire__after_ctrl_dep().
 A control dependency (on success) for refcounters guarantees that
 if a reference for an object was successfully obtained (reference
@@ -78,8 +78,8 @@ case 1) - non-"Read/Modify/Write" (RMW) ops
 
 Function changes:
 
- * :c:func:`atomic_set` --> :c:func:`refcount_set`
- * :c:func:`atomic_read` --> :c:func:`refcount_read`
+ * atomic_set() --> refcount_set()
+ * atomic_read() --> refcount_read()
 
 Memory ordering guarantee changes:
 
@@ -91,8 +91,8 @@ case 2) - increment-based ops that return no value
 
 Function changes:
 
- * :c:func:`atomic_inc` --> :c:func:`refcount_inc`
- * :c:func:`atomic_add` --> :c:func:`refcount_add`
+ * atomic_inc() --> refcount_inc()
+ * atomic_add() --> refcount_add()
 
 Memory ordering guarantee changes:
 
@@ -103,7 +103,7 @@ case 3) - decrement-based RMW ops that return no value
 
 Function changes:
 
- * :c:func:`atomic_dec` --> :c:func:`refcount_dec`
+ * atomic_dec() --> refcount_dec()
 
 Memory ordering guarantee changes:
 
@@ -115,8 +115,8 @@ case 4) - increment-based RMW ops that return a value
 
 Function changes:
 
- * :c:func:`atomic_inc_not_zero` --> :c:func:`refcount_inc_not_zero`
- * no atomic counterpart --> :c:func:`refcount_add_not_zero`
+ * atomic_inc_not_zero() --> refcount_inc_not_zero()
+ * no atomic counterpart --> refcount_add_not_zero()
 
 Memory ordering guarantees changes:
 
@@ -131,8 +131,8 @@ case 5) - generic dec/sub decrement-based RMW ops that return a value
 
 Function changes:
 
- * :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
- * :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
+ * atomic_dec_and_test() --> refcount_dec_and_test()
+ * atomic_sub_and_test() --> refcount_sub_and_test()
 
 Memory ordering guarantees changes:
 
@@ -144,14 +144,14 @@ case 6) other decrement-based RMW ops that return a value
 
 Function changes:
 
- * no atomic counterpart --> :c:func:`refcount_dec_if_one`
+ * no atomic counterpart --> refcount_dec_if_one()
 * ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``
 
 Memory ordering guarantees changes:
 
 * fully ordered --> RELEASE ordering + control dependency
 
-.. note:: :c:func:`atomic_add_unless` only provides full order on success.
+.. note:: atomic_add_unless() only provides full order on success.
 
 case 7) - lock-based RMW
 
@@ -159,10 +159,10 @@
 Function changes:
 
- * :c:func:`atomic_dec_and_lock` --> :c:func:`refcount_dec_and_lock`
- * :c:func:`atomic_dec_and_mutex_lock` --> :c:func:`refcount_dec_and_mutex_lock`
+ * atomic_dec_and_lock() --> refcount_dec_and_lock()
+ * atomic_dec_and_mutex_lock() --> refcount_dec_and_mutex_lock()
 
 Memory ordering guarantees changes:
 
 * fully ordered --> RELEASE ordering + control dependency + hold
-  :c:func:`spin_lock` on success
+  spin_lock() on success
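Pulling several of the cases above together, converting a hypothetical reference-counted object might look like this. `struct obj` and its helpers are invented for illustration; the comments map back to the cases in the document::

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct obj {
            refcount_t refs;	/* was: atomic_t refs; */
    };

    static struct obj *obj_create(void)
    {
            struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

            if (o)
                    refcount_set(&o->refs, 1);	/* case 1: was atomic_set() */
            return o;
    }

    static struct obj *obj_get(struct obj *o)
    {
            /* case 4: was atomic_inc_not_zero(); refcount_t additionally
             * saturates instead of wrapping on overflow, and provides the
             * weaker ordering described above rather than full order. */
            return refcount_inc_not_zero(&o->refs) ? o : NULL;
    }

    static void obj_put(struct obj *o)
    {
            /* case 5: was atomic_dec_and_test(); the RELEASE ordering plus
             * control dependency on success is what the free path relies on. */
            if (refcount_dec_and_test(&o->refs))
                    kfree(o);
    }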