Diffstat (limited to 'Documentation/core-api')
-rw-r--r--  Documentation/core-api/cpu_hotplug.rst     |   2
-rw-r--r--  Documentation/core-api/dma-api.rst         | 132
-rw-r--r--  Documentation/core-api/dma-attributes.rst  |   8
-rw-r--r--  Documentation/core-api/genericirq.rst      |   2
-rw-r--r--  Documentation/core-api/kernel-api.rst      |   6
-rw-r--r--  Documentation/core-api/workqueue.rst       |   2
-rw-r--r--  Documentation/core-api/xarray.rst          |  16
7 files changed, 85 insertions, 83 deletions
diff --git a/Documentation/core-api/cpu_hotplug.rst b/Documentation/core-api/cpu_hotplug.rst
index 298c9c8bea9a..a2c96bec5ee8 100644
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -30,7 +30,7 @@ which didn't support these methods.
 Command Line Switches
 =====================
 ``maxcpus=n``
-  Restrict boot time CPUs to *n*. Say if you have fourV CPUs, using
+  Restrict boot time CPUs to *n*. Say if you have four CPUs, using
   ``maxcpus=2`` will only boot two. You can choose to bring the other
   CPUs later online.
 
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 3b3abbbb4b9a..75cb757bbff0 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -516,100 +516,110 @@ routines, e.g.:::
 	}
 
-Part II - Advanced dma usage
-----------------------------
+Part II - Non-coherent DMA allocations
+--------------------------------------
 
-Warning: These pieces of the DMA API should not be used in the
-majority of cases, since they cater for unlikely corner cases that
-don't belong in usual drivers.
+These APIs allow to allocate pages that are guaranteed to be DMA addressable
+by the passed in device, but which need explicit management of memory ownership
+for the kernel vs the device.
 
-If you don't understand how cache line coherency works between a
-processor and an I/O device, you should not be using this part of the
-API at all.
+If you don't understand how cache line coherency works between a processor and
+an I/O device, you should not be using this part of the API.
 
 ::
 
 	void *
-	dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
-			gfp_t flag, unsigned long attrs)
+	dma_alloc_noncoherent(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, enum dma_data_direction dir,
+			gfp_t gfp)
 
-Identical to dma_alloc_coherent() except that when the
-DMA_ATTR_NON_CONSISTENT flags is passed in the attrs argument, the
-platform will choose to return either consistent or non-consistent memory
-as it sees fit. By using this API, you are guaranteeing to the platform
-that you have all the correct and necessary sync points for this memory
-in the driver should it choose to return non-consistent memory.
+This routine allocates a region of <size> bytes of consistent memory. It
+returns a pointer to the allocated region (in the processor's virtual address
+space) or NULL if the allocation failed. The returned memory may or may not
+be in the kernel direct mapping. Drivers must not call virt_to_page on
+the returned memory region.
 
-Note: where the platform can return consistent memory, it will
-guarantee that the sync points become nops.
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+The dir parameter specified if data is read and/or written by the device,
+see dma_map_single() for details.
 
-Warning: Handling non-consistent memory is a real pain. You should
-only use this API if you positively know your driver will be
-required to work on one of the rare (usually non-PCI) architectures
-that simply cannot make consistent memory.
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
 
 ::
 
 	void
-	dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
-		       dma_addr_t dma_handle, unsigned long attrs)
+	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
+			dma_addr_t dma_handle, enum dma_data_direction dir)
 
-Free memory allocated by the dma_alloc_attrs(). All common
-parameters must be identical to those otherwise passed to dma_free_coherent,
-and the attrs argument must be identical to the attrs passed to
-dma_alloc_attrs().
+Free a region of memory previously allocated using dma_alloc_noncoherent().
+dev, size and dma_handle and dir must all be the same as those passed into
+dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
+dma_alloc_noncoherent().
 
 ::
 
-	int
-	dma_get_cache_alignment(void)
+	struct page *
+	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			enum dma_data_direction dir, gfp_t gfp)
 
-Returns the processor cache alignment. This is the absolute minimum
-alignment *and* width that you must observe when either mapping
-memory or doing partial flushes.
+This routine allocates a region of <size> bytes of non-coherent memory. It
+returns a pointer to first struct page for the region, or NULL if the
+allocation failed. The resulting struct page can be used for everything a
+struct page is suitable for.
 
-.. note::
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
 
-	This API may return a number *larger* than the actual cache
-	line, but it will guarantee that one or more cache lines fit exactly
-	into the width returned by this call. It will also always be a power
-	of two for easy alignment.
+The dir parameter specified if data is read and/or written by the device,
+see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
 
 ::
 
 	void
-	dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-		       enum dma_data_direction direction)
+	dma_free_pages(struct device *dev, size_t size, struct page *page,
+			dma_addr_t dma_handle, enum dma_data_direction dir)
 
-Do a partial sync of memory that was allocated by dma_alloc_attrs() with
-the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
-continuing on for size. Again, you *must* observe the cache line
-boundaries when doing this.
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size and dma_handle and dir must all be the same as those passed into
+dma_alloc_noncoherent(). page must be the pointer returned by
+dma_alloc_pages().
 
 ::
 
 	int
-	dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				    dma_addr_t device_addr, size_t size);
-
-Declare region of memory to be handed out by dma_alloc_coherent() when
-it's asked for coherent memory for this device.
-
-phys_addr is the CPU physical address to which the memory is currently
-assigned (this will be ioremapped so the CPU can access the region).
+	dma_get_cache_alignment(void)
 
-device_addr is the DMA address the device needs to be programmed
-with to actually address this memory (this will be handed out as the
-dma_addr_t in dma_alloc_coherent()).
+Returns the processor cache alignment. This is the absolute minimum
+alignment *and* width that you must observe when either mapping
+memory or doing partial flushes.
 
-size is the size of the area (must be multiples of PAGE_SIZE).
+.. note::
 
-As a simplification for the platforms, only *one* such region of
-memory may be declared per device.
+	This API may return a number *larger* than the actual cache
+	line, but it will guarantee that one or more cache lines fit exactly
+	into the width returned by this call. It will also always be a power
+	of two for easy alignment.
 
-For reasons of efficiency, most platforms choose to track the declared
-region only at the granularity of a page. For smaller allocations,
-you should use the dma_pool() API.
 
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
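
To make the ownership hand-off described in the dma-api.rst hunk above concrete,
here is a minimal usage sketch (illustration only, not part of the patch; the
example_receive() helper, the 8192-byte size and the DMA_FROM_DEVICE direction
are assumed placeholders for a hypothetical driver)::

	#include <linux/dma-mapping.h>

	static int example_receive(struct device *dev)
	{
		dma_addr_t dma_handle;
		size_t size = 8192;	/* placeholder buffer size */
		void *buf;

		buf = dma_alloc_noncoherent(dev, size, &dma_handle,
					    DMA_FROM_DEVICE, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* Hand ownership to the device before programming dma_handle. */
		dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);

		/* ... start the transfer and wait for it to complete ... */

		/* Take ownership back before the CPU reads what the device wrote. */
		dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

		/* ... consume the data in buf ... */

		dma_free_noncoherent(dev, size, buf, dma_handle, DMA_FROM_DEVICE);
		return 0;
	}

dma_alloc_pages()/dma_free_pages() follow the same ownership and sync rules,
but hand back a struct page instead of a kernel virtual address.
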
diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 29dcbe8826e8..1887d92e8e92 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -25,14 +25,6 @@ Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
 those that do not will simply ignore the attribute and exhibit default
 behavior.
 
-DMA_ATTR_NON_CONSISTENT
------------------------
-
-DMA_ATTR_NON_CONSISTENT lets the platform to choose to return either
-consistent or non-consistent memory as it sees fit. By using this API,
-you are guaranteeing to the platform that you have all the correct and
-necessary sync points for this memory in the driver.
-
 DMA_ATTR_NO_KERNEL_MAPPING
 --------------------------
 
diff --git a/Documentation/core-api/genericirq.rst b/Documentation/core-api/genericirq.rst
index 8f06d885c310..f959c9b53f61 100644
--- a/Documentation/core-api/genericirq.rst
+++ b/Documentation/core-api/genericirq.rst
@@ -419,6 +419,7 @@ functions which are exported.
 .. kernel-doc:: kernel/irq/manage.c
 
 .. kernel-doc:: kernel/irq/chip.c
+   :export:
 
 Internal Functions Provided
 ===========================
@@ -431,6 +432,7 @@ functions.
 .. kernel-doc:: kernel/irq/handle.c
 
 .. kernel-doc:: kernel/irq/chip.c
+   :internal:
 
 Credits
 =======
diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index 4ac53a1363f6..741aa37dc181 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -231,12 +231,6 @@ Refer to the file kernel/module.c for more information.
 Hardware Interfaces
 ===================
 
-Interrupt Handling
-------------------
-
-.. kernel-doc:: kernel/irq/manage.c
-   :export:
-
 DMA Channels
 ------------
 
diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index 00a5ba51e63f..541d31de8926 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -396,3 +396,5 @@ Kernel Inline Documentations Reference
 ======================================
 
 .. kernel-doc:: include/linux/workqueue.h
+
+.. kernel-doc:: kernel/workqueue.c
diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 640934b6f7b4..a137a0e6d068 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -475,13 +475,15 @@ or iterations will move the index to the first index in the range.
 Each entry will only be returned once, no matter how many indices it
 occupies.
 
-Using xas_next() or xas_prev() with a multi-index xa_state
-is not supported. Using either of these functions on a multi-index entry
-will reveal sibling entries; these should be skipped over by the caller.
-
-Storing ``NULL`` into any index of a multi-index entry will set the entry
-at every index to ``NULL`` and dissolve the tie. Splitting a multi-index
-entry into entries occupying smaller ranges is not yet supported.
+Using xas_next() or xas_prev() with a multi-index xa_state is not
+supported. Using either of these functions on a multi-index entry will
+reveal sibling entries; these should be skipped over by the caller.
+
+Storing ``NULL`` into any index of a multi-index entry will set the
+entry at every index to ``NULL`` and dissolve the tie. A multi-index
+entry can be split into entries occupying smaller ranges by calling
+xas_split_alloc() without the xa_lock held, followed by taking the lock
+and calling xas_split().
 
 Functions and structures
 ========================
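
The xas_split_alloc()/xas_split() sequence added in the xarray.rst hunk above is
easier to follow as code; below is a rough sketch of the call order under the
documented locking rule (illustration only, not part of the patch; the
example_split() helper, the order-9 source entry and the order-0 target entries
are assumed placeholders)::

	#include <linux/xarray.h>

	/* Split one order-9 multi-index entry into order-0 entries. */
	static int example_split(struct xarray *xa, unsigned long index,
				 void *entry)
	{
		/* The order given here (0) is that of the new, smaller entries. */
		XA_STATE_ORDER(xas, xa, index, 0);

		/* Preallocate nodes for the split; the xa_lock is not held yet. */
		xas_split_alloc(&xas, entry, 9, GFP_KERNEL);
		if (xas_error(&xas))
			return xas_error(&xas);

		xas_lock(&xas);
		/* Replace the old order-9 entry with order-0 copies of entry. */
		xas_split(&xas, entry, 9);
		xas_unlock(&xas);

		return 0;
	}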