|
Merge tag 'driver-core-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core update from Greg KH:
"Here's the set of driver core patches for 3.19-rc1.
They are dominated by the removal of the .owner field in platform
drivers. They touch a lot of files, but they are "simple" changes,
just removing a line in a structure.
Other than that, a few minor driver core and debugfs changes. There
are some ath9k patches coming in through this tree that have been
acked by the wireless maintainers as they relied on the debugfs
changes.
Everything has been in linux-next for a while"
* tag 'driver-core-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (324 commits)
Revert "ath: ath9k: use debugfs_create_devm_seqfile() helper for seq_file entries"
fs: debugfs: add forward declaration for struct device type
firmware class: Deletion of an unnecessary check before the function call "vunmap"
firmware loader: fix hung task warning dump
devcoredump: provide a one-way disable function
device: Add dev_<level>_once variants
ath: ath9k: use debugfs_create_devm_seqfile() helper for seq_file entries
ath: use seq_file api for ath9k debugfs files
debugfs: add helper function to create device related seq_file
drivers/base: cacheinfo: remove noisy error boot message
Revert "core: platform: add warning if driver has no owner"
drivers: base: support cpu cache information interface to userspace via sysfs
drivers: base: add cpu_device_create to support per-cpu devices
topology: replace custom attribute macros with standard DEVICE_ATTR*
cpumask: factor out show_cpumap into separate helper function
driver core: Fix unbalanced device reference in drivers_probe
driver core: fix race with userland in device_add()
sysfs/kernfs: make read requests on pre-alloc files use the buffer.
sysfs/kernfs: allow attributes to request write buffer be pre-allocated.
fs: sysfs: return EGBIG on write if offset is larger than file size
...
|
|
The horrible split between the low-level part of the edma support
and the dmaengine front-end driver causes problems on multiplatform
kernels. This is an attempt to improve the situation slightly
by only registering the dmaengine devices that are actually
present.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[olof: add missing include of linux/dma-mapping.h]
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
A platform_driver does not need to set an owner, it will be populated by the
driver core.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
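For illustration, the resulting pattern looks like this on a typical
driver (a generic sketch, not a specific file from the series):

    static struct platform_driver foo_driver = {
            .probe  = foo_probe,
            .remove = foo_remove,
            .driver = {
                    .name   = "foo",
                    /* .owner = THIS_MODULE is gone: the driver core
                     * fills in the owner via platform_driver_register(). */
            },
    };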
|
|
I added bookkeeping of whether or not the 8250-dma driver has an RX
transfer pending so we don't BUG here if it calls dmaengine_pause() on
a channel which has no pending transfer. Guess what, this is not
enough.
The following can be triggered with a busy RX channel and hackbench in
the background:
- The DMA transfer completes. The callback is delayed via
vchan_cookie_complete() into a tasklet, so it does not happen right
away.
- hackbench keeps the system busy so the tasklet does not run "soon".
- The UART has collected enough data and generates a "timeout"
interrupt. Since 8250-dma *thinks* the DMA transfer is still pending,
it tries to cancel it by invoking dmaengine_pause() first. This causes
the segfault because echan->edesc is NULL now that the transfer has
completed (although the callback has not run yet).
With this patch we don't BUG in the scenario described.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
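A minimal sketch of the added guard (names follow the edma driver; the
exact check in the patch may differ):

    /* Sketch: refuse to pause when no transfer is in flight instead of
     * dereferencing echan->edesc, which is NULL once the transfer has
     * completed (even if the tasklet-deferred callback hasn't run). */
    static int edma_dma_pause(struct edma_chan *echan)
    {
            if (!echan->edesc)
                    return -EINVAL;

            edma_pause(echan->ch_num);
            return 0;
    }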
|
|
Pull slave-dma updates from Vinod Koul:
"Some notable changes are:
- new driver for AMBA AXI NBPF by Guennadi
- new driver for sun6i controller by Maxime
- pl330 driver fixes from Lars
- sh-dma updates and fixes from Laurent, Geert and Kuninori
- Documentation updates from Geert
- drivers fixes and updates spread over dw, edma, freescale, mpc512x
etc.."
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (72 commits)
dmaengine: sun6i: depends on RESET_CONTROLLER
dma: at_hdmac: fix invalid remaining bytes detection
dmaengine: nbpfaxi: don't build this driver where it cannot be used
dmaengine: nbpf_error_get_channel() can be static
dma: pl08x: Use correct specifier for size_t values
dmaengine: Remove the context argument to the prep_dma_cyclic operation
dmaengine: nbpfaxi: convert to tasklet
dmaengine: nbpfaxi: fix a theoretical race
dmaengine: add a driver for AMBA AXI NBPF DMAC IP cores
dmaengine: add device tree binding documentation for the nbpfaxi driver
dmaengine: edma: Do not register second device when booted with DT
dmaengine: edma: Do not change the error code returned from edma_alloc_slot
dmaengine: rcar-dmac: Add device tree bindings documentation
dmaengine: shdma: Allocate cyclic sg list dynamically
dmaengine: shdma: Make channel filter ignore unrelated devices
dmaengine: sh: Rework Kconfig and Makefile
dmaengine: sun6i: Fix memory leaks
dmaengine: sun6i: Free the interrupt before killing the tasklet
dmaengine: sun6i: Remove switch statement from buswidth convertion routine
dmaengine: of: kconfig: select DMA_ENGINE when DMA_OF is selected
...
|
|
The argument is always set to NULL and never used. Remove it.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
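For reference, the operation's prototype loses its trailing parameter
roughly like this (a sketch of the dmaengine callback; compare
include/linux/dmaengine.h):

    /* Before: every caller passed context as NULL. */
    struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
            struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
            size_t period_len, enum dma_transfer_direction direction,
            unsigned long flags, void *context);

    /* After: the unused argument is removed across all drivers. */
    struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
            struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
            size_t period_len, enum dma_transfer_direction direction,
            unsigned long flags);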
|
|
DT boot does not yet support more than one edma device. To avoid issues at
runtime we should not register the second device when the kernel is booted
with DT.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
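A sketch of how the init path can skip the legacy device on DT boots
(pdev1 and edma_dev_info1 are illustrative names):

    static int edma_init(void)
    {
            int ret = platform_driver_register(&edma_driver);

            /* Sketch: with device tree boot only one edma device is
             * supported, so don't register the second legacy one. */
            if (ret == 0 && !of_have_populated_dt())
                    pdev1 = platform_device_register_full(&edma_dev_info1);

            return ret;
    }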
|
|
In case of edma_alloc_slot() failure during probe we should return the error
unchanged to make debugging easier.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
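The shape of the fix, sketched against the slot[0] allocation in the
edma probe path:

    echan->slot[0] = edma_alloc_slot(EDMA_CTLR(echan->ch_num),
                                     echan->ch_num);
    if (echan->slot[0] < 0) {
            /* Sketch: propagate the error unchanged instead of
             * flattening it to a generic code such as -ENOMEM. */
            return echan->slot[0];
    }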
|
|
Move the DMA channel used in cyclic mode (audio) to the highest priority
event queue which helps to reduce audio problems.
When the channel is terminated, move it back to the default queue.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
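A sketch of the queue juggling (edma_assign_channel_eventq() is the
platform helper this series relies on; EVENTQ_0 is the highest-priority
queue):

    /* Sketch: in prep_dma_cyclic(), move audio channels to the
     * highest-priority event queue to reduce latency... */
    edma_assign_channel_eventq(echan->ch_num, EVENTQ_0);

    /* ...and in terminate_all(), put the channel back. */
    edma_assign_channel_eventq(echan->ch_num, EVENTQ_DEFAULT);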
|
|
If the client (audio) does not request interrupts for every period, we
can disable them.
With an updated audio driver stack we can play audio without needing
to process any edma interrupts.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
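Sketched, the per-period completion interrupt becomes conditional
(OPT/TCINTEN per the EDMA PaRAM layout; variable names illustrative):

    /* Sketch: only enable the transfer-complete interrupt for a period
     * when the client requested callbacks via DMA_PREP_INTERRUPT. */
    if (flags & DMA_PREP_INTERRUPT)
            edesc->pset[i].param.opt |= TCINTEN;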
|
|
The edma can report accurate DMA position so update the residue_granularity
to DMA_RESIDUE_GRANULARITY_BURST.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
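In the slave caps callback this is a one-line change, sketched:

    /* Sketch: eDMA reports position accurately, so residue is
     * burst-granular rather than descriptor-granular. */
    caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;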
|
|
eDMA can be configured for a 3-byte word size for source and destination.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
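For context, the buswidth enum this slots into (as in
include/linux/dmaengine.h):

    enum dma_slave_buswidth {
            DMA_SLAVE_BUSWIDTH_UNDEFINED = 0,
            DMA_SLAVE_BUSWIDTH_1_BYTE = 1,
            DMA_SLAVE_BUSWIDTH_2_BYTES = 2,
            DMA_SLAVE_BUSWIDTH_3_BYTES = 3, /* the width enabled here */
            DMA_SLAVE_BUSWIDTH_4_BYTES = 4,
            DMA_SLAVE_BUSWIDTH_8_BYTES = 8,
    };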
|
|
The edma PaRAM struct is now within an edma_pset struct, introduced in
Thomas Gleixner's edma tx status series. Update the memcpy function
accordingly.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The granular residue accounting code uses certain variables specifically
for residue accounting. Document these in the structure declaration.
Also move around some elements and group them together.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The first slot in the ParamRAM of EDMA holds the current active
subtransfer. Depending on the direction we read either the source or
the destination address from there. In the internal psets we have the
address of the buffer(s).
In the cyclic case we only use the internal pset[0] which holds the
start address of the circular buffer and calculate the remaining room
to the end of the buffer.
In the SG case we read the current address and compare it to the
internal psets address and length.
- If the current address is outside of this range, the pset has been
processed already, so we mark it done, update the residue_stat value
and process the next set. This avoids having to walk all processed
psets for every invocation of tx_status.
- If it's inside the range we know that we are looking at the current
active set and stop the walk.
- In case of intermediate transfers we update the stats in the
interrupt callback function before starting the next batch of
transfers. The tx_status callback and the interrupt callback are
serialized via vchan.lock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Hunk #2 in original patch manually applied]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
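A condensed sketch of that walk (names modeled on the edma driver; the
real code also tracks processed_stat and residue_stat, which are
omitted here):

    /* Sketch: subtract every pset the hardware has already passed;
     * stop at the pset that contains the current DMA position. */
    pos = edma_get_position(echan->slot[0], dst);
    for (i = 0; i < edesc->pset_nr; i++) {
            struct edma_pset *pset = &edesc->pset[i];

            if (pos >= pset->addr && pos < pset->addr + pset->len)
                    break;                   /* the active pset */

            edesc->residue -= pset->len;     /* done, account for it */
    }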
|
|
For granular accounting we need to store the direction and the
information for the individual psets:
- source or destination address, depending on direction
- length
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
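The per-pset bookkeeping added here, sketched:

    /* Sketch: everything residue accounting needs for one PaRAM set. */
    struct edma_pset {
            u32                     len;    /* length of this subtransfer */
            dma_addr_t              addr;   /* source or destination
                                               address, per direction */
            struct edmacc_param     param;  /* the hardware PaRAM image */
    };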
|
|
Preparatory patch to support finer grained accounting.
Move the edma_params array out of edma_desc so we can add further per
pset data to it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Fixed up hunk #3 in original patch to apply]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
It's likely that the caller investigates the status of a currently
active descriptor. Make that simple check first and only rummage in
the vchan list if that fails.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
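Sketched, the fast path in edma_tx_status() (vchan_find_desc() is the
virt-dma lookup helper):

    /* Sketch: the active descriptor is the common case; fall back to
     * searching the vchan descriptor list only when it doesn't match. */
    if (echan->edesc && echan->edesc->vdesc.tx.cookie == cookie)
            dma_set_residue(txstate, echan->edesc->residue);
    else if ((vdesc = vchan_find_desc(&echan->vchan, cookie)))
            dma_set_residue(txstate, to_edma_desc(&vdesc->tx)->residue);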
|
|
The residue reporting in edma_tx_status() is just broken. It blindly
walks the psets and recalculates the length of the transfer from the
hardware parameters. For cyclic transfers it adds the link pset, which
results in interestingly large residues. For non-cyclic transfers it
adds the dummy pset, which is stupid as well.
Aside from that, it's silly to walk through the pset params when the
per-descriptor residue is known at the point of creating it.
Store the information in edma_desc and use it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
It helps to identify issues if we have some information regarding the
channel with which the event is associated.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The vchan lock in edma_callback is acquired in hard interrupt context.
As interrupts are already disabled, there's no point in saving and
restoring the interrupt mask bit or CPSR flags.
Get rid of the flags local variable and use spin_lock() instead of
spin_lock_irqsave().
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
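The simplification, sketched:

    /* Before: needlessly saving flags in hard-irq context. */
    spin_lock_irqsave(&echan->vchan.lock, flags);
    ...
    spin_unlock_irqrestore(&echan->vchan.lock, flags);

    /* After: interrupts are already off here, plain locking suffices. */
    spin_lock(&echan->vchan.lock);
    ...
    spin_unlock(&echan->vchan.lock);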
|
|
Add DMA memcpy support to the EDMA driver. Successful tests were
performed using the dmatest kernel module. Copy alignment is set to
DMA_SLAVE_BUSWIDTH_4_BYTES and users must ensure the length is aligned
so that the copy is performed fully.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
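Registration-side sketch of what memcpy support entails (assuming the
prep callback is named edma_prep_dma_memcpy):

    /* Sketch: advertise the capability and wire up the prep callback. */
    dma_cap_set(DMA_MEMCPY, dma->cap_mask);
    dma->device_prep_dma_memcpy = edma_prep_dma_memcpy;
    /* 4-byte alignment: callers must keep lengths aligned too. */
    dma->copy_align = DMA_SLAVE_BUSWIDTH_4_BYTES;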
|
|
In case of an unsupported direction it is better to also print the
direction. It is unlikely, but in such an event it helps with the
debugging.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The prep_slave_sg and prep_dma_cyclic callbacks have mostly the same
failure cases, with the same texts printed when we hit them. It helps
when debugging if we know exactly which callback generated the errors.
At the same time, change the debug level for descriptor allocation
failures from dbg to err, since all other error cases use dev_err and
this failure is similarly fatal to the other ones.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
With the callback implemented, omap-dma can provide information to
client drivers regarding supported address widths, directions, residue
granularity, etc.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Do not print the paRAM information when verbose debugging is not
requested, and also reduce the number of lines printed in
edma_prep_dma_cyclic().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Indicate that the edma dmaengine driver has support for cyclic mode.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Pause/Resume can be used by the audio stack when the stream is
paused/resumed. The edma platform code has support for this and the
legacy audio stack used it.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
When a client asks for maxburst = 0 it is basically the same case as
asking for maxburst = 1, since in both cases ASYNC mode needs to be
used and the eDMA is expected to write/read one word per DMA request.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
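The whole fix amounts to a small clamp, sketched:

    /* Sketch: maxburst == 0 means ASYNC, same as maxburst == 1:
     * one word moved per DMA request. */
    if (!burst)
            burst = 1;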
|
|
The code to handle any-length SG lists calls edma_resume() even
before edma_start() is called. This is incorrect because edma_resume()
enables edma events on the channel, after which the CPU (in
edma_start) cannot clear posted events by writing to ECR (per the EDMA
user's guide).
Because of this, EDMA transfers fail to start if for some reason there
is a pending EDMA event registered even before EDMA transfers are
started. This can happen if an EDMA event is a byproduct of device
initialization.
Fix this by calling edma_resume() only if it is not the first batch of
MAX_NR_SG elements.
Without this patch, MMC/SD fails to function on DA850 EVM with DMA.
The behaviour is triggered by specific IP and this can explain why the
issue was not reported before (for example with MMC/SD on AM335x).
Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.
Cc: stable@vger.kernel.org # v3.12.x+
Cc: Joel Fernandes <joelf@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Tested-by: Jon Ringle <jringle@gridpoint.com>
Tested-by: Alexander Holler <holler@ahsoftware.de>
Reported-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
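Sketch of the corrected dispatch in edma_execute() (shape follows the
description above; debug prints omitted):

    /* Sketch: the first MAX_NR_SG batch must go through edma_start(),
     * which can still clear stale posted events via ECR; edma_resume()
     * is only safe for the follow-up batches. */
    if (edesc->processed <= MAX_NR_SG)
            edma_start(echan->ch_num);
    else
            edma_resume(echan->ch_num);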
|
|
Fix a memory leak in the edma_prep_dma_cyclic() error handling path.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The channel allocated/released messages are very spammy and not really
interesting to users. Change them to "debug" level.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Acked-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Pull slave-dmaengine changes from Vinod Koul:
"This brings for slave dmaengine:
- Change dma notification flag to DMA_COMPLETE from DMA_SUCCESS as
dmaengine can only transfer and not verify the validity of dma
transfers
- Bunch of fixes across drivers:
- cppi41 driver fixes from Daniel
- 8 channel freescale dma engine support and updated bindings from
Hongbo
- msx-dma fixes and cleanup by Markus
- DMAengine updates from Dan:
- Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.
- In the course of testing 1/ a collection of enhancements to
dmatest fell out. Notably basic performance statistics, and
fixed / enhanced test control through new module parameters
'run', 'wait', 'noverify', and 'verbose'. Thanks to Andriy and
Linus [Walleij] for their review.
- Testing the raid related corner cases of 1/ triggered bugs in
the recently added 16-source operation support in the ioatdma
driver.
- Some minor fixes / cleanups to mv_xor and ioatdma"
* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (99 commits)
dma: mv_xor: Fix mis-usage of mmio 'base' and 'high_base' registers
dma: mv_xor: Remove unneeded NULL address check
ioat: fix ioat3_irq_reinit
ioat: kill msix_single_vector support
raid6test: add new corner case for ioatdma driver
ioatdma: clean up sed pool kmem_cache
ioatdma: fix selection of 16 vs 8 source path
ioatdma: fix sed pool selection
ioatdma: Fix bug in selftest after removal of DMA_MEMSET.
dmatest: verbose mode
dmatest: convert to dmaengine_unmap_data
dmatest: add a 'wait' parameter
dmatest: add basic performance metrics
dmatest: add support for skipping verification and random data setup
dmatest: use pseudo random numbers
dmatest: support xor-only, or pq-only channels in tests
dmatest: restore ability to start test at module load and init
dmatest: cleanup redundant "dmatest: " prefixes
dmatest: replace stored results mechanism, with uniform messages
Revert "dmatest: append verify result to results"
...
|
|
Pull DMA mask updates from Russell King:
"This series cleans up the handling of DMA masks in a lot of drivers,
fixing some bugs as we go.
Some of the more serious errors include:
- drivers which only set their coherent DMA mask if the attempt to
set the streaming mask fails.
- drivers which test for a NULL dma mask pointer, and then set the
dma mask pointer to a location in their module .data section -
which will cause problems if the module is reloaded.
To counter these, I have introduced two helper functions:
- dma_set_mask_and_coherent() takes care of setting both the
streaming and coherent masks at the same time, with the correct
error handling as specified by the API.
- dma_coerce_mask_and_coherent() which resolves the problem of
drivers forcefully setting DMA masks. This is more a marker for
future work to further clean these locations up - the code which
creates the devices really should be initialising these, but to fix
that in one go along with this change could potentially be very
disruptive.
The last thing this series does is prise away some of Linux's addiction
to "DMA addresses are physical addresses and RAM always starts at
zero". We have ARM LPAE systems where all system memory is above 4GB
physical, hence having DMA masks interpreted by (eg) the block layers
as describing physical addresses in the range 0..DMAMASK fails on
these platforms. Santosh Shilimkar addresses this in this series; the
patches were copied to the appropriate people multiple times but were
ignored.
Fixing this also gets rid of some ARM weirdness in the setup of the
max*pfn variables, and brings ARM into line with every other Linux
architecture as far as those go"
* 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm: (52 commits)
ARM: 7805/1: mm: change max*pfn to include the physical offset of memory
ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function
ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()
ARM: DMA-API: better handing of DMA masks for coherent allocations
ARM: 7857/1: dma: imx-sdma: setup dma mask
DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks
DMA-API: dcdbas: update DMA mask handing
DMA-API: dma: edma.c: no need to explicitly initialize DMA masks
DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks
DMA-API: crypto: remove last references to 'static struct device *dev'
DMA-API: crypto: fix ixp4xx crypto platform device support
DMA-API: others: use dma_set_coherent_mask()
DMA-API: staging: use dma_set_coherent_mask()
DMA-API: usb: use new dma_coerce_mask_and_coherent()
DMA-API: usb: use dma_set_coherent_mask()
DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()
DMA-API: net: octeon: use dma_coerce_mask_and_coherent()
DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()
...
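For reference, the two helpers are used roughly like this (a sketch;
DMA_BIT_MASK(32) is just an example mask):

    /* Sets the streaming and coherent masks together, with proper
     * error handling as the DMA API specifies. */
    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
    if (err)
            return err;

    /* For drivers that used to force dev->dma_mask to point at module
     * data; a marker for future cleanup in the device creators. */
    err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));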
|
|
The fix for freeing descriptor memory was applied twice, so remove the
duplicate.
Reported-by: Wing-Keung Wang <wingkeung.wang@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Using the PaRAM configuration function that we split for reuse by the
different DMA types, we implement Cyclic DMA support.
For the cyclic case, we pass different configuration parameters to this
function, and handle all the Cyclic-specific functionality separately.
Callbacks to the DMA users are handled using vchan_cyclic_callback in
the virt-dma layer. Linking is handled the same way as the slave SG case
except for the last slot where we link it back to the first one in a
cyclic fashion.
We check for cases where the number of periods is greater than the MAX
number of slots the driver can allocate for a particular descriptor,
and error out in such cases.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
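The linking step described above, sketched (edma_link() is the platform
helper that chains PaRAM slots):

    /* Sketch: chain the slots, then close the loop so the transfer
     * recycles instead of terminating after the last period. */
    for (i = 1; i < nslots; i++)
            edma_link(echan->slot[i - 1], echan->slot[i]);
    edma_link(echan->slot[nslots - 1], echan->slot[0]);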
|
|
platform_device_register_full() can set up the DMA mask provided the
appropriate member is set in struct platform_device_info. So let's
make that be the case. This avoids a direct reference to the DMA
masks by this driver.
While here, add the dma_set_mask_and_coherent() call which the DMA API
requires DMA-using drivers to call.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
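Sketched usage (the device name here is illustrative):

    struct platform_device_info pinfo = {
            .parent         = dev,
            .name           = "musb-hdrc",       /* illustrative */
            .id             = PLATFORM_DEVID_AUTO,
            /* The platform core copies this into the new device's
             * dma_mask, so the driver never touches masks directly. */
            .dma_mask       = DMA_BIT_MASK(32),
    };

    musb = platform_device_register_full(&pinfo);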
|
|
The edma header defines DMA_COMPLETE; this causes issues as commit
adfedd9a32e4 moved DMA_SUCCESS to DMA_COMPLETE. edma should properly
namespace its defines and needs a future fix.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Conflicts:
drivers/dma/edma.c
Moved the memory leak fix post merge
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Tested-by: Joel Fernandes <joelf@ti.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit 4b6271a6 fixed a memory leak, but one more existed in the
driver, so fix that too.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
When it fails to allocate a slot, edesc should be freed before
returning.
Signed-off-by: Valentin Ilie <valentin.ilie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
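The fix, sketched against the slot-allocation loop:

    echan->slot[i] = edma_alloc_slot(EDMA_CTLR(echan->ch_num),
                                     EDMA_SLOT_ANY);
    if (echan->slot[i] < 0) {
            /* Sketch: the descriptor was kzalloc'd above; free it
             * on this error path instead of leaking it. */
            kfree(edesc);
            return NULL;
    }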
|
|
davinci-pcm uses 16 as the number of periods. With this, in EDMA we
have to allocate at least 17 slots: 1 slot for the channel and 16
slots for the periods. Due to this, the MAX_NR_SG limitation causes
problems; set it to 20 to make cyclic DMA work when davinci-pcm is
converted to use DMA Engine. Also add a comment clarifying this.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
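The resulting limit with its comment, sketched:

    /*
     * Sketch: max number of SG entries / periods per descriptor.
     * davinci-pcm asks for 16 periods, which needs 16 + 1 slots
     * (one per period plus the channel's own slot), so 20 leaves
     * a little headroom.
     */
    #define MAX_NR_SG    20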
|
|
PaRAM set calculation is abstracted into its own function to enable
better reuse for other DMA cases such as cyclic. We adapt the slave SG
case to use the new function.
This provides a much cleaner abstraction of the internals of the PaRAM
set. However, any PaRAM attributes that are not common to all DMA
types, such as interrupt settings, must be set separately. This
function takes care of the most common attributes.
Also added comments clarifying the A-sync case calculations.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
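The shared helper's shape, sketched (signature modeled on the edma
driver of this era; the body is omitted):

    /* Sketch: compute one PaRAM set from the parameters common to all
     * DMA types; callers add type-specific bits (e.g. interrupts). */
    static int edma_config_pset(struct dma_chan *chan,
                                struct edmacc_param *pset,
                                dma_addr_t src_addr, dma_addr_t dst_addr,
                                u32 burst, enum dma_slave_buswidth dev_width,
                                unsigned int dma_length,
                                enum dma_transfer_direction direction);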
|
|
Free memory allocated to edma_desc when failing to allocate slot.
Signed-off-by: Geyslan G. Bem <geyslan@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Matt's @ti.com address bounces. Update the MODULE_AUTHOR information in
edma.c to his Linaro address.
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Acked-by: Matt Porter <matt.porter@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
With this series, this check is no longer required and
we shouldn't need to reject drivers DMA'ing more than the
MAX number of slots.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
A dummy slot has been used as a way for missed events not to be
reported as missing. This has been particularly troublesome for cases
where we might want to temporarily pause all incoming events.
For the EDMA DMAC, there is no way to do any such pausing of events as
the occurrence of the "next" event is not software controlled.
Using "edma_pause" in IRQ handlers doesn't help, as by then the event
in question from the slave is already missed.
Linking a dummy slot is seen to absorb these events, which we didn't
want to miss. So we don't link to the dummy slot, but instead leave
the transfer linked to the Null set, allow an error condition, and
detect the channel that missed it.
Consider the case where we have a scatter-list like:
SG1->SG2->SG3->SG4->SG5->SG6->Null
For example, for a MAX_NR_SG of 2, earlier we were splitting this as:
SG1->SG2->Null
SG3->SG4->Null
SG5->SG6->Null
Now we split it as:
SG1->SG2->Null
SG3->SG4->Null
SG5->SG6->Dummy
This approach results in fewer unwanted interrupts occurring for the
last list split. The Dummy slot has the property of not raising an
error condition if events are missed, unlike the Null slot. We are OK
with this as we're done with processing the whole list once we reach
the Dummy slot.
Signed-off-by: Joel Fernandes <joelf@ti.com>
[modified duplicate s-o-b & patch title]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|