path: root/drivers/dma
Age  Commit message  (Author, files changed, lines -removed/+added)
2019-05-02  dmaengine: sh: rcar-dmac: With cyclic DMA residue 0 is valid  (Dirk Behme, 1 file, -1/+3)
commit 907bd68a2edc491849e2fdcfe52c4596627bca94 upstream. Having a cyclic DMA, a residue 0 is not an indication of a completed DMA. In case of cyclic DMA make sure that dma_set_residue() is called and with this a residue of 0 is forwarded correctly to the caller. Fixes: 3544d2878817 ("dmaengine: rcar-dmac: use result of updated get_residue in tx_status") Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com> Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com> Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com> Signed-off-by: Yao Lihua <ylhuajnu@outlook.com> Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Cc: <stable@vger.kernel.org> # v4.8+ Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-04-05  dmaengine: tegra: avoid overflow of byte tracking  (Ben Dooks, 1 file, -1/+4)
[ Upstream commit e486df39305864604b7e25f2a95d51039517ac57 ] The dma_desc->bytes_transferred counter tracks the number of bytes moved by the DMA channel. This is then used to calculate the information passed back in the tegra_dma_tx_status callback, which is usually fine. When the DMA channel is configured as continuous, then the bytes_transferred counter will increase over time and eventually overflow to become negative so the residue count will become invalid and the ALSA sound-dma code will report invalid hardware pointer values to the application. This results in some users becoming confused about the playout position and putting audio data in the wrong place. To fix this issue, always ensure the bytes_transferred field is modulo the size of the request. We only do this for the case of the cyclic transfer done ISR as anyone attempting to move 2GiB of DMA data in one transfer is unlikely. Note, we don't fix the issue that we should /never/ transfer a negative number of bytes so we could make those fields unsigned. Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Acked-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
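A minimal sketch of the wrap-around fix described above; the field names follow the commit text, but the fragment is illustrative rather than the actual Tegra ISR code:

        /* In the cyclic-transfer-done path, keep the running counter bounded
         * by the request size so it never overflows on continuous transfers. */
        dma_desc->bytes_transferred += bytes_this_chunk;
        dma_desc->bytes_transferred %= dma_desc->bytes_requested;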
2019-04-05  dmaengine: qcom_hidma: assign channel cookie correctly  (Shunyong Yang, 1 file, -8/+9)
[ Upstream commit 546c0547555efca8ba8c120716c325435e29df1b ] When dma_cookie_complete() is called in hidma_process_completed(), dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then, hidma_txn_is_success() will be called to use channel cookie mchan->last_success to do additional DMA status check. Current code assigns mchan->last_success after dma_cookie_complete(). This causes a race condition of dma_cookie_status() returns DMA_COMPLETE before mchan->last_success is assigned correctly. The race will cause hidma_tx_status() return DMA_ERROR but the transaction is actually a success. Moreover, in async_tx case, it will cause a timeout panic in async_tx_quiesce(). Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for transaction ... Call trace: [<ffff000008089994>] dump_backtrace+0x0/0x1f4 [<ffff000008089bac>] show_stack+0x24/0x2c [<ffff00000891e198>] dump_stack+0x84/0xa8 [<ffff0000080da544>] panic+0x12c/0x29c [<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx] [<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx] [<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456] [<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456] [<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456] [<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456] [<ffff000008736538>] md_thread+0x108/0x168 [<ffff0000080fb1cc>] kthread+0x10c/0x138 [<ffff000008084d34>] ret_from_fork+0x10/0x18 Cc: Joey Zheng <yu.zheng@hxt-semitech.com> Reviewed-by: Sinan Kaya <okaya@kernel.org> Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-04-05  dmaengine: imx-dma: fix warning comparison of distinct pointer types  (Anders Roxell, 1 file, -1/+1)
[ Upstream commit 9227ab5643cb8350449502dd9e3168a873ab0e3b ] The warning got introduced by commit 930507c18304 ("arm64: add basic Kconfig symbols for i.MX8"). Since it got enabled for arm64. The warning haven't been seen before since size_t was 'unsigned int' when built on arm32. ../drivers/dma/imx-dma.c: In function ‘imxdma_sg_next’: ../include/linux/kernel.h:846:29: warning: comparison of distinct pointer types lacks a cast (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1))) ^~ ../include/linux/kernel.h:860:4: note: in expansion of macro ‘__typecheck’ (__typecheck(x, y) && __no_side_effects(x, y)) ^~~~~~~~~~~ ../include/linux/kernel.h:870:24: note: in expansion of macro ‘__safe_cmp’ __builtin_choose_expr(__safe_cmp(x, y), \ ^~~~~~~~~~ ../include/linux/kernel.h:879:19: note: in expansion of macro ‘__careful_cmp’ #define min(x, y) __careful_cmp(x, y, <) ^~~~~~~~~~~~~ ../drivers/dma/imx-dma.c:288:8: note: in expansion of macro ‘min’ now = min(d->len, sg_dma_len(sg)); ^~~ Rework so that we use min_t and pass in the size_t that returns the minimum of two values, using the specified type. Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Acked-by: Olof Johansson <olof@lixom.net> Reviewed-by: Fabio Estevam <festevam@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
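The change boils down to giving min() an explicit result type so the size_t/u32 comparison no longer trips the typecheck; a sketch of the fixed line:

        /* before: now = min(d->len, sg_dma_len(sg));  -- distinct pointer types */
        now = min_t(size_t, d->len, sg_dma_len(sg));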
2019-03-14  dmaengine: dmatest: Abort test in case of mapping error  (Andy Shevchenko, 1 file, -16/+12)
[ Upstream commit 6454368a804c4955ccd116236037536f81e5b1f1 ] In case of mapping error the DMA addresses are invalid and continuing will screw system memory or potentially something else. [ 222.480310] dmatest: dma0chan7-copy0: summary 1 tests, 3 failures 6 iops 349 KB/s (0) ... [ 240.912725] check: Corrupted low memory at 00000000c7c75ac9 (2940 phys) = 5656000000000000 [ 240.921998] check: Corrupted low memory at 000000005715a1cd (2948 phys) = 279f2aca5595ab2b [ 240.931280] check: Corrupted low memory at 000000002f4024c0 (2950 phys) = 5e5624f349e793cf ... Abort any test if mapping failed. Fixes: 4076e755dbec ("dmatest: convert to dmaengine_unmap_data") Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-03-14  dmaengine: at_xdmac: Fix wrongfull report of a channel as in use  (Codrin Ciubotariu, 1 file, -9/+10)
[ Upstream commit dc3f595b6617ebc0307e0ce151e8f2f2b2489b95 ] atchan->status variable is used to store two different information: - pass channel interrupts status from interrupt handler to tasklet; - channel information like whether it is cyclic or paused; This causes a bug when device_terminate_all() is called, (AT_XDMAC_CHAN_IS_CYCLIC cleared on atchan->status) and then a late End of Block interrupt arrives (AT_XDMAC_CIS_BIS), which sets bit 0 of atchan->status. Bit 0 is also used for AT_XDMAC_CHAN_IS_CYCLIC, so when a new descriptor for a cyclic transfer is created, the driver reports the channel as in use: if (test_and_set_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status)) { dev_err(chan2dev(chan), "channel currently used\n"); return NULL; } This patch fixes the bug by adding a different struct member to keep the interrupts status separated from the channel status bits. Fixes: e1f7c9eee707 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver") Signed-off-by: Codrin Ciubotariu <codrin.ciubotariu@microchip.com> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-12  dmaengine: imx-dma: fix wrong callback invoke  (Leonid Iziumtsev, 1 file, -4/+4)
commit 341198eda723c8c1cddbb006a89ad9e362502ea2 upstream. Once the "ld_queue" list is not empty, next descriptor will migrate into "ld_active" list. The "desc" variable will be overwritten during that transition. And later the dmaengine_desc_get_callback_invoke() will use it as an argument. As result we invoke wrong callback. That behaviour was in place since: commit fcaaba6c7136 ("dmaengine: imx-dma: fix callback path in tasklet"). But after commit 4cd13c21b207 ("softirq: Let ksoftirqd do its job") things got worse, since possible delay between tasklet_schedule() from DMA irq handler and actual tasklet function execution got bigger. And that gave more time for new DMA request to be submitted and to be put into "ld_queue" list. It has been noticed that DMA issue is causing problems for "mxc-mmc" driver. While stressing the system with heavy network traffic and writing/reading to/from sd card simultaneously the timeout may happen: 10013000.sdhci: mxcmci_watchdog: read time out (status = 0x30004900) That often lead to file system corruption. Signed-off-by: Leonid Iziumtsev <leonid.iziumtsev@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  dmaengine: bcm2835: Fix abort of transactions  (Lukas Wunner, 1 file, -32/+9)
commit 9e528c799d17a4ac37d788c81440b50377dd592d upstream. There are multiple issues with bcm2835_dma_abort() (which is called on termination of a transaction): * The algorithm to abort the transaction first pauses the channel by clearing the ACTIVE flag in the CS register, then waits for the PAUSED flag to clear. Page 49 of the spec documents the latter as follows: "Indicates if the DMA is currently paused and not transferring data. This will occur if the active bit has been cleared [...]" https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf So the function is entering an infinite loop because it is waiting for PAUSED to clear which is always set due to the function having cleared the ACTIVE flag. The only thing that's saving it from itself is the upper bound of 10000 loop iterations. The code comment says that the intention is to "wait for any current AXI transfer to complete", so the author probably wanted to check the WAITING_FOR_OUTSTANDING_WRITES flag instead. Amend the function accordingly. * The CS register is only read at the beginning of the function. It needs to be read again after pausing the channel and before checking for outstanding writes, otherwise writes which were issued between the register read at the beginning of the function and pausing the channel may not be waited for. * The function seeks to abort the transfer by writing 0 to the NEXTCONBK register and setting the ABORT and ACTIVE flags. Thereby, the 0 in NEXTCONBK is sought to be loaded into the CONBLK_AD register. However experimentation has shown this approach to not work: The CONBLK_AD register remains the same as before and the CS register contains 0x00000030 (PAUSED | DREQ_STOPS_DMA). In other words, the control block is not aborted but merely paused and it will be resumed once the next DMA transaction is started. That is absolutely not the desired behavior. A simpler approach is to set the channel's RESET flag instead. This reliably zeroes the NEXTCONBK as well as the CS register. It requires less code and only a single MMIO write. This is also what popular user space DMA drivers do, e.g.: https://github.com/metachris/RPIO/blob/master/source/c_pwm/pwm.c Note that the spec is contradictory whether the NEXTCONBK register is writeable at all. On the one hand, page 41 claims: "The value loaded into the NEXTCONBK register can be overwritten so that the linked list of Control Block data structures can be dynamically altered. However it is only safe to do this when the DMA is paused." On the other hand, page 40 specifies: "Only three registers in each channel's register set are directly writeable (CS, CONBLK_AD and DEBUG). The other registers (TI, SOURCE_AD, DEST_AD, TXFR_LEN, STRIDE & NEXTCONBK), are automatically loaded from a Control Block data structure held in external memory." Fixes: 96286b576690 ("dmaengine: Add support for BCM2835") Signed-off-by: Lukas Wunner <lukas@wunner.de> Cc: stable@vger.kernel.org # v3.14+ Cc: Frank Pavlic <f.pavlic@kunbus.de> Cc: Martin Sperl <kernel@martin.sperl.org> Cc: Florian Meier <florian.meier@koalo.de> Cc: Clive Messer <clive.m.messer@gmail.com> Cc: Matthias Reichl <hias@horus.com> Tested-by: Stefan Wahren <stefan.wahren@i2se.com> Acked-by: Florian Kauer <florian.kauer@koalo.de> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  dmaengine: bcm2835: Fix interrupt race on RT  (Lukas Wunner, 1 file, -15/+18)
commit f7da7782aba92593f7b82f03d2409a1c5f4db91b upstream. If IRQ handlers are threaded (either because CONFIG_PREEMPT_RT_BASE is enabled or "threadirqs" was passed on the command line) and if system load is sufficiently high that wakeup latency of IRQ threads degrades, SPI DMA transactions on the BCM2835 occasionally break like this: ks8851 spi0.0: SPI transfer timed out bcm2835-dma 3f007000.dma: DMA transfer could not be terminated ks8851 spi0.0 eth2: ks8851_rdfifo: spi_sync() failed The root cause is an assumption made by the DMA driver which is documented in a code comment in bcm2835_dma_terminate_all(): /* * Stop DMA activity: we assume the callback will not be called * after bcm_dma_abort() returns (even if it does, it will see * c->desc is NULL and exit.) */ That assumption falls apart if the IRQ handler bcm2835_dma_callback() is threaded: A client may terminate a descriptor and issue a new one before the IRQ handler had a chance to run. In fact the IRQ handler may miss an *arbitrary* number of descriptors. The result is the following race condition: 1. A descriptor finishes, its interrupt is deferred to the IRQ thread. 2. A client calls dma_terminate_async() which sets channel->desc = NULL. 3. The client issues a new descriptor. Because channel->desc is NULL, bcm2835_dma_issue_pending() immediately starts the descriptor. 4. Finally the IRQ thread runs and writes BCM2835_DMA_INT to the CS register to acknowledge the interrupt. This clears the ACTIVE flag, so the newly issued descriptor is paused in the middle of the transaction. Because channel->desc is not NULL, the IRQ thread finalizes the descriptor and tries to start the next one. I see two possible solutions: The first is to call synchronize_irq() in bcm2835_dma_issue_pending() to wait until the IRQ thread has finished before issuing a new descriptor. The downside of this approach is unnecessary latency if clients desire rapidly terminating and re-issuing descriptors and don't have any use for an IRQ callback. (The SPI TX DMA channel is a case in point.) A better alternative is to make the IRQ thread recognize that it has missed descriptors and avoid finalizing the newly issued descriptor. So first of all, set the ACTIVE flag when acknowledging the interrupt. This keeps a newly issued descriptor running. If the descriptor was finished, the channel remains idle despite the ACTIVE flag being set. However the ACTIVE flag can then no longer be used to check whether the channel is idle, so instead check whether the register containing the current control block address is zero and finalize the current descriptor only if so. That way, there is no impact on latency and throughput if the client doesn't care for the interrupt: Only minimal additional overhead is introduced for non-cyclic descriptors as one further MMIO read is necessary per interrupt to check for idleness of the channel. Cyclic descriptors are sped up slightly by removing one MMIO write per interrupt. Fixes: 96286b576690 ("dmaengine: Add support for BCM2835") Signed-off-by: Lukas Wunner <lukas@wunner.de> Cc: stable@vger.kernel.org # v3.14+ Cc: Frank Pavlic <f.pavlic@kunbus.de> Cc: Martin Sperl <kernel@martin.sperl.org> Cc: Florian Meier <florian.meier@koalo.de> Cc: Clive Messer <clive.m.messer@gmail.com> Cc: Matthias Reichl <hias@horus.com> Tested-by: Stefan Wahren <stefan.wahren@i2se.com> Acked-by: Florian Kauer <florian.kauer@koalo.de> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12  dmaengine: xilinx_dma: Remove __aligned attribute on zynqmp_dma_desc_ll  (Nathan Chancellor, 1 file, -1/+1)
[ Upstream commit aeaebcc17cdf37065d2693865eeb1ff1c7dc5bf3 ] Clang warns: drivers/dma/xilinx/zynqmp_dma.c:166:4: warning: attribute 'aligned' is ignored, place it after "struct" to apply attribute to type declaration [-Wignored-attributes] }; __aligned(64) ^ ./include/linux/compiler_types.h:200:38: note: expanded from macro '__aligned' ^ 1 warning generated. As Nick pointed out in the previous version of this patch, the author likely intended for this struct to be 8-byte (64-bit) aligned, not 64-byte, which is the default. Remove the hanging __aligned attribute. Fixes: b0cc417c1637 ("dmaengine: Add Xilinx zynqmp dma engine driver support") Reported-by: Nick Desaulniers <ndesaulniers@google.com> Suggested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
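For illustration, __aligned() only attaches to a struct type when it appears before the terminating semicolon; the merged fix simply dropped the attribute, but an 8-byte-aligned variant (with an abbreviated, made-up field layout) would look like this:

        struct example_desc_ll {
                u64 addr;
                u32 size;
                u32 ctrl;
        } __aligned(8);         /* attribute applies to the type */

        /* }; __aligned(64)   <-- old placement: attribute silently ignored */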
2018-12-13  dmaengine: cppi41: delete channel from pending list when stop channel  (Bin Liu, 1 file, -1/+15)
commit 59861547ec9a9736e7882f6fb0c096a720ff811a upstream. The driver defines three states for a cppi channel. - idle: .chan_busy == 0 && not in .pending list - pending: .chan_busy == 0 && in .pending list - busy: .chan_busy == 1 && not in .pending list There are cases in which the cppi channel could be in the pending state when cppi41_dma_issue_pending() is called after cppi41_runtime_suspend() is called. cppi41_stop_chan() has a bug for these cases to set channels to idle state. It only checks the .chan_busy flag, but not the .pending list, then later when cppi41_runtime_resume() is called the channels in .pending list will be transitioned to busy state. Removing channels from the .pending list solves the problem. Fixes: 975faaeb9985 ("dma: cppi41: start tear down only if channel is busy") Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Bin Liu <b-liu@ti.com> Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-12-05  dmaengine: at_hdmac: fix module unloading  (Richard Genoud, 1 file, -0/+2)
commit 77e75fda94d2ebb86aa9d35fb1860f6395bf95de upstream. of_dma_controller_free() was not called on module unloading. This led to a soft lockup: watchdog: BUG: soft lockup - CPU#0 stuck for 23s! Modules linked in: at_hdmac [last unloaded: at_hdmac] when of_dma_request_slave_channel() tried to call ofdma->of_dma_xlate(). Cc: stable@vger.kernel.org Fixes: bbe89c8e3d59 ("at_hdmac: move to generic DMA binding") Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Richard Genoud <richard.genoud@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
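A minimal sketch of the pairing the fix restores (the remove body is illustrative, not the full at_hdmac teardown): whatever probe() registered with of_dma_controller_register() must be unregistered in remove() so later of_dma_request_slave_channel() calls cannot reach freed code:

        static int example_remove(struct platform_device *pdev)
        {
                /* undo of_dma_controller_register() from probe() */
                if (pdev->dev.of_node)
                        of_dma_controller_free(pdev->dev.of_node);

                /* ... remaining channel/clock teardown ... */
                return 0;
        }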
2018-12-05  dmaengine: at_hdmac: fix memory leak in at_dma_xlate()  (Richard Genoud, 1 file, -1/+7)
commit 98f5f932254b88ce828bc8e4d1642d14e5854caa upstream. The leak was found when opening/closing a serial port a great number of time, increasing kmalloc-32 in slabinfo. Each time the port was opened, dma_request_slave_channel() was called. Then, in at_dma_xlate(), atslave was allocated with devm_kzalloc() and never freed. (Well, it was free at module unload, but that's not what we want). So, here, kzalloc is more suited for the job since it has to be freed in atc_free_chan_resources(). Cc: stable@vger.kernel.org Fixes: bbe89c8e3d59 ("at_hdmac: move to generic DMA binding") Reported-by: Mario Forner <m.forner@be4energy.com> Suggested-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Richard Genoud <richard.genoud@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
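A sketch of the allocation-lifetime change described above (simplified, error handling abbreviated): the per-request slave data allocated in at_dma_xlate() uses plain kzalloc() so it can be released explicitly when the channel resources are freed, instead of devm_kzalloc() which only goes away with the controller device:

        /* in at_dma_xlate(): per-channel-request allocation */
        atslave = kzalloc(sizeof(*atslave), GFP_KERNEL);
        if (!atslave)
                return NULL;

        /* later, in atc_free_chan_resources(): free the pointer that was
         * stashed on the channel when it was requested */
        kfree(atslave);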
2018-11-13  dmaengine: dma-jz4780: Return error if not probed from DT  (Paul Cercueil, 1 file, -0/+5)
[ Upstream commit 54f919a04cf221bc1601d1193682d4379dacacbd ] The driver calls clk_get() with the clock name set to NULL, which means that the driver could only work when probed from devicetree. From now on, we explicitly require the driver to be probed from devicetree. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Tested-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-13  driver/dma/ioat: Call del_timer_sync() without holding prep_lock  (Waiman Long, 1 file, -1/+8)
[ Upstream commit cfb03be6c7e8a1591285849c361d67b09f5149f7 ] The following lockdep splat was observed: [ 1222.241750] ====================================================== [ 1222.271301] WARNING: possible circular locking dependency detected [ 1222.301060] 4.16.0-10.el8+5.x86_64+debug #1 Not tainted [ 1222.326659] ------------------------------------------------------ [ 1222.356565] systemd-shutdow/1 is trying to acquire lock: [ 1222.382660] ((&ioat_chan->timer)){+.-.}, at: [<00000000f71e1a28>] del_timer_sync+0x5/0xf0 [ 1222.422928] [ 1222.422928] but task is already holding lock: [ 1222.451743] (&(&ioat_chan->prep_lock)->rlock){+.-.}, at: [<000000008ea98b12>] ioat_shutdown+0x86/0x100 [ioatdma] : [ 1223.524987] Chain exists of: [ 1223.524987] (&ioat_chan->timer) --> &(&ioat_chan->cleanup_lock)->rlock --> &(&ioat_chan->prep_lock)->rlock [ 1223.524987] [ 1223.594082] Possible unsafe locking scenario: [ 1223.594082] [ 1223.622630] CPU0 CPU1 [ 1223.645080] ---- ---- [ 1223.667404] lock(&(&ioat_chan->prep_lock)->rlock); [ 1223.691535] lock(&(&ioat_chan->cleanup_lock)->rlock); [ 1223.728657] lock(&(&ioat_chan->prep_lock)->rlock); [ 1223.765122] lock((&ioat_chan->timer)); [ 1223.784095] [ 1223.784095] *** DEADLOCK *** [ 1223.784095] [ 1223.813492] 4 locks held by systemd-shutdow/1: [ 1223.834677] #0: (reboot_mutex){+.+.}, at: [<0000000056d33456>] SYSC_reboot+0x10f/0x300 [ 1223.873310] #1: (&dev->mutex){....}, at: [<00000000258dfdd7>] device_shutdown+0x1c8/0x660 [ 1223.913604] #2: (&dev->mutex){....}, at: [<0000000068331147>] device_shutdown+0x1d6/0x660 [ 1223.954000] #3: (&(&ioat_chan->prep_lock)->rlock){+.-.}, at: [<000000008ea98b12>] ioat_shutdown+0x86/0x100 [ioatdma] In the ioat_shutdown() function: spin_lock_bh(&ioat_chan->prep_lock); set_bit(IOAT_CHAN_DOWN, &ioat_chan->state); del_timer_sync(&ioat_chan->timer); spin_unlock_bh(&ioat_chan->prep_lock); According to the synchronization rule for the del_timer_sync() function, the caller must not hold locks which would prevent completion of the timer's handler. The timer structure has its own lock that manages its synchronization. Setting the IOAT_CHAN_DOWN bit should prevent other CPUs from trying to use that device anyway, there is probably no need to call del_timer_sync() while holding the prep_lock. So the del_timer_sync() call is now moved outside of the prep_lock critical section to prevent the circular lock dependency. Signed-off-by: Waiman Long <longman@redhat.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
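A sketch of the reordering the patch describes: IOAT_CHAN_DOWN is still set under prep_lock, but del_timer_sync() moves outside the critical section, so the timer handler can never be waited on while a lock it might need is held:

        spin_lock_bh(&ioat_chan->prep_lock);
        set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
        spin_unlock_bh(&ioat_chan->prep_lock);

        /* safe now: no lock held that the timer callback could also take */
        del_timer_sync(&ioat_chan->timer);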
2018-09-26  dmaengine: mv_xor_v2: kill the tasklets upon exit  (Hanna Hawa, 1 file, -0/+2)
[ Upstream commit 8bbafed8dd5cfa81071b50ead5cb60367fdef3a9 ] The mv_xor_v2 driver uses a tasklet, initialized during the probe() routine. However, it forgets to cleanup the tasklet using tasklet_kill() function during the remove() routine, which this patch fixes. This prevents the tasklet from potentially running after the module has been removed. Fixes: 19a340b1a820 ("dmaengine: mv_xor_v2: new driver") Signed-off-by: Hanna Hawa <hannah@marvell.com> Reviewed-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
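A minimal sketch of the missing cleanup; the irq_tasklet field name is assumed from the driver and the surrounding teardown is omitted:

        static int mv_xor_v2_remove(struct platform_device *pdev)
        {
                struct mv_xor_v2_device *xor_dev = platform_get_drvdata(pdev);

                /* ... unregister the DMA device, free the IRQ ... */

                /* make sure the tasklet cannot run after the module is gone */
                tasklet_kill(&xor_dev->irq_tasklet);
                return 0;
        }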
2018-09-26  dmaengine: pl330: fix irq race with terminate_all  (John Keeping, 1 file, -2/+3)
[ Upstream commit e49756544a21f5625b379b3871d27d8500764670 ] In pl330_update() when checking if a channel has been aborted, the channel's lock is not taken, only the overall pl330_dmac lock. But in pl330_terminate_all() the aborted flag (req_running==-1) is set under the channel lock and not the pl330_dmac lock. With threaded interrupts, this leads to a potential race: pl330_terminate_all pl330_update ------------------- ------------ lock channel entry lock pl330 _stop channel unlock pl330 lock pl330 check req_running != -1 req_running = -1 _start channel Signed-off-by: John Keeping <john@metanate.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-24  dmaengine: k3dma: Off by one in k3_of_dma_simple_xlate()  (Dan Carpenter, 1 file, -1/+1)
[ Upstream commit c4c2b7644cc9a41f17a8cc8904efe3f66ae4c7ed ] The d->chans[] array has d->dma_requests elements so the > should be >= here. Fixes: 8e6152bc660e ("dmaengine: Add hisilicon k3 DMA engine driver") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
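The off-by-one reduces to the boundary check sketched below: valid indices into d->chans[] run from 0 to d->dma_requests - 1, so equality must be rejected as well:

        if (request >= d->dma_requests)   /* was '>', letting index == size through */
                return NULL;
        /* d->chans[request] is now guaranteed to be in bounds */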
2018-08-24  dmaengine: pl330: report BURST residue granularity  (Marek Szyprowski, 1 file, -1/+1)
[ Upstream commit e3f329c600033f011a978a8bc4ddb1e2e94c4f4d ] The reported residue is already calculated in BURST unit granularity, so advertise this capability properly to other devices in the system. Fixes: aee4d1fac887 ("dmaengine: pl330: improve pl330_tx_status() function") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
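The fix is effectively the single capability line sketched here, with pd standing for the driver's struct dma_device set up at probe time:

        pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;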
2018-05-30  dmaengine: qcom: bam_dma: get num-channels and num-ees from dt  (Srinivas Kandagatla, 1 file, -5/+22)
[ Upstream commit 48d163b1aa6e7f650c0b7a4f9c61c387a6def868 ] When Linux is master of BAM, it can directly read registers to know number of supported channels, however when its remotely controlled reading these registers would trigger a crash if the BAM is not yet initialized or powered up on the remote side. This patch allows driver to read num-channels and num-ees from Device Tree for remotely controlled BAM. Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
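A rough sketch of the device-tree fallback described above. The property names ("num-channels", "qcom,num-ees") and the controlled_remotely/num_ees fields are assumptions based on the commit subject, not verified against the binding document:

        if (bdev->controlled_remotely) {
                /* registers may be unclocked on the remote side: take the
                 * topology from the device tree instead of the hardware */
                ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
                                           &bdev->num_channels);
                if (ret)
                        return ret;
                ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
                                           &bdev->num_ees);
                if (ret)
                        return ret;
        }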
2018-05-30  dmaengine: rcar-dmac: Check the done lists in rcar_dmac_chan_get_residue()  (Yoshihiro Shimoda, 1 file, -0/+9)
[ Upstream commit 3e081628d510b2ddbe493371d9c574d9275da17e ] This patch fixes an issue that a race condition happens between a client driver and the rcar-dmac driver: - The rcar_dmac_isr_transfer_end() is called. - The done list appears, and desc.running is the next active list. - rcar_dmac_chan_get_residue() is called by a client driver before rcar_dmac_isr_channel_thread() is called. - The rcar_dmac_chan_get_residue() will not find any descriptors. - And, the following WARNING happens: WARN(1, "No descriptor for cookie!"); The sh-sci driver with HSCIF (921,600bps) on R-Car H3 can cause this situation. So, this patch checks the done lists in rcar_dmac_chan_get_residue() and returns zero if the done lists has the argument cookie. Tested-by: Nguyen Viet Dung <dung.nguyen.aj@renesas.com> Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-05-30  dmaengine: pl330: fix a race condition in case of threaded irqs  (Qi Hou, 1 file, -2/+4)
[ Upstream commit a3ca831249ca8c4c226e4ceafee04e280152e59d ] When booting up with "threadirqs" in command line, all irq handlers of the DMA controller pl330 will be threaded forcedly. These threads will race for the same list, pl330->req_done. Before the callback, the spinlock was released. And after it, the spinlock was taken. This opened an race window where another threaded irq handler could steal the spinlock and be permitted to delete entries of the list, pl330->req_done. If the later deleted an entry that was still referred to by the former, there would be a kernel panic when the former was scheduled and tried to get the next sibling of the deleted entry. The scenario could be depicted as below: Thread: T1 pl330->req_done Thread: T2 | | | | -A-B-C-D- | Locked | | | | Waiting Del A | | | -B-C-D- | Unlocked | | | | Locked Waiting | | | | Del B | | | | -C-D- Unlocked Waiting | | | Locked | get C via B \ - Kernel panic The kernel panic looked like as below: Unable to handle kernel paging request at virtual address dead000000000108 pgd = ffffff8008c9e000 [dead000000000108] *pgd=000000027fffe003, *pud=000000027fffe003, *pmd=0000000000000000 Internal error: Oops: 96000044 [#1] PREEMPT SMP Modules linked in: CPU: 0 PID: 85 Comm: irq/59-66330000 Not tainted 4.8.24-WR9.0.0.12_standard #2 Hardware name: Broadcom NS2 SVK (DT) task: ffffffc1f5cc3c00 task.stack: ffffffc1f5ce0000 PC is at pl330_irq_handler+0x27c/0x390 LR is at pl330_irq_handler+0x2a8/0x390 pc : [<ffffff80084cb694>] lr : [<ffffff80084cb6c0>] pstate: 800001c5 sp : ffffffc1f5ce3d00 x29: ffffffc1f5ce3d00 x28: 0000000000000140 x27: ffffffc1f5c530b0 x26: dead000000000100 x25: dead000000000200 x24: 0000000000418958 x23: 0000000000000001 x22: ffffffc1f5ccd668 x21: ffffffc1f5ccd590 x20: ffffffc1f5ccd418 x19: dead000000000060 x18: 0000000000000001 x17: 0000000000000007 x16: 0000000000000001 x15: ffffffffffffffff x14: ffffffffffffffff x13: ffffffffffffffff x12: 0000000000000000 x11: 0000000000000001 x10: 0000000000000840 x9 : ffffffc1f5ce0000 x8 : ffffffc1f5cc3338 x7 : ffffff8008ce2020 x6 : 0000000000000000 x5 : 0000000000000000 x4 : 0000000000000001 x3 : dead000000000200 x2 : dead000000000100 x1 : 0000000000000140 x0 : ffffffc1f5ccd590 Process irq/59-66330000 (pid: 85, stack limit = 0xffffffc1f5ce0020) Stack: (0xffffffc1f5ce3d00 to 0xffffffc1f5ce4000) 3d00: ffffffc1f5ce3d80 ffffff80080f09d0 ffffffc1f5ca0c00 ffffffc1f6f7c600 3d20: ffffffc1f5ce0000 ffffffc1f6f7c600 ffffffc1f5ca0c00 ffffff80080f0998 3d40: ffffffc1f5ce0000 ffffff80080f0000 0000000000000000 0000000000000000 3d60: ffffff8008ce202c ffffff8008ce2020 ffffffc1f5ccd668 ffffffc1f5c530b0 3d80: ffffffc1f5ce3db0 ffffff80080f0d70 ffffffc1f5ca0c40 0000000000000001 3da0: ffffffc1f5ce0000 ffffff80080f0cfc ffffffc1f5ce3e20 ffffff80080bf4f8 3dc0: ffffffc1f5ca0c80 ffffff8008bf3798 ffffff8008955528 ffffffc1f5ca0c00 3de0: ffffff80080f0c30 0000000000000000 0000000000000000 0000000000000000 3e00: 0000000000000000 0000000000000000 0000000000000000 ffffff80080f0b68 3e20: 0000000000000000 ffffff8008083690 ffffff80080bf420 ffffffc1f5ca0c80 3e40: 0000000000000000 0000000000000000 0000000000000000 ffffff80080cb648 3e60: ffffff8008b1c780 0000000000000000 0000000000000000 ffffffc1f5ca0c00 3e80: ffffffc100000000 ffffff8000000000 ffffffc1f5ce3e90 ffffffc1f5ce3e90 3ea0: 0000000000000000 ffffff8000000000 ffffffc1f5ce3eb0 ffffffc1f5ce3eb0 3ec0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3ee0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3f00: 0000000000000000 
0000000000000000 0000000000000000 0000000000000000 3f20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3f40: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3f60: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3f80: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3fa0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 3fc0: 0000000000000000 0000000000000005 0000000000000000 0000000000000000 3fe0: 0000000000000000 0000000000000000 0000000275ce3ff0 0000000275ce3ff8 Call trace: Exception stack(0xffffffc1f5ce3b30 to 0xffffffc1f5ce3c60) 3b20: dead000000000060 0000008000000000 3b40: ffffffc1f5ce3d00 ffffff80084cb694 0000000000000008 0000000000000e88 3b60: ffffffc1f5ce3bb0 ffffff80080dac68 ffffffc1f5ce3b90 ffffff8008826fe4 3b80: 00000000000001c0 00000000000001c0 ffffffc1f5ce3bb0 ffffff800848dfcc 3ba0: 0000000000020000 ffffff8008b15ae4 ffffffc1f5ce3c00 ffffff800808f000 3bc0: 0000000000000010 ffffff80088377f0 ffffffc1f5ccd590 0000000000000140 3be0: dead000000000100 dead000000000200 0000000000000001 0000000000000000 3c00: 0000000000000000 ffffff8008ce2020 ffffffc1f5cc3338 ffffffc1f5ce0000 3c20: 0000000000000840 0000000000000001 0000000000000000 ffffffffffffffff 3c40: ffffffffffffffff ffffffffffffffff 0000000000000001 0000000000000007 [<ffffff80084cb694>] pl330_irq_handler+0x27c/0x390 [<ffffff80080f09d0>] irq_forced_thread_fn+0x38/0x88 [<ffffff80080f0d70>] irq_thread+0x140/0x200 [<ffffff80080bf4f8>] kthread+0xd8/0xf0 [<ffffff8008083690>] ret_from_fork+0x10/0x40 Code: f2a00838 f9405763 aa1c03e1 aa1503e0 (f9000443) ---[ end trace f50005726d31199c ]--- Kernel panic - not syncing: Fatal exception in interrupt SMP: stopping secondary CPUs SMP: failed to stop secondary CPUs 0-1 Kernel Offset: disabled Memory Limit: none ---[ end Kernel panic - not syncing: Fatal exception in interrupt To fix this, re-start with the list-head after dropping the lock then re-takeing it. Reviewed-by: Frank Mori Hess <fmh6jj@gmail.com> Tested-by: Frank Mori Hess <fmh6jj@gmail.com> Signed-off-by: Qi Hou <qi.hou@windriver.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-05-30  dmaengine: mv_xor_v2: Fix clock resource by adding a register clock  (Gregory CLEMENT, 1 file, -5/+20)
[ Upstream commit 3cd2c313f1d618f92d1294addc6c685c17065761 ] On the CP110 components which are present on the Armada 7K/8K SoC we need to explicitly enable the clock for the registers. However it is not needed for the AP8xx component, that's why this clock is optional. With this patch both clock have now a name, but in order to be backward compatible, the name of the first clock is not used. It allows to still use this clock with a device tree using the old binding. Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-05-30  dmaengine: rcar-dmac: fix max_chunk_size for R-Car Gen3  (Yoshihiro Shimoda, 1 file, -1/+1)
[ Upstream commit d716d9b702bb759dd6fb50804f10a174bd156d71 ] According to R-Car Gen3 Rev.0.80 manual, the DMATCR can be set to 16,777,215 as maximum. So, this patch fixes the max_chunk_size for safety on all of SoCs. Otherwise, a system may hang if the DMATCR is set to 0 on R-Car Gen3. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-24  dmaengine: at_xdmac: fix rare residue corruption  (Maxime Jayat, 1 file, -2/+2)
commit c5637476bbf9bb86c7f0413b8f4822a73d8d2d07 upstream. Despite the efforts made to correctly read the NDA and CUBC registers, the order in which the registers are read could sometimes lead to an inconsistent state. Re-using the timeline from the comments, this following timing of registers reads could lead to reading NDA with value "@desc2" and CUBC with value "MAX desc1": INITD -------- ------------ |____________________| _______________________ _______________ NDA @desc2 \/ @desc3 _______________________/\_______________ __________ ___________ _______________ CUBC 0 \/ MAX desc1 \/ MAX desc2 __________/\___________/\_______________ | | | | Events:(1)(2) (3)(4) (1) check_nda = @desc2 (2) initd = 1 (3) cur_ubc = MAX desc1 (4) cur_nda = @desc2 This is allowed by the condition ((check_nda == cur_nda) && initd), despite cur_ubc and cur_nda being in the precise state we don't want. This error leads to incorrect residue computation. Fix it by inversing the order in which CUBC and INITD are read. This makes sure that NDA and CUBC are always read together either _before_ INITD goes to 0 or _after_ it is back at 1. The case where NDA is read before INITD is at 0 and CUBC is read after INITD is back at 1 will be rejected by check_nda and cur_nda being different. Fixes: 53398f488821 ("dmaengine: at_xdmac: fix residue corruption") Cc: stable@vger.kernel.org Signed-off-by: Maxime Jayat <maxime.jayat@mobile-devices.fr> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-13  dmaengine: imx-sdma: Handle return value of clk_prepare_enable  (Arvind Yadav, 1 file, -5/+18)
[ Upstream commit fb9caf370f4d0457789d13a1a1b110a8db846e5e ] clk_prepare_enable() can fail here and we must check its return value. Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
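A sketch of the error handling being added; the clk_ipg/clk_ahb names follow the usual imx-sdma clock pair and should be treated as assumptions:

        ret = clk_prepare_enable(sdma->clk_ipg);
        if (ret)
                return ret;

        ret = clk_prepare_enable(sdma->clk_ahb);
        if (ret)
                goto err_disable_ipg;

        /* ... rest of the setup ... */

err_disable_ipg:
        clk_disable_unprepare(sdma->clk_ipg);
        return ret;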
2018-03-24  dmaengine: ti-dma-crossbar: Fix event mapping for TPCC_EVT_MUX_60_63  (Vignesh R, 1 file, -1/+9)
[ Upstream commit d087f15786021a9605b20f4c678312510be4cac1 ] Register layout of a typical TPCC_EVT_MUX_M_N register is such that the lowest numbered event is at the lowest byte address and highest numbered event at highest byte address. But TPCC_EVT_MUX_60_63 register layout is different, in that the lowest numbered event is at the highest address and highest numbered event is at the lowest address. Therefore, modify ti_am335x_xbar_write() to handle TPCC_EVT_MUX_60_63 register accordingly. Signed-off-by: Vignesh R <vigneshr@ti.com> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-03-24  dmaengine: zynqmp_dma: Fix race condition in the probe  (Kedareswara rao Appana, 1 file, -1/+2)
[ Upstream commit 5ba080aada5e739165e0f38d5cc3b04c82b323c8 ] Incase of interrupt property is not present, Driver is trying to free an invalid irq, This patch fixes it by adding a check before freeing the irq. Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-03-22  dmaengine: imx-sdma: add 1ms delay to ensure SDMA channel is stopped  (Jiada Wang, 1 file, -1/+16)
[ Upstream commit 7f3ff14b7eb1ffad132117f08a1973b48e653d43 ] sdma_disable_channel() cannot ensure dma is stopped to access module's FIFOs. There is chance SDMA core is running and accessing BD when disable of corresponding channel, this may cause sometimes even after call of .sdma_disable_channel(), SDMA core still be running and accessing module's FIFOs. According to NXP R&D team a delay of one BD SDMA cost time (maximum is 1ms) should be added after disable of the channel bit, to ensure SDMA core has really been stopped after SDMA clients call .device_terminate_all. This patch introduces adds a new function sdma_disable_channel_with_delay() which simply adds 1ms delay after call sdma_disable_channel(), and set it as .device_terminate_all. Signed-off-by: Jiada Wang <jiada_wang@mentor.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-03-03  dmaengine: fsl-edma: disable clks on all error paths  (Andreas Platschek, 1 file, -14/+14)
[ Upstream commit 2610acf46b9ed528ec2cacd717bc9d354e452b73 ] Previously enabled clks are only disabled if clk_prepare_enable() fails. However, there are other error paths were the previously enabled clocks are not disabled. To fix the problem, fsl_disable_clocks() now takes the number of clocks that shall be disabled + unprepared. For existing calls were all clocks were already successfully prepared + enabled, DMAMUX_NR is passed to disable + unprepare all clocks. In error paths were only some clocks were successfully prepared + enabled the loop counter is passed, in order to disable + unprepare all successfully prepared + enabled clocks. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Andreas Platschek <andreas.platschek@opentech.at> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-25  dmaengine: zx: fix build warning  (Jun Nie, 1 file, -1/+1)
commit 067fdeb2f391bfa071f741a2b3eb74b8ff3785cd upstream. Fix build warning related to PAGE_SIZE. The maximum DMA length has nothing to do with PAGE_SIZE, just use a fixed number for the definition. drivers/dma/zx_dma.c: In function 'zx_dma_prep_memcpy': drivers/dma/zx_dma.c:523:8: warning: division by zero [-Wdiv-by-zero] drivers/dma/zx_dma.c: In function 'zx_dma_prep_slave_sg': drivers/dma/zx_dma.c:567:11: warning: division by zero [-Wdiv-by-zero] Signed-off-by: Jun Nie <jun.nie@linaro.org> Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-25  dmaengine: jz4740: disable/unprepare clk if probe fails  (Tobias Jordan, 1 file, -1/+3)
[ Upstream commit eb9436966fdc84cebdf222952a99898ab46d9bb0 ] in error path of jz4740_dma_probe(), call clk_disable_unprepare() to clean up. Found by Linux Driver Verification project (linuxtesting.org). Fixes: 25ce6c35fea0 MIPS: jz4740: Remove custom DMA API Signed-off-by: Tobias Jordan <Tobias.Jordan@elektrobit.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-25  dmaengine: at_hdmac: fix potential NULL pointer dereference in atc_prep_dma_interleaved  (Gustavo A. R. Silva, 1 file, -1/+3)
[ Upstream commit 62a277d43d47e74972de44d33bd3763e31992414 ] _xt_ is being dereferenced before it is null checked, hence there is a potential null pointer dereference. Fix this by moving the pointer dereference after _xt_ has been null checked. This issue was detected with the help of Coccinelle. Fixes: 4483320e241c ("dmaengine: Use Pointer xt after NULL check.") Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
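A sketch of the reordering pattern; the exact validity checks in atc_prep_dma_interleaved are abbreviated here, the point is simply that xt is validated before any of its fields are read:

        if (!xt || !xt->numf || !xt->frame_size)
                return NULL;

        /* only now is it safe to look at xt->sgl, xt->dir, xt->src_start, ... */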
2018-02-25  dmaengine: ioat: Fix error handling path  (Christophe JAILLET, 1 file, -1/+1)
[ Upstream commit 5c9afbda911ce20b3f2181d1e440a0222e1027dd ] If the last test in 'ioat_dma_self_test()' fails, we must release all the allocated resources and not just part of them. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-17  dmaengine: dmatest: fix container_of member in dmatest_callback  (Yang Shunyong, 1 file, -1/+1)
commit 66b3bd2356e0a1531c71a3dcf96944621e25c17c upstream. The type of arg passed to dmatest_callback is struct dmatest_done. It refers to test_done in struct dmatest_thread, not done_wait. Fixes: 6f6a23a213be ("dmaengine: dmatest: move callback wait ...") Signed-off-by: Yang Shunyong <shunyong.yang@hxt-semitech.com> Acked-by: Adam Wallis <awallis@codeaurora.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
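The fix is the container_of() member sketched below: arg points at test_done inside struct dmatest_thread, so that is the member the lookup must name:

        static void dmatest_callback(void *arg)
        {
                struct dmatest_done *done = arg;
                struct dmatest_thread *thread =
                        container_of(done, struct dmatest_thread, test_done);

                /* ... mark done and wake up the waiter recorded in the thread ... */
        }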
2017-12-20  dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type  (Peter Ujfalusi, 1 file, -4/+4)
[ Upstream commit 288e7560e4d3e259aa28f8f58a8dfe63627a1bf6 ] The used 0x1f mask is only valid for am335x family of SoC, different family using this type of crossbar might have different number of electable events. In case of am43xx family 0x3f mask should have been used for example. Instead of trying to handle each family's mask, just use u8 type to store the mux value since the event offsets are aligned to byte offset. Fixes: 42dbdcc6bf965 ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx") Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-20  dmaengine: Fix array index out of bounds warning in __get_unmap_pool()  (Matthias Kaehlcke, 1 file, -0/+2)
[ Upstream commit 23f963e91fd81f44f6b316b1c24db563354c6be8 ] This fixes the following warning when building with clang and CONFIG_DMA_ENGINE_RAID=n : drivers/dma/dmaengine.c:1102:11: error: array index 2 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds] return &unmap_pool[2]; ^ ~ drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here static struct dmaengine_unmap_pool unmap_pool[] = { ^ drivers/dma/dmaengine.c:1104:11: error: array index 3 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds] return &unmap_pool[3]; ^ ~ drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here static struct dmaengine_unmap_pool unmap_pool[] = { Signed-off-by: Matthias Kaehlcke <mka@chromium.org> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-20  dmaengine: dmatest: move callback wait queue to thread context  (Adam Wallis, 1 file, -24/+31)
commit 6f6a23a213be51728502b88741ba6a10cda2441d upstream. Commit adfa543e7314 ("dmatest: don't use set_freezable_with_signal()") introduced a bug (that is in fact documented by the patch commit text) that leaves behind a dangling pointer. Since the done_wait structure is allocated on the stack, future invocations to the DMATEST can produce undesirable results (e.g., corrupted spinlocks). Commit a9df21e34b42 ("dmaengine: dmatest: warn user when dma test times out") attempted to WARN the user that the stack was likely corrupted but did not fix the actual issue. This patch fixes the issue by pushing the wait queue and callback structs into the the thread structure. If a failure occurs due to time, dmaengine_terminate_all will force the callback to safely call wake_up_all() without possibility of using a freed pointer. Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605 Fixes: adfa543e7314 ("dmatest: don't use set_freezable_with_signal()") Reviewed-by: Sinan Kaya <okaya@codeaurora.org> Suggested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com> Signed-off-by: Adam Wallis <awallis@codeaurora.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-10  dmaengine: pl330: fix double lock  (Iago Abal, 1 file, -13/+6)
[ Upstream commit 91539eb1fda2d530d3b268eef542c5414e54bf1a ] The static bug finder EBA (http://www.iagoabal.eu/eba/) reported the following double-lock bug: Double lock: 1. spin_lock_irqsave(pch->lock, flags) at pl330_free_chan_resources:2236; 2. call to function `pl330_release_channel' immediately after; 3. call to function `dma_pl330_rqcb' in line 1753; 4. spin_lock_irqsave(pch->lock, flags) at dma_pl330_rqcb:1505. I have fixed it as suggested by Marek Szyprowski. First, I have replaced `pch->lock' with `pl330->lock' in functions `pl330_alloc_chan_resources' and `pl330_free_chan_resources'. This avoids the double-lock by acquiring a different lock than `dma_pl330_rqcb'. NOTE that, as a result, `pl330_free_chan_resources' executes `list_splice_tail_init' on `pch->work_list' under lock `pl330->lock', whereas in the rest of the code `pch->work_list' is protected by `pch->lock'. I don't know if this may cause race conditions. Similarly `pch->cyclic' is written by `pl330_alloc_chan_resources' under `pl330->lock' but read by `pl330_tx_submit' under `pch->lock'. Second, I have removed locking from `pl330_request_channel' and `pl330_release_channel' functions. Function `pl330_request_channel' is only called from `pl330_alloc_chan_resources', so the lock is already held. Function `pl330_release_channel' is called from `pl330_free_chan_resources', which already holds the lock, and from `pl330_del'. Function `pl330_del' is called in an error path of `pl330_probe' and at the end of `pl330_remove', but I assume that there cannot be concurrent accesses to the protected data at those points. Signed-off-by: Iago Abal <mail@iagoabal.eu> Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-10  dmaengine: stm32-dma: Fix null pointer dereference in stm32_dma_tx_status  (M'boumba Cedric Madianga, 1 file, -7/+3)
[ Upstream commit 57b5a32135c813f2ab669039fb4ec16b30cb3305 ] chan->desc is always set to NULL when a DMA transfer is complete. As a DMA transfer could be complete during the call of stm32_dma_tx_status, we need to be sure that chan->desc is not NULL before using this variable to avoid a null pointer deference issue. Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com> Reviewed-by: Ludovic BARRE <ludovic.barre@st.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
2017-12-10  dmaengine: stm32-dma: Set correct args number for DMA request from DT  (M'boumba Cedric Madianga, 1 file, -5/+2)
[ Upstream commit 7e96304d99477de1f70db42035071e56439da817 ] This patch sets the right number of arguments to be used for DMA clients which request channels from DT. Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com> Reviewed-by: Ludovic BARRE <ludovic.barre@st.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  dmaengine: zx: set DMA_CYCLIC cap_mask bit  (Shawn Guo, 1 file, -0/+1)
[ Upstream commit fc318d64f3d91e15babac00e08354b1beb650b57 ] The zx_dma driver supports cyclic transfer mode. Let's set DMA_CYCLIC cap_mask bit to make that clear, and avoid unnecessary failure when clients request channel via dma_request_chan_by_mask() with DMA_CYCLIC bit set in mask. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Reviewed-by: Jun Nie <jun.nie@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
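The change amounts to the one capability bit sketched here, set during probe before the DMA device is registered; d->slave is assumed to be the driver's struct dma_device:

        dma_cap_set(DMA_CYCLIC, d->slave.cap_mask);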
2017-11-24  dmaengine: dmatest: warn user when dma test times out  (Adam Wallis, 1 file, -0/+1)
commit a9df21e34b422f79d9a9fa5c3eff8c2a53491be6 upstream. Commit adfa543e7314 ("dmatest: don't use set_freezable_with_signal()") introduced a bug (that is in fact documented by the patch commit text) that leaves behind a dangling pointer. Since the done_wait structure is allocated on the stack, future invocations to the DMATEST can produce undesirable results (e.g., corrupted spinlocks). Ideally, this would be cleaned up in the thread handler, but at the very least, the kernel is left in a very precarious scenario that can lead to some long debug sessions when the crash comes later. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197605 Signed-off-by: Adam Wallis <awallis@codeaurora.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  dmaengine: ti-dma-crossbar: Fix possible race condition with dma_inuse  (Peter Ujfalusi, 1 file, -1/+2)
commit 2ccb4837c938357233a0b8818e3ca3e58242c952 upstream. When looking for unused xbar_out lane we should also protect the set_bit() call with the same mutex to protect against concurrent threads picking the same ID. Fixes: ec9bfa1e1a796 ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr") Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
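A sketch of the widened critical section (names follow the commit text, the surrounding lookup is simplified): the free-lane search and the set_bit() that claims the lane must run under the same mutex so two threads cannot pick the same ID:

        mutex_lock(&xbar->mutex);
        map->xbar_out = find_first_zero_bit(xbar->dma_inuse, xbar->dma_requests);
        if (map->xbar_out == xbar->dma_requests) {
                mutex_unlock(&xbar->mutex);
                /* no free lane: clean up and bail out */
                return ERR_PTR(-ENOMEM);
        }
        set_bit(map->xbar_out, xbar->dma_inuse);        /* claimed under the lock */
        mutex_unlock(&xbar->mutex);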
2017-10-18  dmaengine: edma: Align the memcpy acnt array size with the transfer  (Peter Ujfalusi, 1 file, -3/+16)
commit 87a2f622cc6446c7d09ac655b7b9b04886f16a4c upstream. Memory to Memory transfers do not have any special alignment needs regarding the acnt array size, but if one of the areas is in a memory mapped region (like PCIe memory), we need to make sure that the acnt array size is aligned with the mem copy parameters. Before the "dmaengine: edma: Optimize memcpy operation" change, the memcpy was set up in a different way: acnt == number of bytes in a word based on __ffs(src | dest | len), bcnt and ccnt for looping the necessary number of words to complete the transfer. Instead of reverting the commit we can fix it to make sure that the ACNT size is aligned to the transfer. Fixes: df6694f80365a (dmaengine: edma: Optimize memcpy operation) Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-07  dmaengine: ti-dma-crossbar: Add some 'of_node_put()' in error path.  (Christophe JAILLET, 1 file, -0/+2)
[ Upstream commit 75bdc7f31a3a6e9a12e218b31a44a1f54a91554c ] Add some missing 'of_node_put()' in early exit error path. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-07  dmaengine: ioatdma: workaround SKX ioatdma version  (Dave Jiang, 1 file, -0/+2)
[ Upstream commit 34a31f0af84158955a9747fb5c6712da5bbb5331 ] The Skylake ioatdma is technically CBDMA 3.2+ and contains the same hardware bits with some additional 3.3 features, but it's not really 3.3 where the driver is concerned. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-07  dmaengine: ioatdma: Add Skylake PCI Dev ID  (Dave Jiang, 2 files, -1/+10)
[ Upstream commit 1594c18fd297a8edcc72bc4b161f3f52603ebb92 ] Adding Skylake Xeon PCI device ids for ioatdma and related bits. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-29  dmaengine: bcm2835: Fix cyclic DMA period splitting  (Matthias Reichl, 1 file, -1/+4)
commit 2201ac6129fa162ac24da089a034bb0971648ebb upstream. The code responsible for splitting periods into chunks that can be handled by the DMA controller missed to update total_len, the number of bytes processed in the current period, when there are more chunks to follow. Therefore total_len was stuck at 0 and the code didn't work at all. This resulted in a wrong control block layout and audio issues because the cyclic DMA callback wasn't executing on period boundaries. Fix this by adding the missing total_len update. Signed-off-by: Matthias Reichl <hias@horus.com> Signed-off-by: Martin Sperl <kernel@martin.sperl.org> Tested-by: Clive Messer <clive.messer@digitaldreamtime.co.uk> Reviewed-by: Eric Anholt <eric@anholt.net> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14  dmaengine: mv_xor_v2: set DMA mask to 40 bits  (Thomas Petazzoni, 1 file, -0/+4)
commit b2d3c270f9f2fb82518ac500a9849c3aaf503852 upstream. The XORv2 engine on Armada 7K/8K can only access the first 40 bits of the physical address space, so the DMA mask must be set accordingly. Fixes: 19a340b1a820 ("dmaengine: mv_xor_v2: new driver") Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
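The fix boils down to the mask setup sketched below, done early in probe so all descriptor and buffer mappings respect the 40-bit addressing limit:

        ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
        if (ret)
                return ret;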