path: root/drivers/dma
Age | Commit message | Author | Files | Lines
2023-12-20 | dmaengine: stm32-dma: avoid bitfield overflow assertion | Amelie Delaunay | 1 | -2/+6
commit 54bed6bafa0f38daf9697af50e3aff5ff1354fe1 upstream. stm32_dma_get_burst() returns a negative error for invalid input, which gets turned into a large u32 value in stm32_dma_prep_dma_memcpy() that in turn triggers an assertion because it does not fit into a two-bit field: drivers/dma/stm32-dma.c: In function 'stm32_dma_prep_dma_memcpy': include/linux/compiler_types.h:354:38: error: call to '__compiletime_assert_282' declared with attribute error: FIELD_PREP: value too large for the field _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__) ^ include/linux/compiler_types.h:335:4: note: in definition of macro '__compiletime_assert' prefix ## suffix(); \ ^~~~~~ include/linux/compiler_types.h:354:2: note: in expansion of macro '_compiletime_assert' _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__) ^~~~~~~~~~~~~~~~~~~ include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert' #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg) ^~~~~~~~~~~~~~~~~~ include/linux/bitfield.h:68:3: note: in expansion of macro 'BUILD_BUG_ON_MSG' BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \ ^~~~~~~~~~~~~~~~ include/linux/bitfield.h:114:3: note: in expansion of macro '__BF_FIELD_CHECK' __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \ ^~~~~~~~~~~~~~~~ drivers/dma/stm32-dma.c:1237:4: note: in expansion of macro 'FIELD_PREP' FIELD_PREP(STM32_DMA_SCR_PBURST_MASK, dma_burst) | ^~~~~~~~~~ As an easy workaround, assume the error can happen, so try to handle this by failing stm32_dma_prep_dma_memcpy() before the assertion. It replicates what is done in stm32_dma_set_xfer_param() where stm32_dma_get_burst() is also used. Fixes: 1c32d6c37cc2 ("dmaengine: stm32-dma: use bitfield helpers") Fixes: a2b6103b7a8a ("dmaengine: stm32-dma: Improve memory burst management") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202311060135.Q9eMnpCL-lkp@intel.com/ Link: https://lore.kernel.org/r/20231106134832.1470305-1-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
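A minimal C sketch of the guard described above, assuming hypothetical example_* helpers in place of the driver's stm32_dma_get_burst() and register layout; it only illustrates failing early so a negative error code never reaches FIELD_PREP():

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_SCR_PBURST_MASK	GENMASK(22, 21)	/* two-bit field, illustrative */

/* Stand-in for stm32_dma_get_burst(): returns <0 for invalid input. */
static int example_get_burst(u32 maxburst)
{
	switch (maxburst) {
	case 0:
	case 1:
		return 0;
	case 4:
		return 1;
	case 8:
		return 2;
	case 16:
		return 3;
	default:
		return -EINVAL;
	}
}

static int example_prep_memcpy_ctrl(u32 maxburst, u32 *scr)
{
	int dma_burst = example_get_burst(maxburst);

	/* Fail before the value is packed into the two-bit field. */
	if (dma_burst < 0)
		return dma_burst;

	*scr |= FIELD_PREP(EXAMPLE_SCR_PBURST_MASK, dma_burst);
	return 0;
}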
2023-11-28 | dmaengine: stm32-mdma: correct desc prep when channel running | Alain Volmat | 1 | -2/+2
commit 03f25d53b145bc2f7ccc82fc04e4482ed734f524 upstream. When a descriptor is prepared while the channel is already running, the CCR register value stored in the channel structure could already have its EN bit set. This leads to a bad transfer, because at transfer start time the channel is enabled while the other registers are not yet properly set. To avoid this, mask out the CCR_EN bit when storing the ccr value into the mdma channel structure. Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver") Signed-off-by: Alain Volmat <alain.volmat@foss.st.com> Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Tested-by: Alain Volmat <alain.volmat@foss.st.com> Link: https://lore.kernel.org/r/20231009082450.452877-1-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
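A short sketch of the masking described above, with an illustrative register bit and channel structure rather than the driver's real STM32_MDMA_CCR_EN definition:

#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_CCR_EN	BIT(0)	/* channel enable bit, illustrative position */

struct example_mdma_chan {
	u32 ccr;	/* cached control register value for the next start */
};

static void example_store_ccr(struct example_mdma_chan *chan, u32 ccr)
{
	/*
	 * Never cache EN: if the channel is already running, the live CCR
	 * has EN set, and replaying it at transfer start would enable the
	 * channel before the other registers are programmed.
	 */
	chan->ccr = ccr & ~EXAMPLE_CCR_EN;
}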
2023-11-20 | dmaengine: pxa_dma: Remove an erroneous BUG_ON() in pxad_free_desc() | Christophe JAILLET | 1 | -1/+0
[ Upstream commit 83c761f568733277ce1f7eb9dc9e890649c29a8c ] If pxad_alloc_desc() fails on the first dma_pool_alloc() call, then sw_desc->nb_desc is zero. In such a case pxad_free_desc() is called and it will BUG_ON(). Remove this erroneous BUG_ON(). It is also useless, because if "sw_desc->nb_desc == 0", then, on the first iteration of the for loop, i is -1 and the loop will not be executed. (both i and sw_desc->nb_desc are 'int') Fixes: a57e16cf0333 ("dmaengine: pxa: add pxa dmaengine driver") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/c8fc5563c9593c914fde41f0f7d1489a21b45a9a.1696676782.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
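A small standalone C model of why the assertion is unnecessary: with both the counter and the loop index as int, a zero descriptor count simply skips the loop.

#include <stdio.h>

/* Simplified model of pxad_free_desc(): nb_desc == 0 makes the loop body
 * unreachable, so no BUG_ON()-style assertion is needed. */
static void example_free_desc(int nb_desc)
{
	int i;

	for (i = nb_desc - 1; i >= 0; i--)
		printf("freeing descriptor %d\n", i);
}

int main(void)
{
	example_free_desc(0);	/* prints nothing and returns safely */
	example_free_desc(3);	/* frees 2, 1, 0 */
	return 0;
}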
2023-11-20 | dmaengine: ti: edma: handle irq_of_parse_and_map() errors | Dan Carpenter | 1 | -2/+2
[ Upstream commit 14f6d317913f634920a640e9047aa2e66f5bdcb7 ] Zero is not a valid IRQ for in-kernel code and the irq_of_parse_and_map() function returns zero on error. So this check for valid IRQs should only accept values > 0. Fixes: 2b6b3b742019 ("ARM/dmaengine: edma: Merge the two drivers under drivers/dma/") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com> Link: https://lore.kernel.org/r/f15cb6a7-8449-4f79-98b6-34072f04edbc@moroto.mountain Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
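A hedged sketch of the corrected check, with illustrative names around the real irq_of_parse_and_map()/request_irq() calls; the point is that only values greater than zero are treated as valid IRQs:

#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/of_irq.h>

static int example_request_edma_irq(struct device_node *node, int index,
				    irq_handler_t handler, void *data)
{
	int irq = irq_of_parse_and_map(node, index);

	/* irq_of_parse_and_map() returns 0 on failure; 0 is not a valid IRQ. */
	if (irq <= 0)
		return -EINVAL;

	return request_irq(irq, handler, 0, "example-edma", data);
}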
2023-11-20 | dmaengine: idxd: Register dsa_bus_type before registering idxd sub-drivers | Fenghua Yu | 1 | -3/+3
[ Upstream commit 88928addeec577386e8c83b48b5bc24d28ba97fd ] idxd sub-drivers belong to bus dsa_bus_type. Thus, dsa_bus_type must be registered in dsa bus init before idxd drivers can be registered. But the order is wrong when both idxd and idxd_bus are builtin drivers. In this case, idxd driver is compiled and linked before idxd_bus driver. Since the initcall order is determined by the link order, idxd sub-drivers are registered in idxd initcall before dsa_bus_type is registered in idxd_bus initcall. idxd initcall fails: [ 21.562803] calling idxd_init_module+0x0/0x110 @ 1 [ 21.570761] Driver 'idxd' was unable to register with bus_type 'dsa' because the bus was not initialized. [ 21.586475] initcall idxd_init_module+0x0/0x110 returned -22 after 15717 usecs [ 21.597178] calling dsa_bus_init+0x0/0x20 @ 1 To fix the issue, compile and link idxd_bus driver before idxd driver to ensure the right registration order. Fixes: d9e5481fca74 ("dmaengine: dsa: move dsa_bus_type out of idxd driver to standalone") Reported-by: Michael Prinke <michael.prinke@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Lijun Pan <lijun.pan@intel.com> Tested-by: Lijun Pan <lijun.pan@intel.com> Link: https://lore.kernel.org/r/20230924162232.1409454-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-11-08 | dmaengine: ste_dma40: Fix PM disable depth imbalance in d40_probe | Zhang Shurong | 1 | -0/+1
[ Upstream commit 0618c077a8c20e8c81e367988f70f7e32bb5a717 ] The pm_runtime_enable will increase power disable depth. Thus a pairing decrement is needed on the error handling path to keep it balanced according to context. We fix it by calling pm_runtime_disable when error returns. Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/tencent_DD2D371DB5925B4B602B1E1D0A5FA88F1208@qq.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
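A sketch of the balanced probe error path; example_setup_channels() is a hypothetical placeholder for whatever step can fail after runtime PM has been enabled:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_setup_channels(struct platform_device *pdev)
{
	return 0;	/* hypothetical setup step that may fail */
}

static int example_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);

	ret = example_setup_channels(pdev);
	if (ret) {
		/* keep the runtime PM disable depth balanced on error */
		pm_runtime_disable(&pdev->dev);
		return ret;
	}

	return 0;
}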
2023-10-20 | dmaengine: mediatek: Fix deadlock caused by synchronize_irq() | Duoming Zhou | 1 | -2/+1
[ Upstream commit 01f1ae2733e2bb4de92fefcea5fda847d92aede1 ] The synchronize_irq(c->irq) call will not return until the IRQ handler mtk_uart_apdma_irq_handler() has completed. If synchronize_irq() is called while holding a spin_lock that the IRQ handler also needs, a deadlock occurs. The process is shown below:

    cpu0                              cpu1
    mtk_uart_apdma_device_pause()  |  mtk_uart_apdma_irq_handler()
      spin_lock_irqsave()          |
                                   |    spin_lock_irqsave() // waits for the held lock
      synchronize_irq()            |

This patch moves synchronize_irq(c->irq) outside the spin_lock in order to mitigate the bug. Fixes: 9135408c3ace ("dmaengine: mediatek: Add MediaTek UART APDMA support") Signed-off-by: Duoming Zhou <duoming@zju.edu.cn> Reviewed-by: Eugen Hristev <eugen.hristev@collabora.com> Link: https://lore.kernel.org/r/20230806032511.45263-1-duoming@zju.edu.cn Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
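A sketch of the reordering, with an illustrative channel structure; the essential point is that synchronize_irq() runs only after the lock the handler needs has been released:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct example_apdma_chan {
	spinlock_t lock;	/* also taken by the IRQ handler */
	unsigned int irq;
};

static void example_device_pause(struct example_apdma_chan *c)
{
	unsigned long flags;

	spin_lock_irqsave(&c->lock, flags);
	/* ... mask the channel interrupt enable bits here ... */
	spin_unlock_irqrestore(&c->lock, flags);

	/* Safe now: the handler can take c->lock and finish. */
	synchronize_irq(c->irq);
}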
2023-10-20 | dmaengine: idxd: use spin_lock_irqsave before wait_event_lock_irq | Rex Zhang | 1 | -2/+3
[ Upstream commit c0409dd3d151f661e7e57b901a81a02565df163c ] In idxd_cmd_exec(), wait_event_lock_irq() explicitly calls spin_unlock_irq()/spin_lock_irq(). If the interrupt is on before entering wait_event_lock_irq(), it will become off status after wait_event_lock_irq() is called. Later, wait_for_completion() may go to sleep but irq is disabled. The scenario is warned in might_sleep(). Fix it by using spin_lock_irqsave() instead of the primitive spin_lock() to save the irq status before entering wait_event_lock_irq() and using spin_unlock_irqrestore() instead of the primitive spin_unlock() to restore the irq status before entering wait_for_completion(). Before the change: idxd_cmd_exec() { interrupt is on spin_lock() // interrupt is on wait_event_lock_irq() spin_unlock_irq() // interrupt is enabled ... spin_lock_irq() // interrupt is disabled spin_unlock() // interrupt is still disabled wait_for_completion() // report "BUG: sleeping function // called from invalid context... // in_atomic() irqs_disabled()" } After applying spin_lock_irqsave(): idxd_cmd_exec() { interrupt is on spin_lock_irqsave() // save the on state // interrupt is disabled wait_event_lock_irq() spin_unlock_irq() // interrupt is enabled ... spin_lock_irq() // interrupt is disabled spin_unlock_irqrestore() // interrupt is restored to on wait_for_completion() // No Call trace } Fixes: f9f4082dbc56 ("dmaengine: idxd: remove interrupt disable for cmd_lock") Signed-off-by: Rex Zhang <rex.zhang@intel.com> Signed-off-by: Lijun Pan <lijun.pan@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20230916060619.3744220-1-rex.zhang@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-10-20 | dmaengine: stm32-mdma: set in_flight_bytes in case CRQA flag is set | Amelie Delaunay | 1 | -5/+9
commit 584970421725b7805db84714b857851fdf7203a9 upstream. The CRQA flag is set by hardware when the channel request becomes active and the channel is enabled. It is cleared by hardware when the channel request is completed. So when it is set, MDMA is transferring bytes. This information is useful in case of STM32 DMA and MDMA chaining, especially when the user pauses DMA before stopping it, to trigger one last MDMA transfer to get the latest bytes of the SRAM buffer into the destination buffer. The STM32 DCMI driver can then use this to know whether the last MDMA transfer of the chain is done. Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA") Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231004163531.2864160-3-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-20 | dmaengine: stm32-mdma: use Link Address Register to compute residue | Amelie Delaunay | 1 | -4/+11
commit a4b306eb83579c07b63dc65cd5bae53b7b4019d0 upstream. The current implementation relies on the curr_hwdesc index. But to keep this index up to date, the Block Transfer interrupt (BTIE) has to be enabled. If it is not, curr_hwdesc is not updated, and the residue is not reliable. Rely on the Link Address Register instead, and disable the BTIE interrupt in stm32_mdma_setup_xfer() because it is no longer needed in the _prep_slave_sg() case to keep curr_hwdesc up to date. This avoids extra interrupts and also ensures a reliable residue. These improvements are required for the STM32 DCMI camera capture use case, which needs STM32 DMA and MDMA chaining for good performance. Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA") Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231004163531.2864160-2-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-20 | dmaengine: stm32-dma: fix residue in case of MDMA chaining | Amelie Delaunay | 1 | -3/+4
commit 67e13e89742c3b21ce177f612bf9ef32caae6047 upstream. In case of MDMA chaining, DMA is configured in Double-Buffer Mode (DBM) with two periods, but if transfer has been prepared with _prep_slave_sg(), the transfer is not marked cyclic (=!chan->desc->cyclic). However, as DBM is activated for MDMA chaining, residue computation must take into account cyclic constraints. With only two periods in MDMA chaining, and no update due to Transfer Complete interrupt masked, n_sg is always 0. If DMA current memory address (depending on SxCR.CT and SxM0AR/SxM1AR) does not correspond, it means n_sg should be increased. Then, the residue of the current period is the one read from SxNDTR and should not be overwritten with the full period length. Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA") Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231004155024.2609531-2-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-20 | dmaengine: stm32-dma: fix stm32_dma_prep_slave_sg in case of MDMA chaining | Amelie Delaunay | 1 | -1/+3
commit 2df467e908ce463cff1431ca1b00f650f7a514b4 upstream. The Current Target (CT) has to be reset when starting an MDMA chaining use case, as Double Buffer mode is activated. This ensures the DMA starts processing the first memory target (pointed to by SxM0AR). Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA") Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231004155024.2609531-1-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-20 | dmaengine: stm32-mdma: abort resume if no ongoing transfer | Amelie Delaunay | 1 | -0/+4
commit 81337b9a72dc58a5fa0ae8a042e8cb59f9bdec4a upstream. chan->desc can be null, if transfer is terminated when resume is called, leading to a NULL pointer when retrieving the hwdesc. To avoid this case, check that chan->desc is not null and channel is disabled (transfer previously paused or terminated). Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver") Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231004163531.2864160-1-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
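A minimal sketch of the resume guard, using illustrative types in place of the driver's stm32_mdma structures:

#include <linux/errno.h>
#include <linux/types.h>

struct example_desc;

struct example_mdma_chan {
	struct example_desc *desc;	/* NULL once the transfer is terminated */
};

static int example_resume(struct example_mdma_chan *chan)
{
	/* Nothing to resume: avoid dereferencing a NULL descriptor. */
	if (!chan->desc)
		return -EPERM;

	/* ... reprogram the hardware from chan->desc and re-enable ... */
	return 0;
}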
2023-09-19 | dmaengine: sh: rz-dmac: Fix destination and source data size setting | Hien Huynh | 1 | -4/+7
commit c6ec8c83a29fb3aec3efa6fabbf5344498f57c7f upstream. Before setting the DDS and SDS values, we need to clear their previous values first; otherwise, we get incorrect results when we change/update the DMA bus width several times, due to the 'OR' expression. Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC") Cc: stable@kernel.org Signed-off-by: Hien Huynh <hien.huynh.px@renesas.com> Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20230706112150.198941-3-biju.das.jz@bp.renesas.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
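A sketch of the read-modify-write described above, with illustrative field positions rather than the real RZ/G2L CHCFG layout; the stale bus-width bits are cleared before the new ones are OR-ed in:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_CHCFG_DDS	GENMASK(19, 16)	/* illustrative positions */
#define EXAMPLE_CHCFG_SDS	GENMASK(15, 12)

static u32 example_set_data_sizes(u32 chcfg, u32 dds, u32 sds)
{
	/* Clear the previous values first so repeated updates cannot
	 * OR old bits into the new configuration. */
	chcfg &= ~(EXAMPLE_CHCFG_DDS | EXAMPLE_CHCFG_SDS);
	chcfg |= FIELD_PREP(EXAMPLE_CHCFG_DDS, dds) |
		 FIELD_PREP(EXAMPLE_CHCFG_SDS, sds);
	return chcfg;
}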
2023-09-13 | dmaengine: ste_dma40: Add missing IRQ check in d40_probe | ruanjinjie | 1 | -0/+4
[ Upstream commit c05ce6907b3d6e148b70f0bb5eafd61dcef1ddc1 ] Check for the return value of platform_get_irq(): if no interrupt is specified, it wouldn't make sense to call request_irq(). Fixes: 8d318a50b3d7 ("DMAENGINE: Support for ST-Ericssons DMA40 block v3") Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20230724144108.2582917-1-ruanjinjie@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-13 | dmaengine: idxd: Modify the dependence of attribute pasid_enabled | Rex Zhang | 1 | -1/+1
[ Upstream commit 50c5e6f41d5ad7c731c31135a30d0e4f0e4fea26 ] Kernel PASID and user PASID are separately enabled. User needs to know the user PASID enabling status to decide how to use IDXD device in user space. This is done via the attribute /sys/bus/dsa/devices/dsa0/pasid_enabled. It's unnecessary for user to know the kernel PASID enabling status because user won't use the kernel PASID. But instead of showing the user PASID enabling status, the attribute shows the kernel PASID enabling status. Fix the issue by showing the user PASID enabling status in the attribute. Fixes: 42a1b73852c4 ("dmaengine: idxd: Separate user and kernel pasid enabling") Signed-off-by: Rex Zhang <rex.zhang@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20230614062706.1743078-1-rex.zhang@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-13 | idmaengine: make FSL_EDMA and INTEL_IDMA64 depends on HAS_IOMEM | Baoquan He | 1 | -0/+2
[ Upstream commit b1e213a9e31c20206f111ec664afcf31cbfe0dbb ] On s390 systems (aka mainframes), it has classic channel devices for networking and permanent storage that are currently even more common than PCI devices. Hence it could have a fully functional s390 kernel with CONFIG_PCI=n, then the relevant iomem mapping functions [including ioremap(), devm_ioremap(), etc.] are not available. Here let FSL_EDMA and INTEL_IDMA64 depend on HAS_IOMEM so that it won't be built to cause below compiling error if PCI is unset. -------- ERROR: modpost: "devm_platform_ioremap_resource" [drivers/dma/fsl-edma.ko] undefined! ERROR: modpost: "devm_platform_ioremap_resource" [drivers/dma/idma64.ko] undefined! -------- Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202306211329.ticOJCSv-lkp@intel.com/ Signed-off-by: Baoquan He <bhe@redhat.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Link: https://lore.kernel.org/r/20230707135852.24292-2-bhe@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-16 | dmaengine: owl-dma: Modify mismatched function name | Zhang Jianhua | 1 | -1/+1
commit 74d7221c1f9c9f3a8c316a3557ca7dca8b99d14c upstream. No functional modification involved. drivers/dma/owl-dma.c:208: warning: expecting prototype for struct owl_dma_pchan. Prototype was for struct owl_dma_vchan instead HDRTEST usr/include/sound/asequencer.h Fixes: 47e20577c24d ("dmaengine: Add Actions Semi Owl family S900 DMA driver") Signed-off-by: Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20230722153244.2086949-1-chris.zjh@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-16 | dmaengine: mcf-edma: Fix a potential un-allocated memory access | Christophe JAILLET | 1 | -6/+7
commit 0a46781c89dece85386885a407244ca26e5c1c44 upstream. When 'mcf_edma' is allocated, some space is allocated for a flexible array at the end of the struct: 'pdata->dma_channels' items of 'chans' are allocated, and this count is then stored in 'mcf_edma->n_chans'. A few lines later, if 'mcf_edma->n_chans' is 0, a default value of 64 is set. This ends up with no space allocated by devm_kzalloc() because chans was 0, but 64 items being read and/or written in unallocated memory. Change the logic to set the default value before allocating the memory. Fixes: e7a3ff92eaf1 ("dmaengine: fsl-edma: add ColdFire mcf5441x edma support") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/f55d914407c900828f6fad3ea5fa791a5f17b9a4.1685172449.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-16 | dmaengine: pl330: Return DMA_PAUSED when transaction is paused | Ilpo Järvinen | 1 | -2/+16
commit 8cda3ececf07d374774f6a13e5a94bc2dc04c26c upstream. pl330_pause() does not set anything to indicate paused condition which causes pl330_tx_status() to return DMA_IN_PROGRESS. This breaks 8250 DMA flush after the fix in commit 57e9af7831dc ("serial: 8250_dma: Fix DMA Rx rearm race"). The function comment for pl330_pause() claims pause is supported but resume is not which is enough for 8250 DMA flush to work as long as DMA status reports DMA_PAUSED when appropriate. Add PAUSED state for descriptor and mark BUSY descriptors with PAUSED in pl330_pause(). Return DMA_PAUSED from pl330_tx_status() when the descriptor is PAUSED. Reported-by: Richard Tresidder <rtresidd@electromag.com.au> Tested-by: Richard Tresidder <rtresidd@electromag.com.au> Fixes: 88987d2c7534 ("dmaengine: pl330: add DMA_PAUSE feature") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/linux-serial/f8a86ecd-64b1-573f-c2fa-59f541083f1a@electromag.com.au/ Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20230526105434.14959-1-ilpo.jarvinen@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
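A sketch of the status reporting idea, with an illustrative descriptor state enum rather than pl330's internal one: descriptors marked paused by device_pause() make tx_status report DMA_PAUSED instead of DMA_IN_PROGRESS.

#include <linux/dmaengine.h>

enum example_desc_status { EX_FREE, EX_BUSY, EX_PAUSED, EX_DONE };

struct example_desc {
	enum example_desc_status status;
};

static enum dma_status example_tx_status(const struct example_desc *desc,
					 enum dma_status base_status)
{
	if (desc && desc->status == EX_PAUSED)
		return DMA_PAUSED;

	return base_status;	/* e.g. DMA_IN_PROGRESS with the residue filled in */
}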
2023-06-09 | dmaengine: pl330: rename _start to prevent build error | Randy Dunlap | 1 | -4/+4
[ Upstream commit a1a5f2c887252dec161c1e12e04303ca9ba56fa9 ] "_start" is used in several arches and proably should be reserved for ARCH usage. Using it in a driver for a private symbol can cause a build error when it conflicts with ARCH usage of the same symbol. Therefore rename pl330's "_start" to "pl330_start_thread" so that there is no conflict and no build error. drivers/dma/pl330.c:1053:13: error: '_start' redeclared as different kind of symbol 1053 | static bool _start(struct pl330_thread *thrd) | ^~~~~~ In file included from ../include/linux/interrupt.h:21, from ../drivers/dma/pl330.c:18: arch/riscv/include/asm/sections.h:11:13: note: previous declaration of '_start' with type 'char[]' 11 | extern char _start[]; | ^~~~~~ Fixes: b7d861d93945 ("DMA: PL330: Merge PL330 driver into drivers/dma/") Fixes: ae43b3289186 ("ARM: 8202/1: dmaengine: pl330: Add runtime Power Management support v12") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Jaswinder Singh <jassisinghbrar@gmail.com> Cc: Boojin Kim <boojin.kim@samsung.com> Cc: Krzysztof Kozlowski <krzk@kernel.org> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Cc: linux-riscv@lists.infradead.org Link: https://lore.kernel.org/r/20230524045310.27923-1-rdunlap@infradead.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-06-09 | dmaengine: at_xdmac: fix potential Oops in at_xdmac_prep_interleaved() | Dan Carpenter | 1 | -2/+5
[ Upstream commit 4d43acb145c363626d76f49febb4240c488cd1cf ] There are two places where an at_xdmac_interleaved_queue_desc() failure could lead to a NULL dereference, where "first" is NULL and we call list_add_tail(&first->desc_node, ...). In the first caller, the return value is not checked, so add a check for that. In the second caller, the return value is checked, but a failure on the first iteration through the loop still leads to a NULL pointer dereference. Fixes: 4e5385784e69 ("dmaengine: at_xdmac: handle numf > 1") Fixes: 62b5cb757f1d ("dmaengine: at_xdmac: fix memory leak in interleaved mode") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Tudor Ambarus <tudor.ambarus@linaro.org> Link: https://lore.kernel.org/r/21282b66-9860-410a-83df-39c17fcf2f1b@kili.mountain Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
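A condensed sketch of the added checks, assuming a hypothetical queue_desc() callback in place of at_xdmac_interleaved_queue_desc(); both call sites bail out instead of chaining onto a NULL "first":

#include <linux/list.h>

struct example_desc {
	struct list_head desc_node;	/* link into the first desc's list */
	struct list_head descs_list;	/* list head, used on the first desc */
};

static struct example_desc *
example_prep_interleaved(struct example_desc *(*queue_desc)(void))
{
	struct example_desc *first, *desc;

	first = queue_desc();
	if (!first)			/* previously unchecked */
		return NULL;
	INIT_LIST_HEAD(&first->descs_list);

	desc = queue_desc();
	if (!desc)			/* the real driver also releases "first" here */
		return NULL;

	list_add_tail(&desc->desc_node, &first->descs_list);
	return first;
}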
2023-05-11 | dmaengine: at_xdmac: do not enable all cyclic channels | Claudiu Beznea | 1 | -1/+4
[ Upstream commit f8435befd81dd85b7b610598551fadf675849bc1 ] Do not global enable all the cyclic channels in at_xdmac_resume(). Instead save the global status in at_xdmac_suspend() and re-enable the cyclic channel only if it was active before suspend. Fixes: e1f7c9eee707 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver") Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Link: https://lore.kernel.org/r/20230214151827.1050280-6-claudiu.beznea@microchip.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11 | dmaengine: dw-edma: Fix to enable to issue dma request on DMA processing | Shunsuke Mie | 1 | -2/+5
[ Upstream commit 970b17dfe264a9085ba4e593730ecfd496b950ab ] The issue_pending request is ignored while driver is processing a DMA request. Fix to issue the pending requests on any dma channel status. Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver") Signed-off-by: Shunsuke Mie <mie@igel.co.jp> Link: https://lore.kernel.org/r/20230411101758.438472-2-mie@igel.co.jp Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11 | dmaengine: dw-edma: Fix to change for continuous transfer | Shunsuke Mie | 1 | -9/+11
[ Upstream commit a251994a441ee0a69ba7062c8cd2d08ead3db379 ] The dw-edma driver stops after processing a DMA request even if a request remains in the issued queue, which is not the expected behavior. The DMA engine API requires continuous processing. Add a trigger so that, after one request finishes processing, the next one is started if any requests remain. Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver") Signed-off-by: Shunsuke Mie <mie@igel.co.jp> Link: https://lore.kernel.org/r/20230411101758.438472-1-mie@igel.co.jp Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11 | dma: gpi: remove spurious unlock in gpi_ch_init | Dmitry Baryshkov | 1 | -1/+0
[ Upstream commit 91d6a468e335571f1e67e046050dea9af5fa4ebe ] gpi_ch_init() doesn't lock the ctrl_lock mutex, so there is no need to unlock it too. Instead the mutex is handled by the function gpi_alloc_chan_resources(), which properly locks and unlocks the mutex. ===================================== WARNING: bad unlock balance detected! 6.3.0-rc5-00253-g99792582ded1-dirty #15 Not tainted ------------------------------------- kworker/u16:0/9 is trying to release lock (&gpii->ctrl_lock) at: [<ffffb99d04e1284c>] gpi_alloc_chan_resources+0x108/0x5bc but there are no more locks to release! other info that might help us debug this: 6 locks held by kworker/u16:0/9: #0: ffff575740010938 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x220/0x594 #1: ffff80000809bdd0 (deferred_probe_work){+.+.}-{0:0}, at: process_one_work+0x220/0x594 #2: ffff575740f2a0f8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x38/0x188 #3: ffff57574b5570f8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x38/0x188 #4: ffffb99d06a2f180 (of_dma_lock){+.+.}-{3:3}, at: of_dma_request_slave_channel+0x138/0x280 #5: ffffb99d06a2ee20 (dma_list_mutex){+.+.}-{3:3}, at: dma_get_slave_channel+0x28/0x10c stack backtrace: CPU: 7 PID: 9 Comm: kworker/u16:0 Not tainted 6.3.0-rc5-00253-g99792582ded1-dirty #15 Hardware name: Google Pixel 3 (DT) Workqueue: events_unbound deferred_probe_work_func Call trace: dump_backtrace+0xa0/0xfc show_stack+0x18/0x24 dump_stack_lvl+0x60/0xac dump_stack+0x18/0x24 print_unlock_imbalance_bug+0x130/0x148 lock_release+0x270/0x300 __mutex_unlock_slowpath+0x48/0x2cc mutex_unlock+0x20/0x2c gpi_alloc_chan_resources+0x108/0x5bc dma_chan_get+0x84/0x188 dma_get_slave_channel+0x5c/0x10c gpi_of_dma_xlate+0x110/0x1a0 of_dma_request_slave_channel+0x174/0x280 dma_request_chan+0x3c/0x2d4 geni_i2c_probe+0x544/0x63c platform_probe+0x68/0xc4 really_probe+0x148/0x2ac __driver_probe_device+0x78/0xe0 driver_probe_device+0x3c/0x160 __device_attach_driver+0xb8/0x138 bus_for_each_drv+0x84/0xe0 __device_attach+0x9c/0x188 device_initial_probe+0x14/0x20 bus_probe_device+0xac/0xb0 device_add+0x60c/0x7d8 of_device_add+0x44/0x60 of_platform_device_create_pdata+0x90/0x124 of_platform_bus_create+0x15c/0x3c8 of_platform_populate+0x58/0xf8 devm_of_platform_populate+0x58/0xbc geni_se_probe+0xf0/0x164 platform_probe+0x68/0xc4 really_probe+0x148/0x2ac __driver_probe_device+0x78/0xe0 driver_probe_device+0x3c/0x160 __device_attach_driver+0xb8/0x138 bus_for_each_drv+0x84/0xe0 __device_attach+0x9c/0x188 device_initial_probe+0x14/0x20 bus_probe_device+0xac/0xb0 deferred_probe_work_func+0x8c/0xc8 process_one_work+0x2bc/0x594 worker_thread+0x228/0x438 kthread+0x108/0x10c ret_from_fork+0x10/0x20 Fixes: 5d0c3533a19f ("dmaengine: qcom: Add GPI dma driver") Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://lore.kernel.org/r/20230409233355.453741-1-dmitry.baryshkov@linaro.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11 | dmaengine: mv_xor_v2: Fix an error code. | Christophe JAILLET | 1 | -1/+1
[ Upstream commit 827026ae2e56ec05ef1155661079badbbfc0b038 ] If the probe is deferred, -EPROBE_DEFER should be returned, not +EPROBE_DEFER. Fixes: 3cd2c313f1d6 ("dmaengine: mv_xor_v2: Fix clock resource by adding a register clock") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/201170dff832a3c496d125772e10070cd834ebf2.1679814350.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20 | dmaengine: apple-admac: Fix 'current_tx' not getting freed | Martin Povišer | 1 | -1/+4
[ Upstream commit d9503be5a100c553731c0e8a82c7b4201e8a970c ] In terminate_all we should queue up all submitted descriptors to be freed. We do that for the content of the 'issued' and 'submitted' lists, but the 'current_tx' descriptor falls through the cracks as it's removed from the 'issued' list once it gets assigned to be the current descriptor. Explicitly queue up freeing of the 'current_tx' descriptor to address a memory leak that is otherwise present. Fixes: b127315d9a78 ("dmaengine: apple-admac: Add Apple ADMAC driver") Signed-off-by: Martin Povišer <povik+lin@cutebit.org> Link: https://lore.kernel.org/r/20230224152222.26732-2-povik+lin@cutebit.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20 | dmaengine: apple-admac: Set src_addr_widths capability | Martin Povišer | 1 | -0/+3
[ Upstream commit 6e96adcaa7a29827ac8ee8df290a44957a4823ec ] Add missing setting of 'src_addr_widths', which is the same as for the other direction. Fixes: b127315d9a78 ("dmaengine: apple-admac: Add Apple ADMAC driver") Signed-off-by: Martin Povišer <povik+lin@cutebit.org> Link: https://lore.kernel.org/r/20230224152222.26732-3-povik+lin@cutebit.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20 | dmaengine: apple-admac: Handle 'global' interrupt flags | Martin Povišer | 1 | -2/+10
[ Upstream commit a288fd158fbf85c06a9ac01cecabf97ac5d962e7 ] In addition to TX channel and RX channel interrupt flags there's another class of 'global' interrupt flags with unknown semantics. Those weren't being handled up to now, and they are the suspected cause of stuck IRQ states that have been sporadically occurring. Check the global flags and clear them if raised. Fixes: b127315d9a78 ("dmaengine: apple-admac: Add Apple ADMAC driver") Signed-off-by: Martin Povišer <povik+lin@cutebit.org> Link: https://lore.kernel.org/r/20230224152222.26732-1-povik+lin@cutebit.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 | dmaengine: ptdma: check for null desc before calling pt_cmd_callback | Eric Pilmore | 1 | -1/+1
[ Upstream commit 928469986171a6f763b34b039427f5667ba3fd50 ] Resolves a panic that can occur on AMD systems, typically during host shutdown, after the PTDMA driver had been exercised. The issue was the pt_issue_pending() function is mistakenly assuming that there will be at least one descriptor in the Submitted queue when the function is called. However, it is possible that both the Submitted and Issued queues could be empty, which could result in pt_cmd_callback() being mistakenly called with a NULL pointer. Ref: Bugzilla Bug 216856. Fixes: 6fa7e0e836e2 ("dmaengine: ptdma: fix concurrency issue with multiple dma transfer") Signed-off-by: Eric Pilmore <epilmore@gigaio.com> Link: https://lore.kernel.org/r/20230210075142.58253-1-epilmore@gigaio.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 | dmaengine: dw-axi-dmac: Do not dereference NULL structure | Kees Cook | 1 | -2/+0
[ Upstream commit be4d46edeee4b2459d2f53f37ada88bbfb634b6c ] If "vdesc" is NULL, it cannot be used with vd_to_axi_desc(). Leave "bytes" unchanged at 0. Seen under GCC 13 with -Warray-bounds: ../drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c: In function 'dma_chan_tx_status': ../drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c:329:46: warning: array subscript 0 is outside array bounds of 'struct virt_dma_desc[46116860184273879]' [-Warray-bounds=] 329 | bytes = vd_to_axi_desc(vdesc)->length; | ^~ Fixes: 8e55444da65c ("dmaengine: dw-axi-dmac: Support burst residue granularity") Cc: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20230127223623.never.507-kees@kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
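A tiny sketch of the guarded residue computation with an illustrative descriptor type: the byte count is only taken from the virtual descriptor when one was actually found.

#include <linux/types.h>

struct example_vdesc {
	size_t length;
};

static size_t example_residue_bytes(const struct example_vdesc *vdesc)
{
	size_t bytes = 0;

	/* Leave bytes at 0 when no descriptor is pending. */
	if (vdesc)
		bytes = vdesc->length;

	return bytes;
}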
2023-03-10 | dmaengine: sf-pdma: pdma_desc memory leak fix | Shravan Chippa | 2 | -3/+1
[ Upstream commit b02e07015a5ac7bbc029da931ae17914b8ae0339 ] Commit b2cc5c465c2c ("dmaengine: sf-pdma: Add multithread support for a DMA channel") changed sf_pdma_prep_dma_memcpy() to unconditionally allocate a new sf_pdma_desc each time it is called. The driver previously recycled descs, by checking the in_use flag, only allocating additional descs if the existing one was in use. This logic was removed in commit b2cc5c465c2c ("dmaengine: sf-pdma: Add multithread support for a DMA channel"), but sf_pdma_free_desc() was not changed to handle the new behaviour. As a result, each time sf_pdma_prep_dma_memcpy() is called, the previous descriptor is leaked, over time leading to memory starvation: unreferenced object 0xffffffe008447300 (size 192): comm "irq/39-mchp_dsc", pid 343, jiffies 4294906910 (age 981.200s) hex dump (first 32 bytes): 00 00 00 ff 00 00 00 00 b8 c1 00 00 00 00 00 00 ................ 00 00 70 08 10 00 00 00 00 00 00 c0 00 00 00 00 ..p............. backtrace: [<00000000064a04f4>] kmemleak_alloc+0x1e/0x28 [<00000000018927a7>] kmem_cache_alloc+0x11e/0x178 [<000000002aea8d16>] sf_pdma_prep_dma_memcpy+0x40/0x112 Add the missing kfree() to sf_pdma_free_desc(), and remove the redundant in_use flag. Fixes: b2cc5c465c2c ("dmaengine: sf-pdma: Add multithread support for a DMA channel") Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20230120100623.3530634-1-shravan.chippa@microchip.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 | dmaengine: dw-edma: Fix readq_ch() return value truncation | Serge Semin | 1 | -1/+1
[ Upstream commit 5fdca4a995bcd4cf61bda40af154a730589dc524 ] Previously, readq_ch() did a 64-bit readq(), but truncated the result by storing it in the u32 "value". Change "value" to u64 to avoid the truncation. Note: the method is currently unused, so the bug hasn't caused any problem so far. Fixes: 04e0a39fc10f ("dmaengine: dw-edma: Add writeq() and readq() for 64 bits architectures") Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
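A minimal sketch of the type fix: the 64-bit register read is kept in a u64 local instead of a u32, so the upper half is not silently dropped. The register pointer stands in for the channel register block.

#include <linux/io.h>	/* 32-bit builds need an io-64-nonatomic header for readq() */
#include <linux/types.h>

static u64 example_readq_ch(const void __iomem *reg)
{
	u64 value;	/* was u32, truncating the readq() result */

	value = readq(reg);
	return value;
}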
2023-03-10 | dmaengine: dw-edma: Fix missing src/dst address of interleaved xfers | Serge Semin | 1 | -0/+4
[ Upstream commit 13b6299cf66165a442089fa895a7f70250703584 ] Interleaved DMA transfer support was added by 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support"), but depending on the selected channel, either source or destination address are left uninitialized which was obviously wrong. Initialize the destination address of the eDMA burst descriptors for DEV_TO_MEM interleaved operations and the source address for MEM_TO_DEV operations. Link: https://lore.kernel.org/r/20230113171409.30470-5-Sergey.Semin@baikalelectronics.ru Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support") Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 | dmaengine: HISI_DMA should depend on ARCH_HISI | Geert Uytterhoeven | 1 | -1/+1
[ Upstream commit dcca9d045c0852584ad092123c7f6e6526a633b1 ] The HiSilicon DMA Engine is only present on HiSilicon SoCs. Hence add a dependency on ARCH_HISI, to prevent asking the user about this driver when configuring a kernel without HiSilicon SoC support. Fixes: e9f08b65250d73ab ("dmaengine: hisilicon: Add Kunpeng DMA engine support") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/363a1816d36cd3cf604d88ec90f97c75f604de64.1669044190.git.geert+renesas@glider.be Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 | dmaengine: idxd: Set traffic class values in GRPCFG on DSA 2.0 | Fenghua Yu | 3 | -4/+4
[ Upstream commit 9735bde36487da43d3c3fc910df49639f72decbf ] On DSA/IAX 1.0, TC-A and TC-B in GRPCFG are set as 1 to have best performance and cannot be changed through sysfs knobs unless override option is given. The same values should be set on DSA 2.0 as well. Fixes: ea7c8f598c32 ("dmaengine: idxd: restore traffic class defaults after wq reset") Fixes: ade8a86b512c ("dmaengine: idxd: Set defaults for GRPCFG traffic class") Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20221209172141.562648-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-06 | dmaengine: imx-sdma: Fix a possible memory leak in sdma_transfer_init | Hui Wang | 1 | -1/+3
[ Upstream commit 1417f59ac0b02130ee56c0c50794b9b257be3d17 ] If the function sdma_load_context() fails, the sdma_desc is freed, but freeing the allocated desc->bd is forgotten. We have already hit the sdma_load_context() failure case, with the log below: [ 450.699064] imx-sdma 30bd0000.dma-controller: Timeout waiting for CH0 ready ... In this case, desc->bd will not be freed without this change. Signed-off-by: Hui Wang <hui.wang@canonical.com> Reviewed-by: Sascha Hauer <s.hauer@pengutronix.de> Link: https://lore.kernel.org/r/20221130090800.102035-1-hui.wang@canonical.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
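A sketch of the corrected cleanup order with hypothetical names and a stubbed-out context load; the point is that desc->bd is released on the failure path too, not just the descriptor:

#include <linux/errno.h>
#include <linux/slab.h>

struct example_sdma_desc {
	void *bd;	/* buffer descriptors allocated per transfer */
};

static int example_load_context(void)
{
	return -ETIMEDOUT;	/* stand-in for the firmware timing out */
}

static struct example_sdma_desc *example_transfer_init(gfp_t gfp)
{
	struct example_sdma_desc *desc;

	desc = kzalloc(sizeof(*desc), gfp);
	if (!desc)
		return NULL;

	desc->bd = kzalloc(64, gfp);
	if (!desc->bd)
		goto err_desc;

	if (example_load_context()) {
		kfree(desc->bd);	/* the previously missing free */
		goto err_desc;
	}

	return desc;

err_desc:
	kfree(desc);
	return NULL;
}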
2023-02-01 | ptdma: pt_core_execute_cmd() should use spinlock | Eric Pilmore | 2 | -4/+5
[ Upstream commit 95e5fda3b5f9ed8239b145da3fa01e641cf5d53c ] The interrupt handler (pt_core_irq_handler()) of the ptdma driver can be called from interrupt context. The code flow in this function can lead down to pt_core_execute_cmd() which will attempt to grab a mutex, which is not appropriate in interrupt context and ultimately leads to a kernel panic. The fix here changes this mutex to a spinlock, which has been verified to resolve the issue. Fixes: fa5d823b16a9 ("dmaengine: ptdma: Initial driver for the AMD PTDMA") Signed-off-by: Eric Pilmore <epilmore@gigaio.com> Link: https://lore.kernel.org/r/20230119033907.35071-1-epilmore@gigaio.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
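A sketch of the locking change with illustrative names: a spinlock, usable from the interrupt path, protects command submission instead of a mutex.

#include <linux/spinlock.h>

struct example_cmd_queue {
	spinlock_t q_lock;	/* was a struct mutex */
	unsigned int tail;
};

static void example_core_execute_cmd(struct example_cmd_queue *q)
{
	unsigned long flags;

	/* Safe even when reached from the IRQ handler's context. */
	spin_lock_irqsave(&q->q_lock, flags);
	q->tail++;		/* stand-in for writing the descriptor */
	spin_unlock_irqrestore(&q->q_lock, flags);
}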
2023-02-01 | dmaengine: tegra: Fix memory leak in terminate_all() | Akhil R | 1 | -0/+1
[ Upstream commit a7a7ee6f5a019ad72852c001abbce50d35e992f2 ] Terminate the vdesc when terminating an ongoing transfer. This ensures that the vdesc is present in the desc_terminated list; the descriptor will be freed later in desc_free_list(). This fixes the memory leaks which can happen when terminating an ongoing transfer. Fixes: ee17028009d4 ("dmaengine: tegra: Add tegra gpcdma driver") Signed-off-by: Akhil R <akhilrajeev@nvidia.com> Link: https://lore.kernel.org/r/20230118115801.15210-1-akhilrajeev@nvidia.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-01 | dmaengine: xilinx_dma: call of_node_put() when breaking out of for_each_child_of_node() | Liu Shixin | 1 | -1/+3
[ Upstream commit 596b53ccc36a546ab28e8897315c5b4d1d5a0200 ] Since for_each_child_of_node() will increase the refcount of node, we need to call of_node_put() manually when breaking out of the iteration. Fixes: 9cd4360de609 ("dma: Add Xilinx AXI Video Direct Memory Access Engine driver support") Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Peter Korsgaard <peter@korsgaard.com> Link: https://lore.kernel.org/r/20221122021612.1908866-1-liushixin2@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-01 | dmaengine: Fix double increment of client_count in dma_chan_get() | Koba Ko | 1 | -3/+4
[ Upstream commit f3dc1b3b4750851a94212dba249703dd0e50bb20 ] The first time dma_chan_get() is called for a channel the channel client_count is incorrectly incremented twice for public channels, first in balance_ref_count(), and again prior to returning. This results in an incorrect client count which will lead to the channel resources not being freed when they should be. A simple test of repeated module load and unload of async_tx on a Dell Power Edge R7425 also shows this resulting in a kref underflow warning. [ 124.329662] async_tx: api initialized (async) [ 129.000627] async_tx: api initialized (async) [ 130.047839] ------------[ cut here ]------------ [ 130.052472] refcount_t: underflow; use-after-free. [ 130.057279] WARNING: CPU: 3 PID: 19364 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x110 [ 130.065811] Modules linked in: async_tx(-) rfkill intel_rapl_msr intel_rapl_common amd64_edac edac_mce_amd ipmi_ssif kvm_amd dcdbas kvm mgag200 drm_shmem_helper acpi_ipmi irqbypass drm_kms_helper ipmi_si syscopyarea sysfillrect rapl pcspkr ipmi_devintf sysimgblt fb_sys_fops k10temp i2c_piix4 ipmi_msghandler acpi_power_meter acpi_cpufreq vfat fat drm fuse xfs libcrc32c sd_mod t10_pi sg ahci crct10dif_pclmul libahci crc32_pclmul crc32c_intel ghash_clmulni_intel igb megaraid_sas i40e libata i2c_algo_bit ccp sp5100_tco dca dm_mirror dm_region_hash dm_log dm_mod [last unloaded: async_tx] [ 130.117361] CPU: 3 PID: 19364 Comm: modprobe Kdump: loaded Not tainted 5.14.0-185.el9.x86_64 #1 [ 130.126091] Hardware name: Dell Inc. PowerEdge R7425/02MJ3T, BIOS 1.18.0 01/17/2022 [ 130.133806] RIP: 0010:refcount_warn_saturate+0xba/0x110 [ 130.139041] Code: 01 01 e8 6d bd 55 00 0f 0b e9 72 9d 8a 00 80 3d 26 18 9c 01 00 75 85 48 c7 c7 f8 a3 03 9d c6 05 16 18 9c 01 01 e8 4a bd 55 00 <0f> 0b e9 4f 9d 8a 00 80 3d 01 18 9c 01 00 0f 85 5e ff ff ff 48 c7 [ 130.157807] RSP: 0018:ffffbf98898afe68 EFLAGS: 00010286 [ 130.163036] RAX: 0000000000000000 RBX: ffff9da06028e598 RCX: 0000000000000000 [ 130.170172] RDX: ffff9daf9de26480 RSI: ffff9daf9de198a0 RDI: ffff9daf9de198a0 [ 130.177316] RBP: ffff9da7cddf3970 R08: 0000000000000000 R09: 00000000ffff7fff [ 130.184459] R10: ffffbf98898afd00 R11: ffffffff9d9e8c28 R12: ffff9da7cddf1970 [ 130.191596] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 [ 130.198739] FS: 00007f646435c740(0000) GS:ffff9daf9de00000(0000) knlGS:0000000000000000 [ 130.206832] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 130.212586] CR2: 00007f6463b214f0 CR3: 00000008ab98c000 CR4: 00000000003506e0 [ 130.219729] Call Trace: [ 130.222192] <TASK> [ 130.224305] dma_chan_put+0x10d/0x110 [ 130.227988] dmaengine_put+0x7a/0xa0 [ 130.231575] __do_sys_delete_module.constprop.0+0x178/0x280 [ 130.237157] ? syscall_trace_enter.constprop.0+0x145/0x1d0 [ 130.242652] do_syscall_64+0x5c/0x90 [ 130.246240] ? 
exc_page_fault+0x62/0x150 [ 130.250178] entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 130.255243] RIP: 0033:0x7f6463a3f5ab [ 130.258830] Code: 73 01 c3 48 8b 0d 75 a8 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 45 a8 1b 00 f7 d8 64 89 01 48 [ 130.277591] RSP: 002b:00007fff22f972c8 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0 [ 130.285164] RAX: ffffffffffffffda RBX: 000055b6786edd40 RCX: 00007f6463a3f5ab [ 130.292303] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 000055b6786edda8 [ 130.299443] RBP: 000055b6786edd40 R08: 0000000000000000 R09: 0000000000000000 [ 130.306584] R10: 00007f6463b9eac0 R11: 0000000000000206 R12: 000055b6786edda8 [ 130.313731] R13: 0000000000000000 R14: 000055b6786edda8 R15: 00007fff22f995f8 [ 130.320875] </TASK> [ 130.323081] ---[ end trace eff7156d56b5cf25 ]--- cat /sys/class/dma/dma0chan*/in_use would get the wrong result. 2 2 2 Fixes: d2f4f99db3e9 ("dmaengine: Rework dma_chan_get") Signed-off-by: Koba Ko <koba.ko@canonical.com> Reviewed-by: Jie Hai <haijie1@huawei.com> Test-by: Jie Hai <haijie1@huawei.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Tested-by: Joel Savitz <jsavitz@redhat.com> Link: https://lore.kernel.org/r/20221201030050.978595-1-koba.ko@canonical.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-01 | dmaengine: ti: k3-udma: Do conditional decrement of UDMA_CHAN_RT_PEER_BCNT_REG | Jayesh Choudhary | 1 | -2/+3
[ Upstream commit efab25894a41a920d9581183741e7fadba00719c ] PSIL_EP_NATIVE endpoints may not have PEER registers for BCNT and thus udma_decrement_byte_counters() should not try to decrement these counters. This fixes the issue of crypto IPERF testing where the client side (EVM) hangs without transfer of packets to the server side, seen since this function was added. Fixes: 7c94dcfa8fcf ("dmaengine: ti: k3-udma: Reset UDMA_CHAN_RT byte counters to prevent overflow") Signed-off-by: Jayesh Choudhary <j-choudhary@ti.com> Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com> Link: https://lore.kernel.org/r/20221128085005.489964-1-j-choudhary@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-01 | dmaengine: qcom: gpi: Set link_rx bit on GO TRE for rx operation | Vijaya Krishna Nivarthi | 1 | -0/+1
[ Upstream commit 25e8ac233d24051e2c4ff64c34f60609b0988568 ] Rx operation on SPI GSI DMA is currently not working. As per GSI spec, link_rx bit is to be set on GO TRE on tx channel whenever there is going to be a DMA TRE on rx channel. This is currently set for duplex operation only. Set the bit for rx operation as well. This is part of changes required to bring up Rx. Fixes: 94b8f0e58fa1 ("dmaengine: qcom: gpi: set chain and link flag for duplex") Signed-off-by: Vijaya Krishna Nivarthi <quic_vnivarth@quicinc.com> Reviewed-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/1671212293-14767-1-git-send-email-quic_vnivarth@quicinc.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-01-24 | dmaengine: idxd: Do not call DMX TX callbacks during workqueue disable | Reinette Chatre | 1 | -0/+11
commit 6744a030d81e456883bfbb627ac1f30465c1a989 upstream. On driver unload any pending descriptors are flushed and pending DMA descriptors are explicitly completed: idxd_dmaengine_drv_remove() -> drv_disable_wq() -> idxd_wq_free_irq() -> idxd_flush_pending_descs() -> idxd_dma_complete_txd() With this done during driver unload any remaining descriptor is likely stuck and can be dropped. Even so, the descriptor may still have a callback set that could no longer be accessible. An example of such a problem is when the dmatest fails and the dmatest module is unloaded. The failure of dmatest leaves descriptors with dma_async_tx_descriptor::callback pointing to code that no longer exist. This causes a page fault as below at the time the IDXD driver is unloaded when it attempts to run the callback: BUG: unable to handle page fault for address: ffffffffc0665190 #PF: supervisor instruction fetch in kernel mode #PF: error_code(0x0010) - not-present page Fix this by clearing the callback pointers on the transmit descriptors only when workqueue is disabled. Fixes: 403a2e236538 ("dmaengine: idxd: change MSIX allocation based on per wq activation") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-01-24 | dmaengine: idxd: Prevent use after free on completion memory | Reinette Chatre | 1 | -1/+1
commit 1beeec45f9ac31eba52478379f70a5fa9c2ad005 upstream. On driver unload any pending descriptors are flushed at the time the interrupt is freed: idxd_dmaengine_drv_remove() -> drv_disable_wq() -> idxd_wq_free_irq() -> idxd_flush_pending_descs(). If there are any descriptors present that need to be flushed this flow triggers a "not present" page fault as below: BUG: unable to handle page fault for address: ff391c97c70c9040 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page The address that triggers the fault is the address of the descriptor that was freed moments earlier via: drv_disable_wq()->idxd_wq_free_resources() Fix the use after free by freeing the descriptors after any possible usage. This is done after idxd_wq_reset() to ensure that the memory remains accessible during possible completion writes by the device. Fixes: 63c14ae6c161 ("dmaengine: idxd: refactor wq driver enable/disable operations") Suggested-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/6c4657d9cff0a0a00501a7b928297ac966e9ec9d.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-01-24 | dmaengine: idxd: Let probe fail when workqueue cannot be enabled | Reinette Chatre | 1 | -2/+1
commit b51b75f0604f17c0f6f3b6f68f1a521a5cc6b04f upstream. The workqueue is enabled when the appropriate driver is loaded and disabled when the driver is removed. When the driver is removed it assumes that the workqueue was enabled successfully and proceeds to free allocations made during workqueue enabling. Failure during workqueue enabling does not prevent the driver from being loaded. This is because the error path within drv_enable_wq() returns success unless a second failure is encountered during the error path. By returning success it is possible to load the driver even if the workqueue cannot be enabled and allocations that do not exist are attempted to be freed during driver remove. Some examples of problematic flows: (a) idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq(): In above flow, if idxd_wq_request_irq() fails then idxd_wq_unmap_portal() is called on error exit path, but drv_enable_wq() returns 0 because idxd_wq_disable() succeeds. The driver is thus loaded successfully. idxd_dmaengine_drv_remove()->drv_disable_wq()->idxd_wq_unmap_portal() Above flow on driver unload triggers the WARN in devm_iounmap() because the device resource has already been removed during error path of drv_enable_wq(). (b) idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq(): In above flow, if idxd_wq_request_irq() fails then idxd_wq_init_percpu_ref() is never called to initialize the percpu counter, yet the driver loads successfully because drv_enable_wq() returns 0. idxd_dmaengine_drv_remove()->__idxd_wq_quiesce()->percpu_ref_kill(): Above flow on driver unload triggers a BUG when attempting to drop the initial ref of the uninitialized percpu ref: BUG: kernel NULL pointer dereference, address: 0000000000000010 Fix the drv_enable_wq() error path by returning the original error that indicates failure of workqueue enabling. This ensures that the probe fails when an error is encountered and the driver remove paths are only attempted when the workqueue was enabled successfully. Fixes: 1f2bb40337f0 ("dmaengine: idxd: move wq_enable() to device.c") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/e8d8116e5efa0fd14fadc5adae6ffd319f0e5ff1.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-01-24 | dmaengine: tegra210-adma: fix global intr clear | Mohan Kumar | 1 | -1/+1
commit 9c7e355ccbb33d239360c876dbe49ad5ade65b47 upstream. The register offset used for the global interrupt clear programming was not correct. Fix the programming to use the right offset. Fixes: ded1f3db4cd6 ("dmaengine: tegra210-adma: prepare for supporting newer Tegra chips") Cc: stable@vger.kernel.org Signed-off-by: Mohan Kumar <mkumard@nvidia.com> Link: https://lore.kernel.org/r/20230102064844.31306-1-mkumard@nvidia.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-01-24 | dmaengine: lgm: Move DT parsing after initialization | Peter Harliman Liem | 1 | -5/+5
commit 96b3bb18f6cbe259ef4e0bed3135911b7e8d2af5 upstream. ldma_cfg_init() will parse DT to retrieve certain configs. However, that is called before ldma_dma_init_vXX(), which will make some initialization to channel configs. It will thus incorrectly overwrite certain configs that are declared in DT. To fix that, we move DT parsing after initialization. Function name is renamed to better represent what it does. Fixes: 32d31c79a1a4 ("dmaengine: Add Intel LGM SoC DMA support.") Signed-off-by: Peter Harliman Liem <pliem@maxlinear.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/afef6fc1ed20098b684e0d53737d69faf63c125f.1672887183.git.pliem@maxlinear.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-01-24 | Add exception protection processing for vd in axi_chan_handle_err function | Shawn.Shao | 1 | -0/+6
commit 57054fe516d59d03a7bcf1888e82479ccc244f87 upstream. Since there is no protection for vd, a kernel panic will be triggered here in exceptional cases. You can refer to the processing of axi_chan_block_xfer_complete function The triggered kernel panic is as follows: [ 67.848444] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000060 [ 67.848447] Mem abort info: [ 67.848449] ESR = 0x96000004 [ 67.848451] EC = 0x25: DABT (current EL), IL = 32 bits [ 67.848454] SET = 0, FnV = 0 [ 67.848456] EA = 0, S1PTW = 0 [ 67.848458] Data abort info: [ 67.848460] ISV = 0, ISS = 0x00000004 [ 67.848462] CM = 0, WnR = 0 [ 67.848465] user pgtable: 4k pages, 48-bit VAs, pgdp=00000800c4c0b000 [ 67.848468] [0000000000000060] pgd=0000000000000000, p4d=0000000000000000 [ 67.848472] Internal error: Oops: 96000004 [#1] SMP [ 67.848475] Modules linked in: dmatest [ 67.848479] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.10.100-emu_x2rc+ #11 [ 67.848483] pstate: 62000085 (nZCv daIf -PAN -UAO +TCO BTYPE=--) [ 67.848487] pc : axi_chan_handle_err+0xc4/0x230 [ 67.848491] lr : axi_chan_handle_err+0x30/0x230 [ 67.848493] sp : ffff0803fe55ae50 [ 67.848495] x29: ffff0803fe55ae50 x28: ffff800011212200 [ 67.848500] x27: ffff0800c42c0080 x26: ffff0800c097c080 [ 67.848504] x25: ffff800010d33880 x24: ffff80001139d850 [ 67.848508] x23: ffff0800c097c168 x22: 0000000000000000 [ 67.848512] x21: 0000000000000080 x20: 0000000000002000 [ 67.848517] x19: ffff0800c097c080 x18: 0000000000000000 [ 67.848521] x17: 0000000000000000 x16: 0000000000000000 [ 67.848525] x15: 0000000000000000 x14: 0000000000000000 [ 67.848529] x13: 0000000000000000 x12: 0000000000000040 [ 67.848533] x11: ffff0800c0400248 x10: ffff0800c040024a [ 67.848538] x9 : ffff800010576cd4 x8 : ffff0800c0400270 [ 67.848542] x7 : 0000000000000000 x6 : ffff0800c04003e0 [ 67.848546] x5 : ffff0800c0400248 x4 : ffff0800c4294480 [ 67.848550] x3 : dead000000000100 x2 : dead000000000122 [ 67.848555] x1 : 0000000000000100 x0 : ffff0800c097c168 [ 67.848559] Call trace: [ 67.848562] axi_chan_handle_err+0xc4/0x230 [ 67.848566] dw_axi_dma_interrupt+0xf4/0x590 [ 67.848569] __handle_irq_event_percpu+0x60/0x220 [ 67.848573] handle_irq_event+0x64/0x120 [ 67.848576] handle_fasteoi_irq+0xc4/0x220 [ 67.848580] __handle_domain_irq+0x80/0xe0 [ 67.848583] gic_handle_irq+0xc0/0x138 [ 67.848585] el1_irq+0xc8/0x180 [ 67.848588] arch_cpu_idle+0x14/0x2c [ 67.848591] default_idle_call+0x40/0x16c [ 67.848594] do_idle+0x1f0/0x250 [ 67.848597] cpu_startup_entry+0x2c/0x60 [ 67.848600] rest_init+0xc0/0xcc [ 67.848603] arch_call_rest_init+0x14/0x1c [ 67.848606] start_kernel+0x4cc/0x500 [ 67.848610] Code: eb0002ff 9a9f12d6 f2fbd5a2 f2fbd5a3 (a94602c1) [ 67.848613] ---[ end trace 585a97036f88203a ]--- Signed-off-by: Shawn.Shao <shawn.shao@jaguarmicro.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230112055802.1764-1-shawn.shao@jaguarmicro.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
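A hedged sketch of the guard on top of the virt-dma helpers (the driver-local drivers/dma/virt-dma.h header); the surrounding channel handling is simplified and only illustrates bailing out when no virtual descriptor is pending, mirroring what the completion path already does:

#include <linux/list.h>
#include <linux/printk.h>

#include "virt-dma.h"	/* driver-local header under drivers/dma/ */

/* Called with vc->lock held, like the real error handler. */
static void example_handle_err(struct virt_dma_chan *vc)
{
	struct virt_dma_desc *vd = vchan_next_desc(vc);

	if (!vd) {
		pr_warn("error interrupt but no descriptor is pending\n");
		return;
	}

	list_del(&vd->node);
	vchan_cookie_complete(vd);
}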