path: root/drivers
2025-05-06  dma-mapping: Provide an interface to allow allocate IOVA  (Leon Romanovsky, 1 file changed, -0/+86)
The existing .map_pages() callback provides both allocation of IOVA and linking of DMA pages. That combination works great for most of the callers who use it in control paths, but is less effective in fast paths where there may be multiple calls to map_page(). These advanced callers already manage their data in some sort of database and can perform IOVA allocation in advance, leaving the range-linking operation for the fast path. Provide an interface to allocate/deallocate IOVA; the next patch adds the interface to link/unlink DMA ranges to that specific IOVA. In the new API a DMA mapping transaction is identified by a struct dma_iova_state, which holds some precomputed information for the transaction that does not change for each page being mapped, along with a check whether IOVA can be used for the specific transaction. The API is exported from dma-iommu as it is the only supported implementation; the namespace is clearly different from the iommu_* functions, which are not allowed to be used directly. This code layout saves a function call per API call in the datapath as well as a lot of boilerplate code. Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
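As a rough illustration of the split described above, a fast-path caller could reserve IOVA once and keep the state around; the names dma_iova_try_alloc()/dma_iova_free() follow this series, but treat the exact signatures as an assumption, and the two helpers called in the branches are placeholders:

  struct dma_iova_state state = {};

  /* Control path: try to reserve IOVA space once for the whole
   * transaction; the state caches per-transaction information. */
  if (dma_iova_try_alloc(dev, &state, phys, total_len)) {
          /* Fast path (next patch in the series): link/unlink the
           * individual DMA ranges into the reserved IOVA. */
          do_fast_path_mappings(dev, &state);     /* placeholder */
          dma_iova_free(dev, &state);
  } else {
          /* IOVA can't be used for this transaction: fall back to
           * the classic dma_map_page()/dma_map_sg() path. */
          do_legacy_mapping(dev);                 /* placeholder */
  }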
2025-05-06  iommu: add kernel-doc for iommu_unmap_fast  (Leon Romanovsky, 1 file changed, -0/+19)
Add kernel-doc section for iommu_unmap_fast to document existing limitation of underlying functions which can't split individual ranges. Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2025-05-06  iommu: generalize the batched sync after map interface  (Christoph Hellwig, 1 file changed, -36/+29)
For the upcoming IOVA-based DMA API we want to batch the ops->iotlb_sync_map() call after mapping multiple IOVAs from dma-iommu without having a scatterlist. Improve the API. Add a wrapper for the map_sync as iommu_sync_map() so that callers don't need to poke into the methods directly. Formalize __iommu_map() into iommu_map_nosync() which requires the caller to call iommu_sync_map() after all maps are completed. Refactor the existing sanity checks from all the different layers into iommu_map_nosync(). Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Will Deacon <will@kernel.org> Tested-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
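A sketch of the intended calling convention, assuming iommu_map_nosync() keeps the iommu_map() signature apart from the sync split and iommu_sync_map() takes the mapped range:

  /* Map several ranges without per-call IOTLB syncs... */
  for (i = 0; i < nr_ranges; i++) {
          ret = iommu_map_nosync(domain, iova + offs[i], phys[i], lens[i],
                                 IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
          if (ret)
                  goto err_unmap;
  }
  /* ...then flush the IOTLB once for the whole batch. */
  ret = iommu_sync_map(domain, iova, total_len);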
2025-05-06  dma-mapping: move the PCI P2PDMA mapping helpers to pci-p2pdma.h  (Christoph Hellwig, 1 file changed, -0/+1)
To support the upcoming non-scatterlist mapping helpers, we need to go back to have them called outside of the DMA API. Thus move them out of dma-map-ops.h, which is only for DMA API implementations to pci-p2pdma.h, which is for driver use. Note that the core helper is still not exported as the mapping is expected to be done only by very highlevel subsystem code at least for now. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2025-05-06  PCI/P2PDMA: Refactor the p2pdma mapping helpers  (Christoph Hellwig, 2 files changed, -56/+29)
The current scheme with a single helper to determine the P2P status and map a scatterlist segment forces users to always use the map_sg helper to DMA map, which we're trying to get away from because scatterlists are very cache inefficient. Refactor the code so that there is a single helper that checks the P2P state for a page, including the result that it is not a P2P page, to simplify the callers, and a second one to perform the address translation for a bus mapped P2P transfer that does not depend on the scatterlist structure. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
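The shape of the new split, hedged: the helper names and enum values below illustrate the two roles (per-page P2P state check, then bus-address translation) rather than the exact exported symbols:

  switch (pci_p2pdma_state(&p2p_state, dev, page)) {
  case PCI_P2PDMA_MAP_NONE:
  case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
          /* not P2P, or routed through the host bridge: use the
           * normal per-page DMA mapping path */
          addr = dma_map_page(dev, page, offset, len, dir);
          break;
  case PCI_P2PDMA_MAP_BUS_ADDR:
          /* bus-mapped P2P: translate the address directly,
           * no scatterlist required */
          addr = pci_p2pdma_bus_addr_map(&p2p_state,
                                         page_to_phys(page) + offset);
          break;
  default:
          return -EREMOTEIO;      /* unsupported P2P topology */
  }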
2025-05-06  scsi: scsi_debug: Reduce DEF_ATOMIC_WR_MAX_LENGTH  (John Garry, 1 file changed, -1/+1)
The default atomic write max length in DEF_ATOMIC_WR_MAX_LENGTH is excessively large. For 512B LBS, we would get a 4MB max, but due to block layer atomic write restrictions this is limited to 512KB. Reduce DEF_ATOMIC_WR_MAX_LENGTH to a value which would be more realistic (for a real device supporting atomic writes), 64KB. Signed-off-by: John Garry <john.g.garry@oracle.com> Link: https://lore.kernel.org/r/20250501100241.930071-1-john.g.garry@oracle.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-05-06  scsi: smartpqi: Delete a stray tab in pqi_is_parity_write_stream()  (Dan Carpenter, 1 file changed, -1/+1)
We accidentally indented this line an extra tab. Delete the tab. Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Don Brace <Don.Brace@microchip.com> Link: https://lore.kernel.org/r/aBHarJ601XTGsyOX@stanley.mountain Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-05-06  scsi: dc395x: Remove leftover if statement in reselect()  (Nathan Chancellor, 1 file changed, -1/+0)
Clang warns (or errors with CONFIG_WERROR=y):

  drivers/scsi/dc395x.c:2553:6: error: variable 'id' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
   2553 |         if (!(rsel_tar_lun_id & (IDENTIFY_BASE << 8)))
        |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  drivers/scsi/dc395x.c:2556:22: note: uninitialized use occurs here
   2556 |         dcb = find_dcb(acb, id, lun);
        |                             ^~
  drivers/scsi/dc395x.c:2553:2: note: remove the 'if' if its condition is always true
   2553 |         if (!(rsel_tar_lun_id & (IDENTIFY_BASE << 8)))
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2554 |                 id = rsel_tar_lun_id & 0xff;

This if statement only existed for a debugging print but it was not removed with the debugging print in a recent cleanup, leading to id only being initialized when the if condition is true. Remove the if statement to ensure id is always initialized, clearing up the warning. Fixes: 62b434b0db2c ("scsi: dc395x: Remove DEBUG conditional compilation") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20250429-scsi-dc395x-fix-uninit-var-v1-1-25215d481020@kernel.org Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-05-06  vhost/net: Defer TX queue re-enable until after sendmsg  (Jon Kohler, 1 file changed, -9/+21)
In handle_tx_copy, TX batching processes packets below ~PAGE_SIZE and batches up to 64 messages before calling sock->sendmsg. Currently, when there are no more messages on the ring to dequeue, handle_tx_copy re-enables kicks on the ring *before* firing off the batch sendmsg. However, sock->sendmsg incurs a non-zero delay, especially if it needs to wake up a thread (e.g., another vhost worker). If the guest submits additional messages immediately after the last ring check and disablement, it triggers an EPT_MISCONFIG vmexit to attempt to kick the vhost worker. This may happen while the worker is still processing the sendmsg, leading to wasteful exit(s). This is particularly problematic for single-threaded guest submission threads, as they must exit, wait for the exit to be processed (potentially involving a TTWU), and then resume. In scenarios like a constant stream of UDP messages, this results in a sawtooth pattern where the submitter frequently vmexits, and the vhost-net worker alternates between sleeping and waking. A common solution is to configure vhost-net busy polling via userspace (e.g., qemu poll-us). However, treating the sendmsg as the "busy" period by keeping kicks disabled during the final sendmsg and performing one additional ring check afterward provides a significant performance improvement without any excess busy poll cycles. If messages are found in the ring after the final sendmsg, requeue the TX handler. This ensures fairness for the RX handler and allows vhost_run_work_list to cond_resched() as needed.

Test Case:
  TX VM: taskset -c 2 iperf3 -c rx-ip-here -t 60 -p 5200 -b 0 -u -i 5
  RX VM: taskset -c 2 iperf3 -s -p 5200 -D
  6.12.0, each worker backed by tun interface with IFF_NAPI setup.
  Note: TCP side is largely unchanged as that was copy bound

6.12.0 unpatched
  EPT_MISCONFIG/second: 5411
  Datagrams/second: ~382k
  Interval        Transfer     Bitrate         Lost/Total Datagrams
  0.00-30.00 sec  15.5 GBytes  4.43 Gbits/sec  0/11481630 (0%)  sender

6.12.0 patched
  EPT_MISCONFIG/second: 58 (~93x reduction)
  Datagrams/second: ~650k (~1.7x increase)
  Interval        Transfer     Bitrate         Lost/Total Datagrams
  0.00-30.00 sec  26.4 GBytes  7.55 Gbits/sec  0/19554720 (0%)  sender

Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Jon Kohler <jon@nutanix.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://patch.msgid.link/20250501020428.1889162-1-jon@nutanix.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
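In pseudocode, the reordering amounts to the following; this is a sketch rather than the literal handle_tx_copy() hunk, and send_batch()/ring_still_empty()/rings_empty are placeholders:

  /* Before: re-enable guest kicks, then do the final sendmsg().
   * After: keep kicks disabled across the final sendmsg(), treat it as
   * the "busy" window, then check the ring once more. */
  if (rings_empty) {
          send_batch(sock);                       /* final sendmsg() */
          if (!ring_still_empty(vq))
                  vhost_poll_queue(&vq->poll);    /* requeue TX handler */
          else
                  vhost_enable_notify(&net->dev, vq); /* re-enable kicks */
  }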
2025-05-06  net: phy: Refactor fwnode_get_phy_node()  (Andy Shevchenko, 1 file changed, -4/+4)
Refactor to check if the fwnode we got is correct and return if so, otherwise do additional checks. Using same pattern in all conditionals makes it slightly easier to read and understand. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://patch.msgid.link/20250430143802.3714405-1-andriy.shevchenko@linux.intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-06  ASoC: Phase out hybrid PCI devres  (Mark Brown, 134 files changed, -765/+1299)
Merge series from Philipp Stanner <phasta@kernel.org>: A year ago we put quite some work into getting PCI into better shape. Some pci_ functions can sometimes be managed with devres, which is obviously bad. We want to provide an obvious API, where pci_ functions are never managed and pcim_ functions always are. Thus, everyone who enables their device with pcim_enable_device() must be ported to pcim_ functions. Porting all users will later enable us to significantly simplify parts of the PCI subsystem. See [1] for details. This patch series does that for sound. Feel free to squash the commits as you see fit. P. [1] https://elixir.bootlin.com/linux/v6.14-rc4/source/drivers/pci/devres.c#L18
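For a typical sound driver, the port looks roughly like this; snd_foo is a hypothetical driver name, and pcim_request_all_regions() is assumed to be one of the managed replacements from the devres rework (exact substitutions vary per driver):

  static int snd_foo_probe(struct pci_dev *pci,
                           const struct pci_device_id *id)
  {
          int err;

          err = pcim_enable_device(pci);          /* managed enable */
          if (err)
                  return err;

          /* replaces pci_request_regions(); released automatically */
          err = pcim_request_all_regions(pci, "snd_foo");
          if (err)
                  return err;

          /* card setup continues using managed helpers only */
          return 0;
  }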
2025-05-06  PCI: Explicitly put devices into D0 when initializing  (Mario Limonciello, 3 files changed, -9/+11)
The AMD BIOS team has root-caused an issue in which NVMe storage failed to come back from suspend to a missing call to _REG when the NVMe device was probed. 112a7f9c8edbf ("PCI/ACPI: Call _REG when transitioning D-states") added support for calling _REG when transitioning D-states, but this only works if the device actually "transitions" D-states. 967577b062417 ("PCI/PM: Keep runtime PM enabled for unbound PCI devices") added support for runtime PM on PCI devices, but never actually 'explicitly' sets the device to D0. To make sure that devices are in D0 and that platform methods such as _REG are called, explicitly set all devices into D0 during initialization. Fixes: 967577b062417 ("PCI/PM: Keep runtime PM enabled for unbound PCI devices") Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Denis Benato <benato.denis96@gmail.com> Tested-By: Yijun Shen <Yijun_Shen@Dell.com> Tested-By: David Perry <david.perry@amd.com> Reviewed-by: Rafael J. Wysocki <rafael@kernel.org> Link: https://patch.msgid.link/20250424043232.1848107-1-superm1@kernel.org
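The idea in simplified form (the actual call site inside the PCI core may differ):

  /* During PCI device initialization, force an explicit transition to D0
   * instead of assuming it, so platform hooks such as ACPI _REG are
   * invoked even for devices that already report D0. */
  pci_set_power_state(dev, PCI_D0);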
2025-05-06  i2c: omap: fix deprecated of_property_read_bool() use  (Johan Hovold, 1 file changed, -1/+1)
Using of_property_read_bool() for non-boolean properties is deprecated and results in a warning during runtime since commit c141ecc3cecd ("of: Warn when of_property_read_bool() is used on non-boolean properties"). Fixes: b6ef830c60b6 ("i2c: omap: Add support for setting mux") Cc: Jayesh Choudhary <j-choudhary@ti.com> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Acked-by: Mukesh Kumar Savaliya <quic_msavaliy@quicinc.com> Link: https://lore.kernel.org/r/20250415075230.16235-1-johan+linaro@kernel.org Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
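The pattern behind the warning and its fix, illustrated generically; the property name and setup_mux() helper are assumptions for the sake of the example, not the exact omap hunk:

  /* deprecated: of_property_read_bool() on a non-boolean property */
  if (of_property_read_bool(np, "mux-states"))
          setup_mux(dev);         /* placeholder */

  /* preferred: test presence of the property explicitly */
  if (of_property_present(np, "mux-states"))
          setup_mux(dev);         /* placeholder */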
2025-05-06  of: Simplify of_dma_set_restricted_buffer() to use of_for_each_phandle()  (Rob Herring (Arm), 1 file changed, -20/+11)
Simplify of_dma_set_restricted_buffer() by using of_property_present() and of_for_each_phandle() iterator. Acked-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com> Link: https://lore.kernel.org/r/20250423-dt-memory-region-v2-v2-2-2fbd6ebd3c88@kernel.org Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
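For reference, the iterator pattern the simplification relies on (arguments follow the of_for_each_phandle() macro: iterator, error, node, list name, cells name, fixed cell count):

  struct of_phandle_iterator it;
  int err;

  /* walk every phandle in the "memory-region" property */
  of_for_each_phandle(&it, err, np, "memory-region", NULL, 0) {
          struct device_node *rmem_np = it.node;
          /* look up / configure the referenced reserved-memory node */
  }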
2025-05-06  of: reserved_mem: Add functions to parse "memory-region"  (Rob Herring (Arm), 1 file changed, -0/+80)
Drivers with "memory-region" properties currently have to do their own parsing of "memory-region" properties. The result is all the drivers have similar patterns of a call to parse "memory-region" and then get the region's address and size. As this is a standard property, it should have common functions for drivers to use. Add new functions to count the number of regions and retrieve the region's address as a resource. Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Acked-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com> Link: https://lore.kernel.org/r/20250423-dt-memory-region-v2-v2-1-2fbd6ebd3c88@kernel.org Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
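A hedged example of how a driver might consume the new helpers; the names of_reserved_mem_region_count() and of_reserved_mem_region_to_resource() follow this series, but check the final header for the exact signatures:

  struct resource res;
  int i, count;

  count = of_reserved_mem_region_count(dev->of_node);
  for (i = 0; i < count; i++) {
          if (of_reserved_mem_region_to_resource(dev->of_node, i, &res))
                  continue;
          /* res.start and resource_size(&res) give the region's address
           * and size without open-coded "memory-region" parsing */
          dev_dbg(dev, "memory-region %d: %pR\n", i, &res);
  }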
2025-05-05  virtio-net: free xsk_buffs on error in virtnet_xsk_pool_enable()  (Jakub Kicinski, 1 file changed, -2/+6)
The selftests recently added to our CI by Bui Quang Minh reveal that there is a memory leak on the error path of virtnet_xsk_pool_enable():

  unreferenced object 0xffff88800a68a000 (size 2048):
    comm "xdp_helper", pid 318, jiffies 4294692778
    hex dump (first 32 bytes):
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace (crc 0):
      __kvmalloc_node_noprof+0x402/0x570
      virtnet_xsk_pool_enable+0x293/0x6a0 (drivers/net/virtio_net.c:5882)
      xp_assign_dev+0x369/0x670 (net/xdp/xsk_buff_pool.c:226)
      xsk_bind+0x6a5/0x1ae0
      __sys_bind+0x15e/0x230
      __x64_sys_bind+0x72/0xb0
      do_syscall_64+0xc1/0x1d0
      entry_SYSCALL_64_after_hwframe+0x77/0x7f

Acked-by: Jason Wang <jasowang@redhat.com> Fixes: e9f3962441c0 ("virtio_net: xsk: rx: support fill with xsk buffer") Link: https://patch.msgid.link/20250430163836.3029761-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-05  virtio-net: don't re-enable refill work too early when NAPI is disabled  (Jakub Kicinski, 1 file changed, -3/+8)
Commit 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx") fixed a deadlock between reconfig paths and refill work trying to disable the same NAPI instance. The refill work can't run in parallel with reconfig because trying to double-disable a NAPI instance causes a stall under the instance lock, which the reconfig path needs in order to re-enable the NAPI and therefore unblock the stalled thread. There are two cases where we re-enable refill too early. One is in the virtnet_set_queues() handler. We call it when installing XDP:

  virtnet_rx_pause_all(vi);
  ...
  virtnet_napi_tx_disable(..);
  ...
  virtnet_set_queues(..);
  ...
  virtnet_rx_resume_all(..);

We want the work to be disabled until we call virtnet_rx_resume_all(), but virtnet_set_queues() kicks it before the NAPIs were re-enabled. The other case is a more trivial mis-ordering in __virtnet_rx_resume() found by code inspection. Taking the spin lock in virtnet_set_queues() (requested during review) may be unnecessary as we are under rtnl_lock and so are all paths writing to ->refill_enabled. Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Bui Quang Minh <minhquangbui99@gmail.com> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx") Fixes: 413f0271f396 ("net: protect NAPI enablement with netdev_lock()") Link: https://patch.msgid.link/20250430163758.3029367-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-05  clk: rockchip: add GATE_GRFs for SAI MCLKOUT to rk3576  (Nicolas Frattaroli, 1 file changed, -0/+27)
The Rockchip RK3576 gates the SAI MCLKOUT clocks behind some IOC GRF writes. Add these clock branches, and add the IOC GRF to the auxiliary GRF hashtable. Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> Link: https://lore.kernel.org/r/20250502-rk3576-sai-v3-4-376cef19dd7c@collabora.com Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2025-05-05  clk: rockchip: introduce GRF gates  (Nicolas Frattaroli, 4 files changed, -1/+134)
Some rockchip SoCs, namely the RK3576, have bits in a General Register File (GRF) that act just like clock gates. The downstream vendor kernel simply maps over the already mapped GRF range with a generic clock gate driver. This solution isn't suitable for upstream, as a memory range will be in use by multiple drivers at the same time, and it leaks implementation details into the device tree. Instead, implement this with a new clock branch type in the Rockchip clock driver: GRF gates. Somewhat akin to MUXGRF, this clock branch depends on the type of GRF, but functions like a gate instead. Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> Link: https://lore.kernel.org/r/20250502-rk3576-sai-v3-3-376cef19dd7c@collabora.com Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2025-05-05  clk: rockchip: introduce auxiliary GRFs  (Nicolas Frattaroli, 7 files changed, -18/+72)
The MUXGRF clock branch type depends on having access to some sort of GRF as a regmap to be registered. So far, we could easily get away with only ever having one GRF stowed away in the context. However, newer Rockchip SoCs, such as the RK3576, have several GRFs which are relevant for clock purposes. It already depends on the pmu0 GRF for MUXGRF reasons, but could get away with not refactoring this because it didn't need the sysgrf at all, so could overwrite the pointer in the clock provider to the pmu0 grf regmap handle. In preparation for needing to finally access more than one GRF per SoC, let's untangle this. Introduce an auxiliary GRF hashmap, and a GRF type enum. The hashmap is keyed by the enum, and clock branches now have a struct member to store the value of that enum, which defaults to the system GRF. The SoC-specific _clk_init function can then insert pointers to GRF regmaps into the hashmap based on the grf type. During clock branch registration, we then pick the right GRF for each branch from the hashmap if something other than the sys GRF is requested. The reason for doing it with this grf type indirection in the clock branches is so that we don't need to define the MUXGRF branches in a separate step, just to have a direct pointer to a regmap available already. Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> Link: https://lore.kernel.org/r/20250502-rk3576-sai-v3-2-376cef19dd7c@collabora.com Signed-off-by: Heiko Stuebner <heiko@sntech.de>
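Conceptually, the auxiliary GRF lookup works like the sketch below; the enum values and helper name are illustrative, not the exact clk-rockchip internals:

  enum rockchip_grf_type { GRF_SYS = 0, GRF_PMU0, GRF_IOC };

  struct rockchip_aux_grf {
          struct regmap *grf;
          struct hlist_node node;         /* keyed by enum rockchip_grf_type */
  };

  /* During branch registration, pick the GRF requested by the branch;
   * branches default to the system GRF if they don't say otherwise. */
  grf = rockchip_aux_grf_lookup(ctx, branch->grf_type);  /* placeholder */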
2025-05-05  drm/xe/gsc: do not flush the GSC worker from the reset path  (Daniele Ceraolo Spurio, 7 files changed, -2/+44)
The workqueue used for the reset worker is marked as WQ_MEM_RECLAIM, while the GSC one isn't (and can't be, as we need to do memory allocations in the gsc worker). Therefore, we can't flush the latter from the former. The reason why we had such a flush was to avoid interrupting either the GSC FW load or in-progress GSC proxy operations. GSC proxy operations fall into 2 categories:

1) GSC proxy init: this only happens once immediately after GSC FW load and does not support being interrupted. The only way to recover from an interruption of the proxy init is to do an FLR and re-load the GSC.

2) GSC proxy request: this can happen in response to a request that the driver sends to the GSC. If this is interrupted, the GSC FW will timeout and the driver request will be failed, but overall the GSC will keep working fine.

Flushing the work allowed us to avoid interruption in both cases (unless the hang came from the GSC engine itself, in which case we're toast anyway). However, a failure on a proxy request is tolerable if we're in a scenario where we're triggering a GT reset (i.e., something has already gone pretty wrong), so what we really need to avoid is interrupting the init flow, which we can do by polling on the register that reports when the proxy init is complete (as that ensures all the load and init operations have been completed). Note that during suspend we still want to do a flush of the worker to make sure it completes any operations involving the HW before the power is cut. v2: fix spelling in commit msg, rename waiter function (Julia) Fixes: dd0e89e5edc2 ("drm/xe/gsc: GSC FW load") Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/4830 Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: <stable@vger.kernel.org> # v6.8+ Reviewed-by: Julia Filipchuk <julia.filipchuk@intel.com> Link: https://lore.kernel.org/r/20250502155104.2201469-1-daniele.ceraolospurio@intel.com
2025-05-05  mm: remove NR_BOUNCE zone stat  (Christoph Hellwig, 1 file changed, -1/+1)
The stat is always 0 now, so remove it and hardwire the user visible output to 0. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-8-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  scsi: remove the no_highmem flag in the host  (Christoph Hellwig, 1 file changed, -3/+0)
All users are gone now. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-6-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  usb-storage: reject probe of device on non-DMA HCDs when using highmem  (Christoph Hellwig, 1 file changed, -6/+14)
usb-storage is the last user of the block layer bounce buffering now, and only uses it for HCDs that do not support DMA on highmem configs. Remove this support and fail the probe so that the block layer bounce buffering can go away. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Alan Stern <stern@rowland.harvard.edu> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-5-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  scsi: make ppa depend on !HIGHMEM  (Christoph Hellwig, 2 files changed, -1/+1)
This is one of the last drivers depending on the block layer bounce buffering code. Restrict it to run on non-highmem configs so that the bounce buffering code can be removed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-4-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  scsi: make imm depend on !HIGHMEM  (Christoph Hellwig, 2 files changed, -1/+1)
This is one of the last drivers depending on the block layer bounce buffering code. Restrict it to run on non-highmem configs so that the bounce buffering code can be removed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-3-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  scsi: make aha152x depend on !HIGHMEM  (Christoph Hellwig, 2 files changed, -1/+1)
This is one of the last drivers depending on the block layer bounce buffering code. Restrict it to run on non-highmem configs so that the bounce buffering code can be removed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20250505081138.3435992-2-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-05  PCI: Fix lock symmetry in pci_slot_unlock()  (Ilpo Järvinen, 1 file changed, -1/+2)
Commit a4e772898f8b ("PCI: Add missing bridge lock to pci_bus_lock()") made the choice of lock function depend on dev->subordinate, but left pci_slot_unlock() unmodified, creating a locking asymmetry compared with pci_slot_lock(). Because of the asymmetric lock handling, the same bridge device is unlocked twice: first pci_bus_unlock() unlocks bus->self, and then pci_slot_unlock() unconditionally unlocks the same bridge device. Move pci_dev_unlock() inside an else branch to match the logic in pci_slot_lock(). Fixes: a4e772898f8b ("PCI: Add missing bridge lock to pci_bus_lock()") Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250505115412.37628-1-ilpo.jarvinen@linux.intel.com
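The resulting unlock path mirrors pci_slot_lock(); a sketch of the per-device logic in pci_slot_unlock():

  if (dev->subordinate)
          pci_bus_unlock(dev->subordinate);
  else
          pci_dev_unlock(dev);    /* only when there is no subordinate bus */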
2025-05-05  drm/amdgpu: only keep most recent fence for each context  (Arvind Yadav, 1 file changed, -0/+6)
Keep only the latest fence per context to reduce the number of values given back to userspace.

v2: Export this code from dma-fence-unwrap.c (by Christian).
v3: Split this into a dma_buf patch and an amd userq patch (by Sunil). No need to add a new function, just re-use the existing one (by Christian).
v4: Export the dma_fence_dedup_array() function and use it (by Christian).

Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com> Reviewed-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amdgpu: Add Support for enforcing isolation without Cleaner Shader  (Srinivasan Shanmugam, 5 files changed, -7/+25)
Adjusted the enforce isolation setting handling to include the ability to disable the cleaner shader without affecting isolation between tasks. v2: Updated enforce isolation documentation and parameters. (Alex) Cc: Christian König <christian.koenig@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  dma-fence: Add helper to sort and deduplicate dma_fence arrays  (Arvind Yadav, 1 file changed, -14/+37)
Export a new helper function `dma_fence_dedup_array()` that sorts an array of dma_fence pointers by context, then deduplicates the array by retaining only the most recent fence per context. This utility is useful when merging or optimizing sets of fences where redundant entries from the same context can be pruned. The operation is performed in-place and releases references to dropped fences using dma_fence_put().

v2: Export this code from dma-fence-unwrap.c (by Christian).
v3: Split this into a dma_buf patch and an amd userq patch (by Sunil). No need to add a new function, just re-use the existing one (by Christian).
v4: Export dma_fence_dedup_array() and use it (by Christian).

Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com> Reviewed-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
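The deduplication idea in miniature (sort by context, keep the later fence per context); this is an illustration, not the literal body of dma_fence_dedup_array(), and fence_cmp() is an assumed comparator ordering by context and then seqno:

  sort(fences, count, sizeof(*fences), fence_cmp, NULL);

  /* in-place compaction: keep only the last fence of each context */
  for (i = 1, j = 0; i < count; i++) {
          if (fences[i]->context == fences[j]->context)
                  dma_fence_put(fences[j]);       /* drop the older one */
          else
                  j++;
          fences[j] = fences[i];
  }
  count = j + 1;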
2025-05-05  drm/amdgpu: change DRM_DBG_DRIVER to drm_dbg_driver  (Sunil Khatri, 1 file changed, -3/+4)
Update the functions in amdgpu_userqueues.c from DRM_DBG_DRIVER to drm_dbg_driver so that the specific GPU instance is included in the log output on multi-GPU systems. Signed-off-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amdgpu: change DRM_ERROR to drm_file_err in amdgpu_userq.c  (Sunil Khatri, 1 file changed, -31/+32)
change the DRM_ERROR and drm_err to drm_file_err to add process name and pid to the logging. Signed-off-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amdgpu: use drm_file_err in fence timeouts  (Sunil Khatri, 1 file changed, -5/+4)
Use drm_file_err instead of DRM_ERROR, which adds process and pid information in the userqueue error logging. Sample log:

  [   19.802315] amdgpu 0000:0a:00.0: [drm] *ERROR* comm: ibus-x11 pid: 2055 client: Unset ... Couldn't unmap all the queues
  [   19.802319] amdgpu 0000:0a:00.0: [drm] *ERROR* comm: ibus-x11 pid: 2055 client: Unset ... Failed to evict userqueue
  [   19.838432] amdgpu 0000:0a:00.0: [drm] *ERROR* comm: systemd-logind pid: 1042 client: Unset ... Couldn't unmap all the queues
  [   19.838436] amdgpu 0000:0a:00.0: [drm] *ERROR* comm: systemd-logind pid: 1042 client: Unset ... Failed to evict userqueue

Signed-off-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amdgpu: add drm_file reference in userq_mgr  (Sunil Khatri, 3 files changed, -3/+7)
drm_file will be used in the usermode queues code to enable better process information in logging, hence add drm_file as part of the userq_mgr struct. Update the drm_file pointer in userq_mgr for each amdgpu_driver_open_kms. Signed-off-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm: add drm_file_err function to add process info  (Sunil Khatri, 1 file changed, -0/+34)
Add a drm helper function which appends the process information for the drm_file over drm_err formatted output.

v5: change to macro from function (Christian Koenig); add helper functions for lock/unlock (Christian Koenig)
v6: remove __maybe_unused and make function inline (Jani Nikula); remove drm_print.h
v7: Use va_format and %pV to concatenate fmt and vargs (Jani Nikula)
v8: Code formatting and typos (Ursulin tvrtko)

Signed-off-by: Sunil Khatri <sunil.khatri@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
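The %pV trick from v7 roughly looks like this; a simplified sketch only, and the real helper resolves the owning task from the drm_file rather than using current:

  void drm_file_err(struct drm_file *file_priv, const char *fmt, ...)
  {
          struct va_format vaf;
          va_list args;

          va_start(args, fmt);
          vaf.fmt = fmt;
          vaf.va = &args;
          /* prepend process info, then emit the caller's message */
          drm_err(file_priv->minor->dev, "comm: %s pid: %d ... %pV",
                  current->comm, task_pid_nr(current), &vaf);
          va_end(args);
  }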
2025-05-05  drm/amd/display: Promote DC to 3.2.331  (Taimur Hassan, 1 file changed, -1/+1)
Summary:
* Remove redundant NULL check
* Fix invalid context error in dml helper
* Prepare for Fused I2C-over-AUX
* Allow DSCClock disable
* Vmax / Vmin update for Vsync
* Fix race condition in DPIA AUX transfer
* Fix wrong handling for AUX_DEFER case
* Only wait for required space in DMUB mailbox

Acked-by: Tom Chung <chiahsuan.chung@amd.com> Signed-off-by: Taimur Hassan <Syed.Hassan@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Only wait for required free space in DMUB mailbox  (Dillon Varone, 3 files changed, -17/+76)
[WHY&HOW] When command submission is blocked by a full mailbox, only wait for enough space to free to submit the command, instead of waiting for idle. Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Dillon Varone <dillon.varone@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Assign preferred stream encoder instance to dpia  (Meenakshikumar Somasundaram, 1 file changed, -0/+4)
[Why] For dpia, preferred engine instance availability is not checked when assigning stream encoder instance. [How] Check for dpia preferred engine id and assign the same stream encoder instance for the stream if available. Reviewed-by: PeiChen Huang <peichen.huang@amd.com> Signed-off-by: Meenakshikumar Somasundaram <meenakshikumar.somasundaram@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Fix wrong handling for AUX_DEFER case  (Wayne Lin, 1 file changed, -4/+24)
[Why] We incorrectly ack that all bytes were written when the reply is actually a defer. A defer means the sink is not ready for the request, so we should retry the request. [How] Only report that all data was written when we receive I2C_ACK|AUX_ACK. Otherwise, report the number of bytes actually written as received from the sink. Add some messages to facilitate debugging as well. Fixes: ad6756b4d773 ("drm/amd/display: Shift dc link aux to aux_payload") Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Ray Wu <ray.wu@amd.com> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Copy AUX read reply data whenever length > 0  (Wayne Lin, 1 file changed, -2/+1)
[Why] amdgpu_dm_process_dmub_aux_transfer_sync() should return the exact reply data from the sink side and not do any analysis of it. [How] Remove the unnecessary check for AUX_TRANSACTION_REPLY_AUX_ACK. Fixes: ead08b95fa50 ("drm/amd/display: Fix race condition in DPIA AUX transfer") Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Ray Wu <ray.wu@amd.com> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  amd-pstate-ut: Reset amd-pstate driver mode after running selftests  (Swapnil Sapkal, 3 files changed, -7/+19)
In amd-pstate-ut, one of the basic tests is to switch between all possible mode combinations. After running this test, the amd-pstate driver is left in active mode. Store the original mode and restore it after the test. Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Signed-off-by: Swapnil Sapkal <swapnil.sapkal@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://lore.kernel.org/r/20250430064206.7402-1-swapnil.sapkal@amd.com Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
2025-05-05  drm/amd/display: Remove incorrect checking in dmub aux handler  (Wayne Lin, 1 file changed, -11/+1)
[Why & How] "Request length != reply length" is expected behavior defined in spec. It's not an invalid reply. Besides, replied data handling logic is not designed to be written in amdgpu_dm_process_dmub_aux_transfer_sync(). Remove the incorrectly handling section. Fixes: ead08b95fa50 ("drm/amd/display: Fix race condition in DPIA AUX transfer") Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Ray Wu <ray.wu@amd.com> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Fix the checking condition in dmub aux handling  (Wayne Lin, 1 file changed, -1/+1)
[Why & How] Fix the checking condition for detecting AUX_RET_ERROR_PROTOCOL_ERROR; it was wrongly checked with "not equal to". Reviewed-by: Ray Wu <ray.wu@amd.com> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Shift DMUB AUX reply command if necessary  (Wayne Lin, 1 file changed, -1/+4)
[Why] The defined values of the DMUB AUX reply command field were updated, but the DM receiving side was not adjusted accordingly. [How] Check whether the received reply command value uses the updated layout and adjust it if necessary. Fixes: ead08b95fa50 ("drm/amd/display: Fix race condition in DPIA AUX transfer") Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Ray Wu <ray.wu@amd.com> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Refactor SubVP cursor limiting logic  (Dillon Varone, 39 files changed, -135/+418)
[WHY] There are several gaps that can result in SubVP being enabled with incompatible HW cursor sizes, as well as unjustified restrictions on cursor size due to wrong predictions about future SubVP usage. [HOW]
- Remove the "prediction" logic in favor of tagging based on previous SubVP usage.
- Block SubVP if the current HW cursor settings are incompatible.
- Provide an interface for DM to determine whether the HW cursor should be disabled due to an attempt to enable SubVP.
Reviewed-by: Alvin Lee <alvin.lee2@amd.com> Signed-off-by: Dillon Varone <dillon.varone@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Call FP Protect Before Mode Programming/Mode Support  (Austin Zheng, 1 file changed, -4/+4)
[Why] Memory allocation occurs within dml21_validate() for adding phantom planes. May cause kernel to be tainted due to usage of FP Start. [How] Move FP start from dml21_validate to before mode programming/mode support. Calculations requiring floating point are all done within mode programming or mode support. Reviewed-by: Alvin Lee <alvin.lee2@amd.com> Signed-off-by: Austin Zheng <Austin.Zheng@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Remove unnecessary DC_FP_START/DC_FP_END  (Alex Hung, 1 file changed, -6/+0)
[WHY & HOW] Remove the unnecessary DC_FP_START/DC_FP_END pair to reduce time in preempt_disable. It also fixes "BUG: sleeping function called from invalid context" error messages because of calling kzalloc with GFP_KERNEL which can sleep. Reviewed-by: Aurabindo Pillai <aurabindo.pillai@amd.com> Signed-off-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Send IPSExit unconditionally  (JinZe Xu, 1 file changed, -7/+8)
[Why&How] PMFW needs to flush page cache in IPSExit. Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: JinZe Xu <JinZe.Xu@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-05-05  drm/amd/display: Add skip rIOMMU dc config option  (Kevin Gao, 2 files changed, -3/+4)
[Why] Need option to skip rIOMMU calls for dcn21. [How] Added rIOMMU dc config option and check for whether to skip rIOMMU calls. Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Kevin Gao <kgao1003@amd.com> Signed-off-by: Ray Wu <ray.wu@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>