Age | Commit message | Author | Files | Lines
2025-08-06 | scsi: ufs: mediatek: Fix out-of-bounds access in MCQ IRQ mapping | Peter Wang | 1 | -1/+1
Address a potential out-of-bounds access issue when accessing 'host->mcq_intr_info[q_index]'. The value of 'q_index' might exceed the valid array bounds if 'q_index == nr'. Correct condition to 'q_index >= nr' to prevent accessing invalid memory. Fixes: 66e26a4b8a77 ("scsi: ufs: host: mediatek: Set IRQ affinity policy for MCQ mode") Cc: stable@vger.kernel.org Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Peter Wang <peter.wang@mediatek.com> Link: https://lore.kernel.org/r/20250804060249.1387057-1-peter.wang@mediatek.com Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
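In sketch form, the bounds check described above looks like the following (surrounding driver logic omitted; 'nr' stands for the number of valid entries):

  /* An index equal to the array length is already out of bounds, so the
   * guard must reject q_index == nr as well as anything larger. */
  if (q_index >= nr)
          return -EINVAL;
  irq_info = &host->mcq_intr_info[q_index];   /* now always within bounds */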
2025-08-06 | scsi: lpfc: Remove redundant assignment to avoid memory leak | Jiasheng Jiang | 1 | -1/+0
Remove the redundant assignment if kzalloc() succeeds to avoid memory leak. Fixes: bd2cdd5e400f ("scsi: lpfc: NVME Initiator: Add debugfs support") Signed-off-by: Jiasheng Jiang <jiashengjiangcool@gmail.com> Link: https://lore.kernel.org/r/20250801185202.42631-1-jiashengjiangcool@gmail.com Reviewed-by: Justin Tee <justin.tee@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
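A generic illustration of the leak pattern being removed (names are hypothetical, not the actual lpfc debugfs code):

  buf = kzalloc(size, GFP_KERNEL);
  if (!buf)
          return -ENOMEM;
  buf = NULL;     /* redundant re-assignment: the successful allocation above is leaked */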
2025-08-06 | media: venus: Fix OPP table error handling | Sasha Levin | 1 | -4/+8
The venus driver fails to check if dev_pm_opp_find_freq_{ceil,floor}() returns an error pointer before calling dev_pm_opp_put(). This causes a crash when OPP tables are not present in device tree. Unable to handle kernel access to user memory outside uaccess routines at virtual address 000000000000002e ... pc : dev_pm_opp_put+0x1c/0x4c lr : core_clks_enable+0x4c/0x16c [venus_core] Add IS_ERR() checks before calling dev_pm_opp_put() to avoid dereferencing error pointers. Fixes: b179234b5e59 ("media: venus: pm_helpers: use opp-table for the frequency") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
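A hedged sketch of the pattern described, using the OPP helpers named above (the exact venus call sites differ):

  struct dev_pm_opp *opp;
  unsigned long freq = rate;

  opp = dev_pm_opp_find_freq_ceil(dev, &freq);
  if (IS_ERR(opp))
          return PTR_ERR(opp);    /* no OPP table in DT: don't pass the error pointer on */
  dev_pm_opp_put(opp);            /* only drop a reference that was actually taken */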
2025-08-06 | scsi: lpfc: Fix wrong function reference in a comment | Jean Delvare | 1 | -1/+1
Function scsi_host_remove() doesn't exist, the actual function name is scsi_remove_host(). Signed-off-by: Jean Delvare <jdelvare@suse.de> Link: https://lore.kernel.org/r/20250731133311.52034cc4@endymion Reviewed-by: Justin Tee <justin.tee@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-08-06 | scsi: ufs: core: Fix interrupt handling for MCQ Mode | Nitin Rawat | 1 | -2/+7
Commit 3c7ac40d7322 ("scsi: ufs: core: Delegate the interrupt service routine to a threaded IRQ handler") introduced a regression where the UFS interrupt status register (IS) was not cleared in ufshcd_intr() when operating in MCQ mode. As a result, the IS register remained uncleared. This led to a persistent issue during UIC interrupts: ufshcd_is_auto_hibern8_error() consistently returned true because the UFSHCD_UIC_HIBERN8_MASK bit was set, while the active command was neither UIC_CMD_DME_HIBER_ENTER nor UIC_CMD_DME_HIBER_EXIT. This caused continuous auto hibern8 enter errors and the device failed to boot. To fix this, ensure that the interrupt status register is properly cleared in the ufshcd_intr() function for MCQ mode with ESI enabled.

  [ 4.553226] ufshcd-qcom 1d84000.ufs: ufshcd_check_errors: Auto Hibern8 Enter failed - status: 0x00000040, upmcrs: 0x00000001
  [ 4.553229] ufshcd-qcom 1d84000.ufs: ufshcd_check_errors: saved_err 0x40 saved_uic_err 0x0
  [ 4.553311] host_regs: 00000000: d5c7033f 20e0071f 00000400 00000000
  [ 4.553312] host_regs: 00000010: 01000000 00010217 00000c96 00000000
  [ 4.553314] host_regs: 00000020: 00000440 00170ef5 00000000 00000000
  [ 4.553316] host_regs: 00000030: 0000010f 00000001 00000000 00000000
  [ 4.553317] host_regs: 00000040: 00000000 00000000 00000000 00000000
  [ 4.553319] host_regs: 00000050: fffdf000 0000000f 00000000 00000000
  [ 4.553320] host_regs: 00000060: 00000001 80000000 00000000 00000000
  [ 4.553322] host_regs: 00000070: fffde000 0000000f 00000000 00000000
  [ 4.553323] host_regs: 00000080: 00000001 00000000 00000000 00000000
  [ 4.553325] host_regs: 00000090: 00000002 d0020000 00000000 01930200

Fixes: 3c7ac40d7322 ("scsi: ufs: core: Delegate the interrupt service routine to a threaded IRQ handler") Co-developed-by: Palash Kambar <quic_pkambar@quicinc.com> Signed-off-by: Palash Kambar <quic_pkambar@quicinc.com> Signed-off-by: Nitin Rawat <quic_nitirawa@quicinc.com> Link: https://lore.kernel.org/r/20250728225711.29273-1-quic_nitirawa@quicinc.com Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Peter Wang <peter.wang@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
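A simplified sketch of the idea (register name and accessors as used by ufshcd; the actual fix is more involved): acknowledge the interrupt status even when MCQ/ESI completes requests elsewhere, so stale bits such as the auto-hibern8 error do not persist.

  u32 intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);

  if (intr_status)
          ufshcd_writel(hba, intr_status, REG_INTERRUPT_STATUS);  /* clear IS */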
2025-08-06 | Merge tag 'perf-fixes-27504' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git | Linus Torvalds | 4 | -9/+266
Pull perf fixes from Thomas Gleixner:
 "Perf fixes for perf_mmap() reference counting to prevent potential reference count leaks which are caused by:
  - VMA splits, which change the offset or size of a mapping, which causes perf_mmap_close() to ignore the unmap or unmap the wrong buffer.
  - Several internal issues of perf_mmap(), which can cause reference count leaks in the perf mmap, corrupt accounting or cause leaks in perf drivers.
  The main fix is to prevent VMA splits by implementing the [may_]split() callback for vm operations. The other issues are addressed by rearranging code, early returns on failure and invocation of cleanups. Also provide a selftest to validate the fixes.
  The reference counting should be converted to refcount_t, but that requires larger refactoring of the code and will be done once these fixes are upstream"
* tag 'perf-fixes-27504' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git:
  selftests/perf_events: Add a mmap() correctness test
  perf/core: Prevent VMA split of buffer mappings
  perf/core: Handle buffer mapping fail correctly in perf_mmap()
  perf/core: Exit early on perf_mmap() fail
  perf/core: Don't leak AUX buffer refcount on allocation failure
  perf/core: Preserve AUX buffer allocation failure result
2025-08-06 | net: usbnet: Fix the wrong netif_carrier_on() call | Ammar Faizi | 1 | -3/+3
The commit referenced in the Fixes tag causes usbnet to malfunction (identified via git bisect). Post-commit, my external RJ45 LAN cable fails to connect. Linus also reported the same issue after pulling that commit. The code has a logic error: netif_carrier_on() is only called when the link is already on. Fix this by moving the netif_carrier_on() call outside the if-statement entirely. This ensures it is always called when EVENT_LINK_CARRIER_ON is set and properly clears it regardless of the link state. Cc: stable@vger.kernel.org Cc: Armando Budianto <sprite@gnuweeb.org> Reviewed-by: Simon Horman <horms@kernel.org> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/all/CAHk-=wjqL4uF0MG_c8+xHX1Vv8==sPYQrtzbdA3kzi96284nuQ@mail.gmail.com Closes: https://lore.kernel.org/netdev/CAHk-=wjKh8X4PT_mU1kD4GQrbjivMfPn-_hXa6han_BTDcXddw@mail.gmail.com Closes: https://lore.kernel.org/netdev/0752dee6-43d6-4e1f-81d2-4248142cccd2@gnuweeb.org Fixes: 0d9cfc9b8cb1 ("net: usbnet: Avoid potential RCU stall on LINK_CHANGE event") Signed-off-by: Ammar Faizi <ammarfaizi2@gnuweeb.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
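An illustrative before/after shape of the change (not the exact usbnet code; the flag name comes from the commit referenced in the Fixes tag):

  /* before: the deferred carrier event was only honoured when the link
   * was already reported up */
  if (netif_carrier_ok(dev->net) &&
      test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
          netif_carrier_on(dev->net);

  /* after: always act on (and clear) the event, whatever the link state */
  if (test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
          netif_carrier_on(dev->net);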
2025-08-06 | MAINTAINERS: hand over Kbuild maintenance | Masahiro Yamada | 2 | -8/+11
I'm stepping down as the maintainer of Kbuild/Kconfig. It was enjoyable to refactor and improve the kernel build system, but due to personal reasons, I believe it's difficult for me to continue in this role any further. I discussed this off-list with Nathan and Nicolas, and they have kindly agreed to take over the maintenance of Kbuild with Odd Fixes. I'm grateful to them for stepping in. As for Kconfig, there are currently no designated reviewers, so the maintainer position will remain vacant for now. I hope someone will step up to take on the role. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Nicolas Schier <nicolas@fjasle.eu>
2025-08-06 | kheaders: make it possible to override TAR | Michał Górny | 2 | -4/+5
Commit 86cdd2fdc4e3 ("kheaders: make headers archive reproducible") introduced a number of options specific to GNU tar to the `tar` invocation in `gen_kheaders.sh` script. This causes the script to fail to work on systems where `tar` is not GNU tar. This can occur e.g. on recent Gentoo Linux installations that support using bsdtar from libarchive instead. Add a `TAR` make variable to make it possible to override the tar executable used, e.g. by specifying: make TAR=gtar Link: https://bugs.gentoo.org/884061 Reported-by: Sam James <sam@gentoo.org> Tested-by: Sam James <sam@gentoo.org> Co-developed-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michał Górny <mgorny@gentoo.org> Signed-off-by: Sam James <sam@gentoo.org> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-08-06 | kbuild: userprogs: use correct linker when mixing clang and GNU ld | Thomas Weißschuh | 1 | -1/+1
The userprogs infrastructure does not expect clang being used with GNU ld and in that case uses /usr/bin/ld for linking, not the configured $(LD). This fallback is problematic as it will break when cross-compiling. Mixing clang and GNU ld is used for example when building for SPARC64, as ld.lld is not sufficient; see Documentation/kbuild/llvm.rst. Relax the check around --ld-path so it gets used for all linkers. Fixes: dfc1b168a8c4 ("kbuild: userprogs: use correct lld when linking through clang") Cc: stable@vger.kernel.org Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-08-06 | kconfig: lxdialog: replace strcpy() with strncpy() in inputbox.c | Suchit Karunakaran | 1 | -2/+4
strcpy() performs no bounds checking and can lead to buffer overflows if the input string exceeds the destination buffer size. This patch replaces it with strncpy(), and null terminates the input string. Signed-off-by: Suchit Karunakaran <suchitkarunakaran@gmail.com> Reviewed-by: Nicolas Schier <nicolas.schier@linux.dev> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
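A generic illustration of the bounded-copy-plus-termination pattern (buffer and source names are placeholders, not necessarily those in inputbox.c):

  char instr[MAX_LEN + 1];

  strncpy(instr, init, sizeof(instr) - 1);   /* never writes past the buffer */
  instr[sizeof(instr) - 1] = '\0';           /* strncpy does not guarantee NUL termination */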
2025-08-06 | kconfig: lxdialog: replace strcpy with snprintf in print_autowrap | Suchit Karunakaran | 1 | -2/+1
strcpy() does not perform bounds checking and can lead to buffer overflows if the source string exceeds the destination buffer size. In print_autowrap(), replace strcpy() with snprintf() to safely copy the prompt string into the fixed-size tempstr buffer. Signed-off-by: Suchit Karunakaran <suchitkarunakaran@gmail.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
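The equivalent single-call form with snprintf(), as described above (buffer size assumed for illustration):

  char tempstr[MAX_LEN + 1];

  snprintf(tempstr, sizeof(tempstr), "%s", prompt);  /* bounded and always NUL-terminated */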
2025-08-06 | net: ti: icssg-prueth: Fix skb handling for XDP_PASS | Meghana Malladi | 1 | -7/+8
emac_rx_packet() is a common function for handling traffic for both xdp and non-xdp use cases. Use common logic for handling skb with or without xdp to prevent any incorrect packet processing. This patch fixes ping working with XDP_PASS for icssg driver. Fixes: 62aa3246f4623 ("net: ti: icssg-prueth: Add XDP support") Signed-off-by: Meghana Malladi <m-malladi@ti.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250803180216.3569139-1-m-malladi@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | net: Update threaded state in napi config in netif_set_threaded | Samiullah Khawaja | 3 | -17/+121
Commit 2677010e7793 ("Add support to set NAPI threaded for individual NAPI") added support to enable/disable threaded napi using netlink. This also extended the napi config save/restore functionality to set the napi threaded state. This breaks netdev reset for drivers that use napi threaded at device level and also use napi config save/restore on napi_disable/napi_enable. Basically, on a netdev with napi threaded enabled at device level, a napi_enable call will get stuck trying to stop the napi kthread. This is because napi->config->threaded is set to disabled when threaded is enabled at device level. The issue can be reproduced on a virtio-net device using qemu. To reproduce the issue, run the following:

  echo 1 > /sys/class/net/threaded
  ethtool -L eth0 combined 1

Update the threaded state in napi config in netif_set_threaded and add a new test that verifies this scenario. Tested on qemu with virtio-net:

  NETIF=eth0 ./tools/testing/selftests/drivers/net/napi_threaded.py
  TAP version 13
  1..2
  ok 1 napi_threaded.change_num_queues
  ok 2 napi_threaded.enable_dev_threaded_disable_napi_threaded
  # Totals: pass:2 fail:0 xfail:0 xpass:0 skip:0 error:0

Fixes: 2677010e7793 ("Add support to set NAPI threaded for individual NAPI") Signed-off-by: Samiullah Khawaja <skhawaja@google.com> Link: https://patch.msgid.link/20250804164457.2494390-1-skhawaja@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | NFS/localio: nfs_uuid_put() fix the wake up after unlinking the file | Trond Myklebust | 1 | -2/+1
Use store_release_wake_up() instead of wake_up_var_locked(), because the waiter cannot retake the nfs_uuid->lock. Acked-by: Mike Snitzer <snitzer@kernel.org> Tested-by: Mike Snitzer <snitzer@kernel.org> Suggested-by: NeilBrown <neil@brown.name> Link: https://lore.kernel.org/all/175262948827.2234665.1891349021754495573@noble.neil.brown.name/ Fixes: 21fb44034695 ("nfs_localio: protect race between nfs_uuid_put() and nfs_close_local_fh()") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2025-08-06 | NFS/localio: nfs_uuid_put() fix races with nfs_open/close_local_fh() | Trond Myklebust | 1 | -8/+15
In order for the wait in nfs_uuid_put() to be safe, it is necessary to ensure that nfs_uuid_add_file() doesn't add a new entry once the nfs_uuid->net has been NULLed out. Also fix up the wake_up_var_locked() / wait_var_event_spinlock() to both use the nfs_uuid address, since nfl, and &nfl->uuid could be used elsewhere. Acked-by: Mike Snitzer <snitzer@kernel.org> Tested-by: Mike Snitzer <snitzer@kernel.org> Link: https://lore.kernel.org/all/175262893035.2234665.1735173020338594784@noble.neil.brown.name/ Fixes: 21fb44034695 ("nfs_localio: protect race between nfs_uuid_put() and nfs_close_local_fh()") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2025-08-06 | NFS/localio: nfs_close_local_fh() fix check for file closed | Trond Myklebust | 1 | -1/+1
If the struct nfs_file_localio is closed, its list entry will be empty, but the nfs_uuid->files list might still contain other entries. Acked-by: Mike Snitzer <snitzer@kernel.org> Tested-by: Mike Snitzer <snitzer@kernel.org> Reviewed-by: NeilBrown <neil@brown.name> Fixes: 21fb44034695 ("nfs_localio: protect race between nfs_uuid_put() and nfs_close_local_fh()") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2025-08-06 | selftests: netdevsim: Xfail nexthop test on slow machines | Ido Schimmel | 1 | -1/+1
A lot of test cases in the file are related to the idle and unbalanced timers of resilient nexthop groups and these tests are reported to be flaky on slow machines running debug kernels. Rather than marking a lot of individual tests with xfail_on_slow(), simply mark all the tests. Note that the test is stable on non-debug machines and that with debug kernels we are mainly interested in the output of various sanitizers in order to determine pass / fail.

Before:

  # make -C tools/testing/selftests KSFT_MACHINE_SLOW=yes \
      TARGETS=drivers/net/netdevsim TEST_PROGS=nexthop.sh \
      TEST_GEN_PROGS="" run_tests
  [...]
  # TEST: Bucket migration after idle timer (with delete)  [FAIL]
  # Group expected to still be unbalanced
  [...]
  not ok 1 selftests: drivers/net/netdevsim: nexthop.sh # exit=1

After:

  # make -C tools/testing/selftests KSFT_MACHINE_SLOW=yes \
      TARGETS=drivers/net/netdevsim TEST_PROGS=nexthop.sh \
      TEST_GEN_PROGS="" run_tests
  [...]
  # TEST: Bucket migration after idle timer (with delete)  [XFAIL]
  # Group expected to still be unbalanced
  [...]
  ok 1 selftests: drivers/net/netdevsim: nexthop.sh

Reported-by: Jakub Kicinski <kuba@kernel.org> Closes: https://lore.kernel.org/netdev/20250729160609.02e0f157@kernel.org/ Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Link: https://patch.msgid.link/20250804114320.193203-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | Merge branch 'eth-fbnic-fix-drop-stats-support' | Jakub Kicinski | 1 | -4/+6
Mohsin Bashir says:

====================
eth: fbnic: Fix drop stats support

Fix hardware drop stats support on the TX path of fbnic by addressing two issues: ensure that tx_dropped stats are correctly copied to the rtnl_link_stats64 struct, and protect the copying of drop stats from fbd->hw_stats to the local variable with the hw_stats_lock to ensure consistency.
====================

Link: https://patch.msgid.link/20250802024636.679317-1-mohsin.bashr@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | eth: fbnic: Lock the tx_dropped update | Mohsin Bashir | 1 | -0/+2
Wrap copying of drop stats on TX path from fbd->hw_stats by the hw_stats_lock. Currently, it is being performed outside the lock and another thread accessing fbd->hw_stats can lead to inconsistencies. Fixes: 5f8bd2ce8269 ("eth: fbnic: add support for TMI stats") Signed-off-by: Mohsin Bashir <mohsin.bashr@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250802024636.679317-3-mohsin.bashr@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
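A sketch of the locking pattern described (the lock name comes from the text above; the exact counter field path is assumed for illustration):

  u64 tx_dropped;

  spin_lock(&fbd->hw_stats_lock);
  tx_dropped = fbd->hw_stats.tmi.drop_frames;   /* field path illustrative only */
  spin_unlock(&fbd->hw_stats_lock);
  stats64->tx_dropped = tx_dropped;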
2025-08-06 | eth: fbnic: Fix tx_dropped reporting | Mohsin Bashir | 1 | -4/+4
Correctly copy the tx_dropped stats from the fbd->hw_stats to the rtnl_link_stats64 struct. Fixes: 5f8bd2ce8269 ("eth: fbnic: add support for TMI stats") Signed-off-by: Mohsin Bashir <mohsin.bashr@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250802024636.679317-2-mohsin.bashr@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | eth: fbnic: remove the debugging trick of super high page bias | Jakub Kicinski | 2 | -6/+4
Alex added a page bias of LONG_MAX, which is admittedly quite a clever way of catching overflows of the pp ref count. The page pool code was "optimized" to leave the ref at 1 for freed pages, so it can't catch basic bugs by itself any more. (Something we should probably address under DEBUG_NET...) Unfortunately for fbnic, since commit f7dc3248dcfb ("skbuff: Optimization of SKB coalescing for page pool") the core _may_ actually take two extra pp refcounts; if one of them is returned before the driver gives up the bias, the ret < 0 check in page_pool_unref_netmem() will trigger. While at it, add a FBNIC_ prefix to the name of the driver constant. Fixes: 0cb4c0a13723 ("eth: fbnic: Implement Rx queue alloc/start/stop/free") Link: https://patch.msgid.link/20250801170754.2439577-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | net: ftgmac100: fix potential NULL pointer access in ftgmac100_phy_disconnect | Heiner Kallweit | 1 | -3/+4
After the call to phy_disconnect() netdev->phydev is reset to NULL. So fixed_phy_unregister() would be called with a NULL pointer as argument. Therefore cache the phy_device before this call. Fixes: e24a6c874601 ("net: ftgmac100: Get link speed and duplex for NC-SI") Cc: stable@vger.kernel.org Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Dawid Osuchowski <dawid.osuchowski@linux.intel.com> Link: https://patch.msgid.link/2b80a77a-06db-4dd7-85dc-3a8e0de55a1d@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
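The shape of the fix, in a hedged sketch (the real function carries more context than shown):

  struct phy_device *phydev = netdev->phydev;   /* cache before it is cleared */

  phy_disconnect(netdev->phydev);               /* netdev->phydev is NULL afterwards */
  if (phydev)
          fixed_phy_unregister(phydev);         /* safe: uses the cached pointer */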
2025-08-06 | dt-bindings: net: Replace bouncing Alexandru Tachici emails | Krzysztof Kozlowski | 2 | -2/+2
Emails to alexandru.tachici@analog.com bounce permanently: Remote Server returned '550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient not found by SMTP address lookup' so replace him with Marcelo Schmitt from Analog. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Acked-by: Rob Herring (Arm) <robh@kernel.org> Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Marcelo Schmitt <marcelo.schmitt@analog.com> Link: https://patch.msgid.link/20250724113758.61874-2-krzysztof.kozlowski@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-06 | vfio/type1: conditional rescheduling while pinning | Keith Busch | 1 | -0/+7
A large DMA mapping request can loop through dma address pinning for many pages. In cases where THP cannot be used, the repeated vmf_insert_pfn can be costly, so let the task reschedule as needed to prevent CPU stalls. Failure to do so has potentially harmful side effects, like increased memory pressure as unrelated rcu tasks are unable to run their reclaim callbacks, resulting in OOM conditions.

  rcu: INFO: rcu_sched self-detected stall on CPU
  rcu: 36-....: (20999 ticks this GP) idle=b01c/1/0x4000000000000000 softirq=35839/35839 fqs=3538
  rcu: hardirqs softirqs csw/system
  rcu: number: 0 107 0
  rcu: cputime: 50 0 10446 ==> 10556(ms)
  rcu: (t=21075 jiffies g=377761 q=204059 ncpus=384)
  ...
  <TASK>
  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
  ? walk_system_ram_range+0x63/0x120
  ? walk_system_ram_range+0x46/0x120
  ? pgprot_writethrough+0x20/0x20
  lookup_memtype+0x67/0xf0
  track_pfn_insert+0x20/0x40
  vmf_insert_pfn_prot+0x88/0x140
  vfio_pci_mmap_huge_fault+0xf9/0x1b0 [vfio_pci_core]
  __do_fault+0x28/0x1b0
  handle_mm_fault+0xef1/0x2560
  fixup_user_fault+0xf5/0x270
  vaddr_get_pfns+0x169/0x2f0 [vfio_iommu_type1]
  vfio_pin_pages_remote+0x162/0x8e0 [vfio_iommu_type1]
  vfio_iommu_type1_ioctl+0x1121/0x1810 [vfio_iommu_type1]
  ? futex_wake+0x1c1/0x260
  x64_sys_call+0x234/0x17a0
  do_syscall_64+0x63/0x130
  ? exc_page_fault+0x63/0x130
  entry_SYSCALL_64_after_hwframe+0x4b/0x53

Signed-off-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Paul E. McKenney <paulmck@kernel.org> Link: https://lore.kernel.org/r/20250715184622.3561598-1-kbusch@meta.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
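A minimal sketch of the pattern (the loop structure is simplified from the description; the helper name is hypothetical, standing in for the real pinning work):

  long pinned = 0;

  while (pinned < npages) {
          if (pin_one_page(vaddr + pinned * PAGE_SIZE))   /* hypothetical helper */
                  break;
          pinned++;
          cond_resched();   /* yield regularly so RCU and the scheduler aren't starved */
  }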
2025-08-06 | vfio/qat: add support for intel QAT 6xxx virtual functions | Małgorzata Mielnik | 1 | -1/+3
Extend the qat_vfio_pci variant driver to support QAT 6xxx Virtual Functions (VFs). Add the relevant QAT 6xxx VF device IDs to the driver's probe table, enabling proper detection and initialization of these devices. Update the module description to reflect that the driver now supports all QAT generations. Signed-off-by: Małgorzata Mielnik <malgorzata.mielnik@intel.com> Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Link: https://lore.kernel.org/r/20250715081150.1244466-1-suman.kumar.chakraborty@intel.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2025-08-06 | vfio/qat: Remove myself from VFIO QAT PCI driver maintainers | Xin Zeng | 1 | -1/+0
Remove myself from VFIO QAT PCI driver maintainers as I'm leaving Intel. Signed-off-by: Xin Zeng <xin.zeng@intel.com> Link: https://lore.kernel.org/r/20250715001357.33725-1-xin.zeng@intel.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2025-08-06 | vfio/pci: Do vf_token checks for VFIO_DEVICE_BIND_IOMMUFD | Jason Gunthorpe | 12 | -12/+76
This was missed during the initial implementation. The VFIO PCI encodes the vf_token inside the device name when opening the device from the group FD, something like: "0000:04:10.0 vf_token=bd8d9d2b-5a5f-4f5a-a211-f591514ba1f3" This is used to control access to a VF unless there is co-ordination with the owner of the PF. Since we no longer have a device name in the cdev path, pass the token directly through VFIO_DEVICE_BIND_IOMMUFD using an optional field indicated by VFIO_DEVICE_BIND_FLAG_TOKEN. Fixes: 5fcc26969a16 ("vfio: Add VFIO_DEVICE_BIND_IOMMUFD") Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Yi Liu <yi.l.liu@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/0-v3-bdd8716e85fe+3978a-vfio_token_jgg@nvidia.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2025-08-05 | Input: add keycode for performance mode key | Marcos Alano | 1 | -0/+3
Alienware calls this key "Performance Boost". Dell calls it "G-Mode". The goal is to have a specific keycode to detect when this key is pressed, so userspace can act upon it and do what it has to do, usually starting the power profile for performance. Signed-off-by: Marcos Alano <marcoshalano@gmail.com> Link: https://lore.kernel.org/r/20250509193708.2190586-1-marcoshalano@gmail.com Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2025-08-05 | fs/proc/task_mmu: hold PTL in pagemap_hugetlb_range and gather_hugetlb_stats | Jinjiang Tu | 1 | -3/+11
Hold PTL in pagemap_hugetlb_range() and gather_hugetlb_stats() to avoid operating on stale page, as pagemap_pmd_range() and gather_pte_stats() have done. Link: https://lkml.kernel.org/r/20250724090958.455887-3-tujinjiang@huawei.com Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Andrei Vagin <avagin@gmail.com> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Brahmajit Das <brahmajit.xyz@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David Rientjes <rientjes@google.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Joern Engel <joern@logfs.org> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-08-05 | mm/smaps: fix race between smaps_hugetlb_range and migration | Jinjiang Tu | 1 | -1/+5
smaps_hugetlb_range() handles the pte without holding the ptl, and may run concurrently with migration, leading to a BUG_ON in pfn_swap_entry_to_page(). The race is as follows.

  smaps_hugetlb_range                  migrate_pages
    huge_ptep_get
                                         remove_migration_ptes
                                         folio_unlock
    pfn_swap_entry_folio
      BUG_ON

To fix it, hold the ptl lock in smaps_hugetlb_range(). Link: https://lkml.kernel.org/r/20250724090958.455887-1-tujinjiang@huawei.com Link: https://lkml.kernel.org/r/20250724090958.455887-2-tujinjiang@huawei.com Fixes: 25ee01a2fca0 ("mm: hugetlb: proc: add hugetlb-related fields to /proc/PID/smaps") Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Andrei Vagin <avagin@gmail.com> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Brahmajit Das <brahmajit.xyz@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David Rientjes <rientjes@google.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Joern Engel <joern@logfs.org> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Thiago Jung Bauermann <thiago.bauermann@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
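A hedged sketch of taking the ptl around the hugetlb pte read (argument plumbing simplified; the real walker code differs):

  spinlock_t *ptl = huge_pte_lock(hstate_vma(vma), walk->mm, ptep);
  pte_t entry = huge_ptep_get(walk->mm, addr, ptep);

  /* ... inspect the entry (e.g. pfn_swap_entry_to_page()) while migration
   * cannot rewrite it underneath us ... */
  spin_unlock(ptl);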
2025-08-05 | mm: fix the race between collapse and PT_RECLAIM under per-vma lock | Barry Song | 1 | -1/+1
The check_pmd_still_valid() call during collapse is currently only protected by the mmap_lock in write mode, which was sufficient when pt_reclaim always ran under mmap_lock in read mode. However, since madvise_dontneed can now execute under a per-VMA lock, this assumption is no longer valid. As a result, a race condition can occur between collapse and PT_RECLAIM, potentially leading to a kernel panic. [ 38.151897] Oops: general protection fault, probably for non-canonical address 0xdffffc0000000003: 0000 [#1] SMP KASI [ 38.153519] KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f] [ 38.154605] CPU: 0 UID: 0 PID: 721 Comm: repro Not tainted 6.16.0-next-20250801-next-2025080 #1 PREEMPT(voluntary) [ 38.155929] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org4 [ 38.157418] RIP: 0010:kasan_byte_accessible+0x15/0x30 [ 38.158125] Code: 03 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 0f 1f 00 48 b8 00 00 00 00 00 fc0 [ 38.160461] RSP: 0018:ffff88800feef678 EFLAGS: 00010286 [ 38.161220] RAX: dffffc0000000000 RBX: 0000000000000001 RCX: 1ffffffff0dde60c [ 38.162232] RDX: 0000000000000000 RSI: ffffffff85da1e18 RDI: dffffc0000000003 [ 38.163176] RBP: ffff88800feef698 R08: 0000000000000001 R09: 0000000000000000 [ 38.164195] R10: 0000000000000000 R11: ffff888016a8ba58 R12: 0000000000000018 [ 38.165189] R13: 0000000000000018 R14: ffffffff85da1e18 R15: 0000000000000000 [ 38.166100] FS: 0000000000000000(0000) GS:ffff8880e3b40000(0000) knlGS:0000000000000000 [ 38.167137] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 38.167891] CR2: 00007f97fadfe504 CR3: 0000000007088005 CR4: 0000000000770ef0 [ 38.168812] PKRU: 55555554 [ 38.169275] Call Trace: [ 38.169647] <TASK> [ 38.169975] ? __kasan_check_byte+0x19/0x50 [ 38.170581] lock_acquire+0xea/0x310 [ 38.171083] ? rcu_is_watching+0x19/0xc0 [ 38.171615] ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20 [ 38.172343] ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30 [ 38.173130] _raw_spin_lock+0x38/0x50 [ 38.173707] ? __pte_offset_map_lock+0x1a2/0x3c0 [ 38.174390] __pte_offset_map_lock+0x1a2/0x3c0 [ 38.174987] ? __pfx___pte_offset_map_lock+0x10/0x10 [ 38.175724] ? __pfx_pud_val+0x10/0x10 [ 38.176308] ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30 [ 38.177183] unmap_page_range+0xb60/0x43e0 [ 38.177824] ? __pfx_unmap_page_range+0x10/0x10 [ 38.178485] ? mas_next_slot+0x133a/0x1a50 [ 38.179079] unmap_single_vma.constprop.0+0x15b/0x250 [ 38.179830] unmap_vmas+0x1fa/0x460 [ 38.180373] ? __pfx_unmap_vmas+0x10/0x10 [ 38.180994] ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20 [ 38.181877] exit_mmap+0x1a2/0xb40 [ 38.182396] ? lock_release+0x14f/0x2c0 [ 38.182929] ? __pfx_exit_mmap+0x10/0x10 [ 38.183474] ? __pfx___mutex_unlock_slowpath+0x10/0x10 [ 38.184188] ? mutex_unlock+0x16/0x20 [ 38.184704] mmput+0x132/0x370 [ 38.185208] do_exit+0x7e7/0x28c0 [ 38.185682] ? __this_cpu_preempt_check+0x21/0x30 [ 38.186328] ? do_group_exit+0x1d8/0x2c0 [ 38.186873] ? __pfx_do_exit+0x10/0x10 [ 38.187401] ? __this_cpu_preempt_check+0x21/0x30 [ 38.188036] ? _raw_spin_unlock_irq+0x2c/0x60 [ 38.188634] ? lockdep_hardirqs_on+0x89/0x110 [ 38.189313] do_group_exit+0xe4/0x2c0 [ 38.189831] __x64_sys_exit_group+0x4d/0x60 [ 38.190413] x64_sys_call+0x2174/0x2180 [ 38.190935] do_syscall_64+0x6d/0x2e0 [ 38.191449] entry_SYSCALL_64_after_hwframe+0x76/0x7e This patch moves the vma_start_write() call to precede check_pmd_still_valid(), ensuring that the check is also properly protected by the per-VMA lock. 
Link: https://lkml.kernel.org/r/20250805035447.7958-1-21cnbao@gmail.com Fixes: a6fde7add78d ("mm: use per_vma lock for MADV_DONTNEED") Signed-off-by: Barry Song <v-songbaohua@oppo.com> Tested-by: "Lai, Yi" <yi1.lai@linux.intel.com> Reported-by: "Lai, Yi" <yi1.lai@linux.intel.com> Closes: https://lore.kernel.org/all/aJAFrYfyzGpbm+0m@ly-workstation/ Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: David Hildenbrand <david@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Lokesh Gidra <lokeshgidra@google.com> Cc: Tangquan Zheng <zhengtangquan@oppo.com> Cc: Lance Yang <ioworker0@gmail.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Nico Pache <npache@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Dev Jain <dev.jain@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
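A hedged sketch of the reordering in the collapse path (names taken from the description above; surrounding error handling omitted):

  vma_start_write(vma);                             /* take the per-VMA lock first */
  result = check_pmd_still_valid(mm, address, pmd); /* now the check can't race with PT_RECLAIM */
  if (result != SCAN_SUCCEED)
          goto out_up_write;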
2025-08-05 | mm/kmemleak: avoid soft lockup in __kmemleak_do_cleanup() | Waiman Long | 1 | -0/+5
A soft lockup warning was observed on a relatively small x86-64 system with 16 GB of memory when running a debug kernel with kmemleak enabled.

  watchdog: BUG: soft lockup - CPU#8 stuck for 33s! [kworker/8:1:134]

The test system was running a workload with hot unplug happening in parallel. Then kmemleak decided to disable itself due to its inability to allocate more kmemleak objects. The debug kernel has its CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE set to 40,000. The soft lockup happened in kmemleak_do_cleanup() when the existing kmemleak objects were being removed and deleted one-by-one in a loop via a workqueue. In this particular case, there are at least 40,000 objects that need to be processed and given the slowness of a debug kernel and the fact that a raw_spinlock has to be acquired and released in __delete_object(), it could take a while to properly handle all these objects. As kmemleak has been disabled in this case, the object removal and deletion process can be further optimized as locking isn't really needed. However, it is probably not worth the effort to optimize for such an edge case that should rarely happen. So the simple solution is to call cond_resched() at periodic intervals in the iteration loop to avoid soft lockups. Link: https://lkml.kernel.org/r/20250728190248.605750-1-longman@redhat.com Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
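A sketch of the teardown loop with periodic rescheduling (the per-object helper is a hypothetical stand-in for kmemleak's internals; the interval is illustrative):

  unsigned int cnt = 0;

  list_for_each_entry_safe(object, tmp, &object_list, object_list) {
          delete_object(object);          /* hypothetical stand-in for the per-object teardown */
          if (!(++cnt & 0xfff))
                  cond_resched();         /* yield every so often; the loop can run 40k+ times */
  }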
2025-08-05 | MAINTAINERS: add Masami as a reviewer of hung task detector | Masami Hiramatsu (Google) | 1 | -0/+1
Since I'm actively working on the hung task blocker detector, add myself as a reviewer of the HUNG TASK DETECTOR feature. Link: https://lkml.kernel.org/r/175388550841.627474.3260499035226455392.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Lance Yang <lance.yang@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-08-05 | mm/kmemleak: avoid deadlock by moving pr_warn() outside kmemleak_lock | Breno Leitao | 1 | -1/+4
When netpoll is enabled, calling pr_warn_once() while holding kmemleak_lock in mem_pool_alloc() can cause a deadlock due to lock inversion with the netconsole subsystem. This occurs because pr_warn_once() may trigger netpoll, which eventually leads to __alloc_skb() and back into kmemleak code, attempting to reacquire kmemleak_lock. This is the path for the deadlock. mem_pool_alloc() -> raw_spin_lock_irqsave(&kmemleak_lock, flags); -> pr_warn_once() -> netconsole subsystem -> netpoll -> __alloc_skb -> __create_object -> raw_spin_lock_irqsave(&kmemleak_lock, flags); Fix this by setting a flag and issuing the pr_warn_once() after kmemleak_lock is released. Link: https://lkml.kernel.org/r/20250731-kmemleak_lock-v1-1-728fd470198f@debian.org Fixes: c5665868183f ("mm: kmemleak: use the memory pool for early allocations") Signed-off-by: Breno Leitao <leitao@debian.org> Reported-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
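A sketch of the deferral pattern described (the condition and message text are simplified, not the exact kmemleak code):

  bool warn_pool_empty = false;

  raw_spin_lock_irqsave(&kmemleak_lock, flags);
  if (!object)                      /* pool exhausted */
          warn_pool_empty = true;   /* don't call printk paths while holding kmemleak_lock */
  raw_spin_unlock_irqrestore(&kmemleak_lock, flags);

  if (warn_pool_empty)
          pr_warn_once("kmemleak: memory pool exhausted\n");   /* message illustrative */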
2025-08-05 | kasan/test: fix protection against compiler elision | Jann Horn | 1 | -1/+1
The kunit test is using assignments to "static volatile void *kasan_ptr_result" to prevent elision of memory loads, but that's not working: In this variable definition, the "volatile" applies to the "void", not to the pointer. To make "volatile" apply to the pointer as intended, it must follow after the "*". This makes the kasan_memchr test pass again on my system. The kasan_strings test is still failing because all the definitions of load_unaligned_zeropad() are lacking explicit instrumentation hooks and ASAN does not instrument asm() memory operands. Link: https://lkml.kernel.org/r/20250728-kasan-kunit-fix-volatile-v1-1-e7157c9af82d@google.com Fixes: 5f1c8108e7ad ("mm:kasan: fix sparse warnings: Should it be static?") Signed-off-by: Jann Horn <jannh@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitriy Vyukov <dvyukov@google.com> Cc: Jann Horn <jannh@google.com> Cc: Nihar Chaithanya <niharchaithanya@gmail.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
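The two declarations at issue, side by side (before and after the fix):

  static volatile void *kasan_ptr_result;   /* "volatile" qualifies the pointed-to object, not the pointer */
  static void *volatile kasan_ptr_result;   /* "volatile" qualifies the pointer itself, so loads/stores of
                                             * the variable cannot be elided by the compiler */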
2025-08-05 | Merge tag 'drm-intel-next-fixes-2025-08-05' of https://gitlab.freedesktop.org/drm/i915/kernel into drm-next | Dave Airlie | 1 | -6/+15
drm/i915 fixes for v6.17-rc1:
- Fixes around DP LFPS (Low-Frequency Periodic Signaling)
Signed-off-by: Dave Airlie <airlied@redhat.com> From: Jani Nikula <jani.nikula@intel.com> Link: https://lore.kernel.org/r/e1147bede8f219682419d198022cfe8d9d4edc28@intel.com
2025-08-05 | selftests/perf_events: Add a mmap() correctness test | Lorenzo Stoakes | 3 | -1/+238
Exercise various mmap(), munmap() and mremap() invocations, which might cause a perf buffer mapping to be split or truncated. To avoid hard coding the perf event and having dependencies on architectures and configuration options, scan through event types in sysfs and try to open them. On success, try to mmap() and if that succeeds try to mmap() the AUX buffer. In case that no AUX buffer supporting event is found, only test the base buffer mapping. If no mappable event is found or permissions are not sufficient, skip the tests. Reserve a PROT_NONE region for both rb and aux tests to allow testing the case where mremap unmaps beyond the end of a mapped VMA to prevent it from unmapping unrelated mappings. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Co-developed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
2025-08-05 | perf/core: Prevent VMA split of buffer mappings | Thomas Gleixner | 1 | -0/+10
The perf mmap code is careful about mmap()'ing the user page with the ringbuffer and additionally the auxiliary buffer, when the event supports it. Once the first mapping is established, subsequent mappings have to use the same offset and the same size in both cases. The reference counting for the ringbuffer and the auxiliary buffer depends on this being correct. However, perf does not prevent a related mapping from being split via mmap(2), munmap(2) or mremap(2). A split of a VMA results in perf_mmap_open() calls, which take reference counts, but then the subsequent perf_mmap_close() calls are no longer fulfilling the offset and size checks. This leads to reference count leaks. As perf already has the requirement for subsequent mappings to match the initial mapping, the obvious consequence is that VMA splits, caused by resizing of a mapping or partial unmapping, have to be prevented. Implement the vm_operations_struct::may_split() callback and return unconditionally -EINVAL. That ensures that the mapping offsets and sizes cannot be changed after the fact. Remapping to a different fixed address with the same size is still possible as it takes the references for the new mapping and drops those of the old mapping. Fixes: 45bfb2e50471 ("perf: Add AUX area to ring buffer for raw data streams") Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-27504 Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: stable@vger.kernel.org
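A hedged sketch of the callback described (vm_ops wiring simplified; the real perf code carries more operations):

  static int perf_mmap_may_split(struct vm_area_struct *vma, unsigned long addr)
  {
          return -EINVAL;   /* any split would desync the offset/size checks in perf_mmap_close() */
  }

  static const struct vm_operations_struct perf_mmap_vmops = {
          .may_split = perf_mmap_may_split,
  };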
2025-08-05 | perf/core: Handle buffer mapping fail correctly in perf_mmap() | Thomas Gleixner | 1 | -2/+10
After successful allocation of a buffer or a successful attachment to an existing buffer perf_mmap() tries to map the buffer read only into the page table. If that fails, the already set up page table entries are zapped, but the other perf specific side effects of that failure are not handled. The calling code just cleans up the VMA and does not invoke perf_mmap_close(). This leaks reference counts, corrupts user->vm accounting and also results in an unbalanced invocation of event::event_mapped(). Cure this by moving the event::event_mapped() invocation before the map_range() call so that on map_range() failure perf_mmap_close() can be invoked without causing an unbalanced event::event_unmapped() call. perf_mmap_close() undoes the reference counts and eventually frees buffers. Fixes: b709eb872e19 ("perf: map pages in advance") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: stable@vger.kernel.org
2025-08-05 | perf/core: Exit early on perf_mmap() fail | Thomas Gleixner | 1 | -2/+4
When perf_mmap() fails to allocate a buffer, it still invokes the event_mapped() callback of the related event. On X86 this might increase the perf_rdpmc_allowed reference counter. But nothing undoes this as perf_mmap_close() is never called in this case, which causes another reference count leak. Return early on failure to prevent that. Fixes: 1e0fb9ec679c ("perf: Add pmu callbacks to track event mapping and unmapping") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: stable@vger.kernel.org
2025-08-05 | perf/core: Don't leak AUX buffer refcount on allocation failure | Thomas Gleixner | 1 | -3/+4
Failure of the AUX buffer allocation leaks the reference count. Set the reference count to 1 only when the allocation succeeds. Fixes: 45bfb2e50471 ("perf: Add AUX area to ring buffer for raw data streams") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: stable@vger.kernel.org
2025-08-05 | perf/core: Preserve AUX buffer allocation failure result | Thomas Gleixner | 1 | -2/+1
A recent overhaul sets the return value to 0 unconditionally after the allocations, which causes reference count leaks and corrupts the user->vm accounting. Preserve the AUX buffer allocation failure return value, so that the subsequent code works correctly. Fixes: 0983593f32c4 ("perf/core: Lift event->mmap_mutex in perf_mmap()") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: stable@vger.kernel.org
2025-08-05 | smb: client: eliminate mid_flags field | Wang Zhaolong | 4 | -19/+16
This is step 3/4 of a patch series to fix mid_q_entry memory leaks caused by race conditions in callback execution. Replace the mid_flags bitmask with dedicated boolean fields to simplify locking logic and improve code readability: - Replace MID_DELETED with bool deleted_from_q - Replace MID_WAIT_CANCELLED with bool wait_cancelled - Remove mid_flags field entirely The new boolean fields have clearer semantics: - deleted_from_q: whether mid has been removed from pending_mid_q - wait_cancelled: whether request was cancelled during wait This change reduces memory usage (from 4-byte bitmask to 2 boolean flags) and eliminates confusion about which lock protects which flag bits, preparing for per-mid locking in the next patch. Signed-off-by: Wang Zhaolong <wangzhaolong@huaweicloud.com> Acked-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2025-08-05 | smb: client: add mid_counter_lock to protect the mid counter | Wang Zhaolong | 5 | -35/+38
This is step 2/4 of a patch series to fix mid_q_entry memory leaks caused by race conditions in callback execution. Add a dedicated mid_counter_lock to protect current_mid counter, separating it from mid_queue_lock which protects pending_mid_q operations. This reduces lock contention and prepares for finer- grained locking in subsequent patches. Changes: - Add TCP_Server_Info->mid_counter_lock spinlock - Rename CurrentMid to current_mid for consistency - Use mid_counter_lock to protect current_mid access - Update locking documentation in cifsglob.h This separation allows mid allocation to proceed without blocking queue operations, improving performance under heavy load. Signed-off-by: Wang Zhaolong <wangzhaolong@huaweicloud.com> Acked-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2025-08-05 | smb: client: rename server mid_lock to mid_queue_lock | Wang Zhaolong | 7 | -54/+54
This is step 1/4 of a patch series to fix mid_q_entry memory leaks caused by race conditions in callback execution. The current mid_lock name is somewhat ambiguous about what it protects. To prepare for splitting this lock into separate, more granular locks, this patch renames mid_lock to mid_queue_lock to clearly indicate its specific responsibility for protecting the pending_mid_q list and related queue operations. No functional changes are made in this patch - it only prepares the codebase for the lock splitting that follows. - mid_queue_lock for queue operations - mid_counter_lock for mid counter operations - per-mid locks for individual mid state management Signed-off-by: Wang Zhaolong <wangzhaolong@huaweicloud.com> Acked-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2025-08-05 | RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages | Pedro Falcato | 1 | -3/+2
Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"), we have been doing this:

  static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
                               size_t size)
  [...]
          /* Calculate the number of bytes we need to push, for this page
           * specifically */
          size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
          /* If we can't splice it, then copy it in, as normal */
          if (!sendpage_ok(page[i]))
                  msg.msg_flags &= ~MSG_SPLICE_PAGES;
          /* Set the bvec pointing to the page, with len $bytes */
          bvec_set_page(&bvec, page[i], bytes, offset);
          /* Set the iter to $size, aka the size of the whole sendpages (!!!) */
          iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
  try_page_again:
          lock_sock(sk);
          /* Sendmsg with $size size (!!!) */
          rv = tcp_sendmsg_locked(sk, &msg, size);

This means we've been sending oversized iov_iters and tcp_sendmsg calls for a while. This has been a benign bug because sendpage_ok() always returned true. With the recent slab allocator changes being slowly introduced into next (that disallow sendpage on large kmalloc allocations), we have recently hit out-of-bounds crashes, due to slight differences in iov_iter behavior between the MSG_SPLICE_PAGES and "regular" copy paths:

  (MSG_SPLICE_PAGES)
    skb_splice_from_iter
      iov_iter_extract_pages
        iov_iter_extract_bvec_pages
          uses i->nr_segs to correctly stop in its tracks before OoB'ing everywhere
    skb_splice_from_iter gets a "short" read

  (!MSG_SPLICE_PAGES)
    skb_copy_to_page_nocache copy=iov_iter_count
    [...]
      copy_from_iter
            /* this doesn't help */
            if (unlikely(iter->count < len))
                    len = iter->count;
      iterate_bvec
      ... and we run off the bvecs

Fix this by properly setting the iov_iter's byte count, plus sending the correct byte count to tcp_sendmsg_locked. Link: https://patch.msgid.link/r/20250729120348.495568-1-pfalcato@suse.de Cc: stable@vger.kernel.org Fixes: c2ff29e99a76 ("siw: Inline do_tcp_sendpages()") Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202507220801.50a7210-lkp@intel.com Reviewed-by: David Howells <dhowells@redhat.com> Signed-off-by: Pedro Falcato <pfalcato@suse.de> Acked-by: Bernard Metzler <bernard.metzler@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
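The corrected sizing, in sketch form (mirrors the snippet quoted above; error handling omitted):

  size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);

  bvec_set_page(&bvec, page[i], bytes, offset);
  iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);  /* was: size */
  rv = tcp_sendmsg_locked(sk, &msg, bytes);                    /* was: size */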
2025-08-05 | nfsd: avoid ref leak in nfsd_open_local_fh() | NeilBrown | 1 | -2/+3
If two calls to nfsd_open_local_fh() race and both successfully call nfsd_file_acquire_local(), they will both get an extra reference to the net to accompany the file reference stored in *pnf. One of them will fail to store (using xchg()) the file reference in *pnf and will drop that reference but WON'T drop the accompanying reference to the net. This leak means that when the nfs server is shut down it will hang in nfsd_shutdown_net() waiting for &nn->nfsd_net_free_done. This patch adds the missing nfsd_net_put(). Reported-by: Mike Snitzer <snitzer@kernel.org> Fixes: e6f7e1487ab5 ("nfs_localio: simplify interface to nfsd for getting nfsd_file") Cc: stable@vger.kernel.org Signed-off-by: NeilBrown <neil@brown.name> Tested-by: Mike Snitzer <snitzer@kernel.org> Reviewed-by: Mike Snitzer <snitzer@kernel.org> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2025-08-05 | nfsd: don't set the ctime on delegated atime updates | Jeff Layton | 1 | -1/+9
Clients will typically precede a DELEGRETURN for a delegation with delegated timestamp with a SETATTR to set the timestamps on the server to match what the client has. knfsd implements this by using the nfsd_setattr() infrastructure, which will set ATTR_CTIME on any update that goes to notify_change(). This is problematic as it means that the client will get a spurious ctime update when updating the atime. POSIX unfortunately doesn't phrase it succinctly, but updating the atime due to reads should not update the ctime. In this case, the client is sending a SETATTR to update the atime on the server to match its latest value. The ctime should not be advanced in this case as that would incorrectly indicate a change to the inode. Fix this by not implicitly setting ATTR_CTIME when ATTR_DELEG is set in __nfsd_setattr(). The decoder for FATTR4_WORD2_TIME_DELEG_MODIFY already sets ATTR_CTIME, so this is sufficient to make it skip setting the ctime on atime-only updates. Fixes: 7e13f4f8d27d ("nfsd: handle delegated timestamps in SETATTR") Cc: stable@vger.kernel.org Signed-off-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
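A hedged sketch of the described change in __nfsd_setattr() (simplified; the real function sets more attribute flags):

  if (!(iap->ia_valid & ATTR_DELEG))
          iap->ia_valid |= ATTR_CTIME;   /* only force a ctime bump for non-delegated setattrs */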
2025-08-06 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rmk/linux | Linus Torvalds | 1 | -160/+0
Pull ARM update from Russell King:
 "Just one development update this time:
  - Finish removing Coresight support"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rmk/linux:
  ARM: 9449/1: coresight: Finish removal of Coresight support in arch/arm/kernel