path: root/drivers/infiniband
Age | Commit message | Author | Files | Lines
10 days | RDMA/rxe: Fix the failure of ibv_query_device() and ibv_query_device_ex() tests | Zhu Yanjun | 1 | -19/+6
[ Upstream commit 8ce2eb9dfac8743d1c423b86339336a5b6a6069e ] In rdma-core, the following failures appear. " $ ./build/bin/run_tests.py -k device ssssssss....FF........s ====================================================================== FAIL: test_query_device (tests.test_device.DeviceTest.test_query_device) Test ibv_query_device() ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/ubuntu/rdma-core/tests/test_device.py", line 63, in test_query_device self.verify_device_attr(attr, dev) File "/home/ubuntu/rdma-core/tests/test_device.py", line 200, in verify_device_attr assert attr.sys_image_guid != 0 ^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError ====================================================================== FAIL: test_query_device_ex (tests.test_device.DeviceTest.test_query_device_ex) Test ibv_query_device_ex() ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/ubuntu/rdma-core/tests/test_device.py", line 222, in test_query_device_ex self.verify_device_attr(attr_ex.orig_attr, dev) File "/home/ubuntu/rdma-core/tests/test_device.py", line 200, in verify_device_attr assert attr.sys_image_guid != 0 ^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError " The root cause is: before a net device is set with rxe, this net device is used to generate a sys_image_guid. Fixes: 2ac5415022d1 ("RDMA/rxe: Remove the direct link to net_device") Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Link: https://patch.msgid.link/20250302215444.3742072-1-yanjun.zhu@linux.dev Reviewed-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com> Tested-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org> [Shivani: Modified to apply on 6.6.y] Signed-off-by: Shivani Agarwal <shivani.agarwal@broadcom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
10 days | RDMA/rxe: Remove the direct link to net_device | Zhu Yanjun | 7 | -20/+91
[ Upstream commit 2ac5415022d16d63d912a39a06f32f1f51140261 ] The similar patch in siw is in the link: https://git.kernel.org/rdma/rdma/c/16b87037b48889 This problem also occurred in RXE. The following analyze this problem. In the following Call Traces: " BUG: KASAN: slab-use-after-free in dev_get_flags+0x188/0x1d0 net/core/dev.c:8782 Read of size 4 at addr ffff8880554640b0 by task kworker/1:4/5295 CPU: 1 UID: 0 PID: 5295 Comm: kworker/1:4 Not tainted 6.12.0-rc3-syzkaller-00399-g9197b73fd7bb #0 Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Workqueue: infiniband ib_cache_event_task Call Trace: <TASK> __dump_stack lib/dump_stack.c:94 [inline] dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120 print_address_description mm/kasan/report.c:377 [inline] print_report+0x169/0x550 mm/kasan/report.c:488 kasan_report+0x143/0x180 mm/kasan/report.c:601 dev_get_flags+0x188/0x1d0 net/core/dev.c:8782 rxe_query_port+0x12d/0x260 drivers/infiniband/sw/rxe/rxe_verbs.c:60 __ib_query_port drivers/infiniband/core/device.c:2111 [inline] ib_query_port+0x168/0x7d0 drivers/infiniband/core/device.c:2143 ib_cache_update+0x1a9/0xb80 drivers/infiniband/core/cache.c:1494 ib_cache_event_task+0xf3/0x1e0 drivers/infiniband/core/cache.c:1568 process_one_work kernel/workqueue.c:3229 [inline] process_scheduled_works+0xa65/0x1850 kernel/workqueue.c:3310 worker_thread+0x870/0xd30 kernel/workqueue.c:3391 kthread+0x2f2/0x390 kernel/kthread.c:389 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244 </TASK> " 1). In the link [1], " infiniband syz2: set down " This means that on 839.350575, the event ib_cache_event_task was sent andi queued in ib_wq. 2). In the link [1], " team0 (unregistering): Port device team_slave_0 removed " It indicates that before 843.251853, the net device should be freed. 3). In the link [1], " BUG: KASAN: slab-use-after-free in dev_get_flags+0x188/0x1d0 " This means that on 850.559070, this slab-use-after-free problem occurred. In all, on 839.350575, the event ib_cache_event_task was sent and queued in ib_wq, before 843.251853, the net device veth was freed. on 850.559070, this event was executed, and the mentioned freed net device was called. Thus, the above call trace occurred. [1] https://syzkaller.appspot.com/x/log.txt?x=12e7025f980000 Reported-by: syzbot+4b87489410b4efd181bf@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=4b87489410b4efd181bf Fixes: 8700e3e7c485 ("Soft RoCE driver") Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Link: https://patch.msgid.link/20241220222325.2487767-1-yanjun.zhu@linux.dev Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org> [Shivani: - exported ib_device_get_netdev() function. - added ib_device_get_netdev() to ib_verbs.h.] Signed-off-by: Shivani Agarwal <shivani.agarwal@broadcom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
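A minimal sketch of the lookup pattern this change moves rxe to, assuming the ib_device_get_netdev() helper exported by the backport; the surrounding variables are illustrative rather than the driver's actual code:
    struct net_device *ndev;
    unsigned int flags;

    /* take a reference on the current netdev for this port instead of
     * keeping a raw pointer in the rxe device that can go stale */
    ndev = ib_device_get_netdev(ibdev, port_num);
    if (!ndev)
            return -ENODEV;
    flags = dev_get_flags(ndev);
    /* ... derive the port state from IFF_UP in flags ... */
    dev_put(ndev);          /* drop the reference when done */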
10 days | RDMA/core: Fix "KASAN: slab-use-after-free Read in ib_register_device" problem | Zhu Yanjun | 1 | -0/+5
commit d0706bfd3ee40923c001c6827b786a309e2a8713 upstream. Call Trace: __dump_stack lib/dump_stack.c:94 [inline] dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120 print_address_description mm/kasan/report.c:408 [inline] print_report+0xc3/0x670 mm/kasan/report.c:521 kasan_report+0xe0/0x110 mm/kasan/report.c:634 strlen+0x93/0xa0 lib/string.c:420 __fortify_strlen include/linux/fortify-string.h:268 [inline] get_kobj_path_length lib/kobject.c:118 [inline] kobject_get_path+0x3f/0x2a0 lib/kobject.c:158 kobject_uevent_env+0x289/0x1870 lib/kobject_uevent.c:545 ib_register_device drivers/infiniband/core/device.c:1472 [inline] ib_register_device+0x8cf/0xe00 drivers/infiniband/core/device.c:1393 rxe_register_device+0x275/0x320 drivers/infiniband/sw/rxe/rxe_verbs.c:1552 rxe_net_add+0x8e/0xe0 drivers/infiniband/sw/rxe/rxe_net.c:550 rxe_newlink+0x70/0x190 drivers/infiniband/sw/rxe/rxe.c:225 nldev_newlink+0x3a3/0x680 drivers/infiniband/core/nldev.c:1796 rdma_nl_rcv_msg+0x387/0x6e0 drivers/infiniband/core/netlink.c:195 rdma_nl_rcv_skb.constprop.0.isra.0+0x2e5/0x450 netlink_unicast_kernel net/netlink/af_netlink.c:1313 [inline] netlink_unicast+0x53a/0x7f0 net/netlink/af_netlink.c:1339 netlink_sendmsg+0x8d1/0xdd0 net/netlink/af_netlink.c:1883 sock_sendmsg_nosec net/socket.c:712 [inline] __sock_sendmsg net/socket.c:727 [inline] ____sys_sendmsg+0xa95/0xc70 net/socket.c:2566 ___sys_sendmsg+0x134/0x1d0 net/socket.c:2620 __sys_sendmsg+0x16d/0x220 net/socket.c:2652 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f This problem is similar to the problem that the commit 1d6a9e7449e2 ("RDMA/core: Fix use-after-free when rename device name") fixes. The root cause is: the function ib_device_rename() renames the name with lock. But in the function kobject_uevent(), this name is accessed without lock protection at the same time. The solution is to add the lock protection when this name is accessed in the function kobject_uevent(). Fixes: 779e0bf47632 ("RDMA/core: Do not indicate device ready when device enablement fails") Link: https://patch.msgid.link/r/20250506151008.75701-1-yanjun.zhu@linux.dev Reported-by: syzbot+e2ce9e275ecc70a30b72@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=e2ce9e275ecc70a30b72 Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org> [ Ajay: Modified to apply on v5.10.y-v6.6.y ib_device_notify_register() not present in v5.10.y-v6.6.y, so directly added lock for kobject_uevent() ] Signed-off-by: Ajay Kaher <ajay.kaher@broadcom.com> Signed-off-by: Shivani Agarwal <shivani.agarwal@broadcom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
10 days | RDMA/cm: Fix leaking the multicast GID table reference | Jason Gunthorpe | 1 | -0/+3
commit 57f3cb6c84159d12ba343574df2115fb18dd83ca upstream. If the CM ID is destroyed while the CM event for multicast creating is still queued the cancel_work_sync() will prevent the work from running which also prevents destroying the ah_attr. This leaks a refcount and triggers a WARN: GID entry ref leak for dev syz1 index 2 ref=573 WARNING: CPU: 1 PID: 655 at drivers/infiniband/core/cache.c:809 release_gid_table drivers/infiniband/core/cache.c:806 [inline] WARNING: CPU: 1 PID: 655 at drivers/infiniband/core/cache.c:809 gid_table_release_one+0x284/0x3cc drivers/infiniband/core/cache.c:886 Destroy the ah_attr after canceling the work, it is safe to call this twice. Link: https://patch.msgid.link/r/0-v1-4285d070a6b2+20a-rdma_mc_gid_leak_syz_jgg@nvidia.com Cc: stable@vger.kernel.org Fixes: fe454dc31e84 ("RDMA/ucma: Fix use-after-free bug in ucma_create_uevent") Reported-by: syzbot+b0da83a6c0e2e2bddbd4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/68232e7b.050a0220.f2294.09f6.GAE@google.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
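A sketch of the ordering described above, with placeholder structure and field names; rdma_destroy_ah_attr() tolerates being called on an already-destroyed attribute:
    /* the queued CM event work may never run, so release the AH
     * attribute (and with it the GID table reference) here as well */
    cancel_work_sync(&work->cm_event_work);     /* placeholder field name */
    rdma_destroy_ah_attr(&work->av.ah_attr);    /* safe to call twice */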
10 days | RDMA/core: Check for the presence of LS_NLA_TYPE_DGID correctly | Jason Gunthorpe | 1 | -23/+10
commit a7b8e876e0ef0232b8076972c57ce9a7286b47ca upstream. The netlink response for RDMA_NL_LS_OP_IP_RESOLVE should always have a LS_NLA_TYPE_DGID attribute, it is invalid if it does not. Use the nl parsing logic properly and call nla_parse_deprecated() to fill the nlattrs array and then directly index that array to get the data for the DGID. Just fail if it is NULL. Remove the for loop searching for the nla, and squash the validation and parsing into one function. Fixes an uninitialized read from the stack triggered by userspace if it does not provide the DGID to a kernel initiated RDMA_NL_LS_OP_IP_RESOLVE query. BUG: KMSAN: uninit-value in hex_byte_pack include/linux/hex.h:13 [inline] BUG: KMSAN: uninit-value in ip6_string+0xef4/0x13a0 lib/vsprintf.c:1490 hex_byte_pack include/linux/hex.h:13 [inline] ip6_string+0xef4/0x13a0 lib/vsprintf.c:1490 ip6_addr_string+0x18a/0x3e0 lib/vsprintf.c:1509 ip_addr_string+0x245/0xee0 lib/vsprintf.c:1633 pointer+0xc09/0x1bd0 lib/vsprintf.c:2542 vsnprintf+0xf8a/0x1bd0 lib/vsprintf.c:2930 vprintk_store+0x3ae/0x1530 kernel/printk/printk.c:2279 vprintk_emit+0x307/0xcd0 kernel/printk/printk.c:2426 vprintk_default+0x3f/0x50 kernel/printk/printk.c:2465 vprintk+0x36/0x50 kernel/printk/printk_safe.c:82 _printk+0x17e/0x1b0 kernel/printk/printk.c:2475 ib_nl_process_good_ip_rsep drivers/infiniband/core/addr.c:128 [inline] ib_nl_handle_ip_res_resp+0x963/0x9d0 drivers/infiniband/core/addr.c:141 rdma_nl_rcv_msg drivers/infiniband/core/netlink.c:-1 [inline] rdma_nl_rcv_skb drivers/infiniband/core/netlink.c:239 [inline] rdma_nl_rcv+0xefa/0x11c0 drivers/infiniband/core/netlink.c:259 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline] netlink_unicast+0xf04/0x12b0 net/netlink/af_netlink.c:1346 netlink_sendmsg+0x10b3/0x1250 net/netlink/af_netlink.c:1896 sock_sendmsg_nosec net/socket.c:714 [inline] __sock_sendmsg+0x333/0x3d0 net/socket.c:729 ____sys_sendmsg+0x7e0/0xd80 net/socket.c:2617 ___sys_sendmsg+0x271/0x3b0 net/socket.c:2671 __sys_sendmsg+0x1aa/0x300 net/socket.c:2703 __compat_sys_sendmsg net/compat.c:346 [inline] __do_compat_sys_sendmsg net/compat.c:353 [inline] __se_compat_sys_sendmsg net/compat.c:350 [inline] __ia32_compat_sys_sendmsg+0xa4/0x100 net/compat.c:350 ia32_sys_call+0x3f6c/0x4310 arch/x86/include/generated/asm/syscalls_32.h:371 do_syscall_32_irqs_on arch/x86/entry/syscall_32.c:83 [inline] __do_fast_syscall_32+0xb0/0x150 arch/x86/entry/syscall_32.c:306 do_fast_syscall_32+0x38/0x80 arch/x86/entry/syscall_32.c:331 do_SYSENTER_32+0x1f/0x30 arch/x86/entry/syscall_32.c:3 Link: https://patch.msgid.link/r/0-v1-3fbaef094271+2cf-rdma_op_ip_rslv_syz_jgg@nvidia.com Cc: stable@vger.kernel.org Fixes: ae43f8286730 ("IB/core: Add IP to GID netlink offload") Reported-by: syzbot+938fcd548c303fe33c1a@syzkaller.appspotmail.com Closes: https://lore.kernel.org/r/68dc3dac.a00a0220.102ee.004f.GAE@google.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
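A sketch of the parse-then-index pattern described above; the NULL policy argument and the surrounding error handling are illustrative:
    struct nlattr *tb[LS_NLA_TYPE_MAX] = {};
    const struct rdma_nla_ls_gid *gid;

    if (nla_parse_deprecated(tb, LS_NLA_TYPE_MAX - 1, nlmsg_data(nlh),
                             nlmsg_len(nlh), NULL, NULL))
            return -EINVAL;
    if (!tb[LS_NLA_TYPE_DGID])          /* mandatory for IP_RESOLVE replies */
            return -EINVAL;
    gid = nla_data(tb[LS_NLA_TYPE_DGID]);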
10 days | RDMA/bnxt_re: fix dma_free_coherent() pointer | Thomas Fourier | 1 | -3/+1
[ Upstream commit fcd431a9627f272b4c0bec445eba365fe2232a94 ]

dma_alloc_coherent() allocates a DMA-mapped buffer, pbl->pg_arr[i]. dma_free_coherent() must be passed that same pointer, not a page-aligned copy of it.

Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver")
Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com>
Link: https://patch.msgid.link/20251230085121.8023-2-fourier.thomas@gmail.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
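A sketch of the matched allocation/free pair the fix restores; the pbl field names follow the driver's PBL bookkeeping but are shown only as an illustration:
    /* allocation: keep both the CPU pointer and the DMA handle */
    pbl->pg_arr[i] = dma_alloc_coherent(&pdev->dev, pbl->pg_size,
                                        &pbl->pg_map_arr[i], GFP_KERNEL);

    /* release: pass back exactly the pointer that was allocated,
     * not a page-aligned copy of it */
    dma_free_coherent(&pdev->dev, pbl->pg_size,
                      pbl->pg_arr[i], pbl->pg_map_arr[i]);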
10 days | RDMA/rtrs: Fix clt_path::max_pages_per_mr calculation | Honggang LI | 1 | -0/+1
[ Upstream commit 43bd09d5b750f700499ae8ec45fd41a4c48673e6 ] If device max_mr_size bits in the range [mr_page_shift+31:mr_page_shift] are zero, the `min3` function will set clt_path::max_pages_per_mr to zero. `alloc_path_reqs` will pass zero, which is invalid, as the third parameter to `ib_alloc_mr`. Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality") Signed-off-by: Honggang LI <honggangli@163.com> Link: https://patch.msgid.link/20251229025617.13241-1-honggangli@163.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
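A worked example of the truncation, using made-up device limits:
    u64 max_mr_size   = 1ULL << 44;     /* example: only bit 44 set, so bits 43..12 are zero */
    u32 mr_page_shift = 12;

    /* (u32)(max_mr_size >> mr_page_shift) == (u32)(1ULL << 32) == 0, so
     * the min3() of the device limits becomes 0 and ib_alloc_mr() is
     * later called with an invalid max_num_sg of 0 */
    u32 pages_per_mr = (u32)(max_mr_size >> mr_page_shift);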
10 days | RDMA/bnxt_re: Fix to use correct page size for PDE table | Kalesh AP | 1 | -2/+2
[ Upstream commit 3d70e0fb0f289b0c778041c5bb04d099e1aa7c1c ]

In bnxt_qplib_alloc_init_hwq(), while allocating memory for the PDE table, the driver incorrectly uses the "pg_size" value passed to the function. Fix it to use the correct value of 4K. Also fix the allocation size for the PBL table.

Fixes: 0c4dcd602817 ("RDMA/bnxt_re: Refactor hardware queue memory allocation")
Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Link: https://patch.msgid.link/20251223131855.145955-1-kalesh-anakkur.purayil@broadcom.com
Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/bnxt_re: Fix IB_SEND_IP_CSUM handling in post_send | Alok Tiwari | 1 | -6/+1
[ Upstream commit f01765a2361323e78e3d91b1cb1d5527a83c5cf7 ] The bnxt_re SEND path checks wr->send_flags to enable features such as IP checksum offload. However, send_flags is a bitmask and may contain multiple flags (e.g. IB_SEND_SIGNALED | IB_SEND_IP_CSUM), while the existing code uses a switch() statement that only matches when send_flags is exactly IB_SEND_IP_CSUM. As a result, checksum offload is not enabled when additional SEND flags are present. Replace the switch() with a bitmask test: if (wr->send_flags & IB_SEND_IP_CSUM) This ensures IP checksum offload is enabled correctly when multiple SEND flags are used. Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Link: https://patch.msgid.link/20251219093308.2415620-1-alok.a.tiwari@oracle.com Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
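A sketch of the bitmask test; the WQE field and flag names are illustrative:
    /* before: switch (wr->send_flags) { case IB_SEND_IP_CSUM: ... }
     * matched only when IB_SEND_IP_CSUM was the sole flag set */
    if (wr->send_flags & IB_SEND_IP_CSUM)
            wqe->rawqp1.lflags |= SQ_SEND_RAWETH_QP1_LFLAGS_IP_CHKSUM;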
10 days | RDMA/bnxt_re: Fix incorrect BAR check in bnxt_qplib_map_creq_db() | Alok Tiwari | 1 | -1/+1
[ Upstream commit 145a417a39d7efbc881f52e829817376972b278c ] RCFW_COMM_CONS_PCI_BAR_REGION is defined as BAR 2, so checking !creq_db->reg.bar_id is incorrect and always false. pci_resource_start() returns the BAR base address, and a value of 0 indicates that the BAR is unassigned. Update the condition to test bar_base == 0 instead. This ensures the driver detects and logs an error for an unassigned RCFW communication BAR. Fixes: cee0c7bba486 ("RDMA/bnxt_re: Refactor command queue management code") Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Link: https://patch.msgid.link/20251217100158.752504-1-alok.a.tiwari@oracle.com Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
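A sketch of the corrected check; the field names follow the commit description and are illustrative:
    creq_db->reg.bar_id   = RCFW_COMM_CONS_PCI_BAR_REGION;   /* BAR 2, never 0 */
    creq_db->reg.bar_base = pci_resource_start(pdev, creq_db->reg.bar_id);
    if (!creq_db->reg.bar_base) {       /* 0 means the BAR was never assigned */
            dev_err(&pdev->dev, "CREQ BAR region %d resc start is 0!\n",
                    creq_db->reg.bar_id);
            return -ENOMEM;
    }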
10 days | RDMA/core: Fix logic error in ib_get_gids_from_rdma_hdr() | Jang Ingyu | 1 | -1/+1
[ Upstream commit 8aaa848eaddd9ef8680fc6aafbd3a0646da5df40 ] Fix missing comparison operator for RDMA_NETWORK_ROCE_V1 in the conditional statement. The constant was used directly instead of being compared with net_type, causing the condition to always evaluate to true. Fixes: 1c15b4f2a42f ("RDMA/core: Modify enum ib_gid_type and enum rdma_network_type") Signed-off-by: Jang Ingyu <ingyujang25@korea.ac.kr> Link: https://patch.msgid.link/20251219041508.1725947-1-ingyujang25@korea.ac.kr Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
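A sketch of the corrected condition in ib_get_gids_from_rdma_hdr():
    /* before: "if (RDMA_NETWORK_ROCE_V1)" - a non-zero enum constant,
     * so the branch was always taken regardless of net_type */
    if (net_type == RDMA_NETWORK_ROCE_V1) {
            memcpy(sgid, &hdr->ibgrh.sgid, sizeof(*sgid));
            memcpy(dgid, &hdr->ibgrh.dgid, sizeof(*dgid));
    }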
10 days | RDMA/efa: Remove possible negative shift | Michael Margolin | 1 | -4/+0
[ Upstream commit 85463eb6a46caf2f1e0e1a6d0731f2f3bab17780 ]

The page size used for the device may in some cases be smaller than PAGE_SIZE, which results in a negative shift when calculating the number of host pages per PAGE_SIZE for a debug log. Remove the debug line together with the calculation.

Fixes: 40909f664d27 ("RDMA/efa: Add EFA verbs implementation")
Link: https://patch.msgid.link/r/20251210173656.8180-1-mrgolin@amazon.com
Reviewed-by: Tom Sela <tomsela@amazon.com>
Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
Reviewed-by: Gal Pressman <gal.pressman@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/irdma: avoid invalid read in irdma_net_event | Michal Schmidt | 1 | -1/+2
[ Upstream commit 6f05611728e9d0ab024832a4f1abb74a5f5d0bb0 ] irdma_net_event() should not dereference anything from "neigh" (alias "ptr") until it has checked that the event is NETEVENT_NEIGH_UPDATE. Other events come with different structures pointed to by "ptr" and they may be smaller than struct neighbour. Move the read of neigh->dev under the NETEVENT_NEIGH_UPDATE case. The bug is mostly harmless, but it triggers KASAN on debug kernels: BUG: KASAN: stack-out-of-bounds in irdma_net_event+0x32e/0x3b0 [irdma] Read of size 8 at addr ffffc900075e07f0 by task kworker/27:2/542554 CPU: 27 PID: 542554 Comm: kworker/27:2 Kdump: loaded Not tainted 5.14.0-630.el9.x86_64+debug #1 Hardware name: [...] Workqueue: events rt6_probe_deferred Call Trace: <IRQ> dump_stack_lvl+0x60/0xb0 print_address_description.constprop.0+0x2c/0x3f0 print_report+0xb4/0x270 kasan_report+0x92/0xc0 irdma_net_event+0x32e/0x3b0 [irdma] notifier_call_chain+0x9e/0x180 atomic_notifier_call_chain+0x5c/0x110 rt6_do_redirect+0xb91/0x1080 tcp_v6_err+0xe9b/0x13e0 icmpv6_notify+0x2b2/0x630 ndisc_redirect_rcv+0x328/0x530 icmpv6_rcv+0xc16/0x1360 ip6_protocol_deliver_rcu+0xb84/0x12e0 ip6_input_finish+0x117/0x240 ip6_input+0xc4/0x370 ipv6_rcv+0x420/0x7d0 __netif_receive_skb_one_core+0x118/0x1b0 process_backlog+0xd1/0x5d0 __napi_poll.constprop.0+0xa3/0x440 net_rx_action+0x78a/0xba0 handle_softirqs+0x2d4/0x9c0 do_softirq+0xad/0xe0 </IRQ> Fixes: 915cc7ac0f8e ("RDMA/irdma: Add miscellaneous utility definitions") Link: https://patch.msgid.link/r/20251127143150.121099-1-mschmidt@redhat.com Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
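A sketch of the reordering described above; the handling inside the case is elided:
    struct neighbour *neigh = ptr;
    struct net_device *netdev;

    switch (event) {
    case NETEVENT_NEIGH_UPDATE:
            /* only for this event is ptr guaranteed to point at a
             * struct neighbour, so neigh->dev is read here rather than
             * before the switch */
            netdev = neigh->dev;
            /* ... existing neighbour-update handling ... */
            break;
    default:
            break;
    }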
10 days | RDMA/irdma: Do not directly rely on IB_PD_UNSAFE_GLOBAL_RKEY | Jacob Moroni | 4 | -10/+12
[ Upstream commit 71d3bdae5eab21cf8991a6f3cd914caa31d5a51f ] The HW disables bounds checking for MRs with a length of zero, so the driver will only allow a zero length MR if the "all_memory" flag is set, and this flag is only set if IB_PD_UNSAFE_GLOBAL_RKEY is set for the PD. This means that the "get_dma_mr" method will currently fail unless the IB_PD_UNSAFE_GLOBAL_RKEY flag is set. This has not been an issue because the "get_dma_mr" method is only ever invoked if the device does not support the local DMA key or if IB_PD_UNSAFE_GLOBAL_RKEY is set, and so far, all IRDMA HW supports the local DMA lkey. However, some new HW does not support the local DMA lkey, so the "get_dma_mr" method needs to work without IB_PD_UNSAFE_GLOBAL_RKEY being set. To support HW that does not allow the local DMA lkey, the logic has been changed to pass an explicit flag to indicate when a dma_mr is being created so that the zero length will be allowed. Also, the "all_memory" flag has been forced to false for normal MR allocation since these MRs are never supposed to provide global unsafe rkey semantics anyway; only the MR created with "get_dma_mr" should support this. Fixes: bb6d73d9add6 ("RDMA/irdma: Prevent zero-length STAG registration") Signed-off-by: Jacob Moroni <jmoroni@google.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Link: https://patch.msgid.link/20251125025350.180-7-tatyana.e.nikolova@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/irdma: Add support to re-register a memory region | Sindhu Devale | 2 | -43/+191
[ Upstream commit 5ac388db27c443dadfbb0b8b23fa7ccf429d901a ] Add support for reregister MR verb API by doing a de-register followed by a register MR with the new attributes. Reuse resources like iwmr handle and HW stag where possible. Signed-off-by: Sindhu Devale <sindhu.devale@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Link: https://lore.kernel.org/r/20231004151306.228-1-shiraz.saleem@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Stable-dep-of: 71d3bdae5eab ("RDMA/irdma: Do not directly rely on IB_PD_UNSAFE_GLOBAL_RKEY") Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/irdma: Fix data race in irdma_free_pble | Krzysztof Czurylo | 1 | -2/+4
[ Upstream commit 81f44409fb4f027d1e6d54edbeba5156ad94b214 ] Protects pble_rsrc counters with mutex to prevent data race. Fixes the following data race in irdma_free_pble reported by KCSAN: BUG: KCSAN: data-race in irdma_free_pble [irdma] / irdma_free_pble [irdma] write to 0xffff91430baa0078 of 8 bytes by task 16956 on cpu 5: irdma_free_pble+0x3b/0xb0 [irdma] irdma_dereg_mr+0x108/0x110 [irdma] ib_dereg_mr_user+0x74/0x160 [ib_core] uverbs_free_mr+0x26/0x30 [ib_uverbs] destroy_hw_idr_uobject+0x4a/0x90 [ib_uverbs] uverbs_destroy_uobject+0x7b/0x330 [ib_uverbs] uobj_destroy+0x61/0xb0 [ib_uverbs] ib_uverbs_run_method+0x1f2/0x380 [ib_uverbs] ib_uverbs_cmd_verbs+0x365/0x440 [ib_uverbs] ib_uverbs_ioctl+0x111/0x190 [ib_uverbs] __x64_sys_ioctl+0xc9/0x100 do_syscall_64+0x44/0xa0 entry_SYSCALL_64_after_hwframe+0x6e/0xd8 read to 0xffff91430baa0078 of 8 bytes by task 16953 on cpu 2: irdma_free_pble+0x23/0xb0 [irdma] irdma_dereg_mr+0x108/0x110 [irdma] ib_dereg_mr_user+0x74/0x160 [ib_core] uverbs_free_mr+0x26/0x30 [ib_uverbs] destroy_hw_idr_uobject+0x4a/0x90 [ib_uverbs] uverbs_destroy_uobject+0x7b/0x330 [ib_uverbs] uobj_destroy+0x61/0xb0 [ib_uverbs] ib_uverbs_run_method+0x1f2/0x380 [ib_uverbs] ib_uverbs_cmd_verbs+0x365/0x440 [ib_uverbs] ib_uverbs_ioctl+0x111/0x190 [ib_uverbs] __x64_sys_ioctl+0xc9/0x100 do_syscall_64+0x44/0xa0 entry_SYSCALL_64_after_hwframe+0x6e/0xd8 value changed: 0x0000000000005a62 -> 0x0000000000005a68 Fixes: e8c4dbc2fcac ("RDMA/irdma: Add PBLE resource manager") Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Link: https://patch.msgid.link/20251125025350.180-3-tatyana.e.nikolova@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/irdma: Fix data race in irdma_sc_ccq_arm | Krzysztof Czurylo | 1 | -0/+3
[ Upstream commit a521928164433de44fed5aaf5f49aeb3f1fb96f5 ] Adds a lock around irdma_sc_ccq_arm body to prevent inter-thread data race. Fixes data race in irdma_sc_ccq_arm() reported by KCSAN: BUG: KCSAN: data-race in irdma_sc_ccq_arm [irdma] / irdma_sc_ccq_arm [irdma] read to 0xffff9d51b4034220 of 8 bytes by task 255 on cpu 11: irdma_sc_ccq_arm+0x36/0xd0 [irdma] irdma_cqp_ce_handler+0x300/0x310 [irdma] cqp_compl_worker+0x2a/0x40 [irdma] process_one_work+0x402/0x7e0 worker_thread+0xb3/0x6d0 kthread+0x178/0x1a0 ret_from_fork+0x2c/0x50 write to 0xffff9d51b4034220 of 8 bytes by task 89 on cpu 3: irdma_sc_ccq_arm+0x7e/0xd0 [irdma] irdma_cqp_ce_handler+0x300/0x310 [irdma] irdma_wait_event+0xd4/0x3e0 [irdma] irdma_handle_cqp_op+0xa5/0x220 [irdma] irdma_hw_flush_wqes+0xb1/0x300 [irdma] irdma_flush_wqes+0x22e/0x3a0 [irdma] irdma_cm_disconn_true+0x4c7/0x5d0 [irdma] irdma_disconnect_worker+0x35/0x50 [irdma] process_one_work+0x402/0x7e0 worker_thread+0xb3/0x6d0 kthread+0x178/0x1a0 ret_from_fork+0x2c/0x50 value changed: 0x0000000000024000 -> 0x0000000000034000 Fixes: 3f49d6842569 ("RDMA/irdma: Implement HW Admin Queue OPs") Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Link: https://patch.msgid.link/20251125025350.180-2-tatyana.e.nikolova@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/bnxt_re: Fix the inline size for GenP7 devices | Selvin Xavier | 1 | -1/+1
[ Upstream commit 6afe40ff484a1155b71158b911c65299496e35c3 ] Inline size supported by the device is based on the number of SGEs supported by the adapter. Change the inline size calculation based on that. Fixes: de1d364c3815 ("RDMA/bnxt_re: Add support for Variable WQE in Genp7 adapters") Reviewed-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Link: https://patch.msgid.link/1763624215-10382-1-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/rtrs: server: Fix error handling in get_or_create_srv | Ma Ke | 1 | -1/+1
[ Upstream commit a338d6e849ab31f32c08b4fcac11c0c72afbb150 ]

After device_initialize() is called, use put_device() to release the device according to kernel device management rules. While a direct kfree() works in this case, using put_device() is more correct.

Found by code review.

Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
Signed-off-by: Ma Ke <make24@iscas.ac.cn>
Link: https://patch.msgid.link/20251110005158.13394-1-make24@iscas.ac.cn
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
10 days | RDMA/rxe: Fix null deref on srq->rq.queue after resize failure | Zhu Yanjun | 1 | -6/+1
[ Upstream commit 503a5e4690ae14c18570141bc0dcf7501a8419b0 ] A NULL pointer dereference can occur in rxe_srq_chk_attr() when ibv_modify_srq() is invoked twice in succession under certain error conditions. The first call may fail in rxe_queue_resize(), which leads rxe_srq_from_attr() to set srq->rq.queue = NULL. The second call then triggers a crash (null deref) when accessing srq->rq.queue->buf->index_mask. Call Trace: <TASK> rxe_modify_srq+0x170/0x480 [rdma_rxe] ? __pfx_rxe_modify_srq+0x10/0x10 [rdma_rxe] ? uverbs_try_lock_object+0x4f/0xa0 [ib_uverbs] ? rdma_lookup_get_uobject+0x1f0/0x380 [ib_uverbs] ib_uverbs_modify_srq+0x204/0x290 [ib_uverbs] ? __pfx_ib_uverbs_modify_srq+0x10/0x10 [ib_uverbs] ? tryinc_node_nr_active+0xe6/0x150 ? uverbs_fill_udata+0xed/0x4f0 [ib_uverbs] ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x2c0/0x470 [ib_uverbs] ? __pfx_ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x10/0x10 [ib_uverbs] ? uverbs_fill_udata+0xed/0x4f0 [ib_uverbs] ib_uverbs_run_method+0x55a/0x6e0 [ib_uverbs] ? __pfx_ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x10/0x10 [ib_uverbs] ib_uverbs_cmd_verbs+0x54d/0x800 [ib_uverbs] ? __pfx_ib_uverbs_cmd_verbs+0x10/0x10 [ib_uverbs] ? __pfx___raw_spin_lock_irqsave+0x10/0x10 ? __pfx_do_vfs_ioctl+0x10/0x10 ? ioctl_has_perm.constprop.0.isra.0+0x2c7/0x4c0 ? __pfx_ioctl_has_perm.constprop.0.isra.0+0x10/0x10 ib_uverbs_ioctl+0x13e/0x220 [ib_uverbs] ? __pfx_ib_uverbs_ioctl+0x10/0x10 [ib_uverbs] __x64_sys_ioctl+0x138/0x1c0 do_syscall_64+0x82/0x250 ? fdget_pos+0x58/0x4c0 ? ksys_write+0xf3/0x1c0 ? __pfx_ksys_write+0x10/0x10 ? do_syscall_64+0xc8/0x250 ? __pfx_vm_mmap_pgoff+0x10/0x10 ? fget+0x173/0x230 ? fput+0x2a/0x80 ? ksys_mmap_pgoff+0x224/0x4c0 ? do_syscall_64+0xc8/0x250 ? do_user_addr_fault+0x37b/0xfe0 ? clear_bhb_loop+0x50/0xa0 ? clear_bhb_loop+0x50/0xa0 ? clear_bhb_loop+0x50/0xa0 entry_SYSCALL_64_after_hwframe+0x76/0x7e Fixes: 8700e3e7c485 ("Soft RoCE driver") Tested-by: Liu Yi <asatsuyu.liu@gmail.com> Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Link: https://patch.msgid.link/20251027215203.1321-1-yanjun.zhu@linux.dev Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | RDMA/hns: Fix wrong WQE data when QP wraps around | Junxian Huang | 1 | -3/+8
[ Upstream commit fe9622011f955e35ba84d3af7b2f2fed31cf8ca1 ] When QP wraps around, WQE data from the previous use at the same position still remains as driver does not clear it. The WQE field layout differs across different opcodes, causing that the fields that are not explicitly assigned for the current opcode retain stale values, and are issued to HW by mistake. Such fields are as follows: * MSG_START_SGE_IDX field in ATOMIC WQE * BLOCK_SIZE and ZBVA fields in FRMR WQE * DirectWQE fields when DirectWQE not used For ATOMIC WQE, always set the latest sge index in MSG_START_SGE_IDX as required by HW. For FRMR WQE and DirectWQE, clear only those unassigned fields instead of the entire WQE to avoid performance penalty. Fixes: 68a997c5d28c ("RDMA/hns: Add FRMR support for hip08") Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20251016114051.1963197-4-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | RDMA/hns: Fix the modification of max_send_sge | wenglianfa | 1 | -2/+0
[ Upstream commit f5a7cbea5411668d429eb4ffe96c4063fe8dac9e ] The actual sge number may exceed the value specified in init_attr->cap when HW needs extra sge to enable inline feature. Since these extra sges are not expected by ULP, return the user-specified value to ULP instead of the expanded sge number. Fixes: 0c5e259b06a8 ("RDMA/hns: Fix incorrect sge nums calculation") Signed-off-by: wenglianfa <wenglianfa@huawei.com> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20251016114051.1963197-3-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | RDMA/irdma: Set irdma_cq cq_num field during CQ create | Jacob Moroni | 2 | -1/+2
[ Upstream commit 5575b7646b94c0afb0f4c0d86e00e13cf3397a62 ] The driver maintains a CQ table that is used to ensure that a CQ is still valid when processing CQ related AEs. When a CQ is destroyed, the table entry is cleared, using irdma_cq.cq_num as the index. This field was never being set, so it was just always clearing out entry 0. Additionally, the cq_num field size was increased to accommodate HW supporting more than 64K CQs. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Jacob Moroni <jmoroni@google.com> Link: https://patch.msgid.link/20250923142439.943930-1-jmoroni@google.com Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | RDMA/irdma: Remove unused struct irdma_cq fields | Jacob Moroni | 2 | -9/+0
[ Upstream commit 880245fd029a8f8ee8fd557c2681d077c1b1a959 ] These fields were set but not used anywhere, so remove them. Link: https://patch.msgid.link/r/20250923142128.943240-1-jmoroni@google.com Signed-off-by: Jacob Moroni <jmoroni@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Stable-dep-of: 5575b7646b94 ("RDMA/irdma: Set irdma_cq cq_num field during CQ create") Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | RDMA/irdma: Fix SD index calculation | Jacob Moroni | 1 | -1/+1
[ Upstream commit 8d158f47f1f33d8747e80c3afbea5aa337e59d41 ]

In some cases, pble_rsrc->next_fpm_addr can be larger than what fits in a u32, so remove the u32 cast to avoid unintentional truncation. This fixes the following error, which can be observed when registering massive memory regions:

[ 447.227494] (NULL ib_device): cqp opcode = 0x1f maj_err_code = 0xffff min_err_code = 0x800c
[ 447.227505] (NULL ib_device): [Update PE SDs Cmd Error][op_code=21] status=-5 waiting=1 completion_err=1 maj=0xffff min=0x800c

Fixes: e8c4dbc2fcac ("RDMA/irdma: Add PBLE resource manager")
Signed-off-by: Jacob Moroni <jmoroni@google.com>
Link: https://patch.msgid.link/20250923190850.1022773-1-jmoroni@google.com
Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/siw: Always report immediate post SQ errors | Bernard Metzler | 1 | -11/+14
[ Upstream commit fdd0fe94d68649322e391c5c27dd9f436b4e955e ] In siw_post_send(), any immediate error encountered during processing of the work request list must be reported to the caller, even if previous work requests in that list were just accepted and added to the send queue. Not reporting those errors confuses the caller, which would wait indefinitely for the failing and potentially subsequently aborted work requests completion. This fixes a case where immediate errors were overwritten by subsequent code in siw_post_send(). Fixes: 303ae1cdfdf7 ("rdma/siw: application interface") Link: https://patch.msgid.link/r/20250923144536.103825-1-bernard.metzler@linux.dev Suggested-by: Stefan Metzmacher <metze@samba.org> Signed-off-by: Bernard Metzler <bernard.metzler@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/rxe: Fix race in do_task() when draining | Gui-Dong Han | 1 | -2/+6
[ Upstream commit 8ca7eada62fcfabf6ec1dc7468941e791c1d8729 ] When do_task() exhausts its iteration budget (!ret), it sets the state to TASK_STATE_IDLE to reschedule, without a secondary check on the current task->state. This can overwrite the TASK_STATE_DRAINING state set by a concurrent call to rxe_cleanup_task() or rxe_disable_task(). While state changes are protected by a spinlock, both rxe_cleanup_task() and rxe_disable_task() release the lock while waiting for the task to finish draining in the while(!is_done(task)) loop. The race occurs if do_task() hits its iteration limit and acquires the lock in this window. The cleanup logic may then proceed while the task incorrectly reschedules itself, leading to a potential use-after-free. This bug was introduced during the migration from tasklets to workqueues, where the special handling for the draining case was lost. Fix this by restoring the original pre-migration behavior. If the state is TASK_STATE_DRAINING when iterations are exhausted, set cont to 1 to force a new loop iteration. This allows the task to finish its work, so that a subsequent iteration can reach the switch statement and correctly transition the state to TASK_STATE_DRAINED, stopping the task as intended. Fixes: 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks") Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev> Signed-off-by: Gui-Dong Han <hanguidong02@gmail.com> Link: https://patch.msgid.link/20250919025212.1682087-1-hanguidong02@gmail.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
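A sketch of the restored behaviour, following the description above (locking and the rest of the loop are elided):
    if (!ret) {
            /* iteration budget exhausted */
            if (task->state == TASK_STATE_DRAINING)
                    /* keep iterating so a later pass reaches the switch
                     * statement and moves the task to TASK_STATE_DRAINED,
                     * instead of overwriting the state with IDLE */
                    cont = 1;
            else
                    task->state = TASK_STATE_IDLE;
    }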
2025-10-15 | IB/sa: Fix sa_local_svc_timeout_ms read race | Vlad Dumitrescu | 1 | -2/+4
[ Upstream commit 1428cd764cd708d53a072a2f208d87014bfe05bc ] When computing the delta, the sa_local_svc_timeout_ms is read without ib_nl_request_lock held. Though unlikely in practice, this can cause a race condition if multiple local service threads are managing the timeout. Fixes: 2ca546b92a02 ("IB/sa: Route SA pathrecord query through netlink") Signed-off-by: Vlad Dumitrescu <vdumitrescu@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Edward Srouji <edwards@nvidia.com> Link: https://patch.msgid.link/20250916163112.98414-1-edwards@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/core: Resolve MAC of next-hop device without ARP support | Parav Pandit | 1 | -7/+3
[ Upstream commit 200651b9b8aadfbbec852f0e5d042d9abe75e2ab ] Currently, if the next-hop netdevice does not support ARP resolution, the destination MAC address is silently set to zero without reporting an error. This leads to incorrect behavior and may result in packet transmission failures. Fix this by deferring MAC resolution to the IP stack via neighbour lookup, allowing proper resolution or error reporting as appropriate. Fixes: 7025fcd36bd6 ("IB: address translation to map IP toIB addresses (GIDs)") Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Vlad Dumitrescu <vdumitrescu@nvidia.com> Signed-off-by: Edward Srouji <edwards@nvidia.com> Link: https://patch.msgid.link/20250916111103.84069-3-edwards@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/cm: Rate limit destroy CM ID timeout error message | Håkon Bugge | 1 | -2/+2
[ Upstream commit 2bbe1255fcf19c5eb300efb6cb5ad98d66fdae2e ] When the destroy CM ID timeout kicks in, you typically get a storm of them which creates a log flooding. Hence, change pr_err() to pr_err_ratelimited() in cm_destroy_id_wait_timeout(). Fixes: 96d9cbe2f2ff ("RDMA/cm: add timeout to cm_destroy_id wait") Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com> Link: https://patch.msgid.link/20250912100525.531102-1-haakon.bugge@oracle.com Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/mlx5: Fix vport loopback forcing for MPV device | Patrisious Haddad | 2 | -4/+16
[ Upstream commit 08aae7860450c89eebbc6fd4d38416e53c7a33d2 ]

Previously, loopback for MPV was supposed to be permanently enabled; however, other driver flows were able to override that configuration and disable it.

Add a force_lb parameter indicating that loopback should always be enabled, which prevents all other driver flows from disabling it.

Fixes: a9a9e68954f2 ("RDMA/mlx5: Fix vport loopback for MPV device")
Link: https://patch.msgid.link/r/cfc6b1f0f99f8100b087483cc14da6025317f901.1755088808.git.leon@kernel.org
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-02 | IB/mlx5: Fix obj_type mismatch for SRQ event subscriptions | Or Har-Toov | 1 | -0/+1
[ Upstream commit 85fe9f565d2d5af95ac2bbaa5082b8ce62b039f5 ] Fix a bug where the driver's event subscription logic for SRQ-related events incorrectly sets obj_type for RMP objects. When subscribing to SRQ events, get_legacy_obj_type() did not handle the MLX5_CMD_OP_CREATE_RMP case, which caused obj_type to be 0 (default). This led to a mismatch between the obj_type used during subscription (0) and the value used during notification (1, taken from the event's type field). As a result, event mapping for SRQ objects could fail and event notification would not be delivered correctly. This fix adds handling for MLX5_CMD_OP_CREATE_RMP in get_legacy_obj_type, returning MLX5_EVENT_QUEUE_TYPE_RQ so obj_type is consistent between subscription and notification. Fixes: 759738537142 ("IB/mlx5: Enable subscription for device events over DEVX") Link: https://patch.msgid.link/r/8f1048e3fdd1fde6b90607ce0ed251afaf8a148c.1755088962.git.leon@kernel.org Signed-off-by: Or Har-Toov <ohartoov@nvidia.com> Reviewed-by: Edward Srouji <edwards@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA/bnxt_re: Fix to initialize the PBL array | Anantha Prabhu | 1 | -0/+2
[ Upstream commit 806b9f494f62791ee6d68f515a8056c615a0e7b2 ] memset the PBL page pointer and page map arrays before populating the SGL addresses of the HWQ. Fixes: 0c4dcd602817 ("RDMA/bnxt_re: Refactor hardware queue memory allocation") Signed-off-by: Anantha Prabhu <anantha.prabhu@broadcom.com> Reviewed-by: Saravanan Vajravel <saravanan.vajravel@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Link: https://patch.msgid.link/20250805101000.233310-5-kalesh-anakkur.purayil@broadcom.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA/bnxt_re: Fix to remove workload check in SRQ limit path | Kashyap Desai | 3 | -35/+2
[ Upstream commit 666bce0bd7e771127cb0cda125cc9d32d9f9f15d ]

There should not be any check of the current workload when setting the srq_limit value in the SRQ HW context. Remove all such workload checks and set srq_limit directly via the SRQ_ARM doorbell.

Fixes: 37cb11acf1f7 ("RDMA/bnxt_re: Add SRQ support for Broadcom adapters")
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Saravanan Vajravel <saravanan.vajravel@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Link: https://patch.msgid.link/20250805101000.233310-3-kalesh-anakkur.purayil@broadcom.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA/bnxt_re: Fix to do SRQ armena by default | Kashyap Desai | 1 | -2/+1
[ Upstream commit 6296f9a5293ada28558f2867ac54c487e1e2b9f2 ]

Whenever an SRQ is created, make sure SRQ arm enable is always set; the driver is always ready to receive the SRQ async event.

Additional note: there is no need to set SRQ arm enable conditionally. See bnxt_qplib_armen_db in bnxt_qplib_create_cq().

Fixes: 37cb11acf1f7 ("RDMA/bnxt_re: Add SRQ support for Broadcom adapters")
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Saravanan Vajravel <saravanan.vajravel@broadcom.com>
Link: https://patch.msgid.link/20250805101000.233310-2-kalesh-anakkur.purayil@broadcom.com
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA/erdma: Fix ignored return value of init_kernel_qp | Boshi Yu | 1 | -1/+3
[ Upstream commit d5c74713f0117d07f91eb48b10bc2ad44e23c9b9 ] The init_kernel_qp interface may fail. Check its return value and free related resources properly when it does. Fixes: 155055771704 ("RDMA/erdma: Add verbs implementation") Reviewed-by: Cheng Xu <chengyou@linux.alibaba.com> Signed-off-by: Boshi Yu <boshiyu@linux.alibaba.com> Link: https://patch.msgid.link/20250725055410.67520-3-boshiyu@linux.alibaba.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages | Pedro Falcato | 1 | -3/+2
commit c18646248fed07683d4cee8a8af933fc4fe83c0d upstream. Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"), we have been doing this: static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset, size_t size) [...] /* Calculate the number of bytes we need to push, for this page * specifically */ size_t bytes = min_t(size_t, PAGE_SIZE - offset, size); /* If we can't splice it, then copy it in, as normal */ if (!sendpage_ok(page[i])) msg.msg_flags &= ~MSG_SPLICE_PAGES; /* Set the bvec pointing to the page, with len $bytes */ bvec_set_page(&bvec, page[i], bytes, offset); /* Set the iter to $size, aka the size of the whole sendpages (!!!) */ iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size); try_page_again: lock_sock(sk); /* Sendmsg with $size size (!!!) */ rv = tcp_sendmsg_locked(sk, &msg, size); This means we've been sending oversized iov_iters and tcp_sendmsg calls for a while. This has a been a benign bug because sendpage_ok() always returned true. With the recent slab allocator changes being slowly introduced into next (that disallow sendpage on large kmalloc allocations), we have recently hit out-of-bounds crashes, due to slight differences in iov_iter behavior between the MSG_SPLICE_PAGES and "regular" copy paths: (MSG_SPLICE_PAGES) skb_splice_from_iter iov_iter_extract_pages iov_iter_extract_bvec_pages uses i->nr_segs to correctly stop in its tracks before OoB'ing everywhere skb_splice_from_iter gets a "short" read (!MSG_SPLICE_PAGES) skb_copy_to_page_nocache copy=iov_iter_count [...] copy_from_iter /* this doesn't help */ if (unlikely(iter->count < len)) len = iter->count; iterate_bvec ... and we run off the bvecs Fix this by properly setting the iov_iter's byte count, plus sending the correct byte count to tcp_sendmsg_locked. Link: https://patch.msgid.link/r/20250729120348.495568-1-pfalcato@suse.de Cc: stable@vger.kernel.org Fixes: c2ff29e99a76 ("siw: Inline do_tcp_sendpages()") Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202507220801.50a7210-lkp@intel.com Reviewed-by: David Howells <dhowells@redhat.com> Signed-off-by: Pedro Falcato <pfalcato@suse.de> Acked-by: Bernard Metzler <bernard.metzler@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
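A sketch of the corrected per-page iteration, following the commit text above; only the byte counts handed to the iterator and to tcp_sendmsg_locked() change:
    size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);

    if (!sendpage_ok(page[i]))
            msg.msg_flags &= ~MSG_SPLICE_PAGES;

    bvec_set_page(&bvec, page[i], bytes, offset);
    /* the iterator and the sendmsg call now cover only this page's
     * worth of data, not the whole sendpages size */
    iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
    rv = tcp_sendmsg_locked(sk, &msg, bytes);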
2025-08-28 | RDMA/core: reduce stack using in nldev_stat_get_doit() | Arnd Bergmann | 1 | -10/+12
[ Upstream commit 43163f4c30f94d2103c948a247cdf2cda5068ca7 ] In the s390 defconfig, gcc-10 and earlier end up inlining three functions into nldev_stat_get_doit(), and each of them uses some 600 bytes of stack. The result is a function with an overly large stack frame and a warning: drivers/infiniband/core/nldev.c:2466:1: error: the frame size of 1720 bytes is larger than 1280 bytes [-Werror=frame-larger-than=] Mark the three functions noinline_for_stack to prevent this, ensuring that only one copy of the nlattr array is on the stack of each function. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Link: https://patch.msgid.link/20250620113335.3776965-1-arnd@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | RDMA: hfi1: fix possible divide-by-zero in find_hw_thread_mask() | Yury Norov [NVIDIA] | 1 | -20/+24
[ Upstream commit 59f7d2138591ef8f0e4e4ab5f1ab674e8181ad3a ]

The function divides the number of online CPUs by num_core_siblings, and only afterwards checks whether the divisor is zero. This allows a divide-by-zero runtime error. Fix it by moving the check before the division. This also saves one indentation level.

Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Link: https://patch.msgid.link/20250604193947.11834-3-yury.norov@gmail.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
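A sketch of the reordered check, with illustrative variable names:
    if (!affinity->num_core_siblings)
            return;         /* check the divisor before dividing */

    cores = cpumask_weight(cpu_online_mask) / affinity->num_core_siblings;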
2025-08-15 | RDMA/hns: Fix -Wframe-larger-than issue | Junxian Huang | 1 | -5/+10
[ Upstream commit 79d56805c5068f2bc81518043e043c3dedd1c82a ] Fix -Wframe-larger-than issue by allocating memory for qpc struct with kzalloc() instead of using stack memory. Fixes: 606bf89e98ef ("RDMA/hns: Refactor for hns_roce_v2_modify_qp function") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202506240032.CSgIyFct-lkp@intel.com/ Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20250703113905.3597124-7-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-15 | Fix dma_unmap_sg() nents value | Thomas Fourier | 1 | -1/+2
[ Upstream commit 1db50f7b7a793670adcf062df9ff27798829d963 ] The dma_unmap_sg() functions should be called with the same nents as the dma_map_sg(), not the value the map function returned. Fixes: ed10435d3583 ("RDMA/erdma: Implement hierarchical MTT") Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Link: https://patch.msgid.link/20250630092346.81017-2-fourier.thomas@gmail.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
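A sketch of the required pairing; dma_unmap_sg() takes the nents originally passed to dma_map_sg(), not the map function's return value:
    int mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);

    if (!mapped)
            return -ENOMEM;
    /* ... hand the mapped scatterlist to the device ... */

    dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);   /* original nents, not 'mapped' */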
2025-08-15 | net/mlx5: Check device memory pointer before usage | Stav Aviram | 1 | -1/+1
[ Upstream commit 70f238c902b8c0461ae6fbb8d1a0bbddc4350eea ] Add a NULL check before accessing device memory to prevent a crash if dev->dm allocation in mlx5_init_once() fails. Fixes: c9b9dcb430b3 ("net/mlx5: Move device memory management to mlx5_core") Signed-off-by: Stav Aviram <saviram@nvidia.com> Link: https://patch.msgid.link/c88711327f4d74d5cebc730dc629607e989ca187.1751370035.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-01 | RDMA/core: Rate limit GID cache warning messages | Maor Gottlieb | 1 | -2/+2
[ Upstream commit 333e4d79316c9ed5877d7aac8b8ed22efc74e96d ] The GID cache warning messages can flood the kernel log when there are multiple failed attempts to add GIDs. This can happen when creating many virtual interfaces without having enough space for their GIDs in the GID table. Change pr_warn to pr_warn_ratelimited to prevent log flooding while still maintaining visibility of the issue. Link: https://patch.msgid.link/r/fd45ed4a1078e743f498b234c3ae816610ba1b18.1750062357.git.leon@kernel.org Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-10 | RDMA/mlx5: Fix vport loopback for MPV device | Patrisious Haddad | 1 | -0/+33
[ Upstream commit a9a9e68954f29b1e197663f76289db4879fd51bb ]

Always enable vport loopback for both MPV devices on driver start.

Previously, in some cases related to MPV RoCE, packets did not go through the loopback check at the vport in FW, since it was disabled. Due to the complexity of identifying such cases for MPV, always enable vport loopback for both GVMIs when binding the slave to the master port.

Fixes: 0042f9e458a5 ("RDMA/mlx5: Enable vport loopback when user context or QP mandate")
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Link: https://patch.msgid.link/d4298f5ebb2197459e9e7221c51ecd6a34699847.1750064969.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-10 | RDMA/mlx5: Fix CC counters query for MPV | Patrisious Haddad | 1 | -1/+1
[ Upstream commit acd245b1e33fc4b9d0f2e3372021d632f7ee0652 ]

When CC counters are queried for the second port, use the correct core device for the query instead of always using the master core device.

Fixes: aac4492ef23a ("IB/mlx5: Update counter implementation for dual port RoCE")
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Link: https://patch.msgid.link/9cace74dcf106116118bebfa9146d40d4166c6b0.1750064969.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-10 | RDMA/mlx5: Fix HW counters query for non-representor devices | Patrisious Haddad | 1 | -1/+1
[ Upstream commit 3cc1dbfddf88dc5ecce0a75185061403b1f7352d ] To get the device HW counters, a non-representor switchdev device should use the mlx5_ib_query_q_counters() function and query all of the available counters. While a representor device in switchdev mode should use the mlx5_ib_query_q_counters_vport() function and query only the Q_Counters without the PPCNT counters and congestion control counters, since they aren't relevant for a representor device. Currently a non-representor switchdev device skips querying the PPCNT counters and congestion control counters, leaving them unupdated. Fix that by properly querying those counters for non-representor devices. Fixes: d22467a71ebe ("RDMA/mlx5: Expand switchdev Q-counters to expose representor statistics") Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Reviewed-by: Maher Sanalla <msanalla@nvidia.com> Link: https://patch.msgid.link/56bf8af4ca8c58e3fb9f7e47b1dca2009eeeed81.1750064969.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-10 | RDMA/mlx5: Initialize obj_event->obj_sub_list before xa_insert | Mark Zhang | 1 | -1/+1
[ Upstream commit 8edab8a72d67742f87e9dc2e2b0cdfddda5dc29a ] The obj_event may be loaded immediately after inserted, then if the list_head is not initialized then we may get a poisonous pointer. This fixes the crash below: mlx5_core 0000:03:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced) mlx5_core.sf mlx5_core.sf.4: firmware version: 32.38.3056 mlx5_core 0000:03:00.0 en3f0pf0sf2002: renamed from eth0 mlx5_core.sf mlx5_core.sf.4: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps IPv6: ADDRCONF(NETDEV_CHANGE): en3f0pf0sf2002: link becomes ready Unable to handle kernel NULL pointer dereference at virtual address 0000000000000060 Mem abort info: ESR = 0x96000006 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 Data abort info: ISV = 0, ISS = 0x00000006 CM = 0, WnR = 0 user pgtable: 4k pages, 48-bit VAs, pgdp=00000007760fb000 [0000000000000060] pgd=000000076f6d7003, p4d=000000076f6d7003, pud=0000000777841003, pmd=0000000000000000 Internal error: Oops: 96000006 [#1] SMP Modules linked in: ipmb_host(OE) act_mirred(E) cls_flower(E) sch_ingress(E) mptcp_diag(E) udp_diag(E) raw_diag(E) unix_diag(E) tcp_diag(E) inet_diag(E) binfmt_misc(E) bonding(OE) rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) isofs(E) cdrom(E) mst_pciconf(OE) ib_umad(OE) mlx5_ib(OE) ipmb_dev_int(OE) mlx5_core(OE) kpatch_15237886(OEK) mlxdevm(OE) auxiliary(OE) ib_uverbs(OE) ib_core(OE) psample(E) mlxfw(OE) tls(E) sunrpc(E) vfat(E) fat(E) crct10dif_ce(E) ghash_ce(E) sha1_ce(E) sbsa_gwdt(E) virtio_console(E) ext4(E) mbcache(E) jbd2(E) xfs(E) libcrc32c(E) mmc_block(E) virtio_net(E) net_failover(E) failover(E) sha2_ce(E) sha256_arm64(E) nvme(OE) nvme_core(OE) gpio_mlxbf3(OE) mlx_compat(OE) mlxbf_pmc(OE) i2c_mlxbf(OE) sdhci_of_dwcmshc(OE) pinctrl_mlxbf3(OE) mlxbf_pka(OE) gpio_generic(E) i2c_core(E) mmc_core(E) mlxbf_gige(OE) vitesse(E) pwr_mlxbf(OE) mlxbf_tmfifo(OE) micrel(E) mlxbf_bootctl(OE) virtio_ring(E) virtio(E) ipmi_devintf(E) ipmi_msghandler(E) [last unloaded: mst_pci] CPU: 11 PID: 20913 Comm: rte-worker-11 Kdump: loaded Tainted: G OE K 5.10.134-13.1.an8.aarch64 #1 Hardware name: https://www.mellanox.com BlueField-3 SmartNIC Main Card/BlueField-3 SmartNIC Main Card, BIOS 4.2.2.12968 Oct 26 2023 pstate: a0400089 (NzCv daIf +PAN -UAO -TCO BTYPE=--) pc : dispatch_event_fd+0x68/0x300 [mlx5_ib] lr : devx_event_notifier+0xcc/0x228 [mlx5_ib] sp : ffff80001005bcf0 x29: ffff80001005bcf0 x28: 0000000000000001 x27: ffff244e0740a1d8 x26: ffff244e0740a1d0 x25: ffffda56beff5ae0 x24: ffffda56bf911618 x23: ffff244e0596a480 x22: ffff244e0596a480 x21: ffff244d8312ad90 x20: ffff244e0596a480 x19: fffffffffffffff0 x18: 0000000000000000 x17: 0000000000000000 x16: ffffda56be66d620 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11: 0000000000000040 x10: ffffda56bfcafb50 x9 : ffffda5655c25f2c x8 : 0000000000000010 x7 : 0000000000000000 x6 : ffff24545a2e24b8 x5 : 0000000000000003 x4 : ffff80001005bd28 x3 : 0000000000000000 x2 : 0000000000000000 x1 : ffff244e0596a480 x0 : ffff244d8312ad90 Call trace: dispatch_event_fd+0x68/0x300 [mlx5_ib] devx_event_notifier+0xcc/0x228 [mlx5_ib] atomic_notifier_call_chain+0x58/0x80 mlx5_eq_async_int+0x148/0x2b0 [mlx5_core] atomic_notifier_call_chain+0x58/0x80 irq_int_handler+0x20/0x30 [mlx5_core] __handle_irq_event_percpu+0x60/0x220 handle_irq_event_percpu+0x3c/0x90 handle_irq_event+0x58/0x158 handle_fasteoi_irq+0xfc/0x188 generic_handle_irq+0x34/0x48 ... 
Fixes: 759738537142 ("IB/mlx5: Enable subscription for device events over DEVX") Link: https://patch.msgid.link/r/3ce7f20e0d1a03dc7de6e57494ec4b8eaf1f05c2.1750147949.git.leon@kernel.org Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-27 | RDMA/iwcm: Fix use-after-free of work objects after cm_id destruction | Shin'ichiro Kawasaki | 1 | -14/+15
commit 6883b680e703c6b2efddb4e7a8d891ce1803d06b upstream. The commit 59c68ac31e15 ("iw_cm: free cm_id resources on the last deref") simplified cm_id resource management by freeing cm_id once all references to the cm_id were removed. The references are removed either upon completion of iw_cm event handlers or when the application destroys the cm_id. This commit introduced the use-after-free condition where cm_id_private object could still be in use by event handler works during the destruction of cm_id. The commit aee2424246f9 ("RDMA/iwcm: Fix a use-after-free related to destroying CM IDs") addressed this use-after- free by flushing all pending works at the cm_id destruction. However, still another use-after-free possibility remained. It happens with the work objects allocated for each cm_id_priv within alloc_work_entries() during cm_id creation, and subsequently freed in dealloc_work_entries() once all references to the cm_id are removed. If the cm_id's last reference is decremented in the event handler work, the work object for the work itself gets removed, and causes the use- after-free BUG below: BUG: KASAN: slab-use-after-free in __pwq_activate_work+0x1ff/0x250 Read of size 8 at addr ffff88811f9cf800 by task kworker/u16:1/147091 CPU: 2 UID: 0 PID: 147091 Comm: kworker/u16:1 Not tainted 6.15.0-rc2+ #27 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014 Workqueue: 0x0 (iw_cm_wq) Call Trace: <TASK> dump_stack_lvl+0x6a/0x90 print_report+0x174/0x554 ? __virt_addr_valid+0x208/0x430 ? __pwq_activate_work+0x1ff/0x250 kasan_report+0xae/0x170 ? __pwq_activate_work+0x1ff/0x250 __pwq_activate_work+0x1ff/0x250 pwq_dec_nr_in_flight+0x8c5/0xfb0 process_one_work+0xc11/0x1460 ? __pfx_process_one_work+0x10/0x10 ? assign_work+0x16c/0x240 worker_thread+0x5ef/0xfd0 ? __pfx_worker_thread+0x10/0x10 kthread+0x3b0/0x770 ? __pfx_kthread+0x10/0x10 ? rcu_is_watching+0x11/0xb0 ? _raw_spin_unlock_irq+0x24/0x50 ? rcu_is_watching+0x11/0xb0 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x30/0x70 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1a/0x30 </TASK> Allocated by task 147416: kasan_save_stack+0x2c/0x50 kasan_save_track+0x10/0x30 __kasan_kmalloc+0xa6/0xb0 alloc_work_entries+0xa9/0x260 [iw_cm] iw_cm_connect+0x23/0x4a0 [iw_cm] rdma_connect_locked+0xbfd/0x1920 [rdma_cm] nvme_rdma_cm_handler+0x8e5/0x1b60 [nvme_rdma] cma_cm_event_handler+0xae/0x320 [rdma_cm] cma_work_handler+0x106/0x1b0 [rdma_cm] process_one_work+0x84f/0x1460 worker_thread+0x5ef/0xfd0 kthread+0x3b0/0x770 ret_from_fork+0x30/0x70 ret_from_fork_asm+0x1a/0x30 Freed by task 147091: kasan_save_stack+0x2c/0x50 kasan_save_track+0x10/0x30 kasan_save_free_info+0x37/0x60 __kasan_slab_free+0x4b/0x70 kfree+0x13a/0x4b0 dealloc_work_entries+0x125/0x1f0 [iw_cm] iwcm_deref_id+0x6f/0xa0 [iw_cm] cm_work_handler+0x136/0x1ba0 [iw_cm] process_one_work+0x84f/0x1460 worker_thread+0x5ef/0xfd0 kthread+0x3b0/0x770 ret_from_fork+0x30/0x70 ret_from_fork_asm+0x1a/0x30 Last potentially related work creation: kasan_save_stack+0x2c/0x50 kasan_record_aux_stack+0xa3/0xb0 __queue_work+0x2ff/0x1390 queue_work_on+0x67/0xc0 cm_event_handler+0x46a/0x820 [iw_cm] siw_cm_upcall+0x330/0x650 [siw] siw_cm_work_handler+0x6b9/0x2b20 [siw] process_one_work+0x84f/0x1460 worker_thread+0x5ef/0xfd0 kthread+0x3b0/0x770 ret_from_fork+0x30/0x70 ret_from_fork_asm+0x1a/0x30 This BUG is reproducible by repeating the blktests test case nvme/061 for the rdma transport and the siw driver. 
To avoid the use-after-free of cm_id_private work objects, ensure that
the last reference to the cm_id is decremented not in the event handler
works, but in the cm_id destruction context. For that purpose, move the
iwcm_deref_id() call from destroy_cm_id() to the callers of
destroy_cm_id(). In iw_destroy_cm_id(), call iwcm_deref_id() after
flushing the pending works.

During the fix work, I noticed that iw_destroy_cm_id() is called from
cm_work_handler() and process_event() context. However, the comment of
iw_destroy_cm_id() notes that the function "cannot be called by the
event thread". Drop the false comment.

Closes: https://lore.kernel.org/linux-rdma/r5676e754sv35aq7cdsqrlnvyhiq5zktteaurl7vmfih35efko@z6lay7uypy3c/
Fixes: 59c68ac31e15 ("iw_cm: free cm_id resources on the last deref")
Cc: stable@vger.kernel.org
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://patch.msgid.link/20250510101036.1756439-1-shinichiro.kawasaki@wdc.com
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
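The ordering the fix establishes can be summarized as "flush first,
deref last": the destruction context keeps its own reference across the
flush of the iw_cm workqueue, so no event handler work can be the one to
drop the final reference and free the per-cm_id work objects while the
workqueue core still touches them. The sketch below illustrates only
that ordering; the demo_* structure and helper names are hypothetical
and this is not the iw_cm implementation.
"
/*
 * Minimal sketch, not the iw_cm code: demo_* names are made up. The
 * point is only the ordering in demo_destroy(): flush every pending
 * handler work while still holding a reference, then drop it.
 */
#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_id {
        refcount_t ref;
        void *work_entries;                /* freed on the last deref */
        struct workqueue_struct *wq;
};

static void demo_deref(struct demo_id *id)
{
        if (refcount_dec_and_test(&id->ref)) {
                kfree(id->work_entries);   /* analogue of freeing the work objects */
                kfree(id);
        }
}

static void demo_destroy(struct demo_id *id)
{
        /*
         * The destruction context still holds a reference, so handler
         * works that call demo_deref() while draining can never free
         * the work objects out from under the workqueue core.
         */
        flush_workqueue(id->wq);

        /* Only now, with no handler work left, drop the final reference. */
        demo_deref(id);
}
"
Holding that extra reference across flush_workqueue() is what guarantees
the final free of the work entries happens strictly after the workqueue
core has stopped using them.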
2025-06-19RDMA/cma: Fix hang when cma_netevent_callback fails to queue_workJack Morgenstein1-1/+2
[ Upstream commit 92a251c3df8ea1991cd9fe00f1ab0cfce18d7711 ]

The cited commit fixed a crash when cma_netevent_callback was called for
a cma_id while work on that id from a previous call had not yet started.
The work item was re-initialized in the second call, which corrupted the
work item currently in the work queue.

However, it left a problem when queue_work fails (because the item is
still pending in the work queue from a previous call). In this case,
cma_id_put() (which is called in the work handler) is never called.
This results in a userspace process hang (zombie process).

Fix this by calling cma_id_put() if queue_work fails.

Fixes: 45f5dcdd0497 ("RDMA/cma: Fix workqueue crash in cma_netevent_work_handler")
Link: https://patch.msgid.link/r/4f3640b501e48d0166f312a64fdadf72b059bd04.1747827103.git.leon@kernel.org
Signed-off-by: Jack Morgenstein <jackm@nvidia.com>
Signed-off-by: Feng Liu <feliu@nvidia.com>
Reviewed-by: Vlad Dumitrescu <vdumitrescu@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Sharath Srinivasan <sharath.srinivasan@oracle.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
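The fix follows the usual take-a-reference-per-queued-work convention:
the caller grabs a reference on behalf of the work item and gives it
straight back when queue_work() returns false because the item is still
pending. The following sketch shows only that pattern; the demo_* names
are hypothetical and this is not the rdma_cm code.
"
/*
 * Minimal sketch, not the rdma_cm code: demo_* names are made up. A
 * reference is taken for the work before queueing; if queue_work()
 * reports the work is already pending, the reference is returned at
 * once so it cannot leak.
 */
#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_id {
        refcount_t ref;
        struct work_struct work;   /* its handler calls demo_id_put() when done */
};

static void demo_id_put(struct demo_id *id)
{
        if (refcount_dec_and_test(&id->ref))
                kfree(id);
}

static void demo_netevent(struct demo_id *id, struct workqueue_struct *wq)
{
        refcount_inc(&id->ref);            /* reference owned by the queued work */
        if (!queue_work(wq, &id->work))
                demo_id_put(id);           /* already queued: give the reference back */
}
"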
2025-06-19RDMA/mlx5: Fix error flow upon firmware failure for RQ destructionPatrisious Haddad1-2/+28
[ Upstream commit 5d2ea5aebbb2f3ebde4403f9c55b2b057e5dd2d6 ]

Upon RQ destruction, if the firmware command fails (the RQ being the
last resource to be destroyed), some SW resources were already cleaned
up regardless of the failure. Now properly roll back the object to its
original state upon such failure, in order to avoid a use-after-free in
case someone tries to destroy the object again, which results in the
following kernel trace:

refcount_t: underflow; use-after-free.
WARNING: CPU: 0 PID: 37589 at lib/refcount.c:28 refcount_warn_saturate+0xf4/0x148
Modules linked in: rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) mlx5_ib(OE) rfkill mlx5_core(OE) mlxdevm(OE) ib_uverbs(OE) ib_core(OE) psample mlxfw(OE) mlx_compat(OE) macsec tls pci_hyperv_intf sunrpc vfat fat virtio_net net_failover failover fuse loop nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs crct10dif_ce ghash_ce sha2_ce sha256_arm64 sha1_ce virtio_console virtio_gpu virtio_blk virtio_dma_buf virtio_mmio dm_mirror dm_region_hash dm_log dm_mod xpmem(OE)
CPU: 0 UID: 0 PID: 37589 Comm: python3 Kdump: loaded Tainted: G OE ------- ---  6.12.0-54.el10.aarch64 #1
Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : refcount_warn_saturate+0xf4/0x148
lr : refcount_warn_saturate+0xf4/0x148
sp : ffff80008b81b7e0
x29: ffff80008b81b7e0 x28: ffff000133d51600 x27: 0000000000000001
x26: 0000000000000000 x25: 00000000ffffffea x24: ffff00010ae80f00
x23: ffff00010ae80f80 x22: ffff0000c66e5d08 x21: 0000000000000000
x20: ffff0000c66e0000 x19: ffff00010ae80340 x18: 0000000000000006
x17: 0000000000000000 x16: 0000000000000020 x15: ffff80008b81b37f
x14: 0000000000000000 x13: 2e656572662d7265 x12: ffff80008283ef78
x11: ffff80008257efd0 x10: ffff80008283efd0 x9 : ffff80008021ed90
x8 : 0000000000000001 x7 : 00000000000bffe8 x6 : c0000000ffff7fff
x5 : ffff0001fb8e3408 x4 : 0000000000000000 x3 : ffff800179993000
x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff000133d51600
Call trace:
 refcount_warn_saturate+0xf4/0x148
 mlx5_core_put_rsc+0x88/0xa0 [mlx5_ib]
 mlx5_core_destroy_rq_tracked+0x64/0x98 [mlx5_ib]
 mlx5_ib_destroy_wq+0x34/0x80 [mlx5_ib]
 ib_destroy_wq_user+0x30/0xc0 [ib_core]
 uverbs_free_wq+0x28/0x58 [ib_uverbs]
 destroy_hw_idr_uobject+0x34/0x78 [ib_uverbs]
 uverbs_destroy_uobject+0x48/0x240 [ib_uverbs]
 __uverbs_cleanup_ufile+0xd4/0x1a8 [ib_uverbs]
 uverbs_destroy_ufile_hw+0x48/0x120 [ib_uverbs]
 ib_uverbs_close+0x2c/0x100 [ib_uverbs]
 __fput+0xd8/0x2f0
 __fput_sync+0x50/0x70
 __arm64_sys_close+0x40/0x90
 invoke_syscall.constprop.0+0x74/0xd0
 do_el0_svc+0x48/0xe8
 el0_svc+0x44/0x1d0
 el0t_64_sync_handler+0x120/0x130
 el0t_64_sync+0x1a4/0x1a8

Fixes: e2013b212f9f ("net/mlx5_core: Add RQ and SQ event handling")
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Link: https://patch.msgid.link/3181433ccdd695c63560eeeb3f0c990961732101.1745839855.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
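The error flow described here is a rollback pattern: software teardown
done before the firmware destroy command must be undone when that
command fails, so a later destroy retry neither double-frees nor
underflows the reference count. The sketch below is an illustration of
that ordering only, with hypothetical demo_* names and a stubbed
firmware call; it is not the mlx5 driver code.
"
/*
 * Minimal sketch, not the mlx5 driver: demo_* names and the firmware
 * stub are made up. SW teardown performed before the firmware command
 * is rolled back if that command fails, so a destroy retry cannot
 * underflow the reference count.
 */
#include <linux/errno.h>
#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_rq {
        refcount_t ref;         /* SW tracking reference */
        bool tracked;           /* registered in the SW resource table */
};

/* Stand-in for the firmware destroy command; the real one may fail. */
static int demo_fw_destroy_rq(struct demo_rq *rq)
{
        return 0;               /* e.g. -EIO on a firmware error */
}

static int demo_destroy_rq_tracked(struct demo_rq *rq)
{
        int err;

        rq->tracked = false;    /* stop new lookups while destroying */

        err = demo_fw_destroy_rq(rq);
        if (err) {
                /* Firmware refused: restore the SW state so the object
                 * looks exactly as before and can be destroyed again
                 * without a double put.
                 */
                rq->tracked = true;
                return err;
        }

        /* Success: drop the tracking reference taken at creation. */
        if (refcount_dec_and_test(&rq->ref))
                kfree(rq);      /* assumes the object was kmalloc()ed */
        return 0;
}
"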