path: root/drivers/infiniband
Age | Commit message | Author | Files | Lines
12 days | RDMA/irdma: Return EINVAL for invalid arp index error | Tatyana Nikolova | 1 | -7/+10
[ Upstream commit 7221f581eefa79ead06e171044f393fb7ee22f87 ] When rdma_connect() fails due to an invalid arp index, user space rdma core reports ENOMEM which is confusing. Modify irdma_make_cm_node() to return the correct error code. Fixes: 146b9756f14c ("RDMA/irdma: Add connection manager") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | RDMA/irdma: Fix deadlock during netdev reset with active connections | Anil Samal | 1 | -1/+2
[ Upstream commit 6f52370970ac07d352a7af4089e55e0e6425f827 ] Resolve deadlock that occurs when user executes netdev reset while RDMA applications (e.g., rping) are active. The netdev reset causes ice driver to remove irdma auxiliary driver, triggering device_delete and subsequent client removal. During client removal, uverbs_client waits for QP reference count to reach zero while cma_client holds the final reference, creating circular dependency and indefinite wait in iWARP mode. Skip QP reference count wait during device reset to prevent deadlock. Fixes: c8f304d75f6c ("RDMA/irdma: Prevent QP use after free") Signed-off-by: Anil Samal <anil.samal@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
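A minimal sketch of the guard described above, assuming a driver-level reset flag (rf->reset) and the per-QP free_qp completion from the related fix; both names are assumptions, not a quote of the upstream diff:

    /* illustrative: skip the blocking wait when the device is being torn down by a reset */
    if (!iwqp->iwdev->rf->reset)
        wait_for_completion(&iwqp->free_qp);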
12 days | RDMA/irdma: Remove reset check from irdma_modify_qp_to_err() | Tatyana Nikolova | 1 | -2/+0
[ Upstream commit c45c6ebd693b944f1ffe429fdfb6cc1674c237be ] During reset, irdma_modify_qp() to error should be called to disconnect the QP. Without this fix, if not preceded by irdma_modify_qp() to error, the API call irdma_destroy_qp() gets stuck waiting for the QP refcount to go to zero, because the cm_node associated with this QP isn't disconnected. Fixes: 915cc7ac0f8e ("RDMA/irdma: Add miscellaneous utility definitions") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | RDMA/irdma: Clean up unnecessary dereference of event->cm_node | Ivan Barrera | 1 | -6/+6
[ Upstream commit b415399c9a024d574b65479636f0d4eb625b9abd ] The cm_node is available and the usage of cm_node and event->cm_node seems arbitrary. Clean up unnecessary dereference of event->cm_node. Fixes: 146b9756f14c ("RDMA/irdma: Add connection manager") Signed-off-by: Ivan Barrera <ivan.d.barrera@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | RDMA/irdma: Remove a NOP wait_event() in irdma_modify_qp_roce() | Tatyana Nikolova | 1 | -2/+0
[ Upstream commit 5e8f0239731a83753473b7aa91bda67bbdff5053 ] Remove a NOP wait_event() in irdma_modify_qp_roce() which is relevant for iWARP and likely a copy and paste artifact for RoCEv2. The wait event is for sending a reset on a TCP connection, after the reset has been requested in irdma_modify_qp(), which occurs only in iWarp mode. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | RDMA/irdma: Update ibqp state to error if QP is already in error state | Tatyana Nikolova | 1 | -0/+2
[ Upstream commit 8c1f19a2225cf37b3f8ab0b5a8a5322291cda620 ] In irdma_modify_qp() update ibqp state to error if the irdma QP is already in error state, otherwise the ibqp state which is visible to the consumer app remains stale. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | RDMA/irdma: Initialize free_qp completion before using it | Jacob Moroni | 1 | -1/+1
[ Upstream commit 11a95521fb93c91e2d4ef9d53dc80ef0a755549b ] In irdma_create_qp, if ib_copy_to_udata fails, it will call irdma_destroy_qp to clean up which will attempt to wait on the free_qp completion, which is not initialized yet. Fix this by initializing the completion before the ib_copy_to_udata call. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Jacob Moroni <jmoroni@google.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
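A minimal sketch of the reordering described above; the structure mirrors a typical create-verb flow and is not the verbatim upstream diff:

    /* initialize before any step that can fail and trigger irdma_destroy_qp() */
    init_completion(&iwqp->free_qp);

    if (udata && ib_copy_to_udata(udata, &uresp,
                                  min(sizeof(uresp), udata->outlen))) {
        /* the destroy path may now safely wait on free_qp */
        irdma_destroy_qp(&iwqp->ibqp, udata);
        return -EFAULT;
    }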
12 days | RDMA/rw: Fall back to direct SGE on MR pool exhaustion | Chuck Lever | 1 | -3/+18
[ Upstream commit 00da250c21b074ea9494c375d0117b69e5b1d0a4 ] When IOMMU passthrough mode is active, ib_dma_map_sgtable_attrs() produces no coalescing: each scatterlist page maps 1:1 to a DMA entry, so sgt.nents equals the raw page count. A 1 MB transfer yields 256 DMA entries. If that count exceeds the device's max_sgl_rd threshold (an optimization hint from mlx5 firmware), rdma_rw_io_needs_mr() steers the operation into the MR registration path. Each such operation consumes one or more MRs from a pool sized at max_rdma_ctxs -- roughly one MR per concurrent context. Under write-intensive workloads that issue many concurrent RDMA READs, the pool is rapidly exhausted, ib_mr_pool_get() returns NULL, and rdma_rw_init_one_mr() returns -EAGAIN. Upper layer protocols treat this as a fatal DMA mapping failure and tear down the connection. The max_sgl_rd check is a performance optimization, not a correctness requirement: the device can handle large SGE counts via direct posting, just less efficiently than with MR registration. When the MR pool cannot satisfy a request, falling back to the direct SGE (map_wrs) path avoids the connection reset while preserving the MR optimization for the common case where pool resources are available. Add a fallback in rdma_rw_ctx_init() so that -EAGAIN from rdma_rw_init_mr_wrs() triggers direct SGE posting instead of propagating the error. iWARP devices, which mandate MR registration for RDMA READs, and force_mr debug mode continue to treat -EAGAIN as terminal. Fixes: 00bd1439f464 ("RDMA/rw: Support threshold for registration vs scattering to local pages") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://patch.msgid.link/20260313194201.5818-2-cel@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
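A minimal sketch of the fallback described above, as it might look inside rdma_rw_ctx_init(); the helper names exist in drivers/infiniband/core/rw.c, but the parameter lists shown here are abbreviated assumptions:

    if (rdma_rw_io_needs_mr(qp->device, port_num, dir, sgt.nents)) {
        ret = rdma_rw_init_mr_wrs(ctx, qp, port_num, sgt.sgl, sgt.nents,
                                  0, remote_addr, rkey, dir);
        /* MR pool exhausted: fall back to direct SGE posting where MRs are
         * only an optimization (not on iWARP, not in force_mr debug mode) */
        if (ret == -EAGAIN && !rdma_protocol_iwarp(qp->device, port_num))
            ret = rdma_rw_init_map_wrs(ctx, qp, sgt.sgl, sgt.nents,
                                       0, remote_addr, rkey, dir);
    } else {
        ret = rdma_rw_init_map_wrs(ctx, qp, sgt.sgl, sgt.nents,
                                   0, remote_addr, rkey, dir);
    }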
2026-03-25 | RDMA/irdma: Fix kernel stack leak in irdma_create_user_ah() | Jason Gunthorpe | 1 | -1/+1
commit 74586c6da9ea222a61c98394f2fc0a604748438c upstream.
struct irdma_create_ah_resp {   // 8 bytes, no padding
	__u32 ah_id;            // offset 0 - SET (uresp.ah_id = ah->sc_ah.ah_info.ah_idx)
	__u8 rsvd[4];           // offset 4 - NEVER SET <- LEAK
};
rsvd[4]: 4 bytes of stack memory leaked unconditionally. Only ah_id is assigned before ib_respond_udata(). The reserved members of the structure were not zeroed. Cc: stable@vger.kernel.org Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://patch.msgid.link/3-v1-83e918d69e73+a9-rdma_udata_rc_jgg@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
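The usual fix for this class of leak is to zero-initialize the response before filling it in; a minimal sketch (not necessarily the exact upstream hunk):

    struct irdma_create_ah_resp uresp = {}; /* zeroes rsvd[] as well */

    uresp.ah_id = ah->sc_ah.ah_info.ah_idx;
    err = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), udata->outlen));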
2026-03-25 | IB/mthca: Add missed mthca_unmap_user_db() for mthca_create_srq() | Jason Gunthorpe | 1 | -2/+3
commit 117942ca43e2e3c3d121faae530989931b7f67e1 upstream. Fix a user triggerable leak on the system call failure path. Cc: stable@vger.kernel.org Fixes: ec34a922d243 ("[PATCH] IB/mthca: Add SRQ implementation") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://patch.msgid.link/2-v1-83e918d69e73+a9-rdma_udata_rc_jgg@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-03-04 | RDMA/umem: Fix double dma_buf_unpin in failure path | Jacob Moroni | 1 | -3/+1
[ Upstream commit 104016eb671e19709721c1b0048dd912dc2e96be ] In ib_umem_dmabuf_get_pinned_with_dma_device(), the call to ib_umem_dmabuf_map_pages() can fail. If this occurs, the dmabuf is immediately unpinned but the umem_dmabuf->pinned flag is still set. Then, when ib_umem_release() is called, it calls ib_umem_dmabuf_revoke() which will call dma_buf_unpin() again. Fix this by removing the immediate unpin upon failure and just let the ib_umem_release/revoke path handle it. This also ensures the proper unmap-unpin unwind ordering if the dmabuf_map_pages call happened to fail due to dma_resv_wait_timeout (and therefore has a non-NULL umem_dmabuf->sgt). Fixes: 1e4df4a21c5a ("RDMA/umem: Allow pinned dmabuf umem usage") Signed-off-by: Jacob Moroni <jmoroni@google.com> Link: https://patch.msgid.link/20260224234153.1207849-1-jmoroni@google.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/efa: Fix typo in efa_alloc_mr() | Jason Gunthorpe | 1 | -1/+1
[ Upstream commit f22c77ce49db0589103d96487dca56f5b2136362 ] The pattern is to check the entire driver request space, not just sizeof something unrelated. Fixes: 40909f664d27 ("RDMA/efa: Add EFA verbs implementation") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://patch.msgid.link/1-v1-83e918d69e73+a9-rdma_udata_rc_jgg@nvidia.com Acked-by: Michael Margolin <mrgolin@amazon.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/core: Fix stale RoCE GIDs during netdev events at registration | Jiri Pirko | 3 | -1/+49
[ Upstream commit 9af0feae8016ba58ad7ff784a903404986b395b1 ] RoCE GID entries become stale when netdev properties change during the IB device registration window. This is reproducible with a udev rule that sets a MAC address when a VF netdev appears:
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth4", \
  RUN+="/sbin/ip link set eth4 address 88:22:33:44:55:66"
After VF creation, show_gids displays GIDs derived from the original random MAC rather than the configured one. The root cause is a race between netdev event processing and device registration:
CPU 0 (driver)                               CPU 1 (udev/workqueue)
──────────────                               ──────────────────────
ib_register_device()
 ib_cache_setup_one()
  gid_table_setup_one()
   _gid_table_setup_one()    ← GID table allocated
   rdma_roce_rescan_device() ← GIDs populated with OLD MAC
                                             ip link set eth4 addr NEW_MAC
                                             NETDEV_CHANGEADDR queued
                                             netdevice_event_work_handler()
                                              ib_enum_all_roce_netdevs()
                                              ← Iterates DEVICE_REGISTERED
                                              ← Device NOT marked yet, SKIP!
 enable_device_and_get()
  xa_set_mark(DEVICE_REGISTERED)  ← Too late, event was lost
The netdev event handler uses ib_enum_all_roce_netdevs() which only iterates devices marked DEVICE_REGISTERED. However, this mark is set late in the registration process, after the GID cache is already populated. Events arriving in this window are silently dropped. Fix this by introducing a new xarray mark DEVICE_GID_UPDATES that is set immediately after the GID table is allocated and initialized. Use the new mark in the ib_enum_all_roce_netdevs() function to iterate devices instead of DEVICE_REGISTERED. This is safe because:
- After _gid_table_setup_one(), all required structures exist (port_data, immutable, cache.gid)
- The GID table mutex serializes concurrent access between the initial rescan and event handlers
- Event handlers correctly update stale GIDs even when racing with rescan
- The mark is cleared in ib_cache_cleanup_one() before teardown
This also fixes similar races for IP address events (inetaddr_event, inet6addr_event) which use the same enumeration path. Fixes: 0df91bb67334 ("RDMA/devices: Use xarray to store the client_data") Signed-off-by: Jiri Pirko <jiri@nvidia.com> Link: https://patch.msgid.link/20260127093839.126291-1-jiri@resnulli.us Reported-by: syzbot+881d65229ca4f9ae8c84@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=881d65229ca4f9ae8c84 Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
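A minimal sketch of the marking scheme described above; DEVICE_GID_UPDATES comes from the message, while the exact call sites and the handle_roce_gid_update() callback are illustrative assumptions:

    /* in ib_cache_setup_one(), once _gid_table_setup_one() has succeeded */
    xa_set_mark(&devices, device->index, DEVICE_GID_UPDATES);

    /* netdev/inet event handlers iterate this mark instead of DEVICE_REGISTERED */
    xa_for_each_marked (&devices, index, dev, DEVICE_GID_UPDATES)
        handle_roce_gid_update(dev);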
2026-03-04 | RDMA/rtrs-clt: For conn rejection use actual err number | Md Haris Iqbal | 1 | -2/+2
[ Upstream commit fc290630702b530c2969061e7ef0d869a5b6dc4f ] When the connection establishment request is rejected from the server side, then the actual error number sent back should be used. Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Link: https://patch.msgid.link/20260107161517.56357-10-haris.iqbal@ionos.com Reviewed-by: Grzegorz Prajsner <grzegorz.prajsner@ionos.com> Reviewed-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/uverbs: Add __GFP_NOWARN to ib_uverbs_unmarshall_recv() kmalloc | Yi Liu | 1 | -1/+1
[ Upstream commit 58b604dfc7bb753f91bc0ccd3fa705e14e6edfb4 ] wqe_size in ib_uverbs_unmarshall_recv() is user-provided and, although already validated, can still be large. Add __GFP_NOWARN to suppress memory allocation warnings for large sizes, consistent with the similar fix in ib_uverbs_post_send(). Fixes: 67cdb40ca444 ("[IB] uverbs: Implement more commands") Signed-off-by: Yi Liu <liuy22@mails.tsinghua.edu.cn> Link: https://patch.msgid.link/20260129094900.3517706-1-liuy22@mails.tsinghua.edu.cn Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/core: add rdma_rw_max_sge() helper for SQ sizing | Chuck Lever | 1 | -15/+38
[ Upstream commit afcae7d7b8a278a6c29e064f99e5bafd4ac1fb37 ] svc_rdma_accept() computes sc_sq_depth as the sum of rq_depth and the number of rdma_rw contexts (ctxts). This value is used to allocate the Send CQ and to initialize the sc_sq_avail credit pool. However, when the device uses memory registration for RDMA operations, rdma_rw_init_qp() inflates the QP's max_send_wr by a factor of three per context to account for REG and INV work requests. The Send CQ and credit pool remain sized for only one work request per context, causing Send Queue exhaustion under heavy NFS WRITE workloads. Introduce rdma_rw_max_sge() to compute the actual number of Send Queue entries required for a given number of rdma_rw contexts. Upper layer protocols call this helper before creating a Queue Pair so that their Send CQs and credit accounting match the QP's true capacity. Update svc_rdma_accept() to use rdma_rw_max_sge() when computing sc_sq_depth, ensuring the credit pool reflects the work requests that rdma_rw_init_qp() will reserve. Reviewed-by: Christoph Hellwig <hch@lst.de> Fixes: 00bd1439f464 ("RDMA/rw: Support threshold for registration vs scattering to local pages") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Link: https://patch.msgid.link/20260128005400.25147-5-cel@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/core: Fix a couple of obvious typos in comments | Chuck Lever | 1 | -1/+1
[ Upstream commit 0aa44595d61ca9e61239f321fec799518884feb3 ] Fix typos. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Link: https://lore.kernel.org/r/169643338101.8035.6826446669479247727.stgit@manet.1015granger.net Signed-off-by: Leon Romanovsky <leon@kernel.org> Stable-dep-of: afcae7d7b8a2 ("RDMA/core: add rdma_rw_max_sge() helper for SQ sizing") Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/uverbs: Validate wqe_size before using it in ib_uverbs_post_send | Yi Liu | 1 | -1/+4
[ Upstream commit 1956f0a74ccf5dc9c3ef717f2985c3ed3400aab0 ] ib_uverbs_post_send() uses cmd.wqe_size from userspace without any validation before passing it to kmalloc() and using the allocated buffer as struct ib_uverbs_send_wr. If a user provides a small wqe_size value (e.g., 1), kmalloc() will succeed, but subsequent accesses to user_wr->opcode, user_wr->num_sge, and other fields will read beyond the allocated buffer, resulting in an out-of-bounds read from kernel heap memory. This could potentially leak sensitive kernel information to userspace. Additionally, providing an excessively large wqe_size can trigger a WARNING in the memory allocation path, as reported by syzkaller. This is inconsistent with ib_uverbs_unmarshall_recv() which properly validates that wqe_size >= sizeof(struct ib_uverbs_recv_wr) before proceeding. Add the same validation for ib_uverbs_post_send() to ensure wqe_size is at least sizeof(struct ib_uverbs_send_wr). Fixes: c3bea3d2dc53 ("RDMA/uverbs: Use the iterator for ib_uverbs_unmarshall_recv()") Signed-off-by: Yi Liu <liuy22@mails.tsinghua.edu.cn> Link: https://patch.msgid.link/20260122142900.2356276-2-liuy22@mails.tsinghua.edu.cn Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
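A minimal sketch of the added check, mirroring the validation already present in ib_uverbs_unmarshall_recv(); the placement and surrounding code are assumptions:

    if (cmd.wqe_size < sizeof(struct ib_uverbs_send_wr))
        return -EINVAL;

    user_wr = kmalloc(cmd.wqe_size, GFP_KERNEL);
    if (!user_wr)
        return -ENOMEM;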
2026-03-04 | RDMA/rxe: Fix double free in rxe_srq_from_init | Jiasheng Jiang | 1 | -0/+3
[ Upstream commit 0beefd0e15d962f497aad750b2d5e9c3570b66d1 ] In rxe_srq_from_init(), the queue pointer 'q' is assigned to 'srq->rq.queue' before copying the SRQ number to user space. If copy_to_user() fails, the function calls rxe_queue_cleanup() to free the queue, but leaves the now-invalid pointer in 'srq->rq.queue'. The caller of rxe_srq_from_init() (rxe_create_srq) eventually calls rxe_srq_cleanup() upon receiving the error, which triggers a second rxe_queue_cleanup() on the same memory, leading to a double free. The call trace looks like this: kmem_cache_free+0x.../0x... rxe_queue_cleanup+0x1a/0x30 [rdma_rxe] rxe_srq_cleanup+0x42/0x60 [rdma_rxe] rxe_elem_release+0x31/0x70 [rdma_rxe] rxe_create_srq+0x12b/0x1a0 [rdma_rxe] ib_create_srq_user+0x9a/0x150 [ib_core] Fix this by moving 'srq->rq.queue = q' after copy_to_user. Fixes: aae0484e15f0 ("IB/rxe: avoid srq memory leak") Signed-off-by: Jiasheng Jiang <jiashengjiangcool@gmail.com> Link: https://patch.msgid.link/20260112015412.29458-1-jiashengjiangcool@gmail.com Reviewed-by: Zhu Yanjun <yanjun.Zhu@linux.dev> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
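A minimal sketch of the reordering described above; the copy_to_user() details follow the message's description of rxe_srq_from_init() and may differ slightly from the real function:

    if (uresp) {
        if (copy_to_user(&uresp->srq_num, &srq->srq_num,
                         sizeof(uresp->srq_num))) {
            rxe_queue_cleanup(q);
            return -EFAULT;
        }
    }

    /* publish the queue pointer only after the copy can no longer fail */
    srq->rq.queue = q;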
2026-03-04 | RDMA/rtrs-srv: fix SG mapping | Roman Penyaev | 1 | -5/+20
[ Upstream commit 83835f7c07b523c7ca2a5ad0a511670b5810539e ] This fixes the following error on the server side:
RTRS server session allocation failed: -EINVAL
It is caused by the caller of `ib_dma_map_sg()`, which does not expect fewer mapped entries than requested, even though that is perfectly normal and can easily be reproduced on a machine with the IOMMU enabled. The fix is to treat any positive number of mapped sg entries as a successful mapping and to cache the DMA addresses by traversing the modified SG table. Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality") Signed-off-by: Roman Penyaev <r.peniaev@gmail.com> Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Grzegorz Prajsner <grzegorz.prajsner@ionos.com> Link: https://patch.msgid.link/20260107161517.56357-2-haris.iqbal@ionos.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/rtrs-srv: Correct the checking of ib_map_mr_sg | Guoqing Jiang | 1 | -1/+1
[ Upstream commit 102d2f70ec0999a5cde181f1ccbe8a81cba45b10 ] We should check against nr_sgt; also, the only successful case is that all sg elements are mapped, so make that explicit. Acked-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev> Link: https://lore.kernel.org/r/20221117101945.6317-4-guoqing.jiang@linux.dev Signed-off-by: Leon Romanovsky <leon@kernel.org> Stable-dep-of: 83835f7c07b5 ("RDMA/rtrs-srv: fix SG mapping") Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/rtrs-srv: Refactor the handling of failure case in map_cont_bufs | Guoqing Jiang | 1 | -27/+20
[ Upstream commit 0f597ac618d04beb9de997fda59a29c9d3818fb2 ] Let's call unmap_cont_bufs when a failure happens, and only update mrs_num after everything is settled, which means we can remove 'mri'. Acked-by: Md Haris Iqbal <haris.iqbal@ionos.com> Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev> Link: https://lore.kernel.org/r/20221117101945.6317-3-guoqing.jiang@linux.dev Signed-off-by: Leon Romanovsky <leon@kernel.org> Stable-dep-of: 83835f7c07b5 ("RDMA/rtrs-srv: fix SG mapping") Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/hns: Notify ULP of remaining soft-WCs during reset | Chengchang Tang | 1 | -0/+23
[ Upstream commit 0789f929900d85b80b343c5f04f8b9444e991384 ] During a reset, software-generated WCs cannot be reported via interrupts. This may cause the ULP to miss some WCs. To avoid this, add a check in the CQ arm process: if a hardware reset has occurred and there are still unreported soft-WCs, notify the ULP to handle the remaining WCs, thereby preventing any loss of completions. Fixes: 626903e9355b ("RDMA/hns: Add support for reporting wc as software mode") Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20260104064057.1582216-5-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/hns: Fix WQ_MEM_RECLAIM warning | Chengchang Tang | 1 | -1/+2
[ Upstream commit c0a26bbd3f99b7b03f072e3409aff4e6ec8af6f6 ] When sunrpc is used, if a reset is triggered, our wq may lead to the following trace:
workqueue: WQ_MEM_RECLAIM xprtiod:xprt_rdma_connect_worker [rpcrdma] is flushing !WQ_MEM_RECLAIM hns_roce_irq_workq:flush_work_handle [hns_roce_hw_v2]
WARNING: CPU: 0 PID: 8250 at kernel/workqueue.c:2644 check_flush_dependency+0xe0/0x144
Call trace:
 check_flush_dependency+0xe0/0x144
 start_flush_work.constprop.0+0x1d0/0x2f0
 __flush_work.isra.0+0x40/0xb0
 flush_work+0x14/0x30
 hns_roce_v2_destroy_qp+0xac/0x1e0 [hns_roce_hw_v2]
 ib_destroy_qp_user+0x9c/0x2b4
 rdma_destroy_qp+0x34/0xb0
 rpcrdma_ep_destroy+0x28/0xcc [rpcrdma]
 rpcrdma_ep_put+0x74/0xb4 [rpcrdma]
 rpcrdma_xprt_disconnect+0x1d8/0x260 [rpcrdma]
 xprt_rdma_connect_worker+0xc0/0x120 [rpcrdma]
 process_one_work+0x1cc/0x4d0
 worker_thread+0x154/0x414
 kthread+0x104/0x144
 ret_from_fork+0x10/0x18
Since QP destruction frees memory, this wq should have the WQ_MEM_RECLAIM flag. Fixes: ffd541d45726 ("RDMA/hns: Add the workqueue framework for flush cqe handler") Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20260104064057.1582216-2-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
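A minimal sketch of the flag change implied above; the workqueue name, allocation helper, and other flags are assumptions:

    /* this wq is flushed from QP destroy, which can run in memory-reclaim context */
    hr_dev->irq_workq = alloc_workqueue("hns_roce_irq_workq", WQ_MEM_RECLAIM, 0);
    if (!hr_dev->irq_workq)
        return -ENOMEM;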
2026-03-04 | IB/cache: update gid cache on client reregister event | Etienne AUJAMES | 1 | -1/+2
[ Upstream commit ddd6c8c873e912cb1ead79def54de5e24ff71c80 ] Some HCAs (e.g. ConnectX-4) do not trigger an IB_EVENT_GID_CHANGE on a subnet prefix update from the SM (PortInfo). Since commit d58c23c92548 ("IB/core: Only update PKEY and GID caches on respective events"), the GID cache is updated exclusively on IB_EVENT_GID_CHANGE. If this event is not emitted, the subnet prefix in the IPoIB interface's hardware address remains set to its default value (0xfe80000000000000). rdma_bind_addr() then fails because it relies on the hardware address to find the port GID (subnet_prefix + port GUID). This patch fixes the issue by updating the GID cache on the IB_EVENT_CLIENT_REREGISTER event (emitted on PortInfo::ClientReregister=1). Fixes: d58c23c92548 ("IB/core: Only update PKEY and GID caches on respective events") Signed-off-by: Etienne AUJAMES <eaujames@ddn.com> Link: https://patch.msgid.link/aVUfsO58QIDn5bGX@eaujamesFR0130 Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/rtrs: server: remove dead code | Honggang LI | 1 | -7/+1
[ Upstream commit a3572bdc3a028ca47f77d7166ac95b719cf77d50 ] As rkey had been initialized to zero, the WARN_ON_ONCE should never be triggered. Remove it. Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality") Signed-off-by: Honggang LI <honggangli@163.com> Link: https://patch.msgid.link/20251224023819.138846-1-honggangli@163.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | RDMA/umad: Reject negative data_len in ib_umad_write | YunJe Shin | 1 | -2/+6
commit 5551b02fdbfd85a325bb857f3a8f9c9f33397ed2 upstream. ib_umad_write computes data_len from user-controlled count and the MAD header sizes. With a mismatched user MAD header size and RMPP header length, data_len can become negative and reach ib_create_send_mad(). This can make the padding calculation exceed the segment size and trigger an out-of-bounds memset in alloc_send_rmpp_list(). Add an explicit check to reject negative data_len before creating the send buffer. KASAN splat: [ 211.363464] BUG: KASAN: slab-out-of-bounds in ib_create_send_mad+0xa01/0x11b0 [ 211.364077] Write of size 220 at addr ffff88800c3fa1f8 by task spray_thread/102 [ 211.365867] ib_create_send_mad+0xa01/0x11b0 [ 211.365887] ib_umad_write+0x853/0x1c80 Fixes: 2be8e3ee8efd ("IB/umad: Add P_Key index support") Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr> Link: https://patch.msgid.link/20260203100628.1215408-1-ioerts@kookmin.ac.kr Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
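A minimal sketch of the rejection described above; the local variable names are illustrative, not the verbatim upstream diff:

    int data_len = count - hdr_size(file) - rmpp_hdr_len; /* both user-influenced */

    if (data_len < 0) {
        ret = -EINVAL;
        goto err;
    }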
2026-03-04 | RDMA/siw: Fix potential NULL pointer dereference in header processing | YunJe Shin | 1 | -1/+2
commit 14ab3da122bd18920ad57428f6cf4fade8385142 upstream. If siw_get_hdr() returns -EINVAL before set_rx_fpdu_context(), qp->rx_fpdu can be NULL. The error path in siw_tcp_rx_data() dereferences qp->rx_fpdu->more_ddp_segs without checking, which may lead to a NULL pointer deref. Only check more_ddp_segs when rx_fpdu is present. KASAN splat: [ 101.384271] KASAN: null-ptr-deref in range [0x00000000000000c0-0x00000000000000c7] [ 101.385869] RIP: 0010:siw_tcp_rx_data+0x13ad/0x1e50 Fixes: 8b6a361b8c48 ("rdma/siw: receive path") Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr> Link: https://patch.msgid.link/20260204092546.489842-1-ioerts@kookmin.ac.kr Acked-by: Bernard Metzler <bernard.metzler@linux.dev> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-11 | RDMA/core: Fix "KASAN: slab-use-after-free Read in ib_register_device" problem | Zhu Yanjun | 1 | -0/+5
commit d0706bfd3ee40923c001c6827b786a309e2a8713 upstream. Call Trace: __dump_stack lib/dump_stack.c:94 [inline] dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120 print_address_description mm/kasan/report.c:408 [inline] print_report+0xc3/0x670 mm/kasan/report.c:521 kasan_report+0xe0/0x110 mm/kasan/report.c:634 strlen+0x93/0xa0 lib/string.c:420 __fortify_strlen include/linux/fortify-string.h:268 [inline] get_kobj_path_length lib/kobject.c:118 [inline] kobject_get_path+0x3f/0x2a0 lib/kobject.c:158 kobject_uevent_env+0x289/0x1870 lib/kobject_uevent.c:545 ib_register_device drivers/infiniband/core/device.c:1472 [inline] ib_register_device+0x8cf/0xe00 drivers/infiniband/core/device.c:1393 rxe_register_device+0x275/0x320 drivers/infiniband/sw/rxe/rxe_verbs.c:1552 rxe_net_add+0x8e/0xe0 drivers/infiniband/sw/rxe/rxe_net.c:550 rxe_newlink+0x70/0x190 drivers/infiniband/sw/rxe/rxe.c:225 nldev_newlink+0x3a3/0x680 drivers/infiniband/core/nldev.c:1796 rdma_nl_rcv_msg+0x387/0x6e0 drivers/infiniband/core/netlink.c:195 rdma_nl_rcv_skb.constprop.0.isra.0+0x2e5/0x450 netlink_unicast_kernel net/netlink/af_netlink.c:1313 [inline] netlink_unicast+0x53a/0x7f0 net/netlink/af_netlink.c:1339 netlink_sendmsg+0x8d1/0xdd0 net/netlink/af_netlink.c:1883 sock_sendmsg_nosec net/socket.c:712 [inline] __sock_sendmsg net/socket.c:727 [inline] ____sys_sendmsg+0xa95/0xc70 net/socket.c:2566 ___sys_sendmsg+0x134/0x1d0 net/socket.c:2620 __sys_sendmsg+0x16d/0x220 net/socket.c:2652 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f This problem is similar to the problem that the commit 1d6a9e7449e2 ("RDMA/core: Fix use-after-free when rename device name") fixes. The root cause is: the function ib_device_rename() renames the name with lock. But in the function kobject_uevent(), this name is accessed without lock protection at the same time. The solution is to add the lock protection when this name is accessed in the function kobject_uevent(). Fixes: 779e0bf47632 ("RDMA/core: Do not indicate device ready when device enablement fails") Link: https://patch.msgid.link/r/20250506151008.75701-1-yanjun.zhu@linux.dev Reported-by: syzbot+e2ce9e275ecc70a30b72@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=e2ce9e275ecc70a30b72 Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org> [ Ajay: Modified to apply on v5.10.y-v6.6.y ib_device_notify_register() not present in v5.10.y-v6.6.y, so directly added lock for kobject_uevent() ] Signed-off-by: Ajay Kaher <ajay.kaher@broadcom.com> Signed-off-by: Shivani Agarwal <shivani.agarwal@broadcom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-11 | RDMA/cm: Fix leaking the multicast GID table reference | Jason Gunthorpe | 1 | -0/+3
commit 57f3cb6c84159d12ba343574df2115fb18dd83ca upstream. If the CM ID is destroyed while the CM event for multicast creating is still queued the cancel_work_sync() will prevent the work from running which also prevents destroying the ah_attr. This leaks a refcount and triggers a WARN: GID entry ref leak for dev syz1 index 2 ref=573 WARNING: CPU: 1 PID: 655 at drivers/infiniband/core/cache.c:809 release_gid_table drivers/infiniband/core/cache.c:806 [inline] WARNING: CPU: 1 PID: 655 at drivers/infiniband/core/cache.c:809 gid_table_release_one+0x284/0x3cc drivers/infiniband/core/cache.c:886 Destroy the ah_attr after canceling the work, it is safe to call this twice. Link: https://patch.msgid.link/r/0-v1-4285d070a6b2+20a-rdma_mc_gid_leak_syz_jgg@nvidia.com Cc: stable@vger.kernel.org Fixes: fe454dc31e84 ("RDMA/ucma: Fix use-after-free bug in ucma_create_uevent") Reported-by: syzbot+b0da83a6c0e2e2bddbd4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/68232e7b.050a0220.f2294.09f6.GAE@google.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-11 | RDMA/core: Check for the presence of LS_NLA_TYPE_DGID correctly | Jason Gunthorpe | 1 | -23/+10
commit a7b8e876e0ef0232b8076972c57ce9a7286b47ca upstream. The netlink response for RDMA_NL_LS_OP_IP_RESOLVE should always have a LS_NLA_TYPE_DGID attribute, it is invalid if it does not. Use the nl parsing logic properly and call nla_parse_deprecated() to fill the nlattrs array and then directly index that array to get the data for the DGID. Just fail if it is NULL. Remove the for loop searching for the nla, and squash the validation and parsing into one function. Fixes an uninitialized read from the stack triggered by userspace if it does not provide the DGID to a kernel initiated RDMA_NL_LS_OP_IP_RESOLVE query. BUG: KMSAN: uninit-value in hex_byte_pack include/linux/hex.h:13 [inline] BUG: KMSAN: uninit-value in ip6_string+0xef4/0x13a0 lib/vsprintf.c:1490 hex_byte_pack include/linux/hex.h:13 [inline] ip6_string+0xef4/0x13a0 lib/vsprintf.c:1490 ip6_addr_string+0x18a/0x3e0 lib/vsprintf.c:1509 ip_addr_string+0x245/0xee0 lib/vsprintf.c:1633 pointer+0xc09/0x1bd0 lib/vsprintf.c:2542 vsnprintf+0xf8a/0x1bd0 lib/vsprintf.c:2930 vprintk_store+0x3ae/0x1530 kernel/printk/printk.c:2279 vprintk_emit+0x307/0xcd0 kernel/printk/printk.c:2426 vprintk_default+0x3f/0x50 kernel/printk/printk.c:2465 vprintk+0x36/0x50 kernel/printk/printk_safe.c:82 _printk+0x17e/0x1b0 kernel/printk/printk.c:2475 ib_nl_process_good_ip_rsep drivers/infiniband/core/addr.c:128 [inline] ib_nl_handle_ip_res_resp+0x963/0x9d0 drivers/infiniband/core/addr.c:141 rdma_nl_rcv_msg drivers/infiniband/core/netlink.c:-1 [inline] rdma_nl_rcv_skb drivers/infiniband/core/netlink.c:239 [inline] rdma_nl_rcv+0xefa/0x11c0 drivers/infiniband/core/netlink.c:259 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline] netlink_unicast+0xf04/0x12b0 net/netlink/af_netlink.c:1346 netlink_sendmsg+0x10b3/0x1250 net/netlink/af_netlink.c:1896 sock_sendmsg_nosec net/socket.c:714 [inline] __sock_sendmsg+0x333/0x3d0 net/socket.c:729 ____sys_sendmsg+0x7e0/0xd80 net/socket.c:2617 ___sys_sendmsg+0x271/0x3b0 net/socket.c:2671 __sys_sendmsg+0x1aa/0x300 net/socket.c:2703 __compat_sys_sendmsg net/compat.c:346 [inline] __do_compat_sys_sendmsg net/compat.c:353 [inline] __se_compat_sys_sendmsg net/compat.c:350 [inline] __ia32_compat_sys_sendmsg+0xa4/0x100 net/compat.c:350 ia32_sys_call+0x3f6c/0x4310 arch/x86/include/generated/asm/syscalls_32.h:371 do_syscall_32_irqs_on arch/x86/entry/syscall_32.c:83 [inline] __do_fast_syscall_32+0xb0/0x150 arch/x86/entry/syscall_32.c:306 do_fast_syscall_32+0x38/0x80 arch/x86/entry/syscall_32.c:331 do_SYSENTER_32+0x1f/0x30 arch/x86/entry/syscall_32.c:3 Link: https://patch.msgid.link/r/0-v1-3fbaef094271+2cf-rdma_op_ip_rslv_syz_jgg@nvidia.com Cc: stable@vger.kernel.org Fixes: ae43f8286730 ("IB/core: Add IP to GID netlink offload") Reported-by: syzbot+938fcd548c303fe33c1a@syzkaller.appspotmail.com Closes: https://lore.kernel.org/r/68dc3dac.a00a0220.102ee.004f.GAE@google.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-11 | RDMA/bnxt_re: fix dma_free_coherent() pointer | Thomas Fourier | 1 | -3/+1
[ Upstream commit fcd431a9627f272b4c0bec445eba365fe2232a94 ] dma_alloc_coherent() allocates a DMA-mapped buffer, pbl->pg_arr[i]. The same buffer, and not a page-aligned copy of the pointer, should be passed to dma_free_coherent(). Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Link: https://patch.msgid.link/20251230085121.8023-2-fourier.thomas@gmail.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
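A minimal sketch of the corrected call, assuming the usual bnxt_re pbl bookkeeping fields (pg_arr/pg_map_arr); not the verbatim hunk:

    /* free exactly the CPU pointer returned by dma_alloc_coherent() */
    dma_free_coherent(&pdev->dev, pbl->pg_size,
                      pbl->pg_arr[i], pbl->pg_map_arr[i]);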
2026-01-11 | RDMA/rtrs: Fix clt_path::max_pages_per_mr calculation | Honggang LI | 1 | -0/+1
[ Upstream commit 43bd09d5b750f700499ae8ec45fd41a4c48673e6 ] If device max_mr_size bits in the range [mr_page_shift+31:mr_page_shift] are zero, the `min3` function will set clt_path::max_pages_per_mr to zero. `alloc_path_reqs` will pass zero, which is invalid, as the third parameter to `ib_alloc_mr`. Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality") Signed-off-by: Honggang LI <honggangli@163.com> Link: https://patch.msgid.link/20251229025617.13241-1-honggangli@163.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/bnxt_re: Fix to use correct page size for PDE table | Kalesh AP | 1 | -2/+2
[ Upstream commit 3d70e0fb0f289b0c778041c5bb04d099e1aa7c1c ] In bnxt_qplib_alloc_init_hwq(), while allocating memory for the PDE table, the driver incorrectly uses the "pg_size" value passed to the function. Fix it to use the correct value of 4K. Also fix the allocation size for the PBL table. Fixes: 0c4dcd602817 ("RDMA/bnxt_re: Refactor hardware queue memory allocation") Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Link: https://patch.msgid.link/20251223131855.145955-1-kalesh-anakkur.purayil@broadcom.com Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/bnxt_re: Fix IB_SEND_IP_CSUM handling in post_send | Alok Tiwari | 1 | -6/+1
[ Upstream commit f01765a2361323e78e3d91b1cb1d5527a83c5cf7 ] The bnxt_re SEND path checks wr->send_flags to enable features such as IP checksum offload. However, send_flags is a bitmask and may contain multiple flags (e.g. IB_SEND_SIGNALED | IB_SEND_IP_CSUM), while the existing code uses a switch() statement that only matches when send_flags is exactly IB_SEND_IP_CSUM. As a result, checksum offload is not enabled when additional SEND flags are present. Replace the switch() with a bitmask test: if (wr->send_flags & IB_SEND_IP_CSUM) This ensures IP checksum offload is enabled correctly when multiple SEND flags are used. Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Link: https://patch.msgid.link/20251219093308.2415620-1-alok.a.tiwari@oracle.com Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/bnxt_re: Fix incorrect BAR check in bnxt_qplib_map_creq_db() | Alok Tiwari | 1 | -1/+1
[ Upstream commit 145a417a39d7efbc881f52e829817376972b278c ] RCFW_COMM_CONS_PCI_BAR_REGION is defined as BAR 2, so checking !creq_db->reg.bar_id is incorrect and always false. pci_resource_start() returns the BAR base address, and a value of 0 indicates that the BAR is unassigned. Update the condition to test bar_base == 0 instead. This ensures the driver detects and logs an error for an unassigned RCFW communication BAR. Fixes: cee0c7bba486 ("RDMA/bnxt_re: Refactor command queue management code") Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Link: https://patch.msgid.link/20251217100158.752504-1-alok.a.tiwari@oracle.com Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/core: Fix logic error in ib_get_gids_from_rdma_hdr() | Jang Ingyu | 1 | -1/+1
[ Upstream commit 8aaa848eaddd9ef8680fc6aafbd3a0646da5df40 ] Fix missing comparison operator for RDMA_NETWORK_ROCE_V1 in the conditional statement. The constant was used directly instead of being compared with net_type, causing the condition to always evaluate to true. Fixes: 1c15b4f2a42f ("RDMA/core: Modify enum ib_gid_type and enum rdma_network_type") Signed-off-by: Jang Ingyu <ingyujang25@korea.ac.kr> Link: https://patch.msgid.link/20251219041508.1725947-1-ingyujang25@korea.ac.kr Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
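A minimal sketch of the corrected condition; the neighbouring operands and the branch body are assumptions, the point being that the bare constant needed a "net_type ==" comparison:

    /* before: ... || RDMA_NETWORK_ROCE_V1   (a non-zero constant, so always true) */
    if (net_type == RDMA_NETWORK_IPV6 || net_type == RDMA_NETWORK_IB ||
        net_type == RDMA_NETWORK_ROCE_V1) {
        memcpy(dgid, &hdr->ibgrh.dgid, sizeof(*dgid));
        memcpy(sgid, &hdr->ibgrh.sgid, sizeof(*sgid));
        return 0;
    }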
2026-01-11 | RDMA/efa: Remove possible negative shift | Michael Margolin | 1 | -4/+0
[ Upstream commit 85463eb6a46caf2f1e0e1a6d0731f2f3bab17780 ] The page size used for the device might in some cases be smaller than PAGE_SIZE, which results in a negative shift when calculating the number of host pages per PAGE_SIZE for a debug log. Remove the debug line together with the calculation. Fixes: 40909f664d27 ("RDMA/efa: Add EFA verbs implementation") Link: https://patch.msgid.link/r/20251210173656.8180-1-mrgolin@amazon.com Reviewed-by: Tom Sela <tomsela@amazon.com> Reviewed-by: Yonatan Nachum <ynachum@amazon.com> Signed-off-by: Michael Margolin <mrgolin@amazon.com> Reviewed-by: Gal Pressman <gal.pressman@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/irdma: avoid invalid read in irdma_net_event | Michal Schmidt | 1 | -1/+2
[ Upstream commit 6f05611728e9d0ab024832a4f1abb74a5f5d0bb0 ] irdma_net_event() should not dereference anything from "neigh" (alias "ptr") until it has checked that the event is NETEVENT_NEIGH_UPDATE. Other events come with different structures pointed to by "ptr" and they may be smaller than struct neighbour. Move the read of neigh->dev under the NETEVENT_NEIGH_UPDATE case. The bug is mostly harmless, but it triggers KASAN on debug kernels: BUG: KASAN: stack-out-of-bounds in irdma_net_event+0x32e/0x3b0 [irdma] Read of size 8 at addr ffffc900075e07f0 by task kworker/27:2/542554 CPU: 27 PID: 542554 Comm: kworker/27:2 Kdump: loaded Not tainted 5.14.0-630.el9.x86_64+debug #1 Hardware name: [...] Workqueue: events rt6_probe_deferred Call Trace: <IRQ> dump_stack_lvl+0x60/0xb0 print_address_description.constprop.0+0x2c/0x3f0 print_report+0xb4/0x270 kasan_report+0x92/0xc0 irdma_net_event+0x32e/0x3b0 [irdma] notifier_call_chain+0x9e/0x180 atomic_notifier_call_chain+0x5c/0x110 rt6_do_redirect+0xb91/0x1080 tcp_v6_err+0xe9b/0x13e0 icmpv6_notify+0x2b2/0x630 ndisc_redirect_rcv+0x328/0x530 icmpv6_rcv+0xc16/0x1360 ip6_protocol_deliver_rcu+0xb84/0x12e0 ip6_input_finish+0x117/0x240 ip6_input+0xc4/0x370 ipv6_rcv+0x420/0x7d0 __netif_receive_skb_one_core+0x118/0x1b0 process_backlog+0xd1/0x5d0 __napi_poll.constprop.0+0xa3/0x440 net_rx_action+0x78a/0xba0 handle_softirqs+0x2d4/0x9c0 do_softirq+0xad/0xe0 </IRQ> Fixes: 915cc7ac0f8e ("RDMA/irdma: Add miscellaneous utility definitions") Link: https://patch.msgid.link/r/20251127143150.121099-1-mschmidt@redhat.com Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
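A minimal sketch of the restructuring described above; irdma_handle_neigh_update() is an illustrative placeholder for the existing handling code:

    switch (event) {
    case NETEVENT_NEIGH_UPDATE: {
        /* ptr is only known to point at a struct neighbour for this event */
        struct neighbour *neigh = ptr;
        struct net_device *netdev = neigh->dev;

        irdma_handle_neigh_update(netdev, neigh);
        break;
    }
    default:
        break;
    }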
2026-01-11 | RDMA/irdma: Fix data race in irdma_free_pble | Krzysztof Czurylo | 1 | -2/+4
[ Upstream commit 81f44409fb4f027d1e6d54edbeba5156ad94b214 ] Protects pble_rsrc counters with mutex to prevent data race. Fixes the following data race in irdma_free_pble reported by KCSAN: BUG: KCSAN: data-race in irdma_free_pble [irdma] / irdma_free_pble [irdma] write to 0xffff91430baa0078 of 8 bytes by task 16956 on cpu 5: irdma_free_pble+0x3b/0xb0 [irdma] irdma_dereg_mr+0x108/0x110 [irdma] ib_dereg_mr_user+0x74/0x160 [ib_core] uverbs_free_mr+0x26/0x30 [ib_uverbs] destroy_hw_idr_uobject+0x4a/0x90 [ib_uverbs] uverbs_destroy_uobject+0x7b/0x330 [ib_uverbs] uobj_destroy+0x61/0xb0 [ib_uverbs] ib_uverbs_run_method+0x1f2/0x380 [ib_uverbs] ib_uverbs_cmd_verbs+0x365/0x440 [ib_uverbs] ib_uverbs_ioctl+0x111/0x190 [ib_uverbs] __x64_sys_ioctl+0xc9/0x100 do_syscall_64+0x44/0xa0 entry_SYSCALL_64_after_hwframe+0x6e/0xd8 read to 0xffff91430baa0078 of 8 bytes by task 16953 on cpu 2: irdma_free_pble+0x23/0xb0 [irdma] irdma_dereg_mr+0x108/0x110 [irdma] ib_dereg_mr_user+0x74/0x160 [ib_core] uverbs_free_mr+0x26/0x30 [ib_uverbs] destroy_hw_idr_uobject+0x4a/0x90 [ib_uverbs] uverbs_destroy_uobject+0x7b/0x330 [ib_uverbs] uobj_destroy+0x61/0xb0 [ib_uverbs] ib_uverbs_run_method+0x1f2/0x380 [ib_uverbs] ib_uverbs_cmd_verbs+0x365/0x440 [ib_uverbs] ib_uverbs_ioctl+0x111/0x190 [ib_uverbs] __x64_sys_ioctl+0xc9/0x100 do_syscall_64+0x44/0xa0 entry_SYSCALL_64_after_hwframe+0x6e/0xd8 value changed: 0x0000000000005a62 -> 0x0000000000005a68 Fixes: e8c4dbc2fcac ("RDMA/irdma: Add PBLE resource manager") Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Link: https://patch.msgid.link/20251125025350.180-3-tatyana.e.nikolova@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/irdma: Fix data race in irdma_sc_ccq_arm | Krzysztof Czurylo | 1 | -0/+3
[ Upstream commit a521928164433de44fed5aaf5f49aeb3f1fb96f5 ] Adds a lock around irdma_sc_ccq_arm body to prevent inter-thread data race. Fixes data race in irdma_sc_ccq_arm() reported by KCSAN: BUG: KCSAN: data-race in irdma_sc_ccq_arm [irdma] / irdma_sc_ccq_arm [irdma] read to 0xffff9d51b4034220 of 8 bytes by task 255 on cpu 11: irdma_sc_ccq_arm+0x36/0xd0 [irdma] irdma_cqp_ce_handler+0x300/0x310 [irdma] cqp_compl_worker+0x2a/0x40 [irdma] process_one_work+0x402/0x7e0 worker_thread+0xb3/0x6d0 kthread+0x178/0x1a0 ret_from_fork+0x2c/0x50 write to 0xffff9d51b4034220 of 8 bytes by task 89 on cpu 3: irdma_sc_ccq_arm+0x7e/0xd0 [irdma] irdma_cqp_ce_handler+0x300/0x310 [irdma] irdma_wait_event+0xd4/0x3e0 [irdma] irdma_handle_cqp_op+0xa5/0x220 [irdma] irdma_hw_flush_wqes+0xb1/0x300 [irdma] irdma_flush_wqes+0x22e/0x3a0 [irdma] irdma_cm_disconn_true+0x4c7/0x5d0 [irdma] irdma_disconnect_worker+0x35/0x50 [irdma] process_one_work+0x402/0x7e0 worker_thread+0xb3/0x6d0 kthread+0x178/0x1a0 ret_from_fork+0x2c/0x50 value changed: 0x0000000000024000 -> 0x0000000000034000 Fixes: 3f49d6842569 ("RDMA/irdma: Implement HW Admin Queue OPs") Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Link: https://patch.msgid.link/20251125025350.180-2-tatyana.e.nikolova@intel.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-11 | RDMA/rtrs: server: Fix error handling in get_or_create_srv | Ma Ke | 1 | -1/+1
[ Upstream commit a338d6e849ab31f32c08b4fcac11c0c72afbb150 ] After device_initialize() is called, use put_device() to release the device according to kernel device management rules. While a direct kfree() works in this case, using put_device() is more correct. Found by code review. Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality") Signed-off-by: Ma Ke <make24@iscas.ac.cn> Link: https://patch.msgid.link/20251110005158.13394-1-make24@iscas.ac.cn Acked-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-07 | RDMA/hns: Fix wrong WQE data when QP wraps around | Junxian Huang | 1 | -3/+8
[ Upstream commit fe9622011f955e35ba84d3af7b2f2fed31cf8ca1 ] When a QP wraps around, WQE data from its previous use at the same position still remains, as the driver does not clear it. The WQE field layout differs across opcodes, so fields that are not explicitly assigned for the current opcode retain stale values and are issued to HW by mistake. Such fields are as follows:
* MSG_START_SGE_IDX field in ATOMIC WQE
* BLOCK_SIZE and ZBVA fields in FRMR WQE
* DirectWQE fields when DirectWQE is not used
For ATOMIC WQE, always set the latest sge index in MSG_START_SGE_IDX as required by HW. For FRMR WQE and DirectWQE, clear only those unassigned fields instead of the entire WQE to avoid a performance penalty. Fixes: 68a997c5d28c ("RDMA/hns: Add FRMR support for hip08") Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20251016114051.1963197-4-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-07 | RDMA/hns: Fix the modification of max_send_sge | wenglianfa | 1 | -2/+0
[ Upstream commit f5a7cbea5411668d429eb4ffe96c4063fe8dac9e ] The actual sge number may exceed the value specified in init_attr->cap when HW needs extra sge to enable inline feature. Since these extra sges are not expected by ULP, return the user-specified value to ULP instead of the expanded sge number. Fixes: 0c5e259b06a8 ("RDMA/hns: Fix incorrect sge nums calculation") Signed-off-by: wenglianfa <wenglianfa@huawei.com> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com> Link: https://patch.msgid.link/20251016114051.1963197-3-huangjunxian6@hisilicon.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-07 | RDMA/irdma: Set irdma_cq cq_num field during CQ create | Jacob Moroni | 2 | -1/+2
[ Upstream commit 5575b7646b94c0afb0f4c0d86e00e13cf3397a62 ] The driver maintains a CQ table that is used to ensure that a CQ is still valid when processing CQ related AEs. When a CQ is destroyed, the table entry is cleared, using irdma_cq.cq_num as the index. This field was never being set, so it was just always clearing out entry 0. Additionally, the cq_num field size was increased to accommodate HW supporting more than 64K CQs. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Jacob Moroni <jmoroni@google.com> Link: https://patch.msgid.link/20250923142439.943930-1-jmoroni@google.com Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-07 | RDMA/irdma: Remove unused struct irdma_cq fields | Jacob Moroni | 2 | -9/+0
[ Upstream commit 880245fd029a8f8ee8fd557c2681d077c1b1a959 ] These fields were set but not used anywhere, so remove them. Link: https://patch.msgid.link/r/20250923142128.943240-1-jmoroni@google.com Signed-off-by: Jacob Moroni <jmoroni@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Stable-dep-of: 5575b7646b94 ("RDMA/irdma: Set irdma_cq cq_num field during CQ create") Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-07 | RDMA/irdma: Fix SD index calculation | Jacob Moroni | 1 | -1/+1
[ Upstream commit 8d158f47f1f33d8747e80c3afbea5aa337e59d41 ] In some cases, it is possible for pble_rsrc->next_fpm_addr to be larger than u32, so remove the u32 cast to avoid unintentional truncation. This fixes the following error that can be observed when registering massive memory regions: [ 447.227494] (NULL ib_device): cqp opcode = 0x1f maj_err_code = 0xffff min_err_code = 0x800c [ 447.227505] (NULL ib_device): [Update PE SDs Cmd Error][op_code=21] status=-5 waiting=1 completion_err=1 maj=0xffff min=0x800c Fixes: e8c4dbc2fcac ("RDMA/irdma: Add PBLE resource manager") Signed-off-by: Jacob Moroni <jmoroni@google.com> Link: https://patch.msgid.link/20250923190850.1022773-1-jmoroni@google.com Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/siw: Always report immediate post SQ errors | Bernard Metzler | 1 | -11/+14
[ Upstream commit fdd0fe94d68649322e391c5c27dd9f436b4e955e ] In siw_post_send(), any immediate error encountered during processing of the work request list must be reported to the caller, even if previous work requests in that list were just accepted and added to the send queue. Not reporting those errors confuses the caller, which would wait indefinitely for the completion of the failing, and potentially subsequently aborted, work requests. This fixes a case where immediate errors were overwritten by subsequent code in siw_post_send(). Fixes: 303ae1cdfdf7 ("rdma/siw: application interface") Link: https://patch.msgid.link/r/20250923144536.103825-1-bernard.metzler@linux.dev Suggested-by: Stefan Metzmacher <metze@samba.org> Signed-off-by: Bernard Metzler <bernard.metzler@linux.dev> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | IB/sa: Fix sa_local_svc_timeout_ms read race | Vlad Dumitrescu | 1 | -2/+4
[ Upstream commit 1428cd764cd708d53a072a2f208d87014bfe05bc ] When computing the delta, the sa_local_svc_timeout_ms is read without ib_nl_request_lock held. Though unlikely in practice, this can cause a race condition if multiple local service threads are managing the timeout. Fixes: 2ca546b92a02 ("IB/sa: Route SA pathrecord query through netlink") Signed-off-by: Vlad Dumitrescu <vdumitrescu@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Edward Srouji <edwards@nvidia.com> Link: https://patch.msgid.link/20250916163112.98414-1-edwards@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-15 | RDMA/core: Resolve MAC of next-hop device without ARP support | Parav Pandit | 1 | -7/+3
[ Upstream commit 200651b9b8aadfbbec852f0e5d042d9abe75e2ab ] Currently, if the next-hop netdevice does not support ARP resolution, the destination MAC address is silently set to zero without reporting an error. This leads to incorrect behavior and may result in packet transmission failures. Fix this by deferring MAC resolution to the IP stack via neighbour lookup, allowing proper resolution or error reporting as appropriate. Fixes: 7025fcd36bd6 ("IB: address translation to map IP toIB addresses (GIDs)") Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Vlad Dumitrescu <vdumitrescu@nvidia.com> Signed-off-by: Edward Srouji <edwards@nvidia.com> Link: https://patch.msgid.link/20250916111103.84069-3-edwards@nvidia.com Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>