path: root/drivers/infiniband/hw
Age | Commit message | Author | Files | Lines
2021-10-29 | RDMA/bnxt_re: Use helper function to set GUIDs | Kamal Heib | 4 | -21/+4
Use addrconf_addr_eui48() helper function to set the GUIDs and remove the driver specific version. Link: https://lore.kernel.org/r/20211028094359.160407-1-kamalheib1@gmail.com Signed-off-by: Kamal Heib <kamalheib1@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
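A minimal sketch of how such a helper call typically looks (illustrative only; the bnxt_re structure fields shown here are assumptions, not the literal patch):

    /* addrconf_addr_eui48() builds the EUI-64 GUID from the netdev MAC
     * (inserting ff:fe and flipping the U/L bit), replacing the
     * driver-specific open-coded version. */
    #include <net/addrconf.h>

    static void bnxt_re_set_node_guid(struct bnxt_re_dev *rdev)
    {
            addrconf_addr_eui48((u8 *)&rdev->ibdev.node_guid,
                                rdev->netdev->dev_addr);
    }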
2021-10-29 | RDMA/bnxt_re: Fix kernel panic when trying to access bnxt_re_stat_descs | Kamal Heib | 1 | -0/+2
For some reason when introducing the fixed commit the "active_pds" and "active_ahs" descriptors got dropped, which led to the following panic when trying to access the first entry in the descriptors.

bnxt_re: Broadcom NetXtreme-C/E RoCE Driver
BUG: kernel NULL pointer dereference, address: 0000000000000000
CPU: 2 PID: 594 Comm: kworker/u32:1 Not tainted 5.15.0-rc6+ #2
Hardware name: Dell Inc. PowerEdge R430/0CN7X8, BIOS 2.12.1 12/07/2020
Workqueue: bnxt_re bnxt_re_task [bnxt_re]
RIP: 0010:strlen+0x0/0x20
Code: 48 89 f9 74 09 48 83 c1 01 80 39 00 75 f7 31 d2 44 0f b6 04 16 44 88 04 11 48 83 c2 01 45 84 c0 75 ee c3 0f 1f 80 00 00 00 00 <80> 3f 00 74 10 48 89 f8 48 83 c0 01 80 31
RSP: 0018:ffffb25fc47dfbb0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000008100
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 00000000fffffff4 R09: 0000000000000000
R10: ffff8a05c71fc028 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: ffff8a05c3dee800
FS: 0000000000000000(0000) GS:ffff8a092fc40000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000048d3da001 CR4: 00000000001706e0
Call Trace:
 kernfs_name_hash+0x12/0x80
 kernfs_find_ns+0x35/0xd0
 kernfs_remove_by_name_ns+0x32/0x90
 remove_files+0x2b/0x60
 create_files+0x1d3/0x1f0
 internal_create_group+0x17b/0x1f0
 internal_create_groups.part.0+0x3d/0xa0
 setup_port+0x180/0x3b0 [ib_core]
 ? __cond_resched+0x16/0x40
 ? kmem_cache_alloc_trace+0x278/0x3d0
 ib_setup_port_attrs+0x99/0x240 [ib_core]
 ib_register_device+0xcc/0x160 [ib_core]
 bnxt_re_task+0xba/0x170 [bnxt_re]
 process_one_work+0x1eb/0x390
 worker_thread+0x53/0x3d0
 ? process_one_work+0x390/0x390
 kthread+0x10f/0x130
 ? set_kthread_struct+0x40/0x40
 ret_from_fork+0x22/0x30

Fixes: 13f30b0fa0a9 ("RDMA/counter: Add a descriptor in struct rdma_hw_stats")
Link: https://lore.kernel.org/r/20211027205448.127821-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Devesh Sharma <devesh.s.sharma@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
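A hedged sketch of the descriptor table the fix restores; the two counter names come from the message above, while the index names are illustrative:

    static const struct rdma_stat_desc bnxt_re_stat_descs[] = {
            /* entries whose absence left a NULL .name and crashed strlen() */
            [BNXT_RE_ACTIVE_PD].name = "active_pds",
            [BNXT_RE_ACTIVE_AH].name = "active_ahs",
            /* ... remaining counter descriptors ... */
    };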
2021-10-29 | RDMA/qedr: Fix NULL deref for query_qp on the GSI QP | Alok Prasad | 1 | -6/+9
This patch fixes a crash caused by querying the QP via netlink, and corrects the state reported for the GSI QP. GSI QPs have a NULL qed_qp.

The call trace is generated by:
$ rdma res show

BUG: kernel NULL pointer dereference, address: 0000000000000034
Hardware name: Dell Inc. PowerEdge R720/0M1GCR, BIOS 1.2.6 05/10/2012
RIP: 0010:qed_rdma_query_qp+0x33/0x1a0 [qed]
RSP: 0018:ffffba560a08f580 EFLAGS: 00010206
RAX: 0000000200000000 RBX: ffffba560a08f5b8 RCX: 0000000000000000
RDX: ffffba560a08f5b8 RSI: 0000000000000000 RDI: ffff9807ee458090
RBP: ffffba560a08f5a0 R08: 0000000000000000 R09: ffff9807890e7048
R10: ffffba560a08f658 R11: 0000000000000000 R12: 0000000000000000
R13: ffff9807ee458090 R14: ffff9807f0afb000 R15: ffffba560a08f7ec
FS: 00007fbbf8bfe740(0000) GS:ffff980aafa00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000034 CR3: 00000001720ba001 CR4: 00000000000606f0
Call Trace:
 qedr_query_qp+0x82/0x360 [qedr]
 ib_query_qp+0x34/0x40 [ib_core]
 ? ib_query_qp+0x34/0x40 [ib_core]
 fill_res_qp_entry_query.isra.26+0x47/0x1d0 [ib_core]
 ? __nla_put+0x20/0x30
 ? nla_put+0x33/0x40
 fill_res_qp_entry+0xe3/0x120 [ib_core]
 res_get_common_dumpit+0x3f8/0x5d0 [ib_core]
 ? fill_res_cm_id_entry+0x1f0/0x1f0 [ib_core]
 nldev_res_get_qp_dumpit+0x1a/0x20 [ib_core]
 netlink_dump+0x156/0x2f0
 __netlink_dump_start+0x1ab/0x260
 rdma_nl_rcv+0x1de/0x330 [ib_core]
 ? nldev_res_get_cm_id_dumpit+0x20/0x20 [ib_core]
 netlink_unicast+0x1b8/0x270
 netlink_sendmsg+0x33e/0x470
 sock_sendmsg+0x63/0x70
 __sys_sendto+0x13f/0x180
 ? setup_sgl.isra.12+0x70/0xc0
 __x64_sys_sendto+0x28/0x30
 do_syscall_64+0x3a/0xb0
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Cc: stable@vger.kernel.org
Fixes: cecbcddf6461 ("qedr: Add support for QP verbs")
Link: https://lore.kernel.org/r/20211027184329.18454-1-palok@marvell.com
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Alok Prasad <palok@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
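A hedged sketch of the kind of guard this implies in qedr_query_qp(); the helper and field names are assumptions, not the literal patch:

    /* The GSI QP has no qed_qp object, so asking the firmware about it
     * would dereference NULL; report driver-tracked state instead. */
    if (qp->qp_type != IB_QPT_GSI) {
            rc = dev->ops->rdma_query_qp(dev->rdma_ctx, qp->qed_qp, &params);
            if (rc)
                    goto err;
            qp_attr->qp_state = qedr_get_ibqp_state(params.state);
    } else {
            qp_attr->qp_state = qedr_get_ibqp_state(QED_ROCE_QP_STATE_RTS);
    }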
2021-10-29 | RDMA/hns: Modify the value of MAX_LP_MSG_LEN to meet hardware compatibility | Yixing Liu | 1 | -2/+2
The upper limit of MAX_LP_MSG_LEN is 64K on HIP08 and 16K on HIP09, but only 16K is ever used on either chip. To ensure compatibility, unify the value to 16K; setting MAX_LP_MSG_LEN to 16K causes no performance loss on HIP08.

Fixes: fbed9d2be292 ("RDMA/hns: Fix configuration of ack_req_freq in QPC")
Link: https://lore.kernel.org/r/20211029100537.27299-1-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-29 | RDMA/hns: Fix initial arm_st of CQ | Haoyue Xu | 1 | -1/+1
The initial CQ status was previously set to ARMED, which could cause an unexpected CEQE to be reported. Set the initial CQ status to no_armed rather than REG_NXT_CEQE.

Fixes: a5073d6054f7 ("RDMA/hns: Add eq support of hip08")
Link: https://lore.kernel.org/r/20211029095846.26732-1-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-28 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 10 | -26/+53
include/net/sock.h
  7b50ecfcc6cd ("net: Rename ->stream_memory_read to ->sock_is_readable")
  4c1e34c0dbff ("vsock: Enable y2038 safe timeval for timeout")

drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
  0daa55d033b0 ("octeontx2-af: cn10k: debugfs for dumping LMTST map table")
  e77bcdd1f639 ("octeontx2-af: Display all enabled PF VF rsrc_alloc entries.")

Adjacent code addition in both cases, keep both.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-28 | RDMA/irdma: Remove the unused variable local_qp | Zhu Yanjun | 2 | -2/+0
Since the member variable local_qp is not used, remove it. Link: https://lore.kernel.org/r/20211027175457.201822-1-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-28 | RDMA/efa: Add support for dmabuf memory regions | Gal Pressman | 3 | -31/+101
Implement a dmabuf importer for the EFA driver. As ODP is not supported, the pinned dmabuf are used to prevent the move_notify callback from being called. Link: https://lore.kernel.org/r/20211012120903.96933-4-galpress@amazon.com Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
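A hedged sketch of the core of a pinned-dmabuf importer; ib_umem_dmabuf_get_pinned() is the umem helper such an importer would typically rely on, and the surrounding EFA code is not shown:

    /* Pin the dmabuf pages up front: with a pinned mapping no move_notify
     * callback will fire, which is what a driver without ODP needs. */
    umem_dmabuf = ib_umem_dmabuf_get_pinned(ibpd->device, start, length,
                                            fd, access_flags);
    if (IS_ERR(umem_dmabuf))
            return ERR_CAST(umem_dmabuf);

    /* ... then register umem_dmabuf->umem with the device exactly like a
     * regular user memory region ... */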
2021-10-28 | Merge branch 'mlx5-next' of … | Saeed Mahameed | 9 | -120/+140
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into net-next Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-25 | RDMA/qedr: Remove unsupported qedr_resize_cq callback | Kamal Heib | 3 | -12/+0
There is no need to always return zero for a function that is not supported, especially since 0 is the wrong return code.

Fixes: a7efd7773e31 ("qedr: Add support for PD,PKEY and CQ verbs")
Link: https://lore.kernel.org/r/20211025062632.3960-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-25 | RDMA/irdma: Remove the unused spin lock in struct irdma_qp_uk | Zhu Yanjun | 2 | -2/+0
The spin lock in struct irdma_qp_uk is not used. So remove it. Link: https://lore.kernel.org/r/20211021230612.153812-1-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-25 | RDMA: Constify netdev->dev_addr accesses | Jakub Kicinski | 17 | -33/+40
netdev->dev_addr will become const soon, make sure drivers propagate the qualifier. Link: https://lore.kernel.org/r/20211019182604.1441387-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
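An illustrative (hedged) example of what propagating the qualifier means in practice; my_set_mac is a made-up helper, not a real driver function:

    /* was: static void my_set_mac(struct ib_device *ibdev, u8 *mac); */
    static void my_set_mac(struct ib_device *ibdev, const u8 *mac);

    my_set_mac(ibdev, netdev->dev_addr);  /* keeps compiling once dev_addr is const */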
2021-10-25 | RDMA/mlx5: fix build error with INFINIBAND_USER_ACCESS=n | Arnd Bergmann | 1 | -4/+5
The mlx5_ib_fs_add_op_fc/mlx5_ib_fs_remove_op_fc functions are only available when user access is enabled, without that we run into a link error: ERROR: modpost: "mlx5_ib_fs_add_op_fc" [drivers/infiniband/hw/mlx5/mlx5_ib.ko] undefined! ERROR: modpost: "mlx5_ib_fs_remove_op_fc" [drivers/infiniband/hw/mlx5/mlx5_ib.ko] undefined! Conditionally compiling the newly added code section makes it build, though this is probably not a correct fix. Fixes: a29b934ceb4c ("RDMA/mlx5: Add modify_op_stat() support") Link: https://lore.kernel.org/r/20211019061602.3062196-1-arnd@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
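A hedged sketch of the conditional-compilation approach described above; the argument lists are assumptions:

    #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
    int mlx5_ib_fs_add_op_fc(struct mlx5_ib_dev *dev, u32 port_num,
                             struct mlx5_ib_op_fc *opfc, u32 type);
    void mlx5_ib_fs_remove_op_fc(struct mlx5_ib_dev *dev,
                                 struct mlx5_ib_op_fc *opfc, u32 type);
    #else
    static inline int mlx5_ib_fs_add_op_fc(struct mlx5_ib_dev *dev, u32 port_num,
                                           struct mlx5_ib_op_fc *opfc, u32 type)
    {
            return -EOPNOTSUPP;
    }
    static inline void mlx5_ib_fs_remove_op_fc(struct mlx5_ib_dev *dev,
                                               struct mlx5_ib_op_fc *opfc, u32 type)
    {
    }
    #endif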
2021-10-21 | Merge branch 'mlx5_mkey' into rdma.git for-next | Leon Romanovsky | 20 | -137/+148
A small series to clean up the mlx5 mkey code across mlx5_core and InfiniBand.

* branch 'mlx5_mkey':
  RDMA/mlx5: Attach ndescs to mlx5_ib_mkey
  RDMA/mlx5: Move struct mlx5_core_mkey to mlx5_ib
  RDMA/mlx5: Replace struct mlx5_core_mkey by u32 key
  RDMA/mlx5: Remove pd from struct mlx5_core_mkey
  RDMA/mlx5: Remove size from struct mlx5_core_mkey
  RDMA/mlx5: Remove iova from struct mlx5_core_mkey

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-21 | RDMA/irdma: Make irdma_uk_cq_init() return a void | Zhu Yanjun | 3 | -10/+5
The function irdma_uk_cq_init always returns 0, so make it void and delete all the return value checks. Link: https://lore.kernel.org/r/20211019153717.3836-1-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-20 | RDMA/irdma: Do not hold qos mutex twice on QP resume | Mustafa Ismail | 1 | -6/+7
When irdma_ws_add fails, irdma_ws_remove is used to clean up the leaf node. This leads to holding the qos mutex twice in the QP resume path. Fix this by avoiding the call to irdma_ws_remove and unwinding the error in irdma_ws_add. This skips the call to the irdma_tc_in_use function, which is not needed in the error unwind cases.

Fixes: 3ae331c75128 ("RDMA/irdma: Add QoS definitions")
Link: https://lore.kernel.org/r/20211019151654.1943-2-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-20 | RDMA/irdma: Set VLAN in UD work completion correctly | Mustafa Ismail | 1 | -2/+6
Currently a VLAN is reported in the UD work completion even when the VLAN id is zero, i.e. the no-VLAN case. Report a VLAN in the UD work completion only when the VLAN id is non-zero.

Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20211019151654.1943-1-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
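A hedged sketch of the intended behaviour; the work-completion fields follow the common ib_wc convention and the surrounding irdma code is not shown:

    if (vlan_id && vlan_id < VLAN_N_VID) {
            /* only a real, non-zero VLAN id is reported to the consumer */
            wc->wc_flags |= IB_WC_WITH_VLAN;
            wc->vlan_id = vlan_id;
    }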
2021-10-20 | RDMA/mlx5: Initialize the ODP xarray when creating an ODP MR | Aharon Landau | 1 | -1/+1
Normally the zero fill would hide the missing initialization, but an errant set to desc_size in reg_create() causes a crash: BUG: unable to handle page fault for address: 0000000800000000 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 5 PID: 890 Comm: ib_write_bw Not tainted 5.15.0-rc4+ #47 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 RIP: 0010:mlx5_ib_dereg_mr+0x14/0x3b0 [mlx5_ib] Code: 48 63 cd 4c 89 f7 48 89 0c 24 e8 37 30 03 e1 48 8b 0c 24 eb a0 90 0f 1f 44 00 00 41 56 41 55 41 54 55 53 48 89 fb 48 83 ec 30 <48> 8b 2f 65 48 8b 04 25 28 00 00 00 48 89 44 24 28 31 c0 8b 87 c8 RSP: 0018:ffff88811afa3a60 EFLAGS: 00010286 RAX: 000000000000001c RBX: 0000000800000000 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000800000000 RBP: 0000000800000000 R08: 0000000000000000 R09: c0000000fffff7ff R10: ffff88811afa38f8 R11: ffff88811afa38f0 R12: ffffffffa02c7ac0 R13: 0000000000000000 R14: ffff88811afa3cd8 R15: ffff88810772fa00 FS: 00007f47b9080740(0000) GS:ffff88852cd40000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000800000000 CR3: 000000010761e003 CR4: 0000000000370ea0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: mlx5_ib_free_odp_mr+0x95/0xc0 [mlx5_ib] mlx5_ib_dereg_mr+0x128/0x3b0 [mlx5_ib] ib_dereg_mr_user+0x45/0xb0 [ib_core] ? xas_load+0x8/0x80 destroy_hw_idr_uobject+0x1a/0x50 [ib_uverbs] uverbs_destroy_uobject+0x2f/0x150 [ib_uverbs] uobj_destroy+0x3c/0x70 [ib_uverbs] ib_uverbs_cmd_verbs+0x467/0xb00 [ib_uverbs] ? uverbs_finalize_object+0x60/0x60 [ib_uverbs] ? ttwu_queue_wakelist+0xa9/0xe0 ? pty_write+0x85/0x90 ? file_tty_write.isra.33+0x214/0x330 ? process_echoes+0x60/0x60 ib_uverbs_ioctl+0xa7/0x110 [ib_uverbs] __x64_sys_ioctl+0x10d/0x8e0 ? vfs_write+0x17f/0x260 do_syscall_64+0x3c/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae Add the missing xarray initialization and remove the desc_size set. Fixes: a639e66703ee ("RDMA/mlx5: Zero out ODP related items in the mlx5_ib_mr") Link: https://lore.kernel.org/r/a4846a11c9de834663e521770da895007f9f0d30.1634642730.git.leonro@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-20 | rdma/qedr: Fix crash due to redundant release of device's qp memory | Prabhakar Kushwaha | 3 | -2/+6
The device's QP memory should only be allocated and released by the IB layer. This patch removes the redundant release of the device's qp memory and uses completion APIs to make sure that .destroy_qp() only returns once the qp reference count drops to 0.

Fixes: 514aee660df4 ("RDMA: Globally allocate and release QP memory")
Link: https://lore.kernel.org/r/20211019082212.7052-1-pkushwaha@marvell.com
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Alok Prasad <palok@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
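A hedged sketch of the completion-based wait described above; the field and helper names are assumptions:

    /* dropping the last reference signals whoever waits in .destroy_qp() */
    static void qedr_qp_put(struct qedr_qp *qp)
    {
            if (atomic_dec_and_test(&qp->refcnt))
                    complete(&qp->qp_rel_comp);
    }

    /* in .destroy_qp(): do NOT kfree(qp) here - the IB core owns that memory
     * since the Fixes commit; just wait until every reference is gone. */
    qedr_qp_put(qp);
    wait_for_completion(&qp->qp_rel_comp);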
2021-10-19 | Merge branch 'mlx5_mkey' into rdma.git for-next | Jason Gunthorpe | 6 | -96/+81
A small series to clean up the mlx5 mkey code across mlx5_core and InfiniBand.

* branch 'mlx5_mkey':
  RDMA/mlx5: Attach ndescs to mlx5_ib_mkey
  RDMA/mlx5: Move struct mlx5_core_mkey to mlx5_ib
  RDMA/mlx5: Replace struct mlx5_core_mkey by u32 key
  RDMA/mlx5: Remove pd from struct mlx5_core_mkey
  RDMA/mlx5: Remove size from struct mlx5_core_mkey
  RDMA/mlx5: Remove iova from struct mlx5_core_mkey

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-19 | RDMA/mlx5: Attach ndescs to mlx5_ib_mkey | Aharon Landau | 6 | -52/+26
Generalize the use of ndescs by adding it to mlx5_ib_mkey. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19 | RDMA/mlx5: Move struct mlx5_core_mkey to mlx5_ib | Aharon Landau | 4 | -18/+29
Move mlx5_core_mkey struct to mlx5_ib, as the mlx5_core doesn't use it at this point. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19 | RDMA/mlx5: Replace struct mlx5_core_mkey by u32 key | Aharon Landau | 2 | -8/+15
In mlx5_core and vdpa there is no use of mlx5_core_mkey members except for the key itself. As preparation for moving mlx5_core_mkey to mlx5_ib, the occurrences of struct mlx5_core_mkey in all modules except for mlx5_ib are replaced by a u32 key. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19 | RDMA/mlx5: Remove pd from struct mlx5_core_mkey | Aharon Landau | 2 | -4/+0
There is no read of mkey->pd, only write. Remove it. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19 | RDMA/mlx5: Remove size from struct mlx5_core_mkey | Aharon Landau | 2 | -4/+2
mkey->size is already stored in ibmr->length, no need to store it here. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19 | RDMA/mlx5: Remove iova from struct mlx5_core_mkey | Aharon Landau | 3 | -13/+12
iova is already stored in ibmr->iova, no need to store it here. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-13 | IB/hfi1: Fix abba locking issue with sc_disable() | Mike Marciniszyn | 1 | -3/+6
After disabling the send context, sc_disable() wakes up any waiters by calling hfi1_qp_wakeup() while still holding the sc's waitlock. This is contrary to the model for all other calls to hfi1_qp_wakeup(), where the waitlock is dropped and a local list is used to drive the wakeup calls. Fix by moving sc->piowait onto a local list and driving the wakeup calls from that list.

Fixes: 099a884ba4c0 ("IB/hfi1: Handle wakeup of orphaned QPs for pio")
Link: https://lore.kernel.org/r/20211013141852.128104.2682.stgit@awfm-01.cornelisnetworks.com
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
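A hedged sketch of the pattern: splice the waiters to a local list under the lock, then wake them with the lock dropped (the wakeup flag and iowait_to_qp() conversion are assumptions):

    struct list_head wake_list;
    unsigned long flags;

    INIT_LIST_HEAD(&wake_list);

    write_seqlock_irqsave(&sc->waitlock, flags);
    list_splice_init(&sc->piowait, &wake_list);   /* steal the waiters */
    write_sequnlock_irqrestore(&sc->waitlock, flags);

    /* with the waitlock dropped, wake each QP from the local list */
    while (!list_empty(&wake_list)) {
            struct iowait *w = list_first_entry(&wake_list, struct iowait, list);

            list_del_init(&w->list);
            hfi1_qp_wakeup(iowait_to_qp(w), RVT_S_WAIT_PIO);
    }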
2021-10-13 | IB/qib: Protect from buffer overflow in struct qib_user_sdma_pkt fields | Mike Marciniszyn | 1 | -10/+23
Overflowing either addrlimit or bytes_togo can allow userspace to trigger a buffer overflow of kernel memory. Check for overflows in all the places doing math on user controlled buffers. Fixes: f931551bafe1 ("IB/qib: Add new qib driver for QLogic PCIe InfiniBand adapters") Link: https://lore.kernel.org/r/20211012175519.7298.77738.stgit@awfm-01.cornelisnetworks.com Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com> Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
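A hedged sketch of the style of check used; check_add_overflow() is the generic kernel helper and the variable names follow the message above:

    u32 sum;

    /* user-controlled lengths: refuse to let the accumulators wrap */
    if (check_add_overflow(bytes_togo, (u32)len, &sum))
            return -EINVAL;
    bytes_togo = sum;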
2021-10-12 | RDMA/mlx4: Return a missed error if device doesn't support steering | Leon Romanovsky | 1 | -1/+3
The error flow fixed in this patch is not currently reachable, because all kernel users of the create-QP interface check that the device supports steering before setting the IB_QP_CREATE_NETIF_QP flag.

Fixes: c1c98501121e ("IB/mlx4: Add support for steerable IB UD QPs")
Link: https://lore.kernel.org/r/91c61f6e60eb0240f8bbc321fda7a1d2986dd03c.1634023677.git.leonro@nvidia.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/irdma: Remove irdma_cqp_up_map_cmd() | Zhu Yanjun | 2 | -36/+0
The function irdma_cqp_up_map_cmd() is not used. So remove it. Link: https://lore.kernel.org/r/20211011110128.4057-5-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/irdma: Remove irdma_get_hw_addr() | Zhu Yanjun | 2 | -12/+0
The function irdma_get_hw_addr() is not used. So remove it. Link: https://lore.kernel.org/r/20211011110128.4057-4-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/irdma: Remove irdma_sc_send_lsmm_nostag() | Zhu Yanjun | 2 | -39/+1
The function irdma_sc_send_lsmm_nostag is not used. So remove it. Link: https://lore.kernel.org/r/20211011110128.4057-3-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/irdma: Remove irdma_uk_mw_bind() | Zhu Yanjun | 2 | -60/+1
The function irdma_uk_mw_bind is not used. So remove it. Link: https://lore.kernel.org/r/20211011110128.4057-2-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA: Remove redundant 'flush_workqueue()' calls | Christophe JAILLET | 3 | -5/+1
'destroy_workqueue()' already drains the queue before destroying it, so there is no need to flush it explicitly. Remove the redundant 'flush_workqueue()' calls. This was generated with coccinelle: @@ expression E; @@ - flush_workqueue(E); destroy_workqueue(E); Link: https://lore.kernel.org/r/ca7bac6e6c9c5cc8d04eec3944edb13de0e381a3.1633874776.git.christophe.jaillet@wanadoo.fr Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/hns: Use dma_alloc_coherent() instead of kmalloc/dma_map_single() | Cai Huoqing | 1 | -15/+5
Replacing kmalloc/kfree/dma_map_single/dma_unmap_single() with dma_alloc_coherent()/dma_free_coherent() reduces and simplifies the code, and coherent DMA does not need to clear the cache on every transfer. The SoC that this driver supports does not have incoherent DMA, so this makes the code follow the DMA API properly with no performance impact. Currently there are missing dma sync calls around the DMA transfers.

Link: https://lore.kernel.org/r/20210926061116.282-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Reviewed-by: Wenpeng Liang <liangwenpeng@huawei.com>
Tested-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
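A hedged before/after sketch of the conversion (sizes and variable names are illustrative):

    /* before: buf = kzalloc(size, GFP_KERNEL);
     *         dma = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
     * after:  a single coherent allocation, mapped for its whole lifetime */
    buf = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    /* ... use buf / dma ... */
    dma_free_coherent(dev, size, buf, dma);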
2021-10-12 | RDMA/mlx5: Add optional counter support in get_hw_stats callback | Aharon Landau | 1 | -3/+85
When get_hw_stats is called, query and return the optional counter statistic as well. Link: https://lore.kernel.org/r/20211008122439.166063-14-markzhang@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/mlx5: Add modify_op_stat() support | Aharon Landau | 2 | -6/+74
Add support for ib callback modify_op_stat() to add or remove an optional counter. When adding, a steering flow table is created with a rule that catches and counts all the matching packets. When removing, the table and flow counter are destroyed. Link: https://lore.kernel.org/r/20211008122439.166063-13-markzhang@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/mlx5: Add steering support in optional flow counters | Aharon Landau | 2 | -0/+211
Add steering infrastructure for adding and removing optional counters. This allows the counters to be added and removed dynamically in order not to hurt performance.

Link: https://lore.kernel.org/r/20211008122439.166063-12-markzhang@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/mlx5: Support optional counters in hw_stats initialization | Aharon Landau | 2 | -16/+75
Add optional counter support when allocating and initializing the hw_stats structure. Optional counters have the IB_STAT_FLAG_OPTIONAL flag set and are disabled by default.

Link: https://lore.kernel.org/r/20211008122439.166063-11-markzhang@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 | RDMA/counter: Add a descriptor in struct rdma_hw_stats | Aharon Landau | 9 | -212/+208
Add a counter statistic descriptor structure in rdma_hw_stats. In addition to the counter name, more meta-information will be added. This code extension is needed for optional-counter support in the following patches. Link: https://lore.kernel.org/r/20211008122439.166063-4-markzhang@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
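A hedged sketch of the idea; only the name member is implied by the text above, and the real struct may carry more fields:

    struct rdma_stat_desc {
            const char *name;       /* counter name exposed in sysfs */
            /* later patches hang more metadata (e.g. flags) here */
    };

    /* drivers describe counters as an array of descriptors instead of an
     * array of bare name strings ("rx_pkts"/"tx_pkts" are illustrative) */
    static const struct rdma_stat_desc my_descs[] = {
            [0].name = "rx_pkts",
            [1].name = "tx_pkts",
    };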
2021-10-07 | RDMA/efa: CQ notifications | Gal Pressman | 10 | -52/+610
This patch adds support for CQ notifications through the standard verbs API. To achieve that, a new event queue (EQ) object is introduced, which is in charge of reporting completion events to the driver. On driver load, EQs are allocated and their affinity is set to a single CPU. When a user app creates a CQ with a completion channel, the completion vector number is converted to an EQ number, which is in charge of reporting the CQ events.

In addition, the CQ creation admin command now returns an offset for the CQ doorbell, which is mapped to the userspace provider and is used to arm the CQ when requested by the user.

The EQs use a single doorbell (located on the registers BAR), which encodes the EQ number and the arm indication as part of the doorbell value. The EQs are polled by the driver on each new EQE and re-armed when the poll is completed.

Link: https://lore.kernel.org/r/20211003105605.29222-1-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-06 | RDMA/mlx5: Set user priority for DCT | Patrisious Haddad | 1 | -0/+2
Currently, the driver doesn't set the PCP-based priority for DCT, hence DCT response packets are transmitted without user priority. Fix it by setting user provided priority in the eth_prio field in the DCT context, which in turn sets the value in the transmitted packet. Fixes: 776a3906b692 ("IB/mlx5: Add support for DC target QP") Link: https://lore.kernel.org/r/5fd2d94a13f5742d8803c218927322257d53205c.1633512672.git.leonro@nvidia.com Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-06 | RDMA/irdma: Process extended CQ entries correctly | Shiraz Saleem | 1 | -2/+2
The valid bit for extended CQEs written by HW is retrieved from the incorrect quad-word. This leads to missed completions for any UD traffic, particularly after a wrap-around. Get the valid bit for extended CQEs from the correct quad-word in the descriptor.

Fixes: 551c46edc769 ("RDMA/irdma: Add user/kernel shared libraries")
Link: https://lore.kernel.org/r/20211005182302.374-1-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-06 | RDMA/irdma: Delete unused struct irdma_bth | Zhu Yanjun | 1 | -8/+0
The struct irdma_bth is not used, so remove it. Link: https://lore.kernel.org/r/20211006201531.469650-1-yanjun.zhu@linux.dev Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-05 | IB/hfi1: Use string_upper() instead of an open coded variant | Andy Shevchenko | 1 | -6/+4
Use string_upper() from the string helper module instead of an open coded variant. Link: https://lore.kernel.org/r/20211001123153.67379-1-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
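A hedged usage sketch; the destination buffer and source string are illustrative, not the hfi1 code:

    #include <linux/string_helpers.h>

    char buf[16];

    /* was: a loop calling toupper() on each character */
    string_upper(buf, "qsfp");   /* dst must have room for src plus the NUL */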
2021-10-05 | mlx4: replace mlx4_mac_to_u64() with ether_addr_to_u64() | Jakub Kicinski | 2 | -2/+2
mlx4_mac_to_u64() predates and opencodes ether_addr_to_u64(). It doesn't make the argument constant so it'll be problematic when dev->dev_addr becomes a const. Convert to the generic helper. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
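A hedged sketch of the replacement; the surrounding mlx4 code is not shown:

    #include <linux/etherdevice.h>

    /* ether_addr_to_u64() takes a const pointer, so it keeps working once
     * netdev->dev_addr becomes const; was: mlx4_mac_to_u64(netdev->dev_addr) */
    u64 mac = ether_addr_to_u64(netdev->dev_addr);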
2021-10-05 | net/mlx5: Shift control IRQ to the last index | Shay Drory | 1 | -0/+1
The control IRQ is the first IRQ vector. This complicates handling of completion IRQs, as we need to offset them by one. In the next patch there are scenarios where completion and control EQs will share the same IRQ, for example functions with a single IRQ. To ease such scenarios, shift the control IRQ to the end of the IRQ array.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-04 | RDMA/mlx5: Avoid taking MRs from larger MR cache pools when a pool is empty | Aharon Landau | 1 | -17/+9
Currently, if a cache entry is empty, the driver will try to take MRs from larger cache entries. This behavior consumes a lot of memory. In addition, when searching for an mkey in an entry, the entry is locked. When using a multithreaded application with the old behavior, the threads will block each other more often, which can hurt performance as can be seen in the table below.

Therefore, avoid it by creating a new mkey when the requested cache entry is empty.

The test was performed on a machine with Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz 44 cores. Here are the time measures for allocating MRs of 2^6 pages. The search in the cache started from entry 6.

+------------+---------------------+---------------------+
|            |    Old behavior     |    New behavior     |
|            +----------+----------+----------+----------+
|            | 1 thread | 5 thread | 1 thread | 5 thread |
+============+==========+==========+==========+==========+
|  1,000 MRs |    14 ms |    30 ms |    14 ms |    80 ms |
+------------+----------+----------+----------+----------+
| 10,000 MRs |   135 ms |    6 sec |   173 ms |   880 ms |
+------------+----------+----------+----------+----------+
|100,000 MRs | 11.2 sec |   57 sec | 1.74 sec |  8.8 sec |
+------------+----------+----------+----------+----------+

Link: https://lore.kernel.org/r/71af2770c737b936f7b10f457f0ef303ffcf7ad7.1632644527.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-04 | Merge tag 'v5.15-rc4' into rdma.git for-next | Jason Gunthorpe | 13 | -40/+66
Merged due to dependencies in following patches. Conflict in drivers/infiniband/hw/hfi1/ipoib_tx.c resolved by hand to take the %p change and txq stats rename together. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-04 | qed: Remove e4_ and _e4 from FW HSI | Shai Malin | 1 | -1/+1
The existing qed/qede/qedr/qedi/qedf code uses chip-specific naming in structures, functions, variables and defines in the FW HSI (Hardware Software Interface). The new FW version introduced a generic naming convention in the HSI, in which the same code will be used across different versions for simpler maintainability. It also eases providing support for new features. With this patch, the "_e4" and "e4_" prefixes and suffixes are no longer needed and are removed.

Reviewed-by: Manish Rangankar <mrangankar@marvell.com>
Reviewed-by: Javed Hasan <jhasan@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Omkar Kulkarni <okulkarni@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>