path: root/drivers/infiniband/hw
Age | Commit message | Author | Files | Lines
2020-03-13 | RDMA/mlx5: Fix locking in MR cache work queue | Jason Gunthorpe | 2 | -46/+80

All of the members of mlx5_cache_ent must be accessed while holding the spinlock; add the missing spinlock in __cache_work_func().

Using cache->stopped and flush_workqueue() is an inherently racy way to shut down self-scheduling work on a queue. Replace it with ent->disabled under lock, and always check disabled before queuing any new work. Use cancel_work_sync() to shut down the queue.

Use READ_ONCE/WRITE_ONCE for dev->last_add to manage concurrency, as coherency is less important here.

Split fill_delay from the bitfield. C bitfield updates are not atomic and this is just a mess. Use READ_ONCE/WRITE_ONCE, but this could also use test_bit()/set_bit().

Link: https://lore.kernel.org/r/20200310082238.239865-11-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
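The disabled-under-lock shutdown described above is a common kernel pattern; a minimal sketch with illustrative structure and names (only the disabled flag, the spinlock, and cancel_work_sync() are taken from the commit text):

  #include <linux/kernel.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  struct cache_ent {
      spinlock_t lock;
      bool disabled;              /* replaces the racy cache->stopped */
      struct work_struct work;
  };

  static void cache_work_func(struct work_struct *work)
  {
      struct cache_ent *ent = container_of(work, struct cache_ent, work);

      spin_lock_irq(&ent->lock);
      if (!ent->disabled) {
          /* ... do maintenance; any self-requeue happens here, still
           * under the lock, so it cannot race with shutdown ... */
      }
      spin_unlock_irq(&ent->lock);
  }

  static void cache_ent_disable(struct cache_ent *ent)
  {
      spin_lock_irq(&ent->lock);
      ent->disabled = true;            /* no new work may be queued now */
      spin_unlock_irq(&ent->lock);
      cancel_work_sync(&ent->work);    /* wait out any in-flight work */
  }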
2020-03-13 | RDMA/mlx5: Lock access to ent->available_mrs/limit when doing queue_work | Jason Gunthorpe | 1 | -15/+25

Accesses to these members need to be locked. There is no reason not to hold a spinlock while calling queue_work(), so move the tests into a helper and always call it under lock. The helper should be called when available_mrs is adjusted.

Link: https://lore.kernel.org/r/20200310082238.239865-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
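Building on the cache_ent sketch above, and assuming it also carries the available_mrs and limit counters this commit names, the helper could look like this; queue_work() is safe to call under a spinlock, which is the point of the change:

  static void queue_adjust_cache_locked(struct cache_ent *ent)
  {
      lockdep_assert_held(&ent->lock);
      /* the test and the queue_work() now form one atomic unit */
      if (!ent->disabled && ent->available_mrs < ent->limit)
          queue_work(system_wq, &ent->work);
  }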
2020-03-13 | RDMA/mlx5: Fix MR cache size and limit debugfs | Jason Gunthorpe | 1 | -64/+88

The size_write function is supposed to adjust total_mrs to match the user's request, but lacks locking and safety checking. total_mrs can only be adjusted by at most available_mrs; mrs already assigned to users cannot be revoked. Ensure that the user provides a target value within the range of available_mrs and within the high/low water marks. limit_write has confusing and wrong sanity checking, and doesn't have the ability to deallocate on limit reduction.

Since both functions use the same algorithm to adjust available_mrs, consolidate it into one function and write it correctly. Fix the locking by holding the spinlock for all accesses to ent->X. Always fail if the user provides a malformed string.

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Link: https://lore.kernel.org/r/20200310082238.239865-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
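One way to satisfy the "always fail on a malformed string" rule is to parse the whole user buffer up front and only then take the lock; a hedged sketch, not the actual mlx5 code (kstrtou32_from_user() rejects anything that is not a plain integer, tolerating only a trailing newline):

  static ssize_t size_write(struct file *filp, const char __user *buf,
                            size_t count, loff_t *pos)
  {
      struct cache_ent *ent = filp->private_data;
      u32 target;
      int err;

      /* malformed input fails before any state is touched */
      err = kstrtou32_from_user(buf, count, 0, &target);
      if (err)
          return err;

      spin_lock_irq(&ent->lock);
      /* ... validate target against available_mrs and the water marks,
       * then grow or shrink total_mrs; -EINVAL if out of range ... */
      spin_unlock_irq(&ent->lock);
      return count;
  }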
2020-03-13 | RDMA/mlx5: Always remove MRs from the cache before destroying them | Jason Gunthorpe | 1 | -6/+13

The cache bucket tracks the total number of MRs that exist, both inside and outside of the cache. Removing an MR from the cache (by setting cache_ent to NULL) without updating total_mrs will cause the tracking to leak and be inflated.

Further, fix the rereg_mr path to always destroy the MR. reg_create will always overwrite all the MR data in mlx5_ib_mr, so the MR must be completely destroyed, in all cases, before this function can be called. Detach the MR from the cache and unconditionally destroy it to avoid leaking HW mkeys.

Fixes: afd1417404fb ("IB/mlx5: Use direct mkey destroy command upon UMR unreg failure")
Fixes: 56e11d628c5d ("IB/mlx5: Added support for re-registration of MRs")
Link: https://lore.kernel.org/r/20200310082238.239865-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13 | RDMA/mlx5: Simplify how the MR cache bucket is located | Jason Gunthorpe | 3 | -98/+71

There are many bad APIs here that accept a cache bucket index instead of a bucket pointer. Many of the callers already have a bucket pointer, so this results in a lot of confusing uses of order2idx(). Pass the struct mlx5_cache_ent into add_keys(), remove_keys(), and alloc_cached_mr(). Once the MR is in the cache, store the cache bucket pointer directly in the MR, replacing the 'bool allocated_from_cache'.

In the end there is only one place that needs to form an index from an order, alloc_mr_from_cache(). Increase the safety of this function by disallowing it from accessing cache entries in the ODP special area.

Link: https://lore.kernel.org/r/20200310082238.239865-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13 | RDMA/mlx5: Rename the tracking variables for the MR cache | Jason Gunthorpe | 2 | -31/+42

The old names do not clearly indicate the intent.

Link: https://lore.kernel.org/r/20200310082238.239865-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13 | RDMA/mlx5: Replace spinlock protected write with atomic var | Saeed Mahameed | 3 | -10/+3

The mkey variant calculation was spinlock protected to make it atomic; replace that with a single atomic variable.

Link: https://lore.kernel.org/r/20200310082238.239865-4-leon@kernel.org
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
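The conversion boils down to replacing lock/increment/unlock with one atomic operation; an illustrative sketch (the real counter lives in an mlx5_ib structure, and the variant occupies the low byte of the mkey):

  #include <linux/atomic.h>

  static atomic_t mkey_var = ATOMIC_INIT(0);

  /* was: spin_lock(&lock); variant = var++; spin_unlock(&lock); */
  static u8 next_mkey_variant(void)
  {
      return atomic_inc_return(&mkey_var) & 0xff;
  }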
2020-03-13 | {IB,net}/mlx5: Move asynchronous mkey creation to mlx5_ib | Michael Guralnik | 1 | -3/+3

As mlx5_ib is the only user of mlx5_core_create_mkey_cb(), move the logic into mlx5_ib and clean up the code in mlx5_core.

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
2020-03-13 | {IB,net}/mlx5: Assign mkey variant in mlx5_ib only | Saeed Mahameed | 3 | -10/+54

The mkey variant is not required for mlx5_core use, so move the mkey variant counter to mlx5_ib.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
2020-03-13 | {IB,net}/mlx5: Setup mkey variant before mr create command invocation | Saeed Mahameed | 1 | -5/+2

In reg_mr_callback(), mlx5_ib recalculates the mkey variant, which is wrong and will lead to using a different key variant than the one submitted to firmware in the create mkey command invocation. To fix this, store the mkey variant before invoking the firmware command and use it later on completion (reg_mr_callback).

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
2020-03-13 | RDMA/mlx5: Use offsetofend() instead of duplicated variant | Leon Romanovsky | 2 | -31/+27

Convert mlx5 driver to use offsetofend() instead of its duplicated variant.

Link: https://lore.kernel.org/r/20200310091438.248429-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
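For reference, offsetofend() evaluates to the offset of the first byte past a member, which is exactly what the duplicated variants open-coded; it is equivalent to:

  #define offsetofend(TYPE, MEMBER) \
      (offsetof(TYPE, MEMBER) + sizeof_field(TYPE, MEMBER))

A typical use is bounding a copy at a given field, e.g. memcpy(dst, src, offsetofend(struct foo, bar)) copies everything up to and including bar (struct foo and bar being placeholder names).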
2020-03-13 | RDMA/mlx4: Delete duplicated offsetofend implementation | Leon Romanovsky | 1 | -6/+3

Convert mlx4 to use in-kernel offsetofend() instead of its duplicated implementation.

Link: https://lore.kernel.org/r/20200310091438.248429-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | David S. Miller | 4 | -11/+13

Minor overlapping changes, nothing serious.

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-12 | Merge branch 'ct-offload' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux | David S. Miller | 1 | -1/+2
2020-03-10 | IB/mlx5: Replace tunnel mpls capability bits for tunnel_offloads | Alex Vesker | 1 | -4/+2

Until now, the flex parser capability was used in ib_query_device() to indicate tunnel_offloads_caps support for mpls_over_gre/mpls_over_udp. Newer devices and firmware will have configurations with the flex parser but without mpls support, so testing for the flex parser capability was a mistake. The tunnel_stateless capability was intended for detecting mpls and was introduced at the same time as the flex parser capability. Without this fix, userspace will be incorrectly informed that a future device supports MPLS when it does not.

Link: https://lore.kernel.org/r/20200305123841.196086-1-leon@kernel.org
Cc: <stable@vger.kernel.org> # 4.17
Fixes: e818e255a58d ("IB/mlx5: Expose MPLS related tunneling offloads")
Signed-off-by: Alex Vesker <valex@mellanox.com>
Reviewed-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-10 | RDMA/mlx5: Remove duplicate definitions of SW_ICM macros | Erez Shitrit | 1 | -4/+0

Those macros are already defined in include/linux/mlx5/driver.h, so delete their duplicate variants.

Link: https://lore.kernel.org/r/20200310075706.238592-1-leon@kernel.org
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@mellanox.com>
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Reviewed-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-10 | RDMA/mlx5: Fix the number of hwcounters of a dynamic counter | Mark Zhang | 1 | -2/+3

When we read the global counter and there is any dynamic counter allocated, the value of a hwcounter is the sum of the default counter and all dynamic counters. So the number of hwcounters of a dynamically allocated counter must be the same as that of the default counter, otherwise there will be read violations.

This fixes the KASAN slab-out-of-bounds bug:

  BUG: KASAN: slab-out-of-bounds in rdma_counter_get_hwstat_value+0x36d/0x390 [ib_core]
  Read of size 8 at addr ffff8884192a5778 by task rdma/10138

  CPU: 7 PID: 10138 Comm: rdma Not tainted 5.5.0-for-upstream-dbg-2020-02-06_18-30-19-27 #1
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
  Call Trace:
   dump_stack+0xb7/0x10b
   print_address_description.constprop.4+0x1e2/0x400
   ? rdma_counter_get_hwstat_value+0x36d/0x390 [ib_core]
   __kasan_report+0x15c/0x1e0
   ? mlx5_ib_query_q_counters+0x13f/0x270 [mlx5_ib]
   ? rdma_counter_get_hwstat_value+0x36d/0x390 [ib_core]
   kasan_report+0xe/0x20
   rdma_counter_get_hwstat_value+0x36d/0x390 [ib_core]
   ? rdma_counter_query_stats+0xd0/0xd0 [ib_core]
   ? memcpy+0x34/0x50
   ? nla_put+0xe2/0x170
   nldev_stat_get_doit+0x9c7/0x14f0 [ib_core]
   ...
   do_syscall_64+0x95/0x490
   entry_SYSCALL_64_after_hwframe+0x49/0xbe
  RIP: 0033:0x7fcc457fe65a
  Code: bb 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 8b 05 fa f1 2b 00 45 89 c9 4c 63 d1 48 63 ff 85 c0 75 15 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 76 f3 c3 0f 1f 40 00 41 55 41 54 4d 89 c5 55
  RSP: 002b:00007ffc0586f868 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
  RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fcc457fe65a
  RDX: 0000000000000020 RSI: 00000000013db920 RDI: 0000000000000003
  RBP: 00007ffc0586fa90 R08: 00007fcc45ac10e0 R09: 000000000000000c
  R10: 0000000000000000 R11: 0000000000000246 R12: 00000000004089c0
  R13: 0000000000000000 R14: 00007ffc0586fab0 R15: 00000000013dc9a0

  Allocated by task 9700:
   save_stack+0x19/0x80
   __kasan_kmalloc.constprop.7+0xa0/0xd0
   mlx5_ib_counter_alloc_stats+0xd1/0x1d0 [mlx5_ib]
   rdma_counter_alloc+0x16d/0x3f0 [ib_core]
   rdma_counter_bind_qpn_alloc+0x216/0x4e0 [ib_core]
   nldev_stat_set_doit+0x8c2/0xb10 [ib_core]
   rdma_nl_rcv_msg+0x3d2/0x730 [ib_core]
   rdma_nl_rcv+0x2a8/0x400 [ib_core]
   netlink_unicast+0x448/0x620
   netlink_sendmsg+0x731/0xd10
   sock_sendmsg+0xb1/0xf0
   __sys_sendto+0x25d/0x2c0
   __x64_sys_sendto+0xdd/0x1b0
   do_syscall_64+0x95/0x490
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fixes: 18d422ce8ccf ("IB/mlx5: Add counter_alloc_stats() and counter_update_stats() support")
Link: https://lore.kernel.org/r/20200305124052.196688-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-10 | RDMA/bnxt_re: Remove a redundant 'memset' | Christophe JAILLET | 1 | -4/+1

'wqe' is already zeroed at the top of the 'while' loop, just a few lines below, and is not used outside of the loop. So there is no need to zero it again, or for the variable to be declared outside the loop.

Link: https://lore.kernel.org/r/20200308065442.5415-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-10 | Merge tag 'v5.6-rc5' into rdma.git for-next | Jason Gunthorpe | 4 | -11/+13

Required due to dependencies in following patches.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-10 | Merge branch 'mlx5_packet_pacing' into rdma.git for-next | Jason Gunthorpe | 4 | -0/+144

Yishai Hadas says:

====================
Expose raw packet pacing APIs to be used by DEVX based applications. The existing code was refactored to have a single flow with the new raw APIs.
====================

Based on the mlx5-next branch at git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux due to dependencies.

* branch 'mlx5_packet_pacing':
  IB/mlx5: Introduce UAPIs to manage packet pacing
  net/mlx5: Expose raw packet pacing APIs
2020-03-10 | IB/mlx5: Introduce UAPIs to manage packet pacing | Yishai Hadas | 4 | -0/+144

Introduce packet pacing uobject and its alloc and destroy methods. This uobject holds mlx5 packet pacing context according to the device specification and enables managing packet pacing device entries that are needed by DEVX applications.

Link: https://lore.kernel.org/r/20200219190518.200912-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-08 | Merge branch 'efi/urgent' into efi/core, to pick up fixes | Ingo Molnar | 4 | -11/+13

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-04 | RDMA/hns: fix spelling mistake "attatch" -> "attach" | Colin Ian King | 1 | -1/+1

There is a spelling mistake in an error message. Fix it.

Link: https://lore.kernel.org/r/20200304081045.81164-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-04 | IB/mlx5: Fix missing congestion control debugfs on rep rdma device | Parav Pandit | 1 | -0/+3

The cited commit missed including the low-level congestion control related debugfs stage initialization. This resulted in missing debugfs entries for cc_params of an RDMA device. Add them back.

Fixes: b5ca15ad7e61 ("IB/mlx5: Add proper representors support")
Link: https://lore.kernel.org/r/20200227125407.99803-1-leon@kernel.org
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-04 | IB/mlx5: Add np_min_time_between_cnps and rp_max_rate debug params | Parav Pandit | 2 | -0/+22

Add two debugfs parameters described below.

np_min_time_between_cnps - Minimum time between sending CNPs from the port.
                           Unit = microseconds.
                           Default = 0 (no min wait time; generated based on incoming ECN marked packets).

rp_max_rate - Maximum rate at which reaction point node can transmit. Once this limit is reached, RP is no longer rate limited.
              Unit = Mbits/sec.
              Default = 0 (full speed).

Link: https://lore.kernel.org/r/20200227125246.99472-1-leon@kernel.org
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-04 | IB/mlx5: Fix implicit ODP race | Artemy Kovalyov | 2 | -10/+8

The following race may occur because of the call_srcu() and the placement of the synchronize_srcu() vs the xa_erase():

  CPU0                                    CPU1
  mlx5_ib_free_implicit_mr:
    xa_erase(odp_mkeys)
    synchronize_srcu()
                                          destroy_unused_implicit_child_mr:
                                            xa_lock(implicit_children)
                                            if (still in xarray)
                                               atomic_inc()
                                               call_srcu()
                                            xa_unlock(implicit_children)
    xa_erase(implicit_children):
      xa_lock(implicit_children)
      __xa_erase()
      xa_unlock(implicit_children)
    flush_workqueue()
  [..]
                                          free_implicit_child_mr_rcu:
                                            (via call_srcu)
                                            queue_work()
    WARN_ON(atomic_read())
  [..]
                                          free_implicit_child_mr_work:
                                            (via wq)
                                            free_implicit_child_mr()
                                            mlx5_mr_cache_invalidate()
                                            mlx5_ib_update_xlt()  <-- UMR QP fail
                                            atomic_dec()

The wait_event() solves the race because it blocks until free_implicit_child_mr_work() completes.

Fixes: 5256edcb98a1 ("RDMA/mlx5: Rework implicit ODP destroy")
Link: https://lore.kernel.org/r/20200227113918.94432-1-leon@kernel.org
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
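A sketch of the wait_event() scheme with assumed field names (a waitqueue plus a counter of outstanding deferred destroys; the real fields live in the mlx5 implicit-MR structure):

  #include <linux/atomic.h>
  #include <linux/wait.h>

  struct imr_like {                       /* illustrative */
      atomic_t num_deferred_work;
      wait_queue_head_t q_deferred_work;  /* init_waitqueue_head() at setup */
  };

  /* taken before call_srcu()/queue_work() defers a child destroy */
  static void defer_child_destroy(struct imr_like *imr)
  {
      atomic_inc(&imr->num_deferred_work);
  }

  /* last step of the deferred work function */
  static void child_destroy_done(struct imr_like *imr)
  {
      if (atomic_dec_and_test(&imr->num_deferred_work))
          wake_up(&imr->q_deferred_work);
  }

  /* the destroyer blocks here instead of relying on flush_workqueue() */
  static void wait_for_deferred(struct imr_like *imr)
  {
      wait_event(imr->q_deferred_work,
                 !atomic_read(&imr->num_deferred_work));
  }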
2020-03-04 | IB/mlx5: Optimize u64 division on 32-bit arches | Alexander Lobakin | 1 | -1/+2

Commit f164be8c0366 ("IB/mlx5: Extend caps stage to handle VAR capabilities") introduced a straight "/" division of the u64 variable "bar_size". This was fixed with commit 685eff513183 ("IB/mlx5: Use div64_u64 for num_var_hw_entries calculation"). However, div64_u64() is redundant here, as mlx5_var_table::stride_size is of type u32. Make the actual code much better optimized on 32-bit kernels by using div_u64(), and fix an 80-character line breach along the way.

Fixes: 685eff513183 ("IB/mlx5: Use div64_u64 for num_var_hw_entries calculation")
Link: https://lore.kernel.org/r/20200217073629.8051-1-alobakin@dlink.ru
Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
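The difference is in the divisor type: div64_u64() performs a full u64/u64 division, while div_u64() takes a u32 divisor and compiles to a far cheaper sequence on 32-bit architectures. A sketch of the resulting calculation, with variable names following the commit text:

  #include <linux/math64.h>

  static u64 calc_num_var_hw_entries(u64 bar_size, u32 stride_size)
  {
      /* stride_size is u32, so the 64-by-32 division suffices */
      return div_u64(bar_size, stride_size);
  }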
2020-03-04 | Merge tag 'v5.6-rc4' into rdma.git for-next | Jason Gunthorpe | 10 | -60/+95

Required due to dependencies in following patches.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-04 | RDMA/providers: Fix return value when QP type isn't supported | Kamal Heib | 11 | -11/+11

The proper return code is "-EOPNOTSUPP" when the requested QP type is not supported by the provider.

Link: https://lore.kernel.org/r/20200130082049.463-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
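The pattern the series converges on, shown as an illustrative provider hook (the set of supported types varies per driver):

  #include <linux/err.h>
  #include <rdma/ib_verbs.h>

  static struct ib_qp *demo_create_qp(struct ib_pd *pd,
                                      struct ib_qp_init_attr *init_attr,
                                      struct ib_udata *udata)
  {
      if (init_attr->qp_type != IB_QPT_RC &&
          init_attr->qp_type != IB_QPT_UD)
          return ERR_PTR(-EOPNOTSUPP);   /* not -EINVAL or -ENOSYS */

      /* ... allocate and initialize the QP ... */
      return ERR_PTR(-ENOMEM);           /* placeholder for the real path */
  }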
2020-03-02 | RDMA/mlx5: Prevent UMR usage with RO only when we have RO caps | Michael Guralnik | 1 | -1/+3

Relaxed ordering is not supported in UMR, so UMR usage is disabled when the user passes the relaxed ordering access flag. Enable using UMR when the user requested relaxed ordering but there are no relaxed ordering capabilities. This prevents the user from unnecessarily registering a new mkey.

Fixes: d6de0bb1850f ("RDMA/mlx5: Set relaxed ordering when requested")
Link: https://lore.kernel.org/r/20200227113834.94233-1-leon@kernel.org
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | RDMA/bnxt_re: Remove set but not used variables 'pg' and 'idx' | YueHaibing | 1 | -4/+0

Fixes gcc '-Wunused-but-set-variable' warnings:

  drivers/infiniband/hw/bnxt_re/qplib_rcfw.c: In function '__send_message':
  drivers/infiniband/hw/bnxt_re/qplib_rcfw.c:101:10: warning: variable 'idx' set but not used [-Wunused-but-set-variable]
  drivers/infiniband/hw/bnxt_re/qplib_rcfw.c:101:6: warning: variable 'pg' set but not used [-Wunused-but-set-variable]

Commit cee0c7bba486 ("RDMA/bnxt_re: Refactor command queue management code") introduced these variables but never used them.

Link: https://lore.kernel.org/r/20200227064900.92255-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | RDMA/bnxt_re: Remove set but not used variable 'dev_attr' | YueHaibing | 1 | -2/+0

Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/infiniband/hw/bnxt_re/ib_verbs.c: In function 'bnxt_re_create_gsi_qp':
  drivers/infiniband/hw/bnxt_re/ib_verbs.c:1283:30: warning: variable 'dev_attr' set but not used [-Wunused-but-set-variable]

Commit 8dae419f9ec7 ("RDMA/bnxt_re: Refactor queue pair creation code") introduced this variable but never used it, so remove it.

Link: https://lore.kernel.org/r/20200227064542.91205-1-yuehaibing@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | RDMA/bnxt_re: Remove set but not used variable 'pg_size' | YueHaibing | 1 | -2/+1

Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/infiniband/hw/bnxt_re/qplib_res.c: In function '__alloc_pbl':
  drivers/infiniband/hw/bnxt_re/qplib_res.c:109:13: warning: variable 'pg_size' set but not used [-Wunused-but-set-variable]

Commit 0c4dcd602817 ("RDMA/bnxt_re: Refactor hardware queue memory allocation") introduced this variable but never used it, so remove it.

Link: https://lore.kernel.org/r/20200227064209.87893-1-yuehaibing@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | RDMA/bnxt_re: Use driver_unregister and unregistration API | Selvin Xavier | 1 | -64/+42

Use the new unregister APIs provided by the core. Provide the dealloc_driver hook for the core to call back at the time of device un-registration.

bnxt_re VF resources are created by the corresponding PF driver. During ib_unregister_driver, the PF might get removed before the VF, and this could cause failure when VFs are removed. The driver explicitly queues the removal of VF devices before calling ib_unregister_driver.

Link: https://lore.kernel.org/r/1582731932-26574-3-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | RDMA/bnxt_re: Refactor device add/remove functionalities | Selvin Xavier | 1 | -57/+82

- bnxt_re_ib_reg() handles two main functionalities: initializing the device and registering with the IB stack. Split it into two functions, bnxt_re_dev_init() and bnxt_re_ib_init(), to account for the same and thereby improve modularity. Do the same for bnxt_re_ib_unreg(), i.e. split it into bnxt_re_dev_uninit() and bnxt_re_ib_uninit().
- Simplify the code by combining the different steps to add and remove the device into two functions.
- Report the correct netdev link state during device registration.

Link: https://lore.kernel.org/r/1582731932-26574-2-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-02 | IB/hfi1, qib: Ensure RCU is locked when accessing list | Dennis Dalessandro | 2 | -1/+5

The packet handling function, specifically the iteration of the qp list for MAD packet processing, misses locking RCU before running through the list. Not only is this incorrect, but list_for_each_entry_rcu() cannot be called with a conditional check for lock dependency. Remedy this by invoking the RCU lock and unlock around the critical section. This brings MAD packet processing in line with what is done for non-MAD packets.

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Link: https://lore.kernel.org/r/20200225195445.140896.41873.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
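The shape of the fix, sketched with illustrative types; the point is that list_for_each_entry_rcu() must run inside an RCU read-side critical section:

  #include <linux/rculist.h>
  #include <linux/rcupdate.h>

  struct qp_like { struct list_head list; };      /* illustrative */
  struct dev_like { struct list_head qp_list; };

  static void handle_mad(struct qp_like *qp) { /* ... */ }

  static void process_mad_qps(struct dev_like *dev)
  {
      struct qp_like *qp;

      rcu_read_lock();                 /* was missing around the walk */
      list_for_each_entry_rcu(qp, &dev->qp_list, list)
          handle_mad(qp);
      rcu_read_unlock();
  }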
2020-02-28 | RDMA/efa: Do not delay freeing of DMA pages | Gal Pressman | 1 | -22/+22

When destroying a DMA-mmapped object, there is no need to artificially delay the freeing of the pages until the mmap entry removal. Since the vma keeps a reference count on these pages, free_pages_exact() can be called on the destroy verb, as it won't really free the pages until the reference count is cleared (in case the user hasn't called munmap yet).

Remove the special handling of DMA pages and call free_pages_exact() on destroy_qp/cq. The mmap entry removal is moved to the beginning of the destroy flows, so the driver can safely free the pages.

Link: https://lore.kernel.org/r/20200225114010.21790-4-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
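A sketch of the simplified destroy flow under assumed struct and field names; free_pages_exact() drops only the driver's reference, and the vma's own reference keeps the memory alive until munmap():

  #include <linux/mm.h>
  #include <rdma/ib_verbs.h>

  struct demo_cq {                             /* illustrative */
      struct rdma_user_mmap_entry *mmap_entry;
      void *cpu_addr;
      size_t size;
  };

  static void demo_destroy_cq(struct demo_cq *cq)
  {
      /* remove the entry first so no new mappings can be created */
      rdma_user_mmap_entry_remove(cq->mmap_entry);
      /* safe even if userspace has not called munmap() yet */
      free_pages_exact(cq->cpu_addr, cq->size);
  }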
2020-02-28 | RDMA/efa: Properly document the interrupt mask register | Gal Pressman | 2 | -3/+4

The fact that the LSB in the register is the enable bit should not be an implicit assumption between the driver and the device, properly document that in the register definition.

Link: https://lore.kernel.org/r/20200225114010.21790-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/efa: Unified getters/setters for device structs bitmask access | Gal Pressman | 6 | -128/+101

Use unified macros for device structs access instead of open coding the shifts and masks over and over again.

Link: https://lore.kernel.org/r/20200225114010.21790-2-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
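linux/bitfield.h offers one generic way to express such unified accessors (shown as an analogy; the commit adds EFA's own driver-local macros rather than using bitfield.h):

  #include <linux/bitfield.h>
  #include <linux/bits.h>

  #define DEMO_CAP_WIDTH GENMASK(15, 8)   /* example field: bits 15:8 */

  static inline u32 demo_get_cap_width(u32 reg)
  {
      return FIELD_GET(DEMO_CAP_WIDTH, reg);
  }

  static inline u32 demo_set_cap_width(u32 reg, u32 val)
  {
      return (reg & ~DEMO_CAP_WIDTH) | FIELD_PREP(DEMO_CAP_WIDTH, val);
  }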
2020-02-28 | RDMA/hns: Optimize qp doorbell allocation flow | Xi Wang | 1 | -102/+132

Encapsulate the kernel qp doorbell allocation related code into 2 functions: alloc_qp_db() and free_qp_db().

Link: https://lore.kernel.org/r/1582526258-13825-8-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize kernel qp wrid allocation flow | Xi Wang | 1 | -27/+45

Encapsulate the kernel qp wrid allocation related code into 2 functions: alloc_kernel_wrid() and free_kernel_wrid().

Link: https://lore.kernel.org/r/1582526258-13825-7-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize qp param setup flow | Xi Wang | 1 | -64/+72

Encapsulate the qp param setup related code into set_qp_param().

Link: https://lore.kernel.org/r/1582526258-13825-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize qp buffer allocation flow | Xi Wang | 2 | -123/+144

Encapsulate qp buffer allocation related code into 3 functions: alloc_qp_buf(), map_wqe_buf() and free_qp_buf().

Link: https://lore.kernel.org/r/1582526258-13825-5-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize qp number assign flow | Xi Wang | 1 | -47/+44

Encapsulate the code associated with the qp number assignment into alloc_qpn() and free_qpn().

Link: https://lore.kernel.org/r/1582526258-13825-4-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize qp context create and destroy flow | Xi Wang | 2 | -89/+81

Rename the qp context related functions and adjust the code location to distinguish between the qp context and the entire qp.

Link: https://lore.kernel.org/r/1582526258-13825-3-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Optimize qp destroy flow | Xi Wang | 4 | -58/+46

Wrap the duplicate code in the hip08 and hip06 qp destruction processes as hns_roce_qp_destroy() to simplify the qp destroy flow.

Link: https://lore.kernel.org/r/1582526258-13825-2-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Stop doorbell update while qp state error | Yixian Liu | 1 | -35/+41

There are now two paths that update the qp producer index into hardware: the doorbell in the post verbs (send and recv), and the mailbox in the modify qp verb, which is called by the flush process. Having both race can prevent the hardware from correctly generating flush CQEs. Stopping the doorbell update and holding the qp spinlock in modify qp during the flush process solves the problem.

Fixes: 0425e3e6e0c7 ("RDMA/hns: Support flush cqe for hip08 in kernel space")
Link: https://lore.kernel.org/r/1582367158-27030-3-git-send-email-liuyixian@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Use flush framework for the case in aeq | Yixian Liu | 2 | -36/+9

Now that we have the flush framework, use it instead of the current flush process for qp errors in the asynchronous event queue (AEQ) interrupt.

Link: https://lore.kernel.org/r/1582367158-27030-2-git-send-email-liuyixian@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-28 | RDMA/hns: Treat revision HIP08_A as a special case | Lang Cheng | 1 | -14/+12

Set revisions equal to or higher than HIP08_B as the default to maintain backward compatibility.

Link: https://lore.kernel.org/r/1582363039-10714-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-02-26 | RDMA/bnxt_re: Using vmalloc requires including vmalloc.h | Jason Gunthorpe | 1 | -0/+1

Add it.

Fixes: 0c4dcd602817 ("RDMA/bnxt_re: Refactor hardware queue memory allocation")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>