path: root/drivers/infiniband
2021-03-31  RDMA/hns: Fix a spelling mistake in hns_roce_hw_v1.c  (Ruiqi Gong, 1 file, -4/+4)

s/caculating/calculating

Link: https://lore.kernel.org/r/20210330122912.19989-1-gongruiqi1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Ruiqi Gong <gongruiqi1@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-30  RDMA/rxe: Split MEM into MR and MW  (Bob Pearson, 8 files, -226/+218)

In the original rxe implementation it was intended to use a common object to represent MRs and MWs, but they are different enough to separate these into two objects. This allows replacing the mem name with mr for MRs, which is more consistent with the style for the other objects and less likely to be confusing. This is a long patch that mostly changes mem to mr where it makes sense and adds a new rxe_mw struct.

Link: https://lore.kernel.org/r/20210325212425.2792-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Acked-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-30  RDMA/efa: Use strscpy instead of strlcpy  (Gal Pressman, 1 file, -5/+5)

The strlcpy function doesn't limit the source length, so use the preferred strscpy function instead.

Link: https://lore.kernel.org/r/20210329120131.18793-1-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

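[Illustration, not the EFA driver code: a minimal sketch of the behavioural difference. strlcpy() always returns strlen(src), so it reads the whole source string even when the destination is smaller; strscpy() bounds both the read and the write and returns -E2BIG on truncation. The helper name below is hypothetical.]

  #include <linux/types.h>
  #include <linux/string.h>

  static ssize_t copy_fw_version(char *dst, size_t dst_len, const char *src)
  {
      /* old: strlcpy(dst, src, dst_len);  -- unbounded read of src */
      return strscpy(dst, src, dst_len);  /* bounded; dst stays NUL-terminated */
  }
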
2021-03-26  IB/isert: Fix a use after free in isert_connect_request  (Lv Yunlong, 1 file, -8/+8)

The device is obtained by isert_device_get() with a refcount of 1 and is assigned to isert_conn via isert_conn->device = device. When isert_create_qp() fails, the device is freed with isert_device_put(), but it is later used in isert_free_login_buf(isert_conn) through the isert_conn->device->ib_device dereference. Free the device in the correct order.

Fixes: ae9ea9ed38c9 ("iser-target: Split some logic in isert_connect_request to routines")
Link: https://lore.kernel.org/r/20210322161325.7491-1-lyl2019@mail.ustc.edu.cn
Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  IB/hfi1: Fix a typo  (Bhaskar Chowdhury, 1 file, -1/+1)

s/struture/structure/

And add the missing colon for kdoc.

Link: https://lore.kernel.org/r/20210322062923.3306167-1-unixbhaskar@gmail.com
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA/core: Correct misspellings of two words in comments  (Yangyang Li, 1 file, -2/+2)

Correct the following spelling errors:

1. shold -> should
2. uncontext -> ucontext

Link: https://lore.kernel.org/r/1616147749-49106-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA/mlx5: Set ODP caps only if device profile support ODP  (Shay Drory, 3 files, -9/+3)

Currently, ODP caps are set during the init stage of mlx5_ib_dev, regardless of whether the device profile supports ODP or not. There is no point in setting ODP caps if the device profile doesn't support ODP. Hence, move setting the ODP caps to the odp_init stage.

Link: https://lore.kernel.org/r/20210318135259.681264-1-leon@kernel.org
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA/mlx5: Fix drop packet rule in egress table  (Maor Gottlieb, 1 file, -4/+5)

Initial drop action support missed that drop action can be added to egress flow tables as well. Add the missing support. This requires making sure that dest_type isn't set to PORT, which in turn exposes a possibility of passing dst while indicating the number of dsts as zero. Explicitly check for the number of dsts and pass the appropriate pointer.

Fixes: f29de9eee782 ("RDMA/mlx5: Add support for drop action in DV steering")
Link: https://lore.kernel.org/r/20210318135123.680759-1-leon@kernel.org
Reviewed-by: Mark Bloch <markb@nvidia.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA/uverbs: Refactor rdma_counter_set_auto_mode and __counter_set_mode  (Patrisious Haddad, 1 file, -18/+24)

Success is returned in the following flows:

 * New mode is the same as the current one.
 * Switched to new mode and there are no bound counters yet.

Link: https://lore.kernel.org/r/20210318110502.673676-1-leon@kernel.org
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA/bnxt_re: Move device to error state upon device crash  (Selvin Xavier, 4 files, -0/+47)

When the L2 driver detects a device crash or a device undergoing reset, it invokes a stop callback to recover from the error. The current RoCE driver doesn't recover the device, so move the device to the error state and dispatch fatal events to all QPs. Release the MSIx vectors to avoid a crash when the L2 driver disables MSIx. Also, check the device state to avoid posting further commands to the HW.

Link: https://lore.kernel.org/r/1615968942-30970-1-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-26  RDMA: Support more than 255 rdma ports  (Mark Bloch, 98 files, -654/+640)

The current code uses many different types when dealing with a port of an RDMA device: u8, unsigned int and u32. Switch to u32 to clean up the logic. This makes (at least) the core view consistent, using the same type everywhere. Unfortunately not all places can be converted: many uverbs functions expect the port to be u8, so keep those places in order not to break UAPIs, and HW/spec-defined values must also not be changed.

With the switch to u32 we can now support devices with more than 255 ports. U32_MAX is reserved to make the control logic a bit easier to deal with; as a device with U32_MAX ports is unlikely to appear any time soon, this seems like a non-issue. When a device with more than 255 ports is created, uverbs will report the RDMA device as having 255 ports, as this is the maximum currently supported.

The verbs interface is not changed yet because the IBTA spec limits the port size to u8 in too many places, and applications that rely on verbs wouldn't be able to cope with this change. At this stage, we are extending only the interfaces that use the vendor channel.

Once the limitation is lifted, mlx5 in switchdev mode will be able to have thousands of SFs created by the device. As the only instance of an RDMA device that reports more than 255 ports will be a representor device, which exposes itself as a RAW Ethernet only device, CM/MAD/IPoIB and other ULPs aren't affected by this change and the sysfs entries/interfaces they expose to userspace can remain unchanged.

While here, clean up some alignment issues and remove unneeded sanity checks (mainly in rdmavt).

Link: https://lore.kernel.org/r/20210301070420.439400-1-leon@kernel.org
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

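[Illustration only: a hedged sketch of the shape of the type change. The struct and callback below are hypothetical stand-ins, not specific symbols converted by the series; the real series touches ~98 files, moving kernel-internal interfaces to u32 while uverbs UAPI structures keep u8.]

  #include <rdma/ib_verbs.h>

  struct demo_port_ops {
      /* before: int (*query_port)(struct ib_device *dev, u8 port_num,
       *                           struct ib_port_attr *attr); */
      int (*query_port)(struct ib_device *dev, u32 port_num,
                        struct ib_port_attr *attr);
  };
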
2021-03-25  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  (Linus Torvalds, 3 files, -4/+6)

Pull rdma fixes from Jason Gunthorpe:
 "Not much going on, just some small bug fixes:

  - Typo causing a regression in mlx5 devx

  - Regression in the recent hns rework causing the HW to get out of sync

  - Long-standing cxgb4 adaptor crash when destroying cm ids"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  RDMA/cxgb4: Fix adapter LE hash errors while destroying ipv6 listening server
  RDMA/hns: Fix bug during CMDQ initialization
  RDMA/mlx5: Fix typo in destroy_mkey inbox

2021-03-25  RDMA/cxgb4: Fix adapter LE hash errors while destroying ipv6 listening server  (Potnuri Bharat Teja, 1 file, -2/+2)

Not setting the ipv6 bit while destroying ipv6 listening servers may result in potential fatal adapter errors due to lookup engine memory hash errors. Therefore always set the ipv6 field while destroying ipv6 listening servers.

Fixes: 830662f6f032 ("RDMA/cxgb4: Add support for active and passive open connection with IPv6 address")
Link: https://lore.kernel.org/r/20210324190453.8171-1-bharat@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-23  RDMA/hns: Support to query firmware version  (Lang Cheng, 1 file, -0/+14)

Implement the ops named get_dev_fw_str to support ib_get_device_fw_str().

Link: https://lore.kernel.org/r/1615882161-53827-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-23  RDMA/mlx5: Create ODP EQ only when ODP MR is created  (Shay Drory, 3 files, -10/+29)

There is no need to create the ODP EQ if the user doesn't use ODP MRs. Hence, create it only when the first ODP MR is created. This EQ will be destroyed only when the device is unloaded.

This will decrease the number of EQs created per device. For example, if we create 1K devices (SF/VF/etc.), we decrease the number of EQs by 1K.

Link: https://lore.kernel.org/r/20210314125418.179716-1-leon@kernel.org
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-23  RDMA/hns: Fix memory corruption when allocating XRCDN  (Weihang Li, 1 file, -2/+10)

It's incorrect to cast the type of the pointer to xrcdn from (u32 *) to (unsigned long *) and then pass it into hns_roce_bitmap_alloc(); this leads to memory corruption.

Fixes: 32548870d438 ("RDMA/hns: Add support for XRC on HIP09")
Link: https://lore.kernel.org/r/1616381069-51759-1-git-send-email-liweihang@huawei.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

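[Illustration, not the hns code: a minimal standalone sketch of why this kind of cast corrupts memory on a 64-bit system. Writing through an (unsigned long *) that actually points at a u32 clobbers the 4 bytes that follow it; the names below are hypothetical.]

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for an allocator that writes its result through an
   * unsigned long pointer, as hns_roce_bitmap_alloc() was given. */
  static void alloc_id(unsigned long *out)
  {
      *out = 5;  /* writes 8 bytes on a 64-bit system */
  }

  int main(void)
  {
      struct {
          uint32_t xrcdn;      /* only 4 bytes wide... */
          uint32_t neighbour;  /* ...so this field gets clobbered */
      } ctx = { 0, 0xdeadbeef };

      alloc_id((unsigned long *)&ctx.xrcdn);        /* the bad cast */
      printf("neighbour = 0x%x\n", ctx.neighbour);  /* no longer 0xdeadbeef */
      return 0;
  }
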
2021-03-22  RDMA/cma: Remove unused leftovers in cma code  (Gal Pressman, 1 file, -9/+0)

Commit ee1c60b1bff8 ("IB/SA: Modify SA to implicitly cache Class Port info") removed the class_port_info_context struct usage; remove a couple of leftovers.

Link: https://lore.kernel.org/r/20210314143427.76101-1-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-22  RDMA: Delete not-used static inline functions  (Leon Romanovsky, 13 files, -187/+0)

Perform mass deletion of static inline functions that are not used.

Link: https://lore.kernel.org/r/20210314133908.291945-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-22  RDMA: Fix kernel-doc compilation warnings  (Leon Romanovsky, 20 files, -34/+36)

This patch fixes a bunch of kernel-doc compilation warnings like the one below:

  drivers/infiniband/hw/i40iw/i40iw_cm.c:4372: warning: expecting prototype for i40iw_ifdown_notify(). Prototype was for i40iw_if_notify() instead

Link: https://lore.kernel.org/r/20210314133908.291945-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-22  RDMA/mlx5: Add missing returned error check of mlx5_ib_dereg_mr  (Leon Romanovsky, 1 file, -1/+1)

Fix the following smatch error:

  drivers/infiniband/hw/mlx5/mr.c:1950 mlx5_ib_dereg_mr() error: uninitialized symbol 'rc'.

Fixes: e6fb246ccafb ("RDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()")
Link: https://lore.kernel.org/r/20210314082250.10143-1-leon@kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-22  RDMA/hns: Fix bug during CMDQ initialization  (Lang Cheng, 1 file, -1/+3)

When reloading the driver, the head/tail pointers of the CMDQ may not be at position 0. Then, during CMDQ initialization, if the head is reset first, the firmware will start to handle the CMDQ because the head is not equal to the tail. The driver can reset the tail first, since the firmware is triggered only by the head. This bug was introduced by changing the macros of the head/tail registers without changing the order of initialization.

Fixes: 292b3352bd5b ("RDMA/hns: Adjust fields and variables about CMDQ tail/head")
Link: https://lore.kernel.org/r/1615602611-7963-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

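[A hedged sketch of the ordering described above; the register offsets and the write_reg()/cmdq_reset_pointers() helpers are hypothetical, not the actual hns register map.]

  #include <linux/io.h>

  #define CMDQ_HEAD_REG  0x00  /* hypothetical offsets */
  #define CMDQ_TAIL_REG  0x04

  static void write_reg(void __iomem *base, unsigned int off, unsigned int val)
  {
      writel(val, base + off);
  }

  static void cmdq_reset_pointers(void __iomem *base)
  {
      /*
       * Reset the tail first: the firmware starts processing only when
       * head != tail, and only the head write triggers it, so zeroing
       * the tail before the head cannot kick off stale commands.
       */
      write_reg(base, CMDQ_TAIL_REG, 0);
      write_reg(base, CMDQ_HEAD_REG, 0);
  }
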
2021-03-13  net/mlx5: E-Switch, Refactor send to vport to be more generic  (Mark Bloch, 1 file, -2/+1)

Now that each representor stores a pointer to the managing E-Switch, use that information when creating the send-to-vport rules.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-03-13  RDMA/mlx5: Use representor E-Switch when getting netdev and metadata  (Mark Bloch, 3 files, -4/+3)

Now that a pointer to the managing E-Switch is stored in the representor, use it.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-03-12  RDMA/mlx5: Allow larger pages in DevX umem  (Jason Gunthorpe, 1 file, -10/+54)

The umem DMA list calculation was locked at 4k pages due to confusion around how this API works and is used when larger pages are present.

The conclusion is:

 - umems cannot extend past what is mapped into the process, so creating a large page size and referring to a sub-range is not allowed

 - umems must always have a page offset of zero, except for sub-PAGE_SIZE umems

 - The feature of umem_offset to create multiple objects inside a umem is buggy and isn't used anywhere. Thus we can assume all users of the current API have umem_offset == 0 as well

Provide a new page size calculator that limits the DMA list to the VA range and enforces umem_offset == 0.

Allow user space to specify the page sizes which it can accept; this bitmap must be derived from the intended use of the umem, based on per-usage HW limitations.

Link: https://lore.kernel.org/r/20210304130501.1102577-4-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  IB/core: Split uverbs_get_const/default to consider target type  (Yishai Hadas, 2 files, -4/+29)

Change uverbs_get_const/uverbs_get_const_default to work properly with both signed and unsigned parameters. The current APIs mix s64 and u64, which leads to an incorrect check when a u64 value is supplied and its upper bit is set. In that case the uverbs_get_const() / uverbs_get_const_default() lower-bound check may fail unexpectedly: the target is unsigned (lower bound is 0) but the value became negative due to the s64 usage.

Split this into two different APIs; there is no change to callers, as the required API will be called internally according to the target type.

Link: https://lore.kernel.org/r/20210304130501.1102577-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

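[Illustration, not the uverbs code: a standalone sketch of the failure mode. Storing a u64 with the top bit set in an s64 makes a lower-bound check of 0 reject a perfectly valid unsigned value.]

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t user_val = 0x8000000000000000ULL;  /* valid unsigned value */
      int64_t lower = 0;                          /* unsigned target: lower bound 0 */

      /* Mixing types the way the old API did: the u64 is reinterpreted
       * as s64, becomes negative, and fails the lower-bound check. */
      int64_t as_signed = (int64_t)user_val;
      if (as_signed < lower)
          printf("rejected: %lld < %lld\n", (long long)as_signed, (long long)lower);

      /* Checking in the unsigned domain accepts it, as it should. */
      if (user_val >= (uint64_t)lower)
          printf("accepted: %llu\n", (unsigned long long)user_val);
      return 0;
  }
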
2021-03-12  IB/core: Drop WARN_ON() from ib_umem_find_best_pgsz()  (Yishai Hadas, 1 file, -4/+0)

The WARN_ON() issued as part of ib_umem_find_best_pgsz() blocked cases when only page sizes larger than PAGE_SIZE were set; drop it to enable those cases.

In addition, there is no need for a specific check for a zero pgsz_bitmap; the function will do its job and return 0 at the end if no match is found.

Link: https://lore.kernel.org/r/20210304130501.1102577-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  RDMA/mlx5: Fix mlx5 rates to IB rates map  (Mark Zhang, 1 file, -1/+14)

Correct the map between mlx5 rates and corresponding ib rates, as they don't always have a fixed offset between them.

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Link: https://lore.kernel.org/r/20210304124517.1100608-4-leon@kernel.org
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  RDMA/mlx5: Fix query RoCE port  (Maor Gottlieb, 1 file, -3/+3)

mlx5_is_roce_enabled returns the devlink RoCE init value, therefore it should be used only when the driver is loaded. Instead we just need to read the roce_en field.

In addition, rename mlx5_is_roce_enabled to mlx5_is_roce_init_enabled.

Fixes: 7a58779edd75 ("IB/mlx5: Improve query port for representor port")
Link: https://lore.kernel.org/r/20210304124517.1100608-2-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  RDMA/mlx5: Rename mlx5_mr_cache_invalidate() to revoke_mr()  (Jason Gunthorpe, 1 file, -6/+6)

Now that this is only used in a few places in mr.c, give it a sensible name. It has nothing to do with the cache and can be invoked on any MR. DMA is stopped and the user cannot touch the MR any further once it completes.

Link: https://lore.kernel.org/r/20210304120745.1090751-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  RDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()  (Jason Gunthorpe, 4 files, -203/+90)

Now that the SRCU stuff has been removed, the entire MR destroy logic can be made a lot simpler. Currently there are many different ways to destroy a MR and it makes it really hard to do this task correctly. Route all destruction through mlx5_ib_dereg_mr() and make it work for all situations.

Since it turns out all the different MR types do basically the same thing, this removes a lot of knowledge of MR internals from ODP and leaves ODP just exporting an operation to clean up children.

This fixes a few weird corner-case bugs and firmly uses the correct ordering of the MR destruction:

 - Stop parallel access to the mkey via the ODP xarray
 - Stop DMA
 - Release the umem
 - Clean up ODP children
 - Free/Recycle the MR

Link: https://lore.kernel.org/r/20210304120745.1090751-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-12  RDMA/mlx5: Use a union inside mlx5_ib_mr  (Jason Gunthorpe, 2 files, -3/+3)

The struct mlx5_ib_mr can be used for three different things, but only one at a time:

 - In the user MR cache
 - As a kernel MR
 - As a user MR

Overlay the three things into a single union with the following rules:

 - If the mr is found on the cache_ent->head list then it is a cache MR and umem == NULL. The entire union is zero after the MR is removed from the cache.
 - If umem != NULL or type == IB_MR_TYPE_USER then it is a user MR.
 - If umem == NULL then it is a kernel MR.

This reduces the size of struct mlx5_ib_mr to 552 bytes from 702. The only place the three flows overlap in the code is during dereg, so add a few extra checks along there.

Link: https://lore.kernel.org/r/20210304120745.1090751-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

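[A hedged sketch of the pattern, not the real mlx5_ib_mr layout; all names below are illustrative stand-ins. Mutually exclusive roles share storage through an anonymous union, with the discriminating fields kept outside it.]

  #include <linux/list.h>
  #include <linux/types.h>

  struct demo_mr {
      struct ib_umem *umem;  /* NULL for cache and kernel MRs */
      int type;              /* discriminator, e.g. IB_MR_TYPE_USER */

      union {
          struct {           /* role 1: sitting in the MR cache */
              struct list_head cache_list;
          };
          struct {           /* role 2: kernel MR */
              void *descs;
              dma_addr_t desc_map;
          };
          struct {           /* role 3: user MR */
              void *odp_state;
          };
      };
  };
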
2021-03-12  RDMA/mlx5: Zero out ODP related items in the mlx5_ib_mr  (Jason Gunthorpe, 3 files, -52/+66)

All of the ODP code assumes that when it calls mlx5_mr_cache_alloc() the ODP-related fields are zeroed. This is true if the MR was just allocated, but if the MR is recycled through the cache then the values are never zeroed. This causes a bug in the odp_stats: they don't reset when the MR is reallocated, and is_odp_implicit is never zeroed either.

So that a memset() can clear a block of the mlx5_ib_mr, reorganize the structure to put all the data that can be zeroed by the cache at the end. It is organized as an anonymous struct because the next patch will make this a union.

Delete the unused smr_info. Don't set the kernel-only desc_size on the user path. There is no longer any need to zero mr->parent before freeing it; the memset() will get it now.

Fixes: a3de94e3d61e ("IB/mlx5: Introduce ODP diagnostic counters")
Link: https://lore.kernel.org/r/20210304120745.1090751-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

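[A hedged sketch of the "zero everything after a marker field" idiom described above; the struct and field names are hypothetical, not the actual mlx5_ib_mr layout.]

  #include <linux/types.h>
  #include <linux/string.h>
  #include <linux/stddef.h>

  /* Fields that must survive cache recycling go first; everything from
   * zero_from onward is wiped on every cache allocation. */
  struct demo_cached_mr {
      void *mkey;  /* preserved across recycling */

      struct {     /* anonymous struct groups the fields the cache may zero */
          int zero_from;  /* marker: first field to be cleared */
          bool is_odp_implicit;
          u64 odp_stats;
      };
  };

  static void demo_cache_reset(struct demo_cached_mr *mr)
  {
      memset((char *)mr + offsetof(struct demo_cached_mr, zero_from), 0,
             sizeof(*mr) - offsetof(struct demo_cached_mr, zero_from));
  }
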
2021-03-12  RDMA/hns: Add support for XRC on HIP09  (Wenpeng Liang, 8 files, -70/+256)

HIP09 supports the XRC transport service, which greatly reduces the number of QPs required to connect all processes in a large cluster.

Link: https://lore.kernel.org/r/1614826558-35423-1-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-11  RDMA/rtrs-clt: Use rdma_event_msg in log  (Jack Wang, 1 file, -3/+6)

It's easier to understand a string than an enum value.

Link: https://lore.kernel.org/r/20210222141551.54345-2-jinpu.wang@cloud.ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-11  RDMA/rtrs: Use new shared CQ mechanism  (Jack Wang, 4 files, -14/+15)

Have the driver use shared CQs, which provides a ~10%-20% improvement during testing. Instead of opening a CQ for each QP per connection, a CQ provided by the RDMA core driver will be shared between the QPs on that core, reducing interrupt overhead.

Link: https://lore.kernel.org/r/20210222141551.54345-1-jinpu.wang@cloud.ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-11  RDMA/core: Remove unused req_ncomp_notif device operation  (Gal Pressman, 1 file, -1/+0)

The request_ncomp_notif device operation and function are unused, remove them.

Link: https://lore.kernel.org/r/20210311150921.23726-1-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-11  RDMA/mlx5: Fix typo in destroy_mkey inbox  (Mark Zhang, 1 file, -1/+1)

Set mkey_index in the correct place when preparing the destroy_mkey inbox; this fixes the following syndrome:

  mlx5_core 0000:08:00.0: mlx5_cmd_check:782:(pid 980716): DEALLOC_PD(0x801) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x31b635)

Link: https://lore.kernel.org/r/20210311142024.1282004-1-leon@kernel.org
Fixes: 1368ead04c36 ("RDMA/mlx5: Use strict get/set operations for obj_id")
Reviewed-by: Ido Kalir <idok@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-10  RDMA/iwcm: Allow AFONLY binding for IPv6 addresses  (Bernard Metzler, 2 files, -2/+18)

Binding an IPv6 address/port to the AF_INET6 domain only is provided via rdma_set_afonly(), but was not signalled to the provider. Applications like NFS/RDMA bind the same port to both IPv4 and IPv6 addresses simultaneously and thus rely on it working correctly.

Link: https://lore.kernel.org/r/20210219143441.1068-1-bmt@zurich.ibm.com
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-10  RDMA/mlx5: Fix timestamp default mode  (Maor Gottlieb, 1 file, -4/+14)

1. Don't set the ts_format bit to default when it is reserved - the device is running in the old mode (free running).
2. XRC doesn't have a CQ, therefore the ts format in the QP context should be default / free running.
3. Set ts_format to WQ.

Fixes: 2fe8d4b87802 ("RDMA/mlx5: Fail QP creation if the device can not support the CQE TS")
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-03-10  RDMA/hns: Use new SQ doorbell register for HIP09  (Lang Cheng, 2 files, -24/+28)

HIP09 uses a new address space to map SQ doorbell registers; the doorbell of each QP is isolated within a 64KB region, which can improve performance in concurrency scenarios.

Link: https://lore.kernel.org/r/1614082833-23130-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-08  qib_fs: switch to simple_recursive_removal()  (Al Viro, 1 file, -63/+5)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-03-05  RDMA/rxe: Fix errant WARN_ONCE in rxe_completer()  (Bob Pearson, 1 file, -32/+23)

In rxe_comp.c, in rxe_completer(), the function free_pkt() did not clear skb, which triggered a warning at 'done:' and could possibly do so at 'exit:'. The WARN_ONCE() calls are not actually needed. The call to free_pkt() is moved to the end to clearly show that all skbs are freed.

Fixes: 899aba891cab ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-05  RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt()  (Bob Pearson, 1 file, -24/+35)

rxe_rcv_mcast_pkt() dropped a reference to the ib_device even when no error occurred, causing an underflow of the reference counter. This code is cleaned up to be clearer and easier to read.

Fixes: 899aba891cab ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-05  RDMA/rxe: Fix missed IB reference counting in loopback  (Bob Pearson, 1 file, -1/+9)

When the patch noted below extended the reference taken by rxe_get_dev_from_net() in rxe_udp_encap_recv() until each skb is freed, it was not matched by a reference in the loopback path, resulting in underflows.

Fixes: 899aba891cab ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

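[Illustration, not the rxe code: a toy standalone sketch of the imbalance. Every path that hands a packet to the common free code must take the device reference that the free code later drops; the loopback path here is missing its get.]

  #include <stdio.h>

  static int dev_ref = 1;  /* toy stand-in for the ib_device refcount */

  static void dev_get(void) { dev_ref++; }
  static void dev_put(void) { dev_ref--; }

  /* Common free path: always drops one reference per packet. */
  static void free_pkt(void) { dev_put(); }

  static void recv_from_network(void) { dev_get(); free_pkt(); }  /* balanced */
  static void recv_loopback(void)     {            free_pkt(); }  /* missing dev_get(): underflow */

  int main(void)
  {
      recv_from_network();
      recv_loopback();
      printf("refcount = %d (expected 1)\n", dev_ref);  /* prints 0 */
      return 0;
  }
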
2021-03-05  scsi: target: core: Add gfp_t arg to target_cmd_init_cdb()  (Mike Christie, 1 file, -1/+2)

tcm_loop could be used like a normal block device, so we can't use GFP_KERNEL and should use GFP_NOIO. This adds a gfp_t arg to target_cmd_init_cdb() and converts the users. For every driver but loop GFP_KERNEL is kept.

This will also be useful in subsequent patches where loop needs to do target_submit_prep() from interrupt context to get a ref to the se_device, and so it will need to use GFP_ATOMIC.

Link: https://lore.kernel.org/r/20210227170006.5077-16-michael.christie@oracle.com
Tested-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

2021-03-05  scsi: target: srpt: Convert to new submission API  (Mike Christie, 1 file, -5/+8)

target_submit_cmd_map_sgls() is being removed, so convert srpt to the new submission API.

srpt uses target_stop_session() to sync session shutdown with LIO core, so we use target_init_cmd()/target_submit_prep()/target_submit(), because target_init_cmd() will detect the target_stop_session() call and return an error.

Link: https://lore.kernel.org/r/20210227170006.5077-6-michael.christie@oracle.com
Cc: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

2021-03-03  RDMA/uverbs: Fix kernel-doc warning of _uverbs_alloc  (Leon Romanovsky, 1 file, -1/+1)

Fix the following W=1 compilation warning:

  drivers/infiniband/core/uverbs_ioctl.c:108: warning: expecting prototype for uverbs_alloc(). Prototype was for _uverbs_alloc() instead

Fixes: 461bb2eee4e1 ("IB/uverbs: Add a simple allocator to uverbs_attr_bundle")
Link: https://lore.kernel.org/r/20210302074214.1054299-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2021-03-03  RDMA/mlx5: Set correct kernel-doc identifier  (Leon Romanovsky, 1 file, -1/+1)

The W=1 allmodconfig build produces the following warning:

  drivers/infiniband/hw/mlx5/odp.c:1086: warning: wrong kernel-doc identifier on line:
   * Parse a series of data segments for page fault handling.

Fix it by changing /** to be /*, as described in the kernel-doc documentation.

Fixes: 5e769e444d26 ("RDMA/hw/mlx5/odp: Fix formatting and add missing descriptions in 'pagefault_data_segments()'")
Link: https://lore.kernel.org/r/20210302074214.1054299-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

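[A short example of the distinction: only comments meant to be parsed by kernel-doc may start with the double-star opener, and they must follow the kernel-doc layout; ordinary explanatory comments use the plain opener. The function name below is hypothetical.]

  /**
   * demo_handle_fault() - Handle a page fault for a memory region.
   * @addr: faulting address
   *
   * Return: 0 on success, negative errno on failure.
   */
  int demo_handle_fault(unsigned long addr);

  /*
   * Parse a series of data segments for page fault handling.
   * (Plain comment: not kernel-doc, so it uses a single-star opener.)
   */
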
2021-03-01  IB/mlx5: Add missing error code  (YueHaibing, 1 file, -1/+3)

Set err to -ENOMEM if kzalloc fails instead of 0.

Fixes: 759738537142 ("IB/mlx5: Enable subscription for device events over DEVX")
Link: https://lore.kernel.org/r/20210222122343.19720-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

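[A generic sketch of the bug class, not the devx code; the function name and label below are hypothetical. When a later allocation fails inside a function whose err was initialized to 0, the error path must set err explicitly or the caller sees success.]

  #include <linux/slab.h>

  static int demo_subscribe(void **out)
  {
      int err = 0;
      void *event;

      event = kzalloc(64, GFP_KERNEL);
      if (!event) {
          err = -ENOMEM;  /* the missing assignment: without it, 0 is returned */
          goto out;
      }

      *out = event;
  out:
      return err;
  }
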
2021-03-01  RDMA/rxe: Fix missing kconfig dependency on CRYPTO  (Julian Braha, 1 file, -0/+1)

When RDMA_RXE is enabled and CRYPTO is disabled, Kbuild gives the following warning:

  WARNING: unmet direct dependencies detected for CRYPTO_CRC32
    Depends on [n]: CRYPTO [=n]
    Selected by [y]:
    - RDMA_RXE [=y] && (INFINIBAND_USER_ACCESS [=y] || !INFINIBAND_USER_ACCESS [=y]) && INET [=y] && PCI [=y] && INFINIBAND [=y] && INFINIBAND_VIRT_DMA [=y]

This is because RDMA_RXE selects CRYPTO_CRC32 without depending on or selecting CRYPTO, despite that config option being subordinate to CRYPTO.

Fixes: cee2688e3cd6 ("IB/rxe: Offload CRC calculation when possible")
Signed-off-by: Julian Braha <julianbraha@gmail.com>
Link: https://lore.kernel.org/r/21525878.NYvzQUHefP@ubuntu-mate-laptop
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
