Age | Commit message | Author | Files | Lines
2020-05-04 | net/smc: llc_del_link_work and use the LLC flow for delete link | Karsten Graul | 3 | -34/+45
Introduce a work that is scheduled when a new DELETE_LINK LLC request is received. The work will call either the SMC client or SMC server DELETE_LINK processing. And use the LLC flow framework to process incoming DELETE_LINK LLC messages, scheduling the llc_del_link_work for those events. With these changes smc_lgr_forget() is only called by one function and can be migrated into smc_lgr_cleanup_early(). Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: delete an asymmetric link as SMC server | Karsten Graul | 3 | -2/+82
When a link group moved from asymmetric to symmetric state then the dangling asymmetric link can be deleted. Add smc_llc_find_asym_link() to find the respective link and add smc_llc_delete_asym_link() to delete it. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: final part of add link processing as SMC server | Karsten Graul | 1 | -1/+28
This patch finalizes the ADD_LINK processing of new links. Send the CONFIRM_LINK request to the peer, receive the response and set link state to ACTIVE. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: rkey processing for a new link as SMC server | Karsten Graul | 1 | -1/+42
Part of SMC server new link establishment is the exchange of rkeys for used buffers. Loop over all used RMB buffers and send ADD_LINK_CONTINUE LLC messages to the peer. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
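A minimal standalone sketch of the loop described above, with hypothetical names (demo_rmb, demo_send_rkeys, send_rkey) standing in for the SMC structures and the LLC send routine: walk the in-use RMBs and emit one message per buffer, stopping on the first error.

    struct demo_rmb {
            struct demo_rmb *next;      /* simple list instead of the lgr's buffer lists */
            int used;                   /* only buffers in use are announced */
            unsigned int rkey;          /* remote key registered with the IB device */
    };

    /* Send one ADD_LINK_CONTINUE-style message per used buffer. */
    static int demo_send_rkeys(struct demo_rmb *head,
                               int (*send_rkey)(unsigned int rkey))
    {
            struct demo_rmb *rmb;
            int rc;

            for (rmb = head; rmb; rmb = rmb->next) {
                    if (!rmb->used)
                            continue;
                    rc = send_rkey(rmb->rkey);
                    if (rc)
                            return rc;
            }
            return 0;
    }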
2020-05-04 | net/smc: first part of add link processing as SMC server | Karsten Graul | 3 | -2/+92
First set of functions to process an ADD_LINK LLC request as an SMC server. Find an alternate IB device, determine the new link group type and get the index for the new link. Then initialize the link and send the ADD_LINK LLC message to the peer. Save the contents of the response, ready the link, map all used buffers and register the buffers with the IB device. If any error occurs, stop the processing and clear the link. And call smc_llc_srv_add_link() in af_smc.c to start second link establishment after the initial link of a link group was created. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: final part of add link processing as SMC client | Karsten Graul | 3 | -4/+72
This patch finalizes the ADD_LINK processing of new links. Receive the CONFIRM_LINK request from peer, complete the link initialization, register all used buffers with the IB device and finally send the CONFIRM_LINK response, which completes the ADD_LINK processing. And activate smc_llc_cli_add_link() in af_smc.c. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: rkey processing for a new link as SMC client | Karsten Graul | 2 | -1/+157
Part of the SMC client new link establishment process is the exchange of rkeys for all used buffers. Add new LLC message type ADD_LINK_CONTINUE which is used to exchange rkeys of all current RMB buffers. Add functions to iterate over all used RMB buffers of the link group, and implement the ADD_LINK_CONTINUE processing. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net/smc: first part of add link processing as SMC client | Karsten Graul | 3 | -2/+111
First set of functions to process an ADD_LINK LLC request as an SMC client. Find an alternate IB device, determine the new link group type and get the index for the new link. Then ready the link, map the buffers and send an ADD_LINK LLC response. If any error occurs, send a reject LLC message and terminate the processing. Add smc_llc_alloc_alt_link() to find a free link index for a new link, depending on the new link group type. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | Merge branch 'Enhance-current-features-in-ena-driver' | David S. Miller | 6 | -70/+105
Sameeh Jubran says: ==================== Enhance current features in ena driver Difference from v2: * dropped patch "net: ena: move llq configuration from ena_probe to ena_device_init()" * reworked patch "net: ena: implement ena_com_get_admin_polling_mode()" to drop the prototype Difference from v1: * reordered patches #01 and #02. * dropped adding Rx/Tx drops to ethtool in patch #08 V1: This patchset introduces the following: * minor changes to RSS feature * add total rx and tx drop counter * add unmask_interrupt counter for ethtool statistics * add missing implementation for ena_com_get_admin_polling_mode() * some minor code clean-up and cosmetics * use SHUTDOWN as reset reason when closing interface ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: cosmetic: extract code to ena_indirection_table_set() | Arthur Kiyanovski | 1 | -18/+30
Extract code to ena_indirection_table_set() to make the code cleaner. Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: cosmetic: remove unnecessary spaces and tabs in ena_com.h macros | Sameeh Jubran | 1 | -7/+7
The macros in ena_com.h have inconsistent spaces between the macro name and its value. This commit sets all the macros to have a single space between the name and value. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: use SHUTDOWN as reset reason when closing interface | Sameeh Jubran | 1 | -0/+2
The 'ENA_REGS_RESET_SHUTDOWN' enum indicates a normal driver shutdown / removal procedure. Also, a comment is added to one of the reset reason assignments for code clarity. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: drop superfluous prototype | Arthur Kiyanovski | 1 | -12/+0
Before this commit there was a function prototype named ena_com_get_ena_admin_polling_mode() that was never implemented. This patch simply deletes it. Signed-off-by: Igor Chauskin <igorch@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: add support for reporting of packet drops | Sameeh Jubran | 3 | -0/+15
1. Add support for getting tx drops from the device and saving them in the driver. 2. Report tx drops via netdev stats. Signed-off-by: Igor Chauskin <igorch@amazon.com> Signed-off-by: Guy Tzalik <gtzalik@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
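A rough kernel-style sketch of the reporting side, with a hypothetical demo_adapter in place of the real ena structures: the driver keeps a drop counter refreshed from device statistics and exposes it through the standard netdev stats callback.

    #include <linux/netdevice.h>

    struct demo_adapter {
            u64 tx_drops;               /* refreshed from the device's statistics */
    };

    /* Standard .ndo_get_stats64 callback: surface the counter as tx_dropped. */
    static void demo_get_stats64(struct net_device *dev,
                                 struct rtnl_link_stats64 *stats)
    {
            struct demo_adapter *adapter = netdev_priv(dev);

            stats->tx_dropped = adapter->tx_drops;
    }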
2020-05-04 | net: ena: add unmask interrupts statistics to ethtool | Sameeh Jubran | 3 | -0/+5
Add unmask interrupts statistics to ethtool. Signed-off-by: Netanel Belgazal <netanel@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: remove code that does nothing | Sameeh Jubran | 1 | -4/+1
Both key and func parameters are pointers on the stack. Setting them to NULL does nothing. The original intent was to leave the key and func unset in this case, but for this to happen nothing needs to be done as the calling function ethtool_get_rxfh() already clears key and func. This commit removes the above described useless code. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
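A small standalone C illustration of why the removed code had no effect: the pointers are the callee's local copies, so storing NULL into them never reaches the caller.

    #include <stdio.h>
    #include <string.h>

    /* Simplified model of the removed pattern: "key" and "func" are the
     * callee's own stack copies of the pointers. */
    static void get_rxfh_broken(unsigned char *key, int *func)
    {
            key = NULL;         /* changes only the local copy */
            func = NULL;        /* likewise: dead stores */
    }

    int main(void)
    {
            unsigned char key[8];
            int func = 42;

            memset(key, 0xab, sizeof(key));
            get_rxfh_broken(key, &func);

            /* The caller still sees its own data untouched. */
            printf("key[0]=0x%02x func=%d\n", key[0], func);
            return 0;
    }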
2020-05-04 | net: ena: changes to RSS hash key allocation | Sameeh Jubran | 1 | -11/+5
This commit contains 2 cosmetic changes: 1. Use ena_com_check_supported_feature_id() in ena_com_hash_key_fill_default_key() instead of rewriting its implementation. This also saves us a superfluous admin command by using the cached value. 2. Change if conditions in ena_com_rss_init() to be clearer. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: change default RSS hash function to Toeplitz | Arthur Kiyanovski | 1 | -1/+1
Currently in the driver we are setting the hash function to be CRC32. Starting with this commit we want to change the default behaviour so that we set the hash function to be Toeplitz instead. Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: allow setting the hash function without changing the key | Sameeh Jubran | 3 | -14/+32
Current code does not allow setting the hash function without changing the key. This commit enables it. To achieve this we separate ena_com_get_hash_function() into 2 functions: ena_com_get_hash_function() - which gets only the hash function, and ena_com_get_hash_key() - which gets only the hash key. Also return 0 instead of rc at the end of ena_get_rxfh() since all previous operations succeeded. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net: ena: fix error returning in ena_com_get_hash_function() | Arthur Kiyanovski | 1 | -2/+4
In case the "func" parameter is NULL we now return -EINVAL. This shouldn't happen in general, but when it does happen, this is the proper way to handle it. We also check func for NULL at the beginning of the function, as there is no reason to do all the work only to realize at the end of the function that it was useless. Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
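A minimal sketch of the validation pattern, using made-up names (demo_rss_state, demo_get_hash_function) rather than the actual ena_com code: reject a NULL output pointer up front and fail fast with -EINVAL.

    #include <errno.h>
    #include <stddef.h>

    enum demo_hash_func { DEMO_HASH_TOEPLITZ, DEMO_HASH_CRC32 };

    struct demo_rss_state {
            enum demo_hash_func func;   /* cached hash function setting */
    };

    /* Check the output pointer before doing any work. */
    static int demo_get_hash_function(const struct demo_rss_state *rss,
                                      enum demo_hash_func *func)
    {
            if (!func)
                    return -EINVAL;

            *func = rss->func;
            return 0;
    }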
2020-05-04 | net: ena: avoid unnecessary admin command when RSS function set fails | Arthur Kiyanovski | 1 | -1/+3
Currently when ena_set_hash_function() fails the hash function is restored to the previous value by calling an admin command to get the hash function from the device. In this commit we avoid the admin command, by saving the previous hash function before calling ena_set_hash_function() and using this previous value to restore the hash function in case of failure of ena_set_hash_function(). Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
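A sketch of the save-and-restore idea, again with hypothetical names and a stubbed-out device call: remember the cached value before the update and put it back locally if the update fails, instead of re-reading it from the device with another admin command.

    struct demo_rss {
            int hash_func;              /* cached copy of the device setting */
    };

    /* Stand-in for the admin command that programs the device; may fail. */
    static int demo_device_set_hash(struct demo_rss *rss, int new_func)
    {
            rss->hash_func = new_func;
            return 0;
    }

    static int demo_set_hash_checked(struct demo_rss *rss, int new_func)
    {
            int prev = rss->hash_func;  /* saved before the attempt */
            int rc;

            rc = demo_device_set_hash(rss, new_func);
            if (rc)
                    rss->hash_func = prev;  /* restore locally, no extra admin command */
            return rc;
    }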
2020-05-04 | Merge branch 'sch_fq-optimizations' | David S. Miller | 1 | -36/+48
Eric Dumazet says: ==================== net_sched: sch_fq: round of optimizations This series is focused on a better layout of struct fq_flow to reduce the number of cache line misses in fq_enqueue() and dequeue operations. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net_sched: sch_fq: perform a prefetch() earlier | Eric Dumazet | 1 | -1/+1
The prefetch() done in fq_dequeue() can be done a bit earlier after the refactoring of the code done in the prior patch. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net_sched: sch_fq: do not call fq_peek() twice per packet | Eric Dumazet | 1 | -18/+16
This refactors the code to not call fq_peek() from fq_dequeue_head() since the caller can provide the skb. Also rename fq_dequeue_head() to fq_dequeue_skb() because 'head' is a bit vague, given the skb could come from t_root rb-tree. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net_sched: sch_fq: use bulk freeing in fq_gc() | Eric Dumazet | 1 | -7/+11
fq_gc() already builds a small array of pointers, so using kmem_cache_free_bulk() needs very little change. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
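A kernel-style sketch of the bulk-free pattern (not the actual sch_fq code): the garbage collector already gathers its victims into a small array, so the whole batch can be handed to the slab allocator in one kmem_cache_free_bulk() call instead of freeing the objects one by one.

    #include <linux/slab.h>

    #define DEMO_GC_MAX 8

    struct demo_flow;                   /* opaque here; stands in for struct fq_flow */

    static void demo_gc_free(struct kmem_cache *cache,
                             struct demo_flow **victims, int nr)
    {
            void *tofree[DEMO_GC_MAX];
            int i;

            for (i = 0; i < nr; i++)
                    tofree[i] = victims[i];

            /* One slab call for the whole batch. */
            kmem_cache_free_bulk(cache, nr, tofree);
    }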
2020-05-04 | net_sched: sch_fq: change fq_flow size/layout | Eric Dumazet | 1 | -2/+7
sizeof(struct fq_flow) is 112 bytes on 64bit arches. This means that half of them use two cache lines, while the other half use three. This patch adds cache line alignment, and makes sure that only the first cache line is touched by fq_enqueue(), which is more expensive than fq_dequeue() in general. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-04 | net_sched: sch_fq: avoid touching f->next from fq_gc() | Eric Dumazet | 1 | -8/+13
A significant amount of cpu cycles is spent in fq_gc(). When fq_gc() does its lookup in the rb-tree, it needs the following fields from struct fq_flow: f->sk (lookup key in the rb-tree), f->fq_node (anchor in the rb-tree), f->next (used to determine if the flow is detached), and f->age (used to determine if the flow is a candidate for gc). This unfortunately spans two cache lines (assuming 64-byte cache lines). We can avoid using f->next if we use the low order bit of f->{age|tail}. This low order bit is 0 if f->tail points to an sk_buff, and we set the low order bit to 1 if the union contains a jiffies value. Combined with the following patch, this makes sure we only need to bring into cpu caches one cache line per flow. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
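A standalone model of the tagging trick described above: because f->tail always points to an aligned sk_buff, its low-order bit is 0, so the same union word can carry a jiffies value tagged with a 1 in that bit, and "detached" can be tested without touching f->next.

    #include <assert.h>
    #include <stdio.h>

    struct demo_flow_tail {
            union {
                    void          *tail;    /* last queued packet (aligned, low bit 0) */
                    unsigned long  age;     /* detach time, tagged with low bit 1 */
            };
    };

    static void demo_flow_set_detached(struct demo_flow_tail *f, unsigned long now)
    {
            f->age = now | 1UL;             /* tag so it cannot look like a pointer */
    }

    static int demo_flow_is_detached(const struct demo_flow_tail *f)
    {
            return (f->age & 1UL) != 0;
    }

    int main(void)
    {
            static int pkt;                 /* stand-in for an aligned sk_buff */
            struct demo_flow_tail f;

            f.tail = &pkt;
            assert(!demo_flow_is_detached(&f));

            demo_flow_set_detached(&f, 1000);
            assert(demo_flow_is_detached(&f));
            printf("detached, tagged age = %lu\n", f.age);
            return 0;
    }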
2020-05-03 | inet_diag: bc: read cgroup id only for full sockets | Dmitry Yakunin | 1 | -1/+2
Fix bug introduced by commit b1f3e43dbfac ("inet_diag: add support for cgroup filter"). Signed-off-by: Dmitry Yakunin <zeil@yandex-team.ru> Reported-by: syzbot+ee80f840d9bf6893223b@syzkaller.appspotmail.com Reported-by: syzbot+13bef047dbfffa5cd1af@syzkaller.appspotmail.com Fixes: b1f3e43dbfac ("inet_diag: add support for cgroup filter") Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-03 | net: ethernet: fec: Replace interrupt driven MDIO with polled IO | Andrew Lunn | 2 | -36/+45
Measurements of the MDIO bus have shown that driving the MDIO bus using interrupts is slow. Back to back MDIO transactions take about 90us, with 25us spent performing the transaction, and the remainder of the time the bus is idle. Replacing the completion interrupt with polled IO results in back to back transactions of 40us. The polling loop waiting for the hardware to complete the transaction takes around 28us, which suggests interrupt handling has an overhead of around 50us; polled IO nearly halves this overhead and doubles the MDIO performance. Care has to be taken when setting the MII_SPEED register, or it can trigger an MII event. That then upsets the polling, due to an unexpected pending event. Suggested-by: Chris Healy <cphealy@gmail.com> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
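A hedged kernel-style sketch of the polled wait this change introduces; the register pointer and bit mask are placeholders, not the real FEC definitions, and the 2 us poll interval / 30 ms timeout are illustrative values.

    #include <linux/iopoll.h>

    /* Busy-poll the interrupt-event register until the MII-done bit is set,
     * instead of sleeping on a completion signalled by an interrupt. */
    static int demo_mdio_wait_done(void __iomem *ievent_reg, u32 mii_done_bit)
    {
            u32 ievent;

            return readl_poll_timeout_atomic(ievent_reg, ievent,
                                             ievent & mii_done_bit,
                                             2, 30000);
    }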
2020-05-03 | smc: Remove unused function. | David S. Miller | 1 | -24/+0
net/smc/smc_llc.c:544:12: warning: ‘smc_llc_alloc_alt_link’ defined but not used [-Wunused-function]
 static int smc_llc_alloc_alt_link(struct smc_link_group *lgr,
            ^~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-03 | Merge branch 'ptp-Add-adjust-phase-to-support-phase-offset' | David S. Miller | 7 | -6/+114
Vincent Cheng says: ==================== ptp: Add adjust phase to support phase offset. This series adds adjust phase to the PTP Hardware Clock device interface. Some PTP hardware clocks have a write phase mode that has a built-in hardware filtering capability. The write phase mode utilizes a phase offset control word instead of a frequency offset control word. Add adjust phase function to take advantage of this capability. Changes since v1: - As suggested by Richard Cochran: 1. ops->adjphase is new so need to check for non-null function pointer. 2. Kernel coding style uses lower_case_underscores. 3. Use existing PTP clock API for delayed worker. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-03 | ptp: ptp_clockmatrix: Add adjphase() to support PHC write phase mode. | Vincent Cheng | 2 | -2/+98
Add idtcm_adjphase() to support PHC write phase mode. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-03 | ptp: Add adjust_phase to ptp_clock_caps capability. | Vincent Cheng | 3 | -3/+8
Add adjust_phase to ptp_clock_caps capability to allow user to query if a PHC driver supports adjust phase with ioctl PTP_CLOCK_GETCAPS command. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Reviewed-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
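A small userspace example of the query this patch enables (assuming a kernel with this series applied and a PHC at /dev/ptp0): open the clock device, issue PTP_CLOCK_GETCAPS and look at the new adjust_phase field.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
            struct ptp_clock_caps caps = { 0 };
            int fd = open("/dev/ptp0", O_RDWR);

            if (fd < 0) {
                    perror("open /dev/ptp0");
                    return 1;
            }
            if (ioctl(fd, PTP_CLOCK_GETCAPS, &caps)) {
                    perror("PTP_CLOCK_GETCAPS");
                    close(fd);
                    return 1;
            }
            printf("adjust_phase supported: %d\n", caps.adjust_phase);
            close(fd);
            return 0;
    }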
2020-05-03 | ptp: Add adjphase function to support phase offset control. | Vincent Cheng | 2 | -1/+8
Adds adjust phase function to take advantage of a PHC clock's hardware filtering capability that uses phase offset control word instead of frequency offset control word. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Reviewed-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
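A driver-side sketch (hypothetical "demo" driver, hardware access stubbed out, assuming a kernel with this series) of how a PHC driver opts in: implement the new callback and set .adjphase in its ptp_clock_info.

    #include <linux/module.h>
    #include <linux/ptp_clock_kernel.h>

    /* The new op takes the requested phase offset in nanoseconds; a real
     * driver would translate it into the device's phase offset control word. */
    static int demo_adjphase(struct ptp_clock_info *ptp, s32 phase_ns)
    {
            return 0;                        /* stub: hardware write omitted */
    }

    static struct ptp_clock_info demo_caps = {
            .owner    = THIS_MODULE,
            .name     = "demo-phc",
            .adjphase = demo_adjphase,       /* advertised via PTP_CLOCK_GETCAPS */
    };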
2020-05-02 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 153 | -2960/+6332
Alexei Starovoitov says: ==================== pull-request: bpf-next 2020-05-01 (v2) The following pull-request contains BPF updates for your *net-next* tree. We've added 61 non-merge commits during the last 6 day(s) which contain a total of 153 files changed, 6739 insertions(+), 3367 deletions(-). The main changes are: 1) pulled work.sysctl from vfs tree with sysctl bpf changes. 2) bpf_link observability, from Andrii. 3) BTF-defined map in map, from Andrii. 4) asan fixes for selftests, from Andrii. 5) Allow bpf_map_lookup_elem for SOCKMAP and SOCKHASH, from Jakub. 6) production cloudflare classifier as a selftest, from Lorenz. 7) bpf_ktime_get_*_ns() helper improvements, from Maciej. 8) unprivileged bpftool feature probe, from Quentin. 9) BPF_ENABLE_STATS command, from Song. 10) enable bpf_[gs]etsockopt() helpers for sock_ops progs, from Stanislav. 11) enable a bunch of common helpers for cg-device, sysctl, sockopt progs, from Stanislav. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | Merge branch 'net-smc-extent-buffer-mapping-and-port-handling' | David S. Miller | 11 | -196/+522
Karsten Graul says: ==================== net/smc: extent buffer mapping and port handling Add functionality to map/unmap and register/unregister memory buffers for specific SMC-R links and for the whole link group. Prepare LLC layer messages for the support of multiple links and extend the processing of adapter events. And add further small preparations needed for the SMC-R failover support. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | selftests/bpf: Use reno instead of dctcp | Stanislav Fomichev | 2 | -4/+3
Andrey pointed out that we can use reno instead of dctcp for CC tests and drop CONFIG_TCP_CONG_DCTCP=y requirement. Fixes: beecf11bc218 ("bpf: Bpf_{g,s}etsockopt for struct bpf_sock_addr") Suggested-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200501224320.28441-1-sdf@google.com
2020-05-02 | net/smc: llc_add_link_work to handle ADD_LINK LLC requests | Karsten Graul | 2 | -2/+23
Introduce a work that is scheduled when a new ADD_LINK LLC request is received. The work will call either the SMC client or SMC server ADD_LINK processing. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: allocate index for a new link | Karsten Graul | 2 | -1/+25
Add smc_llc_alloc_alt_link() to find a free link index for a new link, depending on the new link group type. And update constants for the maximum number of links to 3 (2 symmetric and 1 dangling asymmetric link). These maximum numbers are the same as used by other implementations of the SMC-R protocol. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
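An illustrative standalone sketch of the index search (not the SMC implementation): scan a fixed-size link array, capped at the three links mentioned above, and return the first unused slot.

    #include <errno.h>

    #define DEMO_MAX_LINKS 3                /* 2 symmetric + 1 dangling asymmetric */

    enum demo_link_state { DEMO_LNK_UNUSED, DEMO_LNK_ACTIVE };

    static int demo_alloc_alt_link(const enum demo_link_state links[DEMO_MAX_LINKS])
    {
            int i;

            for (i = 0; i < DEMO_MAX_LINKS; i++)
                    if (links[i] == DEMO_LNK_UNUSED)
                            return i;       /* index for the new link */
            return -EMLINK;                 /* no free slot in this link group */
    }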
2020-05-02 | net/smc: introduce smc_pnet_find_alt_roce() | Karsten Graul | 2 | -3/+17
Introduce a new function in smc_pnet.c that searches for an alternate IB device, using an existing link group and a primary IB device. The alternate IB device needs to be active and must have the same PNETID as the link group. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: remove DELETE LINK processing from smc_core.c | Karsten Graul | 1 | -33/+0
Support for multiple links makes the former DELETE LINK processing obsolete which sent one DELETE_LINK LLC message for each single link. Remove this processing from smc_core.c. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: take link down instead of terminating the link group | Karsten Graul | 4 | -19/+13
Use the introduced link down processing in all places where the link group is terminated and take down the affected link only. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: add smcr_port_err() and smcr_link_down() processing | Karsten Graul | 4 | -32/+98
Call smcr_port_err() when an IB event reports an inactive IB device. smcr_port_err() calls smcr_link_down() for all affected links. smcr_link_down() either triggers the local DELETE_LINK processing, or sends a DELETE_LINK LLC message to the SMC server to initiate the processing. The old handler function smc_port_terminate() is removed. Add helper smcr_link_down_cond() to take a link down conditionally, and smcr_link_down_cond_sched() to schedule the link_down processing to a work. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: add smcr_port_add() and smcr_link_up() processing | Karsten Graul | 3 | -0/+88
Call smcr_port_add() when an IB event reports a new active IB device. smcr_port_add() will start a work which either triggers the local ADD_LINK processing, or sends an ADD_LINK LLC message to the SMC server to initiate the processing. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: remember PNETID of IB device for later device matching | Karsten Graul | 2 | -0/+4
The PNETID is needed to find an alternate link for a link group. Save the PNETID of the link that is used to create the link group for later device matching. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: mutex to protect the lgr against parallel reconfigurations | Karsten Graul | 4 | -12/+34
Introduce llc_conf_mutex in the link group which is used to protect the buffers and lgr states against parallel link reconfiguration. This ensures that new connections do not start to register buffers with the links of a link group when link creation or termination is running. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
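A kernel-style sketch of the locking rule described above (illustrative names, not the SMC code): buffer registration for new connections and link reconfiguration serialize on the same per-link-group mutex, so they cannot interleave.

    #include <linux/mutex.h>

    struct demo_lgr {
            struct mutex llc_conf_mutex;    /* guards links and buffer registration */
    };

    static void demo_reg_bufs(struct demo_lgr *lgr)
    {
            mutex_lock(&lgr->llc_conf_mutex);   /* waits while a link is added/removed */
            /* ... register the connection's buffers with every existing link ... */
            mutex_unlock(&lgr->llc_conf_mutex);
    }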
2020-05-02 | net/smc: extend smc_llc_send_add_link() and smc_llc_send_delete_link() | Karsten Graul | 3 | -46/+62
All LLC sends are done from worker context only, so remove the prep functions which were used to build the message before it was sent, and add the function content into the respective send function smc_llc_send_add_link() and smc_llc_send_delete_link(). Extend smc_llc_send_add_link() to include the qp_mtu value in the LLC message, which is needed to establish a link after the initial link was created. Extend smc_llc_send_delete_link() to contain a link_id and a reason code for the link deletion in the LLC message, which is needed when a specific link should be deleted. And add the list of existing DELETE_LINK reason codes. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: map and register buffers for a new link | Karsten Graul | 2 | -0/+62
Introduce support to map and register all current buffers for a new link. smcr_buf_map_lgr() will map used buffers for a new link and smcr_buf_reg_lgr() can be called to register used buffers on the IB device of the new link. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: unmapping of buffers to support multiple links | Karsten Graul | 2 | -17/+60
With the support of multiple links that are created and cleared there is a need to unmap one link from all current buffers. Add unmapping by link and by rmb. And make smcr_link_clear() available to be called from the LLC layer. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-02 | net/smc: multiple link support for rmb buffer registration | Karsten Graul | 3 | -35/+36
The CONFIRM_RKEY LLC processing handles all links in one LLC message. Move the call to this processing out of smcr_link_reg_rmb() which does processing per link, into smcr_lgr_reg_rmbs() which is responsible for link group level processing. Move smcr_link_reg_rmb() into module smc_core.c. From af_smc.c now call smcr_lgr_reg_rmbs() to register new rmbs on all available links. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>