author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-07-11 20:55:49 +0300
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-07-11 20:55:49 +0300
commit | 237f83dfbe668443b5e31c3c7576125871cca674 (patch)
tree | 11848a8d0aa414a1d3ce2024e181071b1d9dea08 /drivers/net/ethernet/qlogic
parent | 8f6ccf6159aed1f04c6d179f61f6fb2691261e84 (diff)
parent | 1ff2f0fa450ea4e4f87793d9ed513098ec6e12be (diff)
download | linux-237f83dfbe668443b5e31c3c7576125871cca674.tar.xz
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
"Some highlights from this development cycle:
1) Big refactoring of ipv6 route and neigh handling to support
nexthop objects configurable as units from userspace. From David
Ahern.
2) Convert explored_states in the BPF verifier into a hash table,
significantly decreasing the state held for programs with bpf2bpf
calls, from Alexei Starovoitov.
3) Implement bpf_send_signal() helper, from Yonghong Song.
4) Various classifier enhancements to mvpp2 driver, from Maxime
Chevallier.
5) Add aRFS support to hns3 driver, from Jian Shen.
6) Fix use after free in inet frags by allocating fqdirs dynamically
and reworking how rhashtable dismantle occurs, from Eric Dumazet.
7) Add act_ctinfo packet classifier action, from Kevin
Darbyshire-Bryant.
8) Add TFO key backup infrastructure, from Jason Baron.
9) Remove several old and unused ISDN drivers, from Arnd Bergmann.
10) Add devlink notifications for flash update status to mlxsw driver,
from Jiri Pirko.
11) Lots of kTLS offload infrastructure fixes, from Jakub Kicinski.
12) Add support for mv88e6250 DSA chips, from Rasmus Villemoes.
13) Various enhancements to ipv6 flow label handling, from Eric
Dumazet and Willem de Bruijn.
14) Support TLS offload in nfp driver, from Jakub Kicinski, Dirk van
der Merwe, and others.
15) Various improvements to axienet driver including converting it to
phylink, from Robert Hancock.
16) Add PTP support to sja1105 DSA driver, from Vladimir Oltean.
17) Add mqprio qdisc offload support to dpaa2-eth, from Ioana
Radulescu.
18) Add devlink health reporting to mlx5, from Moshe Shemesh.
19) Convert stmmac over to phylink, from Jose Abreu.
20) Add PTP hardware clock (PHC) support to mlxsw, from Shalom
Toledo.
21) Add nftables SYNPROXY support, from Fernando Fernandez Mancera.
22) Convert tcp_fastopen over to use SipHash, from Ard Biesheuvel.
23) Track spill/fill of constants in BPF verifier, from Alexei
Starovoitov.
24) Support bounded loops in BPF, from Alexei Starovoitov (see the
sketch after this list).
25) Various page_pool API fixes and improvements, from Jesper Dangaard
Brouer.
26) Just like ipv4, support ref-countless ipv6 route handling. From
Wei Wang.
27) Support VLAN offloading in aquantia driver, from Igor Russkikh.
28) Add AF_XDP zero-copy support to mlx5, from Maxim Mikityanskiy.
29) Add flower GRE encap/decap support to nfp driver, from Pieter
Jansen van Vuuren.
30) Protect against stack overflow when using act_mirred, from John
Hurley.
31) Allow devmap map lookups from eBPF, from Toke Høiland-Jørgensen.
32) Use page_pool API in netsec driver, from Ilias Apalodimas.
33) Add Google gve network driver, from Catherine Sullivan.
34) More indirect call avoidance, from Paolo Abeni.
35) Add kTLS TX HW offload support to mlx5, from Tariq Toukan.
36) Add XDP_REDIRECT support to bnxt_en, from Andy Gospodarek.
37) Add MPLS manipulation actions to TC, from John Hurley.
38) Add sending a packet to connection tracking from TC actions, and
then allow flower classifier matching on conntrack state. From
Paul Blakey.
39) Netfilter hw offload support, from Pablo Neira Ayuso"
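Item 24 (bounded loops in the BPF verifier) is the easiest of these to illustrate with code. The sketch below is not taken from this merge: the program name, the constant bound of 64, and the include paths are assumptions for the example, and header locations vary between libbpf versions. The point is only the shape of the loop: previously such a loop had to be fully unrolled (e.g. with `#pragma unroll`), whereas a verifier that understands bounded loops can accept it as written because the trip count is provably finite.

```c
/* Hypothetical BPF-C program; the bounded loop is the point of the example.
 * Built with clang -target bpf; exact include paths depend on the
 * libbpf/kernel tree in use.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("socket")
int sum_header_bytes(struct __sk_buff *skb)
{
	__u32 sum = 0;
	int i;

	/* Trip count is bounded by the constant 64, so every path
	 * provably terminates without the loop being unrolled.
	 */
	for (i = 0; i < 64; i++) {
		if (i >= skb->len)
			break;
		sum += i;
	}

	return sum & 1;
}

char _license[] SEC("license") = "GPL";
```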
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2080 commits)
net/mlx5e: Return in default case statement in tx_post_resync_params
mlx5: Return -EINVAL when WARN_ON_ONCE triggers in mlx5e_tls_resync().
net: dsa: add support for BRIDGE_MROUTER attribute
pkt_sched: Include const.h
net: netsec: remove static declaration for netsec_set_tx_de()
net: netsec: remove superfluous if statement
netfilter: nf_tables: add hardware offload support
net: flow_offload: rename tc_cls_flower_offload to flow_cls_offload
net: flow_offload: add flow_block_cb_is_busy() and use it
net: sched: remove tcf block API
drivers: net: use flow block API
net: sched: use flow block API
net: flow_offload: add flow_block_cb_{priv, incref, decref}()
net: flow_offload: add list handling functions
net: flow_offload: add flow_block_cb_alloc() and flow_block_cb_free()
net: flow_offload: rename TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*
net: flow_offload: rename TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND
net: flow_offload: add flow_block_cb_setup_simple()
net: hisilicon: Add an tx_desc to adapt HI13X1_GMAC
net: hisilicon: Add an rx_desc to adapt HI13X1_GMAC
...
Diffstat (limited to 'drivers/net/ethernet/qlogic')
32 files changed, 1825 insertions, 651 deletions
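A large part of the qed diff below replaces direct NIG/LLH register bookkeeping with a software "shadow" of the per-ppfid filter banks: every entry carries a reference count, the hardware is programmed only when an entry goes from unused to used, and it is cleared only when the count drops back to zero. The following is a deliberately simplified, self-contained sketch of that pattern, not the driver code itself: program_hw()/clear_hw() stand in for the real qed_llh_add_filter()/qed_llh_remove_filter() register and DMAE writes, and the bank size of 16 follows the "16 rows ... exposed to a PF" comment in qed_llh_access_filter() below.

```c
/* Simplified sketch of the ref-counted filter-shadow pattern used in the
 * qed LLH rework. program_hw()/clear_hw() are stand-ins for the real
 * hardware accessors; everything else is illustrative.
 */
#include <stdbool.h>
#include <string.h>

#define BANK_SIZE 16	/* per the "16 rows" comment in the diff */

struct shadow_filter {
	bool enabled;
	unsigned int ref_cnt;
	unsigned char mac[6];
};

static struct shadow_filter bank[BANK_SIZE];

static void program_hw(int idx, const unsigned char *mac) { /* write NIG filter registers */ }
static void clear_hw(int idx) { /* zero NIG filter registers */ }

/* Returns the slot index, or -1 if the bank is full. */
int shadow_add_mac(const unsigned char mac[6])
{
	int i, free_idx = -1;

	for (i = 0; i < BANK_SIZE; i++) {
		if (bank[i].enabled && !memcmp(bank[i].mac, mac, 6)) {
			bank[i].ref_cnt++;	/* duplicate add: bump the count only */
			return i;
		}
		if (!bank[i].enabled && free_idx < 0)
			free_idx = i;
	}
	if (free_idx < 0)
		return -1;

	bank[free_idx].enabled = true;
	bank[free_idx].ref_cnt = 1;
	memcpy(bank[free_idx].mac, mac, 6);
	program_hw(free_idx, mac);	/* touch hardware only on first use */
	return free_idx;
}

void shadow_remove_mac(const unsigned char mac[6])
{
	int i;

	for (i = 0; i < BANK_SIZE; i++) {
		if (!bank[i].enabled || memcmp(bank[i].mac, mac, 6))
			continue;
		if (--bank[i].ref_cnt == 0) {
			clear_hw(i);	/* last user gone: clear hardware too */
			memset(&bank[i], 0, sizeof(bank[i]));
		}
		return;
	}
}
```

This mirrors why qed_llh_add_mac_filter() in the diff programs the hardware only when the returned ref_cnt is 1, and why qed_llh_remove_mac_filter() clears the entry only once the count reaches zero.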
diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig index fdbb3ce00e20..a391cf6ee4b2 100644 --- a/drivers/net/ethernet/qlogic/Kconfig +++ b/drivers/net/ethernet/qlogic/Kconfig @@ -87,6 +87,7 @@ config QED depends on PCI select ZLIB_INFLATE select CRC8 + select NET_DEVLINK ---help--- This enables the support for ... diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c index 84cb62434556..58e2eaf77014 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c @@ -3248,6 +3248,7 @@ netxen_config_indev_addr(struct netxen_adapter *adapter, struct net_device *dev, unsigned long event) { struct in_device *indev; + struct in_ifaddr *ifa; if (!netxen_destip_supported(adapter)) return; @@ -3256,7 +3257,8 @@ netxen_config_indev_addr(struct netxen_adapter *adapter, if (!indev) return; - for_ifa(indev) { + rcu_read_lock(); + in_dev_for_each_ifa_rcu(ifa, indev) { switch (event) { case NETDEV_UP: netxen_list_config_ip(adapter, ifa, NX_IP_UP); @@ -3267,8 +3269,8 @@ netxen_config_indev_addr(struct netxen_adapter *adapter, default: break; } - } endfor_ifa(indev); - + } + rcu_read_unlock(); in_dev_put(indev); } diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index c5e96ce20f59..89fe091c958d 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -140,6 +140,7 @@ struct qed_cxt_mngr; struct qed_sb_sp_info; struct qed_ll2_info; struct qed_mcp_info; +struct qed_llh_info; struct qed_rt_data { u32 *init_val; @@ -741,6 +742,7 @@ struct qed_dev { #define QED_DEV_ID_MASK 0xff00 #define QED_DEV_ID_MASK_BB 0x1600 #define QED_DEV_ID_MASK_AH 0x8000 +#define QED_IS_E4(dev) (QED_IS_BB(dev) || QED_IS_AH(dev)) u16 chip_num; #define CHIP_NUM_MASK 0xffff @@ -801,6 +803,11 @@ struct qed_dev { u8 num_hwfns; struct qed_hwfn hwfns[MAX_HWFNS_PER_DEVICE]; + /* Engine affinity */ + u8 l2_affin_hint; + u8 fir_affin; + u8 iwarp_affin; + /* SRIOV */ struct qed_hw_sriov_info *p_iov_info; #define IS_QED_SRIOV(cdev) (!!(cdev)->p_iov_info) @@ -815,6 +822,10 @@ struct qed_dev { /* Recovery */ bool recov_in_prog; + /* LLH info */ + u8 ppfid_bitmap; + struct qed_llh_info *p_llh_info; + /* Linux specific here */ struct qede_dev *edev; struct pci_dev *pdev; @@ -852,6 +863,9 @@ struct qed_dev { u32 rdma_max_inline; u32 rdma_max_srq_sge; u16 tunn_feature_mask; + + struct devlink *dl; + bool iwarp_cmt; }; #define NUM_OF_VFS(dev) (QED_IS_BB(dev) ? MAX_NUM_VFS_BB \ @@ -904,6 +918,14 @@ void qed_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb, u8 *mac); #define QED_LEADING_HWFN(dev) (&dev->hwfns[0]) +#define QED_IS_CMT(dev) ((dev)->num_hwfns > 1) +/* Macros for getting the engine-affinitized hwfn (FIR: fcoe,iscsi,roce) */ +#define QED_FIR_AFFIN_HWFN(dev) (&(dev)->hwfns[dev->fir_affin]) +#define QED_IWARP_AFFIN_HWFN(dev) (&(dev)->hwfns[dev->iwarp_affin]) +#define QED_AFFIN_HWFN(dev) \ + (QED_IS_IWARP_PERSONALITY(QED_LEADING_HWFN(dev)) ? \ + QED_IWARP_AFFIN_HWFN(dev) : QED_FIR_AFFIN_HWFN(dev)) +#define QED_AFFIN_HWFN_IDX(dev) (IS_LEAD_HWFN(QED_AFFIN_HWFN(dev)) ? 
0 : 1) /* Flags for indication of required queues */ #define PQ_FLAGS_RLS (BIT(0)) @@ -923,8 +945,6 @@ u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf); u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc); u16 qed_get_cm_pq_idx_llt_mtc(struct qed_hwfn *p_hwfn, u8 tc); -#define QED_LEADING_HWFN(dev) (&dev->hwfns[0]) - /* doorbell recovery mechanism */ void qed_db_recovery_dp(struct qed_hwfn *p_hwfn); void qed_db_recovery_execute(struct qed_hwfn *p_hwfn); diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c index e61d1d905415..8e1bdf58b9e7 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c +++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c @@ -2351,7 +2351,8 @@ qed_cxt_dynamic_ilt_alloc(struct qed_hwfn *p_hwfn, /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */ qed_dmae_host2grc(p_hwfn, p_ptt, (u64) (uintptr_t)&ilt_hw_entry, - reg_offset, sizeof(ilt_hw_entry) / sizeof(u32), 0); + reg_offset, sizeof(ilt_hw_entry) / sizeof(u32), + NULL); if (elem_type == QED_ELEM_CXT) { u32 last_cid_allocated = (1 + (iid / elems_per_p)) * @@ -2457,7 +2458,7 @@ qed_cxt_free_ilt_range(struct qed_hwfn *p_hwfn, (u64) (uintptr_t) &ilt_hw_entry, reg_offset, sizeof(ilt_hw_entry) / sizeof(u32), - 0); + NULL); } qed_ptt_release(p_hwfn, p_ptt); diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c index ab8cacbdee3e..5ea6c4fc6050 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_debug.c +++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c @@ -2534,7 +2534,7 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn, (len >= s_platform_defs[dev_data->platform_id].dmae_thresh || wide_bus)) { if (!qed_dmae_grc2host(p_hwfn, p_ptt, DWORDS_TO_BYTES(addr), - (u64)(uintptr_t)(dump_buf), len, 0)) + (u64)(uintptr_t)(dump_buf), len, NULL)) return len; dev_data->use_dmae = 0; DP_VERBOSE(p_hwfn, diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index fccdb06fc5c5..a1ebc2b1ca0b 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -361,6 +361,927 @@ void qed_db_recovery_execute(struct qed_hwfn *p_hwfn) /******************** Doorbell Recovery end ****************/ +/********************************** NIG LLH ***********************************/ + +enum qed_llh_filter_type { + QED_LLH_FILTER_TYPE_MAC, + QED_LLH_FILTER_TYPE_PROTOCOL, +}; + +struct qed_llh_mac_filter { + u8 addr[ETH_ALEN]; +}; + +struct qed_llh_protocol_filter { + enum qed_llh_prot_filter_type_t type; + u16 source_port_or_eth_type; + u16 dest_port; +}; + +union qed_llh_filter { + struct qed_llh_mac_filter mac; + struct qed_llh_protocol_filter protocol; +}; + +struct qed_llh_filter_info { + bool b_enabled; + u32 ref_cnt; + enum qed_llh_filter_type type; + union qed_llh_filter filter; +}; + +struct qed_llh_info { + /* Number of LLH filters banks */ + u8 num_ppfid; + +#define MAX_NUM_PPFID 8 + u8 ppfid_array[MAX_NUM_PPFID]; + + /* Array of filters arrays: + * "num_ppfid" elements of filters banks, where each is an array of + * "NIG_REG_LLH_FUNC_FILTER_EN_SIZE" filters. 
+ */ + struct qed_llh_filter_info **pp_filters; +}; + +static void qed_llh_free(struct qed_dev *cdev) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + u32 i; + + if (p_llh_info) { + if (p_llh_info->pp_filters) + for (i = 0; i < p_llh_info->num_ppfid; i++) + kfree(p_llh_info->pp_filters[i]); + + kfree(p_llh_info->pp_filters); + } + + kfree(p_llh_info); + cdev->p_llh_info = NULL; +} + +static int qed_llh_alloc(struct qed_dev *cdev) +{ + struct qed_llh_info *p_llh_info; + u32 size, i; + + p_llh_info = kzalloc(sizeof(*p_llh_info), GFP_KERNEL); + if (!p_llh_info) + return -ENOMEM; + cdev->p_llh_info = p_llh_info; + + for (i = 0; i < MAX_NUM_PPFID; i++) { + if (!(cdev->ppfid_bitmap & (0x1 << i))) + continue; + + p_llh_info->ppfid_array[p_llh_info->num_ppfid] = i; + DP_VERBOSE(cdev, QED_MSG_SP, "ppfid_array[%d] = %hhd\n", + p_llh_info->num_ppfid, i); + p_llh_info->num_ppfid++; + } + + size = p_llh_info->num_ppfid * sizeof(*p_llh_info->pp_filters); + p_llh_info->pp_filters = kzalloc(size, GFP_KERNEL); + if (!p_llh_info->pp_filters) + return -ENOMEM; + + size = NIG_REG_LLH_FUNC_FILTER_EN_SIZE * + sizeof(**p_llh_info->pp_filters); + for (i = 0; i < p_llh_info->num_ppfid; i++) { + p_llh_info->pp_filters[i] = kzalloc(size, GFP_KERNEL); + if (!p_llh_info->pp_filters[i]) + return -ENOMEM; + } + + return 0; +} + +static int qed_llh_shadow_sanity(struct qed_dev *cdev, + u8 ppfid, u8 filter_idx, const char *action) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + + if (ppfid >= p_llh_info->num_ppfid) { + DP_NOTICE(cdev, + "LLH shadow [%s]: using ppfid %d while only %d ppfids are available\n", + action, ppfid, p_llh_info->num_ppfid); + return -EINVAL; + } + + if (filter_idx >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) { + DP_NOTICE(cdev, + "LLH shadow [%s]: using filter_idx %d while only %d filters are available\n", + action, filter_idx, NIG_REG_LLH_FUNC_FILTER_EN_SIZE); + return -EINVAL; + } + + return 0; +} + +#define QED_LLH_INVALID_FILTER_IDX 0xff + +static int +qed_llh_shadow_search_filter(struct qed_dev *cdev, + u8 ppfid, + union qed_llh_filter *p_filter, u8 *p_filter_idx) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + struct qed_llh_filter_info *p_filters; + int rc; + u8 i; + + rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "search"); + if (rc) + return rc; + + *p_filter_idx = QED_LLH_INVALID_FILTER_IDX; + + p_filters = p_llh_info->pp_filters[ppfid]; + for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { + if (!memcmp(p_filter, &p_filters[i].filter, + sizeof(*p_filter))) { + *p_filter_idx = i; + break; + } + } + + return 0; +} + +static int +qed_llh_shadow_get_free_idx(struct qed_dev *cdev, u8 ppfid, u8 *p_filter_idx) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + struct qed_llh_filter_info *p_filters; + int rc; + u8 i; + + rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "get_free_idx"); + if (rc) + return rc; + + *p_filter_idx = QED_LLH_INVALID_FILTER_IDX; + + p_filters = p_llh_info->pp_filters[ppfid]; + for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { + if (!p_filters[i].b_enabled) { + *p_filter_idx = i; + break; + } + } + + return 0; +} + +static int +__qed_llh_shadow_add_filter(struct qed_dev *cdev, + u8 ppfid, + u8 filter_idx, + enum qed_llh_filter_type type, + union qed_llh_filter *p_filter, u32 *p_ref_cnt) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + struct qed_llh_filter_info *p_filters; + int rc; + + rc = qed_llh_shadow_sanity(cdev, ppfid, filter_idx, "add"); + if (rc) + return rc; + + p_filters = p_llh_info->pp_filters[ppfid]; + if 
(!p_filters[filter_idx].ref_cnt) { + p_filters[filter_idx].b_enabled = true; + p_filters[filter_idx].type = type; + memcpy(&p_filters[filter_idx].filter, p_filter, + sizeof(p_filters[filter_idx].filter)); + } + + *p_ref_cnt = ++p_filters[filter_idx].ref_cnt; + + return 0; +} + +static int +qed_llh_shadow_add_filter(struct qed_dev *cdev, + u8 ppfid, + enum qed_llh_filter_type type, + union qed_llh_filter *p_filter, + u8 *p_filter_idx, u32 *p_ref_cnt) +{ + int rc; + + /* Check if the same filter already exist */ + rc = qed_llh_shadow_search_filter(cdev, ppfid, p_filter, p_filter_idx); + if (rc) + return rc; + + /* Find a new entry in case of a new filter */ + if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) { + rc = qed_llh_shadow_get_free_idx(cdev, ppfid, p_filter_idx); + if (rc) + return rc; + } + + /* No free entry was found */ + if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) { + DP_NOTICE(cdev, + "Failed to find an empty LLH filter to utilize [ppfid %d]\n", + ppfid); + return -EINVAL; + } + + return __qed_llh_shadow_add_filter(cdev, ppfid, *p_filter_idx, type, + p_filter, p_ref_cnt); +} + +static int +__qed_llh_shadow_remove_filter(struct qed_dev *cdev, + u8 ppfid, u8 filter_idx, u32 *p_ref_cnt) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + struct qed_llh_filter_info *p_filters; + int rc; + + rc = qed_llh_shadow_sanity(cdev, ppfid, filter_idx, "remove"); + if (rc) + return rc; + + p_filters = p_llh_info->pp_filters[ppfid]; + if (!p_filters[filter_idx].ref_cnt) { + DP_NOTICE(cdev, + "LLH shadow: trying to remove a filter with ref_cnt=0\n"); + return -EINVAL; + } + + *p_ref_cnt = --p_filters[filter_idx].ref_cnt; + if (!p_filters[filter_idx].ref_cnt) + memset(&p_filters[filter_idx], + 0, sizeof(p_filters[filter_idx])); + + return 0; +} + +static int +qed_llh_shadow_remove_filter(struct qed_dev *cdev, + u8 ppfid, + union qed_llh_filter *p_filter, + u8 *p_filter_idx, u32 *p_ref_cnt) +{ + int rc; + + rc = qed_llh_shadow_search_filter(cdev, ppfid, p_filter, p_filter_idx); + if (rc) + return rc; + + /* No matching filter was found */ + if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) { + DP_NOTICE(cdev, "Failed to find a filter in the LLH shadow\n"); + return -EINVAL; + } + + return __qed_llh_shadow_remove_filter(cdev, ppfid, *p_filter_idx, + p_ref_cnt); +} + +static int qed_llh_abs_ppfid(struct qed_dev *cdev, u8 ppfid, u8 *p_abs_ppfid) +{ + struct qed_llh_info *p_llh_info = cdev->p_llh_info; + + if (ppfid >= p_llh_info->num_ppfid) { + DP_NOTICE(cdev, + "ppfid %d is not valid, available indices are 0..%hhd\n", + ppfid, p_llh_info->num_ppfid - 1); + *p_abs_ppfid = 0; + return -EINVAL; + } + + *p_abs_ppfid = p_llh_info->ppfid_array[ppfid]; + + return 0; +} + +static int +qed_llh_set_engine_affin(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +{ + struct qed_dev *cdev = p_hwfn->cdev; + enum qed_eng eng; + u8 ppfid; + int rc; + + rc = qed_mcp_get_engine_config(p_hwfn, p_ptt); + if (rc != 0 && rc != -EOPNOTSUPP) { + DP_NOTICE(p_hwfn, + "Failed to get the engine affinity configuration\n"); + return rc; + } + + /* RoCE PF is bound to a single engine */ + if (QED_IS_ROCE_PERSONALITY(p_hwfn)) { + eng = cdev->fir_affin ? 
QED_ENG1 : QED_ENG0; + rc = qed_llh_set_roce_affinity(cdev, eng); + if (rc) { + DP_NOTICE(cdev, + "Failed to set the RoCE engine affinity\n"); + return rc; + } + + DP_VERBOSE(cdev, + QED_MSG_SP, + "LLH: Set the engine affinity of RoCE packets as %d\n", + eng); + } + + /* Storage PF is bound to a single engine while L2 PF uses both */ + if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn)) + eng = cdev->fir_affin ? QED_ENG1 : QED_ENG0; + else /* L2_PERSONALITY */ + eng = QED_BOTH_ENG; + + for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) { + rc = qed_llh_set_ppfid_affinity(cdev, ppfid, eng); + if (rc) { + DP_NOTICE(cdev, + "Failed to set the engine affinity of ppfid %d\n", + ppfid); + return rc; + } + } + + DP_VERBOSE(cdev, QED_MSG_SP, + "LLH: Set the engine affinity of non-RoCE packets as %d\n", + eng); + + return 0; +} + +static int qed_llh_hw_init_pf(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt) +{ + struct qed_dev *cdev = p_hwfn->cdev; + u8 ppfid, abs_ppfid; + int rc; + + for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) { + u32 addr; + + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + return rc; + + addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_hwfn->rel_pf_id); + } + + if (test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits) && + !QED_IS_FCOE_PERSONALITY(p_hwfn)) { + rc = qed_llh_add_mac_filter(cdev, 0, + p_hwfn->hw_info.hw_mac_addr); + if (rc) + DP_NOTICE(cdev, + "Failed to add an LLH filter with the primary MAC\n"); + } + + if (QED_IS_CMT(cdev)) { + rc = qed_llh_set_engine_affin(p_hwfn, p_ptt); + if (rc) + return rc; + } + + return 0; +} + +u8 qed_llh_get_num_ppfid(struct qed_dev *cdev) +{ + return cdev->p_llh_info->num_ppfid; +} + +#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK 0x3 +#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT 0 +#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK 0x3 +#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_SHIFT 2 + +int qed_llh_set_ppfid_affinity(struct qed_dev *cdev, u8 ppfid, enum qed_eng eng) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + u32 addr, val, eng_sel; + u8 abs_ppfid; + int rc = 0; + + if (!p_ptt) + return -EAGAIN; + + if (!QED_IS_CMT(cdev)) + goto out; + + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto out; + + switch (eng) { + case QED_ENG0: + eng_sel = 0; + break; + case QED_ENG1: + eng_sel = 1; + break; + case QED_BOTH_ENG: + eng_sel = 2; + break; + default: + DP_NOTICE(cdev, "Invalid affinity value for ppfid [%d]\n", eng); + rc = -EINVAL; + goto out; + } + + addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4; + val = qed_rd(p_hwfn, p_ptt, addr); + SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE, eng_sel); + qed_wr(p_hwfn, p_ptt, addr, val); + + /* The iWARP affinity is set as the affinity of ppfid 0 */ + if (!ppfid && QED_IS_IWARP_PERSONALITY(p_hwfn)) + cdev->iwarp_affin = (eng == QED_ENG1) ? 
1 : 0; +out: + qed_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +int qed_llh_set_roce_affinity(struct qed_dev *cdev, enum qed_eng eng) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + u32 addr, val, eng_sel; + u8 ppfid, abs_ppfid; + int rc = 0; + + if (!p_ptt) + return -EAGAIN; + + if (!QED_IS_CMT(cdev)) + goto out; + + switch (eng) { + case QED_ENG0: + eng_sel = 0; + break; + case QED_ENG1: + eng_sel = 1; + break; + case QED_BOTH_ENG: + eng_sel = 2; + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL, + 0xf); /* QP bit 15 */ + break; + default: + DP_NOTICE(cdev, "Invalid affinity value for RoCE [%d]\n", eng); + rc = -EINVAL; + goto out; + } + + for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) { + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto out; + + addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4; + val = qed_rd(p_hwfn, p_ptt, addr); + SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_ROCE, eng_sel); + qed_wr(p_hwfn, p_ptt, addr, val); + } +out: + qed_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +struct qed_llh_filter_details { + u64 value; + u32 mode; + u32 protocol_type; + u32 hdr_sel; + u32 enable; +}; + +static int +qed_llh_access_filter(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u8 abs_ppfid, + u8 filter_idx, + struct qed_llh_filter_details *p_details) +{ + struct qed_dmae_params params = {0}; + u32 addr; + u8 pfid; + int rc; + + /* The NIG/LLH registers that are accessed in this function have only 16 + * rows which are exposed to a PF. I.e. only the 16 filters of its + * default ppfid. Accessing filters of other ppfids requires pretending + * to another PFs. + * The calculation of PPFID->PFID in AH is based on the relative index + * of a PF on its port. + * For BB the pfid is actually the abs_ppfid. 
+ */ + if (QED_IS_BB(p_hwfn->cdev)) + pfid = abs_ppfid; + else + pfid = abs_ppfid * p_hwfn->cdev->num_ports_in_engine + + MFW_PORT(p_hwfn); + + /* Filter enable - should be done first when removing a filter */ + if (!p_details->enable) { + qed_fid_pretend(p_hwfn, p_ptt, + pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + + addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_details->enable); + + qed_fid_pretend(p_hwfn, p_ptt, + p_hwfn->rel_pf_id << + PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + } + + /* Filter value */ + addr = NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * filter_idx * 0x4; + + params.flags = QED_DMAE_FLAG_PF_DST; + params.dst_pfid = pfid; + rc = qed_dmae_host2grc(p_hwfn, + p_ptt, + (u64)(uintptr_t)&p_details->value, + addr, 2 /* size_in_dwords */, + ¶ms); + if (rc) + return rc; + + qed_fid_pretend(p_hwfn, p_ptt, + pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + + /* Filter mode */ + addr = NIG_REG_LLH_FUNC_FILTER_MODE + filter_idx * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_details->mode); + + /* Filter protocol type */ + addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + filter_idx * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_details->protocol_type); + + /* Filter header select */ + addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL + filter_idx * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_details->hdr_sel); + + /* Filter enable - should be done last when adding a filter */ + if (p_details->enable) { + addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4; + qed_wr(p_hwfn, p_ptt, addr, p_details->enable); + } + + qed_fid_pretend(p_hwfn, p_ptt, + p_hwfn->rel_pf_id << + PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + + return 0; +} + +static int +qed_llh_add_filter(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u8 abs_ppfid, + u8 filter_idx, u8 filter_prot_type, u32 high, u32 low) +{ + struct qed_llh_filter_details filter_details; + + filter_details.enable = 1; + filter_details.value = ((u64)high << 32) | low; + filter_details.hdr_sel = 0; + filter_details.protocol_type = filter_prot_type; + /* Mode: 0: MAC-address classification 1: protocol classification */ + filter_details.mode = filter_prot_type ? 
1 : 0; + + return qed_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx, + &filter_details); +} + +static int +qed_llh_remove_filter(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx) +{ + struct qed_llh_filter_details filter_details = {0}; + + return qed_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx, + &filter_details); +} + +int qed_llh_add_mac_filter(struct qed_dev *cdev, + u8 ppfid, u8 mac_addr[ETH_ALEN]) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + union qed_llh_filter filter = {}; + u8 filter_idx, abs_ppfid; + u32 high, low, ref_cnt; + int rc = 0; + + if (!p_ptt) + return -EAGAIN; + + if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits)) + goto out; + + memcpy(filter.mac.addr, mac_addr, ETH_ALEN); + rc = qed_llh_shadow_add_filter(cdev, ppfid, + QED_LLH_FILTER_TYPE_MAC, + &filter, &filter_idx, &ref_cnt); + if (rc) + goto err; + + /* Configure the LLH only in case of a new the filter */ + if (ref_cnt == 1) { + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto err; + + high = mac_addr[1] | (mac_addr[0] << 8); + low = mac_addr[5] | (mac_addr[4] << 8) | (mac_addr[3] << 16) | + (mac_addr[2] << 24); + rc = qed_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx, + 0, high, low); + if (rc) + goto err; + } + + DP_VERBOSE(cdev, + QED_MSG_SP, + "LLH: Added MAC filter [%pM] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n", + mac_addr, ppfid, abs_ppfid, filter_idx, ref_cnt); + + goto out; + +err: DP_NOTICE(cdev, + "LLH: Failed to add MAC filter [%pM] to ppfid %hhd\n", + mac_addr, ppfid); +out: + qed_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +static int +qed_llh_protocol_filter_stringify(struct qed_dev *cdev, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, + u16 dest_port, u8 *str, size_t str_len) +{ + switch (type) { + case QED_LLH_FILTER_ETHERTYPE: + snprintf(str, str_len, "Ethertype 0x%04x", + source_port_or_eth_type); + break; + case QED_LLH_FILTER_TCP_SRC_PORT: + snprintf(str, str_len, "TCP src port 0x%04x", + source_port_or_eth_type); + break; + case QED_LLH_FILTER_UDP_SRC_PORT: + snprintf(str, str_len, "UDP src port 0x%04x", + source_port_or_eth_type); + break; + case QED_LLH_FILTER_TCP_DEST_PORT: + snprintf(str, str_len, "TCP dst port 0x%04x", dest_port); + break; + case QED_LLH_FILTER_UDP_DEST_PORT: + snprintf(str, str_len, "UDP dst port 0x%04x", dest_port); + break; + case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT: + snprintf(str, str_len, "TCP src/dst ports 0x%04x/0x%04x", + source_port_or_eth_type, dest_port); + break; + case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT: + snprintf(str, str_len, "UDP src/dst ports 0x%04x/0x%04x", + source_port_or_eth_type, dest_port); + break; + default: + DP_NOTICE(cdev, + "Non valid LLH protocol filter type %d\n", type); + return -EINVAL; + } + + return 0; +} + +static int +qed_llh_protocol_filter_to_hilo(struct qed_dev *cdev, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, + u16 dest_port, u32 *p_high, u32 *p_low) +{ + *p_high = 0; + *p_low = 0; + + switch (type) { + case QED_LLH_FILTER_ETHERTYPE: + *p_high = source_port_or_eth_type; + break; + case QED_LLH_FILTER_TCP_SRC_PORT: + case QED_LLH_FILTER_UDP_SRC_PORT: + *p_low = source_port_or_eth_type << 16; + break; + case QED_LLH_FILTER_TCP_DEST_PORT: + case QED_LLH_FILTER_UDP_DEST_PORT: + *p_low = dest_port; + break; + case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT: + case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT: + *p_low = (source_port_or_eth_type 
<< 16) | dest_port; + break; + default: + DP_NOTICE(cdev, + "Non valid LLH protocol filter type %d\n", type); + return -EINVAL; + } + + return 0; +} + +int +qed_llh_add_protocol_filter(struct qed_dev *cdev, + u8 ppfid, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, u16 dest_port) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + u8 filter_idx, abs_ppfid, str[32], type_bitmap; + union qed_llh_filter filter = {}; + u32 high, low, ref_cnt; + int rc = 0; + + if (!p_ptt) + return -EAGAIN; + + if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits)) + goto out; + + rc = qed_llh_protocol_filter_stringify(cdev, type, + source_port_or_eth_type, + dest_port, str, sizeof(str)); + if (rc) + goto err; + + filter.protocol.type = type; + filter.protocol.source_port_or_eth_type = source_port_or_eth_type; + filter.protocol.dest_port = dest_port; + rc = qed_llh_shadow_add_filter(cdev, + ppfid, + QED_LLH_FILTER_TYPE_PROTOCOL, + &filter, &filter_idx, &ref_cnt); + if (rc) + goto err; + + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto err; + + /* Configure the LLH only in case of a new the filter */ + if (ref_cnt == 1) { + rc = qed_llh_protocol_filter_to_hilo(cdev, type, + source_port_or_eth_type, + dest_port, &high, &low); + if (rc) + goto err; + + type_bitmap = 0x1 << type; + rc = qed_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, + filter_idx, type_bitmap, high, low); + if (rc) + goto err; + } + + DP_VERBOSE(cdev, + QED_MSG_SP, + "LLH: Added protocol filter [%s] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n", + str, ppfid, abs_ppfid, filter_idx, ref_cnt); + + goto out; + +err: DP_NOTICE(p_hwfn, + "LLH: Failed to add protocol filter [%s] to ppfid %hhd\n", + str, ppfid); +out: + qed_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +void qed_llh_remove_mac_filter(struct qed_dev *cdev, + u8 ppfid, u8 mac_addr[ETH_ALEN]) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + union qed_llh_filter filter = {}; + u8 filter_idx, abs_ppfid; + int rc = 0; + u32 ref_cnt; + + if (!p_ptt) + return; + + if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits)) + goto out; + + ether_addr_copy(filter.mac.addr, mac_addr); + rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx, + &ref_cnt); + if (rc) + goto err; + + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto err; + + /* Remove from the LLH in case the filter is not in use */ + if (!ref_cnt) { + rc = qed_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid, + filter_idx); + if (rc) + goto err; + } + + DP_VERBOSE(cdev, + QED_MSG_SP, + "LLH: Removed MAC filter [%pM] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n", + mac_addr, ppfid, abs_ppfid, filter_idx, ref_cnt); + + goto out; + +err: DP_NOTICE(cdev, + "LLH: Failed to remove MAC filter [%pM] from ppfid %hhd\n", + mac_addr, ppfid); +out: + qed_ptt_release(p_hwfn, p_ptt); +} + +void qed_llh_remove_protocol_filter(struct qed_dev *cdev, + u8 ppfid, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, u16 dest_port) +{ + struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); + u8 filter_idx, abs_ppfid, str[32]; + union qed_llh_filter filter = {}; + int rc = 0; + u32 ref_cnt; + + if (!p_ptt) + return; + + if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits)) + goto out; + + rc = qed_llh_protocol_filter_stringify(cdev, type, + source_port_or_eth_type, + dest_port, str, sizeof(str)); + if (rc) + 
goto err; + + filter.protocol.type = type; + filter.protocol.source_port_or_eth_type = source_port_or_eth_type; + filter.protocol.dest_port = dest_port; + rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx, + &ref_cnt); + if (rc) + goto err; + + rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid); + if (rc) + goto err; + + /* Remove from the LLH in case the filter is not in use */ + if (!ref_cnt) { + rc = qed_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid, + filter_idx); + if (rc) + goto err; + } + + DP_VERBOSE(cdev, + QED_MSG_SP, + "LLH: Removed protocol filter [%s] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n", + str, ppfid, abs_ppfid, filter_idx, ref_cnt); + + goto out; + +err: DP_NOTICE(cdev, + "LLH: Failed to remove protocol filter [%s] from ppfid %hhd\n", + str, ppfid); +out: + qed_ptt_release(p_hwfn, p_ptt); +} + +/******************************* NIG LLH - End ********************************/ + #define QED_MIN_DPIS (4) #define QED_MIN_PWM_REGION (QED_WID_SIZE * QED_MIN_DPIS) @@ -461,6 +1382,8 @@ void qed_resc_free(struct qed_dev *cdev) kfree(cdev->reset_stats); cdev->reset_stats = NULL; + qed_llh_free(cdev); + for_each_hwfn(cdev, i) { struct qed_hwfn *p_hwfn = &cdev->hwfns[i]; @@ -1428,6 +2351,13 @@ int qed_resc_alloc(struct qed_dev *cdev) goto alloc_err; } + rc = qed_llh_alloc(cdev); + if (rc) { + DP_NOTICE(cdev, + "Failed to allocate memory for the llh_info structure\n"); + goto alloc_err; + } + cdev->reset_stats = kzalloc(sizeof(*cdev->reset_stats), GFP_KERNEL); if (!cdev->reset_stats) goto alloc_no_mem; @@ -1879,6 +2809,10 @@ static int qed_hw_init_port(struct qed_hwfn *p_hwfn, { int rc = 0; + /* In CMT the gate should be cleared by the 2nd hwfn */ + if (!QED_IS_CMT(p_hwfn->cdev) || !IS_LEAD_HWFN(p_hwfn)) + STORE_RT_REG(p_hwfn, NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET, 0); + rc = qed_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id, hw_mode); if (rc) return rc; @@ -1964,6 +2898,13 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn, if (rc) return rc; + /* Use the leading hwfn since in CMT only NIG #0 is operational */ + if (IS_LEAD_HWFN(p_hwfn)) { + rc = qed_llh_hw_init_pf(p_hwfn, p_ptt); + if (rc) + return rc; + } + if (b_hw_start) { /* enable interrupts */ qed_int_igu_enable(p_hwfn, p_ptt, int_mode); @@ -2393,6 +3334,12 @@ int qed_hw_stop(struct qed_dev *cdev) qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0); qed_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0); + if (IS_LEAD_HWFN(p_hwfn) && + test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits) && + !QED_IS_FCOE_PERSONALITY(p_hwfn)) + qed_llh_remove_mac_filter(cdev, 0, + p_hwfn->hw_info.hw_mac_addr); + if (!cdev->recov_in_prog) { rc = qed_mcp_unload_done(p_hwfn, p_ptt); if (rc) { @@ -2868,6 +3815,36 @@ static int qed_hw_set_resc_info(struct qed_hwfn *p_hwfn) return 0; } +static int qed_hw_get_ppfid_bitmap(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt) +{ + struct qed_dev *cdev = p_hwfn->cdev; + u8 native_ppfid_idx; + int rc; + + /* Calculation of BB/AH is different for native_ppfid_idx */ + if (QED_IS_BB(cdev)) + native_ppfid_idx = p_hwfn->rel_pf_id; + else + native_ppfid_idx = p_hwfn->rel_pf_id / + cdev->num_ports_in_engine; + + rc = qed_mcp_get_ppfid_bitmap(p_hwfn, p_ptt); + if (rc != 0 && rc != -EOPNOTSUPP) + return rc; + else if (rc == -EOPNOTSUPP) + cdev->ppfid_bitmap = 0x1 << native_ppfid_idx; + + if (!(cdev->ppfid_bitmap & (0x1 << native_ppfid_idx))) { + DP_INFO(p_hwfn, + "Fix the PPFID bitmap to include the native PPFID [native_ppfid_idx %hhd, orig_bitmap 0x%hhx]\n", + native_ppfid_idx, cdev->ppfid_bitmap); + 
cdev->ppfid_bitmap = 0x1 << native_ppfid_idx; + } + + return 0; +} + static int qed_hw_get_resc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) { struct qed_resc_unlock_params resc_unlock_params; @@ -2925,6 +3902,13 @@ static int qed_hw_get_resc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) "Failed to release the resource lock for the resource allocation commands\n"); } + /* PPFID bitmap */ + if (IS_LEAD_HWFN(p_hwfn)) { + rc = qed_hw_get_ppfid_bitmap(p_hwfn, p_ptt); + if (rc) + return rc; + } + /* Sanity for ILT */ if ((b_ah && (RESC_END(p_hwfn, QED_ILT) > PXP_NUM_ILT_RECORDS_K2)) || (!b_ah && (RESC_END(p_hwfn, QED_ILT) > PXP_NUM_ILT_RECORDS_BB))) { @@ -3443,6 +4427,7 @@ static void qed_nvm_info_free(struct qed_hwfn *p_hwfn) static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn, void __iomem *p_regview, void __iomem *p_doorbells, + u64 db_phys_addr, enum qed_pci_personality personality) { struct qed_dev *cdev = p_hwfn->cdev; @@ -3451,6 +4436,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn, /* Split PCI bars evenly between hwfns */ p_hwfn->regview = p_regview; p_hwfn->doorbells = p_doorbells; + p_hwfn->db_phys_addr = db_phys_addr; if (IS_VF(p_hwfn->cdev)) return qed_vf_hw_prepare(p_hwfn); @@ -3546,7 +4532,9 @@ int qed_hw_prepare(struct qed_dev *cdev, /* Initialize the first hwfn - will learn number of hwfns */ rc = qed_hw_prepare_single(p_hwfn, cdev->regview, - cdev->doorbells, personality); + cdev->doorbells, + cdev->db_phys_addr, + personality); if (rc) return rc; @@ -3555,22 +4543,25 @@ int qed_hw_prepare(struct qed_dev *cdev, /* Initialize the rest of the hwfns */ if (cdev->num_hwfns > 1) { void __iomem *p_regview, *p_doorbell; - u8 __iomem *addr; + u64 db_phys_addr; + u32 offset; /* adjust bar offset for second engine */ - addr = cdev->regview + - qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, - BAR_ID_0) / 2; - p_regview = addr; + offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, + BAR_ID_0) / 2; + p_regview = cdev->regview + offset; + + offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, + BAR_ID_1) / 2; - addr = cdev->doorbells + - qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, - BAR_ID_1) / 2; - p_doorbell = addr; + p_doorbell = cdev->doorbells + offset; + + db_phys_addr = cdev->db_phys_addr + offset; /* prepare second hw function */ rc = qed_hw_prepare_single(&cdev->hwfns[1], p_regview, - p_doorbell, personality); + p_doorbell, db_phys_addr, + personality); /* in case of error, need to free the previously * initiliazed hwfn 0. 
@@ -3951,269 +4942,6 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn, u8 src_id, u8 *dst_id) return 0; } -static void qed_llh_mac_to_filter(u32 *p_high, u32 *p_low, - u8 *p_filter) -{ - *p_high = p_filter[1] | (p_filter[0] << 8); - *p_low = p_filter[5] | (p_filter[4] << 8) | - (p_filter[3] << 16) | (p_filter[2] << 24); -} - -int qed_llh_add_mac_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, u8 *p_filter) -{ - u32 high = 0, low = 0, en; - int i; - - if (!test_bit(QED_MF_LLH_MAC_CLSS, &p_hwfn->cdev->mf_bits)) - return 0; - - qed_llh_mac_to_filter(&high, &low, p_filter); - - /* Find a free entry and utilize it */ - for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { - en = qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32)); - if (en) - continue; - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - 2 * i * sizeof(u32), low); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32), high); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + - i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 1); - break; - } - if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) { - DP_NOTICE(p_hwfn, - "Failed to find an empty LLH filter to utilize\n"); - return -EINVAL; - } - - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "mac: %pM is added at %d\n", - p_filter, i); - - return 0; -} - -void qed_llh_remove_mac_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, u8 *p_filter) -{ - u32 high = 0, low = 0; - int i; - - if (!test_bit(QED_MF_LLH_MAC_CLSS, &p_hwfn->cdev->mf_bits)) - return; - - qed_llh_mac_to_filter(&high, &low, p_filter); - - /* Find the entry and clean it */ - for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { - if (qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - 2 * i * sizeof(u32)) != low) - continue; - if (qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32)) != high) - continue; - - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32), 0); - - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "mac: %pM is removed from %d\n", - p_filter, i); - break; - } - if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) - DP_NOTICE(p_hwfn, "Tried to remove a non-configured filter\n"); -} - -int -qed_llh_add_protocol_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - u16 source_port_or_eth_type, - u16 dest_port, enum qed_llh_port_filter_type_t type) -{ - u32 high = 0, low = 0, en; - int i; - - if (!test_bit(QED_MF_LLH_PROTO_CLSS, &p_hwfn->cdev->mf_bits)) - return 0; - - switch (type) { - case QED_LLH_FILTER_ETHERTYPE: - high = source_port_or_eth_type; - break; - case QED_LLH_FILTER_TCP_SRC_PORT: - case QED_LLH_FILTER_UDP_SRC_PORT: - low = source_port_or_eth_type << 16; - break; - case QED_LLH_FILTER_TCP_DEST_PORT: - case QED_LLH_FILTER_UDP_DEST_PORT: - low = dest_port; - break; - case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT: - case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT: - low = (source_port_or_eth_type << 16) | dest_port; - break; - default: - DP_NOTICE(p_hwfn, - "Non valid LLH protocol filter type %d\n", type); - return -EINVAL; - } - /* Find a free entry and utilize it */ - for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { - en = qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32)); - if (en) 
- continue; - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - 2 * i * sizeof(u32), low); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32), high); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 1); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + - i * sizeof(u32), 1 << type); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 1); - break; - } - if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) { - DP_NOTICE(p_hwfn, - "Failed to find an empty LLH filter to utilize\n"); - return -EINVAL; - } - switch (type) { - case QED_LLH_FILTER_ETHERTYPE: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "ETH type %x is added at %d\n", - source_port_or_eth_type, i); - break; - case QED_LLH_FILTER_TCP_SRC_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "TCP src port %x is added at %d\n", - source_port_or_eth_type, i); - break; - case QED_LLH_FILTER_UDP_SRC_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "UDP src port %x is added at %d\n", - source_port_or_eth_type, i); - break; - case QED_LLH_FILTER_TCP_DEST_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "TCP dst port %x is added at %d\n", dest_port, i); - break; - case QED_LLH_FILTER_UDP_DEST_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "UDP dst port %x is added at %d\n", dest_port, i); - break; - case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "TCP src/dst ports %x/%x are added at %d\n", - source_port_or_eth_type, dest_port, i); - break; - case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT: - DP_VERBOSE(p_hwfn, NETIF_MSG_HW, - "UDP src/dst ports %x/%x are added at %d\n", - source_port_or_eth_type, dest_port, i); - break; - } - return 0; -} - -void -qed_llh_remove_protocol_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - u16 source_port_or_eth_type, - u16 dest_port, - enum qed_llh_port_filter_type_t type) -{ - u32 high = 0, low = 0; - int i; - - if (!test_bit(QED_MF_LLH_PROTO_CLSS, &p_hwfn->cdev->mf_bits)) - return; - - switch (type) { - case QED_LLH_FILTER_ETHERTYPE: - high = source_port_or_eth_type; - break; - case QED_LLH_FILTER_TCP_SRC_PORT: - case QED_LLH_FILTER_UDP_SRC_PORT: - low = source_port_or_eth_type << 16; - break; - case QED_LLH_FILTER_TCP_DEST_PORT: - case QED_LLH_FILTER_UDP_DEST_PORT: - low = dest_port; - break; - case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT: - case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT: - low = (source_port_or_eth_type << 16) | dest_port; - break; - default: - DP_NOTICE(p_hwfn, - "Non valid LLH protocol filter type %d\n", type); - return; - } - - for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) { - if (!qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32))) - continue; - if (!qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32))) - continue; - if (!(qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + - i * sizeof(u32)) & BIT(type))) - continue; - if (qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - 2 * i * sizeof(u32)) != low) - continue; - if (qed_rd(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32)) != high) - continue; - - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + - i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * i * sizeof(u32), 0); - qed_wr(p_hwfn, p_ptt, - NIG_REG_LLH_FUNC_FILTER_VALUE + - (2 * i + 1) * sizeof(u32), 0); - 
break; - } - - if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) - DP_NOTICE(p_hwfn, "Tried to remove a non-configured filter\n"); -} - static int qed_set_coalesce(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 hw_addr, void *p_eth_qzone, size_t eth_qzone_size, u8 timeset) diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h index e4b4e3b78e8a..47376d4d071f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h +++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h @@ -241,11 +241,17 @@ enum qed_dmae_address_type_t { #define QED_DMAE_FLAG_VF_SRC 0x00000002 #define QED_DMAE_FLAG_VF_DST 0x00000004 #define QED_DMAE_FLAG_COMPLETION_DST 0x00000008 +#define QED_DMAE_FLAG_PORT 0x00000010 +#define QED_DMAE_FLAG_PF_SRC 0x00000020 +#define QED_DMAE_FLAG_PF_DST 0x00000040 struct qed_dmae_params { u32 flags; /* consists of QED_DMAE_FLAG_* values */ u8 src_vfid; u8 dst_vfid; + u8 port_id; + u8 src_pfid; + u8 dst_pfid; }; /** @@ -257,7 +263,7 @@ struct qed_dmae_params { * @param source_addr * @param grc_addr (dmae_data_offset) * @param size_in_dwords - * @param flags (one of the flags defined above) + * @param p_params (default parameters will be used in case of NULL) */ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn, @@ -265,7 +271,7 @@ qed_dmae_host2grc(struct qed_hwfn *p_hwfn, u64 source_addr, u32 grc_addr, u32 size_in_dwords, - u32 flags); + struct qed_dmae_params *p_params); /** * @brief qed_dmae_grc2host - Read data from dmae data offset @@ -275,11 +281,11 @@ qed_dmae_host2grc(struct qed_hwfn *p_hwfn, * @param grc_addr (dmae_data_offset) * @param dest_addr * @param size_in_dwords - * @param flags - one of the flags defined above + * @param p_params (default parameters will be used in case of NULL) */ int qed_dmae_grc2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 grc_addr, dma_addr_t dest_addr, u32 size_in_dwords, - u32 flags); + struct qed_dmae_params *p_params); /** * @brief qed_dmae_host2host - copy data from to source address @@ -290,7 +296,7 @@ int qed_dmae_grc2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, * @param source_addr * @param dest_addr * @param size_in_dwords - * @param params + * @param p_params (default parameters will be used in case of NULL) */ int qed_dmae_host2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, @@ -368,26 +374,66 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn, u8 *dst_id); /** - * @brief qed_llh_add_mac_filter - configures a MAC filter in llh + * @brief qed_llh_get_num_ppfid - Return the allocated number of LLH filter + * banks that are allocated to the PF. * - * @param p_hwfn - * @param p_ptt - * @param p_filter - MAC to add + * @param cdev + * + * @return u8 - Number of LLH filter banks */ -int qed_llh_add_mac_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, u8 *p_filter); +u8 qed_llh_get_num_ppfid(struct qed_dev *cdev); + +enum qed_eng { + QED_ENG0, + QED_ENG1, + QED_BOTH_ENG, +}; /** - * @brief qed_llh_remove_mac_filter - removes a MAC filter from llh + * @brief qed_llh_set_ppfid_affinity - Set the engine affinity for the given + * LLH filter bank. + * + * @param cdev + * @param ppfid - relative within the allocated ppfids ('0' is the default one). 
+ * @param eng + * + * @return int + */ +int qed_llh_set_ppfid_affinity(struct qed_dev *cdev, + u8 ppfid, enum qed_eng eng); + +/** + * @brief qed_llh_set_roce_affinity - Set the RoCE engine affinity + * + * @param cdev + * @param eng + * + * @return int + */ +int qed_llh_set_roce_affinity(struct qed_dev *cdev, enum qed_eng eng); + +/** + * @brief qed_llh_add_mac_filter - Add a LLH MAC filter into the given filter + * bank. + * + * @param cdev + * @param ppfid - relative within the allocated ppfids ('0' is the default one). + * @param mac_addr - MAC to add + */ +int qed_llh_add_mac_filter(struct qed_dev *cdev, + u8 ppfid, u8 mac_addr[ETH_ALEN]); + +/** + * @brief qed_llh_remove_mac_filter - Remove a LLH MAC filter from the given + * filter bank. * - * @param p_hwfn * @param p_ptt * @param p_filter - MAC to remove */ -void qed_llh_remove_mac_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, u8 *p_filter); +void qed_llh_remove_mac_filter(struct qed_dev *cdev, + u8 ppfid, u8 mac_addr[ETH_ALEN]); -enum qed_llh_port_filter_type_t { +enum qed_llh_prot_filter_type_t { QED_LLH_FILTER_ETHERTYPE, QED_LLH_FILTER_TCP_SRC_PORT, QED_LLH_FILTER_TCP_DEST_PORT, @@ -398,36 +444,37 @@ enum qed_llh_port_filter_type_t { }; /** - * @brief qed_llh_add_protocol_filter - configures a protocol filter in llh + * @brief qed_llh_add_protocol_filter - Add a LLH protocol filter into the + * given filter bank. * - * @param p_hwfn - * @param p_ptt + * @param cdev + * @param ppfid - relative within the allocated ppfids ('0' is the default one). + * @param type - type of filters and comparing * @param source_port_or_eth_type - source port or ethertype to add * @param dest_port - destination port to add * @param type - type of filters and comparing */ int -qed_llh_add_protocol_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - u16 source_port_or_eth_type, - u16 dest_port, - enum qed_llh_port_filter_type_t type); +qed_llh_add_protocol_filter(struct qed_dev *cdev, + u8 ppfid, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, u16 dest_port); /** - * @brief qed_llh_remove_protocol_filter - remove a protocol filter in llh + * @brief qed_llh_remove_protocol_filter - Remove a LLH protocol filter from + * the given filter bank. * - * @param p_hwfn - * @param p_ptt + * @param cdev + * @param ppfid - relative within the allocated ppfids ('0' is the default one). 
+ * @param type - type of filters and comparing * @param source_port_or_eth_type - source port or ethertype to add * @param dest_port - destination port to add - * @param type - type of filters and comparing */ void -qed_llh_remove_protocol_filter(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - u16 source_port_or_eth_type, - u16 dest_port, - enum qed_llh_port_filter_type_t type); +qed_llh_remove_protocol_filter(struct qed_dev *cdev, + u8 ppfid, + enum qed_llh_prot_filter_type_t type, + u16 source_port_or_eth_type, u16 dest_port); /** * *@brief Cleanup of previous driver remains prior to load diff --git a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c index 46dc93d3b9b5..de31a382f58e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c +++ b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c @@ -745,7 +745,7 @@ struct qed_hash_fcoe_con { static int qed_fill_fcoe_dev_info(struct qed_dev *cdev, struct qed_dev_fcoe_info *info) { - struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); + struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev); int rc; memset(info, 0, sizeof(*info)); @@ -806,15 +806,15 @@ static int qed_fcoe_stop(struct qed_dev *cdev) return -EINVAL; } - p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev)); + p_ptt = qed_ptt_acquire(QED_AFFIN_HWFN(cdev)); if (!p_ptt) return -EAGAIN; /* Stop the fcoe */ - rc = qed_sp_fcoe_func_stop(QED_LEADING_HWFN(cdev), p_ptt, + rc = qed_sp_fcoe_func_stop(QED_AFFIN_HWFN(cdev), p_ptt, QED_SPQ_MODE_EBLOCK, NULL); cdev->flags &= ~QED_FLAG_STORAGE_STARTED; - qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt); + qed_ptt_release(QED_AFFIN_HWFN(cdev), p_ptt); return rc; } @@ -828,8 +828,8 @@ static int qed_fcoe_start(struct qed_dev *cdev, struct qed_fcoe_tid *tasks) return 0; } - rc = qed_sp_fcoe_func_start(QED_LEADING_HWFN(cdev), - QED_SPQ_MODE_EBLOCK, NULL); + rc = qed_sp_fcoe_func_start(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK, + NULL); if (rc) { DP_NOTICE(cdev, "Failed to start fcoe\n"); return rc; @@ -849,7 +849,7 @@ static int qed_fcoe_start(struct qed_dev *cdev, struct qed_fcoe_tid *tasks) return -ENOMEM; } - rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), tid_info); + rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info); if (rc) { DP_NOTICE(cdev, "Failed to gather task information\n"); qed_fcoe_stop(cdev); @@ -884,7 +884,7 @@ static int qed_fcoe_acquire_conn(struct qed_dev *cdev, } /* Acquire the connection */ - rc = qed_fcoe_acquire_connection(QED_LEADING_HWFN(cdev), NULL, + rc = qed_fcoe_acquire_connection(QED_AFFIN_HWFN(cdev), NULL, &hash_con->con); if (rc) { DP_NOTICE(cdev, "Failed to acquire Connection\n"); @@ -898,7 +898,7 @@ static int qed_fcoe_acquire_conn(struct qed_dev *cdev, hash_add(cdev->connections, &hash_con->node, *handle); if (p_doorbell) - *p_doorbell = qed_fcoe_get_db_addr(QED_LEADING_HWFN(cdev), + *p_doorbell = qed_fcoe_get_db_addr(QED_AFFIN_HWFN(cdev), *handle); return 0; @@ -916,7 +916,7 @@ static int qed_fcoe_release_conn(struct qed_dev *cdev, u32 handle) } hlist_del(&hash_con->node); - qed_fcoe_release_connection(QED_LEADING_HWFN(cdev), hash_con->con); + qed_fcoe_release_connection(QED_AFFIN_HWFN(cdev), hash_con->con); kfree(hash_con); return 0; @@ -971,7 +971,7 @@ static int qed_fcoe_offload_conn(struct qed_dev *cdev, con->d_id.addr_mid = conn_info->d_id.addr_mid; con->d_id.addr_lo = conn_info->d_id.addr_lo; - return qed_sp_fcoe_conn_offload(QED_LEADING_HWFN(cdev), con, + return qed_sp_fcoe_conn_offload(QED_AFFIN_HWFN(cdev), con, QED_SPQ_MODE_EBLOCK, NULL); } @@ -992,13 +992,13 @@ static 
int qed_fcoe_destroy_conn(struct qed_dev *cdev, con = hash_con->con; con->terminate_params = terminate_params; - return qed_sp_fcoe_conn_destroy(QED_LEADING_HWFN(cdev), con, + return qed_sp_fcoe_conn_destroy(QED_AFFIN_HWFN(cdev), con, QED_SPQ_MODE_EBLOCK, NULL); } static int qed_fcoe_stats(struct qed_dev *cdev, struct qed_fcoe_stats *stats) { - return qed_fcoe_get_stats(QED_LEADING_HWFN(cdev), stats); + return qed_fcoe_get_stats(QED_AFFIN_HWFN(cdev), stats); } void qed_get_protocol_stats_fcoe(struct qed_dev *cdev, diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index 37edaa847512..e054f6c69e3a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -12612,8 +12612,10 @@ struct public_drv_mb { #define DRV_MSG_CODE_BIST_TEST 0x001e0000 #define DRV_MSG_CODE_SET_LED_MODE 0x00200000 -#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000 +#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000 #define DRV_MSG_CODE_GET_TLV_DONE 0x002f0000 +#define DRV_MSG_CODE_GET_ENGINE_CONFIG 0x00370000 +#define DRV_MSG_CODE_GET_PPFID_BITMAP 0x43000000 #define RESOURCE_CMD_REQ_RESC_MASK 0x0000001F #define RESOURCE_CMD_REQ_RESC_SHIFT 0 @@ -12802,6 +12804,18 @@ struct public_drv_mb { #define FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR (1 << 0) +#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_MASK 0x00000001 +#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_SHIFT 0 +#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_MASK 0x00000002 +#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_SHIFT 1 +#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_MASK 0x00000004 +#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_SHIFT 2 +#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_MASK 0x00000008 +#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_SHIFT 3 + +#define FW_MB_PARAM_PPFID_BITMAP_MASK 0xFF +#define FW_MB_PARAM_PPFID_BITMAP_SHIFT 0 + u32 drv_pulse_mb; #define DRV_PULSE_SEQ_MASK 0x00007fff #define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000 diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.c b/drivers/net/ethernet/qlogic/qed/qed_hw.c index 72ec1c6bdf70..a4de9e3ef72c 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hw.c +++ b/drivers/net/ethernet/qlogic/qed/qed_hw.c @@ -392,11 +392,15 @@ u32 qed_vfid_to_concrete(struct qed_hwfn *p_hwfn, u8 vfid) } /* DMAE */ +#define QED_DMAE_FLAGS_IS_SET(params, flag) \ + ((params) != NULL && ((params)->flags & QED_DMAE_FLAG_##flag)) + static void qed_dmae_opcode(struct qed_hwfn *p_hwfn, const u8 is_src_type_grc, const u8 is_dst_type_grc, struct qed_dmae_params *p_params) { + u8 src_pfid, dst_pfid, port_id; u16 opcode_b = 0; u32 opcode = 0; @@ -407,14 +411,18 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn, opcode |= (is_src_type_grc ? DMAE_CMD_SRC_MASK_GRC : DMAE_CMD_SRC_MASK_PCIE) << DMAE_CMD_SRC_SHIFT; - opcode |= ((p_hwfn->rel_pf_id & DMAE_CMD_SRC_PF_ID_MASK) << + src_pfid = QED_DMAE_FLAGS_IS_SET(p_params, PF_SRC) ? + p_params->src_pfid : p_hwfn->rel_pf_id; + opcode |= ((src_pfid & DMAE_CMD_SRC_PF_ID_MASK) << DMAE_CMD_SRC_PF_ID_SHIFT); /* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */ opcode |= (is_dst_type_grc ? DMAE_CMD_DST_MASK_GRC : DMAE_CMD_DST_MASK_PCIE) << DMAE_CMD_DST_SHIFT; - opcode |= ((p_hwfn->rel_pf_id & DMAE_CMD_DST_PF_ID_MASK) << + dst_pfid = QED_DMAE_FLAGS_IS_SET(p_params, PF_DST) ? 
+ p_params->dst_pfid : p_hwfn->rel_pf_id; + opcode |= ((dst_pfid & DMAE_CMD_DST_PF_ID_MASK) << DMAE_CMD_DST_PF_ID_SHIFT); /* Whether to write a completion word to the completion destination: @@ -425,12 +433,14 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn, opcode |= (DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT); - if (p_params->flags & QED_DMAE_FLAG_COMPLETION_DST) + if (QED_DMAE_FLAGS_IS_SET(p_params, COMPLETION_DST)) opcode |= (1 << DMAE_CMD_COMP_FUNC_SHIFT); opcode |= (DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT); - opcode |= ((p_hwfn->port_id) << DMAE_CMD_PORT_ID_SHIFT); + port_id = (QED_DMAE_FLAGS_IS_SET(p_params, PORT)) ? + p_params->port_id : p_hwfn->port_id; + opcode |= (port_id << DMAE_CMD_PORT_ID_SHIFT); /* reset source address in next go */ opcode |= (DMAE_CMD_SRC_ADDR_RESET_MASK << @@ -441,7 +451,7 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn, DMAE_CMD_DST_ADDR_RESET_SHIFT); /* SRC/DST VFID: all 1's - pf, otherwise VF id */ - if (p_params->flags & QED_DMAE_FLAG_VF_SRC) { + if (QED_DMAE_FLAGS_IS_SET(p_params, VF_SRC)) { opcode |= 1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT; opcode_b |= p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT; } else { @@ -449,7 +459,7 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn, DMAE_CMD_SRC_VF_ID_SHIFT; } - if (p_params->flags & QED_DMAE_FLAG_VF_DST) { + if (QED_DMAE_FLAGS_IS_SET(p_params, VF_DST)) { opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT; opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT; } else { @@ -733,7 +743,7 @@ static int qed_dmae_execute_command(struct qed_hwfn *p_hwfn, for (i = 0; i <= cnt_split; i++) { offset = length_limit * i; - if (!(p_params->flags & QED_DMAE_FLAG_RW_REPL_SRC)) { + if (!QED_DMAE_FLAGS_IS_SET(p_params, RW_REPL_SRC)) { if (src_type == QED_DMAE_ADDRESS_GRC) src_addr_split = src_addr + offset; else @@ -771,14 +781,12 @@ static int qed_dmae_execute_command(struct qed_hwfn *p_hwfn, int qed_dmae_host2grc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - u64 source_addr, u32 grc_addr, u32 size_in_dwords, u32 flags) + u64 source_addr, u32 grc_addr, u32 size_in_dwords, + struct qed_dmae_params *p_params) { u32 grc_addr_in_dw = grc_addr / sizeof(u32); - struct qed_dmae_params params; int rc; - memset(¶ms, 0, sizeof(struct qed_dmae_params)); - params.flags = flags; mutex_lock(&p_hwfn->dmae_info.mutex); @@ -786,7 +794,7 @@ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn, grc_addr_in_dw, QED_DMAE_ADDRESS_HOST_VIRT, QED_DMAE_ADDRESS_GRC, - size_in_dwords, ¶ms); + size_in_dwords, p_params); mutex_unlock(&p_hwfn->dmae_info.mutex); @@ -796,21 +804,19 @@ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn, int qed_dmae_grc2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 grc_addr, - dma_addr_t dest_addr, u32 size_in_dwords, u32 flags) + dma_addr_t dest_addr, u32 size_in_dwords, + struct qed_dmae_params *p_params) { u32 grc_addr_in_dw = grc_addr / sizeof(u32); - struct qed_dmae_params params; int rc; - memset(¶ms, 0, sizeof(struct qed_dmae_params)); - params.flags = flags; mutex_lock(&p_hwfn->dmae_info.mutex); rc = qed_dmae_execute_command(p_hwfn, p_ptt, grc_addr_in_dw, dest_addr, QED_DMAE_ADDRESS_GRC, QED_DMAE_ADDRESS_HOST_VIRT, - size_in_dwords, ¶ms); + size_in_dwords, p_params); mutex_unlock(&p_hwfn->dmae_info.mutex); @@ -842,7 +848,6 @@ int qed_dmae_sanity(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, const char *phase) { u32 size = PAGE_SIZE / 2, val; - struct qed_dmae_params params; int rc = 0; dma_addr_t p_phys; void *p_virt; @@ -875,9 +880,8 @@ int qed_dmae_sanity(struct 
qed_hwfn *p_hwfn, (u64)p_phys, p_virt, (u64)(p_phys + size), (u8 *)p_virt + size, size); - memset(¶ms, 0, sizeof(params)); rc = qed_dmae_host2host(p_hwfn, p_ptt, p_phys, p_phys + size, - size / 4 /* size_in_dwords */, ¶ms); + size / 4, NULL); if (rc) { DP_NOTICE(p_hwfn, "DMAE sanity [%s]: qed_dmae_host2host() failed. rc = %d.\n", diff --git a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c index 34193c2f1699..a868d7f88601 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c +++ b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c @@ -131,7 +131,7 @@ static int qed_init_rt(struct qed_hwfn *p_hwfn, rc = qed_dmae_host2grc(p_hwfn, p_ptt, (uintptr_t)(p_init_val + i), - addr + (i << 2), segment, 0); + addr + (i << 2), segment, NULL); if (rc) return rc; @@ -194,7 +194,7 @@ static int qed_init_array_dmae(struct qed_hwfn *p_hwfn, } else { rc = qed_dmae_host2grc(p_hwfn, p_ptt, (uintptr_t)(buf + dmae_data_offset), - addr, size, 0); + addr, size, NULL); } return rc; @@ -205,6 +205,7 @@ static int qed_init_fill_dmae(struct qed_hwfn *p_hwfn, u32 addr, u32 fill, u32 fill_count) { static u32 zero_buffer[DMAE_MAX_RW_SIZE]; + struct qed_dmae_params params = {}; memset(zero_buffer, 0, sizeof(u32) * DMAE_MAX_RW_SIZE); @@ -214,10 +215,10 @@ static int qed_init_fill_dmae(struct qed_hwfn *p_hwfn, * 3. p_hwfb->temp_data, * 4. fill_count */ - + params.flags = QED_DMAE_FLAG_RW_REPL_SRC; return qed_dmae_host2grc(p_hwfn, p_ptt, (uintptr_t)(&zero_buffer[0]), - addr, fill_count, QED_DMAE_FLAG_RW_REPL_SRC); + addr, fill_count, ¶ms); } static void qed_init_fill(struct qed_hwfn *p_hwfn, diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c index fdfedbc8e431..4e8118a08654 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_int.c +++ b/drivers/net/ethernet/qlogic/qed/qed_int.c @@ -1508,10 +1508,10 @@ void qed_int_cau_conf_sb(struct qed_hwfn *p_hwfn, qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&phys_addr, CAU_REG_SB_ADDR_MEMORY + - igu_sb_id * sizeof(u64), 2, 0); + igu_sb_id * sizeof(u64), 2, NULL); qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&sb_entry, CAU_REG_SB_VAR_MEMORY + - igu_sb_id * sizeof(u64), 2, 0); + igu_sb_id * sizeof(u64), 2, NULL); } else { /* Initialize Status Block Address */ STORE_RT_REG_AGG(p_hwfn, @@ -2362,7 +2362,7 @@ int qed_int_set_timer_res(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY + sb_id * sizeof(u64), - (u64)(uintptr_t)&sb_entry, 2, 0); + (u64)(uintptr_t)&sb_entry, 2, NULL); if (rc) { DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc); return rc; @@ -2376,7 +2376,7 @@ int qed_int_set_timer_res(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, rc = qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&sb_entry, CAU_REG_SB_VAR_MEMORY + - sb_id * sizeof(u64), 2, 0); + sb_id * sizeof(u64), 2, NULL); if (rc) { DP_ERR(p_hwfn, "dmae_host2grc failed %d\n", rc); return rc; diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c index 4f8a685d1a55..5585c18053ec 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c @@ -1082,7 +1082,7 @@ struct qed_hash_iscsi_con { static int qed_fill_iscsi_dev_info(struct qed_dev *cdev, struct qed_dev_iscsi_info *info) { - struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); + struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev); int rc; @@ -1141,8 +1141,8 @@ static int qed_iscsi_stop(struct qed_dev *cdev) } /* Stop the iscsi */ - 
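The hunks above change qed_dmae_host2grc()/qed_dmae_grc2host() to take a struct qed_dmae_params pointer instead of a raw flags word, with NULL meaning "default behaviour". A minimal sketch of the new calling convention, modelled on the qed_init_fill_dmae() change; the example_* name is hypothetical and the types are qed-internal, so this is not buildable outside the qed driver tree:

static int example_fill_grc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
			    u32 grc_addr, u32 fill_count)
{
	static u32 zero_buffer[DMAE_MAX_RW_SIZE];
	struct qed_dmae_params params = {};

	/* Callers that need no special behaviour simply pass NULL as the
	 * last argument. Here the "replicate source dword" flag is needed,
	 * so a zero-initialized params struct is filled on the stack.
	 */
	params.flags = QED_DMAE_FLAG_RW_REPL_SRC;

	return qed_dmae_host2grc(p_hwfn, p_ptt,
				 (uintptr_t)&zero_buffer[0],
				 grc_addr, fill_count, &params);
}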
rc = qed_sp_iscsi_func_stop(QED_LEADING_HWFN(cdev), - QED_SPQ_MODE_EBLOCK, NULL); + rc = qed_sp_iscsi_func_stop(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK, + NULL); cdev->flags &= ~QED_FLAG_STORAGE_STARTED; return rc; @@ -1161,9 +1161,8 @@ static int qed_iscsi_start(struct qed_dev *cdev, return 0; } - rc = qed_sp_iscsi_func_start(QED_LEADING_HWFN(cdev), - QED_SPQ_MODE_EBLOCK, NULL, event_context, - async_event_cb); + rc = qed_sp_iscsi_func_start(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK, + NULL, event_context, async_event_cb); if (rc) { DP_NOTICE(cdev, "Failed to start iscsi\n"); return rc; @@ -1182,8 +1181,7 @@ static int qed_iscsi_start(struct qed_dev *cdev, return -ENOMEM; } - rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), - tid_info); + rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info); if (rc) { DP_NOTICE(cdev, "Failed to gather task information\n"); qed_iscsi_stop(cdev); @@ -1215,7 +1213,7 @@ static int qed_iscsi_acquire_conn(struct qed_dev *cdev, return -ENOMEM; /* Acquire the connection */ - rc = qed_iscsi_acquire_connection(QED_LEADING_HWFN(cdev), NULL, + rc = qed_iscsi_acquire_connection(QED_AFFIN_HWFN(cdev), NULL, &hash_con->con); if (rc) { DP_NOTICE(cdev, "Failed to acquire Connection\n"); @@ -1229,7 +1227,7 @@ static int qed_iscsi_acquire_conn(struct qed_dev *cdev, hash_add(cdev->connections, &hash_con->node, *handle); if (p_doorbell) - *p_doorbell = qed_iscsi_get_db_addr(QED_LEADING_HWFN(cdev), + *p_doorbell = qed_iscsi_get_db_addr(QED_AFFIN_HWFN(cdev), *handle); return 0; @@ -1247,7 +1245,7 @@ static int qed_iscsi_release_conn(struct qed_dev *cdev, u32 handle) } hlist_del(&hash_con->node); - qed_iscsi_release_connection(QED_LEADING_HWFN(cdev), hash_con->con); + qed_iscsi_release_connection(QED_AFFIN_HWFN(cdev), hash_con->con); kfree(hash_con); return 0; @@ -1324,7 +1322,7 @@ static int qed_iscsi_offload_conn(struct qed_dev *cdev, /* Set default values on other connection fields */ con->offl_flags = 0x1; - return qed_sp_iscsi_conn_offload(QED_LEADING_HWFN(cdev), con, + return qed_sp_iscsi_conn_offload(QED_AFFIN_HWFN(cdev), con, QED_SPQ_MODE_EBLOCK, NULL); } @@ -1351,7 +1349,7 @@ static int qed_iscsi_update_conn(struct qed_dev *cdev, con->first_seq_length = conn_info->first_seq_length; con->exp_stat_sn = conn_info->exp_stat_sn; - return qed_sp_iscsi_conn_update(QED_LEADING_HWFN(cdev), con, + return qed_sp_iscsi_conn_update(QED_AFFIN_HWFN(cdev), con, QED_SPQ_MODE_EBLOCK, NULL); } @@ -1366,8 +1364,7 @@ static int qed_iscsi_clear_conn_sq(struct qed_dev *cdev, u32 handle) return -EINVAL; } - return qed_sp_iscsi_conn_clear_sq(QED_LEADING_HWFN(cdev), - hash_con->con, + return qed_sp_iscsi_conn_clear_sq(QED_AFFIN_HWFN(cdev), hash_con->con, QED_SPQ_MODE_EBLOCK, NULL); } @@ -1385,14 +1382,13 @@ static int qed_iscsi_destroy_conn(struct qed_dev *cdev, hash_con->con->abortive_dsconnect = abrt_conn; - return qed_sp_iscsi_conn_terminate(QED_LEADING_HWFN(cdev), - hash_con->con, + return qed_sp_iscsi_conn_terminate(QED_AFFIN_HWFN(cdev), hash_con->con, QED_SPQ_MODE_EBLOCK, NULL); } static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats) { - return qed_iscsi_get_stats(QED_LEADING_HWFN(cdev), stats); + return qed_iscsi_get_stats(QED_AFFIN_HWFN(cdev), stats); } static int qed_iscsi_change_mac(struct qed_dev *cdev, @@ -1407,8 +1403,7 @@ static int qed_iscsi_change_mac(struct qed_dev *cdev, return -EINVAL; } - return qed_sp_iscsi_mac_update(QED_LEADING_HWFN(cdev), - hash_con->con, + return qed_sp_iscsi_mac_update(QED_AFFIN_HWFN(cdev), hash_con->con, 
QED_SPQ_MODE_EBLOCK, NULL); } diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c index ded556b7bab5..f380fae8799d 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c @@ -63,7 +63,12 @@ struct mpa_v2_hdr { #define MPA_REV2(_mpa_rev) ((_mpa_rev) == MPA_NEGOTIATION_TYPE_ENHANCED) #define QED_IWARP_INVALID_TCP_CID 0xffffffff -#define QED_IWARP_RCV_WND_SIZE_DEF (256 * 1024) + +#define QED_IWARP_RCV_WND_SIZE_DEF_BB_2P (200 * 1024) +#define QED_IWARP_RCV_WND_SIZE_DEF_BB_4P (100 * 1024) +#define QED_IWARP_RCV_WND_SIZE_DEF_AH_2P (150 * 1024) +#define QED_IWARP_RCV_WND_SIZE_DEF_AH_4P (90 * 1024) + #define QED_IWARP_RCV_WND_SIZE_MIN (0xffff) #define TIMESTAMP_HEADER_SIZE (12) #define QED_IWARP_MAX_FIN_RT_DEFAULT (2) @@ -532,7 +537,8 @@ int qed_iwarp_destroy_qp(struct qed_hwfn *p_hwfn, struct qed_rdma_qp *qp) /* Make sure ep is closed before returning and freeing memory. */ if (ep) { - while (ep->state != QED_IWARP_EP_CLOSED && wait_count++ < 200) + while (READ_ONCE(ep->state) != QED_IWARP_EP_CLOSED && + wait_count++ < 200) msleep(100); if (ep->state != QED_IWARP_EP_CLOSED) @@ -1022,8 +1028,6 @@ qed_iwarp_mpa_complete(struct qed_hwfn *p_hwfn, params.ep_context = ep; - ep->state = QED_IWARP_EP_CLOSED; - switch (fw_return_code) { case RDMA_RETURN_OK: ep->qp->max_rd_atomic_req = ep->cm_info.ord; @@ -1083,6 +1087,10 @@ qed_iwarp_mpa_complete(struct qed_hwfn *p_hwfn, break; } + if (fw_return_code != RDMA_RETURN_OK) + /* paired with READ_ONCE in destroy_qp */ + smp_store_release(&ep->state, QED_IWARP_EP_CLOSED); + ep->event_cb(ep->cb_context, ¶ms); /* on passive side, if there is no associated QP (REJECT) we need to @@ -2528,7 +2536,7 @@ qed_iwarp_ll2_slowpath(void *cxt, memset(fpdu, 0, sizeof(*fpdu)); } -static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn) { struct qed_iwarp_info *iwarp_info = &p_hwfn->p_rdma_info->iwarp; int rc = 0; @@ -2563,8 +2571,9 @@ static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) iwarp_info->ll2_mpa_handle = QED_IWARP_HANDLE_INVAL; } - qed_llh_remove_mac_filter(p_hwfn, - p_ptt, p_hwfn->p_rdma_info->iwarp.mac_addr); + qed_llh_remove_mac_filter(p_hwfn->cdev, 0, + p_hwfn->p_rdma_info->iwarp.mac_addr); + return rc; } @@ -2609,7 +2618,7 @@ qed_iwarp_ll2_alloc_buffers(struct qed_hwfn *p_hwfn, static int qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, struct qed_rdma_start_in_params *params, - struct qed_ptt *p_ptt) + u32 rcv_wnd_size) { struct qed_iwarp_info *iwarp_info; struct qed_ll2_acquire_data data; @@ -2628,7 +2637,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, ether_addr_copy(p_hwfn->p_rdma_info->iwarp.mac_addr, params->mac_addr); - rc = qed_llh_add_mac_filter(p_hwfn, p_ptt, params->mac_addr); + rc = qed_llh_add_mac_filter(p_hwfn->cdev, 0, params->mac_addr); if (rc) return rc; @@ -2637,6 +2646,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, cbs.rx_release_cb = qed_iwarp_ll2_rel_rx_pkt; cbs.tx_comp_cb = qed_iwarp_ll2_comp_tx_pkt; cbs.tx_release_cb = qed_iwarp_ll2_rel_tx_pkt; + cbs.slowpath_cb = NULL; cbs.cookie = p_hwfn; memset(&data, 0, sizeof(data)); @@ -2653,7 +2663,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, rc = qed_ll2_acquire_connection(p_hwfn, &data); if (rc) { DP_NOTICE(p_hwfn, "Failed to acquire LL2 connection\n"); - qed_llh_remove_mac_filter(p_hwfn, p_ptt, params->mac_addr); + qed_llh_remove_mac_filter(p_hwfn->cdev, 0, params->mac_addr); return rc; } 
@@ -2675,7 +2685,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, data.input.conn_type = QED_LL2_TYPE_OOO; data.input.mtu = params->max_mtu; - n_ooo_bufs = (QED_IWARP_MAX_OOO * QED_IWARP_RCV_WND_SIZE_DEF) / + n_ooo_bufs = (QED_IWARP_MAX_OOO * rcv_wnd_size) / iwarp_info->max_mtu; n_ooo_bufs = min_t(u32, n_ooo_bufs, QED_IWARP_LL2_OOO_MAX_RX_SIZE); @@ -2708,6 +2718,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, data.input.rx_num_desc = n_ooo_bufs * 2; data.input.tx_num_desc = data.input.rx_num_desc; data.input.tx_max_bds_per_packet = QED_IWARP_MAX_BDS_PER_FPDU; + data.input.tx_tc = PKT_LB_TC; + data.input.tx_dest = QED_LL2_TX_DEST_LB; data.p_connection_handle = &iwarp_info->ll2_mpa_handle; data.input.secondary_queue = true; data.cbs = &cbs; @@ -2757,21 +2769,35 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, &iwarp_info->mpa_buf_list); return rc; err: - qed_iwarp_ll2_stop(p_hwfn, p_ptt); + qed_iwarp_ll2_stop(p_hwfn); return rc; } -int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, +static struct { + u32 two_ports; + u32 four_ports; +} qed_iwarp_rcv_wnd_size[MAX_CHIP_IDS] = { + {QED_IWARP_RCV_WND_SIZE_DEF_BB_2P, QED_IWARP_RCV_WND_SIZE_DEF_BB_4P}, + {QED_IWARP_RCV_WND_SIZE_DEF_AH_2P, QED_IWARP_RCV_WND_SIZE_DEF_AH_4P} +}; + +int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_rdma_start_in_params *params) { + struct qed_dev *cdev = p_hwfn->cdev; struct qed_iwarp_info *iwarp_info; + enum chip_ids chip_id; u32 rcv_wnd_size; iwarp_info = &p_hwfn->p_rdma_info->iwarp; iwarp_info->tcp_flags = QED_IWARP_TS_EN; - rcv_wnd_size = QED_IWARP_RCV_WND_SIZE_DEF; + + chip_id = QED_IS_BB(cdev) ? CHIP_BB : CHIP_K2; + rcv_wnd_size = (qed_device_num_ports(cdev) == 4) ? + qed_iwarp_rcv_wnd_size[chip_id].four_ports : + qed_iwarp_rcv_wnd_size[chip_id].two_ports; /* value 0 is used for ilog2(QED_IWARP_RCV_WND_SIZE_MIN) */ iwarp_info->rcv_wnd_scale = ilog2(rcv_wnd_size) - @@ -2794,10 +2820,10 @@ int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, qed_iwarp_async_event); qed_ooo_setup(p_hwfn); - return qed_iwarp_ll2_start(p_hwfn, params, p_ptt); + return qed_iwarp_ll2_start(p_hwfn, params, rcv_wnd_size); } -int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +int qed_iwarp_stop(struct qed_hwfn *p_hwfn) { int rc; @@ -2808,7 +2834,7 @@ int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_IWARP); - return qed_iwarp_ll2_stop(p_hwfn, p_ptt); + return qed_iwarp_ll2_stop(p_hwfn); } static void qed_iwarp_qp_in_error(struct qed_hwfn *p_hwfn, @@ -2825,7 +2851,9 @@ static void qed_iwarp_qp_in_error(struct qed_hwfn *p_hwfn, params.status = (fw_return_code == IWARP_QP_IN_ERROR_GOOD_CLOSE) ? 
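As a worked illustration of the per-chip receive-window table introduced above: the existing code derives the window scale as ilog2(rcv_wnd_size) minus ilog2(QED_IWARP_RCV_WND_SIZE_MIN). On a 2-port BB device rcv_wnd_size is 200 * 1024 = 204800, ilog2(204800) = 17 and ilog2(0xffff) = 15, so the advertised receive window scale becomes 2; the numbers are only an example of the formula already in the driver.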
0 : -ECONNRESET; - ep->state = QED_IWARP_EP_CLOSED; + /* paired with READ_ONCE in destroy_qp */ + smp_store_release(&ep->state, QED_IWARP_EP_CLOSED); + spin_lock_bh(&p_hwfn->p_rdma_info->iwarp.iw_lock); list_del(&ep->list_entry); spin_unlock_bh(&p_hwfn->p_rdma_info->iwarp.iw_lock); @@ -2914,7 +2942,8 @@ qed_iwarp_tcp_connect_unsuccessful(struct qed_hwfn *p_hwfn, params.event = QED_IWARP_EVENT_ACTIVE_COMPLETE; params.ep_context = ep; params.cm_info = &ep->cm_info; - ep->state = QED_IWARP_EP_CLOSED; + /* paired with READ_ONCE in destroy_qp */ + smp_store_release(&ep->state, QED_IWARP_EP_CLOSED); switch (fw_return_code) { case IWARP_CONN_ERROR_TCP_CONNECT_INVALID_PACKET: diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.h b/drivers/net/ethernet/qlogic/qed/qed_iwarp.h index 7ac959038324..c1b2057d23b8 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.h @@ -183,13 +183,13 @@ struct qed_iwarp_listener { int qed_iwarp_alloc(struct qed_hwfn *p_hwfn); -int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, +int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_rdma_start_in_params *params); void qed_iwarp_init_fw_ramrod(struct qed_hwfn *p_hwfn, struct iwarp_init_func_ramrod_data *p_ramrod); -int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); +int qed_iwarp_stop(struct qed_hwfn *p_hwfn); void qed_iwarp_resc_free(struct qed_hwfn *p_hwfn); diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c index 57641728df69..9f36e7948222 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_l2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c @@ -2111,7 +2111,7 @@ int qed_get_rxq_coalesce(struct qed_hwfn *p_hwfn, rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY + p_cid->sb_igu_id * sizeof(u64), - (u64)(uintptr_t)&sb_entry, 2, 0); + (u64)(uintptr_t)&sb_entry, 2, NULL); if (rc) { DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc); return rc; @@ -2144,7 +2144,7 @@ int qed_get_txq_coalesce(struct qed_hwfn *p_hwfn, rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY + p_cid->sb_igu_id * sizeof(u64), - (u64)(uintptr_t)&sb_entry, 2, 0); + (u64)(uintptr_t)&sb_entry, 2, NULL); if (rc) { DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc); return rc; diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c index b5f419b71287..19a1a58d60f8 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c @@ -239,9 +239,8 @@ out_post1: buffer->phys_addr = new_phys_addr; out_post: - rc = qed_ll2_post_rx_buffer(QED_LEADING_HWFN(cdev), cdev->ll2->handle, - buffer->phys_addr, 0, buffer, 1); - + rc = qed_ll2_post_rx_buffer(p_hwfn, cdev->ll2->handle, + buffer->phys_addr, 0, buffer, 1); if (rc) qed_ll2_dealloc_buffer(cdev, buffer); } @@ -926,16 +925,15 @@ static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie) return 0; } -static void qed_ll2_stop_ooo(struct qed_dev *cdev) +static void qed_ll2_stop_ooo(struct qed_hwfn *p_hwfn) { - struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); - u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id; + u8 *handle = &p_hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id; - DP_VERBOSE(cdev, QED_MSG_STORAGE, "Stopping LL2 OOO queue [%02x]\n", - *handle); + DP_VERBOSE(p_hwfn, (QED_MSG_STORAGE | QED_MSG_LL2), + "Stopping LL2 OOO queue [%02x]\n", *handle); - qed_ll2_terminate_connection(hwfn, *handle); - qed_ll2_release_connection(hwfn, *handle); + 
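The ep->state updates above, together with the polling loop in qed_iwarp_destroy_qp(), now form an explicit publish/poll pair. A minimal sketch of that pattern in isolation (the example_* helpers are hypothetical; READ_ONCE(), smp_store_release() and msleep() come from the usual kernel headers, so this is a driver-context sketch rather than standalone code):

/* Writer side: publish the terminal state with release semantics so the
 * state change cannot be reordered before the tear-down writes that
 * precede it.
 */
static void example_mark_closed(struct qed_iwarp_ep *ep)
{
	/* ...updates to *ep made during tear-down go here... */
	smp_store_release(&ep->state, QED_IWARP_EP_CLOSED);
}

/* Reader side: READ_ONCE() forces the compiler to re-load ep->state on
 * every iteration instead of hoisting the load out of the loop.
 */
static void example_wait_closed(struct qed_iwarp_ep *ep)
{
	int wait_count = 0;

	while (READ_ONCE(ep->state) != QED_IWARP_EP_CLOSED &&
	       wait_count++ < 200)
		msleep(100);
}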
qed_ll2_terminate_connection(p_hwfn, *handle); + qed_ll2_release_connection(p_hwfn, *handle); *handle = QED_LL2_UNUSED_HANDLE; } @@ -1574,12 +1572,12 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle) if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_FCOE) { if (!test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits)) - qed_llh_add_protocol_filter(p_hwfn, p_ptt, - ETH_P_FCOE, 0, - QED_LLH_FILTER_ETHERTYPE); - qed_llh_add_protocol_filter(p_hwfn, p_ptt, - ETH_P_FIP, 0, - QED_LLH_FILTER_ETHERTYPE); + qed_llh_add_protocol_filter(p_hwfn->cdev, 0, + QED_LLH_FILTER_ETHERTYPE, + ETH_P_FCOE, 0); + qed_llh_add_protocol_filter(p_hwfn->cdev, 0, + QED_LLH_FILTER_ETHERTYPE, + ETH_P_FIP, 0); } out: @@ -1980,12 +1978,12 @@ int qed_ll2_terminate_connection(void *cxt, u8 connection_handle) if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_FCOE) { if (!test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits)) - qed_llh_remove_protocol_filter(p_hwfn, p_ptt, - ETH_P_FCOE, 0, - QED_LLH_FILTER_ETHERTYPE); - qed_llh_remove_protocol_filter(p_hwfn, p_ptt, - ETH_P_FIP, 0, - QED_LLH_FILTER_ETHERTYPE); + qed_llh_remove_protocol_filter(p_hwfn->cdev, 0, + QED_LLH_FILTER_ETHERTYPE, + ETH_P_FCOE, 0); + qed_llh_remove_protocol_filter(p_hwfn->cdev, 0, + QED_LLH_FILTER_ETHERTYPE, + ETH_P_FIP, 0); } out: @@ -2086,12 +2084,12 @@ static void _qed_ll2_get_port_stats(struct qed_hwfn *p_hwfn, TSTORM_LL2_PORT_STAT_OFFSET(MFW_PORT(p_hwfn)), sizeof(port_stats)); - p_stats->gsi_invalid_hdr = HILO_64_REGPAIR(port_stats.gsi_invalid_hdr); - p_stats->gsi_invalid_pkt_length = + p_stats->gsi_invalid_hdr += HILO_64_REGPAIR(port_stats.gsi_invalid_hdr); + p_stats->gsi_invalid_pkt_length += HILO_64_REGPAIR(port_stats.gsi_invalid_pkt_length); - p_stats->gsi_unsupported_pkt_typ = + p_stats->gsi_unsupported_pkt_typ += HILO_64_REGPAIR(port_stats.gsi_unsupported_pkt_typ); - p_stats->gsi_crcchksm_error = + p_stats->gsi_crcchksm_error += HILO_64_REGPAIR(port_stats.gsi_crcchksm_error); } @@ -2109,9 +2107,9 @@ static void _qed_ll2_get_tstats(struct qed_hwfn *p_hwfn, CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(qid); qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, sizeof(tstats)); - p_stats->packet_too_big_discard = + p_stats->packet_too_big_discard += HILO_64_REGPAIR(tstats.packet_too_big_discard); - p_stats->no_buff_discard = HILO_64_REGPAIR(tstats.no_buff_discard); + p_stats->no_buff_discard += HILO_64_REGPAIR(tstats.no_buff_discard); } static void _qed_ll2_get_ustats(struct qed_hwfn *p_hwfn, @@ -2128,12 +2126,12 @@ static void _qed_ll2_get_ustats(struct qed_hwfn *p_hwfn, CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(qid); qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, sizeof(ustats)); - p_stats->rcv_ucast_bytes = HILO_64_REGPAIR(ustats.rcv_ucast_bytes); - p_stats->rcv_mcast_bytes = HILO_64_REGPAIR(ustats.rcv_mcast_bytes); - p_stats->rcv_bcast_bytes = HILO_64_REGPAIR(ustats.rcv_bcast_bytes); - p_stats->rcv_ucast_pkts = HILO_64_REGPAIR(ustats.rcv_ucast_pkts); - p_stats->rcv_mcast_pkts = HILO_64_REGPAIR(ustats.rcv_mcast_pkts); - p_stats->rcv_bcast_pkts = HILO_64_REGPAIR(ustats.rcv_bcast_pkts); + p_stats->rcv_ucast_bytes += HILO_64_REGPAIR(ustats.rcv_ucast_bytes); + p_stats->rcv_mcast_bytes += HILO_64_REGPAIR(ustats.rcv_mcast_bytes); + p_stats->rcv_bcast_bytes += HILO_64_REGPAIR(ustats.rcv_bcast_bytes); + p_stats->rcv_ucast_pkts += HILO_64_REGPAIR(ustats.rcv_ucast_pkts); + p_stats->rcv_mcast_pkts += HILO_64_REGPAIR(ustats.rcv_mcast_pkts); + p_stats->rcv_bcast_pkts += HILO_64_REGPAIR(ustats.rcv_bcast_pkts); } static void _qed_ll2_get_pstats(struct 
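The filter calls above show the reworked LLH API: the helpers now take the qed_dev plus a per-PF port-function id (ppfid) instead of a hwfn/ptt pair, and the protocol-filter variants take the filter type ahead of the port/ethertype values. A small sketch of the new shape, using ppfid 0 as the call sites in this patch do (the example_* wrapper is hypothetical and the prototypes are qed-internal):

static int example_add_fcoe_filters(struct qed_dev *cdev, u8 *mac)
{
	int rc;

	rc = qed_llh_add_mac_filter(cdev, 0 /* ppfid */, mac);
	if (rc)
		return rc;

	rc = qed_llh_add_protocol_filter(cdev, 0 /* ppfid */,
					 QED_LLH_FILTER_ETHERTYPE,
					 ETH_P_FCOE, 0 /* dest port */);
	if (rc)
		/* Roll back the MAC filter on failure. */
		qed_llh_remove_mac_filter(cdev, 0, mac);

	return rc;
}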
qed_hwfn *p_hwfn, @@ -2150,23 +2148,21 @@ static void _qed_ll2_get_pstats(struct qed_hwfn *p_hwfn, CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(stats_id); qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, sizeof(pstats)); - p_stats->sent_ucast_bytes = HILO_64_REGPAIR(pstats.sent_ucast_bytes); - p_stats->sent_mcast_bytes = HILO_64_REGPAIR(pstats.sent_mcast_bytes); - p_stats->sent_bcast_bytes = HILO_64_REGPAIR(pstats.sent_bcast_bytes); - p_stats->sent_ucast_pkts = HILO_64_REGPAIR(pstats.sent_ucast_pkts); - p_stats->sent_mcast_pkts = HILO_64_REGPAIR(pstats.sent_mcast_pkts); - p_stats->sent_bcast_pkts = HILO_64_REGPAIR(pstats.sent_bcast_pkts); + p_stats->sent_ucast_bytes += HILO_64_REGPAIR(pstats.sent_ucast_bytes); + p_stats->sent_mcast_bytes += HILO_64_REGPAIR(pstats.sent_mcast_bytes); + p_stats->sent_bcast_bytes += HILO_64_REGPAIR(pstats.sent_bcast_bytes); + p_stats->sent_ucast_pkts += HILO_64_REGPAIR(pstats.sent_ucast_pkts); + p_stats->sent_mcast_pkts += HILO_64_REGPAIR(pstats.sent_mcast_pkts); + p_stats->sent_bcast_pkts += HILO_64_REGPAIR(pstats.sent_bcast_pkts); } -int qed_ll2_get_stats(void *cxt, - u8 connection_handle, struct qed_ll2_stats *p_stats) +static int __qed_ll2_get_stats(void *cxt, u8 connection_handle, + struct qed_ll2_stats *p_stats) { struct qed_hwfn *p_hwfn = cxt; struct qed_ll2_info *p_ll2_conn = NULL; struct qed_ptt *p_ptt; - memset(p_stats, 0, sizeof(*p_stats)); - if ((connection_handle >= QED_MAX_NUM_OF_LL2_CONNECTIONS) || !p_hwfn->p_ll2_info) return -EINVAL; @@ -2181,15 +2177,26 @@ int qed_ll2_get_stats(void *cxt, if (p_ll2_conn->input.gsi_enable) _qed_ll2_get_port_stats(p_hwfn, p_ptt, p_stats); + _qed_ll2_get_tstats(p_hwfn, p_ptt, p_ll2_conn, p_stats); + _qed_ll2_get_ustats(p_hwfn, p_ptt, p_ll2_conn, p_stats); + if (p_ll2_conn->tx_stats_en) _qed_ll2_get_pstats(p_hwfn, p_ptt, p_ll2_conn, p_stats); qed_ptt_release(p_hwfn, p_ptt); + return 0; } +int qed_ll2_get_stats(void *cxt, + u8 connection_handle, struct qed_ll2_stats *p_stats) +{ + memset(p_stats, 0, sizeof(*p_stats)); + return __qed_ll2_get_stats(cxt, connection_handle, p_stats); +} + static void qed_ll2b_release_rx_packet(void *cxt, u8 connection_handle, void *cookie, @@ -2216,7 +2223,7 @@ struct qed_ll2_cbs ll2_cbs = { .tx_release_cb = &qed_ll2b_complete_tx_packet, }; -static void qed_ll2_set_conn_data(struct qed_dev *cdev, +static void qed_ll2_set_conn_data(struct qed_hwfn *p_hwfn, struct qed_ll2_acquire_data *data, struct qed_ll2_params *params, enum qed_ll2_conn_type conn_type, @@ -2232,7 +2239,7 @@ static void qed_ll2_set_conn_data(struct qed_dev *cdev, data->input.tx_num_desc = QED_LL2_TX_SIZE; data->p_connection_handle = handle; data->cbs = &ll2_cbs; - ll2_cbs.cookie = QED_LEADING_HWFN(cdev); + ll2_cbs.cookie = p_hwfn; if (lb) { data->input.tx_tc = PKT_LB_TC; @@ -2243,74 +2250,102 @@ static void qed_ll2_set_conn_data(struct qed_dev *cdev, } } -static int qed_ll2_start_ooo(struct qed_dev *cdev, +static int qed_ll2_start_ooo(struct qed_hwfn *p_hwfn, struct qed_ll2_params *params) { - struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); - u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id; + u8 *handle = &p_hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id; struct qed_ll2_acquire_data data; int rc; - qed_ll2_set_conn_data(cdev, &data, params, + qed_ll2_set_conn_data(p_hwfn, &data, params, QED_LL2_TYPE_OOO, handle, true); - rc = qed_ll2_acquire_connection(hwfn, &data); + rc = qed_ll2_acquire_connection(p_hwfn, &data); if (rc) { - DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n"); + DP_INFO(p_hwfn, 
"Failed to acquire LL2 OOO connection\n"); goto out; } - rc = qed_ll2_establish_connection(hwfn, *handle); + rc = qed_ll2_establish_connection(p_hwfn, *handle); if (rc) { - DP_INFO(cdev, "Failed to establist LL2 OOO connection\n"); + DP_INFO(p_hwfn, "Failed to establish LL2 OOO connection\n"); goto fail; } return 0; fail: - qed_ll2_release_connection(hwfn, *handle); + qed_ll2_release_connection(p_hwfn, *handle); out: *handle = QED_LL2_UNUSED_HANDLE; return rc; } -static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) +static bool qed_ll2_is_storage_eng1(struct qed_dev *cdev) { - struct qed_ll2_buffer *buffer, *tmp_buffer; - enum qed_ll2_conn_type conn_type; - struct qed_ll2_acquire_data data; - struct qed_ptt *p_ptt; - int rc, i; + return (QED_IS_FCOE_PERSONALITY(QED_LEADING_HWFN(cdev)) || + QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev))) && + (QED_AFFIN_HWFN(cdev) != QED_LEADING_HWFN(cdev)); +} +static int __qed_ll2_stop(struct qed_hwfn *p_hwfn) +{ + struct qed_dev *cdev = p_hwfn->cdev; + int rc; - /* Initialize LL2 locks & lists */ - INIT_LIST_HEAD(&cdev->ll2->list); - spin_lock_init(&cdev->ll2->lock); - cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN + - L1_CACHE_BYTES + params->mtu; + rc = qed_ll2_terminate_connection(p_hwfn, cdev->ll2->handle); + if (rc) + DP_INFO(cdev, "Failed to terminate LL2 connection\n"); - /*Allocate memory for LL2 */ - DP_INFO(cdev, "Allocating LL2 buffers of size %08x bytes\n", - cdev->ll2->rx_size); - for (i = 0; i < QED_LL2_RX_SIZE; i++) { - buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); - if (!buffer) { - DP_INFO(cdev, "Failed to allocate LL2 buffers\n"); - goto fail; - } + qed_ll2_release_connection(p_hwfn, cdev->ll2->handle); - rc = qed_ll2_alloc_buffer(cdev, (u8 **)&buffer->data, - &buffer->phys_addr); - if (rc) { - kfree(buffer); - goto fail; - } + return rc; +} - list_add_tail(&buffer->list, &cdev->ll2->list); +static int qed_ll2_stop(struct qed_dev *cdev) +{ + bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev); + struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev); + int rc = 0, rc2 = 0; + + if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE) + return 0; + + qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address); + eth_zero_addr(cdev->ll2_mac_address); + + if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) + qed_ll2_stop_ooo(p_hwfn); + + /* In CMT mode, LL2 is always started on engine 0 for a storage PF */ + if (b_is_storage_eng1) { + rc2 = __qed_ll2_stop(QED_LEADING_HWFN(cdev)); + if (rc2) + DP_NOTICE(QED_LEADING_HWFN(cdev), + "Failed to stop LL2 on engine 0\n"); } - switch (QED_LEADING_HWFN(cdev)->hw_info.personality) { + rc = __qed_ll2_stop(p_hwfn); + if (rc) + DP_NOTICE(p_hwfn, "Failed to stop LL2\n"); + + qed_ll2_kill_buffers(cdev); + + cdev->ll2->handle = QED_LL2_UNUSED_HANDLE; + + return rc | rc2; +} + +static int __qed_ll2_start(struct qed_hwfn *p_hwfn, + struct qed_ll2_params *params) +{ + struct qed_ll2_buffer *buffer, *tmp_buffer; + struct qed_dev *cdev = p_hwfn->cdev; + enum qed_ll2_conn_type conn_type; + struct qed_ll2_acquire_data data; + int rc, rx_cnt; + + switch (p_hwfn->hw_info.personality) { case QED_PCI_FCOE: conn_type = QED_LL2_TYPE_FCOE; break; @@ -2321,33 +2356,34 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) conn_type = QED_LL2_TYPE_ROCE; break; default: + conn_type = QED_LL2_TYPE_TEST; } - qed_ll2_set_conn_data(cdev, &data, params, conn_type, + qed_ll2_set_conn_data(p_hwfn, &data, params, conn_type, &cdev->ll2->handle, false); - rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), 
&data); + rc = qed_ll2_acquire_connection(p_hwfn, &data); if (rc) { - DP_INFO(cdev, "Failed to acquire LL2 connection\n"); - goto fail; + DP_INFO(p_hwfn, "Failed to acquire LL2 connection\n"); + return rc; } - rc = qed_ll2_establish_connection(QED_LEADING_HWFN(cdev), - cdev->ll2->handle); + rc = qed_ll2_establish_connection(p_hwfn, cdev->ll2->handle); if (rc) { - DP_INFO(cdev, "Failed to establish LL2 connection\n"); - goto release_fail; + DP_INFO(p_hwfn, "Failed to establish LL2 connection\n"); + goto release_conn; } /* Post all Rx buffers to FW */ spin_lock_bh(&cdev->ll2->lock); + rx_cnt = cdev->ll2->rx_cnt; list_for_each_entry_safe(buffer, tmp_buffer, &cdev->ll2->list, list) { - rc = qed_ll2_post_rx_buffer(QED_LEADING_HWFN(cdev), + rc = qed_ll2_post_rx_buffer(p_hwfn, cdev->ll2->handle, buffer->phys_addr, 0, buffer, 1); if (rc) { - DP_INFO(cdev, + DP_INFO(p_hwfn, "Failed to post an Rx buffer; Deleting it\n"); dma_unmap_single(&cdev->pdev->dev, buffer->phys_addr, cdev->ll2->rx_size, DMA_FROM_DEVICE); @@ -2355,100 +2391,127 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) list_del(&buffer->list); kfree(buffer); } else { - cdev->ll2->rx_cnt++; + rx_cnt++; } } spin_unlock_bh(&cdev->ll2->lock); - if (!cdev->ll2->rx_cnt) { - DP_INFO(cdev, "Failed passing even a single Rx buffer\n"); - goto release_terminate; + if (rx_cnt == cdev->ll2->rx_cnt) { + DP_NOTICE(p_hwfn, "Failed passing even a single Rx buffer\n"); + goto terminate_conn; } + cdev->ll2->rx_cnt = rx_cnt; + + return 0; + +terminate_conn: + qed_ll2_terminate_connection(p_hwfn, cdev->ll2->handle); +release_conn: + qed_ll2_release_connection(p_hwfn, cdev->ll2->handle); + return rc; +} + +static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) +{ + bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev); + struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev); + struct qed_ll2_buffer *buffer; + int rx_num_desc, i, rc; if (!is_valid_ether_addr(params->ll2_mac_address)) { - DP_INFO(cdev, "Invalid Ethernet address\n"); - goto release_terminate; + DP_NOTICE(cdev, "Invalid Ethernet address\n"); + return -EINVAL; } - if (QED_LEADING_HWFN(cdev)->hw_info.personality == QED_PCI_ISCSI) { - DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n"); - rc = qed_ll2_start_ooo(cdev, params); + WARN_ON(!cdev->ll2->cbs); + + /* Initialize LL2 locks & lists */ + INIT_LIST_HEAD(&cdev->ll2->list); + spin_lock_init(&cdev->ll2->lock); + + cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN + + L1_CACHE_BYTES + params->mtu; + + /* Allocate memory for LL2. + * In CMT mode, in case of a storage PF which is affintized to engine 1, + * LL2 is started also on engine 0 and thus we need twofold buffers. + */ + rx_num_desc = QED_LL2_RX_SIZE * (b_is_storage_eng1 ? 
2 : 1); + DP_INFO(cdev, "Allocating %d LL2 buffers of size %08x bytes\n", + rx_num_desc, cdev->ll2->rx_size); + for (i = 0; i < rx_num_desc; i++) { + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) { + DP_INFO(cdev, "Failed to allocate LL2 buffers\n"); + rc = -ENOMEM; + goto err0; + } + + rc = qed_ll2_alloc_buffer(cdev, (u8 **)&buffer->data, + &buffer->phys_addr); if (rc) { - DP_INFO(cdev, - "Failed to initialize the OOO LL2 queue\n"); - goto release_terminate; + kfree(buffer); + goto err0; } - } - p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev)); - if (!p_ptt) { - DP_INFO(cdev, "Failed to acquire PTT\n"); - goto release_terminate; + list_add_tail(&buffer->list, &cdev->ll2->list); } - rc = qed_llh_add_mac_filter(QED_LEADING_HWFN(cdev), p_ptt, - params->ll2_mac_address); - qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt); + rc = __qed_ll2_start(p_hwfn, params); if (rc) { - DP_ERR(cdev, "Failed to allocate LLH filter\n"); - goto release_terminate_all; + DP_NOTICE(cdev, "Failed to start LL2\n"); + goto err0; } - ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address); - return 0; - -release_terminate_all: - -release_terminate: - qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle); -release_fail: - qed_ll2_release_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle); -fail: - qed_ll2_kill_buffers(cdev); - cdev->ll2->handle = QED_LL2_UNUSED_HANDLE; - return -EINVAL; -} - -static int qed_ll2_stop(struct qed_dev *cdev) -{ - struct qed_ptt *p_ptt; - int rc; - - if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE) - return 0; + /* In CMT mode, always need to start LL2 on engine 0 for a storage PF, + * since broadcast/mutlicast packets are routed to engine 0. + */ + if (b_is_storage_eng1) { + rc = __qed_ll2_start(QED_LEADING_HWFN(cdev), params); + if (rc) { + DP_NOTICE(QED_LEADING_HWFN(cdev), + "Failed to start LL2 on engine 0\n"); + goto err1; + } + } - p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev)); - if (!p_ptt) { - DP_INFO(cdev, "Failed to acquire PTT\n"); - goto fail; + if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) { + DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n"); + rc = qed_ll2_start_ooo(p_hwfn, params); + if (rc) { + DP_NOTICE(cdev, "Failed to start OOO LL2\n"); + goto err2; + } } - qed_llh_remove_mac_filter(QED_LEADING_HWFN(cdev), p_ptt, - cdev->ll2_mac_address); - qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt); - eth_zero_addr(cdev->ll2_mac_address); + rc = qed_llh_add_mac_filter(cdev, 0, params->ll2_mac_address); + if (rc) { + DP_NOTICE(cdev, "Failed to add an LLH filter\n"); + goto err3; + } - if (QED_LEADING_HWFN(cdev)->hw_info.personality == QED_PCI_ISCSI) - qed_ll2_stop_ooo(cdev); + ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address); - rc = qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev), - cdev->ll2->handle); - if (rc) - DP_INFO(cdev, "Failed to terminate LL2 connection\n"); + return 0; +err3: + if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) + qed_ll2_stop_ooo(p_hwfn); +err2: + if (b_is_storage_eng1) + __qed_ll2_stop(QED_LEADING_HWFN(cdev)); +err1: + __qed_ll2_stop(p_hwfn); +err0: qed_ll2_kill_buffers(cdev); - - qed_ll2_release_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle); cdev->ll2->handle = QED_LL2_UNUSED_HANDLE; - return rc; -fail: - return -EINVAL; } static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb, unsigned long xmit_flags) { + struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev); struct qed_ll2_tx_pkt_info pkt; const skb_frag_t *frag; u8 flags = 0, nr_frags; @@ -2506,7 +2569,7 @@ static int 
qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb, * routine may run and free the SKB, so no dereferencing the SKB * beyond this point unless skb has any fragments. */ - rc = qed_ll2_prepare_tx_packet(&cdev->hwfns[0], cdev->ll2->handle, + rc = qed_ll2_prepare_tx_packet(p_hwfn, cdev->ll2->handle, &pkt, 1); if (rc) goto err; @@ -2524,13 +2587,13 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb, goto err; } - rc = qed_ll2_set_fragment_of_tx_packet(QED_LEADING_HWFN(cdev), + rc = qed_ll2_set_fragment_of_tx_packet(p_hwfn, cdev->ll2->handle, mapping, skb_frag_size(frag)); /* if failed not much to do here, partial packet has been posted - * we can't free memory, will need to wait for completion. + * we can't free memory, will need to wait for completion */ if (rc) goto err2; @@ -2540,18 +2603,37 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb, err: dma_unmap_single(&cdev->pdev->dev, mapping, skb->len, DMA_TO_DEVICE); - err2: return rc; } static int qed_ll2_stats(struct qed_dev *cdev, struct qed_ll2_stats *stats) { + bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev); + struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev); + int rc; + if (!cdev->ll2) return -EINVAL; - return qed_ll2_get_stats(QED_LEADING_HWFN(cdev), - cdev->ll2->handle, stats); + rc = qed_ll2_get_stats(p_hwfn, cdev->ll2->handle, stats); + if (rc) { + DP_NOTICE(p_hwfn, "Failed to get LL2 stats\n"); + return rc; + } + + /* In CMT mode, LL2 is always started on engine 0 for a storage PF */ + if (b_is_storage_eng1) { + rc = __qed_ll2_get_stats(QED_LEADING_HWFN(cdev), + cdev->ll2->handle, stats); + if (rc) { + DP_NOTICE(QED_LEADING_HWFN(cdev), + "Failed to get LL2 stats on engine 0\n"); + return rc; + } + } + + return 0; } const struct qed_ll2_ops qed_ll2_ops_pass = { diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c index 6de23b56b294..829dd60ab937 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_main.c +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c @@ -48,6 +48,7 @@ #include <linux/crc32.h> #include <linux/qed/qed_if.h> #include <linux/qed/qed_ll2_if.h> +#include <net/devlink.h> #include "qed.h" #include "qed_sriov.h" @@ -342,6 +343,107 @@ static int qed_set_power_state(struct qed_dev *cdev, pci_power_t state) return 0; } +struct qed_devlink { + struct qed_dev *cdev; +}; + +enum qed_devlink_param_id { + QED_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX, + QED_DEVLINK_PARAM_ID_IWARP_CMT, +}; + +static int qed_dl_param_get(struct devlink *dl, u32 id, + struct devlink_param_gset_ctx *ctx) +{ + struct qed_devlink *qed_dl; + struct qed_dev *cdev; + + qed_dl = devlink_priv(dl); + cdev = qed_dl->cdev; + ctx->val.vbool = cdev->iwarp_cmt; + + return 0; +} + +static int qed_dl_param_set(struct devlink *dl, u32 id, + struct devlink_param_gset_ctx *ctx) +{ + struct qed_devlink *qed_dl; + struct qed_dev *cdev; + + qed_dl = devlink_priv(dl); + cdev = qed_dl->cdev; + cdev->iwarp_cmt = ctx->val.vbool; + + return 0; +} + +static const struct devlink_param qed_devlink_params[] = { + DEVLINK_PARAM_DRIVER(QED_DEVLINK_PARAM_ID_IWARP_CMT, + "iwarp_cmt", DEVLINK_PARAM_TYPE_BOOL, + BIT(DEVLINK_PARAM_CMODE_RUNTIME), + qed_dl_param_get, qed_dl_param_set, NULL), +}; + +static const struct devlink_ops qed_dl_ops; + +static int qed_devlink_register(struct qed_dev *cdev) +{ + union devlink_param_value value; + struct qed_devlink *qed_dl; + struct devlink *dl; + int rc; + + dl = devlink_alloc(&qed_dl_ops, sizeof(*qed_dl)); + if (!dl) + return 
-ENOMEM; + + qed_dl = devlink_priv(dl); + + cdev->dl = dl; + qed_dl->cdev = cdev; + + rc = devlink_register(dl, &cdev->pdev->dev); + if (rc) + goto err_free; + + rc = devlink_params_register(dl, qed_devlink_params, + ARRAY_SIZE(qed_devlink_params)); + if (rc) + goto err_unregister; + + value.vbool = false; + devlink_param_driverinit_value_set(dl, + QED_DEVLINK_PARAM_ID_IWARP_CMT, + value); + + devlink_params_publish(dl); + cdev->iwarp_cmt = false; + + return 0; + +err_unregister: + devlink_unregister(dl); + +err_free: + cdev->dl = NULL; + devlink_free(dl); + + return rc; +} + +static void qed_devlink_unregister(struct qed_dev *cdev) +{ + if (!cdev->dl) + return; + + devlink_params_unregister(cdev->dl, qed_devlink_params, + ARRAY_SIZE(qed_devlink_params)); + + devlink_unregister(cdev->dl); + devlink_free(cdev->dl); +} + /* probing */ static struct qed_dev *qed_probe(struct pci_dev *pdev, struct qed_probe_params *params) @@ -370,6 +472,12 @@ static struct qed_dev *qed_probe(struct pci_dev *pdev, } DP_INFO(cdev, "PCI init completed successfully\n"); + rc = qed_devlink_register(cdev); + if (rc) { + DP_INFO(cdev, "Failed to register devlink.\n"); + goto err2; + } + rc = qed_hw_prepare(cdev, QED_PCI_DEFAULT); if (rc) { DP_ERR(cdev, "hw prepare failed\n"); @@ -399,6 +507,8 @@ static void qed_remove(struct qed_dev *cdev) qed_set_power_state(cdev, PCI_D3hot); + qed_devlink_unregister(cdev); + qed_free_cdev(cdev); } @@ -1301,26 +1411,21 @@ static u32 qed_sb_init(struct qed_dev *cdev, { struct qed_hwfn *p_hwfn; struct qed_ptt *p_ptt; - int hwfn_index; u16 rel_sb_id; - u8 n_hwfns; u32 rc; - /* RoCE uses single engine and CMT uses two engines. When using both - * we force only a single engine. Storage uses only engine 0 too. - */ - if (type == QED_SB_TYPE_L2_QUEUE) - n_hwfns = cdev->num_hwfns; - else - n_hwfns = 1; - - hwfn_index = sb_id % n_hwfns; - p_hwfn = &cdev->hwfns[hwfn_index]; - rel_sb_id = sb_id / n_hwfns; + /* RoCE/Storage use a single engine in CMT mode while L2 uses both */ + if (type == QED_SB_TYPE_L2_QUEUE) { + p_hwfn = &cdev->hwfns[sb_id % cdev->num_hwfns]; + rel_sb_id = sb_id / cdev->num_hwfns; + } else { + p_hwfn = QED_AFFIN_HWFN(cdev); + rel_sb_id = sb_id; + } DP_VERBOSE(cdev, NETIF_MSG_INTR, "hwfn [%d] <--[init]-- SB %04x [0x%04x upper]\n", - hwfn_index, rel_sb_id, sb_id); + IS_LEAD_HWFN(p_hwfn) ? 0 : 1, rel_sb_id, sb_id); if (IS_PF(p_hwfn->cdev)) { p_ptt = qed_ptt_acquire(p_hwfn); @@ -1339,20 +1444,26 @@ static u32 qed_sb_init(struct qed_dev *cdev, } static u32 qed_sb_release(struct qed_dev *cdev, - struct qed_sb_info *sb_info, u16 sb_id) + struct qed_sb_info *sb_info, + u16 sb_id, + enum qed_sb_type type) { struct qed_hwfn *p_hwfn; - int hwfn_index; u16 rel_sb_id; u32 rc; - hwfn_index = sb_id % cdev->num_hwfns; - p_hwfn = &cdev->hwfns[hwfn_index]; - rel_sb_id = sb_id / cdev->num_hwfns; + /* RoCE/Storage use a single engine in CMT mode while L2 uses both */ + if (type == QED_SB_TYPE_L2_QUEUE) { + p_hwfn = &cdev->hwfns[sb_id % cdev->num_hwfns]; + rel_sb_id = sb_id / cdev->num_hwfns; + } else { + p_hwfn = QED_AFFIN_HWFN(cdev); + rel_sb_id = sb_id; + } DP_VERBOSE(cdev, NETIF_MSG_INTR, "hwfn [%d] <--[init]-- SB %04x [0x%04x upper]\n", - hwfn_index, rel_sb_id, sb_id); + IS_LEAD_HWFN(p_hwfn) ? 
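The devlink registration added above exposes a single driver-specific runtime parameter, "iwarp_cmt". Once registered it can be inspected and toggled from userspace with the standard devlink tool, for example "devlink dev param show pci/0000:03:00.0 name iwarp_cmt" and "devlink dev param set pci/0000:03:00.0 name iwarp_cmt value true cmode runtime"; the PCI address here is only illustrative, and qed_dl_param_set() above then records the chosen value in cdev->iwarp_cmt.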
0 : 1, rel_sb_id, sb_id); rc = qed_int_sb_release(p_hwfn, sb_info, rel_sb_id); @@ -2372,6 +2483,11 @@ static int qed_read_module_eeprom(struct qed_dev *cdev, char *buf, return rc; } +static u8 qed_get_affin_hwfn_idx(struct qed_dev *cdev) +{ + return QED_AFFIN_HWFN_IDX(cdev); +} + static struct qed_selftest_ops qed_selftest_ops_pass = { .selftest_memory = &qed_selftest_memory, .selftest_interrupt = &qed_selftest_interrupt, @@ -2419,6 +2535,7 @@ const struct qed_common_ops qed_common_ops_pass = { .db_recovery_add = &qed_db_recovery_add, .db_recovery_del = &qed_db_recovery_del, .read_module_eeprom = &qed_read_module_eeprom, + .get_affin_hwfn_idx = &qed_get_affin_hwfn_idx, }; void qed_get_protocol_stats(struct qed_dev *cdev, diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c index cc27fd60d689..758702c1ce9c 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c @@ -3685,3 +3685,68 @@ int qed_mcp_set_capabilities(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT, features, &mcp_resp, &mcp_param); } + +int qed_mcp_get_engine_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +{ + struct qed_mcp_mb_params mb_params = {0}; + struct qed_dev *cdev = p_hwfn->cdev; + u8 fir_valid, l2_valid; + int rc; + + mb_params.cmd = DRV_MSG_CODE_GET_ENGINE_CONFIG; + rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc) + return rc; + + if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) { + DP_INFO(p_hwfn, + "The get_engine_config command is unsupported by the MFW\n"); + return -EOPNOTSUPP; + } + + fir_valid = QED_MFW_GET_FIELD(mb_params.mcp_param, + FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID); + if (fir_valid) + cdev->fir_affin = + QED_MFW_GET_FIELD(mb_params.mcp_param, + FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE); + + l2_valid = QED_MFW_GET_FIELD(mb_params.mcp_param, + FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID); + if (l2_valid) + cdev->l2_affin_hint = + QED_MFW_GET_FIELD(mb_params.mcp_param, + FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE); + + DP_INFO(p_hwfn, + "Engine affinity config: FIR={valid %hhd, value %hhd}, L2_hint={valid %hhd, value %hhd}\n", + fir_valid, cdev->fir_affin, l2_valid, cdev->l2_affin_hint); + + return 0; +} + +int qed_mcp_get_ppfid_bitmap(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +{ + struct qed_mcp_mb_params mb_params = {0}; + struct qed_dev *cdev = p_hwfn->cdev; + int rc; + + mb_params.cmd = DRV_MSG_CODE_GET_PPFID_BITMAP; + rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc) + return rc; + + if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) { + DP_INFO(p_hwfn, + "The get_ppfid_bitmap command is unsupported by the MFW\n"); + return -EOPNOTSUPP; + } + + cdev->ppfid_bitmap = QED_MFW_GET_FIELD(mb_params.mcp_param, + FW_MB_PARAM_PPFID_BITMAP); + + DP_VERBOSE(p_hwfn, QED_MSG_SP, "PPFID bitmap 0x%hhx\n", + cdev->ppfid_bitmap); + + return 0; +} diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h index 261c1a392e2c..e4f8fe4bd062 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h @@ -1186,4 +1186,20 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); */ int qed_mcp_nvm_info_populate(struct qed_hwfn *p_hwfn); +/** + * @brief Get the engine affinity configuration. 
+ * + * @param p_hwfn + * @param p_ptt + */ +int qed_mcp_get_engine_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); + +/** + * @brief Get the PPFID bitmap. + * + * @param p_hwfn + * @param p_ptt + */ +int qed_mcp_get_ppfid_bitmap(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); + #endif diff --git a/drivers/net/ethernet/qlogic/qed/qed_ptp.c b/drivers/net/ethernet/qlogic/qed/qed_ptp.c index 1302b308bd87..0dacf2c18c09 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ptp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_ptp.c @@ -44,6 +44,8 @@ /* Add/subtract the Adjustment_Value when making a Drift adjustment */ #define QED_DRIFT_CNTR_DIRECTION_SHIFT 31 #define QED_TIMESTAMP_MASK BIT(16) +/* Param mask for Hardware to detect/timestamp the unicast PTP packets */ +#define QED_PTP_UCAST_PARAM_MASK 0xF static enum qed_resc_lock qed_ptcdev_to_resc(struct qed_hwfn *p_hwfn) { @@ -157,7 +159,8 @@ static int qed_ptp_hw_read_tx_ts(struct qed_dev *cdev, u64 *timestamp) *timestamp = 0; val = qed_rd(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_BUF_SEQID); if (!(val & QED_TIMESTAMP_MASK)) { - DP_INFO(p_hwfn, "Invalid Tx timestamp, buf_seqid = %d\n", val); + DP_VERBOSE(p_hwfn, QED_MSG_DEBUG, + "Invalid Tx timestamp, buf_seqid = %08x\n", val); return -EINVAL; } @@ -242,7 +245,8 @@ static int qed_ptp_hw_cfg_filters(struct qed_dev *cdev, return -EINVAL; } - qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_PARAM_MASK, 0); + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_PARAM_MASK, + QED_PTP_UCAST_PARAM_MASK); qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_RULE_MASK, rule_mask); qed_wr(p_hwfn, p_ptt, NIG_REG_RX_PTP_EN, enable_cfg); @@ -252,7 +256,8 @@ static int qed_ptp_hw_cfg_filters(struct qed_dev *cdev, qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_RULE_MASK, 0x3FFF); } else { qed_wr(p_hwfn, p_ptt, NIG_REG_TX_PTP_EN, enable_cfg); - qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_PARAM_MASK, 0); + qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_PARAM_MASK, + QED_PTP_UCAST_PARAM_MASK); qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_RULE_MASK, rule_mask); } diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c index 7873d6dfd91f..f900fde448db 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c +++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c @@ -700,7 +700,7 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn, return rc; if (QED_IS_IWARP_PERSONALITY(p_hwfn)) { - rc = qed_iwarp_setup(p_hwfn, p_ptt, params); + rc = qed_iwarp_setup(p_hwfn, params); if (rc) return rc; } else { @@ -742,7 +742,7 @@ static int qed_rdma_stop(void *rdma_cxt) (ll2_ethertype_en & 0xFFFE)); if (QED_IS_IWARP_PERSONALITY(p_hwfn)) { - rc = qed_iwarp_stop(p_hwfn, p_ptt); + rc = qed_iwarp_stop(p_hwfn); if (rc) { qed_ptt_release(p_hwfn, p_ptt); return rc; @@ -803,7 +803,7 @@ static int qed_rdma_add_user(void *rdma_cxt, dpi_start_offset + ((out_params->dpi) * p_hwfn->dpi_size)); - out_params->dpi_phys_addr = p_hwfn->cdev->db_phys_addr + + out_params->dpi_phys_addr = p_hwfn->db_phys_addr + dpi_start_offset + ((out_params->dpi) * p_hwfn->dpi_size); @@ -818,14 +818,17 @@ static struct qed_rdma_port *qed_rdma_query_port(void *rdma_cxt) { struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt; struct qed_rdma_port *p_port = p_hwfn->p_rdma_info->port; + struct qed_mcp_link_state *p_link_output; DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA Query port\n"); - /* Link may have changed */ - p_port->port_state = p_hwfn->mcp_info->link_output.link_up ? 
- QED_RDMA_PORT_UP : QED_RDMA_PORT_DOWN; + /* The link state is saved only for the leading hwfn */ + p_link_output = &QED_LEADING_HWFN(p_hwfn->cdev)->mcp_info->link_output; - p_port->link_speed = p_hwfn->mcp_info->link_output.speed; + p_port->port_state = p_link_output->link_up ? QED_RDMA_PORT_UP + : QED_RDMA_PORT_DOWN; + + p_port->link_speed = p_link_output->speed; p_port->max_msg_size = RDMA_MAX_DATA_SIZE_IN_WQE; @@ -870,7 +873,7 @@ static void qed_rdma_cnq_prod_update(void *rdma_cxt, u8 qz_offset, u16 prod) static int qed_fill_rdma_dev_info(struct qed_dev *cdev, struct qed_dev_rdma_info *info) { - struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); + struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev); memset(info, 0, sizeof(*info)); @@ -889,9 +892,9 @@ static int qed_rdma_get_sb_start(struct qed_dev *cdev) int feat_num; if (cdev->num_hwfns > 1) - feat_num = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_PF_L2_QUE); + feat_num = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_PF_L2_QUE); else - feat_num = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_PF_L2_QUE) * + feat_num = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_PF_L2_QUE) * cdev->num_hwfns; return feat_num; @@ -899,7 +902,7 @@ static int qed_rdma_get_sb_start(struct qed_dev *cdev) static int qed_rdma_get_min_cnq_msix(struct qed_dev *cdev) { - int n_cnq = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_RDMA_CNQ); + int n_cnq = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_RDMA_CNQ); int n_msix = cdev->int_params.rdma_msix_cnt; return min_t(int, n_cnq, n_msix); @@ -1653,7 +1656,7 @@ static int qed_rdma_deregister_tid(void *rdma_cxt, u32 itid) static void *qed_rdma_get_rdma_ctx(struct qed_dev *cdev) { - return QED_LEADING_HWFN(cdev); + return QED_AFFIN_HWFN(cdev); } static int qed_rdma_modify_srq(void *rdma_cxt, @@ -1881,7 +1884,7 @@ err: static int qed_rdma_init(struct qed_dev *cdev, struct qed_rdma_start_in_params *params) { - return qed_rdma_start(QED_LEADING_HWFN(cdev), params); + return qed_rdma_start(QED_AFFIN_HWFN(cdev), params); } static void qed_rdma_remove_user(void *rdma_cxt, u16 dpi) @@ -1899,23 +1902,12 @@ static int qed_roce_ll2_set_mac_filter(struct qed_dev *cdev, u8 *old_mac_address, u8 *new_mac_address) { - struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); - struct qed_ptt *p_ptt; int rc = 0; - p_ptt = qed_ptt_acquire(p_hwfn); - if (!p_ptt) { - DP_ERR(cdev, - "qed roce ll2 mac filter set: failed to acquire PTT\n"); - return -EINVAL; - } - if (old_mac_address) - qed_llh_remove_mac_filter(p_hwfn, p_ptt, old_mac_address); + qed_llh_remove_mac_filter(cdev, 0, old_mac_address); if (new_mac_address) - rc = qed_llh_add_mac_filter(p_hwfn, p_ptt, new_mac_address); - - qed_ptt_release(p_hwfn, p_ptt); + rc = qed_llh_add_mac_filter(cdev, 0, new_mac_address); if (rc) DP_ERR(cdev, @@ -1924,6 +1916,36 @@ static int qed_roce_ll2_set_mac_filter(struct qed_dev *cdev, return rc; } +static int qed_iwarp_set_engine_affin(struct qed_dev *cdev, bool b_reset) +{ + enum qed_eng eng; + u8 ppfid = 0; + int rc; + + /* Make sure iwarp cmt mode is enabled before setting affinity */ + if (!cdev->iwarp_cmt) + return -EINVAL; + + if (b_reset) + eng = QED_BOTH_ENG; + else + eng = cdev->l2_affin_hint ? 
QED_ENG1 : QED_ENG0; + + rc = qed_llh_set_ppfid_affinity(cdev, ppfid, eng); + if (rc) { + DP_NOTICE(cdev, + "Failed to set the engine affinity of ppfid %d\n", + ppfid); + return rc; + } + + DP_VERBOSE(cdev, (QED_MSG_RDMA | QED_MSG_SP), + "LLH: Set the engine affinity of non-RoCE packets as %d\n", + eng); + + return 0; +} + static const struct qed_rdma_ops qed_rdma_ops_pass = { .common = &qed_common_ops_pass, .fill_dev_info = &qed_fill_rdma_dev_info, @@ -1963,6 +1985,7 @@ static const struct qed_rdma_ops qed_rdma_ops_pass = { .ll2_set_fragment_of_tx_packet = &qed_ll2_set_fragment_of_tx_packet, .ll2_set_mac_filter = &qed_roce_ll2_set_mac_filter, .ll2_get_stats = &qed_ll2_get_stats, + .iwarp_set_engine_affin = &qed_iwarp_set_engine_affin, .iwarp_connect = &qed_iwarp_connect, .iwarp_create_listen = &qed_iwarp_create_listen, .iwarp_destroy_listen = &qed_iwarp_destroy_listen, diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h index 5ce825ca5f24..60f850c3bdd6 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h +++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h @@ -254,6 +254,10 @@ 0x500840UL #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR \ 0x50196cUL +#define NIG_REG_LLH_PPFID2PFID_TBL_0 \ + 0x501970UL +#define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL \ + 0x50 #define NIG_REG_LLH_CLS_TYPE_DUALMODE \ 0x501964UL #define NIG_REG_LLH_FUNC_TAG_EN 0x5019b0UL @@ -1626,6 +1630,8 @@ #define PHY_PCIE_REG_PHY1_K2_E5 \ 0x624000UL #define NIG_REG_ROCE_DUPLICATE_TO_HOST 0x5088f0UL +#define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL +#define NIG_REG_PPF_TO_ENGINE_SEL_SIZE 8 #define PRS_REG_LIGHT_L2_ETHERTYPE_EN 0x1f0968UL #define NIG_REG_LLH_ENG_CLS_ENG_ID_TBL 0x501b90UL #define DORQ_REG_PF_DPM_ENABLE 0x100510UL diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c index 5a495fda9e9d..7e0b795230b2 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c +++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c @@ -588,7 +588,7 @@ int qed_sp_pf_update_stag(struct qed_hwfn *p_hwfn) { struct qed_spq_entry *p_ent = NULL; struct qed_sp_init_data init_data; - int rc = -EINVAL; + int rc; /* Get SPQ entry */ memset(&init_data, 0, sizeof(init_data)); diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c index 2f318aaf2b05..78f77b712b10 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c +++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c @@ -917,10 +917,11 @@ static u8 qed_iov_alloc_vf_igu_sbs(struct qed_hwfn *p_hwfn, /* Configure igu sb in CAU which were marked valid */ qed_init_cau_sb_entry(p_hwfn, &sb_entry, p_hwfn->rel_pf_id, vf->abs_vf_id, 1); + qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&sb_entry, CAU_REG_SB_VAR_MEMORY + - p_block->igu_sb_id * sizeof(u64), 2, 0); + p_block->igu_sb_id * sizeof(u64), 2, NULL); } vf->num_sbs = (u8) num_rx_queues; diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h index 92fe226980fd..0e931c04fecf 100644 --- a/drivers/net/ethernet/qlogic/qede/qede.h +++ b/drivers/net/ethernet/qlogic/qede/qede.h @@ -92,6 +92,7 @@ struct qede_stats_common { u64 non_coalesced_pkts; u64 coalesced_bytes; u64 link_change_count; + u64 ptp_skip_txts; /* port */ u64 rx_64_byte_packets; @@ -189,6 +190,7 @@ struct qede_dev { const struct qed_eth_ops *ops; struct qede_ptp *ptp; + u64 ptp_skip_txts; struct qed_dev_eth_info dev_info; #define QEDE_MAX_RSS_CNT(edev) ((edev)->dev_info.num_queues) @@ -549,7 
diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
index 92fe226980fd..0e931c04fecf 100644
--- a/drivers/net/ethernet/qlogic/qede/qede.h
+++ b/drivers/net/ethernet/qlogic/qede/qede.h
@@ -92,6 +92,7 @@ struct qede_stats_common {
 	u64 non_coalesced_pkts;
 	u64 coalesced_bytes;
 	u64 link_change_count;
+	u64 ptp_skip_txts;
 
 	/* port */
 	u64 rx_64_byte_packets;
@@ -189,6 +190,7 @@ struct qede_dev {
 	const struct qed_eth_ops *ops;
 	struct qede_ptp *ptp;
+	u64 ptp_skip_txts;
 
 	struct qed_dev_eth_info dev_info;
 #define QEDE_MAX_RSS_CNT(edev) ((edev)->dev_info.num_queues)
@@ -549,7 +551,7 @@ int qede_txq_has_work(struct qede_tx_queue *txq);
 void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq, u8 count);
 void qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq);
 int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
-			    struct tc_cls_flower_offload *f);
+			    struct flow_cls_offload *f);
 
 #define RX_RING_SIZE_POW 13
 #define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
index 8911a97ab0ca..e85f9fef930c 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
@@ -174,6 +174,7 @@ static const struct {
 	QEDE_STAT(coalesced_bytes),
 	QEDE_STAT(link_change_count),
+	QEDE_STAT(ptp_skip_txts),
 };
 
 #define QEDE_NUM_STATS ARRAY_SIZE(qede_stats_arr)
diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index add922b93d2c..9a6a9a008714 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -1943,7 +1943,7 @@ qede_parse_flow_attr(struct qede_dev *edev, __be16 proto,
 }
 
 int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
-			    struct tc_cls_flower_offload *f)
+			    struct flow_cls_offload *f)
 {
 	struct qede_arfs_fltr_node *n;
 	int min_hlen, rc = -EINVAL;
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index 02a97c659e29..8d1c208f778f 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -390,6 +390,7 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
 	p_common->brb_discards = stats.common.brb_discards;
 	p_common->tx_mac_ctrl_frames = stats.common.tx_mac_ctrl_frames;
 	p_common->link_change_count = stats.common.link_change_count;
+	p_common->ptp_skip_txts = edev->ptp_skip_txts;
 
 	if (QEDE_IS_BB(edev)) {
 		struct qede_stats_bb *p_bb = &edev->stats.bb;
@@ -547,13 +548,13 @@ static int qede_setup_tc(struct net_device *ndev, u8 num_tc)
 }
 
 static int
-qede_set_flower(struct qede_dev *edev, struct tc_cls_flower_offload *f,
+qede_set_flower(struct qede_dev *edev, struct flow_cls_offload *f,
 		__be16 proto)
 {
 	switch (f->command) {
-	case TC_CLSFLOWER_REPLACE:
+	case FLOW_CLS_REPLACE:
 		return qede_add_tc_flower_fltr(edev, proto, f);
-	case TC_CLSFLOWER_DESTROY:
+	case FLOW_CLS_DESTROY:
 		return qede_delete_flow_filter(edev, f->cookie);
 	default:
 		return -EOPNOTSUPP;
@@ -563,7 +564,7 @@ qede_set_flower(struct qede_dev *edev, struct tc_cls_flower_offload *f,
 static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 				  void *cb_priv)
 {
-	struct tc_cls_flower_offload *f;
+	struct flow_cls_offload *f;
 	struct qede_dev *edev = cb_priv;
 
 	if (!tc_cls_can_offload_and_chain0(edev->ndev, type_data))
@@ -578,24 +579,7 @@ static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int qede_setup_tc_block(struct qede_dev *edev,
-			       struct tc_block_offload *f)
-{
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     qede_setup_tc_block_cb,
-					     edev, edev, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, qede_setup_tc_block_cb, edev);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
+static LIST_HEAD(qede_block_cb_list);
 
 static int
 qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
@@ -606,7 +590,10 @@ qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
 
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return qede_setup_tc_block(edev, type_data);
+		return flow_block_cb_setup_simple(type_data,
+						  &qede_block_cb_list,
+						  qede_setup_tc_block_cb,
+						  edev, edev, true);
 	case TC_SETUP_QDISC_MQPRIO:
 		mqprio = type_data;
@@ -959,13 +946,13 @@ void __qede_unlock(struct qede_dev *edev)
 /* This version of the lock should be used when acquiring the RTNL lock is also
  * needed in addition to the internal qede lock.
  */
-void qede_lock(struct qede_dev *edev)
+static void qede_lock(struct qede_dev *edev)
 {
 	rtnl_lock();
 	__qede_lock(edev);
 }
 
-void qede_unlock(struct qede_dev *edev)
+static void qede_unlock(struct qede_dev *edev)
 {
 	__qede_unlock(edev);
 	rtnl_unlock();
@@ -1306,7 +1293,8 @@ static void qede_free_mem_sb(struct qede_dev *edev, struct qed_sb_info *sb_info,
 			     u16 sb_id)
 {
 	if (sb_info->sb_virt) {
-		edev->ops->common->sb_release(edev->cdev, sb_info, sb_id);
+		edev->ops->common->sb_release(edev->cdev, sb_info, sb_id,
+					      QED_SB_TYPE_L2_QUEUE);
 		dma_free_coherent(&edev->pdev->dev, sizeof(*sb_info->sb_virt),
 				  (void *)sb_info->sb_virt, sb_info->sb_phys);
 		memset(sb_info, 0, sizeof(*sb_info));
@@ -2231,6 +2219,8 @@ out:
 	if (mode != QEDE_UNLOAD_RECOVERY)
 		DP_NOTICE(edev, "Link is down\n");
 
+	edev->ptp_skip_txts = 0;
+
 	DP_INFO(edev, "Ending qede unload\n");
 }
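The qede_main.c changes above move TC block offload onto the generic flow_block API: the driver keeps a static list head for its block callbacks and lets flow_block_cb_setup_simple() do the clsact-ingress bind/unbind that qede_setup_tc_block() used to open-code, while the classifier callback now receives struct flow_cls_offload with FLOW_CLS_* commands. A condensed sketch of the same pattern follows; the foo_* names are hypothetical and the rule handling is intentionally left empty.

/* Sketch of the flow_block_cb_setup_simple() pattern; a real driver
 * would program/remove hardware rules in the FLOW_CLS_* cases.
 */
#include <linux/netdevice.h>
#include <net/flow_offload.h>
#include <net/pkt_cls.h>

static LIST_HEAD(foo_block_cb_list);

static int foo_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
				 void *cb_priv)
{
	struct flow_cls_offload *f = type_data;

	if (type != TC_SETUP_CLSFLOWER)
		return -EOPNOTSUPP;

	switch (f->command) {
	case FLOW_CLS_REPLACE:	/* add the flower rule to hardware */
		return 0;
	case FLOW_CLS_DESTROY:	/* remove it again */
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

static int foo_setup_tc(struct net_device *dev, enum tc_setup_type type,
			void *type_data)
{
	void *priv = netdev_priv(dev);

	if (type != TC_SETUP_BLOCK)
		return -EOPNOTSUPP;

	/* Binds or unbinds the callback for ingress blocks only. */
	return flow_block_cb_setup_simple(type_data, &foo_block_cb_list,
					  foo_setup_tc_block_cb,
					  priv, priv, true);
}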
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
index bddb2b5982dc..f815435cf106 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
@@ -30,6 +30,7 @@
  * SOFTWARE.
  */
 #include "qede_ptp.h"
+#define QEDE_PTP_TX_TIMEOUT (2 * HZ)
 
 struct qede_ptp {
 	const struct qed_eth_ptp_ops *ops;
@@ -38,6 +39,7 @@ struct qede_ptp {
 	struct timecounter tc;
 	struct ptp_clock *clock;
 	struct work_struct work;
+	unsigned long ptp_tx_start;
 	struct qede_dev *edev;
 	struct sk_buff *tx_skb;
@@ -160,18 +162,30 @@ static void qede_ptp_task(struct work_struct *work)
 	struct qede_dev *edev;
 	struct qede_ptp *ptp;
 	u64 timestamp, ns;
+	bool timedout;
 	int rc;
 
 	ptp = container_of(work, struct qede_ptp, work);
 	edev = ptp->edev;
+	timedout = time_is_before_jiffies(ptp->ptp_tx_start +
+					  QEDE_PTP_TX_TIMEOUT);
 
 	/* Read Tx timestamp registers */
 	spin_lock_bh(&ptp->lock);
 	rc = ptp->ops->read_tx_ts(edev->cdev, &timestamp);
 	spin_unlock_bh(&ptp->lock);
 	if (rc) {
-		/* Reschedule to keep checking for a valid timestamp value */
-		schedule_work(&ptp->work);
+		if (unlikely(timedout)) {
+			DP_INFO(edev, "Tx timestamp is not recorded\n");
+			dev_kfree_skb_any(ptp->tx_skb);
+			ptp->tx_skb = NULL;
+			clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS,
+					 &edev->flags);
+			edev->ptp_skip_txts++;
+		} else {
+			/* Reschedule to keep checking for a valid TS value */
+			schedule_work(&ptp->work);
+		}
 		return;
 	}
@@ -514,19 +528,28 @@ void qede_ptp_tx_ts(struct qede_dev *edev, struct sk_buff *skb)
 	if (!ptp)
 		return;
 
-	if (test_and_set_bit_lock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags))
+	if (test_and_set_bit_lock(QEDE_FLAGS_PTP_TX_IN_PRORGESS,
+				  &edev->flags)) {
+		DP_ERR(edev, "Timestamping in progress\n");
+		edev->ptp_skip_txts++;
 		return;
+	}
 
 	if (unlikely(!test_bit(QEDE_FLAGS_TX_TIMESTAMPING_EN, &edev->flags))) {
-		DP_NOTICE(edev,
-			  "Tx timestamping was not enabled, this packet will not be timestamped\n");
+		DP_ERR(edev,
+		       "Tx timestamping was not enabled, this packet will not be timestamped\n");
+		clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags);
+		edev->ptp_skip_txts++;
 	} else if (unlikely(ptp->tx_skb)) {
-		DP_NOTICE(edev,
-			  "The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+		DP_ERR(edev,
+		       "The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+		clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags);
+		edev->ptp_skip_txts++;
 	} else {
 		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
 		/* schedule check for Tx timestamp */
 		ptp->tx_skb = skb_get(skb);
+		ptp->ptp_tx_start = jiffies;
 		schedule_work(&ptp->work);
 	}
 }
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index 7a873002e626..c07438db30ba 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -4119,13 +4119,14 @@ static void qlcnic_config_indev_addr(struct qlcnic_adapter *adapter,
 			 struct net_device *dev, unsigned long event)
 {
+	const struct in_ifaddr *ifa;
 	struct in_device *indev;
 
 	indev = in_dev_get(dev);
 	if (!indev)
 		return;
 
-	for_ifa(indev) {
+	in_dev_for_each_ifa_rtnl(ifa, indev) {
 		switch (event) {
 		case NETDEV_UP:
 			qlcnic_config_ipaddr(adapter,
@@ -4138,7 +4139,7 @@ qlcnic_config_indev_addr(struct qlcnic_adapter *adapter,
 		default:
 			break;
 		}
-	} endfor_ifa(indev);
+	}
 
 	in_dev_put(indev);
 }
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
index af3b037fa442..5632da05145a 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
@@ -1066,7 +1066,7 @@ static int qlcnic_sriov_pf_cfg_ip_cmd(struct qlcnic_bc_trans *trans,
 {
 	struct qlcnic_vf_info *vf = trans->vf;
 	struct qlcnic_adapter *adapter = vf->adapter;
-	int err = -EIO;
+	int err;
 
 	cmd->req.arg[1] |= vf->vp->handle << 16;
 	cmd->req.arg[1] |= BIT_31;
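The qlcnic hunk above replaces the removed for_ifa()/endfor_ifa() pair with in_dev_for_each_ifa_rtnl(), which walks an in_device's IPv4 address list under RTNL protection. A minimal sketch of the new idiom is shown below; the dumping helper itself is illustrative, while the iterator, in_dev_get() and in_dev_put() are the regular inetdevice API, and RTNL is assumed to be held by the caller.

/* Sketch only: print every IPv4 address configured on a netdev.
 * Assumes rtnl_lock() is held, as required by the _rtnl iterator.
 */
#include <linux/inetdevice.h>
#include <linux/netdevice.h>

static void example_dump_ipv4_addrs(struct net_device *dev)
{
	const struct in_ifaddr *ifa;
	struct in_device *indev;

	indev = in_dev_get(dev);
	if (!indev)
		return;

	in_dev_for_each_ifa_rtnl(ifa, indev)
		netdev_info(dev, "addr %pI4 mask %pI4\n",
			    &ifa->ifa_address, &ifa->ifa_mask);

	in_dev_put(indev);
}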