author     Linus Torvalds <torvalds@linux-foundation.org>  2023-02-22 05:24:12 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2023-02-22 05:24:12 +0300
commit     5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree       cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /drivers/net/ethernet
parent     36289a03bcd3aabdf66de75cb6d1b4ee15726438 (diff)
parent     d1fabc68f8e0541d41657096dc713cb01775652d (diff)
download   linux-5b7c4cabbb65f5c469464da6c5f614cbd7f730f2.tar.xz
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs
(a sketch of using it follows this list).
- Define Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add configurable limit of MDB entries per port, and port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
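A minimal sketch of setting the new RPS default from user space, assuming the
sysctl is exposed at /proc/sys/net/core/rps_default_mask and takes a hex CPU
bitmap like the per-queue rps_cpus files (treat the exact path as an
assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* Hex CPU bitmap, same format as the per-queue rps_cpus
             * files; "f" steers RPS for new netdevs to CPUs 0-3. */
            const char *mask = "f";
            int fd = open("/proc/sys/net/core/rps_default_mask", O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (write(fd, mask, strlen(mask)) < 0) {
                    perror("write");
                    close(fd);
                    return 1;
            }
            close(fd);
            return 0;
    }

Netdevs created after the write inherit the mask, so this can run from early
boot before NICs are probed.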
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis (see the example after this list).
- Track and report in procfs number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
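For IP_LOCAL_PORT_RANGE above, a minimal sketch, assuming the option takes a
single 32-bit value with the low bound in bits 0-15 and the high bound in
bits 16-31 (the fallback define covers libc headers that predate this cycle):

    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef IP_LOCAL_PORT_RANGE
    #define IP_LOCAL_PORT_RANGE 51  /* uapi value, for older headers */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            /* Low bound in bits 0-15, high bound in bits 16-31. */
            uint32_t range = 40000 | ((uint32_t)40999 << 16);

            if (fd < 0 || setsockopt(fd, IPPROTO_IP, IP_LOCAL_PORT_RANGE,
                                     &range, sizeof(range)) < 0) {
                    perror("IP_LOCAL_PORT_RANGE");
                    return 1;
            }
            /* Ephemeral ports picked for this socket now come from
             * 40000-40999 instead of the netns-wide range. */
            return 0;
    }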
BPF:
- Add an rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, using
kfuncs + kptrs instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs (see the
sketch after this list).
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
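As referenced at the __bpf_kfunc bullet, a sketch of how a kfunc is annotated
and registered from a module; the function, set name, and chosen program type
are illustrative, while the macros and registration call are the existing BTF
kfunc interface:

    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/btf_ids.h>
    #include <linux/init.h>
    #include <linux/module.h>

    /* __bpf_kfunc keeps the symbol in BTF and stops the compiler from
     * inlining or renaming it, marking it callable from BPF programs. */
    __bpf_kfunc u32 bpf_example_sum(u32 a, u32 b)
    {
            return a + b;
    }

    BTF_SET8_START(example_kfunc_ids)
    BTF_ID_FLAGS(func, bpf_example_sum)
    BTF_SET8_END(example_kfunc_ids)

    static const struct btf_kfunc_id_set example_kfunc_set = {
            .owner = THIS_MODULE,
            .set   = &example_kfunc_ids,
    };

    static int __init example_kfunc_init(void)
    {
            /* Expose the kfunc to tracing programs. */
            return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
                                             &example_kfunc_set);
    }
    module_init(example_kfunc_init);
    MODULE_LICENSE("GPL");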
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still see WARN splats from races in the out-of-band
/proc interface this target installs.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly (see the
sketch after this list).
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of the Physical Layer Collision Avoidance
(PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version
of shared-medium Ethernet.
- Support for the MAC Merge layer (IEEE 802.3-2018 clause 99),
allowing preemption of low-priority frames by high-priority frames.
- Add support for controlling MACSec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attribute, TCA_EXT_WARN_MSG, to report TC (offload)
warning messages via notifications for debugging.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if old Wireless Extension user space interface is used with
modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using nl80211 interface instead.
- Improve the CAN bit timing configuration. Use extack to return
error messages directly to user space, update the SJW handling,
including the definition of a new default value that will benefit
CAN-FD controllers, by increasing their oscillator tolerance.
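The C22/C45 MDIO split referenced above is what much of the diff below
implements (e.g. the xgbe mii->read_c45/write_c45 hookup). A sketch of the
new bus-driver shape; the foo_* helpers are hypothetical, the mii_bus fields
are the real ones:

    #include <linux/mdio.h>
    #include <linux/phy.h>

    /* Hypothetical driver-local accessors, to show the new split. */
    static int foo_mdio_read_c22(struct mii_bus *bus, int addr, int regnum)
    {
            /* Plain Clause 22 cycle: 5-bit addr, 5-bit regnum. */
            return 0xffff;
    }

    static int foo_mdio_read_c45(struct mii_bus *bus, int addr, int devad,
                                 int regnum)
    {
            /* Clause 45 cycle: the MMD (devad) is now a separate argument
             * instead of being packed into regnum via MII_ADDR_C45. */
            return 0xffff;
    }

    static void foo_mdio_setup(struct mii_bus *bus)
    {
            bus->read = foo_mdio_read_c22;
            bus->read_c45 = foo_mdio_read_c45;
            /* write/write_c45 are wired up the same way. A bus that only
             * fills in the C22 pair is treated as C22-only, which is why
             * drivers like owl-emac below can drop their MII_ADDR_C45
             * rejection checks. */
    }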
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
sefltests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'drivers/net/ethernet')
463 files changed, 35904 insertions, 15059 deletions
diff --git a/drivers/net/ethernet/actions/owl-emac.c b/drivers/net/ethernet/actions/owl-emac.c index cd4d71b83c33..c6f8f852bff1 100644 --- a/drivers/net/ethernet/actions/owl-emac.c +++ b/drivers/net/ethernet/actions/owl-emac.c @@ -1275,9 +1275,6 @@ static int owl_emac_mdio_read(struct mii_bus *bus, int addr, int regnum) u32 data, tmp; int ret; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - data = OWL_EMAC_BIT_MAC_CSR10_SB; data |= OWL_EMAC_VAL_MAC_CSR10_OPCODE_RD << OWL_EMAC_OFF_MAC_CSR10_OPCODE; @@ -1305,9 +1302,6 @@ owl_emac_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val) struct owl_emac_priv *priv = bus->priv; u32 data, tmp; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - data = OWL_EMAC_BIT_MAC_CSR10_SB; data |= OWL_EMAC_VAL_MAC_CSR10_OPCODE_WR << OWL_EMAC_OFF_MAC_CSR10_OPCODE; diff --git a/drivers/net/ethernet/adi/adin1110.c b/drivers/net/ethernet/adi/adin1110.c index c26b8597945b..3f316a0f4158 100644 --- a/drivers/net/ethernet/adi/adin1110.c +++ b/drivers/net/ethernet/adi/adin1110.c @@ -523,7 +523,6 @@ static int adin1110_register_mdiobus(struct adin1110_priv *priv, mii_bus->priv = priv; mii_bus->parent = dev; mii_bus->phy_mask = ~((u32)GENMASK(2, 0)); - mii_bus->probe_capabilities = MDIOBUS_C22; snprintf(mii_bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev)); ret = devm_mdiobus_register(dev, mii_bus); diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index e8ad5ea31aff..d3999db7c6a2 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -597,7 +597,9 @@ static int ena_xdp_set(struct net_device *netdev, struct netdev_bpf *bpf) if (rc) return rc; } + xdp_features_set_redirect_target(netdev, false); } else if (old_bpf_prog) { + xdp_features_clear_redirect_target(netdev); rc = ena_destroy_and_free_all_xdp_queues(adapter); if (rc) return rc; @@ -4103,6 +4105,8 @@ static void ena_set_conf_feat_params(struct ena_adapter *adapter, /* Set offload features */ ena_set_dev_offloads(feat, netdev); + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + adapter->max_mtu = feat->dev_attr.max_mtu; netdev->max_mtu = adapter->max_mtu; netdev->min_mtu = ENA_MIN_MTU; diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h index 466273b22f0a..3b70f6737633 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h +++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h @@ -1285,6 +1285,22 @@ #define MDIO_PMA_RX_CTRL1 0x8051 #endif +#ifndef MDIO_PMA_RX_LSTS +#define MDIO_PMA_RX_LSTS 0x018020 +#endif + +#ifndef MDIO_PMA_RX_EQ_CTRL4 +#define MDIO_PMA_RX_EQ_CTRL4 0x0001805C +#endif + +#ifndef MDIO_PMA_MP_MISC_STS +#define MDIO_PMA_MP_MISC_STS 0x0078 +#endif + +#ifndef MDIO_PMA_PHY_RX_EQ_CEU +#define MDIO_PMA_PHY_RX_EQ_CEU 0x1800E +#endif + #ifndef MDIO_PCS_DIG_CTRL #define MDIO_PCS_DIG_CTRL 0x8000 #endif @@ -1395,6 +1411,28 @@ #define XGBE_PMA_RX_RST_0_RESET_ON 0x10 #define XGBE_PMA_RX_RST_0_RESET_OFF 0x00 +#define XGBE_PMA_RX_SIG_DET_0_MASK BIT(4) +#define XGBE_PMA_RX_SIG_DET_0_ENABLE BIT(4) +#define XGBE_PMA_RX_SIG_DET_0_DISABLE 0x0000 + +#define XGBE_PMA_RX_VALID_0_MASK BIT(12) +#define XGBE_PMA_RX_VALID_0_ENABLE BIT(12) +#define XGBE_PMA_RX_VALID_0_DISABLE 0x0000 + +#define XGBE_PMA_RX_AD_REQ_MASK BIT(12) +#define XGBE_PMA_RX_AD_REQ_ENABLE BIT(12) +#define XGBE_PMA_RX_AD_REQ_DISABLE 0x0000 + +#define XGBE_PMA_RX_ADPT_ACK_MASK BIT(12) +#define XGBE_PMA_RX_ADPT_ACK BIT(12) + +#define XGBE_PMA_CFF_UPDTM1_VLD BIT(8) 
+#define XGBE_PMA_CFF_UPDT0_VLD BIT(9) +#define XGBE_PMA_CFF_UPDT1_VLD BIT(10) +#define XGBE_PMA_CFF_UPDT_MASK (XGBE_PMA_CFF_UPDTM1_VLD |\ + XGBE_PMA_CFF_UPDT0_VLD | \ + XGBE_PMA_CFF_UPDT1_VLD) + #define XGBE_PMA_PLL_CTRL_MASK BIT(15) #define XGBE_PMA_PLL_CTRL_ENABLE BIT(15) #define XGBE_PMA_PLL_CTRL_DISABLE 0x0000 @@ -1699,20 +1737,21 @@ do { \ } while (0) /* Macros for building, reading or writing register values or bits - * using MDIO. Different from above because of the use of standardized - * Linux include values. No shifting is performed with the bit - * operations, everything works on mask values. + * using MDIO. */ + +#define XGBE_ADDR_C45 BIT(30) + #define XMDIO_READ(_pdata, _mmd, _reg) \ ((_pdata)->hw_if.read_mmd_regs((_pdata), 0, \ - MII_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff))) + XGBE_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff))) #define XMDIO_READ_BITS(_pdata, _mmd, _reg, _mask) \ (XMDIO_READ((_pdata), _mmd, _reg) & _mask) #define XMDIO_WRITE(_pdata, _mmd, _reg, _val) \ ((_pdata)->hw_if.write_mmd_regs((_pdata), 0, \ - MII_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff), (_val))) + XGBE_ADDR_C45 | (_mmd << 16) | ((_reg) & 0xffff), (_val))) #define XMDIO_WRITE_BITS(_pdata, _mmd, _reg, _mask, _val) \ do { \ diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c index 4030d619e84f..f393228d41c7 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c @@ -814,6 +814,9 @@ static int xgbe_set_speed(struct xgbe_prv_data *pdata, int speed) unsigned int ss; switch (speed) { + case SPEED_10: + ss = 0x07; + break; case SPEED_1000: ss = 0x03; break; @@ -1154,8 +1157,8 @@ static int xgbe_read_mmd_regs_v2(struct xgbe_prv_data *pdata, int prtad, unsigned int mmd_address, index, offset; int mmd_data; - if (mmd_reg & MII_ADDR_C45) - mmd_address = mmd_reg & ~MII_ADDR_C45; + if (mmd_reg & XGBE_ADDR_C45) + mmd_address = mmd_reg & ~XGBE_ADDR_C45; else mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff); @@ -1186,8 +1189,8 @@ static void xgbe_write_mmd_regs_v2(struct xgbe_prv_data *pdata, int prtad, unsigned long flags; unsigned int mmd_address, index, offset; - if (mmd_reg & MII_ADDR_C45) - mmd_address = mmd_reg & ~MII_ADDR_C45; + if (mmd_reg & XGBE_ADDR_C45) + mmd_address = mmd_reg & ~XGBE_ADDR_C45; else mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff); @@ -1217,8 +1220,8 @@ static int xgbe_read_mmd_regs_v1(struct xgbe_prv_data *pdata, int prtad, unsigned int mmd_address; int mmd_data; - if (mmd_reg & MII_ADDR_C45) - mmd_address = mmd_reg & ~MII_ADDR_C45; + if (mmd_reg & XGBE_ADDR_C45) + mmd_address = mmd_reg & ~XGBE_ADDR_C45; else mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff); @@ -1245,8 +1248,8 @@ static void xgbe_write_mmd_regs_v1(struct xgbe_prv_data *pdata, int prtad, unsigned int mmd_address; unsigned long flags; - if (mmd_reg & MII_ADDR_C45) - mmd_address = mmd_reg & ~MII_ADDR_C45; + if (mmd_reg & XGBE_ADDR_C45) + mmd_address = mmd_reg & ~XGBE_ADDR_C45; else mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff); @@ -1291,11 +1294,20 @@ static void xgbe_write_mmd_regs(struct xgbe_prv_data *pdata, int prtad, } } -static unsigned int xgbe_create_mdio_sca(int port, int reg) +static unsigned int xgbe_create_mdio_sca_c22(int port, int reg) { - unsigned int mdio_sca, da; + unsigned int mdio_sca; + + mdio_sca = 0; + XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, RA, reg); + XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, PA, port); - da = (reg & MII_ADDR_C45) ? 
reg >> 16 : 0; + return mdio_sca; +} + +static unsigned int xgbe_create_mdio_sca_c45(int port, unsigned int da, int reg) +{ + unsigned int mdio_sca; mdio_sca = 0; XGMAC_SET_BITS(mdio_sca, MAC_MDIOSCAR, RA, reg); @@ -1305,14 +1317,13 @@ static unsigned int xgbe_create_mdio_sca(int port, int reg) return mdio_sca; } -static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata, int addr, - int reg, u16 val) +static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata, + unsigned int mdio_sca, u16 val) { - unsigned int mdio_sca, mdio_sccd; + unsigned int mdio_sccd; reinit_completion(&pdata->mdio_complete); - mdio_sca = xgbe_create_mdio_sca(addr, reg); XGMAC_IOWRITE(pdata, MAC_MDIOSCAR, mdio_sca); mdio_sccd = 0; @@ -1329,14 +1340,33 @@ static int xgbe_write_ext_mii_regs(struct xgbe_prv_data *pdata, int addr, return 0; } -static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, int addr, - int reg) +static int xgbe_write_ext_mii_regs_c22(struct xgbe_prv_data *pdata, int addr, + int reg, u16 val) { - unsigned int mdio_sca, mdio_sccd; + unsigned int mdio_sca; + + mdio_sca = xgbe_create_mdio_sca_c22(addr, reg); + + return xgbe_write_ext_mii_regs(pdata, mdio_sca, val); +} + +static int xgbe_write_ext_mii_regs_c45(struct xgbe_prv_data *pdata, int addr, + int devad, int reg, u16 val) +{ + unsigned int mdio_sca; + + mdio_sca = xgbe_create_mdio_sca_c45(addr, devad, reg); + + return xgbe_write_ext_mii_regs(pdata, mdio_sca, val); +} + +static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, + unsigned int mdio_sca) +{ + unsigned int mdio_sccd; reinit_completion(&pdata->mdio_complete); - mdio_sca = xgbe_create_mdio_sca(addr, reg); XGMAC_IOWRITE(pdata, MAC_MDIOSCAR, mdio_sca); mdio_sccd = 0; @@ -1352,6 +1382,26 @@ static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, int addr, return XGMAC_IOREAD_BITS(pdata, MAC_MDIOSCCDR, DATA); } +static int xgbe_read_ext_mii_regs_c22(struct xgbe_prv_data *pdata, int addr, + int reg) +{ + unsigned int mdio_sca; + + mdio_sca = xgbe_create_mdio_sca_c22(addr, reg); + + return xgbe_read_ext_mii_regs(pdata, mdio_sca); +} + +static int xgbe_read_ext_mii_regs_c45(struct xgbe_prv_data *pdata, int addr, + int devad, int reg) +{ + unsigned int mdio_sca; + + mdio_sca = xgbe_create_mdio_sca_c45(addr, devad, reg); + + return xgbe_read_ext_mii_regs(pdata, mdio_sca); +} + static int xgbe_set_ext_mii_mode(struct xgbe_prv_data *pdata, unsigned int port, enum xgbe_mdio_mode mode) { @@ -3565,8 +3615,10 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if) hw_if->set_speed = xgbe_set_speed; hw_if->set_ext_mii_mode = xgbe_set_ext_mii_mode; - hw_if->read_ext_mii_regs = xgbe_read_ext_mii_regs; - hw_if->write_ext_mii_regs = xgbe_write_ext_mii_regs; + hw_if->read_ext_mii_regs_c22 = xgbe_read_ext_mii_regs_c22; + hw_if->write_ext_mii_regs_c22 = xgbe_write_ext_mii_regs_c22; + hw_if->read_ext_mii_regs_c45 = xgbe_read_ext_mii_regs_c45; + hw_if->write_ext_mii_regs_c45 = xgbe_write_ext_mii_regs_c45; hw_if->set_gpio = xgbe_set_gpio; hw_if->clr_gpio = xgbe_clr_gpio; diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c index 43fdd111235a..33a9574e9e04 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c @@ -274,6 +274,15 @@ static void xgbe_sgmii_1000_mode(struct xgbe_prv_data *pdata) pdata->phy_if.phy_impl.set_mode(pdata, XGBE_MODE_SGMII_1000); } +static void xgbe_sgmii_10_mode(struct xgbe_prv_data *pdata) +{ + /* Set MAC to 10M speed */ + pdata->hw_if.set_speed(pdata, 
SPEED_10); + + /* Call PHY implementation support to complete rate change */ + pdata->phy_if.phy_impl.set_mode(pdata, XGBE_MODE_SGMII_10); +} + static void xgbe_sgmii_100_mode(struct xgbe_prv_data *pdata) { /* Set MAC to 1G speed */ @@ -306,6 +315,9 @@ static void xgbe_change_mode(struct xgbe_prv_data *pdata, case XGBE_MODE_KR: xgbe_kr_mode(pdata); break; + case XGBE_MODE_SGMII_10: + xgbe_sgmii_10_mode(pdata); + break; case XGBE_MODE_SGMII_100: xgbe_sgmii_100_mode(pdata); break; @@ -1077,6 +1089,8 @@ static const char *xgbe_phy_fc_string(struct xgbe_prv_data *pdata) static const char *xgbe_phy_speed_string(int speed) { switch (speed) { + case SPEED_10: + return "10Mbps"; case SPEED_100: return "100Mbps"; case SPEED_1000: @@ -1164,6 +1178,7 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata) case XGBE_MODE_KX_1000: case XGBE_MODE_KX_2500: case XGBE_MODE_KR: + case XGBE_MODE_SGMII_10: case XGBE_MODE_SGMII_100: case XGBE_MODE_SGMII_1000: case XGBE_MODE_X: @@ -1225,6 +1240,8 @@ static int __xgbe_phy_config_aneg(struct xgbe_prv_data *pdata, bool set_mode) xgbe_set_mode(pdata, XGBE_MODE_SGMII_1000); } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_100)) { xgbe_set_mode(pdata, XGBE_MODE_SGMII_100); + } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_10)) { + xgbe_set_mode(pdata, XGBE_MODE_SGMII_10); } else { enable_irq(pdata->an_irq); ret = -EINVAL; @@ -1325,6 +1342,9 @@ static void xgbe_phy_status_result(struct xgbe_prv_data *pdata) mode = xgbe_phy_status_aneg(pdata); switch (mode) { + case XGBE_MODE_SGMII_10: + pdata->phy.speed = SPEED_10; + break; case XGBE_MODE_SGMII_100: pdata->phy.speed = SPEED_100; break; @@ -1467,6 +1487,8 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata) xgbe_sgmii_1000_mode(pdata); } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_100)) { xgbe_sgmii_100_mode(pdata); + } else if (xgbe_use_mode(pdata, XGBE_MODE_SGMII_10)) { + xgbe_sgmii_10_mode(pdata); } else { ret = -EINVAL; goto err_irq; @@ -1564,6 +1586,8 @@ static int xgbe_phy_best_advertised_speed(struct xgbe_prv_data *pdata) return SPEED_1000; else if (XGBE_ADV(lks, 100baseT_Full)) return SPEED_100; + else if (XGBE_ADV(lks, 10baseT_Full)) + return SPEED_10; return SPEED_UNKNOWN; } diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c index c731a04731f8..16e7fb2c0dae 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c @@ -124,6 +124,7 @@ #include "xgbe.h" #include "xgbe-common.h" +#define XGBE_PHY_PORT_SPEED_10 BIT(0) #define XGBE_PHY_PORT_SPEED_100 BIT(1) #define XGBE_PHY_PORT_SPEED_1000 BIT(2) #define XGBE_PHY_PORT_SPEED_2500 BIT(3) @@ -387,6 +388,10 @@ struct xgbe_phy_data { static DEFINE_MUTEX(xgbe_phy_comm_lock); static enum xgbe_an_mode xgbe_phy_an_mode(struct xgbe_prv_data *pdata); +static void xgbe_phy_rrc(struct xgbe_prv_data *pdata); +static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, + enum xgbe_mb_cmd cmd, + enum xgbe_mb_subcmd sub_cmd); static int xgbe_phy_i2c_xfer(struct xgbe_prv_data *pdata, struct xgbe_i2c_op *i2c_op) @@ -599,20 +604,27 @@ static int xgbe_phy_get_comm_ownership(struct xgbe_prv_data *pdata) return -ETIMEDOUT; } -static int xgbe_phy_mdio_mii_write(struct xgbe_prv_data *pdata, int addr, - int reg, u16 val) +static int xgbe_phy_mdio_mii_write_c22(struct xgbe_prv_data *pdata, int addr, + int reg, u16 val) { struct xgbe_phy_data *phy_data = pdata->phy_data; - if (reg & MII_ADDR_C45) { - if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45) - return -ENOTSUPP; 
- } else { - if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22) - return -ENOTSUPP; - } + if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22) + return -EOPNOTSUPP; + + return pdata->hw_if.write_ext_mii_regs_c22(pdata, addr, reg, val); +} - return pdata->hw_if.write_ext_mii_regs(pdata, addr, reg, val); +static int xgbe_phy_mdio_mii_write_c45(struct xgbe_prv_data *pdata, int addr, + int devad, int reg, u16 val) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + + if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45) + return -EOPNOTSUPP; + + return pdata->hw_if.write_ext_mii_regs_c45(pdata, addr, devad, + reg, val); } static int xgbe_phy_i2c_mii_write(struct xgbe_prv_data *pdata, int reg, u16 val) @@ -637,7 +649,8 @@ static int xgbe_phy_i2c_mii_write(struct xgbe_prv_data *pdata, int reg, u16 val) return ret; } -static int xgbe_phy_mii_write(struct mii_bus *mii, int addr, int reg, u16 val) +static int xgbe_phy_mii_write_c22(struct mii_bus *mii, int addr, int reg, + u16 val) { struct xgbe_prv_data *pdata = mii->priv; struct xgbe_phy_data *phy_data = pdata->phy_data; @@ -650,29 +663,58 @@ static int xgbe_phy_mii_write(struct mii_bus *mii, int addr, int reg, u16 val) if (phy_data->conn_type == XGBE_CONN_TYPE_SFP) ret = xgbe_phy_i2c_mii_write(pdata, reg, val); else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO) - ret = xgbe_phy_mdio_mii_write(pdata, addr, reg, val); + ret = xgbe_phy_mdio_mii_write_c22(pdata, addr, reg, val); else - ret = -ENOTSUPP; + ret = -EOPNOTSUPP; xgbe_phy_put_comm_ownership(pdata); return ret; } -static int xgbe_phy_mdio_mii_read(struct xgbe_prv_data *pdata, int addr, - int reg) +static int xgbe_phy_mii_write_c45(struct mii_bus *mii, int addr, int devad, + int reg, u16 val) { + struct xgbe_prv_data *pdata = mii->priv; struct xgbe_phy_data *phy_data = pdata->phy_data; + int ret; - if (reg & MII_ADDR_C45) { - if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45) - return -ENOTSUPP; - } else { - if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22) - return -ENOTSUPP; - } + ret = xgbe_phy_get_comm_ownership(pdata); + if (ret) + return ret; - return pdata->hw_if.read_ext_mii_regs(pdata, addr, reg); + if (phy_data->conn_type == XGBE_CONN_TYPE_SFP) + ret = -EOPNOTSUPP; + else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO) + ret = xgbe_phy_mdio_mii_write_c45(pdata, addr, devad, reg, val); + else + ret = -EOPNOTSUPP; + + xgbe_phy_put_comm_ownership(pdata); + + return ret; +} + +static int xgbe_phy_mdio_mii_read_c22(struct xgbe_prv_data *pdata, int addr, + int reg) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + + if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL22) + return -EOPNOTSUPP; + + return pdata->hw_if.read_ext_mii_regs_c22(pdata, addr, reg); +} + +static int xgbe_phy_mdio_mii_read_c45(struct xgbe_prv_data *pdata, int addr, + int devad, int reg) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + + if (phy_data->phydev_mode != XGBE_MDIO_MODE_CL45) + return -EOPNOTSUPP; + + return pdata->hw_if.read_ext_mii_regs_c45(pdata, addr, devad, reg); } static int xgbe_phy_i2c_mii_read(struct xgbe_prv_data *pdata, int reg) @@ -697,7 +739,7 @@ static int xgbe_phy_i2c_mii_read(struct xgbe_prv_data *pdata, int reg) return ret; } -static int xgbe_phy_mii_read(struct mii_bus *mii, int addr, int reg) +static int xgbe_phy_mii_read_c22(struct mii_bus *mii, int addr, int reg) { struct xgbe_prv_data *pdata = mii->priv; struct xgbe_phy_data *phy_data = pdata->phy_data; @@ -710,7 +752,30 @@ static int xgbe_phy_mii_read(struct mii_bus *mii, int addr, int reg) if (phy_data->conn_type == 
XGBE_CONN_TYPE_SFP) ret = xgbe_phy_i2c_mii_read(pdata, reg); else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO) - ret = xgbe_phy_mdio_mii_read(pdata, addr, reg); + ret = xgbe_phy_mdio_mii_read_c22(pdata, addr, reg); + else + ret = -EOPNOTSUPP; + + xgbe_phy_put_comm_ownership(pdata); + + return ret; +} + +static int xgbe_phy_mii_read_c45(struct mii_bus *mii, int addr, int devad, + int reg) +{ + struct xgbe_prv_data *pdata = mii->priv; + struct xgbe_phy_data *phy_data = pdata->phy_data; + int ret; + + ret = xgbe_phy_get_comm_ownership(pdata); + if (ret) + return ret; + + if (phy_data->conn_type == XGBE_CONN_TYPE_SFP) + ret = -EOPNOTSUPP; + else if (phy_data->conn_type & XGBE_CONN_TYPE_MDIO) + ret = xgbe_phy_mdio_mii_read_c45(pdata, addr, devad, reg); else ret = -ENOTSUPP; @@ -759,6 +824,8 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, Pause); XGBE_SET_SUP(lks, Asym_Pause); if (phy_data->sfp_base == XGBE_SFP_BASE_1000_T) { + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) + XGBE_SET_SUP(lks, 10baseT_Full); if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) XGBE_SET_SUP(lks, 100baseT_Full); if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) @@ -1542,6 +1609,16 @@ static enum xgbe_mode xgbe_phy_an37_sgmii_outcome(struct xgbe_prv_data *pdata) xgbe_phy_phydev_flowctrl(pdata); switch (pdata->an_status & XGBE_SGMII_AN_LINK_SPEED) { + case XGBE_SGMII_AN_LINK_SPEED_10: + if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) { + XGBE_SET_LP_ADV(lks, 10baseT_Full); + mode = XGBE_MODE_SGMII_10; + } else { + /* Half-duplex not supported */ + XGBE_SET_LP_ADV(lks, 10baseT_Half); + mode = XGBE_MODE_UNKNOWN; + } + break; case XGBE_SGMII_AN_LINK_SPEED_100: if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) { XGBE_SET_LP_ADV(lks, 100baseT_Full); @@ -1658,7 +1735,10 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata) switch (phy_data->sfp_base) { case XGBE_SFP_BASE_1000_T: if (phy_data->phydev && - (phy_data->phydev->speed == SPEED_100)) + (phy_data->phydev->speed == SPEED_10)) + mode = XGBE_MODE_SGMII_10; + else if (phy_data->phydev && + (phy_data->phydev->speed == SPEED_100)) mode = XGBE_MODE_SGMII_100; else mode = XGBE_MODE_SGMII_1000; @@ -1673,7 +1753,10 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata) break; default: if (phy_data->phydev && - (phy_data->phydev->speed == SPEED_100)) + (phy_data->phydev->speed == SPEED_10)) + mode = XGBE_MODE_SGMII_10; + else if (phy_data->phydev && + (phy_data->phydev->speed == SPEED_100)) mode = XGBE_MODE_SGMII_100; else mode = XGBE_MODE_SGMII_1000; @@ -1803,6 +1886,9 @@ static void xgbe_phy_an_advertising(struct xgbe_prv_data *pdata, if (phy_data->phydev && (phy_data->phydev->speed == SPEED_10000)) XGBE_SET_ADV(dlks, 10000baseKR_Full); + else if (phy_data->phydev && + (phy_data->phydev->speed == SPEED_2500)) + XGBE_SET_ADV(dlks, 2500baseX_Full); else XGBE_SET_ADV(dlks, 1000baseKX_Full); break; @@ -1910,8 +1996,8 @@ static int xgbe_phy_set_redrv_mode_mdio(struct xgbe_prv_data *pdata, redrv_reg = XGBE_PHY_REDRV_MODE_REG + (phy_data->redrv_lane * 0x1000); redrv_val = (u16)mode; - return pdata->hw_if.write_ext_mii_regs(pdata, phy_data->redrv_addr, - redrv_reg, redrv_val); + return pdata->hw_if.write_ext_mii_regs_c22(pdata, phy_data->redrv_addr, + redrv_reg, redrv_val); } static int xgbe_phy_set_redrv_mode_i2c(struct xgbe_prv_data *pdata, @@ -1956,6 +2042,93 @@ static void xgbe_phy_set_redrv_mode(struct xgbe_prv_data *pdata) xgbe_phy_put_comm_ownership(pdata); } 
+#define MAX_RX_ADAPT_RETRIES 1 +#define XGBE_PMA_RX_VAL_SIG_MASK (XGBE_PMA_RX_SIG_DET_0_MASK | \ + XGBE_PMA_RX_VALID_0_MASK) + +static void xgbe_set_rx_adap_mode(struct xgbe_prv_data *pdata, + enum xgbe_mode mode) +{ + if (pdata->rx_adapt_retries++ >= MAX_RX_ADAPT_RETRIES) { + pdata->rx_adapt_retries = 0; + return; + } + + xgbe_phy_perform_ratechange(pdata, + mode == XGBE_MODE_KR ? + XGBE_MB_CMD_SET_10G_KR : + XGBE_MB_CMD_SET_10G_SFI, + XGBE_MB_SUBCMD_RX_ADAP); +} + +static void xgbe_rx_adaptation(struct xgbe_prv_data *pdata) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + unsigned int reg; + + /* step 2: force PCS to send RX_ADAPT Req to PHY */ + XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4, + XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_ENABLE); + + /* Step 3: Wait for RX_ADAPT ACK from the PHY */ + msleep(200); + + /* Software polls for coefficient update command (given by local PHY) */ + reg = XMDIO_READ(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_PHY_RX_EQ_CEU); + + /* Clear the RX_AD_REQ bit */ + XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4, + XGBE_PMA_RX_AD_REQ_MASK, XGBE_PMA_RX_AD_REQ_DISABLE); + + /* Check if coefficient update command is set */ + if ((reg & XGBE_PMA_CFF_UPDT_MASK) != XGBE_PMA_CFF_UPDT_MASK) + goto set_mode; + + /* Step 4: Check for Block lock */ + + /* Link status is latched low, so read once to clear + * and then read again to get current state + */ + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); + if (reg & MDIO_STAT1_LSTATUS) { + /* If the block lock is found, update the helpers + * and declare the link up + */ + netif_dbg(pdata, link, pdata->netdev, "Block_lock done"); + pdata->rx_adapt_done = true; + pdata->mode_set = false; + return; + } + +set_mode: + xgbe_set_rx_adap_mode(pdata, phy_data->cur_mode); +} + +static void xgbe_phy_rx_adaptation(struct xgbe_prv_data *pdata) +{ + unsigned int reg; + +rx_adapt_reinit: + reg = XMDIO_READ_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_LSTS, + XGBE_PMA_RX_VAL_SIG_MASK); + + /* step 1: Check for RX_VALID && LF_SIGDET */ + if ((reg & XGBE_PMA_RX_VAL_SIG_MASK) != XGBE_PMA_RX_VAL_SIG_MASK) { + netif_dbg(pdata, link, pdata->netdev, + "RX_VALID or LF_SIGDET is unset, issue rrc"); + xgbe_phy_rrc(pdata); + if (pdata->rx_adapt_retries++ >= MAX_RX_ADAPT_RETRIES) { + pdata->rx_adapt_retries = 0; + return; + } + goto rx_adapt_reinit; + } + + /* perform rx adaptation */ + xgbe_rx_adaptation(pdata); +} + static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata) { int reg; @@ -2021,7 +2194,7 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, wait = XGBE_RATECHANGE_COUNT; while (wait--) { if (!XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) - goto reenable_pll; + goto do_rx_adaptation; usleep_range(1000, 2000); } @@ -2031,6 +2204,20 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, /* Reset on error */ xgbe_phy_rx_reset(pdata); + goto reenable_pll; + +do_rx_adaptation: + if (pdata->en_rx_adap && sub_cmd == XGBE_MB_SUBCMD_RX_ADAP && + (cmd == XGBE_MB_CMD_SET_10G_KR || cmd == XGBE_MB_CMD_SET_10G_SFI)) { + netif_dbg(pdata, link, pdata->netdev, + "Enabling RX adaptation\n"); + pdata->mode_set = true; + xgbe_phy_rx_adaptation(pdata); + /* return from here to avoid enabling PLL ctrl + * during adaptation phase + */ + return; + } reenable_pll: /* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */ @@ -2059,6 +2246,31 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata) 
netif_dbg(pdata, link, pdata->netdev, "phy powered off\n"); } +static bool enable_rx_adap(struct xgbe_prv_data *pdata, enum xgbe_mode mode) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + unsigned int ver; + + /* Rx-Adaptation is not supported on older platforms(< 0x30H) */ + ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER); + if (ver < 0x30) + return false; + + /* Re-driver models 4223 && 4227 do not support Rx-Adaptation */ + if (phy_data->redrv && + (phy_data->redrv_model == XGBE_PHY_REDRV_MODEL_4223 || + phy_data->redrv_model == XGBE_PHY_REDRV_MODEL_4227)) + return false; + + /* 10G KR mode with AN does not support Rx-Adaptation */ + if (mode == XGBE_MODE_KR && + phy_data->port_mode != XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG) + return false; + + pdata->en_rx_adap = 1; + return true; +} + static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata) { struct xgbe_phy_data *phy_data = pdata->phy_data; @@ -2067,7 +2279,12 @@ static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata) /* 10G/SFI */ if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) { + pdata->en_rx_adap = 0; xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE); + } else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) && + (enable_rx_adap(pdata, XGBE_MODE_SFI))) { + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, + XGBE_MB_SUBCMD_RX_ADAP); } else { if (phy_data->sfp_cable_len <= 1) xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, @@ -2127,6 +2344,20 @@ static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata) netif_dbg(pdata, link, pdata->netdev, "100MbE SGMII mode set\n"); } +static void xgbe_phy_sgmii_10_mode(struct xgbe_prv_data *pdata) +{ + struct xgbe_phy_data *phy_data = pdata->phy_data; + + xgbe_phy_set_redrv_mode(pdata); + + /* 10M/SGMII */ + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_10MBITS); + + phy_data->cur_mode = XGBE_MODE_SGMII_10; + + netif_dbg(pdata, link, pdata->netdev, "10MbE SGMII mode set\n"); +} + static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata) { struct xgbe_phy_data *phy_data = pdata->phy_data; @@ -2134,7 +2365,12 @@ static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata) xgbe_phy_set_redrv_mode(pdata); /* 10G/KR */ - xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE); + if (enable_rx_adap(pdata, XGBE_MODE_KR)) + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, + XGBE_MB_SUBCMD_RX_ADAP); + else + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, + XGBE_MB_SUBCMD_NONE); phy_data->cur_mode = XGBE_MODE_KR; @@ -2185,12 +2421,15 @@ static enum xgbe_mode xgbe_phy_switch_baset_mode(struct xgbe_prv_data *pdata) return xgbe_phy_cur_mode(pdata); switch (xgbe_phy_cur_mode(pdata)) { + case XGBE_MODE_SGMII_10: case XGBE_MODE_SGMII_100: case XGBE_MODE_SGMII_1000: return XGBE_MODE_KR; + case XGBE_MODE_KX_2500: + return XGBE_MODE_SGMII_1000; case XGBE_MODE_KR: default: - return XGBE_MODE_SGMII_1000; + return XGBE_MODE_KX_2500; } } @@ -2252,6 +2491,8 @@ static enum xgbe_mode xgbe_phy_get_baset_mode(struct xgbe_phy_data *phy_data, int speed) { switch (speed) { + case SPEED_10: + return XGBE_MODE_SGMII_10; case SPEED_100: return XGBE_MODE_SGMII_100; case SPEED_1000: @@ -2269,6 +2510,8 @@ static enum xgbe_mode xgbe_phy_get_sfp_mode(struct xgbe_phy_data *phy_data, int speed) { switch (speed) { + case SPEED_10: + return XGBE_MODE_SGMII_10; case SPEED_100: return XGBE_MODE_SGMII_100; case SPEED_1000: @@ -2343,6 +2586,9 @@ static void xgbe_phy_set_mode(struct 
xgbe_prv_data *pdata, enum xgbe_mode mode) case XGBE_MODE_KR: xgbe_phy_kr_mode(pdata); break; + case XGBE_MODE_SGMII_10: + xgbe_phy_sgmii_10_mode(pdata); + break; case XGBE_MODE_SGMII_100: xgbe_phy_sgmii_100_mode(pdata); break; @@ -2399,6 +2645,9 @@ static bool xgbe_phy_use_baset_mode(struct xgbe_prv_data *pdata, struct ethtool_link_ksettings *lks = &pdata->phy.lks; switch (mode) { + case XGBE_MODE_SGMII_10: + return xgbe_phy_check_mode(pdata, mode, + XGBE_ADV(lks, 10baseT_Full)); case XGBE_MODE_SGMII_100: return xgbe_phy_check_mode(pdata, mode, XGBE_ADV(lks, 100baseT_Full)); @@ -2428,6 +2677,11 @@ static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata, return false; return xgbe_phy_check_mode(pdata, mode, XGBE_ADV(lks, 1000baseX_Full)); + case XGBE_MODE_SGMII_10: + if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T) + return false; + return xgbe_phy_check_mode(pdata, mode, + XGBE_ADV(lks, 10baseT_Full)); case XGBE_MODE_SGMII_100: if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T) return false; @@ -2520,15 +2774,23 @@ static bool xgbe_phy_valid_speed_basex_mode(struct xgbe_phy_data *phy_data, } } -static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_phy_data *phy_data, +static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_prv_data *pdata, int speed) { + struct xgbe_phy_data *phy_data = pdata->phy_data; + unsigned int ver; + switch (speed) { + case SPEED_10: + /* Supported in ver >= 30H */ + ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER); + return (ver >= 0x30) ? true : false; case SPEED_100: case SPEED_1000: return true; case SPEED_2500: - return (phy_data->port_mode == XGBE_PORT_MODE_NBASE_T); + return ((phy_data->port_mode == XGBE_PORT_MODE_10GBASE_T) || + (phy_data->port_mode == XGBE_PORT_MODE_NBASE_T)); case SPEED_10000: return (phy_data->port_mode == XGBE_PORT_MODE_10GBASE_T); default: @@ -2536,10 +2798,17 @@ static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_phy_data *phy_data, } } -static bool xgbe_phy_valid_speed_sfp_mode(struct xgbe_phy_data *phy_data, +static bool xgbe_phy_valid_speed_sfp_mode(struct xgbe_prv_data *pdata, int speed) { + struct xgbe_phy_data *phy_data = pdata->phy_data; + unsigned int ver; + switch (speed) { + case SPEED_10: + /* Supported in ver >= 30H */ + ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER); + return (ver >= 0x30) && (phy_data->sfp_speed == XGBE_SFP_SPEED_100_1000); case SPEED_100: return (phy_data->sfp_speed == XGBE_SFP_SPEED_100_1000); case SPEED_1000: @@ -2586,12 +2855,12 @@ static bool xgbe_phy_valid_speed(struct xgbe_prv_data *pdata, int speed) case XGBE_PORT_MODE_1000BASE_T: case XGBE_PORT_MODE_NBASE_T: case XGBE_PORT_MODE_10GBASE_T: - return xgbe_phy_valid_speed_baset_mode(phy_data, speed); + return xgbe_phy_valid_speed_baset_mode(pdata, speed); case XGBE_PORT_MODE_1000BASE_X: case XGBE_PORT_MODE_10GBASE_R: return xgbe_phy_valid_speed_basex_mode(phy_data, speed); case XGBE_PORT_MODE_SFP: - return xgbe_phy_valid_speed_sfp_mode(phy_data, speed); + return xgbe_phy_valid_speed_sfp_mode(pdata, speed); default: return false; } @@ -2614,8 +2883,11 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) return 0; } - if (phy_data->sfp_mod_absent || phy_data->sfp_rx_los) + if (phy_data->sfp_mod_absent || phy_data->sfp_rx_los) { + if (pdata->en_rx_adap) + pdata->rx_adapt_done = false; return 0; + } } if (phy_data->phydev) { @@ -2637,7 +2909,29 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) */ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); reg = 
XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); - if (reg & MDIO_STAT1_LSTATUS) + + if (pdata->en_rx_adap) { + /* if the link is available and adaptation is done, + * declare link up + */ + if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) + return 1; + /* If either link is not available or adaptation is not done, + * retrigger the adaptation logic. (if the mode is not set, + * then issue mailbox command first) + */ + if (pdata->mode_set) { + xgbe_phy_rx_adaptation(pdata); + } else { + pdata->rx_adapt_done = false; + xgbe_phy_set_mode(pdata, phy_data->cur_mode); + } + + /* check again for the link and adaptation status */ + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); + if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) + return 1; + } else if (reg & MDIO_STAT1_LSTATUS) return 1; if (pdata->phy.autoneg == AUTONEG_ENABLE && @@ -2862,6 +3156,12 @@ static int xgbe_phy_mdio_reset_setup(struct xgbe_prv_data *pdata) static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata) { struct xgbe_phy_data *phy_data = pdata->phy_data; + unsigned int ver; + + /* 10 Mbps speed is not supported in ver < 30H */ + ver = XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER); + if (ver < 0x30 && (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10)) + return true; switch (phy_data->port_mode) { case XGBE_PORT_MODE_BACKPLANE: @@ -2875,7 +3175,8 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata) return false; break; case XGBE_PORT_MODE_1000BASE_T: - if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || + if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) || + (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)) return false; break; @@ -2884,14 +3185,17 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata) return false; break; case XGBE_PORT_MODE_NBASE_T: - if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || + if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) || + (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500)) return false; break; case XGBE_PORT_MODE_10GBASE_T: - if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || + if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) || + (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) || + (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)) return false; break; @@ -2900,7 +3204,8 @@ static bool xgbe_phy_port_mode_mismatch(struct xgbe_prv_data *pdata) return false; break; case XGBE_PORT_MODE_SFP: - if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || + if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) || + (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) || (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)) return false; @@ -3269,6 +3574,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, Pause); XGBE_SET_SUP(lks, Asym_Pause); XGBE_SET_SUP(lks, TP); + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) { + XGBE_SET_SUP(lks, 10baseT_Full); + phy_data->start_mode = XGBE_MODE_SGMII_10; + } if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) { XGBE_SET_SUP(lks, 100baseT_Full); phy_data->start_mode = XGBE_MODE_SGMII_100; @@ -3299,6 +3608,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, Pause); XGBE_SET_SUP(lks, 
Asym_Pause); XGBE_SET_SUP(lks, TP); + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) { + XGBE_SET_SUP(lks, 10baseT_Full); + phy_data->start_mode = XGBE_MODE_SGMII_10; + } if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) { XGBE_SET_SUP(lks, 100baseT_Full); phy_data->start_mode = XGBE_MODE_SGMII_100; @@ -3321,6 +3634,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, Pause); XGBE_SET_SUP(lks, Asym_Pause); XGBE_SET_SUP(lks, TP); + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) { + XGBE_SET_SUP(lks, 10baseT_Full); + phy_data->start_mode = XGBE_MODE_SGMII_10; + } if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) { XGBE_SET_SUP(lks, 100baseT_Full); phy_data->start_mode = XGBE_MODE_SGMII_100; @@ -3329,6 +3646,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, 1000baseT_Full); phy_data->start_mode = XGBE_MODE_SGMII_1000; } + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500) { + XGBE_SET_SUP(lks, 2500baseT_Full); + phy_data->start_mode = XGBE_MODE_KX_2500; + } if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) { XGBE_SET_SUP(lks, 10000baseT_Full); phy_data->start_mode = XGBE_MODE_KR; @@ -3361,6 +3682,8 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) XGBE_SET_SUP(lks, Asym_Pause); XGBE_SET_SUP(lks, TP); XGBE_SET_SUP(lks, FIBRE); + if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10) + phy_data->start_mode = XGBE_MODE_SGMII_10; if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) phy_data->start_mode = XGBE_MODE_SGMII_100; if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) @@ -3415,8 +3738,10 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata) mii->priv = pdata; mii->name = "amd-xgbe-mii"; - mii->read = xgbe_phy_mii_read; - mii->write = xgbe_phy_mii_write; + mii->read = xgbe_phy_mii_read_c22; + mii->write = xgbe_phy_mii_write_c22; + mii->read_c45 = xgbe_phy_mii_read_c45; + mii->write_c45 = xgbe_phy_mii_write_c45; mii->parent = pdata->dev; mii->phy_mask = ~0; snprintf(mii->id, sizeof(mii->id), "%s", dev_name(pdata->dev)); diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h index 7a41367c437d..ad136ed493ed 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe.h +++ b/drivers/net/ethernet/amd/xgbe/xgbe.h @@ -294,6 +294,7 @@ #define XGBE_SGMII_AN_LINK_STATUS BIT(1) #define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3)) +#define XGBE_SGMII_AN_LINK_SPEED_10 0x00 #define XGBE_SGMII_AN_LINK_SPEED_100 0x04 #define XGBE_SGMII_AN_LINK_SPEED_1000 0x08 #define XGBE_SGMII_AN_LINK_DUPLEX BIT(4) @@ -595,6 +596,7 @@ enum xgbe_mode { XGBE_MODE_KX_2500, XGBE_MODE_KR, XGBE_MODE_X, + XGBE_MODE_SGMII_10, XGBE_MODE_SGMII_100, XGBE_MODE_SGMII_1000, XGBE_MODE_SFI, @@ -623,6 +625,7 @@ enum xgbe_mb_cmd { enum xgbe_mb_subcmd { XGBE_MB_SUBCMD_NONE = 0, + XGBE_MB_SUBCMD_RX_ADAP, /* 10GbE SFP subcommands */ XGBE_MB_SUBCMD_ACTIVE = 0, @@ -774,8 +777,11 @@ struct xgbe_hw_if { int (*set_ext_mii_mode)(struct xgbe_prv_data *, unsigned int, enum xgbe_mdio_mode); - int (*read_ext_mii_regs)(struct xgbe_prv_data *, int, int); - int (*write_ext_mii_regs)(struct xgbe_prv_data *, int, int, u16); + int (*read_ext_mii_regs_c22)(struct xgbe_prv_data *, int, int); + int (*write_ext_mii_regs_c22)(struct xgbe_prv_data *, int, int, u16); + int (*read_ext_mii_regs_c45)(struct xgbe_prv_data *, int, int, int); + int (*write_ext_mii_regs_c45)(struct xgbe_prv_data *, int, int, int, + u16); int (*set_gpio)(struct xgbe_prv_data *, unsigned int); int (*clr_gpio)(struct xgbe_prv_data *, unsigned int); @@ -1311,6 +1317,10 @@ struct 
xgbe_prv_data { bool debugfs_an_cdr_workaround; bool debugfs_an_cdr_track_early; + bool en_rx_adap; + int rx_adapt_retries; + bool rx_adapt_done; + bool mode_set; }; /* Function prototypes*/ diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c index 77609dc0a08d..0b2a52199914 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c @@ -21,6 +21,7 @@ #include <linux/ip.h> #include <linux/udp.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <linux/filter.h> MODULE_LICENSE("GPL v2"); diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c index 06508eebb585..d6d6d5d37ff3 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c @@ -384,6 +384,11 @@ void aq_nic_ndev_init(struct aq_nic_s *self) self->ndev->mtu = aq_nic_cfg->mtu - ETH_HLEN; self->ndev->max_mtu = aq_hw_caps->mtu - ETH_FCS_LEN - ETH_HLEN; + self->ndev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; } void aq_nic_set_tx_ring(struct aq_nic_s *self, unsigned int idx, diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c index d30d11872719..306393f8eeca 100644 --- a/drivers/net/ethernet/atheros/alx/main.c +++ b/drivers/net/ethernet/atheros/alx/main.c @@ -1905,7 +1905,6 @@ static void alx_remove(struct pci_dev *pdev) free_netdev(alx->dev); } -#ifdef CONFIG_PM_SLEEP static int alx_suspend(struct device *dev) { struct alx_priv *alx = dev_get_drvdata(dev); @@ -1951,12 +1950,7 @@ unlock: return err; } -static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume); -#define ALX_PM_OPS (&alx_pm_ops) -#else -#define ALX_PM_OPS NULL -#endif - +static DEFINE_SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume); static pci_ers_result_t alx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state) @@ -2055,7 +2049,7 @@ static struct pci_driver alx_driver = { .probe = alx_probe, .remove = alx_remove, .err_handler = &alx_err_handlers, - .driver.pm = ALX_PM_OPS, + .driver.pm = pm_sleep_ptr(&alx_pm_ops), }; module_pci_driver(alx_driver); diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig index f4ca0c6c0f51..948586bf1b5b 100644 --- a/drivers/net/ethernet/broadcom/Kconfig +++ b/drivers/net/ethernet/broadcom/Kconfig @@ -213,6 +213,7 @@ config BNXT select NET_DEVLINK select PAGE_POOL select DIMLIB + select AUXILIARY_BUS help This driver supports Broadcom NetXtreme-C/E 10/25/40/50 gigabit Ethernet cards. 
To compile this driver as a module, choose M here: diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c index b751dc8486dc..392ec09a1d8a 100644 --- a/drivers/net/ethernet/broadcom/b44.c +++ b/drivers/net/ethernet/broadcom/b44.c @@ -196,28 +196,6 @@ static int b44_wait_bit(struct b44 *bp, unsigned long reg, return 0; } -static inline void __b44_cam_read(struct b44 *bp, unsigned char *data, int index) -{ - u32 val; - - bw32(bp, B44_CAM_CTRL, (CAM_CTRL_READ | - (index << CAM_CTRL_INDEX_SHIFT))); - - b44_wait_bit(bp, B44_CAM_CTRL, CAM_CTRL_BUSY, 100, 1); - - val = br32(bp, B44_CAM_DATA_LO); - - data[2] = (val >> 24) & 0xFF; - data[3] = (val >> 16) & 0xFF; - data[4] = (val >> 8) & 0xFF; - data[5] = (val >> 0) & 0xFF; - - val = br32(bp, B44_CAM_DATA_HI); - - data[0] = (val >> 8) & 0xFF; - data[1] = (val >> 0) & 0xFF; -} - static inline void __b44_cam_write(struct b44 *bp, const unsigned char *data, int index) { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 6c32f5c427b5..5d4b1f2ebeac 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -2414,7 +2414,6 @@ static int bnxt_async_event_process(struct bnxt *bp, } bnxt_queue_sp_work(bp); async_event_process_exit: - bnxt_ulp_async_events(bp, cmpl); return 0; } @@ -5538,7 +5537,7 @@ vnic_mru: #endif if ((bp->flags & BNXT_FLAG_STRIP_VLAN) || def_vlan) req->flags |= cpu_to_le32(VNIC_CFG_REQ_FLAGS_VLAN_STRIP_MODE); - if (!vnic_id && bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) + if (!vnic_id && bnxt_ulp_registered(bp->edev)) req->flags |= cpu_to_le32(bnxt_get_roce_vnic_mode(bp)); return hwrm_req_send(bp, req); @@ -13185,6 +13184,8 @@ static void bnxt_remove_one(struct pci_dev *pdev) if (BNXT_PF(bp)) bnxt_sriov_disable(bp); + bnxt_rdma_aux_device_uninit(bp); + bnxt_ptp_clear(bp); pci_disable_pcie_error_reporting(pdev); unregister_netdev(dev); @@ -13690,6 +13691,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) netif_set_tso_max_size(dev, GSO_MAX_SIZE); + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_RX_SG; + #ifdef CONFIG_BNXT_SRIOV init_waitqueue_head(&bp->sriov_cfg_wait); #endif @@ -13780,11 +13784,13 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) bnxt_dl_fw_reporters_create(bp); + bnxt_rdma_aux_device_init(bp); + bnxt_print_device_info(bp); pci_save_state(pdev); - return 0; + return 0; init_err_cleanup: bnxt_dl_unregister(bp); init_err_dl: @@ -13828,7 +13834,6 @@ static void bnxt_shutdown(struct pci_dev *pdev) if (netif_running(dev)) dev_close(dev); - bnxt_ulp_shutdown(bp); bnxt_clear_int_mode(bp); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 5163ef4a49ea..dcb09fbe4007 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -24,6 +24,7 @@ #include <linux/interrupt.h> #include <linux/rhashtable.h> #include <linux/crash_dump.h> +#include <linux/auxiliary_bus.h> #include <net/devlink.h> #include <net/dst_metadata.h> #include <net/xdp.h> @@ -1631,6 +1632,12 @@ struct bnxt_fw_health { #define BNXT_FW_IF_RETRY 10 #define BNXT_FW_SLOT_RESET_RETRY 4 +struct bnxt_aux_priv { + struct auxiliary_device aux_dev; + struct bnxt_en_dev *edev; + int id; +}; + enum board_idx { BCM57301, BCM57302, @@ -1852,6 +1859,7 @@ struct bnxt { #define BNXT_CHIP_P4_PLUS(bp) \ (BNXT_CHIP_P4(bp) || 
BNXT_CHIP_P5(bp)) + struct bnxt_aux_priv *aux_priv; struct bnxt_en_dev *edev; struct bnxt_napi **bnapi; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c index 26913dc816d3..8b3e7697390f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c @@ -1303,7 +1303,6 @@ int bnxt_dl_register(struct bnxt *bp) if (rc) goto err_dl_port_unreg; - devlink_set_features(dl, DEVLINK_F_RELOAD); out: devlink_register(dl); return 0; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c index a4cba7cb2783..3ed3a2b3b3a9 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c @@ -749,7 +749,6 @@ int bnxt_cfg_hw_sriov(struct bnxt *bp, int *num_vfs, bool reset) *num_vfs = rc; } - bnxt_ulp_sriov_cfg(bp, *num_vfs); return 0; } @@ -823,10 +822,8 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs) goto err_out2; rc = pci_enable_sriov(bp->pdev, *num_vfs); - if (rc) { - bnxt_ulp_sriov_cfg(bp, 0); + if (rc) goto err_out2; - } return 0; @@ -872,8 +869,6 @@ void bnxt_sriov_disable(struct bnxt *bp) rtnl_lock(); bnxt_restore_pf_fw_resources(bp); rtnl_unlock(); - - bnxt_ulp_sriov_cfg(bp, 0); } int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c index 2e54bf4fc7a7..d4cc9c371e7b 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c @@ -19,89 +19,26 @@ #include <linux/irq.h> #include <asm/byteorder.h> #include <linux/bitmap.h> +#include <linux/auxiliary_bus.h> #include "bnxt_hsi.h" #include "bnxt.h" #include "bnxt_hwrm.h" #include "bnxt_ulp.h" -static int bnxt_register_dev(struct bnxt_en_dev *edev, unsigned int ulp_id, - struct bnxt_ulp_ops *ulp_ops, void *handle) -{ - struct net_device *dev = edev->net; - struct bnxt *bp = netdev_priv(dev); - struct bnxt_ulp *ulp; - - ASSERT_RTNL(); - if (ulp_id >= BNXT_MAX_ULP) - return -EINVAL; - - ulp = &edev->ulp_tbl[ulp_id]; - if (rcu_access_pointer(ulp->ulp_ops)) { - netdev_err(bp->dev, "ulp id %d already registered\n", ulp_id); - return -EBUSY; - } - if (ulp_id == BNXT_ROCE_ULP) { - unsigned int max_stat_ctxs; - - max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); - if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || - bp->cp_nr_rings == max_stat_ctxs) - return -ENOMEM; - } - - atomic_set(&ulp->ref_count, 0); - ulp->handle = handle; - rcu_assign_pointer(ulp->ulp_ops, ulp_ops); - - if (ulp_id == BNXT_ROCE_ULP) { - if (test_bit(BNXT_STATE_OPEN, &bp->state)) - bnxt_hwrm_vnic_cfg(bp, 0); - } - - return 0; -} - -static int bnxt_unregister_dev(struct bnxt_en_dev *edev, unsigned int ulp_id) -{ - struct net_device *dev = edev->net; - struct bnxt *bp = netdev_priv(dev); - struct bnxt_ulp *ulp; - int i = 0; - - ASSERT_RTNL(); - if (ulp_id >= BNXT_MAX_ULP) - return -EINVAL; - - ulp = &edev->ulp_tbl[ulp_id]; - if (!rcu_access_pointer(ulp->ulp_ops)) { - netdev_err(bp->dev, "ulp id %d not registered\n", ulp_id); - return -EINVAL; - } - if (ulp_id == BNXT_ROCE_ULP && ulp->msix_requested) - edev->en_ops->bnxt_free_msix(edev, ulp_id); - - if (ulp->max_async_event_id) - bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true); - - RCU_INIT_POINTER(ulp->ulp_ops, NULL); - synchronize_rcu(); - ulp->max_async_event_id = 0; - ulp->async_events_bmap = NULL; - while (atomic_read(&ulp->ref_count) != 0 && i < 10) 
{ - msleep(100); - i++; - } - return 0; -} +static DEFINE_IDA(bnxt_aux_dev_ids); static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent) { struct bnxt_en_dev *edev = bp->edev; int num_msix, idx, i; - num_msix = edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested; - idx = edev->ulp_tbl[BNXT_ROCE_ULP].msix_base; + if (!edev->ulp_tbl->msix_requested) { + netdev_warn(bp->dev, "Requested MSI-X vectors insufficient\n"); + return; + } + num_msix = edev->ulp_tbl->msix_requested; + idx = edev->ulp_tbl->msix_base; for (i = 0; i < num_msix; i++) { ent[i].vector = bp->irq_tbl[idx + i].vector; ent[i].ring_idx = idx + i; @@ -115,125 +52,95 @@ static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent) } } -static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id, - struct bnxt_msix_entry *ent, int num_msix) +int bnxt_register_dev(struct bnxt_en_dev *edev, + struct bnxt_ulp_ops *ulp_ops, + void *handle) { struct net_device *dev = edev->net; struct bnxt *bp = netdev_priv(dev); - struct bnxt_hw_resc *hw_resc; - int max_idx, max_cp_rings; - int avail_msix, idx; - int total_vecs; - int rc = 0; - - ASSERT_RTNL(); - if (ulp_id != BNXT_ROCE_ULP) - return -EINVAL; - - if (!(bp->flags & BNXT_FLAG_USING_MSIX)) - return -ENODEV; + unsigned int max_stat_ctxs; + struct bnxt_ulp *ulp; - if (edev->ulp_tbl[ulp_id].msix_requested) - return -EAGAIN; + max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); + if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || + bp->cp_nr_rings == max_stat_ctxs) + return -ENOMEM; - max_cp_rings = bnxt_get_max_func_cp_rings(bp); - avail_msix = bnxt_get_avail_msix(bp, num_msix); - if (!avail_msix) + ulp = edev->ulp_tbl; + if (!ulp) return -ENOMEM; - if (avail_msix > num_msix) - avail_msix = num_msix; - - if (BNXT_NEW_RM(bp)) { - idx = bp->cp_nr_rings; - } else { - max_idx = min_t(int, bp->total_irqs, max_cp_rings); - idx = max_idx - avail_msix; - } - edev->ulp_tbl[ulp_id].msix_base = idx; - edev->ulp_tbl[ulp_id].msix_requested = avail_msix; - hw_resc = &bp->hw_resc; - total_vecs = idx + avail_msix; - if (bp->total_irqs < total_vecs || - (BNXT_NEW_RM(bp) && hw_resc->resv_irqs < total_vecs)) { - if (netif_running(dev)) { - bnxt_close_nic(bp, true, false); - rc = bnxt_open_nic(bp, true, false); - } else { - rc = bnxt_reserve_rings(bp, true); - } - } - if (rc) { - edev->ulp_tbl[ulp_id].msix_requested = 0; - return -EAGAIN; - } - if (BNXT_NEW_RM(bp)) { - int resv_msix; + ulp->handle = handle; + rcu_assign_pointer(ulp->ulp_ops, ulp_ops); + + if (test_bit(BNXT_STATE_OPEN, &bp->state)) + bnxt_hwrm_vnic_cfg(bp, 0); - resv_msix = hw_resc->resv_irqs - bp->cp_nr_rings; - avail_msix = min_t(int, resv_msix, avail_msix); - edev->ulp_tbl[ulp_id].msix_requested = avail_msix; - } - bnxt_fill_msix_vecs(bp, ent); + bnxt_fill_msix_vecs(bp, bp->edev->msix_entries); edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED; - return avail_msix; + return 0; } +EXPORT_SYMBOL(bnxt_register_dev); -static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id) +void bnxt_unregister_dev(struct bnxt_en_dev *edev) { struct net_device *dev = edev->net; struct bnxt *bp = netdev_priv(dev); + struct bnxt_ulp *ulp; + int i = 0; - ASSERT_RTNL(); - if (ulp_id != BNXT_ROCE_ULP) - return -EINVAL; + ulp = edev->ulp_tbl; + if (ulp->msix_requested) + edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED; - if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED)) - return 0; + if (ulp->max_async_event_id) + bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true); - edev->ulp_tbl[ulp_id].msix_requested = 0; - edev->flags &= 
~BNXT_EN_FLAG_MSIX_REQUESTED; - if (netif_running(dev) && !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) { - bnxt_close_nic(bp, true, false); - bnxt_open_nic(bp, true, false); + RCU_INIT_POINTER(ulp->ulp_ops, NULL); + synchronize_rcu(); + ulp->max_async_event_id = 0; + ulp->async_events_bmap = NULL; + while (atomic_read(&ulp->ref_count) != 0 && i < 10) { + msleep(100); + i++; } - return 0; + return; } +EXPORT_SYMBOL(bnxt_unregister_dev); int bnxt_get_ulp_msix_num(struct bnxt *bp) { - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { - struct bnxt_en_dev *edev = bp->edev; + u32 roce_msix = BNXT_VF(bp) ? + BNXT_MAX_VF_ROCE_MSIX : BNXT_MAX_ROCE_MSIX; - return edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested; - } - return 0; + return ((bp->flags & BNXT_FLAG_ROCE_CAP) ? + min_t(u32, roce_msix, num_online_cpus()) : 0); } int bnxt_get_ulp_msix_base(struct bnxt *bp) { - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { + if (bnxt_ulp_registered(bp->edev)) { struct bnxt_en_dev *edev = bp->edev; - if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested) - return edev->ulp_tbl[BNXT_ROCE_ULP].msix_base; + if (edev->ulp_tbl->msix_requested) + return edev->ulp_tbl->msix_base; } return 0; } int bnxt_get_ulp_stat_ctxs(struct bnxt *bp) { - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { + if (bnxt_ulp_registered(bp->edev)) { struct bnxt_en_dev *edev = bp->edev; - if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested) + if (edev->ulp_tbl->msix_requested) return BNXT_MIN_ROCE_STAT_CTXS; } return 0; } -static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id, +int bnxt_send_msg(struct bnxt_en_dev *edev, struct bnxt_fw_msg *fw_msg) { struct net_device *dev = edev->net; @@ -243,7 +150,7 @@ static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id, u32 resp_len; int rc; - if (ulp_id != BNXT_ROCE_ULP && bp->fw_reset_state) + if (bp->fw_reset_state) return -EBUSY; rc = hwrm_req_init(bp, req, 0 /* don't care */); @@ -267,42 +174,36 @@ static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id, hwrm_req_drop(bp, req); return rc; } - -static void bnxt_ulp_get(struct bnxt_ulp *ulp) -{ - atomic_inc(&ulp->ref_count); -} - -static void bnxt_ulp_put(struct bnxt_ulp *ulp) -{ - atomic_dec(&ulp->ref_count); -} +EXPORT_SYMBOL(bnxt_send_msg); void bnxt_ulp_stop(struct bnxt *bp) { + struct bnxt_aux_priv *aux_priv = bp->aux_priv; struct bnxt_en_dev *edev = bp->edev; - struct bnxt_ulp_ops *ops; - int i; if (!edev) return; edev->flags |= BNXT_EN_FLAG_ULP_STOPPED; - for (i = 0; i < BNXT_MAX_ULP; i++) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[i]; + if (aux_priv) { + struct auxiliary_device *adev; - ops = rtnl_dereference(ulp->ulp_ops); - if (!ops || !ops->ulp_stop) - continue; - ops->ulp_stop(ulp->handle); + adev = &aux_priv->aux_dev; + if (adev->dev.driver) { + struct auxiliary_driver *adrv; + pm_message_t pm = {}; + + adrv = to_auxiliary_drv(adev->dev.driver); + edev->en_state = bp->state; + adrv->suspend(adev, pm); + } } } void bnxt_ulp_start(struct bnxt *bp, int err) { + struct bnxt_aux_priv *aux_priv = bp->aux_priv; struct bnxt_en_dev *edev = bp->edev; - struct bnxt_ulp_ops *ops; - int i; if (!edev) return; @@ -312,58 +213,19 @@ void bnxt_ulp_start(struct bnxt *bp, int err) if (err) return; - for (i = 0; i < BNXT_MAX_ULP; i++) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[i]; - - ops = rtnl_dereference(ulp->ulp_ops); - if (!ops || !ops->ulp_start) - continue; - ops->ulp_start(ulp->handle); - } -} - -void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs) -{ - struct bnxt_en_dev *edev = bp->edev; - struct 
bnxt_ulp_ops *ops; - int i; - - if (!edev) - return; + if (aux_priv) { + struct auxiliary_device *adev; - for (i = 0; i < BNXT_MAX_ULP; i++) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[i]; + adev = &aux_priv->aux_dev; + if (adev->dev.driver) { + struct auxiliary_driver *adrv; - rcu_read_lock(); - ops = rcu_dereference(ulp->ulp_ops); - if (!ops || !ops->ulp_sriov_config) { - rcu_read_unlock(); - continue; + adrv = to_auxiliary_drv(adev->dev.driver); + edev->en_state = bp->state; + adrv->resume(adev); } - bnxt_ulp_get(ulp); - rcu_read_unlock(); - ops->ulp_sriov_config(ulp->handle, num_vfs); - bnxt_ulp_put(ulp); } -} -void bnxt_ulp_shutdown(struct bnxt *bp) -{ - struct bnxt_en_dev *edev = bp->edev; - struct bnxt_ulp_ops *ops; - int i; - - if (!edev) - return; - - for (i = 0; i < BNXT_MAX_ULP; i++) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[i]; - - ops = rtnl_dereference(ulp->ulp_ops); - if (!ops || !ops->ulp_shutdown) - continue; - ops->ulp_shutdown(ulp->handle); - } } void bnxt_ulp_irq_stop(struct bnxt *bp) @@ -374,8 +236,8 @@ void bnxt_ulp_irq_stop(struct bnxt *bp) if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED)) return; - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP]; + if (bnxt_ulp_registered(bp->edev)) { + struct bnxt_ulp *ulp = edev->ulp_tbl; if (!ulp->msix_requested) return; @@ -395,8 +257,8 @@ void bnxt_ulp_irq_restart(struct bnxt *bp, int err) if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED)) return; - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP]; + if (bnxt_ulp_registered(bp->edev)) { + struct bnxt_ulp *ulp = edev->ulp_tbl; struct bnxt_msix_entry *ent = NULL; if (!ulp->msix_requested) @@ -418,46 +280,15 @@ void bnxt_ulp_irq_restart(struct bnxt *bp, int err) } } -void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl) -{ - u16 event_id = le16_to_cpu(cmpl->event_id); - struct bnxt_en_dev *edev = bp->edev; - struct bnxt_ulp_ops *ops; - int i; - - if (!edev) - return; - - rcu_read_lock(); - for (i = 0; i < BNXT_MAX_ULP; i++) { - struct bnxt_ulp *ulp = &edev->ulp_tbl[i]; - - ops = rcu_dereference(ulp->ulp_ops); - if (!ops || !ops->ulp_async_notifier) - continue; - if (!ulp->async_events_bmap || - event_id > ulp->max_async_event_id) - continue; - - /* Read max_async_event_id first before testing the bitmap. 
*/ - smp_rmb(); - if (test_bit(event_id, ulp->async_events_bmap)) - ops->ulp_async_notifier(ulp->handle, cmpl); - } - rcu_read_unlock(); -} - -static int bnxt_register_async_events(struct bnxt_en_dev *edev, unsigned int ulp_id, - unsigned long *events_bmap, u16 max_id) +int bnxt_register_async_events(struct bnxt_en_dev *edev, + unsigned long *events_bmap, + u16 max_id) { struct net_device *dev = edev->net; struct bnxt *bp = netdev_priv(dev); struct bnxt_ulp *ulp; - if (ulp_id >= BNXT_MAX_ULP) - return -EINVAL; - - ulp = &edev->ulp_tbl[ulp_id]; + ulp = edev->ulp_tbl; ulp->async_events_bmap = events_bmap; /* Make sure bnxt_ulp_async_events() sees this order */ smp_wmb(); @@ -465,38 +296,121 @@ static int bnxt_register_async_events(struct bnxt_en_dev *edev, unsigned int ulp bnxt_hwrm_func_drv_rgtr(bp, events_bmap, max_id + 1, true); return 0; } +EXPORT_SYMBOL(bnxt_register_async_events); -static const struct bnxt_en_ops bnxt_en_ops_tbl = { - .bnxt_register_device = bnxt_register_dev, - .bnxt_unregister_device = bnxt_unregister_dev, - .bnxt_request_msix = bnxt_req_msix_vecs, - .bnxt_free_msix = bnxt_free_msix_vecs, - .bnxt_send_fw_msg = bnxt_send_msg, - .bnxt_register_fw_async_events = bnxt_register_async_events, -}; +void bnxt_rdma_aux_device_uninit(struct bnxt *bp) +{ + struct bnxt_aux_priv *aux_priv; + struct auxiliary_device *adev; + + /* Skip if no auxiliary device init was done. */ + if (!(bp->flags & BNXT_FLAG_ROCE_CAP)) + return; + + aux_priv = bp->aux_priv; + adev = &aux_priv->aux_dev; + auxiliary_device_delete(adev); + auxiliary_device_uninit(adev); +} -struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev) +static void bnxt_aux_dev_release(struct device *dev) { - struct bnxt *bp = netdev_priv(dev); - struct bnxt_en_dev *edev; + struct bnxt_aux_priv *aux_priv = + container_of(dev, struct bnxt_aux_priv, aux_dev.dev); + + ida_free(&bnxt_aux_dev_ids, aux_priv->id); + kfree(aux_priv->edev->ulp_tbl); + kfree(aux_priv->edev); + kfree(aux_priv); +} + +static void bnxt_set_edev_info(struct bnxt_en_dev *edev, struct bnxt *bp) +{ + edev->net = bp->dev; + edev->pdev = bp->pdev; + edev->l2_db_size = bp->db_size; + edev->l2_db_size_nc = bp->db_size; - edev = bp->edev; - if (!edev) { - edev = kzalloc(sizeof(*edev), GFP_KERNEL); - if (!edev) - return ERR_PTR(-ENOMEM); - edev->en_ops = &bnxt_en_ops_tbl; - edev->net = dev; - edev->pdev = bp->pdev; - edev->l2_db_size = bp->db_size; - edev->l2_db_size_nc = bp->db_size; - bp->edev = edev; - } - edev->flags &= ~BNXT_EN_FLAG_ROCE_CAP; if (bp->flags & BNXT_FLAG_ROCEV1_CAP) edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP; if (bp->flags & BNXT_FLAG_ROCEV2_CAP) edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP; - return bp->edev; + if (bp->flags & BNXT_FLAG_VF) + edev->flags |= BNXT_EN_FLAG_VF; + + edev->chip_num = bp->chip_num; + edev->hw_ring_stats_size = bp->hw_ring_stats_size; + edev->pf_port_id = bp->pf.port_id; + edev->en_state = bp->state; + + edev->ulp_tbl->msix_requested = bnxt_get_ulp_msix_num(bp); +} + +void bnxt_rdma_aux_device_init(struct bnxt *bp) +{ + struct auxiliary_device *aux_dev; + struct bnxt_aux_priv *aux_priv; + struct bnxt_en_dev *edev; + struct bnxt_ulp *ulp; + int rc; + + if (!(bp->flags & BNXT_FLAG_ROCE_CAP)) + return; + + bp->aux_priv = kzalloc(sizeof(*bp->aux_priv), GFP_KERNEL); + if (!bp->aux_priv) + goto exit; + + bp->aux_priv->id = ida_alloc(&bnxt_aux_dev_ids, GFP_KERNEL); + if (bp->aux_priv->id < 0) { + netdev_warn(bp->dev, + "ida alloc failed for ROCE auxiliary device\n"); + kfree(bp->aux_priv); + goto exit; + } + + aux_priv = bp->aux_priv; 
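	/* Ownership note: aux_priv holds the id handed out by the
	 * bnxt_aux_dev_ids IDA.  Cleanup stays manual (ida_free() plus
	 * kfree()) only until auxiliary_device_init() succeeds below;
	 * from that point bnxt_aux_dev_release() owns freeing the id,
	 * the ULP table and the edev, so later error unwinding only
	 * needs auxiliary_device_uninit().  auxiliary_device_add()
	 * finally publishes the "rdma" device for the RoCE ULP driver
	 * to bind against.
	 */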
+ aux_dev = &aux_priv->aux_dev; + aux_dev->id = aux_priv->id; + aux_dev->name = "rdma"; + aux_dev->dev.parent = &bp->pdev->dev; + aux_dev->dev.release = bnxt_aux_dev_release; + + rc = auxiliary_device_init(aux_dev); + if (rc) { + ida_free(&bnxt_aux_dev_ids, bp->aux_priv->id); + kfree(bp->aux_priv); + goto exit; + } + + /* From this point, all cleanup will happen via the .release callback & + * any error unwinding will need to include a call to + * auxiliary_device_uninit. + */ + edev = kzalloc(sizeof(*edev), GFP_KERNEL); + if (!edev) + goto aux_dev_uninit; + + ulp = kzalloc(sizeof(*ulp), GFP_KERNEL); + if (!ulp) + goto aux_dev_uninit; + + edev->ulp_tbl = ulp; + aux_priv->edev = edev; + bp->edev = edev; + bnxt_set_edev_info(edev, bp); + + rc = auxiliary_device_add(aux_dev); + if (rc) { + netdev_warn(bp->dev, + "Failed to add auxiliary device for ROCE\n"); + goto aux_dev_uninit; + } + + return; + +aux_dev_uninit: + auxiliary_device_uninit(aux_dev); +exit: + bp->flags &= ~BNXT_FLAG_ROCE_CAP; } -EXPORT_SYMBOL(bnxt_ulp_probe); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h index 42b50abc3e91..80cbc4b6130a 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h @@ -15,6 +15,8 @@ #define BNXT_MIN_ROCE_CP_RINGS 2 #define BNXT_MIN_ROCE_STAT_CTXS 1 +#define BNXT_MAX_ROCE_MSIX 9 +#define BNXT_MAX_VF_ROCE_MSIX 2 struct hwrm_async_event_cmpl; struct bnxt; @@ -26,12 +28,6 @@ struct bnxt_msix_entry { }; struct bnxt_ulp_ops { - /* async_notifier() cannot sleep (in BH context) */ - void (*ulp_async_notifier)(void *, struct hwrm_async_event_cmpl *); - void (*ulp_stop)(void *); - void (*ulp_start)(void *); - void (*ulp_sriov_config)(void *, int); - void (*ulp_shutdown)(void *); void (*ulp_irq_stop)(void *); void (*ulp_irq_restart)(void *, struct bnxt_msix_entry *); }; @@ -57,6 +53,7 @@ struct bnxt_ulp { struct bnxt_en_dev { struct net_device *net; struct pci_dev *pdev; + struct bnxt_msix_entry msix_entries[BNXT_MAX_ROCE_MSIX]; u32 flags; #define BNXT_EN_FLAG_ROCEV1_CAP 0x1 #define BNXT_EN_FLAG_ROCEV2_CAP 0x2 @@ -64,8 +61,10 @@ struct bnxt_en_dev { BNXT_EN_FLAG_ROCEV2_CAP) #define BNXT_EN_FLAG_MSIX_REQUESTED 0x4 #define BNXT_EN_FLAG_ULP_STOPPED 0x8 - const struct bnxt_en_ops *en_ops; - struct bnxt_ulp ulp_tbl[BNXT_MAX_ULP]; + #define BNXT_EN_FLAG_VF 0x10 +#define BNXT_EN_VF(edev) ((edev)->flags & BNXT_EN_FLAG_VF) + + struct bnxt_ulp *ulp_tbl; int l2_db_size; /* Doorbell BAR size in * bytes mapped by L2 * driver. @@ -74,24 +73,19 @@ struct bnxt_en_dev { * bytes mapped as non- * cacheable. */ + u16 chip_num; + u16 hw_ring_stats_size; + u16 pf_port_id; + unsigned long en_state; /* Could be checked in + * RoCE driver suspend + * mode only. Will be + * updated in resume. 
+ */ }; -struct bnxt_en_ops { - int (*bnxt_register_device)(struct bnxt_en_dev *, unsigned int, - struct bnxt_ulp_ops *, void *); - int (*bnxt_unregister_device)(struct bnxt_en_dev *, unsigned int); - int (*bnxt_request_msix)(struct bnxt_en_dev *, unsigned int, - struct bnxt_msix_entry *, int); - int (*bnxt_free_msix)(struct bnxt_en_dev *, unsigned int); - int (*bnxt_send_fw_msg)(struct bnxt_en_dev *, unsigned int, - struct bnxt_fw_msg *); - int (*bnxt_register_fw_async_events)(struct bnxt_en_dev *, unsigned int, - unsigned long *, u16); -}; - -static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id) +static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev) { - if (edev && rcu_access_pointer(edev->ulp_tbl[ulp_id].ulp_ops)) + if (edev && edev->ulp_tbl) return true; return false; } @@ -102,10 +96,15 @@ int bnxt_get_ulp_stat_ctxs(struct bnxt *bp); void bnxt_ulp_stop(struct bnxt *bp); void bnxt_ulp_start(struct bnxt *bp, int err); void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs); -void bnxt_ulp_shutdown(struct bnxt *bp); void bnxt_ulp_irq_stop(struct bnxt *bp); void bnxt_ulp_irq_restart(struct bnxt *bp, int err); void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl); -struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev); - +void bnxt_rdma_aux_device_uninit(struct bnxt *bp); +void bnxt_rdma_aux_device_init(struct bnxt *bp); +int bnxt_register_dev(struct bnxt_en_dev *edev, struct bnxt_ulp_ops *ulp_ops, + void *handle); +void bnxt_unregister_dev(struct bnxt_en_dev *edev); +int bnxt_send_msg(struct bnxt_en_dev *edev, struct bnxt_fw_msg *fw_msg); +int bnxt_register_async_events(struct bnxt_en_dev *edev, + unsigned long *events_bmap, u16 max_id); #endif diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 36d5202c0aee..5843c93b1711 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -422,9 +422,11 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog) if (prog) { bnxt_set_rx_skb_mode(bp, true); + xdp_features_set_redirect_target(dev, true); } else { int rx, tx; + xdp_features_clear_redirect_target(dev); bnxt_set_rx_skb_mode(bp, false); bnxt_get_max_rings(bp, &rx, &tx, true); if (rx > 1) { diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index 21973046b12b..d937daa8ee88 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -2316,6 +2316,14 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring, __func__, p_index, ring->c_index, ring->read_ptr, dma_length_status); + if (unlikely(len > RX_BUF_LENGTH)) { + netif_err(priv, rx_status, dev, "oversized packet\n"); + dev->stats.rx_length_errors++; + dev->stats.rx_errors++; + dev_kfree_skb_any(skb); + goto next; + } + if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) { netif_err(priv, rx_status, dev, "dropping fragmented packet!\n"); diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c index f55d9d9c01a8..3a4b6cb7b7b9 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c @@ -77,14 +77,18 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol) if (wol->wolopts) { device_set_wakeup_enable(kdev, 1); /* Avoid unbalanced enable_irq_wake calls */ - if 
(priv->wol_irq_disabled) + if (priv->wol_irq_disabled) { enable_irq_wake(priv->wol_irq); + enable_irq_wake(priv->irq0); + } priv->wol_irq_disabled = false; } else { device_set_wakeup_enable(kdev, 0); /* Avoid unbalanced disable_irq_wake calls */ - if (!priv->wol_irq_disabled) + if (!priv->wol_irq_disabled) { disable_irq_wake(priv->wol_irq); + disable_irq_wake(priv->irq0); + } priv->wol_irq_disabled = true; } diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c index b615176338b2..be042905ada2 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmmii.c +++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c @@ -176,15 +176,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable) static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv) { - u32 reg; - - if (!GENET_IS_V5(priv)) { - /* Speed settings are set in bcmgenet_mii_setup() */ - reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL); - reg |= LED_ACT_SOURCE_MAC; - bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL); - } - if (priv->hw_params->flags & GENET_HAS_MOCA_LINK_DET) fixed_phy_set_link_update(priv->dev->phydev, bcmgenet_fixed_phy_link_update); @@ -217,6 +208,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init) if (!phy_name) { phy_name = "MoCA"; + if (!GENET_IS_V5(priv)) + port_ctrl |= LED_ACT_SOURCE_MAC; bcmgenet_moca_phy_setup(priv); } break; diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h index 9c410f93a103..14dfec4db8f9 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -768,8 +768,6 @@ #define gem_readl_n(port, reg, idx) (port)->macb_reg_readl((port), GEM_##reg + idx * 4) #define gem_writel_n(port, reg, idx, value) (port)->macb_reg_writel((port), GEM_##reg + idx * 4, (value)) -#define PTP_TS_BUFFER_SIZE 128 /* must be power of 2 */ - /* Conditional GEM/MACB macros. These perform the operation to the correct * register dependent on whether the device is a GEM or a MACB. 
For registers * and bitfields that are common across both devices, use macb_{read,write}l @@ -819,11 +817,6 @@ struct macb_dma_desc_ptp { u32 ts_1; u32 ts_2; }; - -struct gem_tx_ts { - struct sk_buff *skb; - struct macb_dma_desc_ptp desc_ptp; -}; #endif /* DMA descriptor bitfields */ @@ -1224,12 +1217,6 @@ struct macb_queue { void *rx_buffers; struct napi_struct napi_rx; struct queue_stats stats; - -#ifdef CONFIG_MACB_USE_HWSTAMP - struct work_struct tx_ts_task; - unsigned int tx_ts_head, tx_ts_tail; - struct gem_tx_ts tx_timestamps[PTP_TS_BUFFER_SIZE]; -#endif }; struct ethtool_rx_fs_item { @@ -1340,14 +1327,14 @@ enum macb_bd_control { void gem_ptp_init(struct net_device *ndev); void gem_ptp_remove(struct net_device *ndev); -int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *des); +void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc); void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc); -static inline int gem_ptp_do_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *desc) +static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { - if (queue->bp->tstamp_config.tx_type == TSTAMP_DISABLED) - return -ENOTSUPP; + if (bp->tstamp_config.tx_type == TSTAMP_DISABLED) + return; - return gem_ptp_txstamp(queue, skb, desc); + gem_ptp_txstamp(bp, skb, desc); } static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) @@ -1363,11 +1350,7 @@ int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd); static inline void gem_ptp_init(struct net_device *ndev) { } static inline void gem_ptp_remove(struct net_device *ndev) { } -static inline int gem_ptp_do_txstamp(struct macb_queue *queue, struct sk_buff *skb, struct macb_dma_desc *desc) -{ - return -1; -} - +static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { } static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { } #endif diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index 6cda31520c42..6e141a8bbf43 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -334,7 +334,7 @@ static int macb_mdio_wait_for_idle(struct macb *bp) 1, MACB_MDIO_TIMEOUT); } -static int macb_mdio_read(struct mii_bus *bus, int mii_id, int regnum) +static int macb_mdio_read_c22(struct mii_bus *bus, int mii_id, int regnum) { struct macb *bp = bus->priv; int status; @@ -347,35 +347,62 @@ static int macb_mdio_read(struct mii_bus *bus, int mii_id, int regnum) if (status < 0) goto mdio_read_exit; - if (regnum & MII_ADDR_C45) { - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) - | MACB_BF(RW, MACB_MAN_C45_ADDR) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, (regnum >> 16) & 0x1F) - | MACB_BF(DATA, regnum & 0xFFFF) - | MACB_BF(CODE, MACB_MAN_C45_CODE))); - - status = macb_mdio_wait_for_idle(bp); - if (status < 0) - goto mdio_read_exit; - - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) - | MACB_BF(RW, MACB_MAN_C45_READ) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, (regnum >> 16) & 0x1F) - | MACB_BF(CODE, MACB_MAN_C45_CODE))); - } else { - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF) - | MACB_BF(RW, MACB_MAN_C22_READ) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, regnum) - | MACB_BF(CODE, MACB_MAN_C22_CODE))); + macb_writel(bp, MAN, (MACB_BF(SOF, 
MACB_MAN_C22_SOF) + | MACB_BF(RW, MACB_MAN_C22_READ) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, regnum) + | MACB_BF(CODE, MACB_MAN_C22_CODE))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_read_exit; + + status = MACB_BFEXT(DATA, macb_readl(bp, MAN)); + +mdio_read_exit: + pm_runtime_mark_last_busy(&bp->pdev->dev); + pm_runtime_put_autosuspend(&bp->pdev->dev); +mdio_pm_exit: + return status; +} + +static int macb_mdio_read_c45(struct mii_bus *bus, int mii_id, int devad, + int regnum) +{ + struct macb *bp = bus->priv; + int status; + + status = pm_runtime_get_sync(&bp->pdev->dev); + if (status < 0) { + pm_runtime_put_noidle(&bp->pdev->dev); + goto mdio_pm_exit; } status = macb_mdio_wait_for_idle(bp); if (status < 0) goto mdio_read_exit; + macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) + | MACB_BF(RW, MACB_MAN_C45_ADDR) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, devad & 0x1F) + | MACB_BF(DATA, regnum & 0xFFFF) + | MACB_BF(CODE, MACB_MAN_C45_CODE))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_read_exit; + + macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) + | MACB_BF(RW, MACB_MAN_C45_READ) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, devad & 0x1F) + | MACB_BF(CODE, MACB_MAN_C45_CODE))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_read_exit; + status = MACB_BFEXT(DATA, macb_readl(bp, MAN)); mdio_read_exit: @@ -385,8 +412,8 @@ mdio_pm_exit: return status; } -static int macb_mdio_write(struct mii_bus *bus, int mii_id, int regnum, - u16 value) +static int macb_mdio_write_c22(struct mii_bus *bus, int mii_id, int regnum, + u16 value) { struct macb *bp = bus->priv; int status; @@ -399,37 +426,63 @@ static int macb_mdio_write(struct mii_bus *bus, int mii_id, int regnum, if (status < 0) goto mdio_write_exit; - if (regnum & MII_ADDR_C45) { - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) - | MACB_BF(RW, MACB_MAN_C45_ADDR) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, (regnum >> 16) & 0x1F) - | MACB_BF(DATA, regnum & 0xFFFF) - | MACB_BF(CODE, MACB_MAN_C45_CODE))); - - status = macb_mdio_wait_for_idle(bp); - if (status < 0) - goto mdio_write_exit; - - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) - | MACB_BF(RW, MACB_MAN_C45_WRITE) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, (regnum >> 16) & 0x1F) - | MACB_BF(CODE, MACB_MAN_C45_CODE) - | MACB_BF(DATA, value))); - } else { - macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF) - | MACB_BF(RW, MACB_MAN_C22_WRITE) - | MACB_BF(PHYA, mii_id) - | MACB_BF(REGA, regnum) - | MACB_BF(CODE, MACB_MAN_C22_CODE) - | MACB_BF(DATA, value))); + macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C22_SOF) + | MACB_BF(RW, MACB_MAN_C22_WRITE) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, regnum) + | MACB_BF(CODE, MACB_MAN_C22_CODE) + | MACB_BF(DATA, value))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_write_exit; + +mdio_write_exit: + pm_runtime_mark_last_busy(&bp->pdev->dev); + pm_runtime_put_autosuspend(&bp->pdev->dev); +mdio_pm_exit: + return status; +} + +static int macb_mdio_write_c45(struct mii_bus *bus, int mii_id, + int devad, int regnum, + u16 value) +{ + struct macb *bp = bus->priv; + int status; + + status = pm_runtime_get_sync(&bp->pdev->dev); + if (status < 0) { + pm_runtime_put_noidle(&bp->pdev->dev); + goto mdio_pm_exit; } status = macb_mdio_wait_for_idle(bp); if (status < 0) goto mdio_write_exit; + macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) + | MACB_BF(RW, MACB_MAN_C45_ADDR) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, devad & 
0x1F) + | MACB_BF(DATA, regnum & 0xFFFF) + | MACB_BF(CODE, MACB_MAN_C45_CODE))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_write_exit; + + macb_writel(bp, MAN, (MACB_BF(SOF, MACB_MAN_C45_SOF) + | MACB_BF(RW, MACB_MAN_C45_WRITE) + | MACB_BF(PHYA, mii_id) + | MACB_BF(REGA, devad & 0x1F) + | MACB_BF(CODE, MACB_MAN_C45_CODE) + | MACB_BF(DATA, value))); + + status = macb_mdio_wait_for_idle(bp); + if (status < 0) + goto mdio_write_exit; + mdio_write_exit: pm_runtime_mark_last_busy(&bp->pdev->dev); pm_runtime_put_autosuspend(&bp->pdev->dev); @@ -902,8 +955,10 @@ static int macb_mii_init(struct macb *bp) } bp->mii_bus->name = "MACB_mii_bus"; - bp->mii_bus->read = &macb_mdio_read; - bp->mii_bus->write = &macb_mdio_write; + bp->mii_bus->read = &macb_mdio_read_c22; + bp->mii_bus->write = &macb_mdio_write_c22; + bp->mii_bus->read_c45 = &macb_mdio_read_c45; + bp->mii_bus->write_c45 = &macb_mdio_write_c45; snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", bp->pdev->name, bp->pdev->id); bp->mii_bus->priv = bp; @@ -1191,13 +1246,9 @@ static int macb_tx_complete(struct macb_queue *queue, int budget) /* First, update TX stats if needed */ if (skb) { if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && - !ptp_one_step_sync(skb) && - gem_ptp_do_txstamp(queue, skb, desc) == 0) { - /* skb now belongs to timestamp buffer - * and will be removed later - */ - tx_skb->skb = NULL; - } + !ptp_one_step_sync(skb)) + gem_ptp_do_txstamp(bp, skb, desc); + netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n", macb_tx_ring_wrap(bp, tail), skb->data); @@ -2253,6 +2304,12 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev) return ret; } +#ifdef CONFIG_MACB_USE_HWSTAMP + if ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && + (bp->hw_dma_cap & HW_DMA_CAP_PTP)) + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; +#endif + is_lso = (skb_shinfo(skb)->gso_size != 0); if (is_lso) { diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c index e6cb20aaa76a..f962a95068a0 100644 --- a/drivers/net/ethernet/cadence/macb_ptp.c +++ b/drivers/net/ethernet/cadence/macb_ptp.c @@ -292,79 +292,39 @@ void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb, } } -static void gem_tstamp_tx(struct macb *bp, struct sk_buff *skb, - struct macb_dma_desc_ptp *desc_ptp) +void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb, + struct macb_dma_desc *desc) { struct skb_shared_hwtstamps shhwtstamps; - struct timespec64 ts; - - gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts); - memset(&shhwtstamps, 0, sizeof(shhwtstamps)); - shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec); - skb_tstamp_tx(skb, &shhwtstamps); -} - -int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb, - struct macb_dma_desc *desc) -{ - unsigned long tail = READ_ONCE(queue->tx_ts_tail); - unsigned long head = queue->tx_ts_head; struct macb_dma_desc_ptp *desc_ptp; - struct gem_tx_ts *tx_timestamp; - - if (!GEM_BFEXT(DMA_TXVALID, desc->ctrl)) - return -EINVAL; + struct timespec64 ts; - if (CIRC_SPACE(head, tail, PTP_TS_BUFFER_SIZE) == 0) - return -ENOMEM; + if (!GEM_BFEXT(DMA_TXVALID, desc->ctrl)) { + dev_warn_ratelimited(&bp->pdev->dev, + "Timestamp not set in TX BD as expected\n"); + return; + } - desc_ptp = macb_ptp_desc(queue->bp, desc); + desc_ptp = macb_ptp_desc(bp, desc); /* Unlikely but check */ - if (!desc_ptp) - return -EINVAL; - skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; - tx_timestamp = &queue->tx_timestamps[head]; - tx_timestamp->skb 
= skb; + if (!desc_ptp) { + dev_warn_ratelimited(&bp->pdev->dev, + "Timestamp not supported in BD\n"); + return; + } + /* ensure ts_1/ts_2 is loaded after ctrl (TX_USED check) */ dma_rmb(); - tx_timestamp->desc_ptp.ts_1 = desc_ptp->ts_1; - tx_timestamp->desc_ptp.ts_2 = desc_ptp->ts_2; - /* move head */ - smp_store_release(&queue->tx_ts_head, - (head + 1) & (PTP_TS_BUFFER_SIZE - 1)); - - schedule_work(&queue->tx_ts_task); - return 0; -} + gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts); -static void gem_tx_timestamp_flush(struct work_struct *work) -{ - struct macb_queue *queue = - container_of(work, struct macb_queue, tx_ts_task); - unsigned long head, tail; - struct gem_tx_ts *tx_ts; - - /* take current head */ - head = smp_load_acquire(&queue->tx_ts_head); - tail = queue->tx_ts_tail; - - while (CIRC_CNT(head, tail, PTP_TS_BUFFER_SIZE)) { - tx_ts = &queue->tx_timestamps[tail]; - gem_tstamp_tx(queue->bp, tx_ts->skb, &tx_ts->desc_ptp); - /* cleanup */ - dev_kfree_skb_any(tx_ts->skb); - /* remove old tail */ - smp_store_release(&queue->tx_ts_tail, - (tail + 1) & (PTP_TS_BUFFER_SIZE - 1)); - tail = queue->tx_ts_tail; - } + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); + shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec); + skb_tstamp_tx(skb, &shhwtstamps); } void gem_ptp_init(struct net_device *dev) { struct macb *bp = netdev_priv(dev); - struct macb_queue *queue; - unsigned int q; bp->ptp_clock_info = gem_ptp_caps_template; @@ -384,11 +344,6 @@ void gem_ptp_init(struct net_device *dev) } spin_lock_init(&bp->tsu_clk_lock); - for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) { - queue->tx_ts_head = 0; - queue->tx_ts_tail = 0; - INIT_WORK(&queue->tx_ts_task, gem_tx_timestamp_flush); - } gem_ptp_init_tsu(bp); diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index f2f95493ec89..8b25313c7f6b 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -2218,6 +2218,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) netdev->netdev_ops = &nicvf_netdev_ops; netdev->watchdog_timeo = NICVF_TX_TIMEOUT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; + /* MTU range: 64 - 9200 */ netdev->min_mtu = NIC_HW_MIN_FRS; netdev->max_mtu = NIC_HW_MAX_FRS; diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index 9cbce1faab26..7db2403c4c9c 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -6490,21 +6490,21 @@ static const struct tlsdev_ops cxgb4_ktls_ops = { #if IS_ENABLED(CONFIG_CHELSIO_IPSEC_INLINE) -static int cxgb4_xfrm_add_state(struct xfrm_state *x) +static int cxgb4_xfrm_add_state(struct xfrm_state *x, + struct netlink_ext_ack *extack) { struct adapter *adap = netdev2adap(x->xso.dev); int ret; if (!mutex_trylock(&uld_mutex)) { - dev_dbg(adap->pdev_dev, - "crypto uld critical resource is under use\n"); + NL_SET_ERR_MSG_MOD(extack, "crypto uld critical resource is under use"); return -EBUSY; } ret = chcr_offload_state(adap, CXGB4_XFRMDEV_OPS); if (ret) goto out_unlock; - ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x); + ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x, extack); out_unlock: mutex_unlock(&uld_mutex); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h index be96f1dc0372..d4a862a9fd7d 
100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h @@ -4,7 +4,7 @@ #ifndef __CXGB4_TC_MQPRIO_H__ #define __CXGB4_TC_MQPRIO_H__ -#include <net/pkt_cls.h> +#include <net/pkt_sched.h> #define CXGB4_EOSW_TXQ_DEFAULT_DESC_NUM 128 diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c index ca21794281d6..3731c93f8f95 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c @@ -80,7 +80,8 @@ static void *ch_ipsec_uld_add(const struct cxgb4_lld_info *infop); static void ch_ipsec_advance_esn_state(struct xfrm_state *x); static void ch_ipsec_xfrm_free_state(struct xfrm_state *x); static void ch_ipsec_xfrm_del_state(struct xfrm_state *x); -static int ch_ipsec_xfrm_add_state(struct xfrm_state *x); +static int ch_ipsec_xfrm_add_state(struct xfrm_state *x, + struct netlink_ext_ack *extack); static const struct xfrmdev_ops ch_ipsec_xfrmdev_ops = { .xdo_dev_state_add = ch_ipsec_xfrm_add_state, @@ -226,65 +227,66 @@ out: * returns 0 on success, negative error if failed to send message to FPGA * positive error if FPGA returned a bad response */ -static int ch_ipsec_xfrm_add_state(struct xfrm_state *x) +static int ch_ipsec_xfrm_add_state(struct xfrm_state *x, + struct netlink_ext_ack *extack) { struct ipsec_sa_entry *sa_entry; int res = 0; if (x->props.aalgo != SADB_AALG_NONE) { - pr_debug("Cannot offload authenticated xfrm states\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload authenticated xfrm states"); return -EINVAL; } if (x->props.calgo != SADB_X_CALG_NONE) { - pr_debug("Cannot offload compressed xfrm states\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload compressed xfrm states"); return -EINVAL; } if (x->props.family != AF_INET && x->props.family != AF_INET6) { - pr_debug("Only IPv4/6 xfrm state offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only IPv4/6 xfrm state offloaded"); return -EINVAL; } if (x->props.mode != XFRM_MODE_TRANSPORT && x->props.mode != XFRM_MODE_TUNNEL) { - pr_debug("Only transport and tunnel xfrm offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm offload"); return -EINVAL; } if (x->id.proto != IPPROTO_ESP) { - pr_debug("Only ESP xfrm state offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only ESP xfrm state offloaded"); return -EINVAL; } if (x->encap) { - pr_debug("Encapsulated xfrm state not offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Encapsulated xfrm state not offloaded"); return -EINVAL; } if (!x->aead) { - pr_debug("Cannot offload xfrm states without aead\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without aead"); return -EINVAL; } if (x->aead->alg_icv_len != 128 && x->aead->alg_icv_len != 96) { - pr_debug("Cannot offload xfrm states with AEAD ICV length other than 96b & 128b\n"); - return -EINVAL; + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD ICV length other than 96b & 128b"); + return -EINVAL; } if ((x->aead->alg_key_len != 128 + 32) && (x->aead->alg_key_len != 256 + 32)) { - pr_debug("cannot offload xfrm states with AEAD key length other than 128/256 bit\n"); + NL_SET_ERR_MSG_MOD(extack, "cannot offload xfrm states with AEAD key length other than 128/256 bit"); return -EINVAL; } if (x->tfcpad) { - pr_debug("Cannot offload xfrm states with tfc padding\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with tfc padding"); return -EINVAL; } if 
(!x->geniv) { - pr_debug("Cannot offload xfrm states without geniv\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without geniv"); return -EINVAL; } if (strcmp(x->geniv, "seqiv")) { - pr_debug("Cannot offload xfrm states with geniv other than seqiv\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with geniv other than seqiv"); return -EINVAL; } if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { - pr_debug("Unsupported xfrm offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload"); return -EINVAL; } diff --git a/drivers/net/ethernet/engleder/Makefile b/drivers/net/ethernet/engleder/Makefile index b6e3b16623de..b98135f65eb7 100644 --- a/drivers/net/ethernet/engleder/Makefile +++ b/drivers/net/ethernet/engleder/Makefile @@ -6,5 +6,5 @@ obj-$(CONFIG_TSNEP) += tsnep.o tsnep-objs := tsnep_main.o tsnep_ethtool.o tsnep_ptp.o tsnep_tc.o \ - tsnep_rxnfc.o $(tsnep-y) + tsnep_rxnfc.o tsnep_xdp.o tsnep-$(CONFIG_TSNEP_SELFTESTS) += tsnep_selftests.o diff --git a/drivers/net/ethernet/engleder/tsnep.h b/drivers/net/ethernet/engleder/tsnep.h index f93ba48bac3f..058c2bcf31a7 100644 --- a/drivers/net/ethernet/engleder/tsnep.h +++ b/drivers/net/ethernet/engleder/tsnep.h @@ -65,7 +65,11 @@ struct tsnep_tx_entry { u32 properties; - struct sk_buff *skb; + u32 type; + union { + struct sk_buff *skb; + struct xdp_frame *xdpf; + }; size_t len; DEFINE_DMA_UNMAP_ADDR(dma); }; @@ -78,8 +82,6 @@ struct tsnep_tx { void *page[TSNEP_RING_PAGE_COUNT]; dma_addr_t page_dma[TSNEP_RING_PAGE_COUNT]; - /* TX ring lock */ - spinlock_t lock; struct tsnep_tx_entry entry[TSNEP_RING_SIZE]; int write; int read; @@ -107,6 +109,7 @@ struct tsnep_rx { struct tsnep_adapter *adapter; void __iomem *addr; int queue_index; + int tx_queue_index; void *page[TSNEP_RING_PAGE_COUNT]; dma_addr_t page_dma[TSNEP_RING_PAGE_COUNT]; @@ -123,6 +126,8 @@ struct tsnep_rx { u32 dropped; u32 multicast; u32 alloc_failed; + + struct xdp_rxq_info xdp_rxq; }; struct tsnep_queue { @@ -172,6 +177,8 @@ struct tsnep_adapter { int rxnfc_count; int rxnfc_max; + struct bpf_prog *xdp_prog; + int num_tx_queues; struct tsnep_tx tx[TSNEP_MAX_QUEUES]; int num_rx_queues; @@ -204,6 +211,9 @@ int tsnep_rxnfc_add_rule(struct tsnep_adapter *adapter, int tsnep_rxnfc_del_rule(struct tsnep_adapter *adapter, struct ethtool_rxnfc *cmd); +int tsnep_xdp_setup_prog(struct tsnep_adapter *adapter, struct bpf_prog *prog, + struct netlink_ext_ack *extack); + #if IS_ENABLED(CONFIG_TSNEP_SELFTESTS) int tsnep_ethtool_get_test_count(void); void tsnep_ethtool_get_test_strings(u8 *data); diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c index 00e2108f2ca4..6982aaa928b5 100644 --- a/drivers/net/ethernet/engleder/tsnep_main.c +++ b/drivers/net/ethernet/engleder/tsnep_main.c @@ -26,9 +26,11 @@ #include <linux/etherdevice.h> #include <linux/phy.h> #include <linux/iopoll.h> +#include <linux/bpf.h> +#include <linux/bpf_trace.h> -#define TSNEP_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) -#define TSNEP_HEADROOM ALIGN(TSNEP_SKB_PAD, 4) +#define TSNEP_RX_OFFSET (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN) +#define TSNEP_HEADROOM ALIGN(TSNEP_RX_OFFSET, 4) #define TSNEP_MAX_RX_BUF_SIZE (PAGE_SIZE - TSNEP_HEADROOM - \ SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) @@ -43,6 +45,14 @@ #define TSNEP_COALESCE_USECS_MAX ((ECM_INT_DELAY_MASK >> ECM_INT_DELAY_SHIFT) * \ ECM_INT_DELAY_BASE_US + ECM_INT_DELAY_BASE_US - 1) +#define TSNEP_TX_TYPE_SKB BIT(0) +#define TSNEP_TX_TYPE_SKB_FRAG BIT(1) +#define TSNEP_TX_TYPE_XDP_TX BIT(2) 
+#define TSNEP_TX_TYPE_XDP_NDO	BIT(3)
+
+#define TSNEP_XDP_TX		BIT(0)
+#define TSNEP_XDP_REDIRECT	BIT(1)
+
 static void tsnep_enable_irq(struct tsnep_adapter *adapter, u32 mask)
 {
 	iowrite32(mask, adapter->addr + ECM_INT_ENABLE);
@@ -120,9 +130,6 @@ static int tsnep_mdiobus_read(struct mii_bus *bus, int addr, int regnum)
 	u32 md;
 	int retval;
 
-	if (regnum & MII_ADDR_C45)
-		return -EOPNOTSUPP;
-
 	md = ECM_MD_READ;
 	if (!adapter->suppress_preamble)
 		md |= ECM_MD_PREAMBLE;
@@ -144,9 +151,6 @@ static int tsnep_mdiobus_write(struct mii_bus *bus, int addr, int regnum,
 	u32 md;
 	int retval;
 
-	if (regnum & MII_ADDR_C45)
-		return -EOPNOTSUPP;
-
 	md = ECM_MD_WRITE;
 	if (!adapter->suppress_preamble)
 		md |= ECM_MD_PREAMBLE;
@@ -306,10 +310,12 @@ static void tsnep_tx_activate(struct tsnep_tx *tx, int index, int length,
 	struct tsnep_tx_entry *entry = &tx->entry[index];
 
 	entry->properties = 0;
+	/* xdpf is union with skb */
 	if (entry->skb) {
 		entry->properties = length & TSNEP_DESC_LENGTH_MASK;
 		entry->properties |= TSNEP_DESC_INTERRUPT_FLAG;
-		if (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS)
+		if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+		    (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS))
 			entry->properties |= TSNEP_DESC_EXTENDED_WRITEBACK_FLAG;
 
 		/* toggle user flag to prevent false acknowledge
@@ -378,15 +384,19 @@ static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
 	for (i = 0; i < count; i++) {
 		entry = &tx->entry[(tx->write + i) % TSNEP_RING_SIZE];
 
-		if (i == 0) {
+		if (!i) {
 			len = skb_headlen(skb);
 			dma = dma_map_single(dmadev, skb->data, len,
 					     DMA_TO_DEVICE);
+
+			entry->type = TSNEP_TX_TYPE_SKB;
 		} else {
 			len = skb_frag_size(&skb_shinfo(skb)->frags[i - 1]);
 			dma = skb_frag_dma_map(dmadev,
 					       &skb_shinfo(skb)->frags[i - 1],
 					       0, len, DMA_TO_DEVICE);
+
+			entry->type = TSNEP_TX_TYPE_SKB_FRAG;
 		}
 		if (dma_mapping_error(dmadev, dma))
 			return -ENOMEM;
@@ -413,12 +423,13 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
 		entry = &tx->entry[(index + i) % TSNEP_RING_SIZE];
 
 		if (entry->len) {
-			if (i == 0)
+			if (entry->type & TSNEP_TX_TYPE_SKB)
 				dma_unmap_single(dmadev,
 						 dma_unmap_addr(entry, dma),
 						 dma_unmap_len(entry, len),
 						 DMA_TO_DEVICE);
-			else
+			else if (entry->type &
+				 (TSNEP_TX_TYPE_SKB_FRAG | TSNEP_TX_TYPE_XDP_NDO))
 				dma_unmap_page(dmadev,
 					       dma_unmap_addr(entry, dma),
 					       dma_unmap_len(entry, len),
@@ -434,7 +445,6 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
 static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 					 struct tsnep_tx *tx)
 {
-	unsigned long flags;
 	int count = 1;
 	struct tsnep_tx_entry *entry;
 	int length;
@@ -444,16 +454,12 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 	if (skb_shinfo(skb)->nr_frags > 0)
 		count += skb_shinfo(skb)->nr_frags;
 
-	spin_lock_irqsave(&tx->lock, flags);
-
 	if (tsnep_tx_desc_available(tx) < count) {
 		/* ring full, shall not happen because queue is stopped if full
 		 * below
 		 */
 		netif_stop_subqueue(tx->adapter->netdev, tx->queue_index);
 
-		spin_unlock_irqrestore(&tx->lock, flags);
-
 		return NETDEV_TX_BUSY;
 	}
 
@@ -468,10 +474,6 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 
 		tx->dropped++;
 
-		spin_unlock_irqrestore(&tx->lock, flags);
-
-		netdev_err(tx->adapter->netdev, "TX DMA map failed\n");
-
 		return NETDEV_TX_OK;
 	}
 	length = retval;
@@ -481,7 +483,7 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 	for (i = 0; i < count; i++)
 		tsnep_tx_activate(tx, (tx->write + i) % TSNEP_RING_SIZE, length,
-				  i == (count - 1));
+				  i == count - 1);
 	tx->write = (tx->write + count) % TSNEP_RING_SIZE;
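	/* Serialization of the TX ring now comes from the netdev_queue
	 * lock rather than a private tx->lock spinlock: the core holds
	 * that lock around ndo_start_xmit, and the XDP_TX/ndo_xdp_xmit
	 * paths below take it explicitly with __netif_tx_lock(), so the
	 * slow path and XDP can share the same rings safely.
	 */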
skb_tx_timestamp(skb); @@ -496,23 +498,146 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb, netif_stop_subqueue(tx->adapter->netdev, tx->queue_index); } - spin_unlock_irqrestore(&tx->lock, flags); - return NETDEV_TX_OK; } +static int tsnep_xdp_tx_map(struct xdp_frame *xdpf, struct tsnep_tx *tx, + struct skb_shared_info *shinfo, int count, u32 type) +{ + struct device *dmadev = tx->adapter->dmadev; + struct tsnep_tx_entry *entry; + struct page *page; + skb_frag_t *frag; + unsigned int len; + int map_len = 0; + dma_addr_t dma; + void *data; + int i; + + frag = NULL; + len = xdpf->len; + for (i = 0; i < count; i++) { + entry = &tx->entry[(tx->write + i) % TSNEP_RING_SIZE]; + if (type & TSNEP_TX_TYPE_XDP_NDO) { + data = unlikely(frag) ? skb_frag_address(frag) : + xdpf->data; + dma = dma_map_single(dmadev, data, len, DMA_TO_DEVICE); + if (dma_mapping_error(dmadev, dma)) + return -ENOMEM; + + entry->type = TSNEP_TX_TYPE_XDP_NDO; + } else { + page = unlikely(frag) ? skb_frag_page(frag) : + virt_to_page(xdpf->data); + dma = page_pool_get_dma_addr(page); + if (unlikely(frag)) + dma += skb_frag_off(frag); + else + dma += sizeof(*xdpf) + xdpf->headroom; + dma_sync_single_for_device(dmadev, dma, len, + DMA_BIDIRECTIONAL); + + entry->type = TSNEP_TX_TYPE_XDP_TX; + } + + entry->len = len; + dma_unmap_addr_set(entry, dma, dma); + + entry->desc->tx = __cpu_to_le64(dma); + + map_len += len; + + if (i + 1 < count) { + frag = &shinfo->frags[i]; + len = skb_frag_size(frag); + } + } + + return map_len; +} + +/* This function requires __netif_tx_lock is held by the caller. */ +static bool tsnep_xdp_xmit_frame_ring(struct xdp_frame *xdpf, + struct tsnep_tx *tx, u32 type) +{ + struct skb_shared_info *shinfo = xdp_get_shared_info_from_frame(xdpf); + struct tsnep_tx_entry *entry; + int count, length, retval, i; + + count = 1; + if (unlikely(xdp_frame_has_frags(xdpf))) + count += shinfo->nr_frags; + + /* ensure that TX ring is not filled up by XDP, always MAX_SKB_FRAGS + * will be available for normal TX path and queue is stopped there if + * necessary + */ + if (tsnep_tx_desc_available(tx) < (MAX_SKB_FRAGS + 1 + count)) + return false; + + entry = &tx->entry[tx->write]; + entry->xdpf = xdpf; + + retval = tsnep_xdp_tx_map(xdpf, tx, shinfo, count, type); + if (retval < 0) { + tsnep_tx_unmap(tx, tx->write, count); + entry->xdpf = NULL; + + tx->dropped++; + + return false; + } + length = retval; + + for (i = 0; i < count; i++) + tsnep_tx_activate(tx, (tx->write + i) % TSNEP_RING_SIZE, length, + i == count - 1); + tx->write = (tx->write + count) % TSNEP_RING_SIZE; + + /* descriptor properties shall be valid before hardware is notified */ + dma_wmb(); + + return true; +} + +static void tsnep_xdp_xmit_flush(struct tsnep_tx *tx) +{ + iowrite32(TSNEP_CONTROL_TX_ENABLE, tx->addr + TSNEP_CONTROL); +} + +static bool tsnep_xdp_xmit_back(struct tsnep_adapter *adapter, + struct xdp_buff *xdp, + struct netdev_queue *tx_nq, struct tsnep_tx *tx) +{ + struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); + bool xmit; + + if (unlikely(!xdpf)) + return false; + + __netif_tx_lock(tx_nq, smp_processor_id()); + + xmit = tsnep_xdp_xmit_frame_ring(xdpf, tx, TSNEP_TX_TYPE_XDP_TX); + + /* Avoid transmit queue timeout since we share it with the slow path */ + if (xmit) + txq_trans_cond_update(tx_nq); + + __netif_tx_unlock(tx_nq); + + return xmit; +} + static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget) { struct tsnep_tx_entry *entry; struct netdev_queue *nq; - unsigned long flags; int budget = 128; int length; 
int count; nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index); - - spin_lock_irqsave(&tx->lock, flags); + __netif_tx_lock(nq, smp_processor_id()); do { if (tx->read == tx->write) @@ -530,12 +655,17 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget) dma_rmb(); count = 1; - if (skb_shinfo(entry->skb)->nr_frags > 0) + if ((entry->type & TSNEP_TX_TYPE_SKB) && + skb_shinfo(entry->skb)->nr_frags > 0) count += skb_shinfo(entry->skb)->nr_frags; + else if (!(entry->type & TSNEP_TX_TYPE_SKB) && + xdp_frame_has_frags(entry->xdpf)) + count += xdp_get_shared_info_from_frame(entry->xdpf)->nr_frags; length = tsnep_tx_unmap(tx, tx->read, count); - if ((skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) && + if ((entry->type & TSNEP_TX_TYPE_SKB) && + (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) && (__le32_to_cpu(entry->desc_wb->properties) & TSNEP_DESC_EXTENDED_WRITEBACK_FLAG)) { struct skb_shared_hwtstamps hwtstamps; @@ -555,7 +685,11 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget) skb_tstamp_tx(entry->skb, &hwtstamps); } - napi_consume_skb(entry->skb, budget); + if (entry->type & TSNEP_TX_TYPE_SKB) + napi_consume_skb(entry->skb, napi_budget); + else + xdp_return_frame_rx_napi(entry->xdpf); + /* xdpf is union with skb */ entry->skb = NULL; tx->read = (tx->read + count) % TSNEP_RING_SIZE; @@ -571,18 +705,19 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget) netif_tx_wake_queue(nq); } - spin_unlock_irqrestore(&tx->lock, flags); + __netif_tx_unlock(nq); - return (budget != 0); + return budget != 0; } static bool tsnep_tx_pending(struct tsnep_tx *tx) { - unsigned long flags; struct tsnep_tx_entry *entry; + struct netdev_queue *nq; bool pending = false; - spin_lock_irqsave(&tx->lock, flags); + nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index); + __netif_tx_lock(nq, smp_processor_id()); if (tx->read != tx->write) { entry = &tx->entry[tx->read]; @@ -592,7 +727,7 @@ static bool tsnep_tx_pending(struct tsnep_tx *tx) pending = true; } - spin_unlock_irqrestore(&tx->lock, flags); + __netif_tx_unlock(nq); return pending; } @@ -618,8 +753,6 @@ static int tsnep_tx_open(struct tsnep_adapter *adapter, void __iomem *addr, tx->owner_counter = 1; tx->increment_owner_counter = TSNEP_RING_SIZE - 1; - spin_lock_init(&tx->lock); - return 0; } @@ -695,9 +828,9 @@ static int tsnep_rx_ring_init(struct tsnep_rx *rx) pp_params.pool_size = TSNEP_RING_SIZE; pp_params.nid = dev_to_node(dmadev); pp_params.dev = dmadev; - pp_params.dma_dir = DMA_FROM_DEVICE; + pp_params.dma_dir = DMA_BIDIRECTIONAL; pp_params.max_len = TSNEP_MAX_RX_BUF_SIZE; - pp_params.offset = TSNEP_SKB_PAD; + pp_params.offset = TSNEP_RX_OFFSET; rx->page_pool = page_pool_create(&pp_params); if (IS_ERR(rx->page_pool)) { retval = PTR_ERR(rx->page_pool); @@ -732,7 +865,7 @@ static void tsnep_rx_set_page(struct tsnep_rx *rx, struct tsnep_rx_entry *entry, entry->page = page; entry->len = TSNEP_MAX_RX_BUF_SIZE; entry->dma = page_pool_get_dma_addr(entry->page); - entry->desc->rx = __cpu_to_le64(entry->dma + TSNEP_SKB_PAD); + entry->desc->rx = __cpu_to_le64(entry->dma + TSNEP_RX_OFFSET); } static int tsnep_rx_alloc_buffer(struct tsnep_rx *rx, int index) @@ -826,6 +959,62 @@ static int tsnep_rx_refill(struct tsnep_rx *rx, int count, bool reuse) return i; } +static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, struct bpf_prog *prog, + struct xdp_buff *xdp, int *status, + struct netdev_queue *tx_nq, struct tsnep_tx *tx) +{ + unsigned int length; + unsigned int sync; + u32 act; + + length = 
xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM; + + act = bpf_prog_run_xdp(prog, xdp); + + /* Due xdp_adjust_tail: DMA sync for_device cover max len CPU touch */ + sync = xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM; + sync = max(sync, length); + + switch (act) { + case XDP_PASS: + return false; + case XDP_TX: + if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx)) + goto out_failure; + *status |= TSNEP_XDP_TX; + return true; + case XDP_REDIRECT: + if (xdp_do_redirect(rx->adapter->netdev, xdp, prog) < 0) + goto out_failure; + *status |= TSNEP_XDP_REDIRECT; + return true; + default: + bpf_warn_invalid_xdp_action(rx->adapter->netdev, prog, act); + fallthrough; + case XDP_ABORTED: +out_failure: + trace_xdp_exception(rx->adapter->netdev, prog, act); + fallthrough; + case XDP_DROP: + page_pool_put_page(rx->page_pool, virt_to_head_page(xdp->data), + sync, true); + return true; + } +} + +static void tsnep_finalize_xdp(struct tsnep_adapter *adapter, int status, + struct netdev_queue *tx_nq, struct tsnep_tx *tx) +{ + if (status & TSNEP_XDP_TX) { + __netif_tx_lock(tx_nq, smp_processor_id()); + tsnep_xdp_xmit_flush(tx); + __netif_tx_unlock(tx_nq); + } + + if (status & TSNEP_XDP_REDIRECT) + xdp_do_flush(); +} + static struct sk_buff *tsnep_build_skb(struct tsnep_rx *rx, struct page *page, int length) { @@ -836,14 +1025,14 @@ static struct sk_buff *tsnep_build_skb(struct tsnep_rx *rx, struct page *page, return NULL; /* update pointers within the skb to store the data */ - skb_reserve(skb, TSNEP_SKB_PAD + TSNEP_RX_INLINE_METADATA_SIZE); - __skb_put(skb, length - TSNEP_RX_INLINE_METADATA_SIZE - ETH_FCS_LEN); + skb_reserve(skb, TSNEP_RX_OFFSET + TSNEP_RX_INLINE_METADATA_SIZE); + __skb_put(skb, length - ETH_FCS_LEN); if (rx->adapter->hwtstamp_config.rx_filter == HWTSTAMP_FILTER_ALL) { struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb); struct tsnep_rx_inline *rx_inline = (struct tsnep_rx_inline *)(page_address(page) + - TSNEP_SKB_PAD); + TSNEP_RX_OFFSET); skb_shinfo(skb)->tx_flags |= SKBTX_HW_TSTAMP_NETDEV; @@ -861,15 +1050,28 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi, int budget) { struct device *dmadev = rx->adapter->dmadev; - int desc_available; - int done = 0; enum dma_data_direction dma_dir; struct tsnep_rx_entry *entry; + struct netdev_queue *tx_nq; + struct bpf_prog *prog; + struct xdp_buff xdp; struct sk_buff *skb; + struct tsnep_tx *tx; + int desc_available; + int xdp_status = 0; + int done = 0; int length; desc_available = tsnep_rx_desc_available(rx); dma_dir = page_pool_get_dma_dir(rx->page_pool); + prog = READ_ONCE(rx->adapter->xdp_prog); + if (prog) { + tx_nq = netdev_get_tx_queue(rx->adapter->netdev, + rx->tx_queue_index); + tx = &rx->adapter->tx[rx->tx_queue_index]; + + xdp_init_buff(&xdp, PAGE_SIZE, &rx->xdp_rxq); + } while (likely(done < budget) && (rx->read != rx->write)) { entry = &rx->entry[rx->read]; @@ -903,21 +1105,47 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi, */ dma_rmb(); - prefetch(page_address(entry->page) + TSNEP_SKB_PAD); + prefetch(page_address(entry->page) + TSNEP_RX_OFFSET); length = __le32_to_cpu(entry->desc_wb->properties) & TSNEP_DESC_LENGTH_MASK; - dma_sync_single_range_for_cpu(dmadev, entry->dma, TSNEP_SKB_PAD, - length, dma_dir); + dma_sync_single_range_for_cpu(dmadev, entry->dma, + TSNEP_RX_OFFSET, length, dma_dir); + + /* RX metadata with timestamps is in front of actual data, + * subtract metadata size to get length of actual data and + * consider metadata size as 
offset of actual data during RX + * processing + */ + length -= TSNEP_RX_INLINE_METADATA_SIZE; rx->read = (rx->read + 1) % TSNEP_RING_SIZE; desc_available++; + if (prog) { + bool consume; + + xdp_prepare_buff(&xdp, page_address(entry->page), + XDP_PACKET_HEADROOM + TSNEP_RX_INLINE_METADATA_SIZE, + length, false); + + consume = tsnep_xdp_run_prog(rx, prog, &xdp, + &xdp_status, tx_nq, tx); + if (consume) { + rx->packets++; + rx->bytes += length; + + entry->page = NULL; + + continue; + } + } + skb = tsnep_build_skb(rx, entry->page, length); if (skb) { page_pool_release_page(rx->page_pool, entry->page); rx->packets++; - rx->bytes += length - TSNEP_RX_INLINE_METADATA_SIZE; + rx->bytes += length; if (skb->pkt_type == PACKET_MULTICAST) rx->multicast++; @@ -930,6 +1158,9 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi, entry->page = NULL; } + if (xdp_status) + tsnep_finalize_xdp(rx->adapter, xdp_status, tx_nq, tx); + if (desc_available) tsnep_rx_refill(rx, desc_available, false); @@ -1086,17 +1317,73 @@ static void tsnep_free_irq(struct tsnep_queue *queue, bool first) memset(queue->name, 0, sizeof(queue->name)); } +static void tsnep_queue_close(struct tsnep_queue *queue, bool first) +{ + struct tsnep_rx *rx = queue->rx; + + tsnep_free_irq(queue, first); + + if (rx && xdp_rxq_info_is_reg(&rx->xdp_rxq)) + xdp_rxq_info_unreg(&rx->xdp_rxq); + + netif_napi_del(&queue->napi); +} + +static int tsnep_queue_open(struct tsnep_adapter *adapter, + struct tsnep_queue *queue, bool first) +{ + struct tsnep_rx *rx = queue->rx; + struct tsnep_tx *tx = queue->tx; + int retval; + + queue->adapter = adapter; + + netif_napi_add(adapter->netdev, &queue->napi, tsnep_poll); + + if (rx) { + /* choose TX queue for XDP_TX */ + if (tx) + rx->tx_queue_index = tx->queue_index; + else if (rx->queue_index < adapter->num_tx_queues) + rx->tx_queue_index = rx->queue_index; + else + rx->tx_queue_index = 0; + + retval = xdp_rxq_info_reg(&rx->xdp_rxq, adapter->netdev, + rx->queue_index, queue->napi.napi_id); + if (retval) + goto failed; + retval = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq, + MEM_TYPE_PAGE_POOL, + rx->page_pool); + if (retval) + goto failed; + } + + retval = tsnep_request_irq(queue, first); + if (retval) { + netif_err(adapter, drv, adapter->netdev, + "can't get assigned irq %d.\n", queue->irq); + goto failed; + } + + return 0; + +failed: + tsnep_queue_close(queue, first); + + return retval; +} + static int tsnep_netdev_open(struct net_device *netdev) { struct tsnep_adapter *adapter = netdev_priv(netdev); - int i; - void __iomem *addr; int tx_queue_index = 0; int rx_queue_index = 0; - int retval; + void __iomem *addr; + int i, retval; for (i = 0; i < adapter->num_queues; i++) { - adapter->queue[i].adapter = adapter; if (adapter->queue[i].tx) { addr = adapter->addr + TSNEP_QUEUE(tx_queue_index); retval = tsnep_tx_open(adapter, addr, tx_queue_index, @@ -1107,21 +1394,16 @@ static int tsnep_netdev_open(struct net_device *netdev) } if (adapter->queue[i].rx) { addr = adapter->addr + TSNEP_QUEUE(rx_queue_index); - retval = tsnep_rx_open(adapter, addr, - rx_queue_index, + retval = tsnep_rx_open(adapter, addr, rx_queue_index, adapter->queue[i].rx); if (retval) goto failed; rx_queue_index++; } - retval = tsnep_request_irq(&adapter->queue[i], i == 0); - if (retval) { - netif_err(adapter, drv, adapter->netdev, - "can't get assigned irq %d.\n", - adapter->queue[i].irq); + retval = tsnep_queue_open(adapter, &adapter->queue[i], i == 0); + if (retval) goto failed; - } } retval = 
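/* publish the real number of usable TX queues to the network stack */ 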
netif_set_real_num_tx_queues(adapter->netdev, @@ -1139,8 +1421,6 @@ static int tsnep_netdev_open(struct net_device *netdev) goto phy_failed; for (i = 0; i < adapter->num_queues; i++) { - netif_napi_add(adapter->netdev, &adapter->queue[i].napi, - tsnep_poll); napi_enable(&adapter->queue[i].napi); tsnep_enable_irq(adapter, adapter->queue[i].irq_mask); @@ -1150,10 +1430,9 @@ static int tsnep_netdev_open(struct net_device *netdev) phy_failed: tsnep_disable_irq(adapter, ECM_INT_LINK); - tsnep_phy_close(adapter); failed: for (i = 0; i < adapter->num_queues; i++) { - tsnep_free_irq(&adapter->queue[i], i == 0); + tsnep_queue_close(&adapter->queue[i], i == 0); if (adapter->queue[i].rx) tsnep_rx_close(adapter->queue[i].rx); @@ -1175,9 +1454,8 @@ static int tsnep_netdev_close(struct net_device *netdev) tsnep_disable_irq(adapter, adapter->queue[i].irq_mask); napi_disable(&adapter->queue[i].napi); - netif_napi_del(&adapter->queue[i].napi); - tsnep_free_irq(&adapter->queue[i], i == 0); + tsnep_queue_close(&adapter->queue[i], i == 0); if (adapter->queue[i].rx) tsnep_rx_close(adapter->queue[i].rx); @@ -1330,6 +1608,67 @@ static ktime_t tsnep_netdev_get_tstamp(struct net_device *netdev, return ns_to_ktime(timestamp); } +static int tsnep_netdev_bpf(struct net_device *dev, struct netdev_bpf *bpf) +{ + struct tsnep_adapter *adapter = netdev_priv(dev); + + switch (bpf->command) { + case XDP_SETUP_PROG: + return tsnep_xdp_setup_prog(adapter, bpf->prog, bpf->extack); + default: + return -EOPNOTSUPP; + } +} + +static struct tsnep_tx *tsnep_xdp_get_tx(struct tsnep_adapter *adapter, u32 cpu) +{ + if (cpu >= TSNEP_MAX_QUEUES) + cpu &= TSNEP_MAX_QUEUES - 1; + + while (cpu >= adapter->num_tx_queues) + cpu -= adapter->num_tx_queues; + + return &adapter->tx[cpu]; +} + +static int tsnep_netdev_xdp_xmit(struct net_device *dev, int n, + struct xdp_frame **xdp, u32 flags) +{ + struct tsnep_adapter *adapter = netdev_priv(dev); + u32 cpu = smp_processor_id(); + struct netdev_queue *nq; + struct tsnep_tx *tx; + int nxmit; + bool xmit; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + tx = tsnep_xdp_get_tx(adapter, cpu); + nq = netdev_get_tx_queue(adapter->netdev, tx->queue_index); + + __netif_tx_lock(nq, cpu); + + for (nxmit = 0; nxmit < n; nxmit++) { + xmit = tsnep_xdp_xmit_frame_ring(xdp[nxmit], tx, + TSNEP_TX_TYPE_XDP_NDO); + if (!xmit) + break; + + /* avoid transmit queue timeout since we share it with the slow + * path + */ + txq_trans_cond_update(nq); + } + + if (flags & XDP_XMIT_FLUSH) + tsnep_xdp_xmit_flush(tx); + + __netif_tx_unlock(nq); + + return nxmit; +} + static const struct net_device_ops tsnep_netdev_ops = { .ndo_open = tsnep_netdev_open, .ndo_stop = tsnep_netdev_close, @@ -1341,6 +1680,8 @@ static const struct net_device_ops tsnep_netdev_ops = { .ndo_set_features = tsnep_netdev_set_features, .ndo_get_tstamp = tsnep_netdev_get_tstamp, .ndo_setup_tc = tsnep_tc_setup, + .ndo_bpf = tsnep_netdev_bpf, + .ndo_xdp_xmit = tsnep_netdev_xdp_xmit, }; static int tsnep_mac_init(struct tsnep_adapter *adapter) @@ -1585,6 +1926,10 @@ static int tsnep_probe(struct platform_device *pdev) netdev->features = NETIF_F_SG; netdev->hw_features = netdev->features | NETIF_F_LOOPBACK; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG; + /* carrier off reporting is important to ethtool even BEFORE open */ netif_carrier_off(netdev); diff --git a/drivers/net/ethernet/engleder/tsnep_tc.c b/drivers/net/ethernet/engleder/tsnep_tc.c index 
c4c6e1357317..d083e6684f12 100644 --- a/drivers/net/ethernet/engleder/tsnep_tc.c +++ b/drivers/net/ethernet/engleder/tsnep_tc.c @@ -403,12 +403,33 @@ static int tsnep_taprio(struct tsnep_adapter *adapter, return 0; } +static int tsnep_tc_query_caps(struct tsnep_adapter *adapter, + struct tc_query_caps_base *base) +{ + switch (base->type) { + case TC_SETUP_QDISC_TAPRIO: { + struct tc_taprio_caps *caps = base->caps; + + if (!adapter->gate_control) + return -EOPNOTSUPP; + + caps->gate_mask_per_txq = true; + + return 0; + } + default: + return -EOPNOTSUPP; + } +} + int tsnep_tc_setup(struct net_device *netdev, enum tc_setup_type type, void *type_data) { struct tsnep_adapter *adapter = netdev_priv(netdev); switch (type) { + case TC_QUERY_CAPS: + return tsnep_tc_query_caps(adapter, type_data); case TC_SETUP_QDISC_TAPRIO: return tsnep_taprio(adapter, type_data); default: diff --git a/drivers/net/ethernet/engleder/tsnep_xdp.c b/drivers/net/ethernet/engleder/tsnep_xdp.c new file mode 100644 index 000000000000..4d14cb1fd772 --- /dev/null +++ b/drivers/net/ethernet/engleder/tsnep_xdp.c @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2022 Gerhard Engleder <gerhard@engleder-embedded.com> */ + +#include <linux/if_vlan.h> +#include <net/xdp_sock_drv.h> + +#include "tsnep.h" + +int tsnep_xdp_setup_prog(struct tsnep_adapter *adapter, struct bpf_prog *prog, + struct netlink_ext_ack *extack) +{ + struct bpf_prog *old_prog; + + old_prog = xchg(&adapter->xdp_prog, prog); + if (old_prog) + bpf_prog_put(old_prog); + + return 0; +} diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c index 6c8c78018ce6..139fe66f8bcd 100644 --- a/drivers/net/ethernet/faraday/ftmac100.c +++ b/drivers/net/ethernet/faraday/ftmac100.c @@ -182,6 +182,12 @@ static int ftmac100_start_hw(struct ftmac100 *priv) if (netdev->mtu > ETH_DATA_LEN) maccr |= FTMAC100_MACCR_RX_FTL; + /* Add other bits as needed */ + if (netdev->flags & IFF_PROMISC) + maccr |= FTMAC100_MACCR_RCV_ALL; + if (netdev->flags & IFF_ALLMULTI) + maccr |= FTMAC100_MACCR_RX_MULTIPKT; + iowrite32(maccr, priv->base + FTMAC100_OFFSET_MACCR); return 0; } diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 027fff9f7db0..9318a2554056 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -244,6 +244,10 @@ static int dpaa_netdev_init(struct net_device *net_dev, net_dev->features |= net_dev->hw_features; net_dev->vlan_features = net_dev->features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + if (is_valid_ether_addr(mac_addr)) { memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len); eth_hw_addr_set(net_dev, mac_addr); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index 2e79d18fc3c7..a62cffaf6ff1 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -4596,6 +4596,12 @@ static int dpaa2_eth_netdev_init(struct net_device *net_dev) NETIF_F_LLTX | NETIF_F_HW_TC | NETIF_F_TSO; net_dev->gso_max_segs = DPAA2_ETH_ENQUEUE_MAX_FDS; net_dev->hw_features = net_dev->features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + if (priv->dpni_attrs.wriop_version >= DPAA2_WRIOP_VERSION(3, 0, 0) && + priv->dpni_attrs.num_queues <= 8) + net_dev->xdp_features |= 
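/* per the guard above, AF_XDP zero-copy is only advertised on WRIOP 3.0.0+ hardware with at most 8 queues */ 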
NETDEV_XDP_ACT_XSK_ZEROCOPY; if (priv->dpni_attrs.vlan_filter_entries) net_dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER; diff --git a/drivers/net/ethernet/freescale/enetc/Kconfig b/drivers/net/ethernet/freescale/enetc/Kconfig index cdc0ff89388a..9bc099cf3cb1 100644 --- a/drivers/net/ethernet/freescale/enetc/Kconfig +++ b/drivers/net/ethernet/freescale/enetc/Kconfig @@ -1,7 +1,16 @@ # SPDX-License-Identifier: GPL-2.0 +config FSL_ENETC_CORE + tristate + help + This module supports common functionality between the PF and VF + drivers for the NXP ENETC controller. + + If compiled as module (M), the module name is fsl-enetc-core. + config FSL_ENETC tristate "ENETC PF driver" - depends on PCI && PCI_MSI + depends on PCI_MSI + select FSL_ENETC_CORE select FSL_ENETC_IERB select FSL_ENETC_MDIO select PHYLINK @@ -16,7 +25,8 @@ config FSL_ENETC config FSL_ENETC_VF tristate "ENETC VF driver" - depends on PCI && PCI_MSI + depends on PCI_MSI + select FSL_ENETC_CORE select FSL_ENETC_MDIO select PHYLINK select DIMLIB diff --git a/drivers/net/ethernet/freescale/enetc/Makefile b/drivers/net/ethernet/freescale/enetc/Makefile index e0e8dfd13793..b13cbbabb2ea 100644 --- a/drivers/net/ethernet/freescale/enetc/Makefile +++ b/drivers/net/ethernet/freescale/enetc/Makefile @@ -1,14 +1,15 @@ # SPDX-License-Identifier: GPL-2.0 -common-objs := enetc.o enetc_cbdr.o enetc_ethtool.o +obj-$(CONFIG_FSL_ENETC_CORE) += fsl-enetc-core.o +fsl-enetc-core-y := enetc.o enetc_cbdr.o enetc_ethtool.o obj-$(CONFIG_FSL_ENETC) += fsl-enetc.o -fsl-enetc-y := enetc_pf.o $(common-objs) +fsl-enetc-y := enetc_pf.o fsl-enetc-$(CONFIG_PCI_IOV) += enetc_msg.o fsl-enetc-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o obj-$(CONFIG_FSL_ENETC_VF) += fsl-enetc-vf.o -fsl-enetc-vf-y := enetc_vf.o $(common-objs) +fsl-enetc-vf-y := enetc_vf.o obj-$(CONFIG_FSL_ENETC_IERB) += fsl-enetc-ierb.o fsl-enetc-ierb-y := enetc_ierb.o diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c index e96449eedfb5..2fc712b24d12 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -11,14 +11,26 @@ #include <net/pkt_sched.h> #include <net/tso.h> +u32 enetc_port_mac_rd(struct enetc_si *si, u32 reg) +{ + return enetc_port_rd(&si->hw, reg); +} +EXPORT_SYMBOL_GPL(enetc_port_mac_rd); + +void enetc_port_mac_wr(struct enetc_si *si, u32 reg, u32 val) +{ + enetc_port_wr(&si->hw, reg, val); + if (si->hw_features & ENETC_SI_F_QBU) + enetc_port_wr(&si->hw, reg + ENETC_PMAC_OFFSET, val); +} +EXPORT_SYMBOL_GPL(enetc_port_mac_wr); + static int enetc_num_stack_tx_queues(struct enetc_ndev_priv *priv) { int num_tx_rings = priv->num_tx_rings; - int i; - for (i = 0; i < priv->num_rx_rings; i++) - if (priv->rx_ring[i]->xdp.prog) - return num_tx_rings - num_possible_cpus(); + if (priv->xdp_prog) + return num_tx_rings - num_possible_cpus(); return num_tx_rings; } @@ -243,8 +255,8 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb) if (udp) val |= ENETC_PM0_SINGLE_STEP_CH; - enetc_port_wr(hw, ENETC_PM0_SINGLE_STEP, val); - enetc_port_wr(hw, ENETC_PM1_SINGLE_STEP, val); + enetc_port_mac_wr(priv->si, ENETC_PM0_SINGLE_STEP, + val); } else if (do_twostep_tstamp) { skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; e_flags |= ENETC_TXBD_E_FLAGS_TWO_STEP_PTP; @@ -651,6 +663,7 @@ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev) return enetc_start_xmit(skb, ndev); } +EXPORT_SYMBOL_GPL(enetc_xmit); static irqreturn_t enetc_msix(int irq, void *data) { @@ 
-1305,6 +1318,10 @@ static int enetc_xdp_frame_to_xdp_tx_swbd(struct enetc_bdr *tx_ring, xdp_tx_swbd->xdp_frame = NULL; n++; + + if (!xdp_frame_has_frags(xdp_frame)) + goto out; + xdp_tx_swbd = &xdp_tx_arr[n]; shinfo = xdp_get_shared_info_from_frame(xdp_frame); @@ -1334,7 +1351,7 @@ static int enetc_xdp_frame_to_xdp_tx_swbd(struct enetc_bdr *tx_ring, n++; xdp_tx_swbd = &xdp_tx_arr[n]; } - +out: xdp_tx_arr[n - 1].is_eof = true; xdp_tx_arr[n - 1].xdp_frame = xdp_frame; @@ -1384,22 +1401,19 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames, return xdp_tx_frm_cnt; } +EXPORT_SYMBOL_GPL(enetc_xdp_xmit); static void enetc_map_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i, struct xdp_buff *xdp_buff, u16 size) { struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size); void *hard_start = page_address(rx_swbd->page) + rx_swbd->page_offset; - struct skb_shared_info *shinfo; /* To be used for XDP_TX */ rx_swbd->len = size; xdp_prepare_buff(xdp_buff, hard_start - rx_ring->buffer_offset, rx_ring->buffer_offset, size, false); - - shinfo = xdp_get_shared_info_from_buff(xdp_buff); - shinfo->nr_frags = 0; } static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i, @@ -1407,11 +1421,23 @@ static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i, { struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp_buff); struct enetc_rx_swbd *rx_swbd = enetc_get_rx_buff(rx_ring, i, size); - skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags]; + skb_frag_t *frag; /* To be used for XDP_TX */ rx_swbd->len = size; + if (!xdp_buff_has_frags(xdp_buff)) { + xdp_buff_set_frags_flag(xdp_buff); + shinfo->xdp_frags_size = size; + shinfo->nr_frags = 0; + } else { + shinfo->xdp_frags_size += size; + } + + if (page_is_pfmemalloc(rx_swbd->page)) + xdp_buff_set_frag_pfmemalloc(xdp_buff); + + frag = &shinfo->frags[shinfo->nr_frags]; skb_frag_off_set(frag, rx_swbd->page_offset); skb_frag_size_set(frag, size); __skb_frag_set_page(frag, rx_swbd->page); @@ -1584,20 +1610,6 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, } break; case XDP_REDIRECT: - /* xdp_return_frame does not support S/G in the sense - * that it leaks the fragments (__xdp_return should not - * call page_frag_free only for the initial buffer). - * Until XDP_REDIRECT gains support for S/G let's keep - * the code structure in place, but dead. We drop the - * S/G frames ourselves to avoid memory leaks which - * would otherwise leave the kernel OOM. 
- */ - if (unlikely(cleaned_cnt - orig_cleaned_cnt != 1)) { - enetc_xdp_drop(rx_ring, orig_i, i); - rx_ring->stats.xdp_redirect_sg++; - break; - } - err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog); if (unlikely(err)) { enetc_xdp_drop(rx_ring, orig_i, i); @@ -1713,204 +1725,263 @@ void enetc_get_si_caps(struct enetc_si *si) if (val & ENETC_SIPCAPR0_QBV) si->hw_features |= ENETC_SI_F_QBV; + if (val & ENETC_SIPCAPR0_QBU) + si->hw_features |= ENETC_SI_F_QBU; + if (val & ENETC_SIPCAPR0_PSFP) si->hw_features |= ENETC_SI_F_PSFP; } +EXPORT_SYMBOL_GPL(enetc_get_si_caps); -static int enetc_dma_alloc_bdr(struct enetc_bdr *r, size_t bd_size) +static int enetc_dma_alloc_bdr(struct enetc_bdr_resource *res) { - r->bd_base = dma_alloc_coherent(r->dev, r->bd_count * bd_size, - &r->bd_dma_base, GFP_KERNEL); - if (!r->bd_base) + size_t bd_base_size = res->bd_count * res->bd_size; + + res->bd_base = dma_alloc_coherent(res->dev, bd_base_size, + &res->bd_dma_base, GFP_KERNEL); + if (!res->bd_base) return -ENOMEM; /* h/w requires 128B alignment */ - if (!IS_ALIGNED(r->bd_dma_base, 128)) { - dma_free_coherent(r->dev, r->bd_count * bd_size, r->bd_base, - r->bd_dma_base); + if (!IS_ALIGNED(res->bd_dma_base, 128)) { + dma_free_coherent(res->dev, bd_base_size, res->bd_base, + res->bd_dma_base); return -EINVAL; } return 0; } -static int enetc_alloc_txbdr(struct enetc_bdr *txr) +static void enetc_dma_free_bdr(const struct enetc_bdr_resource *res) +{ + size_t bd_base_size = res->bd_count * res->bd_size; + + dma_free_coherent(res->dev, bd_base_size, res->bd_base, + res->bd_dma_base); +} + +static int enetc_alloc_tx_resource(struct enetc_bdr_resource *res, + struct device *dev, size_t bd_count) { int err; - txr->tx_swbd = vzalloc(txr->bd_count * sizeof(struct enetc_tx_swbd)); - if (!txr->tx_swbd) + res->dev = dev; + res->bd_count = bd_count; + res->bd_size = sizeof(union enetc_tx_bd); + + res->tx_swbd = vzalloc(bd_count * sizeof(*res->tx_swbd)); + if (!res->tx_swbd) return -ENOMEM; - err = enetc_dma_alloc_bdr(txr, sizeof(union enetc_tx_bd)); + err = enetc_dma_alloc_bdr(res); if (err) goto err_alloc_bdr; - txr->tso_headers = dma_alloc_coherent(txr->dev, - txr->bd_count * TSO_HEADER_SIZE, - &txr->tso_headers_dma, + res->tso_headers = dma_alloc_coherent(dev, bd_count * TSO_HEADER_SIZE, + &res->tso_headers_dma, GFP_KERNEL); - if (!txr->tso_headers) { + if (!res->tso_headers) { err = -ENOMEM; goto err_alloc_tso; } - txr->next_to_clean = 0; - txr->next_to_use = 0; - return 0; err_alloc_tso: - dma_free_coherent(txr->dev, txr->bd_count * sizeof(union enetc_tx_bd), - txr->bd_base, txr->bd_dma_base); - txr->bd_base = NULL; + enetc_dma_free_bdr(res); err_alloc_bdr: - vfree(txr->tx_swbd); - txr->tx_swbd = NULL; + vfree(res->tx_swbd); + res->tx_swbd = NULL; return err; } -static void enetc_free_txbdr(struct enetc_bdr *txr) +static void enetc_free_tx_resource(const struct enetc_bdr_resource *res) { - int size, i; - - for (i = 0; i < txr->bd_count; i++) - enetc_free_tx_frame(txr, &txr->tx_swbd[i]); - - size = txr->bd_count * sizeof(union enetc_tx_bd); - - dma_free_coherent(txr->dev, txr->bd_count * TSO_HEADER_SIZE, - txr->tso_headers, txr->tso_headers_dma); - txr->tso_headers = NULL; - - dma_free_coherent(txr->dev, size, txr->bd_base, txr->bd_dma_base); - txr->bd_base = NULL; - - vfree(txr->tx_swbd); - txr->tx_swbd = NULL; + dma_free_coherent(res->dev, res->bd_count * TSO_HEADER_SIZE, + res->tso_headers, res->tso_headers_dma); + enetc_dma_free_bdr(res); + vfree(res->tx_swbd); } -static int enetc_alloc_tx_resources(struct 
enetc_ndev_priv *priv) +static struct enetc_bdr_resource * +enetc_alloc_tx_resources(struct enetc_ndev_priv *priv) { + struct enetc_bdr_resource *tx_res; int i, err; + tx_res = kcalloc(priv->num_tx_rings, sizeof(*tx_res), GFP_KERNEL); + if (!tx_res) + return ERR_PTR(-ENOMEM); + for (i = 0; i < priv->num_tx_rings; i++) { - err = enetc_alloc_txbdr(priv->tx_ring[i]); + struct enetc_bdr *tx_ring = priv->tx_ring[i]; + err = enetc_alloc_tx_resource(&tx_res[i], tx_ring->dev, + tx_ring->bd_count); if (err) goto fail; } - return 0; + return tx_res; fail: while (i-- > 0) - enetc_free_txbdr(priv->tx_ring[i]); + enetc_free_tx_resource(&tx_res[i]); - return err; + kfree(tx_res); + + return ERR_PTR(err); } -static void enetc_free_tx_resources(struct enetc_ndev_priv *priv) +static void enetc_free_tx_resources(const struct enetc_bdr_resource *tx_res, + size_t num_resources) { - int i; + size_t i; - for (i = 0; i < priv->num_tx_rings; i++) - enetc_free_txbdr(priv->tx_ring[i]); + for (i = 0; i < num_resources; i++) + enetc_free_tx_resource(&tx_res[i]); + + kfree(tx_res); } -static int enetc_alloc_rxbdr(struct enetc_bdr *rxr, bool extended) +static int enetc_alloc_rx_resource(struct enetc_bdr_resource *res, + struct device *dev, size_t bd_count, + bool extended) { - size_t size = sizeof(union enetc_rx_bd); int err; - rxr->rx_swbd = vzalloc(rxr->bd_count * sizeof(struct enetc_rx_swbd)); - if (!rxr->rx_swbd) - return -ENOMEM; - + res->dev = dev; + res->bd_count = bd_count; + res->bd_size = sizeof(union enetc_rx_bd); if (extended) - size *= 2; + res->bd_size *= 2; + + res->rx_swbd = vzalloc(bd_count * sizeof(struct enetc_rx_swbd)); + if (!res->rx_swbd) + return -ENOMEM; - err = enetc_dma_alloc_bdr(rxr, size); + err = enetc_dma_alloc_bdr(res); if (err) { - vfree(rxr->rx_swbd); + vfree(res->rx_swbd); return err; } - rxr->next_to_clean = 0; - rxr->next_to_use = 0; - rxr->next_to_alloc = 0; - rxr->ext_en = extended; - return 0; } -static void enetc_free_rxbdr(struct enetc_bdr *rxr) +static void enetc_free_rx_resource(const struct enetc_bdr_resource *res) { - int size; - - size = rxr->bd_count * sizeof(union enetc_rx_bd); - - dma_free_coherent(rxr->dev, size, rxr->bd_base, rxr->bd_dma_base); - rxr->bd_base = NULL; - - vfree(rxr->rx_swbd); - rxr->rx_swbd = NULL; + enetc_dma_free_bdr(res); + vfree(res->rx_swbd); } -static int enetc_alloc_rx_resources(struct enetc_ndev_priv *priv) +static struct enetc_bdr_resource * +enetc_alloc_rx_resources(struct enetc_ndev_priv *priv, bool extended) { - bool extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP); + struct enetc_bdr_resource *rx_res; int i, err; + rx_res = kcalloc(priv->num_rx_rings, sizeof(*rx_res), GFP_KERNEL); + if (!rx_res) + return ERR_PTR(-ENOMEM); + for (i = 0; i < priv->num_rx_rings; i++) { - err = enetc_alloc_rxbdr(priv->rx_ring[i], extended); + struct enetc_bdr *rx_ring = priv->rx_ring[i]; + err = enetc_alloc_rx_resource(&rx_res[i], rx_ring->dev, + rx_ring->bd_count, extended); if (err) goto fail; } - return 0; + return rx_res; fail: while (i-- > 0) - enetc_free_rxbdr(priv->rx_ring[i]); + enetc_free_rx_resource(&rx_res[i]); - return err; + kfree(rx_res); + + return ERR_PTR(err); } -static void enetc_free_rx_resources(struct enetc_ndev_priv *priv) +static void enetc_free_rx_resources(const struct enetc_bdr_resource *rx_res, + size_t num_resources) +{ + size_t i; + + for (i = 0; i < num_resources; i++) + enetc_free_rx_resource(&rx_res[i]); + + kfree(rx_res); +} + +static void enetc_assign_tx_resource(struct enetc_bdr *tx_ring, + const struct 
enetc_bdr_resource *res) +{ + tx_ring->bd_base = res ? res->bd_base : NULL; + tx_ring->bd_dma_base = res ? res->bd_dma_base : 0; + tx_ring->tx_swbd = res ? res->tx_swbd : NULL; + tx_ring->tso_headers = res ? res->tso_headers : NULL; + tx_ring->tso_headers_dma = res ? res->tso_headers_dma : 0; +} + +static void enetc_assign_rx_resource(struct enetc_bdr *rx_ring, + const struct enetc_bdr_resource *res) +{ + rx_ring->bd_base = res ? res->bd_base : NULL; + rx_ring->bd_dma_base = res ? res->bd_dma_base : 0; + rx_ring->rx_swbd = res ? res->rx_swbd : NULL; +} + +static void enetc_assign_tx_resources(struct enetc_ndev_priv *priv, + const struct enetc_bdr_resource *res) { int i; - for (i = 0; i < priv->num_rx_rings; i++) - enetc_free_rxbdr(priv->rx_ring[i]); + if (priv->tx_res) + enetc_free_tx_resources(priv->tx_res, priv->num_tx_rings); + + for (i = 0; i < priv->num_tx_rings; i++) { + enetc_assign_tx_resource(priv->tx_ring[i], + res ? &res[i] : NULL); + } + + priv->tx_res = res; } -static void enetc_free_tx_ring(struct enetc_bdr *tx_ring) +static void enetc_assign_rx_resources(struct enetc_ndev_priv *priv, + const struct enetc_bdr_resource *res) { int i; - if (!tx_ring->tx_swbd) - return; + if (priv->rx_res) + enetc_free_rx_resources(priv->rx_res, priv->num_rx_rings); + + for (i = 0; i < priv->num_rx_rings; i++) { + enetc_assign_rx_resource(priv->rx_ring[i], + res ? &res[i] : NULL); + } + + priv->rx_res = res; +} + +static void enetc_free_tx_ring(struct enetc_bdr *tx_ring) +{ + int i; for (i = 0; i < tx_ring->bd_count; i++) { struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i]; enetc_free_tx_frame(tx_ring, tx_swbd); } - - tx_ring->next_to_clean = 0; - tx_ring->next_to_use = 0; } static void enetc_free_rx_ring(struct enetc_bdr *rx_ring) { int i; - if (!rx_ring->rx_swbd) - return; - for (i = 0; i < rx_ring->bd_count; i++) { struct enetc_rx_swbd *rx_swbd = &rx_ring->rx_swbd[i]; @@ -1922,10 +1993,6 @@ static void enetc_free_rx_ring(struct enetc_bdr *rx_ring) __free_page(rx_swbd->page); rx_swbd->page = NULL; } - - rx_ring->next_to_clean = 0; - rx_ring->next_to_use = 0; - rx_ring->next_to_alloc = 0; } static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv) @@ -1980,6 +2047,7 @@ int enetc_configure_si(struct enetc_ndev_priv *priv) return 0; } +EXPORT_SYMBOL_GPL(enetc_configure_si); void enetc_init_si_rings_params(struct enetc_ndev_priv *priv) { @@ -1999,6 +2067,7 @@ void enetc_init_si_rings_params(struct enetc_ndev_priv *priv) priv->ic_mode = ENETC_IC_RX_ADAPTIVE | ENETC_IC_TX_MANUAL; priv->tx_ictt = ENETC_TXIC_TIMETHR; } +EXPORT_SYMBOL_GPL(enetc_init_si_rings_params); int enetc_alloc_si_resources(struct enetc_ndev_priv *priv) { @@ -2011,11 +2080,13 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv) return 0; } +EXPORT_SYMBOL_GPL(enetc_alloc_si_resources); void enetc_free_si_resources(struct enetc_ndev_priv *priv) { kfree(priv->cls_rules); } +EXPORT_SYMBOL_GPL(enetc_free_si_resources); static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) { @@ -2039,7 +2110,7 @@ static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) /* enable Tx ints by setting pkt thr to 1 */ enetc_txbdr_wr(hw, idx, ENETC_TBICR0, ENETC_TBICR0_ICEN | 0x1); - tbmr = ENETC_TBMR_EN | ENETC_TBMR_SET_PRIO(tx_ring->prio); + tbmr = ENETC_TBMR_SET_PRIO(tx_ring->prio); if (tx_ring->ndev->features & NETIF_F_HW_VLAN_CTAG_TX) tbmr |= ENETC_TBMR_VIH; @@ -2051,10 +2122,11 @@ static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) tx_ring->idr = hw->reg + 
ENETC_SITXIDR; } -static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) +static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring, + bool extended) { int idx = rx_ring->index; - u32 rbmr; + u32 rbmr = 0; enetc_rxbdr_wr(hw, idx, ENETC_RBBAR0, lower_32_bits(rx_ring->bd_dma_base)); @@ -2081,8 +2153,7 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) /* enable Rx ints by setting pkt thr to 1 */ enetc_rxbdr_wr(hw, idx, ENETC_RBICR0, ENETC_RBICR0_ICEN | 0x1); - rbmr = ENETC_RBMR_EN; - + rx_ring->ext_en = extended; if (rx_ring->ext_en) rbmr |= ENETC_RBMR_BDS; @@ -2092,15 +2163,18 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) rx_ring->rcir = hw->reg + ENETC_BDR(RX, idx, ENETC_RBCIR); rx_ring->idr = hw->reg + ENETC_SIRXIDR; + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; + rx_ring->next_to_alloc = 0; + enetc_lock_mdio(); enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring)); enetc_unlock_mdio(); - /* enable ring */ enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr); } -static void enetc_setup_bdrs(struct enetc_ndev_priv *priv) +static void enetc_setup_bdrs(struct enetc_ndev_priv *priv, bool extended) { struct enetc_hw *hw = &priv->si->hw; int i; @@ -2109,10 +2183,42 @@ static void enetc_setup_bdrs(struct enetc_ndev_priv *priv) enetc_setup_txbdr(hw, priv->tx_ring[i]); for (i = 0; i < priv->num_rx_rings; i++) - enetc_setup_rxbdr(hw, priv->rx_ring[i]); + enetc_setup_rxbdr(hw, priv->rx_ring[i], extended); +} + +static void enetc_enable_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) +{ + int idx = tx_ring->index; + u32 tbmr; + + tbmr = enetc_txbdr_rd(hw, idx, ENETC_TBMR); + tbmr |= ENETC_TBMR_EN; + enetc_txbdr_wr(hw, idx, ENETC_TBMR, tbmr); +} + +static void enetc_enable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) +{ + int idx = rx_ring->index; + u32 rbmr; + + rbmr = enetc_rxbdr_rd(hw, idx, ENETC_RBMR); + rbmr |= ENETC_RBMR_EN; + enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr); +} + +static void enetc_enable_bdrs(struct enetc_ndev_priv *priv) +{ + struct enetc_hw *hw = &priv->si->hw; + int i; + + for (i = 0; i < priv->num_tx_rings; i++) + enetc_enable_txbdr(hw, priv->tx_ring[i]); + + for (i = 0; i < priv->num_rx_rings; i++) + enetc_enable_rxbdr(hw, priv->rx_ring[i]); } -static void enetc_clear_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) +static void enetc_disable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) { int idx = rx_ring->index; @@ -2120,13 +2226,30 @@ static void enetc_clear_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) enetc_rxbdr_wr(hw, idx, ENETC_RBMR, 0); } -static void enetc_clear_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) +static void enetc_disable_txbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring) { - int delay = 8, timeout = 100; - int idx = tx_ring->index; + int idx = rx_ring->index; /* disable EN bit on ring */ enetc_txbdr_wr(hw, idx, ENETC_TBMR, 0); +} + +static void enetc_disable_bdrs(struct enetc_ndev_priv *priv) +{ + struct enetc_hw *hw = &priv->si->hw; + int i; + + for (i = 0; i < priv->num_tx_rings; i++) + enetc_disable_txbdr(hw, priv->tx_ring[i]); + + for (i = 0; i < priv->num_rx_rings; i++) + enetc_disable_rxbdr(hw, priv->rx_ring[i]); +} + +static void enetc_wait_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) +{ + int delay = 8, timeout = 100; + int idx = tx_ring->index; /* wait for busy to clear */ while (delay < timeout && @@ -2140,18 +2263,13 @@ static void enetc_clear_txbdr(struct enetc_hw *hw, struct 
enetc_bdr *tx_ring) idx); } -static void enetc_clear_bdrs(struct enetc_ndev_priv *priv) +static void enetc_wait_bdrs(struct enetc_ndev_priv *priv) { struct enetc_hw *hw = &priv->si->hw; int i; for (i = 0; i < priv->num_tx_rings; i++) - enetc_clear_txbdr(hw, priv->tx_ring[i]); - - for (i = 0; i < priv->num_rx_rings; i++) - enetc_clear_rxbdr(hw, priv->rx_ring[i]); - - udelay(1); + enetc_wait_txbdr(hw, priv->tx_ring[i]); } static int enetc_setup_irqs(struct enetc_ndev_priv *priv) @@ -2267,8 +2385,11 @@ static int enetc_phylink_connect(struct net_device *ndev) struct ethtool_eee edata; int err; - if (!priv->phylink) - return 0; /* phy-less mode */ + if (!priv->phylink) { + /* phy-less mode */ + netif_carrier_on(ndev); + return 0; + } err = phylink_of_phy_connect(priv->phylink, priv->dev->of_node, 0); if (err) { @@ -2280,6 +2401,8 @@ static int enetc_phylink_connect(struct net_device *ndev) memset(&edata, 0, sizeof(struct ethtool_eee)); phylink_ethtool_set_eee(priv->phylink, &edata); + phylink_start(priv->phylink); + return 0; } @@ -2321,20 +2444,21 @@ void enetc_start(struct net_device *ndev) enable_irq(irq); } - if (priv->phylink) - phylink_start(priv->phylink); - else - netif_carrier_on(ndev); + enetc_enable_bdrs(priv); netif_tx_start_all_queues(ndev); } +EXPORT_SYMBOL_GPL(enetc_start); int enetc_open(struct net_device *ndev) { struct enetc_ndev_priv *priv = netdev_priv(ndev); - int num_stack_tx_queues; + struct enetc_bdr_resource *tx_res, *rx_res; + bool extended; int err; + extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP); + err = enetc_setup_irqs(priv); if (err) return err; @@ -2343,34 +2467,28 @@ int enetc_open(struct net_device *ndev) if (err) goto err_phy_connect; - err = enetc_alloc_tx_resources(priv); - if (err) + tx_res = enetc_alloc_tx_resources(priv); + if (IS_ERR(tx_res)) { + err = PTR_ERR(tx_res); goto err_alloc_tx; + } - err = enetc_alloc_rx_resources(priv); - if (err) + rx_res = enetc_alloc_rx_resources(priv, extended); + if (IS_ERR(rx_res)) { + err = PTR_ERR(rx_res); goto err_alloc_rx; - - num_stack_tx_queues = enetc_num_stack_tx_queues(priv); - - err = netif_set_real_num_tx_queues(ndev, num_stack_tx_queues); - if (err) - goto err_set_queues; - - err = netif_set_real_num_rx_queues(ndev, priv->num_rx_rings); - if (err) - goto err_set_queues; + } enetc_tx_onestep_tstamp_init(priv); - enetc_setup_bdrs(priv); + enetc_assign_tx_resources(priv, tx_res); + enetc_assign_rx_resources(priv, rx_res); + enetc_setup_bdrs(priv, extended); enetc_start(ndev); return 0; -err_set_queues: - enetc_free_rx_resources(priv); err_alloc_rx: - enetc_free_tx_resources(priv); + enetc_free_tx_resources(tx_res, priv->num_tx_rings); err_alloc_tx: if (priv->phylink) phylink_disconnect_phy(priv->phylink); @@ -2379,6 +2497,7 @@ err_phy_connect: return err; } +EXPORT_SYMBOL_GPL(enetc_open); void enetc_stop(struct net_device *ndev) { @@ -2387,6 +2506,8 @@ void enetc_stop(struct net_device *ndev) netif_tx_stop_all_queues(ndev); + enetc_disable_bdrs(priv); + for (i = 0; i < priv->bdr_int_num; i++) { int irq = pci_irq_vector(priv->si->pdev, ENETC_BDR_INT_BASE_IDX + i); @@ -2396,104 +2517,205 @@ void enetc_stop(struct net_device *ndev) napi_disable(&priv->int_vector[i]->napi); } - if (priv->phylink) - phylink_stop(priv->phylink); - else - netif_carrier_off(ndev); + enetc_wait_bdrs(priv); enetc_clear_interrupts(priv); } +EXPORT_SYMBOL_GPL(enetc_stop); int enetc_close(struct net_device *ndev) { struct enetc_ndev_priv *priv = netdev_priv(ndev); enetc_stop(ndev); - enetc_clear_bdrs(priv); - if (priv->phylink) + 
if (priv->phylink) { + phylink_stop(priv->phylink); phylink_disconnect_phy(priv->phylink); + } else { + netif_carrier_off(ndev); + } + enetc_free_rxtx_rings(priv); - enetc_free_rx_resources(priv); - enetc_free_tx_resources(priv); + + /* Avoids dangling pointers and also frees old resources */ + enetc_assign_rx_resources(priv, NULL); + enetc_assign_tx_resources(priv, NULL); + enetc_free_irqs(priv); return 0; } +EXPORT_SYMBOL_GPL(enetc_close); -int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data) +static int enetc_reconfigure(struct enetc_ndev_priv *priv, bool extended, + int (*cb)(struct enetc_ndev_priv *priv, void *ctx), + void *ctx) +{ + struct enetc_bdr_resource *tx_res, *rx_res; + int err; + + ASSERT_RTNL(); + + /* If the interface is down, run the callback right away, + * without reconfiguration. + */ + if (!netif_running(priv->ndev)) { + if (cb) { + err = cb(priv, ctx); + if (err) + return err; + } + + return 0; + } + + tx_res = enetc_alloc_tx_resources(priv); + if (IS_ERR(tx_res)) { + err = PTR_ERR(tx_res); + goto out; + } + + rx_res = enetc_alloc_rx_resources(priv, extended); + if (IS_ERR(rx_res)) { + err = PTR_ERR(rx_res); + goto out_free_tx_res; + } + + enetc_stop(priv->ndev); + enetc_free_rxtx_rings(priv); + + /* Interface is down, run optional callback now */ + if (cb) { + err = cb(priv, ctx); + if (err) + goto out_restart; + } + + enetc_assign_tx_resources(priv, tx_res); + enetc_assign_rx_resources(priv, rx_res); + enetc_setup_bdrs(priv, extended); + enetc_start(priv->ndev); + + return 0; + +out_restart: + enetc_setup_bdrs(priv, extended); + enetc_start(priv->ndev); + enetc_free_rx_resources(rx_res, priv->num_rx_rings); +out_free_tx_res: + enetc_free_tx_resources(tx_res, priv->num_tx_rings); +out: + return err; +} + +static void enetc_debug_tx_ring_prios(struct enetc_ndev_priv *priv) +{ + int i; + + for (i = 0; i < priv->num_tx_rings; i++) + netdev_dbg(priv->ndev, "TX ring %d prio %d\n", i, + priv->tx_ring[i]->prio); +} + +static void enetc_reset_tc_mqprio(struct net_device *ndev) { struct enetc_ndev_priv *priv = netdev_priv(ndev); - struct tc_mqprio_qopt *mqprio = type_data; struct enetc_hw *hw = &priv->si->hw; struct enetc_bdr *tx_ring; int num_stack_tx_queues; - u8 num_tc; int i; num_stack_tx_queues = enetc_num_stack_tx_queues(priv); - mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS; - num_tc = mqprio->num_tc; - if (!num_tc) { - netdev_reset_tc(ndev); - netif_set_real_num_tx_queues(ndev, num_stack_tx_queues); + netdev_reset_tc(ndev); + netif_set_real_num_tx_queues(ndev, num_stack_tx_queues); + priv->min_num_stack_tx_queues = num_possible_cpus(); - /* Reset all ring priorities to 0 */ - for (i = 0; i < priv->num_tx_rings; i++) { - tx_ring = priv->tx_ring[i]; - tx_ring->prio = 0; - enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); - } + /* Reset all ring priorities to 0 */ + for (i = 0; i < priv->num_tx_rings; i++) { + tx_ring = priv->tx_ring[i]; + tx_ring->prio = 0; + enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); + } + + enetc_debug_tx_ring_prios(priv); +} + +int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + struct tc_mqprio_qopt *mqprio = type_data; + struct enetc_hw *hw = &priv->si->hw; + int num_stack_tx_queues = 0; + u8 num_tc = mqprio->num_tc; + struct enetc_bdr *tx_ring; + int offset, count; + int err, tc, q; + if (!num_tc) { + enetc_reset_tc_mqprio(ndev); return 0; } - /* Check if we have enough BD rings available to accommodate all TCs */ - if (num_tc > num_stack_tx_queues) { 
- netdev_err(ndev, "Max %d traffic classes supported\n", - priv->num_tx_rings); - return -EINVAL; - } + err = netdev_set_num_tc(ndev, num_tc); + if (err) + return err; - /* For the moment, we use only one BD ring per TC. - * - * Configure num_tc BD rings with increasing priorities. - */ - for (i = 0; i < num_tc; i++) { - tx_ring = priv->tx_ring[i]; - tx_ring->prio = i; - enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); + for (tc = 0; tc < num_tc; tc++) { + offset = mqprio->offset[tc]; + count = mqprio->count[tc]; + num_stack_tx_queues += count; + + err = netdev_set_tc_queue(ndev, tc, count, offset); + if (err) + goto err_reset_tc; + + for (q = offset; q < offset + count; q++) { + tx_ring = priv->tx_ring[q]; + /* The prio_tc_map is skb_tx_hash()'s way of selecting + * between TX queues based on skb->priority. As such, + * there's nothing to offload based on it. + * Make the mqprio "traffic class" be the priority of + * this ring group, and leave the Tx IPV to traffic + * class mapping as its default mapping value of 1:1. + */ + tx_ring->prio = tc; + enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); + } } - /* Reset the number of netdev queues based on the TC count */ - netif_set_real_num_tx_queues(ndev, num_tc); + err = netif_set_real_num_tx_queues(ndev, num_stack_tx_queues); + if (err) + goto err_reset_tc; - netdev_set_num_tc(ndev, num_tc); + priv->min_num_stack_tx_queues = num_stack_tx_queues; - /* Each TC is associated with one netdev queue */ - for (i = 0; i < num_tc; i++) - netdev_set_tc_queue(ndev, i, 1, i); + enetc_debug_tx_ring_prios(priv); return 0; + +err_reset_tc: + enetc_reset_tc_mqprio(ndev); + return err; } +EXPORT_SYMBOL_GPL(enetc_setup_tc_mqprio); -static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog, - struct netlink_ext_ack *extack) +static int enetc_reconfigure_xdp_cb(struct enetc_ndev_priv *priv, void *ctx) { - struct enetc_ndev_priv *priv = netdev_priv(dev); - struct bpf_prog *old_prog; - bool is_up; - int i; - - /* The buffer layout is changing, so we need to drain the old - * RX buffers and seed new ones. - */ - is_up = netif_running(dev); - if (is_up) - dev_close(dev); + struct bpf_prog *old_prog, *prog = ctx; + int num_stack_tx_queues; + int err, i; old_prog = xchg(&priv->xdp_prog, prog); + + num_stack_tx_queues = enetc_num_stack_tx_queues(priv); + err = netif_set_real_num_tx_queues(priv->ndev, num_stack_tx_queues); + if (err) { + xchg(&priv->xdp_prog, old_prog); + return err; + } + if (old_prog) bpf_prog_put(old_prog); @@ -2508,23 +2730,46 @@ static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog, rx_ring->buffer_offset = ENETC_RXB_PAD; } - if (is_up) - return dev_open(dev, extack); - return 0; } -int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp) +static int enetc_setup_xdp_prog(struct net_device *ndev, struct bpf_prog *prog, + struct netlink_ext_ack *extack) { - switch (xdp->command) { + int num_xdp_tx_queues = prog ? num_possible_cpus() : 0; + struct enetc_ndev_priv *priv = netdev_priv(ndev); + bool extended; + + if (priv->min_num_stack_tx_queues + num_xdp_tx_queues > + priv->num_tx_rings) { + NL_SET_ERR_MSG_FMT_MOD(extack, + "Reserving %d XDP TXQs does not leave a minimum of %d TXQs for network stack (total %d available)", + num_xdp_tx_queues, + priv->min_num_stack_tx_queues, + priv->num_tx_rings); + return -EBUSY; + } + + extended = !!(priv->active_offloads & ENETC_F_RX_TSTAMP); + + /* The buffer layout is changing, so we need to drain the old + * RX buffers and seed new ones. 
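+ * In particular, rx_ring->buffer_offset differs between XDP and + * non-XDP operation (XDP headroom vs. ENETC_RXB_PAD), so pages + * seeded with the old offset cannot be reused as-is. 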
+ */ + return enetc_reconfigure(priv, extended, enetc_reconfigure_xdp_cb, prog); +} + +int enetc_setup_bpf(struct net_device *ndev, struct netdev_bpf *bpf) +{ + switch (bpf->command) { case XDP_SETUP_PROG: - return enetc_setup_xdp_prog(dev, xdp->prog, xdp->extack); + return enetc_setup_xdp_prog(ndev, bpf->prog, bpf->extack); default: return -EINVAL; } return 0; } +EXPORT_SYMBOL_GPL(enetc_setup_bpf); struct net_device_stats *enetc_get_stats(struct net_device *ndev) { @@ -2556,6 +2801,7 @@ struct net_device_stats *enetc_get_stats(struct net_device *ndev) return stats; } +EXPORT_SYMBOL_GPL(enetc_get_stats); static int enetc_set_rss(struct net_device *ndev, int en) { @@ -2608,48 +2854,53 @@ void enetc_set_features(struct net_device *ndev, netdev_features_t features) enetc_enable_txvlan(ndev, !!(features & NETIF_F_HW_VLAN_CTAG_TX)); } +EXPORT_SYMBOL_GPL(enetc_set_features); #ifdef CONFIG_FSL_ENETC_PTP_CLOCK static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr) { struct enetc_ndev_priv *priv = netdev_priv(ndev); + int err, new_offloads = priv->active_offloads; struct hwtstamp_config config; - int ao; if (copy_from_user(&config, ifr->ifr_data, sizeof(config))) return -EFAULT; switch (config.tx_type) { case HWTSTAMP_TX_OFF: - priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK; + new_offloads &= ~ENETC_F_TX_TSTAMP_MASK; break; case HWTSTAMP_TX_ON: - priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK; - priv->active_offloads |= ENETC_F_TX_TSTAMP; + new_offloads &= ~ENETC_F_TX_TSTAMP_MASK; + new_offloads |= ENETC_F_TX_TSTAMP; break; case HWTSTAMP_TX_ONESTEP_SYNC: - priv->active_offloads &= ~ENETC_F_TX_TSTAMP_MASK; - priv->active_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP; + new_offloads &= ~ENETC_F_TX_TSTAMP_MASK; + new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP; break; default: return -ERANGE; } - ao = priv->active_offloads; switch (config.rx_filter) { case HWTSTAMP_FILTER_NONE: - priv->active_offloads &= ~ENETC_F_RX_TSTAMP; + new_offloads &= ~ENETC_F_RX_TSTAMP; break; default: - priv->active_offloads |= ENETC_F_RX_TSTAMP; + new_offloads |= ENETC_F_RX_TSTAMP; config.rx_filter = HWTSTAMP_FILTER_ALL; } - if (netif_running(ndev) && ao != priv->active_offloads) { - enetc_close(ndev); - enetc_open(ndev); + if ((new_offloads ^ priv->active_offloads) & ENETC_F_RX_TSTAMP) { + bool extended = !!(new_offloads & ENETC_F_RX_TSTAMP); + + err = enetc_reconfigure(priv, extended, NULL, NULL); + if (err) + return err; } + priv->active_offloads = new_offloads; + return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ? 
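/* config.rx_filter may have been coerced to HWTSTAMP_FILTER_ALL above, so report the effective configuration back to user space */ 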
-EFAULT : 0; } @@ -2691,10 +2942,12 @@ int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd) return phylink_mii_ioctl(priv->phylink, rq, cmd); } +EXPORT_SYMBOL_GPL(enetc_ioctl); int enetc_alloc_msix(struct enetc_ndev_priv *priv) { struct pci_dev *pdev = priv->si->pdev; + int num_stack_tx_queues; int first_xdp_tx_ring; int i, n, err, nvec; int v_tx_rings; @@ -2771,6 +3024,17 @@ int enetc_alloc_msix(struct enetc_ndev_priv *priv) } } + num_stack_tx_queues = enetc_num_stack_tx_queues(priv); + + err = netif_set_real_num_tx_queues(priv->ndev, num_stack_tx_queues); + if (err) + goto fail; + + err = netif_set_real_num_rx_queues(priv->ndev, priv->num_rx_rings); + if (err) + goto fail; + + priv->min_num_stack_tx_queues = num_possible_cpus(); first_xdp_tx_ring = priv->num_tx_rings - num_possible_cpus(); priv->xdp_tx_ring = &priv->tx_ring[first_xdp_tx_ring]; @@ -2792,6 +3056,7 @@ fail: return err; } +EXPORT_SYMBOL_GPL(enetc_alloc_msix); void enetc_free_msix(struct enetc_ndev_priv *priv) { @@ -2821,6 +3086,7 @@ void enetc_free_msix(struct enetc_ndev_priv *priv) /* disable all MSIX for this device */ pci_free_irq_vectors(priv->si->pdev); } +EXPORT_SYMBOL_GPL(enetc_free_msix); static void enetc_kfree_si(struct enetc_si *si) { @@ -2910,6 +3176,7 @@ err_dma: return err; } +EXPORT_SYMBOL_GPL(enetc_pci_probe); void enetc_pci_remove(struct pci_dev *pdev) { @@ -2921,3 +3188,6 @@ void enetc_pci_remove(struct pci_dev *pdev) pci_release_mem_regions(pdev); pci_disable_device(pdev); } +EXPORT_SYMBOL_GPL(enetc_pci_remove); + +MODULE_LICENSE("Dual BSD/GPL"); diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h index c6d8cc15c270..8010f31cd10d 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc.h +++ b/drivers/net/ethernet/freescale/enetc/enetc.h @@ -70,7 +70,6 @@ struct enetc_ring_stats { unsigned int xdp_tx_drops; unsigned int xdp_redirect; unsigned int xdp_redirect_failures; - unsigned int xdp_redirect_sg; unsigned int recycles; unsigned int recycle_failures; unsigned int win_drop; @@ -86,6 +85,23 @@ struct enetc_xdp_data { #define ENETC_TX_RING_DEFAULT_SIZE 2048 #define ENETC_DEFAULT_TX_WORK (ENETC_TX_RING_DEFAULT_SIZE / 2) +struct enetc_bdr_resource { + /* Input arguments saved for teardown */ + struct device *dev; /* for DMA mapping */ + size_t bd_count; + size_t bd_size; + + /* Resource proper */ + void *bd_base; /* points to Rx or Tx BD ring */ + dma_addr_t bd_dma_base; + union { + struct enetc_tx_swbd *tx_swbd; + struct enetc_rx_swbd *rx_swbd; + }; + char *tso_headers; + dma_addr_t tso_headers_dma; +}; + struct enetc_bdr { struct device *dev; /* for DMA mapping */ struct net_device *ndev; @@ -213,8 +229,9 @@ enum enetc_errata { ENETC_ERR_UCMCSWP = BIT(1), }; -#define ENETC_SI_F_QBV BIT(0) -#define ENETC_SI_F_PSFP BIT(1) +#define ENETC_SI_F_PSFP BIT(0) +#define ENETC_SI_F_QBV BIT(1) +#define ENETC_SI_F_QBU BIT(2) /* PCI IEP device data */ struct enetc_si { @@ -297,7 +314,6 @@ struct psfp_cap { }; #define ENETC_F_TX_TSTAMP_MASK 0xff -/* TODO: more hardware offloads */ enum enetc_active_offloads { /* 8 bits reserved for TX timestamp types (hwtstamp_tx_types) */ ENETC_F_TX_TSTAMP = BIT(0), @@ -306,6 +322,7 @@ enum enetc_active_offloads { ENETC_F_RX_TSTAMP = BIT(8), ENETC_F_QBV = BIT(9), ENETC_F_QCI = BIT(10), + ENETC_F_QBU = BIT(11), }; enum enetc_flags_bit { @@ -345,11 +362,16 @@ struct enetc_ndev_priv { struct enetc_bdr **xdp_tx_ring; struct enetc_bdr *tx_ring[16]; struct enetc_bdr *rx_ring[16]; + const struct enetc_bdr_resource *tx_res; + 
const struct enetc_bdr_resource *rx_res; struct enetc_cls_rule *cls_rules; struct psfp_cap psfp_cap; + /* Minimum number of TX queues required by the network stack */ + unsigned int min_num_stack_tx_queues; + struct phylink *phylink; int ic_mode; u32 tx_ictt; @@ -360,6 +382,11 @@ struct enetc_ndev_priv { struct work_struct tx_onestep_tstamp; struct sk_buff_head tx_skbs; + + /* Serialize access to MAC Merge state between ethtool requests + * and link state updates + */ + struct mutex mm_lock; }; /* Messaging */ @@ -378,6 +405,8 @@ struct enetc_msg_cmd_set_primary_mac { extern int enetc_phc_index; /* SI common */ +u32 enetc_port_mac_rd(struct enetc_si *si, u32 reg); +void enetc_port_mac_wr(struct enetc_si *si, u32 reg, u32 val); int enetc_pci_probe(struct pci_dev *pdev, const char *name, int sizeof_priv); void enetc_pci_remove(struct pci_dev *pdev); int enetc_alloc_msix(struct enetc_ndev_priv *priv); @@ -397,12 +426,13 @@ struct net_device_stats *enetc_get_stats(struct net_device *ndev); void enetc_set_features(struct net_device *ndev, netdev_features_t features); int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd); int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data); -int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp); +int enetc_setup_bpf(struct net_device *ndev, struct netdev_bpf *bpf); int enetc_xdp_xmit(struct net_device *ndev, int num_frames, struct xdp_frame **frames, u32 flags); /* ethtool */ void enetc_set_ethtool_ops(struct net_device *ndev); +void enetc_mm_link_state_update(struct enetc_ndev_priv *priv, bool link); /* control buffer descriptor ring (CBDR) */ int enetc_setup_cbdr(struct device *dev, struct enetc_hw *hw, int bd_count, diff --git a/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c b/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c index af68dc46a795..20bfdf7fb4b4 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_cbdr.c @@ -44,6 +44,7 @@ int enetc_setup_cbdr(struct device *dev, struct enetc_hw *hw, int bd_count, return 0; } +EXPORT_SYMBOL_GPL(enetc_setup_cbdr); void enetc_teardown_cbdr(struct enetc_cbdr *cbdr) { @@ -57,6 +58,7 @@ void enetc_teardown_cbdr(struct enetc_cbdr *cbdr) cbdr->bd_base = NULL; cbdr->dma_dev = NULL; } +EXPORT_SYMBOL_GPL(enetc_teardown_cbdr); static void enetc_clean_cbdr(struct enetc_cbdr *ring) { @@ -127,6 +129,7 @@ int enetc_send_cmd(struct enetc_si *si, struct enetc_cbd *cbd) return 0; } +EXPORT_SYMBOL_GPL(enetc_send_cmd); int enetc_clear_mac_flt_entry(struct enetc_si *si, int index) { @@ -140,6 +143,7 @@ int enetc_clear_mac_flt_entry(struct enetc_si *si, int index) return enetc_send_cmd(si, &cbd); } +EXPORT_SYMBOL_GPL(enetc_clear_mac_flt_entry); int enetc_set_mac_flt_entry(struct enetc_si *si, int index, char *mac_addr, int si_map) @@ -165,6 +169,7 @@ int enetc_set_mac_flt_entry(struct enetc_si *si, int index, return enetc_send_cmd(si, &cbd); } +EXPORT_SYMBOL_GPL(enetc_set_mac_flt_entry); /* Set entry in RFS table */ int enetc_set_fs_entry(struct enetc_si *si, struct enetc_cmd_rfse *rfse, @@ -197,6 +202,7 @@ int enetc_set_fs_entry(struct enetc_si *si, struct enetc_cmd_rfse *rfse, return err; } +EXPORT_SYMBOL_GPL(enetc_set_fs_entry); static int enetc_cmd_rss_table(struct enetc_si *si, u32 *table, int count, bool read) @@ -242,9 +248,11 @@ int enetc_get_rss_table(struct enetc_si *si, u32 *table, int count) { return enetc_cmd_rss_table(si, table, count, true); } +EXPORT_SYMBOL_GPL(enetc_get_rss_table); /* Set RSS table */ int 
enetc_set_rss_table(struct enetc_si *si, const u32 *table, int count) { return enetc_cmd_rss_table(si, (u32 *)table, count, false); } +EXPORT_SYMBOL_GPL(enetc_set_rss_table); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c index c8369e3752b0..bca68edfbe9c 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) /* Copyright 2017-2019 NXP */ +#include <linux/ethtool_netlink.h> #include <linux/net_tstamp.h> #include <linux/module.h> #include "enetc.h" @@ -197,7 +198,6 @@ static const char rx_ring_stats[][ETH_GSTRING_LEN] = { "Rx ring %2d recycle failures", "Rx ring %2d redirects", "Rx ring %2d redirect failures", - "Rx ring %2d redirect S/G", }; static const char tx_ring_stats[][ETH_GSTRING_LEN] = { @@ -291,7 +291,6 @@ static void enetc_get_ethtool_stats(struct net_device *ndev, data[o++] = priv->rx_ring[i]->stats.recycle_failures; data[o++] = priv->rx_ring[i]->stats.xdp_redirect; data[o++] = priv->rx_ring[i]->stats.xdp_redirect_failures; - data[o++] = priv->rx_ring[i]->stats.xdp_redirect_sg; } if (!enetc_si_is_pf(priv->si)) @@ -301,14 +300,32 @@ static void enetc_get_ethtool_stats(struct net_device *ndev, data[o++] = enetc_port_rd(hw, enetc_port_counters[i].reg); } +static void enetc_pause_stats(struct enetc_hw *hw, int mac, + struct ethtool_pause_stats *pause_stats) +{ + pause_stats->tx_pause_frames = enetc_port_rd(hw, ENETC_PM_TXPF(mac)); + pause_stats->rx_pause_frames = enetc_port_rd(hw, ENETC_PM_RXPF(mac)); +} + static void enetc_get_pause_stats(struct net_device *ndev, struct ethtool_pause_stats *pause_stats) { struct enetc_ndev_priv *priv = netdev_priv(ndev); struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; - pause_stats->tx_pause_frames = enetc_port_rd(hw, ENETC_PM_TXPF(0)); - pause_stats->rx_pause_frames = enetc_port_rd(hw, ENETC_PM_RXPF(0)); + switch (pause_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + enetc_pause_stats(hw, 0, pause_stats); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (si->hw_features & ENETC_SI_F_QBU) + enetc_pause_stats(hw, 1, pause_stats); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + ethtool_aggregate_pause_stats(ndev, pause_stats); + break; + } } static void enetc_mac_stats(struct enetc_hw *hw, int mac, @@ -385,8 +402,20 @@ static void enetc_get_eth_mac_stats(struct net_device *ndev, { struct enetc_ndev_priv *priv = netdev_priv(ndev); struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; - enetc_mac_stats(hw, 0, mac_stats); + switch (mac_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + enetc_mac_stats(hw, 0, mac_stats); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (si->hw_features & ENETC_SI_F_QBU) + enetc_mac_stats(hw, 1, mac_stats); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + ethtool_aggregate_mac_stats(ndev, mac_stats); + break; + } } static void enetc_get_eth_ctrl_stats(struct net_device *ndev, @@ -394,8 +423,20 @@ static void enetc_get_eth_ctrl_stats(struct net_device *ndev, { struct enetc_ndev_priv *priv = netdev_priv(ndev); struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; - enetc_ctrl_stats(hw, 0, ctrl_stats); + switch (ctrl_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + enetc_ctrl_stats(hw, 0, ctrl_stats); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (si->hw_features & ENETC_SI_F_QBU) + enetc_ctrl_stats(hw, 1, ctrl_stats); + break; + case 
ETHTOOL_MAC_STATS_SRC_AGGREGATE: + ethtool_aggregate_ctrl_stats(ndev, ctrl_stats); + break; + } } static void enetc_get_rmon_stats(struct net_device *ndev, @@ -404,8 +445,20 @@ static void enetc_get_rmon_stats(struct net_device *ndev, { struct enetc_ndev_priv *priv = netdev_priv(ndev); struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; - enetc_rmon_stats(hw, 0, rmon_stats, ranges); + switch (rmon_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + enetc_rmon_stats(hw, 0, rmon_stats, ranges); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (si->hw_features & ENETC_SI_F_QBU) + enetc_rmon_stats(hw, 1, rmon_stats, ranges); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + ethtool_aggregate_rmon_stats(ndev, rmon_stats); + break; + } } #define ENETC_RSSHASH_L3 (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO | RXH_IP_SRC | \ @@ -651,6 +704,7 @@ void enetc_set_rss_key(struct enetc_hw *hw, const u8 *bytes) for (i = 0; i < ENETC_RSSHASH_KEY_SIZE / 4; i++) enetc_port_wr(hw, ENETC_PRSSK(i), ((u32 *)bytes)[i]); } +EXPORT_SYMBOL_GPL(enetc_set_rss_key); static int enetc_set_rxfh(struct net_device *ndev, const u32 *indir, const u8 *key, const u8 hfunc) @@ -864,6 +918,166 @@ static int enetc_set_link_ksettings(struct net_device *dev, return phylink_ethtool_ksettings_set(priv->phylink, cmd); } +static void enetc_get_mm_stats(struct net_device *ndev, + struct ethtool_mm_stats *s) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; + + if (!(si->hw_features & ENETC_SI_F_QBU)) + return; + + s->MACMergeFrameAssErrorCount = enetc_port_rd(hw, ENETC_MMFAECR); + s->MACMergeFrameSmdErrorCount = enetc_port_rd(hw, ENETC_MMFSECR); + s->MACMergeFrameAssOkCount = enetc_port_rd(hw, ENETC_MMFAOCR); + s->MACMergeFragCountRx = enetc_port_rd(hw, ENETC_MMFCRXR); + s->MACMergeFragCountTx = enetc_port_rd(hw, ENETC_MMFCTXR); + s->MACMergeHoldCount = enetc_port_rd(hw, ENETC_MMHCR); +} + +static int enetc_get_mm(struct net_device *ndev, struct ethtool_mm_state *state) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + struct enetc_si *si = priv->si; + struct enetc_hw *hw = &si->hw; + u32 lafs, rafs, val; + + if (!(si->hw_features & ENETC_SI_F_QBU)) + return -EOPNOTSUPP; + + mutex_lock(&priv->mm_lock); + + val = enetc_port_rd(hw, ENETC_PFPMR); + state->pmac_enabled = !!(val & ENETC_PFPMR_PMACE); + + val = enetc_port_rd(hw, ENETC_MMCSR); + + switch (ENETC_MMCSR_GET_VSTS(val)) { + case 0: + state->verify_status = ETHTOOL_MM_VERIFY_STATUS_DISABLED; + break; + case 2: + state->verify_status = ETHTOOL_MM_VERIFY_STATUS_VERIFYING; + break; + case 3: + state->verify_status = ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED; + break; + case 4: + state->verify_status = ETHTOOL_MM_VERIFY_STATUS_FAILED; + break; + case 5: + default: + state->verify_status = ETHTOOL_MM_VERIFY_STATUS_UNKNOWN; + break; + } + + rafs = ENETC_MMCSR_GET_RAFS(val); + state->tx_min_frag_size = ethtool_mm_frag_size_add_to_min(rafs); + lafs = ENETC_MMCSR_GET_LAFS(val); + state->rx_min_frag_size = ethtool_mm_frag_size_add_to_min(lafs); + state->tx_enabled = !!(val & ENETC_MMCSR_LPE); /* mirror of MMCSR_ME */ + state->tx_active = !!(val & ENETC_MMCSR_LPA); + state->verify_enabled = !(val & ENETC_MMCSR_VDIS); + state->verify_time = ENETC_MMCSR_GET_VT(val); + /* A verifyTime of 128 ms would exceed the 7 bit width + * of the ENETC_MMCSR_VT field + */ + state->max_verify_time = 127; + + mutex_unlock(&priv->mm_lock); + + return 0; +} + +static int enetc_set_mm(struct net_device *ndev, struct 
ethtool_mm_cfg *cfg, + struct netlink_ext_ack *extack) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; + u32 val, add_frag_size; + int err; + + if (!(si->hw_features & ENETC_SI_F_QBU)) + return -EOPNOTSUPP; + + err = ethtool_mm_frag_size_min_to_add(cfg->tx_min_frag_size, + &add_frag_size, extack); + if (err) + return err; + + mutex_lock(&priv->mm_lock); + + val = enetc_port_rd(hw, ENETC_PFPMR); + if (cfg->pmac_enabled) + val |= ENETC_PFPMR_PMACE; + else + val &= ~ENETC_PFPMR_PMACE; + enetc_port_wr(hw, ENETC_PFPMR, val); + + val = enetc_port_rd(hw, ENETC_MMCSR); + + if (cfg->verify_enabled) + val &= ~ENETC_MMCSR_VDIS; + else + val |= ENETC_MMCSR_VDIS; + + if (cfg->tx_enabled) + priv->active_offloads |= ENETC_F_QBU; + else + priv->active_offloads &= ~ENETC_F_QBU; + + /* If link is up, enable MAC Merge right away */ + if (!!(priv->active_offloads & ENETC_F_QBU) && + !(val & ENETC_MMCSR_LINK_FAIL)) + val |= ENETC_MMCSR_ME; + + val &= ~ENETC_MMCSR_VT_MASK; + val |= ENETC_MMCSR_VT(cfg->verify_time); + + val &= ~ENETC_MMCSR_RAFS_MASK; + val |= ENETC_MMCSR_RAFS(add_frag_size); + + enetc_port_wr(hw, ENETC_MMCSR, val); + + mutex_unlock(&priv->mm_lock); + + return 0; +} + +/* When the link is lost, the verification state machine goes to the FAILED + * state and doesn't restart on its own after a new link up event. + * According to 802.3 Figure 99-8 - Verify state diagram, the LINK_FAIL bit + * should have been sufficient to re-trigger verification, but for ENETC it + * doesn't. As a workaround, we need to toggle the Merge Enable bit to + * re-trigger verification when link comes up. + */ +void enetc_mm_link_state_update(struct enetc_ndev_priv *priv, bool link) +{ + struct enetc_hw *hw = &priv->si->hw; + u32 val; + + mutex_lock(&priv->mm_lock); + + val = enetc_port_rd(hw, ENETC_MMCSR); + + if (link) { + val &= ~ENETC_MMCSR_LINK_FAIL; + if (priv->active_offloads & ENETC_F_QBU) + val |= ENETC_MMCSR_ME; + } else { + val |= ENETC_MMCSR_LINK_FAIL; + if (priv->active_offloads & ENETC_F_QBU) + val &= ~ENETC_MMCSR_ME; + } + + enetc_port_wr(hw, ENETC_MMCSR, val); + + mutex_unlock(&priv->mm_lock); +} +EXPORT_SYMBOL_GPL(enetc_mm_link_state_update); + static const struct ethtool_ops enetc_pf_ethtool_ops = { .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_MAX_FRAMES | @@ -894,6 +1108,9 @@ static const struct ethtool_ops enetc_pf_ethtool_ops = { .set_wol = enetc_set_wol, .get_pauseparam = enetc_get_pauseparam, .set_pauseparam = enetc_set_pauseparam, + .get_mm = enetc_get_mm, + .set_mm = enetc_set_mm, + .get_mm_stats = enetc_get_mm_stats, }; static const struct ethtool_ops enetc_vf_ethtool_ops = { @@ -926,3 +1143,4 @@ void enetc_set_ethtool_ops(struct net_device *ndev) else ndev->ethtool_ops = &enetc_vf_ethtool_ops; } +EXPORT_SYMBOL_GPL(enetc_set_ethtool_ops); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h index 18ca1f42b1f7..de2e0ee8cdcb 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h +++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h @@ -18,9 +18,10 @@ #define ENETC_SICTR0 0x18 #define ENETC_SICTR1 0x1c #define ENETC_SIPCAPR0 0x20 -#define ENETC_SIPCAPR0_QBV BIT(4) #define ENETC_SIPCAPR0_PSFP BIT(9) #define ENETC_SIPCAPR0_RSS BIT(8) +#define ENETC_SIPCAPR0_QBV BIT(4) +#define ENETC_SIPCAPR0_QBU BIT(3) #define ENETC_SIPCAPR1 0x24 #define ENETC_SITGTGR 0x30 #define ENETC_SIRBGCR 0x38 @@ -213,7 +214,6 @@ enum enetc_bdr_type {TX, RX}; 
#define ENETC_PSIRFSCFGR(n) (0x1814 + (n) * 4) /* n = SI index */ #define ENETC_PFPMR 0x1900 #define ENETC_PFPMR_PMACE BIT(1) -#define ENETC_PFPMR_MWLM BIT(0) #define ENETC_EMDIO_BASE 0x1c00 #define ENETC_PSIUMHFR0(n, err) (((err) ? 0x1d08 : 0x1d00) + (n) * 0x10) #define ENETC_PSIUMHFR1(n) (0x1d04 + (n) * 0x10) @@ -222,11 +222,35 @@ enum enetc_bdr_type {TX, RX}; #define ENETC_PSIVHFR0(n) (0x1e00 + (n) * 8) /* n = SI index */ #define ENETC_PSIVHFR1(n) (0x1e04 + (n) * 8) /* n = SI index */ #define ENETC_MMCSR 0x1f00 -#define ENETC_MMCSR_ME BIT(16) +#define ENETC_MMCSR_LINK_FAIL BIT(31) +#define ENETC_MMCSR_VT_MASK GENMASK(29, 23) /* Verify Time */ +#define ENETC_MMCSR_VT(x) (((x) << 23) & ENETC_MMCSR_VT_MASK) +#define ENETC_MMCSR_GET_VT(x) (((x) & ENETC_MMCSR_VT_MASK) >> 23) +#define ENETC_MMCSR_TXSTS_MASK GENMASK(22, 21) /* Merge Status */ +#define ENETC_MMCSR_GET_TXSTS(x) (((x) & ENETC_MMCSR_TXSTS_MASK) >> 21) +#define ENETC_MMCSR_VSTS_MASK GENMASK(20, 18) /* Verify Status */ +#define ENETC_MMCSR_GET_VSTS(x) (((x) & ENETC_MMCSR_VSTS_MASK) >> 18) +#define ENETC_MMCSR_VDIS BIT(17) /* Verify Disabled */ +#define ENETC_MMCSR_ME BIT(16) /* Merge Enabled */ +#define ENETC_MMCSR_RAFS_MASK GENMASK(9, 8) /* Remote Additional Fragment Size */ +#define ENETC_MMCSR_RAFS(x) (((x) << 8) & ENETC_MMCSR_RAFS_MASK) +#define ENETC_MMCSR_GET_RAFS(x) (((x) & ENETC_MMCSR_RAFS_MASK) >> 8) +#define ENETC_MMCSR_LAFS_MASK GENMASK(4, 3) /* Local Additional Fragment Size */ +#define ENETC_MMCSR_GET_LAFS(x) (((x) & ENETC_MMCSR_LAFS_MASK) >> 3) +#define ENETC_MMCSR_LPA BIT(2) /* Local Preemption Active */ +#define ENETC_MMCSR_LPE BIT(1) /* Local Preemption Enabled */ +#define ENETC_MMCSR_LPS BIT(0) /* Local Preemption Supported */ +#define ENETC_MMFAECR 0x1f08 +#define ENETC_MMFSECR 0x1f0c +#define ENETC_MMFAOCR 0x1f10 +#define ENETC_MMFCRXR 0x1f14 +#define ENETC_MMFCTXR 0x1f18 +#define ENETC_MMHCR 0x1f1c #define ENETC_PTCMSDUR(n) (0x2020 + (n) * 4) /* n = TC index [0..7] */ +#define ENETC_PMAC_OFFSET 0x1000 + #define ENETC_PM0_CMD_CFG 0x8008 -#define ENETC_PM1_CMD_CFG 0x9008 #define ENETC_PM0_TX_EN BIT(0) #define ENETC_PM0_RX_EN BIT(1) #define ENETC_PM0_PROMISC BIT(4) @@ -245,11 +269,8 @@ enum enetc_bdr_type {TX, RX}; #define ENETC_PM0_PAUSE_QUANTA 0x8054 #define ENETC_PM0_PAUSE_THRESH 0x8064 -#define ENETC_PM1_PAUSE_QUANTA 0x9054 -#define ENETC_PM1_PAUSE_THRESH 0x9064 #define ENETC_PM0_SINGLE_STEP 0x80c0 -#define ENETC_PM1_SINGLE_STEP 0x90c0 #define ENETC_PM0_SINGLE_STEP_CH BIT(7) #define ENETC_PM0_SINGLE_STEP_EN BIT(31) #define ENETC_SET_SINGLE_STEP_OFFSET(v) (((v) & 0xff) << 8) @@ -279,57 +300,57 @@ enum enetc_bdr_type {TX, RX}; /* Port MAC counters: Port MAC 0 corresponds to the eMAC and * Port MAC 1 to the pMAC. 
*/ -#define ENETC_PM_REOCT(mac) (0x8100 + 0x1000 * (mac)) -#define ENETC_PM_RALN(mac) (0x8110 + 0x1000 * (mac)) -#define ENETC_PM_RXPF(mac) (0x8118 + 0x1000 * (mac)) -#define ENETC_PM_RFRM(mac) (0x8120 + 0x1000 * (mac)) -#define ENETC_PM_RFCS(mac) (0x8128 + 0x1000 * (mac)) -#define ENETC_PM_RVLAN(mac) (0x8130 + 0x1000 * (mac)) -#define ENETC_PM_RERR(mac) (0x8138 + 0x1000 * (mac)) -#define ENETC_PM_RUCA(mac) (0x8140 + 0x1000 * (mac)) -#define ENETC_PM_RMCA(mac) (0x8148 + 0x1000 * (mac)) -#define ENETC_PM_RBCA(mac) (0x8150 + 0x1000 * (mac)) -#define ENETC_PM_RDRP(mac) (0x8158 + 0x1000 * (mac)) -#define ENETC_PM_RPKT(mac) (0x8160 + 0x1000 * (mac)) -#define ENETC_PM_RUND(mac) (0x8168 + 0x1000 * (mac)) -#define ENETC_PM_R64(mac) (0x8170 + 0x1000 * (mac)) -#define ENETC_PM_R127(mac) (0x8178 + 0x1000 * (mac)) -#define ENETC_PM_R255(mac) (0x8180 + 0x1000 * (mac)) -#define ENETC_PM_R511(mac) (0x8188 + 0x1000 * (mac)) -#define ENETC_PM_R1023(mac) (0x8190 + 0x1000 * (mac)) -#define ENETC_PM_R1522(mac) (0x8198 + 0x1000 * (mac)) -#define ENETC_PM_R1523X(mac) (0x81A0 + 0x1000 * (mac)) -#define ENETC_PM_ROVR(mac) (0x81A8 + 0x1000 * (mac)) -#define ENETC_PM_RJBR(mac) (0x81B0 + 0x1000 * (mac)) -#define ENETC_PM_RFRG(mac) (0x81B8 + 0x1000 * (mac)) -#define ENETC_PM_RCNP(mac) (0x81C0 + 0x1000 * (mac)) -#define ENETC_PM_RDRNTP(mac) (0x81C8 + 0x1000 * (mac)) -#define ENETC_PM_TEOCT(mac) (0x8200 + 0x1000 * (mac)) -#define ENETC_PM_TOCT(mac) (0x8208 + 0x1000 * (mac)) -#define ENETC_PM_TCRSE(mac) (0x8210 + 0x1000 * (mac)) -#define ENETC_PM_TXPF(mac) (0x8218 + 0x1000 * (mac)) -#define ENETC_PM_TFRM(mac) (0x8220 + 0x1000 * (mac)) -#define ENETC_PM_TFCS(mac) (0x8228 + 0x1000 * (mac)) -#define ENETC_PM_TVLAN(mac) (0x8230 + 0x1000 * (mac)) -#define ENETC_PM_TERR(mac) (0x8238 + 0x1000 * (mac)) -#define ENETC_PM_TUCA(mac) (0x8240 + 0x1000 * (mac)) -#define ENETC_PM_TMCA(mac) (0x8248 + 0x1000 * (mac)) -#define ENETC_PM_TBCA(mac) (0x8250 + 0x1000 * (mac)) -#define ENETC_PM_TPKT(mac) (0x8260 + 0x1000 * (mac)) -#define ENETC_PM_TUND(mac) (0x8268 + 0x1000 * (mac)) -#define ENETC_PM_T64(mac) (0x8270 + 0x1000 * (mac)) -#define ENETC_PM_T127(mac) (0x8278 + 0x1000 * (mac)) -#define ENETC_PM_T255(mac) (0x8280 + 0x1000 * (mac)) -#define ENETC_PM_T511(mac) (0x8288 + 0x1000 * (mac)) -#define ENETC_PM_T1023(mac) (0x8290 + 0x1000 * (mac)) -#define ENETC_PM_T1522(mac) (0x8298 + 0x1000 * (mac)) -#define ENETC_PM_T1523X(mac) (0x82A0 + 0x1000 * (mac)) -#define ENETC_PM_TCNP(mac) (0x82C0 + 0x1000 * (mac)) -#define ENETC_PM_TDFR(mac) (0x82D0 + 0x1000 * (mac)) -#define ENETC_PM_TMCOL(mac) (0x82D8 + 0x1000 * (mac)) -#define ENETC_PM_TSCOL(mac) (0x82E0 + 0x1000 * (mac)) -#define ENETC_PM_TLCOL(mac) (0x82E8 + 0x1000 * (mac)) -#define ENETC_PM_TECOL(mac) (0x82F0 + 0x1000 * (mac)) +#define ENETC_PM_REOCT(mac) (0x8100 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RALN(mac) (0x8110 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RXPF(mac) (0x8118 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RFRM(mac) (0x8120 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RFCS(mac) (0x8128 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RVLAN(mac) (0x8130 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RERR(mac) (0x8138 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RUCA(mac) (0x8140 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RMCA(mac) (0x8148 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RBCA(mac) (0x8150 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RDRP(mac) (0x8158 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RPKT(mac) (0x8160 + ENETC_PMAC_OFFSET * 
(mac)) +#define ENETC_PM_RUND(mac) (0x8168 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R64(mac) (0x8170 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R127(mac) (0x8178 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R255(mac) (0x8180 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R511(mac) (0x8188 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R1023(mac) (0x8190 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R1522(mac) (0x8198 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_R1523X(mac) (0x81A0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_ROVR(mac) (0x81A8 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RJBR(mac) (0x81B0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RFRG(mac) (0x81B8 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RCNP(mac) (0x81C0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_RDRNTP(mac) (0x81C8 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TEOCT(mac) (0x8200 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TOCT(mac) (0x8208 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TCRSE(mac) (0x8210 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TXPF(mac) (0x8218 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TFRM(mac) (0x8220 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TFCS(mac) (0x8228 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TVLAN(mac) (0x8230 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TERR(mac) (0x8238 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TUCA(mac) (0x8240 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TMCA(mac) (0x8248 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TBCA(mac) (0x8250 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TPKT(mac) (0x8260 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TUND(mac) (0x8268 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T64(mac) (0x8270 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T127(mac) (0x8278 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T255(mac) (0x8280 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T511(mac) (0x8288 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T1023(mac) (0x8290 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T1522(mac) (0x8298 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_T1523X(mac) (0x82A0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TCNP(mac) (0x82C0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TDFR(mac) (0x82D0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TMCOL(mac) (0x82D8 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TSCOL(mac) (0x82E0 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TLCOL(mac) (0x82E8 + ENETC_PMAC_OFFSET * (mac)) +#define ENETC_PM_TECOL(mac) (0x82F0 + ENETC_PMAC_OFFSET * (mac)) /* Port counters */ #define ENETC_PICDR(n) (0x0700 + (n) * 8) /* n = [0..3] */ diff --git a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c index 1c8f5cc6dec4..998aaa394e9c 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c @@ -55,7 +55,8 @@ static int enetc_mdio_wait_complete(struct enetc_mdio_priv *mdio_priv) is_busy, !is_busy, 10, 10 * 1000); } -int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) +int enetc_mdio_write_c22(struct mii_bus *bus, int phy_id, int regnum, + u16 value) { struct enetc_mdio_priv *mdio_priv = bus->priv; u32 mdio_ctl, mdio_cfg; @@ -63,14 +64,39 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) int ret; mdio_cfg = ENETC_EMDIO_CFG; - if (regnum & MII_ADDR_C45) { - dev_addr = (regnum >> 16) & 0x1f; - mdio_cfg |= MDIO_CFG_ENC45; - } else { - /* clause 22 (ie 1G) */ - dev_addr = regnum & 
0x1f; - mdio_cfg &= ~MDIO_CFG_ENC45; - } + dev_addr = regnum & 0x1f; + mdio_cfg &= ~MDIO_CFG_ENC45; + + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg); + + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; + + /* set port and dev addr */ + mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl); + + /* write the value */ + enetc_mdio_wr(mdio_priv, ENETC_MDIO_DATA, value); + + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; + + return 0; +} +EXPORT_SYMBOL_GPL(enetc_mdio_write_c22); + +int enetc_mdio_write_c45(struct mii_bus *bus, int phy_id, int dev_addr, + int regnum, u16 value) +{ + struct enetc_mdio_priv *mdio_priv = bus->priv; + u32 mdio_ctl, mdio_cfg; + int ret; + + mdio_cfg = ENETC_EMDIO_CFG; + mdio_cfg |= MDIO_CFG_ENC45; enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg); @@ -83,13 +109,11 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl); /* set the register address */ - if (regnum & MII_ADDR_C45) { - enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff); + enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff); - ret = enetc_mdio_wait_complete(mdio_priv); - if (ret) - return ret; - } + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; /* write the value */ enetc_mdio_wr(mdio_priv, ENETC_MDIO_DATA, value); @@ -100,9 +124,9 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) return 0; } -EXPORT_SYMBOL_GPL(enetc_mdio_write); +EXPORT_SYMBOL_GPL(enetc_mdio_write_c45); -int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) +int enetc_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum) { struct enetc_mdio_priv *mdio_priv = bus->priv; u32 mdio_ctl, mdio_cfg; @@ -110,14 +134,51 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) int ret; mdio_cfg = ENETC_EMDIO_CFG; - if (regnum & MII_ADDR_C45) { - dev_addr = (regnum >> 16) & 0x1f; - mdio_cfg |= MDIO_CFG_ENC45; - } else { - dev_addr = regnum & 0x1f; - mdio_cfg &= ~MDIO_CFG_ENC45; + dev_addr = regnum & 0x1f; + mdio_cfg &= ~MDIO_CFG_ENC45; + + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg); + + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; + + /* set port and device addr */ + mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl); + + /* initiate the read */ + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl | MDIO_CTL_READ); + + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; + + /* return all Fs if nothing was there */ + if (enetc_mdio_rd(mdio_priv, ENETC_MDIO_CFG) & MDIO_CFG_RD_ER) { + dev_dbg(&bus->dev, + "Error while reading PHY%d reg at %d.%d\n", + phy_id, dev_addr, regnum); + return 0xffff; } + value = enetc_mdio_rd(mdio_priv, ENETC_MDIO_DATA) & 0xffff; + + return value; +} +EXPORT_SYMBOL_GPL(enetc_mdio_read_c22); + +int enetc_mdio_read_c45(struct mii_bus *bus, int phy_id, int dev_addr, + int regnum) +{ + struct enetc_mdio_priv *mdio_priv = bus->priv; + u32 mdio_ctl, mdio_cfg; + u16 value; + int ret; + + mdio_cfg = ENETC_EMDIO_CFG; + mdio_cfg |= MDIO_CFG_ENC45; + enetc_mdio_wr(mdio_priv, ENETC_MDIO_CFG, mdio_cfg); ret = enetc_mdio_wait_complete(mdio_priv); @@ -129,13 +190,11 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl); /* set the register address */ - if (regnum & MII_ADDR_C45) 
{ - enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff); + enetc_mdio_wr(mdio_priv, ENETC_MDIO_ADDR, regnum & 0xffff); - ret = enetc_mdio_wait_complete(mdio_priv); - if (ret) - return ret; - } + ret = enetc_mdio_wait_complete(mdio_priv); + if (ret) + return ret; /* initiate the read */ enetc_mdio_wr(mdio_priv, ENETC_MDIO_CTL, mdio_ctl | MDIO_CTL_READ); @@ -156,7 +215,7 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) return value; } -EXPORT_SYMBOL_GPL(enetc_mdio_read); +EXPORT_SYMBOL_GPL(enetc_mdio_read_c45); struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs) { diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c index dafb26f81f95..a1b595bd7993 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c @@ -39,8 +39,10 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev, } bus->name = ENETC_MDIO_BUS_NAME; - bus->read = enetc_mdio_read; - bus->write = enetc_mdio_write; + bus->read = enetc_mdio_read_c22; + bus->write = enetc_mdio_write_c22; + bus->read_c45 = enetc_mdio_read_c45; + bus->write_c45 = enetc_mdio_write_c45; bus->parent = dev; mdio_priv = bus->priv; mdio_priv->hw = hw; diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c index 9f6c4f5c0a6c..7cd22d370caa 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c @@ -319,24 +319,23 @@ static int enetc_vlan_rx_del_vid(struct net_device *ndev, __be16 prot, u16 vid) static void enetc_set_loopback(struct net_device *ndev, bool en) { struct enetc_ndev_priv *priv = netdev_priv(ndev); - struct enetc_hw *hw = &priv->si->hw; + struct enetc_si *si = priv->si; u32 reg; - reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE); + reg = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE); if (reg & ENETC_PM0_IFM_RG) { /* RGMII mode */ reg = (reg & ~ENETC_PM0_IFM_RLP) | (en ? ENETC_PM0_IFM_RLP : 0); - enetc_port_wr(hw, ENETC_PM0_IF_MODE, reg); + enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, reg); } else { /* assume SGMII mode */ - reg = enetc_port_rd(hw, ENETC_PM0_CMD_CFG); + reg = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG); reg = (reg & ~ENETC_PM0_CMD_XGLP) | (en ? ENETC_PM0_CMD_XGLP : 0); reg = (reg & ~ENETC_PM0_CMD_PHY_TX_EN) | (en ? ENETC_PM0_CMD_PHY_TX_EN : 0); - enetc_port_wr(hw, ENETC_PM0_CMD_CFG, reg); - enetc_port_wr(hw, ENETC_PM1_CMD_CFG, reg); + enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, reg); } } @@ -538,65 +537,50 @@ void enetc_reset_ptcmsdur(struct enetc_hw *hw) enetc_port_wr(hw, ENETC_PTCMSDUR(tc), ENETC_MAC_MAXFRM_SIZE); } -static void enetc_configure_port_mac(struct enetc_hw *hw) +static void enetc_configure_port_mac(struct enetc_si *si) { - enetc_port_wr(hw, ENETC_PM0_MAXFRM, - ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE)); + struct enetc_hw *hw = &si->hw; - enetc_reset_ptcmsdur(hw); + enetc_port_mac_wr(si, ENETC_PM0_MAXFRM, + ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE)); - enetc_port_wr(hw, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN | - ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC); + enetc_reset_ptcmsdur(hw); - enetc_port_wr(hw, ENETC_PM1_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN | - ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC); + enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN | + ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC); /* On LS1028A, the MAC RX FIFO defaults to 2, which is too high * and may lead to RX lock-up under traffic. 
Set it to 1 instead, * as recommended by the hardware team. */ - enetc_port_wr(hw, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL); + enetc_port_mac_wr(si, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL); } -static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode) +static void enetc_mac_config(struct enetc_si *si, phy_interface_t phy_mode) { u32 val; if (phy_interface_mode_is_rgmii(phy_mode)) { - val = enetc_port_rd(hw, ENETC_PM0_IF_MODE); + val = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE); val &= ~(ENETC_PM0_IFM_EN_AUTO | ENETC_PM0_IFM_IFMODE_MASK); val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG; - enetc_port_wr(hw, ENETC_PM0_IF_MODE, val); + enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val); } if (phy_mode == PHY_INTERFACE_MODE_USXGMII) { val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII; - enetc_port_wr(hw, ENETC_PM0_IF_MODE, val); + enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val); } } -static void enetc_mac_enable(struct enetc_hw *hw, bool en) +static void enetc_mac_enable(struct enetc_si *si, bool en) { - u32 val = enetc_port_rd(hw, ENETC_PM0_CMD_CFG); + u32 val = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG); val &= ~(ENETC_PM0_TX_EN | ENETC_PM0_RX_EN); val |= en ? (ENETC_PM0_TX_EN | ENETC_PM0_RX_EN) : 0; - enetc_port_wr(hw, ENETC_PM0_CMD_CFG, val); - enetc_port_wr(hw, ENETC_PM1_CMD_CFG, val); -} - -static void enetc_configure_port_pmac(struct enetc_hw *hw) -{ - u32 temp; - - /* Set pMAC step lock */ - temp = enetc_port_rd(hw, ENETC_PFPMR); - enetc_port_wr(hw, ENETC_PFPMR, - temp | ENETC_PFPMR_PMACE | ENETC_PFPMR_MWLM); - - temp = enetc_port_rd(hw, ENETC_MMCSR); - enetc_port_wr(hw, ENETC_MMCSR, temp | ENETC_MMCSR_ME); + enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, val); } static void enetc_configure_port(struct enetc_pf *pf) @@ -604,9 +588,7 @@ static void enetc_configure_port(struct enetc_pf *pf) u8 hash_key[ENETC_RSSHASH_KEY_SIZE]; struct enetc_hw *hw = &pf->si->hw; - enetc_configure_port_pmac(hw); - - enetc_configure_port_mac(hw); + enetc_configure_port_mac(pf->si); enetc_port_si_configure(pf->si); @@ -825,6 +807,9 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev, ndev->hw_features |= NETIF_F_RXHASH; ndev->priv_flags |= IFF_UNICAST_FLT; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; if (si->hw_features & ENETC_SI_F_PSFP && !enetc_psfp_enable(priv)) { priv->active_offloads |= ENETC_F_QCI; @@ -848,8 +833,10 @@ static int enetc_mdio_probe(struct enetc_pf *pf, struct device_node *np) return -ENOMEM; bus->name = "Freescale ENETC MDIO Bus"; - bus->read = enetc_mdio_read; - bus->write = enetc_mdio_write; + bus->read = enetc_mdio_read_c22; + bus->write = enetc_mdio_write_c22; + bus->read_c45 = enetc_mdio_read_c45; + bus->write_c45 = enetc_mdio_write_c45; bus->parent = dev; mdio_priv = bus->priv; mdio_priv->hw = &pf->si->hw; @@ -885,8 +872,10 @@ static int enetc_imdio_create(struct enetc_pf *pf) return -ENOMEM; bus->name = "Freescale ENETC internal MDIO Bus"; - bus->read = enetc_mdio_read; - bus->write = enetc_mdio_write; + bus->read = enetc_mdio_read_c22; + bus->write = enetc_mdio_write_c22; + bus->read_c45 = enetc_mdio_read_c45; + bus->write_c45 = enetc_mdio_write_c45; bus->parent = dev; bus->phy_mask = ~0; mdio_priv = bus->priv; @@ -994,14 +983,14 @@ static void enetc_pl_mac_config(struct phylink_config *config, { struct enetc_pf *pf = phylink_to_enetc_pf(config); - enetc_mac_config(&pf->si->hw, state->interface); + enetc_mac_config(pf->si, 
state->interface); } -static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex) +static void enetc_force_rgmii_mac(struct enetc_si *si, int speed, int duplex) { u32 old_val, val; - old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE); + old_val = val = enetc_port_mac_rd(si, ENETC_PM0_IF_MODE); if (speed == SPEED_1000) { val &= ~ENETC_PM0_IFM_SSP_MASK; @@ -1022,7 +1011,7 @@ static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex) if (val == old_val) return; - enetc_port_wr(hw, ENETC_PM0_IF_MODE, val); + enetc_port_mac_wr(si, ENETC_PM0_IF_MODE, val); } static void enetc_pl_mac_link_up(struct phylink_config *config, @@ -1034,6 +1023,7 @@ static void enetc_pl_mac_link_up(struct phylink_config *config, u32 pause_off_thresh = 0, pause_on_thresh = 0; u32 init_quanta = 0, refresh_quanta = 0; struct enetc_hw *hw = &pf->si->hw; + struct enetc_si *si = pf->si; struct enetc_ndev_priv *priv; u32 rbmr, cmd_cfg; int idx; @@ -1045,7 +1035,7 @@ static void enetc_pl_mac_link_up(struct phylink_config *config, if (!phylink_autoneg_inband(mode) && phy_interface_mode_is_rgmii(interface)) - enetc_force_rgmii_mac(hw, speed, duplex); + enetc_force_rgmii_mac(si, speed, duplex); /* Flow control */ for (idx = 0; idx < priv->num_rx_rings; idx++) { @@ -1081,24 +1071,24 @@ static void enetc_pl_mac_link_up(struct phylink_config *config, pause_off_thresh = 1 * ENETC_MAC_MAXFRM_SIZE; } - enetc_port_wr(hw, ENETC_PM0_PAUSE_QUANTA, init_quanta); - enetc_port_wr(hw, ENETC_PM1_PAUSE_QUANTA, init_quanta); - enetc_port_wr(hw, ENETC_PM0_PAUSE_THRESH, refresh_quanta); - enetc_port_wr(hw, ENETC_PM1_PAUSE_THRESH, refresh_quanta); + enetc_port_mac_wr(si, ENETC_PM0_PAUSE_QUANTA, init_quanta); + enetc_port_mac_wr(si, ENETC_PM0_PAUSE_THRESH, refresh_quanta); enetc_port_wr(hw, ENETC_PPAUONTR, pause_on_thresh); enetc_port_wr(hw, ENETC_PPAUOFFTR, pause_off_thresh); - cmd_cfg = enetc_port_rd(hw, ENETC_PM0_CMD_CFG); + cmd_cfg = enetc_port_mac_rd(si, ENETC_PM0_CMD_CFG); if (rx_pause) cmd_cfg &= ~ENETC_PM0_PAUSE_IGN; else cmd_cfg |= ENETC_PM0_PAUSE_IGN; - enetc_port_wr(hw, ENETC_PM0_CMD_CFG, cmd_cfg); - enetc_port_wr(hw, ENETC_PM1_CMD_CFG, cmd_cfg); + enetc_port_mac_wr(si, ENETC_PM0_CMD_CFG, cmd_cfg); + + enetc_mac_enable(si, true); - enetc_mac_enable(hw, true); + if (si->hw_features & ENETC_SI_F_QBU) + enetc_mm_link_state_update(priv, true); } static void enetc_pl_mac_link_down(struct phylink_config *config, @@ -1106,8 +1096,15 @@ static void enetc_pl_mac_link_down(struct phylink_config *config, phy_interface_t interface) { struct enetc_pf *pf = phylink_to_enetc_pf(config); + struct enetc_si *si = pf->si; + struct enetc_ndev_priv *priv; - enetc_mac_enable(&pf->si->hw, false); + priv = netdev_priv(si->ndev); + + if (si->hw_features & ENETC_SI_F_QBU) + enetc_mm_link_state_update(priv, false); + + enetc_mac_enable(si, false); } static const struct phylink_mac_ops enetc_mac_phylink_ops = { @@ -1300,6 +1297,8 @@ static int enetc_pf_probe(struct pci_dev *pdev, priv = netdev_priv(ndev); + mutex_init(&priv->mm_lock); + enetc_init_si_rings_params(priv); err = enetc_alloc_si_resources(priv); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c index fcebb54224c0..130ebf6853e6 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c @@ -136,29 +136,21 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data) { struct tc_taprio_qopt_offload *taprio = type_data; struct enetc_ndev_priv 
*priv = netdev_priv(ndev); - struct enetc_hw *hw = &priv->si->hw; - struct enetc_bdr *tx_ring; - int err; - int i; + int err, i; /* TSD and Qbv are mutually exclusive in hardware */ for (i = 0; i < priv->num_tx_rings; i++) if (priv->tx_ring[i]->tsd_enable) return -EBUSY; - for (i = 0; i < priv->num_tx_rings; i++) { - tx_ring = priv->tx_ring[i]; - tx_ring->prio = taprio->enable ? i : 0; - enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); - } + err = enetc_setup_tc_mqprio(ndev, &taprio->mqprio); + if (err) + return err; err = enetc_setup_taprio(ndev, taprio); if (err) { - for (i = 0; i < priv->num_tx_rings; i++) { - tx_ring = priv->tx_ring[i]; - tx_ring->prio = taprio->enable ? 0 : i; - enetc_set_bdr_prio(hw, tx_ring->index, tx_ring->prio); - } + taprio->mqprio.qopt.num_tc = 0; + enetc_setup_tc_mqprio(ndev, &taprio->mqprio); } return err; @@ -1611,6 +1603,13 @@ int enetc_qos_query_caps(struct net_device *ndev, void *type_data) struct enetc_si *si = priv->si; switch (base->type) { + case TC_SETUP_QDISC_MQPRIO: { + struct tc_mqprio_caps *caps = base->caps; + + caps->validate_queue_counts = true; + + return 0; + } case TC_SETUP_QDISC_TAPRIO: { struct tc_taprio_caps *caps = base->caps; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 2341597408d1..c73e25f8995e 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -56,12 +56,12 @@ #include <linux/fec.h> #include <linux/of.h> #include <linux/of_device.h> -#include <linux/of_gpio.h> #include <linux/of_mdio.h> #include <linux/of_net.h> #include <linux/regulator/consumer.h> #include <linux/if_vlan.h> #include <linux/pinctrl/consumer.h> +#include <linux/gpio/consumer.h> #include <linux/prefetch.h> #include <linux/mfd/syscon.h> #include <linux/regmap.h> @@ -1987,47 +1987,74 @@ static int fec_enet_mdio_wait(struct fec_enet_private *fep) return ret; } -static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) +static int fec_enet_mdio_read_c22(struct mii_bus *bus, int mii_id, int regnum) { struct fec_enet_private *fep = bus->priv; struct device *dev = &fep->pdev->dev; int ret = 0, frame_start, frame_addr, frame_op; - bool is_c45 = !!(regnum & MII_ADDR_C45); ret = pm_runtime_resume_and_get(dev); if (ret < 0) return ret; - if (is_c45) { - frame_start = FEC_MMFR_ST_C45; + /* C22 read */ + frame_op = FEC_MMFR_OP_READ; + frame_start = FEC_MMFR_ST; + frame_addr = regnum; - /* write address */ - frame_addr = (regnum >> 16); - writel(frame_start | FEC_MMFR_OP_ADDR_WRITE | - FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | - FEC_MMFR_TA | (regnum & 0xFFFF), - fep->hwp + FEC_MII_DATA); + /* start a read op */ + writel(frame_start | frame_op | + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | + FEC_MMFR_TA, fep->hwp + FEC_MII_DATA); - /* wait for end of transfer */ - ret = fec_enet_mdio_wait(fep); - if (ret) { - netdev_err(fep->netdev, "MDIO address write timeout\n"); - goto out; - } + /* wait for end of transfer */ + ret = fec_enet_mdio_wait(fep); + if (ret) { + netdev_err(fep->netdev, "MDIO read timeout\n"); + goto out; + } - frame_op = FEC_MMFR_OP_READ_C45; + ret = FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA)); - } else { - /* C22 read */ - frame_op = FEC_MMFR_OP_READ; - frame_start = FEC_MMFR_ST; - frame_addr = regnum; +out: + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); + + return ret; +} + +static int fec_enet_mdio_read_c45(struct mii_bus *bus, int mii_id, + int devad, int regnum) +{ + struct fec_enet_private 
*fep = bus->priv; + struct device *dev = &fep->pdev->dev; + int ret = 0, frame_start, frame_op; + + ret = pm_runtime_resume_and_get(dev); + if (ret < 0) + return ret; + + frame_start = FEC_MMFR_ST_C45; + + /* write address */ + writel(frame_start | FEC_MMFR_OP_ADDR_WRITE | + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) | + FEC_MMFR_TA | (regnum & 0xFFFF), + fep->hwp + FEC_MII_DATA); + + /* wait for end of transfer */ + ret = fec_enet_mdio_wait(fep); + if (ret) { + netdev_err(fep->netdev, "MDIO address write timeout\n"); + goto out; } + frame_op = FEC_MMFR_OP_READ_C45; + /* start a read op */ writel(frame_start | frame_op | - FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | - FEC_MMFR_TA, fep->hwp + FEC_MII_DATA); + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) | + FEC_MMFR_TA, fep->hwp + FEC_MII_DATA); /* wait for end of transfer */ ret = fec_enet_mdio_wait(fep); @@ -2045,45 +2072,69 @@ out: return ret; } -static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, - u16 value) +static int fec_enet_mdio_write_c22(struct mii_bus *bus, int mii_id, int regnum, + u16 value) { struct fec_enet_private *fep = bus->priv; struct device *dev = &fep->pdev->dev; int ret, frame_start, frame_addr; - bool is_c45 = !!(regnum & MII_ADDR_C45); ret = pm_runtime_resume_and_get(dev); if (ret < 0) return ret; - if (is_c45) { - frame_start = FEC_MMFR_ST_C45; + /* C22 write */ + frame_start = FEC_MMFR_ST; + frame_addr = regnum; + + /* start a write op */ + writel(frame_start | FEC_MMFR_OP_WRITE | + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | + FEC_MMFR_TA | FEC_MMFR_DATA(value), + fep->hwp + FEC_MII_DATA); + + /* wait for end of transfer */ + ret = fec_enet_mdio_wait(fep); + if (ret) + netdev_err(fep->netdev, "MDIO write timeout\n"); + + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); + + return ret; +} + +static int fec_enet_mdio_write_c45(struct mii_bus *bus, int mii_id, + int devad, int regnum, u16 value) +{ + struct fec_enet_private *fep = bus->priv; + struct device *dev = &fep->pdev->dev; + int ret, frame_start; + + ret = pm_runtime_resume_and_get(dev); + if (ret < 0) + return ret; + + frame_start = FEC_MMFR_ST_C45; - /* write address */ - frame_addr = (regnum >> 16); - writel(frame_start | FEC_MMFR_OP_ADDR_WRITE | - FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | - FEC_MMFR_TA | (regnum & 0xFFFF), - fep->hwp + FEC_MII_DATA); + /* write address */ + writel(frame_start | FEC_MMFR_OP_ADDR_WRITE | + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) | + FEC_MMFR_TA | (regnum & 0xFFFF), + fep->hwp + FEC_MII_DATA); - /* wait for end of transfer */ - ret = fec_enet_mdio_wait(fep); - if (ret) { - netdev_err(fep->netdev, "MDIO address write timeout\n"); - goto out; - } - } else { - /* C22 write */ - frame_start = FEC_MMFR_ST; - frame_addr = regnum; + /* wait for end of transfer */ + ret = fec_enet_mdio_wait(fep); + if (ret) { + netdev_err(fep->netdev, "MDIO address write timeout\n"); + goto out; } /* start a write op */ writel(frame_start | FEC_MMFR_OP_WRITE | - FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(frame_addr) | - FEC_MMFR_TA | FEC_MMFR_DATA(value), - fep->hwp + FEC_MII_DATA); + FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(devad) | + FEC_MMFR_TA | FEC_MMFR_DATA(value), + fep->hwp + FEC_MII_DATA); /* wait for end of transfer */ ret = fec_enet_mdio_wait(fep); @@ -2381,8 +2432,10 @@ static int fec_enet_mii_init(struct platform_device *pdev) } fep->mii_bus->name = "fec_enet_mii_bus"; - fep->mii_bus->read = fec_enet_mdio_read; - fep->mii_bus->write = fec_enet_mdio_write; + fep->mii_bus->read = 
fec_enet_mdio_read_c22; + fep->mii_bus->write = fec_enet_mdio_write_c22; + fep->mii_bus->read_c45 = fec_enet_mdio_read_c45; + fep->mii_bus->write_c45 = fec_enet_mdio_write_c45; snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", pdev->name, fep->dev_id + 1); fep->mii_bus->priv = fep; @@ -3982,10 +4035,10 @@ free_queue_mem: #ifdef CONFIG_OF static int fec_reset_phy(struct platform_device *pdev) { - int err, phy_reset; - bool active_high = false; + struct gpio_desc *phy_reset; int msec = 1, phy_post_delay = 0; struct device_node *np = pdev->dev.of_node; + int err; if (!np) return 0; @@ -3995,33 +4048,26 @@ static int fec_reset_phy(struct platform_device *pdev) if (!err && msec > 1000) msec = 1; - phy_reset = of_get_named_gpio(np, "phy-reset-gpios", 0); - if (phy_reset == -EPROBE_DEFER) - return phy_reset; - else if (!gpio_is_valid(phy_reset)) - return 0; - err = of_property_read_u32(np, "phy-reset-post-delay", &phy_post_delay); /* valid reset duration should be less than 1s */ if (!err && phy_post_delay > 1000) return -EINVAL; - active_high = of_property_read_bool(np, "phy-reset-active-high"); + phy_reset = devm_gpiod_get_optional(&pdev->dev, "phy-reset", + GPIOD_OUT_HIGH); + if (IS_ERR(phy_reset)) + return dev_err_probe(&pdev->dev, PTR_ERR(phy_reset), + "failed to get phy-reset-gpios\n"); - err = devm_gpio_request_one(&pdev->dev, phy_reset, - active_high ? GPIOF_OUT_INIT_HIGH : GPIOF_OUT_INIT_LOW, - "phy-reset"); - if (err) { - dev_err(&pdev->dev, "failed to get phy-reset-gpios: %d\n", err); - return err; - } + if (!phy_reset) + return 0; if (msec > 20) msleep(msec); else usleep_range(msec * 1000, msec * 1000 + 1000); - gpio_set_value_cansleep(phy_reset, !active_high); + gpiod_set_value_cansleep(phy_reset, 0); if (!phy_post_delay) return 0; diff --git a/drivers/net/ethernet/freescale/xgmac_mdio.c b/drivers/net/ethernet/freescale/xgmac_mdio.c index d7d39a58cd80..a13b4ba4d6e1 100644 --- a/drivers/net/ethernet/freescale/xgmac_mdio.c +++ b/drivers/net/ethernet/freescale/xgmac_mdio.c @@ -128,30 +128,49 @@ static int xgmac_wait_until_done(struct device *dev, return 0; } -/* - * Write value to the PHY for this device to the register at regnum,waiting - * until the write is done before it returns. All PHY configuration has to be - * done through the TSEC1 MIIM regs. 
- */ -static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) +static int xgmac_mdio_write_c22(struct mii_bus *bus, int phy_id, int regnum, + u16 value) { struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv; struct tgec_mdio_controller __iomem *regs = priv->mdio_base; - uint16_t dev_addr; + bool endian = priv->is_little_endian; + u16 dev_addr = regnum & 0x1f; u32 mdio_ctl, mdio_stat; int ret; + + mdio_stat = xgmac_read32(®s->mdio_stat, endian); + mdio_stat &= ~MDIO_STAT_ENC; + xgmac_write32(mdio_stat, ®s->mdio_stat, endian); + + ret = xgmac_wait_until_free(&bus->dev, regs, endian); + if (ret) + return ret; + + /* Set the port and dev addr */ + mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); + xgmac_write32(mdio_ctl, ®s->mdio_ctl, endian); + + /* Write the value to the register */ + xgmac_write32(MDIO_DATA(value), ®s->mdio_data, endian); + + ret = xgmac_wait_until_done(&bus->dev, regs, endian); + if (ret) + return ret; + + return 0; +} + +static int xgmac_mdio_write_c45(struct mii_bus *bus, int phy_id, int dev_addr, + int regnum, u16 value) +{ + struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv; + struct tgec_mdio_controller __iomem *regs = priv->mdio_base; bool endian = priv->is_little_endian; + u32 mdio_ctl, mdio_stat; + int ret; mdio_stat = xgmac_read32(®s->mdio_stat, endian); - if (regnum & MII_ADDR_C45) { - /* Clause 45 (ie 10G) */ - dev_addr = (regnum >> 16) & 0x1f; - mdio_stat |= MDIO_STAT_ENC; - } else { - /* Clause 22 (ie 1G) */ - dev_addr = regnum & 0x1f; - mdio_stat &= ~MDIO_STAT_ENC; - } + mdio_stat |= MDIO_STAT_ENC; xgmac_write32(mdio_stat, ®s->mdio_stat, endian); @@ -164,13 +183,11 @@ static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 val xgmac_write32(mdio_ctl, ®s->mdio_ctl, endian); /* Set the register address */ - if (regnum & MII_ADDR_C45) { - xgmac_write32(regnum & 0xffff, ®s->mdio_addr, endian); + xgmac_write32(regnum & 0xffff, ®s->mdio_addr, endian); - ret = xgmac_wait_until_free(&bus->dev, regs, endian); - if (ret) - return ret; - } + ret = xgmac_wait_until_free(&bus->dev, regs, endian); + if (ret) + return ret; /* Write the value to the register */ xgmac_write32(MDIO_DATA(value), ®s->mdio_data, endian); @@ -182,31 +199,82 @@ static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 val return 0; } -/* - * Reads from register regnum in the PHY for device dev, returning the value. +/* Reads from register regnum in the PHY for device dev, returning the value. * Clears miimcom first. All PHY configuration has to be done through the * TSEC1 MIIM regs. 
*/ -static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum) +static int xgmac_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum) { struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv; struct tgec_mdio_controller __iomem *regs = priv->mdio_base; + bool endian = priv->is_little_endian; + u16 dev_addr = regnum & 0x1f; unsigned long flags; - uint16_t dev_addr; uint32_t mdio_stat; uint32_t mdio_ctl; int ret; - bool endian = priv->is_little_endian; mdio_stat = xgmac_read32(®s->mdio_stat, endian); - if (regnum & MII_ADDR_C45) { - dev_addr = (regnum >> 16) & 0x1f; - mdio_stat |= MDIO_STAT_ENC; + mdio_stat &= ~MDIO_STAT_ENC; + xgmac_write32(mdio_stat, ®s->mdio_stat, endian); + + ret = xgmac_wait_until_free(&bus->dev, regs, endian); + if (ret) + return ret; + + /* Set the Port and Device Addrs */ + mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); + xgmac_write32(mdio_ctl, ®s->mdio_ctl, endian); + + if (priv->has_a009885) + /* Once the operation completes, i.e. MDIO_STAT_BSY clears, we + * must read back the data register within 16 MDC cycles. + */ + local_irq_save(flags); + + /* Initiate the read */ + xgmac_write32(mdio_ctl | MDIO_CTL_READ, ®s->mdio_ctl, endian); + + ret = xgmac_wait_until_done(&bus->dev, regs, endian); + if (ret) + goto irq_restore; + + /* Return all Fs if nothing was there */ + if ((xgmac_read32(®s->mdio_stat, endian) & MDIO_STAT_RD_ER) && + !priv->has_a011043) { + dev_dbg(&bus->dev, + "Error while reading PHY%d reg at %d.%d\n", + phy_id, dev_addr, regnum); + ret = 0xffff; } else { - dev_addr = regnum & 0x1f; - mdio_stat &= ~MDIO_STAT_ENC; + ret = xgmac_read32(®s->mdio_data, endian) & 0xffff; + dev_dbg(&bus->dev, "read %04x\n", ret); } +irq_restore: + if (priv->has_a009885) + local_irq_restore(flags); + + return ret; +} + +/* Reads from register regnum in the PHY for device dev, returning the value. + * Clears miimcom first. All PHY configuration has to be done through the + * TSEC1 MIIM regs. + */ +static int xgmac_mdio_read_c45(struct mii_bus *bus, int phy_id, int dev_addr, + int regnum) +{ + struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv; + struct tgec_mdio_controller __iomem *regs = priv->mdio_base; + bool endian = priv->is_little_endian; + u32 mdio_stat, mdio_ctl; + unsigned long flags; + int ret; + + mdio_stat = xgmac_read32(®s->mdio_stat, endian); + mdio_stat |= MDIO_STAT_ENC; + xgmac_write32(mdio_stat, ®s->mdio_stat, endian); ret = xgmac_wait_until_free(&bus->dev, regs, endian); @@ -218,13 +286,11 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum) xgmac_write32(mdio_ctl, ®s->mdio_ctl, endian); /* Set the register address */ - if (regnum & MII_ADDR_C45) { - xgmac_write32(regnum & 0xffff, ®s->mdio_addr, endian); + xgmac_write32(regnum & 0xffff, ®s->mdio_addr, endian); - ret = xgmac_wait_until_free(&bus->dev, regs, endian); - if (ret) - return ret; - } + ret = xgmac_wait_until_free(&bus->dev, regs, endian); + if (ret) + return ret; if (priv->has_a009885) /* Once the operation completes, i.e. 
MDIO_STAT_BSY clears, we @@ -326,10 +392,11 @@ static int xgmac_mdio_probe(struct platform_device *pdev) return -ENOMEM; bus->name = "Freescale XGMAC MDIO Bus"; - bus->read = xgmac_mdio_read; - bus->write = xgmac_mdio_write; + bus->read = xgmac_mdio_read_c22; + bus->write = xgmac_mdio_write_c22; + bus->read_c45 = xgmac_mdio_read_c45; + bus->write_c45 = xgmac_mdio_write_c45; bus->parent = &pdev->dev; - bus->probe_capabilities = MDIOBUS_C22_C45; snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res->start); priv = bus->priv; diff --git a/drivers/net/ethernet/fungible/funeth/Kconfig b/drivers/net/ethernet/fungible/funeth/Kconfig index c72ad9386400..e742e7663449 100644 --- a/drivers/net/ethernet/fungible/funeth/Kconfig +++ b/drivers/net/ethernet/fungible/funeth/Kconfig @@ -5,7 +5,7 @@ config FUN_ETH tristate "Fungible Ethernet device driver" - depends on PCI && PCI_MSI + depends on PCI_MSI depends on TLS && TLS_DEVICE || TLS_DEVICE=n select NET_DEVLINK select FUN_CORE diff --git a/drivers/net/ethernet/fungible/funeth/funeth_main.c b/drivers/net/ethernet/fungible/funeth/funeth_main.c index b4cce30e526a..df86770731ad 100644 --- a/drivers/net/ethernet/fungible/funeth/funeth_main.c +++ b/drivers/net/ethernet/fungible/funeth/funeth_main.c @@ -1160,6 +1160,11 @@ static int fun_xdp_setup(struct net_device *dev, struct netdev_bpf *xdp) WRITE_ONCE(rxqs[i]->xdp_prog, prog); } + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + dev->max_mtu = prog ? XDP_MAX_MTU : FUN_MAX_MTU; old_prog = xchg(&fp->xdp_prog, prog); if (old_prog) @@ -1765,6 +1770,7 @@ static int fun_create_netdev(struct fun_ethdev *ed, unsigned int portid) netdev->vlan_features = netdev->features & VLAN_FEAT; netdev->mpls_features = netdev->vlan_features; netdev->hw_enc_features = netdev->hw_features; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = FUN_MAX_MTU; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 5b40f9c53196..07111c241e0e 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -327,7 +327,6 @@ static int gve_napi_poll_dqo(struct napi_struct *napi, int budget) static int gve_alloc_notify_blocks(struct gve_priv *priv) { int num_vecs_requested = priv->num_ntfy_blks + 1; - char *name = priv->dev->name; unsigned int active_cpus; int vecs_enabled; int i, j; @@ -371,8 +370,8 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv) active_cpus = min_t(int, priv->num_ntfy_blks / 2, num_online_cpus()); /* Setup Management Vector - the last vector */ - snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "%s-mgmnt", - name); + snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "gve-mgmnt@pci:%s", + pci_name(priv->pdev)); err = request_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, gve_mgmnt_intr, 0, priv->mgmt_msix_name, priv); if (err) { @@ -401,8 +400,8 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv) struct gve_notify_block *block = &priv->ntfy_blocks[i]; int msix_idx = i; - snprintf(block->name, sizeof(block->name), "%s-ntfy-block.%d", - name, i); + snprintf(block->name, sizeof(block->name), "gve-ntfy-blk%d@pci:%s", + i, pci_name(priv->pdev)); block->priv = priv; err = request_irq(priv->msix_vectors[msix_idx].vector, gve_is_gqi(priv) ? 
gve_intr : gve_intr_dqo, diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c index 740850b64aff..5df19c604d09 100644 --- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c +++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c @@ -554,11 +554,11 @@ static phy_interface_t hns_mac_get_phy_if_acpi(struct hns_mac_cb *mac_cb) argv4.package.count = 1; argv4.package.elements = &obj_args; - obj = acpi_evaluate_dsm(ACPI_HANDLE(mac_cb->dev), - &hns_dsaf_acpi_dsm_guid, 0, - HNS_OP_GET_PORT_TYPE_FUNC, &argv4); - - if (!obj || obj->type != ACPI_TYPE_INTEGER) + obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(mac_cb->dev), + &hns_dsaf_acpi_dsm_guid, 0, + HNS_OP_GET_PORT_TYPE_FUNC, &argv4, + ACPI_TYPE_INTEGER); + if (!obj) return phy_if; phy_if = obj->integer.value ? @@ -601,11 +601,11 @@ static int hns_mac_get_sfp_prsnt_acpi(struct hns_mac_cb *mac_cb, int *sfp_prsnt) argv4.package.count = 1; argv4.package.elements = &obj_args; - obj = acpi_evaluate_dsm(ACPI_HANDLE(mac_cb->dev), - &hns_dsaf_acpi_dsm_guid, 0, - HNS_OP_GET_SFP_STAT_FUNC, &argv4); - - if (!obj || obj->type != ACPI_TYPE_INTEGER) + obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(mac_cb->dev), + &hns_dsaf_acpi_dsm_guid, 0, + HNS_OP_GET_SFP_STAT_FUNC, &argv4, + ACPI_TYPE_INTEGER); + if (!obj) return -ENODEV; *sfp_prsnt = obj->integer.value; diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h index 17137de9338c..40f4306449eb 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h +++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h @@ -32,6 +32,7 @@ #include <linux/pkt_sched.h> #include <linux/types.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #define HNAE3_MOD_VERSION "1.0" diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index b4c4fb873568..25be7f8ac7cd 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -20,6 +20,7 @@ #include <net/gro.h> #include <net/ip6_checksum.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <net/tcp.h> #include <net/vxlan.h> #include <net/geneve.h> diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c index 142415c84c6b..a0b46e7d863e 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c @@ -2,6 +2,7 @@ /* Copyright (c) 2018-2019 Hisilicon Limited. 
*/ #include <linux/device.h> +#include <linux/sched/clock.h> #include "hclge_debugfs.h" #include "hclge_err.h" diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c index 3d3b69605423..9a939c0b217f 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_devlink.c @@ -114,7 +114,6 @@ int hclge_devlink_init(struct hclge_dev *hdev) priv->hdev = hdev; hdev->devlink = devlink; - devlink_set_features(devlink, DEVLINK_F_RELOAD); devlink_register(devlink); return 0; } diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c index 6efd768cc07c..3f35227ef1fa 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0+ /* Copyright (c) 2016-2017 Hisilicon Limited. */ +#include <linux/sched/clock.h> + #include "hclge_err.h" static const struct hclge_hw_error hclge_imp_tcm_ecc_int[] = { diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c index a6c3c5e8f0ab..1b535142c65a 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_devlink.c @@ -116,7 +116,6 @@ int hclgevf_devlink_init(struct hclgevf_dev *hdev) priv->hdev = hdev; hdev->devlink = devlink; - devlink_set_features(devlink, DEVLINK_F_RELOAD); devlink_register(devlink); return 0; } diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c index c2ae1b4f9a5f..9232caaf0bdc 100644 --- a/drivers/net/ethernet/hisilicon/hns_mdio.c +++ b/drivers/net/ethernet/hisilicon/hns_mdio.c @@ -206,7 +206,7 @@ static void hns_mdio_cmd_write(struct hns_mdio_device *mdio_dev, } /** - * hns_mdio_write - access phy register + * hns_mdio_write_c22 - access phy register * @bus: mdio bus * @phy_id: phy id * @regnum: register num @@ -214,21 +214,19 @@ static void hns_mdio_cmd_write(struct hns_mdio_device *mdio_dev, * * Return 0 on success, negative on failure */ -static int hns_mdio_write(struct mii_bus *bus, - int phy_id, int regnum, u16 data) +static int hns_mdio_write_c22(struct mii_bus *bus, + int phy_id, int regnum, u16 data) { - int ret; struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv; - u8 devad = ((regnum >> 16) & 0x1f); - u8 is_c45 = !!(regnum & MII_ADDR_C45); u16 reg = (u16)(regnum & 0xffff); - u8 op; u16 cmd_reg_cfg; + int ret; + u8 op; dev_dbg(&bus->dev, "mdio write %s,base is %p\n", bus->id, mdio_dev->vbase); - dev_dbg(&bus->dev, "phy id=%d, is_c45=%d, devad=%d, reg=%#x, write data=%d\n", - phy_id, is_c45, devad, reg, data); + dev_dbg(&bus->dev, "phy id=%d, reg=%#x, write data=%d\n", + phy_id, reg, data); /* wait for ready */ ret = hns_mdio_wait_ready(bus); @@ -237,58 +235,91 @@ static int hns_mdio_write(struct mii_bus *bus, return ret; } - if (!is_c45) { - cmd_reg_cfg = reg; - op = MDIO_C22_WRITE; - } else { - /* config the cmd-reg to write addr*/ - MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M, - MDIO_ADDR_DATA_S, reg); + cmd_reg_cfg = reg; + op = MDIO_C22_WRITE; - hns_mdio_cmd_write(mdio_dev, is_c45, - MDIO_C45_WRITE_ADDR, phy_id, devad); + MDIO_SET_REG_FIELD(mdio_dev, MDIO_WDATA_REG, MDIO_WDATA_DATA_M, + MDIO_WDATA_DATA_S, data); - /* check for read or write opt is finished */ - ret = 
hns_mdio_wait_ready(bus); - if (ret) { - dev_err(&bus->dev, "MDIO bus is busy\n"); - return ret; - } + hns_mdio_cmd_write(mdio_dev, false, op, phy_id, cmd_reg_cfg); + + return 0; +} + +/** + * hns_mdio_write_c45 - access phy register + * @bus: mdio bus + * @phy_id: phy id + * @devad: device address to read + * @regnum: register num + * @data: register value + * + * Return 0 on success, negative on failure + */ +static int hns_mdio_write_c45(struct mii_bus *bus, int phy_id, int devad, + int regnum, u16 data) +{ + struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv; + u16 reg = (u16)(regnum & 0xffff); + u16 cmd_reg_cfg; + int ret; + u8 op; + + dev_dbg(&bus->dev, "mdio write %s,base is %p\n", + bus->id, mdio_dev->vbase); + dev_dbg(&bus->dev, "phy id=%d, devad=%d, reg=%#x, write data=%d\n", + phy_id, devad, reg, data); + + /* wait for ready */ + ret = hns_mdio_wait_ready(bus); + if (ret) { + dev_err(&bus->dev, "MDIO bus is busy\n"); + return ret; + } + + /* config the cmd-reg to write addr*/ + MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M, + MDIO_ADDR_DATA_S, reg); - /* config the data needed writing */ - cmd_reg_cfg = devad; - op = MDIO_C45_WRITE_DATA; + hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_WRITE_ADDR, phy_id, devad); + + /* check for read or write opt is finished */ + ret = hns_mdio_wait_ready(bus); + if (ret) { + dev_err(&bus->dev, "MDIO bus is busy\n"); + return ret; } + /* config the data needed writing */ + cmd_reg_cfg = devad; + op = MDIO_C45_WRITE_DATA; + MDIO_SET_REG_FIELD(mdio_dev, MDIO_WDATA_REG, MDIO_WDATA_DATA_M, MDIO_WDATA_DATA_S, data); - hns_mdio_cmd_write(mdio_dev, is_c45, op, phy_id, cmd_reg_cfg); + hns_mdio_cmd_write(mdio_dev, true, op, phy_id, cmd_reg_cfg); return 0; } /** - * hns_mdio_read - access phy register + * hns_mdio_read_c22 - access phy register * @bus: mdio bus * @phy_id: phy id * @regnum: register num * * Return phy register value */ -static int hns_mdio_read(struct mii_bus *bus, int phy_id, int regnum) +static int hns_mdio_read_c22(struct mii_bus *bus, int phy_id, int regnum) { - int ret; - u16 reg_val; - u8 devad = ((regnum >> 16) & 0x1f); - u8 is_c45 = !!(regnum & MII_ADDR_C45); - u16 reg = (u16)(regnum & 0xffff); struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv; + u16 reg = (u16)(regnum & 0xffff); + u16 reg_val; + int ret; dev_dbg(&bus->dev, "mdio read %s,base is %p\n", bus->id, mdio_dev->vbase); - dev_dbg(&bus->dev, "phy id=%d, is_c45=%d, devad=%d, reg=%#x!\n", - phy_id, is_c45, devad, reg); + dev_dbg(&bus->dev, "phy id=%d, reg=%#x!\n", phy_id, reg); /* Step 1: wait for ready */ ret = hns_mdio_wait_ready(bus); @@ -297,29 +328,74 @@ static int hns_mdio_read(struct mii_bus *bus, int phy_id, int regnum) return ret; } - if (!is_c45) { - hns_mdio_cmd_write(mdio_dev, is_c45, - MDIO_C22_READ, phy_id, reg); - } else { - MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M, - MDIO_ADDR_DATA_S, reg); + hns_mdio_cmd_write(mdio_dev, false, MDIO_C22_READ, phy_id, reg); - /* Step 2; config the cmd-reg to write addr*/ - hns_mdio_cmd_write(mdio_dev, is_c45, - MDIO_C45_WRITE_ADDR, phy_id, devad); + /* Step 2: waiting for MDIO_COMMAND_REG 's mdio_start==0,*/ + /* check for read or write opt is finished */ + ret = hns_mdio_wait_ready(bus); + if (ret) { + dev_err(&bus->dev, "MDIO bus is busy\n"); + return ret; + } - /* Step 3: check for read or write opt is finished */ - ret = hns_mdio_wait_ready(bus); - if (ret) { - dev_err(&bus->dev, "MDIO bus is busy\n"); - return ret; - } + reg_val = 
MDIO_GET_REG_BIT(mdio_dev, MDIO_STA_REG, MDIO_STATE_STA_B); + if (reg_val) { + dev_err(&bus->dev, " ERROR! MDIO Read failed!\n"); + return -EBUSY; + } - hns_mdio_cmd_write(mdio_dev, is_c45, - MDIO_C45_READ, phy_id, devad); + /* Step 3; get out data*/ + reg_val = (u16)MDIO_GET_REG_FIELD(mdio_dev, MDIO_RDATA_REG, + MDIO_RDATA_DATA_M, MDIO_RDATA_DATA_S); + + return reg_val; +} + +/** + * hns_mdio_read_c45 - access phy register + * @bus: mdio bus + * @phy_id: phy id + * @devad: device address to read + * @regnum: register num + * + * Return phy register value + */ +static int hns_mdio_read_c45(struct mii_bus *bus, int phy_id, int devad, + int regnum) +{ + struct hns_mdio_device *mdio_dev = (struct hns_mdio_device *)bus->priv; + u16 reg = (u16)(regnum & 0xffff); + u16 reg_val; + int ret; + + dev_dbg(&bus->dev, "mdio read %s,base is %p\n", + bus->id, mdio_dev->vbase); + dev_dbg(&bus->dev, "phy id=%d, devad=%d, reg=%#x!\n", + phy_id, devad, reg); + + /* Step 1: wait for ready */ + ret = hns_mdio_wait_ready(bus); + if (ret) { + dev_err(&bus->dev, "MDIO bus is busy\n"); + return ret; + } + + MDIO_SET_REG_FIELD(mdio_dev, MDIO_ADDR_REG, MDIO_ADDR_DATA_M, + MDIO_ADDR_DATA_S, reg); + + /* Step 2; config the cmd-reg to write addr*/ + hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_WRITE_ADDR, phy_id, devad); + + /* Step 3: check for read or write opt is finished */ + ret = hns_mdio_wait_ready(bus); + if (ret) { + dev_err(&bus->dev, "MDIO bus is busy\n"); + return ret; } - /* Step 5: waiting for MDIO_COMMAND_REG's mdio_start==0,*/ + hns_mdio_cmd_write(mdio_dev, true, MDIO_C45_READ, phy_id, devad); + + /* Step 5: waiting for MDIO_COMMAND_REG 's mdio_start==0,*/ /* check for read or write opt is finished */ ret = hns_mdio_wait_ready(bus); if (ret) { @@ -438,8 +514,10 @@ static int hns_mdio_probe(struct platform_device *pdev) } new_bus->name = MDIO_BUS_NAME; - new_bus->read = hns_mdio_read; - new_bus->write = hns_mdio_write; + new_bus->read = hns_mdio_read_c22; + new_bus->write = hns_mdio_write_c22; + new_bus->read_c45 = hns_mdio_read_c45; + new_bus->write_c45 = hns_mdio_write_c45; new_bus->reset = hns_mdio_reset; new_bus->priv = mdio_dev; new_bus->parent = &pdev->dev; diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index e19a6bb3f444..146ca1d8031b 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -250,10 +250,11 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter) struct ibmvnic_sub_crq_queue **rxqs = adapter->rx_scrq; struct ibmvnic_sub_crq_queue **txqs = adapter->tx_scrq; struct ibmvnic_sub_crq_queue *queue; - int num_rxqs = adapter->num_active_rx_scrqs; - int num_txqs = adapter->num_active_tx_scrqs; + int num_rxqs = adapter->num_active_rx_scrqs, i_rxqs = 0; + int num_txqs = adapter->num_active_tx_scrqs, i_txqs = 0; int total_queues, stride, stragglers, i; unsigned int num_cpu, cpu; + bool is_rx_queue; int rc = 0; netdev_dbg(adapter->netdev, "%s: Setting irq affinity hints", __func__); @@ -273,14 +274,24 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter) /* next available cpu to assign irq to */ cpu = cpumask_next(-1, cpu_online_mask); - for (i = 0; i < num_txqs; i++) { - queue = txqs[i]; + for (i = 0; i < total_queues; i++) { + is_rx_queue = false; + /* balance core load by alternating rx and tx assignments + * ex: TX0 -> RX0 -> TX1 -> RX1 etc. 
+ */ + if ((i % 2 == 1 && i_rxqs < num_rxqs) || i_txqs == num_txqs) { + queue = rxqs[i_rxqs++]; + is_rx_queue = true; + } else { + queue = txqs[i_txqs++]; + } + rc = ibmvnic_set_queue_affinity(queue, &cpu, &stragglers, stride); if (rc) goto out; - if (!queue) + if (!queue || is_rx_queue) continue; rc = __netif_set_xps_queue(adapter->netdev, @@ -291,14 +302,6 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter) __func__, i, rc); } - for (i = 0; i < num_rxqs; i++) { - queue = rxqs[i]; - rc = ibmvnic_set_queue_affinity(queue, &cpu, &stragglers, - stride); - if (rc) - goto out; - } - out: if (rc) { netdev_warn(adapter->netdev, diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 3facb55b7161..a3c84bf05e44 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -337,6 +337,9 @@ config ICE_HWTS the PTP clock driver precise cross-timestamp ioctl (PTP_SYS_OFFSET_PRECISE). +config ICE_GNSS + def_bool GNSS = y || GNSS = ICE + config FM10K tristate "Intel(R) FM10000 Ethernet Switch Host Interface Support" default n diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c index 59e82d131d88..721f86fd5802 100644 --- a/drivers/net/ethernet/intel/e1000e/ethtool.c +++ b/drivers/net/ethernet/intel/e1000e/ethtool.c @@ -110,9 +110,9 @@ static const char e1000_gstrings_test[][ETH_GSTRING_LEN] = { static int e1000_get_link_ksettings(struct net_device *netdev, struct ethtool_link_ksettings *cmd) { + u32 speed, supported, advertising, lp_advertising, lpa_t; struct e1000_adapter *adapter = netdev_priv(netdev); struct e1000_hw *hw = &adapter->hw; - u32 speed, supported, advertising; if (hw->phy.media_type == e1000_media_type_copper) { supported = (SUPPORTED_10baseT_Half | @@ -120,7 +120,9 @@ static int e1000_get_link_ksettings(struct net_device *netdev, SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | SUPPORTED_1000baseT_Full | + SUPPORTED_Asym_Pause | SUPPORTED_Autoneg | + SUPPORTED_Pause | SUPPORTED_TP); if (hw->phy.type == e1000_phy_ife) supported &= ~SUPPORTED_1000baseT_Full; @@ -192,10 +194,16 @@ static int e1000_get_link_ksettings(struct net_device *netdev, if (hw->phy.media_type != e1000_media_type_copper) cmd->base.eth_tp_mdix_ctrl = ETH_TP_MDI_INVALID; + lpa_t = mii_stat1000_to_ethtool_lpa_t(adapter->phy_regs.stat1000); + lp_advertising = lpa_t | + mii_lpa_to_ethtool_lpa_t(adapter->phy_regs.lpa); + ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, supported); ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising, advertising); + ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising, + lp_advertising); return 0; } diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 04acd1a992fa..e1eb1de88bf9 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -7418,9 +7418,6 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto err_pci_reg; - /* AER (Advanced Error Reporting) hooks */ - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); /* PCI config space info */ err = pci_save_state(pdev); @@ -7708,7 +7705,6 @@ err_flashmap: err_ioremap: free_netdev(netdev); err_alloc_etherdev: - pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -7775,9 +7771,6 @@ static void e1000_remove(struct pci_dev *pdev) free_netdev(netdev); - /* AER disable 
*/ - pci_disable_pcie_error_reporting(pdev); - pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c index 060b263348ce..08c3d477dd6f 100644 --- a/drivers/net/ethernet/intel/e1000e/phy.c +++ b/drivers/net/ethernet/intel/e1000e/phy.c @@ -2,6 +2,7 @@ /* Copyright(c) 1999 - 2018 Intel Corporation. */ #include "e1000.h" +#include <linux/ethtool.h> static s32 e1000_wait_autoneg(struct e1000_hw *hw); static s32 e1000_access_phy_wakeup_reg_bm(struct e1000_hw *hw, u32 offset, @@ -1011,6 +1012,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw) */ mii_autoneg_adv_reg &= ~(ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP); + phy->autoneg_advertised &= + ~(ADVERTISED_Pause | ADVERTISED_Asym_Pause); break; case e1000_fc_rx_pause: /* Rx Flow control is enabled, and Tx Flow control is @@ -1024,6 +1027,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw) */ mii_autoneg_adv_reg |= (ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP); + phy->autoneg_advertised |= + (ADVERTISED_Pause | ADVERTISED_Asym_Pause); break; case e1000_fc_tx_pause: /* Tx Flow control is enabled, and Rx Flow control is @@ -1031,6 +1036,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw) */ mii_autoneg_adv_reg |= ADVERTISE_PAUSE_ASYM; mii_autoneg_adv_reg &= ~ADVERTISE_PAUSE_CAP; + phy->autoneg_advertised |= ADVERTISED_Asym_Pause; + phy->autoneg_advertised &= ~ADVERTISED_Pause; break; case e1000_fc_full: /* Flow control (both Rx and Tx) is enabled by a software @@ -1038,6 +1045,8 @@ static s32 e1000_phy_setup_autoneg(struct e1000_hw *hw) */ mii_autoneg_adv_reg |= (ADVERTISE_PAUSE_ASYM | ADVERTISE_PAUSE_CAP); + phy->autoneg_advertised |= + (ADVERTISED_Pause | ADVERTISED_Asym_Pause); break; default: e_dbg("Flow control param set incorrectly\n"); diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c index b473cb7d7c57..027d721feb18 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c @@ -2127,8 +2127,6 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_pci_reg; } - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); pci_save_state(pdev); @@ -2227,7 +2225,6 @@ err_sw_init: err_ioremap: free_netdev(netdev); err_alloc_netdev: - pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -2281,8 +2278,6 @@ static void fm10k_remove(struct pci_dev *pdev) pci_release_mem_regions(pdev); - pci_disable_pcie_error_reporting(pdev); - pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index 3a1c28ca5bb4..60ce4d15d82a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -33,6 +33,7 @@ #include <linux/net_tstamp.h> #include <linux/ptp_clock_kernel.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <net/tc_act/tc_gact.h> #include <net/tc_act/tc_mirred.h> #include <net/udp_tunnel.h> @@ -1287,9 +1288,9 @@ void i40e_ptp_stop(struct i40e_pf *pf); int i40e_ptp_alloc_pins(struct i40e_pf *pf); int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset); int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi); -i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf); -i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf); -i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf); +int i40e_get_partition_bw_setting(struct i40e_pf *pf); 
+int i40e_set_partition_bw_setting(struct i40e_pf *pf); +int i40e_commit_partition_bw_setting(struct i40e_pf *pf); void i40e_print_link_message(struct i40e_vsi *vsi, bool isup); void i40e_set_fec_in_flags(u8 fec_cfg, u32 *flags); diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c index 42439f725aa4..86fac8f959bb 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c @@ -47,9 +47,9 @@ static void i40e_adminq_init_regs(struct i40e_hw *hw) * i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings * @hw: pointer to the hardware structure **/ -static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw) +static int i40e_alloc_adminq_asq_ring(struct i40e_hw *hw) { - i40e_status ret_code; + int ret_code; ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf, i40e_mem_atq_ring, @@ -74,9 +74,9 @@ static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw) * i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings * @hw: pointer to the hardware structure **/ -static i40e_status i40e_alloc_adminq_arq_ring(struct i40e_hw *hw) +static int i40e_alloc_adminq_arq_ring(struct i40e_hw *hw) { - i40e_status ret_code; + int ret_code; ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf, i40e_mem_arq_ring, @@ -115,11 +115,11 @@ static void i40e_free_adminq_arq(struct i40e_hw *hw) * i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue * @hw: pointer to the hardware structure **/ -static i40e_status i40e_alloc_arq_bufs(struct i40e_hw *hw) +static int i40e_alloc_arq_bufs(struct i40e_hw *hw) { - i40e_status ret_code; struct i40e_aq_desc *desc; struct i40e_dma_mem *bi; + int ret_code; int i; /* We'll be allocating the buffer info memory first, then we can @@ -182,10 +182,10 @@ unwind_alloc_arq_bufs: * i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue * @hw: pointer to the hardware structure **/ -static i40e_status i40e_alloc_asq_bufs(struct i40e_hw *hw) +static int i40e_alloc_asq_bufs(struct i40e_hw *hw) { - i40e_status ret_code; struct i40e_dma_mem *bi; + int ret_code; int i; /* No mapped memory needed yet, just the buffer info structures */ @@ -266,9 +266,9 @@ static void i40e_free_asq_bufs(struct i40e_hw *hw) * * Configure base address and length registers for the transmit queue **/ -static i40e_status i40e_config_asq_regs(struct i40e_hw *hw) +static int i40e_config_asq_regs(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; u32 reg = 0; /* Clear Head and Tail */ @@ -295,9 +295,9 @@ static i40e_status i40e_config_asq_regs(struct i40e_hw *hw) * * Configure base address and length registers for the receive (event queue) **/ -static i40e_status i40e_config_arq_regs(struct i40e_hw *hw) +static int i40e_config_arq_regs(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; u32 reg = 0; /* Clear Head and Tail */ @@ -334,9 +334,9 @@ static i40e_status i40e_config_arq_regs(struct i40e_hw *hw) * Do *NOT* hold the lock when calling this as the memory allocation routines * called are not going to be atomic context safe **/ -static i40e_status i40e_init_asq(struct i40e_hw *hw) +static int i40e_init_asq(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; if (hw->aq.asq.count > 0) { /* queue already initialized */ @@ -393,9 +393,9 @@ init_adminq_exit: * Do *NOT* hold the lock when calling this as the memory allocation routines * called are not going to be atomic context safe **/ -static 
i40e_status i40e_init_arq(struct i40e_hw *hw) +static int i40e_init_arq(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; if (hw->aq.arq.count > 0) { /* queue already initialized */ @@ -445,9 +445,9 @@ init_adminq_exit: * * The main shutdown routine for the Admin Send Queue **/ -static i40e_status i40e_shutdown_asq(struct i40e_hw *hw) +static int i40e_shutdown_asq(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; mutex_lock(&hw->aq.asq_mutex); @@ -479,9 +479,9 @@ shutdown_asq_out: * * The main shutdown routine for the Admin Receive Queue **/ -static i40e_status i40e_shutdown_arq(struct i40e_hw *hw) +static int i40e_shutdown_arq(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; mutex_lock(&hw->aq.arq_mutex); @@ -582,12 +582,12 @@ static void i40e_set_hw_flags(struct i40e_hw *hw) * - hw->aq.arq_buf_size * - hw->aq.asq_buf_size **/ -i40e_status i40e_init_adminq(struct i40e_hw *hw) +int i40e_init_adminq(struct i40e_hw *hw) { u16 cfg_ptr, oem_hi, oem_lo; u16 eetrack_lo, eetrack_hi; - i40e_status ret_code; int retry = 0; + int ret_code; /* verify input for valid configuration */ if ((hw->aq.num_arq_entries == 0) || @@ -780,7 +780,7 @@ static bool i40e_asq_done(struct i40e_hw *hw) * This is the main send command driver routine for the Admin Queue send * queue. It runs the queue, cleans the queue, etc **/ -static i40e_status +static int i40e_asq_send_command_atomic_exec(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ @@ -788,12 +788,12 @@ i40e_asq_send_command_atomic_exec(struct i40e_hw *hw, struct i40e_asq_cmd_details *cmd_details, bool is_atomic_context) { - i40e_status status = 0; struct i40e_dma_mem *dma_buff = NULL; struct i40e_asq_cmd_details *details; struct i40e_aq_desc *desc_on_ring; bool cmd_completed = false; u16 retval = 0; + int status = 0; u32 val = 0; if (hw->aq.asq.count == 0) { @@ -984,7 +984,7 @@ asq_send_command_error: * Acquires the lock and calls the main send command execution * routine. **/ -i40e_status +int i40e_asq_send_command_atomic(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ @@ -992,7 +992,7 @@ i40e_asq_send_command_atomic(struct i40e_hw *hw, struct i40e_asq_cmd_details *cmd_details, bool is_atomic_context) { - i40e_status status; + int status; mutex_lock(&hw->aq.asq_mutex); status = i40e_asq_send_command_atomic_exec(hw, desc, buff, buff_size, @@ -1003,7 +1003,7 @@ i40e_asq_send_command_atomic(struct i40e_hw *hw, return status; } -i40e_status +int i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, struct i40e_asq_cmd_details *cmd_details) @@ -1026,7 +1026,7 @@ i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc, * routine. Returns the last Admin Queue status in aq_status * to avoid race conditions in access to hw->aq.asq_last_status. 
**/ -i40e_status +int i40e_asq_send_command_atomic_v2(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ @@ -1035,7 +1035,7 @@ i40e_asq_send_command_atomic_v2(struct i40e_hw *hw, bool is_atomic_context, enum i40e_admin_queue_err *aq_status) { - i40e_status status; + int status; mutex_lock(&hw->aq.asq_mutex); status = i40e_asq_send_command_atomic_exec(hw, desc, buff, @@ -1048,7 +1048,7 @@ i40e_asq_send_command_atomic_v2(struct i40e_hw *hw, return status; } -i40e_status +int i40e_asq_send_command_v2(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, struct i40e_asq_cmd_details *cmd_details, @@ -1084,14 +1084,14 @@ void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, * the contents through e. It can also return how many events are * left to process through 'pending' **/ -i40e_status i40e_clean_arq_element(struct i40e_hw *hw, - struct i40e_arq_event_info *e, - u16 *pending) +int i40e_clean_arq_element(struct i40e_hw *hw, + struct i40e_arq_event_info *e, + u16 *pending) { - i40e_status ret_code = 0; u16 ntc = hw->aq.arq.next_to_clean; struct i40e_aq_desc *desc; struct i40e_dma_mem *bi; + int ret_code = 0; u16 desc_idx; u16 datalen; u16 flags; diff --git a/drivers/net/ethernet/intel/i40e/i40e_alloc.h b/drivers/net/ethernet/intel/i40e/i40e_alloc.h index cb8689222c8b..a6c9a9e343d1 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_alloc.h +++ b/drivers/net/ethernet/intel/i40e/i40e_alloc.h @@ -20,16 +20,16 @@ enum i40e_memory_type { }; /* prototype for functions used for dynamic memory allocation */ -i40e_status i40e_allocate_dma_mem(struct i40e_hw *hw, - struct i40e_dma_mem *mem, - enum i40e_memory_type type, - u64 size, u32 alignment); -i40e_status i40e_free_dma_mem(struct i40e_hw *hw, - struct i40e_dma_mem *mem); -i40e_status i40e_allocate_virt_mem(struct i40e_hw *hw, - struct i40e_virt_mem *mem, - u32 size); -i40e_status i40e_free_virt_mem(struct i40e_hw *hw, - struct i40e_virt_mem *mem); +int i40e_allocate_dma_mem(struct i40e_hw *hw, + struct i40e_dma_mem *mem, + enum i40e_memory_type type, + u64 size, u32 alignment); +int i40e_free_dma_mem(struct i40e_hw *hw, + struct i40e_dma_mem *mem); +int i40e_allocate_virt_mem(struct i40e_hw *hw, + struct i40e_virt_mem *mem, + u32 size); +int i40e_free_virt_mem(struct i40e_hw *hw, + struct i40e_virt_mem *mem); #endif /* _I40E_ALLOC_H_ */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c index 10d7a982a5b9..639c5a1ca853 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_client.c +++ b/drivers/net/ethernet/intel/i40e/i40e_client.c @@ -541,9 +541,9 @@ static int i40e_client_virtchnl_send(struct i40e_info *ldev, { struct i40e_pf *pf = ldev->pf; struct i40e_hw *hw = &pf->hw; - i40e_status err; + int err; - err = i40e_aq_send_msg_to_vf(hw, vf_id, VIRTCHNL_OP_IWARP, + err = i40e_aq_send_msg_to_vf(hw, vf_id, VIRTCHNL_OP_RDMA, 0, msg, len, NULL); if (err) dev_err(&pf->pdev->dev, "Unable to send iWarp message to VF, error %d, aq status %d\n", @@ -674,7 +674,7 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev, struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi]; struct i40e_vsi_context ctxt; bool update = true; - i40e_status err; + int err; /* TODO: for now do not allow setting VF's VSI setting */ if (is_vf) @@ -686,8 +686,8 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev, ctxt.flags = I40E_AQ_VSI_TYPE_PF; if (err) { dev_info(&pf->pdev->dev, - "couldn't get PF vsi config, err %s aq_err %s\n", - 
i40e_stat_str(&pf->hw, err), + "couldn't get PF vsi config, err %pe aq_err %s\n", + ERR_PTR(err), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); return -ENOENT; @@ -714,8 +714,8 @@ static int i40e_client_update_vsi_ctxt(struct i40e_info *ldev, err = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL); if (err) { dev_info(&pf->pdev->dev, - "update VSI ctxt for PE failed, err %s aq_err %s\n", - i40e_stat_str(&pf->hw, err), + "update VSI ctxt for PE failed, err %pe aq_err %s\n", + ERR_PTR(err), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); } diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index 8f764ff5c990..ed88e38d488b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -14,9 +14,9 @@ * This function sets the mac type of the adapter based on the * vendor ID and device ID stored in the hw structure. **/ -i40e_status i40e_set_mac_type(struct i40e_hw *hw) +int i40e_set_mac_type(struct i40e_hw *hw) { - i40e_status status = 0; + int status = 0; if (hw->vendor_id == PCI_VENDOR_ID_INTEL) { switch (hw->device_id) { @@ -125,154 +125,6 @@ const char *i40e_aq_str(struct i40e_hw *hw, enum i40e_admin_queue_err aq_err) } /** - * i40e_stat_str - convert status err code to a string - * @hw: pointer to the HW structure - * @stat_err: the status error code to convert - **/ -const char *i40e_stat_str(struct i40e_hw *hw, i40e_status stat_err) -{ - switch (stat_err) { - case 0: - return "OK"; - case I40E_ERR_NVM: - return "I40E_ERR_NVM"; - case I40E_ERR_NVM_CHECKSUM: - return "I40E_ERR_NVM_CHECKSUM"; - case I40E_ERR_PHY: - return "I40E_ERR_PHY"; - case I40E_ERR_CONFIG: - return "I40E_ERR_CONFIG"; - case I40E_ERR_PARAM: - return "I40E_ERR_PARAM"; - case I40E_ERR_MAC_TYPE: - return "I40E_ERR_MAC_TYPE"; - case I40E_ERR_UNKNOWN_PHY: - return "I40E_ERR_UNKNOWN_PHY"; - case I40E_ERR_LINK_SETUP: - return "I40E_ERR_LINK_SETUP"; - case I40E_ERR_ADAPTER_STOPPED: - return "I40E_ERR_ADAPTER_STOPPED"; - case I40E_ERR_INVALID_MAC_ADDR: - return "I40E_ERR_INVALID_MAC_ADDR"; - case I40E_ERR_DEVICE_NOT_SUPPORTED: - return "I40E_ERR_DEVICE_NOT_SUPPORTED"; - case I40E_ERR_PRIMARY_REQUESTS_PENDING: - return "I40E_ERR_PRIMARY_REQUESTS_PENDING"; - case I40E_ERR_INVALID_LINK_SETTINGS: - return "I40E_ERR_INVALID_LINK_SETTINGS"; - case I40E_ERR_AUTONEG_NOT_COMPLETE: - return "I40E_ERR_AUTONEG_NOT_COMPLETE"; - case I40E_ERR_RESET_FAILED: - return "I40E_ERR_RESET_FAILED"; - case I40E_ERR_SWFW_SYNC: - return "I40E_ERR_SWFW_SYNC"; - case I40E_ERR_NO_AVAILABLE_VSI: - return "I40E_ERR_NO_AVAILABLE_VSI"; - case I40E_ERR_NO_MEMORY: - return "I40E_ERR_NO_MEMORY"; - case I40E_ERR_BAD_PTR: - return "I40E_ERR_BAD_PTR"; - case I40E_ERR_RING_FULL: - return "I40E_ERR_RING_FULL"; - case I40E_ERR_INVALID_PD_ID: - return "I40E_ERR_INVALID_PD_ID"; - case I40E_ERR_INVALID_QP_ID: - return "I40E_ERR_INVALID_QP_ID"; - case I40E_ERR_INVALID_CQ_ID: - return "I40E_ERR_INVALID_CQ_ID"; - case I40E_ERR_INVALID_CEQ_ID: - return "I40E_ERR_INVALID_CEQ_ID"; - case I40E_ERR_INVALID_AEQ_ID: - return "I40E_ERR_INVALID_AEQ_ID"; - case I40E_ERR_INVALID_SIZE: - return "I40E_ERR_INVALID_SIZE"; - case I40E_ERR_INVALID_ARP_INDEX: - return "I40E_ERR_INVALID_ARP_INDEX"; - case I40E_ERR_INVALID_FPM_FUNC_ID: - return "I40E_ERR_INVALID_FPM_FUNC_ID"; - case I40E_ERR_QP_INVALID_MSG_SIZE: - return "I40E_ERR_QP_INVALID_MSG_SIZE"; - case I40E_ERR_QP_TOOMANY_WRS_POSTED: - return "I40E_ERR_QP_TOOMANY_WRS_POSTED"; - case I40E_ERR_INVALID_FRAG_COUNT: - return 
"I40E_ERR_INVALID_FRAG_COUNT"; - case I40E_ERR_QUEUE_EMPTY: - return "I40E_ERR_QUEUE_EMPTY"; - case I40E_ERR_INVALID_ALIGNMENT: - return "I40E_ERR_INVALID_ALIGNMENT"; - case I40E_ERR_FLUSHED_QUEUE: - return "I40E_ERR_FLUSHED_QUEUE"; - case I40E_ERR_INVALID_PUSH_PAGE_INDEX: - return "I40E_ERR_INVALID_PUSH_PAGE_INDEX"; - case I40E_ERR_INVALID_IMM_DATA_SIZE: - return "I40E_ERR_INVALID_IMM_DATA_SIZE"; - case I40E_ERR_TIMEOUT: - return "I40E_ERR_TIMEOUT"; - case I40E_ERR_OPCODE_MISMATCH: - return "I40E_ERR_OPCODE_MISMATCH"; - case I40E_ERR_CQP_COMPL_ERROR: - return "I40E_ERR_CQP_COMPL_ERROR"; - case I40E_ERR_INVALID_VF_ID: - return "I40E_ERR_INVALID_VF_ID"; - case I40E_ERR_INVALID_HMCFN_ID: - return "I40E_ERR_INVALID_HMCFN_ID"; - case I40E_ERR_BACKING_PAGE_ERROR: - return "I40E_ERR_BACKING_PAGE_ERROR"; - case I40E_ERR_NO_PBLCHUNKS_AVAILABLE: - return "I40E_ERR_NO_PBLCHUNKS_AVAILABLE"; - case I40E_ERR_INVALID_PBLE_INDEX: - return "I40E_ERR_INVALID_PBLE_INDEX"; - case I40E_ERR_INVALID_SD_INDEX: - return "I40E_ERR_INVALID_SD_INDEX"; - case I40E_ERR_INVALID_PAGE_DESC_INDEX: - return "I40E_ERR_INVALID_PAGE_DESC_INDEX"; - case I40E_ERR_INVALID_SD_TYPE: - return "I40E_ERR_INVALID_SD_TYPE"; - case I40E_ERR_MEMCPY_FAILED: - return "I40E_ERR_MEMCPY_FAILED"; - case I40E_ERR_INVALID_HMC_OBJ_INDEX: - return "I40E_ERR_INVALID_HMC_OBJ_INDEX"; - case I40E_ERR_INVALID_HMC_OBJ_COUNT: - return "I40E_ERR_INVALID_HMC_OBJ_COUNT"; - case I40E_ERR_INVALID_SRQ_ARM_LIMIT: - return "I40E_ERR_INVALID_SRQ_ARM_LIMIT"; - case I40E_ERR_SRQ_ENABLED: - return "I40E_ERR_SRQ_ENABLED"; - case I40E_ERR_ADMIN_QUEUE_ERROR: - return "I40E_ERR_ADMIN_QUEUE_ERROR"; - case I40E_ERR_ADMIN_QUEUE_TIMEOUT: - return "I40E_ERR_ADMIN_QUEUE_TIMEOUT"; - case I40E_ERR_BUF_TOO_SHORT: - return "I40E_ERR_BUF_TOO_SHORT"; - case I40E_ERR_ADMIN_QUEUE_FULL: - return "I40E_ERR_ADMIN_QUEUE_FULL"; - case I40E_ERR_ADMIN_QUEUE_NO_WORK: - return "I40E_ERR_ADMIN_QUEUE_NO_WORK"; - case I40E_ERR_BAD_IWARP_CQE: - return "I40E_ERR_BAD_IWARP_CQE"; - case I40E_ERR_NVM_BLANK_MODE: - return "I40E_ERR_NVM_BLANK_MODE"; - case I40E_ERR_NOT_IMPLEMENTED: - return "I40E_ERR_NOT_IMPLEMENTED"; - case I40E_ERR_PE_DOORBELL_NOT_ENABLED: - return "I40E_ERR_PE_DOORBELL_NOT_ENABLED"; - case I40E_ERR_DIAG_TEST_FAILED: - return "I40E_ERR_DIAG_TEST_FAILED"; - case I40E_ERR_NOT_READY: - return "I40E_ERR_NOT_READY"; - case I40E_NOT_SUPPORTED: - return "I40E_NOT_SUPPORTED"; - case I40E_ERR_FIRMWARE_API_VERSION: - return "I40E_ERR_FIRMWARE_API_VERSION"; - case I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR: - return "I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR"; - } - - snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err); - return hw->err_str; -} - -/** * i40e_debug_aq * @hw: debug mask related to admin queue * @mask: debug mask @@ -355,13 +207,13 @@ bool i40e_check_asq_alive(struct i40e_hw *hw) * Tell the Firmware that we're shutting down the AdminQ and whether * or not the driver is unloading as well. 
**/ -i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw, - bool unloading) +int i40e_aq_queue_shutdown(struct i40e_hw *hw, + bool unloading) { struct i40e_aq_desc desc; struct i40e_aqc_queue_shutdown *cmd = (struct i40e_aqc_queue_shutdown *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_queue_shutdown); @@ -384,15 +236,15 @@ i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw, * * Internal function to get or set RSS look up table **/ -static i40e_status i40e_aq_get_set_rss_lut(struct i40e_hw *hw, - u16 vsi_id, bool pf_lut, - u8 *lut, u16 lut_size, - bool set) +static int i40e_aq_get_set_rss_lut(struct i40e_hw *hw, + u16 vsi_id, bool pf_lut, + u8 *lut, u16 lut_size, + bool set) { - i40e_status status; struct i40e_aq_desc desc; struct i40e_aqc_get_set_rss_lut *cmd_resp = (struct i40e_aqc_get_set_rss_lut *)&desc.params.raw; + int status; if (set) i40e_fill_default_direct_cmd_desc(&desc, @@ -437,8 +289,8 @@ static i40e_status i40e_aq_get_set_rss_lut(struct i40e_hw *hw, * * get the RSS lookup table, PF or VSI type **/ -i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id, - bool pf_lut, u8 *lut, u16 lut_size) +int i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id, + bool pf_lut, u8 *lut, u16 lut_size) { return i40e_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false); @@ -454,8 +306,8 @@ i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 vsi_id, * * set the RSS lookup table, PF or VSI type **/ -i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id, - bool pf_lut, u8 *lut, u16 lut_size) +int i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id, + bool pf_lut, u8 *lut, u16 lut_size) { return i40e_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true); } @@ -469,16 +321,16 @@ i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 vsi_id, * * get the RSS key per VSI **/ -static i40e_status i40e_aq_get_set_rss_key(struct i40e_hw *hw, - u16 vsi_id, - struct i40e_aqc_get_set_rss_key_data *key, - bool set) +static int i40e_aq_get_set_rss_key(struct i40e_hw *hw, + u16 vsi_id, + struct i40e_aqc_get_set_rss_key_data *key, + bool set) { - i40e_status status; struct i40e_aq_desc desc; struct i40e_aqc_get_set_rss_key *cmd_resp = (struct i40e_aqc_get_set_rss_key *)&desc.params.raw; u16 key_size = sizeof(struct i40e_aqc_get_set_rss_key_data); + int status; if (set) i40e_fill_default_direct_cmd_desc(&desc, @@ -509,9 +361,9 @@ static i40e_status i40e_aq_get_set_rss_key(struct i40e_hw *hw, * @key: pointer to key info struct * **/ -i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw, - u16 vsi_id, - struct i40e_aqc_get_set_rss_key_data *key) +int i40e_aq_get_rss_key(struct i40e_hw *hw, + u16 vsi_id, + struct i40e_aqc_get_set_rss_key_data *key) { return i40e_aq_get_set_rss_key(hw, vsi_id, key, false); } @@ -524,9 +376,9 @@ i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw, * * set the RSS key per VSI **/ -i40e_status i40e_aq_set_rss_key(struct i40e_hw *hw, - u16 vsi_id, - struct i40e_aqc_get_set_rss_key_data *key) +int i40e_aq_set_rss_key(struct i40e_hw *hw, + u16 vsi_id, + struct i40e_aqc_get_set_rss_key_data *key) { return i40e_aq_get_set_rss_key(hw, vsi_id, key, true); } @@ -796,10 +648,10 @@ struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = { * hw_addr, back, device_id, vendor_id, subsystem_device_id, * subsystem_vendor_id, and revision_id **/ -i40e_status i40e_init_shared_code(struct i40e_hw *hw) +int i40e_init_shared_code(struct i40e_hw *hw) { - i40e_status status = 0; u32 port, ari, func_rid; + int 
status = 0; i40e_set_mac_type(hw); @@ -836,15 +688,16 @@ i40e_status i40e_init_shared_code(struct i40e_hw *hw) * @addrs: the requestor's mac addr store * @cmd_details: pointer to command details structure or NULL **/ -static i40e_status i40e_aq_mac_address_read(struct i40e_hw *hw, - u16 *flags, - struct i40e_aqc_mac_address_read_data *addrs, - struct i40e_asq_cmd_details *cmd_details) +static int +i40e_aq_mac_address_read(struct i40e_hw *hw, + u16 *flags, + struct i40e_aqc_mac_address_read_data *addrs, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_mac_address_read *cmd_data = (struct i40e_aqc_mac_address_read *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_mac_address_read); desc.flags |= cpu_to_le16(I40E_AQ_FLAG_BUF); @@ -863,14 +716,14 @@ static i40e_status i40e_aq_mac_address_read(struct i40e_hw *hw, * @mac_addr: address to write * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw, - u16 flags, u8 *mac_addr, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_mac_address_write(struct i40e_hw *hw, + u16 flags, u8 *mac_addr, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_mac_address_write *cmd_data = (struct i40e_aqc_mac_address_write *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_mac_address_write); @@ -893,11 +746,11 @@ i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw, * * Reads the adapter's MAC address from register **/ -i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr) +int i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr) { struct i40e_aqc_mac_address_read_data addrs; - i40e_status status; u16 flags = 0; + int status; status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); @@ -914,11 +767,11 @@ i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr) * * Reads the adapter's Port MAC address **/ -i40e_status i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr) +int i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr) { struct i40e_aqc_mac_address_read_data addrs; - i40e_status status; u16 flags = 0; + int status; status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); if (status) @@ -972,13 +825,13 @@ void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable) * * Reads the part number string from the EEPROM. 
**/ -i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num, - u32 pba_num_size) +int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num, + u32 pba_num_size) { - i40e_status status = 0; u16 pba_word = 0; u16 pba_size = 0; u16 pba_ptr = 0; + int status = 0; u16 i = 0; status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word); @@ -1087,8 +940,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw) * @hw: pointer to the hardware structure * @retry_limit: how many times to retry before failure **/ -static i40e_status i40e_poll_globr(struct i40e_hw *hw, - u32 retry_limit) +static int i40e_poll_globr(struct i40e_hw *hw, + u32 retry_limit) { u32 cnt, reg = 0; @@ -1114,7 +967,7 @@ static i40e_status i40e_poll_globr(struct i40e_hw *hw, * Assuming someone else has triggered a global reset, * assure the global reset is complete and then reset the PF **/ -i40e_status i40e_pf_reset(struct i40e_hw *hw) +int i40e_pf_reset(struct i40e_hw *hw) { u32 cnt = 0; u32 cnt1 = 0; @@ -1453,15 +1306,16 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink) * * Returns the various PHY abilities supported on the Port. **/ -i40e_status i40e_aq_get_phy_capabilities(struct i40e_hw *hw, - bool qualified_modules, bool report_init, - struct i40e_aq_get_phy_abilities_resp *abilities, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_get_phy_capabilities(struct i40e_hw *hw, + bool qualified_modules, bool report_init, + struct i40e_aq_get_phy_abilities_resp *abilities, + struct i40e_asq_cmd_details *cmd_details) { - struct i40e_aq_desc desc; - i40e_status status; u16 abilities_size = sizeof(struct i40e_aq_get_phy_abilities_resp); u16 max_delay = I40E_MAX_PHY_TIMEOUT, total_delay = 0; + struct i40e_aq_desc desc; + int status; if (!abilities) return I40E_ERR_PARAM; @@ -1532,14 +1386,14 @@ i40e_status i40e_aq_get_phy_capabilities(struct i40e_hw *hw, * of the PHY Config parameters. This status will be indicated by the * command response. **/ -enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, - struct i40e_aq_set_phy_config *config, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_phy_config(struct i40e_hw *hw, + struct i40e_aq_set_phy_config *config, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aq_set_phy_config *cmd = (struct i40e_aq_set_phy_config *)&desc.params.raw; - enum i40e_status_code status; + int status; if (!config) return I40E_ERR_PARAM; @@ -1554,7 +1408,7 @@ enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, return status; } -static noinline_for_stack enum i40e_status_code +static noinline_for_stack int i40e_set_fc_status(struct i40e_hw *hw, struct i40e_aq_get_phy_abilities_resp *abilities, bool atomic_restart) @@ -1612,11 +1466,11 @@ i40e_set_fc_status(struct i40e_hw *hw, * * Set the requested flow control mode using set_phy_config. 
**/ -enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, - bool atomic_restart) +int i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, + bool atomic_restart) { struct i40e_aq_get_phy_abilities_resp abilities; - enum i40e_status_code status; + int status; *aq_failures = 0x0; @@ -1655,13 +1509,13 @@ enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, * * Tell the firmware that the driver is taking over from PXE **/ -i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_clear_pxe_mode(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) { - i40e_status status; struct i40e_aq_desc desc; struct i40e_aqc_clear_pxe *cmd = (struct i40e_aqc_clear_pxe *)&desc.params.raw; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_clear_pxe_mode); @@ -1683,14 +1537,14 @@ i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw, * * Sets up the link and restarts the Auto-Negotiation over the link. **/ -i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw, - bool enable_link, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_link_restart_an(struct i40e_hw *hw, + bool enable_link, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_link_restart_an *cmd = (struct i40e_aqc_set_link_restart_an *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_link_restart_an); @@ -1715,17 +1569,17 @@ i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw, * * Returns the link status of the adapter. **/ -i40e_status i40e_aq_get_link_info(struct i40e_hw *hw, - bool enable_lse, struct i40e_link_status *link, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_link_info(struct i40e_hw *hw, + bool enable_lse, struct i40e_link_status *link, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_get_link_status *resp = (struct i40e_aqc_get_link_status *)&desc.params.raw; struct i40e_link_status *hw_link_info = &hw->phy.link_info; - i40e_status status; bool tx_pause, rx_pause; u16 command_flags; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_link_status); @@ -1811,14 +1665,14 @@ aq_get_link_info_exit: * * Set link interrupt mask. **/ -i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw, - u16 mask, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, + u16 mask, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_phy_int_mask *cmd = (struct i40e_aqc_set_phy_int_mask *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_phy_int_mask); @@ -1838,8 +1692,8 @@ i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw, * * Enable/disable loopback on a given port */ -i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_lb_mode *cmd = @@ -1864,13 +1718,13 @@ i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw, bool ena_lpbk, * * Reset the external PHY. 
**/ -i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_phy_debug *cmd = (struct i40e_aqc_set_phy_debug *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_phy_debug); @@ -1905,9 +1759,9 @@ static bool i40e_is_aq_api_ver_ge(struct i40e_adminq_info *aq, u16 maj, * * Add a VSI context to the hardware. **/ -i40e_status i40e_aq_add_vsi(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_add_vsi(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_get_update_vsi *cmd = @@ -1915,7 +1769,7 @@ i40e_status i40e_aq_add_vsi(struct i40e_hw *hw, struct i40e_aqc_add_get_update_vsi_completion *resp = (struct i40e_aqc_add_get_update_vsi_completion *) &desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_vsi); @@ -1949,15 +1803,15 @@ aq_add_vsi_exit: * @seid: vsi number * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, - u16 seid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_default_vsi(struct i40e_hw *hw, + u16 seid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *) &desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -1977,15 +1831,15 @@ i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, * @seid: vsi number * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw, - u16 seid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_clear_default_vsi(struct i40e_hw *hw, + u16 seid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *) &desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2007,16 +1861,16 @@ i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw, * @cmd_details: pointer to command details structure or NULL * @rx_only_promisc: flag to decide if egress traffic gets mirrored in promisc **/ -i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, - u16 seid, bool set, - struct i40e_asq_cmd_details *cmd_details, - bool rx_only_promisc) +int i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, + u16 seid, bool set, + struct i40e_asq_cmd_details *cmd_details, + bool rx_only_promisc) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - i40e_status status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2047,14 +1901,15 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, * @set: set multicast promiscuous enable/disable * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, - u16 seid, bool set, 
struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, + u16 seid, bool set, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - i40e_status status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2080,16 +1935,16 @@ i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, * @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag * @cmd_details: pointer to command details structure or NULL **/ -enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, - u16 vid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, + u16 vid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - enum i40e_status_code status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2116,16 +1971,16 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw, * @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag * @cmd_details: pointer to command details structure or NULL **/ -enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, - u16 vid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, + u16 vid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - enum i40e_status_code status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2158,15 +2013,15 @@ enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, * @vid: The VLAN tag filter - capture any broadcast packet with this VLAN tag * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, u16 vid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, u16 vid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - i40e_status status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2193,14 +2048,14 @@ i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw, * * Set or clear the broadcast promiscuous flag (filter) for a given VSI. 
**/ -i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, - u16 seid, bool set_filter, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, + u16 seid, bool set_filter, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2226,15 +2081,15 @@ i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw, - u16 seid, bool enable, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw, + u16 seid, bool enable, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_vsi_promiscuous_modes *cmd = (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - i40e_status status; u16 flags = 0; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_vsi_promiscuous_modes); @@ -2256,9 +2111,9 @@ i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw, * @vsi_ctx: pointer to a vsi context struct * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_get_update_vsi *cmd = @@ -2266,7 +2121,7 @@ i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw, struct i40e_aqc_add_get_update_vsi_completion *resp = (struct i40e_aqc_add_get_update_vsi_completion *) &desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_vsi_parameters); @@ -2298,9 +2153,9 @@ aq_get_vsi_params_exit: * * Update a VSI context. 
**/ -i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_update_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_get_update_vsi *cmd = @@ -2308,7 +2163,7 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw, struct i40e_aqc_add_get_update_vsi_completion *resp = (struct i40e_aqc_add_get_update_vsi_completion *) &desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_update_vsi_parameters); @@ -2336,15 +2191,15 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw, * * Fill the buf with switch configuration returned from AdminQ command **/ -i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw, - struct i40e_aqc_get_switch_config_resp *buf, - u16 buf_size, u16 *start_seid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_switch_config(struct i40e_hw *hw, + struct i40e_aqc_get_switch_config_resp *buf, + u16 buf_size, u16 *start_seid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_switch_seid *scfg = (struct i40e_aqc_switch_seid *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_switch_config); @@ -2370,15 +2225,15 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw, * * Set switch configuration bits **/ -enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw, - u16 flags, - u16 valid_flags, u8 mode, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_switch_config(struct i40e_hw *hw, + u16 flags, + u16 valid_flags, u8 mode, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_switch_config *scfg = (struct i40e_aqc_set_switch_config *)&desc.params.raw; - enum i40e_status_code status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_switch_config); @@ -2407,16 +2262,16 @@ enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw, * * Get the firmware version from the admin queue commands **/ -i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw, - u16 *fw_major_version, u16 *fw_minor_version, - u32 *fw_build, - u16 *api_major_version, u16 *api_minor_version, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_firmware_version(struct i40e_hw *hw, + u16 *fw_major_version, u16 *fw_minor_version, + u32 *fw_build, + u16 *api_major_version, u16 *api_minor_version, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_get_version *resp = (struct i40e_aqc_get_version *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_version); @@ -2446,14 +2301,14 @@ i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw, * * Send the driver version to the firmware **/ -i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw, +int i40e_aq_send_driver_version(struct i40e_hw *hw, struct i40e_driver_version *dv, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_driver_version *cmd = (struct i40e_aqc_driver_version *)&desc.params.raw; - i40e_status status; + int status; u16 len; if (dv == NULL) @@ -2488,9 +2343,9 @@ i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw, * * Side effect: LinkStatusEvent reporting becomes enabled **/ -i40e_status i40e_get_link_status(struct i40e_hw 
*hw, bool *link_up) +int i40e_get_link_status(struct i40e_hw *hw, bool *link_up) { - i40e_status status = 0; + int status = 0; if (hw->phy.get_link_info) { status = i40e_update_link_info(hw); @@ -2509,10 +2364,10 @@ i40e_status i40e_get_link_status(struct i40e_hw *hw, bool *link_up) * i40e_update_link_info - update status of the HW network link * @hw: pointer to the hw struct **/ -noinline_for_stack i40e_status i40e_update_link_info(struct i40e_hw *hw) +noinline_for_stack int i40e_update_link_info(struct i40e_hw *hw) { struct i40e_aq_get_phy_abilities_resp abilities; - i40e_status status = 0; + int status = 0; status = i40e_aq_get_link_info(hw, true, NULL, NULL); if (status) @@ -2559,19 +2414,19 @@ noinline_for_stack i40e_status i40e_update_link_info(struct i40e_hw *hw) * This asks the FW to add a VEB between the uplink and downlink * elements. If the uplink SEID is 0, this will be a floating VEB. **/ -i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, - u16 downlink_seid, u8 enabled_tc, - bool default_port, u16 *veb_seid, - bool enable_stats, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, + u16 downlink_seid, u8 enabled_tc, + bool default_port, u16 *veb_seid, + bool enable_stats, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_veb *cmd = (struct i40e_aqc_add_veb *)&desc.params.raw; struct i40e_aqc_add_veb_completion *resp = (struct i40e_aqc_add_veb_completion *)&desc.params.raw; - i40e_status status; u16 veb_flags = 0; + int status; /* SEIDs need to either both be set or both be 0 for floating VEB */ if (!!uplink_seid != !!downlink_seid) @@ -2617,17 +2472,17 @@ i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, * This retrieves the parameters for a particular VEB, specified by * uplink_seid, and returns them to the caller. **/ -i40e_status i40e_aq_get_veb_parameters(struct i40e_hw *hw, - u16 veb_seid, u16 *switch_id, - bool *floating, u16 *statistic_index, - u16 *vebs_used, u16 *vebs_free, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_veb_parameters(struct i40e_hw *hw, + u16 veb_seid, u16 *switch_id, + bool *floating, u16 *statistic_index, + u16 *vebs_used, u16 *vebs_free, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_get_veb_parameters_completion *cmd_resp = (struct i40e_aqc_get_veb_parameters_completion *) &desc.params.raw; - i40e_status status; + int status; if (veb_seid == 0) return I40E_ERR_PARAM; @@ -2711,7 +2566,7 @@ i40e_prepare_add_macvlan(struct i40e_aqc_add_macvlan_element_data *mv_list, * * Add MAC/VLAN addresses to the HW filtering **/ -i40e_status +int i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid, struct i40e_aqc_add_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details) @@ -2743,7 +2598,7 @@ i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid, * It also calls _v2 versions of asq_send_command functions to * get the aq_status on the stack. 
**/ -i40e_status +int i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid, struct i40e_aqc_add_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details, @@ -2771,15 +2626,16 @@ i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid, * * Remove MAC/VLAN addresses from the HW filtering **/ -i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_remove_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_remove_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_macvlan *cmd = (struct i40e_aqc_macvlan *)&desc.params.raw; - i40e_status status; u16 buf_size; + int status; if (count == 0 || !mv_list || !hw) return I40E_ERR_PARAM; @@ -2818,7 +2674,7 @@ i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid, * It also calls _v2 versions of asq_send_command functions to * get the aq_status on the stack. **/ -i40e_status +int i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid, struct i40e_aqc_remove_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details, @@ -2866,19 +2722,19 @@ i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid, * Add/Delete a mirror rule to a specific switch. Mirror rules are supported for * VEBs/VEPA elements only **/ -static i40e_status i40e_mirrorrule_op(struct i40e_hw *hw, - u16 opcode, u16 sw_seid, u16 rule_type, u16 id, - u16 count, __le16 *mr_list, - struct i40e_asq_cmd_details *cmd_details, - u16 *rule_id, u16 *rules_used, u16 *rules_free) +static int i40e_mirrorrule_op(struct i40e_hw *hw, + u16 opcode, u16 sw_seid, u16 rule_type, u16 id, + u16 count, __le16 *mr_list, + struct i40e_asq_cmd_details *cmd_details, + u16 *rule_id, u16 *rules_used, u16 *rules_free) { struct i40e_aq_desc desc; struct i40e_aqc_add_delete_mirror_rule *cmd = (struct i40e_aqc_add_delete_mirror_rule *)&desc.params.raw; struct i40e_aqc_add_delete_mirror_rule_completion *resp = (struct i40e_aqc_add_delete_mirror_rule_completion *)&desc.params.raw; - i40e_status status; u16 buf_size; + int status; buf_size = count * sizeof(*mr_list); @@ -2926,10 +2782,11 @@ static i40e_status i40e_mirrorrule_op(struct i40e_hw *hw, * * Add mirror rule. Mirror rules are supported for VEBs or VEPA elements only **/ -i40e_status i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid, - u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list, - struct i40e_asq_cmd_details *cmd_details, - u16 *rule_id, u16 *rules_used, u16 *rules_free) +int i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid, + u16 rule_type, u16 dest_vsi, u16 count, + __le16 *mr_list, + struct i40e_asq_cmd_details *cmd_details, + u16 *rule_id, u16 *rules_used, u16 *rules_free) { if (!(rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS || rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS)) { @@ -2957,10 +2814,11 @@ i40e_status i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid, * * Delete a mirror rule. 
Mirror rules are supported for VEBs/VEPA elements only **/ -i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid, - u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list, - struct i40e_asq_cmd_details *cmd_details, - u16 *rules_used, u16 *rules_free) +int i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid, + u16 rule_type, u16 rule_id, u16 count, + __le16 *mr_list, + struct i40e_asq_cmd_details *cmd_details, + u16 *rules_used, u16 *rules_free) { /* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */ if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) { @@ -2989,14 +2847,14 @@ i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid, * * send msg to vf **/ -i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, - u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, + u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_pf_vf_message *cmd = (struct i40e_aqc_pf_vf_message *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_vf); cmd->id = cpu_to_le32(vfid); @@ -3024,14 +2882,14 @@ i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, * * Read the register using the admin queue commands **/ -i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw, +int i40e_aq_debug_read_register(struct i40e_hw *hw, u32 reg_addr, u64 *reg_val, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_debug_reg_read_write *cmd_resp = (struct i40e_aqc_debug_reg_read_write *)&desc.params.raw; - i40e_status status; + int status; if (reg_val == NULL) return I40E_ERR_PARAM; @@ -3059,14 +2917,14 @@ i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw, * * Write to a register using the admin queue commands **/ -i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw, - u32 reg_addr, u64 reg_val, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_debug_write_register(struct i40e_hw *hw, + u32 reg_addr, u64 reg_val, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_debug_reg_read_write *cmd = (struct i40e_aqc_debug_reg_read_write *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_write_reg); @@ -3090,16 +2948,16 @@ i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw, * * requests common resource using the admin queue commands **/ -i40e_status i40e_aq_request_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - enum i40e_aq_resource_access_type access, - u8 sdp_number, u64 *timeout, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_request_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + enum i40e_aq_resource_access_type access, + u8 sdp_number, u64 *timeout, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_request_resource *cmd_resp = (struct i40e_aqc_request_resource *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_request_resource); @@ -3129,15 +2987,15 @@ i40e_status i40e_aq_request_resource(struct i40e_hw *hw, * * release common resource using the admin queue commands **/ -i40e_status i40e_aq_release_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - u8 sdp_number, - struct 
i40e_asq_cmd_details *cmd_details) +int i40e_aq_release_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + u8 sdp_number, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_request_resource *cmd = (struct i40e_aqc_request_resource *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_release_resource); @@ -3161,15 +3019,15 @@ i40e_status i40e_aq_release_resource(struct i40e_hw *hw, * * Read the NVM using the admin queue commands **/ -i40e_status i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_nvm_update *cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw; - i40e_status status; + int status; /* In offset the highest byte must be zeroed. */ if (offset & 0xFF000000) { @@ -3207,14 +3065,14 @@ i40e_aq_read_nvm_exit: * * Erase the NVM sector using the admin queue commands **/ -i40e_status i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, bool last_command, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, bool last_command, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_nvm_update *cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw; - i40e_status status; + int status; /* In offset the highest byte must be zeroed. */ if (offset & 0xFF000000) { @@ -3255,8 +3113,8 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff, u32 number, logical_id, phys_id; struct i40e_hw_capabilities *p; u16 id, ocp_cfg_word0; - i40e_status status; u8 major_rev; + int status; u32 i = 0; cap = (struct i40e_aqc_list_capabilities_element_resp *) buff; @@ -3497,14 +3355,14 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff, * * Get the device capabilities descriptions from the firmware **/ -i40e_status i40e_aq_discover_capabilities(struct i40e_hw *hw, - void *buff, u16 buff_size, u16 *data_size, - enum i40e_admin_queue_opc list_type_opc, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_discover_capabilities(struct i40e_hw *hw, + void *buff, u16 buff_size, u16 *data_size, + enum i40e_admin_queue_opc list_type_opc, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aqc_list_capabilites *cmd; struct i40e_aq_desc desc; - i40e_status status = 0; + int status = 0; cmd = (struct i40e_aqc_list_capabilites *)&desc.params.raw; @@ -3546,15 +3404,15 @@ exit: * * Update the NVM using the admin queue commands **/ -i40e_status i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, u8 preservation_flags, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, u8 preservation_flags, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_nvm_update *cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw; - i40e_status status; + int status; /* In offset the highest byte must be zeroed. 
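
The NVM helpers in these hunks (i40e_aq_read_nvm, i40e_aq_erase_nvm, and i40e_aq_update_nvm further down) share the guard the comment above describes: the AQ descriptor only carries the low 24 bits of the offset, so any offset with a bit set in the top byte fails with I40E_ERR_PARAM before a command is built. A standalone restatement of the check (illustrative sketch only; the 24-bit reading is inferred from the 0xFF000000 mask):

#include <stdbool.h>
#include <stdint.h>

/* true if offset fits in the 24 bits the NVM AQ commands can encode */
static bool nvm_offset_ok(uint32_t offset)
{
	return (offset & 0xFF000000u) == 0;
}
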
*/ if (offset & 0xFF000000) { @@ -3599,13 +3457,13 @@ i40e_aq_update_nvm_exit: * * Rearrange NVM structure, available only for transition FW **/ -i40e_status i40e_aq_rearrange_nvm(struct i40e_hw *hw, - u8 rearrange_nvm, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_rearrange_nvm(struct i40e_hw *hw, + u8 rearrange_nvm, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aqc_nvm_update *cmd; - i40e_status status; struct i40e_aq_desc desc; + int status; cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw; @@ -3639,17 +3497,17 @@ i40e_aq_rearrange_nvm_exit: * * Requests the complete LLDP MIB (entire packet). **/ -i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, - u8 mib_type, void *buff, u16 buff_size, - u16 *local_len, u16 *remote_len, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, + u8 mib_type, void *buff, u16 buff_size, + u16 *local_len, u16 *remote_len, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_lldp_get_mib *cmd = (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; struct i40e_aqc_lldp_get_mib *resp = (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; - i40e_status status; + int status; if (buff_size == 0 || !buff) return I40E_ERR_PARAM; @@ -3689,14 +3547,14 @@ i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, * * Set the LLDP MIB. **/ -enum i40e_status_code +int i40e_aq_set_lldp_mib(struct i40e_hw *hw, u8 mib_type, void *buff, u16 buff_size, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aqc_lldp_set_local_mib *cmd; - enum i40e_status_code status; struct i40e_aq_desc desc; + int status; cmd = (struct i40e_aqc_lldp_set_local_mib *)&desc.params.raw; if (buff_size == 0 || !buff) @@ -3728,14 +3586,14 @@ i40e_aq_set_lldp_mib(struct i40e_hw *hw, * Enable or Disable posting of an event on ARQ when LLDP MIB * associated with the interface changes **/ -i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, - bool enable_update, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, + bool enable_update, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_lldp_update_mib *cmd = (struct i40e_aqc_lldp_update_mib *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_mib); @@ -3757,14 +3615,14 @@ i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, * Restore LLDP Agent factory settings if @restore set to True. In other case * only returns factory setting in AQ response. 
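
As the preceding comment says, i40e_aq_restore_lldp is dual-purpose: with restore true it reapplies the LLDP agent's factory defaults, with restore false it merely reports them in the AQ response. A hedged usage sketch (hypothetical caller; per the hunk below, firmware must advertise I40E_HW_FLAG_FW_LLDP_PERSISTENT or the call fails early):

u8 setting;
int status;

/* query only: restore == false leaves the running agent untouched */
status = i40e_aq_restore_lldp(hw, &setting, false, NULL);
if (status)
	return status;
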
**/ -enum i40e_status_code +int i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_lldp_restore *cmd = (struct i40e_aqc_lldp_restore *)&desc.params.raw; - i40e_status status; + int status; if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_PERSISTENT)) { i40e_debug(hw, I40E_DEBUG_ALL, @@ -3794,14 +3652,14 @@ i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore, * * Stop or Shutdown the embedded LLDP Agent **/ -i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, - bool persist, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, + bool persist, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_lldp_stop *cmd = (struct i40e_aqc_lldp_stop *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_stop); @@ -3829,13 +3687,13 @@ i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, * * Start the embedded LLDP Agent on all ports. **/ -i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_start_lldp(struct i40e_hw *hw, bool persist, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_lldp_start *cmd = (struct i40e_aqc_lldp_start *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_start); @@ -3861,14 +3719,14 @@ i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist, * @dcb_enable: True if DCB configuration needs to be applied * **/ -enum i40e_status_code +int i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_set_dcb_parameters *cmd = (struct i40e_aqc_set_dcb_parameters *)&desc.params.raw; - i40e_status status; + int status; if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE)) return I40E_ERR_DEVICE_NOT_SUPPORTED; @@ -3894,12 +3752,12 @@ i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable, * * Get CEE DCBX mode operational configuration from firmware **/ -i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw, - void *buff, u16 buff_size, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_cee_dcb_config(struct i40e_hw *hw, + void *buff, u16 buff_size, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; - i40e_status status; + int status; if (buff_size == 0 || !buff) return I40E_ERR_PARAM; @@ -3925,17 +3783,17 @@ i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw, * and this function will call cpu_to_le16 to convert from Host byte order to * Little Endian order. 
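
i40e_aq_add_udp_tunnel returns the firmware-chosen filter slot through *filter_index, and i40e_aq_del_udp_tunnel takes that same index to remove it; as the comment above notes, the helper itself performs the cpu_to_le16 conversion on the port. A hedged pairing sketch (hypothetical caller; the protocol_index value is assumed):

u8 filter_index;
int status;

status = i40e_aq_add_udp_tunnel(hw, 4789 /* IANA VXLAN port, host order */,
				protocol_index, &filter_index, NULL);
if (status)
	return status;
/* ... later, tear the same filter down by the index firmware returned */
status = i40e_aq_del_udp_tunnel(hw, filter_index, NULL);
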
**/ -i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw, - u16 udp_port, u8 protocol_index, - u8 *filter_index, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_add_udp_tunnel(struct i40e_hw *hw, + u16 udp_port, u8 protocol_index, + u8 *filter_index, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_udp_tunnel *cmd = (struct i40e_aqc_add_udp_tunnel *)&desc.params.raw; struct i40e_aqc_del_udp_tunnel_completion *resp = (struct i40e_aqc_del_udp_tunnel_completion *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_udp_tunnel); @@ -3956,13 +3814,13 @@ i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw, * @index: filter index * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_remove_udp_tunnel *cmd = (struct i40e_aqc_remove_udp_tunnel *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_del_udp_tunnel); @@ -3981,13 +3839,13 @@ i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, * * This deletes a switch element from the switch. **/ -i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_switch_seid *cmd = (struct i40e_aqc_switch_seid *)&desc.params.raw; - i40e_status status; + int status; if (seid == 0) return I40E_ERR_PARAM; @@ -4011,11 +3869,11 @@ i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, * recomputed and modified. The retval field in the descriptor * will be set to 0 when RPB is modified. 
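
Per the comment this hunk sits in, i40e_aq_dcb_updated is a bare direct command: once new DCB configuration has been written to the device, it nudges firmware to recompute and resize the Rx packet buffer (RPB). A minimal usage sketch (hypothetical call site):

int status;

/* tell firmware the DCB config changed so it re-derives the RPB sizing */
status = i40e_aq_dcb_updated(hw, NULL);
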
**/ -i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_dcb_updated(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated); @@ -4035,15 +3893,15 @@ i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw, * * Generic command handler for Tx scheduler AQ commands **/ -static i40e_status i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid, +static int i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid, void *buff, u16 buff_size, - enum i40e_admin_queue_opc opcode, + enum i40e_admin_queue_opc opcode, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_tx_sched_ind *cmd = (struct i40e_aqc_tx_sched_ind *)&desc.params.raw; - i40e_status status; + int status; bool cmd_param_flag = false; switch (opcode) { @@ -4093,14 +3951,14 @@ static i40e_status i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid, * @max_credit: Max BW limit credits * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, +int i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, u16 seid, u16 credit, u8 max_credit, struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_configure_vsi_bw_limit *cmd = (struct i40e_aqc_configure_vsi_bw_limit *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_configure_vsi_bw_limit); @@ -4121,10 +3979,10 @@ i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, * @bw_data: Buffer holding enabled TCs, relative TC BW limit/credits * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_configure_vsi_tc_bw, @@ -4139,11 +3997,12 @@ i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, * @opcode: Tx scheduler AQ command opcode * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_configure_switching_comp_ets_data *ets_data, - enum i40e_admin_queue_opc opcode, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_config_switch_comp_ets(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_switching_comp_ets_data *ets_data, + enum i40e_admin_queue_opc opcode, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)ets_data, sizeof(*ets_data), opcode, cmd_details); @@ -4156,7 +4015,8 @@ i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw, * @bw_data: Buffer holding enabled TCs, relative/absolute TC BW limit/credits * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw, +int +i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw, u16 seid, struct i40e_aqc_configure_switching_comp_bw_config_data *bw_data, struct i40e_asq_cmd_details *cmd_details) @@ -4173,10 +4033,11 @@ i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw, * @bw_data: Buffer to hold VSI BW 
configuration * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_query_vsi_bw_config, @@ -4190,10 +4051,11 @@ i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, * @bw_data: Buffer to hold VSI BW configuration per TC * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_query_vsi_ets_sla_config, @@ -4207,10 +4069,11 @@ i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, * @bw_data: Buffer to hold switching component's per TC BW config * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_query_switching_comp_ets_config, @@ -4224,10 +4087,11 @@ i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, * @bw_data: Buffer to hold current ETS configuration for the Physical Port * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_port_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_query_port_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_port_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_query_port_ets_config, @@ -4241,10 +4105,11 @@ i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw, * @bw_data: Buffer to hold switching component's BW configuration * @cmd_details: pointer to command details structure or NULL **/ -i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) { return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), i40e_aqc_opc_query_switching_comp_bw_config, @@ -4263,8 +4128,9 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, * Returns 0 if the values passed are valid and within * range else returns an error. 
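
All of the query/config helpers in this stretch are one-line wrappers that funnel into the generic i40e_aq_tx_sched_cmd shown earlier, differing only in buffer type and opcode, so converting the handler's return type carries every wrapper along. The shared shape, condensed from the hunks above (this is the patched code, not a new API):

int
i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, u16 seid,
			    struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
			    struct i40e_asq_cmd_details *cmd_details)
{
	return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data,
				    sizeof(*bw_data),
				    i40e_aqc_opc_query_vsi_bw_config,
				    cmd_details);
}
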
**/ -static i40e_status i40e_validate_filter_settings(struct i40e_hw *hw, - struct i40e_filter_control_settings *settings) +static int +i40e_validate_filter_settings(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings) { u32 fcoe_cntx_size, fcoe_filt_size; u32 fcoe_fmax; @@ -4350,11 +4216,11 @@ static i40e_status i40e_validate_filter_settings(struct i40e_hw *hw, * for a single PF. It is expected that these settings are programmed * at the driver initialization time. **/ -i40e_status i40e_set_filter_control(struct i40e_hw *hw, - struct i40e_filter_control_settings *settings) +int i40e_set_filter_control(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings) { - i40e_status ret = 0; u32 hash_lut_size = 0; + int ret = 0; u32 val; if (!settings) @@ -4424,11 +4290,11 @@ i40e_status i40e_set_filter_control(struct i40e_hw *hw, * In return it will update the total number of perfect filter count in * the stats member. **/ -i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, - u8 *mac_addr, u16 ethtype, u16 flags, - u16 vsi_seid, u16 queue, bool is_add, - struct i40e_control_filter_stats *stats, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, + u8 *mac_addr, u16 ethtype, u16 flags, + u16 vsi_seid, u16 queue, bool is_add, + struct i40e_control_filter_stats *stats, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_add_remove_control_packet_filter *cmd = @@ -4437,7 +4303,7 @@ i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, struct i40e_aqc_add_remove_control_packet_filter_completion *resp = (struct i40e_aqc_add_remove_control_packet_filter_completion *) &desc.params.raw; - i40e_status status; + int status; if (vsi_seid == 0) return I40E_ERR_PARAM; @@ -4483,7 +4349,7 @@ void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw, I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP | I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX; u16 ethtype = I40E_FLOW_CONTROL_ETHTYPE; - i40e_status status; + int status; status = i40e_aq_add_rem_control_packet_filter(hw, NULL, ethtype, flag, seid, 0, true, NULL, @@ -4505,14 +4371,14 @@ void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw, * is not passed then only register at 'reg_addr0' is read. 
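
i40e_aq_alternate_read requires the first out-parameter but treats the second register as optional: with reg_val1 NULL only reg_addr0 is read. That is how i40e_read_bw_from_alt_ram, a few hunks below, fetches both bandwidth words in one AQ round trip; a hedged sketch of that call shape (local names assumed):

u32 max_bw, min_bw;
int status;

status = i40e_aq_alternate_read(hw, max_bw_addr, &max_bw,
				min_bw_addr, &min_bw);
if (status)
	return status;
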
* **/ -static i40e_status i40e_aq_alternate_read(struct i40e_hw *hw, - u32 reg_addr0, u32 *reg_val0, - u32 reg_addr1, u32 *reg_val1) +static int i40e_aq_alternate_read(struct i40e_hw *hw, + u32 reg_addr0, u32 *reg_val0, + u32 reg_addr1, u32 *reg_val1) { struct i40e_aq_desc desc; struct i40e_aqc_alternate_write *cmd_resp = (struct i40e_aqc_alternate_write *)&desc.params.raw; - i40e_status status; + int status; if (!reg_val0) return I40E_ERR_PARAM; @@ -4541,12 +4407,12 @@ static i40e_status i40e_aq_alternate_read(struct i40e_hw *hw, * * Suspend port's Tx traffic **/ -i40e_status i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aqc_tx_sched_ind *cmd; struct i40e_aq_desc desc; - i40e_status status; + int status; cmd = (struct i40e_aqc_tx_sched_ind *)&desc.params.raw; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_suspend_port_tx); @@ -4563,11 +4429,11 @@ i40e_status i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, * * Resume port's Tx traffic **/ -i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_resume_port_tx(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx); @@ -4637,18 +4503,18 @@ void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status) * Dump internal FW/HW data for debug purposes. * **/ -i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id, - u8 table_id, u32 start_index, u16 buff_size, - void *buff, u16 *ret_buff_size, - u8 *ret_next_table, u32 *ret_next_index, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id, + u8 table_id, u32 start_index, u16 buff_size, + void *buff, u16 *ret_buff_size, + u8 *ret_next_table, u32 *ret_next_index, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_debug_dump_internals *cmd = (struct i40e_aqc_debug_dump_internals *)&desc.params.raw; struct i40e_aqc_debug_dump_internals *resp = (struct i40e_aqc_debug_dump_internals *)&desc.params.raw; - i40e_status status; + int status; if (buff_size == 0 || !buff) return I40E_ERR_PARAM; @@ -4689,12 +4555,12 @@ i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id, * * Read bw from the alternate ram for the given pf **/ -i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw, - u32 *max_bw, u32 *min_bw, - bool *min_valid, bool *max_valid) +int i40e_read_bw_from_alt_ram(struct i40e_hw *hw, + u32 *max_bw, u32 *min_bw, + bool *min_valid, bool *max_valid) { - i40e_status status; u32 max_bw_addr, min_bw_addr; + int status; /* Calculate the address of the min/max bw registers */ max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET + @@ -4729,13 +4595,14 @@ i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw, * * Configure partitions guaranteed/max bw **/ -i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw, - struct i40e_aqc_configure_partition_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) +int +i40e_aq_configure_partition_bw(struct i40e_hw *hw, + struct i40e_aqc_configure_partition_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) { - i40e_status status; - struct i40e_aq_desc desc; u16 bwd_size = sizeof(*bw_data); + struct i40e_aq_desc desc; + int status; 
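
/*
 * Illustrative aside (an annotation, not part of the patch itself):
 * the hunk above also shows the recurring declaration reshuffle --
 * once the status variable becomes a plain "int status", it sinks
 * below the longer declarations (bwd_size, then desc, then status
 * here) so the netdev reverse-Christmas-tree ordering is preserved.
 * Nearly every conversion hunk in this file repeats that reorder.
 */
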
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_configure_partition_bw); @@ -4764,11 +4631,11 @@ i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw, * * Reads specified PHY register value **/ -i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw, - u16 reg, u8 phy_addr, u16 *value) +int i40e_read_phy_register_clause22(struct i40e_hw *hw, + u16 reg, u8 phy_addr, u16 *value) { - i40e_status status = I40E_ERR_TIMEOUT; u8 port_num = (u8)hw->func_caps.mdio_port_num; + int status = I40E_ERR_TIMEOUT; u32 command = 0; u16 retry = 1000; @@ -4809,11 +4676,11 @@ i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw, * * Writes specified PHY register value **/ -i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw, - u16 reg, u8 phy_addr, u16 value) +int i40e_write_phy_register_clause22(struct i40e_hw *hw, + u16 reg, u8 phy_addr, u16 value) { - i40e_status status = I40E_ERR_TIMEOUT; u8 port_num = (u8)hw->func_caps.mdio_port_num; + int status = I40E_ERR_TIMEOUT; u32 command = 0; u16 retry = 1000; @@ -4850,13 +4717,13 @@ i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw, * * Reads specified PHY register value **/ -i40e_status i40e_read_phy_register_clause45(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 *value) +int i40e_read_phy_register_clause45(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 *value) { - i40e_status status = I40E_ERR_TIMEOUT; + u8 port_num = hw->func_caps.mdio_port_num; + int status = I40E_ERR_TIMEOUT; u32 command = 0; u16 retry = 1000; - u8 port_num = hw->func_caps.mdio_port_num; command = (reg << I40E_GLGEN_MSCA_MDIADD_SHIFT) | (page << I40E_GLGEN_MSCA_DEVADD_SHIFT) | @@ -4924,13 +4791,13 @@ phy_read_end: * * Writes value to specified PHY register **/ -i40e_status i40e_write_phy_register_clause45(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 value) +int i40e_write_phy_register_clause45(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 value) { - i40e_status status = I40E_ERR_TIMEOUT; - u32 command = 0; - u16 retry = 1000; u8 port_num = hw->func_caps.mdio_port_num; + int status = I40E_ERR_TIMEOUT; + u16 retry = 1000; + u32 command = 0; command = (reg << I40E_GLGEN_MSCA_MDIADD_SHIFT) | (page << I40E_GLGEN_MSCA_DEVADD_SHIFT) | @@ -4991,10 +4858,10 @@ phy_write_end: * * Writes value to specified PHY register **/ -i40e_status i40e_write_phy_register(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 value) +int i40e_write_phy_register(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 value) { - i40e_status status; + int status; switch (hw->device_id) { case I40E_DEV_ID_1G_BASE_T_X722: @@ -5030,10 +4897,10 @@ i40e_status i40e_write_phy_register(struct i40e_hw *hw, * * Reads specified PHY register value **/ -i40e_status i40e_read_phy_register(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 *value) +int i40e_read_phy_register(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 *value) { - i40e_status status; + int status; switch (hw->device_id) { case I40E_DEV_ID_1G_BASE_T_X722: @@ -5082,17 +4949,17 @@ u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num) * * Blinks PHY link LED **/ -i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw, - u32 time, u32 interval) +int i40e_blink_phy_link_led(struct i40e_hw *hw, + u32 time, u32 interval) { - i40e_status status = 0; - u32 i; - u16 led_ctl; - u16 gpio_led_port; - u16 led_reg; u16 led_addr = I40E_PHY_LED_PROV_REG_1; + u16 gpio_led_port; u8 phy_addr = 0; + int status = 0; + u16 led_ctl; u8 port_num; + u16 led_reg; + u32 
i; i = rd32(hw, I40E_PFGEN_PORTNUM); port_num = (u8)(i & I40E_PFGEN_PORTNUM_PORT_NUM_MASK); @@ -5154,12 +5021,12 @@ phy_blinking_end: * @led_addr: LED register address * @reg_val: read register value **/ -static enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr, - u32 *reg_val) +static int i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr, + u32 *reg_val) { - enum i40e_status_code status; u8 phy_addr = 0; u8 port_num; + int status; u32 i; *reg_val = 0; @@ -5188,12 +5055,12 @@ static enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr, * @led_addr: LED register address * @reg_val: register value to write **/ -static enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr, - u32 reg_val) +static int i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr, + u32 reg_val) { - enum i40e_status_code status; u8 phy_addr = 0; u8 port_num; + int status; u32 i; if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) { @@ -5223,17 +5090,17 @@ static enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr, * @val: original value of register to use * **/ -i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr, - u16 *val) +int i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr, + u16 *val) { - i40e_status status = 0; u16 gpio_led_port; u8 phy_addr = 0; - u16 reg_val; + u32 reg_val_aq; + int status = 0; u16 temp_addr; + u16 reg_val; u8 port_num; u32 i; - u32 reg_val_aq; if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) { status = @@ -5278,12 +5145,12 @@ i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr, * Set led's on or off when controlled by the PHY * **/ -i40e_status i40e_led_set_phy(struct i40e_hw *hw, bool on, - u16 led_addr, u32 mode) +int i40e_led_set_phy(struct i40e_hw *hw, bool on, + u16 led_addr, u32 mode) { - i40e_status status = 0; u32 led_ctl = 0; u32 led_reg = 0; + int status = 0; status = i40e_led_get_reg(hw, led_addr, &led_reg); if (status) @@ -5327,14 +5194,14 @@ restore_config: * Use the firmware to read the Rx control register, * especially useful if the Rx unit is under heavy pressure **/ -i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw, - u32 reg_addr, u32 *reg_val, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_rx_ctl_read_register(struct i40e_hw *hw, + u32 reg_addr, u32 *reg_val, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_rx_ctl_reg_read_write *cmd_resp = (struct i40e_aqc_rx_ctl_reg_read_write *)&desc.params.raw; - i40e_status status; + int status; if (!reg_val) return I40E_ERR_PARAM; @@ -5358,8 +5225,8 @@ i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw, **/ u32 i40e_read_rx_ctl(struct i40e_hw *hw, u32 reg_addr) { - i40e_status status = 0; bool use_register; + int status = 0; int retry = 5; u32 val = 0; @@ -5393,14 +5260,14 @@ do_retry: * Use the firmware to write to an Rx control register, * especially useful if the Rx unit is under heavy pressure **/ -i40e_status i40e_aq_rx_ctl_write_register(struct i40e_hw *hw, - u32 reg_addr, u32 reg_val, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_rx_ctl_write_register(struct i40e_hw *hw, + u32 reg_addr, u32 reg_val, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_rx_ctl_reg_read_write *cmd = (struct i40e_aqc_rx_ctl_reg_read_write *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_rx_ctl_reg_write); @@ -5420,8 +5287,8 @@ i40e_status i40e_aq_rx_ctl_write_register(struct 
i40e_hw *hw, **/ void i40e_write_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val) { - i40e_status status = 0; bool use_register; + int status = 0; int retry = 5; use_register = (((hw->aq.api_maj_ver == 1) && @@ -5483,16 +5350,16 @@ static void i40e_mdio_if_number_selection(struct i40e_hw *hw, bool set_mdio, * NOTE: In common cases MDIO I/F number should not be changed, thats why you * may use simple wrapper i40e_aq_set_phy_register. **/ -enum i40e_status_code i40e_aq_set_phy_register_ext(struct i40e_hw *hw, - u8 phy_select, u8 dev_addr, bool page_change, - bool set_mdio, u8 mdio_num, - u32 reg_addr, u32 reg_val, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_set_phy_register_ext(struct i40e_hw *hw, + u8 phy_select, u8 dev_addr, bool page_change, + bool set_mdio, u8 mdio_num, + u32 reg_addr, u32 reg_val, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_phy_register_access *cmd = (struct i40e_aqc_phy_register_access *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_phy_register); @@ -5528,16 +5395,16 @@ enum i40e_status_code i40e_aq_set_phy_register_ext(struct i40e_hw *hw, * NOTE: In common cases MDIO I/F number should not be changed, thats why you * may use simple wrapper i40e_aq_get_phy_register. **/ -enum i40e_status_code i40e_aq_get_phy_register_ext(struct i40e_hw *hw, - u8 phy_select, u8 dev_addr, bool page_change, - bool set_mdio, u8 mdio_num, - u32 reg_addr, u32 *reg_val, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_phy_register_ext(struct i40e_hw *hw, + u8 phy_select, u8 dev_addr, bool page_change, + bool set_mdio, u8 mdio_num, + u32 reg_addr, u32 *reg_val, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_phy_register_access *cmd = (struct i40e_aqc_phy_register_access *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_phy_register); @@ -5568,18 +5435,17 @@ enum i40e_status_code i40e_aq_get_phy_register_ext(struct i40e_hw *hw, * @error_info: returns error information * @cmd_details: pointer to command details structure or NULL **/ -enum -i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff, - u16 buff_size, u32 track_id, - u32 *error_offset, u32 *error_info, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_write_ddp(struct i40e_hw *hw, void *buff, + u16 buff_size, u32 track_id, + u32 *error_offset, u32 *error_info, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_write_personalization_profile *cmd = (struct i40e_aqc_write_personalization_profile *) &desc.params.raw; struct i40e_aqc_write_ddp_resp *resp; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_write_personalization_profile); @@ -5612,15 +5478,14 @@ i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff, * @flags: AdminQ command flags * @cmd_details: pointer to command details structure or NULL **/ -enum -i40e_status_code i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff, - u16 buff_size, u8 flags, - struct i40e_asq_cmd_details *cmd_details) +int i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff, + u16 buff_size, u8 flags, + struct i40e_asq_cmd_details *cmd_details) { struct i40e_aq_desc desc; struct i40e_aqc_get_applied_profiles *cmd = (struct i40e_aqc_get_applied_profiles *)&desc.params.raw; - i40e_status status; + int status; i40e_fill_default_direct_cmd_desc(&desc, 
i40e_aqc_opc_get_personalization_profile_list); @@ -5719,14 +5584,13 @@ i40e_find_section_in_profile(u32 section_type, * @hw: pointer to the hw struct * @aq: command buffer containing all data to execute AQ **/ -static enum -i40e_status_code i40e_ddp_exec_aq_section(struct i40e_hw *hw, - struct i40e_profile_aq_section *aq) +static int i40e_ddp_exec_aq_section(struct i40e_hw *hw, + struct i40e_profile_aq_section *aq) { - i40e_status status; struct i40e_aq_desc desc; u8 *msg = NULL; u16 msglen; + int status; i40e_fill_default_direct_cmd_desc(&desc, aq->opcode); desc.flags |= cpu_to_le16(aq->flags); @@ -5766,14 +5630,14 @@ i40e_status_code i40e_ddp_exec_aq_section(struct i40e_hw *hw, * * Validates supported devices and profile's sections. */ -static enum i40e_status_code +static int i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, u32 track_id, bool rollback) { struct i40e_profile_section_header *sec = NULL; - i40e_status status = 0; struct i40e_section_table *sec_tbl; u32 vendor_dev_id; + int status = 0; u32 dev_cnt; u32 sec_off; u32 i; @@ -5831,16 +5695,16 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, * * Handles the download of a complete package. */ -enum i40e_status_code +int i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, u32 track_id) { - i40e_status status = 0; - struct i40e_section_table *sec_tbl; struct i40e_profile_section_header *sec = NULL; struct i40e_profile_aq_section *ddp_aq; - u32 section_size = 0; + struct i40e_section_table *sec_tbl; u32 offset = 0, info = 0; + u32 section_size = 0; + int status = 0; u32 sec_off; u32 i; @@ -5894,15 +5758,15 @@ i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, * * Rolls back previously loaded package. */ -enum i40e_status_code +int i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, u32 track_id) { struct i40e_profile_section_header *sec = NULL; - i40e_status status = 0; struct i40e_section_table *sec_tbl; u32 offset = 0, info = 0; u32 section_size = 0; + int status = 0; u32 sec_off; int i; @@ -5946,15 +5810,15 @@ i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile, * * Register a profile to the list of loaded profiles. */ -enum i40e_status_code +int i40e_add_pinfo_to_list(struct i40e_hw *hw, struct i40e_profile_segment *profile, u8 *profile_info_sec, u32 track_id) { - i40e_status status = 0; struct i40e_profile_section_header *sec = NULL; struct i40e_profile_info *pinfo; u32 offset = 0, info = 0; + int status = 0; sec = (struct i40e_profile_section_header *)profile_info_sec; sec->tbl_size = 1; @@ -5988,7 +5852,7 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw, * of the function. * **/ -enum i40e_status_code +int i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid, struct i40e_aqc_cloud_filters_element_data *filters, u8 filter_count) @@ -5996,8 +5860,8 @@ i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid, struct i40e_aq_desc desc; struct i40e_aqc_add_remove_cloud_filters *cmd = (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - enum i40e_status_code status; u16 buff_len; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_cloud_filters); @@ -6025,7 +5889,7 @@ i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid, * function. 
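
The DDP helpers converted here compose into a load sequence: i40e_write_profile walks the section table executing each embedded AQ section, i40e_rollback_profile plays the rollback segment if that fails, and i40e_add_pinfo_to_list records the profile so later loads can detect it. A hedged flow sketch (hw, profile, rollback, profile_info_sec, and track_id are all assumed to come from a parsed, validated package):

int status;

status = i40e_write_profile(hw, profile, track_id);
if (status) {
	/* undo whatever portion of the package already landed */
	i40e_rollback_profile(hw, rollback, track_id);
	return status;
}
/* register it so the next load can see the profile is present */
status = i40e_add_pinfo_to_list(hw, profile, profile_info_sec, track_id);
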
* **/ -enum i40e_status_code +int i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct i40e_aqc_cloud_filters_element_bb *filters, u8 filter_count) @@ -6033,8 +5897,8 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct i40e_aq_desc desc; struct i40e_aqc_add_remove_cloud_filters *cmd = (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - i40e_status status; u16 buff_len; + int status; int i; i40e_fill_default_direct_cmd_desc(&desc, @@ -6082,7 +5946,7 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid, * of the function. * **/ -enum i40e_status_code +int i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid, struct i40e_aqc_cloud_filters_element_data *filters, u8 filter_count) @@ -6090,8 +5954,8 @@ i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid, struct i40e_aq_desc desc; struct i40e_aqc_add_remove_cloud_filters *cmd = (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - enum i40e_status_code status; u16 buff_len; + int status; i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_cloud_filters); @@ -6119,7 +5983,7 @@ i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid, * function. * **/ -enum i40e_status_code +int i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct i40e_aqc_cloud_filters_element_bb *filters, u8 filter_count) @@ -6127,8 +5991,8 @@ i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct i40e_aq_desc desc; struct i40e_aqc_add_remove_cloud_filters *cmd = (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - i40e_status status; u16 buff_len; + int status; int i; i40e_fill_default_direct_cmd_desc(&desc, diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c index 673f341f4c0c..90638b67f8dc 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c @@ -12,7 +12,7 @@ * * Get the DCBX status from the Firmware **/ -i40e_status i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status) +int i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status) { u32 reg; @@ -497,15 +497,15 @@ static void i40e_parse_org_tlv(struct i40e_lldp_org_tlv *tlv, * * Parse DCB configuration from the LLDPDU **/ -i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib, - struct i40e_dcbx_config *dcbcfg) +int i40e_lldp_to_dcb_config(u8 *lldpmib, + struct i40e_dcbx_config *dcbcfg) { - i40e_status ret = 0; struct i40e_lldp_org_tlv *tlv; - u16 type; - u16 length; u16 typelength; u16 offset = 0; + int ret = 0; + u16 length; + u16 type; if (!lldpmib || !dcbcfg) return I40E_ERR_PARAM; @@ -551,12 +551,12 @@ i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib, * * Query DCB configuration from the Firmware **/ -i40e_status i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, - u8 bridgetype, - struct i40e_dcbx_config *dcbcfg) +int i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, + u8 bridgetype, + struct i40e_dcbx_config *dcbcfg) { - i40e_status ret = 0; struct i40e_virt_mem mem; + int ret = 0; u8 *lldpmib; /* Allocate the LLDPDU */ @@ -767,9 +767,9 @@ static void i40e_cee_to_dcb_config( * * Get IEEE mode DCB configuration from the Firmware **/ -static i40e_status i40e_get_ieee_dcb_config(struct i40e_hw *hw) +static int i40e_get_ieee_dcb_config(struct i40e_hw *hw) { - i40e_status ret = 0; + int ret = 0; /* IEEE mode */ hw->local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE; @@ -797,11 +797,11 @@ out: * * Get DCB configuration from the Firmware **/ -i40e_status i40e_get_dcb_config(struct i40e_hw *hw) +int 
i40e_get_dcb_config(struct i40e_hw *hw) { - i40e_status ret = 0; - struct i40e_aqc_get_cee_dcb_cfg_resp cee_cfg; struct i40e_aqc_get_cee_dcb_cfg_v1_resp cee_v1_cfg; + struct i40e_aqc_get_cee_dcb_cfg_resp cee_cfg; + int ret = 0; /* If Firmware version < v4.33 on X710/XL710, IEEE only */ if ((hw->mac.type == I40E_MAC_XL710) && @@ -867,11 +867,11 @@ out: * * Update DCB configuration from the Firmware **/ -i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change) +int i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change) { - i40e_status ret = 0; struct i40e_lldp_variables lldp_cfg; u8 adminstatus = 0; + int ret = 0; if (!hw->func_caps.dcb) return I40E_NOT_SUPPORTED; @@ -940,13 +940,13 @@ i40e_status i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change) * Get status of FW Link Layer Discovery Protocol (LLDP) Agent. * Status of agent is reported via @lldp_status parameter. **/ -enum i40e_status_code +int i40e_get_fw_lldp_status(struct i40e_hw *hw, enum i40e_get_fw_lldp_status_resp *lldp_status) { struct i40e_virt_mem mem; - i40e_status ret; u8 *lldpmib; + int ret; if (!lldp_status) return I40E_ERR_PARAM; @@ -1238,13 +1238,13 @@ static void i40e_add_dcb_tlv(struct i40e_lldp_org_tlv *tlv, * * Set DCB configuration to the Firmware **/ -i40e_status i40e_set_dcb_config(struct i40e_hw *hw) +int i40e_set_dcb_config(struct i40e_hw *hw) { struct i40e_dcbx_config *dcbcfg; struct i40e_virt_mem mem; u8 mib_type, *lldpmib; - i40e_status ret; u16 miblen; + int ret; /* update the hw local config */ dcbcfg = &hw->local_dcbx_config; @@ -1274,8 +1274,8 @@ i40e_status i40e_set_dcb_config(struct i40e_hw *hw) * * send DCB configuration to FW **/ -i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, - struct i40e_dcbx_config *dcbcfg) +int i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, + struct i40e_dcbx_config *dcbcfg) { u16 length, offset = 0, tlvid, typelength; struct i40e_lldp_org_tlv *tlv; @@ -1888,13 +1888,13 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, * * Reads the LLDP configuration data from NVM using passed addresses **/ -static i40e_status _i40e_read_lldp_cfg(struct i40e_hw *hw, - struct i40e_lldp_variables *lldp_cfg, - u8 module, u32 word_offset) +static int _i40e_read_lldp_cfg(struct i40e_hw *hw, + struct i40e_lldp_variables *lldp_cfg, + u8 module, u32 word_offset) { u32 address, offset = (2 * word_offset); - i40e_status ret; __le16 raw_mem; + int ret; u16 mem; ret = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); @@ -1950,10 +1950,10 @@ err_lldp_cfg: * * Reads the LLDP configuration data from NVM **/ -i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw, - struct i40e_lldp_variables *lldp_cfg) +int i40e_read_lldp_cfg(struct i40e_hw *hw, + struct i40e_lldp_variables *lldp_cfg) { - i40e_status ret = 0; + int ret = 0; u32 mem; if (!lldp_cfg) diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.h b/drivers/net/ethernet/intel/i40e/i40e_dcb.h index 2370ceecb061..6b60dc9b7736 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.h +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.h @@ -264,20 +264,20 @@ void i40e_dcb_hw_calculate_pool_sizes(struct i40e_hw *hw, void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, struct i40e_rx_pb_config *old_pb_cfg, struct i40e_rx_pb_config *new_pb_cfg); -i40e_status i40e_get_dcbx_status(struct i40e_hw *hw, - u16 *status); -i40e_status i40e_lldp_to_dcb_config(u8 *lldpmib, - struct i40e_dcbx_config *dcbcfg); -i40e_status i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, - u8 bridgetype, - struct i40e_dcbx_config *dcbcfg); -i40e_status 
i40e_get_dcb_config(struct i40e_hw *hw); -i40e_status i40e_init_dcb(struct i40e_hw *hw, - bool enable_mib_change); -enum i40e_status_code +int i40e_get_dcbx_status(struct i40e_hw *hw, + u16 *status); +int i40e_lldp_to_dcb_config(u8 *lldpmib, + struct i40e_dcbx_config *dcbcfg); +int i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, + u8 bridgetype, + struct i40e_dcbx_config *dcbcfg); +int i40e_get_dcb_config(struct i40e_hw *hw); +int i40e_init_dcb(struct i40e_hw *hw, + bool enable_mib_change); +int i40e_get_fw_lldp_status(struct i40e_hw *hw, enum i40e_get_fw_lldp_status_resp *lldp_status); -i40e_status i40e_set_dcb_config(struct i40e_hw *hw); -i40e_status i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, - struct i40e_dcbx_config *dcbcfg); +int i40e_set_dcb_config(struct i40e_hw *hw); +int i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, + struct i40e_dcbx_config *dcbcfg); #endif /* _I40E_DCB_H_ */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c index e32c61909b31..195421d863ab 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c @@ -135,8 +135,8 @@ static int i40e_dcbnl_ieee_setets(struct net_device *netdev, ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); if (ret) { dev_info(&pf->pdev->dev, - "Failed setting DCB ETS configuration err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), + "Failed setting DCB ETS configuration err %pe aq_err %s\n", + ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); return -EINVAL; } @@ -174,8 +174,8 @@ static int i40e_dcbnl_ieee_setpfc(struct net_device *netdev, ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); if (ret) { dev_info(&pf->pdev->dev, - "Failed setting DCB PFC configuration err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), + "Failed setting DCB PFC configuration err %pe aq_err %s\n", + ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); return -EINVAL; } @@ -225,8 +225,8 @@ static int i40e_dcbnl_ieee_setapp(struct net_device *netdev, ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); if (ret) { dev_info(&pf->pdev->dev, - "Failed setting DCB configuration err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), + "Failed setting DCB configuration err %pe aq_err %s\n", + ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); return -EINVAL; } @@ -290,8 +290,8 @@ static int i40e_dcbnl_ieee_delapp(struct net_device *netdev, ret = i40e_hw_dcb_config(pf, &pf->tmp_cfg); if (ret) { dev_info(&pf->pdev->dev, - "Failed setting DCB configuration err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), + "Failed setting DCB configuration err %pe aq_err %s\n", + ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); return -EINVAL; } diff --git a/drivers/net/ethernet/intel/i40e/i40e_ddp.c b/drivers/net/ethernet/intel/i40e/i40e_ddp.c index e1069ae658ad..7e8183762fd9 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ddp.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ddp.c @@ -36,7 +36,7 @@ static int i40e_ddp_does_profile_exist(struct i40e_hw *hw, { struct i40e_ddp_profile_list *profile_list; u8 buff[I40E_PROFILE_LIST_SIZE]; - i40e_status status; + int status; int i; status = i40e_aq_get_ddp_list(hw, buff, I40E_PROFILE_LIST_SIZE, 0, @@ -91,7 +91,7 @@ static int i40e_ddp_does_profile_overlap(struct i40e_hw *hw, { struct i40e_ddp_profile_list *profile_list; u8 buff[I40E_PROFILE_LIST_SIZE]; - i40e_status status; + int status; int i; status = i40e_aq_get_ddp_list(hw, buff, I40E_PROFILE_LIST_SIZE, 0, @@ -117,14 +117,14 @@ static int 
i40e_ddp_does_profile_overlap(struct i40e_hw *hw, * * Register a profile to the list of loaded profiles. */ -static enum i40e_status_code +static int i40e_add_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile, u8 *profile_info_sec, u32 track_id) { struct i40e_profile_section_header *sec; struct i40e_profile_info *pinfo; - i40e_status status; u32 offset = 0, info = 0; + int status; sec = (struct i40e_profile_section_header *)profile_info_sec; sec->tbl_size = 1; @@ -157,14 +157,14 @@ i40e_add_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile, * * Removes DDP profile from the NIC. **/ -static enum i40e_status_code +static int i40e_del_pinfo(struct i40e_hw *hw, struct i40e_profile_segment *profile, u8 *profile_info_sec, u32 track_id) { struct i40e_profile_section_header *sec; struct i40e_profile_info *pinfo; - i40e_status status; u32 offset = 0, info = 0; + int status; sec = (struct i40e_profile_section_header *)profile_info_sec; sec->tbl_size = 1; @@ -270,12 +270,12 @@ int i40e_ddp_load(struct net_device *netdev, const u8 *data, size_t size, struct i40e_profile_segment *profile_hdr; struct i40e_profile_info pinfo; struct i40e_package_header *pkg_hdr; - i40e_status status; struct i40e_netdev_priv *np = netdev_priv(netdev); struct i40e_vsi *vsi = np->vsi; struct i40e_pf *pf = vsi->back; u32 track_id; int istatus; + int status; pkg_hdr = (struct i40e_package_header *)data; if (!i40e_ddp_is_pkg_hdr_valid(netdev, pkg_hdr, size)) diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c index c9dcd6d92c83..9954493cd448 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c +++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c @@ -918,9 +918,9 @@ static ssize_t i40e_dbg_command_write(struct file *filp, dev_info(&pf->pdev->dev, "deleting relay %d\n", veb_seid); i40e_veb_release(pf->veb[i]); } else if (strncmp(cmd_buf, "add pvid", 8) == 0) { - i40e_status ret; - u16 vid; unsigned int v; + int ret; + u16 vid; cnt = sscanf(&cmd_buf[8], "%i %u", &vsi_seid, &v); if (cnt != 2) { @@ -1284,7 +1284,7 @@ static ssize_t i40e_dbg_command_write(struct file *filp, } } else if (strncmp(cmd_buf, "send aq_cmd", 11) == 0) { struct i40e_aq_desc *desc; - i40e_status ret; + int ret; desc = kzalloc(sizeof(struct i40e_aq_desc), GFP_KERNEL); if (!desc) @@ -1330,9 +1330,9 @@ static ssize_t i40e_dbg_command_write(struct file *filp, desc = NULL; } else if (strncmp(cmd_buf, "send indirect aq_cmd", 20) == 0) { struct i40e_aq_desc *desc; - i40e_status ret; u16 buffer_len; u8 *buff; + int ret; desc = kzalloc(sizeof(struct i40e_aq_desc), GFP_KERNEL); if (!desc) diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.c b/drivers/net/ethernet/intel/i40e/i40e_diag.c index ef4d3762bf37..5b3519c6e362 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_diag.c +++ b/drivers/net/ethernet/intel/i40e/i40e_diag.c @@ -10,8 +10,8 @@ * @reg: reg to be tested * @mask: bits to be touched **/ -static i40e_status i40e_diag_reg_pattern_test(struct i40e_hw *hw, - u32 reg, u32 mask) +static int i40e_diag_reg_pattern_test(struct i40e_hw *hw, + u32 reg, u32 mask) { static const u32 patterns[] = { 0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF @@ -74,9 +74,9 @@ struct i40e_diag_reg_test_info i40e_reg_list[] = { * * Perform registers diagnostic test **/ -i40e_status i40e_diag_reg_test(struct i40e_hw *hw) +int i40e_diag_reg_test(struct i40e_hw *hw) { - i40e_status ret_code = 0; + int ret_code = 0; u32 reg, mask; u32 i, j; @@ -114,9 +114,9 @@ i40e_status i40e_diag_reg_test(struct 
i40e_hw *hw) * * Perform EEPROM diagnostic test **/ -i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw) +int i40e_diag_eeprom_test(struct i40e_hw *hw) { - i40e_status ret_code; + int ret_code; u16 reg_val; /* read NVM control word and if NVM valid, validate EEPROM checksum*/ diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.h b/drivers/net/ethernet/intel/i40e/i40e_diag.h index c3340f320a18..e641035c7297 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_diag.h +++ b/drivers/net/ethernet/intel/i40e/i40e_diag.h @@ -22,7 +22,7 @@ struct i40e_diag_reg_test_info { extern struct i40e_diag_reg_test_info i40e_reg_list[]; -i40e_status i40e_diag_reg_test(struct i40e_hw *hw); -i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw); +int i40e_diag_reg_test(struct i40e_hw *hw); +int i40e_diag_eeprom_test(struct i40e_hw *hw); #endif /* _I40E_DIAG_H_ */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index 887a735fe2a7..4934ff58332c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -1226,8 +1226,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev, struct i40e_vsi *vsi = np->vsi; struct i40e_hw *hw = &pf->hw; bool autoneg_changed = false; - i40e_status status = 0; int timeout = 50; + int status = 0; int err = 0; __u32 speed; u8 autoneg; @@ -1455,8 +1455,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev, status = i40e_aq_set_phy_config(hw, &config, NULL); if (status) { netdev_info(netdev, - "Set phy config failed, err %s aq_err %s\n", - i40e_stat_str(hw, status), + "Set phy config failed, err %pe aq_err %s\n", + ERR_PTR(status), i40e_aq_str(hw, hw->aq.asq_last_status)); err = -EAGAIN; goto done; @@ -1465,8 +1465,8 @@ static int i40e_set_link_ksettings(struct net_device *netdev, status = i40e_update_link_info(hw); if (status) netdev_dbg(netdev, - "Updating link info failed with err %s aq_err %s\n", - i40e_stat_str(hw, status), + "Updating link info failed with err %pe aq_err %s\n", + ERR_PTR(status), i40e_aq_str(hw, hw->aq.asq_last_status)); } else { @@ -1485,7 +1485,7 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg) struct i40e_aq_get_phy_abilities_resp abilities; struct i40e_pf *pf = np->vsi->back; struct i40e_hw *hw = &pf->hw; - i40e_status status = 0; + int status = 0; u32 flags = 0; int err = 0; @@ -1517,8 +1517,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg) status = i40e_aq_set_phy_config(hw, &config, NULL); if (status) { netdev_info(netdev, - "Set phy config failed, err %s aq_err %s\n", - i40e_stat_str(hw, status), + "Set phy config failed, err %pe aq_err %s\n", + ERR_PTR(status), i40e_aq_str(hw, hw->aq.asq_last_status)); err = -EAGAIN; goto done; @@ -1531,8 +1531,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg) * (e.g. no physical connection etc.) 
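
Alongside the type change, the ethtool and DCB hunks replace every i40e_stat_str(hw, status) in log messages with the kernel's %pe format, passing ERR_PTR(status): vsprintf prints a symbolic errno name when the value matches one and falls back to the signed number otherwise, which lets the driver-private stringifier go away. The converted call shape as it appears in these hunks:

netdev_info(netdev, "Set phy config failed, err %pe aq_err %s\n",
	    ERR_PTR(status),
	    i40e_aq_str(hw, hw->aq.asq_last_status));
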
 		 */
 		netdev_dbg(netdev,
-			   "Updating link info failed with err %s aq_err %s\n",
-			   i40e_stat_str(hw, status),
+			   "Updating link info failed with err %pe aq_err %s\n",
+			   ERR_PTR(status),
 			   i40e_aq_str(hw, hw->aq.asq_last_status));
 	}
@@ -1547,7 +1547,7 @@ static int i40e_get_fec_param(struct net_device *netdev,
 	struct i40e_aq_get_phy_abilities_resp abilities;
 	struct i40e_pf *pf = np->vsi->back;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status status = 0;
+	int status = 0;
 	int err = 0;
 	u8 fec_cfg;
@@ -1634,12 +1634,12 @@ static int i40e_nway_reset(struct net_device *netdev)
 	struct i40e_pf *pf = np->vsi->back;
 	struct i40e_hw *hw = &pf->hw;
 	bool link_up = hw->phy.link_info.link_info & I40E_AQ_LINK_UP;
-	i40e_status ret = 0;
+	int ret = 0;
 
 	ret = i40e_aq_set_link_restart_an(hw, link_up, NULL);
 	if (ret) {
-		netdev_info(netdev, "link restart failed, err %s aq_err %s\n",
-			    i40e_stat_str(hw, ret),
+		netdev_info(netdev, "link restart failed, err %pe aq_err %s\n",
+			    ERR_PTR(ret),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		return -EIO;
 	}
@@ -1699,9 +1699,9 @@ static int i40e_set_pauseparam(struct net_device *netdev,
 	struct i40e_link_status *hw_link_info = &hw->phy.link_info;
 	struct i40e_dcbx_config *dcbx_cfg = &hw->local_dcbx_config;
 	bool link_up = hw_link_info->link_info & I40E_AQ_LINK_UP;
-	i40e_status status;
 	u8 aq_failures;
 	int err = 0;
+	int status;
 	u32 is_an;
 
 	/* Changing the port's flow control is not supported if this isn't the
@@ -1755,20 +1755,20 @@ static int i40e_set_pauseparam(struct net_device *netdev,
 	status = i40e_set_fc(hw, &aq_failures, link_up);
 
 	if (aq_failures & I40E_SET_FC_AQ_FAIL_GET) {
-		netdev_info(netdev, "Set fc failed on the get_phy_capabilities call with err %s aq_err %s\n",
-			    i40e_stat_str(hw, status),
+		netdev_info(netdev, "Set fc failed on the get_phy_capabilities call with err %pe aq_err %s\n",
+			    ERR_PTR(status),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		err = -EAGAIN;
 	}
 	if (aq_failures & I40E_SET_FC_AQ_FAIL_SET) {
-		netdev_info(netdev, "Set fc failed on the set_phy_config call with err %s aq_err %s\n",
-			    i40e_stat_str(hw, status),
+		netdev_info(netdev, "Set fc failed on the set_phy_config call with err %pe aq_err %s\n",
+			    ERR_PTR(status),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		err = -EAGAIN;
 	}
 	if (aq_failures & I40E_SET_FC_AQ_FAIL_UPDATE) {
-		netdev_info(netdev, "Set fc failed on the get_link_info call with err %s aq_err %s\n",
-			    i40e_stat_str(hw, status),
+		netdev_info(netdev, "Set fc failed on the get_link_info call with err %pe aq_err %s\n",
+			    ERR_PTR(status),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		err = -EAGAIN;
 	}
@@ -2583,8 +2583,8 @@ static u64 i40e_link_test(struct net_device *netdev, u64 *data)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_pf *pf = np->vsi->back;
-	i40e_status status;
 	bool link_up = false;
+	int status;
 
 	netif_info(pf, hw, netdev, "link test\n");
 	status = i40e_get_link_status(&pf->hw, &link_up);
@@ -2807,11 +2807,11 @@ static int i40e_set_phys_id(struct net_device *netdev,
 			    enum ethtool_phys_id_state state)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
-	i40e_status ret = 0;
 	struct i40e_pf *pf = np->vsi->back;
 	struct i40e_hw *hw = &pf->hw;
 	int blink_freq = 2;
 	u16 temp_status;
+	int ret = 0;
 
 	switch (state) {
 	case ETHTOOL_ID_ACTIVE:
@@ -5247,7 +5247,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
 	struct i40e_vsi *vsi = np->vsi;
 	struct i40e_pf *pf = vsi->back;
 	u32 reset_needed = 0;
-	i40e_status status;
+	int status;
 	u32 i, j;
 
 	orig_flags = READ_ONCE(pf->flags);
@@ -5362,8 +5362,8 @@ flags_complete:
 						0, NULL);
 		if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
 			dev_info(&pf->pdev->dev,
-				 "couldn't set switch config bits, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "couldn't set switch config bits, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			/* not a fatal problem, just keep going */
@@ -5435,9 +5435,8 @@ flags_complete:
 				return -EBUSY;
 			default:
 				dev_warn(&pf->pdev->dev,
-					 "Starting FW LLDP agent failed: error: %s, %s\n",
-					 i40e_stat_str(&pf->hw,
-						       status),
+					 "Starting FW LLDP agent failed: error: %pe, %s\n",
+					 ERR_PTR(status),
 					 i40e_aq_str(&pf->hw,
 						     adq_err));
 				return -EINVAL;
@@ -5477,8 +5476,8 @@ static int i40e_get_module_info(struct net_device *netdev,
 	u32 sff8472_comp = 0;
 	u32 sff8472_swap = 0;
 	u32 sff8636_rev = 0;
-	i40e_status status;
 	u32 type = 0;
+	int status;
 
 	/* Check if firmware supports reading module EEPROM. */
 	if (!(hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE)) {
@@ -5582,8 +5581,8 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
 	bool is_sfp = false;
-	i40e_status status;
 	u32 value = 0;
+	int status;
 	int i;
 
 	if (!ee || !ee->len || !data)
@@ -5624,10 +5623,10 @@ static int i40e_get_eee(struct net_device *netdev, struct ethtool_eee *edata)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_aq_get_phy_abilities_resp phy_cfg;
-	enum i40e_status_code status = 0;
 	struct i40e_vsi *vsi = np->vsi;
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
+	int status = 0;
 
 	/* Get initial PHY capabilities */
 	status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_cfg, NULL);
@@ -5689,11 +5688,11 @@ static int i40e_set_eee(struct net_device *netdev, struct ethtool_eee *edata)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_aq_get_phy_abilities_resp abilities;
-	enum i40e_status_code status = I40E_SUCCESS;
 	struct i40e_aq_set_phy_config config;
 	struct i40e_vsi *vsi = np->vsi;
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
+	int status = I40E_SUCCESS;
 	__le16 eee_capability;
 
 	/* Deny parameters we don't support */
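The recurring change in the i40e_ethtool.c hunks above swaps the driver's private
i40e_stat_str() string lookup for the vsprintf "%pe" specifier. As a minimal sketch of
the mechanism (example_report() is a hypothetical helper, not part of the patch):
ERR_PTR() boxes a negative status into an error pointer, which "%pe" prints
symbolically (e.g. "-EIO") when CONFIG_SYMBOLIC_ERRNAME=y, and as a raw value
otherwise.

	#include <linux/err.h>
	#include <linux/netdevice.h>

	/* Hypothetical helper showing the %pe + ERR_PTR() idiom used above. */
	static void example_report(struct net_device *netdev, int status)
	{
		if (status)
			netdev_info(netdev, "operation failed, err %pe\n",
				    ERR_PTR(status));
	}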
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
index 163ee8c6311c..46f7950a0049 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
@@ -17,17 +17,17 @@
  * @type: what type of segment descriptor we're manipulating
  * @direct_mode_sz: size to alloc in direct mode
  **/
-i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 sd_index,
-				    enum i40e_sd_entry_type type,
-				    u64 direct_mode_sz)
+int i40e_add_sd_table_entry(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 sd_index,
+			    enum i40e_sd_entry_type type,
+			    u64 direct_mode_sz)
 {
 	enum i40e_memory_type mem_type __attribute__((unused));
 	struct i40e_hmc_sd_entry *sd_entry;
 	bool dma_mem_alloc_done = false;
+	int ret_code = I40E_SUCCESS;
 	struct i40e_dma_mem mem;
-	i40e_status ret_code = I40E_SUCCESS;
 	u64 alloc_len;
 
 	if (NULL == hmc_info->sd_table.sd_entry) {
@@ -106,19 +106,19 @@ exit:
  *	   aligned on 4K boundary and zeroed memory.
  *	2. It should be 4K in size.
  **/
-i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 pd_index,
-				    struct i40e_dma_mem *rsrc_pg)
+int i40e_add_pd_table_entry(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 pd_index,
+			    struct i40e_dma_mem *rsrc_pg)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_pd_table *pd_table;
 	struct i40e_hmc_pd_entry *pd_entry;
 	struct i40e_dma_mem mem;
 	struct i40e_dma_mem *page = &mem;
 	u32 sd_idx, rel_pd_idx;
-	u64 *pd_addr;
+	int ret_code = 0;
 	u64 page_desc;
+	u64 *pd_addr;
 
 	if (pd_index / I40E_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt) {
 		ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX;
@@ -185,15 +185,15 @@ exit:
  * 1. Caller can deallocate the memory used by backing storage after this
  *    function returns.
 **/
-i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
-			      struct i40e_hmc_info *hmc_info,
-			      u32 idx)
+int i40e_remove_pd_bp(struct i40e_hw *hw,
+		      struct i40e_hmc_info *hmc_info,
+		      u32 idx)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_pd_entry *pd_entry;
 	struct i40e_hmc_pd_table *pd_table;
 	struct i40e_hmc_sd_entry *sd_entry;
 	u32 sd_idx, rel_pd_idx;
+	int ret_code = 0;
 	u64 *pd_addr;
 
 	/* calculate index */
@@ -241,11 +241,11 @@ exit:
 * @hmc_info: pointer to the HMC configuration information structure
 * @idx: the page index
 **/
-i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
-				   u32 idx)
+int i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
+			   u32 idx)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_sd_entry *sd_entry;
+	int ret_code = 0;
 
 	/* get the entry and decrease its ref counter */
 	sd_entry = &hmc_info->sd_table.sd_entry[idx];
@@ -269,9 +269,9 @@ exit:
 * @idx: the page index
 * @is_pf: used to distinguish between VF and PF
 **/
-i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
-				  struct i40e_hmc_info *hmc_info,
-				  u32 idx, bool is_pf)
+int i40e_remove_sd_bp_new(struct i40e_hw *hw,
+			  struct i40e_hmc_info *hmc_info,
+			  u32 idx, bool is_pf)
 {
 	struct i40e_hmc_sd_entry *sd_entry;
 
@@ -290,11 +290,11 @@ i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
 * @hmc_info: pointer to the HMC configuration information structure
 * @idx: segment descriptor index to find the relevant page descriptor
 **/
-i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
-				     u32 idx)
+int i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
+			     u32 idx)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_sd_entry *sd_entry;
+	int ret_code = 0;
 
 	sd_entry = &hmc_info->sd_table.sd_entry[idx];
 
@@ -318,9 +318,9 @@ exit:
 * @idx: segment descriptor index to find the relevant page descriptor
 * @is_pf: used to distinguish between VF and PF
 **/
-i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 idx, bool is_pf)
+int i40e_remove_pd_page_new(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 idx, bool is_pf)
 {
 	struct i40e_hmc_sd_entry *sd_entry;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
index 3113792afaff..9960da07a573 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
@@ -187,28 +187,28 @@ struct i40e_hmc_info {
 		/* add one more to the limit to correct our range */	\
 		*(pd_limit) += 1;					\
 }
-i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 sd_index,
-				    enum i40e_sd_entry_type type,
-				    u64 direct_mode_sz);
-
-i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 pd_index,
-				    struct i40e_dma_mem *rsrc_pg);
-i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
-			      struct i40e_hmc_info *hmc_info,
-			      u32 idx);
-i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
-				   u32 idx);
-i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
-				  struct i40e_hmc_info *hmc_info,
-				  u32 idx, bool is_pf);
-i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
-				     u32 idx);
-i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
-				    struct i40e_hmc_info *hmc_info,
-				    u32 idx, bool is_pf);
+
+int i40e_add_sd_table_entry(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 sd_index,
+			    enum i40e_sd_entry_type type,
+			    u64 direct_mode_sz);
+int i40e_add_pd_table_entry(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 pd_index,
+			    struct i40e_dma_mem *rsrc_pg);
+int i40e_remove_pd_bp(struct i40e_hw *hw,
+		      struct i40e_hmc_info *hmc_info,
+		      u32 idx);
+int i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
+			   u32 idx);
+int i40e_remove_sd_bp_new(struct i40e_hw *hw,
+			  struct i40e_hmc_info *hmc_info,
+			  u32 idx, bool is_pf);
+int i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
+			     u32 idx);
+int i40e_remove_pd_page_new(struct i40e_hw *hw,
+			    struct i40e_hmc_info *hmc_info,
+			    u32 idx, bool is_pf);
 
 #endif /* _I40E_HMC_H_ */
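The header change above is the pattern applied across the whole series: the driver's
private i40e_status/i40e_status_code return type is dropped in favor of plain int. The
values themselves (I40E_SUCCESS, I40E_ERR_*) are not changed by this patch, so existing
"if (ret)" checks keep working. A hedged caller sketch (example_setup() is illustrative
only, not from the patch):

	/* Illustrative caller: HMC errors now travel as ordinary ints. */
	static int example_setup(struct i40e_hw *hw, struct i40e_hmc_info *info)
	{
		int err;

		err = i40e_add_sd_table_entry(hw, info, 0, I40E_SD_TYPE_DIRECT, 0);
		if (err)
			return err;	/* propagates without a cast */

		return i40e_add_pd_table_entry(hw, info, 0, NULL);
	}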
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
index d6e92ecddfbd..40c101f286d1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
@@ -74,12 +74,12 @@ static u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
  * Assumptions:
  *   - HMC Resource Profile has been selected before calling this function.
  **/
-i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
-			      u32 rxq_num, u32 fcoe_cntx_num,
-			      u32 fcoe_filt_num)
+int i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
+		      u32 rxq_num, u32 fcoe_cntx_num,
+		      u32 fcoe_filt_num)
 {
 	struct i40e_hmc_obj_info *obj, *full_obj;
-	i40e_status ret_code = 0;
+	int ret_code = 0;
 	u64 l2fpm_size;
 	u32 size_exp;
 
@@ -229,11 +229,11 @@ init_lan_hmc_out:
 * 1. caller can deallocate the memory used by pd after this function
 *    returns.
 **/
-static i40e_status i40e_remove_pd_page(struct i40e_hw *hw,
-				       struct i40e_hmc_info *hmc_info,
-				       u32 idx)
+static int i40e_remove_pd_page(struct i40e_hw *hw,
+			       struct i40e_hmc_info *hmc_info,
+			       u32 idx)
 {
-	i40e_status ret_code = 0;
+	int ret_code = 0;
 
 	if (!i40e_prep_remove_pd_page(hmc_info, idx))
 		ret_code = i40e_remove_pd_page_new(hw, hmc_info, idx, true);
@@ -256,11 +256,11 @@ static i40e_status i40e_remove_pd_page(struct i40e_hw *hw,
 * 1. caller can deallocate the memory used by backing storage after this
 *    function returns.
 **/
-static i40e_status i40e_remove_sd_bp(struct i40e_hw *hw,
-				     struct i40e_hmc_info *hmc_info,
-				     u32 idx)
+static int i40e_remove_sd_bp(struct i40e_hw *hw,
+			     struct i40e_hmc_info *hmc_info,
+			     u32 idx)
 {
-	i40e_status ret_code = 0;
+	int ret_code = 0;
 
 	if (!i40e_prep_remove_sd_bp(hmc_info, idx))
 		ret_code = i40e_remove_sd_bp_new(hw, hmc_info, idx, true);
@@ -276,15 +276,15 @@ static i40e_status i40e_remove_sd_bp(struct i40e_hw *hw,
 * This will allocate memory for PDs and backing pages and populate
 * the sd and pd entries.
 **/
-static i40e_status i40e_create_lan_hmc_object(struct i40e_hw *hw,
-					      struct i40e_hmc_lan_create_obj_info *info)
+static int i40e_create_lan_hmc_object(struct i40e_hw *hw,
+				      struct i40e_hmc_lan_create_obj_info *info)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_sd_entry *sd_entry;
 	u32 pd_idx1 = 0, pd_lmt1 = 0;
 	u32 pd_idx = 0, pd_lmt = 0;
 	bool pd_error = false;
 	u32 sd_idx, sd_lmt;
+	int ret_code = 0;
 	u64 sd_size;
 	u32 i, j;
 
@@ -435,13 +435,13 @@ exit:
 *	  - This function will be called after i40e_init_lan_hmc() and before
 *	    any LAN/FCoE HMC objects can be created.
 **/
-i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
-				   enum i40e_hmc_model model)
+int i40e_configure_lan_hmc(struct i40e_hw *hw,
+			   enum i40e_hmc_model model)
 {
 	struct i40e_hmc_lan_create_obj_info info;
-	i40e_status ret_code = 0;
 	u8 hmc_fn_id = hw->hmc.hmc_fn_id;
 	struct i40e_hmc_obj_info *obj;
+	int ret_code = 0;
 
 	/* Initialize part of the create object info struct */
 	info.hmc_info = &hw->hmc;
@@ -520,13 +520,13 @@ configure_lan_hmc_out:
 * caller should deallocate memory allocated previously for
 * book-keeping information about PDs and backing storage.
 **/
-static i40e_status i40e_delete_lan_hmc_object(struct i40e_hw *hw,
-					      struct i40e_hmc_lan_delete_obj_info *info)
+static int i40e_delete_lan_hmc_object(struct i40e_hw *hw,
+				      struct i40e_hmc_lan_delete_obj_info *info)
 {
-	i40e_status ret_code = 0;
 	struct i40e_hmc_pd_table *pd_table;
 	u32 pd_idx, pd_lmt, rel_pd_idx;
 	u32 sd_idx, sd_lmt;
+	int ret_code = 0;
 	u32 i, j;
 
 	if (NULL == info) {
@@ -632,10 +632,10 @@ exit:
 * This must be called by drivers as they are shutting down and being
 * removed from the OS.
 **/
-i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw)
+int i40e_shutdown_lan_hmc(struct i40e_hw *hw)
 {
 	struct i40e_hmc_lan_delete_obj_info info;
-	i40e_status ret_code;
+	int ret_code;
 
 	info.hmc_info = &hw->hmc;
 	info.rsrc_type = I40E_HMC_LAN_FULL;
@@ -915,9 +915,9 @@ static void i40e_write_qword(u8 *hmc_bits,
 * @context_bytes: pointer to the context bit array (DMA memory)
 * @hmc_type: the type of HMC resource
 **/
-static i40e_status i40e_clear_hmc_context(struct i40e_hw *hw,
-					  u8 *context_bytes,
-					  enum i40e_hmc_lan_rsrc_type hmc_type)
+static int i40e_clear_hmc_context(struct i40e_hw *hw,
+				  u8 *context_bytes,
+				  enum i40e_hmc_lan_rsrc_type hmc_type)
 {
 	/* clean the bit array */
 	memset(context_bytes, 0, (u32)hw->hmc.hmc_obj[hmc_type].size);
@@ -931,9 +931,9 @@ static i40e_status i40e_clear_hmc_context(struct i40e_hw *hw,
 * @ce_info: a description of the struct to be filled
 * @dest: the struct to be filled
 **/
-static i40e_status i40e_set_hmc_context(u8 *context_bytes,
-					struct i40e_context_ele *ce_info,
-					u8 *dest)
+static int i40e_set_hmc_context(u8 *context_bytes,
+				struct i40e_context_ele *ce_info,
+				u8 *dest)
 {
 	int f;
 
@@ -973,18 +973,18 @@ static i40e_status i40e_set_hmc_context(u8 *context_bytes,
 * base pointer.  This function is used for LAN Queue contexts.
 **/
 static
-i40e_status i40e_hmc_get_object_va(struct i40e_hw *hw, u8 **object_base,
-				   enum i40e_hmc_lan_rsrc_type rsrc_type,
-				   u32 obj_idx)
+int i40e_hmc_get_object_va(struct i40e_hw *hw, u8 **object_base,
+			   enum i40e_hmc_lan_rsrc_type rsrc_type,
+			   u32 obj_idx)
 {
 	struct i40e_hmc_info *hmc_info = &hw->hmc;
 	u32 obj_offset_in_sd, obj_offset_in_pd;
 	struct i40e_hmc_sd_entry *sd_entry;
 	struct i40e_hmc_pd_entry *pd_entry;
 	u32 pd_idx, pd_lmt, rel_pd_idx;
-	i40e_status ret_code = 0;
 	u64 obj_offset_in_fpm;
 	u32 sd_idx, sd_lmt;
+	int ret_code = 0;
 
 	if (NULL == hmc_info) {
 		ret_code = I40E_ERR_BAD_PTR;
@@ -1042,11 +1042,11 @@ exit:
 * @hw: the hardware struct
 * @queue: the queue we care about
 **/
-i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
-					    u16 queue)
+int i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
+				    u16 queue)
 {
-	i40e_status err;
 	u8 *context_bytes;
+	int err;
 
 	err = i40e_hmc_get_object_va(hw, &context_bytes,
 				     I40E_HMC_LAN_TX, queue);
@@ -1062,12 +1062,12 @@ i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
 * @queue: the queue we care about
 * @s: the struct to be filled
 **/
-i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
-					  u16 queue,
-					  struct i40e_hmc_obj_txq *s)
+int i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
+				  u16 queue,
+				  struct i40e_hmc_obj_txq *s)
 {
-	i40e_status err;
 	u8 *context_bytes;
+	int err;
 
 	err = i40e_hmc_get_object_va(hw, &context_bytes,
 				     I40E_HMC_LAN_TX, queue);
@@ -1083,11 +1083,11 @@ i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
 * @hw: the hardware struct
 * @queue: the queue we care about
 **/
-i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
-					    u16 queue)
+int i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
+				    u16 queue)
 {
-	i40e_status err;
 	u8 *context_bytes;
+	int err;
 
 	err = i40e_hmc_get_object_va(hw, &context_bytes,
 				     I40E_HMC_LAN_RX, queue);
@@ -1103,12 +1103,12 @@ i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
 * @queue: the queue we care about
 * @s: the struct to be filled
 **/
-i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
-					  u16 queue,
-					  struct i40e_hmc_obj_rxq *s)
+int i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
+				  u16 queue,
+				  struct i40e_hmc_obj_rxq *s)
 {
-	i40e_status err;
 	u8 *context_bytes;
+	int err;
 
 	err = i40e_hmc_get_object_va(hw, &context_bytes,
 				     I40E_HMC_LAN_RX, queue);
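With the LAN HMC helpers above returning int, queue-bringup paths can propagate their
result directly. A sketch of the clear-then-set pattern the driver itself follows in
i40e_configure_tx_ring() (example_configure_queue() is a hypothetical name):

	static int example_configure_queue(struct i40e_hw *hw, u16 pf_q,
					   struct i40e_hmc_obj_txq *tx_ctx)
	{
		int err;

		/* clear the context structure before programming it */
		err = i40e_clear_lan_tx_queue_context(hw, pf_q);
		if (err)
			return err;

		return i40e_set_lan_tx_queue_context(hw, pf_q, tx_ctx);
	}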
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
index c46a2c449e60..9f960404c2b3 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
@@ -137,22 +137,22 @@ struct i40e_hmc_lan_delete_obj_info {
 	u32 count;
 };
-i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
-			      u32 rxq_num, u32 fcoe_cntx_num,
-			      u32 fcoe_filt_num);
-i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
-				   enum i40e_hmc_model model);
-i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw);
-
-i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
-					    u16 queue);
-i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
-					  u16 queue,
-					  struct i40e_hmc_obj_txq *s);
-i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
-					    u16 queue);
-i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
-					  u16 queue,
-					  struct i40e_hmc_obj_rxq *s);
+int i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
+		      u32 rxq_num, u32 fcoe_cntx_num,
+		      u32 fcoe_filt_num);
+int i40e_configure_lan_hmc(struct i40e_hw *hw,
+			   enum i40e_hmc_model model);
+int i40e_shutdown_lan_hmc(struct i40e_hw *hw);
+
+int i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
+				    u16 queue);
+int i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
+				  u16 queue,
+				  struct i40e_hmc_obj_txq *s);
+int i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
+				    u16 queue);
+int i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
+				  u16 queue,
+				  struct i40e_hmc_obj_rxq *s);
 
 #endif /* _I40E_LAN_HMC_H_ */
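Two error domains meet in the i40e_main.c hunks that follow: the admin-queue send
status (now a plain int) and the firmware's last return code
(hw->aq.asq_last_status), which i40e_aq_rc_to_posix() maps onto a POSIX errno for
callers such as i40e_sync_vsi_filters(). A hedged sketch (example_aq_call() is
illustrative only, not from the patch):

	static int example_aq_call(struct i40e_hw *hw)
	{
		int aq_ret = i40e_aq_set_vsi_broadcast(hw, 0, true, NULL);

		if (aq_ret)	/* translate the AQ failure for the stack */
			return i40e_aq_rc_to_posix(aq_ret,
						   hw->aq.asq_last_status);
		return 0;
	}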
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 52eec0a50492..467001db5070 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -1817,13 +1817,13 @@ static int i40e_set_mac(struct net_device *netdev, void *p)
 	spin_unlock_bh(&vsi->mac_filter_hash_lock);
 
 	if (vsi->type == I40E_VSI_MAIN) {
-		i40e_status ret;
+		int ret;
 
 		ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_WOL,
 						addr->sa_data, NULL);
 		if (ret)
-			netdev_info(netdev, "Ignoring error from firmware on LAA update, status %s, AQ ret %s\n",
-				    i40e_stat_str(hw, ret),
+			netdev_info(netdev, "Ignoring error from firmware on LAA update, status %pe, AQ ret %s\n",
+				    ERR_PTR(ret),
 				    i40e_aq_str(hw, hw->aq.asq_last_status));
 	}
 
@@ -1854,8 +1854,8 @@ static int i40e_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
 		ret = i40e_aq_set_rss_key(hw, vsi->id, seed_dw);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "Cannot set RSS key, err %s aq_err %s\n",
-				 i40e_stat_str(hw, ret),
+				 "Cannot set RSS key, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 			return ret;
 		}
@@ -1866,8 +1866,8 @@ static int i40e_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
 		ret = i40e_aq_set_rss_lut(hw, vsi->id, pf_lut, lut, lut_size);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "Cannot set RSS lut, err %s aq_err %s\n",
-				 i40e_stat_str(hw, ret),
+				 "Cannot set RSS lut, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 			return ret;
 		}
@@ -2349,7 +2349,7 @@ void i40e_aqc_del_filters(struct i40e_vsi *vsi, const char *vsi_name,
 {
 	struct i40e_hw *hw = &vsi->back->hw;
 	enum i40e_admin_queue_err aq_status;
-	i40e_status aq_ret;
+	int aq_ret;
 
 	aq_ret = i40e_aq_remove_macvlan_v2(hw, vsi->seid, list, num_del, NULL,
 					   &aq_status);
@@ -2358,8 +2358,8 @@ void i40e_aqc_del_filters(struct i40e_vsi *vsi, const char *vsi_name,
 	if (aq_ret && !(aq_status == I40E_AQ_RC_ENOENT)) {
 		*retval = -EIO;
 		dev_info(&vsi->back->pdev->dev,
-			 "ignoring delete macvlan error on %s, err %s, aq_err %s\n",
-			 vsi_name, i40e_stat_str(hw, aq_ret),
+			 "ignoring delete macvlan error on %s, err %pe, aq_err %s\n",
+			 vsi_name, ERR_PTR(aq_ret),
 			 i40e_aq_str(hw, aq_status));
 	}
 }
@@ -2423,13 +2423,13 @@ void i40e_aqc_add_filters(struct i40e_vsi *vsi, const char *vsi_name,
 *
 * Returns status indicating success or failure;
 **/
-static i40e_status
+static int
 i40e_aqc_broadcast_filter(struct i40e_vsi *vsi, const char *vsi_name,
 			  struct i40e_mac_filter *f)
 {
 	bool enable = f->state == I40E_FILTER_NEW;
 	struct i40e_hw *hw = &vsi->back->hw;
-	i40e_status aq_ret;
+	int aq_ret;
 
 	if (f->vlan == I40E_VLAN_ANY) {
 		aq_ret = i40e_aq_set_vsi_broadcast(hw,
@@ -2468,7 +2468,7 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
 {
 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status aq_ret;
+	int aq_ret;
 
 	if (vsi->type == I40E_VSI_MAIN &&
 	    pf->lan_veb != I40E_NO_VEB &&
@@ -2488,8 +2488,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
 							     NULL);
 		if (aq_ret) {
 			dev_info(&pf->pdev->dev,
-				 "Set default VSI failed, err %s, aq_err %s\n",
-				 i40e_stat_str(hw, aq_ret),
+				 "Set default VSI failed, err %pe, aq_err %s\n",
+				 ERR_PTR(aq_ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 		}
 	} else {
@@ -2500,8 +2500,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
 							  true);
 		if (aq_ret) {
 			dev_info(&pf->pdev->dev,
-				 "set unicast promisc failed, err %s, aq_err %s\n",
-				 i40e_stat_str(hw, aq_ret),
+				 "set unicast promisc failed, err %pe, aq_err %s\n",
+				 ERR_PTR(aq_ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 		}
 		aq_ret = i40e_aq_set_vsi_multicast_promiscuous(
@@ -2510,8 +2510,8 @@ static int i40e_set_promiscuous(struct i40e_pf *pf, bool promisc)
 							  promisc, NULL);
 		if (aq_ret) {
 			dev_info(&pf->pdev->dev,
-				 "set multicast promisc failed, err %s, aq_err %s\n",
-				 i40e_stat_str(hw, aq_ret),
+				 "set multicast promisc failed, err %pe, aq_err %s\n",
+				 ERR_PTR(aq_ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 		}
 	}
@@ -2541,12 +2541,12 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
 	unsigned int vlan_filters = 0;
 	char vsi_name[16] = "PF";
 	int filter_list_len = 0;
-	i40e_status aq_ret = 0;
 	u32 changed_flags = 0;
 	struct hlist_node *h;
 	struct i40e_pf *pf;
 	int num_add = 0;
 	int num_del = 0;
+	int aq_ret = 0;
 	int retval = 0;
 	u16 cmd_flags;
 	int list_size;
@@ -2814,9 +2814,9 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
 			retval = i40e_aq_rc_to_posix(aq_ret,
 						     hw->aq.asq_last_status);
 			dev_info(&pf->pdev->dev,
-				 "set multi promisc failed on %s, err %s aq_err %s\n",
+				 "set multi promisc failed on %s, err %pe aq_err %s\n",
 				 vsi_name,
-				 i40e_stat_str(hw, aq_ret),
+				 ERR_PTR(aq_ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 		} else {
 			dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
@@ -2834,10 +2834,10 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
 			retval = i40e_aq_rc_to_posix(aq_ret,
 						     hw->aq.asq_last_status);
 			dev_info(&pf->pdev->dev,
-				 "Setting promiscuous %s failed on %s, err %s aq_err %s\n",
+				 "Setting promiscuous %s failed on %s, err %pe aq_err %s\n",
 				 cur_promisc ? "on" : "off",
 				 vsi_name,
-				 i40e_stat_str(hw, aq_ret),
+				 ERR_PTR(aq_ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 		}
 	}
@@ -2965,7 +2965,7 @@ int i40e_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
 void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
 {
 	struct i40e_vsi_context ctxt;
-	i40e_status ret;
+	int ret;
 
 	/* Don't modify stripping options if a port VLAN is active */
 	if (vsi->info.pvid)
@@ -2985,8 +2985,8 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
 	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&vsi->back->pdev->dev,
-			 "update vlan stripping failed, err %s aq_err %s\n",
-			 i40e_stat_str(&vsi->back->hw, ret),
+			 "update vlan stripping failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&vsi->back->hw,
 				     vsi->back->hw.aq.asq_last_status));
 	}
@@ -2999,7 +2999,7 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
 void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
 {
 	struct i40e_vsi_context ctxt;
-	i40e_status ret;
+	int ret;
 
 	/* Don't modify stripping options if a port VLAN is active */
 	if (vsi->info.pvid)
@@ -3020,8 +3020,8 @@ void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
 	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&vsi->back->pdev->dev,
-			 "update vlan stripping failed, err %s aq_err %s\n",
-			 i40e_stat_str(&vsi->back->hw, ret),
+			 "update vlan stripping failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&vsi->back->hw,
 				     vsi->back->hw.aq.asq_last_status));
 	}
@@ -3252,7 +3252,7 @@ static void i40e_restore_vlan(struct i40e_vsi *vsi)
 int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
 {
 	struct i40e_vsi_context ctxt;
-	i40e_status ret;
+	int ret;
 
 	vsi->info.valid_sections = cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID);
 	vsi->info.pvid = cpu_to_le16(vid);
@@ -3265,8 +3265,8 @@ int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
 	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&vsi->back->pdev->dev,
-			 "add pvid failed, err %s aq_err %s\n",
-			 i40e_stat_str(&vsi->back->hw, ret),
+			 "add pvid failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&vsi->back->hw,
 				     vsi->back->hw.aq.asq_last_status));
 		return -ENOENT;
@@ -3429,8 +3429,8 @@ static int i40e_configure_tx_ring(struct i40e_ring *ring)
 	u16 pf_q = vsi->base_queue + ring->queue_index;
 	struct i40e_hw *hw = &vsi->back->hw;
 	struct i40e_hmc_obj_txq tx_ctx;
-	i40e_status err = 0;
 	u32 qtx_ctl = 0;
+	int err = 0;
 
 	if (ring_is_xdp(ring))
 		ring->xsk_pool = i40e_xsk_pool(ring);
@@ -3554,7 +3554,7 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
 	u16 pf_q = vsi->base_queue + ring->queue_index;
 	struct i40e_hw *hw = &vsi->back->hw;
 	struct i40e_hmc_obj_rxq rx_ctx;
-	i40e_status err = 0;
+	int err = 0;
 	bool ok;
 	int ret;
 
@@ -5525,16 +5525,16 @@ static int i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
 	struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status ret;
 	u32 tc_bw_max;
+	int ret;
 	int i;
 
 	/* Get the VSI level BW configuration */
 	ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get PF vsi bw config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get PF vsi bw config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return -EINVAL;
 	}
@@ -5544,8 +5544,8 @@ static int i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
 					       NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get PF vsi ets bw config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get PF vsi ets bw config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return -EINVAL;
 	}
@@ -5586,7 +5586,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
 {
 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
 	struct i40e_pf *pf = vsi->back;
-	i40e_status ret;
+	int ret;
 	int i;
 
 	/* There is no need to reset BW when mqprio mode is on.  */
@@ -5734,8 +5734,8 @@ int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset)
 
 	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
 	if (ret) {
-		dev_info(&pf->pdev->dev, "Update vsi config failed, err %s aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+		dev_info(&pf->pdev->dev, "Update vsi config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, hw->aq.asq_last_status));
 		return ret;
 	}
@@ -5790,8 +5790,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
 						  &bw_config, NULL);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "Failed querying vsi bw info, err %s aq_err %s\n",
-				 i40e_stat_str(hw, ret),
+				 "Failed querying vsi bw info, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(hw, hw->aq.asq_last_status));
 			goto out;
 		}
@@ -5857,8 +5857,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
 	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Update vsi tc config failed, err %s aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+			 "Update vsi tc config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, hw->aq.asq_last_status));
 		goto out;
 	}
@@ -5870,8 +5870,8 @@ static int i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 enabled_tc)
 	ret = i40e_vsi_get_bw_info(vsi);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Failed updating vsi bw info, err %s aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+			 "Failed updating vsi bw info, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, hw->aq.asq_last_status));
 		goto out;
 	}
@@ -5962,8 +5962,8 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
 					  I40E_MAX_BW_INACTIVE_ACCUM, NULL);
 	if (ret)
 		dev_err(&pf->pdev->dev,
-			"Failed set tx rate (%llu Mbps) for vsi->seid %u, err %s aq_err %s\n",
-			max_tx_rate, seid, i40e_stat_str(&pf->hw, ret),
+			"Failed set tx rate (%llu Mbps) for vsi->seid %u, err %pe aq_err %s\n",
+			max_tx_rate, seid, ERR_PTR(ret),
 			i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 	return ret;
 }
@@ -6038,8 +6038,8 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 			last_aq_status = pf->hw.aq.asq_last_status;
 			if (ret)
 				dev_info(&pf->pdev->dev,
-					 "Failed to delete cloud filter, err %s aq_err %s\n",
-					 i40e_stat_str(&pf->hw, ret),
+					 "Failed to delete cloud filter, err %pe aq_err %s\n",
+					 ERR_PTR(ret),
 					 i40e_aq_str(&pf->hw, last_aq_status));
 			kfree(cfilter);
 		}
@@ -6173,8 +6173,8 @@ static int i40e_vsi_reconfig_rss(struct i40e_vsi *vsi, u16 rss_size)
 	ret = i40e_config_rss(vsi, seed, lut, vsi->rss_table_size);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Cannot set RSS lut, err %s aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+			 "Cannot set RSS lut, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, hw->aq.asq_last_status));
 		kfree(lut);
 		return ret;
@@ -6272,8 +6272,8 @@ static int i40e_add_channel(struct i40e_pf *pf, u16 uplink_seid,
 	ret = i40e_aq_add_vsi(hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "add new vsi failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "add new vsi failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw,
 				     pf->hw.aq.asq_last_status));
 		return -ENOENT;
@@ -6304,7 +6304,7 @@ static int i40e_channel_config_bw(struct i40e_vsi *vsi, struct i40e_channel *ch,
 				  u8 *bw_share)
 {
 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
-	i40e_status ret;
+	int ret;
 	int i;
 
 	memset(&bw_data, 0, sizeof(bw_data));
@@ -6340,9 +6340,9 @@ static int i40e_channel_config_tx_ring(struct i40e_pf *pf,
 				       struct i40e_vsi *vsi,
 				       struct i40e_channel *ch)
 {
-	i40e_status ret;
-	int i;
 	u8 bw_share[I40E_MAX_TRAFFIC_CLASS] = {0};
+	int ret;
+	int i;
 
 	/* Enable ETS TCs with equal BW Share for now across all VSIs */
 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
@@ -6518,8 +6518,8 @@ static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi)
 					mode, NULL);
 	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
 		dev_err(&pf->pdev->dev,
-			"couldn't set switch config bits, err %s aq_err %s\n",
-			i40e_stat_str(hw, ret),
+			"couldn't set switch config bits, err %pe aq_err %s\n",
+			ERR_PTR(ret),
 			i40e_aq_str(hw,
 				    hw->aq.asq_last_status));
 
@@ -6719,8 +6719,8 @@ int i40e_veb_config_tc(struct i40e_veb *veb, u8 enabled_tc)
 						   &bw_data, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "VEB bw config failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "VEB bw config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		goto out;
 	}
@@ -6729,8 +6729,8 @@ int i40e_veb_config_tc(struct i40e_veb *veb, u8 enabled_tc)
 	ret = i40e_veb_get_bw_info(veb);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Failed getting veb bw config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Failed getting veb bw config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 	}
 
@@ -6813,8 +6813,8 @@ static int i40e_resume_port_tx(struct i40e_pf *pf)
 	ret = i40e_aq_resume_port_tx(hw, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Resume Port Tx failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Resume Port Tx failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		/* Schedule PF reset to recover */
 		set_bit(__I40E_PF_RESET_REQUESTED, pf->state);
@@ -6838,8 +6838,8 @@ static int i40e_suspend_port_tx(struct i40e_pf *pf)
 	ret = i40e_aq_suspend_port_tx(hw, pf->mac_seid, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Suspend Port Tx failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Suspend Port Tx failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		/* Schedule PF reset to recover */
 		set_bit(__I40E_PF_RESET_REQUESTED, pf->state);
@@ -6878,8 +6878,8 @@ static int i40e_hw_set_dcb_config(struct i40e_pf *pf,
 	ret = i40e_set_dcb_config(&pf->hw);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Set DCB Config failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Set DCB Config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		goto out;
 	}
@@ -6995,8 +6995,8 @@ int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg)
 					   i40e_aqc_opc_modify_switching_comp_ets,
 					   NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Modify Port ETS failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Modify Port ETS failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		goto out;
 	}
@@ -7033,8 +7033,8 @@ int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg)
 	ret = i40e_aq_dcb_updated(&pf->hw, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "DCB Updated failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "DCB Updated failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		goto out;
 	}
@@ -7117,8 +7117,8 @@ int i40e_dcb_sw_default_config(struct i40e_pf *pf)
 				   i40e_aqc_opc_enable_switching_comp_ets,
 				   NULL);
 	if (err) {
 		dev_info(&pf->pdev->dev,
-			 "Enable Port ETS failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, err),
+			 "Enable Port ETS failed, err %pe aq_err %s\n",
+			 ERR_PTR(err),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		err = -ENOENT;
 		goto out;
@@ -7197,8 +7197,8 @@ static int i40e_init_pf_dcb(struct i40e_pf *pf)
 		pf->flags |= I40E_FLAG_DISABLE_FW_LLDP;
 	} else {
 		dev_info(&pf->pdev->dev,
-			 "Query for DCB configuration failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, err),
+			 "Query for DCB configuration failed, err %pe aq_err %s\n",
+			 ERR_PTR(err),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 	}
 
@@ -7416,15 +7416,15 @@ static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi)
 * @pf: board private structure
 * @is_up: whether the link state should be forced up or down
 **/
-static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
+static int i40e_force_link_state(struct i40e_pf *pf, bool is_up)
 {
 	struct i40e_aq_get_phy_abilities_resp abilities;
 	struct i40e_aq_set_phy_config config = {0};
 	bool non_zero_phy_type = is_up;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status err;
 	u64 mask;
 	u8 speed;
+	int err;
 
 	/* Card might've been put in an unstable state by other drivers
 	 * and applications, which causes incorrect speed values being
@@ -7436,8 +7436,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
 					   NULL);
 	if (err) {
 		dev_err(&pf->pdev->dev,
-			"failed to get phy cap., ret = %s last_status = %s\n",
-			i40e_stat_str(hw, err),
+			"failed to get phy cap., ret = %pe last_status = %s\n",
+			ERR_PTR(err),
 			i40e_aq_str(hw, hw->aq.asq_last_status));
 		return err;
 	}
@@ -7448,8 +7448,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
 					   NULL);
 	if (err) {
 		dev_err(&pf->pdev->dev,
-			"failed to get phy cap., ret = %s last_status = %s\n",
-			i40e_stat_str(hw, err),
+			"failed to get phy cap., ret = %pe last_status = %s\n",
+			ERR_PTR(err),
 			i40e_aq_str(hw, hw->aq.asq_last_status));
 		return err;
 	}
@@ -7493,8 +7493,8 @@ static i40e_status i40e_force_link_state(struct i40e_pf *pf, bool is_up)
 
 	if (err) {
 		dev_err(&pf->pdev->dev,
-			"set phy config ret =  %s last_status =  %s\n",
-			i40e_stat_str(&pf->hw, err),
+			"set phy config ret =  %pe last_status =  %s\n",
+			ERR_PTR(err),
 			i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return err;
 	}
@@ -7657,11 +7657,11 @@ static void i40e_vsi_set_default_tc_config(struct i40e_vsi *vsi)
 * This function deletes a mac filter on the channel VSI which serves as the
 * macvlan. Returns 0 on success.
 **/
-static i40e_status i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
-					   const u8 *macaddr, int *aq_err)
+static int i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
+				   const u8 *macaddr, int *aq_err)
 {
 	struct i40e_aqc_remove_macvlan_element_data element;
-	i40e_status status;
+	int status;
 
 	memset(&element, 0, sizeof(element));
 	ether_addr_copy(element.mac_addr, macaddr);
@@ -7683,12 +7683,12 @@ static i40e_status i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
 * This function adds a mac filter on the channel VSI which serves as the
 * macvlan. Returns 0 on success.
 **/
-static i40e_status i40e_add_macvlan_filter(struct i40e_hw *hw, u16 seid,
-					   const u8 *macaddr, int *aq_err)
+static int i40e_add_macvlan_filter(struct i40e_hw *hw, u16 seid,
+				   const u8 *macaddr, int *aq_err)
 {
 	struct i40e_aqc_add_macvlan_element_data element;
-	i40e_status status;
 	u16 cmd_flags = 0;
+	int status;
 
 	ether_addr_copy(element.mac_addr, macaddr);
 	element.vlan_tag = 0;
@@ -7834,8 +7834,8 @@ static int i40e_fwd_ring_up(struct i40e_vsi *vsi, struct net_device *vdev,
 			rx_ring->netdev = NULL;
 		}
 		dev_info(&pf->pdev->dev,
-			 "Error adding mac filter on macvlan err %s, aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+			 "Error adding mac filter on macvlan err %pe, aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, aq_err));
 		netdev_err(vdev, "L2fwd offload disabled to L2 filter error\n");
 	}
@@ -7907,8 +7907,8 @@ static int i40e_setup_macvlans(struct i40e_vsi *vsi, u16 macvlan_cnt, u16 qcnt,
 	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Update vsi tc config failed, err %s aq_err %s\n",
-			 i40e_stat_str(hw, ret),
+			 "Update vsi tc config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(hw, hw->aq.asq_last_status));
 		return ret;
 	}
@@ -8123,8 +8123,8 @@ static void i40e_fwd_del(struct net_device *netdev, void *vdev)
 				ch->fwd = NULL;
 			} else {
 				dev_info(&pf->pdev->dev,
-					 "Error deleting mac filter on macvlan err %s, aq_err %s\n",
-					 i40e_stat_str(hw, ret),
+					 "Error deleting mac filter on macvlan err %pe, aq_err %s\n",
+					 ERR_PTR(ret),
 					 i40e_aq_str(hw, aq_err));
 			}
 			break;
@@ -8875,8 +8875,8 @@ static int i40e_delete_clsflower(struct i40e_vsi *vsi,
 	kfree(filter);
 	if (err) {
 		dev_err(&pf->pdev->dev,
-			"Failed to delete cloud filter, err %s\n",
-			i40e_stat_str(&pf->hw, err));
+			"Failed to delete cloud filter, err %pe\n",
+			ERR_PTR(err));
 		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
 	}
 
@@ -9438,8 +9438,8 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf,
 		pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
 	} else {
 		dev_info(&pf->pdev->dev,
-			 "Failed querying DCB configuration data from firmware, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Failed querying DCB configuration data from firmware, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 	}
 
@@ -9887,8 +9887,8 @@ static void i40e_link_event(struct i40e_pf *pf)
 {
 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
 	u8 new_link_speed, old_link_speed;
-	i40e_status status;
 	bool new_link, old_link;
+	int status;
 #ifdef CONFIG_I40E_DCB
 	int err;
 #endif /* CONFIG_I40E_DCB */
@@ -10099,9 +10099,9 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf)
 	struct i40e_arq_event_info event;
 	struct i40e_hw *hw = &pf->hw;
 	u16 pending, i = 0;
-	i40e_status ret;
 	u16 opcode;
 	u32 oldval;
+	int ret;
 	u32 val;
 
 	/* Do not run clean AQ when PF reset fails */
@@ -10265,8 +10265,8 @@ static void i40e_enable_pf_switch_lb(struct i40e_pf *pf)
 	ret = i40e_aq_get_vsi_params(&pf->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get PF vsi config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get PF vsi config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return;
 	}
@@ -10277,8 +10277,8 @@ static void i40e_enable_pf_switch_lb(struct i40e_pf *pf)
 	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "update vsi switch failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "update vsi switch failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw,
 				     pf->hw.aq.asq_last_status));
 	}
 }
@@ -10301,8 +10301,8 @@ static void i40e_disable_pf_switch_lb(struct i40e_pf *pf)
 	ret = i40e_aq_get_vsi_params(&pf->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get PF vsi config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get PF vsi config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return;
 	}
@@ -10313,8 +10313,8 @@ static void i40e_disable_pf_switch_lb(struct i40e_pf *pf)
 	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "update vsi switch failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "update vsi switch failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw,
 				     pf->hw.aq.asq_last_status));
 	}
 }
@@ -10458,8 +10458,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf,
 			buf_len = data_size;
 		} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
 			dev_info(&pf->pdev->dev,
-				 "capability discovery failed, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, err),
+				 "capability discovery failed, err %pe aq_err %s\n",
+				 ERR_PTR(err),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			return -ENODEV;
@@ -10580,7 +10580,7 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
 	struct i40e_cloud_filter *cfilter;
 	struct i40e_pf *pf = vsi->back;
 	struct hlist_node *node;
-	i40e_status ret;
+	int ret;
 
 	/* Add cloud filters back if they exist */
 	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
@@ -10596,8 +10596,8 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
 
 		if (ret) {
 			dev_dbg(&pf->pdev->dev,
-				"Failed to rebuild cloud filter, err %s aq_err %s\n",
-				i40e_stat_str(&pf->hw, ret),
+				"Failed to rebuild cloud filter, err %pe aq_err %s\n",
+				ERR_PTR(ret),
 				i40e_aq_str(&pf->hw,
 					    pf->hw.aq.asq_last_status));
 			return ret;
@@ -10615,7 +10615,7 @@ static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
 static int i40e_rebuild_channels(struct i40e_vsi *vsi)
 {
 	struct i40e_channel *ch, *ch_tmp;
-	i40e_status ret;
+	int ret;
 
 	if (list_empty(&vsi->ch_list))
 		return 0;
@@ -10691,7 +10691,7 @@ static void i40e_clean_xps_state(struct i40e_vsi *vsi)
 static void i40e_prep_for_reset(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status ret = 0;
+	int ret = 0;
 	u32 v;
 
 	clear_bit(__I40E_RESET_INTR_RECEIVED, pf->state);
@@ -10796,7 +10796,7 @@ static void i40e_get_oem_version(struct i40e_hw *hw)
 static int i40e_reset(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status ret;
+	int ret;
 
 	ret = i40e_pf_reset(hw);
 	if (ret) {
@@ -10821,7 +10821,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 	const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf);
 	struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status ret;
+	int ret;
 	u32 val;
 	int v;
 
@@ -10837,8 +10837,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 	/* rebuild the basics for the AdminQ, HMC, and initial HW switch */
 	ret = i40e_init_adminq(&pf->hw);
 	if (ret) {
-		dev_info(&pf->pdev->dev, "Rebuild AdminQ failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+		dev_info(&pf->pdev->dev, "Rebuild AdminQ failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		goto clear_recovery;
 	}
@@ -10949,8 +10949,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 				       I40E_AQ_EVENT_MEDIA_NA |
 				       I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
 	if (ret)
-		dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+		dev_info(&pf->pdev->dev, "set phy mask fail, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 
 	/* Rebuild the VSIs and VEBs that existed before reset.
@@ -11053,8 +11053,8 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 		msleep(75);
 		ret = i40e_aq_set_link_restart_an(&pf->hw, true, NULL);
 		if (ret)
-			dev_info(&pf->pdev->dev, "link restart failed, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+			dev_info(&pf->pdev->dev, "link restart failed, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 	}
@@ -11082,9 +11082,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 		ret = i40e_set_promiscuous(pf, pf->cur_promisc);
 		if (ret)
 			dev_warn(&pf->pdev->dev,
-				 "Failed to restore promiscuous setting: %s, err %s aq_err %s\n",
+				 "Failed to restore promiscuous setting: %s, err %pe aq_err %s\n",
 				 pf->cur_promisc ? "on" : "off",
-				 i40e_stat_str(&pf->hw, ret),
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 
 	i40e_reset_all_vfs(pf, true);
@@ -12218,8 +12218,8 @@ static int i40e_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
 				(struct i40e_aqc_get_set_rss_key_data *)seed);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "Cannot get RSS key, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "Cannot get RSS key, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			return ret;
@@ -12232,8 +12232,8 @@ static int i40e_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
 		ret = i40e_aq_get_rss_lut(hw, vsi->id, pf_lut, lut, lut_size);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "Cannot get RSS lut, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "Cannot get RSS lut, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			return ret;
@@ -12508,11 +12508,11 @@ int i40e_reconfig_rss_queues(struct i40e_pf *pf, int queue_count)
 * i40e_get_partition_bw_setting - Retrieve BW settings for this PF partition
 * @pf: board private structure
 **/
-i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf)
+int i40e_get_partition_bw_setting(struct i40e_pf *pf)
 {
-	i40e_status status;
 	bool min_valid, max_valid;
 	u32 max_bw, min_bw;
+	int status;
 
 	status = i40e_read_bw_from_alt_ram(&pf->hw, &max_bw, &min_bw,
 					   &min_valid, &max_valid);
@@ -12531,10 +12531,10 @@ i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf)
 * i40e_set_partition_bw_setting - Set BW settings for this PF partition
 * @pf: board private structure
 **/
-i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf)
+int i40e_set_partition_bw_setting(struct i40e_pf *pf)
 {
 	struct i40e_aqc_configure_partition_bw_data bw_data;
-	i40e_status status;
+	int status;
 
 	memset(&bw_data, 0, sizeof(bw_data));
 
@@ -12553,12 +12553,12 @@ i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf)
 * i40e_commit_partition_bw_setting - Commit BW settings for this PF partition
 * @pf: board private structure
 **/
-i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
+int i40e_commit_partition_bw_setting(struct i40e_pf *pf)
 {
 	/* Commit temporary BW setting to permanent NVM image */
 	enum i40e_admin_queue_err last_aq_status;
-	i40e_status ret;
 	u16 nvm_word;
+	int ret;
 
 	if (pf->hw.partition_id != 1) {
 		dev_info(&pf->pdev->dev,
@@ -12573,8 +12573,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
 	last_aq_status = pf->hw.aq.asq_last_status;
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Cannot acquire NVM for read access, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Cannot acquire NVM for read access, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, last_aq_status));
 		goto bw_commit_out;
 	}
@@ -12590,8 +12590,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
 	last_aq_status = pf->hw.aq.asq_last_status;
 	i40e_release_nvm(&pf->hw);
 	if (ret) {
-		dev_info(&pf->pdev->dev, "NVM read error, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+		dev_info(&pf->pdev->dev, "NVM read error, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, last_aq_status));
 		goto bw_commit_out;
 	}
@@ -12604,8 +12604,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
 	last_aq_status = pf->hw.aq.asq_last_status;
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "Cannot acquire NVM for write access, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "Cannot acquire NVM for write access, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, last_aq_status));
 		goto bw_commit_out;
 	}
@@ -12624,8 +12624,8 @@ i40e_status i40e_commit_partition_bw_setting(struct i40e_pf *pf)
 	i40e_release_nvm(&pf->hw);
 	if (ret)
 		dev_info(&pf->pdev->dev,
-			 "BW settings NOT SAVED, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "BW settings NOT SAVED, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, last_aq_status));
 bw_commit_out:
 
@@ -12646,7 +12646,7 @@ static bool i40e_is_total_port_shutdown_enabled(struct i40e_pf *pf)
 #define I40E_LINK_BEHAVIOR_WORD_LENGTH		0x1
 #define I40E_LINK_BEHAVIOR_OS_FORCED_ENABLED	BIT(0)
 #define I40E_LINK_BEHAVIOR_PORT_BIT_LENGTH	4
-	i40e_status read_status = I40E_SUCCESS;
+	int read_status = I40E_SUCCESS;
 	u16 sr_emp_sr_settings_ptr = 0;
 	u16 features_enable = 0;
 	u16 link_behavior = 0;
@@ -12679,8 +12679,8 @@ static bool i40e_is_total_port_shutdown_enabled(struct i40e_pf *pf)
 
 err_nvm:
 	dev_warn(&pf->pdev->dev,
-		 "total-port-shutdown feature is off due to read nvm error: %s\n",
-		 i40e_stat_str(&pf->hw, read_status));
+		 "total-port-shutdown feature is off due to read nvm error: %pe\n",
+		 ERR_PTR(read_status));
 	return ret;
 }
@@ -13025,7 +13025,7 @@ static int i40e_udp_tunnel_set_port(struct net_device *netdev,
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_hw *hw = &np->vsi->back->hw;
 	u8 type, filter_index;
-	i40e_status ret;
+	int ret;
 
 	type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? I40E_AQC_TUNNEL_TYPE_VXLAN :
 						   I40E_AQC_TUNNEL_TYPE_NGE;
@@ -13033,8 +13033,8 @@ static int i40e_udp_tunnel_set_port(struct net_device *netdev,
 	ret = i40e_aq_add_udp_tunnel(hw, ntohs(ti->port), type, &filter_index,
 				     NULL);
 	if (ret) {
-		netdev_info(netdev, "add UDP port failed, err %s aq_err %s\n",
-			    i40e_stat_str(hw, ret),
+		netdev_info(netdev, "add UDP port failed, err %pe aq_err %s\n",
+			    ERR_PTR(ret),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		return -EIO;
 	}
@@ -13049,12 +13049,12 @@ static int i40e_udp_tunnel_unset_port(struct net_device *netdev,
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_hw *hw = &np->vsi->back->hw;
-	i40e_status ret;
+	int ret;
 
 	ret = i40e_aq_del_udp_tunnel(hw, ti->hw_priv, NULL);
 	if (ret) {
-		netdev_info(netdev, "delete UDP port failed, err %s aq_err %s\n",
-			    i40e_stat_str(hw, ret),
+		netdev_info(netdev, "delete UDP port failed, err %pe aq_err %s\n",
+			    ERR_PTR(ret),
 			    i40e_aq_str(hw, hw->aq.asq_last_status));
 		return -EIO;
 	}
@@ -13341,9 +13341,11 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
 	old_prog = xchg(&vsi->xdp_prog, prog);
 
 	if (need_reset) {
-		if (!prog)
+		if (!prog) {
+			xdp_features_clear_redirect_target(vsi->netdev);
 			/* Wait until ndo_xsk_wakeup completes. */
 			synchronize_rcu();
+		}
 		i40e_reset_and_rebuild(pf, true, true);
 	}
@@ -13364,11 +13366,13 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
 	/* Kick start the NAPI context if there is an AF_XDP socket open
 	 * on that queue id. This so that receiving will start.
 	 */
-	if (need_reset && prog)
+	if (need_reset && prog) {
 		for (i = 0; i < vsi->num_queue_pairs; i++)
 			if (vsi->xdp_rings[i]->xsk_pool)
 				(void)i40e_xsk_wakeup(vsi->netdev, i,
 						      XDP_WAKEUP_RX);
+		xdp_features_set_redirect_target(vsi->netdev, true);
+	}
 
 	return 0;
 }
@@ -13803,6 +13807,10 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
 		spin_lock_bh(&vsi->mac_filter_hash_lock);
 		i40e_add_mac_filter(vsi, mac_addr);
 		spin_unlock_bh(&vsi->mac_filter_hash_lock);
+
+		netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
+				       NETDEV_XDP_ACT_REDIRECT |
+				       NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	} else {
 		/* Relate the VSI_VMDQ name to the VSI_MAIN name. Note that we
 		 * are still limited by IFNAMSIZ, but we're adding 'v%d\0' to
@@ -13943,8 +13951,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 		ctxt.flags = I40E_AQ_VSI_TYPE_PF;
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "couldn't get PF vsi config, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "couldn't get PF vsi config, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			return -ENOENT;
@@ -13973,8 +13981,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 			ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
 			if (ret) {
 				dev_info(&pf->pdev->dev,
-					 "update vsi failed, err %s aq_err %s\n",
-					 i40e_stat_str(&pf->hw, ret),
+					 "update vsi failed, err %d aq_err %s\n",
+					 ret,
 					 i40e_aq_str(&pf->hw,
 						     pf->hw.aq.asq_last_status));
 				ret = -ENOENT;
@@ -13993,8 +14001,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 			ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
 			if (ret) {
 				dev_info(&pf->pdev->dev,
-					 "update vsi failed, err %s aq_err %s\n",
-					 i40e_stat_str(&pf->hw, ret),
+					 "update vsi failed, err %pe aq_err %s\n",
+					 ERR_PTR(ret),
 					 i40e_aq_str(&pf->hw,
 						     pf->hw.aq.asq_last_status));
 				ret = -ENOENT;
@@ -14016,9 +14024,9 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 			 * message and continue
 			 */
 			dev_info(&pf->pdev->dev,
-				 "failed to configure TCs for main VSI tc_map 0x%08x, err %s aq_err %s\n",
+				 "failed to configure TCs for main VSI tc_map 0x%08x, err %pe aq_err %s\n",
 				 enabled_tc,
-				 i40e_stat_str(&pf->hw, ret),
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 		}
@@ -14112,8 +14120,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 		ret = i40e_aq_add_vsi(hw, &ctxt, NULL);
 		if (ret) {
 			dev_info(&vsi->back->pdev->dev,
-				 "add vsi failed, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "add vsi failed, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			ret = -ENOENT;
@@ -14144,8 +14152,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
 	ret = i40e_vsi_get_bw_info(vsi);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get vsi bw info, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get vsi bw info, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		/* VSI is already added so not tearing that up */
 		ret = 0;
@@ -14591,8 +14599,8 @@ static int i40e_veb_get_bw_info(struct i40e_veb *veb)
 						  &bw_data, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "query veb bw config failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "query veb bw config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, hw->aq.asq_last_status));
 		goto out;
 	}
@@ -14601,8 +14609,8 @@ static int i40e_veb_get_bw_info(struct i40e_veb *veb)
 						   &ets_data, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "query veb bw ets config failed, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "query veb bw ets config failed, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, hw->aq.asq_last_status));
 		goto out;
 	}
@@ -14798,8 +14806,8 @@ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi)
 	/* get a VEB from the hardware */
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't add VEB, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't add VEB, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return -EPERM;
 	}
@@ -14809,16 +14817,16 @@ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi)
 				   &veb->stats_idx, NULL, NULL, NULL);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get VEB statistics idx, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get VEB statistics idx, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return -EPERM;
 	}
 	ret = i40e_veb_get_bw_info(veb);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't get VEB bw info, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't get VEB bw info, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		i40e_aq_delete_element(&pf->hw, veb->seid, NULL);
 		return -ENOENT;
@@ -15028,8 +15036,8 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
 						&next_seid, NULL);
 		if (ret) {
 			dev_info(&pf->pdev->dev,
-				 "get switch config failed err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "get switch config failed err %d aq_err %s\n",
+				 ret,
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			kfree(aq_buf);
@@ -15074,8 +15082,8 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 	ret = i40e_fetch_switch_configuration(pf, false);
 	if (ret) {
 		dev_info(&pf->pdev->dev,
-			 "couldn't fetch switch config, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, ret),
+			 "couldn't fetch switch config, err %pe aq_err %s\n",
+			 ERR_PTR(ret),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 		return ret;
 	}
@@ -15101,8 +15109,8 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 						NULL);
 		if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
 			dev_info(&pf->pdev->dev,
-				 "couldn't set switch config bits, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, ret),
+				 "couldn't set switch config bits, err %pe aq_err %s\n",
+				 ERR_PTR(ret),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 			/* not a fatal problem, just keep going */
@@ -15439,13 +15447,13 @@ static bool i40e_check_recovery_mode(struct i40e_pf *pf)
 *
 * Return 0 on success, negative on failure.
 **/
-static i40e_status i40e_pf_loop_reset(struct i40e_pf *pf)
+static int i40e_pf_loop_reset(struct i40e_pf *pf)
 {
 	/* wait max 10 seconds for PF reset to succeed */
 	const unsigned long time_end = jiffies + 10 * HZ;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status ret;
+	int ret;
 
 	ret = i40e_pf_reset(hw);
 	while (ret != I40E_SUCCESS && time_before(jiffies, time_end)) {
@@ -15491,9 +15498,9 @@ static bool i40e_check_fw_empr(struct i40e_pf *pf)
 * Return 0 if NIC is healthy or negative value when there are issues
 * with resets
 **/
-static i40e_status i40e_handle_resets(struct i40e_pf *pf)
+static int i40e_handle_resets(struct i40e_pf *pf)
 {
-	const i40e_status pfr = i40e_pf_loop_reset(pf);
+	const int pfr = i40e_pf_loop_reset(pf);
 	const bool is_empr = i40e_check_fw_empr(pf);
 
 	if (is_empr || pfr != I40E_SUCCESS)
@@ -15591,7 +15598,6 @@ err_switch_setup:
 	timer_shutdown_sync(&pf->service_timer);
 	i40e_shutdown_adminq(hw);
 	iounmap(hw->hw_addr);
-	pci_disable_pcie_error_reporting(pf->pdev);
 	pci_release_mem_regions(pf->pdev);
 	pci_disable_device(pf->pdev);
 	kfree(pf);
@@ -15631,13 +15637,15 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	struct i40e_aq_get_phy_abilities_resp abilities;
 #ifdef CONFIG_I40E_DCB
 	enum i40e_get_fw_lldp_status_resp lldp_status;
-	i40e_status status;
 #endif /* CONFIG_I40E_DCB */
 	struct i40e_pf *pf;
 	struct i40e_hw *hw;
 	static u16 pfs_found;
 	u16 wol_nvm_bits;
 	u16 link_status;
+#ifdef CONFIG_I40E_DCB
+	int status;
+#endif /* CONFIG_I40E_DCB */
 	int err;
 	u32 val;
 	u32 i;
@@ -15662,7 +15670,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_pci_reg;
 	}
 
-	pci_enable_pcie_error_reporting(pdev);
 	pci_set_master(pdev);
 
 	/* Now that we have a PCI connection, we need to do the
@@ -16006,8 +16013,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 			 I40E_AQ_EVENT_MEDIA_NA |
 			 I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
 	if (err)
-		dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
-			 i40e_stat_str(&pf->hw, err),
+		dev_info(&pf->pdev->dev, "set phy mask fail, err %pe aq_err %s\n",
+			 ERR_PTR(err),
 			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 
 	/* Reconfigure hardware for allowing smaller MSS in the case
@@ -16025,8 +16032,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		msleep(75);
 		err = i40e_aq_set_link_restart_an(&pf->hw, true, NULL);
 		if (err)
-			dev_info(&pf->pdev->dev, "link restart failed, err %s aq_err %s\n",
-				 i40e_stat_str(&pf->hw, err),
+			dev_info(&pf->pdev->dev, "link restart failed, err %pe aq_err %s\n",
+				 ERR_PTR(err),
 				 i40e_aq_str(&pf->hw,
 					     pf->hw.aq.asq_last_status));
 	}
@@ -16158,8 +16165,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* get the requested speeds from the fw */
 	err = i40e_aq_get_phy_capabilities(hw, false, false, &abilities, NULL);
 	if (err)
-		dev_dbg(&pf->pdev->dev, "get requested speeds ret =  %s last_status =  %s\n",
-			i40e_stat_str(&pf->hw, err),
+		dev_dbg(&pf->pdev->dev, "get requested speeds ret =  %pe last_status =  %s\n",
+			ERR_PTR(err),
 			i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
 	pf->hw.phy.link_info.requested_speeds = abilities.link_speed;
 
@@ -16169,8 +16176,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* get the supported phy types from the fw */
 	err = i40e_aq_get_phy_capabilities(hw, false, true, &abilities, NULL);
 	if (err)
-		dev_dbg(&pf->pdev->dev, "get supported phy types ret =  %s last_status =  %s\n",
-			i40e_stat_str(&pf->hw, err),
dev_dbg(&pf->pdev->dev, "get supported phy types ret = %pe last_status = %s\n", + ERR_PTR(err), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); /* make sure the MFS hasn't been set lower than the default */ @@ -16220,7 +16227,6 @@ err_pf_reset: err_ioremap: kfree(pf); err_pf_alloc: - pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -16241,7 +16247,7 @@ static void i40e_remove(struct pci_dev *pdev) { struct i40e_pf *pf = pci_get_drvdata(pdev); struct i40e_hw *hw = &pf->hw; - i40e_status ret_code; + int ret_code; int i; i40e_dbg_pf_exit(pf); @@ -16368,7 +16374,6 @@ unmap: kfree(pf); pci_release_mem_regions(pdev); - pci_disable_pcie_error_reporting(pdev); pci_disable_device(pdev); } @@ -16489,9 +16494,9 @@ static void i40e_pci_error_resume(struct pci_dev *pdev) static void i40e_enable_mc_magic_wake(struct i40e_pf *pf) { struct i40e_hw *hw = &pf->hw; - i40e_status ret; u8 mac_addr[6]; u16 flags = 0; + int ret; /* Get current MAC address in case it's an LAA */ if (pf->vsi[pf->lan_vsi] && pf->vsi[pf->lan_vsi]->netdev) { diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c index 3a38bf8bcde7..9da0c87f0328 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c +++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c @@ -13,10 +13,10 @@ * in this file) as an equivalent of the FLASH part mapped into the SR. * We are accessing FLASH always thru the Shadow RAM. **/ -i40e_status i40e_init_nvm(struct i40e_hw *hw) +int i40e_init_nvm(struct i40e_hw *hw) { struct i40e_nvm_info *nvm = &hw->nvm; - i40e_status ret_code = 0; + int ret_code = 0; u32 fla, gens; u8 sr_size; @@ -52,12 +52,12 @@ i40e_status i40e_init_nvm(struct i40e_hw *hw) * This function will request NVM ownership for reading * via the proper Admin Command. **/ -i40e_status i40e_acquire_nvm(struct i40e_hw *hw, - enum i40e_aq_resource_access_type access) +int i40e_acquire_nvm(struct i40e_hw *hw, + enum i40e_aq_resource_access_type access) { - i40e_status ret_code = 0; u64 gtime, timeout; u64 time_left = 0; + int ret_code = 0; if (hw->nvm.blank_nvm_mode) goto i40e_i40e_acquire_nvm_exit; @@ -111,7 +111,7 @@ i40e_i40e_acquire_nvm_exit: **/ void i40e_release_nvm(struct i40e_hw *hw) { - i40e_status ret_code = I40E_SUCCESS; + int ret_code = I40E_SUCCESS; u32 total_delay = 0; if (hw->nvm.blank_nvm_mode) @@ -138,9 +138,9 @@ void i40e_release_nvm(struct i40e_hw *hw) * * Polls the SRCTL Shadow RAM register done bit. **/ -static i40e_status i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw) +static int i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw) { - i40e_status ret_code = I40E_ERR_TIMEOUT; + int ret_code = I40E_ERR_TIMEOUT; u32 srctl, wait_cnt; /* Poll the I40E_GLNVM_SRCTL until the done bit is set */ @@ -165,10 +165,10 @@ static i40e_status i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw) * * Reads one 16 bit word from the Shadow RAM using the GLNVM_SRCTL register. **/ -static i40e_status i40e_read_nvm_word_srctl(struct i40e_hw *hw, u16 offset, - u16 *data) +static int i40e_read_nvm_word_srctl(struct i40e_hw *hw, u16 offset, + u16 *data) { - i40e_status ret_code = I40E_ERR_TIMEOUT; + int ret_code = I40E_ERR_TIMEOUT; u32 sr_reg; if (offset >= hw->nvm.sr_size) { @@ -216,13 +216,13 @@ read_nvm_exit: * * Writes a 16 bit words buffer to the Shadow RAM using the admin command. 
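For orientation, the SRCTL path these retyped helpers sit on is a straightforward poll loop; a condensed sketch of what i40e_poll_sr_srctl_done_bit() boils down to after the conversion, assuming the register and attempt-count constants defined elsewhere in the driver headers:

	int ret_code = I40E_ERR_TIMEOUT;
	u32 srctl, wait_cnt;

	/* poll I40E_GLNVM_SRCTL until hardware reports the DONE bit */
	for (wait_cnt = 0; wait_cnt < I40E_SRRD_SRCTL_ATTEMPTS; wait_cnt++) {
		srctl = rd32(hw, I40E_GLNVM_SRCTL);
		if (srctl & I40E_GLNVM_SRCTL_DONE_MASK) {
			ret_code = I40E_SUCCESS;
			break;
		}
		udelay(5);
	}
	return ret_code;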
**/ -static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw, - u8 module_pointer, u32 offset, - u16 words, void *data, - bool last_command) +static int i40e_read_nvm_aq(struct i40e_hw *hw, + u8 module_pointer, u32 offset, + u16 words, void *data, + bool last_command) { - i40e_status ret_code = I40E_ERR_NVM; struct i40e_asq_cmd_details cmd_details; + int ret_code = I40E_ERR_NVM; memset(&cmd_details, 0, sizeof(cmd_details)); cmd_details.wb_desc = &hw->nvm_wb_desc; @@ -264,10 +264,10 @@ static i40e_status i40e_read_nvm_aq(struct i40e_hw *hw, * * Reads one 16 bit word from the Shadow RAM using the AdminQ **/ -static i40e_status i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset, - u16 *data) +static int i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset, + u16 *data) { - i40e_status ret_code = I40E_ERR_TIMEOUT; + int ret_code = I40E_ERR_TIMEOUT; ret_code = i40e_read_nvm_aq(hw, 0x0, offset, 1, data, true); *data = le16_to_cpu(*(__le16 *)data); @@ -286,8 +286,8 @@ static i40e_status i40e_read_nvm_word_aq(struct i40e_hw *hw, u16 offset, * Do not use this function except in cases where the nvm lock is already * taken via i40e_acquire_nvm(). **/ -static i40e_status __i40e_read_nvm_word(struct i40e_hw *hw, - u16 offset, u16 *data) +static int __i40e_read_nvm_word(struct i40e_hw *hw, + u16 offset, u16 *data) { if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE) return i40e_read_nvm_word_aq(hw, offset, data); @@ -303,10 +303,10 @@ static i40e_status __i40e_read_nvm_word(struct i40e_hw *hw, * * Reads one 16 bit word from the Shadow RAM. **/ -i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, - u16 *data) +int i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, + u16 *data) { - i40e_status ret_code = 0; + int ret_code = 0; if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK) ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); @@ -330,17 +330,17 @@ i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, * @words_data_size: Words to read from NVM * @data_ptr: Pointer to memory location where resulting buffer will be stored **/ -enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw, - u8 module_ptr, - u16 module_offset, - u16 data_offset, - u16 words_data_size, - u16 *data_ptr) +int i40e_read_nvm_module_data(struct i40e_hw *hw, + u8 module_ptr, + u16 module_offset, + u16 data_offset, + u16 words_data_size, + u16 *data_ptr) { - i40e_status status; u16 specific_ptr = 0; u16 ptr_value = 0; u32 offset = 0; + int status; if (module_ptr != 0) { status = i40e_read_nvm_word(hw, module_ptr, &ptr_value); @@ -406,10 +406,10 @@ enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw, * method. The buffer read is preceded by the NVM ownership take * and followed by the release. **/ -static i40e_status i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset, - u16 *words, u16 *data) +static int i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data) { - i40e_status ret_code = 0; + int ret_code = 0; u16 index, word; /* Loop thru the selected region */ @@ -437,13 +437,13 @@ static i40e_status i40e_read_nvm_buffer_srctl(struct i40e_hw *hw, u16 offset, * method. The buffer read is preceded by the NVM ownership take * and followed by the release. 
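The NVM ownership protocol described in these comments is unchanged by the retyping; only the status type is now int. A minimal sketch of the locked single-word read, following the i40e_read_nvm_word() hunk above:

	u16 data;
	int ret_code = 0;

	if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK)
		ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
	if (ret_code)
		return ret_code;

	/* __i40e_read_nvm_word() picks AdminQ or SRCTL access by hw->flags */
	ret_code = __i40e_read_nvm_word(hw, offset, &data);

	if (hw->flags & I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK)
		i40e_release_nvm(hw);

	return ret_code;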
**/ -static i40e_status i40e_read_nvm_buffer_aq(struct i40e_hw *hw, u16 offset, - u16 *words, u16 *data) +static int i40e_read_nvm_buffer_aq(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data) { - i40e_status ret_code; - u16 read_size; bool last_cmd = false; u16 words_read = 0; + u16 read_size; + int ret_code; u16 i = 0; do { @@ -493,9 +493,9 @@ read_nvm_buffer_aq_exit: * Reads 16 bit words (data buffer) from the SR using the i40e_read_nvm_srrd() * method. **/ -static i40e_status __i40e_read_nvm_buffer(struct i40e_hw *hw, - u16 offset, u16 *words, - u16 *data) +static int __i40e_read_nvm_buffer(struct i40e_hw *hw, + u16 offset, u16 *words, + u16 *data) { if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE) return i40e_read_nvm_buffer_aq(hw, offset, words, data); @@ -514,10 +514,10 @@ static i40e_status __i40e_read_nvm_buffer(struct i40e_hw *hw, * method. The buffer read is preceded by the NVM ownership take * and followed by the release. **/ -i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, - u16 *words, u16 *data) +int i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data) { - i40e_status ret_code = 0; + int ret_code = 0; if (hw->flags & I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE) { ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); @@ -544,12 +544,12 @@ i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, * * Writes a 16 bit words buffer to the Shadow RAM using the admin command. **/ -static i40e_status i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 words, void *data, - bool last_command) +static int i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 words, void *data, + bool last_command) { - i40e_status ret_code = I40E_ERR_NVM; struct i40e_asq_cmd_details cmd_details; + int ret_code = I40E_ERR_NVM; memset(&cmd_details, 0, sizeof(cmd_details)); cmd_details.wb_desc = &hw->nvm_wb_desc; @@ -594,14 +594,14 @@ static i40e_status i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer, * is customer specific and unknown. Therefore, this function skips all maximum * possible size of VPD (1kB). **/ -static i40e_status i40e_calc_nvm_checksum(struct i40e_hw *hw, - u16 *checksum) +static int i40e_calc_nvm_checksum(struct i40e_hw *hw, + u16 *checksum) { - i40e_status ret_code; struct i40e_virt_mem vmem; u16 pcie_alt_module = 0; u16 checksum_local = 0; u16 vpd_module = 0; + int ret_code; u16 *data; u16 i = 0; @@ -675,11 +675,11 @@ i40e_calc_nvm_checksum_exit: * on ARQ completion event reception by caller. * This function will commit SR to NVM. **/ -i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw) +int i40e_update_nvm_checksum(struct i40e_hw *hw) { - i40e_status ret_code; - u16 checksum; __le16 le_sum; + int ret_code; + u16 checksum; ret_code = i40e_calc_nvm_checksum(hw, &checksum); if (!ret_code) { @@ -699,12 +699,12 @@ i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw) * Performs checksum calculation and validates the NVM SW checksum. If the * caller does not need checksum, the value can be NULL. **/ -i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw, - u16 *checksum) +int i40e_validate_nvm_checksum(struct i40e_hw *hw, + u16 *checksum) { - i40e_status ret_code = 0; - u16 checksum_sr = 0; u16 checksum_local = 0; + u16 checksum_sr = 0; + int ret_code = 0; /* We must acquire the NVM lock in order to correctly synchronize the * NVM accesses across multiple PFs. 
Without doing so it is possible @@ -733,36 +733,36 @@ i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw, return ret_code; } -static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_state_writing(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *errno); +static int i40e_nvmupd_state_init(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_state_reading(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_state_writing(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *errno); static enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, struct i40e_nvm_access *cmd, int *perrno); -static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *perrno); -static i40e_status i40e_nvmupd_nvm_write(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); -static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno); +static int i40e_nvmupd_nvm_erase(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *perrno); +static int i40e_nvmupd_nvm_write(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_nvm_read(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_exec_aq(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_get_aq_result(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); +static int i40e_nvmupd_get_aq_event(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno); static inline u8 i40e_nvmupd_get_module(u32 val) { return (u8)(val & I40E_NVM_MOD_PNT_MASK); @@ -807,12 +807,12 @@ static const char * const i40e_nvm_update_state_str[] = { * * Dispatches command depending on what update state is current **/ -i40e_status i40e_nvmupd_command(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +int i40e_nvmupd_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { - i40e_status status; enum i40e_nvmupd_cmd upd_cmd; + int status; /* assume success */ *perrno = 0; @@ -923,12 +923,12 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw, * Process legitimate commands of the Init state and conditionally set next * state. Reject all other commands. 
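All of the forward declarations above feed one dispatcher, i40e_nvmupd_command(); each state handler now returns int, and the dispatch itself is a switch over the NVM update state machine. A simplified sketch, assuming the I40E_NVMUPD_STATE_* values from the driver's type definitions (the real function also handles the *_WAIT states):

	int status;

	switch (hw->nvmupd_state) {
	case I40E_NVMUPD_STATE_INIT:
		status = i40e_nvmupd_state_init(hw, cmd, bytes, perrno);
		break;
	case I40E_NVMUPD_STATE_READING:
		status = i40e_nvmupd_state_reading(hw, cmd, bytes, perrno);
		break;
	case I40E_NVMUPD_STATE_WRITING:
		status = i40e_nvmupd_state_writing(hw, cmd, bytes, perrno);
		break;
	default:
		/* invalid state, should never happen */
		status = I40E_NOT_SUPPORTED;
		*perrno = -ESRCH;
		break;
	}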
**/ -static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_state_init(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { - i40e_status status = 0; enum i40e_nvmupd_cmd upd_cmd; + int status = 0; upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno); @@ -1062,12 +1062,12 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw, * NVM ownership is already held. Process legitimate commands and set any * change in state; reject all other commands. **/ -static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_state_reading(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { - i40e_status status = 0; enum i40e_nvmupd_cmd upd_cmd; + int status = 0; upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno); @@ -1104,13 +1104,13 @@ static i40e_status i40e_nvmupd_state_reading(struct i40e_hw *hw, * NVM ownership is already held. Process legitimate commands and set any * change in state; reject all other commands **/ -static i40e_status i40e_nvmupd_state_writing(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_state_writing(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { - i40e_status status = 0; enum i40e_nvmupd_cmd upd_cmd; bool retry_attempt = false; + int status = 0; upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno); @@ -1187,8 +1187,8 @@ retry: */ if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) && !retry_attempt) { - i40e_status old_status = status; u32 old_asq_status = hw->aq.asq_last_status; + int old_status = status; u32 gtime; gtime = rd32(hw, I40E_GLVFGEN_TIMER); @@ -1370,17 +1370,17 @@ static enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, * * cmd structure contains identifiers and data buffer **/ -static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_exec_aq(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { struct i40e_asq_cmd_details cmd_details; - i40e_status status; struct i40e_aq_desc *aq_desc; u32 buff_size = 0; u8 *buff = NULL; u32 aq_desc_len; u32 aq_data_len; + int status; i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__); if (cmd->offset == 0xffff) @@ -1429,8 +1429,8 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw, buff_size, &cmd_details); if (status) { i40e_debug(hw, I40E_DEBUG_NVM, - "i40e_nvmupd_exec_aq err %s aq_err %s\n", - i40e_stat_str(hw, status), + "%s err %pe aq_err %s\n", + __func__, ERR_PTR(status), i40e_aq_str(hw, hw->aq.asq_last_status)); *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status); return status; @@ -1454,9 +1454,9 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw, * * cmd structure contains identifiers and data buffer **/ -static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_get_aq_result(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { u32 aq_total_len; u32 aq_desc_len; @@ -1523,9 +1523,9 @@ static i40e_status i40e_nvmupd_get_aq_result(struct i40e_hw *hw, * * cmd structure contains identifiers and data buffer **/ -static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int 
*perrno) +static int i40e_nvmupd_get_aq_event(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { u32 aq_total_len; u32 aq_desc_len; @@ -1557,13 +1557,13 @@ static i40e_status i40e_nvmupd_get_aq_event(struct i40e_hw *hw, * * cmd structure contains identifiers and data buffer **/ -static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_nvm_read(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { struct i40e_asq_cmd_details cmd_details; - i40e_status status; u8 module, transaction; + int status; bool last; transaction = i40e_nvmupd_get_transaction(cmd->config); @@ -1596,13 +1596,13 @@ static i40e_status i40e_nvmupd_nvm_read(struct i40e_hw *hw, * * module, offset, data_size and data are in cmd structure **/ -static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *perrno) +static int i40e_nvmupd_nvm_erase(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *perrno) { - i40e_status status = 0; struct i40e_asq_cmd_details cmd_details; u8 module, transaction; + int status = 0; bool last; transaction = i40e_nvmupd_get_transaction(cmd->config); @@ -1636,14 +1636,14 @@ static i40e_status i40e_nvmupd_nvm_erase(struct i40e_hw *hw, * * module, offset, data_size and data are in cmd structure **/ -static i40e_status i40e_nvmupd_nvm_write(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *perrno) +static int i40e_nvmupd_nvm_write(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *perrno) { - i40e_status status = 0; struct i40e_asq_cmd_details cmd_details; u8 module, transaction; u8 preservation_flags; + int status = 0; bool last; transaction = i40e_nvmupd_get_transaction(cmd->config); diff --git a/drivers/net/ethernet/intel/i40e/i40e_osdep.h b/drivers/net/ethernet/intel/i40e/i40e_osdep.h index 2f6815b2f8df..2bd4de03dafa 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_osdep.h +++ b/drivers/net/ethernet/intel/i40e/i40e_osdep.h @@ -56,5 +56,4 @@ do { \ (h)->bus.func, ##__VA_ARGS__); \ } while (0) -typedef enum i40e_status_code i40e_status; #endif /* _I40E_OSDEP_H_ */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h index 9a71121420c3..fe845987d99a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h @@ -16,29 +16,29 @@ */ /* adminq functions */ -i40e_status i40e_init_adminq(struct i40e_hw *hw); +int i40e_init_adminq(struct i40e_hw *hw); void i40e_shutdown_adminq(struct i40e_hw *hw); void i40e_adminq_init_ring_data(struct i40e_hw *hw); -i40e_status i40e_clean_arq_element(struct i40e_hw *hw, - struct i40e_arq_event_info *e, - u16 *events_pending); -i40e_status +int i40e_clean_arq_element(struct i40e_hw *hw, + struct i40e_arq_event_info *e, + u16 *events_pending); +int i40e_asq_send_command(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, struct i40e_asq_cmd_details *cmd_details); -i40e_status +int i40e_asq_send_command_v2(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, struct i40e_asq_cmd_details *cmd_details, enum i40e_admin_queue_err *aq_status); -i40e_status +int i40e_asq_send_command_atomic(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, struct i40e_asq_cmd_details *cmd_details, bool is_atomic_context); -i40e_status +int 
i40e_asq_send_command_atomic_v2(struct i40e_hw *hw, struct i40e_aq_desc *desc, void *buff, /* can be NULL */ @@ -53,327 +53,332 @@ void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void i40e_idle_aq(struct i40e_hw *hw); bool i40e_check_asq_alive(struct i40e_hw *hw); -i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading); +int i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading); const char *i40e_aq_str(struct i40e_hw *hw, enum i40e_admin_queue_err aq_err); -const char *i40e_stat_str(struct i40e_hw *hw, i40e_status stat_err); -i40e_status i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 seid, - bool pf_lut, u8 *lut, u16 lut_size); -i40e_status i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 seid, - bool pf_lut, u8 *lut, u16 lut_size); -i40e_status i40e_aq_get_rss_key(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_get_set_rss_key_data *key); -i40e_status i40e_aq_set_rss_key(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_get_set_rss_key_data *key); +int i40e_aq_get_rss_lut(struct i40e_hw *hw, u16 seid, + bool pf_lut, u8 *lut, u16 lut_size); +int i40e_aq_set_rss_lut(struct i40e_hw *hw, u16 seid, + bool pf_lut, u8 *lut, u16 lut_size); +int i40e_aq_get_rss_key(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_get_set_rss_key_data *key); +int i40e_aq_set_rss_key(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_get_set_rss_key_data *key); u32 i40e_led_get(struct i40e_hw *hw); void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink); -i40e_status i40e_led_set_phy(struct i40e_hw *hw, bool on, - u16 led_addr, u32 mode); -i40e_status i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr, - u16 *val); -i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw, - u32 time, u32 interval); +int i40e_led_set_phy(struct i40e_hw *hw, bool on, + u16 led_addr, u32 mode); +int i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr, + u16 *val); +int i40e_blink_phy_link_led(struct i40e_hw *hw, + u32 time, u32 interval); /* admin send queue commands */ -i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw, - u16 *fw_major_version, u16 *fw_minor_version, - u32 *fw_build, - u16 *api_major_version, u16 *api_minor_version, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw, - u32 reg_addr, u64 reg_val, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw, +int i40e_aq_get_firmware_version(struct i40e_hw *hw, + u16 *fw_major_version, u16 *fw_minor_version, + u32 *fw_build, + u16 *api_major_version, u16 *api_minor_version, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_debug_write_register(struct i40e_hw *hw, + u32 reg_addr, u64 reg_val, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_debug_read_register(struct i40e_hw *hw, u32 reg_addr, u64 *reg_val, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw, - bool qualified_modules, bool report_init, - struct i40e_aq_get_phy_abilities_resp *abilities, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, - struct i40e_aq_set_phy_config *config, - struct 
i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, - bool atomic_reset); -i40e_status i40e_aq_set_mac_loopback(struct i40e_hw *hw, - bool ena_lpbk, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_clear_pxe_mode(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw, - bool enable_link, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_link_info(struct i40e_hw *hw, - bool enable_lse, struct i40e_link_status *link, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_local_advt_reg(struct i40e_hw *hw, - u64 advt_reg, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw, +int i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_phy_capabilities(struct i40e_hw *hw, + bool qualified_modules, bool report_init, + struct i40e_aq_get_phy_abilities_resp *abilities, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_phy_config(struct i40e_hw *hw, + struct i40e_aq_set_phy_config *config, + struct i40e_asq_cmd_details *cmd_details); +int i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, + bool atomic_reset); +int i40e_aq_set_mac_loopback(struct i40e_hw *hw, + bool ena_lpbk, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_clear_pxe_mode(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_link_restart_an(struct i40e_hw *hw, + bool enable_link, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_link_info(struct i40e_hw *hw, + bool enable_lse, struct i40e_link_status *link, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_local_advt_reg(struct i40e_hw *hw, + u64 advt_reg, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_send_driver_version(struct i40e_hw *hw, struct i40e_driver_version *dv, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_add_vsi(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, - u16 vsi_id, bool set_filter, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, - u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details, - bool rx_only_promisc); -i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, - u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, - u16 vid, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, - u16 vid, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw, - u16 seid, bool enable, u16 vid, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw, - u16 
seid, bool enable, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, - u16 downlink_seid, u8 enabled_tc, - bool default_port, u16 *pveb_seid, - bool enable_stats, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_veb_parameters(struct i40e_hw *hw, - u16 veb_seid, u16 *switch_id, bool *floating, - u16 *statistic_index, u16 *vebs_used, - u16 *vebs_free, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id, +int i40e_aq_add_vsi(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, + u16 vsi_id, bool set_filter, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, u16 vsi_id, bool set, + struct i40e_asq_cmd_details *cmd_details, + bool rx_only_promisc); +int i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, u16 vsi_id, bool set, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, + u16 vid, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, + u16 vid, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw, + u16 seid, bool enable, u16 vid, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw, + u16 seid, bool enable, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_update_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, + u16 downlink_seid, u8 enabled_tc, + bool default_port, u16 *pveb_seid, + bool enable_stats, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_veb_parameters(struct i40e_hw *hw, + u16 veb_seid, u16 *switch_id, bool *floating, + u16 *statistic_index, u16 *vebs_used, + u16 *vebs_free, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id, struct i40e_aqc_add_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details); -i40e_status +int i40e_aq_add_macvlan_v2(struct i40e_hw *hw, u16 seid, struct i40e_aqc_add_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details, enum i40e_admin_queue_err *aq_status); -i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id, - struct i40e_aqc_remove_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details); -i40e_status +int i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id, + struct i40e_aqc_remove_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_remove_macvlan_v2(struct i40e_hw *hw, u16 seid, struct i40e_aqc_remove_macvlan_element_data *mv_list, u16 count, struct i40e_asq_cmd_details *cmd_details, enum i40e_admin_queue_err *aq_status); -i40e_status i40e_aq_add_mirrorrule(struct 
i40e_hw *hw, u16 sw_seid, - u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list, - struct i40e_asq_cmd_details *cmd_details, - u16 *rule_id, u16 *rules_used, u16 *rules_free); -i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid, - u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list, - struct i40e_asq_cmd_details *cmd_details, - u16 *rules_used, u16 *rules_free); +int i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid, + u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list, + struct i40e_asq_cmd_details *cmd_details, + u16 *rule_id, u16 *rules_used, u16 *rules_free); +int i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid, + u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list, + struct i40e_asq_cmd_details *cmd_details, + u16 *rules_used, u16 *rules_free); -i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, - u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw, - struct i40e_aqc_get_switch_config_resp *buf, - u16 buf_size, u16 *start_seid, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw, - u16 flags, - u16 valid_flags, u8 mode, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_request_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - enum i40e_aq_resource_access_type access, - u8 sdp_number, u64 *timeout, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_release_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - u8 sdp_number, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, bool last_command, +int i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, + u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_switch_config(struct i40e_hw *hw, + struct i40e_aqc_get_switch_config_resp *buf, + u16 buf_size, u16 *start_seid, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_discover_capabilities(struct i40e_hw *hw, - void *buff, u16 buff_size, u16 *data_size, - enum i40e_admin_queue_opc list_type_opc, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, u8 preservation_flags, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_rearrange_nvm(struct i40e_hw *hw, - u8 rearrange_nvm, +int i40e_aq_set_switch_config(struct i40e_hw *hw, + u16 flags, + u16 valid_flags, u8 mode, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_request_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + enum i40e_aq_resource_access_type access, + u8 sdp_number, u64 *timeout, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_release_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + u8 sdp_number, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, bool last_command, + struct 
i40e_asq_cmd_details *cmd_details); +int i40e_aq_discover_capabilities(struct i40e_hw *hw, + void *buff, u16 buff_size, u16 *data_size, + enum i40e_admin_queue_opc list_type_opc, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, - u8 mib_type, void *buff, u16 buff_size, - u16 *local_len, u16 *remote_len, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code +int i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, u8 preservation_flags, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_rearrange_nvm(struct i40e_hw *hw, + u8 rearrange_nvm, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, + u8 mib_type, void *buff, u16 buff_size, + u16 *local_len, u16 *remote_len, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_lldp_mib(struct i40e_hw *hw, u8 mib_type, void *buff, u16 buff_size, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, - bool enable_update, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code +int i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, + bool enable_update, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, - bool persist, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_set_dcb_parameters(struct i40e_hw *hw, - bool dcb_enable, - struct i40e_asq_cmd_details - *cmd_details); -i40e_status i40e_aq_start_lldp(struct i40e_hw *hw, bool persist, +int i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, + bool persist, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_set_dcb_parameters(struct i40e_hw *hw, + bool dcb_enable, + struct i40e_asq_cmd_details + *cmd_details); +int i40e_aq_start_lldp(struct i40e_hw *hw, bool persist, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_get_cee_dcb_config(struct i40e_hw *hw, + void *buff, u16 buff_size, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_get_cee_dcb_config(struct i40e_hw *hw, - void *buff, u16 buff_size, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_add_udp_tunnel(struct i40e_hw *hw, - u16 udp_port, u8 protocol_index, - u8 *filter_index, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw, - u16 flags, u8 *mac_addr, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, +int i40e_aq_add_udp_tunnel(struct i40e_hw *hw, + u16 udp_port, u8 protocol_index, + u8 *filter_index, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_mac_address_write(struct i40e_hw *hw, + u16 flags, u8 *mac_addr, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, u16 seid, u16 credit, u8 max_credit, struct i40e_asq_cmd_details 
*cmd_details); -i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw, - u16 seid, u16 credit, u8 max_bw, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, +int i40e_aq_dcb_updated(struct i40e_hw *hw, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_config_switch_comp_ets(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_configure_switching_comp_ets_data *ets_data, - enum i40e_admin_queue_opc opcode, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw, +int i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw, + u16 seid, u16 credit, u8 max_bw, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int +i40e_aq_config_switch_comp_ets(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_switching_comp_ets_data *ets_data, + enum i40e_admin_queue_opc opcode, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw, u16 seid, struct i40e_aqc_configure_switching_comp_bw_config_data *bw_data, struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_port_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code +int i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int +i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int +i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int +i40e_aq_query_port_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_port_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int +i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_resume_port_tx(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct 
i40e_aqc_cloud_filters_element_bb *filters, u8 filter_count); -enum i40e_status_code +int i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi, struct i40e_aqc_cloud_filters_element_data *filters, u8 filter_count); -enum i40e_status_code +int i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi, struct i40e_aqc_cloud_filters_element_data *filters, u8 filter_count); -enum i40e_status_code +int i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid, struct i40e_aqc_cloud_filters_element_bb *filters, u8 filter_count); -i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw, - struct i40e_lldp_variables *lldp_cfg); -enum i40e_status_code +int i40e_read_lldp_cfg(struct i40e_hw *hw, + struct i40e_lldp_variables *lldp_cfg); +int i40e_aq_suspend_port_tx(struct i40e_hw *hw, u16 seid, struct i40e_asq_cmd_details *cmd_details); /* i40e_common */ -i40e_status i40e_init_shared_code(struct i40e_hw *hw); -i40e_status i40e_pf_reset(struct i40e_hw *hw); +int i40e_init_shared_code(struct i40e_hw *hw); +int i40e_pf_reset(struct i40e_hw *hw); void i40e_clear_hw(struct i40e_hw *hw); void i40e_clear_pxe_mode(struct i40e_hw *hw); -i40e_status i40e_get_link_status(struct i40e_hw *hw, bool *link_up); -i40e_status i40e_update_link_info(struct i40e_hw *hw); -i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr); -i40e_status i40e_read_bw_from_alt_ram(struct i40e_hw *hw, - u32 *max_bw, u32 *min_bw, bool *min_valid, - bool *max_valid); -i40e_status i40e_aq_configure_partition_bw(struct i40e_hw *hw, - struct i40e_aqc_configure_partition_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr); -i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num, - u32 pba_num_size); -i40e_status i40e_validate_mac_addr(u8 *mac_addr); +int i40e_get_link_status(struct i40e_hw *hw, bool *link_up); +int i40e_update_link_info(struct i40e_hw *hw); +int i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr); +int i40e_read_bw_from_alt_ram(struct i40e_hw *hw, + u32 *max_bw, u32 *min_bw, bool *min_valid, + bool *max_valid); +int +i40e_aq_configure_partition_bw(struct i40e_hw *hw, + struct i40e_aqc_configure_partition_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +int i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr); +int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num, + u32 pba_num_size); +int i40e_validate_mac_addr(u8 *mac_addr); void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable); /* prototype for functions used for NVM access */ -i40e_status i40e_init_nvm(struct i40e_hw *hw); -i40e_status i40e_acquire_nvm(struct i40e_hw *hw, - enum i40e_aq_resource_access_type access); +int i40e_init_nvm(struct i40e_hw *hw); +int i40e_acquire_nvm(struct i40e_hw *hw, + enum i40e_aq_resource_access_type access); void i40e_release_nvm(struct i40e_hw *hw); -i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, - u16 *data); -enum i40e_status_code i40e_read_nvm_module_data(struct i40e_hw *hw, - u8 module_ptr, - u16 module_offset, - u16 data_offset, - u16 words_data_size, - u16 *data_ptr); -i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, - u16 *words, u16 *data); -i40e_status i40e_update_nvm_checksum(struct i40e_hw *hw); -i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw, - u16 *checksum); -i40e_status i40e_nvmupd_command(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *); +int i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, + u16 *data); +int 
i40e_read_nvm_module_data(struct i40e_hw *hw, + u8 module_ptr, + u16 module_offset, + u16 data_offset, + u16 words_data_size, + u16 *data_ptr); +int i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data); +int i40e_update_nvm_checksum(struct i40e_hw *hw); +int i40e_validate_nvm_checksum(struct i40e_hw *hw, + u16 *checksum); +int i40e_nvmupd_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *errno); void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode, struct i40e_aq_desc *desc); void i40e_nvmupd_clear_wait_state(struct i40e_hw *hw); void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status); -i40e_status i40e_set_mac_type(struct i40e_hw *hw); +int i40e_set_mac_type(struct i40e_hw *hw); extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[]; @@ -422,41 +427,41 @@ i40e_virtchnl_link_speed(enum i40e_aq_link_speed link_speed) /* i40e_common for VF drivers*/ void i40e_vf_parse_hw_config(struct i40e_hw *hw, struct virtchnl_vf_resource *msg); -i40e_status i40e_vf_reset(struct i40e_hw *hw); -i40e_status i40e_aq_send_msg_to_pf(struct i40e_hw *hw, - enum virtchnl_ops v_opcode, - i40e_status v_retval, - u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_set_filter_control(struct i40e_hw *hw, - struct i40e_filter_control_settings *settings); -i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, - u8 *mac_addr, u16 ethtype, u16 flags, - u16 vsi_seid, u16 queue, bool is_add, - struct i40e_control_filter_stats *stats, - struct i40e_asq_cmd_details *cmd_details); -i40e_status i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id, - u8 table_id, u32 start_index, u16 buff_size, - void *buff, u16 *ret_buff_size, - u8 *ret_next_table, u32 *ret_next_index, - struct i40e_asq_cmd_details *cmd_details); +int i40e_vf_reset(struct i40e_hw *hw); +int i40e_aq_send_msg_to_pf(struct i40e_hw *hw, + enum virtchnl_ops v_opcode, + int v_retval, + u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details); +int i40e_set_filter_control(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings); +int i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, + u8 *mac_addr, u16 ethtype, u16 flags, + u16 vsi_seid, u16 queue, bool is_add, + struct i40e_control_filter_stats *stats, + struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id, + u8 table_id, u32 start_index, u16 buff_size, + void *buff, u16 *ret_buff_size, + u8 *ret_next_table, u32 *ret_next_index, + struct i40e_asq_cmd_details *cmd_details); void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw, u16 vsi_seid); -i40e_status i40e_aq_rx_ctl_read_register(struct i40e_hw *hw, - u32 reg_addr, u32 *reg_val, - struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_rx_ctl_read_register(struct i40e_hw *hw, + u32 reg_addr, u32 *reg_val, + struct i40e_asq_cmd_details *cmd_details); u32 i40e_read_rx_ctl(struct i40e_hw *hw, u32 reg_addr); -i40e_status i40e_aq_rx_ctl_write_register(struct i40e_hw *hw, - u32 reg_addr, u32 reg_val, - struct i40e_asq_cmd_details *cmd_details); +int i40e_aq_rx_ctl_write_register(struct i40e_hw *hw, + u32 reg_addr, u32 reg_val, + struct i40e_asq_cmd_details *cmd_details); void i40e_write_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val); -enum i40e_status_code +int i40e_aq_set_phy_register_ext(struct i40e_hw *hw, u8 phy_select, u8 dev_addr, bool page_change, bool set_mdio, u8 mdio_num, u32 reg_addr, u32 reg_val, struct i40e_asq_cmd_details *cmd_details); 
-enum i40e_status_code +int i40e_aq_get_phy_register_ext(struct i40e_hw *hw, u8 phy_select, u8 dev_addr, bool page_change, bool set_mdio, u8 mdio_num, @@ -469,43 +474,43 @@ i40e_aq_get_phy_register_ext(struct i40e_hw *hw, #define i40e_aq_get_phy_register(hw, ps, da, pc, ra, rv, cd) \ i40e_aq_get_phy_register_ext(hw, ps, da, pc, false, 0, ra, rv, cd) -i40e_status i40e_read_phy_register_clause22(struct i40e_hw *hw, - u16 reg, u8 phy_addr, u16 *value); -i40e_status i40e_write_phy_register_clause22(struct i40e_hw *hw, - u16 reg, u8 phy_addr, u16 value); -i40e_status i40e_read_phy_register_clause45(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 *value); -i40e_status i40e_write_phy_register_clause45(struct i40e_hw *hw, - u8 page, u16 reg, u8 phy_addr, u16 value); -i40e_status i40e_read_phy_register(struct i40e_hw *hw, u8 page, u16 reg, - u8 phy_addr, u16 *value); -i40e_status i40e_write_phy_register(struct i40e_hw *hw, u8 page, u16 reg, - u8 phy_addr, u16 value); +int i40e_read_phy_register_clause22(struct i40e_hw *hw, + u16 reg, u8 phy_addr, u16 *value); +int i40e_write_phy_register_clause22(struct i40e_hw *hw, + u16 reg, u8 phy_addr, u16 value); +int i40e_read_phy_register_clause45(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 *value); +int i40e_write_phy_register_clause45(struct i40e_hw *hw, + u8 page, u16 reg, u8 phy_addr, u16 value); +int i40e_read_phy_register(struct i40e_hw *hw, u8 page, u16 reg, + u8 phy_addr, u16 *value); +int i40e_write_phy_register(struct i40e_hw *hw, u8 page, u16 reg, + u8 phy_addr, u16 value); u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num); -i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw, - u32 time, u32 interval); -i40e_status i40e_aq_write_ddp(struct i40e_hw *hw, void *buff, - u16 buff_size, u32 track_id, - u32 *error_offset, u32 *error_info, - struct i40e_asq_cmd_details * - cmd_details); -i40e_status i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff, - u16 buff_size, u8 flags, - struct i40e_asq_cmd_details * - cmd_details); +int i40e_blink_phy_link_led(struct i40e_hw *hw, + u32 time, u32 interval); +int i40e_aq_write_ddp(struct i40e_hw *hw, void *buff, + u16 buff_size, u32 track_id, + u32 *error_offset, u32 *error_info, + struct i40e_asq_cmd_details * + cmd_details); +int i40e_aq_get_ddp_list(struct i40e_hw *hw, void *buff, + u16 buff_size, u8 flags, + struct i40e_asq_cmd_details * + cmd_details); struct i40e_generic_seg_header * i40e_find_segment_in_package(u32 segment_type, struct i40e_package_header *pkg_header); struct i40e_profile_section_header * i40e_find_section_in_profile(u32 section_type, struct i40e_profile_segment *profile); -enum i40e_status_code +int i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg, u32 track_id); -enum i40e_status_code +int i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg, u32 track_id); -enum i40e_status_code +int i40e_add_pinfo_to_list(struct i40e_hw *hw, struct i40e_profile_segment *profile, u8 *profile_info_sec, u32 track_id); diff --git a/drivers/net/ethernet/intel/i40e/i40e_status.h b/drivers/net/ethernet/intel/i40e/i40e_status.h index db3714a65dc7..4d2782e76038 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_status.h +++ b/drivers/net/ethernet/intel/i40e/i40e_status.h @@ -9,65 +9,30 @@ enum i40e_status_code { I40E_SUCCESS = 0, I40E_ERR_NVM = -1, I40E_ERR_NVM_CHECKSUM = -2, - I40E_ERR_PHY = -3, I40E_ERR_CONFIG = -4, I40E_ERR_PARAM = -5, - I40E_ERR_MAC_TYPE = -6, I40E_ERR_UNKNOWN_PHY = -7, - I40E_ERR_LINK_SETUP = -8, - 
I40E_ERR_ADAPTER_STOPPED = -9, I40E_ERR_INVALID_MAC_ADDR = -10, I40E_ERR_DEVICE_NOT_SUPPORTED = -11, - I40E_ERR_PRIMARY_REQUESTS_PENDING = -12, - I40E_ERR_INVALID_LINK_SETTINGS = -13, - I40E_ERR_AUTONEG_NOT_COMPLETE = -14, I40E_ERR_RESET_FAILED = -15, - I40E_ERR_SWFW_SYNC = -16, I40E_ERR_NO_AVAILABLE_VSI = -17, I40E_ERR_NO_MEMORY = -18, I40E_ERR_BAD_PTR = -19, - I40E_ERR_RING_FULL = -20, - I40E_ERR_INVALID_PD_ID = -21, - I40E_ERR_INVALID_QP_ID = -22, - I40E_ERR_INVALID_CQ_ID = -23, - I40E_ERR_INVALID_CEQ_ID = -24, - I40E_ERR_INVALID_AEQ_ID = -25, I40E_ERR_INVALID_SIZE = -26, - I40E_ERR_INVALID_ARP_INDEX = -27, - I40E_ERR_INVALID_FPM_FUNC_ID = -28, - I40E_ERR_QP_INVALID_MSG_SIZE = -29, - I40E_ERR_QP_TOOMANY_WRS_POSTED = -30, - I40E_ERR_INVALID_FRAG_COUNT = -31, I40E_ERR_QUEUE_EMPTY = -32, - I40E_ERR_INVALID_ALIGNMENT = -33, - I40E_ERR_FLUSHED_QUEUE = -34, - I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35, - I40E_ERR_INVALID_IMM_DATA_SIZE = -36, I40E_ERR_TIMEOUT = -37, - I40E_ERR_OPCODE_MISMATCH = -38, - I40E_ERR_CQP_COMPL_ERROR = -39, - I40E_ERR_INVALID_VF_ID = -40, - I40E_ERR_INVALID_HMCFN_ID = -41, - I40E_ERR_BACKING_PAGE_ERROR = -42, - I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43, - I40E_ERR_INVALID_PBLE_INDEX = -44, I40E_ERR_INVALID_SD_INDEX = -45, I40E_ERR_INVALID_PAGE_DESC_INDEX = -46, I40E_ERR_INVALID_SD_TYPE = -47, - I40E_ERR_MEMCPY_FAILED = -48, I40E_ERR_INVALID_HMC_OBJ_INDEX = -49, I40E_ERR_INVALID_HMC_OBJ_COUNT = -50, - I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51, - I40E_ERR_SRQ_ENABLED = -52, I40E_ERR_ADMIN_QUEUE_ERROR = -53, I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54, I40E_ERR_BUF_TOO_SHORT = -55, I40E_ERR_ADMIN_QUEUE_FULL = -56, I40E_ERR_ADMIN_QUEUE_NO_WORK = -57, - I40E_ERR_BAD_IWARP_CQE = -58, I40E_ERR_NVM_BLANK_MODE = -59, I40E_ERR_NOT_IMPLEMENTED = -60, - I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61, I40E_ERR_DIAG_TEST_FAILED = -62, I40E_ERR_NOT_READY = -63, I40E_NOT_SUPPORTED = -64, diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 635f93d60318..8a4587585acd 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -17,7 +17,7 @@ **/ static void i40e_vc_vf_broadcast(struct i40e_pf *pf, enum virtchnl_ops v_opcode, - i40e_status v_retval, u8 *msg, + int v_retval, u8 *msg, u16 msglen) { struct i40e_hw *hw = &pf->hw; @@ -441,14 +441,14 @@ irq_list_done: } /** - * i40e_release_iwarp_qvlist + * i40e_release_rdma_qvlist * @vf: pointer to the VF. 
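Stepping back to the i40e_status.h trim above: with the i40e_status typedef removed from i40e_osdep.h, every surviving I40E_ERR_* enumerator is still a negative value that now travels in a plain int, so zero keeps meaning success and the usual truthiness test stays equivalent to comparing against I40E_SUCCESS. Illustrative only:

	int ret = i40e_pf_reset(hw);

	if (ret)	/* same test as ret != I40E_SUCCESS */
		dev_warn(&pf->pdev->dev, "PF reset failed: %d\n", ret);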
* **/ -static void i40e_release_iwarp_qvlist(struct i40e_vf *vf) +static void i40e_release_rdma_qvlist(struct i40e_vf *vf) { struct i40e_pf *pf = vf->pf; - struct virtchnl_iwarp_qvlist_info *qvlist_info = vf->qvlist_info; + struct virtchnl_rdma_qvlist_info *qvlist_info = vf->qvlist_info; u32 msix_vf; u32 i; @@ -457,7 +457,7 @@ static void i40e_release_iwarp_qvlist(struct i40e_vf *vf) msix_vf = pf->hw.func_caps.num_msix_vectors_vf; for (i = 0; i < qvlist_info->num_vectors; i++) { - struct virtchnl_iwarp_qv_info *qv_info; + struct virtchnl_rdma_qv_info *qv_info; u32 next_q_index, next_q_type; struct i40e_hw *hw = &pf->hw; u32 v_idx, reg_idx, reg; @@ -491,18 +491,19 @@ static void i40e_release_iwarp_qvlist(struct i40e_vf *vf) } /** - * i40e_config_iwarp_qvlist + * i40e_config_rdma_qvlist * @vf: pointer to the VF info * @qvlist_info: queue and vector list * * Return 0 on success or < 0 on error **/ -static int i40e_config_iwarp_qvlist(struct i40e_vf *vf, - struct virtchnl_iwarp_qvlist_info *qvlist_info) +static int +i40e_config_rdma_qvlist(struct i40e_vf *vf, + struct virtchnl_rdma_qvlist_info *qvlist_info) { struct i40e_pf *pf = vf->pf; struct i40e_hw *hw = &pf->hw; - struct virtchnl_iwarp_qv_info *qv_info; + struct virtchnl_rdma_qv_info *qv_info; u32 v_idx, i, reg_idx, reg; u32 next_q_idx, next_q_type; u32 msix_vf; @@ -1246,13 +1247,13 @@ err: * @vl: List of VLANs - apply filter for given VLANs * @num_vlans: Number of elements in @vl **/ -static i40e_status +static int i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, bool unicast_enable, s16 *vl, u16 num_vlans) { - i40e_status aq_ret, aq_tmp = 0; struct i40e_pf *pf = vf->pf; struct i40e_hw *hw = &pf->hw; + int aq_ret, aq_tmp = 0; int i; /* No VLAN to set promisc on, set on VSI */ @@ -1264,9 +1265,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, int aq_err = pf->hw.aq.asq_last_status; dev_err(&pf->pdev->dev, - "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n", + "VF %d failed to set multicast promiscuous mode err %pe aq_err %s\n", vf->vf_id, - i40e_stat_str(&pf->hw, aq_ret), + ERR_PTR(aq_ret), i40e_aq_str(&pf->hw, aq_err)); return aq_ret; @@ -1280,9 +1281,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, int aq_err = pf->hw.aq.asq_last_status; dev_err(&pf->pdev->dev, - "VF %d failed to set unicast promiscuous mode err %s aq_err %s\n", + "VF %d failed to set unicast promiscuous mode err %pe aq_err %s\n", vf->vf_id, - i40e_stat_str(&pf->hw, aq_ret), + ERR_PTR(aq_ret), i40e_aq_str(&pf->hw, aq_err)); } @@ -1297,9 +1298,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, int aq_err = pf->hw.aq.asq_last_status; dev_err(&pf->pdev->dev, - "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n", + "VF %d failed to set multicast promiscuous mode err %pe aq_err %s\n", vf->vf_id, - i40e_stat_str(&pf->hw, aq_ret), + ERR_PTR(aq_ret), i40e_aq_str(&pf->hw, aq_err)); if (!aq_tmp) @@ -1313,9 +1314,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, int aq_err = pf->hw.aq.asq_last_status; dev_err(&pf->pdev->dev, - "VF %d failed to set unicast promiscuous mode err %s aq_err %s\n", + "VF %d failed to set unicast promiscuous mode err %pe aq_err %s\n", vf->vf_id, - i40e_stat_str(&pf->hw, aq_ret), + ERR_PTR(aq_ret), i40e_aq_str(&pf->hw, aq_err)); if (!aq_tmp) @@ -1339,13 +1340,13 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, * Called from the VF to configure the promiscuous mode of * VF vsis 
and from the VF reset path to reset promiscuous mode. **/ -static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf, - u16 vsi_id, - bool allmulti, - bool alluni) +static int i40e_config_vf_promiscuous_mode(struct i40e_vf *vf, + u16 vsi_id, + bool allmulti, + bool alluni) { - i40e_status aq_ret = I40E_SUCCESS; struct i40e_pf *pf = vf->pf; + int aq_ret = I40E_SUCCESS; struct i40e_vsi *vsi; u16 num_vlans; s16 *vl; @@ -1955,7 +1956,7 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode, struct i40e_pf *pf; struct i40e_hw *hw; int abs_vf_id; - i40e_status aq_ret; + int aq_ret; /* validate the request */ if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs) @@ -1987,7 +1988,7 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode, **/ static int i40e_vc_send_resp_to_vf(struct i40e_vf *vf, enum virtchnl_ops opcode, - i40e_status retval) + int retval) { return i40e_vc_send_msg_to_vf(vf, opcode, retval, NULL, 0); } @@ -2091,9 +2092,9 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg) { struct virtchnl_vf_resource *vfres = NULL; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; struct i40e_vsi *vsi; int num_vsis = 1; + int aq_ret = 0; size_t len = 0; int ret; @@ -2123,11 +2124,11 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg) vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; if (i40e_vf_client_capable(pf, vf->vf_id) && - (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_IWARP)) { - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_IWARP; - set_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states); + (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RDMA)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RDMA; + set_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states); } else { - clear_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states); + clear_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states); } if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { @@ -2221,9 +2222,9 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf, u8 *msg) struct virtchnl_promisc_info *info = (struct virtchnl_promisc_info *)msg; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; bool allmulti = false; bool alluni = false; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -2308,10 +2309,10 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg) struct virtchnl_queue_pair_info *qpi; u16 vsi_id, vsi_queue_id = 0; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; int i, j = 0, idx = 0; struct i40e_vsi *vsi; u16 num_qps_all = 0; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -2458,8 +2459,8 @@ static int i40e_vc_config_irq_map_msg(struct i40e_vf *vf, u8 *msg) struct virtchnl_irq_map_info *irqmap_info = (struct virtchnl_irq_map_info *)msg; struct virtchnl_vector_map *map; + int aq_ret = 0; u16 vsi_id; - i40e_status aq_ret = 0; int i; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { @@ -2574,7 +2575,7 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg) struct virtchnl_queue_select *vqs = (struct virtchnl_queue_select *)msg; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; + int aq_ret = 0; int i; if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { @@ -2632,7 +2633,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg) struct virtchnl_queue_select *vqs = (struct virtchnl_queue_select *)msg; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = 
I40E_ERR_PARAM; @@ -2783,7 +2784,7 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg) (struct virtchnl_queue_select *)msg; struct i40e_pf *pf = vf->pf; struct i40e_eth_stats stats; - i40e_status aq_ret = 0; + int aq_ret = 0; struct i40e_vsi *vsi; memset(&stats, 0, sizeof(struct i40e_eth_stats)); @@ -2926,7 +2927,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg) (struct virtchnl_ether_addr_list *)msg; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status ret = 0; + int ret = 0; int i; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || @@ -2998,7 +2999,7 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg) bool was_unimac_deleted = false; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status ret = 0; + int ret = 0; int i; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || @@ -3071,7 +3072,7 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg) (struct virtchnl_vlan_filter_list *)msg; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status aq_ret = 0; + int aq_ret = 0; int i; if ((vf->num_vlan >= I40E_VC_MAX_VLAN_PER_VF) && @@ -3142,7 +3143,7 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg) (struct virtchnl_vlan_filter_list *)msg; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status aq_ret = 0; + int aq_ret = 0; int i; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || @@ -3187,21 +3188,21 @@ error_param: } /** - * i40e_vc_iwarp_msg + * i40e_vc_rdma_msg * @vf: pointer to the VF info * @msg: pointer to the msg buffer * @msglen: msg length * * called from the VF for the iwarp msgs **/ -static int i40e_vc_iwarp_msg(struct i40e_vf *vf, u8 *msg, u16 msglen) +static int i40e_vc_rdma_msg(struct i40e_vf *vf, u8 *msg, u16 msglen) { struct i40e_pf *pf = vf->pf; int abs_vf_id = vf->vf_id + pf->hw.func_caps.vf_base_id; - i40e_status aq_ret = 0; + int aq_ret = 0; if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || - !test_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states)) { + !test_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states)) { aq_ret = I40E_ERR_PARAM; goto error_param; } @@ -3211,42 +3212,42 @@ static int i40e_vc_iwarp_msg(struct i40e_vf *vf, u8 *msg, u16 msglen) error_param: /* send the response to the VF */ - return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_IWARP, + return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_RDMA, aq_ret); } /** - * i40e_vc_iwarp_qvmap_msg + * i40e_vc_rdma_qvmap_msg * @vf: pointer to the VF info * @msg: pointer to the msg buffer * @config: config qvmap or release it * * called from the VF for the iwarp msgs **/ -static int i40e_vc_iwarp_qvmap_msg(struct i40e_vf *vf, u8 *msg, bool config) +static int i40e_vc_rdma_qvmap_msg(struct i40e_vf *vf, u8 *msg, bool config) { - struct virtchnl_iwarp_qvlist_info *qvlist_info = - (struct virtchnl_iwarp_qvlist_info *)msg; - i40e_status aq_ret = 0; + struct virtchnl_rdma_qvlist_info *qvlist_info = + (struct virtchnl_rdma_qvlist_info *)msg; + int aq_ret = 0; if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || - !test_bit(I40E_VF_STATE_IWARPENA, &vf->vf_states)) { + !test_bit(I40E_VF_STATE_RDMAENA, &vf->vf_states)) { aq_ret = I40E_ERR_PARAM; goto error_param; } if (config) { - if (i40e_config_iwarp_qvlist(vf, qvlist_info)) + if (i40e_config_rdma_qvlist(vf, qvlist_info)) aq_ret = I40E_ERR_PARAM; } else { - i40e_release_iwarp_qvlist(vf); + i40e_release_rdma_qvlist(vf); } error_param: /* send the response to the VF */ return i40e_vc_send_resp_to_vf(vf, - config ? 
VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP : - VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP, + config ? VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP : + VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP, aq_ret); } @@ -3263,7 +3264,7 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg) (struct virtchnl_rss_key *)msg; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status aq_ret = 0; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) || @@ -3293,7 +3294,7 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg) (struct virtchnl_rss_lut *)msg; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status aq_ret = 0; + int aq_ret = 0; u16 i; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || @@ -3328,7 +3329,7 @@ static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg) { struct virtchnl_rss_hena *vrh = NULL; struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; + int aq_ret = 0; int len = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { @@ -3365,7 +3366,7 @@ static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg) (struct virtchnl_rss_hena *)msg; struct i40e_pf *pf = vf->pf; struct i40e_hw *hw = &pf->hw; - i40e_status aq_ret = 0; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -3389,8 +3390,8 @@ err: **/ static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg) { - i40e_status aq_ret = 0; struct i40e_vsi *vsi; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -3415,8 +3416,8 @@ err: **/ static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg) { - i40e_status aq_ret = 0; struct i40e_vsi *vsi; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -3615,8 +3616,8 @@ static void i40e_del_all_cloud_filters(struct i40e_vf *vf) ret = i40e_add_del_cloud_filter(vsi, cfilter, false); if (ret) dev_err(&pf->pdev->dev, - "VF %d: Failed to delete cloud filter, err %s aq_err %s\n", - vf->vf_id, i40e_stat_str(&pf->hw, ret), + "VF %d: Failed to delete cloud filter, err %pe aq_err %s\n", + vf->vf_id, ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); @@ -3642,7 +3643,7 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg) struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; struct hlist_node *node; - i40e_status aq_ret = 0; + int aq_ret = 0; int i, ret; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { @@ -3718,8 +3719,8 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg) ret = i40e_add_del_cloud_filter(vsi, &cfilter, false); if (ret) { dev_err(&pf->pdev->dev, - "VF %d: Failed to delete cloud filter, err %s aq_err %s\n", - vf->vf_id, i40e_stat_str(&pf->hw, ret), + "VF %d: Failed to delete cloud filter, err %pe aq_err %s\n", + vf->vf_id, ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); goto err; } @@ -3773,7 +3774,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg) struct i40e_cloud_filter *cfilter = NULL; struct i40e_pf *pf = vf->pf; struct i40e_vsi *vsi = NULL; - i40e_status aq_ret = 0; + int aq_ret = 0; int i, ret; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { @@ -3852,8 +3853,8 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg) ret = i40e_add_del_cloud_filter(vsi, cfilter, true); if (ret) { dev_err(&pf->pdev->dev, - "VF %d: Failed to add cloud filter, err %s aq_err %s\n", - vf->vf_id, i40e_stat_str(&pf->hw, ret), + "VF %d: Failed to add cloud filter, err %pe 
aq_err %s\n", + vf->vf_id, ERR_PTR(ret), i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); goto err_free; } @@ -3882,7 +3883,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg) struct i40e_pf *pf = vf->pf; struct i40e_link_status *ls = &pf->hw.phy.link_info; int i, adq_request_qps = 0; - i40e_status aq_ret = 0; + int aq_ret = 0; u64 speed = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { @@ -3994,7 +3995,7 @@ err: static int i40e_vc_del_qch_msg(struct i40e_vf *vf, u8 *msg) { struct i40e_pf *pf = vf->pf; - i40e_status aq_ret = 0; + int aq_ret = 0; if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { aq_ret = I40E_ERR_PARAM; @@ -4112,14 +4113,14 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode, case VIRTCHNL_OP_GET_STATS: ret = i40e_vc_get_stats_msg(vf, msg); break; - case VIRTCHNL_OP_IWARP: - ret = i40e_vc_iwarp_msg(vf, msg, msglen); + case VIRTCHNL_OP_RDMA: + ret = i40e_vc_rdma_msg(vf, msg, msglen); break; - case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP: - ret = i40e_vc_iwarp_qvmap_msg(vf, msg, true); + case VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP: + ret = i40e_vc_rdma_qvmap_msg(vf, msg, true); break; - case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP: - ret = i40e_vc_iwarp_qvmap_msg(vf, msg, false); + case VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP: + ret = i40e_vc_rdma_qvmap_msg(vf, msg, false); break; case VIRTCHNL_OP_CONFIG_RSS_KEY: ret = i40e_vc_config_rss_key(vf, msg); diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h index 358bbdb58795..895b8feb2567 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h @@ -34,7 +34,7 @@ enum i40e_queue_ctrl { enum i40e_vf_states { I40E_VF_STATE_INIT = 0, I40E_VF_STATE_ACTIVE, - I40E_VF_STATE_IWARPENA, + I40E_VF_STATE_RDMAENA, I40E_VF_STATE_DISABLED, I40E_VF_STATE_MC_PROMISC, I40E_VF_STATE_UC_PROMISC, @@ -46,7 +46,7 @@ enum i40e_vf_states { enum i40e_vf_capabilities { I40E_VIRTCHNL_VF_CAP_PRIVILEGE = 0, I40E_VIRTCHNL_VF_CAP_L2, - I40E_VIRTCHNL_VF_CAP_IWARP, + I40E_VIRTCHNL_VF_CAP_RDMA, }; /* In ADq, max 4 VSI's can be allocated per VF including primary VF VSI. @@ -108,7 +108,7 @@ struct i40e_vf { u16 num_cloud_filters; /* RDMA Client */ - struct virtchnl_iwarp_qvlist_info *qvlist_info; + struct virtchnl_rdma_qvlist_info *qvlist_info; }; void i40e_free_vfs(struct i40e_pf *pf); diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 2a9f1eeeb701..232bc61d9eee 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -30,6 +30,7 @@ #include <linux/jiffies.h> #include <net/ip6_checksum.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <net/udp.h> #include <net/tc_act/tc_gact.h> #include <net/tc_act/tc_mirred.h> @@ -276,8 +277,8 @@ struct iavf_adapter { u64 hw_csum_rx_error; u32 rx_desc_count; int num_msix_vectors; - int num_iwarp_msix; - int iwarp_base_vector; + int num_rdma_msix; + int rdma_base_vector; u32 client_pending; struct iavf_client_instance *cinst; struct msix_entry *msix_entries; @@ -384,7 +385,7 @@ struct iavf_adapter { enum virtchnl_ops current_op; #define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \ (_a)->vf_res->vf_cap_flags & \ - VIRTCHNL_VF_OFFLOAD_IWARP : \ + VIRTCHNL_VF_OFFLOAD_RDMA : \ 0) #define CLIENT_ENABLED(_a) ((_a)->cinst) /* RSS by the PF should be preferred over RSS via other methods. 
*/ diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.c b/drivers/net/ethernet/intel/iavf/iavf_client.c index 0c77e4171808..93c903c02c64 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_client.c +++ b/drivers/net/ethernet/intel/iavf/iavf_client.c @@ -127,7 +127,7 @@ void iavf_notify_client_open(struct iavf_vsi *vsi) } /** - * iavf_client_release_qvlist - send a message to the PF to release iwarp qv map + * iavf_client_release_qvlist - send a message to the PF to release rdma qv map * @ldev: pointer to L2 context. * * Return 0 on success or < 0 on error @@ -141,12 +141,12 @@ static int iavf_client_release_qvlist(struct iavf_info *ldev) return -EAGAIN; err = iavf_aq_send_msg_to_pf(&adapter->hw, - VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP, + VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP, IAVF_SUCCESS, NULL, 0, NULL); if (err) dev_err(&adapter->pdev->dev, - "Unable to send iWarp vector release message to PF, error %d, aq status %d\n", + "Unable to send RDMA vector release message to PF, error %d, aq status %d\n", err, adapter->hw.aq.asq_last_status); return err; @@ -215,9 +215,9 @@ iavf_client_add_instance(struct iavf_adapter *adapter) cinst->lan_info.params = params; set_bit(__IAVF_CLIENT_INSTANCE_NONE, &cinst->state); - cinst->lan_info.msix_count = adapter->num_iwarp_msix; + cinst->lan_info.msix_count = adapter->num_rdma_msix; cinst->lan_info.msix_entries = - &adapter->msix_entries[adapter->iwarp_base_vector]; + &adapter->msix_entries[adapter->rdma_base_vector]; mac = list_first_entry(&cinst->lan_info.netdev->dev_addrs.list, struct netdev_hw_addr, list); @@ -425,17 +425,17 @@ static u32 iavf_client_virtchnl_send(struct iavf_info *ldev, if (adapter->aq_required) return -EAGAIN; - err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP, + err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_RDMA, IAVF_SUCCESS, msg, len, NULL); if (err) - dev_err(&adapter->pdev->dev, "Unable to send iWarp message to PF, error %d, aq status %d\n", + dev_err(&adapter->pdev->dev, "Unable to send RDMA message to PF, error %d, aq status %d\n", err, adapter->hw.aq.asq_last_status); return err; } /** - * iavf_client_setup_qvlist - send a message to the PF to setup iwarp qv map + * iavf_client_setup_qvlist - send a message to the PF to setup rdma qv map * @ldev: pointer to L2 context. * @client: Client pointer. 
* @qvlist_info: queue and vector list @@ -446,7 +446,7 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev, struct iavf_client *client, struct iavf_qvlist_info *qvlist_info) { - struct virtchnl_iwarp_qvlist_info *v_qvlist_info; + struct virtchnl_rdma_qvlist_info *v_qvlist_info; struct iavf_adapter *adapter = ldev->vf; struct iavf_qv_info *qv_info; enum iavf_status err; @@ -463,23 +463,23 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev, continue; v_idx = qv_info->v_idx; if ((v_idx >= - (adapter->iwarp_base_vector + adapter->num_iwarp_msix)) || - (v_idx < adapter->iwarp_base_vector)) + (adapter->rdma_base_vector + adapter->num_rdma_msix)) || + (v_idx < adapter->rdma_base_vector)) return -EINVAL; } - v_qvlist_info = (struct virtchnl_iwarp_qvlist_info *)qvlist_info; + v_qvlist_info = (struct virtchnl_rdma_qvlist_info *)qvlist_info; msg_size = struct_size(v_qvlist_info, qv_info, v_qvlist_info->num_vectors - 1); - adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP); + adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP); err = iavf_aq_send_msg_to_pf(&adapter->hw, - VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP, IAVF_SUCCESS, + VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP, IAVF_SUCCESS, (u8 *)v_qvlist_info, msg_size, NULL); if (err) { dev_err(&adapter->pdev->dev, - "Unable to send iWarp vector config message to PF, error %d, aq status %d\n", + "Unable to send RDMA vector config message to PF, error %d, aq status %d\n", err, adapter->hw.aq.asq_last_status); goto out; } @@ -488,7 +488,7 @@ static int iavf_client_setup_qvlist(struct iavf_info *ldev, for (i = 0; i < 5; i++) { msleep(100); if (!(adapter->client_pending & - BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP))) { + BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP))) { err = 0; break; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.h b/drivers/net/ethernet/intel/iavf/iavf_client.h index 9a7cf39ea75a..c5d51d7dc7cc 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_client.h +++ b/drivers/net/ethernet/intel/iavf/iavf_client.h @@ -159,7 +159,7 @@ struct iavf_client { #define IAVF_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0) #define IAVF_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2) u8 type; -#define IAVF_CLIENT_IWARP 0 +#define IAVF_CLIENT_RDMA 0 struct iavf_client_ops *ops; /* client ops provided by the client */ }; diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c index 34e46a23894f..16c490965b61 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_common.c +++ b/drivers/net/ethernet/intel/iavf/iavf_common.c @@ -223,8 +223,8 @@ const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err) return "IAVF_ERR_ADMIN_QUEUE_FULL"; case IAVF_ERR_ADMIN_QUEUE_NO_WORK: return "IAVF_ERR_ADMIN_QUEUE_NO_WORK"; - case IAVF_ERR_BAD_IWARP_CQE: - return "IAVF_ERR_BAD_IWARP_CQE"; + case IAVF_ERR_BAD_RDMA_CQE: + return "IAVF_ERR_BAD_RDMA_CQE"; case IAVF_ERR_NVM_BLANK_MODE: return "IAVF_ERR_NVM_BLANK_MODE"; case IAVF_ERR_NOT_IMPLEMENTED: diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 4b09785d2147..3273aeb8fa67 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -105,7 +105,7 @@ int iavf_status_to_errno(enum iavf_status status) case IAVF_ERR_SRQ_ENABLED: case IAVF_ERR_ADMIN_QUEUE_ERROR: case IAVF_ERR_ADMIN_QUEUE_FULL: - case IAVF_ERR_BAD_IWARP_CQE: + case IAVF_ERR_BAD_RDMA_CQE: case IAVF_ERR_NVM_BLANK_MODE: case IAVF_ERR_PE_DOORBELL_NOT_ENABLED: case IAVF_ERR_DIAG_TEST_FAILED: @@ -4868,8 
+4868,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_pci_reg; } - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); netdev = alloc_etherdev_mq(sizeof(struct iavf_adapter), @@ -4957,7 +4955,6 @@ err_ioremap: err_alloc_wq: free_netdev(netdev); err_alloc_etherdev: - pci_disable_pcie_error_reporting(pdev); pci_release_regions(pdev); err_pci_reg: err_dma: @@ -5175,8 +5172,6 @@ static void iavf_remove(struct pci_dev *pdev) free_netdev(netdev); - pci_disable_pcie_error_reporting(pdev); - pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/iavf/iavf_status.h b/drivers/net/ethernet/intel/iavf/iavf_status.h index 2ea5c7c339bc..0e493ee9e9d1 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_status.h +++ b/drivers/net/ethernet/intel/iavf/iavf_status.h @@ -64,7 +64,7 @@ enum iavf_status { IAVF_ERR_BUF_TOO_SHORT = -55, IAVF_ERR_ADMIN_QUEUE_FULL = -56, IAVF_ERR_ADMIN_QUEUE_NO_WORK = -57, - IAVF_ERR_BAD_IWARP_CQE = -58, + IAVF_ERR_BAD_RDMA_CQE = -58, IAVF_ERR_NVM_BLANK_MODE = -59, IAVF_ERR_NOT_IMPLEMENTED = -60, IAVF_ERR_PE_DOORBELL_NOT_ENABLED = -61, diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 365ca0c710c4..6d23338604bb 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -2298,7 +2298,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, if (v_opcode != adapter->current_op) return; break; - case VIRTCHNL_OP_IWARP: + case VIRTCHNL_OP_RDMA: /* Gobble zero-length replies from the PF. They indicate that * a previous message was received OK, and the client doesn't * care about that. @@ -2307,9 +2307,9 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, iavf_notify_client_message(&adapter->vsi, msg, msglen); break; - case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP: + case VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP: adapter->client_pending &= - ~(BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP)); + ~(BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP)); break; case VIRTCHNL_OP_GET_RSS_HENA_CAPS: { struct virtchnl_rss_hena *vrh = (struct virtchnl_rss_hena *)msg; diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index 9183d480b70b..f269952d207d 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -28,6 +28,7 @@ ice-y := ice_main.o \ ice_flow.o \ ice_idc.o \ ice_devlink.o \ + ice_ddp.o \ ice_fw_update.o \ ice_lag.o \ ice_ethtool.o \ @@ -42,8 +43,8 @@ ice-$(CONFIG_PCI_IOV) += \ ice_vf_vsi_vlan_ops.o \ ice_vf_lib.o ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o -ice-$(CONFIG_TTY) += ice_gnss.o ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o +ice-$(CONFIG_ICE_GNSS) += ice_gnss.o diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 713069f809ec..b0e29e342401 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -39,7 +39,9 @@ #include <linux/avf/virtchnl.h> #include <linux/cpu_rmap.h> #include <linux/dim.h> +#include <linux/gnss.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <net/tc_act/tc_mirred.h> #include <net/tc_act/tc_gact.h> #include <net/ip.h> @@ -121,6 +123,8 @@ #define ICE_MAX_MTU (ICE_AQ_SET_MAC_FRAME_SIZE_MAX - ICE_ETH_PKT_HDR_PAD) +#define ICE_MAX_TSO_SIZE 131072 + #define 
ICE_UP_TABLE_TRANSLATE(val, i) \ (((val) << ICE_AQ_VSI_UP_TABLE_UP##i##_S) & \ ICE_AQ_VSI_UP_TABLE_UP##i##_M) @@ -352,7 +356,6 @@ struct ice_vsi { struct ice_vf *vf; /* VF associated with this VSI */ - u16 ethtype; /* Ethernet protocol for pause frame */ u16 num_gfltr; u16 num_bfltr; @@ -565,9 +568,8 @@ struct ice_pf { struct mutex adev_mutex; /* lock to protect aux device access */ u32 msg_enable; struct ice_ptp ptp; - struct tty_driver *ice_gnss_tty_driver; - struct tty_port *gnss_tty_port[ICE_GNSS_TTY_MINOR_DEVICES]; - struct gnss_serial *gnss_serial[ICE_GNSS_TTY_MINOR_DEVICES]; + struct gnss_serial *gnss_serial; + struct gnss_device *gnss_dev; u16 num_rdma_msix; /* Total MSIX vectors for RDMA driver */ u16 rdma_base_vector; @@ -889,7 +891,7 @@ ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp, int ice_up(struct ice_vsi *vsi); int ice_down(struct ice_vsi *vsi); int ice_down_up(struct ice_vsi *vsi); -int ice_vsi_cfg(struct ice_vsi *vsi); +int ice_vsi_cfg_lan(struct ice_vsi *vsi); struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi); int ice_vsi_determine_xdp_res(struct ice_vsi *vsi); int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog); @@ -907,6 +909,7 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup); int ice_plug_aux_dev(struct ice_pf *pf); void ice_unplug_aux_dev(struct ice_pf *pf); int ice_init_rdma(struct ice_pf *pf); +void ice_deinit_rdma(struct ice_pf *pf); const char *ice_aq_str(enum ice_aq_err aq_err); bool ice_is_wol_supported(struct ice_hw *hw); void ice_fdir_del_all_fltrs(struct ice_vsi *vsi); @@ -931,6 +934,8 @@ int ice_open(struct net_device *netdev); int ice_open_internal(struct net_device *netdev); int ice_stop(struct net_device *netdev); void ice_service_task_schedule(struct ice_pf *pf); +int ice_load(struct ice_pf *pf); +void ice_unload(struct ice_pf *pf); /** * ice_set_rdma_cap - enable RDMA support diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 958c1e435232..838d9b274d68 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -1659,14 +1659,24 @@ struct ice_aqc_lldp_get_mib { #define ICE_AQ_LLDP_TX_ACTIVE 0 #define ICE_AQ_LLDP_TX_SUSPENDED 1 #define ICE_AQ_LLDP_TX_FLUSHED 3 +/* DCBX mode */ +#define ICE_AQ_LLDP_DCBX_M GENMASK(7, 6) +#define ICE_AQ_LLDP_DCBX_NA 0 +#define ICE_AQ_LLDP_DCBX_CEE 1 +#define ICE_AQ_LLDP_DCBX_IEEE 2 + + u8 state; +#define ICE_AQ_LLDP_MIB_CHANGE_STATE_M BIT(0) +#define ICE_AQ_LLDP_MIB_CHANGE_EXECUTED 0 +#define ICE_AQ_LLDP_MIB_CHANGE_PENDING 1 + /* The following bytes are reserved for the Get LLDP MIB command (0x0A00) * and in the LLDP MIB Change Event (0x0A01). They are valid for the * Get LLDP MIB (0x0A00) response only. 
*/ - u8 reserved1; __le16 local_len; __le16 remote_len; - u8 reserved2[2]; + u8 reserved[2]; __le32 addr_high; __le32 addr_low; }; @@ -1677,6 +1687,9 @@ struct ice_aqc_lldp_set_mib_change { u8 command; #define ICE_AQ_LLDP_MIB_UPDATE_ENABLE 0x0 #define ICE_AQ_LLDP_MIB_UPDATE_DIS 0x1 +#define ICE_AQ_LLDP_MIB_PENDING_M BIT(1) +#define ICE_AQ_LLDP_MIB_PENDING_DISABLE 0 +#define ICE_AQ_LLDP_MIB_PENDING_ENABLE 1 u8 reserved[15]; }; @@ -2329,6 +2342,7 @@ enum ice_adminq_opc { ice_aqc_opc_lldp_set_local_mib = 0x0A08, ice_aqc_opc_lldp_stop_start_specific_agent = 0x0A09, ice_aqc_opc_lldp_filter_ctrl = 0x0A0A, + ice_aqc_opc_lldp_execute_pending_mib = 0x0A0B, /* RSS commands */ ice_aqc_opc_set_rss_key = 0x0B02, diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 554095b25f44..1911d644dfa8 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -355,9 +355,6 @@ static unsigned int ice_rx_offset(struct ice_rx_ring *rx_ring) { if (ice_ring_uses_build_skb(rx_ring)) return ICE_SKB_PAD; - else if (ice_is_xdp_ena_vsi(rx_ring->vsi)) - return XDP_PACKET_HEADROOM; - return 0; } @@ -495,7 +492,7 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring) int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) { struct device *dev = ice_pf_to_dev(ring->vsi->back); - u16 num_bufs = ICE_DESC_UNUSED(ring); + u32 num_bufs = ICE_RX_DESC_UNUSED(ring); int err; ring->rx_buf_len = ring->vsi->rx_buf_len; @@ -503,8 +500,10 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) if (ring->vsi->type == ICE_VSI_PF) { if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) /* coverity[check_return] */ - xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, - ring->q_index, ring->q_vector->napi.napi_id); + __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, + ring->q_index, + ring->q_vector->napi.napi_id, + ring->vsi->rx_buf_len); ring->xsk_pool = ice_xsk_pool(ring); if (ring->xsk_pool) { @@ -524,9 +523,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) } else { if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) /* coverity[check_return] */ - xdp_rxq_info_reg(&ring->xdp_rxq, - ring->netdev, - ring->q_index, ring->q_vector->napi.napi_id); + __xdp_rxq_info_reg(&ring->xdp_rxq, + ring->netdev, + ring->q_index, + ring->q_vector->napi.napi_id, + ring->vsi->rx_buf_len); err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_SHARED, @@ -536,6 +537,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) } } + xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq); + ring->xdp.data = NULL; err = ice_setup_rx_ctx(ring); if (err) { dev_err(dev, "ice_setup_rx_ctx failed for RxQ %d, err %d\n", diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 3e08847505ce..c2fda4fa4188 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -208,6 +208,31 @@ bool ice_is_e810t(struct ice_hw *hw) } /** + * ice_is_e823 + * @hw: pointer to the hardware structure + * + * returns true if the device is E823-L or E823-C based, false if not. 
+ */ +bool ice_is_e823(struct ice_hw *hw) +{ + switch (hw->device_id) { + case ICE_DEV_ID_E823L_BACKPLANE: + case ICE_DEV_ID_E823L_SFP: + case ICE_DEV_ID_E823L_10G_BASE_T: + case ICE_DEV_ID_E823L_1GBE: + case ICE_DEV_ID_E823L_QSFP: + case ICE_DEV_ID_E823C_BACKPLANE: + case ICE_DEV_ID_E823C_QSFP: + case ICE_DEV_ID_E823C_SFP: + case ICE_DEV_ID_E823C_10G_BASE_T: + case ICE_DEV_ID_E823C_SGMII: + return true; + default: + return false; + } +} + +/** * ice_clear_pf_cfg - Clear PF configuration * @hw: pointer to the hardware structure * @@ -1088,8 +1113,10 @@ int ice_init_hw(struct ice_hw *hw) if (status) goto err_unroll_cqinit; - hw->port_info = devm_kzalloc(ice_hw_to_dev(hw), - sizeof(*hw->port_info), GFP_KERNEL); + if (!hw->port_info) + hw->port_info = devm_kzalloc(ice_hw_to_dev(hw), + sizeof(*hw->port_info), + GFP_KERNEL); if (!hw->port_info) { status = -ENOMEM; goto err_unroll_cqinit; @@ -1217,11 +1244,6 @@ void ice_deinit_hw(struct ice_hw *hw) ice_free_hw_tbls(hw); mutex_destroy(&hw->tnl_lock); - if (hw->port_info) { - devm_kfree(ice_hw_to_dev(hw), hw->port_info); - hw->port_info = NULL; - } - /* Attempt to disable FW logging before shutting down control queues */ ice_cfg_fw_log(hw, false); ice_destroy_all_ctrlq(hw); @@ -5504,6 +5526,19 @@ ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add) } /** + * ice_lldp_execute_pending_mib - execute LLDP pending MIB request + * @hw: pointer to HW struct + */ +int ice_lldp_execute_pending_mib(struct ice_hw *hw) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_execute_pending_mib); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL); +} + +/** * ice_fw_supports_report_dflt_cfg * @hw: pointer to the hardware structure * diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 4c6a0b5c9304..8ba5f935a092 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -122,7 +122,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update); int ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, - enum ice_fc_mode fc); + enum ice_fc_mode req_mode); bool ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps, struct ice_aqc_set_phy_cfg_data *cfg); @@ -199,6 +199,7 @@ void ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat); bool ice_is_e810t(struct ice_hw *hw); +bool ice_is_e823(struct ice_hw *hw); int ice_sched_query_elem(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf); @@ -221,6 +222,7 @@ ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size, bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw); int ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add); +int ice_lldp_execute_pending_mib(struct ice_hw *hw); int ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, u16 bus_addr, __le16 addr, u8 params, u8 *data, diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c index 6be02f9b0b8c..c557dfc50aad 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb.c @@ -73,6 +73,9 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update, if (!ena_update) cmd->command |= ICE_AQ_LLDP_MIB_UPDATE_DIS; + else + cmd->command |= FIELD_PREP(ICE_AQ_LLDP_MIB_PENDING_M, + ICE_AQ_LLDP_MIB_PENDING_ENABLE); return ice_aq_send_cmd(hw, &desc, NULL, 0, 
cd); } @@ -566,7 +569,7 @@ ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) * @tlv: Organization specific TLV * @dcbcfg: Local store to update ETS REC data * - * Currently only IEEE 802.1Qaz TLV is supported, all others + * Currently IEEE 802.1Qaz and CEE DCBX TLV are supported, others * will be returned */ static void @@ -585,7 +588,7 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) ice_parse_cee_tlv(tlv, dcbcfg); break; default: - break; + break; /* Other OUIs not supported */ } } @@ -964,6 +967,42 @@ int ice_get_dcb_cfg(struct ice_port_info *pi) } /** + * ice_get_dcb_cfg_from_mib_change + * @pi: port information structure + * @event: pointer to the admin queue receive event + * + * Set DCB configuration from received MIB Change event + */ +void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi, + struct ice_rq_event_info *event) +{ + struct ice_dcbx_cfg *dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg; + struct ice_aqc_lldp_get_mib *mib; + u8 change_type, dcbx_mode; + + mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw; + + change_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type); + if (change_type == ICE_AQ_LLDP_MIB_REMOTE) + dcbx_cfg = &pi->qos_cfg.remote_dcbx_cfg; + + dcbx_mode = FIELD_GET(ICE_AQ_LLDP_DCBX_M, mib->type); + + switch (dcbx_mode) { + case ICE_AQ_LLDP_DCBX_IEEE: + dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE; + ice_lldp_to_dcb_cfg(event->msg_buf, dcbx_cfg); + break; + + case ICE_AQ_LLDP_DCBX_CEE: + pi->qos_cfg.desired_dcbx_cfg = pi->qos_cfg.local_dcbx_cfg; + ice_cee_to_dcb_cfg((struct ice_aqc_get_cee_dcb_cfg_resp *) + event->msg_buf, pi); + break; + } +} + +/** * ice_init_dcb * @hw: pointer to the HW struct * @enable_mib_change: enable MIB change event diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.h b/drivers/net/ethernet/intel/ice/ice_dcb.h index 6abf28a14291..be34650a77d5 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb.h +++ b/drivers/net/ethernet/intel/ice/ice_dcb.h @@ -144,6 +144,8 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype, struct ice_dcbx_cfg *dcbcfg); int ice_get_dcb_cfg(struct ice_port_info *pi); int ice_set_dcb_cfg(struct ice_port_info *pi); +void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi, + struct ice_rq_event_info *event); int ice_init_dcb(struct ice_hw *hw, bool enable_mib_change); int ice_query_port_ets(struct ice_port_info *pi, diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index 0a55c552189a..c6d4926f0fcf 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -862,7 +862,7 @@ int ice_init_pf_dcb(struct ice_pf *pf, bool locked) if (err) goto dcb_init_err; - return err; + return 0; dcb_init_err: dev_err(dev, "DCB init failed\n"); @@ -947,6 +947,16 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, } /** + * ice_dcb_is_mib_change_pending - Check if MIB change is pending + * @state: MIB change state + */ +static bool ice_dcb_is_mib_change_pending(u8 state) +{ + return ICE_AQ_LLDP_MIB_CHANGE_PENDING == + FIELD_GET(ICE_AQ_LLDP_MIB_CHANGE_STATE_M, state); +} + +/** * ice_dcb_process_lldp_set_mib_change - Process MIB change * @pf: ptr to ice_pf * @event: pointer to the admin queue receive event @@ -959,6 +969,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf, struct device *dev = ice_pf_to_dev(pf); struct ice_aqc_lldp_get_mib *mib; struct ice_dcbx_cfg tmp_dcbx_cfg; + bool pending_handled = true; bool need_reconfig = 
false; struct ice_port_info *pi; u8 mib_type; @@ -975,41 +986,58 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf, pi = pf->hw.port_info; mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw; + /* Ignore if event is not for Nearest Bridge */ - mib_type = ((mib->type >> ICE_AQ_LLDP_BRID_TYPE_S) & - ICE_AQ_LLDP_BRID_TYPE_M); + mib_type = FIELD_GET(ICE_AQ_LLDP_BRID_TYPE_M, mib->type); dev_dbg(dev, "LLDP event MIB bridge type 0x%x\n", mib_type); if (mib_type != ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID) return; + /* A pending change event contains accurate config information, and + * the FW setting has not been updaed yet, so detect if change is + * pending to determine where to pull config information from + * (FW vs event) + */ + if (ice_dcb_is_mib_change_pending(mib->state)) + pending_handled = false; + /* Check MIB Type and return if event for Remote MIB update */ - mib_type = mib->type & ICE_AQ_LLDP_MIB_TYPE_M; + mib_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type); dev_dbg(dev, "LLDP event mib type %s\n", mib_type ? "remote" : "local"); if (mib_type == ICE_AQ_LLDP_MIB_REMOTE) { /* Update the remote cached instance and return */ - ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE, - ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, - &pi->qos_cfg.remote_dcbx_cfg); - if (ret) { - dev_err(dev, "Failed to get remote DCB config\n"); - return; + if (!pending_handled) { + ice_get_dcb_cfg_from_mib_change(pi, event); + } else { + ret = + ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE, + ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, + &pi->qos_cfg.remote_dcbx_cfg); + if (ret) + dev_dbg(dev, "Failed to get remote DCB config\n"); } + return; } + /* That a DCB change has happened is now determined */ mutex_lock(&pf->tc_mutex); /* store the old configuration */ - tmp_dcbx_cfg = pf->hw.port_info->qos_cfg.local_dcbx_cfg; + tmp_dcbx_cfg = pi->qos_cfg.local_dcbx_cfg; /* Reset the old DCBX configuration data */ memset(&pi->qos_cfg.local_dcbx_cfg, 0, sizeof(pi->qos_cfg.local_dcbx_cfg)); /* Get updated DCBX data from firmware */ - ret = ice_get_dcb_cfg(pf->hw.port_info); - if (ret) { - dev_err(dev, "Failed to get DCB config\n"); - goto out; + if (!pending_handled) { + ice_get_dcb_cfg_from_mib_change(pi, event); + } else { + ret = ice_get_dcb_cfg(pi); + if (ret) { + dev_err(dev, "Failed to get DCB config\n"); + goto out; + } } /* No change detected in DCBX configs */ @@ -1036,11 +1064,17 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf, clear_bit(ICE_FLAG_DCB_ENA, pf->flags); } + /* Send Execute Pending MIB Change event if it is a Pending event */ + if (!pending_handled) { + ice_lldp_execute_pending_mib(&pf->hw); + pending_handled = true; + } + rtnl_lock(); /* disable VSIs affected by DCB changes */ ice_dcb_ena_dis_vsi(pf, false, true); - ret = ice_query_port_ets(pf->hw.port_info, &buf, sizeof(buf), NULL); + ret = ice_query_port_ets(pi, &buf, sizeof(buf), NULL); if (ret) { dev_err(dev, "Query Port ETS failed\n"); goto unlock_rtnl; @@ -1055,4 +1089,8 @@ unlock_rtnl: rtnl_unlock(); out: mutex_unlock(&pf->tc_mutex); + + /* Send Execute Pending MIB Change event if it is a Pending event */ + if (!pending_handled) + ice_lldp_execute_pending_mib(&pf->hw); } diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c new file mode 100644 index 000000000000..d71ed210f9c4 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_ddp.c @@ -0,0 +1,1897 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022, Intel Corporation. 
*/ + +#include "ice_common.h" +#include "ice.h" +#include "ice_ddp.h" + +/* For supporting double VLAN mode, it is necessary to enable or disable certain + * boost tcam entries. The metadata labels names that match the following + * prefixes will be saved to allow enabling double VLAN mode. + */ +#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */ +#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */ + +/* To support tunneling entries by PF, the package will append the PF number to + * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc. + */ +#define ICE_TNL_PRE "TNL_" +static const struct ice_tunnel_type_scan tnls[] = { + { TNL_VXLAN, "TNL_VXLAN_PF" }, + { TNL_GENEVE, "TNL_GENEVE_PF" }, + { TNL_LAST, "" } +}; + +/** + * ice_verify_pkg - verify package + * @pkg: pointer to the package buffer + * @len: size of the package buffer + * + * Verifies various attributes of the package file, including length, format + * version, and the requirement of at least one segment. + */ +enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len) +{ + u32 seg_count; + u32 i; + + if (len < struct_size(pkg, seg_offset, 1)) + return ICE_DDP_PKG_INVALID_FILE; + + if (pkg->pkg_format_ver.major != ICE_PKG_FMT_VER_MAJ || + pkg->pkg_format_ver.minor != ICE_PKG_FMT_VER_MNR || + pkg->pkg_format_ver.update != ICE_PKG_FMT_VER_UPD || + pkg->pkg_format_ver.draft != ICE_PKG_FMT_VER_DFT) + return ICE_DDP_PKG_INVALID_FILE; + + /* pkg must have at least one segment */ + seg_count = le32_to_cpu(pkg->seg_count); + if (seg_count < 1) + return ICE_DDP_PKG_INVALID_FILE; + + /* make sure segment array fits in package length */ + if (len < struct_size(pkg, seg_offset, seg_count)) + return ICE_DDP_PKG_INVALID_FILE; + + /* all segments must fit within length */ + for (i = 0; i < seg_count; i++) { + u32 off = le32_to_cpu(pkg->seg_offset[i]); + struct ice_generic_seg_hdr *seg; + + /* segment header must fit */ + if (len < off + sizeof(*seg)) + return ICE_DDP_PKG_INVALID_FILE; + + seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off); + + /* segment body must fit */ + if (len < off + le32_to_cpu(seg->seg_size)) + return ICE_DDP_PKG_INVALID_FILE; + } + + return ICE_DDP_PKG_SUCCESS; +} + +/** + * ice_free_seg - free package segment pointer + * @hw: pointer to the hardware structure + * + * Frees the package segment pointer in the proper manner, depending on if the + * segment was allocated or just the passed in pointer was stored. + */ +void ice_free_seg(struct ice_hw *hw) +{ + if (hw->pkg_copy) { + devm_kfree(ice_hw_to_dev(hw), hw->pkg_copy); + hw->pkg_copy = NULL; + hw->pkg_size = 0; + } + hw->seg = NULL; +} + +/** + * ice_chk_pkg_version - check package version for compatibility with driver + * @pkg_ver: pointer to a version structure to check + * + * Check to make sure that the package about to be downloaded is compatible with + * the driver. To be compatible, the major and minor components of the package + * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR + * definitions. 
+ */ +static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver) +{ + if (pkg_ver->major > ICE_PKG_SUPP_VER_MAJ || + (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ && + pkg_ver->minor > ICE_PKG_SUPP_VER_MNR)) + return ICE_DDP_PKG_FILE_VERSION_TOO_HIGH; + else if (pkg_ver->major < ICE_PKG_SUPP_VER_MAJ || + (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ && + pkg_ver->minor < ICE_PKG_SUPP_VER_MNR)) + return ICE_DDP_PKG_FILE_VERSION_TOO_LOW; + + return ICE_DDP_PKG_SUCCESS; +} + +/** + * ice_pkg_val_buf + * @buf: pointer to the ice buffer + * + * This helper function validates a buffer's header. + */ +struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf) +{ + struct ice_buf_hdr *hdr; + u16 section_count; + u16 data_end; + + hdr = (struct ice_buf_hdr *)buf->buf; + /* verify data */ + section_count = le16_to_cpu(hdr->section_count); + if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT) + return NULL; + + data_end = le16_to_cpu(hdr->data_end); + if (data_end < ICE_MIN_S_DATA_END || data_end > ICE_MAX_S_DATA_END) + return NULL; + + return hdr; +} + +/** + * ice_find_buf_table + * @ice_seg: pointer to the ice segment + * + * Returns the address of the buffer table within the ice segment. + */ +static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg) +{ + struct ice_nvm_table *nvms = (struct ice_nvm_table *) + (ice_seg->device_table + le32_to_cpu(ice_seg->device_table_count)); + + return (__force struct ice_buf_table *)(nvms->vers + + le32_to_cpu(nvms->table_count)); +} + +/** + * ice_pkg_enum_buf + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) + * @state: pointer to the enum state + * + * This function will enumerate all the buffers in the ice segment. The first + * call is made with the ice_seg parameter non-NULL; on subsequent calls, + * ice_seg is set to NULL which continues the enumeration. When the function + * returns a NULL pointer, then the end of the buffers has been reached, or an + * unexpected value has been detected (for example an invalid section count or + * an invalid buffer end value). + */ +static struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg, + struct ice_pkg_enum *state) +{ + if (ice_seg) { + state->buf_table = ice_find_buf_table(ice_seg); + if (!state->buf_table) + return NULL; + + state->buf_idx = 0; + return ice_pkg_val_buf(state->buf_table->buf_array); + } + + if (++state->buf_idx < le32_to_cpu(state->buf_table->buf_count)) + return ice_pkg_val_buf(state->buf_table->buf_array + + state->buf_idx); + else + return NULL; +} + +/** + * ice_pkg_advance_sect + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) + * @state: pointer to the enum state + * + * This helper function will advance the section within the ice segment, + * also advancing the buffer if needed. + */ +static bool ice_pkg_advance_sect(struct ice_seg *ice_seg, + struct ice_pkg_enum *state) +{ + if (!ice_seg && !state->buf) + return false; + + if (!ice_seg && state->buf) + if (++state->sect_idx < le16_to_cpu(state->buf->section_count)) + return true; + + state->buf = ice_pkg_enum_buf(ice_seg, state); + if (!state->buf) + return false; + + /* start of new buffer, reset section index */ + state->sect_idx = 0; + return true; +} + +/** + * ice_pkg_enum_section + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) + * @state: pointer to the enum state + * @sect_type: section type to enumerate + * + * This function will enumerate all the sections of a particular type in the + * ice segment. 
The first call is made with the ice_seg parameter non-NULL; + * on subsequent calls, ice_seg is set to NULL which continues the enumeration. + * When the function returns a NULL pointer, then the end of the matching + * sections has been reached. + */ +void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state, + u32 sect_type) +{ + u16 offset, size; + + if (ice_seg) + state->type = sect_type; + + if (!ice_pkg_advance_sect(ice_seg, state)) + return NULL; + + /* scan for next matching section */ + while (state->buf->section_entry[state->sect_idx].type != + cpu_to_le32(state->type)) + if (!ice_pkg_advance_sect(NULL, state)) + return NULL; + + /* validate section */ + offset = le16_to_cpu(state->buf->section_entry[state->sect_idx].offset); + if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF) + return NULL; + + size = le16_to_cpu(state->buf->section_entry[state->sect_idx].size); + if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) + return NULL; + + /* make sure the section fits in the buffer */ + if (offset + size > ICE_PKG_BUF_SIZE) + return NULL; + + state->sect_type = + le32_to_cpu(state->buf->section_entry[state->sect_idx].type); + + /* calc pointer to this section */ + state->sect = + ((u8 *)state->buf) + + le16_to_cpu(state->buf->section_entry[state->sect_idx].offset); + + return state->sect; +} + +/** + * ice_pkg_enum_entry + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) + * @state: pointer to the enum state + * @sect_type: section type to enumerate + * @offset: pointer to variable that receives the offset in the table (optional) + * @handler: function that handles access to the entries into the section type + * + * This function will enumerate all the entries in particular section type in + * the ice segment. The first call is made with the ice_seg parameter non-NULL; + * on subsequent calls, ice_seg is set to NULL which continues the enumeration. + * When the function returns a NULL pointer, then the end of the entries has + * been reached. + * + * Since each section may have a different header and entry size, the handler + * function is needed to determine the number and location entries in each + * section. + * + * The offset parameter is optional, but should be used for sections that + * contain an offset for each section table. For such cases, the section handler + * function must return the appropriate offset + index to give the absolution + * offset for each entry. For example, if the base for a section's header + * indicates a base offset of 10, and the index for the entry is 2, then + * section handler function should set the offset to 10 + 2 = 12. 
+ */ +static void *ice_pkg_enum_entry(struct ice_seg *ice_seg, + struct ice_pkg_enum *state, u32 sect_type, + u32 *offset, + void *(*handler)(u32 sect_type, void *section, + u32 index, u32 *offset)) +{ + void *entry; + + if (ice_seg) { + if (!handler) + return NULL; + + if (!ice_pkg_enum_section(ice_seg, state, sect_type)) + return NULL; + + state->entry_idx = 0; + state->handler = handler; + } else { + state->entry_idx++; + } + + if (!state->handler) + return NULL; + + /* get entry */ + entry = state->handler(state->sect_type, state->sect, state->entry_idx, + offset); + if (!entry) { + /* end of a section, look for another section of this type */ + if (!ice_pkg_enum_section(NULL, state, 0)) + return NULL; + + state->entry_idx = 0; + entry = state->handler(state->sect_type, state->sect, + state->entry_idx, offset); + } + + return entry; +} + +/** + * ice_sw_fv_handler + * @sect_type: section type + * @section: pointer to section + * @index: index of the field vector entry to be returned + * @offset: ptr to variable that receives the offset in the field vector table + * + * This is a callback function that can be passed to ice_pkg_enum_entry. + * This function treats the given section as of type ice_sw_fv_section and + * enumerates offset field. "offset" is an index into the field vector table. + */ +static void *ice_sw_fv_handler(u32 sect_type, void *section, u32 index, + u32 *offset) +{ + struct ice_sw_fv_section *fv_section = section; + + if (!section || sect_type != ICE_SID_FLD_VEC_SW) + return NULL; + if (index >= le16_to_cpu(fv_section->count)) + return NULL; + if (offset) + /* "index" passed in to this function is relative to a given + * 4k block. To get to the true index into the field vector + * table need to add the relative index to the base_offset + * field of this section + */ + *offset = le16_to_cpu(fv_section->base_offset) + index; + return fv_section->fv + index; +} + +/** + * ice_get_prof_index_max - get the max profile index for used profile + * @hw: pointer to the HW struct + * + * Calling this function will get the max profile index for used profile + * and store the index number in struct ice_switch_info *switch_info + * in HW for following use. + */ +static int ice_get_prof_index_max(struct ice_hw *hw) +{ + u16 prof_index = 0, j, max_prof_index = 0; + struct ice_pkg_enum state; + struct ice_seg *ice_seg; + bool flag = false; + struct ice_fv *fv; + u32 offset; + + memset(&state, 0, sizeof(state)); + + if (!hw->seg) + return -EINVAL; + + ice_seg = hw->seg; + + do { + fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, + &offset, ice_sw_fv_handler); + if (!fv) + break; + ice_seg = NULL; + + /* in the profile that not be used, the prot_id is set to 0xff + * and the off is set to 0x1ff for all the field vectors. 
+ */ + for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) + if (fv->ew[j].prot_id != ICE_PROT_INVALID || + fv->ew[j].off != ICE_FV_OFFSET_INVAL) + flag = true; + if (flag && prof_index > max_prof_index) + max_prof_index = prof_index; + + prof_index++; + flag = false; + } while (fv); + + hw->switch_info->max_used_prof_index = max_prof_index; + + return 0; +} + +/** + * ice_get_ddp_pkg_state - get DDP pkg state after download + * @hw: pointer to the HW struct + * @already_loaded: indicates if pkg was already loaded onto the device + */ +static enum ice_ddp_state ice_get_ddp_pkg_state(struct ice_hw *hw, + bool already_loaded) +{ + if (hw->pkg_ver.major == hw->active_pkg_ver.major && + hw->pkg_ver.minor == hw->active_pkg_ver.minor && + hw->pkg_ver.update == hw->active_pkg_ver.update && + hw->pkg_ver.draft == hw->active_pkg_ver.draft && + !memcmp(hw->pkg_name, hw->active_pkg_name, sizeof(hw->pkg_name))) { + if (already_loaded) + return ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED; + else + return ICE_DDP_PKG_SUCCESS; + } else if (hw->active_pkg_ver.major != ICE_PKG_SUPP_VER_MAJ || + hw->active_pkg_ver.minor != ICE_PKG_SUPP_VER_MNR) { + return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED; + } else if (hw->active_pkg_ver.major == ICE_PKG_SUPP_VER_MAJ && + hw->active_pkg_ver.minor == ICE_PKG_SUPP_VER_MNR) { + return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED; + } else { + return ICE_DDP_PKG_ERR; + } +} + +/** + * ice_init_pkg_regs - initialize additional package registers + * @hw: pointer to the hardware structure + */ +static void ice_init_pkg_regs(struct ice_hw *hw) +{ +#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF +#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF +#define ICE_SW_BLK_IDX 0 + + /* setup Switch block input mask, which is 48-bits in two parts */ + wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_L); + wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_H); +} + +/** + * ice_marker_ptype_tcam_handler + * @sect_type: section type + * @section: pointer to section + * @index: index of the Marker PType TCAM entry to be returned + * @offset: pointer to receive absolute offset, always 0 for ptype TCAM sections + * + * This is a callback function that can be passed to ice_pkg_enum_entry. + * Handles enumeration of individual Marker PType TCAM entries. 
+ */ +static void *ice_marker_ptype_tcam_handler(u32 sect_type, void *section, + u32 index, u32 *offset) +{ + struct ice_marker_ptype_tcam_section *marker_ptype; + + if (sect_type != ICE_SID_RXPARSER_MARKER_PTYPE) + return NULL; + + if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF) + return NULL; + + if (offset) + *offset = 0; + + marker_ptype = section; + if (index >= le16_to_cpu(marker_ptype->count)) + return NULL; + + return marker_ptype->tcam + index; +} + +/** + * ice_add_dvm_hint + * @hw: pointer to the HW structure + * @val: value of the boost entry + * @enable: true if entry needs to be enabled, or false if needs to be disabled + */ +static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) +{ + if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { + hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val; + hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable; + hw->dvm_upd.count++; + } +} + +/** + * ice_add_tunnel_hint + * @hw: pointer to the HW structure + * @label_name: label text + * @val: value of the tunnel port boost entry + */ +static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) +{ + if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { + u16 i; + + for (i = 0; tnls[i].type != TNL_LAST; i++) { + size_t len = strlen(tnls[i].label_prefix); + + /* Look for matching label start, before continuing */ + if (strncmp(label_name, tnls[i].label_prefix, len)) + continue; + + /* Make sure this label matches our PF. Note that the PF + * character ('0' - '7') will be located where our + * prefix string's null terminator is located. + */ + if ((label_name[len] - '0') == hw->pf_id) { + hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; + hw->tnl.tbl[hw->tnl.count].valid = false; + hw->tnl.tbl[hw->tnl.count].boost_addr = val; + hw->tnl.tbl[hw->tnl.count].port = 0; + hw->tnl.count++; + break; + } + } + } +} + +/** + * ice_label_enum_handler + * @sect_type: section type + * @section: pointer to section + * @index: index of the label entry to be returned + * @offset: pointer to receive absolute offset, always zero for label sections + * + * This is a callback function that can be passed to ice_pkg_enum_entry. + * Handles enumeration of individual label entries. + */ +static void *ice_label_enum_handler(u32 __always_unused sect_type, + void *section, u32 index, u32 *offset) +{ + struct ice_label_section *labels; + + if (!section) + return NULL; + + if (index > ICE_MAX_LABELS_IN_BUF) + return NULL; + + if (offset) + *offset = 0; + + labels = section; + if (index >= le16_to_cpu(labels->count)) + return NULL; + + return labels->label + index; +} + +/** + * ice_enum_labels + * @ice_seg: pointer to the ice segment (NULL on subsequent calls) + * @type: the section type that will contain the label (0 on subsequent calls) + * @state: ice_pkg_enum structure that will hold the state of the enumeration + * @value: pointer to a value that will return the label's value if found + * + * Enumerates a list of labels in the package. The caller will call + * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then call + * ice_enum_labels(NULL, 0, ...) to continue. When the function returns a NULL + * the end of the list has been reached. 
+ */ +static char *ice_enum_labels(struct ice_seg *ice_seg, u32 type, + struct ice_pkg_enum *state, u16 *value) +{ + struct ice_label *label; + + /* Check for valid label section on first call */ + if (type && !(type >= ICE_SID_LBL_FIRST && type <= ICE_SID_LBL_LAST)) + return NULL; + + label = ice_pkg_enum_entry(ice_seg, state, type, NULL, + ice_label_enum_handler); + if (!label) + return NULL; + + *value = le16_to_cpu(label->value); + return label->name; +} + +/** + * ice_boost_tcam_handler + * @sect_type: section type + * @section: pointer to section + * @index: index of the boost TCAM entry to be returned + * @offset: pointer to receive absolute offset, always 0 for boost TCAM sections + * + * This is a callback function that can be passed to ice_pkg_enum_entry. + * Handles enumeration of individual boost TCAM entries. + */ +static void *ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, + u32 *offset) +{ + struct ice_boost_tcam_section *boost; + + if (!section) + return NULL; + + if (sect_type != ICE_SID_RXPARSER_BOOST_TCAM) + return NULL; + + if (index > ICE_MAX_BST_TCAMS_IN_BUF) + return NULL; + + if (offset) + *offset = 0; + + boost = section; + if (index >= le16_to_cpu(boost->count)) + return NULL; + + return boost->tcam + index; +} + +/** + * ice_find_boost_entry + * @ice_seg: pointer to the ice segment (non-NULL) + * @addr: Boost TCAM address of entry to search for + * @entry: returns pointer to the entry + * + * Finds a particular Boost TCAM entry and returns a pointer to that entry + * if it is found. The ice_seg parameter must not be NULL since the first call + * to ice_pkg_enum_entry requires a pointer to an actual ice_segment structure. + */ +static int ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, + struct ice_boost_tcam_entry **entry) +{ + struct ice_boost_tcam_entry *tcam; + struct ice_pkg_enum state; + + memset(&state, 0, sizeof(state)); + + if (!ice_seg) + return -EINVAL; + + do { + tcam = ice_pkg_enum_entry(ice_seg, &state, + ICE_SID_RXPARSER_BOOST_TCAM, NULL, + ice_boost_tcam_handler); + if (tcam && le16_to_cpu(tcam->addr) == addr) { + *entry = tcam; + return 0; + } + + ice_seg = NULL; + } while (tcam); + + *entry = NULL; + return -EIO; +} + +/** + * ice_is_init_pkg_successful - check if DDP init was successful + * @state: state of the DDP pkg after download + */ +bool ice_is_init_pkg_successful(enum ice_ddp_state state) +{ + switch (state) { + case ICE_DDP_PKG_SUCCESS: + case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED: + case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED: + return true; + default: + return false; + } +} + +/** + * ice_pkg_buf_alloc + * @hw: pointer to the HW structure + * + * Allocates a package buffer and returns a pointer to the buffer header. + * Note: all package contents must be in Little Endian form. 
+ */ +struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) +{ + struct ice_buf_build *bld; + struct ice_buf_hdr *buf; + + bld = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*bld), GFP_KERNEL); + if (!bld) + return NULL; + + buf = (struct ice_buf_hdr *)bld; + buf->data_end = + cpu_to_le16(offsetof(struct ice_buf_hdr, section_entry)); + return bld; +} + +static bool ice_is_gtp_u_profile(u16 prof_idx) +{ + return (prof_idx >= ICE_PROFID_IPV6_GTPU_TEID && + prof_idx <= ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER) || + prof_idx == ICE_PROFID_IPV4_GTPU_TEID; +} + +static bool ice_is_gtp_c_profile(u16 prof_idx) +{ + switch (prof_idx) { + case ICE_PROFID_IPV4_GTPC_TEID: + case ICE_PROFID_IPV4_GTPC_NO_TEID: + case ICE_PROFID_IPV6_GTPC_TEID: + case ICE_PROFID_IPV6_GTPC_NO_TEID: + return true; + default: + return false; + } +} + +/** + * ice_get_sw_prof_type - determine switch profile type + * @hw: pointer to the HW structure + * @fv: pointer to the switch field vector + * @prof_idx: profile index to check + */ +static enum ice_prof_type ice_get_sw_prof_type(struct ice_hw *hw, + struct ice_fv *fv, u32 prof_idx) +{ + u16 i; + + if (ice_is_gtp_c_profile(prof_idx)) + return ICE_PROF_TUN_GTPC; + + if (ice_is_gtp_u_profile(prof_idx)) + return ICE_PROF_TUN_GTPU; + + for (i = 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) { + /* UDP tunnel will have UDP_OF protocol ID and VNI offset */ + if (fv->ew[i].prot_id == (u8)ICE_PROT_UDP_OF && + fv->ew[i].off == ICE_VNI_OFFSET) + return ICE_PROF_TUN_UDP; + + /* GRE tunnel will have GRE protocol */ + if (fv->ew[i].prot_id == (u8)ICE_PROT_GRE_OF) + return ICE_PROF_TUN_GRE; + } + + return ICE_PROF_NON_TUN; +} + +/** + * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profile type + * @hw: pointer to hardware structure + * @req_profs: type of profiles requested + * @bm: pointer to memory for returning the bitmap of field vectors + */ +void ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, + unsigned long *bm) +{ + struct ice_pkg_enum state; + struct ice_seg *ice_seg; + struct ice_fv *fv; + + if (req_profs == ICE_PROF_ALL) { + bitmap_set(bm, 0, ICE_MAX_NUM_PROFILES); + return; + } + + memset(&state, 0, sizeof(state)); + bitmap_zero(bm, ICE_MAX_NUM_PROFILES); + ice_seg = hw->seg; + do { + enum ice_prof_type prof_type; + u32 offset; + + fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, + &offset, ice_sw_fv_handler); + ice_seg = NULL; + + if (fv) { + /* Determine field vector type */ + prof_type = ice_get_sw_prof_type(hw, fv, offset); + + if (req_profs & prof_type) + set_bit((u16)offset, bm); + } + } while (fv); +} + +/** + * ice_get_sw_fv_list + * @hw: pointer to the HW structure + * @lkups: list of protocol types + * @bm: bitmap of field vectors to consider + * @fv_list: Head of a list + * + * Finds all the field vector entries from switch block that contain + * a given protocol ID and offset and returns a list of structures of type + * "ice_sw_fv_list_entry". Every structure in the list has a field vector + * definition and profile ID information + * NOTE: The caller of the function is responsible for freeing the memory + * allocated for every list entry. 
+ */ +int ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, + unsigned long *bm, struct list_head *fv_list) +{ + struct ice_sw_fv_list_entry *fvl; + struct ice_sw_fv_list_entry *tmp; + struct ice_pkg_enum state; + struct ice_seg *ice_seg; + struct ice_fv *fv; + u32 offset; + + memset(&state, 0, sizeof(state)); + + if (!lkups->n_val_words || !hw->seg) + return -EINVAL; + + ice_seg = hw->seg; + do { + u16 i; + + fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, + &offset, ice_sw_fv_handler); + if (!fv) + break; + ice_seg = NULL; + + /* If field vector is not in the bitmap list, then skip this + * profile. + */ + if (!test_bit((u16)offset, bm)) + continue; + + for (i = 0; i < lkups->n_val_words; i++) { + int j; + + for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) + if (fv->ew[j].prot_id == + lkups->fv_words[i].prot_id && + fv->ew[j].off == lkups->fv_words[i].off) + break; + if (j >= hw->blk[ICE_BLK_SW].es.fvw) + break; + if (i + 1 == lkups->n_val_words) { + fvl = devm_kzalloc(ice_hw_to_dev(hw), + sizeof(*fvl), GFP_KERNEL); + if (!fvl) + goto err; + fvl->fv_ptr = fv; + fvl->profile_id = offset; + list_add(&fvl->list_entry, fv_list); + break; + } + } + } while (fv); + if (list_empty(fv_list)) { + dev_warn(ice_hw_to_dev(hw), + "Required profiles not found in currently loaded DDP package"); + return -EIO; + } + + return 0; + +err: + list_for_each_entry_safe(fvl, tmp, fv_list, list_entry) { + list_del(&fvl->list_entry); + devm_kfree(ice_hw_to_dev(hw), fvl); + } + + return -ENOMEM; +} + +/** + * ice_init_prof_result_bm - Initialize the profile result index bitmap + * @hw: pointer to hardware structure + */ +void ice_init_prof_result_bm(struct ice_hw *hw) +{ + struct ice_pkg_enum state; + struct ice_seg *ice_seg; + struct ice_fv *fv; + + memset(&state, 0, sizeof(state)); + + if (!hw->seg) + return; + + ice_seg = hw->seg; + do { + u32 off; + u16 i; + + fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, + &off, ice_sw_fv_handler); + ice_seg = NULL; + if (!fv) + break; + + bitmap_zero(hw->switch_info->prof_res_bm[off], + ICE_MAX_FV_WORDS); + + /* Determine empty field vector indices, these can be + * used for recipe results. Skip index 0, since it is + * always used for Switch ID. + */ + for (i = 1; i < ICE_MAX_FV_WORDS; i++) + if (fv->ew[i].prot_id == ICE_PROT_INVALID && + fv->ew[i].off == ICE_FV_OFFSET_INVAL) + set_bit(i, hw->switch_info->prof_res_bm[off]); + } while (fv); +} + +/** + * ice_pkg_buf_free + * @hw: pointer to the HW structure + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) + * + * Frees a package buffer + */ +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) +{ + devm_kfree(ice_hw_to_dev(hw), bld); +} + +/** + * ice_pkg_buf_reserve_section + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) + * @count: the number of sections to reserve + * + * Reserves one or more section table entries in a package buffer. This routine + * can be called multiple times as long as they are made before calling + * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section() + * is called once, the number of sections that can be allocated will not be able + * to be increased; not using all reserved sections is fine, but this will + * result in some wasted space in the buffer. + * Note: all package contents must be in Little Endian form. 
+ */ +int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) +{ + struct ice_buf_hdr *buf; + u16 section_count; + u16 data_end; + + if (!bld) + return -EINVAL; + + buf = (struct ice_buf_hdr *)&bld->buf; + + /* already an active section, can't increase table size */ + section_count = le16_to_cpu(buf->section_count); + if (section_count > 0) + return -EIO; + + if (bld->reserved_section_table_entries + count > ICE_MAX_S_COUNT) + return -EIO; + bld->reserved_section_table_entries += count; + + data_end = le16_to_cpu(buf->data_end) + + flex_array_size(buf, section_entry, count); + buf->data_end = cpu_to_le16(data_end); + + return 0; +} + +/** + * ice_pkg_buf_alloc_section + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) + * @type: the section type value + * @size: the size of the section to reserve (in bytes) + * + * Reserves memory in the buffer for a section's content and updates the + * buffers' status accordingly. This routine returns a pointer to the first + * byte of the section start within the buffer, which is used to fill in the + * section contents. + * Note: all package contents must be in Little Endian form. + */ +void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) +{ + struct ice_buf_hdr *buf; + u16 sect_count; + u16 data_end; + + if (!bld || !type || !size) + return NULL; + + buf = (struct ice_buf_hdr *)&bld->buf; + + /* check for enough space left in buffer */ + data_end = le16_to_cpu(buf->data_end); + + /* section start must align on 4 byte boundary */ + data_end = ALIGN(data_end, 4); + + if ((data_end + size) > ICE_MAX_S_DATA_END) + return NULL; + + /* check for more available section table entries */ + sect_count = le16_to_cpu(buf->section_count); + if (sect_count < bld->reserved_section_table_entries) { + void *section_ptr = ((u8 *)buf) + data_end; + + buf->section_entry[sect_count].offset = cpu_to_le16(data_end); + buf->section_entry[sect_count].size = cpu_to_le16(size); + buf->section_entry[sect_count].type = cpu_to_le32(type); + + data_end += size; + buf->data_end = cpu_to_le16(data_end); + + buf->section_count = cpu_to_le16(sect_count + 1); + return section_ptr; + } + + /* no free section table entries */ + return NULL; +} + +/** + * ice_pkg_buf_alloc_single_section + * @hw: pointer to the HW structure + * @type: the section type value + * @size: the size of the section to reserve (in bytes) + * @section: returns pointer to the section + * + * Allocates a package buffer with a single section. + * Note: all package contents must be in Little Endian form. + */ +struct ice_buf_build *ice_pkg_buf_alloc_single_section(struct ice_hw *hw, + u32 type, u16 size, + void **section) +{ + struct ice_buf_build *buf; + + if (!section) + return NULL; + + buf = ice_pkg_buf_alloc(hw); + if (!buf) + return NULL; + + if (ice_pkg_buf_reserve_section(buf, 1)) + goto ice_pkg_buf_alloc_single_section_err; + + *section = ice_pkg_buf_alloc_section(buf, type, size); + if (!*section) + goto ice_pkg_buf_alloc_single_section_err; + + return buf; + +ice_pkg_buf_alloc_single_section_err: + ice_pkg_buf_free(hw, buf); + return NULL; +} + +/** + * ice_pkg_buf_get_active_sections + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) + * + * Returns the number of active sections. Before using the package buffer + * in an update package command, the caller should make sure that there is at + * least one active section - otherwise, the buffer is not legal and should + * not be used. 
+ * Note: all package contents must be in Little Endian form. + */ +u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) +{ + struct ice_buf_hdr *buf; + + if (!bld) + return 0; + + buf = (struct ice_buf_hdr *)&bld->buf; + return le16_to_cpu(buf->section_count); +} + +/** + * ice_pkg_buf + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) + * + * Return a pointer to the buffer's header + */ +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) +{ + if (!bld) + return NULL; + + return &bld->buf; +} + +static enum ice_ddp_state ice_map_aq_err_to_ddp_state(enum ice_aq_err aq_err) +{ + switch (aq_err) { + case ICE_AQ_RC_ENOSEC: + case ICE_AQ_RC_EBADSIG: + return ICE_DDP_PKG_FILE_SIGNATURE_INVALID; + case ICE_AQ_RC_ESVN: + return ICE_DDP_PKG_FILE_REVISION_TOO_LOW; + case ICE_AQ_RC_EBADMAN: + case ICE_AQ_RC_EBADBUF: + return ICE_DDP_PKG_LOAD_ERROR; + default: + return ICE_DDP_PKG_ERR; + } +} + +/** + * ice_acquire_global_cfg_lock + * @hw: pointer to the HW structure + * @access: access type (read or write) + * + * This function will request ownership of the global config lock for reading + * or writing of the package. When attempting to obtain write access, the + * caller must check for the following two return values: + * + * 0 - Means the caller has acquired the global config lock + * and can perform writing of the package. + * -EALREADY - Indicates another driver has already written the + * package or has found that no update was necessary; in + * this case, the caller can just skip performing any + * update of the package. + */ +static int ice_acquire_global_cfg_lock(struct ice_hw *hw, + enum ice_aq_res_access_type access) +{ + int status; + + status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access, + ICE_GLOBAL_CFG_LOCK_TIMEOUT); + + if (!status) + mutex_lock(&ice_global_cfg_lock_sw); + else if (status == -EALREADY) + ice_debug(hw, ICE_DBG_PKG, + "Global config lock: No work to do\n"); + + return status; +} + +/** + * ice_release_global_cfg_lock + * @hw: pointer to the HW structure + * + * This function will release the global config lock. + */ +static void ice_release_global_cfg_lock(struct ice_hw *hw) +{ + mutex_unlock(&ice_global_cfg_lock_sw); + ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID); +} + +/** + * ice_dwnld_cfg_bufs + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + * + * Obtains global config lock and downloads the package configuration buffers + * to the firmware. Metadata buffers are skipped, and the first metadata buffer + * found indicates that the rest of the buffers are all metadata buffers. + */ +static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw, + struct ice_buf *bufs, u32 count) +{ + enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; + struct ice_buf_hdr *bh; + enum ice_aq_err err; + u32 offset, info, i; + int status; + + if (!bufs || !count) + return ICE_DDP_PKG_ERR; + + /* If the first buffer's first section has its metadata bit set + * then there are no buffers to be downloaded, and the operation is + * considered a success. 
+ */ + bh = (struct ice_buf_hdr *)bufs; + if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF) + return ICE_DDP_PKG_SUCCESS; + + status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); + if (status) { + if (status == -EALREADY) + return ICE_DDP_PKG_ALREADY_LOADED; + return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status); + } + + for (i = 0; i < count; i++) { + bool last = ((i + 1) == count); + + if (!last) { + /* check next buffer for metadata flag */ + bh = (struct ice_buf_hdr *)(bufs + i + 1); + + /* A set metadata flag in the next buffer will signal + * that the current buffer will be the last buffer + * downloaded + */ + if (le16_to_cpu(bh->section_count)) + if (le32_to_cpu(bh->section_entry[0].type) & + ICE_METADATA_BUF) + last = true; + } + + bh = (struct ice_buf_hdr *)(bufs + i); + + status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last, + &offset, &info, NULL); + + /* Save AQ status from download package */ + if (status) { + ice_debug(hw, ICE_DBG_PKG, + "Pkg download failed: err %d off %d inf %d\n", + status, offset, info); + err = hw->adminq.sq_last_status; + state = ice_map_aq_err_to_ddp_state(err); + break; + } + + if (last) + break; + } + + if (!status) { + status = ice_set_vlan_mode(hw); + if (status) + ice_debug(hw, ICE_DBG_PKG, + "Failed to set VLAN mode: err %d\n", status); + } + + ice_release_global_cfg_lock(hw); + + return state; +} + +/** + * ice_aq_get_pkg_info_list + * @hw: pointer to the hardware structure + * @pkg_info: the buffer which will receive the information list + * @buf_size: the size of the pkg_info information buffer + * @cd: pointer to command details structure or NULL + * + * Get Package Info List (0x0C43) + */ +static int ice_aq_get_pkg_info_list(struct ice_hw *hw, + struct ice_aqc_get_pkg_info_resp *pkg_info, + u16 buf_size, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list); + + return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd); +} + +/** + * ice_download_pkg + * @hw: pointer to the hardware structure + * @ice_seg: pointer to the segment of the package to be downloaded + * + * Handles the download of a complete package. 
+ */ +static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw, + struct ice_seg *ice_seg) +{ + struct ice_buf_table *ice_buf_tbl; + int status; + + ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", + ice_seg->hdr.seg_format_ver.major, + ice_seg->hdr.seg_format_ver.minor, + ice_seg->hdr.seg_format_ver.update, + ice_seg->hdr.seg_format_ver.draft); + + ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n", + le32_to_cpu(ice_seg->hdr.seg_type), + le32_to_cpu(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id); + + ice_buf_tbl = ice_find_buf_table(ice_seg); + + ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", + le32_to_cpu(ice_buf_tbl->buf_count)); + + status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, + le32_to_cpu(ice_buf_tbl->buf_count)); + + ice_post_pkg_dwnld_vlan_mode_cfg(hw); + + return status; +} + +/** + * ice_aq_download_pkg + * @hw: pointer to the hardware structure + * @pkg_buf: the package buffer to transfer + * @buf_size: the size of the package buffer + * @last_buf: last buffer indicator + * @error_offset: returns error offset + * @error_info: returns error information + * @cd: pointer to command details structure or NULL + * + * Download Package (0x0C40) + */ +int ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, bool last_buf, u32 *error_offset, + u32 *error_info, struct ice_sq_cd *cd) +{ + struct ice_aqc_download_pkg *cmd; + struct ice_aq_desc desc; + int status; + + if (error_offset) + *error_offset = 0; + if (error_info) + *error_info = 0; + + cmd = &desc.params.download_pkg; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + if (last_buf) + cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF; + + status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); + if (status == -EIO) { + /* Read error from buffer only when the FW returned an error */ + struct ice_aqc_download_pkg_resp *resp; + + resp = (struct ice_aqc_download_pkg_resp *)pkg_buf; + if (error_offset) + *error_offset = le32_to_cpu(resp->error_offset); + if (error_info) + *error_info = le32_to_cpu(resp->error_info); + } + + return status; +} + +/** + * ice_aq_upload_section + * @hw: pointer to the hardware structure + * @pkg_buf: the package buffer which will receive the section + * @buf_size: the size of the package buffer + * @cd: pointer to command details structure or NULL + * + * Upload Section (0x0C41) + */ +int ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); +} + +/** + * ice_aq_update_pkg + * @hw: pointer to the hardware structure + * @pkg_buf: the package cmd buffer + * @buf_size: the size of the package cmd buffer + * @last_buf: last buffer indicator + * @error_offset: returns error offset + * @error_info: returns error information + * @cd: pointer to command details structure or NULL + * + * Update Package (0x0C42) + */ +static int ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, bool last_buf, u32 *error_offset, + u32 *error_info, struct ice_sq_cd *cd) +{ + struct ice_aqc_download_pkg *cmd; + struct ice_aq_desc desc; + int status; + + if (error_offset) + *error_offset = 0; + if (error_info) + *error_info = 0; + + cmd = &desc.params.download_pkg; + ice_fill_dflt_direct_cmd_desc(&desc, 
ice_aqc_opc_update_pkg); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + if (last_buf) + cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF; + + status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); + if (status == -EIO) { + /* Read error from buffer only when the FW returned an error */ + struct ice_aqc_download_pkg_resp *resp; + + resp = (struct ice_aqc_download_pkg_resp *)pkg_buf; + if (error_offset) + *error_offset = le32_to_cpu(resp->error_offset); + if (error_info) + *error_info = le32_to_cpu(resp->error_info); + } + + return status; +} + +/** + * ice_update_pkg_no_lock + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + */ +int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +{ + int status = 0; + u32 i; + + for (i = 0; i < count; i++) { + struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); + bool last = ((i + 1) == count); + u32 offset, info; + + status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end), + last, &offset, &info, NULL); + + if (status) { + ice_debug(hw, ICE_DBG_PKG, + "Update pkg failed: err %d off %d inf %d\n", + status, offset, info); + break; + } + } + + return status; +} + +/** + * ice_update_pkg + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + * + * Obtains change lock and updates package. + */ +int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +{ + int status; + + status = ice_acquire_change_lock(hw, ICE_RES_WRITE); + if (status) + return status; + + status = ice_update_pkg_no_lock(hw, bufs, count); + + ice_release_change_lock(hw); + + return status; +} + +/** + * ice_find_seg_in_pkg + * @hw: pointer to the hardware structure + * @seg_type: the segment type to search for (i.e., SEGMENT_TYPE_CPK) + * @pkg_hdr: pointer to the package header to be searched + * + * This function searches a package file for a particular segment type. On + * success it returns a pointer to the segment header, otherwise it will + * return NULL. + */ +struct ice_generic_seg_hdr *ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, + struct ice_pkg_hdr *pkg_hdr) +{ + u32 i; + + ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n", + pkg_hdr->pkg_format_ver.major, pkg_hdr->pkg_format_ver.minor, + pkg_hdr->pkg_format_ver.update, + pkg_hdr->pkg_format_ver.draft); + + /* Search all package segments for the requested segment type */ + for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) { + struct ice_generic_seg_hdr *seg; + + seg = (struct ice_generic_seg_hdr + *)((u8 *)pkg_hdr + + le32_to_cpu(pkg_hdr->seg_offset[i])); + + if (le32_to_cpu(seg->seg_type) == seg_type) + return seg; + } + + return NULL; +} + +/** + * ice_init_pkg_info + * @hw: pointer to the hardware structure + * @pkg_hdr: pointer to the driver's package hdr + * + * Saves off the package details into the HW structure. 
+ */ +static enum ice_ddp_state ice_init_pkg_info(struct ice_hw *hw, + struct ice_pkg_hdr *pkg_hdr) +{ + struct ice_generic_seg_hdr *seg_hdr; + + if (!pkg_hdr) + return ICE_DDP_PKG_ERR; + + seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr); + if (seg_hdr) { + struct ice_meta_sect *meta; + struct ice_pkg_enum state; + + memset(&state, 0, sizeof(state)); + + /* Get package information from the Metadata Section */ + meta = ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state, + ICE_SID_METADATA); + if (!meta) { + ice_debug(hw, ICE_DBG_INIT, + "Did not find ice metadata section in package\n"); + return ICE_DDP_PKG_INVALID_FILE; + } + + hw->pkg_ver = meta->ver; + memcpy(hw->pkg_name, meta->name, sizeof(meta->name)); + + ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n", + meta->ver.major, meta->ver.minor, meta->ver.update, + meta->ver.draft, meta->name); + + hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver; + memcpy(hw->ice_seg_id, seg_hdr->seg_id, sizeof(hw->ice_seg_id)); + + ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n", + seg_hdr->seg_format_ver.major, + seg_hdr->seg_format_ver.minor, + seg_hdr->seg_format_ver.update, + seg_hdr->seg_format_ver.draft, seg_hdr->seg_id); + } else { + ice_debug(hw, ICE_DBG_INIT, + "Did not find ice segment in driver package\n"); + return ICE_DDP_PKG_INVALID_FILE; + } + + return ICE_DDP_PKG_SUCCESS; +} + +/** + * ice_get_pkg_info + * @hw: pointer to the hardware structure + * + * Store details of the package currently loaded in HW into the HW structure. + */ +static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw) +{ + enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; + struct ice_aqc_get_pkg_info_resp *pkg_info; + u16 size; + u32 i; + + size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT); + pkg_info = kzalloc(size, GFP_KERNEL); + if (!pkg_info) + return ICE_DDP_PKG_ERR; + + if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) { + state = ICE_DDP_PKG_ERR; + goto init_pkg_free_alloc; + } + + for (i = 0; i < le32_to_cpu(pkg_info->count); i++) { +#define ICE_PKG_FLAG_COUNT 4 + char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 }; + u8 place = 0; + + if (pkg_info->pkg_info[i].is_active) { + flags[place++] = 'A'; + hw->active_pkg_ver = pkg_info->pkg_info[i].ver; + hw->active_track_id = + le32_to_cpu(pkg_info->pkg_info[i].track_id); + memcpy(hw->active_pkg_name, pkg_info->pkg_info[i].name, + sizeof(pkg_info->pkg_info[i].name)); + hw->active_pkg_in_nvm = pkg_info->pkg_info[i].is_in_nvm; + } + if (pkg_info->pkg_info[i].is_active_at_boot) + flags[place++] = 'B'; + if (pkg_info->pkg_info[i].is_modified) + flags[place++] = 'M'; + if (pkg_info->pkg_info[i].is_in_nvm) + flags[place++] = 'N'; + + ice_debug(hw, ICE_DBG_PKG, "Pkg[%d]: %d.%d.%d.%d,%s,%s\n", i, + pkg_info->pkg_info[i].ver.major, + pkg_info->pkg_info[i].ver.minor, + pkg_info->pkg_info[i].ver.update, + pkg_info->pkg_info[i].ver.draft, + pkg_info->pkg_info[i].name, flags); + } + +init_pkg_free_alloc: + kfree(pkg_info); + + return state; +} + +/** + * ice_chk_pkg_compat + * @hw: pointer to the hardware structure + * @ospkg: pointer to the package hdr + * @seg: pointer to the package segment hdr + * + * This function checks the package version compatibility with driver and NVM + */ +static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw, + struct ice_pkg_hdr *ospkg, + struct ice_seg **seg) +{ + struct ice_aqc_get_pkg_info_resp *pkg; + enum ice_ddp_state state; + u16 size; + u32 i; + + /* Check package version compatibility */ + state = ice_chk_pkg_version(&hw->pkg_ver); + if (state) { + 
ice_debug(hw, ICE_DBG_INIT, "Package version check failed.\n"); + return state; + } + + /* find ICE segment in given package */ + *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, + ospkg); + if (!*seg) { + ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n"); + return ICE_DDP_PKG_INVALID_FILE; + } + + /* Check if FW is compatible with the OS package */ + size = struct_size(pkg, pkg_info, ICE_PKG_CNT); + pkg = kzalloc(size, GFP_KERNEL); + if (!pkg) + return ICE_DDP_PKG_ERR; + + if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) { + state = ICE_DDP_PKG_LOAD_ERROR; + goto fw_ddp_compat_free_alloc; + } + + for (i = 0; i < le32_to_cpu(pkg->count); i++) { + /* loop till we find the NVM package */ + if (!pkg->pkg_info[i].is_in_nvm) + continue; + if ((*seg)->hdr.seg_format_ver.major != + pkg->pkg_info[i].ver.major || + (*seg)->hdr.seg_format_ver.minor > + pkg->pkg_info[i].ver.minor) { + state = ICE_DDP_PKG_FW_MISMATCH; + ice_debug(hw, ICE_DBG_INIT, + "OS package is not compatible with NVM.\n"); + } + /* done processing NVM package so break */ + break; + } +fw_ddp_compat_free_alloc: + kfree(pkg); + return state; +} + +/** + * ice_init_pkg_hints + * @hw: pointer to the HW structure + * @ice_seg: pointer to the segment of the package scan (non-NULL) + * + * This function will scan the package and save off relevant information + * (hints or metadata) for driver use. The ice_seg parameter must not be NULL + * since the first call to ice_enum_labels requires a pointer to an actual + * ice_seg structure. + */ +static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) +{ + struct ice_pkg_enum state; + char *label_name; + u16 val; + int i; + + memset(&hw->tnl, 0, sizeof(hw->tnl)); + memset(&state, 0, sizeof(state)); + + if (!ice_seg) + return; + + label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, + &val); + + while (label_name) { + if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) + /* check for a tunnel entry */ + ice_add_tunnel_hint(hw, label_name, val); + + /* check for a dvm mode entry */ + else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) + ice_add_dvm_hint(hw, val, true); + + /* check for a svm mode entry */ + else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) + ice_add_dvm_hint(hw, val, false); + + label_name = ice_enum_labels(NULL, 0, &state, &val); + } + + /* Cache the appropriate boost TCAM entry pointers for tunnels */ + for (i = 0; i < hw->tnl.count; i++) { + ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, + &hw->tnl.tbl[i].boost_entry); + if (hw->tnl.tbl[i].boost_entry) { + hw->tnl.tbl[i].valid = true; + if (hw->tnl.tbl[i].type < __TNL_TYPE_CNT) + hw->tnl.valid_count[hw->tnl.tbl[i].type]++; + } + } + + /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */ + for (i = 0; i < hw->dvm_upd.count; i++) + ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr, + &hw->dvm_upd.tbl[i].boost_entry); +} + +/** + * ice_fill_hw_ptype - fill the enabled PTYPE bit information + * @hw: pointer to the HW structure + */ +static void ice_fill_hw_ptype(struct ice_hw *hw) +{ + struct ice_marker_ptype_tcam_entry *tcam; + struct ice_seg *seg = hw->seg; + struct ice_pkg_enum state; + + bitmap_zero(hw->hw_ptype, ICE_FLOW_PTYPE_MAX); + if (!seg) + return; + + memset(&state, 0, sizeof(state)); + + do { + tcam = ice_pkg_enum_entry(seg, &state, + ICE_SID_RXPARSER_MARKER_PTYPE, NULL, + ice_marker_ptype_tcam_handler); + if (tcam && + le16_to_cpu(tcam->addr) < ICE_MARKER_PTYPE_TCAM_ADDR_MAX && 
+ le16_to_cpu(tcam->ptype) < ICE_FLOW_PTYPE_MAX) + set_bit(le16_to_cpu(tcam->ptype), hw->hw_ptype); + + seg = NULL; + } while (tcam); +} + +/** + * ice_init_pkg - initialize/download package + * @hw: pointer to the hardware structure + * @buf: pointer to the package buffer + * @len: size of the package buffer + * + * This function initializes a package. The package contains HW tables + * required to do packet processing. First, the function extracts package + * information such as version. Then it finds the ice configuration segment + * within the package; this function then saves a copy of the segment pointer + * within the supplied package buffer. Next, the function will cache any hints + * from the package, followed by downloading the package itself. Note, that if + * a previous PF driver has already downloaded the package successfully, then + * the current driver will not have to download the package again. + * + * The local package contents will be used to query default behavior and to + * update specific sections of the HW's version of the package (e.g. to update + * the parse graph to understand new protocols). + * + * This function stores a pointer to the package buffer memory, and it is + * expected that the supplied buffer will not be freed immediately. If the + * package buffer needs to be freed, such as when read from a file, use + * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this + * case. + */ +enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) +{ + bool already_loaded = false; + enum ice_ddp_state state; + struct ice_pkg_hdr *pkg; + struct ice_seg *seg; + + if (!buf || !len) + return ICE_DDP_PKG_ERR; + + pkg = (struct ice_pkg_hdr *)buf; + state = ice_verify_pkg(pkg, len); + if (state) { + ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg (err: %d)\n", + state); + return state; + } + + /* initialize package info */ + state = ice_init_pkg_info(hw, pkg); + if (state) + return state; + + /* before downloading the package, check package version for + * compatibility with driver + */ + state = ice_chk_pkg_compat(hw, pkg, &seg); + if (state) + return state; + + /* initialize package hints and then download package */ + ice_init_pkg_hints(hw, seg); + state = ice_download_pkg(hw, seg); + if (state == ICE_DDP_PKG_ALREADY_LOADED) { + ice_debug(hw, ICE_DBG_INIT, + "package previously loaded - no work.\n"); + already_loaded = true; + } + + /* Get information on the package currently loaded in HW, then make sure + * the driver is compatible with this version. + */ + if (!state || state == ICE_DDP_PKG_ALREADY_LOADED) { + state = ice_get_pkg_info(hw); + if (!state) + state = ice_get_ddp_pkg_state(hw, already_loaded); + } + + if (ice_is_init_pkg_successful(state)) { + hw->seg = seg; + /* on successful package download update other required + * registers to support the package and fill HW tables + * with package content. + */ + ice_init_pkg_regs(hw); + ice_fill_blk_tbls(hw); + ice_fill_hw_ptype(hw); + ice_get_prof_index_max(hw); + } else { + ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n", state); + } + + return state; +} + +/** + * ice_copy_and_init_pkg - initialize/download a copy of the package + * @hw: pointer to the hardware structure + * @buf: pointer to the package buffer + * @len: size of the package buffer + * + * This function copies the package buffer, and then calls ice_init_pkg() to + * initialize the copied package contents. 
+ * + * The copying is necessary if the package buffer supplied is constant, or if + * the memory may disappear shortly after calling this function. + * + * If the package buffer resides in the data segment and can be modified, the + * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg(). + * + * However, if the package buffer needs to be copied first, such as when being + * read from a file, the caller should use ice_copy_and_init_pkg(). + * + * This function will first copy the package buffer, before calling + * ice_init_pkg(). The caller is free to immediately destroy the original + * package buffer, as the new copy will be managed by this function and + * related routines. + */ +enum ice_ddp_state ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, + u32 len) +{ + enum ice_ddp_state state; + u8 *buf_copy; + + if (!buf || !len) + return ICE_DDP_PKG_ERR; + + buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL); + + state = ice_init_pkg(hw, buf_copy, len); + if (!ice_is_init_pkg_successful(state)) { + /* Free the copy, since we failed to initialize the package */ + devm_kfree(ice_hw_to_dev(hw), buf_copy); + } else { + /* Track the copied pkg so we can free it later */ + hw->pkg_copy = buf_copy; + hw->pkg_size = len; + } + + return state; +} diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h new file mode 100644 index 000000000000..37eadb3d27a8 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_ddp.h @@ -0,0 +1,445 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2022, Intel Corporation. */ + +#ifndef _ICE_DDP_H_ +#define _ICE_DDP_H_ + +#include "ice_type.h" + +/* Package minimal version supported */ +#define ICE_PKG_SUPP_VER_MAJ 1 +#define ICE_PKG_SUPP_VER_MNR 3 + +/* Package format version */ +#define ICE_PKG_FMT_VER_MAJ 1 +#define ICE_PKG_FMT_VER_MNR 0 +#define ICE_PKG_FMT_VER_UPD 0 +#define ICE_PKG_FMT_VER_DFT 0 + +#define ICE_PKG_CNT 4 + +#define ICE_FV_OFFSET_INVAL 0x1FF + +/* Extraction Sequence (Field Vector) Table */ +struct ice_fv_word { + u8 prot_id; + u16 off; /* Offset within the protocol header */ + u8 resvrd; +} __packed; + +#define ICE_MAX_NUM_PROFILES 256 + +#define ICE_MAX_FV_WORDS 48 +struct ice_fv { + struct ice_fv_word ew[ICE_MAX_FV_WORDS]; +}; + +enum ice_ddp_state { + /* Indicates that this call to ice_init_pkg + * successfully loaded the requested DDP package + */ + ICE_DDP_PKG_SUCCESS = 0, + + /* Generic error for already loaded errors, it is mapped later to + * the more specific one (one of the next 3) + */ + ICE_DDP_PKG_ALREADY_LOADED = -1, + + /* Indicates that a DDP package of the same version has already been + * loaded onto the device by a previous call or by another PF + */ + ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED = -2, + + /* The device has a DDP package that is not supported by the driver */ + ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED = -3, + + /* The device has a compatible package + * (but different from the request) already loaded + */ + ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED = -4, + + /* The firmware loaded on the device is not compatible with + * the DDP package loaded + */ + ICE_DDP_PKG_FW_MISMATCH = -5, + + /* The DDP package file is invalid */ + ICE_DDP_PKG_INVALID_FILE = -6, + + /* The version of the DDP package provided is higher than + * the driver supports + */ + ICE_DDP_PKG_FILE_VERSION_TOO_HIGH = -7, + + /* The version of the DDP package provided is lower than the + * driver supports + */ + ICE_DDP_PKG_FILE_VERSION_TOO_LOW = -8, + + /* The 
signature of the DDP package file provided is invalid */ + ICE_DDP_PKG_FILE_SIGNATURE_INVALID = -9, + + /* The DDP package file security revision is too low and not + * supported by firmware + */ + ICE_DDP_PKG_FILE_REVISION_TOO_LOW = -10, + + /* An error occurred in firmware while loading the DDP package */ + ICE_DDP_PKG_LOAD_ERROR = -11, + + /* Other errors */ + ICE_DDP_PKG_ERR = -12 +}; + +/* Package and segment headers and tables */ +struct ice_pkg_hdr { + struct ice_pkg_ver pkg_format_ver; + __le32 seg_count; + __le32 seg_offset[]; +}; + +/* generic segment */ +struct ice_generic_seg_hdr { +#define SEGMENT_TYPE_METADATA 0x00000001 +#define SEGMENT_TYPE_ICE 0x00000010 + __le32 seg_type; + struct ice_pkg_ver seg_format_ver; + __le32 seg_size; + char seg_id[ICE_PKG_NAME_SIZE]; +}; + +/* ice specific segment */ + +union ice_device_id { + struct { + __le16 device_id; + __le16 vendor_id; + } dev_vend_id; + __le32 id; +}; + +struct ice_device_id_entry { + union ice_device_id device; + union ice_device_id sub_device; +}; + +struct ice_seg { + struct ice_generic_seg_hdr hdr; + __le32 device_table_count; + struct ice_device_id_entry device_table[]; +}; + +struct ice_nvm_table { + __le32 table_count; + __le32 vers[]; +}; + +struct ice_buf { +#define ICE_PKG_BUF_SIZE 4096 + u8 buf[ICE_PKG_BUF_SIZE]; +}; + +struct ice_buf_table { + __le32 buf_count; + struct ice_buf buf_array[]; +}; + +struct ice_run_time_cfg_seg { + struct ice_generic_seg_hdr hdr; + u8 rsvd[8]; + struct ice_buf_table buf_table; +}; + +/* global metadata specific segment */ +struct ice_global_metadata_seg { + struct ice_generic_seg_hdr hdr; + struct ice_pkg_ver pkg_ver; + __le32 rsvd; + char pkg_name[ICE_PKG_NAME_SIZE]; +}; + +#define ICE_MIN_S_OFF 12 +#define ICE_MAX_S_OFF 4095 +#define ICE_MIN_S_SZ 1 +#define ICE_MAX_S_SZ 4084 + +/* section information */ +struct ice_section_entry { + __le32 type; + __le16 offset; + __le16 size; +}; + +#define ICE_MIN_S_COUNT 1 +#define ICE_MAX_S_COUNT 511 +#define ICE_MIN_S_DATA_END 12 +#define ICE_MAX_S_DATA_END 4096 + +#define ICE_METADATA_BUF 0x80000000 + +struct ice_buf_hdr { + __le16 section_count; + __le16 data_end; + struct ice_section_entry section_entry[]; +}; + +#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) \ + ((ICE_PKG_BUF_SIZE - \ + struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) / \ + (ent_sz)) + +/* ice package section IDs */ +#define ICE_SID_METADATA 1 +#define ICE_SID_XLT0_SW 10 +#define ICE_SID_XLT_KEY_BUILDER_SW 11 +#define ICE_SID_XLT1_SW 12 +#define ICE_SID_XLT2_SW 13 +#define ICE_SID_PROFID_TCAM_SW 14 +#define ICE_SID_PROFID_REDIR_SW 15 +#define ICE_SID_FLD_VEC_SW 16 +#define ICE_SID_CDID_KEY_BUILDER_SW 17 + +struct ice_meta_sect { + struct ice_pkg_ver ver; +#define ICE_META_SECT_NAME_SIZE 28 + char name[ICE_META_SECT_NAME_SIZE]; + __le32 track_id; +}; + +#define ICE_SID_CDID_REDIR_SW 18 + +#define ICE_SID_XLT0_ACL 20 +#define ICE_SID_XLT_KEY_BUILDER_ACL 21 +#define ICE_SID_XLT1_ACL 22 +#define ICE_SID_XLT2_ACL 23 +#define ICE_SID_PROFID_TCAM_ACL 24 +#define ICE_SID_PROFID_REDIR_ACL 25 +#define ICE_SID_FLD_VEC_ACL 26 +#define ICE_SID_CDID_KEY_BUILDER_ACL 27 +#define ICE_SID_CDID_REDIR_ACL 28 + +#define ICE_SID_XLT0_FD 30 +#define ICE_SID_XLT_KEY_BUILDER_FD 31 +#define ICE_SID_XLT1_FD 32 +#define ICE_SID_XLT2_FD 33 +#define ICE_SID_PROFID_TCAM_FD 34 +#define ICE_SID_PROFID_REDIR_FD 35 +#define ICE_SID_FLD_VEC_FD 36 +#define ICE_SID_CDID_KEY_BUILDER_FD 37 +#define ICE_SID_CDID_REDIR_FD 38 + +#define ICE_SID_XLT0_RSS 40 +#define ICE_SID_XLT_KEY_BUILDER_RSS 41 
+#define ICE_SID_XLT1_RSS 42 +#define ICE_SID_XLT2_RSS 43 +#define ICE_SID_PROFID_TCAM_RSS 44 +#define ICE_SID_PROFID_REDIR_RSS 45 +#define ICE_SID_FLD_VEC_RSS 46 +#define ICE_SID_CDID_KEY_BUILDER_RSS 47 +#define ICE_SID_CDID_REDIR_RSS 48 + +#define ICE_SID_RXPARSER_MARKER_PTYPE 55 +#define ICE_SID_RXPARSER_BOOST_TCAM 56 +#define ICE_SID_RXPARSER_METADATA_INIT 58 +#define ICE_SID_TXPARSER_BOOST_TCAM 66 + +#define ICE_SID_XLT0_PE 80 +#define ICE_SID_XLT_KEY_BUILDER_PE 81 +#define ICE_SID_XLT1_PE 82 +#define ICE_SID_XLT2_PE 83 +#define ICE_SID_PROFID_TCAM_PE 84 +#define ICE_SID_PROFID_REDIR_PE 85 +#define ICE_SID_FLD_VEC_PE 86 +#define ICE_SID_CDID_KEY_BUILDER_PE 87 +#define ICE_SID_CDID_REDIR_PE 88 + +/* Label Metadata section IDs */ +#define ICE_SID_LBL_FIRST 0x80000010 +#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018 +/* The following define MUST be updated to reflect the last label section ID */ +#define ICE_SID_LBL_LAST 0x80000038 + +/* Label ICE runtime configuration section IDs */ +#define ICE_SID_TX_5_LAYER_TOPO 0x10 + +enum ice_block { + ICE_BLK_SW = 0, + ICE_BLK_ACL, + ICE_BLK_FD, + ICE_BLK_RSS, + ICE_BLK_PE, + ICE_BLK_COUNT +}; + +enum ice_sect { + ICE_XLT0 = 0, + ICE_XLT_KB, + ICE_XLT1, + ICE_XLT2, + ICE_PROF_TCAM, + ICE_PROF_REDIR, + ICE_VEC_TBL, + ICE_CDID_KB, + ICE_CDID_REDIR, + ICE_SECT_COUNT +}; + +/* package labels */ +struct ice_label { + __le16 value; +#define ICE_PKG_LABEL_SIZE 64 + char name[ICE_PKG_LABEL_SIZE]; +}; + +struct ice_label_section { + __le16 count; + struct ice_label label[]; +}; + +#define ICE_MAX_LABELS_IN_BUF \ + ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_label_section *)0, \ + label, 1) - \ + sizeof(struct ice_label), \ + sizeof(struct ice_label)) + +struct ice_sw_fv_section { + __le16 count; + __le16 base_offset; + struct ice_fv fv[]; +}; + +struct ice_sw_fv_list_entry { + struct list_head list_entry; + u32 profile_id; + struct ice_fv *fv_ptr; +}; + +/* The BOOST TCAM stores the match packet header in reverse order, meaning + * the fields are reversed; in addition, this means that the normally big endian + * fields of the packet are now little endian. + */ +struct ice_boost_key_value { +#define ICE_BOOST_REMAINING_HV_KEY 15 + u8 remaining_hv_key[ICE_BOOST_REMAINING_HV_KEY]; + __le16 hv_dst_port_key; + __le16 hv_src_port_key; + u8 tcam_search_key; +} __packed; + +struct ice_boost_key { + struct ice_boost_key_value key; + struct ice_boost_key_value key2; +}; + +/* package Boost TCAM entry */ +struct ice_boost_tcam_entry { + __le16 addr; + __le16 reserved; + /* break up the 40 bytes of key into different fields */ + struct ice_boost_key key; + u8 boost_hit_index_group; + /* The following contains bitfields which are not on byte boundaries. + * These fields are currently unused by driver software. 
+ */ +#define ICE_BOOST_BIT_FIELDS 43 + u8 bit_fields[ICE_BOOST_BIT_FIELDS]; +}; + +struct ice_boost_tcam_section { + __le16 count; + __le16 reserved; + struct ice_boost_tcam_entry tcam[]; +}; + +#define ICE_MAX_BST_TCAMS_IN_BUF \ + ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_boost_tcam_section *)0, \ + tcam, 1) - \ + sizeof(struct ice_boost_tcam_entry), \ + sizeof(struct ice_boost_tcam_entry)) + +/* package Marker Ptype TCAM entry */ +struct ice_marker_ptype_tcam_entry { +#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024 + __le16 addr; + __le16 ptype; + u8 keys[20]; +}; + +struct ice_marker_ptype_tcam_section { + __le16 count; + __le16 reserved; + struct ice_marker_ptype_tcam_entry tcam[]; +}; + +#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF \ + ICE_MAX_ENTRIES_IN_BUF( \ + struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, \ + 1) - \ + sizeof(struct ice_marker_ptype_tcam_entry), \ + sizeof(struct ice_marker_ptype_tcam_entry)) + +struct ice_xlt1_section { + __le16 count; + __le16 offset; + u8 value[]; +}; + +struct ice_xlt2_section { + __le16 count; + __le16 offset; + __le16 value[]; +}; + +struct ice_prof_redir_section { + __le16 count; + __le16 offset; + u8 redir_value[]; +}; + +/* package buffer building */ + +struct ice_buf_build { + struct ice_buf buf; + u16 reserved_section_table_entries; +}; + +struct ice_pkg_enum { + struct ice_buf_table *buf_table; + u32 buf_idx; + + u32 type; + struct ice_buf_hdr *buf; + u32 sect_idx; + void *sect; + u32 sect_type; + + u32 entry_idx; + void *(*handler)(u32 sect_type, void *section, u32 index, u32 *offset); +}; + +int ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, bool last_buf, u32 *error_offset, + u32 *error_info, struct ice_sq_cd *cd); +int ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd); + +void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size); + +enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len); + +struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw); + +struct ice_generic_seg_hdr *ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, + struct ice_pkg_hdr *pkg_hdr); + +int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count); +int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count); + +int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count); +u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld); +void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state, + u32 sect_type); + +struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf); + +#endif diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c index 0fae0186bd85..05f216af8c81 100644 --- a/drivers/net/ethernet/intel/ice/ice_devlink.c +++ b/drivers/net/ethernet/intel/ice/ice_devlink.c @@ -371,10 +371,7 @@ out_free_ctx: /** * ice_devlink_reload_empr_start - Start EMP reset to activate new firmware - * @devlink: pointer to the devlink instance to reload - * @netns_change: if true, the network namespace is changing - * @action: the action to perform. Must be DEVLINK_RELOAD_ACTION_FW_ACTIVATE - * @limit: limits on what reload should do, such as not resetting + * @pf: pointer to the pf instance * @extack: netlink extended ACK structure * * Allow user to activate new Embedded Management Processor firmware by @@ -387,12 +384,9 @@ out_free_ctx: * any source. 
*/ static int -ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change, - enum devlink_reload_action action, - enum devlink_reload_limit limit, +ice_devlink_reload_empr_start(struct ice_pf *pf, struct netlink_ext_ack *extack) { - struct ice_pf *pf = devlink_priv(devlink); struct device *dev = ice_pf_to_dev(pf); struct ice_hw *hw = &pf->hw; u8 pending; @@ -431,11 +425,51 @@ ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change, } /** + * ice_devlink_reload_down - prepare for reload + * @devlink: pointer to the devlink instance to reload + * @netns_change: if true, the network namespace is changing + * @action: the action to perform + * @limit: limits on what reload should do, such as not resetting + * @extack: netlink extended ACK structure + */ +static int +ice_devlink_reload_down(struct devlink *devlink, bool netns_change, + enum devlink_reload_action action, + enum devlink_reload_limit limit, + struct netlink_ext_ack *extack) +{ + struct ice_pf *pf = devlink_priv(devlink); + + switch (action) { + case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: + if (ice_is_eswitch_mode_switchdev(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Go to legacy mode before doing reinit\n"); + return -EOPNOTSUPP; + } + if (ice_is_adq_active(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Turn off ADQ before doing reinit\n"); + return -EOPNOTSUPP; + } + if (ice_has_vfs(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Remove all VFs before doing reinit\n"); + return -EOPNOTSUPP; + } + ice_unload(pf); + return 0; + case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: + return ice_devlink_reload_empr_start(pf, extack); + default: + WARN_ON(1); + return -EOPNOTSUPP; + } +} + +/** * ice_devlink_reload_empr_finish - Wait for EMP reset to finish - * @devlink: pointer to the devlink instance reloading - * @action: the action requested - * @limit: limits imposed by userspace, such as not resetting - * @actions_performed: on return, indicate what actions actually performed + * @pf: pointer to the pf instance * @extack: netlink extended ACK structure * * Wait for driver to finish rebuilding after EMP reset is completed. This @@ -443,17 +477,11 @@ ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change, * for the driver's rebuild to complete. 
*/ static int -ice_devlink_reload_empr_finish(struct devlink *devlink, - enum devlink_reload_action action, - enum devlink_reload_limit limit, - u32 *actions_performed, +ice_devlink_reload_empr_finish(struct ice_pf *pf, struct netlink_ext_ack *extack) { - struct ice_pf *pf = devlink_priv(devlink); int err; - *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE); - err = ice_wait_for_reset(pf, 60 * HZ); if (err) { NL_SET_ERR_MSG_MOD(extack, "Device still resetting after 1 minute"); @@ -1192,12 +1220,43 @@ static int ice_devlink_set_parent(struct devlink_rate *devlink_rate, return status; } +/** + * ice_devlink_reload_up - do reload up after reinit + * @devlink: pointer to the devlink instance reloading + * @action: the action requested + * @limit: limits imposed by userspace, such as not resetting + * @actions_performed: on return, indicate what actions actually performed + * @extack: netlink extended ACK structure + */ +static int +ice_devlink_reload_up(struct devlink *devlink, + enum devlink_reload_action action, + enum devlink_reload_limit limit, + u32 *actions_performed, + struct netlink_ext_ack *extack) +{ + struct ice_pf *pf = devlink_priv(devlink); + + switch (action) { + case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: + *actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT); + return ice_load(pf); + case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: + *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE); + return ice_devlink_reload_empr_finish(pf, extack); + default: + WARN_ON(1); + return -EOPNOTSUPP; + } +} + static const struct devlink_ops ice_devlink_ops = { .supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_OVERWRITE_MASK, - .reload_actions = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE), + .reload_actions = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) | + BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE), /* The ice driver currently does not support driver reinit */ - .reload_down = ice_devlink_reload_empr_start, - .reload_up = ice_devlink_reload_empr_finish, + .reload_down = ice_devlink_reload_down, + .reload_up = ice_devlink_reload_up, .port_split = ice_devlink_port_split, .port_unsplit = ice_devlink_port_unsplit, .eswitch_mode_get = ice_eswitch_mode_get, @@ -1376,7 +1435,6 @@ void ice_devlink_register(struct ice_pf *pf) { struct devlink *devlink = priv_to_devlink(pf); - devlink_set_features(devlink, DEVLINK_F_RELOAD); devlink_register(devlink); } @@ -1411,25 +1469,9 @@ ice_devlink_set_switch_id(struct ice_pf *pf, struct netdev_phys_item_id *ppid) int ice_devlink_register_params(struct ice_pf *pf) { struct devlink *devlink = priv_to_devlink(pf); - union devlink_param_value value; - int err; - err = devlink_params_register(devlink, ice_devlink_params, - ARRAY_SIZE(ice_devlink_params)); - if (err) - return err; - - value.vbool = false; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_IWARP, - value); - - value.vbool = test_bit(ICE_FLAG_RDMA_ENA, pf->flags) ? 
true : false; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, - value); - - return 0; + return devlink_params_register(devlink, ice_devlink_params, + ARRAY_SIZE(ice_devlink_params)); } void ice_devlink_unregister_params(struct ice_pf *pf) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index f9f15acae90a..f6dd3f8fd936 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -71,17 +71,17 @@ void ice_eswitch_replay_vf_mac_rule(struct ice_vf *vf) if (!ice_is_switchdev_running(vf->pf)) return; - if (is_valid_ether_addr(vf->hw_lan_addr.addr)) { + if (is_valid_ether_addr(vf->hw_lan_addr)) { err = ice_eswitch_add_vf_mac_rule(vf->pf, vf, - vf->hw_lan_addr.addr); + vf->hw_lan_addr); if (err) { dev_err(ice_pf_to_dev(vf->pf), "Failed to add MAC %pM for VF %d\n, error %d\n", - vf->hw_lan_addr.addr, vf->vf_id, err); + vf->hw_lan_addr, vf->vf_id, err); return; } vf->num_mac++; - ether_addr_copy(vf->dev_lan_addr.addr, vf->hw_lan_addr.addr); + ether_addr_copy(vf->dev_lan_addr, vf->hw_lan_addr); } } @@ -237,7 +237,7 @@ ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi) ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); metadata_dst_free(vf->repr->dst); vf->repr->dst = NULL; - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, + ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, ICE_FWD_TO_VSI); netif_napi_del(&vf->repr->q_vector->napi); @@ -265,14 +265,14 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) GFP_KERNEL); if (!vf->repr->dst) { ice_fltr_add_mac_and_broadcast(vsi, - vf->hw_lan_addr.addr, + vf->hw_lan_addr, ICE_FWD_TO_VSI); goto err; } if (ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof)) { ice_fltr_add_mac_and_broadcast(vsi, - vf->hw_lan_addr.addr, + vf->hw_lan_addr, ICE_FWD_TO_VSI); metadata_dst_free(vf->repr->dst); vf->repr->dst = NULL; @@ -281,7 +281,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) if (ice_vsi_add_vlan_zero(vsi)) { ice_fltr_add_mac_and_broadcast(vsi, - vf->hw_lan_addr.addr, + vf->hw_lan_addr, ICE_FWD_TO_VSI); metadata_dst_free(vf->repr->dst); vf->repr->dst = NULL; @@ -338,7 +338,7 @@ void ice_eswitch_update_repr(struct ice_vsi *vsi) ret = ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof); if (ret) { - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, ICE_FWD_TO_VSI); + ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, ICE_FWD_TO_VSI); dev_err(ice_pf_to_dev(pf), "Failed to update VF %d port representor", vsi->vf->vf_id); } @@ -425,7 +425,13 @@ static void ice_eswitch_release_env(struct ice_pf *pf) static struct ice_vsi * ice_eswitch_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi) { - return ice_vsi_setup(pf, pi, ICE_VSI_SWITCHDEV_CTRL, NULL, NULL); + struct ice_vsi_cfg_params params = {}; + + params.type = ICE_VSI_SWITCHDEV_CTRL; + params.pi = pi; + params.flags = ICE_VSI_FLAG_INIT; + + return ice_vsi_setup(pf, ¶ms); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index a359f1610fc1..b360bd8f1599 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -656,7 +656,7 @@ static int ice_lbtest_prepare_rings(struct ice_vsi *vsi) if (status) goto err_setup_rx_ring; - status = ice_vsi_cfg(vsi); + status = ice_vsi_cfg_lan(vsi); if (status) goto err_setup_rx_ring; @@ -664,7 +664,7 @@ static int ice_lbtest_prepare_rings(struct ice_vsi 
*vsi) if (status) goto err_start_rx_ring; - return status; + return 0; err_start_rx_ring: ice_vsi_free_rx_rings(vsi); @@ -1950,8 +1950,7 @@ ice_phy_type_to_ethtool(struct net_device *netdev, ICE_PHY_TYPE_LOW_100G_CAUI4 | ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC | ICE_PHY_TYPE_LOW_100G_AUI4 | - ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4 | - ICE_PHY_TYPE_LOW_100GBASE_CP2; + ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4; phy_type_mask_hi = ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC | ICE_PHY_TYPE_HIGH_100G_CAUI2 | ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC | @@ -1964,15 +1963,27 @@ ice_phy_type_to_ethtool(struct net_device *netdev, 100000baseCR4_Full); } - phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_SR4 | - ICE_PHY_TYPE_LOW_100GBASE_SR2; - if (phy_types_low & phy_type_mask_lo) { + if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CP2) { + ethtool_link_ksettings_add_link_mode(ks, supported, + 100000baseCR2_Full); + ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, + 100000baseCR2_Full); + } + + if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_SR4) { ethtool_link_ksettings_add_link_mode(ks, supported, 100000baseSR4_Full); ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, 100000baseSR4_Full); } + if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_SR2) { + ethtool_link_ksettings_add_link_mode(ks, supported, + 100000baseSR2_Full); + ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, + 100000baseSR2_Full); + } + phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_LR4 | ICE_PHY_TYPE_LOW_100GBASE_DR; if (phy_types_low & phy_type_mask_lo) { @@ -1984,14 +1995,20 @@ ice_phy_type_to_ethtool(struct net_device *netdev, phy_type_mask_lo = ICE_PHY_TYPE_LOW_100GBASE_KR4 | ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4; - phy_type_mask_hi = ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4; - if (phy_types_low & phy_type_mask_lo || - phy_types_high & phy_type_mask_hi) { + if (phy_types_low & phy_type_mask_lo) { ethtool_link_ksettings_add_link_mode(ks, supported, 100000baseKR4_Full); ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, 100000baseKR4_Full); } + + if (phy_types_high & ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4) { + ethtool_link_ksettings_add_link_mode(ks, supported, + 100000baseKR2_Full); + ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, + 100000baseKR2_Full); + } + } #define TEST_SET_BITS_TIMEOUT 50 @@ -2242,17 +2259,15 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks) 100baseT_Full)) adv_link_speed |= ICE_AQ_LINK_SPEED_100MB; if (ethtool_link_ksettings_test_link_mode(ks, advertising, - 1000baseX_Full)) - adv_link_speed |= ICE_AQ_LINK_SPEED_1000MB; - if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 1000baseX_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, 1000baseT_Full) || ethtool_link_ksettings_test_link_mode(ks, advertising, 1000baseKX_Full)) adv_link_speed |= ICE_AQ_LINK_SPEED_1000MB; if (ethtool_link_ksettings_test_link_mode(ks, advertising, - 2500baseT_Full)) - adv_link_speed |= ICE_AQ_LINK_SPEED_2500MB; - if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 2500baseT_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, 2500baseX_Full)) adv_link_speed |= ICE_AQ_LINK_SPEED_2500MB; if (ethtool_link_ksettings_test_link_mode(ks, advertising, @@ -2261,9 +2276,8 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks) if (ethtool_link_ksettings_test_link_mode(ks, advertising, 10000baseT_Full) || ethtool_link_ksettings_test_link_mode(ks, advertising, - 10000baseKR_Full)) - adv_link_speed |= ICE_AQ_LINK_SPEED_10GB; - if 
(ethtool_link_ksettings_test_link_mode(ks, advertising, + 10000baseKR_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, 10000baseSR_Full) || ethtool_link_ksettings_test_link_mode(ks, advertising, 10000baseLR_Full)) @@ -2287,9 +2301,8 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks) if (ethtool_link_ksettings_test_link_mode(ks, advertising, 50000baseCR2_Full) || ethtool_link_ksettings_test_link_mode(ks, advertising, - 50000baseKR2_Full)) - adv_link_speed |= ICE_AQ_LINK_SPEED_50GB; - if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 50000baseKR2_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, 50000baseSR2_Full)) adv_link_speed |= ICE_AQ_LINK_SPEED_50GB; if (ethtool_link_ksettings_test_link_mode(ks, advertising, @@ -2299,7 +2312,13 @@ ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks) ethtool_link_ksettings_test_link_mode(ks, advertising, 100000baseLR4_ER4_Full) || ethtool_link_ksettings_test_link_mode(ks, advertising, - 100000baseKR4_Full)) + 100000baseKR4_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, + 100000baseCR2_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, + 100000baseSR2_Full) || + ethtool_link_ksettings_test_link_mode(ks, advertising, + 100000baseKR2_Full)) adv_link_speed |= ICE_AQ_LINK_SPEED_100GB; return adv_link_speed; @@ -3027,8 +3046,6 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring, /* clone ring and setup updated count */ xdp_rings[i] = *vsi->xdp_rings[i]; xdp_rings[i].count = new_tx_cnt; - xdp_rings[i].next_dd = ICE_RING_QUARTER(&xdp_rings[i]) - 1; - xdp_rings[i].next_rs = ICE_RING_QUARTER(&xdp_rings[i]) - 1; xdp_rings[i].desc = NULL; xdp_rings[i].tx_buf = NULL; err = ice_setup_tx_ring(&xdp_rings[i]); @@ -3073,7 +3090,7 @@ process_rx: /* allocate Rx buffers */ err = ice_alloc_rx_bufs(&rx_rings[i], - ICE_DESC_UNUSED(&rx_rings[i])); + ICE_RX_DESC_UNUSED(&rx_rings[i])); rx_unwind: if (err) { while (i) { diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index 4b3bb19e1d06..5ce413965930 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -6,23 +6,6 @@ #include "ice_flow.h" #include "ice.h" -/* For supporting double VLAN mode, it is necessary to enable or disable certain - * boost tcam entries. The metadata labels names that match the following - * prefixes will be saved to allow enabling double VLAN mode. - */ -#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */ -#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */ - -/* To support tunneling entries by PF, the package will append the PF number to - * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc. - */ -#define ICE_TNL_PRE "TNL_" -static const struct ice_tunnel_type_scan tnls[] = { - { TNL_VXLAN, "TNL_VXLAN_PF" }, - { TNL_GENEVE, "TNL_GENEVE_PF" }, - { TNL_LAST, "" } -}; - static const u32 ice_sect_lkup[ICE_BLK_COUNT][ICE_SECT_COUNT] = { /* SWITCH */ { @@ -104,225 +87,6 @@ static u32 ice_sect_id(enum ice_block blk, enum ice_sect sect) } /** - * ice_pkg_val_buf - * @buf: pointer to the ice buffer - * - * This helper function validates a buffer's header. 
- */ -static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf) -{ - struct ice_buf_hdr *hdr; - u16 section_count; - u16 data_end; - - hdr = (struct ice_buf_hdr *)buf->buf; - /* verify data */ - section_count = le16_to_cpu(hdr->section_count); - if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT) - return NULL; - - data_end = le16_to_cpu(hdr->data_end); - if (data_end < ICE_MIN_S_DATA_END || data_end > ICE_MAX_S_DATA_END) - return NULL; - - return hdr; -} - -/** - * ice_find_buf_table - * @ice_seg: pointer to the ice segment - * - * Returns the address of the buffer table within the ice segment. - */ -static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg) -{ - struct ice_nvm_table *nvms; - - nvms = (struct ice_nvm_table *) - (ice_seg->device_table + - le32_to_cpu(ice_seg->device_table_count)); - - return (__force struct ice_buf_table *) - (nvms->vers + le32_to_cpu(nvms->table_count)); -} - -/** - * ice_pkg_enum_buf - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) - * @state: pointer to the enum state - * - * This function will enumerate all the buffers in the ice segment. The first - * call is made with the ice_seg parameter non-NULL; on subsequent calls, - * ice_seg is set to NULL which continues the enumeration. When the function - * returns a NULL pointer, then the end of the buffers has been reached, or an - * unexpected value has been detected (for example an invalid section count or - * an invalid buffer end value). - */ -static struct ice_buf_hdr * -ice_pkg_enum_buf(struct ice_seg *ice_seg, struct ice_pkg_enum *state) -{ - if (ice_seg) { - state->buf_table = ice_find_buf_table(ice_seg); - if (!state->buf_table) - return NULL; - - state->buf_idx = 0; - return ice_pkg_val_buf(state->buf_table->buf_array); - } - - if (++state->buf_idx < le32_to_cpu(state->buf_table->buf_count)) - return ice_pkg_val_buf(state->buf_table->buf_array + - state->buf_idx); - else - return NULL; -} - -/** - * ice_pkg_advance_sect - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) - * @state: pointer to the enum state - * - * This helper function will advance the section within the ice segment, - * also advancing the buffer if needed. - */ -static bool -ice_pkg_advance_sect(struct ice_seg *ice_seg, struct ice_pkg_enum *state) -{ - if (!ice_seg && !state->buf) - return false; - - if (!ice_seg && state->buf) - if (++state->sect_idx < le16_to_cpu(state->buf->section_count)) - return true; - - state->buf = ice_pkg_enum_buf(ice_seg, state); - if (!state->buf) - return false; - - /* start of new buffer, reset section index */ - state->sect_idx = 0; - return true; -} - -/** - * ice_pkg_enum_section - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) - * @state: pointer to the enum state - * @sect_type: section type to enumerate - * - * This function will enumerate all the sections of a particular type in the - * ice segment. The first call is made with the ice_seg parameter non-NULL; - * on subsequent calls, ice_seg is set to NULL which continues the enumeration. - * When the function returns a NULL pointer, then the end of the matching - * sections has been reached. 
- */ -static void * -ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state, - u32 sect_type) -{ - u16 offset, size; - - if (ice_seg) - state->type = sect_type; - - if (!ice_pkg_advance_sect(ice_seg, state)) - return NULL; - - /* scan for next matching section */ - while (state->buf->section_entry[state->sect_idx].type != - cpu_to_le32(state->type)) - if (!ice_pkg_advance_sect(NULL, state)) - return NULL; - - /* validate section */ - offset = le16_to_cpu(state->buf->section_entry[state->sect_idx].offset); - if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF) - return NULL; - - size = le16_to_cpu(state->buf->section_entry[state->sect_idx].size); - if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) - return NULL; - - /* make sure the section fits in the buffer */ - if (offset + size > ICE_PKG_BUF_SIZE) - return NULL; - - state->sect_type = - le32_to_cpu(state->buf->section_entry[state->sect_idx].type); - - /* calc pointer to this section */ - state->sect = ((u8 *)state->buf) + - le16_to_cpu(state->buf->section_entry[state->sect_idx].offset); - - return state->sect; -} - -/** - * ice_pkg_enum_entry - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) - * @state: pointer to the enum state - * @sect_type: section type to enumerate - * @offset: pointer to variable that receives the offset in the table (optional) - * @handler: function that handles access to the entries in the section type - * - * This function will enumerate all the entries in a particular section type in - * the ice segment. The first call is made with the ice_seg parameter non-NULL; - * on subsequent calls, ice_seg is set to NULL which continues the enumeration. - * When the function returns a NULL pointer, then the end of the entries has - * been reached. - * - * Since each section may have a different header and entry size, the handler - * function is needed to determine the number and location of entries in each - * section. - * - * The offset parameter is optional, but should be used for sections that - * contain an offset for each section table. For such cases, the section handler - * function must return the appropriate offset + index to give the absolute - * offset for each entry. For example, if the base for a section's header - * indicates a base offset of 10, and the index for the entry is 2, then the - * section handler function should set the offset to 10 + 2 = 12.
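To make the first-call/continuation contract above concrete, here is a minimal caller sketch (illustrative only; MY_SECT_TYPE and my_entry_handler are placeholders for a real section ID and handler such as ice_sw_fv_handler further down):

static void my_walk_entries(struct ice_seg *ice_seg)
{
	struct ice_pkg_enum state;
	void *entry;

	memset(&state, 0, sizeof(state));	/* enum state must start zeroed */

	do {
		/* first call primes the enumerator with a non-NULL segment;
		 * continuation calls pass NULL and reuse the saved state
		 */
		entry = ice_pkg_enum_entry(ice_seg, &state, MY_SECT_TYPE,
					   NULL, my_entry_handler);
		ice_seg = NULL;

		if (entry) {
			/* ... consume the entry ... */
		}
	} while (entry);
}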
- */ -static void * -ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state, - u32 sect_type, u32 *offset, - void *(*handler)(u32 sect_type, void *section, - u32 index, u32 *offset)) -{ - void *entry; - - if (ice_seg) { - if (!handler) - return NULL; - - if (!ice_pkg_enum_section(ice_seg, state, sect_type)) - return NULL; - - state->entry_idx = 0; - state->handler = handler; - } else { - state->entry_idx++; - } - - if (!state->handler) - return NULL; - - /* get entry */ - entry = state->handler(state->sect_type, state->sect, state->entry_idx, - offset); - if (!entry) { - /* end of a section, look for another section of this type */ - if (!ice_pkg_enum_section(NULL, state, 0)) - return NULL; - - state->entry_idx = 0; - entry = state->handler(state->sect_type, state->sect, - state->entry_idx, offset); - } - - return entry; -} - -/** * ice_hw_ptype_ena - check if the PTYPE is enabled or not * @hw: pointer to the HW structure * @ptype: the hardware PTYPE @@ -333,312 +97,6 @@ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype) test_bit(ptype, hw->hw_ptype); } -/** - * ice_marker_ptype_tcam_handler - * @sect_type: section type - * @section: pointer to section - * @index: index of the Marker PType TCAM entry to be returned - * @offset: pointer to receive absolute offset, always 0 for ptype TCAM sections - * - * This is a callback function that can be passed to ice_pkg_enum_entry. - * Handles enumeration of individual Marker PType TCAM entries. - */ -static void * -ice_marker_ptype_tcam_handler(u32 sect_type, void *section, u32 index, - u32 *offset) -{ - struct ice_marker_ptype_tcam_section *marker_ptype; - - if (sect_type != ICE_SID_RXPARSER_MARKER_PTYPE) - return NULL; - - if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF) - return NULL; - - if (offset) - *offset = 0; - - marker_ptype = section; - if (index >= le16_to_cpu(marker_ptype->count)) - return NULL; - - return marker_ptype->tcam + index; -} - -/** - * ice_fill_hw_ptype - fill the enabled PTYPE bit information - * @hw: pointer to the HW structure - */ -static void ice_fill_hw_ptype(struct ice_hw *hw) -{ - struct ice_marker_ptype_tcam_entry *tcam; - struct ice_seg *seg = hw->seg; - struct ice_pkg_enum state; - - bitmap_zero(hw->hw_ptype, ICE_FLOW_PTYPE_MAX); - if (!seg) - return; - - memset(&state, 0, sizeof(state)); - - do { - tcam = ice_pkg_enum_entry(seg, &state, - ICE_SID_RXPARSER_MARKER_PTYPE, NULL, - ice_marker_ptype_tcam_handler); - if (tcam && - le16_to_cpu(tcam->addr) < ICE_MARKER_PTYPE_TCAM_ADDR_MAX && - le16_to_cpu(tcam->ptype) < ICE_FLOW_PTYPE_MAX) - set_bit(le16_to_cpu(tcam->ptype), hw->hw_ptype); - - seg = NULL; - } while (tcam); -} - -/** - * ice_boost_tcam_handler - * @sect_type: section type - * @section: pointer to section - * @index: index of the boost TCAM entry to be returned - * @offset: pointer to receive absolute offset, always 0 for boost TCAM sections - * - * This is a callback function that can be passed to ice_pkg_enum_entry. - * Handles enumeration of individual boost TCAM entries. 
- */ -static void * -ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, u32 *offset) -{ - struct ice_boost_tcam_section *boost; - - if (!section) - return NULL; - - if (sect_type != ICE_SID_RXPARSER_BOOST_TCAM) - return NULL; - - /* cppcheck-suppress nullPointer */ - if (index > ICE_MAX_BST_TCAMS_IN_BUF) - return NULL; - - if (offset) - *offset = 0; - - boost = section; - if (index >= le16_to_cpu(boost->count)) - return NULL; - - return boost->tcam + index; -} - -/** - * ice_find_boost_entry - * @ice_seg: pointer to the ice segment (non-NULL) - * @addr: Boost TCAM address of entry to search for - * @entry: returns pointer to the entry - * - * Finds a particular Boost TCAM entry and returns a pointer to that entry - * if it is found. The ice_seg parameter must not be NULL since the first call - * to ice_pkg_enum_entry requires a pointer to an actual ice_segment structure. - */ -static int -ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, - struct ice_boost_tcam_entry **entry) -{ - struct ice_boost_tcam_entry *tcam; - struct ice_pkg_enum state; - - memset(&state, 0, sizeof(state)); - - if (!ice_seg) - return -EINVAL; - - do { - tcam = ice_pkg_enum_entry(ice_seg, &state, - ICE_SID_RXPARSER_BOOST_TCAM, NULL, - ice_boost_tcam_handler); - if (tcam && le16_to_cpu(tcam->addr) == addr) { - *entry = tcam; - return 0; - } - - ice_seg = NULL; - } while (tcam); - - *entry = NULL; - return -EIO; -} - -/** - * ice_label_enum_handler - * @sect_type: section type - * @section: pointer to section - * @index: index of the label entry to be returned - * @offset: pointer to receive absolute offset, always zero for label sections - * - * This is a callback function that can be passed to ice_pkg_enum_entry. - * Handles enumeration of individual label entries. - */ -static void * -ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index, - u32 *offset) -{ - struct ice_label_section *labels; - - if (!section) - return NULL; - - /* cppcheck-suppress nullPointer */ - if (index > ICE_MAX_LABELS_IN_BUF) - return NULL; - - if (offset) - *offset = 0; - - labels = section; - if (index >= le16_to_cpu(labels->count)) - return NULL; - - return labels->label + index; -} - -/** - * ice_enum_labels - * @ice_seg: pointer to the ice segment (NULL on subsequent calls) - * @type: the section type that will contain the label (0 on subsequent calls) - * @state: ice_pkg_enum structure that will hold the state of the enumeration - * @value: pointer to a value that will return the label's value if found - * - * Enumerates a list of labels in the package. The caller will call - * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then call - * ice_enum_labels(NULL, 0, ...) to continue. When the function returns a NULL - * the end of the list has been reached. 
- */ -static char * -ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state, - u16 *value) -{ - struct ice_label *label; - - /* Check for valid label section on first call */ - if (type && !(type >= ICE_SID_LBL_FIRST && type <= ICE_SID_LBL_LAST)) - return NULL; - - label = ice_pkg_enum_entry(ice_seg, state, type, NULL, - ice_label_enum_handler); - if (!label) - return NULL; - - *value = le16_to_cpu(label->value); - return label->name; -} - -/** - * ice_add_tunnel_hint - * @hw: pointer to the HW structure - * @label_name: label text - * @val: value of the tunnel port boost entry - */ -static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) -{ - if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { - u16 i; - - for (i = 0; tnls[i].type != TNL_LAST; i++) { - size_t len = strlen(tnls[i].label_prefix); - - /* Look for matching label start, before continuing */ - if (strncmp(label_name, tnls[i].label_prefix, len)) - continue; - - /* Make sure this label matches our PF. Note that the PF - * character ('0' - '7') will be located where our - * prefix string's null terminator is located. - */ - if ((label_name[len] - '0') == hw->pf_id) { - hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; - hw->tnl.tbl[hw->tnl.count].valid = false; - hw->tnl.tbl[hw->tnl.count].boost_addr = val; - hw->tnl.tbl[hw->tnl.count].port = 0; - hw->tnl.count++; - break; - } - } - } -} - -/** - * ice_add_dvm_hint - * @hw: pointer to the HW structure - * @val: value of the boost entry - * @enable: true if entry needs to be enabled, or false if needs to be disabled - */ -static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) -{ - if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { - hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val; - hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable; - hw->dvm_upd.count++; - } -} - -/** - * ice_init_pkg_hints - * @hw: pointer to the HW structure - * @ice_seg: pointer to the segment of the package scan (non-NULL) - * - * This function will scan the package and save off relevant information - * (hints or metadata) for driver use. The ice_seg parameter must not be NULL - * since the first call to ice_enum_labels requires a pointer to an actual - * ice_seg structure. 
- */ -static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) -{ - struct ice_pkg_enum state; - char *label_name; - u16 val; - int i; - - memset(&hw->tnl, 0, sizeof(hw->tnl)); - memset(&state, 0, sizeof(state)); - - if (!ice_seg) - return; - - label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, - &val); - - while (label_name) { - if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) - /* check for a tunnel entry */ - ice_add_tunnel_hint(hw, label_name, val); - - /* check for a dvm mode entry */ - else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) - ice_add_dvm_hint(hw, val, true); - - /* check for a svm mode entry */ - else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) - ice_add_dvm_hint(hw, val, false); - - label_name = ice_enum_labels(NULL, 0, &state, &val); - } - - /* Cache the appropriate boost TCAM entry pointers for tunnels */ - for (i = 0; i < hw->tnl.count; i++) { - ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, - &hw->tnl.tbl[i].boost_entry); - if (hw->tnl.tbl[i].boost_entry) { - hw->tnl.tbl[i].valid = true; - if (hw->tnl.tbl[i].type < __TNL_TYPE_CNT) - hw->tnl.valid_count[hw->tnl.tbl[i].type]++; - } - } - - /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */ - for (i = 0; i < hw->dvm_upd.count; i++) - ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr, - &hw->dvm_upd.tbl[i].boost_entry); -} - /* Key creation */ #define ICE_DC_KEY 0x1 /* don't care */ @@ -810,51 +268,6 @@ ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off, } /** - * ice_acquire_global_cfg_lock - * @hw: pointer to the HW structure - * @access: access type (read or write) - * - * This function will request ownership of the global config lock for reading - * or writing of the package. When attempting to obtain write access, the - * caller must check for the following two return values: - * - * 0 - Means the caller has acquired the global config lock - * and can perform writing of the package. - * -EALREADY - Indicates another driver has already written the - * package or has found that no update was necessary; in - * this case, the caller can just skip performing any - * update of the package. - */ -static int -ice_acquire_global_cfg_lock(struct ice_hw *hw, - enum ice_aq_res_access_type access) -{ - int status; - - status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access, - ICE_GLOBAL_CFG_LOCK_TIMEOUT); - - if (!status) - mutex_lock(&ice_global_cfg_lock_sw); - else if (status == -EALREADY) - ice_debug(hw, ICE_DBG_PKG, "Global config lock: No work to do\n"); - - return status; -} - -/** - * ice_release_global_cfg_lock - * @hw: pointer to the HW structure - * - * This function will release the global config lock. 
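As a concrete sketch of the lock contract described above (illustrative only; my_write_pkg is hypothetical and modeled on ice_dwnld_cfg_bufs() shown later):

static int my_write_pkg(struct ice_hw *hw)
{
	int status;

	status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
	if (status == -EALREADY)
		return 0;	/* another PF already wrote the package */
	if (status)
		return status;

	/* ... send buffers with ice_aq_download_pkg() ... */

	/* release only after a successful acquire took the SW mutex */
	ice_release_global_cfg_lock(hw);
	return 0;
}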
- */ -static void ice_release_global_cfg_lock(struct ice_hw *hw) -{ - mutex_unlock(&ice_global_cfg_lock_sw); - ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID); -} - -/** * ice_acquire_change_lock * @hw: pointer to the HW structure * @access: access type (read or write) @@ -880,1325 +293,6 @@ void ice_release_change_lock(struct ice_hw *hw) } /** - * ice_aq_download_pkg - * @hw: pointer to the hardware structure - * @pkg_buf: the package buffer to transfer - * @buf_size: the size of the package buffer - * @last_buf: last buffer indicator - * @error_offset: returns error offset - * @error_info: returns error information - * @cd: pointer to command details structure or NULL - * - * Download Package (0x0C40) - */ -static int -ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, - u16 buf_size, bool last_buf, u32 *error_offset, - u32 *error_info, struct ice_sq_cd *cd) -{ - struct ice_aqc_download_pkg *cmd; - struct ice_aq_desc desc; - int status; - - if (error_offset) - *error_offset = 0; - if (error_info) - *error_info = 0; - - cmd = &desc.params.download_pkg; - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg); - desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); - - if (last_buf) - cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF; - - status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); - if (status == -EIO) { - /* Read error from buffer only when the FW returned an error */ - struct ice_aqc_download_pkg_resp *resp; - - resp = (struct ice_aqc_download_pkg_resp *)pkg_buf; - if (error_offset) - *error_offset = le32_to_cpu(resp->error_offset); - if (error_info) - *error_info = le32_to_cpu(resp->error_info); - } - - return status; -} - -/** - * ice_aq_upload_section - * @hw: pointer to the hardware structure - * @pkg_buf: the package buffer which will receive the section - * @buf_size: the size of the package buffer - * @cd: pointer to command details structure or NULL - * - * Upload Section (0x0C41) - */ -int -ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, - u16 buf_size, struct ice_sq_cd *cd) -{ - struct ice_aq_desc desc; - - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); - desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); - - return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); -} - -/** - * ice_aq_update_pkg - * @hw: pointer to the hardware structure - * @pkg_buf: the package cmd buffer - * @buf_size: the size of the package cmd buffer - * @last_buf: last buffer indicator - * @error_offset: returns error offset - * @error_info: returns error information - * @cd: pointer to command details structure or NULL - * - * Update Package (0x0C42) - */ -static int -ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size, - bool last_buf, u32 *error_offset, u32 *error_info, - struct ice_sq_cd *cd) -{ - struct ice_aqc_download_pkg *cmd; - struct ice_aq_desc desc; - int status; - - if (error_offset) - *error_offset = 0; - if (error_info) - *error_info = 0; - - cmd = &desc.params.download_pkg; - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_pkg); - desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); - - if (last_buf) - cmd->flags |= ICE_AQC_DOWNLOAD_PKG_LAST_BUF; - - status = ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); - if (status == -EIO) { - /* Read error from buffer only when the FW returned an error */ - struct ice_aqc_download_pkg_resp *resp; - - resp = (struct ice_aqc_download_pkg_resp *)pkg_buf; - if (error_offset) - *error_offset = le32_to_cpu(resp->error_offset); - if (error_info) - *error_info = 
le32_to_cpu(resp->error_info); - } - - return status; -} - -/** - * ice_find_seg_in_pkg - * @hw: pointer to the hardware structure - * @seg_type: the segment type to search for (i.e., SEGMENT_TYPE_CPK) - * @pkg_hdr: pointer to the package header to be searched - * - * This function searches a package file for a particular segment type. On - * success it returns a pointer to the segment header, otherwise it will - * return NULL. - */ -static struct ice_generic_seg_hdr * -ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, - struct ice_pkg_hdr *pkg_hdr) -{ - u32 i; - - ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n", - pkg_hdr->pkg_format_ver.major, pkg_hdr->pkg_format_ver.minor, - pkg_hdr->pkg_format_ver.update, - pkg_hdr->pkg_format_ver.draft); - - /* Search all package segments for the requested segment type */ - for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) { - struct ice_generic_seg_hdr *seg; - - seg = (struct ice_generic_seg_hdr *) - ((u8 *)pkg_hdr + le32_to_cpu(pkg_hdr->seg_offset[i])); - - if (le32_to_cpu(seg->seg_type) == seg_type) - return seg; - } - - return NULL; -} - -/** - * ice_update_pkg_no_lock - * @hw: pointer to the hardware structure - * @bufs: pointer to an array of buffers - * @count: the number of buffers in the array - */ -static int -ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) -{ - int status = 0; - u32 i; - - for (i = 0; i < count; i++) { - struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); - bool last = ((i + 1) == count); - u32 offset, info; - - status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end), - last, &offset, &info, NULL); - - if (status) { - ice_debug(hw, ICE_DBG_PKG, "Update pkg failed: err %d off %d inf %d\n", - status, offset, info); - break; - } - } - - return status; -} - -/** - * ice_update_pkg - * @hw: pointer to the hardware structure - * @bufs: pointer to an array of buffers - * @count: the number of buffers in the array - * - * Obtains change lock and updates package. - */ -static int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) -{ - int status; - - status = ice_acquire_change_lock(hw, ICE_RES_WRITE); - if (status) - return status; - - status = ice_update_pkg_no_lock(hw, bufs, count); - - ice_release_change_lock(hw); - - return status; -} - -static enum ice_ddp_state ice_map_aq_err_to_ddp_state(enum ice_aq_err aq_err) -{ - switch (aq_err) { - case ICE_AQ_RC_ENOSEC: - case ICE_AQ_RC_EBADSIG: - return ICE_DDP_PKG_FILE_SIGNATURE_INVALID; - case ICE_AQ_RC_ESVN: - return ICE_DDP_PKG_FILE_REVISION_TOO_LOW; - case ICE_AQ_RC_EBADMAN: - case ICE_AQ_RC_EBADBUF: - return ICE_DDP_PKG_LOAD_ERROR; - default: - return ICE_DDP_PKG_ERR; - } -} - -/** - * ice_dwnld_cfg_bufs - * @hw: pointer to the hardware structure - * @bufs: pointer to an array of buffers - * @count: the number of buffers in the array - * - * Obtains global config lock and downloads the package configuration buffers - * to the firmware. Metadata buffers are skipped, and the first metadata buffer - * found indicates that the rest of the buffers are all metadata buffers. 
- */ -static enum ice_ddp_state -ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) -{ - enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; - struct ice_buf_hdr *bh; - enum ice_aq_err err; - u32 offset, info, i; - int status; - - if (!bufs || !count) - return ICE_DDP_PKG_ERR; - - /* If the first buffer's first section has its metadata bit set - * then there are no buffers to be downloaded, and the operation is - * considered a success. - */ - bh = (struct ice_buf_hdr *)bufs; - if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF) - return ICE_DDP_PKG_SUCCESS; - - status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); - if (status) { - if (status == -EALREADY) - return ICE_DDP_PKG_ALREADY_LOADED; - return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status); - } - - for (i = 0; i < count; i++) { - bool last = ((i + 1) == count); - - if (!last) { - /* check next buffer for metadata flag */ - bh = (struct ice_buf_hdr *)(bufs + i + 1); - - /* A set metadata flag in the next buffer will signal - * that the current buffer will be the last buffer - * downloaded - */ - if (le16_to_cpu(bh->section_count)) - if (le32_to_cpu(bh->section_entry[0].type) & - ICE_METADATA_BUF) - last = true; - } - - bh = (struct ice_buf_hdr *)(bufs + i); - - status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last, - &offset, &info, NULL); - - /* Save AQ status from download package */ - if (status) { - ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n", - status, offset, info); - err = hw->adminq.sq_last_status; - state = ice_map_aq_err_to_ddp_state(err); - break; - } - - if (last) - break; - } - - if (!status) { - status = ice_set_vlan_mode(hw); - if (status) - ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n", - status); - } - - ice_release_global_cfg_lock(hw); - - return state; -} - -/** - * ice_aq_get_pkg_info_list - * @hw: pointer to the hardware structure - * @pkg_info: the buffer which will receive the information list - * @buf_size: the size of the pkg_info information buffer - * @cd: pointer to command details structure or NULL - * - * Get Package Info List (0x0C43) - */ -static int -ice_aq_get_pkg_info_list(struct ice_hw *hw, - struct ice_aqc_get_pkg_info_resp *pkg_info, - u16 buf_size, struct ice_sq_cd *cd) -{ - struct ice_aq_desc desc; - - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list); - - return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd); -} - -/** - * ice_download_pkg - * @hw: pointer to the hardware structure - * @ice_seg: pointer to the segment of the package to be downloaded - * - * Handles the download of a complete package. 
- */ -static enum ice_ddp_state -ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) -{ - struct ice_buf_table *ice_buf_tbl; - int status; - - ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", - ice_seg->hdr.seg_format_ver.major, - ice_seg->hdr.seg_format_ver.minor, - ice_seg->hdr.seg_format_ver.update, - ice_seg->hdr.seg_format_ver.draft); - - ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n", - le32_to_cpu(ice_seg->hdr.seg_type), - le32_to_cpu(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id); - - ice_buf_tbl = ice_find_buf_table(ice_seg); - - ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", - le32_to_cpu(ice_buf_tbl->buf_count)); - - status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, - le32_to_cpu(ice_buf_tbl->buf_count)); - - ice_post_pkg_dwnld_vlan_mode_cfg(hw); - - return status; -} - -/** - * ice_init_pkg_info - * @hw: pointer to the hardware structure - * @pkg_hdr: pointer to the driver's package hdr - * - * Saves off the package details into the HW structure. - */ -static enum ice_ddp_state -ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) -{ - struct ice_generic_seg_hdr *seg_hdr; - - if (!pkg_hdr) - return ICE_DDP_PKG_ERR; - - seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr); - if (seg_hdr) { - struct ice_meta_sect *meta; - struct ice_pkg_enum state; - - memset(&state, 0, sizeof(state)); - - /* Get package information from the Metadata Section */ - meta = ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state, - ICE_SID_METADATA); - if (!meta) { - ice_debug(hw, ICE_DBG_INIT, "Did not find ice metadata section in package\n"); - return ICE_DDP_PKG_INVALID_FILE; - } - - hw->pkg_ver = meta->ver; - memcpy(hw->pkg_name, meta->name, sizeof(meta->name)); - - ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n", - meta->ver.major, meta->ver.minor, meta->ver.update, - meta->ver.draft, meta->name); - - hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver; - memcpy(hw->ice_seg_id, seg_hdr->seg_id, - sizeof(hw->ice_seg_id)); - - ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n", - seg_hdr->seg_format_ver.major, - seg_hdr->seg_format_ver.minor, - seg_hdr->seg_format_ver.update, - seg_hdr->seg_format_ver.draft, - seg_hdr->seg_id); - } else { - ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in driver package\n"); - return ICE_DDP_PKG_INVALID_FILE; - } - - return ICE_DDP_PKG_SUCCESS; -} - -/** - * ice_get_pkg_info - * @hw: pointer to the hardware structure - * - * Store details of the package currently loaded in HW into the HW structure. 
- */ -static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw) -{ - enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; - struct ice_aqc_get_pkg_info_resp *pkg_info; - u16 size; - u32 i; - - size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT); - pkg_info = kzalloc(size, GFP_KERNEL); - if (!pkg_info) - return ICE_DDP_PKG_ERR; - - if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) { - state = ICE_DDP_PKG_ERR; - goto init_pkg_free_alloc; - } - - for (i = 0; i < le32_to_cpu(pkg_info->count); i++) { -#define ICE_PKG_FLAG_COUNT 4 - char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 }; - u8 place = 0; - - if (pkg_info->pkg_info[i].is_active) { - flags[place++] = 'A'; - hw->active_pkg_ver = pkg_info->pkg_info[i].ver; - hw->active_track_id = - le32_to_cpu(pkg_info->pkg_info[i].track_id); - memcpy(hw->active_pkg_name, - pkg_info->pkg_info[i].name, - sizeof(pkg_info->pkg_info[i].name)); - hw->active_pkg_in_nvm = pkg_info->pkg_info[i].is_in_nvm; - } - if (pkg_info->pkg_info[i].is_active_at_boot) - flags[place++] = 'B'; - if (pkg_info->pkg_info[i].is_modified) - flags[place++] = 'M'; - if (pkg_info->pkg_info[i].is_in_nvm) - flags[place++] = 'N'; - - ice_debug(hw, ICE_DBG_PKG, "Pkg[%d]: %d.%d.%d.%d,%s,%s\n", - i, pkg_info->pkg_info[i].ver.major, - pkg_info->pkg_info[i].ver.minor, - pkg_info->pkg_info[i].ver.update, - pkg_info->pkg_info[i].ver.draft, - pkg_info->pkg_info[i].name, flags); - } - -init_pkg_free_alloc: - kfree(pkg_info); - - return state; -} - -/** - * ice_verify_pkg - verify package - * @pkg: pointer to the package buffer - * @len: size of the package buffer - * - * Verifies various attributes of the package file, including length, format - * version, and the requirement of at least one segment. - */ -static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len) -{ - u32 seg_count; - u32 i; - - if (len < struct_size(pkg, seg_offset, 1)) - return ICE_DDP_PKG_INVALID_FILE; - - if (pkg->pkg_format_ver.major != ICE_PKG_FMT_VER_MAJ || - pkg->pkg_format_ver.minor != ICE_PKG_FMT_VER_MNR || - pkg->pkg_format_ver.update != ICE_PKG_FMT_VER_UPD || - pkg->pkg_format_ver.draft != ICE_PKG_FMT_VER_DFT) - return ICE_DDP_PKG_INVALID_FILE; - - /* pkg must have at least one segment */ - seg_count = le32_to_cpu(pkg->seg_count); - if (seg_count < 1) - return ICE_DDP_PKG_INVALID_FILE; - - /* make sure segment array fits in package length */ - if (len < struct_size(pkg, seg_offset, seg_count)) - return ICE_DDP_PKG_INVALID_FILE; - - /* all segments must fit within length */ - for (i = 0; i < seg_count; i++) { - u32 off = le32_to_cpu(pkg->seg_offset[i]); - struct ice_generic_seg_hdr *seg; - - /* segment header must fit */ - if (len < off + sizeof(*seg)) - return ICE_DDP_PKG_INVALID_FILE; - - seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off); - - /* segment body must fit */ - if (len < off + le32_to_cpu(seg->seg_size)) - return ICE_DDP_PKG_INVALID_FILE; - } - - return ICE_DDP_PKG_SUCCESS; -} - -/** - * ice_free_seg - free package segment pointer - * @hw: pointer to the hardware structure - * - * Frees the package segment pointer in the proper manner, depending on if the - * segment was allocated or just the passed in pointer was stored. 
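A minimal sketch of the resulting package lifetime when the DDP file comes from the firmware loader (illustrative only; my_load_ddp is hypothetical, the real flow lives in the driver probe path):

static int my_load_ddp(struct ice_hw *hw, const struct firmware *fw)
{
	enum ice_ddp_state state;

	/* ice_copy_and_init_pkg() duplicates fw->data, so the firmware
	 * image may be released immediately after this call
	 */
	state = ice_copy_and_init_pkg(hw, fw->data, fw->size);
	if (!ice_is_init_pkg_successful(state))
		return -EIO;

	/* ... normal operation; hw->seg points into hw->pkg_copy ... */

	ice_free_seg(hw);	/* frees hw->pkg_copy and clears hw->seg */
	return 0;
}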
- */ -void ice_free_seg(struct ice_hw *hw) -{ - if (hw->pkg_copy) { - devm_kfree(ice_hw_to_dev(hw), hw->pkg_copy); - hw->pkg_copy = NULL; - hw->pkg_size = 0; - } - hw->seg = NULL; -} - -/** - * ice_init_pkg_regs - initialize additional package registers - * @hw: pointer to the hardware structure - */ -static void ice_init_pkg_regs(struct ice_hw *hw) -{ -#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF -#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF -#define ICE_SW_BLK_IDX 0 - - /* setup Switch block input mask, which is 48-bits in two parts */ - wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_L); - wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_H); -} - -/** - * ice_chk_pkg_version - check package version for compatibility with driver - * @pkg_ver: pointer to a version structure to check - * - * Check to make sure that the package about to be downloaded is compatible with - * the driver. To be compatible, the major and minor components of the package - * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR - * definitions. - */ -static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver) -{ - if (pkg_ver->major > ICE_PKG_SUPP_VER_MAJ || - (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ && - pkg_ver->minor > ICE_PKG_SUPP_VER_MNR)) - return ICE_DDP_PKG_FILE_VERSION_TOO_HIGH; - else if (pkg_ver->major < ICE_PKG_SUPP_VER_MAJ || - (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ && - pkg_ver->minor < ICE_PKG_SUPP_VER_MNR)) - return ICE_DDP_PKG_FILE_VERSION_TOO_LOW; - - return ICE_DDP_PKG_SUCCESS; -} - -/** - * ice_chk_pkg_compat - * @hw: pointer to the hardware structure - * @ospkg: pointer to the package hdr - * @seg: pointer to the package segment hdr - * - * This function checks the package version compatibility with driver and NVM - */ -static enum ice_ddp_state -ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg, - struct ice_seg **seg) -{ - struct ice_aqc_get_pkg_info_resp *pkg; - enum ice_ddp_state state; - u16 size; - u32 i; - - /* Check package version compatibility */ - state = ice_chk_pkg_version(&hw->pkg_ver); - if (state) { - ice_debug(hw, ICE_DBG_INIT, "Package version check failed.\n"); - return state; - } - - /* find ICE segment in given package */ - *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, - ospkg); - if (!*seg) { - ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n"); - return ICE_DDP_PKG_INVALID_FILE; - } - - /* Check if FW is compatible with the OS package */ - size = struct_size(pkg, pkg_info, ICE_PKG_CNT); - pkg = kzalloc(size, GFP_KERNEL); - if (!pkg) - return ICE_DDP_PKG_ERR; - - if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) { - state = ICE_DDP_PKG_LOAD_ERROR; - goto fw_ddp_compat_free_alloc; - } - - for (i = 0; i < le32_to_cpu(pkg->count); i++) { - /* loop till we find the NVM package */ - if (!pkg->pkg_info[i].is_in_nvm) - continue; - if ((*seg)->hdr.seg_format_ver.major != - pkg->pkg_info[i].ver.major || - (*seg)->hdr.seg_format_ver.minor > - pkg->pkg_info[i].ver.minor) { - state = ICE_DDP_PKG_FW_MISMATCH; - ice_debug(hw, ICE_DBG_INIT, "OS package is not compatible with NVM.\n"); - } - /* done processing NVM package so break */ - break; - } -fw_ddp_compat_free_alloc: - kfree(pkg); - return state; -} - -/** - * ice_sw_fv_handler - * @sect_type: section type - * @section: pointer to section - * @index: index of the field vector entry to be returned - * @offset: ptr to variable that receives the offset in the field vector table - * - * This is a callback function that can be passed to 
ice_pkg_enum_entry. - * This function treats the given section as of type ice_sw_fv_section and - * enumerates offset field. "offset" is an index into the field vector table. - */ -static void * -ice_sw_fv_handler(u32 sect_type, void *section, u32 index, u32 *offset) -{ - struct ice_sw_fv_section *fv_section = section; - - if (!section || sect_type != ICE_SID_FLD_VEC_SW) - return NULL; - if (index >= le16_to_cpu(fv_section->count)) - return NULL; - if (offset) - /* "index" passed in to this function is relative to a given - * 4k block. To get to the true index into the field vector - * table need to add the relative index to the base_offset - * field of this section - */ - *offset = le16_to_cpu(fv_section->base_offset) + index; - return fv_section->fv + index; -} - -/** - * ice_get_prof_index_max - get the max profile index for used profile - * @hw: pointer to the HW struct - * - * Calling this function will get the max profile index for used profile - * and store the index number in struct ice_switch_info *switch_info - * in HW for following use. - */ -static int ice_get_prof_index_max(struct ice_hw *hw) -{ - u16 prof_index = 0, j, max_prof_index = 0; - struct ice_pkg_enum state; - struct ice_seg *ice_seg; - bool flag = false; - struct ice_fv *fv; - u32 offset; - - memset(&state, 0, sizeof(state)); - - if (!hw->seg) - return -EINVAL; - - ice_seg = hw->seg; - - do { - fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, - &offset, ice_sw_fv_handler); - if (!fv) - break; - ice_seg = NULL; - - /* in the profile that not be used, the prot_id is set to 0xff - * and the off is set to 0x1ff for all the field vectors. - */ - for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) - if (fv->ew[j].prot_id != ICE_PROT_INVALID || - fv->ew[j].off != ICE_FV_OFFSET_INVAL) - flag = true; - if (flag && prof_index > max_prof_index) - max_prof_index = prof_index; - - prof_index++; - flag = false; - } while (fv); - - hw->switch_info->max_used_prof_index = max_prof_index; - - return 0; -} - -/** - * ice_get_ddp_pkg_state - get DDP pkg state after download - * @hw: pointer to the HW struct - * @already_loaded: indicates if pkg was already loaded onto the device - */ -static enum ice_ddp_state -ice_get_ddp_pkg_state(struct ice_hw *hw, bool already_loaded) -{ - if (hw->pkg_ver.major == hw->active_pkg_ver.major && - hw->pkg_ver.minor == hw->active_pkg_ver.minor && - hw->pkg_ver.update == hw->active_pkg_ver.update && - hw->pkg_ver.draft == hw->active_pkg_ver.draft && - !memcmp(hw->pkg_name, hw->active_pkg_name, sizeof(hw->pkg_name))) { - if (already_loaded) - return ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED; - else - return ICE_DDP_PKG_SUCCESS; - } else if (hw->active_pkg_ver.major != ICE_PKG_SUPP_VER_MAJ || - hw->active_pkg_ver.minor != ICE_PKG_SUPP_VER_MNR) { - return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED; - } else if (hw->active_pkg_ver.major == ICE_PKG_SUPP_VER_MAJ && - hw->active_pkg_ver.minor == ICE_PKG_SUPP_VER_MNR) { - return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED; - } else { - return ICE_DDP_PKG_ERR; - } -} - -/** - * ice_init_pkg - initialize/download package - * @hw: pointer to the hardware structure - * @buf: pointer to the package buffer - * @len: size of the package buffer - * - * This function initializes a package. The package contains HW tables - * required to do packet processing. First, the function extracts package - * information such as version. 
Then it finds the ice configuration segment - * within the package; this function then saves a copy of the segment pointer - * within the supplied package buffer. Next, the function will cache any hints - * from the package, followed by downloading the package itself. Note, that if - * a previous PF driver has already downloaded the package successfully, then - * the current driver will not have to download the package again. - * - * The local package contents will be used to query default behavior and to - * update specific sections of the HW's version of the package (e.g. to update - * the parse graph to understand new protocols). - * - * This function stores a pointer to the package buffer memory, and it is - * expected that the supplied buffer will not be freed immediately. If the - * package buffer needs to be freed, such as when read from a file, use - * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this - * case. - */ -enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) -{ - bool already_loaded = false; - enum ice_ddp_state state; - struct ice_pkg_hdr *pkg; - struct ice_seg *seg; - - if (!buf || !len) - return ICE_DDP_PKG_ERR; - - pkg = (struct ice_pkg_hdr *)buf; - state = ice_verify_pkg(pkg, len); - if (state) { - ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg (err: %d)\n", - state); - return state; - } - - /* initialize package info */ - state = ice_init_pkg_info(hw, pkg); - if (state) - return state; - - /* before downloading the package, check package version for - * compatibility with driver - */ - state = ice_chk_pkg_compat(hw, pkg, &seg); - if (state) - return state; - - /* initialize package hints and then download package */ - ice_init_pkg_hints(hw, seg); - state = ice_download_pkg(hw, seg); - if (state == ICE_DDP_PKG_ALREADY_LOADED) { - ice_debug(hw, ICE_DBG_INIT, "package previously loaded - no work.\n"); - already_loaded = true; - } - - /* Get information on the package currently loaded in HW, then make sure - * the driver is compatible with this version. - */ - if (!state || state == ICE_DDP_PKG_ALREADY_LOADED) { - state = ice_get_pkg_info(hw); - if (!state) - state = ice_get_ddp_pkg_state(hw, already_loaded); - } - - if (ice_is_init_pkg_successful(state)) { - hw->seg = seg; - /* on successful package download update other required - * registers to support the package and fill HW tables - * with package content. - */ - ice_init_pkg_regs(hw); - ice_fill_blk_tbls(hw); - ice_fill_hw_ptype(hw); - ice_get_prof_index_max(hw); - } else { - ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n", - state); - } - - return state; -} - -/** - * ice_copy_and_init_pkg - initialize/download a copy of the package - * @hw: pointer to the hardware structure - * @buf: pointer to the package buffer - * @len: size of the package buffer - * - * This function copies the package buffer, and then calls ice_init_pkg() to - * initialize the copied package contents. - * - * The copying is necessary if the package buffer supplied is constant, or if - * the memory may disappear shortly after calling this function. - * - * If the package buffer resides in the data segment and can be modified, the - * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg(). - * - * However, if the package buffer needs to be copied first, such as when being - * read from a file, the caller should use ice_copy_and_init_pkg(). - * - * This function will first copy the package buffer, before calling - * ice_init_pkg(). 
The caller is free to immediately destroy the original - * package buffer, as the new copy will be managed by this function and - * related routines. - */ -enum ice_ddp_state -ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len) -{ - enum ice_ddp_state state; - u8 *buf_copy; - - if (!buf || !len) - return ICE_DDP_PKG_ERR; - - buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL); - - state = ice_init_pkg(hw, buf_copy, len); - if (!ice_is_init_pkg_successful(state)) { - /* Free the copy, since we failed to initialize the package */ - devm_kfree(ice_hw_to_dev(hw), buf_copy); - } else { - /* Track the copied pkg so we can free it later */ - hw->pkg_copy = buf_copy; - hw->pkg_size = len; - } - - return state; -} - -/** - * ice_is_init_pkg_successful - check if DDP init was successful - * @state: state of the DDP pkg after download - */ -bool ice_is_init_pkg_successful(enum ice_ddp_state state) -{ - switch (state) { - case ICE_DDP_PKG_SUCCESS: - case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED: - case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED: - return true; - default: - return false; - } -} - -/** - * ice_pkg_buf_alloc - * @hw: pointer to the HW structure - * - * Allocates a package buffer and returns a pointer to the buffer header. - * Note: all package contents must be in Little Endian form. - */ -static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) -{ - struct ice_buf_build *bld; - struct ice_buf_hdr *buf; - - bld = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*bld), GFP_KERNEL); - if (!bld) - return NULL; - - buf = (struct ice_buf_hdr *)bld; - buf->data_end = cpu_to_le16(offsetof(struct ice_buf_hdr, - section_entry)); - return bld; -} - -static bool ice_is_gtp_u_profile(u16 prof_idx) -{ - return (prof_idx >= ICE_PROFID_IPV6_GTPU_TEID && - prof_idx <= ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER) || - prof_idx == ICE_PROFID_IPV4_GTPU_TEID; -} - -static bool ice_is_gtp_c_profile(u16 prof_idx) -{ - switch (prof_idx) { - case ICE_PROFID_IPV4_GTPC_TEID: - case ICE_PROFID_IPV4_GTPC_NO_TEID: - case ICE_PROFID_IPV6_GTPC_TEID: - case ICE_PROFID_IPV6_GTPC_NO_TEID: - return true; - default: - return false; - } -} - -/** - * ice_get_sw_prof_type - determine switch profile type - * @hw: pointer to the HW structure - * @fv: pointer to the switch field vector - * @prof_idx: profile index to check - */ -static enum ice_prof_type -ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv, u32 prof_idx) -{ - u16 i; - - if (ice_is_gtp_c_profile(prof_idx)) - return ICE_PROF_TUN_GTPC; - - if (ice_is_gtp_u_profile(prof_idx)) - return ICE_PROF_TUN_GTPU; - - for (i = 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) { - /* UDP tunnel will have UDP_OF protocol ID and VNI offset */ - if (fv->ew[i].prot_id == (u8)ICE_PROT_UDP_OF && - fv->ew[i].off == ICE_VNI_OFFSET) - return ICE_PROF_TUN_UDP; - - /* GRE tunnel will have GRE protocol */ - if (fv->ew[i].prot_id == (u8)ICE_PROT_GRE_OF) - return ICE_PROF_TUN_GRE; - } - - return ICE_PROF_NON_TUN; -} - -/** - * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profile type - * @hw: pointer to hardware structure - * @req_profs: type of profiles requested - * @bm: pointer to memory for returning the bitmap of field vectors - */ -void -ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, - unsigned long *bm) -{ - struct ice_pkg_enum state; - struct ice_seg *ice_seg; - struct ice_fv *fv; - - if (req_profs == ICE_PROF_ALL) { - bitmap_set(bm, 0, ICE_MAX_NUM_PROFILES); - return; - } - - memset(&state, 0, sizeof(state)); - bitmap_zero(bm, 
ICE_MAX_NUM_PROFILES); - ice_seg = hw->seg; - do { - enum ice_prof_type prof_type; - u32 offset; - - fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, - &offset, ice_sw_fv_handler); - ice_seg = NULL; - - if (fv) { - /* Determine field vector type */ - prof_type = ice_get_sw_prof_type(hw, fv, offset); - - if (req_profs & prof_type) - set_bit((u16)offset, bm); - } - } while (fv); -} - -/** - * ice_get_sw_fv_list - * @hw: pointer to the HW structure - * @lkups: list of protocol types - * @bm: bitmap of field vectors to consider - * @fv_list: Head of a list - * - * Finds all the field vector entries from switch block that contain - * a given protocol ID and offset and returns a list of structures of type - * "ice_sw_fv_list_entry". Every structure in the list has a field vector - * definition and profile ID information - * NOTE: The caller of the function is responsible for freeing the memory - * allocated for every list entry. - */ -int -ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, - unsigned long *bm, struct list_head *fv_list) -{ - struct ice_sw_fv_list_entry *fvl; - struct ice_sw_fv_list_entry *tmp; - struct ice_pkg_enum state; - struct ice_seg *ice_seg; - struct ice_fv *fv; - u32 offset; - - memset(&state, 0, sizeof(state)); - - if (!lkups->n_val_words || !hw->seg) - return -EINVAL; - - ice_seg = hw->seg; - do { - u16 i; - - fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, - &offset, ice_sw_fv_handler); - if (!fv) - break; - ice_seg = NULL; - - /* If field vector is not in the bitmap list, then skip this - * profile. - */ - if (!test_bit((u16)offset, bm)) - continue; - - for (i = 0; i < lkups->n_val_words; i++) { - int j; - - for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) - if (fv->ew[j].prot_id == - lkups->fv_words[i].prot_id && - fv->ew[j].off == lkups->fv_words[i].off) - break; - if (j >= hw->blk[ICE_BLK_SW].es.fvw) - break; - if (i + 1 == lkups->n_val_words) { - fvl = devm_kzalloc(ice_hw_to_dev(hw), - sizeof(*fvl), GFP_KERNEL); - if (!fvl) - goto err; - fvl->fv_ptr = fv; - fvl->profile_id = offset; - list_add(&fvl->list_entry, fv_list); - break; - } - } - } while (fv); - if (list_empty(fv_list)) { - dev_warn(ice_hw_to_dev(hw), "Required profiles not found in currently loaded DDP package"); - return -EIO; - } - - return 0; - -err: - list_for_each_entry_safe(fvl, tmp, fv_list, list_entry) { - list_del(&fvl->list_entry); - devm_kfree(ice_hw_to_dev(hw), fvl); - } - - return -ENOMEM; -} - -/** - * ice_init_prof_result_bm - Initialize the profile result index bitmap - * @hw: pointer to hardware structure - */ -void ice_init_prof_result_bm(struct ice_hw *hw) -{ - struct ice_pkg_enum state; - struct ice_seg *ice_seg; - struct ice_fv *fv; - - memset(&state, 0, sizeof(state)); - - if (!hw->seg) - return; - - ice_seg = hw->seg; - do { - u32 off; - u16 i; - - fv = ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, - &off, ice_sw_fv_handler); - ice_seg = NULL; - if (!fv) - break; - - bitmap_zero(hw->switch_info->prof_res_bm[off], - ICE_MAX_FV_WORDS); - - /* Determine empty field vector indices, these can be - * used for recipe results. Skip index 0, since it is - * always used for Switch ID. 
- */ - for (i = 1; i < ICE_MAX_FV_WORDS; i++) - if (fv->ew[i].prot_id == ICE_PROT_INVALID && - fv->ew[i].off == ICE_FV_OFFSET_INVAL) - set_bit(i, hw->switch_info->prof_res_bm[off]); - } while (fv); -} - -/** - * ice_pkg_buf_free - * @hw: pointer to the HW structure - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) - * - * Frees a package buffer - */ -void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) -{ - devm_kfree(ice_hw_to_dev(hw), bld); -} - -/** - * ice_pkg_buf_reserve_section - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) - * @count: the number of sections to reserve - * - * Reserves one or more section table entries in a package buffer. This routine - * can be called multiple times as long as they are made before calling - * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section() - * is called once, the number of sections that can be allocated will not be able - * to be increased; not using all reserved sections is fine, but this will - * result in some wasted space in the buffer. - * Note: all package contents must be in Little Endian form. - */ -static int -ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) -{ - struct ice_buf_hdr *buf; - u16 section_count; - u16 data_end; - - if (!bld) - return -EINVAL; - - buf = (struct ice_buf_hdr *)&bld->buf; - - /* already an active section, can't increase table size */ - section_count = le16_to_cpu(buf->section_count); - if (section_count > 0) - return -EIO; - - if (bld->reserved_section_table_entries + count > ICE_MAX_S_COUNT) - return -EIO; - bld->reserved_section_table_entries += count; - - data_end = le16_to_cpu(buf->data_end) + - flex_array_size(buf, section_entry, count); - buf->data_end = cpu_to_le16(data_end); - - return 0; -} - -/** - * ice_pkg_buf_alloc_section - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) - * @type: the section type value - * @size: the size of the section to reserve (in bytes) - * - * Reserves memory in the buffer for a section's content and updates the - * buffers' status accordingly. This routine returns a pointer to the first - * byte of the section start within the buffer, which is used to fill in the - * section contents. - * Note: all package contents must be in Little Endian form. 
- */ -static void * -ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) -{ - struct ice_buf_hdr *buf; - u16 sect_count; - u16 data_end; - - if (!bld || !type || !size) - return NULL; - - buf = (struct ice_buf_hdr *)&bld->buf; - - /* check for enough space left in buffer */ - data_end = le16_to_cpu(buf->data_end); - - /* section start must align on 4 byte boundary */ - data_end = ALIGN(data_end, 4); - - if ((data_end + size) > ICE_MAX_S_DATA_END) - return NULL; - - /* check for more available section table entries */ - sect_count = le16_to_cpu(buf->section_count); - if (sect_count < bld->reserved_section_table_entries) { - void *section_ptr = ((u8 *)buf) + data_end; - - buf->section_entry[sect_count].offset = cpu_to_le16(data_end); - buf->section_entry[sect_count].size = cpu_to_le16(size); - buf->section_entry[sect_count].type = cpu_to_le32(type); - - data_end += size; - buf->data_end = cpu_to_le16(data_end); - - buf->section_count = cpu_to_le16(sect_count + 1); - return section_ptr; - } - - /* no free section table entries */ - return NULL; -} - -/** - * ice_pkg_buf_alloc_single_section - * @hw: pointer to the HW structure - * @type: the section type value - * @size: the size of the section to reserve (in bytes) - * @section: returns pointer to the section - * - * Allocates a package buffer with a single section. - * Note: all package contents must be in Little Endian form. - */ -struct ice_buf_build * -ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, - void **section) -{ - struct ice_buf_build *buf; - - if (!section) - return NULL; - - buf = ice_pkg_buf_alloc(hw); - if (!buf) - return NULL; - - if (ice_pkg_buf_reserve_section(buf, 1)) - goto ice_pkg_buf_alloc_single_section_err; - - *section = ice_pkg_buf_alloc_section(buf, type, size); - if (!*section) - goto ice_pkg_buf_alloc_single_section_err; - - return buf; - -ice_pkg_buf_alloc_single_section_err: - ice_pkg_buf_free(hw, buf); - return NULL; -} - -/** - * ice_pkg_buf_get_active_sections - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) - * - * Returns the number of active sections. Before using the package buffer - * in an update package command, the caller should make sure that there is at - * least one active section - otherwise, the buffer is not legal and should - * not be used. - * Note: all package contents must be in Little Endian form. 
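
ice_pkg_buf_reserve_section() and ice_pkg_buf_alloc_section() enforce a strict two-phase protocol: the section table may only grow while section_count is still zero, because content is carved from data_end, which sits directly behind the table. A hedged caller sketch, condensed from ice_pkg_buf_alloc_single_section() above (the function name is hypothetical):

static struct ice_buf_build *ice_build_two_sections(struct ice_hw *hw)
{
	struct ice_buf_build *bld = ice_pkg_buf_alloc(hw);
	void *xlt1, *xlt2;

	if (!bld)
		return NULL;

	/* phase 1: reserve table entries while section_count == 0 */
	if (ice_pkg_buf_reserve_section(bld, 2))
		goto err;

	/* phase 2: carving content bumps section_count and thereby
	 * freezes the table size; fill the memory in little endian
	 */
	xlt1 = ice_pkg_buf_alloc_section(bld, ICE_SID_XLT1_SW, 128);
	xlt2 = ice_pkg_buf_alloc_section(bld, ICE_SID_XLT2_SW, 256);
	if (!xlt1 || !xlt2)
		goto err;

	return bld;

err:
	ice_pkg_buf_free(hw, bld);
	return NULL;
}
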
- */ -static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) -{ - struct ice_buf_hdr *buf; - - if (!bld) - return 0; - - buf = (struct ice_buf_hdr *)&bld->buf; - return le16_to_cpu(buf->section_count); -} - -/** - * ice_pkg_buf - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) - * - * Return a pointer to the buffer's header - */ -struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) -{ - if (!bld) - return NULL; - - return &bld->buf; -} - -/** * ice_get_open_tunnel_port - retrieve an open tunnel port * @hw: pointer to the HW structure * @port: returns open port @@ -2297,10 +391,11 @@ ice_upd_dvm_boost_entry_err: */ int ice_set_dvm_boost_entries(struct ice_hw *hw) { - int status; u16 i; for (i = 0; i < hw->dvm_upd.count; i++) { + int status; + status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]); if (status) return status; @@ -2757,7 +852,6 @@ ice_match_prop_lst(struct list_head *list1, struct list_head *list2) count++; list_for_each_entry(tmp2, list2, list) chk_count++; - /* cppcheck-suppress knownConditionTrueFalse */ if (!count || count != chk_count) return false; @@ -5102,12 +3196,13 @@ ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u16 idx = vsig & ICE_VSIG_IDX_M; struct ice_vsig_vsi *vsi_cur; struct ice_vsig_prof *d, *t; - int status; /* remove TCAM entries */ list_for_each_entry_safe(d, t, &hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst, list) { + int status; + status = ice_rem_prof_id(hw, blk, d); if (status) return status; @@ -5158,12 +3253,13 @@ ice_rem_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl, { u16 idx = vsig & ICE_VSIG_IDX_M; struct ice_vsig_prof *p, *t; - int status; list_for_each_entry_safe(p, t, &hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst, list) if (p->profile_cookie == hdl) { + int status; + if (ice_vsig_prof_id_count(hw, blk, vsig) == 1) /* this is the last profile, remove the VSIG */ return ice_rem_vsig(hw, blk, vsig, chg); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h index 9c530c86703e..7af7c8e9aa4e 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h @@ -6,75 +6,6 @@ #include "ice_type.h" -/* Package minimal version supported */ -#define ICE_PKG_SUPP_VER_MAJ 1 -#define ICE_PKG_SUPP_VER_MNR 3 - -/* Package format version */ -#define ICE_PKG_FMT_VER_MAJ 1 -#define ICE_PKG_FMT_VER_MNR 0 -#define ICE_PKG_FMT_VER_UPD 0 -#define ICE_PKG_FMT_VER_DFT 0 - -#define ICE_PKG_CNT 4 - -enum ice_ddp_state { - /* Indicates that this call to ice_init_pkg - * successfully loaded the requested DDP package - */ - ICE_DDP_PKG_SUCCESS = 0, - - /* Generic error for already loaded errors, it is mapped later to - * the more specific one (one of the next 3) - */ - ICE_DDP_PKG_ALREADY_LOADED = -1, - - /* Indicates that a DDP package of the same version has already been - * loaded onto the device by a previous call or by another PF - */ - ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED = -2, - - /* The device has a DDP package that is not supported by the driver */ - ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED = -3, - - /* The device has a compatible package - * (but different from the request) already loaded - */ - ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED = -4, - - /* The firmware loaded on the device is not compatible with - * the DDP package loaded - */ - ICE_DDP_PKG_FW_MISMATCH = -5, - - /* The DDP package file is invalid */ - ICE_DDP_PKG_INVALID_FILE = -6, - - /* The version of the DDP package provided is higher 
than - * the driver supports - */ - ICE_DDP_PKG_FILE_VERSION_TOO_HIGH = -7, - - /* The version of the DDP package provided is lower than the - * driver supports - */ - ICE_DDP_PKG_FILE_VERSION_TOO_LOW = -8, - - /* The signature of the DDP package file provided is invalid */ - ICE_DDP_PKG_FILE_SIGNATURE_INVALID = -9, - - /* The DDP package file security revision is too low and not - * supported by firmware - */ - ICE_DDP_PKG_FILE_REVISION_TOO_LOW = -10, - - /* An error occurred in firmware while loading the DDP package */ - ICE_DDP_PKG_LOAD_ERROR = -11, - - /* Other errors */ - ICE_DDP_PKG_ERR = -12 -}; - int ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access); void ice_release_change_lock(struct ice_hw *hw); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h index 974d14a83b2e..4f42e14ed3ae 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_type.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h @@ -3,205 +3,7 @@ #ifndef _ICE_FLEX_TYPE_H_ #define _ICE_FLEX_TYPE_H_ - -#define ICE_FV_OFFSET_INVAL 0x1FF - -/* Extraction Sequence (Field Vector) Table */ -struct ice_fv_word { - u8 prot_id; - u16 off; /* Offset within the protocol header */ - u8 resvrd; -} __packed; - -#define ICE_MAX_NUM_PROFILES 256 - -#define ICE_MAX_FV_WORDS 48 -struct ice_fv { - struct ice_fv_word ew[ICE_MAX_FV_WORDS]; -}; - -/* Package and segment headers and tables */ -struct ice_pkg_hdr { - struct ice_pkg_ver pkg_format_ver; - __le32 seg_count; - __le32 seg_offset[]; -}; - -/* generic segment */ -struct ice_generic_seg_hdr { -#define SEGMENT_TYPE_METADATA 0x00000001 -#define SEGMENT_TYPE_ICE 0x00000010 - __le32 seg_type; - struct ice_pkg_ver seg_format_ver; - __le32 seg_size; - char seg_id[ICE_PKG_NAME_SIZE]; -}; - -/* ice specific segment */ - -union ice_device_id { - struct { - __le16 device_id; - __le16 vendor_id; - } dev_vend_id; - __le32 id; -}; - -struct ice_device_id_entry { - union ice_device_id device; - union ice_device_id sub_device; -}; - -struct ice_seg { - struct ice_generic_seg_hdr hdr; - __le32 device_table_count; - struct ice_device_id_entry device_table[]; -}; - -struct ice_nvm_table { - __le32 table_count; - __le32 vers[]; -}; - -struct ice_buf { -#define ICE_PKG_BUF_SIZE 4096 - u8 buf[ICE_PKG_BUF_SIZE]; -}; - -struct ice_buf_table { - __le32 buf_count; - struct ice_buf buf_array[]; -}; - -/* global metadata specific segment */ -struct ice_global_metadata_seg { - struct ice_generic_seg_hdr hdr; - struct ice_pkg_ver pkg_ver; - __le32 rsvd; - char pkg_name[ICE_PKG_NAME_SIZE]; -}; - -#define ICE_MIN_S_OFF 12 -#define ICE_MAX_S_OFF 4095 -#define ICE_MIN_S_SZ 1 -#define ICE_MAX_S_SZ 4084 - -/* section information */ -struct ice_section_entry { - __le32 type; - __le16 offset; - __le16 size; -}; - -#define ICE_MIN_S_COUNT 1 -#define ICE_MAX_S_COUNT 511 -#define ICE_MIN_S_DATA_END 12 -#define ICE_MAX_S_DATA_END 4096 - -#define ICE_METADATA_BUF 0x80000000 - -struct ice_buf_hdr { - __le16 section_count; - __le16 data_end; - struct ice_section_entry section_entry[]; -}; - -#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) ((ICE_PKG_BUF_SIZE - \ - struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) /\ - (ent_sz)) - -/* ice package section IDs */ -#define ICE_SID_METADATA 1 -#define ICE_SID_XLT0_SW 10 -#define ICE_SID_XLT_KEY_BUILDER_SW 11 -#define ICE_SID_XLT1_SW 12 -#define ICE_SID_XLT2_SW 13 -#define ICE_SID_PROFID_TCAM_SW 14 -#define ICE_SID_PROFID_REDIR_SW 15 -#define ICE_SID_FLD_VEC_SW 16 -#define 
ICE_SID_CDID_KEY_BUILDER_SW 17 - -struct ice_meta_sect { - struct ice_pkg_ver ver; -#define ICE_META_SECT_NAME_SIZE 28 - char name[ICE_META_SECT_NAME_SIZE]; - __le32 track_id; -}; - -#define ICE_SID_CDID_REDIR_SW 18 - -#define ICE_SID_XLT0_ACL 20 -#define ICE_SID_XLT_KEY_BUILDER_ACL 21 -#define ICE_SID_XLT1_ACL 22 -#define ICE_SID_XLT2_ACL 23 -#define ICE_SID_PROFID_TCAM_ACL 24 -#define ICE_SID_PROFID_REDIR_ACL 25 -#define ICE_SID_FLD_VEC_ACL 26 -#define ICE_SID_CDID_KEY_BUILDER_ACL 27 -#define ICE_SID_CDID_REDIR_ACL 28 - -#define ICE_SID_XLT0_FD 30 -#define ICE_SID_XLT_KEY_BUILDER_FD 31 -#define ICE_SID_XLT1_FD 32 -#define ICE_SID_XLT2_FD 33 -#define ICE_SID_PROFID_TCAM_FD 34 -#define ICE_SID_PROFID_REDIR_FD 35 -#define ICE_SID_FLD_VEC_FD 36 -#define ICE_SID_CDID_KEY_BUILDER_FD 37 -#define ICE_SID_CDID_REDIR_FD 38 - -#define ICE_SID_XLT0_RSS 40 -#define ICE_SID_XLT_KEY_BUILDER_RSS 41 -#define ICE_SID_XLT1_RSS 42 -#define ICE_SID_XLT2_RSS 43 -#define ICE_SID_PROFID_TCAM_RSS 44 -#define ICE_SID_PROFID_REDIR_RSS 45 -#define ICE_SID_FLD_VEC_RSS 46 -#define ICE_SID_CDID_KEY_BUILDER_RSS 47 -#define ICE_SID_CDID_REDIR_RSS 48 - -#define ICE_SID_RXPARSER_MARKER_PTYPE 55 -#define ICE_SID_RXPARSER_BOOST_TCAM 56 -#define ICE_SID_RXPARSER_METADATA_INIT 58 -#define ICE_SID_TXPARSER_BOOST_TCAM 66 - -#define ICE_SID_XLT0_PE 80 -#define ICE_SID_XLT_KEY_BUILDER_PE 81 -#define ICE_SID_XLT1_PE 82 -#define ICE_SID_XLT2_PE 83 -#define ICE_SID_PROFID_TCAM_PE 84 -#define ICE_SID_PROFID_REDIR_PE 85 -#define ICE_SID_FLD_VEC_PE 86 -#define ICE_SID_CDID_KEY_BUILDER_PE 87 -#define ICE_SID_CDID_REDIR_PE 88 - -/* Label Metadata section IDs */ -#define ICE_SID_LBL_FIRST 0x80000010 -#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018 -/* The following define MUST be updated to reflect the last label section ID */ -#define ICE_SID_LBL_LAST 0x80000038 - -enum ice_block { - ICE_BLK_SW = 0, - ICE_BLK_ACL, - ICE_BLK_FD, - ICE_BLK_RSS, - ICE_BLK_PE, - ICE_BLK_COUNT -}; - -enum ice_sect { - ICE_XLT0 = 0, - ICE_XLT_KB, - ICE_XLT1, - ICE_XLT2, - ICE_PROF_TCAM, - ICE_PROF_REDIR, - ICE_VEC_TBL, - ICE_CDID_KB, - ICE_CDID_REDIR, - ICE_SECT_COUNT -}; +#include "ice_ddp.h" /* Packet Type (PTYPE) values */ #define ICE_PTYPE_MAC_PAY 1 @@ -283,134 +85,6 @@ struct ice_ptype_attributes { enum ice_ptype_attrib_type attrib; }; -/* package labels */ -struct ice_label { - __le16 value; -#define ICE_PKG_LABEL_SIZE 64 - char name[ICE_PKG_LABEL_SIZE]; -}; - -struct ice_label_section { - __le16 count; - struct ice_label label[]; -}; - -#define ICE_MAX_LABELS_IN_BUF ICE_MAX_ENTRIES_IN_BUF( \ - struct_size((struct ice_label_section *)0, label, 1) - \ - sizeof(struct ice_label), sizeof(struct ice_label)) - -struct ice_sw_fv_section { - __le16 count; - __le16 base_offset; - struct ice_fv fv[]; -}; - -struct ice_sw_fv_list_entry { - struct list_head list_entry; - u32 profile_id; - struct ice_fv *fv_ptr; -}; - -/* The BOOST TCAM stores the match packet header in reverse order, meaning - * the fields are reversed; in addition, this means that the normally big endian - * fields of the packet are now little endian. 
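
The practical upshot of the reversed-header note above: a port number that is big endian on the wire must be written into the boost TCAM key as little endian. Roughly, for a tunnel-port match (the helper is hypothetical; the real conversion lives in the tunnel-create path):

static void ice_fill_boost_port_key(struct ice_boost_key_value *kv,
				    u16 port) /* host byte order */
{
	/* reversed header bytes mean the key wants little endian */
	kv->hv_dst_port_key = cpu_to_le16(port);
}
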
- */ -struct ice_boost_key_value { -#define ICE_BOOST_REMAINING_HV_KEY 15 - u8 remaining_hv_key[ICE_BOOST_REMAINING_HV_KEY]; - __le16 hv_dst_port_key; - __le16 hv_src_port_key; - u8 tcam_search_key; -} __packed; - -struct ice_boost_key { - struct ice_boost_key_value key; - struct ice_boost_key_value key2; -}; - -/* package Boost TCAM entry */ -struct ice_boost_tcam_entry { - __le16 addr; - __le16 reserved; - /* break up the 40 bytes of key into different fields */ - struct ice_boost_key key; - u8 boost_hit_index_group; - /* The following contains bitfields which are not on byte boundaries. - * These fields are currently unused by driver software. - */ -#define ICE_BOOST_BIT_FIELDS 43 - u8 bit_fields[ICE_BOOST_BIT_FIELDS]; -}; - -struct ice_boost_tcam_section { - __le16 count; - __le16 reserved; - struct ice_boost_tcam_entry tcam[]; -}; - -#define ICE_MAX_BST_TCAMS_IN_BUF ICE_MAX_ENTRIES_IN_BUF( \ - struct_size((struct ice_boost_tcam_section *)0, tcam, 1) - \ - sizeof(struct ice_boost_tcam_entry), \ - sizeof(struct ice_boost_tcam_entry)) - -/* package Marker Ptype TCAM entry */ -struct ice_marker_ptype_tcam_entry { -#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024 - __le16 addr; - __le16 ptype; - u8 keys[20]; -}; - -struct ice_marker_ptype_tcam_section { - __le16 count; - __le16 reserved; - struct ice_marker_ptype_tcam_entry tcam[]; -}; - -#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF \ - ICE_MAX_ENTRIES_IN_BUF(struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, 1) - \ - sizeof(struct ice_marker_ptype_tcam_entry), \ - sizeof(struct ice_marker_ptype_tcam_entry)) - -struct ice_xlt1_section { - __le16 count; - __le16 offset; - u8 value[]; -}; - -struct ice_xlt2_section { - __le16 count; - __le16 offset; - __le16 value[]; -}; - -struct ice_prof_redir_section { - __le16 count; - __le16 offset; - u8 redir_value[]; -}; - -/* package buffer building */ - -struct ice_buf_build { - struct ice_buf buf; - u16 reserved_section_table_entries; -}; - -struct ice_pkg_enum { - struct ice_buf_table *buf_table; - u32 buf_idx; - - u32 type; - struct ice_buf_hdr *buf; - u32 sect_idx; - void *sect; - u32 sect_type; - - u32 entry_idx; - void *(*handler)(u32 sect_type, void *section, u32 index, u32 *offset); -}; - /* Tunnel enabling */ enum ice_tunnel_type { diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index 40e678cfb507..aff7a141c30d 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -208,6 +208,11 @@ static int ice_fltr_remove_eth_list(struct ice_vsi *vsi, struct list_head *list) void ice_fltr_remove_all(struct ice_vsi *vsi) { ice_remove_vsi_fltr(&vsi->back->hw, vsi->idx); + /* sync netdev filters if exist */ + if (vsi->netdev) { + __dev_uc_unsync(vsi->netdev, NULL); + __dev_mc_unsync(vsi->netdev, NULL); + } } /** diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c index 43e199b5b513..8dec748bb53a 100644 --- a/drivers/net/ethernet/intel/ice/ice_gnss.c +++ b/drivers/net/ethernet/intel/ice/ice_gnss.c @@ -3,15 +3,18 @@ #include "ice.h" #include "ice_lib.h" -#include <linux/tty_driver.h> /** - * ice_gnss_do_write - Write data to internal GNSS + * ice_gnss_do_write - Write data to internal GNSS receiver * @pf: board private structure * @buf: command buffer * @size: command buffer size * * Write UBX command data to the GNSS receiver + * + * Return: + * * number of bytes written - success + * * negative - error code */ static unsigned int ice_gnss_do_write(struct 
ice_pf *pf, unsigned char *buf, unsigned int size) @@ -82,6 +85,12 @@ static void ice_gnss_write_pending(struct kthread_work *work) write_work); struct ice_pf *pf = gnss->back; + if (!pf) + return; + + if (!test_bit(ICE_FLAG_GNSS, pf->flags)) + return; + if (!list_empty(&gnss->queue)) { struct gnss_write_buf *write_buf = NULL; unsigned int bytes; @@ -102,16 +111,14 @@ static void ice_gnss_write_pending(struct kthread_work *work) * ice_gnss_read - Read data from internal GNSS module * @work: GNSS read work structure * - * Read the data from internal GNSS receiver, number of bytes read will be - * returned in *read_data parameter. + * Read the data from internal GNSS receiver, write it to gnss_dev. */ static void ice_gnss_read(struct kthread_work *work) { struct gnss_serial *gnss = container_of(work, struct gnss_serial, read_work.work); + unsigned int i, bytes_read, data_len, count; struct ice_aqc_link_topo_addr link_topo; - unsigned int i, bytes_read, data_len; - struct tty_port *port; struct ice_pf *pf; struct ice_hw *hw; __be16 data_len_b; @@ -120,14 +127,15 @@ static void ice_gnss_read(struct kthread_work *work) int err = 0; pf = gnss->back; - if (!pf || !gnss->tty || !gnss->tty->port) { + if (!pf) { err = -EFAULT; goto exit; } - hw = &pf->hw; - port = gnss->tty->port; + if (!test_bit(ICE_FLAG_GNSS, pf->flags)) + return; + hw = &pf->hw; buf = (char *)get_zeroed_page(GFP_KERNEL); if (!buf) { err = -ENOMEM; @@ -159,7 +167,6 @@ static void ice_gnss_read(struct kthread_work *work) } data_len = min_t(typeof(data_len), data_len, PAGE_SIZE); - data_len = tty_buffer_request_room(port, data_len); if (!data_len) { err = -ENOMEM; goto exit_buf; @@ -179,12 +186,11 @@ static void ice_gnss_read(struct kthread_work *work) goto exit_buf; } - /* Send the data to the tty layer for users to read. This doesn't - * actually push the data through unless tty->low_latency is set. - */ - tty_insert_flip_string(port, buf, i); - tty_flip_buffer_push(port); - + count = gnss_insert_raw(pf->gnss_dev, buf, i); + if (count != i) + dev_warn(ice_pf_to_dev(pf), + "gnss_insert_raw ret=%d size=%d\n", + count, i); exit_buf: free_page((unsigned long)buf); kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, @@ -195,11 +201,16 @@ exit: } /** - * ice_gnss_struct_init - Initialize GNSS structure for the TTY + * ice_gnss_struct_init - Initialize GNSS receiver * @pf: Board private structure - * @index: TTY device index + * + * Initialize GNSS structures and workers. + * + * Return: + * * pointer to initialized gnss_serial struct - success + * * NULL - error */ -static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index) +static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf) { struct device *dev = ice_pf_to_dev(pf); struct kthread_worker *kworker; @@ -209,17 +220,12 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index) if (!gnss) return NULL; - mutex_init(&gnss->gnss_mutex); - gnss->open_count = 0; gnss->back = pf; - pf->gnss_serial[index] = gnss; + pf->gnss_serial = gnss; kthread_init_delayed_work(&gnss->read_work, ice_gnss_read); INIT_LIST_HEAD(&gnss->queue); kthread_init_work(&gnss->write_work, ice_gnss_write_pending); - /* Allocate a kworker for handling work required for the GNSS TTY - * writes. 
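
The read side that replaces the TTY flip buffer is a self-rescheduling delayed work: ice_gnss_open() queues it with no delay, each pass re-arms itself at ICE_GNSS_TIMER_DELAY_TIME (HZ / 10, roughly ten polls a second), and ice_gnss_close() cancels it synchronously. The skeleton, reduced from ice_gnss_read() (function name hypothetical):

static void ice_gnss_poll(struct kthread_work *work)
{
	struct gnss_serial *gnss = container_of(work, struct gnss_serial,
						read_work.work);

	/* ... read from the receiver over I2C and hand the bytes to
	 * the GNSS core with gnss_insert_raw() ...
	 */

	/* re-arm: the loop sustains itself until close() cancels it */
	kthread_queue_delayed_work(gnss->kworker, &gnss->read_work,
				   ICE_GNSS_TIMER_DELAY_TIME);
}
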
- */ kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev)); if (IS_ERR(kworker)) { kfree(gnss); @@ -232,140 +238,100 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf, int index) } /** - * ice_gnss_tty_open - Initialize GNSS structures on TTY device open - * @tty: pointer to the tty_struct - * @filp: pointer to the file + * ice_gnss_open - Open GNSS device + * @gdev: pointer to the gnss device struct + * + * Open GNSS device and start filling the read buffer for consumer. * - * This routine is mandatory. If this routine is not filled in, the attempted - * open will fail with ENODEV. + * Return: + * * 0 - success + * * negative - error code */ -static int ice_gnss_tty_open(struct tty_struct *tty, struct file *filp) +static int ice_gnss_open(struct gnss_device *gdev) { + struct ice_pf *pf = gnss_get_drvdata(gdev); struct gnss_serial *gnss; - struct ice_pf *pf; - pf = (struct ice_pf *)tty->driver->driver_state; if (!pf) return -EFAULT; - /* Clear the pointer in case something fails */ - tty->driver_data = NULL; - - /* Get the serial object associated with this tty pointer */ - gnss = pf->gnss_serial[tty->index]; - if (!gnss) { - /* Initialize GNSS struct on the first device open */ - gnss = ice_gnss_struct_init(pf, tty->index); - if (!gnss) - return -ENOMEM; - } + if (!test_bit(ICE_FLAG_GNSS, pf->flags)) + return -EFAULT; - mutex_lock(&gnss->gnss_mutex); + gnss = pf->gnss_serial; + if (!gnss) + return -ENODEV; - /* Save our structure within the tty structure */ - tty->driver_data = gnss; - gnss->tty = tty; - gnss->open_count++; kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, 0); - mutex_unlock(&gnss->gnss_mutex); - return 0; } /** - * ice_gnss_tty_close - Cleanup GNSS structures on tty device close - * @tty: pointer to the tty_struct - * @filp: pointer to the file + * ice_gnss_close - Close GNSS device + * @gdev: pointer to the gnss device struct + * + * Close GNSS device, cancel worker, stop filling the read buffer. */ -static void ice_gnss_tty_close(struct tty_struct *tty, struct file *filp) +static void ice_gnss_close(struct gnss_device *gdev) { - struct gnss_serial *gnss = tty->driver_data; - struct ice_pf *pf; - - if (!gnss) - return; + struct ice_pf *pf = gnss_get_drvdata(gdev); + struct gnss_serial *gnss; - pf = (struct ice_pf *)tty->driver->driver_state; if (!pf) return; - mutex_lock(&gnss->gnss_mutex); - - if (!gnss->open_count) { - /* Port was never opened */ - dev_err(ice_pf_to_dev(pf), "GNSS port not opened\n"); - goto exit; - } + gnss = pf->gnss_serial; + if (!gnss) + return; - gnss->open_count--; - if (gnss->open_count <= 0) { - /* Port is in shutdown state */ - kthread_cancel_delayed_work_sync(&gnss->read_work); - } -exit: - mutex_unlock(&gnss->gnss_mutex); + kthread_cancel_work_sync(&gnss->write_work); + kthread_cancel_delayed_work_sync(&gnss->read_work); } /** - * ice_gnss_tty_write - Write GNSS data - * @tty: pointer to the tty_struct + * ice_gnss_write - Write to GNSS device + * @gdev: pointer to the gnss device struct * @buf: pointer to the user data - * @count: the number of characters queued to be sent to the HW + * @count: size of the buffer to be sent to the GNSS device * - * The write function call is called by the user when there is data to be sent - * to the hardware. First the tty core receives the call, and then it passes the - * data on to the tty driver's write function. The tty core also tells the tty - * driver the size of the data being sent. 
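
Open and close now arrive through a struct gnss_operations table rather than tty hooks, with the PF recovered from device drvdata; ice_gnss_register() later in this file wires things up exactly this way. A minimal registration sketch against the GNSS core API (sketch-local names are hypothetical):

static const struct gnss_operations ice_gnss_ops_sketch = {
	.open      = ice_gnss_open,
	.close     = ice_gnss_close,
	.write_raw = ice_gnss_write,
};

static int ice_gnss_register_sketch(struct ice_pf *pf, struct device *parent)
{
	struct gnss_device *gdev = gnss_allocate_device(parent);
	int err;

	if (!gdev)
		return -ENOMEM;

	gdev->type = GNSS_TYPE_UBX;
	gdev->ops = &ice_gnss_ops_sketch;
	gnss_set_drvdata(gdev, pf);

	err = gnss_register_device(gdev);
	if (err)
		gnss_put_device(gdev); /* drop the ref on failure */
	return err;
}
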
- * If any errors happen during the write call, a negative error value should be - * returned instead of the number of characters queued to be written. + * Return: + * * number of written bytes - success + * * negative - error code */ static int -ice_gnss_tty_write(struct tty_struct *tty, const unsigned char *buf, int count) +ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf, + size_t count) { + struct ice_pf *pf = gnss_get_drvdata(gdev); struct gnss_write_buf *write_buf; struct gnss_serial *gnss; unsigned char *cmd_buf; - struct ice_pf *pf; int err = count; /* We cannot write a single byte using our I2C implementation. */ if (count <= 1 || count > ICE_GNSS_TTY_WRITE_BUF) return -EINVAL; - gnss = tty->driver_data; - if (!gnss) - return -EFAULT; - - pf = (struct ice_pf *)tty->driver->driver_state; if (!pf) return -EFAULT; - /* Only allow to write on TTY 0 */ - if (gnss != pf->gnss_serial[0]) - return -EIO; - - mutex_lock(&gnss->gnss_mutex); + if (!test_bit(ICE_FLAG_GNSS, pf->flags)) + return -EFAULT; - if (!gnss->open_count) { - err = -EINVAL; - goto exit; - } + gnss = pf->gnss_serial; + if (!gnss) + return -ENODEV; cmd_buf = kcalloc(count, sizeof(*buf), GFP_KERNEL); - if (!cmd_buf) { - err = -ENOMEM; - goto exit; - } + if (!cmd_buf) + return -ENOMEM; memcpy(cmd_buf, buf, count); - - /* Send the data out to a hardware port */ write_buf = kzalloc(sizeof(*write_buf), GFP_KERNEL); if (!write_buf) { kfree(cmd_buf); - err = -ENOMEM; - goto exit; + return -ENOMEM; } write_buf->buf = cmd_buf; @@ -373,141 +339,89 @@ ice_gnss_tty_write(struct tty_struct *tty, const unsigned char *buf, int count) INIT_LIST_HEAD(&write_buf->queue); list_add_tail(&write_buf->queue, &gnss->queue); kthread_queue_work(gnss->kworker, &gnss->write_work); -exit: - mutex_unlock(&gnss->gnss_mutex); + return err; } +static const struct gnss_operations ice_gnss_ops = { + .open = ice_gnss_open, + .close = ice_gnss_close, + .write_raw = ice_gnss_write, +}; + /** - * ice_gnss_tty_write_room - Returns the numbers of characters to be written. - * @tty: pointer to the tty_struct + * ice_gnss_register - Register GNSS receiver + * @pf: Board private structure + * + * Allocate and register GNSS receiver in the Linux GNSS subsystem. * - * This routine returns the numbers of characters the tty driver will accept - * for queuing to be written or 0 if either the TTY is not open or user - * tries to write to the TTY other than the first. 
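
ice_gnss_write() is only the producer half of the write path: it duplicates the caller's buffer, links it on gnss->queue, and kicks write_work; the worker later drains the list toward ice_gnss_do_write(). The consumer side, sketched (helper name hypothetical; the size field of gnss_write_buf is assumed from the driver, it is not shown in this hunk):

static void ice_gnss_drain_queue(struct gnss_serial *gnss)
{
	while (!list_empty(&gnss->queue)) {
		struct gnss_write_buf *wb;

		wb = list_first_entry(&gnss->queue,
				      struct gnss_write_buf, queue);
		ice_gnss_do_write(gnss->back, wb->buf, wb->size);
		list_del(&wb->queue);
		kfree(wb->buf);
		kfree(wb);
	}
}
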
+ * Return: + * * 0 - success + * * negative - error code */ -static unsigned int ice_gnss_tty_write_room(struct tty_struct *tty) +static int ice_gnss_register(struct ice_pf *pf) { - struct gnss_serial *gnss = tty->driver_data; - - /* Only allow to write on TTY 0 */ - if (!gnss || gnss != gnss->back->gnss_serial[0]) - return 0; - - mutex_lock(&gnss->gnss_mutex); + struct gnss_device *gdev; + int ret; + + gdev = gnss_allocate_device(ice_pf_to_dev(pf)); + if (!gdev) { + dev_err(ice_pf_to_dev(pf), + "gnss_allocate_device returns NULL\n"); + return -ENOMEM; + } - if (!gnss->open_count) { - mutex_unlock(&gnss->gnss_mutex); - return 0; + gdev->ops = &ice_gnss_ops; + gdev->type = GNSS_TYPE_UBX; + gnss_set_drvdata(gdev, pf); + ret = gnss_register_device(gdev); + if (ret) { + dev_err(ice_pf_to_dev(pf), "gnss_register_device err=%d\n", + ret); + gnss_put_device(gdev); + } else { + pf->gnss_dev = gdev; } - mutex_unlock(&gnss->gnss_mutex); - return ICE_GNSS_TTY_WRITE_BUF; + return ret; } -static const struct tty_operations tty_gps_ops = { - .open = ice_gnss_tty_open, - .close = ice_gnss_tty_close, - .write = ice_gnss_tty_write, - .write_room = ice_gnss_tty_write_room, -}; - /** - * ice_gnss_create_tty_driver - Create a TTY driver for GNSS + * ice_gnss_deregister - Deregister GNSS receiver * @pf: Board private structure + * + * Deregister GNSS receiver from the Linux GNSS subsystem, + * release its resources. */ -static struct tty_driver *ice_gnss_create_tty_driver(struct ice_pf *pf) +static void ice_gnss_deregister(struct ice_pf *pf) { - struct device *dev = ice_pf_to_dev(pf); - const int ICE_TTYDRV_NAME_MAX = 14; - struct tty_driver *tty_driver; - char *ttydrv_name; - unsigned int i; - int err; - - tty_driver = tty_alloc_driver(ICE_GNSS_TTY_MINOR_DEVICES, - TTY_DRIVER_REAL_RAW); - if (IS_ERR(tty_driver)) { - dev_err(dev, "Failed to allocate memory for GNSS TTY\n"); - return NULL; - } - - ttydrv_name = kzalloc(ICE_TTYDRV_NAME_MAX, GFP_KERNEL); - if (!ttydrv_name) { - tty_driver_kref_put(tty_driver); - return NULL; + if (pf->gnss_dev) { + gnss_deregister_device(pf->gnss_dev); + gnss_put_device(pf->gnss_dev); + pf->gnss_dev = NULL; } - - snprintf(ttydrv_name, ICE_TTYDRV_NAME_MAX, "ttyGNSS_%02x%02x_", - (u8)pf->pdev->bus->number, (u8)PCI_SLOT(pf->pdev->devfn)); - - /* Initialize the tty driver*/ - tty_driver->owner = THIS_MODULE; - tty_driver->driver_name = dev_driver_string(dev); - tty_driver->name = (const char *)ttydrv_name; - tty_driver->type = TTY_DRIVER_TYPE_SERIAL; - tty_driver->subtype = SERIAL_TYPE_NORMAL; - tty_driver->init_termios = tty_std_termios; - tty_driver->init_termios.c_iflag &= ~INLCR; - tty_driver->init_termios.c_iflag |= IGNCR; - tty_driver->init_termios.c_oflag &= ~OPOST; - tty_driver->init_termios.c_lflag &= ~ICANON; - tty_driver->init_termios.c_cflag &= ~(CSIZE | CBAUD | CBAUDEX); - /* baud rate 9600 */ - tty_termios_encode_baud_rate(&tty_driver->init_termios, 9600, 9600); - tty_driver->driver_state = pf; - tty_set_operations(tty_driver, &tty_gps_ops); - - for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++) { - pf->gnss_tty_port[i] = kzalloc(sizeof(*pf->gnss_tty_port[i]), - GFP_KERNEL); - if (!pf->gnss_tty_port[i]) - goto err_out; - - pf->gnss_serial[i] = NULL; - - tty_port_init(pf->gnss_tty_port[i]); - tty_port_link_device(pf->gnss_tty_port[i], tty_driver, i); - } - - err = tty_register_driver(tty_driver); - if (err) { - dev_err(dev, "Failed to register TTY driver err=%d\n", err); - goto err_out; - } - - for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++) - dev_info(dev, "%s%d 
registered\n", ttydrv_name, i); - - return tty_driver; - -err_out: - while (i--) { - tty_port_destroy(pf->gnss_tty_port[i]); - kfree(pf->gnss_tty_port[i]); - } - kfree(ttydrv_name); - tty_driver_kref_put(pf->ice_gnss_tty_driver); - - return NULL; } /** - * ice_gnss_init - Initialize GNSS TTY support + * ice_gnss_init - Initialize GNSS support * @pf: Board private structure */ void ice_gnss_init(struct ice_pf *pf) { - struct tty_driver *tty_driver; + int ret; - tty_driver = ice_gnss_create_tty_driver(pf); - if (!tty_driver) + pf->gnss_serial = ice_gnss_struct_init(pf); + if (!pf->gnss_serial) return; - pf->ice_gnss_tty_driver = tty_driver; - - set_bit(ICE_FLAG_GNSS, pf->flags); - dev_info(ice_pf_to_dev(pf), "GNSS TTY init successful\n"); + ret = ice_gnss_register(pf); + if (!ret) { + set_bit(ICE_FLAG_GNSS, pf->flags); + dev_info(ice_pf_to_dev(pf), "GNSS init successful\n"); + } else { + ice_gnss_exit(pf); + dev_err(ice_pf_to_dev(pf), "GNSS init failure\n"); + } } /** @@ -516,31 +430,20 @@ void ice_gnss_init(struct ice_pf *pf) */ void ice_gnss_exit(struct ice_pf *pf) { - unsigned int i; + ice_gnss_deregister(pf); + clear_bit(ICE_FLAG_GNSS, pf->flags); - if (!test_bit(ICE_FLAG_GNSS, pf->flags) || !pf->ice_gnss_tty_driver) - return; - - for (i = 0; i < ICE_GNSS_TTY_MINOR_DEVICES; i++) { - if (pf->gnss_tty_port[i]) { - tty_port_destroy(pf->gnss_tty_port[i]); - kfree(pf->gnss_tty_port[i]); - } + if (pf->gnss_serial) { + struct gnss_serial *gnss = pf->gnss_serial; - if (pf->gnss_serial[i]) { - struct gnss_serial *gnss = pf->gnss_serial[i]; + kthread_cancel_work_sync(&gnss->write_work); + kthread_cancel_delayed_work_sync(&gnss->read_work); + kthread_destroy_worker(gnss->kworker); + gnss->kworker = NULL; - kthread_cancel_work_sync(&gnss->write_work); - kthread_cancel_delayed_work_sync(&gnss->read_work); - kfree(gnss); - pf->gnss_serial[i] = NULL; - } + kfree(gnss); + pf->gnss_serial = NULL; } - - tty_unregister_driver(pf->ice_gnss_tty_driver); - kfree(pf->ice_gnss_tty_driver->name); - tty_driver_kref_put(pf->ice_gnss_tty_driver); - pf->ice_gnss_tty_driver = NULL; } /** diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h index f454dd1d9285..31db0701d13f 100644 --- a/drivers/net/ethernet/intel/ice/ice_gnss.h +++ b/drivers/net/ethernet/intel/ice/ice_gnss.h @@ -4,15 +4,8 @@ #ifndef _ICE_GNSS_H_ #define _ICE_GNSS_H_ -#include <linux/tty.h> -#include <linux/tty_flip.h> - #define ICE_E810T_GNSS_I2C_BUS 0x2 #define ICE_GNSS_TIMER_DELAY_TIME (HZ / 10) /* 0.1 second per message */ -/* Create 2 minor devices, both using the same GNSS module. First one is RW, - * second one RO. 
- */ -#define ICE_GNSS_TTY_MINOR_DEVICES 2 #define ICE_GNSS_TTY_WRITE_BUF 250 #define ICE_MAX_I2C_DATA_SIZE FIELD_MAX(ICE_AQC_I2C_DATA_SIZE_M) #define ICE_MAX_I2C_WRITE_BYTES 4 @@ -36,13 +29,9 @@ struct gnss_write_buf { unsigned char *buf; }; - /** * struct gnss_serial - data used to initialize GNSS TTY port * @back: back pointer to PF - * @tty: pointer to the tty for this device - * @open_count: number of times this port has been opened - * @gnss_mutex: gnss_mutex used to protect GNSS serial operations * @kworker: kwork thread for handling periodic work * @read_work: read_work function for handling GNSS reads * @write_work: write_work function for handling GNSS writes @@ -50,16 +39,13 @@ struct gnss_write_buf { */ struct gnss_serial { struct ice_pf *back; - struct tty_struct *tty; - int open_count; - struct mutex gnss_mutex; /* protects GNSS serial structure */ struct kthread_worker *kworker; struct kthread_delayed_work read_work; struct kthread_work write_work; struct list_head queue; }; -#if IS_ENABLED(CONFIG_TTY) +#if IS_ENABLED(CONFIG_ICE_GNSS) void ice_gnss_init(struct ice_pf *pf); void ice_gnss_exit(struct ice_pf *pf); bool ice_gnss_is_gps_present(struct ice_hw *hw); @@ -70,5 +56,5 @@ static inline bool ice_gnss_is_gps_present(struct ice_hw *hw) { return false; } -#endif /* IS_ENABLED(CONFIG_TTY) */ +#endif /* IS_ENABLED(CONFIG_ICE_GNSS) */ #endif /* _ICE_GNSS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c index 895c32bcc8b5..e6bc2285071e 100644 --- a/drivers/net/ethernet/intel/ice/ice_idc.c +++ b/drivers/net/ethernet/intel/ice/ice_idc.c @@ -6,6 +6,8 @@ #include "ice_lib.h" #include "ice_dcb_lib.h" +static DEFINE_XARRAY_ALLOC1(ice_aux_id); + /** * ice_get_auxiliary_drv - retrieve iidc_auxiliary_drv struct * @pf: pointer to PF struct @@ -246,6 +248,17 @@ static int ice_reserve_rdma_qvector(struct ice_pf *pf) } /** + * ice_free_rdma_qvector - free vector resources reserved for RDMA driver + * @pf: board private structure to initialize + */ +static void ice_free_rdma_qvector(struct ice_pf *pf) +{ + pf->num_avail_sw_msix -= pf->num_rdma_msix; + ice_free_res(pf->irq_tracker, pf->rdma_base_vector, + ICE_RES_RDMA_VEC_ID); +} + +/** * ice_adev_release - function to be mapped to AUX dev's release op * @dev: pointer to device to free */ @@ -331,12 +344,48 @@ int ice_init_rdma(struct ice_pf *pf) struct device *dev = &pf->pdev->dev; int ret; + if (!ice_is_rdma_ena(pf)) { + dev_warn(dev, "RDMA is not supported on this device\n"); + return 0; + } + + ret = xa_alloc(&ice_aux_id, &pf->aux_idx, NULL, XA_LIMIT(1, INT_MAX), + GFP_KERNEL); + if (ret) { + dev_err(dev, "Failed to allocate device ID for AUX driver\n"); + return -ENOMEM; + } + /* Reserve vector resources */ ret = ice_reserve_rdma_qvector(pf); if (ret < 0) { dev_err(dev, "failed to reserve vectors for RDMA\n"); - return ret; + goto err_reserve_rdma_qvector; } pf->rdma_mode |= IIDC_RDMA_PROTOCOL_ROCEV2; - return ice_plug_aux_dev(pf); + ret = ice_plug_aux_dev(pf); + if (ret) + goto err_plug_aux_dev; + return 0; + +err_plug_aux_dev: + ice_free_rdma_qvector(pf); +err_reserve_rdma_qvector: + pf->adev = NULL; + xa_erase(&ice_aux_id, pf->aux_idx); + return ret; +} + +/** + * ice_deinit_rdma - deinitialize RDMA on PF + * @pf: ptr to ice_pf + */ +void ice_deinit_rdma(struct ice_pf *pf) +{ + if (!ice_is_rdma_ena(pf)) + return; + + ice_unplug_aux_dev(pf); + ice_free_rdma_qvector(pf); + xa_erase(&ice_aux_id, pf->aux_idx); } diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c 
b/drivers/net/ethernet/intel/ice/ice_lib.c index a596e07b3ce9..781475480ff2 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -166,14 +166,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi) /** * ice_vsi_set_num_qs - Set number of queues, descriptors and vectors for a VSI * @vsi: the VSI being configured - * @vf: the VF associated with this VSI, if any * * Return 0 on success and a negative value on error */ -static void ice_vsi_set_num_qs(struct ice_vsi *vsi, struct ice_vf *vf) +static void ice_vsi_set_num_qs(struct ice_vsi *vsi) { enum ice_vsi_type vsi_type = vsi->type; struct ice_pf *pf = vsi->back; + struct ice_vf *vf = vsi->vf; if (WARN_ON(vsi_type == ICE_VSI_VF && !vf)) return; @@ -282,10 +282,10 @@ static int ice_get_free_slot(void *array, int size, int curr) } /** - * ice_vsi_delete - delete a VSI from the switch + * ice_vsi_delete_from_hw - delete a VSI from the switch * @vsi: pointer to VSI being removed */ -void ice_vsi_delete(struct ice_vsi *vsi) +static void ice_vsi_delete_from_hw(struct ice_vsi *vsi) { struct ice_pf *pf = vsi->back; struct ice_vsi_ctx *ctxt; @@ -348,47 +348,144 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi) } /** - * ice_vsi_clear - clean up and deallocate the provided VSI + * ice_vsi_free_stats - Free the ring statistics structures + * @vsi: VSI pointer + */ +static void ice_vsi_free_stats(struct ice_vsi *vsi) +{ + struct ice_vsi_stats *vsi_stat; + struct ice_pf *pf = vsi->back; + int i; + + if (vsi->type == ICE_VSI_CHNL) + return; + if (!pf->vsi_stats) + return; + + vsi_stat = pf->vsi_stats[vsi->idx]; + if (!vsi_stat) + return; + + ice_for_each_alloc_txq(vsi, i) { + if (vsi_stat->tx_ring_stats[i]) { + kfree_rcu(vsi_stat->tx_ring_stats[i], rcu); + WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL); + } + } + + ice_for_each_alloc_rxq(vsi, i) { + if (vsi_stat->rx_ring_stats[i]) { + kfree_rcu(vsi_stat->rx_ring_stats[i], rcu); + WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL); + } + } + + kfree(vsi_stat->tx_ring_stats); + kfree(vsi_stat->rx_ring_stats); + kfree(vsi_stat); + pf->vsi_stats[vsi->idx] = NULL; +} + +/** + * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI + * @vsi: VSI which is having stats allocated + */ +static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi) +{ + struct ice_ring_stats **tx_ring_stats; + struct ice_ring_stats **rx_ring_stats; + struct ice_vsi_stats *vsi_stats; + struct ice_pf *pf = vsi->back; + u16 i; + + vsi_stats = pf->vsi_stats[vsi->idx]; + tx_ring_stats = vsi_stats->tx_ring_stats; + rx_ring_stats = vsi_stats->rx_ring_stats; + + /* Allocate Tx ring stats */ + ice_for_each_alloc_txq(vsi, i) { + struct ice_ring_stats *ring_stats; + struct ice_tx_ring *ring; + + ring = vsi->tx_rings[i]; + ring_stats = tx_ring_stats[i]; + + if (!ring_stats) { + ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); + if (!ring_stats) + goto err_out; + + WRITE_ONCE(tx_ring_stats[i], ring_stats); + } + + ring->ring_stats = ring_stats; + } + + /* Allocate Rx ring stats */ + ice_for_each_alloc_rxq(vsi, i) { + struct ice_ring_stats *ring_stats; + struct ice_rx_ring *ring; + + ring = vsi->rx_rings[i]; + ring_stats = rx_ring_stats[i]; + + if (!ring_stats) { + ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); + if (!ring_stats) + goto err_out; + + WRITE_ONCE(rx_ring_stats[i], ring_stats); + } + + ring->ring_stats = ring_stats; + } + + return 0; + +err_out: + ice_vsi_free_stats(vsi); + return -ENOMEM; +} + +/** + * ice_vsi_free - clean up and deallocate the provided VSI 
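
The relocated stats helpers keep the WRITE_ONCE()/kfree_rcu() publication discipline because stats readers dereference the per-queue pointers locklessly under RCU. The reader-side contract, sketched (illustrative only; the ice_q_stats field layout is assumed from the driver's stats code):

u64 pkts = 0;
struct ice_ring_stats *rs;

rcu_read_lock();
rs = READ_ONCE(vsi_stat->tx_ring_stats[i]);
if (rs)
	pkts = rs->stats.pkts; /* safe: the freeing side defers via kfree_rcu() */
rcu_read_unlock();
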
* @vsi: pointer to VSI being cleared * * This deallocates the VSI's queue resources, removes it from the PF's * VSI array if necessary, and deallocates the VSI - * - * Returns 0 on success, negative on failure */ -int ice_vsi_clear(struct ice_vsi *vsi) +static void ice_vsi_free(struct ice_vsi *vsi) { struct ice_pf *pf = NULL; struct device *dev; - if (!vsi) - return 0; - - if (!vsi->back) - return -EINVAL; + if (!vsi || !vsi->back) + return; pf = vsi->back; dev = ice_pf_to_dev(pf); if (!pf->vsi[vsi->idx] || pf->vsi[vsi->idx] != vsi) { dev_dbg(dev, "vsi does not exist at pf->vsi[%d]\n", vsi->idx); - return -EINVAL; + return; } mutex_lock(&pf->sw_mutex); /* updates the PF for this cleared VSI */ pf->vsi[vsi->idx] = NULL; - if (vsi->idx < pf->next_vsi && vsi->type != ICE_VSI_CTRL) - pf->next_vsi = vsi->idx; - if (vsi->idx < pf->next_vsi && vsi->type == ICE_VSI_CTRL && vsi->vf) - pf->next_vsi = vsi->idx; + pf->next_vsi = vsi->idx; + ice_vsi_free_stats(vsi); ice_vsi_free_arrays(vsi); mutex_unlock(&pf->sw_mutex); devm_kfree(dev, vsi); +} - return 0; +void ice_vsi_delete(struct ice_vsi *vsi) +{ + ice_vsi_delete_from_hw(vsi); + ice_vsi_free(vsi); } /** @@ -461,6 +558,10 @@ static int ice_vsi_alloc_stat_arrays(struct ice_vsi *vsi) if (!pf->vsi_stats) return -ENOENT; + if (pf->vsi_stats[vsi->idx]) + /* realloc will happen in rebuild path */ + return 0; + vsi_stat = kzalloc(sizeof(*vsi_stat), GFP_KERNEL); if (!vsi_stat) return -ENOMEM; @@ -491,128 +592,93 @@ err_alloc_tx: } /** - * ice_vsi_alloc - Allocates the next available struct VSI in the PF - * @pf: board private structure - * @vsi_type: type of VSI + * ice_vsi_alloc_def - set default values for already allocated VSI + * @vsi: ptr to VSI * @ch: ptr to channel - * @vf: VF for ICE_VSI_VF and ICE_VSI_CTRL - * - * The VF pointer is used for ICE_VSI_VF and ICE_VSI_CTRL. For ICE_VSI_CTRL, - * it may be NULL in the case there is no association with a VF. For - * ICE_VSI_VF the VF pointer *must not* be NULL. - * - * returns a pointer to a VSI on success, NULL on failure. */ -static struct ice_vsi * -ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type vsi_type, - struct ice_channel *ch, struct ice_vf *vf) +static int +ice_vsi_alloc_def(struct ice_vsi *vsi, struct ice_channel *ch) { - struct device *dev = ice_pf_to_dev(pf); - struct ice_vsi *vsi = NULL; - - if (WARN_ON(vsi_type == ICE_VSI_VF && !vf)) - return NULL; - - /* Need to protect the allocation of the VSIs at the PF level */ - mutex_lock(&pf->sw_mutex); - - /* If we have already allocated our maximum number of VSIs, - * pf->next_vsi will be ICE_NO_VSI. 
If not, pf->next_vsi index - * is available to be populated - */ - if (pf->next_vsi == ICE_NO_VSI) { - dev_dbg(dev, "out of VSI slots!\n"); - goto unlock_pf; + if (vsi->type != ICE_VSI_CHNL) { + ice_vsi_set_num_qs(vsi); + if (ice_vsi_alloc_arrays(vsi)) + return -ENOMEM; } - vsi = devm_kzalloc(dev, sizeof(*vsi), GFP_KERNEL); - if (!vsi) - goto unlock_pf; - - vsi->type = vsi_type; - vsi->back = pf; - set_bit(ICE_VSI_DOWN, vsi->state); - - if (vsi_type == ICE_VSI_VF) - ice_vsi_set_num_qs(vsi, vf); - else if (vsi_type != ICE_VSI_CHNL) - ice_vsi_set_num_qs(vsi, NULL); - switch (vsi->type) { case ICE_VSI_SWITCHDEV_CTRL: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - /* Setup eswitch MSIX irq handler for VSI */ vsi->irq_handler = ice_eswitch_msix_clean_rings; break; case ICE_VSI_PF: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - /* Setup default MSIX irq handler for VSI */ vsi->irq_handler = ice_msix_clean_rings; break; case ICE_VSI_CTRL: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - /* Setup ctrl VSI MSIX irq handler */ vsi->irq_handler = ice_msix_clean_ctrl_vsi; - - /* For the PF control VSI this is NULL, for the VF control VSI - * this will be the first VF to allocate it. - */ - vsi->vf = vf; - break; - case ICE_VSI_VF: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - vsi->vf = vf; break; case ICE_VSI_CHNL: if (!ch) - goto err_rings; + return -EINVAL; + vsi->num_rxq = ch->num_rxq; vsi->num_txq = ch->num_txq; vsi->next_base_q = ch->base_q; break; + case ICE_VSI_VF: case ICE_VSI_LB: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; break; default: - dev_warn(dev, "Unknown VSI type %d\n", vsi->type); - goto unlock_pf; + ice_vsi_free_arrays(vsi); + return -EINVAL; } - if (vsi->type == ICE_VSI_CTRL && !vf) { - /* Use the last VSI slot as the index for PF control VSI */ - vsi->idx = pf->num_alloc_vsi - 1; - pf->ctrl_vsi_idx = vsi->idx; - pf->vsi[vsi->idx] = vsi; - } else { - /* fill slot and make note of the index */ - vsi->idx = pf->next_vsi; - pf->vsi[pf->next_vsi] = vsi; + return 0; +} + +/** + * ice_vsi_alloc - Allocates the next available struct VSI in the PF + * @pf: board private structure + * + * Reserves a VSI index from the PF and allocates an empty VSI structure + * without a type. The VSI structure must later be initialized by calling + * ice_vsi_cfg(). + * + * returns a pointer to a VSI on success, NULL on failure. + */ +static struct ice_vsi *ice_vsi_alloc(struct ice_pf *pf) +{ + struct device *dev = ice_pf_to_dev(pf); + struct ice_vsi *vsi = NULL; - /* prepare pf->next_vsi for next use */ - pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, - pf->next_vsi); + /* Need to protect the allocation of the VSIs at the PF level */ + mutex_lock(&pf->sw_mutex); + + /* If we have already allocated our maximum number of VSIs, + * pf->next_vsi will be ICE_NO_VSI. 
If not, pf->next_vsi index + * is available to be populated + */ + if (pf->next_vsi == ICE_NO_VSI) { + dev_dbg(dev, "out of VSI slots!\n"); + goto unlock_pf; } - if (vsi->type == ICE_VSI_CTRL && vf) - vf->ctrl_vsi_idx = vsi->idx; + vsi = devm_kzalloc(dev, sizeof(*vsi), GFP_KERNEL); + if (!vsi) + goto unlock_pf; - /* allocate memory for Tx/Rx ring stat pointers */ - if (ice_vsi_alloc_stat_arrays(vsi)) - goto err_rings; + vsi->back = pf; + set_bit(ICE_VSI_DOWN, vsi->state); - goto unlock_pf; + /* fill slot and make note of the index */ + vsi->idx = pf->next_vsi; + pf->vsi[pf->next_vsi] = vsi; + + /* prepare pf->next_vsi for next use */ + pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, + pf->next_vsi); -err_rings: - devm_kfree(dev, vsi); - vsi = NULL; unlock_pf: mutex_unlock(&pf->sw_mutex); return vsi; @@ -1177,12 +1243,15 @@ ice_chnl_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) /** * ice_vsi_init - Create and initialize a VSI * @vsi: the VSI being configured - * @init_vsi: is this call creating a VSI + * @vsi_flags: VSI configuration flags + * + * Set ICE_FLAG_VSI_INIT to initialize a new VSI context, clear it to + * reconfigure an existing context. * * This initializes a VSI context depending on the VSI type to be added and * passes it down to the add_vsi aq command to create a new VSI. */ -static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) +static int ice_vsi_init(struct ice_vsi *vsi, u32 vsi_flags) { struct ice_pf *pf = vsi->back; struct ice_hw *hw = &pf->hw; @@ -1244,7 +1313,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) /* if updating VSI context, make sure to set valid_section: * to indicate which section of VSI context being updated */ - if (!init_vsi) + if (!(vsi_flags & ICE_VSI_FLAG_INIT)) ctxt->info.valid_sections |= cpu_to_le16(ICE_AQ_VSI_PROP_Q_OPT_VALID); } @@ -1257,7 +1326,8 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) if (ret) goto out; - if (!init_vsi) /* means VSI being updated */ + if (!(vsi_flags & ICE_VSI_FLAG_INIT)) + /* means VSI being updated */ /* must to indicate which section of VSI context are * being modified */ @@ -1272,7 +1342,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); } - if (init_vsi) { + if (vsi_flags & ICE_VSI_FLAG_INIT) { ret = ice_add_vsi(hw, vsi->idx, ctxt, NULL); if (ret) { dev_err(dev, "Add VSI failed, err %d\n", ret); @@ -1436,7 +1506,7 @@ static int ice_get_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi) * ice_vsi_setup_vector_base - Set up the base vector for the given VSI * @vsi: ptr to the VSI * - * This should only be called after ice_vsi_alloc() which allocates the + * This should only be called after ice_vsi_alloc_def() which allocates the * corresponding SW VSI structure and initializes num_queue_pairs for the * newly allocated VSI. 
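
One subtlety survives in ice_vsi_init(): on the update path (ICE_VSI_FLAG_INIT clear) the context goes out via ice_update_vsi(), and firmware only honors the sections whose bits are set in valid_sections. A condensed sketch of the two branches (the real code sets the security bit only for applicable VSI types):

if (vsi_flags & ICE_VSI_FLAG_INIT) {
	ret = ice_add_vsi(hw, vsi->idx, ctxt, NULL); /* fresh context */
} else {
	/* flag every touched section or FW silently ignores it */
	ctxt->info.valid_sections |=
		cpu_to_le16(ICE_AQ_VSI_PROP_Q_OPT_VALID |
			    ICE_AQ_VSI_PROP_SECURITY_VALID);
	ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
}
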
* @@ -1584,106 +1654,6 @@ err_out: } /** - * ice_vsi_free_stats - Free the ring statistics structures - * @vsi: VSI pointer - */ -static void ice_vsi_free_stats(struct ice_vsi *vsi) -{ - struct ice_vsi_stats *vsi_stat; - struct ice_pf *pf = vsi->back; - int i; - - if (vsi->type == ICE_VSI_CHNL) - return; - if (!pf->vsi_stats) - return; - - vsi_stat = pf->vsi_stats[vsi->idx]; - if (!vsi_stat) - return; - - ice_for_each_alloc_txq(vsi, i) { - if (vsi_stat->tx_ring_stats[i]) { - kfree_rcu(vsi_stat->tx_ring_stats[i], rcu); - WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL); - } - } - - ice_for_each_alloc_rxq(vsi, i) { - if (vsi_stat->rx_ring_stats[i]) { - kfree_rcu(vsi_stat->rx_ring_stats[i], rcu); - WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL); - } - } - - kfree(vsi_stat->tx_ring_stats); - kfree(vsi_stat->rx_ring_stats); - kfree(vsi_stat); - pf->vsi_stats[vsi->idx] = NULL; -} - -/** - * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI - * @vsi: VSI which is having stats allocated - */ -static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi) -{ - struct ice_ring_stats **tx_ring_stats; - struct ice_ring_stats **rx_ring_stats; - struct ice_vsi_stats *vsi_stats; - struct ice_pf *pf = vsi->back; - u16 i; - - vsi_stats = pf->vsi_stats[vsi->idx]; - tx_ring_stats = vsi_stats->tx_ring_stats; - rx_ring_stats = vsi_stats->rx_ring_stats; - - /* Allocate Tx ring stats */ - ice_for_each_alloc_txq(vsi, i) { - struct ice_ring_stats *ring_stats; - struct ice_tx_ring *ring; - - ring = vsi->tx_rings[i]; - ring_stats = tx_ring_stats[i]; - - if (!ring_stats) { - ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); - if (!ring_stats) - goto err_out; - - WRITE_ONCE(tx_ring_stats[i], ring_stats); - } - - ring->ring_stats = ring_stats; - } - - /* Allocate Rx ring stats */ - ice_for_each_alloc_rxq(vsi, i) { - struct ice_ring_stats *ring_stats; - struct ice_rx_ring *ring; - - ring = vsi->rx_rings[i]; - ring_stats = rx_ring_stats[i]; - - if (!ring_stats) { - ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); - if (!ring_stats) - goto err_out; - - WRITE_ONCE(rx_ring_stats[i], ring_stats); - } - - ring->ring_stats = ring_stats; - } - - return 0; - -err_out: - ice_vsi_free_stats(vsi); - return -ENOMEM; -} - -/** * ice_vsi_manage_rss_lut - disable/enable RSS * @vsi: the VSI being changed * @ena: boolean value indicating if this is an enable or disable request @@ -1992,8 +1962,8 @@ void ice_update_eth_stats(struct ice_vsi *vsi) void ice_vsi_cfg_frame_size(struct ice_vsi *vsi) { if (!vsi->netdev || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags)) { - vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX; - vsi->rx_buf_len = ICE_RXBUF_2048; + vsi->max_frame = ICE_MAX_FRAME_LEGACY_RX; + vsi->rx_buf_len = ICE_RXBUF_1664; #if (PAGE_SIZE < 8192) } else if (!ICE_2K_TOO_SMALL_WITH_PADDING && (vsi->netdev->mtu <= ETH_DATA_LEN)) { @@ -2002,11 +1972,7 @@ void ice_vsi_cfg_frame_size(struct ice_vsi *vsi) #endif } else { vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX; -#if (PAGE_SIZE < 8192) vsi->rx_buf_len = ICE_RXBUF_3072; -#else - vsi->rx_buf_len = ICE_RXBUF_2048; -#endif } } @@ -2645,54 +2611,97 @@ static void ice_set_agg_vsi(struct ice_vsi *vsi) } /** - * ice_vsi_setup - Set up a VSI by a given type - * @pf: board private structure - * @pi: pointer to the port_info instance - * @vsi_type: VSI type - * @vf: pointer to VF to which this VSI connects. This field is used primarily - * for the ICE_VSI_VF type. Other VSI types should pass NULL. 
- * @ch: ptr to channel - * - * This allocates the sw VSI structure and its queue resources. + * ice_free_vf_ctrl_res - Free the VF control VSI resource + * @pf: pointer to PF structure + * @vsi: the VSI to free resources for * - * Returns pointer to the successfully allocated and configured VSI sw struct on - * success, NULL on failure. + * Check if the VF control VSI resource is still in use. If no VF is using it + * any more, release the VSI resource. Otherwise, leave it to be cleaned up + * once no other VF uses it. */ -struct ice_vsi * -ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, - enum ice_vsi_type vsi_type, struct ice_vf *vf, - struct ice_channel *ch) +static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi) +{ + struct ice_vf *vf; + unsigned int bkt; + + rcu_read_lock(); + ice_for_each_vf_rcu(pf, bkt, vf) { + if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) { + rcu_read_unlock(); + return; + } + } + rcu_read_unlock(); + + /* No other VFs left that have control VSI. It is now safe to reclaim + * SW interrupts back to the common pool. + */ + ice_free_res(pf->irq_tracker, vsi->base_vector, + ICE_RES_VF_CTRL_VEC_ID); + pf->num_avail_sw_msix += vsi->num_q_vectors; +} + +static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi) { u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; struct device *dev = ice_pf_to_dev(pf); - struct ice_vsi *vsi; int ret, i; - if (vsi_type == ICE_VSI_CHNL) - vsi = ice_vsi_alloc(pf, vsi_type, ch, NULL); - else if (vsi_type == ICE_VSI_VF || vsi_type == ICE_VSI_CTRL) - vsi = ice_vsi_alloc(pf, vsi_type, NULL, vf); - else - vsi = ice_vsi_alloc(pf, vsi_type, NULL, NULL); + /* configure VSI nodes based on number of queues and TC's */ + ice_for_each_traffic_class(i) { + if (!(vsi->tc_cfg.ena_tc & BIT(i))) + continue; - if (!vsi) { - dev_err(dev, "could not allocate VSI\n"); - return NULL; + if (vsi->type == ICE_VSI_CHNL) { + if (!vsi->alloc_txq && vsi->num_txq) + max_txqs[i] = vsi->num_txq; + else + max_txqs[i] = pf->num_lan_tx; + } else { + max_txqs[i] = vsi->alloc_txq; + } } - vsi->port_info = pi; + dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc); + ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, + max_txqs); + if (ret) { + dev_err(dev, "VSI %d failed lan queue config, error %d\n", + vsi->vsi_num, ret); + return ret; + } + + return 0; +} + +/** + * ice_vsi_cfg_def - configure default VSI based on the type + * @vsi: pointer to VSI + * @params: the parameters to configure this VSI with + */ +static int +ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params) +{ + struct device *dev = ice_pf_to_dev(vsi->back); + struct ice_pf *pf = vsi->back; + int ret; + vsi->vsw = pf->first_sw; - if (vsi->type == ICE_VSI_PF) - vsi->ethtype = ETH_P_PAUSE; + + ret = ice_vsi_alloc_def(vsi, params->ch); + if (ret) + return ret; + + /* allocate memory for Tx/Rx ring stat pointers */ + if (ice_vsi_alloc_stat_arrays(vsi)) + goto unroll_vsi_alloc; ice_alloc_fd_res(vsi); - if (vsi_type != ICE_VSI_CHNL) { - if (ice_vsi_get_qs(vsi)) { - dev_err(dev, "Failed to allocate queues. vsi->idx = %d\n", - vsi->idx); - goto unroll_vsi_alloc; - } + if (ice_vsi_get_qs(vsi)) { + dev_err(dev, "Failed to allocate queues. 
vsi->idx = %d\n", + vsi->idx); + goto unroll_vsi_alloc_stat; } /* set RSS capabilities */ @@ -2702,7 +2711,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, ice_vsi_set_tc_cfg(vsi); /* create the VSI */ - ret = ice_vsi_init(vsi, true); + ret = ice_vsi_init(vsi, params->flags); if (ret) goto unroll_get_qs; @@ -2733,6 +2742,14 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, goto unroll_vector_base; ice_vsi_map_rings_to_vectors(vsi); + if (ice_is_xdp_ena_vsi(vsi)) { + ret = ice_vsi_determine_xdp_res(vsi); + if (ret) + goto unroll_vector_base; + ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog); + if (ret) + goto unroll_vector_base; + } /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ if (vsi->type != ICE_VSI_CTRL) @@ -2797,30 +2814,156 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, goto unroll_vsi_init; } - /* configure VSI nodes based on number of queues and TC's */ - ice_for_each_traffic_class(i) { - if (!(vsi->tc_cfg.ena_tc & BIT(i))) - continue; + return 0; - if (vsi->type == ICE_VSI_CHNL) { - if (!vsi->alloc_txq && vsi->num_txq) - max_txqs[i] = vsi->num_txq; - else - max_txqs[i] = pf->num_lan_tx; +unroll_vector_base: + /* reclaim SW interrupts back to the common pool */ + ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); + pf->num_avail_sw_msix += vsi->num_q_vectors; +unroll_alloc_q_vector: + ice_vsi_free_q_vectors(vsi); +unroll_vsi_init: + ice_vsi_delete_from_hw(vsi); +unroll_get_qs: + ice_vsi_put_qs(vsi); +unroll_vsi_alloc_stat: + ice_vsi_free_stats(vsi); +unroll_vsi_alloc: + ice_vsi_free_arrays(vsi); + return ret; +} + +/** + * ice_vsi_cfg - configure a previously allocated VSI + * @vsi: pointer to VSI + * @params: parameters used to configure this VSI + */ +int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params) +{ + struct ice_pf *pf = vsi->back; + int ret; + + if (WARN_ON(params->type == ICE_VSI_VF && !params->vf)) + return -EINVAL; + + vsi->type = params->type; + vsi->port_info = params->pi; + + /* For VSIs which don't have a connected VF, this will be NULL */ + vsi->vf = params->vf; + + ret = ice_vsi_cfg_def(vsi, params); + if (ret) + return ret; + + ret = ice_vsi_cfg_tc_lan(vsi->back, vsi); + if (ret) + ice_vsi_decfg(vsi); + + if (vsi->type == ICE_VSI_CTRL) { + if (vsi->vf) { + WARN_ON(vsi->vf->ctrl_vsi_idx != ICE_NO_VSI); + vsi->vf->ctrl_vsi_idx = vsi->idx; } else { - max_txqs[i] = vsi->alloc_txq; + WARN_ON(pf->ctrl_vsi_idx != ICE_NO_VSI); + pf->ctrl_vsi_idx = vsi->idx; } } - dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc); - ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, - max_txqs); - if (ret) { - dev_err(dev, "VSI %d failed lan queue config, error %d\n", - vsi->vsi_num, ret); - goto unroll_clear_rings; + return ret; +} + +/** + * ice_vsi_decfg - remove all VSI configuration + * @vsi: pointer to VSI + */ +void ice_vsi_decfg(struct ice_vsi *vsi) +{ + struct ice_pf *pf = vsi->back; + int err; + + /* The Rx rule will only exist to remove if the LLDP FW + * engine is currently stopped + */ + if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF && + !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) + ice_cfg_sw_lldp(vsi, false, false); + + ice_fltr_remove_all(vsi); + ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); + err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); + if (err) + dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", + vsi->vsi_num, err); + + if (ice_is_xdp_ena_vsi(vsi)) + /* return value check can be skipped here, it always 
+
+/**
+ * ice_vsi_setup - Set up a VSI by a given type
+ * @pf: board private structure
+ * @params: parameters to use when creating the VSI
+ *
+ * This allocates the sw VSI structure and its queue resources.
+ *
+ * Returns pointer to the successfully allocated and configured VSI sw struct on
+ * success, NULL on failure.
+ */
+struct ice_vsi *
+ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params)
+{
+	struct device *dev = ice_pf_to_dev(pf);
+	struct ice_vsi *vsi;
+	int ret;
+
+	/* ice_vsi_setup can only initialize a new VSI, and we must have
+	 * a port_info structure for it.
+	 */
+	if (WARN_ON(!(params->flags & ICE_VSI_FLAG_INIT)) ||
+	    WARN_ON(!params->pi))
+		return NULL;
+
+	vsi = ice_vsi_alloc(pf);
+	if (!vsi) {
+		dev_err(dev, "could not allocate VSI\n");
+		return NULL;
+	}
+
+	ret = ice_vsi_cfg(vsi, params);
+	if (ret)
+		goto err_vsi_cfg;

	/* Add switch rule to drop all Tx Flow Control Frames of look up
	 * type ETHERTYPE from VSIs, and restrict malicious VF from sending
	 * out PAUSE or PFC frames. If enabled, FW can still send FC frames.
@@ -2830,34 +2973,21 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
	 * be dropped so that VFs cannot send LLDP packets to reconfig DCB
	 * settings in the HW.
	 */
-	if (!ice_is_safe_mode(pf))
-		if (vsi->type == ICE_VSI_PF) {
-			ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
-					 ICE_DROP_PACKET);
-			ice_cfg_sw_lldp(vsi, true, true);
-		}
+	if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF) {
+		ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
+				 ICE_DROP_PACKET);
+		ice_cfg_sw_lldp(vsi, true, true);
+	}

	if (!vsi->agg_node)
		ice_set_agg_vsi(vsi);
+
	return vsi;

-unroll_clear_rings:
-	ice_vsi_clear_rings(vsi);
-unroll_vector_base:
-	/* reclaim SW interrupts back to the common pool */
-	ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
-	pf->num_avail_sw_msix += vsi->num_q_vectors;
-unroll_alloc_q_vector:
-	ice_vsi_free_q_vectors(vsi);
-unroll_vsi_init:
-	ice_vsi_free_stats(vsi);
-	ice_vsi_delete(vsi);
-unroll_get_qs:
-	ice_vsi_put_qs(vsi);
-unroll_vsi_alloc:
-	if (vsi_type == ICE_VSI_VF)
+err_vsi_cfg:
+	if (params->type == ICE_VSI_VF)
		ice_enable_lag(pf->lag);
-	ice_vsi_clear(vsi);
+	ice_vsi_free(vsi);

	return NULL;
}
@@ -3121,37 +3251,6 @@ void ice_napi_del(struct ice_vsi *vsi)
}

/**
- * ice_free_vf_ctrl_res - Free the VF control VSI resource
- * @pf: pointer to PF structure
- * @vsi: the VSI to free resources for
- *
- * Check if the VF control VSI resource is still in use. If no VF is using it
- * any more, release the VSI resource. Otherwise, leave it to be cleaned up
- * once no other VF uses it.
- */
-static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
-{
-	struct ice_vf *vf;
-	unsigned int bkt;
-
-	rcu_read_lock();
-	ice_for_each_vf_rcu(pf, bkt, vf) {
-		if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) {
-			rcu_read_unlock();
-			return;
-		}
-	}
-	rcu_read_unlock();
-
-	/* No other VFs left that have control VSI. It is now safe to reclaim
-	 * SW interrupts back to the common pool.
-	 */
-	ice_free_res(pf->irq_tracker, vsi->base_vector,
-		     ICE_RES_VF_CTRL_VEC_ID);
-	pf->num_avail_sw_msix += vsi->num_q_vectors;
-}
-
-/**
 * ice_vsi_release - Delete a VSI and free its resources
 * @vsi: the VSI being removed
 *
@@ -3160,7 +3259,6 @@ static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi)
int ice_vsi_release(struct ice_vsi *vsi)
{
	struct ice_pf *pf;
-	int err;

	if (!vsi->back)
		return -ENODEV;
@@ -3178,50 +3276,14 @@ int ice_vsi_release(struct ice_vsi *vsi)
		clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
	}

+	if (vsi->type == ICE_VSI_PF)
+		ice_devlink_destroy_pf_port(pf);
+
	if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
		ice_rss_clean(vsi);

-	/* Disable VSI and free resources */
-	if (vsi->type != ICE_VSI_LB)
-		ice_vsi_dis_irq(vsi);
	ice_vsi_close(vsi);
-
-	/* SR-IOV determines needed MSIX resources all at once instead of per
-	 * VSI since when VFs are spawned we know how many VFs there are and how
-	 * many interrupts each VF needs. SR-IOV MSIX resources are also
-	 * cleared in the same manner.
-	 */
-	if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
-		ice_free_vf_ctrl_res(pf, vsi);
-	} else if (vsi->type != ICE_VSI_VF) {
-		/* reclaim SW interrupts back to the common pool */
-		ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
-		pf->num_avail_sw_msix += vsi->num_q_vectors;
-	}
-
-	if (!ice_is_safe_mode(pf)) {
-		if (vsi->type == ICE_VSI_PF) {
-			ice_fltr_remove_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX,
-					    ICE_DROP_PACKET);
-			ice_cfg_sw_lldp(vsi, true, false);
-			/* The Rx rule will only exist to remove if the LLDP FW
-			 * engine is currently stopped
-			 */
-			if (!test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
-				ice_cfg_sw_lldp(vsi, false, false);
-		}
-	}
-
-	if (ice_is_vsi_dflt_vsi(vsi))
-		ice_clear_dflt_vsi(vsi);
-	ice_fltr_remove_all(vsi);
-	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
-	err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
-	if (err)
-		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
-			vsi->vsi_num, err);
-	ice_vsi_delete(vsi);
-	ice_vsi_free_q_vectors(vsi);
+	ice_vsi_decfg(vsi);

	if (vsi->netdev) {
		if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) {
@@ -3235,19 +3297,12 @@ int ice_vsi_release(struct ice_vsi *vsi)
		}
	}

-	if (vsi->type == ICE_VSI_VF &&
-	    vsi->agg_node && vsi->agg_node->valid)
-		vsi->agg_node->num_vsis--;
-	ice_vsi_clear_rings(vsi);
-	ice_vsi_free_stats(vsi);
-	ice_vsi_put_qs(vsi);
-
	/* retain SW VSI data structure since it is needed to unregister and
	 * free VSI netdev when PF is not in reset recovery pending state,
	 * for ex: during rmmod.
	 */
	if (!ice_is_reset_in_progress(pf->state))
-		ice_vsi_clear(vsi);
+		ice_vsi_delete(vsi);

	return 0;
}
@@ -3372,7 +3427,7 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
 * @prev_txq: Number of Tx rings before ring reallocation
 * @prev_rxq: Number of Rx rings before ring reallocation
 */
-static int
+static void
ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
{
	struct ice_vsi_stats *vsi_stat;
@@ -3380,9 +3435,9 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
	int i;

	if (!prev_txq || !prev_rxq)
-		return 0;
+		return;
	if (vsi->type == ICE_VSI_CHNL)
-		return 0;
+		return;

	vsi_stat = pf->vsi_stats[vsi->idx];

@@ -3403,36 +3458,36 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq)
			}
		}
	}
-
-	return 0;
}

/**
 * ice_vsi_rebuild - Rebuild VSI after reset
- * @vsi: VSI to be rebuilt
- * @init_vsi: is this an initialization or a reconfigure of the VSI
+ * @vsi: VSI to be rebuilt
+ * @vsi_flags: flags used for VSI rebuild flow
+ *
+ * Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
+ * ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
 *
 * Returns 0 on success and negative value on failure
 */
-int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
{
-	u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	struct ice_vsi_cfg_params params = {};
	struct ice_coalesce_stored *coalesce;
-	int ret, i, prev_txq, prev_rxq;
+	int ret, prev_txq, prev_rxq;
	int prev_num_q_vectors = 0;
-	enum ice_vsi_type vtype;
	struct ice_pf *pf;

	if (!vsi)
		return -EINVAL;

+	params = ice_vsi_to_params(vsi);
+	params.flags = vsi_flags;
+
	pf = vsi->back;
-	vtype = vsi->type;
-	if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf))
+	if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf))
		return -EINVAL;

-	ice_vsi_init_vlan_ops(vsi);
-
	coalesce = kcalloc(vsi->num_q_vectors,
			   sizeof(struct ice_coalesce_stored), GFP_KERNEL);
	if (!coalesce)
@@ -3443,188 +3498,32 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
	prev_txq = vsi->num_txq;
	prev_rxq = vsi->num_rxq;

-	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
-	ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+	ice_vsi_decfg(vsi);
+	ret = ice_vsi_cfg_def(vsi, &params);
	if (ret)
-		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
-			vsi->vsi_num, ret);
-	ice_vsi_free_q_vectors(vsi);
-
-	/* SR-IOV determines needed MSIX resources all at once instead of per
-	 * VSI since when VFs are spawned we know how many VFs there are and how
-	 * many interrupts each VF needs. SR-IOV MSIX resources are also
-	 * cleared in the same manner.
-	 */
-	if (vtype != ICE_VSI_VF) {
-		/* reclaim SW interrupts back to the common pool */
-		ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
-		pf->num_avail_sw_msix += vsi->num_q_vectors;
-		vsi->base_vector = 0;
-	}
-
-	if (ice_is_xdp_ena_vsi(vsi))
-		/* return value check can be skipped here, it always returns
-		 * 0 if reset is in progress
-		 */
-		ice_destroy_xdp_rings(vsi);
-	ice_vsi_put_qs(vsi);
-	ice_vsi_clear_rings(vsi);
-	ice_vsi_free_arrays(vsi);
-	if (vtype == ICE_VSI_VF)
-		ice_vsi_set_num_qs(vsi, vsi->vf);
-	else
-		ice_vsi_set_num_qs(vsi, NULL);
-
-	ret = ice_vsi_alloc_arrays(vsi);
-	if (ret < 0)
-		goto err_vsi;
-
-	ice_vsi_get_qs(vsi);
-
-	ice_alloc_fd_res(vsi);
-	ice_vsi_set_tc_cfg(vsi);
-
-	/* Initialize VSI struct elements and create VSI in FW */
-	ret = ice_vsi_init(vsi, init_vsi);
-	if (ret < 0)
-		goto err_vsi;
-
-	switch (vtype) {
-	case ICE_VSI_CTRL:
-	case ICE_VSI_SWITCHDEV_CTRL:
-	case ICE_VSI_PF:
-		ret = ice_vsi_alloc_q_vectors(vsi);
-		if (ret)
-			goto err_rings;
-
-		ret = ice_vsi_setup_vector_base(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ret = ice_vsi_alloc_rings(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ret = ice_vsi_alloc_ring_stats(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ice_vsi_map_rings_to_vectors(vsi);
-
-		vsi->stat_offsets_loaded = false;
-		if (ice_is_xdp_ena_vsi(vsi)) {
-			ret = ice_vsi_determine_xdp_res(vsi);
-			if (ret)
-				goto err_vectors;
-			ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog);
-			if (ret)
-				goto err_vectors;
-		}
-		/* ICE_VSI_CTRL does not need RSS so skip RSS processing */
-		if (vtype != ICE_VSI_CTRL)
-			/* Do not exit if configuring RSS had an issue, at
-			 * least receive traffic on first queue. Hence no
-			 * need to capture return value
-			 */
-			if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
-				ice_vsi_cfg_rss_lut_key(vsi);
-
-		/* disable or enable CRC stripping */
-		if (vsi->netdev)
-			ice_vsi_cfg_crc_strip(vsi, !!(vsi->netdev->features &
-					      NETIF_F_RXFCS));
-
-		break;
-	case ICE_VSI_VF:
-		ret = ice_vsi_alloc_q_vectors(vsi);
-		if (ret)
-			goto err_rings;
-
-		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ret = ice_vsi_alloc_rings(vsi);
-		if (ret)
-			goto err_vectors;
-
-		ret = ice_vsi_alloc_ring_stats(vsi);
-		if (ret)
-			goto err_vectors;
-
-		vsi->stat_offsets_loaded = false;
-		break;
-	case ICE_VSI_CHNL:
-		if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
-			ice_vsi_cfg_rss_lut_key(vsi);
-			ice_vsi_set_rss_flow_fld(vsi);
-		}
-		break;
-	default:
-		break;
-	}
-
-	/* configure VSI nodes based on number of queues and TC's */
-	for (i = 0; i < vsi->tc_cfg.numtc; i++) {
-		/* configure VSI nodes based on number of queues and TC's.
-		 * ADQ creates VSIs for each TC/Channel but doesn't
-		 * allocate queues instead it reconfigures the PF queues
-		 * as per the TC command. So max_txqs should point to the
-		 * PF Tx queues.
-		 */
-		if (vtype == ICE_VSI_CHNL)
-			max_txqs[i] = pf->num_lan_tx;
-		else
-			max_txqs[i] = vsi->alloc_txq;
-
-		if (ice_is_xdp_ena_vsi(vsi))
-			max_txqs[i] += vsi->num_xdp_txq;
-	}
-
-	if (test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))
-		/* If MQPRIO is set, means channel code path, hence for main
-		 * VSI's, use TC as 1
-		 */
-		ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, 1, max_txqs);
-	else
-		ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx,
-				      vsi->tc_cfg.ena_tc, max_txqs);
+		goto err_vsi_cfg;

+	ret = ice_vsi_cfg_tc_lan(pf, vsi);
	if (ret) {
-		dev_err(ice_pf_to_dev(pf), "VSI %d failed lan queue config, error %d\n",
-			vsi->vsi_num, ret);
-		if (init_vsi) {
+		if (vsi_flags & ICE_VSI_FLAG_INIT) {
			ret = -EIO;
-			goto err_vectors;
+			goto err_vsi_cfg_tc_lan;
		} else {
+			kfree(coalesce);
			return ice_schedule_reset(pf, ICE_RESET_PFR);
		}
	}

-	if (ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq))
-		goto err_vectors;
+	ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq);

	ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors);
	kfree(coalesce);

	return 0;

-err_vectors:
-	ice_vsi_free_q_vectors(vsi);
-err_rings:
-	if (vsi->netdev) {
-		vsi->current_netdev_flags = 0;
-		unregister_netdev(vsi->netdev);
-		free_netdev(vsi->netdev);
-		vsi->netdev = NULL;
-	}
-err_vsi:
-	ice_vsi_clear(vsi);
-	set_bit(ICE_RESET_FAILED, pf->state);
+err_vsi_cfg_tc_lan:
+	ice_vsi_decfg(vsi);
+err_vsi_cfg:
	kfree(coalesce);
	return ret;
}
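
For readers tracking the API change above: ice_vsi_rebuild() now takes a flags word instead of a bool. A hedged sketch of the two modes; example_reset_rebuild is illustrative only, but both flag values appear verbatim at call sites later in this diff:

/* Sketch only: the two rebuild modes introduced by this series. */
static int example_reset_rebuild(struct ice_vsi *vsi, bool hw_was_reset)
{
	/* ICE_VSI_FLAG_INIT re-creates the VSI in FW (post-reset path);
	 * ICE_VSI_FLAG_NO_INIT reuses the existing FW VSI (e.g. an
	 * ethtool queue-count change).
	 */
	return ice_vsi_rebuild(vsi, hw_was_reset ? ICE_VSI_FLAG_INIT :
						   ICE_VSI_FLAG_NO_INIT);
}
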
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index dcdf69a693e9..75221478f2dc 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -7,6 +7,47 @@
 #include "ice.h"
 #include "ice_vlan.h"

+/* Flags used for VSI configuration and rebuild */
+#define ICE_VSI_FLAG_INIT	BIT(0)
+#define ICE_VSI_FLAG_NO_INIT	0
+
+/**
+ * struct ice_vsi_cfg_params - VSI configuration parameters
+ * @pi: pointer to the port_info instance for the VSI
+ * @ch: pointer to the channel structure for the VSI, may be NULL
+ * @vf: pointer to the VF associated with this VSI, may be NULL
+ * @type: the type of VSI to configure
+ * @flags: VSI flags used for rebuild and configuration
+ *
+ * Parameter structure used when configuring a new VSI.
+ */
+struct ice_vsi_cfg_params {
+	struct ice_port_info *pi;
+	struct ice_channel *ch;
+	struct ice_vf *vf;
+	enum ice_vsi_type type;
+	u32 flags;
+};
+
+/**
+ * ice_vsi_to_params - Get parameters for an existing VSI
+ * @vsi: the VSI to get parameters for
+ *
+ * Fill a parameter structure for reconfiguring a VSI with its current
+ * parameters, such as during a rebuild operation.
+ */
+static inline struct ice_vsi_cfg_params ice_vsi_to_params(struct ice_vsi *vsi)
+{
+	struct ice_vsi_cfg_params params = {};
+
+	params.pi = vsi->port_info;
+	params.ch = vsi->ch;
+	params.vf = vsi->vf;
+	params.type = vsi->type;
+
+	return params;
+}
+
 const char *ice_vsi_type_str(enum ice_vsi_type vsi_type);

 bool ice_pf_state_is_nominal(struct ice_pf *pf);
@@ -42,7 +83,6 @@ void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create);
 int ice_set_link(struct ice_vsi *vsi, bool ena);

 void ice_vsi_delete(struct ice_vsi *vsi);
-int ice_vsi_clear(struct ice_vsi *vsi);

 int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc);

@@ -51,9 +91,7 @@ int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi);
 void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);

 struct ice_vsi *
-ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
-	      enum ice_vsi_type vsi_type, struct ice_vf *vf,
-	      struct ice_channel *ch);
+ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);

 void ice_napi_del(struct ice_vsi *vsi);

@@ -63,6 +101,7 @@ void ice_vsi_close(struct ice_vsi *vsi);

 int ice_ena_vsi(struct ice_vsi *vsi, bool locked);

+void ice_vsi_decfg(struct ice_vsi *vsi);
 void ice_dis_vsi(struct ice_vsi *vsi, bool locked);

 int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);
@@ -70,7 +109,8 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);
 int
 ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id);

-int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi);
+int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags);
+int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params);

 bool ice_is_reset_in_progress(unsigned long *state);
 int ice_wait_for_reset(struct ice_pf *pf, unsigned long timeout);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 8ec24f6cf6be..567694bf098b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -22,6 +22,7 @@
 #include "ice_eswitch.h"
 #include "ice_tc_lib.h"
 #include "ice_vsi_vlan_ops.h"
+#include <net/xdp_sock_drv.h>

 #define DRV_SUMMARY	"Intel(R) Ethernet Connection E800 Series Linux Driver"
 static const char ice_driver_string[] = DRV_SUMMARY;
@@ -44,7 +45,6 @@ MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXX
 MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)");
 #endif /* !CONFIG_DYNAMIC_DEBUG */

-static DEFINE_IDA(ice_aux_ida);
 DEFINE_STATIC_KEY_FALSE(ice_xdp_locking_key);
 EXPORT_SYMBOL(ice_xdp_locking_key);

@@ -564,7 +564,7 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
	/* Disable VFs until reset is completed */
	mutex_lock(&pf->vfs.table_lock);
	ice_for_each_vf(pf, bkt, vf)
-		ice_set_vf_state_qs_dis(vf);
+		ice_set_vf_state_dis(vf);
	mutex_unlock(&pf->vfs.table_lock);

	if (ice_is_eswitch_mode_switchdev(pf)) {
@@ -2596,8 +2596,6 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
		xdp_ring->netdev = NULL;
		xdp_ring->dev = dev;
		xdp_ring->count = vsi->num_tx_desc;
-		xdp_ring->next_dd = ICE_RING_QUARTER(xdp_ring) - 1;
-		xdp_ring->next_rs = ICE_RING_QUARTER(xdp_ring) - 1;
		WRITE_ONCE(vsi->xdp_rings[i], xdp_ring);
		if (ice_setup_tx_ring(xdp_ring))
			goto free_xdp_rings;
@@ -2889,6 +2887,18 @@ int ice_vsi_determine_xdp_res(struct ice_vsi *vsi)
}

/**
+ * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP
+ * @vsi: Pointer to VSI structure
+ */
+static int ice_max_xdp_frame_size(struct ice_vsi *vsi)
+{
+	if (test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags))
+		return ICE_RXBUF_1664;
+	else
+		return ICE_RXBUF_3072;
+}
+
+/**
 * ice_xdp_setup_prog - Add or remove XDP eBPF program
 * @vsi: VSI to setup XDP for
 * @prog: XDP program
@@ -2898,13 +2908,16 @@ static int
ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
		   struct netlink_ext_ack *extack)
{
-	int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
+	unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
	bool if_running = netif_running(vsi->netdev);
	int ret = 0, xdp_ring_err = 0;

-	if (frame_size > vsi->rx_buf_len) {
-		NL_SET_ERR_MSG_MOD(extack, "MTU too large for loading XDP");
-		return -EOPNOTSUPP;
+	if (prog && !prog->aux->xdp_has_frags) {
+		if (frame_size > ice_max_xdp_frame_size(vsi)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "MTU is too large for linear frames and XDP prog does not support frags");
+			return -EOPNOTSUPP;
+		}
	}

	/* need to stop netdev while setting up the program for Rx rings */
@@ -2925,11 +2938,13 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
			if (xdp_ring_err)
				NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
		}
+		xdp_features_set_redirect_target(vsi->netdev, true);
		/* reallocate Rx queues that are used for zero-copy */
		xdp_ring_err = ice_realloc_zc_buf(vsi, true);
		if (xdp_ring_err)
			NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed");
	} else if (ice_is_xdp_ena_vsi(vsi) && !prog) {
+		xdp_features_clear_redirect_target(vsi->netdev);
		xdp_ring_err = ice_destroy_xdp_rings(vsi);
		if (xdp_ring_err)
			NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed");
@@ -3344,10 +3359,11 @@ static void ice_napi_add(struct ice_vsi *vsi)

/**
 * ice_set_ops - set netdev and ethtool ops for the given netdev
- * @netdev: netdev instance
+ * @vsi: the VSI associated with the new netdev
 */
-static void ice_set_ops(struct net_device *netdev)
+static void ice_set_ops(struct ice_vsi *vsi)
{
+	struct net_device *netdev = vsi->netdev;
	struct ice_pf *pf = ice_netdev_to_pf(netdev);

	if (ice_is_safe_mode(pf)) {
@@ -3359,6 +3375,13 @@ static void ice_set_ops(struct net_device *netdev)
	netdev->netdev_ops = &ice_netdev_ops;
	netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
	ice_set_ethtool_ops(netdev);
+
+	if (vsi->type != ICE_VSI_PF)
+		return;
+
+	netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+			       NETDEV_XDP_ACT_XSK_ZEROCOPY |
+			       NETDEV_XDP_ACT_RX_SG;
}

/**
@@ -3447,53 +3470,8 @@ static void ice_set_netdev_features(struct net_device *netdev)
	 * be changed at runtime
	 */
	netdev->hw_features |= NETIF_F_RXFCS;
-}
-
-/**
- * ice_cfg_netdev - Allocate, configure and register a netdev
- * @vsi: the VSI associated with the new netdev
- *
- * Returns 0 on success, negative value on failure
- */
-static int ice_cfg_netdev(struct ice_vsi *vsi)
-{
-	struct ice_netdev_priv *np;
-	struct net_device *netdev;
-	u8 mac_addr[ETH_ALEN];
-
-	netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq,
-				    vsi->alloc_rxq);
-	if (!netdev)
-		return -ENOMEM;
-
-	set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
-	vsi->netdev = netdev;
-	np = netdev_priv(netdev);
-	np->vsi = vsi;
-
-	ice_set_netdev_features(netdev);
-
-	ice_set_ops(netdev);
-	if (vsi->type == ICE_VSI_PF) {
-		SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back));
-		ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr);
-		eth_hw_addr_set(netdev, mac_addr);
-		ether_addr_copy(netdev->perm_addr, mac_addr);
-	}
-
-	netdev->priv_flags |= IFF_UNICAST_FLT;
-
-	/* Setup netdev TC information */
-	ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
-
-	/* setup watchdog timeout value to be 5 second */
-	netdev->watchdog_timeo = 5 * HZ;
-
-	netdev->min_mtu = ETH_MIN_MTU;
-	netdev->max_mtu = ICE_MAX_MTU;
-
-	return 0;
+	netif_set_tso_max_size(netdev, ICE_MAX_TSO_SIZE);
}

/**
@@ -3521,14 +3499,27 @@ void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
static struct ice_vsi *
ice_pf_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
-	return ice_vsi_setup(pf, pi, ICE_VSI_PF, NULL, NULL);
+	struct ice_vsi_cfg_params params = {};
+
+	params.type = ICE_VSI_PF;
+	params.pi = pi;
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	return ice_vsi_setup(pf, &params);
}

static struct ice_vsi *
ice_chnl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
		   struct ice_channel *ch)
{
-	return ice_vsi_setup(pf, pi, ICE_VSI_CHNL, NULL, ch);
+	struct ice_vsi_cfg_params params = {};
+
+	params.type = ICE_VSI_CHNL;
+	params.pi = pi;
+	params.ch = ch;
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	return ice_vsi_setup(pf, &params);
}

/**
@@ -3542,7 +3533,13 @@ ice_chnl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
static struct ice_vsi *
ice_ctrl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
-	return ice_vsi_setup(pf, pi, ICE_VSI_CTRL, NULL, NULL);
+	struct ice_vsi_cfg_params params = {};
+
+	params.type = ICE_VSI_CTRL;
+	params.pi = pi;
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	return ice_vsi_setup(pf, &params);
}

/**
@@ -3556,7 +3553,13 @@ ice_ctrl_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
struct ice_vsi *
ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
{
-	return ice_vsi_setup(pf, pi, ICE_VSI_LB, NULL, NULL);
+	struct ice_vsi_cfg_params params = {};
+
+	params.type = ICE_VSI_LB;
+	params.pi = pi;
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	return ice_vsi_setup(pf, &params);
}

/**
@@ -3716,20 +3719,6 @@ static void ice_tc_indir_block_unregister(struct ice_vsi *vsi)
}

/**
- * ice_tc_indir_block_remove - clean indirect TC block notifications
- * @pf: PF structure
- */
-static void ice_tc_indir_block_remove(struct ice_pf *pf)
-{
-	struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
-
-	if (!pf_vsi)
-		return;
-
-	ice_tc_indir_block_unregister(pf_vsi);
-}
-
-/**
 * ice_tc_indir_block_register - Register TC indirect block notifications
 * @vsi: VSI struct which has the netdev
 *
@@ -3749,78 +3738,6 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi)
}

/**
- * ice_setup_pf_sw - Setup the HW switch on startup or after reset
- * @pf: board private structure
- *
- * Returns 0 on success, negative value on failure
- */
-static int ice_setup_pf_sw(struct ice_pf *pf)
-{
-	struct device *dev = ice_pf_to_dev(pf);
-	bool dvm = ice_is_dvm_ena(&pf->hw);
-	struct ice_vsi *vsi;
-	int status;
-
-	if (ice_is_reset_in_progress(pf->state))
-		return -EBUSY;
-
-	status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL);
-	if (status)
-		return -EIO;
-
-	vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
-	if (!vsi)
-		return -ENOMEM;
-
-	/* init channel list */
-	INIT_LIST_HEAD(&vsi->ch_list);
-
-	status = ice_cfg_netdev(vsi);
-	if (status)
-		goto unroll_vsi_setup;
-	/* netdev has to be configured before setting frame size */
-	ice_vsi_cfg_frame_size(vsi);
-
-	/* init indirect block notifications */
-	status = ice_tc_indir_block_register(vsi);
-	if (status) {
-		dev_err(dev, "Failed to register netdev notifier\n");
-		goto unroll_cfg_netdev;
-	}
-
-	/* Setup DCB netlink interface */
-	ice_dcbnl_setup(vsi);
-
-	/* registering the NAPI handler requires both the queues and
-	 * netdev to be created, which are done in ice_pf_vsi_setup()
-	 * and ice_cfg_netdev() respectively
-	 */
-	ice_napi_add(vsi);
-
-	status = ice_init_mac_fltr(pf);
-	if (status)
-		goto unroll_napi_add;
-
-	return 0;
-
-unroll_napi_add:
-	ice_tc_indir_block_unregister(vsi);
-unroll_cfg_netdev:
-	if (vsi) {
-		ice_napi_del(vsi);
-		if (vsi->netdev) {
-			clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
-			free_netdev(vsi->netdev);
-			vsi->netdev = NULL;
-		}
-	}
-
-unroll_vsi_setup:
-	ice_vsi_release(vsi);
-	return status;
-}
-
-/**
 * ice_get_avail_q_count - Get count of queues in use
 * @pf_qmap: bitmap to get queue use count from
 * @lock: pointer to a mutex that protects access to pf_qmap
@@ -4249,13 +4166,13 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)

	/* set for the next time the netdev is started */
	if (!netif_running(vsi->netdev)) {
-		ice_vsi_rebuild(vsi, false);
+		ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
		dev_dbg(ice_pf_to_dev(pf), "Link is down, queue count change happens when link is brought up\n");
		goto done;
	}

	ice_vsi_close(vsi);
-	ice_vsi_rebuild(vsi, false);
+	ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
	ice_pf_dcb_recfg(pf, locked);
	ice_vsi_open(vsi);
done:
@@ -4518,6 +4435,23 @@ err_vsi_open:
	return err;
}

+static void ice_deinit_fdir(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi = ice_get_ctrl_vsi(pf);
+
+	if (!vsi)
+		return;
+
+	ice_vsi_manage_fdir(vsi, false);
+	ice_vsi_release(vsi);
+	if (pf->ctrl_vsi_idx != ICE_NO_VSI) {
+		pf->vsi[pf->ctrl_vsi_idx] = NULL;
+		pf->ctrl_vsi_idx = ICE_NO_VSI;
+	}
+
+	mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
+}
+
/**
 * ice_get_opt_fw_name - return optional firmware file name or NULL
 * @pf: pointer to the PF instance
@@ -4618,116 +4552,171 @@ static void ice_print_wake_reason(struct ice_pf *pf)

/**
 * ice_register_netdev - register netdev
- * @pf: pointer to the PF struct
+ * @vsi: pointer to the VSI struct
 */
-static int ice_register_netdev(struct ice_pf *pf)
+static int ice_register_netdev(struct ice_vsi *vsi)
{
-	struct ice_vsi *vsi;
-	int err = 0;
+	int err;

-	vsi = ice_get_main_vsi(pf);
	if (!vsi || !vsi->netdev)
		return -EIO;

	err = register_netdev(vsi->netdev);
	if (err)
-		goto err_register_netdev;
+		return err;

	set_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
	netif_carrier_off(vsi->netdev);
	netif_tx_stop_all_queues(vsi->netdev);

	return 0;
-err_register_netdev:
-	free_netdev(vsi->netdev);
-	vsi->netdev = NULL;
-	clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
-	return err;
+}
+
+static void ice_unregister_netdev(struct ice_vsi *vsi)
+{
+	if (!vsi || !vsi->netdev)
+		return;
+
+	unregister_netdev(vsi->netdev);
+	clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
}

/**
- * ice_probe - Device initialization routine
- * @pdev: PCI device information struct
- * @ent: entry in ice_pci_tbl
+ * ice_cfg_netdev - Allocate, configure and register a netdev
+ * @vsi: the VSI associated with the new netdev
 *
- * Returns 0 on success, negative on failure
+ * Returns 0 on success, negative value on failure
 */
-static int
-ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+static int ice_cfg_netdev(struct ice_vsi *vsi)
{
-	struct device *dev = &pdev->dev;
-	struct ice_vsi *vsi;
-	struct ice_pf *pf;
-	struct ice_hw *hw;
-	int i, err;
+	struct ice_netdev_priv *np;
+	struct net_device *netdev;
+	u8 mac_addr[ETH_ALEN];

-	if (pdev->is_virtfn) {
-		dev_err(dev, "can't probe a virtual function\n");
-		return -EINVAL;
+	netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq,
+				    vsi->alloc_rxq);
+	if (!netdev)
+		return -ENOMEM;
+
+	set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
+	vsi->netdev = netdev;
+	np = netdev_priv(netdev);
+	np->vsi = vsi;
+
+	ice_set_netdev_features(netdev);
+	ice_set_ops(vsi);
+
+	if (vsi->type == ICE_VSI_PF) {
+		SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back));
+		ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr);
+		eth_hw_addr_set(netdev, mac_addr);
	}

-	/* this driver uses devres, see
-	 * Documentation/driver-api/driver-model/devres.rst
-	 */
-	err = pcim_enable_device(pdev);
+	netdev->priv_flags |= IFF_UNICAST_FLT;
+
+	/* Setup netdev TC information */
+	ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
+
+	netdev->max_mtu = ICE_MAX_MTU;
+
+	return 0;
+}
+
+static void ice_decfg_netdev(struct ice_vsi *vsi)
+{
+	clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
+	free_netdev(vsi->netdev);
+	vsi->netdev = NULL;
+}
+
+static int ice_start_eth(struct ice_vsi *vsi)
+{
+	int err;
+
+	err = ice_init_mac_fltr(vsi->back);
	if (err)
		return err;

-	err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev));
-	if (err) {
-		dev_err(dev, "BAR0 I/O map error %d\n", err);
-		return err;
-	}
+	rtnl_lock();
+	err = ice_vsi_open(vsi);
+	rtnl_unlock();

-	pf = ice_allocate_pf(dev);
-	if (!pf)
-		return -ENOMEM;
+	return err;
+}

-	/* initialize Auxiliary index to invalid value */
-	pf->aux_idx = -1;
+static int ice_init_eth(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi = ice_get_main_vsi(pf);
+	int err;

-	/* set up for high or low DMA */
-	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
-	if (err) {
-		dev_err(dev, "DMA configuration failed: 0x%x\n", err);
+	if (!vsi)
+		return -EINVAL;
+
+	/* init channel list */
+	INIT_LIST_HEAD(&vsi->ch_list);
+
+	err = ice_cfg_netdev(vsi);
+	if (err)
		return err;
-	}
+	/* Setup DCB netlink interface */
+	ice_dcbnl_setup(vsi);

-	pci_enable_pcie_error_reporting(pdev);
-	pci_set_master(pdev);
+	err = ice_init_mac_fltr(pf);
+	if (err)
+		goto err_init_mac_fltr;

-	pf->pdev = pdev;
-	pci_set_drvdata(pdev, pf);
-	set_bit(ICE_DOWN, pf->state);
-	/* Disable service task until DOWN bit is cleared */
-	set_bit(ICE_SERVICE_DIS, pf->state);
+	err = ice_devlink_create_pf_port(pf);
+	if (err)
+		goto err_devlink_create_pf_port;

-	hw = &pf->hw;
-	hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0];
-	pci_save_state(pdev);
+	SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);

-	hw->back = pf;
-	hw->vendor_id = pdev->vendor;
-	hw->device_id = pdev->device;
-	pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
-	hw->subsystem_vendor_id = pdev->subsystem_vendor;
-	hw->subsystem_device_id = pdev->subsystem_device;
-	hw->bus.device = PCI_SLOT(pdev->devfn);
-	hw->bus.func = PCI_FUNC(pdev->devfn);
-	ice_set_ctrlq_len(hw);
+	err = ice_register_netdev(vsi);
+	if (err)
+		goto err_register_netdev;

-	pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M);
+	err = ice_tc_indir_block_register(vsi);
+	if (err)
+		goto err_tc_indir_block_register;

-#ifndef CONFIG_DYNAMIC_DEBUG
-	if (debug < -1)
-		hw->debug_mask = debug;
-#endif
+	ice_napi_add(vsi);
+
+	return 0;
+
+err_tc_indir_block_register:
+	ice_unregister_netdev(vsi);
+err_register_netdev:
+	ice_devlink_destroy_pf_port(pf);
+err_devlink_create_pf_port:
+err_init_mac_fltr:
+	ice_decfg_netdev(vsi);
+	return err;
+}
+
+static void ice_deinit_eth(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi = ice_get_main_vsi(pf);
+
+	if (!vsi)
+		return;
+
+	ice_vsi_close(vsi);
+	ice_unregister_netdev(vsi);
+	ice_devlink_destroy_pf_port(pf);
+	ice_tc_indir_block_unregister(vsi);
+	ice_decfg_netdev(vsi);
+}
+
+static int ice_init_dev(struct ice_pf *pf)
+{
+	struct device *dev = ice_pf_to_dev(pf);
+	struct ice_hw *hw = &pf->hw;
+	int err;

	err = ice_init_hw(hw);
	if (err) {
		dev_err(dev, "ice_init_hw failed: %d\n", err);
-		err = -EIO;
-		goto err_exit_unroll;
+		return err;
	}

	ice_init_feature_support(pf);
@@ -4750,62 +4739,31 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
	err = ice_init_pf(pf);
	if (err) {
		dev_err(dev, "ice_init_pf failed: %d\n", err);
-		goto err_init_pf_unroll;
+		goto err_init_pf;
	}

-	ice_devlink_init_regions(pf);
-
	pf->hw.udp_tunnel_nic.set_port = ice_udp_tunnel_set_port;
	pf->hw.udp_tunnel_nic.unset_port = ice_udp_tunnel_unset_port;
	pf->hw.udp_tunnel_nic.flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP;
	pf->hw.udp_tunnel_nic.shared = &pf->hw.udp_tunnel_shared;
-	i = 0;
	if (pf->hw.tnl.valid_count[TNL_VXLAN]) {
-		pf->hw.udp_tunnel_nic.tables[i].n_entries =
+		pf->hw.udp_tunnel_nic.tables[0].n_entries =
			pf->hw.tnl.valid_count[TNL_VXLAN];
-		pf->hw.udp_tunnel_nic.tables[i].tunnel_types =
+		pf->hw.udp_tunnel_nic.tables[0].tunnel_types =
			UDP_TUNNEL_TYPE_VXLAN;
-		i++;
	}
	if (pf->hw.tnl.valid_count[TNL_GENEVE]) {
-		pf->hw.udp_tunnel_nic.tables[i].n_entries =
+		pf->hw.udp_tunnel_nic.tables[1].n_entries =
			pf->hw.tnl.valid_count[TNL_GENEVE];
-		pf->hw.udp_tunnel_nic.tables[i].tunnel_types =
+		pf->hw.udp_tunnel_nic.tables[1].tunnel_types =
			UDP_TUNNEL_TYPE_GENEVE;
-		i++;
-	}
-
-	pf->num_alloc_vsi = hw->func_caps.guar_num_vsi;
-	if (!pf->num_alloc_vsi) {
-		err = -EIO;
-		goto err_init_pf_unroll;
-	}
-	if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) {
-		dev_warn(&pf->pdev->dev,
-			 "limiting the VSI count due to UDP tunnel limitation %d > %d\n",
-			 pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES);
-		pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES;
-	}
-
-	pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi),
-			       GFP_KERNEL);
-	if (!pf->vsi) {
-		err = -ENOMEM;
-		goto err_init_pf_unroll;
-	}
-
-	pf->vsi_stats = devm_kcalloc(dev, pf->num_alloc_vsi,
-				     sizeof(*pf->vsi_stats), GFP_KERNEL);
-	if (!pf->vsi_stats) {
-		err = -ENOMEM;
-		goto err_init_vsi_unroll;
	}

	err = ice_init_interrupt_scheme(pf);
	if (err) {
		dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
		err = -EIO;
-		goto err_init_vsi_stats_unroll;
+		goto err_init_interrupt_scheme;
	}

	/* In case of MSIX we are going to setup the misc vector right here
@@ -4816,49 +4774,94 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
	err = ice_req_irq_msix_misc(pf);
	if (err) {
		dev_err(dev, "setup of misc vector failed: %d\n", err);
-		goto err_init_interrupt_unroll;
+		goto err_req_irq_msix_misc;
	}

-	/* create switch struct for the switch element created by FW on boot */
-	pf->first_sw = devm_kzalloc(dev, sizeof(*pf->first_sw), GFP_KERNEL);
-	if (!pf->first_sw) {
-		err = -ENOMEM;
-		goto err_msix_misc_unroll;
-	}
+	return 0;

-	if (hw->evb_veb)
-		pf->first_sw->bridge_mode = BRIDGE_MODE_VEB;
-	else
-		pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA;
+err_req_irq_msix_misc:
+	ice_clear_interrupt_scheme(pf);
+err_init_interrupt_scheme:
+	ice_deinit_pf(pf);
+err_init_pf:
+	ice_deinit_hw(hw);
+	return err;
+}

-	pf->first_sw->pf = pf;
+static void ice_deinit_dev(struct ice_pf *pf)
+{
+	ice_free_irq_msix_misc(pf);
+	ice_clear_interrupt_scheme(pf);
+	ice_deinit_pf(pf);
+	ice_deinit_hw(&pf->hw);
+}

-	/* record the sw_id available for later use */
-	pf->first_sw->sw_id = hw->port_info->sw_id;
+static void ice_init_features(struct ice_pf *pf)
+{
+	struct device *dev = ice_pf_to_dev(pf);

-	err = ice_setup_pf_sw(pf);
-	if (err) {
-		dev_err(dev, "probe failed due to setup PF switch: %d\n", err);
-		goto err_alloc_sw_unroll;
-	}
+	if (ice_is_safe_mode(pf))
+		return;

-	clear_bit(ICE_SERVICE_DIS, pf->state);
+	/* initialize DDP driven features */
+	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
+		ice_ptp_init(pf);

-	/* tell the firmware we are up */
-	err = ice_send_version(pf);
-	if (err) {
-		dev_err(dev, "probe failed sending driver version %s. error: %d\n",
-			UTS_RELEASE, err);
-		goto err_send_version_unroll;
+	if (ice_is_feature_supported(pf, ICE_F_GNSS))
+		ice_gnss_init(pf);
+
+	/* Note: Flow director init failure is non-fatal to load */
+	if (ice_init_fdir(pf))
+		dev_err(dev, "could not initialize flow director\n");
+
+	/* Note: DCB init failure is non-fatal to load */
+	if (ice_init_pf_dcb(pf, false)) {
+		clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
+		clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
+	} else {
+		ice_cfg_lldp_mib_change(&pf->hw, true);
	}

-	/* since everything is good, start the service timer */
-	mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));
+	if (ice_init_lag(pf))
+		dev_warn(dev, "Failed to init link aggregation support\n");
+}
+
+static void ice_deinit_features(struct ice_pf *pf)
+{
+	ice_deinit_lag(pf);
+	if (test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags))
+		ice_cfg_lldp_mib_change(&pf->hw, false);
+	ice_deinit_fdir(pf);
+	if (ice_is_feature_supported(pf, ICE_F_GNSS))
+		ice_gnss_exit(pf);
+	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
+		ice_ptp_release(pf);
+}
+
+static void ice_init_wakeup(struct ice_pf *pf)
+{
+	/* Save wakeup reason register for later use */
+	pf->wakeup_reason = rd32(&pf->hw, PFPM_WUS);
+
+	/* check for a power management event */
+	ice_print_wake_reason(pf);
+
+	/* clear wake status, all bits */
+	wr32(&pf->hw, PFPM_WUS, U32_MAX);
+
+	/* Disable WoL at init, wait for user to enable */
+	device_set_wakeup_enable(ice_pf_to_dev(pf), false);
+}
+
+static int ice_init_link(struct ice_pf *pf)
+{
+	struct device *dev = ice_pf_to_dev(pf);
+	int err;

	err = ice_init_link_events(pf->hw.port_info);
	if (err) {
		dev_err(dev, "ice_init_link_events failed: %d\n", err);
-		goto err_send_version_unroll;
+		return err;
	}

	/* not a fatal error if this fails */
@@ -4894,123 +4897,350 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
		set_bit(ICE_FLAG_NO_MEDIA, pf->flags);
	}

-	ice_verify_cacheline_size(pf);
+	return err;
+}

-	/* Save wakeup reason register for later use */
-	pf->wakeup_reason = rd32(hw, PFPM_WUS);
+static int ice_init_pf_sw(struct ice_pf *pf)
+{
+	bool dvm = ice_is_dvm_ena(&pf->hw);
+	struct ice_vsi *vsi;
+	int err;

-	/* check for a power management event */
-	ice_print_wake_reason(pf);
+	/* create switch struct for the switch element created by FW on boot */
+	pf->first_sw = kzalloc(sizeof(*pf->first_sw), GFP_KERNEL);
+	if (!pf->first_sw)
+		return -ENOMEM;

-	/* clear wake status, all bits */
-	wr32(hw, PFPM_WUS, U32_MAX);
+	if (pf->hw.evb_veb)
+		pf->first_sw->bridge_mode = BRIDGE_MODE_VEB;
+	else
+		pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA;

-	/* Disable WoL at init, wait for user to enable */
-	device_set_wakeup_enable(dev, false);
+	pf->first_sw->pf = pf;

-	if (ice_is_safe_mode(pf)) {
-		ice_set_safe_mode_vlan_cfg(pf);
-		goto probe_done;
+	/* record the sw_id available for later use */
+	pf->first_sw->sw_id = pf->hw.port_info->sw_id;
+
+	err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL);
+	if (err)
+		goto err_aq_set_port_params;
+
+	vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
+	if (!vsi) {
+		err = -ENOMEM;
+		goto err_pf_vsi_setup;
	}

-	/* initialize DDP driven features */
-	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
-		ice_ptp_init(pf);
+	return 0;

-	if (ice_is_feature_supported(pf, ICE_F_GNSS))
-		ice_gnss_init(pf);
+err_pf_vsi_setup:
+err_aq_set_port_params:
+	kfree(pf->first_sw);
+	return err;
+}

-	/* Note: Flow director init failure is non-fatal to load */
-	if (ice_init_fdir(pf))
-		dev_err(dev, "could not initialize flow director\n");
+static void ice_deinit_pf_sw(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi = ice_get_main_vsi(pf);

-	/* Note: DCB init failure is non-fatal to load */
-	if (ice_init_pf_dcb(pf, false)) {
-		clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
-		clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
-	} else {
-		ice_cfg_lldp_mib_change(&pf->hw, true);
+	if (!vsi)
+		return;
+
+	ice_vsi_release(vsi);
+	kfree(pf->first_sw);
+}
+
+static int ice_alloc_vsis(struct ice_pf *pf)
+{
+	struct device *dev = ice_pf_to_dev(pf);
+
+	pf->num_alloc_vsi = pf->hw.func_caps.guar_num_vsi;
+	if (!pf->num_alloc_vsi)
+		return -EIO;
+
+	if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) {
+		dev_warn(dev,
+			 "limiting the VSI count due to UDP tunnel limitation %d > %d\n",
+			 pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES);
+		pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES;
	}

-	if (ice_init_lag(pf))
-		dev_warn(dev, "Failed to init link aggregation support\n");
+	pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi),
+			       GFP_KERNEL);
+	if (!pf->vsi)
+		return -ENOMEM;

-	/* print PCI link speed and width */
-	pcie_print_link_status(pf->pdev);
+	pf->vsi_stats = devm_kcalloc(dev, pf->num_alloc_vsi,
+				     sizeof(*pf->vsi_stats), GFP_KERNEL);
+	if (!pf->vsi_stats) {
+		devm_kfree(dev, pf->vsi);
+		return -ENOMEM;
+	}

-probe_done:
-	err = ice_devlink_create_pf_port(pf);
+	return 0;
+}
+
+static void ice_dealloc_vsis(struct ice_pf *pf)
+{
+	devm_kfree(ice_pf_to_dev(pf), pf->vsi_stats);
+	pf->vsi_stats = NULL;
+
+	pf->num_alloc_vsi = 0;
+	devm_kfree(ice_pf_to_dev(pf), pf->vsi);
+	pf->vsi = NULL;
+}
+
+static int ice_init_devlink(struct ice_pf *pf)
+{
+	int err;
+
+	err = ice_devlink_register_params(pf);
	if (err)
-		goto err_create_pf_port;
+		return err;

-	vsi = ice_get_main_vsi(pf);
-	if (!vsi || !vsi->netdev) {
-		err = -EINVAL;
-		goto err_netdev_reg;
-	}
+	ice_devlink_init_regions(pf);
+	ice_devlink_register(pf);

-	SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);
+	return 0;
+}
+
+static void ice_deinit_devlink(struct ice_pf *pf)
+{
+	ice_devlink_unregister(pf);
+	ice_devlink_destroy_regions(pf);
+	ice_devlink_unregister_params(pf);
+}
+
+static int ice_init(struct ice_pf *pf)
+{
+	int err;

-	err = ice_register_netdev(pf);
+	err = ice_init_dev(pf);
	if (err)
-		goto err_netdev_reg;
+		return err;

-	err = ice_devlink_register_params(pf);
+	err = ice_alloc_vsis(pf);
+	if (err)
+		goto err_alloc_vsis;
+
+	err = ice_init_pf_sw(pf);
	if (err)
-		goto err_netdev_reg;
+		goto err_init_pf_sw;
+
+	ice_init_wakeup(pf);
+
+	err = ice_init_link(pf);
+	if (err)
+		goto err_init_link;
+
+	err = ice_send_version(pf);
+	if (err)
+		goto err_init_link;
+
+	ice_verify_cacheline_size(pf);
+
+	if (ice_is_safe_mode(pf))
+		ice_set_safe_mode_vlan_cfg(pf);
+	else
+		/* print PCI link speed and width */
+		pcie_print_link_status(pf->pdev);

	/* ready to go, so clear down state bit */
	clear_bit(ICE_DOWN, pf->state);
-	if (ice_is_rdma_ena(pf)) {
-		pf->aux_idx = ida_alloc(&ice_aux_ida, GFP_KERNEL);
-		if (pf->aux_idx < 0) {
-			dev_err(dev, "Failed to allocate device ID for AUX driver\n");
-			err = -ENOMEM;
-			goto err_devlink_reg_param;
-		}
+	clear_bit(ICE_SERVICE_DIS, pf->state);

-		err = ice_init_rdma(pf);
-		if (err) {
-			dev_err(dev, "Failed to initialize RDMA: %d\n", err);
-			err = -EIO;
-			goto err_init_aux_unroll;
-		}
-	} else {
-		dev_warn(dev, "RDMA is not supported on this device\n");
-	}
+	/* since everything is good, start the service timer */
+	mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));

-	ice_devlink_register(pf);
	return 0;

-err_init_aux_unroll:
-	pf->adev = NULL;
-	ida_free(&ice_aux_ida, pf->aux_idx);
-err_devlink_reg_param:
-	ice_devlink_unregister_params(pf);
-err_netdev_reg:
-	ice_devlink_destroy_pf_port(pf);
-err_create_pf_port:
-err_send_version_unroll:
-	ice_vsi_release_all(pf);
-err_alloc_sw_unroll:
+err_init_link:
+	ice_deinit_pf_sw(pf);
+err_init_pf_sw:
+	ice_dealloc_vsis(pf);
+err_alloc_vsis:
+	ice_deinit_dev(pf);
+	return err;
+}
+
+static void ice_deinit(struct ice_pf *pf)
+{
	set_bit(ICE_SERVICE_DIS, pf->state);
	set_bit(ICE_DOWN, pf->state);
-	devm_kfree(dev, pf->first_sw);
-err_msix_misc_unroll:
-	ice_free_irq_msix_misc(pf);
-err_init_interrupt_unroll:
-	ice_clear_interrupt_scheme(pf);
-err_init_vsi_stats_unroll:
-	devm_kfree(dev, pf->vsi_stats);
-	pf->vsi_stats = NULL;
-err_init_vsi_unroll:
-	devm_kfree(dev, pf->vsi);
-err_init_pf_unroll:
-	ice_deinit_pf(pf);
-	ice_devlink_destroy_regions(pf);
-	ice_deinit_hw(hw);
-err_exit_unroll:
-	pci_disable_pcie_error_reporting(pdev);
+
+	ice_deinit_pf_sw(pf);
+	ice_dealloc_vsis(pf);
+	ice_deinit_dev(pf);
+}
+
+/**
+ * ice_load - load PF by initializing HW and starting VSI
+ * @pf: pointer to the pf instance
+ */
+int ice_load(struct ice_pf *pf)
+{
+	struct ice_vsi_cfg_params params = {};
+	struct ice_vsi *vsi;
+	int err;
+
+	err = ice_reset(&pf->hw, ICE_RESET_PFR);
+	if (err)
+		return err;
+
+	err = ice_init_dev(pf);
+	if (err)
+		return err;
+
+	vsi = ice_get_main_vsi(pf);
+
+	params = ice_vsi_to_params(vsi);
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	err = ice_vsi_cfg(vsi, &params);
+	if (err)
+		goto err_vsi_cfg;
+
+	err = ice_start_eth(ice_get_main_vsi(pf));
+	if (err)
+		goto err_start_eth;
+
+	err = ice_init_rdma(pf);
+	if (err)
+		goto err_init_rdma;
+
+	ice_init_features(pf);
+	ice_service_task_restart(pf);
+
+	clear_bit(ICE_DOWN, pf->state);
+
+	return 0;
+
+err_init_rdma:
+	ice_vsi_close(ice_get_main_vsi(pf));
+err_start_eth:
+	ice_vsi_decfg(ice_get_main_vsi(pf));
+err_vsi_cfg:
+	ice_deinit_dev(pf);
+	return err;
+}
+
+/**
+ * ice_unload - unload PF by stopping VSI and deinitializing HW
+ * @pf: pointer to the pf instance
+ */
+void ice_unload(struct ice_pf *pf)
+{
+	ice_deinit_features(pf);
+	ice_deinit_rdma(pf);
+	ice_vsi_close(ice_get_main_vsi(pf));
+	ice_vsi_decfg(ice_get_main_vsi(pf));
+	ice_deinit_dev(pf);
+}
+
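The symmetry of ice_load()/ice_unload() gives the driver a re-entrant bring-up/tear-down path (the building block for flows such as devlink reload). A sketch under that assumption; example_reload is hypothetical, and real callers must ensure the device is quiesced first:

/* Sketch only: tear the PF down and bring it back up. */
static int example_reload(struct ice_pf *pf)
{
	ice_unload(pf);		/* stops the main VSI, frees dev resources */
	return ice_load(pf);	/* PFR, then dev, VSI, RDMA, features */
}
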
+/**
+ * ice_probe - Device initialization routine
+ * @pdev: PCI device information struct
+ * @ent: entry in ice_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int
+ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+{
+	struct device *dev = &pdev->dev;
+	struct ice_pf *pf;
+	struct ice_hw *hw;
+	int err;
+
+	if (pdev->is_virtfn) {
+		dev_err(dev, "can't probe a virtual function\n");
+		return -EINVAL;
+	}
+
+	/* this driver uses devres, see
+	 * Documentation/driver-api/driver-model/devres.rst
+	 */
+	err = pcim_enable_device(pdev);
+	if (err)
+		return err;
+
+	err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev));
+	if (err) {
+		dev_err(dev, "BAR0 I/O map error %d\n", err);
+		return err;
+	}
+
+	pf = ice_allocate_pf(dev);
+	if (!pf)
+		return -ENOMEM;
+
+	/* initialize Auxiliary index to invalid value */
+	pf->aux_idx = -1;
+
+	/* set up for high or low DMA */
+	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_err(dev, "DMA configuration failed: 0x%x\n", err);
+		return err;
+	}
+
+	pci_set_master(pdev);
+
+	pf->pdev = pdev;
+	pci_set_drvdata(pdev, pf);
+	set_bit(ICE_DOWN, pf->state);
+	/* Disable service task until DOWN bit is cleared */
+	set_bit(ICE_SERVICE_DIS, pf->state);
+
+	hw = &pf->hw;
+	hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0];
+	pci_save_state(pdev);
+
+	hw->back = pf;
+	hw->port_info = NULL;
+	hw->vendor_id = pdev->vendor;
+	hw->device_id = pdev->device;
+	pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
+	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+	hw->subsystem_device_id = pdev->subsystem_device;
+	hw->bus.device = PCI_SLOT(pdev->devfn);
+	hw->bus.func = PCI_FUNC(pdev->devfn);
+	ice_set_ctrlq_len(hw);
+
+	pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M);
+
+#ifndef CONFIG_DYNAMIC_DEBUG
+	if (debug < -1)
+		hw->debug_mask = debug;
+#endif
+
+	err = ice_init(pf);
+	if (err)
+		goto err_init;
+
+	err = ice_init_eth(pf);
+	if (err)
+		goto err_init_eth;
+
+	err = ice_init_rdma(pf);
+	if (err)
+		goto err_init_rdma;
+
+	err = ice_init_devlink(pf);
+	if (err)
+		goto err_init_devlink;
+
+	ice_init_features(pf);
+
+	return 0;
+
+err_init_devlink:
+	ice_deinit_rdma(pf);
+err_init_rdma:
+	ice_deinit_eth(pf);
+err_init_eth:
+	ice_deinit(pf);
+err_init:
	pci_disable_device(pdev);
	return err;
}
@@ -5085,52 +5315,33 @@ static void ice_remove(struct pci_dev *pdev)
	struct ice_pf *pf = pci_get_drvdata(pdev);
	int i;

-	ice_devlink_unregister(pf);
	for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
		if (!ice_is_reset_in_progress(pf->state))
			break;
		msleep(100);
	}

-	ice_tc_indir_block_remove(pf);
-
	if (test_bit(ICE_FLAG_SRIOV_ENA, pf->flags)) {
		set_bit(ICE_VF_RESETS_DISABLED, pf->state);
		ice_free_vfs(pf);
	}

	ice_service_task_stop(pf);
-
	ice_aq_cancel_waiting_tasks(pf);
-	ice_unplug_aux_dev(pf);
-	if (pf->aux_idx >= 0)
-		ida_free(&ice_aux_ida, pf->aux_idx);
-	ice_devlink_unregister_params(pf);
	set_bit(ICE_DOWN, pf->state);

-	ice_deinit_lag(pf);
-	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
-		ice_ptp_release(pf);
-	if (ice_is_feature_supported(pf, ICE_F_GNSS))
-		ice_gnss_exit(pf);
	if (!ice_is_safe_mode(pf))
		ice_remove_arfs(pf);
-	ice_setup_mc_magic_wake(pf);
+	ice_deinit_features(pf);
+	ice_deinit_devlink(pf);
+	ice_deinit_rdma(pf);
+	ice_deinit_eth(pf);
+	ice_deinit(pf);
+
	ice_vsi_release_all(pf);
-	mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
-	ice_devlink_destroy_pf_port(pf);
+
+	ice_setup_mc_magic_wake(pf);
	ice_set_wake(pf);
-	ice_free_irq_msix_misc(pf);
-	ice_for_each_vsi(pf, i) {
-		if (!pf->vsi[i])
-			continue;
-		ice_vsi_free_q_vectors(pf->vsi[i]);
-	}
-	devm_kfree(&pdev->dev, pf->vsi_stats);
-	pf->vsi_stats = NULL;
-	ice_deinit_pf(pf);
-	ice_devlink_destroy_regions(pf);
-	ice_deinit_hw(&pf->hw);

	/* Issue a PFR as part of the prescribed driver unload flow.  Do not
	 * do it via ice_schedule_reset() since there is no need to rebuild
@@ -5138,8 +5349,6 @@ static void ice_remove(struct pci_dev *pdev)
	 */
	ice_reset(&pf->hw, ICE_RESET_PFR);
	pci_wait_for_pending_transaction(pdev);
-	ice_clear_interrupt_scheme(pf);
-	pci_disable_pcie_error_reporting(pdev);
	pci_disable_device(pdev);
}
@@ -6173,24 +6382,21 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi)
}

/**
- * ice_vsi_cfg - Setup the VSI
+ * ice_vsi_cfg_lan - Setup the VSI lan related config
 * @vsi: the VSI being configured
 *
 * Return 0 on success and negative value on error
 */
-int ice_vsi_cfg(struct ice_vsi *vsi)
+int ice_vsi_cfg_lan(struct ice_vsi *vsi)
{
	int err;

-	if (vsi->netdev) {
+	if (vsi->netdev && vsi->type == ICE_VSI_PF) {
		ice_set_rx_mode(vsi->netdev);

-		if (vsi->type != ICE_VSI_LB) {
-			err = ice_vsi_vlan_setup(vsi);
-
-			if (err)
-				return err;
-		}
+		err = ice_vsi_vlan_setup(vsi);
+		if (err)
+			return err;
	}
	ice_vsi_cfg_dcb_rings(vsi);
@@ -6371,7 +6577,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
	if (vsi->port_info &&
	    (vsi->port_info->phy.link_info.link_info & ICE_AQ_LINK_UP) &&
-	    vsi->netdev) {
+	    vsi->netdev && vsi->type == ICE_VSI_PF) {
		ice_print_link_msg(vsi, true);
		netif_tx_start_all_queues(vsi->netdev);
		netif_carrier_on(vsi->netdev);
@@ -6382,7 +6588,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
	 * set the baseline so counters are ready when interface is up
	 */
	ice_update_eth_stats(vsi);
-	ice_service_task_schedule(pf);
+
+	if (vsi->type == ICE_VSI_PF)
+		ice_service_task_schedule(pf);

	return 0;
}
@@ -6395,7 +6603,7 @@ int ice_up(struct ice_vsi *vsi)
{
	int err;

-	err = ice_vsi_cfg(vsi);
+	err = ice_vsi_cfg_lan(vsi);
	if (!err)
		err = ice_up_complete(vsi);

@@ -6963,7 +7171,7 @@ int ice_vsi_open_ctrl(struct ice_vsi *vsi)
	if (err)
		goto err_setup_rx;

-	err = ice_vsi_cfg(vsi);
+	err = ice_vsi_cfg_lan(vsi);
	if (err)
		goto err_setup_rx;

@@ -7017,7 +7225,7 @@ int ice_vsi_open(struct ice_vsi *vsi)
	if (err)
		goto err_setup_rx;

-	err = ice_vsi_cfg(vsi);
+	err = ice_vsi_cfg_lan(vsi);
	if (err)
		goto err_setup_rx;

@@ -7102,7 +7310,7 @@ static int ice_vsi_rebuild_by_type(struct ice_pf *pf, enum ice_vsi_type type)
			continue;

		/* rebuild the VSI */
-		err = ice_vsi_rebuild(vsi, true);
+		err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT);
		if (err) {
			dev_err(dev, "rebuild VSI failed, err %d, VSI index %d, type %s\n",
				err, vsi->idx, ice_vsi_type_str(type));
@@ -7358,18 +7566,6 @@ clear_recovery:
}

/**
- * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP
- * @vsi: Pointer to VSI structure
- */
-static int ice_max_xdp_frame_size(struct ice_vsi *vsi)
-{
-	if (PAGE_SIZE >= 8192 || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags))
-		return ICE_RXBUF_2048 - XDP_PACKET_HEADROOM;
-	else
-		return ICE_RXBUF_3072;
-}
-
-/**
 * ice_change_mtu - NDO callback to change the MTU
 * @netdev: network interface device structure
 * @new_mtu: new value for maximum frame size
@@ -7381,6 +7577,7 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
	struct ice_netdev_priv *np = netdev_priv(netdev);
	struct ice_vsi *vsi = np->vsi;
	struct ice_pf *pf = vsi->back;
+	struct bpf_prog *prog;
	u8 count = 0;
	int err = 0;

@@ -7389,7 +7586,8 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
		return 0;
	}

-	if (ice_is_xdp_ena_vsi(vsi)) {
+	prog = vsi->xdp_prog;
+	if (prog && !prog->aux->xdp_has_frags) {
		int frame_size = ice_max_xdp_frame_size(vsi);

		if (new_mtu + ICE_ETH_PKT_HDR_PAD > frame_size) {
@@ -7397,6 +7595,12 @@
				   frame_size - ICE_ETH_PKT_HDR_PAD);
			return -EINVAL;
		}
+	} else if (test_bit(ICE_FLAG_LEGACY_RX, pf->flags)) {
+		if (new_mtu + ICE_ETH_PKT_HDR_PAD > ICE_MAX_FRAME_LEGACY_RX) {
+			netdev_err(netdev, "Too big MTU for legacy-rx; Max is %d\n",
+				   ICE_MAX_FRAME_LEGACY_RX - ICE_ETH_PKT_HDR_PAD);
+			return -EINVAL;
+		}
	}

	/* if a reset is in progress, wait for some time for it to complete */
@@ -8447,12 +8651,9 @@ static void ice_remove_q_channels(struct ice_vsi *vsi, bool rem_fltr)
		/* clear the VSI from scheduler tree */
		ice_rm_vsi_lan_cfg(ch->ch_vsi->port_info, ch->ch_vsi->idx);

-		/* Delete VSI from FW */
+		/* Delete VSI from FW, PF and HW VSI arrays */
		ice_vsi_delete(ch->ch_vsi);

-		/* Delete VSI from PF and HW VSI arrays */
-		ice_vsi_clear(ch->ch_vsi);
-
		/* free the channel */
		kfree(ch);
	}
@@ -8511,7 +8712,7 @@ static int ice_rebuild_channels(struct ice_pf *pf)
		type = vsi->type;

		/* rebuild ADQ VSI */
-		err = ice_vsi_rebuild(vsi, true);
+		err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT);
		if (err) {
			dev_err(dev, "VSI (type:%s) at index %d rebuild failed, err %d\n",
				ice_vsi_type_str(type), vsi->idx, err);
@@ -8743,14 +8944,14 @@ config_tcf:
	cur_rxq = vsi->num_rxq;

	/* proceed with rebuild main VSI using correct number of queues */
-	ret = ice_vsi_rebuild(vsi, false);
+	ret = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
	if (ret) {
		/* fallback to current number of queues */
		dev_info(dev, "Rebuild failed with new queues, try with current number of queues\n");
		vsi->req_txq = cur_txq;
		vsi->req_rxq = cur_rxq;
		clear_bit(ICE_RESET_FAILED, pf->state);
-		if (ice_vsi_rebuild(vsi, false)) {
+		if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT)) {
			dev_err(dev, "Rebuild of main VSI failed again\n");
			return ret;
		}
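
The XDP multi-buffer changes to ice_main.c above relax the old MTU restriction: only a linear-only program still caps the MTU. A rough illustration of the resulting admission rule; example_mtu_ok_for_xdp is illustrative only (constants and ice_max_xdp_frame_size() are from the driver as patched above):

/* Sketch of the MTU rule after this change: a program with
 * xdp_has_frags set imposes no extra MTU cap; a linear-only program
 * is limited to the single-buffer frame size.
 */
static bool example_mtu_ok_for_xdp(struct ice_vsi *vsi, int new_mtu)
{
	struct bpf_prog *prog = vsi->xdp_prog;

	if (!prog || prog->aux->xdp_has_frags)
		return true;	/* multi-buffer XDP: no extra cap */

	/* 3072 bytes, or 1664 when legacy-rx is enabled */
	return new_mtu + ICE_ETH_PKT_HDR_PAD <= ice_max_xdp_frame_size(vsi);
}
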
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index c262dc886e6a..f6f52a248066 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -662,7 +662,6 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,

		/* Verify that the simple checksum is zero */
		for (i = 0; i < sizeof(*tmp); i++)
-			/* cppcheck-suppress objectIndex */
			sum += ((u8 *)tmp)[i];

		if (sum) {
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index d63161d73eb1..ac6f06f9a2ed 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -680,6 +680,7 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
	struct ice_pf *pf;
	struct ice_hw *hw;
	u64 tstamp_ready;
+	bool link_up;
	int err;
	u8 idx;

@@ -695,11 +696,14 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
	if (err)
		return false;

+	/* Drop packets if the link went down */
+	link_up = ptp_port->link_up;
+
	for_each_set_bit(idx, tx->in_use, tx->len) {
		struct skb_shared_hwtstamps shhwtstamps = {};
		u8 phy_idx = idx + tx->offset;
		u64 raw_tstamp = 0, tstamp;
-		bool drop_ts = false;
+		bool drop_ts = !link_up;
		struct sk_buff *skb;

		/* Drop packets which have waited for more than 2 seconds */
@@ -728,7 +732,7 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
		ice_trace(tx_tstamp_fw_req, tx->tstamps[idx].skb, idx);

		err = ice_read_phy_tstamp(hw, tx->block, phy_idx, &raw_tstamp);
-		if (err)
+		if (err && !drop_ts)
			continue;

		ice_trace(tx_tstamp_fw_done, tx->tstamps[idx].skb, idx);
@@ -1770,6 +1774,38 @@ ice_ptp_gpio_enable_e810(struct ptp_clock_info *info,
}

/**
+ * ice_ptp_gpio_enable_e823 - Enable/disable ancillary features of PHC
+ * @info: the driver's PTP info structure
+ * @rq: The requested feature to change
+ * @on: Enable/disable flag
+ */
+static int ice_ptp_gpio_enable_e823(struct ptp_clock_info *info,
+				    struct ptp_clock_request *rq, int on)
+{
+	struct ice_pf *pf = ptp_info_to_pf(info);
+	struct ice_perout_channel clk_cfg = {0};
+	int err;
+
+	switch (rq->type) {
+	case PTP_CLK_REQ_PPS:
+		clk_cfg.gpio_pin = PPS_PIN_INDEX;
+		clk_cfg.period = NSEC_PER_SEC;
+		clk_cfg.ena = !!on;
+
+		err = ice_ptp_cfg_clkout(pf, PPS_CLK_GEN_CHAN, &clk_cfg, true);
+		break;
+	case PTP_CLK_REQ_EXTTS:
+		err = ice_ptp_cfg_extts(pf, !!on, rq->extts.index,
+					TIME_SYNC_PIN_INDEX, rq->extts.flags);
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return err;
+}
+
+/**
 * ice_ptp_gettimex64 - Get the time of the clock
 * @info: the driver's PTP info structure
 * @ts: timespec64 structure to hold the current time value
@@ -2221,6 +2257,19 @@ ice_ptp_setup_pins_e810(struct ice_pf *pf, struct ptp_clock_info *info)
}

/**
+ * ice_ptp_setup_pins_e823 - Setup PTP pins in sysfs
+ * @pf: pointer to the PF instance
+ * @info: PTP clock capabilities
+ */
+static void
+ice_ptp_setup_pins_e823(struct ice_pf *pf, struct ptp_clock_info *info)
+{
+	info->pps = 1;
+	info->n_per_out = 0;
+	info->n_ext_ts = 1;
+}
+
+/**
 * ice_ptp_set_funcs_e822 - Set specialized functions for E822 support
 * @pf: Board private structure
 * @info: PTP info to fill
@@ -2258,6 +2307,23 @@ ice_ptp_set_funcs_e810(struct ice_pf *pf, struct ptp_clock_info *info)
}

/**
+ * ice_ptp_set_funcs_e823 - Set specialized functions for E823 support
+ * @pf: Board private structure
+ * @info: PTP info to fill
+ *
+ * Assign functions to the PTP capabilities structure for E823 devices.
+ * Functions which operate across all device families should be set directly
+ * in ice_ptp_set_caps. Only add functions here which are distinct for e823
+ * devices.
+ */
+static void
+ice_ptp_set_funcs_e823(struct ice_pf *pf, struct ptp_clock_info *info)
+{
+	info->enable = ice_ptp_gpio_enable_e823;
+	ice_ptp_setup_pins_e823(pf, info);
+}
+
+/**
 * ice_ptp_set_caps - Set PTP capabilities
 * @pf: Board private structure
 */
@@ -2269,7 +2335,7 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
	snprintf(info->name, sizeof(info->name) - 1, "%s-%s-clk",
		 dev_driver_string(dev), dev_name(dev));
	info->owner = THIS_MODULE;
-	info->max_adj = 999999999;
+	info->max_adj = 100000000;
	info->adjtime = ice_ptp_adjtime;
	info->adjfine = ice_ptp_adjfine;
	info->gettimex64 = ice_ptp_gettimex64;
@@ -2277,6 +2343,8 @@ static void ice_ptp_set_caps(struct ice_pf *pf)

	if (ice_is_e810(&pf->hw))
		ice_ptp_set_funcs_e810(pf, info);
+	else if (ice_is_e823(&pf->hw))
+		ice_ptp_set_funcs_e823(pf, info);
	else
		ice_ptp_set_funcs_e822(pf, info);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 6d08b397df2a..4eca8d195ef0 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1063,7 +1063,6 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
	*num_nodes_added = 0;
	while (*num_nodes_added < num_nodes) {
		u16 max_child_nodes, num_added = 0;
-		/* cppcheck-suppress unusedVariable */
		u32 temp;

		status = ice_sched_add_nodes_to_hw_layer(pi, tc_node, parent,
@@ -1655,12 +1654,13 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
	u32 first_node_teid;
	u16 num_added = 0;
	u8 i, qgl, vsil;
-	int status;

	qgl = ice_sched_get_qgrp_layer(hw);
	vsil = ice_sched_get_vsi_layer(hw);
	parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
	for (i = vsil + 1; i <= qgl; i++) {
+		int status;
+
		if (!parent)
			return -EIO;

@@ -1756,13 +1756,14 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
	u32 first_node_teid;
	u16 num_added = 0;
	u8 i, vsil;
-	int status;

	if (!pi)
		return -EINVAL;

	vsil = ice_sched_get_vsi_layer(pi->hw);
	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		int status;
+
		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
						      i, num_nodes[i],
						      &first_node_teid,
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 3ba1408c56a9..96a64c25e2ef 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -41,21 +41,6 @@ static void ice_free_vf_entries(struct ice_pf *pf)
}

/**
- * ice_vf_vsi_release - invalidate the VF's VSI after freeing it
- * @vf: invalidate this VF's VSI after freeing it
- */
-static void ice_vf_vsi_release(struct ice_vf *vf)
-{
-	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
-
-	if (WARN_ON(!vsi))
-		return;
-
-	ice_vsi_release(vsi);
-	ice_vf_invalidate_vsi(vf);
-}
-
-/**
 * ice_free_vf_res - Free a VF's resources
 * @vf: pointer to the VF info
 */
@@ -248,11 +233,16 @@ void ice_free_vfs(struct ice_pf *pf)
 */
static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
{
-	struct ice_port_info *pi = ice_vf_get_port_info(vf);
+	struct ice_vsi_cfg_params params = {};
	struct ice_pf *pf = vf->pf;
	struct ice_vsi *vsi;

-	vsi = ice_vsi_setup(pf, pi, ICE_VSI_VF, vf, NULL);
+	params.type = ICE_VSI_VF;
+	params.pi = ice_vf_get_port_info(vf);
+	params.vf = vf;
+	params.flags = ICE_VSI_FLAG_INIT;
+
+	vsi = ice_vsi_setup(pf, &params);

	if (!vsi) {
		dev_err(ice_pf_to_dev(pf), "Failed to create VF VSI\n");
@@ -583,51 +573,19 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16 num_vfs)
 */
static int ice_init_vf_vsi_res(struct ice_vf *vf)
{
-	struct ice_vsi_vlan_ops *vlan_ops;
	struct ice_pf *pf = vf->pf;
-	u8 broadcast[ETH_ALEN];
	struct ice_vsi *vsi;
-	struct device *dev;
	int err;

	vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);

-	dev = ice_pf_to_dev(pf);
	vsi = ice_vf_vsi_setup(vf);
	if (!vsi)
		return -ENOMEM;

-	err = ice_vsi_add_vlan_zero(vsi);
-	if (err) {
-		dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n",
-			 vf->vf_id);
-		goto release_vsi;
-	}
-
-	vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
-	err = vlan_ops->ena_rx_filtering(vsi);
-	if (err) {
-		dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n",
-			 vf->vf_id);
-		goto release_vsi;
-	}
-
-	eth_broadcast_addr(broadcast);
-	err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI);
-	if (err) {
-		dev_err(dev, "Failed to add broadcast MAC filter for VF %d, error %d\n",
-			vf->vf_id, err);
-		goto release_vsi;
-	}
-
-	err = ice_vsi_apply_spoofchk(vsi, vf->spoofchk);
-	if (err) {
-		dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n",
-			 vf->vf_id);
+	err = ice_vf_init_host_cfg(vf, vsi);
+	if (err)
		goto release_vsi;
-	}
-
-	vf->num_mac = 1;

	return 0;

@@ -697,6 +655,21 @@ static void ice_sriov_free_vf(struct ice_vf *vf)
}

/**
+ * ice_sriov_clear_reset_state - clears VF Reset status register
+ * @vf: the VF to configure
+ */
+static void ice_sriov_clear_reset_state(struct ice_vf *vf)
+{
+	struct ice_hw *hw = &vf->pf->hw;
+
+	/* Clear the reset status register so that VF immediately sees that
+	 * the device is resetting, even if hardware hasn't yet gotten around
+	 * to clearing VFGEN_RSTAT for us.
+	 */
+	wr32(hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_INPROGRESS);
+}
+
+/**
 * ice_sriov_clear_mbx_register - clears SRIOV VF's mailbox registers
 * @vf: the vf to configure
 */
@@ -799,23 +772,19 @@ static void ice_sriov_clear_reset_trigger(struct ice_vf *vf)
}

/**
- * ice_sriov_vsi_rebuild - release and rebuild VF's VSI
- * @vf: VF to release and setup the VSI for
+ * ice_sriov_create_vsi - Create a new VSI for a VF
+ * @vf: VF to create the VSI for
 *
- * This is only called when a single VF is being reset (i.e. VFR, VFLR, host VF
- * configuration change, etc.).
+ * This is called by ice_vf_recreate_vsi to create the new VSI after the old
+ * VSI has been released.
*/ -static int ice_sriov_vsi_rebuild(struct ice_vf *vf) +static int ice_sriov_create_vsi(struct ice_vf *vf) { - struct ice_pf *pf = vf->pf; + struct ice_vsi *vsi; - ice_vf_vsi_release(vf); - if (!ice_vf_vsi_setup(vf)) { - dev_err(ice_pf_to_dev(pf), - "Failed to release and setup the VF%u's VSI\n", - vf->vf_id); + vsi = ice_vf_vsi_setup(vf); + if (!vsi) return -ENOMEM; - } return 0; } @@ -826,8 +795,6 @@ static int ice_sriov_vsi_rebuild(struct ice_vf *vf) */ static void ice_sriov_post_vsi_rebuild(struct ice_vf *vf) { - ice_vf_rebuild_host_cfg(vf); - ice_vf_set_initialized(vf); ice_ena_vf_mappings(vf); wr32(&vf->pf->hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_VFACTIVE); } @@ -835,11 +802,13 @@ static void ice_sriov_post_vsi_rebuild(struct ice_vf *vf) static const struct ice_vf_ops ice_sriov_vf_ops = { .reset_type = ICE_VF_RESET, .free = ice_sriov_free_vf, + .clear_reset_state = ice_sriov_clear_reset_state, .clear_mbx_register = ice_sriov_clear_mbx_register, .trigger_reset_register = ice_sriov_trigger_reset_register, .poll_reset_status = ice_sriov_poll_reset_status, .clear_reset_trigger = ice_sriov_clear_reset_trigger, - .vsi_rebuild = ice_sriov_vsi_rebuild, + .irq_close = NULL, + .create_vsi = ice_sriov_create_vsi, .post_vsi_rebuild = ice_sriov_post_vsi_rebuild, }; @@ -879,21 +848,9 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs) /* set sriov vf ops for VFs created during SRIOV flow */ vf->vf_ops = &ice_sriov_vf_ops; - vf->vf_sw_id = pf->first_sw; - /* assign default capabilities */ - vf->spoofchk = true; - vf->num_vf_qs = pf->vfs.num_qps_per; - ice_vc_set_default_allowlist(vf); - - /* ctrl_vsi_idx will be set to a valid value only when VF - * creates its first fdir rule. - */ - ice_vf_ctrl_invalidate_vsi(vf); - ice_vf_fdir_init(vf); - - ice_virtchnl_set_dflt_ops(vf); + ice_initialize_vf_entry(vf); - mutex_init(&vf->cfg_lock); + vf->vf_sw_id = pf->first_sw; hash_add_rcu(vfs->table, &vf->entry, vf_id); } @@ -1285,7 +1242,7 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) goto out_put_vf; ivi->vf = vf_id; - ether_addr_copy(ivi->mac, vf->hw_lan_addr.addr); + ether_addr_copy(ivi->mac, vf->hw_lan_addr); /* VF configuration for VLAN and applicable QoS */ ivi->vlan = ice_vf_get_port_vlan_id(vf); @@ -1333,8 +1290,8 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac) return -EINVAL; /* nothing left to do, unicast MAC already set */ - if (ether_addr_equal(vf->dev_lan_addr.addr, mac) && - ether_addr_equal(vf->hw_lan_addr.addr, mac)) { + if (ether_addr_equal(vf->dev_lan_addr, mac) && + ether_addr_equal(vf->hw_lan_addr, mac)) { ret = 0; goto out_put_vf; } @@ -1348,8 +1305,8 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac) /* VF is notified of its new MAC via the PF's response to the * VIRTCHNL_OP_GET_VF_RESOURCES message after the VF has been reset */ - ether_addr_copy(vf->dev_lan_addr.addr, mac); - ether_addr_copy(vf->hw_lan_addr.addr, mac); + ether_addr_copy(vf->dev_lan_addr, mac); + ether_addr_copy(vf->hw_lan_addr, mac); if (is_zero_ether_addr(mac)) { /* VF will send VIRTCHNL_OP_ADD_ETH_ADDR message with its MAC */ vf->pf_set_mac = false; @@ -1750,7 +1707,7 @@ void ice_print_vf_rx_mdd_event(struct ice_vf *vf) dev_info(dev, "%d Rx Malicious Driver Detection events detected on PF %d VF %d MAC %pM. mdd-auto-reset-vfs=%s\n", vf->mdd_rx_events.count, pf->hw.pf_id, vf->vf_id, - vf->dev_lan_addr.addr, + vf->dev_lan_addr, test_bit(ICE_FLAG_MDD_AUTO_RESET_VF, pf->flags) ? 
"on" : "off"); } @@ -1794,7 +1751,7 @@ void ice_print_vfs_mdd_events(struct ice_pf *pf) dev_info(dev, "%d Tx Malicious Driver Detection events detected on PF %d VF %d MAC %pM.\n", vf->mdd_tx_events.count, hw->pf_id, vf->vf_id, - vf->dev_lan_addr.addr); + vf->dev_lan_addr); } } mutex_unlock(&pf->vfs.table_lock); @@ -1884,7 +1841,7 @@ ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event, if (pf_vsi) dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n", - &vf->dev_lan_addr.addr[0], + &vf->dev_lan_addr[0], pf_vsi->netdev->dev_addr); } } diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index 95f392ab9670..6b48cbc049c6 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -792,7 +792,7 @@ static struct ice_vsi * ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr) { struct ice_rx_ring *ring = NULL; - struct ice_vsi *ch_vsi = NULL; + struct ice_vsi *dest_vsi = NULL; struct ice_pf *pf = vsi->back; struct device *dev; u32 tc_class; @@ -810,7 +810,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr) return ERR_PTR(-EOPNOTSUPP); } /* Locate ADQ VSI depending on hw_tc number */ - ch_vsi = vsi->tc_map_vsi[tc_class]; + dest_vsi = vsi->tc_map_vsi[tc_class]; break; case ICE_FWD_TO_Q: /* Locate the Rx queue */ @@ -824,7 +824,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr) /* Determine destination VSI even though the action is * FWD_TO_QUEUE, because QUEUE is associated with VSI */ - ch_vsi = tc_fltr->dest_vsi; + dest_vsi = tc_fltr->dest_vsi; break; default: dev_err(dev, @@ -832,13 +832,13 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr) tc_fltr->action.fltr_act); return ERR_PTR(-EINVAL); } - /* Must have valid ch_vsi (it could be main VSI or ADQ VSI) */ - if (!ch_vsi) { + /* Must have valid dest_vsi (it could be main VSI or ADQ VSI) */ + if (!dest_vsi) { dev_err(dev, "Unable to add filter because specified destination VSI doesn't exist\n"); return ERR_PTR(-EINVAL); } - return ch_vsi; + return dest_vsi; } /** @@ -860,7 +860,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, struct ice_pf *pf = vsi->back; struct ice_hw *hw = &pf->hw; u32 flags = tc_fltr->flags; - struct ice_vsi *ch_vsi; + struct ice_vsi *dest_vsi; struct device *dev; u16 lkups_cnt = 0; u16 l4_proto = 0; @@ -883,9 +883,11 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, } /* validate forwarding action VSI and queue */ - ch_vsi = ice_tc_forward_action(vsi, tc_fltr); - if (IS_ERR(ch_vsi)) - return PTR_ERR(ch_vsi); + if (ice_is_forward_action(tc_fltr->action.fltr_act)) { + dest_vsi = ice_tc_forward_action(vsi, tc_fltr); + if (IS_ERR(dest_vsi)) + return PTR_ERR(dest_vsi); + } lkups_cnt = ice_tc_count_lkups(flags, headers, tc_fltr); list = kcalloc(lkups_cnt, sizeof(*list), GFP_ATOMIC); @@ -904,7 +906,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, switch (tc_fltr->action.fltr_act) { case ICE_FWD_TO_VSI: - rule_info.sw_act.vsi_handle = ch_vsi->idx; + rule_info.sw_act.vsi_handle = dest_vsi->idx; rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI; rule_info.sw_act.src = hw->pf_id; rule_info.rx = true; @@ -915,7 +917,7 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, case ICE_FWD_TO_Q: /* HW queue number in global space */ rule_info.sw_act.fwd_id.q_id = 
tc_fltr->action.fwd.q.hw_queue; - rule_info.sw_act.vsi_handle = ch_vsi->idx; + rule_info.sw_act.vsi_handle = dest_vsi->idx; rule_info.priority = ICE_SWITCH_FLTR_PRIO_QUEUE; rule_info.sw_act.src = hw->pf_id; rule_info.rx = true; @@ -923,14 +925,15 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, tc_fltr->action.fwd.q.queue, tc_fltr->action.fwd.q.hw_queue, lkups_cnt); break; - default: - rule_info.sw_act.flag |= ICE_FLTR_TX; - /* In case of Tx (LOOKUP_TX), src needs to be src VSI */ - rule_info.sw_act.src = vsi->idx; - /* 'Rx' is false, direction of rule(LOOKUPTRX) */ - rule_info.rx = false; + case ICE_DROP_PACKET: + rule_info.sw_act.flag |= ICE_FLTR_RX; + rule_info.sw_act.src = hw->pf_id; + rule_info.rx = true; rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI; break; + default: + ret = -EOPNOTSUPP; + goto exit; } ret = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, &rule_added); @@ -953,11 +956,11 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, tc_fltr->dest_vsi_handle = rule_added.vsi_handle; if (tc_fltr->action.fltr_act == ICE_FWD_TO_VSI || tc_fltr->action.fltr_act == ICE_FWD_TO_Q) { - tc_fltr->dest_vsi = ch_vsi; + tc_fltr->dest_vsi = dest_vsi; /* keep track of advanced switch filter for * destination VSI */ - ch_vsi->num_chnl_fltr++; + dest_vsi->num_chnl_fltr++; /* keeps track of channel filters for PF VSI */ if (vsi->type == ICE_VSI_PF && @@ -978,6 +981,10 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi, tc_fltr->action.fwd.q.hw_queue, rule_added.rid, rule_added.rule_id); break; + case ICE_DROP_PACKET: + dev_dbg(dev, "added switch rule (lkups_cnt %u, flags 0x%x), action is drop, rid %u, rule_id %u\n", + lkups_cnt, flags, rule_added.rid, rule_added.rule_id); + break; default: break; } @@ -1712,6 +1719,9 @@ ice_tc_parse_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr, case FLOW_ACTION_RX_QUEUE_MAPPING: /* forward to queue */ return ice_tc_forward_to_queue(vsi, fltr, act); + case FLOW_ACTION_DROP: + fltr->action.fltr_act = ICE_DROP_PACKET; + return 0; default: NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported TC action"); return -EOPNOTSUPP; diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h index d916d1e92aa3..8d5e22ac7023 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.h @@ -211,4 +211,14 @@ ice_del_cls_flower(struct ice_vsi *vsi, struct flow_cls_offload *cls_flower); void ice_replay_tc_fltrs(struct ice_pf *pf); bool ice_is_tunnel_supported(struct net_device *dev); +static inline bool ice_is_forward_action(enum ice_sw_fwd_act_type fltr_act) +{ + switch (fltr_act) { + case ICE_FWD_TO_VSI: + case ICE_FWD_TO_Q: + return true; + default: + return false; + } +} #endif /* _ICE_TC_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 086f0b3ab68d..dfd22862e926 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -85,7 +85,7 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc, td_cmd = ICE_TXD_LAST_DESC_CMD | ICE_TX_DESC_CMD_DUMMY | ICE_TX_DESC_CMD_RE; - tx_buf->tx_flags = ICE_TX_FLAGS_DUMMY_PKT; + tx_buf->type = ICE_TX_BUF_DUMMY; tx_buf->raw_buf = raw_packet; tx_desc->cmd_type_offset_bsz = @@ -112,27 +112,29 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc, static void ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf) { - if (tx_buf->skb) { - if (tx_buf->tx_flags & 
ICE_TX_FLAGS_DUMMY_PKT) - devm_kfree(ring->dev, tx_buf->raw_buf); - else if (ice_ring_is_xdp(ring)) - page_frag_free(tx_buf->raw_buf); - else - dev_kfree_skb_any(tx_buf->skb); - if (dma_unmap_len(tx_buf, len)) - dma_unmap_single(ring->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - } else if (dma_unmap_len(tx_buf, len)) { + if (dma_unmap_len(tx_buf, len)) dma_unmap_page(ring->dev, dma_unmap_addr(tx_buf, dma), dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); + + switch (tx_buf->type) { + case ICE_TX_BUF_DUMMY: + devm_kfree(ring->dev, tx_buf->raw_buf); + break; + case ICE_TX_BUF_SKB: + dev_kfree_skb_any(tx_buf->skb); + break; + case ICE_TX_BUF_XDP_TX: + page_frag_free(tx_buf->raw_buf); + break; + case ICE_TX_BUF_XDP_XMIT: + xdp_return_frame(tx_buf->xdpf); + break; } tx_buf->next_to_watch = NULL; - tx_buf->skb = NULL; + tx_buf->type = ICE_TX_BUF_EMPTY; dma_unmap_len_set(tx_buf, len, 0); /* tx_buf must be completely set up in the transmit path */ } @@ -174,8 +176,6 @@ tx_skip_free: tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; - tx_ring->next_dd = ICE_RING_QUARTER(tx_ring) - 1; - tx_ring->next_rs = ICE_RING_QUARTER(tx_ring) - 1; if (!tx_ring->netdev) return; @@ -267,7 +267,7 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget) DMA_TO_DEVICE); /* clear tx_buf data */ - tx_buf->skb = NULL; + tx_buf->type = ICE_TX_BUF_EMPTY; dma_unmap_len_set(tx_buf, len, 0); /* unmap remaining buffers */ @@ -382,6 +382,7 @@ err: */ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring) { + struct xdp_buff *xdp = &rx_ring->xdp; struct device *dev = rx_ring->dev; u32 size; u16 i; @@ -390,16 +391,16 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring) if (!rx_ring->rx_buf) return; - if (rx_ring->skb) { - dev_kfree_skb(rx_ring->skb); - rx_ring->skb = NULL; - } - if (rx_ring->xsk_pool) { ice_xsk_clean_rx_ring(rx_ring); goto rx_skip_free; } + if (xdp->data) { + xdp_return_buff(xdp); + xdp->data = NULL; + } + /* Free all the Rx ring sk_buffs */ for (i = 0; i < rx_ring->count; i++) { struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i]; @@ -437,6 +438,7 @@ rx_skip_free: rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; + rx_ring->first_desc = 0; rx_ring->next_to_use = 0; } @@ -506,6 +508,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring) rx_ring->next_to_use = 0; rx_ring->next_to_clean = 0; + rx_ring->first_desc = 0; if (ice_is_xdp_ena_vsi(rx_ring->vsi)) WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog); @@ -523,8 +526,16 @@ err: return -ENOMEM; } +/** + * ice_rx_frame_truesize + * @rx_ring: ptr to Rx ring + * @size: size + * + * calculate the truesize with taking into the account PAGE_SIZE of + * underlying arch + */ static unsigned int -ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused size) +ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size) { unsigned int truesize; @@ -545,34 +556,39 @@ ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused s * @xdp: xdp_buff used as input to the XDP program * @xdp_prog: XDP program to run * @xdp_ring: ring to be used for XDP_TX action + * @rx_buf: Rx buffer to store the XDP action * * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR} */ -static int +static void ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring) + struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring, + struct ice_rx_buf *rx_buf) { - int err; + unsigned int ret = ICE_XDP_PASS; u32 act; + if 
(!xdp_prog) + goto exit; + act = bpf_prog_run_xdp(xdp_prog, xdp); switch (act) { case XDP_PASS: - return ICE_XDP_PASS; + break; case XDP_TX: if (static_branch_unlikely(&ice_xdp_locking_key)) spin_lock(&xdp_ring->tx_lock); - err = ice_xmit_xdp_ring(xdp->data, xdp->data_end - xdp->data, xdp_ring); + ret = __ice_xmit_xdp_ring(xdp, xdp_ring, false); if (static_branch_unlikely(&ice_xdp_locking_key)) spin_unlock(&xdp_ring->tx_lock); - if (err == ICE_XDP_CONSUMED) + if (ret == ICE_XDP_CONSUMED) goto out_failure; - return err; + break; case XDP_REDIRECT: - err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); - if (err) + if (xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog)) goto out_failure; - return ICE_XDP_REDIR; + ret = ICE_XDP_REDIR; + break; default: bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act); fallthrough; @@ -581,8 +597,31 @@ out_failure: trace_xdp_exception(rx_ring->netdev, xdp_prog, act); fallthrough; case XDP_DROP: - return ICE_XDP_CONSUMED; + ret = ICE_XDP_CONSUMED; } +exit: + rx_buf->act = ret; + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, ret); +} + +/** + * ice_xmit_xdp_ring - submit frame to XDP ring for transmission + * @xdpf: XDP frame that will be converted to XDP buff + * @xdp_ring: XDP ring for transmission + */ +static int ice_xmit_xdp_ring(const struct xdp_frame *xdpf, + struct ice_tx_ring *xdp_ring) +{ + struct xdp_buff xdp; + + xdp.data_hard_start = (void *)xdpf; + xdp.data = xdpf->data; + xdp.data_end = xdp.data + xdpf->len; + xdp.frame_sz = xdpf->frame_sz; + xdp.flags = xdpf->flags; + + return __ice_xmit_xdp_ring(&xdp, xdp_ring, true); } /** @@ -605,6 +644,7 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, unsigned int queue_index = smp_processor_id(); struct ice_vsi *vsi = np->vsi; struct ice_tx_ring *xdp_ring; + struct ice_tx_buf *tx_buf; int nxmit = 0, i; if (test_bit(ICE_VSI_DOWN, vsi->state)) @@ -627,16 +667,18 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, xdp_ring = vsi->xdp_rings[queue_index]; } + tx_buf = &xdp_ring->tx_buf[xdp_ring->next_to_use]; for (i = 0; i < n; i++) { - struct xdp_frame *xdpf = frames[i]; + const struct xdp_frame *xdpf = frames[i]; int err; - err = ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring); + err = ice_xmit_xdp_ring(xdpf, xdp_ring); if (err != ICE_XDP_TX) break; nxmit++; } + tx_buf->rs_idx = ice_set_rs_bit(xdp_ring); if (unlikely(flags & XDP_XMIT_FLUSH)) ice_xdp_ring_update_tail(xdp_ring); @@ -706,7 +748,7 @@ ice_alloc_mapped_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *bi) * buffers. Then bump tail at most one time. Grouping like this lets us avoid * multiple tail writes per call. 
*/ -bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, u16 cleaned_count) +bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count) { union ice_32b_rx_flex_desc *rx_desc; u16 ntu = rx_ring->next_to_use; @@ -783,7 +825,6 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size) /** * ice_can_reuse_rx_page - Determine if page can be reused for another Rx * @rx_buf: buffer containing the page - * @rx_buf_pgcnt: rx_buf page refcount pre xdp_do_redirect() call * * If page is reusable, we have a green light for calling ice_reuse_rx_page, * which will assign the current buffer to the buffer that next_to_alloc is @@ -791,7 +832,7 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size) * page freed */ static bool -ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) +ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf) { unsigned int pagecnt_bias = rx_buf->pagecnt_bias; struct page *page = rx_buf->page; @@ -802,7 +843,7 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) #if (PAGE_SIZE < 8192) /* if we are only owner of page we can reuse it */ - if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1)) + if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1)) return false; #else #define ICE_LAST_OFFSET \ @@ -824,33 +865,44 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) } /** - * ice_add_rx_frag - Add contents of Rx buffer to sk_buff as a frag + * ice_add_xdp_frag - Add contents of Rx buffer to xdp buf as a frag * @rx_ring: Rx descriptor ring to transact packets on + * @xdp: xdp buff to place the data into * @rx_buf: buffer containing page to add - * @skb: sk_buff to place the data into * @size: packet length from rx_desc * - * This function will add the data contained in rx_buf->page to the skb. - * It will just attach the page as a frag to the skb. - * The function will then update the page offset. + * This function will add the data contained in rx_buf->page to the xdp buf. + * It will just attach the page as a frag. 
*/ -static void -ice_add_rx_frag(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct sk_buff *skb, unsigned int size) +static int +ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, + struct ice_rx_buf *rx_buf, const unsigned int size) { -#if (PAGE_SIZE >= 8192) - unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset); -#else - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#endif + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); if (!size) - return; - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->page_offset, size, truesize); + return 0; + + if (!xdp_buff_has_frags(xdp)) { + sinfo->nr_frags = 0; + sinfo->xdp_frags_size = 0; + xdp_buff_set_frags_flag(xdp); + } - /* page is being used so we must update the page offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) { + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED); + return -ENOMEM; + } + + __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page, + rx_buf->page_offset, size); + sinfo->xdp_frags_size += size; + + if (page_is_pfmemalloc(rx_buf->page)) + xdp_buff_set_frag_pfmemalloc(xdp); + + return 0; } /** @@ -886,19 +938,18 @@ ice_reuse_rx_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *old_buf) * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use * @rx_ring: Rx descriptor ring to transact packets on * @size: size of buffer to add to skb - * @rx_buf_pgcnt: rx_buf page refcount * * This function will pull an Rx buffer from the ring and synchronize it * for use by the CPU. */ static struct ice_rx_buf * ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size, - int *rx_buf_pgcnt) + const unsigned int ntc) { struct ice_rx_buf *rx_buf; - rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean]; - *rx_buf_pgcnt = + rx_buf = &rx_ring->rx_buf[ntc]; + rx_buf->pgcnt = #if (PAGE_SIZE < 8192) page_count(rx_buf->page); #else @@ -922,26 +973,25 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size, /** * ice_build_skb - Build skb around an existing buffer * @rx_ring: Rx descriptor ring to transact packets on - * @rx_buf: Rx buffer to pull data from * @xdp: xdp_buff pointing to the data * - * This function builds an skb around an existing Rx buffer, taking care - * to set up the skb correctly and avoid any memcpy overhead. + * This function builds an skb around an existing XDP buffer, taking care + * to set up the skb correctly and avoid any memcpy overhead. Driver has + * already combined frags (if any) to skb_shared_info. */ static struct sk_buff * -ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct xdp_buff *xdp) +ice_build_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) { u8 metasize = xdp->data - xdp->data_meta; -#if (PAGE_SIZE < 8192) - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + - SKB_DATA_ALIGN(xdp->data_end - - xdp->data_hard_start); -#endif + struct skb_shared_info *sinfo = NULL; + unsigned int nr_frags; struct sk_buff *skb; + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + } + /* Prefetch first cache line of first page. 
If xdp->data_meta * is unused, this points exactly as xdp->data, otherwise we * likely have a consumer accessing first few bytes of meta @@ -949,7 +999,7 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, */ net_prefetch(xdp->data_meta); /* build an skb around the page buffer */ - skb = napi_build_skb(xdp->data_hard_start, truesize); + skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz); if (unlikely(!skb)) return NULL; @@ -964,8 +1014,11 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, if (metasize) skb_metadata_set(skb, metasize); - /* buffer is used by skb, update page_offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + if (unlikely(xdp_buff_has_frags(xdp))) + xdp_update_skb_shared_info(skb, nr_frags, + sinfo->xdp_frags_size, + nr_frags * xdp->frame_sz, + xdp_buff_is_frag_pfmemalloc(xdp)); return skb; } @@ -981,24 +1034,30 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, * skb correctly. */ static struct sk_buff * -ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct xdp_buff *xdp) +ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) { - unsigned int metasize = xdp->data - xdp->data_meta; unsigned int size = xdp->data_end - xdp->data; + struct skb_shared_info *sinfo = NULL; + struct ice_rx_buf *rx_buf; + unsigned int nr_frags = 0; unsigned int headlen; struct sk_buff *skb; /* prefetch first cache line of first page */ - net_prefetch(xdp->data_meta); + net_prefetch(xdp->data); + + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + } /* allocate a skb to store the frags */ - skb = __napi_alloc_skb(&rx_ring->q_vector->napi, - ICE_RX_HDR_SIZE + metasize, + skb = __napi_alloc_skb(&rx_ring->q_vector->napi, ICE_RX_HDR_SIZE, GFP_ATOMIC | __GFP_NOWARN); if (unlikely(!skb)) return NULL; + rx_buf = &rx_ring->rx_buf[rx_ring->first_desc]; skb_record_rx_queue(skb, rx_ring->q_index); /* Determine available headroom for copy */ headlen = size; @@ -1006,32 +1065,42 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, headlen = eth_get_headlen(skb->dev, xdp->data, ICE_RX_HDR_SIZE); /* align pull length to size of long to optimize memcpy performance */ - memcpy(__skb_put(skb, headlen + metasize), xdp->data_meta, - ALIGN(headlen + metasize, sizeof(long))); - - if (metasize) { - skb_metadata_set(skb, metasize); - __skb_pull(skb, metasize); - } + memcpy(__skb_put(skb, headlen), xdp->data, ALIGN(headlen, + sizeof(long))); /* if we exhaust the linear part then add what is left as a frag */ size -= headlen; if (size) { -#if (PAGE_SIZE >= 8192) - unsigned int truesize = SKB_DATA_ALIGN(size); -#else - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#endif + /* besides adding here a partial frag, we are going to add + * frags from xdp_buff, make sure there is enough space for + * them + */ + if (unlikely(nr_frags >= MAX_SKB_FRAGS - 1)) { + dev_kfree_skb(skb); + return NULL; + } skb_add_rx_frag(skb, 0, rx_buf->page, - rx_buf->page_offset + headlen, size, truesize); - /* buffer is used by skb, update page_offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + rx_buf->page_offset + headlen, size, + xdp->frame_sz); } else { - /* buffer is unused, reset bias back to rx_buf; data was copied - * onto skb's linear part so there's no need for adjusting - * page offset and we can reuse this buffer as-is + /* buffer is unused, change the act that should be taken later + * on; data was copied 
onto skb's linear part so there's no + * need for adjusting page offset and we can reuse this buffer + * as-is */ - rx_buf->pagecnt_bias++; + rx_buf->act = ICE_SKB_CONSUMED; + } + + if (unlikely(xdp_buff_has_frags(xdp))) { + struct skb_shared_info *skinfo = skb_shinfo(skb); + + memcpy(&skinfo->frags[skinfo->nr_frags], &sinfo->frags[0], + sizeof(skb_frag_t) * nr_frags); + + xdp_update_skb_shared_info(skb, skinfo->nr_frags + nr_frags, + sinfo->xdp_frags_size, + nr_frags * xdp->frame_sz, + xdp_buff_is_frag_pfmemalloc(xdp)); } return skb; @@ -1041,26 +1110,17 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, * ice_put_rx_buf - Clean up used buffer and either recycle or free * @rx_ring: Rx descriptor ring to transact packets on * @rx_buf: Rx buffer to pull data from - * @rx_buf_pgcnt: Rx buffer page count pre xdp_do_redirect() * - * This function will update next_to_clean and then clean up the contents - * of the rx_buf. It will either recycle the buffer or unmap it and free - * the associated resources. + * This function will clean up the contents of the rx_buf. It will either + * recycle the buffer or unmap it and free the associated resources. */ static void -ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - int rx_buf_pgcnt) +ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf) { - u16 ntc = rx_ring->next_to_clean + 1; - - /* fetch, update, and store next to clean */ - ntc = (ntc < rx_ring->count) ? ntc : 0; - rx_ring->next_to_clean = ntc; - if (!rx_buf) return; - if (ice_can_reuse_rx_page(rx_buf, rx_buf_pgcnt)) { + if (ice_can_reuse_rx_page(rx_buf)) { /* hand second half of page back to the ring */ ice_reuse_rx_page(rx_ring, rx_buf); } else { @@ -1076,27 +1136,6 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, } /** - * ice_is_non_eop - process handling of non-EOP buffers - * @rx_ring: Rx ring being processed - * @rx_desc: Rx descriptor for current buffer - * - * If the buffer is an EOP buffer, this function exits returning false, - * otherwise return true indicating that this is in fact a non-EOP buffer. 
- */ -static bool -ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) -{ - /* if we are the last buffer then there is nothing else to do */ -#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) - if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) - return false; - - rx_ring->ring_stats->rx_stats.non_eop_descs++; - - return true; -} - -/** * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf * @rx_ring: Rx descriptor ring to transact packets on * @budget: Total limit on number of packets to process @@ -1110,39 +1149,42 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) */ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) { - unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0; - u16 cleaned_count = ICE_DESC_UNUSED(rx_ring); + unsigned int total_rx_bytes = 0, total_rx_pkts = 0; unsigned int offset = rx_ring->rx_offset; + struct xdp_buff *xdp = &rx_ring->xdp; struct ice_tx_ring *xdp_ring = NULL; - unsigned int xdp_res, xdp_xmit = 0; - struct sk_buff *skb = rx_ring->skb; struct bpf_prog *xdp_prog = NULL; - struct xdp_buff xdp; + u32 ntc = rx_ring->next_to_clean; + u32 cnt = rx_ring->count; + u32 cached_ntc = ntc; + u32 xdp_xmit = 0; + u32 cached_ntu; bool failure; + u32 first; /* Frame size depend on rx_ring setup when PAGE_SIZE=4K */ #if (PAGE_SIZE < 8192) - frame_sz = ice_rx_frame_truesize(rx_ring, 0); + xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0); #endif - xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq); xdp_prog = READ_ONCE(rx_ring->xdp_prog); - if (xdp_prog) + if (xdp_prog) { xdp_ring = rx_ring->xdp_ring; + cached_ntu = xdp_ring->next_to_use; + } /* start the loop to process Rx packets bounded by 'budget' */ while (likely(total_rx_pkts < (unsigned int)budget)) { union ice_32b_rx_flex_desc *rx_desc; struct ice_rx_buf *rx_buf; - unsigned char *hard_start; + struct sk_buff *skb; unsigned int size; u16 stat_err_bits; - int rx_buf_pgcnt; u16 vlan_tag = 0; u16 rx_ptype; /* get the Rx desc from Rx ring based on 'next_to_clean' */ - rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean); + rx_desc = ICE_RX_DESC(rx_ring, ntc); /* status_error_len will always be zero for unused descriptors * because it's cleared in cleanup, and overlaps with hdr_addr @@ -1166,8 +1208,8 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) if (rx_desc->wb.rxdid == FDIR_DESC_RXDID && ctrl_vsi->vf) ice_vc_fdir_irq_handler(ctrl_vsi, rx_desc); - ice_put_rx_buf(rx_ring, NULL, 0); - cleaned_count++; + if (++ntc == cnt) + ntc = 0; continue; } @@ -1175,65 +1217,56 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) ICE_RX_FLX_DESC_PKT_LEN_M; /* retrieve a buffer from the ring */ - rx_buf = ice_get_rx_buf(rx_ring, size, &rx_buf_pgcnt); + rx_buf = ice_get_rx_buf(rx_ring, size, ntc); - if (!size) { - xdp.data = NULL; - xdp.data_end = NULL; - xdp.data_hard_start = NULL; - xdp.data_meta = NULL; - goto construct_skb; - } + if (!xdp->data) { + void *hard_start; - hard_start = page_address(rx_buf->page) + rx_buf->page_offset - - offset; - xdp_prepare_buff(&xdp, hard_start, offset, size, true); + hard_start = page_address(rx_buf->page) + rx_buf->page_offset - + offset; + xdp_prepare_buff(xdp, hard_start, offset, size, !!offset); #if (PAGE_SIZE > 4096) - /* At larger PAGE_SIZE, frame_sz depend on len size */ - xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size); + /* At larger PAGE_SIZE, frame_sz depend on len size */ + xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size); 
#endif + xdp_buff_clear_frags_flag(xdp); + } else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) { + break; + } + if (++ntc == cnt) + ntc = 0; - if (!xdp_prog) - goto construct_skb; + /* skip if it is NOP desc */ + if (ice_is_non_eop(rx_ring, rx_desc)) + continue; - xdp_res = ice_run_xdp(rx_ring, &xdp, xdp_prog, xdp_ring); - if (!xdp_res) + ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf); + if (rx_buf->act == ICE_XDP_PASS) goto construct_skb; - if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) { - xdp_xmit |= xdp_res; - ice_rx_buf_adjust_pg_offset(rx_buf, xdp.frame_sz); - } else { - rx_buf->pagecnt_bias++; - } - total_rx_bytes += size; + total_rx_bytes += xdp_get_buff_len(xdp); total_rx_pkts++; - cleaned_count++; - ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt); + xdp->data = NULL; + rx_ring->first_desc = ntc; continue; construct_skb: - if (skb) { - ice_add_rx_frag(rx_ring, rx_buf, skb, size); - } else if (likely(xdp.data)) { - if (ice_ring_uses_build_skb(rx_ring)) - skb = ice_build_skb(rx_ring, rx_buf, &xdp); - else - skb = ice_construct_skb(rx_ring, rx_buf, &xdp); - } + if (likely(ice_ring_uses_build_skb(rx_ring))) + skb = ice_build_skb(rx_ring, xdp); + else + skb = ice_construct_skb(rx_ring, xdp); /* exit if we failed to retrieve a buffer */ if (!skb) { - rx_ring->ring_stats->rx_stats.alloc_buf_failed++; - if (rx_buf) - rx_buf->pagecnt_bias++; + rx_ring->ring_stats->rx_stats.alloc_page_failed++; + rx_buf->act = ICE_XDP_CONSUMED; + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, + ICE_XDP_CONSUMED); + xdp->data = NULL; + rx_ring->first_desc = ntc; break; } - - ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt); - cleaned_count++; - - /* skip if it is NOP desc */ - if (ice_is_non_eop(rx_ring, rx_desc)) - continue; + xdp->data = NULL; + rx_ring->first_desc = ntc; stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S); if (unlikely(ice_test_staterr(rx_desc->wb.status_error0, @@ -1245,10 +1278,8 @@ construct_skb: vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); /* pad the skb if needed, to make a valid ethernet frame */ - if (eth_skb_pad(skb)) { - skb = NULL; + if (eth_skb_pad(skb)) continue; - } /* probably a little skewed due to removing CRC */ total_rx_bytes += skb->len; @@ -1262,18 +1293,34 @@ construct_skb: ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb); /* send completed skb up the stack */ ice_receive_skb(rx_ring, skb, vlan_tag); - skb = NULL; /* update budget accounting */ total_rx_pkts++; } + first = rx_ring->first_desc; + while (cached_ntc != first) { + struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc]; + + if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) { + ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz); + xdp_xmit |= buf->act; + } else if (buf->act & ICE_XDP_CONSUMED) { + buf->pagecnt_bias++; + } else if (buf->act == ICE_XDP_PASS) { + ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz); + } + + ice_put_rx_buf(rx_ring, buf); + if (++cached_ntc >= cnt) + cached_ntc = 0; + } + rx_ring->next_to_clean = ntc; /* return up to cleaned_count buffers to hardware */ - failure = ice_alloc_rx_bufs(rx_ring, cleaned_count); + failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring)); - if (xdp_prog) - ice_finalize_xdp_rx(xdp_ring, xdp_xmit); - rx_ring->skb = skb; + if (xdp_xmit) + ice_finalize_xdp_rx(xdp_ring, xdp_xmit, cached_ntu); if (rx_ring->ring_stats) ice_update_rx_ring_stats(rx_ring, total_rx_pkts, @@ -1682,6 +1729,7 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first, DMA_TO_DEVICE); tx_buf = &tx_ring->tx_buf[i]; + tx_buf->type = 
ICE_TX_BUF_FRAG; } /* record SW timestamp if HW timestamp is not available */ @@ -1996,7 +2044,6 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off) if (err < 0) return err; - /* cppcheck-suppress unreadVariable */ protocol = vlan_get_protocol(skb); if (eth_p_mpls(protocol)) @@ -2033,8 +2080,6 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off) } /* reset pointers to inner headers */ - - /* cppcheck-suppress unreadVariable */ ip.hdr = skb_inner_network_header(skb); l4.hdr = skb_inner_transport_header(skb); @@ -2300,6 +2345,9 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) ice_trace(xmit_frame_ring, tx_ring, skb); + if (unlikely(ipv6_hopopt_jumbo_remove(skb))) + goto out_drop; + count = ice_xmit_desc_count(skb); if (ice_chk_linearize(skb, count)) { if (__skb_linearize(skb)) @@ -2328,6 +2376,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) /* record the location of the first descriptor for this packet */ first = &tx_ring->tx_buf[tx_ring->next_to_use]; first->skb = skb; + first->type = ICE_TX_BUF_SKB; first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN); first->gso_segs = 1; first->tx_flags = 0; @@ -2500,11 +2549,11 @@ void ice_clean_ctrl_tx_irq(struct ice_tx_ring *tx_ring) dma_unmap_addr(tx_buf, dma), dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); - if (tx_buf->tx_flags & ICE_TX_FLAGS_DUMMY_PKT) + if (tx_buf->type == ICE_TX_BUF_DUMMY) devm_kfree(tx_ring->dev, tx_buf->raw_buf); /* clear next_to_watch to prevent false hangs */ - tx_buf->raw_buf = NULL; + tx_buf->type = ICE_TX_BUF_EMPTY; tx_buf->tx_flags = 0; tx_buf->next_to_watch = NULL; dma_unmap_len_set(tx_buf, len, 0); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index 4fd0e5d0a313..fff0efe28373 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -9,10 +9,12 @@ #define ICE_DFLT_IRQ_WORK 256 #define ICE_RXBUF_3072 3072 #define ICE_RXBUF_2048 2048 +#define ICE_RXBUF_1664 1664 #define ICE_RXBUF_1536 1536 #define ICE_MAX_CHAINED_RX_BUFS 5 #define ICE_MAX_BUF_TXD 8 #define ICE_MIN_TX_LEN 17 +#define ICE_MAX_FRAME_LEGACY_RX 8320 /* The size limit for a transmit buffer in a descriptor is (16K - 1). * In order to align with the read requests we will align the value to @@ -110,15 +112,16 @@ static inline int ice_skb_pad(void) (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ (R)->next_to_clean - (R)->next_to_use - 1) +#define ICE_RX_DESC_UNUSED(R) \ + ((((R)->first_desc > (R)->next_to_use) ? 0 : (R)->count) + \ + (R)->first_desc - (R)->next_to_use - 1) + #define ICE_RING_QUARTER(R) ((R)->count >> 2) #define ICE_TX_FLAGS_TSO BIT(0) #define ICE_TX_FLAGS_HW_VLAN BIT(1) #define ICE_TX_FLAGS_SW_VLAN BIT(2) -/* ICE_TX_FLAGS_DUMMY_PKT is used to mark dummy packets that should be - * freed instead of returned like skb packets. 
- */ -#define ICE_TX_FLAGS_DUMMY_PKT BIT(3) +/* Free, was ICE_TX_FLAGS_DUMMY_PKT */ #define ICE_TX_FLAGS_TSYN BIT(4) #define ICE_TX_FLAGS_IPV4 BIT(5) #define ICE_TX_FLAGS_IPV6 BIT(6) @@ -134,6 +137,7 @@ static inline int ice_skb_pad(void) #define ICE_XDP_TX BIT(1) #define ICE_XDP_REDIR BIT(2) #define ICE_XDP_EXIT BIT(3) +#define ICE_SKB_CONSUMED ICE_XDP_CONSUMED #define ICE_RX_DMA_ATTR \ (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) @@ -142,15 +146,44 @@ static inline int ice_skb_pad(void) #define ICE_TXD_LAST_DESC_CMD (ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS) +/** + * enum ice_tx_buf_type - type of &ice_tx_buf to act on Tx completion + * @ICE_TX_BUF_EMPTY: unused OR XSk frame, no action required + * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree() + * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA + * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats + * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats + * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats + * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats + */ +enum ice_tx_buf_type { + ICE_TX_BUF_EMPTY = 0U, + ICE_TX_BUF_DUMMY, + ICE_TX_BUF_FRAG, + ICE_TX_BUF_SKB, + ICE_TX_BUF_XDP_TX, + ICE_TX_BUF_XDP_XMIT, + ICE_TX_BUF_XSK_TX, +}; + struct ice_tx_buf { - struct ice_tx_desc *next_to_watch; union { - struct sk_buff *skb; - void *raw_buf; /* used for XDP */ + struct ice_tx_desc *next_to_watch; + u32 rs_idx; + }; + union { + void *raw_buf; /* used for XDP_TX and FDir rules */ + struct sk_buff *skb; /* used for .ndo_start_xmit() */ + struct xdp_frame *xdpf; /* used for .ndo_xdp_xmit() */ + struct xdp_buff *xdp; /* used for XDP_TX ZC */ }; unsigned int bytecount; - unsigned short gso_segs; - u32 tx_flags; + union { + unsigned int gso_segs; + unsigned int nr_frags; /* used for mbuf XDP */ + }; + u32 type:16; /* &ice_tx_buf_type */ + u32 tx_flags:16; DEFINE_DMA_UNMAP_LEN(len); DEFINE_DMA_UNMAP_ADDR(dma); }; @@ -170,7 +203,9 @@ struct ice_rx_buf { dma_addr_t dma; struct page *page; unsigned int page_offset; - u16 pagecnt_bias; + unsigned int pgcnt; + unsigned int act; + unsigned int pagecnt_bias; }; struct ice_q_stats { @@ -273,42 +308,44 @@ struct ice_rx_ring { struct ice_vsi *vsi; /* Backreference to associated VSI */ struct ice_q_vector *q_vector; /* Backreference to associated vector */ u8 __iomem *tail; + u16 q_index; /* Queue number of ring */ + + u16 count; /* Number of descriptors */ + u16 reg_idx; /* HW register index of the ring */ + u16 next_to_alloc; + /* CL2 - 2nd cacheline starts here */ union { struct ice_rx_buf *rx_buf; struct xdp_buff **xdp_buf; }; - /* CL2 - 2nd cacheline starts here */ - struct xdp_rxq_info xdp_rxq; + struct xdp_buff xdp; /* CL3 - 3rd cacheline starts here */ - u16 q_index; /* Queue number of ring */ - - u16 count; /* Number of descriptors */ - u16 reg_idx; /* HW register index of the ring */ + struct bpf_prog *xdp_prog; + u16 rx_offset; /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; - u16 next_to_alloc; - u16 rx_offset; - u16 rx_buf_len; + u16 first_desc; /* stats structs */ struct ice_ring_stats *ring_stats; struct rcu_head rcu; /* to avoid race on free */ - /* CL4 - 3rd cacheline starts here */ + /* CL4 - 4th cacheline starts here */ struct ice_channel *ch; - struct bpf_prog *xdp_prog; struct ice_tx_ring *xdp_ring; struct xsk_buff_pool *xsk_pool; - struct sk_buff *skb; dma_addr_t dma; /* physical address of ring */ u64 cached_phctime; + u16 rx_buf_len; u8 dcb_tc; /* Traffic 
class of ring */ u8 ptp_rx; #define ICE_RX_FLAGS_RING_BUILD_SKB BIT(1) #define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2) u8 flags; + /* CL5 - 5th cacheline starts here */ + struct xdp_rxq_info xdp_rxq; } ____cacheline_internodealigned_in_smp; struct ice_tx_ring { @@ -326,12 +363,11 @@ struct ice_tx_ring { struct xsk_buff_pool *xsk_pool; u16 next_to_use; u16 next_to_clean; - u16 next_rs; - u16 next_dd; u16 q_handle; /* Queue handle per TC */ u16 reg_idx; /* HW register index of the ring */ u16 count; /* Number of descriptors */ u16 q_index; /* Queue number of ring */ + u16 xdp_tx_active; /* stats structs */ struct ice_ring_stats *ring_stats; /* CL3 - 3rd cacheline starts here */ @@ -342,7 +378,6 @@ struct ice_tx_ring { spinlock_t tx_lock; u32 txq_teid; /* Added Tx queue TEID */ /* CL4 - 4th cacheline starts here */ - u16 xdp_tx_active; #define ICE_TX_FLAGS_RING_XDP BIT(0) #define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) #define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) @@ -431,7 +466,7 @@ static inline unsigned int ice_rx_pg_order(struct ice_rx_ring *ring) union ice_32b_rx_flex_desc; -bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, u16 cleaned_count); +bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, unsigned int cleaned_count); netdev_tx_t ice_start_xmit(struct sk_buff *skb, struct net_device *netdev); u16 ice_select_queue(struct net_device *dev, struct sk_buff *skb, diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 25f04266c668..7bc5aa340c7d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -221,128 +221,217 @@ ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag) } /** + * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer + * @dev: device for DMA mapping + * @tx_buf: Tx buffer to clean + * @bq: XDP bulk flush struct + */ +static void +ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf, + struct xdp_frame_bulk *bq) +{ + dma_unmap_single(dev, dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + + switch (tx_buf->type) { + case ICE_TX_BUF_XDP_TX: + page_frag_free(tx_buf->raw_buf); + break; + case ICE_TX_BUF_XDP_XMIT: + xdp_return_frame_bulk(tx_buf->xdpf, bq); + break; + } + + tx_buf->type = ICE_TX_BUF_EMPTY; +} + +/** * ice_clean_xdp_irq - Reclaim resources after transmit completes on XDP ring * @xdp_ring: XDP ring to clean */ -static void ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring) +static u32 ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring) { - unsigned int total_bytes = 0, total_pkts = 0; - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); - u16 ntc = xdp_ring->next_to_clean; - struct ice_tx_desc *next_dd_desc; - u16 next_dd = xdp_ring->next_dd; - struct ice_tx_buf *tx_buf; - int i; + int total_bytes = 0, total_pkts = 0; + struct device *dev = xdp_ring->dev; + u32 ntc = xdp_ring->next_to_clean; + struct ice_tx_desc *tx_desc; + u32 cnt = xdp_ring->count; + struct xdp_frame_bulk bq; + u32 frags, xdp_tx = 0; + u32 ready_frames = 0; + u32 idx; + u32 ret; + + idx = xdp_ring->tx_buf[ntc].rs_idx; + tx_desc = ICE_TX_DESC(xdp_ring, idx); + if (tx_desc->cmd_type_offset_bsz & + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) { + if (idx >= ntc) + ready_frames = idx - ntc + 1; + else + ready_frames = idx + cnt - ntc + 1; + } - next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd); - if (!(next_dd_desc->cmd_type_offset_bsz & - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) - return; + if 
(unlikely(!ready_frames)) + return 0; + ret = ready_frames; + + xdp_frame_bulk_init(&bq); + rcu_read_lock(); /* xdp_return_frame_bulk() */ - for (i = 0; i < tx_thresh; i++) { - tx_buf = &xdp_ring->tx_buf[ntc]; + while (ready_frames) { + struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc]; + struct ice_tx_buf *head = tx_buf; + /* bytecount holds size of head + frags */ total_bytes += tx_buf->bytecount; - /* normally tx_buf->gso_segs was taken but at this point - * it's always 1 for us - */ + frags = tx_buf->nr_frags; total_pkts++; - - page_frag_free(tx_buf->raw_buf); - dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - tx_buf->raw_buf = NULL; + /* count head + frags */ + ready_frames -= frags + 1; + xdp_tx++; ntc++; - if (ntc >= xdp_ring->count) + if (ntc == cnt) ntc = 0; + + for (int i = 0; i < frags; i++) { + tx_buf = &xdp_ring->tx_buf[ntc]; + + ice_clean_xdp_tx_buf(dev, tx_buf, &bq); + ntc++; + if (ntc == cnt) + ntc = 0; + } + + ice_clean_xdp_tx_buf(dev, head, &bq); } - next_dd_desc->cmd_type_offset_bsz = 0; - xdp_ring->next_dd = xdp_ring->next_dd + tx_thresh; - if (xdp_ring->next_dd > xdp_ring->count) - xdp_ring->next_dd = tx_thresh - 1; + xdp_flush_frame_bulk(&bq); + rcu_read_unlock(); + + tx_desc->cmd_type_offset_bsz = 0; xdp_ring->next_to_clean = ntc; + xdp_ring->xdp_tx_active -= xdp_tx; ice_update_tx_ring_stats(xdp_ring, total_pkts, total_bytes); + + return ret; } /** - * ice_xmit_xdp_ring - submit single packet to XDP ring for transmission - * @data: packet data pointer - * @size: packet data size + * __ice_xmit_xdp_ring - submit frame to XDP ring for transmission + * @xdp: XDP buffer to be placed onto Tx descriptors * @xdp_ring: XDP ring for transmission + * @frame: whether this comes from .ndo_xdp_xmit() */ -int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring) +int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring, + bool frame) { - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); - u16 i = xdp_ring->next_to_use; + struct skb_shared_info *sinfo = NULL; + u32 size = xdp->data_end - xdp->data; + struct device *dev = xdp_ring->dev; + u32 ntu = xdp_ring->next_to_use; struct ice_tx_desc *tx_desc; + struct ice_tx_buf *tx_head; struct ice_tx_buf *tx_buf; - dma_addr_t dma; + u32 cnt = xdp_ring->count; + void *data = xdp->data; + u32 nr_frags = 0; + u32 free_space; + u32 frag = 0; + + free_space = ICE_DESC_UNUSED(xdp_ring); + if (free_space < ICE_RING_QUARTER(xdp_ring)) + free_space += ice_clean_xdp_irq(xdp_ring); + + if (unlikely(!free_space)) + goto busy; + + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + if (free_space < nr_frags + 1) + goto busy; + } - if (ICE_DESC_UNUSED(xdp_ring) < tx_thresh) - ice_clean_xdp_irq(xdp_ring); + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_head = &xdp_ring->tx_buf[ntu]; + tx_buf = tx_head; - if (!unlikely(ICE_DESC_UNUSED(xdp_ring))) { - xdp_ring->ring_stats->tx_stats.tx_busy++; - return ICE_XDP_CONSUMED; - } + for (;;) { + dma_addr_t dma; - dma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE); - if (dma_mapping_error(xdp_ring->dev, dma)) - return ICE_XDP_CONSUMED; + dma = dma_map_single(dev, data, size, DMA_TO_DEVICE); + if (dma_mapping_error(dev, dma)) + goto dma_unmap; - tx_buf = &xdp_ring->tx_buf[i]; - tx_buf->bytecount = size; - tx_buf->gso_segs = 1; - tx_buf->raw_buf = data; + /* record length, and DMA address */ + dma_unmap_len_set(tx_buf, 
len, size); + dma_unmap_addr_set(tx_buf, dma, dma); - /* record length, and DMA address */ - dma_unmap_len_set(tx_buf, len, size); - dma_unmap_addr_set(tx_buf, dma, dma); + if (frame) { + tx_buf->type = ICE_TX_BUF_FRAG; + } else { + tx_buf->type = ICE_TX_BUF_XDP_TX; + tx_buf->raw_buf = data; + } - tx_desc = ICE_TX_DESC(xdp_ring, i); - tx_desc->buf_addr = cpu_to_le64(dma); - tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 0, - size, 0); + tx_desc->buf_addr = cpu_to_le64(dma); + tx_desc->cmd_type_offset_bsz = ice_build_ctob(0, 0, size, 0); - xdp_ring->xdp_tx_active++; - i++; - if (i == xdp_ring->count) { - i = 0; - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); - tx_desc->cmd_type_offset_bsz |= - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); - xdp_ring->next_rs = tx_thresh - 1; + ntu++; + if (ntu == cnt) + ntu = 0; + + if (frag == nr_frags) + break; + + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_buf = &xdp_ring->tx_buf[ntu]; + + data = skb_frag_address(&sinfo->frags[frag]); + size = skb_frag_size(&sinfo->frags[frag]); + frag++; } - xdp_ring->next_to_use = i; - if (i > xdp_ring->next_rs) { - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); - tx_desc->cmd_type_offset_bsz |= - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); - xdp_ring->next_rs += tx_thresh; + /* store info about bytecount and frag count in first desc */ + tx_head->bytecount = xdp_get_buff_len(xdp); + tx_head->nr_frags = nr_frags; + + if (frame) { + tx_head->type = ICE_TX_BUF_XDP_XMIT; + tx_head->xdpf = xdp->data_hard_start; } + /* update last descriptor from a frame with EOP */ + tx_desc->cmd_type_offset_bsz |= + cpu_to_le64(ICE_TX_DESC_CMD_EOP << ICE_TXD_QW1_CMD_S); + + xdp_ring->xdp_tx_active++; + xdp_ring->next_to_use = ntu; + return ICE_XDP_TX; -} -/** - * ice_xmit_xdp_buff - convert an XDP buffer to an XDP frame and send it - * @xdp: XDP buffer - * @xdp_ring: XDP Tx ring - * - * Returns negative on failure, 0 on success. - */ -int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring) -{ - struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); +dma_unmap: + for (;;) { + tx_buf = &xdp_ring->tx_buf[ntu]; + dma_unmap_page(dev, dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + if (tx_buf == tx_head) + break; + + if (!ntu) + ntu += cnt; + ntu--; + } + return ICE_XDP_CONSUMED; - if (unlikely(!xdpf)) - return ICE_XDP_CONSUMED; +busy: + xdp_ring->ring_stats->tx_stats.tx_busy++; - return ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring); + return ICE_XDP_CONSUMED; } /** @@ -354,14 +443,21 @@ int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring) * should be called when a batch of packets has been processed in the * napi loop. 
*/ -void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res) +void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res, + u32 first_idx) { + struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[first_idx]; + if (xdp_res & ICE_XDP_REDIR) xdp_do_flush_map(); if (xdp_res & ICE_XDP_TX) { if (static_branch_unlikely(&ice_xdp_locking_key)) spin_lock(&xdp_ring->tx_lock); + /* store index of descriptor with RS bit set in the first + * ice_tx_buf of given NAPI batch + */ + tx_buf->rs_idx = ice_set_rs_bit(xdp_ring); ice_xdp_ring_update_tail(xdp_ring); if (static_branch_unlikely(&ice_xdp_locking_key)) spin_unlock(&xdp_ring->tx_lock); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index c7d2954dc9ea..115969ecdf7b 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -6,6 +6,36 @@ #include "ice.h" /** + * ice_set_rx_bufs_act - propagate Rx buffer action to frags + * @xdp: XDP buffer representing frame (linear and frags part) + * @rx_ring: Rx ring struct + * @act: action to store onto Rx buffers related to XDP buffer parts + * + * Set action that should be taken before putting Rx buffer from first frag + * to one before last. Last one is handled by caller of this function as it + * is the EOP frag that is currently being processed. This function is + * supposed to be called only when XDP buffer contains frags. + */ +static inline void +ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring, + const unsigned int act) +{ + const struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + u32 first = rx_ring->first_desc; + u32 nr_frags = sinfo->nr_frags; + u32 cnt = rx_ring->count; + struct ice_rx_buf *buf; + + for (int i = 0; i < nr_frags; i++) { + buf = &rx_ring->rx_buf[first]; + buf->act = act; + + if (++first == cnt) + first = 0; + } +} + +/** * ice_test_staterr - tests bits in Rx descriptor status and error fields * @status_err_n: Rx descriptor status_error0 or status_error1 bits * @stat_err_bits: value to mask @@ -21,6 +51,28 @@ ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits) return !!(status_err_n & cpu_to_le16(stat_err_bits)); } +/** + * ice_is_non_eop - process handling of non-EOP buffers + * @rx_ring: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * + * If the buffer is an EOP buffer, this function exits returning false, + * otherwise return true indicating that this is in fact a non-EOP buffer. 
+ */ +static inline bool +ice_is_non_eop(const struct ice_rx_ring *rx_ring, + const union ice_32b_rx_flex_desc *rx_desc) +{ + /* if we are the last buffer then there is nothing else to do */ +#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) + if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) + return false; + + rx_ring->ring_stats->rx_stats.non_eop_descs++; + + return true; +} + static inline __le64 ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) { @@ -70,9 +122,28 @@ static inline void ice_xdp_ring_update_tail(struct ice_tx_ring *xdp_ring) writel_relaxed(xdp_ring->next_to_use, xdp_ring->tail); } -void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res); +/** + * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU) + * @xdp_ring: XDP ring to produce the HW Tx descriptors on + * + * returns index of descriptor that had RS bit produced on + */ +static inline u32 ice_set_rs_bit(const struct ice_tx_ring *xdp_ring) +{ + u32 rs_idx = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1; + struct ice_tx_desc *tx_desc; + + tx_desc = ICE_TX_DESC(xdp_ring, rs_idx); + tx_desc->cmd_type_offset_bsz |= + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); + + return rs_idx; +} + +void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res, u32 first_idx); int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring); -int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring); +int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring, + bool frame); void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val); void ice_process_skb_fields(struct ice_rx_ring *rx_ring, diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c index 375eb6493f0f..0e57bd1b85fd 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c @@ -237,16 +237,49 @@ static void ice_vf_clear_counters(struct ice_vf *vf) */ static void ice_vf_pre_vsi_rebuild(struct ice_vf *vf) { + /* Close any IRQ mapping now */ + if (vf->vf_ops->irq_close) + vf->vf_ops->irq_close(vf); + ice_vf_clear_counters(vf); vf->vf_ops->clear_reset_trigger(vf); } /** + * ice_vf_recreate_vsi - Release and re-create the VF's VSI + * @vf: VF to recreate the VSI for + * + * This is only called when a single VF is being reset (i.e. VVF, VFLR, host + * VF configuration change, etc) + * + * It releases and then re-creates a new VSI. + */ +static int ice_vf_recreate_vsi(struct ice_vf *vf) +{ + struct ice_pf *pf = vf->pf; + int err; + + ice_vf_vsi_release(vf); + + err = vf->vf_ops->create_vsi(vf); + if (err) { + dev_err(ice_pf_to_dev(pf), + "Failed to recreate the VF%u's VSI, error %d\n", + vf->vf_id, err); + return err; + } + + return 0; +} + +/** * ice_vf_rebuild_vsi - rebuild the VF's VSI * @vf: VF to rebuild the VSI for * * This is only called when all VF(s) are being reset (i.e. PCIe Reset on the * host, PFR, CORER, etc.). + * + * It reprograms the VSI configuration back into hardware. 
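ice_set_rs_bit() above implements the descriptor write-back batching: only the most recently produced descriptor gets the RS (report status) bit, and ice_finalize_xdp_rx() records its index in the first ice_tx_buf of the NAPI batch so the cleaning side knows where completion will be signalled. The index math, shown standalone (a sketch reusing the ice descriptor macros from the hunk above):

```c
/* Sketch: request one hardware write-back per NAPI batch by setting
 * RS only on the last produced descriptor; ntu is the next free slot,
 * hence the wrap-around when it sits at index 0.
 */
static u32 set_rs_on_last(struct ice_tx_desc *descs, u32 ntu, u32 cnt)
{
	u32 rs_idx = ntu ? ntu - 1 : cnt - 1;

	descs[rs_idx].cmd_type_offset_bsz |=
		cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);

	return rs_idx; /* saved in the first tx_buf of the batch */
}
```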
*/ static int ice_vf_rebuild_vsi(struct ice_vf *vf) { @@ -256,7 +289,7 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf) if (WARN_ON(!vsi)) return -EINVAL; - if (ice_vsi_rebuild(vsi, true)) { + if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT)) { dev_err(ice_pf_to_dev(pf), "failed to rebuild VF %d VSI\n", vf->vf_id); return -EIO; @@ -271,6 +304,21 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf) } /** + * ice_vf_post_vsi_rebuild - Reset tasks that occur after VSI rebuild + * @vf: the VF being reset + * + * Perform reset tasks which must occur after the VSI has been re-created or + * rebuilt during a VF reset. + */ +static void ice_vf_post_vsi_rebuild(struct ice_vf *vf) +{ + ice_vf_rebuild_host_cfg(vf); + ice_vf_set_initialized(vf); + + vf->vf_ops->post_vsi_rebuild(vf); +} + +/** * ice_is_any_vf_in_unicast_promisc - check if any VF(s) * are in unicast promiscuous mode * @pf: PF structure for accessing VF(s) @@ -495,7 +543,7 @@ void ice_reset_all_vfs(struct ice_pf *pf) ice_vf_pre_vsi_rebuild(vf); ice_vf_rebuild_vsi(vf); - vf->vf_ops->post_vsi_rebuild(vf); + ice_vf_post_vsi_rebuild(vf); mutex_unlock(&vf->cfg_lock); } @@ -639,14 +687,14 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags) ice_vf_pre_vsi_rebuild(vf); - if (vf->vf_ops->vsi_rebuild(vf)) { + if (ice_vf_recreate_vsi(vf)) { dev_err(dev, "Failed to release and setup the VF%u's VSI\n", vf->vf_id); err = -EFAULT; goto out_unlock; } - vf->vf_ops->post_vsi_rebuild(vf); + ice_vf_post_vsi_rebuild(vf); vsi = ice_get_vf_vsi(vf); if (WARN_ON(!vsi)) { err = -EINVAL; @@ -673,7 +721,7 @@ out_unlock: * ice_set_vf_state_qs_dis - Set VF queues state to disabled * @vf: pointer to the VF structure */ -void ice_set_vf_state_qs_dis(struct ice_vf *vf) +static void ice_set_vf_state_qs_dis(struct ice_vf *vf) { /* Clear Rx/Tx enabled queues flag */ bitmap_zero(vf->txq_ena, ICE_MAX_RSS_QS_PER_VF); @@ -681,9 +729,45 @@ void ice_set_vf_state_qs_dis(struct ice_vf *vf) clear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states); } +/** + * ice_set_vf_state_dis - Set VF state to disabled + * @vf: pointer to the VF structure + */ +void ice_set_vf_state_dis(struct ice_vf *vf) +{ + ice_set_vf_state_qs_dis(vf); + vf->vf_ops->clear_reset_state(vf); +} + /* Private functions only accessed from other virtualization files */ /** + * ice_initialize_vf_entry - Initialize a VF entry + * @vf: pointer to the VF structure + */ +void ice_initialize_vf_entry(struct ice_vf *vf) +{ + struct ice_pf *pf = vf->pf; + struct ice_vfs *vfs; + + vfs = &pf->vfs; + + /* assign default capabilities */ + vf->spoofchk = true; + vf->num_vf_qs = vfs->num_qps_per; + ice_vc_set_default_allowlist(vf); + ice_virtchnl_set_dflt_ops(vf); + + /* ctrl_vsi_idx will be set to a valid value only when iAVF + * creates its first fdir rule. 
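Taken together, the hunks above restructure the single-VF reset into three stages shared with the reset-all path: pre-rebuild (now also closing IRQ mappings via the new irq_close op), VSI re-creation through the create_vsi op, and post-rebuild host reconfiguration. A simplified outline of the ordering ice_reset_vf() ends up with (locking and most error handling elided):

```c
/* Sketch: stage ordering of the single-VF reset after the refactor;
 * the real ice_reset_vf() runs this under vf->cfg_lock.
 */
static int vf_reset_outline(struct ice_vf *vf)
{
	ice_vf_pre_vsi_rebuild(vf);	/* irq_close op, clear counters */

	if (ice_vf_recreate_vsi(vf))	/* release VSI + create_vsi op */
		return -EFAULT;

	ice_vf_post_vsi_rebuild(vf);	/* host cfg, mark initialized */
	return 0;
}
```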
+ */ + ice_vf_ctrl_invalidate_vsi(vf); + ice_vf_fdir_init(vf); + + mutex_init(&vf->cfg_lock); +} + +/** * ice_dis_vf_qs - Disable the VF queues * @vf: pointer to the VF structure */ @@ -924,18 +1008,18 @@ static int ice_vf_rebuild_host_mac_cfg(struct ice_vf *vf) vf->num_mac++; - if (is_valid_ether_addr(vf->hw_lan_addr.addr)) { - status = ice_fltr_add_mac(vsi, vf->hw_lan_addr.addr, + if (is_valid_ether_addr(vf->hw_lan_addr)) { + status = ice_fltr_add_mac(vsi, vf->hw_lan_addr, ICE_FWD_TO_VSI); if (status) { dev_err(dev, "failed to add default unicast MAC filter %pM for VF %u, error %d\n", - &vf->hw_lan_addr.addr[0], vf->vf_id, + &vf->hw_lan_addr[0], vf->vf_id, status); return status; } vf->num_mac++; - ether_addr_copy(vf->dev_lan_addr.addr, vf->hw_lan_addr.addr); + ether_addr_copy(vf->dev_lan_addr, vf->hw_lan_addr); } return 0; @@ -1115,11 +1199,16 @@ void ice_vf_ctrl_vsi_release(struct ice_vf *vf) */ struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf) { - struct ice_port_info *pi = ice_vf_get_port_info(vf); + struct ice_vsi_cfg_params params = {}; struct ice_pf *pf = vf->pf; struct ice_vsi *vsi; - vsi = ice_vsi_setup(pf, pi, ICE_VSI_CTRL, vf, NULL); + params.type = ICE_VSI_CTRL; + params.pi = ice_vf_get_port_info(vf); + params.vf = vf; + params.flags = ICE_VSI_FLAG_INIT; + + vsi = ice_vsi_setup(pf, ¶ms); if (!vsi) { dev_err(ice_pf_to_dev(pf), "Failed to create VF control VSI\n"); ice_vf_ctrl_invalidate_vsi(vf); @@ -1129,6 +1218,60 @@ struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf) } /** + * ice_vf_init_host_cfg - Initialize host admin configuration + * @vf: VF to initialize + * @vsi: the VSI created at initialization + * + * Initialize the VF host configuration. Called during VF creation to setup + * VLAN 0, add the VF VSI broadcast filter, and setup spoof checking. It + * should only be called during VF creation. + */ +int ice_vf_init_host_cfg(struct ice_vf *vf, struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + struct ice_pf *pf = vf->pf; + u8 broadcast[ETH_ALEN]; + struct device *dev; + int err; + + dev = ice_pf_to_dev(pf); + + err = ice_vsi_add_vlan_zero(vsi); + if (err) { + dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n", + vf->vf_id); + return err; + } + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = vlan_ops->ena_rx_filtering(vsi); + if (err) { + dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n", + vf->vf_id); + return err; + } + + eth_broadcast_addr(broadcast); + err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI); + if (err) { + dev_err(dev, "Failed to add broadcast MAC filter for VF %d, status %d\n", + vf->vf_id, err); + return err; + } + + vf->num_mac = 1; + + err = ice_vsi_apply_spoofchk(vsi, vf->spoofchk); + if (err) { + dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n", + vf->vf_id); + return err; + } + + return 0; +} + +/** * ice_vf_invalidate_vsi - invalidate vsi_idx/vsi_num to remove VSI access * @vf: VF to remove access to VSI for */ @@ -1139,6 +1282,24 @@ void ice_vf_invalidate_vsi(struct ice_vf *vf) } /** + * ice_vf_vsi_release - Release the VF VSI and invalidate indexes + * @vf: pointer to the VF structure + * + * Release the VF associated with this VSI and then invalidate the VSI + * indexes. 
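The MAC bookkeeping above also changes representation: vf->hw_lan_addr and vf->dev_lan_addr become bare u8[ETH_ALEN] arrays rather than struct virtchnl_ether_addr wrappers, which is why every access in this diff drops the trailing .addr. The standard etherdevice helpers operate directly on such arrays; a small reference sketch (mac_field_demo is illustrative only):

```c
#include <linux/etherdevice.h>

/* Sketch: typical operations on a bare u8[ETH_ALEN] MAC field,
 * matching the converted vf->hw_lan_addr / vf->dev_lan_addr usage.
 */
static void mac_field_demo(u8 *dev_lan_addr, const u8 *hw_lan_addr)
{
	if (is_valid_ether_addr(hw_lan_addr))
		ether_addr_copy(dev_lan_addr, hw_lan_addr); /* 6-byte copy */
	else
		eth_zero_addr(dev_lan_addr);
}
```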
+ */ +void ice_vf_vsi_release(struct ice_vf *vf) +{ + struct ice_vsi *vsi = ice_get_vf_vsi(vf); + + if (WARN_ON(!vsi)) + return; + + ice_vsi_release(vsi); + ice_vf_invalidate_vsi(vf); +} + +/** * ice_vf_set_initialized - VF is ready for VIRTCHNL communication * @vf: VF to set in initialized state * diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h index 52bd9a3816bf..ef30f05b5d02 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h @@ -56,11 +56,13 @@ struct ice_mdd_vf_events { struct ice_vf_ops { enum ice_disq_rst_src reset_type; void (*free)(struct ice_vf *vf); + void (*clear_reset_state)(struct ice_vf *vf); void (*clear_mbx_register)(struct ice_vf *vf); void (*trigger_reset_register)(struct ice_vf *vf, bool is_vflr); bool (*poll_reset_status)(struct ice_vf *vf); void (*clear_reset_trigger)(struct ice_vf *vf); - int (*vsi_rebuild)(struct ice_vf *vf); + void (*irq_close)(struct ice_vf *vf); + int (*create_vsi)(struct ice_vf *vf); void (*post_vsi_rebuild)(struct ice_vf *vf); }; @@ -96,8 +98,8 @@ struct ice_vf { struct ice_sw *vf_sw_id; /* switch ID the VF VSIs connect to */ struct virtchnl_version_info vf_ver; u32 driver_caps; /* reported by VF driver */ - struct virtchnl_ether_addr dev_lan_addr; - struct virtchnl_ether_addr hw_lan_addr; + u8 dev_lan_addr[ETH_ALEN]; + u8 hw_lan_addr[ETH_ALEN]; struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); @@ -213,7 +215,7 @@ u16 ice_get_num_vfs(struct ice_pf *pf); struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf); bool ice_is_vf_disabled(struct ice_vf *vf); int ice_check_vf_ready_for_cfg(struct ice_vf *vf); -void ice_set_vf_state_qs_dis(struct ice_vf *vf); +void ice_set_vf_state_dis(struct ice_vf *vf); bool ice_is_any_vf_in_unicast_promisc(struct ice_pf *pf); void ice_vf_get_promisc_masks(struct ice_vf *vf, struct ice_vsi *vsi, @@ -259,7 +261,7 @@ static inline int ice_check_vf_ready_for_cfg(struct ice_vf *vf) return -EOPNOTSUPP; } -static inline void ice_set_vf_state_qs_dis(struct ice_vf *vf) +static inline void ice_set_vf_state_dis(struct ice_vf *vf) { } diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h index 9c8ef2b01f0f..6f3293b793b5 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h @@ -23,6 +23,7 @@ #warning "Only include ice_vf_lib_private.h in CONFIG_PCI_IOV virtualization files" #endif +void ice_initialize_vf_entry(struct ice_vf *vf); void ice_dis_vf_qs(struct ice_vf *vf); int ice_check_vf_init(struct ice_vf *vf); enum virtchnl_status_code ice_err_to_virt_err(int err); @@ -35,7 +36,9 @@ void ice_vf_rebuild_host_cfg(struct ice_vf *vf); void ice_vf_ctrl_invalidate_vsi(struct ice_vf *vf); void ice_vf_ctrl_vsi_release(struct ice_vf *vf); struct ice_vsi *ice_vf_ctrl_vsi_setup(struct ice_vf *vf); +int ice_vf_init_host_cfg(struct ice_vf *vf, struct ice_vsi *vsi); void ice_vf_invalidate_vsi(struct ice_vf *vf); +void ice_vf_vsi_release(struct ice_vf *vf); void ice_vf_set_initialized(struct ice_vf *vf); #endif /* _ICE_VF_LIB_PRIVATE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index dab3cd5d300e..e24e3f5017ca 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -507,7 +507,7 @@ static int 
ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) vfres->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV; vfres->vsi_res[0].num_queue_pairs = vsi->num_txq; ether_addr_copy(vfres->vsi_res[0].default_mac_addr, - vf->hw_lan_addr.addr); + vf->hw_lan_addr); /* match guest capabilities */ vf->driver_caps = vfres->vf_cap_flags; @@ -1802,10 +1802,10 @@ ice_vfhw_mac_add(struct ice_vf *vf, struct virtchnl_ether_addr *vc_ether_addr) * was correctly specified over VIRTCHNL */ if ((ice_is_vc_addr_legacy(vc_ether_addr) && - is_zero_ether_addr(vf->hw_lan_addr.addr)) || + is_zero_ether_addr(vf->hw_lan_addr)) || ice_is_vc_addr_primary(vc_ether_addr)) { - ether_addr_copy(vf->dev_lan_addr.addr, mac_addr); - ether_addr_copy(vf->hw_lan_addr.addr, mac_addr); + ether_addr_copy(vf->dev_lan_addr, mac_addr); + ether_addr_copy(vf->hw_lan_addr, mac_addr); } /* hardware and device MACs are already set, but its possible that the @@ -1836,7 +1836,7 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi, int ret; /* device MAC already added */ - if (ether_addr_equal(mac_addr, vf->dev_lan_addr.addr)) + if (ether_addr_equal(mac_addr, vf->dev_lan_addr)) return 0; if (is_unicast_ether_addr(mac_addr) && !ice_can_vf_change_mac(vf)) { @@ -1891,8 +1891,8 @@ ice_update_legacy_cached_mac(struct ice_vf *vf, ice_is_legacy_umac_expired(&vf->legacy_last_added_umac)) return; - ether_addr_copy(vf->dev_lan_addr.addr, vf->legacy_last_added_umac.addr); - ether_addr_copy(vf->hw_lan_addr.addr, vf->legacy_last_added_umac.addr); + ether_addr_copy(vf->dev_lan_addr, vf->legacy_last_added_umac.addr); + ether_addr_copy(vf->hw_lan_addr, vf->legacy_last_added_umac.addr); } /** @@ -1906,15 +1906,15 @@ ice_vfhw_mac_del(struct ice_vf *vf, struct virtchnl_ether_addr *vc_ether_addr) u8 *mac_addr = vc_ether_addr->addr; if (!is_valid_ether_addr(mac_addr) || - !ether_addr_equal(vf->dev_lan_addr.addr, mac_addr)) + !ether_addr_equal(vf->dev_lan_addr, mac_addr)) return; /* allow the device MAC to be repopulated in the add flow and don't - * clear the hardware MAC (i.e. hw_lan_addr.addr) here as that is meant + * clear the hardware MAC (i.e. 
hw_lan_addr) here as that is meant * to be persistent on VM reboot and across driver unload/load, which * won't work if we clear the hardware MAC here */ - eth_zero_addr(vf->dev_lan_addr.addr); + eth_zero_addr(vf->dev_lan_addr); ice_update_legacy_cached_mac(vf, vc_ether_addr); } @@ -1934,7 +1934,7 @@ ice_vc_del_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi, int status; if (!ice_can_vf_change_mac(vf) && - ether_addr_equal(vf->dev_lan_addr.addr, mac_addr)) + ether_addr_equal(vf->dev_lan_addr, mac_addr)) return 0; status = ice_fltr_remove_mac(vsi, mac_addr, ICE_FWD_TO_VSI); @@ -3733,7 +3733,7 @@ static int ice_vc_repr_add_mac(struct ice_vf *vf, u8 *msg) int result; if (!is_unicast_ether_addr(mac_addr) || - ether_addr_equal(mac_addr, vf->hw_lan_addr.addr)) + ether_addr_equal(mac_addr, vf->hw_lan_addr)) continue; if (vf->pf_set_mac) { diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c index c6a58343d81d..e6ef6b303222 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c @@ -113,7 +113,7 @@ ice_vc_fdir_param_check(struct ice_vf *vf, u16 vsi_id) if (!ice_vc_isvalid_vsi_id(vf, vsi_id)) return -EINVAL; - if (!pf->vsi[vf->lan_vsi_idx]) + if (!ice_get_vf_vsi(vf)) return -EINVAL; return 0; @@ -494,7 +494,7 @@ ice_vc_fdir_rem_prof(struct ice_vf *vf, enum ice_fltr_ptype flow, int tun) vf_prof = fdir->fdir_prof[flow]; - vf_vsi = pf->vsi[vf->lan_vsi_idx]; + vf_vsi = ice_get_vf_vsi(vf); if (!vf_vsi) { dev_dbg(dev, "NULL vf %d vsi pointer\n", vf->vf_id); return; @@ -572,7 +572,7 @@ ice_vc_fdir_write_flow_prof(struct ice_vf *vf, enum ice_fltr_ptype flow, pf = vf->pf; dev = ice_pf_to_dev(pf); hw = &pf->hw; - vf_vsi = pf->vsi[vf->lan_vsi_idx]; + vf_vsi = ice_get_vf_vsi(vf); if (!vf_vsi) return -EINVAL; @@ -1205,7 +1205,7 @@ static int ice_vc_fdir_write_fltr(struct ice_vf *vf, pf = vf->pf; dev = ice_pf_to_dev(pf); hw = &pf->hw; - vsi = pf->vsi[vf->lan_vsi_idx]; + vsi = ice_get_vf_vsi(vf); if (!vsi) { dev_dbg(dev, "Invalid vsi for VF %d\n", vf->vf_id); return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index 374b7f10b549..31565bbafa22 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -598,6 +598,112 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) } /** + * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ + * @xdp_ring: XDP Tx ring + */ +static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) +{ + u16 ntc = xdp_ring->next_to_clean; + struct ice_tx_desc *tx_desc; + u16 cnt = xdp_ring->count; + struct ice_tx_buf *tx_buf; + u16 completed_frames = 0; + u16 xsk_frames = 0; + u16 last_rs; + int i; + + last_rs = xdp_ring->next_to_use ? 
xdp_ring->next_to_use - 1 : cnt - 1; + tx_desc = ICE_TX_DESC(xdp_ring, last_rs); + if (tx_desc->cmd_type_offset_bsz & + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) { + if (last_rs >= ntc) + completed_frames = last_rs - ntc + 1; + else + completed_frames = last_rs + cnt - ntc + 1; + } + + if (!completed_frames) + return; + + if (likely(!xdp_ring->xdp_tx_active)) { + xsk_frames = completed_frames; + goto skip; + } + + ntc = xdp_ring->next_to_clean; + for (i = 0; i < completed_frames; i++) { + tx_buf = &xdp_ring->tx_buf[ntc]; + + if (tx_buf->type == ICE_TX_BUF_XSK_TX) { + tx_buf->type = ICE_TX_BUF_EMPTY; + xsk_buff_free(tx_buf->xdp); + xdp_ring->xdp_tx_active--; + } else { + xsk_frames++; + } + + ntc++; + if (ntc >= xdp_ring->count) + ntc = 0; + } +skip: + tx_desc->cmd_type_offset_bsz = 0; + xdp_ring->next_to_clean += completed_frames; + if (xdp_ring->next_to_clean >= cnt) + xdp_ring->next_to_clean -= cnt; + if (xsk_frames) + xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); +} + +/** + * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX + * @xdp: XDP buffer to xmit + * @xdp_ring: XDP ring to produce descriptor onto + * + * note that this function works directly on xdp_buff, no need to convert + * it to xdp_frame. xdp_buff pointer is stored to ice_tx_buf so that cleaning + * side will be able to xsk_buff_free() it. + * + * Returns ICE_XDP_TX for successfully produced desc, ICE_XDP_CONSUMED if there + * was not enough space on XDP ring + */ +static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp, + struct ice_tx_ring *xdp_ring) +{ + u32 size = xdp->data_end - xdp->data; + u32 ntu = xdp_ring->next_to_use; + struct ice_tx_desc *tx_desc; + struct ice_tx_buf *tx_buf; + dma_addr_t dma; + + if (ICE_DESC_UNUSED(xdp_ring) < ICE_RING_QUARTER(xdp_ring)) { + ice_clean_xdp_irq_zc(xdp_ring); + if (!ICE_DESC_UNUSED(xdp_ring)) { + xdp_ring->ring_stats->tx_stats.tx_busy++; + return ICE_XDP_CONSUMED; + } + } + + dma = xsk_buff_xdp_get_dma(xdp); + xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size); + + tx_buf = &xdp_ring->tx_buf[ntu]; + tx_buf->xdp = xdp; + tx_buf->type = ICE_TX_BUF_XSK_TX; + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_desc->buf_addr = cpu_to_le64(dma); + tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, + 0, size, 0); + xdp_ring->xdp_tx_active++; + + if (++ntu == xdp_ring->count) + ntu = 0; + xdp_ring->next_to_use = ntu; + + return ICE_XDP_TX; +} + +/** * ice_run_xdp_zc - Executes an XDP program in zero-copy path * @rx_ring: Rx ring * @xdp: xdp_buff used as input to the XDP program @@ -630,7 +736,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, case XDP_PASS: break; case XDP_TX: - result = ice_xmit_xdp_buff(xdp, xdp_ring); + result = ice_xmit_xdp_tx_zc(xdp, xdp_ring); if (result == ICE_XDP_CONSUMED) goto out_failure; break; @@ -760,7 +866,7 @@ construct_skb: if (entries_to_alloc > ICE_RING_QUARTER(rx_ring)) failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc); - ice_finalize_xdp_rx(xdp_ring, xdp_xmit); + ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0); ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes); if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) { @@ -776,78 +882,6 @@ construct_skb: } /** - * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer - * @xdp_ring: XDP Tx ring - * @tx_buf: Tx buffer to clean - */ -static void -ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf) -{ - page_frag_free(tx_buf->raw_buf); - xdp_ring->xdp_tx_active--; - dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), - 
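ice_clean_xdp_irq_zc() above infers progress from the descriptor that last carried the RS bit: once its DD (descriptor done) flag is set, every slot from next_to_clean up to and including that index has completed, modulo ring wrap-around. The distance computation as a standalone sketch:

```c
/* Sketch: number of completed slots in a ring of size cnt, where
 * last_rs is the RS-tagged descriptor that reported done and ntc is
 * the first not-yet-cleaned index.
 */
static u16 completed_frames(u16 last_rs, u16 ntc, u16 cnt)
{
	if (last_rs >= ntc)
		return last_rs - ntc + 1;  /* no wrap between ntc and last_rs */
	return last_rs + cnt - ntc + 1;    /* producer wrapped past the end */
}
```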
dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); -} - -/** - * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ - * @xdp_ring: XDP Tx ring - */ -static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) -{ - u16 ntc = xdp_ring->next_to_clean; - struct ice_tx_desc *tx_desc; - u16 cnt = xdp_ring->count; - struct ice_tx_buf *tx_buf; - u16 completed_frames = 0; - u16 xsk_frames = 0; - u16 last_rs; - int i; - - last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1; - tx_desc = ICE_TX_DESC(xdp_ring, last_rs); - if ((tx_desc->cmd_type_offset_bsz & - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) { - if (last_rs >= ntc) - completed_frames = last_rs - ntc + 1; - else - completed_frames = last_rs + cnt - ntc + 1; - } - - if (!completed_frames) - return; - - if (likely(!xdp_ring->xdp_tx_active)) { - xsk_frames = completed_frames; - goto skip; - } - - ntc = xdp_ring->next_to_clean; - for (i = 0; i < completed_frames; i++) { - tx_buf = &xdp_ring->tx_buf[ntc]; - - if (tx_buf->raw_buf) { - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); - tx_buf->raw_buf = NULL; - } else { - xsk_frames++; - } - - ntc++; - if (ntc >= xdp_ring->count) - ntc = 0; - } -skip: - tx_desc->cmd_type_offset_bsz = 0; - xdp_ring->next_to_clean += completed_frames; - if (xdp_ring->next_to_clean >= cnt) - xdp_ring->next_to_clean -= cnt; - if (xsk_frames) - xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); -} - -/** * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor * @xdp_ring: XDP ring to produce the HW Tx descriptor on * @desc: AF_XDP descriptor to pull the DMA address and length from @@ -921,20 +955,6 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d } /** - * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU) - * @xdp_ring: XDP ring to produce the HW Tx descriptors on - */ -static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring) -{ - u16 ntu = xdp_ring->next_to_use ? 
xdp_ring->next_to_use - 1 : xdp_ring->count - 1; - struct ice_tx_desc *tx_desc; - - tx_desc = ICE_TX_DESC(xdp_ring, ntu); - tx_desc->cmd_type_offset_bsz |= - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); -} - -/** * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring * @xdp_ring: XDP ring to produce the HW Tx descriptors on * @@ -1068,12 +1088,12 @@ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring) while (ntc != ntu) { struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc]; - if (tx_buf->raw_buf) - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); - else + if (tx_buf->type == ICE_TX_BUF_XSK_TX) { + tx_buf->type = ICE_TX_BUF_EMPTY; + xsk_buff_free(tx_buf->xdp); + } else { xsk_frames++; - - tx_buf->raw_buf = NULL; + } ntc++; if (ntc >= xdp_ring->count) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index b5b443883da9..03bc1e8af575 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -2835,6 +2835,22 @@ static int igb_offload_txtime(struct igb_adapter *adapter, return 0; } +static int igb_tc_query_caps(struct igb_adapter *adapter, + struct tc_query_caps_base *base) +{ + switch (base->type) { + case TC_SETUP_QDISC_TAPRIO: { + struct tc_taprio_caps *caps = base->caps; + + caps->broken_mqprio = true; + + return 0; + } + default: + return -EOPNOTSUPP; + } +} + static LIST_HEAD(igb_block_cb_list); static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type, @@ -2843,6 +2859,8 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type, struct igb_adapter *adapter = netdev_priv(dev); switch (type) { + case TC_QUERY_CAPS: + return igb_tc_query_caps(adapter, type_data); case TC_SETUP_QDISC_CBS: return igb_offload_cbs(adapter, type_data); case TC_SETUP_BLOCK: @@ -2896,8 +2914,14 @@ static int igb_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf) bpf_prog_put(old_prog); /* bpf is just replaced, RXQ and MTU are already setup */ - if (!need_reset) + if (!need_reset) { return 0; + } else { + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + } if (running) igb_open(dev); @@ -3210,8 +3234,6 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto err_pci_reg; - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); pci_save_state(pdev); @@ -3333,6 +3355,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) netdev->priv_flags |= IFF_SUPP_NOFCS; netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; /* MTU range: 68 - 9216 */ netdev->min_mtu = ETH_MIN_MTU; @@ -3648,7 +3671,6 @@ err_sw_init: err_ioremap: free_netdev(netdev); err_alloc_etherdev: - pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -3859,8 +3881,6 @@ static void igb_remove(struct pci_dev *pdev) kfree(adapter->shadow_vfta); free_netdev(netdev); - pci_disable_pcie_error_reporting(pdev); - pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c index a15927e77272..a1d815af507d 100644 --- a/drivers/net/ethernet/intel/igc/igc_base.c +++ b/drivers/net/ethernet/intel/igc/igc_base.c @@ -396,6 +396,35 @@ void igc_rx_fifo_flush_base(struct igc_hw *hw) rd32(IGC_MPC); } +bool igc_is_device_id_i225(struct igc_hw *hw) +{ + switch (hw->device_id) { + case IGC_DEV_ID_I225_LM: + case IGC_DEV_ID_I225_V: 
+ case IGC_DEV_ID_I225_I: + case IGC_DEV_ID_I225_K: + case IGC_DEV_ID_I225_K2: + case IGC_DEV_ID_I225_LMVP: + case IGC_DEV_ID_I225_IT: + return true; + default: + return false; + } +} + +bool igc_is_device_id_i226(struct igc_hw *hw) +{ + switch (hw->device_id) { + case IGC_DEV_ID_I226_LM: + case IGC_DEV_ID_I226_V: + case IGC_DEV_ID_I226_K: + case IGC_DEV_ID_I226_IT: + return true; + default: + return false; + } +} + static struct igc_mac_operations igc_mac_ops_base = { .init_hw = igc_init_hw_base, .check_for_link = igc_check_for_copper_link, diff --git a/drivers/net/ethernet/intel/igc/igc_base.h b/drivers/net/ethernet/intel/igc/igc_base.h index ce530f5fd7bd..7a992befca24 100644 --- a/drivers/net/ethernet/intel/igc/igc_base.h +++ b/drivers/net/ethernet/intel/igc/igc_base.h @@ -7,6 +7,8 @@ /* forward declaration */ void igc_rx_fifo_flush_base(struct igc_hw *hw); void igc_power_down_phy_copper_base(struct igc_hw *hw); +bool igc_is_device_id_i225(struct igc_hw *hw); +bool igc_is_device_id_i226(struct igc_hw *hw); /* Transmit Descriptor - Advanced */ union igc_adv_tx_desc { diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h index e9747ec5ac0b..9dec3563ce3a 100644 --- a/drivers/net/ethernet/intel/igc/igc_defines.h +++ b/drivers/net/ethernet/intel/igc/igc_defines.h @@ -524,6 +524,7 @@ /* Transmit Scheduling */ #define IGC_TQAVCTRL_TRANSMIT_MODE_TSN 0x00000001 #define IGC_TQAVCTRL_ENHANCED_QAV 0x00000008 +#define IGC_TQAVCTRL_FUTSCDDIS 0x00000080 #define IGC_TXQCTL_QUEUE_MODE_LAUNCHT 0x00000001 #define IGC_TXQCTL_STRICT_CYCLE 0x00000002 diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 1dd2a7fee8d4..2928a6c73692 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -5978,6 +5978,7 @@ static bool validate_schedule(struct igc_adapter *adapter, const struct tc_taprio_qopt_offload *qopt) { int queue_uses[IGC_MAX_TX_QUEUES] = { }; + struct igc_hw *hw = &adapter->hw; struct timespec64 now; size_t n; @@ -5990,8 +5991,10 @@ static bool validate_schedule(struct igc_adapter *adapter, * in the future, it will hold all the packets until that * time, causing a lot of TX Hangs, so to avoid that, we * reject schedules that would start in the future. + * Note: Limitation above is no longer in i226. 
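igc_is_device_id_i225()/igc_is_device_id_i226() above fold the per-SKU PCI device IDs into two family predicates, so the TSN quirks that differ between i225 and i226 can be gated in one expression. For instance, the FutScdDis handling later in this diff reduces to a guard of roughly this shape (a sketch condensing the igc_tsn.c hunk below):

```c
/* Sketch: gating the i226-only FutScdDis bit with the new helper;
 * on i225 the register value is left untouched.
 */
static u32 maybe_set_futscddis(struct igc_hw *hw, u32 tqavctrl)
{
	if (igc_is_device_id_i226(hw))
		tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
	return tqavctrl;
}
```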
*/ - if (!is_base_time_past(qopt->base_time, &now)) + if (!is_base_time_past(qopt->base_time, &now) && + igc_is_device_id_i225(hw)) return false; for (n = 0; n < qopt->num_entries; n++) { @@ -6061,6 +6064,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter, struct tc_taprio_qopt_offload *qopt) { bool queue_configured[IGC_MAX_TX_QUEUES] = { }; + struct igc_hw *hw = &adapter->hw; u32 start_time = 0, end_time = 0; size_t n; int i; @@ -6073,7 +6077,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter, if (qopt->base_time < 0) return -ERANGE; - if (adapter->base_time) + if (igc_is_device_id_i225(hw) && adapter->base_time) return -EALREADY; if (!validate_schedule(adapter, qopt)) @@ -6221,12 +6225,35 @@ static int igc_tsn_enable_cbs(struct igc_adapter *adapter, return igc_tsn_offload_apply(adapter); } +static int igc_tc_query_caps(struct igc_adapter *adapter, + struct tc_query_caps_base *base) +{ + struct igc_hw *hw = &adapter->hw; + + switch (base->type) { + case TC_SETUP_QDISC_TAPRIO: { + struct tc_taprio_caps *caps = base->caps; + + caps->broken_mqprio = true; + + if (hw->mac.type == igc_i225) + caps->gate_mask_per_txq = true; + + return 0; + } + default: + return -EOPNOTSUPP; + } +} + static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { struct igc_adapter *adapter = netdev_priv(dev); switch (type) { + case TC_QUERY_CAPS: + return igc_tc_query_caps(adapter, type_data); case TC_SETUP_QDISC_TAPRIO: return igc_tsn_enable_qbv_scheduling(adapter, type_data); @@ -6451,8 +6478,6 @@ static int igc_probe(struct pci_dev *pdev, if (err) goto err_pci_reg; - pci_enable_pcie_error_reporting(pdev); - err = pci_enable_ptm(pdev, NULL); if (err < 0) dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n"); @@ -6550,6 +6575,9 @@ static int igc_probe(struct pci_dev *pdev, netdev->mpls_features |= NETIF_F_HW_CSUM; netdev->hw_enc_features |= netdev->vlan_features; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; + /* MTU range: 68 - 9216 */ netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = MAX_STD_JUMBO_FRAME_SIZE; @@ -6657,7 +6685,6 @@ err_sw_init: err_ioremap: free_netdev(netdev); err_alloc_etherdev: - pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -6705,8 +6732,6 @@ static void igc_remove(struct pci_dev *pdev) free_netdev(netdev); - pci_disable_pcie_error_reporting(pdev); - pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c index bb10d7b65232..a386c8d61dbf 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.c +++ b/drivers/net/ethernet/intel/igc/igc_tsn.c @@ -2,6 +2,7 @@ /* Copyright (c) 2019 Intel Corporation */ #include "igc.h" +#include "igc_hw.h" #include "igc_tsn.h" static bool is_any_launchtime(struct igc_adapter *adapter) @@ -92,7 +93,8 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) tqavctrl = rd32(IGC_TQAVCTRL); tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN | - IGC_TQAVCTRL_ENHANCED_QAV); + IGC_TQAVCTRL_ENHANCED_QAV | IGC_TQAVCTRL_FUTSCDDIS); + wr32(IGC_TQAVCTRL, tqavctrl); for (i = 0; i < adapter->num_tx_queues; i++) { @@ -117,20 +119,10 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) ktime_t base_time, systim; int i; - cycle = adapter->cycle_time; - base_time = adapter->base_time; - wr32(IGC_TSAUXC, 0); wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN); wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); - tqavctrl = rd32(IGC_TQAVCTRL); 
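Both the igb and igc hunks above add a TC_QUERY_CAPS branch so the taprio qdisc can probe hardware capabilities before attempting an offload: mqprio queue statistics are flagged as broken on these parts, and i225 additionally reports per-Tx-queue gate masks. Condensed to its shape (is_i225 stands in for the hw->mac.type check):

```c
#include <net/pkt_cls.h>
#include <net/pkt_sched.h>

/* Sketch: the TC_QUERY_CAPS dispatch shared by the igb/igc hunks
 * above, answering only for the taprio qdisc.
 */
static int tc_query_caps_demo(struct tc_query_caps_base *base, bool is_i225)
{
	switch (base->type) {
	case TC_SETUP_QDISC_TAPRIO: {
		struct tc_taprio_caps *caps = base->caps;

		caps->broken_mqprio = true;	/* per-TC stats unreliable */
		if (is_i225)
			caps->gate_mask_per_txq = true;
		return 0;
	}
	default:
		return -EOPNOTSUPP;
	}
}
```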
- tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; - wr32(IGC_TQAVCTRL, tqavctrl); - - wr32(IGC_QBVCYCLET_S, cycle); - wr32(IGC_QBVCYCLET, cycle); - for (i = 0; i < adapter->num_tx_queues; i++) { struct igc_ring *ring = adapter->tx_ring[i]; u32 txqctl = 0; @@ -233,21 +225,46 @@ skip_cbs: wr32(IGC_TXQCTL(i), txqctl); } + tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS; + tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; + + cycle = adapter->cycle_time; + base_time = adapter->base_time; + nsec = rd32(IGC_SYSTIML); sec = rd32(IGC_SYSTIMH); systim = ktime_set(sec, nsec); - if (ktime_compare(systim, base_time) > 0) { - s64 n; + s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle); - n = div64_s64(ktime_sub_ns(systim, base_time), cycle); base_time = ktime_add_ns(base_time, (n + 1) * cycle); + } else { + /* According to datasheet section 7.5.2.9.3.3, FutScdDis bit + * has to be configured before the cycle time and base time. + * Tx won't hang if there is a GCL already running, + * so in this case we don't need to set FutScdDis. + */ + if (igc_is_device_id_i226(hw) && + !(rd32(IGC_BASET_H) || rd32(IGC_BASET_L))) + tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS; } - baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l); + wr32(IGC_TQAVCTRL, tqavctrl); + + wr32(IGC_QBVCYCLET_S, cycle); + wr32(IGC_QBVCYCLET, cycle); + baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l); wr32(IGC_BASET_H, baset_h); + + /* In i226, Future base time is only supported when FutScdDis bit + * is enabled and only active for re-configuration. + * In this case, initialize the base time with zero to create + * "re-configuration" scenario then only set the desired base time. + */ + if (tqavctrl & IGC_TQAVCTRL_FUTSCDDIS) + wr32(IGC_BASET_L, 0); wr32(IGC_BASET_L, baset_l); return 0; @@ -274,17 +291,14 @@ int igc_tsn_reset(struct igc_adapter *adapter) int igc_tsn_offload_apply(struct igc_adapter *adapter) { - int err; + struct igc_hw *hw = &adapter->hw; - if (netif_running(adapter->netdev)) { + if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) { schedule_work(&adapter->reset_task); return 0; } - err = igc_tsn_enable_offload(adapter); - if (err < 0) - return err; + igc_tsn_reset(adapter); - adapter->flags = igc_tsn_new_flags(adapter); return 0; } diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c index aeeb34e64610..e27af72aada8 100644 --- a/drivers/net/ethernet/intel/igc/igc_xdp.c +++ b/drivers/net/ethernet/intel/igc/igc_xdp.c @@ -29,6 +29,11 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog, if (old_prog) bpf_prog_put(old_prog); + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + if (if_running) igc_open(dev); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c index 38c4609bd429..878dd8dff528 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c @@ -3292,13 +3292,14 @@ static bool ixgbe_need_crosstalk_fix(struct ixgbe_hw *hw) s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *link_up, bool link_up_wait_to_complete) { + bool crosstalk_fix_active = ixgbe_need_crosstalk_fix(hw); u32 links_reg, links_orig; u32 i; /* If Crosstalk fix enabled do the sanity check of making sure * the SFP+ cage is full. 
*/ - if (ixgbe_need_crosstalk_fix(hw)) { + if (crosstalk_fix_active) { u32 sfp_cage_full; switch (hw->mac.type) { @@ -3346,10 +3347,24 @@ s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed, links_reg = IXGBE_READ_REG(hw, IXGBE_LINKS); } } else { - if (links_reg & IXGBE_LINKS_UP) + if (links_reg & IXGBE_LINKS_UP) { + if (crosstalk_fix_active) { + /* Check the link state again after a delay + * to filter out spurious link up + * notifications. + */ + mdelay(5); + links_reg = IXGBE_READ_REG(hw, IXGBE_LINKS); + if (!(links_reg & IXGBE_LINKS_UP)) { + *link_up = false; + *speed = IXGBE_LINK_SPEED_UNKNOWN; + return 0; + } + } *link_up = true; - else + } else { *link_up = false; + } } switch (links_reg & IXGBE_LINKS_SPEED_82599) { diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c index 53a969e34883..13a6fca31004 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c @@ -557,8 +557,10 @@ static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs) /** * ixgbe_ipsec_add_sa - program device with a security association * @xs: pointer to transformer state struct + * @extack: extack point to fill failure reason **/ -static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) +static int ixgbe_ipsec_add_sa(struct xfrm_state *xs, + struct netlink_ext_ack *extack) { struct net_device *dev = xs->xso.real_dev; struct ixgbe_adapter *adapter = netdev_priv(dev); @@ -570,23 +572,22 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) int i; if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) { - netdev_err(dev, "Unsupported protocol 0x%04x for ipsec offload\n", - xs->id.proto); + NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for ipsec offload"); return -EINVAL; } if (xs->props.mode != XFRM_MODE_TRANSPORT) { - netdev_err(dev, "Unsupported mode for ipsec offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload"); return -EINVAL; } if (ixgbe_ipsec_check_mgmt_ip(xs)) { - netdev_err(dev, "IPsec IP addr clash with mgmt filters\n"); + NL_SET_ERR_MSG_MOD(extack, "IPsec IP addr clash with mgmt filters"); return -EINVAL; } if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { - netdev_err(dev, "Unsupported ipsec offload type\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported ipsec offload type"); return -EINVAL; } @@ -594,14 +595,14 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) struct rx_sa rsa; if (xs->calg) { - netdev_err(dev, "Compression offload not supported\n"); + NL_SET_ERR_MSG_MOD(extack, "Compression offload not supported"); return -EINVAL; } /* find the first unused index */ ret = ixgbe_ipsec_find_empty_idx(ipsec, true); if (ret < 0) { - netdev_err(dev, "No space for SA in Rx table!\n"); + NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx table!"); return ret; } sa_idx = (u16)ret; @@ -616,7 +617,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) /* get the key and salt */ ret = ixgbe_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt); if (ret) { - netdev_err(dev, "Failed to get key data for Rx SA table\n"); + NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table"); return ret; } @@ -676,7 +677,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) } else { /* no match and no empty slot */ - netdev_err(dev, "No space for SA in Rx IP SA table\n"); + NL_SET_ERR_MSG_MOD(extack, "No space for SA in Rx IP SA table"); memset(&rsa, 0, sizeof(rsa)); return -ENOSPC; } @@ -711,7 +712,7 @@ static int ixgbe_ipsec_add_sa(struct 
xfrm_state *xs) /* find the first unused index */ ret = ixgbe_ipsec_find_empty_idx(ipsec, false); if (ret < 0) { - netdev_err(dev, "No space for SA in Tx table\n"); + NL_SET_ERR_MSG_MOD(extack, "No space for SA in Tx table"); return ret; } sa_idx = (u16)ret; @@ -725,7 +726,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) ret = ixgbe_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt); if (ret) { - netdev_err(dev, "Failed to get key data for Tx SA table\n"); + NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table"); memset(&tsa, 0, sizeof(tsa)); return ret; } @@ -950,7 +951,7 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) memcpy(xs->aead->alg_name, aes_gcm_name, sizeof(aes_gcm_name)); /* set up the HW offload */ - err = ixgbe_ipsec_add_sa(xs); + err = ixgbe_ipsec_add_sa(xs, NULL); if (err) goto err_aead; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 4507fba8747a..773c35fecace 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6647,7 +6647,7 @@ int ixgbe_setup_rx_resources(struct ixgbe_adapter *adapter, rx_ring->queue_index, ixgbe_rx_napi_id(rx_ring)) < 0) goto err; - rx_ring->xdp_prog = adapter->xdp_prog; + WRITE_ONCE(rx_ring->xdp_prog, adapter->xdp_prog); return 0; err: @@ -8943,7 +8943,8 @@ ixgbe_mdio_read(struct net_device *netdev, int prtad, int devad, u16 addr) int regnum = addr; if (devad != MDIO_DEVAD_NONE) - regnum |= (devad << 16) | MII_ADDR_C45; + return mdiobus_c45_read(adapter->mii_bus, prtad, + devad, regnum); return mdiobus_read(adapter->mii_bus, prtad, regnum); } @@ -8966,7 +8967,8 @@ static int ixgbe_mdio_write(struct net_device *netdev, int prtad, int devad, int regnum = addr; if (devad != MDIO_DEVAD_NONE) - regnum |= (devad << 16) | MII_ADDR_C45; + return mdiobus_c45_write(adapter->mii_bus, prtad, devad, + regnum, value); return mdiobus_write(adapter->mii_bus, prtad, regnum, value); } @@ -10303,14 +10305,15 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) synchronize_rcu(); err = ixgbe_setup_tc(dev, adapter->hw_tcs); - if (err) { - rcu_assign_pointer(adapter->xdp_prog, old_prog); + if (err) return -EINVAL; - } + if (!prog) + xdp_features_clear_redirect_target(dev); } else { - for (i = 0; i < adapter->num_rx_queues; i++) - (void)xchg(&adapter->rx_ring[i]->xdp_prog, - adapter->xdp_prog); + for (i = 0; i < adapter->num_rx_queues; i++) { + WRITE_ONCE(adapter->rx_ring[i]->xdp_prog, + adapter->xdp_prog); + } } if (old_prog) @@ -10326,6 +10329,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) if (adapter->xdp_ring[i]->xsk_pool) (void)ixgbe_xsk_wakeup(adapter->netdev, i, XDP_WAKEUP_RX); + xdp_features_set_redirect_target(dev, true); } return 0; @@ -10814,8 +10818,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_pci_reg; } - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); pci_save_state(pdev); @@ -11023,6 +11025,9 @@ skip_sriov: netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_SUPP_NOFCS; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; + /* MTU range: 68 - 9710 */ netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN); @@ -11243,7 +11248,6 @@ err_ioremap: disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); free_netdev(netdev); err_alloc_etherdev: - 
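The ixgbe IPsec conversion above (mirrored for ixgbevf further down) threads a struct netlink_ext_ack through the state-add hook so rejection reasons reach the requesting userspace caller instead of only dmesg; note the VF-mailbox path passes NULL. The reporting idiom, sketched against a stand-in check:

```c
#include <linux/netlink.h>

/* Sketch: surfacing an offload rejection through extack instead of
 * netdev_err(); "supported" stands in for the real validation. The
 * macro tolerates a NULL extack, which the VF mailbox path passes.
 */
static int add_sa_demo(bool supported, struct netlink_ext_ack *extack)
{
	if (!supported) {
		NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload");
		return -EINVAL;
	}
	return 0;
}
```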
pci_disable_pcie_error_reporting(pdev); pci_release_mem_regions(pdev); err_pci_reg: err_dma: @@ -11332,8 +11336,6 @@ static void ixgbe_remove(struct pci_dev *pdev) disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); free_netdev(netdev); - pci_disable_pcie_error_reporting(pdev); - if (disable_dev) pci_disable_device(pdev); } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c index 123dca9ce468..689470c1e8ad 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c @@ -680,14 +680,14 @@ static s32 ixgbe_msca_cmd(struct ixgbe_hw *hw, u32 cmd) } /** - * ixgbe_mii_bus_read_generic - Read a clause 22/45 register with gssr flags + * ixgbe_mii_bus_read_generic_c22 - Read a clause 22 register with gssr flags * @hw: pointer to hardware structure * @addr: address * @regnum: register number * @gssr: semaphore flags to acquire **/ -static s32 ixgbe_mii_bus_read_generic(struct ixgbe_hw *hw, int addr, - int regnum, u32 gssr) +static s32 ixgbe_mii_bus_read_generic_c22(struct ixgbe_hw *hw, int addr, + int regnum, u32 gssr) { u32 hwaddr, cmd; s32 data; @@ -696,31 +696,52 @@ static s32 ixgbe_mii_bus_read_generic(struct ixgbe_hw *hw, int addr, return -EBUSY; hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; - if (regnum & MII_ADDR_C45) { - hwaddr |= regnum & GENMASK(21, 0); - cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; - } else { - hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; - cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | - IXGBE_MSCA_READ_AUTOINC | IXGBE_MSCA_MDI_COMMAND; - } + hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; + cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | + IXGBE_MSCA_READ_AUTOINC | IXGBE_MSCA_MDI_COMMAND; data = ixgbe_msca_cmd(hw, cmd); if (data < 0) goto mii_bus_read_done; - /* For a clause 45 access the address cycle just completed, we still - * need to do the read command, otherwise just get the data - */ - if (!(regnum & MII_ADDR_C45)) - goto do_mii_bus_read; + data = IXGBE_READ_REG(hw, IXGBE_MSRWD); + data = (data >> IXGBE_MSRWD_READ_DATA_SHIFT) & GENMASK(16, 0); + +mii_bus_read_done: + hw->mac.ops.release_swfw_sync(hw, gssr); + return data; +} + +/** + * ixgbe_mii_bus_read_generic_c45 - Read a clause 45 register with gssr flags + * @hw: pointer to hardware structure + * @addr: address + * @devad: device address to read + * @regnum: register number + * @gssr: semaphore flags to acquire + **/ +static s32 ixgbe_mii_bus_read_generic_c45(struct ixgbe_hw *hw, int addr, + int devad, int regnum, u32 gssr) +{ + u32 hwaddr, cmd; + s32 data; + + if (hw->mac.ops.acquire_swfw_sync(hw, gssr)) + return -EBUSY; + + hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; + hwaddr |= devad << 16 | regnum; + cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; + + data = ixgbe_msca_cmd(hw, cmd); + if (data < 0) + goto mii_bus_read_done; cmd = hwaddr | IXGBE_MSCA_READ | IXGBE_MSCA_MDI_COMMAND; data = ixgbe_msca_cmd(hw, cmd); if (data < 0) goto mii_bus_read_done; -do_mii_bus_read: data = IXGBE_READ_REG(hw, IXGBE_MSRWD); data = (data >> IXGBE_MSRWD_READ_DATA_SHIFT) & GENMASK(16, 0); @@ -730,15 +751,15 @@ mii_bus_read_done: } /** - * ixgbe_mii_bus_write_generic - Write a clause 22/45 register with gssr flags + * ixgbe_mii_bus_write_generic_c22 - Write a clause 22 register with gssr flags * @hw: pointer to hardware structure * @addr: address * @regnum: register number * @val: value to write * @gssr: semaphore flags to acquire **/ -static s32 
ixgbe_mii_bus_write_generic(struct ixgbe_hw *hw, int addr, - int regnum, u16 val, u32 gssr) +static s32 ixgbe_mii_bus_write_generic_c22(struct ixgbe_hw *hw, int addr, + int regnum, u16 val, u32 gssr) { u32 hwaddr, cmd; s32 err; @@ -749,20 +770,43 @@ static s32 ixgbe_mii_bus_write_generic(struct ixgbe_hw *hw, int addr, IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)val); hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; - if (regnum & MII_ADDR_C45) { - hwaddr |= regnum & GENMASK(21, 0); - cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; - } else { - hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; - cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | IXGBE_MSCA_WRITE | - IXGBE_MSCA_MDI_COMMAND; - } + hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; + cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | IXGBE_MSCA_WRITE | + IXGBE_MSCA_MDI_COMMAND; + + err = ixgbe_msca_cmd(hw, cmd); + + hw->mac.ops.release_swfw_sync(hw, gssr); + return err; +} + +/** + * ixgbe_mii_bus_write_generic_c45 - Write a clause 45 register with gssr flags + * @hw: pointer to hardware structure + * @addr: address + * @devad: device address to read + * @regnum: register number + * @val: value to write + * @gssr: semaphore flags to acquire + **/ +static s32 ixgbe_mii_bus_write_generic_c45(struct ixgbe_hw *hw, int addr, + int devad, int regnum, u16 val, + u32 gssr) +{ + u32 hwaddr, cmd; + s32 err; + + if (hw->mac.ops.acquire_swfw_sync(hw, gssr)) + return -EBUSY; + + IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)val); + + hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; + hwaddr |= devad << 16 | regnum; + cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; - /* For clause 45 this is an address cycle, for clause 22 this is the - * entire transaction - */ err = ixgbe_msca_cmd(hw, cmd); - if (err < 0 || !(regnum & MII_ADDR_C45)) + if (err < 0) goto mii_bus_write_done; cmd = hwaddr | IXGBE_MSCA_WRITE | IXGBE_MSCA_MDI_COMMAND; @@ -774,70 +818,144 @@ mii_bus_write_done: } /** - * ixgbe_mii_bus_read - Read a clause 22/45 register + * ixgbe_mii_bus_read_c22 - Read a clause 22 register + * @bus: pointer to mii_bus structure which points to our driver private + * @addr: address + * @regnum: register number + **/ +static s32 ixgbe_mii_bus_read_c22(struct mii_bus *bus, int addr, int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + return ixgbe_mii_bus_read_generic_c22(hw, addr, regnum, gssr); +} + +/** + * ixgbe_mii_bus_read_c45 - Read a clause 45 register * @bus: pointer to mii_bus structure which points to our driver private + * @devad: device address to read * @addr: address * @regnum: register number **/ -static s32 ixgbe_mii_bus_read(struct mii_bus *bus, int addr, int regnum) +static s32 ixgbe_mii_bus_read_c45(struct mii_bus *bus, int devad, int addr, + int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + return ixgbe_mii_bus_read_generic_c45(hw, addr, devad, regnum, gssr); +} + +/** + * ixgbe_mii_bus_write_c22 - Write a clause 22 register + * @bus: pointer to mii_bus structure which points to our driver private + * @addr: address + * @regnum: register number + * @val: value to write + **/ +static s32 ixgbe_mii_bus_write_c22(struct mii_bus *bus, int addr, int regnum, + u16 val) { struct ixgbe_adapter *adapter = bus->priv; struct ixgbe_hw *hw = &adapter->hw; u32 gssr = hw->phy.phy_semaphore_mask; - return ixgbe_mii_bus_read_generic(hw, addr, 
regnum, gssr); + return ixgbe_mii_bus_write_generic_c22(hw, addr, regnum, val, gssr); } /** - * ixgbe_mii_bus_write - Write a clause 22/45 register + * ixgbe_mii_bus_write_c45 - Write a clause 45 register * @bus: pointer to mii_bus structure which points to our driver private * @addr: address + * @devad: device address to read * @regnum: register number * @val: value to write **/ -static s32 ixgbe_mii_bus_write(struct mii_bus *bus, int addr, int regnum, - u16 val) +static s32 ixgbe_mii_bus_write_c45(struct mii_bus *bus, int addr, int devad, + int regnum, u16 val) { struct ixgbe_adapter *adapter = bus->priv; struct ixgbe_hw *hw = &adapter->hw; u32 gssr = hw->phy.phy_semaphore_mask; - return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, gssr); + return ixgbe_mii_bus_write_generic_c45(hw, addr, devad, regnum, val, + gssr); } /** - * ixgbe_x550em_a_mii_bus_read - Read a clause 22/45 register on x550em_a + * ixgbe_x550em_a_mii_bus_read_c22 - Read a clause 22 register on x550em_a * @bus: pointer to mii_bus structure which points to our driver private * @addr: address * @regnum: register number **/ -static s32 ixgbe_x550em_a_mii_bus_read(struct mii_bus *bus, int addr, - int regnum) +static s32 ixgbe_x550em_a_mii_bus_read_c22(struct mii_bus *bus, int addr, + int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; + return ixgbe_mii_bus_read_generic_c22(hw, addr, regnum, gssr); +} + +/** + * ixgbe_x550em_a_mii_bus_read_c45 - Read a clause 45 register on x550em_a + * @bus: pointer to mii_bus structure which points to our driver private + * @addr: address + * @devad: device address to read + * @regnum: register number + **/ +static s32 ixgbe_x550em_a_mii_bus_read_c45(struct mii_bus *bus, int addr, + int devad, int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; + return ixgbe_mii_bus_read_generic_c45(hw, addr, devad, regnum, gssr); +} + +/** + * ixgbe_x550em_a_mii_bus_write_c22 - Write a clause 22 register on x550em_a + * @bus: pointer to mii_bus structure which points to our driver private + * @addr: address + * @regnum: register number + * @val: value to write + **/ +static s32 ixgbe_x550em_a_mii_bus_write_c22(struct mii_bus *bus, int addr, + int regnum, u16 val) { struct ixgbe_adapter *adapter = bus->priv; struct ixgbe_hw *hw = &adapter->hw; u32 gssr = hw->phy.phy_semaphore_mask; gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; - return ixgbe_mii_bus_read_generic(hw, addr, regnum, gssr); + return ixgbe_mii_bus_write_generic_c22(hw, addr, regnum, val, gssr); } /** - * ixgbe_x550em_a_mii_bus_write - Write a clause 22/45 register on x550em_a + * ixgbe_x550em_a_mii_bus_write_c45 - Write a clause 45 register on x550em_a * @bus: pointer to mii_bus structure which points to our driver private * @addr: address + * @devad: device address to read * @regnum: register number * @val: value to write **/ -static s32 ixgbe_x550em_a_mii_bus_write(struct mii_bus *bus, int addr, - int regnum, u16 val) +static s32 ixgbe_x550em_a_mii_bus_write_c45(struct mii_bus *bus, int addr, + int devad, int regnum, u16 val) { struct ixgbe_adapter *adapter = bus->priv; struct ixgbe_hw *hw = &adapter->hw; u32 gssr = hw->phy.phy_semaphore_mask; gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; - return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, 
gssr); + return ixgbe_mii_bus_write_generic_c45(hw, addr, devad, regnum, val, + gssr); } /** @@ -909,8 +1027,11 @@ out: **/ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw) { - s32 (*write)(struct mii_bus *bus, int addr, int regnum, u16 val); - s32 (*read)(struct mii_bus *bus, int addr, int regnum); + s32 (*write_c22)(struct mii_bus *bus, int addr, int regnum, u16 val); + s32 (*read_c22)(struct mii_bus *bus, int addr, int regnum); + s32 (*write_c45)(struct mii_bus *bus, int addr, int devad, int regnum, + u16 val); + s32 (*read_c45)(struct mii_bus *bus, int addr, int devad, int regnum); struct ixgbe_adapter *adapter = hw->back; struct pci_dev *pdev = adapter->pdev; struct device *dev = &adapter->netdev->dev; @@ -929,12 +1050,16 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw) case IXGBE_DEV_ID_X550EM_A_1G_T_L: if (!ixgbe_x550em_a_has_mii(hw)) return 0; - read = &ixgbe_x550em_a_mii_bus_read; - write = &ixgbe_x550em_a_mii_bus_write; + read_c22 = ixgbe_x550em_a_mii_bus_read_c22; + write_c22 = ixgbe_x550em_a_mii_bus_write_c22; + read_c45 = ixgbe_x550em_a_mii_bus_read_c45; + write_c45 = ixgbe_x550em_a_mii_bus_write_c45; break; default: - read = &ixgbe_mii_bus_read; - write = &ixgbe_mii_bus_write; + read_c22 = ixgbe_mii_bus_read_c22; + write_c22 = ixgbe_mii_bus_write_c22; + read_c45 = ixgbe_mii_bus_read_c45; + write_c45 = ixgbe_mii_bus_write_c45; break; } @@ -942,8 +1067,10 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw) if (!bus) return -ENOMEM; - bus->read = read; - bus->write = write; + bus->read = read_c22; + bus->write = write_c22; + bus->read_c45 = read_c45; + bus->write_c45 = write_c45; /* Use the position of the device in the PCI hierarchy as the id */ snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mdio-%s", ixgbe_driver_name, diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c index c1cf540d162a..66cf17f19408 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c +++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c @@ -257,8 +257,10 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs, /** * ixgbevf_ipsec_add_sa - program device with a security association * @xs: pointer to transformer state struct + * @extack: extack point to fill failure reason **/ -static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs) +static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs, + struct netlink_ext_ack *extack) { struct net_device *dev = xs->xso.real_dev; struct ixgbevf_adapter *adapter; @@ -270,18 +272,17 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs) ipsec = adapter->ipsec; if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) { - netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n", - xs->id.proto); + NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for IPsec offload"); return -EINVAL; } if (xs->props.mode != XFRM_MODE_TRANSPORT) { - netdev_err(dev, "Unsupported mode for ipsec offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for ipsec offload"); return -EINVAL; } if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { - netdev_err(dev, "Unsupported ipsec offload type\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported ipsec offload type"); return -EINVAL; } @@ -289,14 +290,14 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs) struct rx_sa rsa; if (xs->calg) { - netdev_err(dev, "Compression offload not supported\n"); + NL_SET_ERR_MSG_MOD(extack, "Compression offload not supported"); return -EINVAL; } /* find the first unused index */ ret = ixgbevf_ipsec_find_empty_idx(ipsec, true); if (ret < 0) { - netdev_err(dev, "No space 
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index ea0a230c1153..a44e4bd56142 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -4634,6 +4634,7 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) NETIF_F_HW_VLAN_CTAG_TX; netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; /* MTU range: 68 - 1504 or 9710 */ netdev->min_mtu = ETH_MIN_MTU;
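netdev->xdp_features is new infrastructure in this cycle: each driver advertises which XDP actions its datapath implements, and the flags are exported to userspace through the netdev netlink family. A hedged sketch of typical probe-time use follows; the exact flag set depends on what the datapath really supports:

/* probe: the datapath handles basic XDP verdicts and can redirect out */
netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

/* if the device only becomes a valid XDP_REDIRECT target once its XDP
 * TX queues are set up, the flag can be toggled at runtime (false here
 * means no scatter-gather support for ndo_xdp_xmit):
 */
xdp_features_set_redirect_target(netdev, false);
xdp_features_clear_redirect_target(netdev);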
diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c index ef878973b859..8662543ca5c8 100644 --- a/drivers/net/ethernet/marvell/mvmdio.c +++ b/drivers/net/ethernet/marvell/mvmdio.c @@ -146,9 +146,6 @@ static int orion_mdio_smi_read(struct mii_bus *bus, int mii_id, u32 val; int ret; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - ret = orion_mdio_wait_ready(&orion_mdio_smi_ops, bus); if (ret < 0) return ret; @@ -177,9 +174,6 @@ static int orion_mdio_smi_write(struct mii_bus *bus, int mii_id, struct orion_mdio_dev *dev = bus->priv; int ret; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - ret = orion_mdio_wait_ready(&orion_mdio_smi_ops, bus); if (ret < 0) return ret; @@ -204,21 +198,17 @@ static const struct orion_mdio_ops orion_mdio_xsmi_ops = { .poll_interval_max = MVMDIO_XSMI_POLL_INTERVAL_MAX, }; -static int orion_mdio_xsmi_read(struct mii_bus *bus, int mii_id, - int regnum) +static int orion_mdio_xsmi_read_c45(struct mii_bus *bus, int mii_id, + int dev_addr, int regnum) { struct orion_mdio_dev *dev = bus->priv; - u16 dev_addr = (regnum >> 16) & GENMASK(4, 0); int ret; - if (!(regnum & MII_ADDR_C45)) - return -EOPNOTSUPP; - ret = orion_mdio_wait_ready(&orion_mdio_xsmi_ops, bus); if (ret < 0) return ret; - writel(regnum & GENMASK(15, 0), dev->regs + MVMDIO_XSMI_ADDR_REG); + writel(regnum, dev->regs + MVMDIO_XSMI_ADDR_REG); writel((mii_id << MVMDIO_XSMI_PHYADDR_SHIFT) | (dev_addr << MVMDIO_XSMI_DEVADDR_SHIFT) | MVMDIO_XSMI_READ_OPERATION, @@ -237,21 +227,17 @@ static int orion_mdio_xsmi_read(struct mii_bus *bus, int mii_id, return readl(dev->regs + MVMDIO_XSMI_MGNT_REG) & GENMASK(15, 0); } -static int orion_mdio_xsmi_write(struct mii_bus *bus, int mii_id, - int regnum, u16 value) +static int orion_mdio_xsmi_write_c45(struct mii_bus *bus, int mii_id, + int dev_addr, int regnum, u16 value) { struct orion_mdio_dev *dev = bus->priv; - u16 dev_addr = (regnum >> 16) & GENMASK(4, 0); int ret; - if (!(regnum & MII_ADDR_C45)) - return -EOPNOTSUPP; - ret = orion_mdio_wait_ready(&orion_mdio_xsmi_ops, bus); if (ret < 0) return ret; - writel(regnum & GENMASK(15, 0), dev->regs + MVMDIO_XSMI_ADDR_REG); + writel(regnum, dev->regs + MVMDIO_XSMI_ADDR_REG); writel((mii_id << MVMDIO_XSMI_PHYADDR_SHIFT) | (dev_addr << MVMDIO_XSMI_DEVADDR_SHIFT) | MVMDIO_XSMI_WRITE_OPERATION | value, @@ -302,8 +288,8 @@ static int orion_mdio_probe(struct platform_device *pdev) bus->write = orion_mdio_smi_write; break; case BUS_TYPE_XSMI: - bus->read = orion_mdio_xsmi_read; - bus->write = orion_mdio_xsmi_write; + bus->read_c45 = orion_mdio_xsmi_read_c45; + bus->write_c45 = orion_mdio_xsmi_write_c45; break; } diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f8925cac61e4..0e39d199ff06 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -38,7 +38,7 @@ #include <net/ipv6.h> #include <net/tso.h> #include <net/page_pool.h> -#include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include <linux/bpf_trace.h> /* Registers */ @@ -5612,6 +5612,12 @@ static int mvneta_probe(struct platform_device *pdev) NETIF_F_TSO | NETIF_F_RXCSUM; dev->hw_features |= dev->features; dev->vlan_features |= dev->features; + if (!pp->bm_priv) + dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; netif_set_tso_max_segs(dev, MVNETA_MAX_TSO_SEGS); diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 4da45c5abba5..9b4ecbe4f36d 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -6866,6 +6866,10 @@ static int mvpp2_port_probe(struct platform_device *pdev, dev->vlan_features |= features; netif_set_tso_max_segs(dev, MVPP2_MAX_TSO_SEGS); + + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + dev->priv_flags |= IFF_UNICAST_FLT; /* MTU range: 68 - 9704 */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index d2584ebb7a70..5727d67e0259 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -195,6 +195,9 @@ M(CPT_STATS, 0xA05, cpt_sts, cpt_sts_req, cpt_sts_rsp) \ M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \ msg_rsp) \ M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \ +M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \ +M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \ + cpt_flt_eng_info_rsp) \ /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \ M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \ M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \ @@ -297,6 +300,8 @@ M(NIX_BANDPROF_FREE, 0x801e, nix_bandprof_free, nix_bandprof_free_req, \ msg_rsp) \ M(NIX_BANDPROF_GET_HWINFO, 0x801f, nix_bandprof_get_hwinfo, msg_req, \ nix_bandprof_get_hwinfo_rsp) \ +M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \ + msg_req, nix_inline_ipsec_cfg) \ /* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -1196,7 +1201,7 @@ struct nix_inline_ipsec_cfg { u32
cpt_credit; struct { u8 egrp; - u8 opcode; + u16 opcode; u16 param1; u16 param2; } gen_cfg; @@ -1205,6 +1210,8 @@ struct nix_inline_ipsec_cfg { u8 cpt_slot; } inst_qsel; u8 enable; + u16 bpid; + u32 credit_th; }; /* Per NIX LF inline IPSec configuration */ @@ -1609,6 +1616,8 @@ struct cpt_lf_alloc_req_msg { u16 sso_pf_func; u16 eng_grpmsk; int blkaddr; + u8 ctx_ilen_valid : 1; + u8 ctx_ilen : 7; }; #define CPT_INLINE_INBOUND 0 @@ -1692,6 +1701,28 @@ struct cpt_inst_lmtst_req { u64 rsvd; }; +/* Mailbox message format to request a CPT LF reset */ +struct cpt_lf_rst_req { + struct mbox_msghdr hdr; + u32 slot; + u32 rsvd; +}; + +/* Mailbox message format to request CPT faulted engine information */ +struct cpt_flt_eng_info_req { + struct mbox_msghdr hdr; + int blkaddr; + bool reset; + u32 rsvd; +}; + +struct cpt_flt_eng_info_rsp { + struct mbox_msghdr hdr; + u64 flt_eng_map[CPT_10K_AF_INT_VEC_RVU]; + u64 rcvrd_eng_map[CPT_10K_AF_INT_VEC_RVU]; + u64 rsvd; +}; + struct sdp_node_info { /* Node to which this PF belongs */ u8 node_id; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 3f5e09b77d4b..8683ce57ed3f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -1164,8 +1164,16 @@ cpt: goto nix_err; } + err = rvu_cpt_init(rvu); + if (err) { + dev_err(rvu->dev, "%s: Failed to initialize cpt\n", __func__); + goto mcs_err; + } + return 0; +mcs_err: + rvu_mcs_exit(rvu); nix_err: rvu_nix_freemem(rvu); npa_err: diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 7f0a64731c67..389663a13d1d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -108,6 +108,8 @@ struct rvu_block { u64 lfreset_reg; unsigned char name[NAME_SIZE]; struct rvu *rvu; + u64 cpt_flt_eng_map[3]; + u64 cpt_rcvrd_eng_map[3]; }; struct nix_mcast { @@ -459,6 +461,7 @@ struct rvu { struct rvu_pfvf *pf; struct rvu_pfvf *hwvf; struct mutex rsrc_lock; /* Serialize resource alloc/free */ + struct mutex alias_lock; /* Serialize bar2 alias access */ int vfs; /* Number of VFs attached to RVU */ int nix_blkaddr[MAX_NIX_BLKS]; @@ -510,6 +513,7 @@ struct rvu { struct ptp *ptp; int mcs_blk_cnt; + int cpt_pf_num; #ifdef CONFIG_DEBUG_FS struct rvu_debugfs rvu_dbg; @@ -524,6 +528,8 @@ struct rvu { struct list_head mcs_intrq_head; /* mcs interrupt queue lock */ spinlock_t mcs_intrq_lock; + /* CPT interrupt lock */ + spinlock_t cpt_intr_lock; }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -546,6 +552,17 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset) return readq(rvu->pfreg_base + offset); } +static inline void rvu_bar2_sel_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) +{ + /* HW requires a read back of the RVU_AF_BAR2_SEL register to ensure + * completion of the write operation. + */ + rvu_write64(rvu, block, offset, val); + rvu_read64(rvu, block, offset); + /* Barrier to ensure read completes before accessing LF registers */ + mb(); +} +
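The new helper captures a standard MMIO idiom: writes are posted, so reading the register back forces the write to complete at the device, and the barrier then orders it against the LF register accesses that follow. The same pattern in isolation, with a made-up base pointer and DOORBELL offset:

	writeq(val, base + DOORBELL);	/* posted write, may still be in flight */
	(void)readq(base + DOORBELL);	/* read back forces completion */
	mb();				/* order against subsequent accesses */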
/* Silicon revisions */ static inline bool is_rvu_pre_96xx_C0(struct rvu *rvu) { @@ -865,11 +882,15 @@ void rvu_cpt_unregister_interrupts(struct rvu *rvu); int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int slot); int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc); +int rvu_cpt_init(struct rvu *rvu); /* CN10K RVU */ int rvu_set_channels_base(struct rvu *rvu); void rvu_program_channels(struct rvu *rvu); +/* CN10K NIX */ +void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw); + /* CN10K RVU - LMT*/ void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index 7dbbc115cde4..4ad9ff025c96 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -538,3 +538,21 @@ void rvu_program_channels(struct rvu *rvu) rvu_lbk_set_channels(rvu); rvu_rpm_set_channels(rvu); } + +void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw) +{ + int blkaddr = nix_hw->blkaddr; + u64 cfg; + + /* Set AF vWQE timer interval to an LF-configurable range of + * 6.4us to 1.632ms. + */ + rvu_write64(rvu, blkaddr, NIX_AF_VWQE_TIMER, 0x3FULL); + + /* Enable NIX RX stream and global conditional clock to + * avoid multiple frees of NPA buffers. + */ + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG); + cfg |= BIT_ULL(1) | BIT_ULL(2); + rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 38bbae5d9ae0..f047185f38e0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -17,7 +17,7 @@ #define PCI_DEVID_OTX2_CPT10K_PF 0xA0F2 /* Length of initial context fetch in 128 byte words */ -#define CPT_CTX_ILEN 2ULL +#define CPT_CTX_ILEN 1ULL #define cpt_get_eng_sts(e_min, e_max, rsp, etype) \ ({ \ @@ -37,34 +37,68 @@ (_rsp)->free_sts_##etype = free_sts; \ }) -static irqreturn_t rvu_cpt_af_flt_intr_handler(int irq, void *ptr) +static irqreturn_t cpt_af_flt_intr_handler(int vec, void *ptr) { struct rvu_block *block = ptr; struct rvu *rvu = block->rvu; int blkaddr = block->addr; - u64 reg0, reg1, reg2; - - reg0 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(0)); - reg1 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(1)); - if (!is_rvu_otx2(rvu)) { - reg2 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(2)); - dev_err_ratelimited(rvu->dev, - "Received CPTAF FLT irq : 0x%llx, 0x%llx, 0x%llx", - reg0, reg1, reg2); - } else { - dev_err_ratelimited(rvu->dev, - "Received CPTAF FLT irq : 0x%llx, 0x%llx", - reg0, reg1); + u64 reg, val; + int i, eng; + u8 grp; + + reg = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(vec)); + dev_err_ratelimited(rvu->dev, "Received CPTAF FLT%d irq : 0x%llx", vec, reg); + + i = -1; + while ((i = find_next_bit((unsigned long *)&reg, 64, i + 1)) < 64) { + switch (vec) { + case 0: + eng = i; + break; + case 1: + eng = i + 64; + break; + case 2: + eng = i + 128; + break; + } + grp = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng)) & 0xFF; + /* Disable and re-enable the engine that triggered the fault */ + rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), 0x0); + val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng)); + rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val
& ~1ULL); + + rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), grp); + rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val | 1ULL); + + spin_lock(&rvu->cpt_intr_lock); + block->cpt_flt_eng_map[vec] |= BIT_ULL(i); + val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_STS(eng)); + val = val & 0x3; + if (val == 0x1 || val == 0x2) + block->cpt_rcvrd_eng_map[vec] |= BIT_ULL(i); + spin_unlock(&rvu->cpt_intr_lock); } - - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(0), reg0); - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(1), reg1); - if (!is_rvu_otx2(rvu)) - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(2), reg2); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(vec), reg); return IRQ_HANDLED; } +static irqreturn_t rvu_cpt_af_flt0_intr_handler(int irq, void *ptr) +{ + return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT0, ptr); +} + +static irqreturn_t rvu_cpt_af_flt1_intr_handler(int irq, void *ptr) +{ + return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT1, ptr); +} + +static irqreturn_t rvu_cpt_af_flt2_intr_handler(int irq, void *ptr) +{ + return cpt_af_flt_intr_handler(CPT_10K_AF_INT_VEC_FLT2, ptr); +} + static irqreturn_t rvu_cpt_af_rvu_intr_handler(int irq, void *ptr) { struct rvu_block *block = ptr; @@ -119,8 +153,10 @@ static void cpt_10k_unregister_interrupts(struct rvu_block *block, int off) int i; /* Disable all CPT AF interrupts */ - for (i = 0; i < CPT_10K_AF_INT_VEC_RVU; i++) - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(0), ~0ULL); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(1), ~0ULL); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(2), 0xFFFF); + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1); rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1); @@ -151,7 +187,7 @@ static void cpt_unregister_interrupts(struct rvu *rvu, int blkaddr) /* Disable all CPT AF interrupts */ for (i = 0; i < CPT_AF_INT_VEC_RVU; i++) - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), ~0ULL); rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1); rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1); @@ -172,16 +208,31 @@ static int cpt_10k_register_interrupts(struct rvu_block *block, int off) { struct rvu *rvu = block->rvu; int blkaddr = block->addr; + irq_handler_t flt_fn; int i, ret; for (i = CPT_10K_AF_INT_VEC_FLT0; i < CPT_10K_AF_INT_VEC_RVU; i++) { sprintf(&rvu->irq_name[(off + i) * NAME_SIZE], "CPTAF FLT%d", i); + + switch (i) { + case CPT_10K_AF_INT_VEC_FLT0: + flt_fn = rvu_cpt_af_flt0_intr_handler; + break; + case CPT_10K_AF_INT_VEC_FLT1: + flt_fn = rvu_cpt_af_flt1_intr_handler; + break; + case CPT_10K_AF_INT_VEC_FLT2: + flt_fn = rvu_cpt_af_flt2_intr_handler; + break; + } ret = rvu_cpt_do_register_interrupt(block, off + i, - rvu_cpt_af_flt_intr_handler, - &rvu->irq_name[(off + i) * NAME_SIZE]); + flt_fn, &rvu->irq_name[(off + i) * NAME_SIZE]); if (ret) goto err; - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1); + if (i == CPT_10K_AF_INT_VEC_FLT2) + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0xFFFF); + else + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL); } ret = rvu_cpt_do_register_interrupt(block, off + CPT_10K_AF_INT_VEC_RVU, @@ -208,8 +259,8 @@ static int cpt_register_interrupts(struct rvu *rvu, int blkaddr) { struct rvu_hwinfo *hw = rvu->hw; struct rvu_block *block; + irq_handler_t flt_fn; int i, offs, ret = 0; - char irq_name[16]; if (!is_block_implemented(rvu->hw, blkaddr)) return 0; @@ -226,13 +277,20 @@ static int 
cpt_register_interrupts(struct rvu *rvu, int blkaddr) return cpt_10k_register_interrupts(block, offs); for (i = CPT_AF_INT_VEC_FLT0; i < CPT_AF_INT_VEC_RVU; i++) { - snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i); + sprintf(&rvu->irq_name[(offs + i) * NAME_SIZE], "CPTAF FLT%d", i); + switch (i) { + case CPT_AF_INT_VEC_FLT0: + flt_fn = rvu_cpt_af_flt0_intr_handler; + break; + case CPT_AF_INT_VEC_FLT1: + flt_fn = rvu_cpt_af_flt1_intr_handler; + break; + } ret = rvu_cpt_do_register_interrupt(block, offs + i, - rvu_cpt_af_flt_intr_handler, - irq_name); + flt_fn, &rvu->irq_name[(offs + i) * NAME_SIZE]); if (ret) goto err; - rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL); } ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RVU, @@ -290,7 +348,7 @@ static int get_cpt_pf_num(struct rvu *rvu) static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc) { - int cpt_pf_num = get_cpt_pf_num(rvu); + int cpt_pf_num = rvu->cpt_pf_num; if (rvu_get_pf(pcifunc) != cpt_pf_num) return false; @@ -302,7 +360,7 @@ static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc) static bool is_cpt_vf(struct rvu *rvu, u16 pcifunc) { - int cpt_pf_num = get_cpt_pf_num(rvu); + int cpt_pf_num = rvu->cpt_pf_num; if (rvu_get_pf(pcifunc) != cpt_pf_num) return false; @@ -371,8 +429,12 @@ int rvu_mbox_handler_cpt_lf_alloc(struct rvu *rvu, /* Set CPT LF group and priority */ val = (u64)req->eng_grpmsk << 48 | 1; - if (!is_rvu_otx2(rvu)) - val |= (CPT_CTX_ILEN << 17); + if (!is_rvu_otx2(rvu)) { + if (req->ctx_ilen_valid) + val |= (req->ctx_ilen << 17); + else + val |= (CPT_CTX_ILEN << 17); + } rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), val); @@ -762,10 +824,21 @@ int rvu_mbox_handler_cpt_sts(struct rvu *rvu, struct cpt_sts_req *req, #define RXC_ZOMBIE_COUNT GENMASK_ULL(60, 48) static void cpt_rxc_time_cfg(struct rvu *rvu, struct cpt_rxc_time_cfg_req *req, - int blkaddr) + int blkaddr, struct cpt_rxc_time_cfg_req *save) { u64 dfrg_reg; + if (save) { + /* Save older config */ + dfrg_reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_DFRG); + save->zombie_thres = FIELD_GET(RXC_ZOMBIE_THRES, dfrg_reg); + save->zombie_limit = FIELD_GET(RXC_ZOMBIE_LIMIT, dfrg_reg); + save->active_thres = FIELD_GET(RXC_ACTIVE_THRES, dfrg_reg); + save->active_limit = FIELD_GET(RXC_ACTIVE_LIMIT, dfrg_reg); + + save->step = rvu_read64(rvu, blkaddr, CPT_AF_RXC_TIME_CFG); + } + dfrg_reg = FIELD_PREP(RXC_ZOMBIE_THRES, req->zombie_thres); dfrg_reg |= FIELD_PREP(RXC_ZOMBIE_LIMIT, req->zombie_limit); dfrg_reg |= FIELD_PREP(RXC_ACTIVE_THRES, req->active_thres); @@ -790,7 +863,7 @@ int rvu_mbox_handler_cpt_rxc_time_cfg(struct rvu *rvu, !is_cpt_vf(rvu, req->hdr.pcifunc)) return CPT_AF_ERR_ACCESS_DENIED; - cpt_rxc_time_cfg(rvu, req, blkaddr); + cpt_rxc_time_cfg(rvu, req, blkaddr, NULL); return 0; } @@ -801,9 +874,67 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req, return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc); } +int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req, + struct msg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + struct rvu_block *block; + int cptlf, blkaddr, ret; + u16 actual_slot; + u64 ctl, ctl2; + + blkaddr = rvu_get_blkaddr_from_slot(rvu, BLKTYPE_CPT, pcifunc, + req->slot, &actual_slot); + if (blkaddr < 0) + return CPT_AF_ERR_LF_INVALID; + + block = &rvu->hw->block[blkaddr]; + + cptlf = rvu_get_lf(rvu, block, pcifunc, actual_slot); + if (cptlf < 0) + return CPT_AF_ERR_LF_INVALID; + ctl = rvu_read64(rvu, blkaddr, 
CPT_AF_LFX_CTL(cptlf)); + ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf)); + + ret = rvu_lf_reset(rvu, block, cptlf); + if (ret) + dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n", + block->addr, cptlf); + + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl); + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2); + + return 0; +} + +int rvu_mbox_handler_cpt_flt_eng_info(struct rvu *rvu, struct cpt_flt_eng_info_req *req, + struct cpt_flt_eng_info_rsp *rsp) +{ + struct rvu_block *block; + unsigned long flags; + int blkaddr, vec; + + blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr); + if (blkaddr < 0) + return blkaddr; + + block = &rvu->hw->block[blkaddr]; + for (vec = 0; vec < CPT_10K_AF_INT_VEC_RVU; vec++) { + spin_lock_irqsave(&rvu->cpt_intr_lock, flags); + rsp->flt_eng_map[vec] = block->cpt_flt_eng_map[vec]; + rsp->rcvrd_eng_map[vec] = block->cpt_rcvrd_eng_map[vec]; + if (req->reset) { + block->cpt_flt_eng_map[vec] = 0x0; + block->cpt_rcvrd_eng_map[vec] = 0x0; + } + spin_unlock_irqrestore(&rvu->cpt_intr_lock, flags); + } + return 0; +} + static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr) { - struct cpt_rxc_time_cfg_req req; + struct cpt_rxc_time_cfg_req req, prev; int timeout = 2000; u64 reg; @@ -819,7 +950,7 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr) req.active_thres = 1; req.active_limit = 1; - cpt_rxc_time_cfg(rvu, &req, blkaddr); + cpt_rxc_time_cfg(rvu, &req, blkaddr, &prev); do { reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_ACTIVE_STS); @@ -845,70 +976,68 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr) if (timeout == 0) dev_warn(rvu->dev, "Poll for RXC zombie count hits hard loop counter\n"); + + /* Restore config */ + cpt_rxc_time_cfg(rvu, &prev, blkaddr, NULL); } -#define INPROG_INFLIGHT(reg) ((reg) & 0x1FF) -#define INPROG_GRB_PARTIAL(reg) ((reg) & BIT_ULL(31)) -#define INPROG_GRB(reg) (((reg) >> 32) & 0xFF) -#define INPROG_GWB(reg) (((reg) >> 40) & 0xFF) +#define INFLIGHT GENMASK_ULL(8, 0) +#define GRB_CNT GENMASK_ULL(39, 32) +#define GWB_CNT GENMASK_ULL(47, 40) +#define XQ_XOR GENMASK_ULL(63, 63) +#define DQPTR GENMASK_ULL(19, 0) +#define NQPTR GENMASK_ULL(51, 32) static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot) { - int i = 0, hard_lp_ctr = 100000; - u64 inprog, grp_ptr; - u16 nq_ptr, dq_ptr; + int timeout = 1000000; + u64 inprog, inst_ptr; + u64 qsize, pending; + int i = 0; /* Disable instructions enqueuing */ rvu_write64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTL), 0x0); - /* Disable executions in the LF's queue */ inprog = rvu_read64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG)); - inprog &= ~BIT_ULL(16); + inprog |= BIT_ULL(16); rvu_write64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG), inprog); - /* Wait for CPT queue to become execution-quiescent */ + qsize = rvu_read64(rvu, blkaddr, + CPT_AF_BAR2_ALIASX(slot, CPT_LF_Q_SIZE)) & 0x7FFF; do { - inprog = rvu_read64(rvu, blkaddr, - CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG)); - if (INPROG_GRB_PARTIAL(inprog)) { - i = 0; - hard_lp_ctr--; - } else { - i++; - } - - grp_ptr = rvu_read64(rvu, blkaddr, - CPT_AF_BAR2_ALIASX(slot, - CPT_LF_Q_GRP_PTR)); - nq_ptr = (grp_ptr >> 32) & 0x7FFF; - dq_ptr = grp_ptr & 0x7FFF; - - } while (hard_lp_ctr && (i < 10) && (nq_ptr != dq_ptr)); + inst_ptr = rvu_read64(rvu, blkaddr, + CPT_AF_BAR2_ALIASX(slot, CPT_LF_Q_INST_PTR)); + pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) + + FIELD_GET(NQPTR, inst_ptr) - + FIELD_GET(DQPTR, inst_ptr); + udelay(1); + timeout--; + } while ((pending != 0) 
&& (timeout != 0)); - if (hard_lp_ctr == 0) - dev_warn(rvu->dev, "CPT FLR hits hard loop counter\n"); + if (timeout == 0) + dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n"); - i = 0; - hard_lp_ctr = 100000; + timeout = 1000000; + /* Wait for CPT queue to become execution-quiescent */ do { inprog = rvu_read64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG)); - if ((INPROG_INFLIGHT(inprog) == 0) && - (INPROG_GWB(inprog) < 40) && - ((INPROG_GRB(inprog) == 0) || - (INPROG_GRB((inprog)) == 40))) { + if ((FIELD_GET(INFLIGHT, inprog) == 0) && + (FIELD_GET(GRB_CNT, inprog) == 0)) { i++; } else { i = 0; - hard_lp_ctr--; + timeout--; } - } while (hard_lp_ctr && (i < 10)); + } while ((timeout != 0) && (i < 10)); - if (hard_lp_ctr == 0) - dev_warn(rvu->dev, "CPT FLR hits hard loop counter\n"); + if (timeout == 0) + dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n"); + /* Wait for 2 us to flush all queue writes to memory */ + udelay(2); } int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int slot) @@ -918,18 +1047,15 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s if (is_cpt_pf(rvu, pcifunc) || is_cpt_vf(rvu, pcifunc)) cpt_rxc_teardown(rvu, blkaddr); + mutex_lock(&rvu->alias_lock); /* Enable BAR2 ALIAS for this pcifunc. */ reg = BIT_ULL(16) | pcifunc; - rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); + rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); cpt_lf_disable_iqueue(rvu, blkaddr, slot); - /* Set group drop to help clear out hardware */ - reg = rvu_read64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG)); - reg |= BIT_ULL(17); - rvu_write64(rvu, blkaddr, CPT_AF_BAR2_ALIASX(slot, CPT_LF_INPROG), reg); - - rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); + rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); + mutex_unlock(&rvu->alias_lock); return 0; } @@ -940,7 +1066,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, int nix_blkaddr) { - int cpt_pf_num = get_cpt_pf_num(rvu); + int cpt_pf_num = rvu->cpt_pf_num; struct cpt_inst_lmtst_req *req; dma_addr_t res_daddr; int timeout = 3000; @@ -1064,7 +1190,7 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc) /* Enable BAR2 ALIAS for this pcifunc. 
*/ reg = BIT_ULL(16) | pcifunc; - rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); + rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); for (i = 0; i < max_ctx_entries; i++) { cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i)); @@ -1077,10 +1203,19 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc) reg); } } - rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); + rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); unlock: mutex_unlock(&rvu->rsrc_lock); return 0; } + +int rvu_cpt_init(struct rvu *rvu) +{ + /* Retrieve CPT PF number */ + rvu->cpt_pf_num = get_cpt_pf_num(rvu); + spin_lock_init(&rvu->cpt_intr_lock); + + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 6b8747ebc08c..26e639e57dae 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -2058,6 +2058,13 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr, int err, restore_tx_en = 0; u64 cfg; + if (!is_rvu_otx2(rvu)) { + /* Skip SMQ flush if pkt count is zero */ + cfg = rvu_read64(rvu, blkaddr, NIX_AF_MDQX_IN_MD_COUNT(smq)); + if (!cfg) + return 0; + } + /* enable cgx tx if disabled */ if (is_pf_cgxmapped(rvu, pf)) { rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); @@ -4309,6 +4316,9 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw) rvu_write64(rvu, blkaddr, NIX_AF_SEB_CFG, cfg); + if (!is_rvu_otx2(rvu)) + rvu_nix_block_cn10k_init(rvu, nix_hw); + if (is_block_implemented(hw, blkaddr)) { err = nix_setup_txschq(rvu, nix_hw, blkaddr); if (err) @@ -4731,6 +4741,10 @@ int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu, #define CPT_INST_QSEL_PF_FUNC GENMASK_ULL(23, 8) #define CPT_INST_QSEL_SLOT GENMASK_ULL(7, 0) +#define CPT_INST_CREDIT_TH GENMASK_ULL(53, 32) +#define CPT_INST_CREDIT_BPID GENMASK_ULL(30, 22) +#define CPT_INST_CREDIT_CNT GENMASK_ULL(21, 0) + static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req, int blkaddr) { @@ -4767,14 +4781,23 @@ static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *r val); /* Set CPT credit */ - rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), - req->cpt_credit); + val = rvu_read64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx)); + if ((val & 0x3FFFFF) != 0x3FFFFF) + rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), + 0x3FFFFF - val); + + val = FIELD_PREP(CPT_INST_CREDIT_CNT, req->cpt_credit); + val |= FIELD_PREP(CPT_INST_CREDIT_BPID, req->bpid); + val |= FIELD_PREP(CPT_INST_CREDIT_TH, req->credit_th); + rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), val); } else { rvu_write64(rvu, blkaddr, NIX_AF_RX_IPSEC_GEN_CFG, 0x0); rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_INST_QSEL(cpt_idx), 0x0); - rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), - 0x3FFFFF); + val = rvu_read64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx)); + if ((val & 0x3FFFFF) != 0x3FFFFF) + rvu_write64(rvu, blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), + 0x3FFFFF - val); } } @@ -4792,6 +4815,30 @@ int rvu_mbox_handler_nix_inline_ipsec_cfg(struct rvu *rvu, return 0; } +int rvu_mbox_handler_nix_read_inline_ipsec_cfg(struct rvu *rvu, + struct msg_req *req, + struct nix_inline_ipsec_cfg *rsp) + +{ + u64 val; + + if (!is_block_implemented(rvu->hw, BLKADDR_CPT0)) + return 0; + + val = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_RX_IPSEC_GEN_CFG); + rsp->gen_cfg.egrp = FIELD_GET(IPSEC_GEN_CFG_EGRP, val); + rsp->gen_cfg.opcode = 
FIELD_GET(IPSEC_GEN_CFG_OPCODE, val); + rsp->gen_cfg.param1 = FIELD_GET(IPSEC_GEN_CFG_PARAM1, val); + rsp->gen_cfg.param2 = FIELD_GET(IPSEC_GEN_CFG_PARAM2, val); + + val = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_RX_CPTX_CREDIT(0)); + rsp->cpt_credit = FIELD_GET(CPT_INST_CREDIT_CNT, val); + rsp->credit_th = FIELD_GET(CPT_INST_CREDIT_TH, val); + rsp->bpid = FIELD_GET(CPT_INST_CREDIT_BPID, val); + + return 0; +} + int rvu_mbox_handler_nix_inline_ipsec_lf_cfg(struct rvu *rvu, struct nix_inline_ipsec_lf_cfg *req, struct msg_rsp *rsp) @@ -4835,6 +4882,7 @@ int rvu_mbox_handler_nix_inline_ipsec_lf_cfg(struct rvu *rvu, return 0; } + void rvu_nix_reset_mac(struct rvu_pfvf *pfvf, int pcifunc) { bool from_vf = !!(pcifunc & RVU_PFVF_FUNC_MASK); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index f69102d20c90..20ebb9c95c73 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -200,10 +200,8 @@ void npc_config_secret_key(struct rvu *rvu, int blkaddr) struct rvu_hwinfo *hw = rvu->hw; u8 intf; - if (!hwcap->npc_hash_extract) { - dev_info(rvu->dev, "HW does not support secret key configuration\n"); + if (!hwcap->npc_hash_extract) return; - } for (intf = 0; intf < hw->npc_intfs; intf++) { rvu_write64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY0(intf), @@ -221,10 +219,8 @@ void npc_program_mkex_hash(struct rvu *rvu, int blkaddr) struct rvu_hwinfo *hw = rvu->hw; u8 intf; - if (!hwcap->npc_hash_extract) { - dev_dbg(rvu->dev, "Field hash extract feature is not supported\n"); + if (!hwcap->npc_hash_extract) return; - } for (intf = 0; intf < hw->npc_intfs; intf++) { npc_program_mkex_hash_rx(rvu, blkaddr, intf); @@ -1853,19 +1849,13 @@ int rvu_npc_exact_init(struct rvu *rvu) /* Check exact match feature is supported */ npc_const3 = rvu_read64(rvu, blkaddr, NPC_AF_CONST3); - if (!(npc_const3 & BIT_ULL(62))) { - dev_info(rvu->dev, "%s: No support for exact match support\n", - __func__); + if (!(npc_const3 & BIT_ULL(62))) return 0; - } /* Check if kex profile has enabled EXACT match nibble */ cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX)); - if (!(cfg & NPC_EXACT_NIBBLE_HIT)) { - dev_info(rvu->dev, "%s: NPC exact match nibble not enabled in KEX profile\n", - __func__); + if (!(cfg & NPC_EXACT_NIBBLE_HIT)) return 0; - } /* Set capability to true */ rvu->hw->cap.npc_exact_match_enabled = true; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 0e0d536645ac..1729b22580ce 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -189,6 +189,7 @@ #define NIX_AF_RX_CFG (0x00D0) #define NIX_AF_AVG_DELAY (0x00E0) #define NIX_AF_CINT_DELAY (0x00F0) +#define NIX_AF_VWQE_TIMER (0x00F8) #define NIX_AF_RX_MCAST_BASE (0x0100) #define NIX_AF_RX_MCAST_CFG (0x0110) #define NIX_AF_RX_MCAST_BUF_BASE (0x0120) @@ -426,6 +427,7 @@ #define NIX_AF_RX_NPC_MIRROR_DROP (0x4730) #define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800 | (a) << 16) #define NIX_AF_LINKX_CFG(a) (0x4010 | (a) << 17) +#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16) #define NIX_PRIV_AF_INT_CFG (0x8000000) #define NIX_PRIV_LFX_CFG (0x8000010) @@ -545,6 +547,8 @@ #define CPT_LF_CTL 0x10 #define CPT_LF_INPROG 0x40 +#define CPT_LF_Q_SIZE 0x100 +#define CPT_LF_Q_INST_PTR 0x110 #define CPT_LF_Q_GRP_PTR 0x120 #define CPT_LF_CTX_FLUSH 0x510 diff --git 
a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index c1ea60bc2630..179433d0a54a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -2512,10 +2512,13 @@ static int otx2_xdp_setup(struct otx2_nic *pf, struct bpf_prog *prog) /* Network stack and XDP shared same rx queues. * Use separate tx queues for XDP and network stack. */ - if (pf->xdp_prog) + if (pf->xdp_prog) { pf->hw.xdp_queues = pf->hw.rx_queues; - else + xdp_features_set_redirect_target(dev, false); + } else { pf->hw.xdp_queues = 0; + xdp_features_clear_redirect_target(dev); + } pf->hw.tot_tx_queues += pf->hw.xdp_queues; @@ -2878,6 +2881,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->watchdog_timeo = OTX2_TX_TIMEOUT; netdev->netdev_ops = &otx2_netdev_ops; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(pf); diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c index cf456d62677f..87fff539d39d 100644 --- a/drivers/net/ethernet/marvell/pxa168_eth.c +++ b/drivers/net/ethernet/marvell/pxa168_eth.c @@ -965,7 +965,7 @@ static int pxa168_init_phy(struct net_device *dev) if (dev->phydev) return 0; - phy = mdiobus_scan(pep->smi_bus, pep->phy_addr); + phy = mdiobus_scan_c22(pep->smi_bus, pep->phy_addr); if (IS_ERR(phy)) return PTR_ERR(phy); diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index e3123723522e..14be6ea51b88 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -51,6 +51,7 @@ static const struct mtk_reg_map mtk_reg_map = { .delay_irq = 0x0a0c, .irq_status = 0x0a20, .irq_mask = 0x0a28, + .adma_rx_dbg0 = 0x0a38, .int_grp = 0x0a50, }, .qdma = { @@ -82,6 +83,8 @@ static const struct mtk_reg_map mtk_reg_map = { [0] = 0x2800, [1] = 0x2c00, }, + .pse_iq_sta = 0x0110, + .pse_oq_sta = 0x0118, }; static const struct mtk_reg_map mt7628_reg_map = { @@ -112,6 +115,7 @@ static const struct mtk_reg_map mt7986_reg_map = { .delay_irq = 0x620c, .irq_status = 0x6220, .irq_mask = 0x6228, + .adma_rx_dbg0 = 0x6238, .int_grp = 0x6250, }, .qdma = { @@ -143,6 +147,8 @@ static const struct mtk_reg_map mt7986_reg_map = { [0] = 0x4800, [1] = 0x4c00, }, + .pse_iq_sta = 0x0180, + .pse_oq_sta = 0x01a0, }; /* strings used by ethtool */ @@ -215,8 +221,8 @@ static int mtk_mdio_busy_wait(struct mtk_eth *eth) return -ETIMEDOUT; } -static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg, - u32 write_data) +static int _mtk_mdio_write_c22(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg, + u32 write_data) { int ret; @@ -224,35 +230,13 @@ static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg, if (ret < 0) return ret; - if (phy_reg & MII_ADDR_C45) { - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C45 | - PHY_IAC_CMD_C45_ADDR | - PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) | - PHY_IAC_ADDR(phy_addr) | - PHY_IAC_DATA(mdiobus_c45_regad(phy_reg)), - MTK_PHY_IAC); - - ret = mtk_mdio_busy_wait(eth); - if (ret < 0) - return ret; - - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C45 | - PHY_IAC_CMD_WRITE | - PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) | - PHY_IAC_ADDR(phy_addr) | - PHY_IAC_DATA(write_data), - MTK_PHY_IAC); - } else { - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C22 | - PHY_IAC_CMD_WRITE | - PHY_IAC_REG(phy_reg) 
| - PHY_IAC_ADDR(phy_addr) | - PHY_IAC_DATA(write_data), - MTK_PHY_IAC); - } + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C22 | + PHY_IAC_CMD_WRITE | + PHY_IAC_REG(phy_reg) | + PHY_IAC_ADDR(phy_addr) | + PHY_IAC_DATA(write_data), + MTK_PHY_IAC); ret = mtk_mdio_busy_wait(eth); if (ret < 0) @@ -261,7 +245,8 @@ static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg, return 0; } -static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg) +static int _mtk_mdio_write_c45(struct mtk_eth *eth, u32 phy_addr, + u32 devad, u32 phy_reg, u32 write_data) { int ret; @@ -269,33 +254,47 @@ static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg) if (ret < 0) return ret; - if (phy_reg & MII_ADDR_C45) { - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C45 | - PHY_IAC_CMD_C45_ADDR | - PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) | - PHY_IAC_ADDR(phy_addr) | - PHY_IAC_DATA(mdiobus_c45_regad(phy_reg)), - MTK_PHY_IAC); - - ret = mtk_mdio_busy_wait(eth); - if (ret < 0) - return ret; - - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C45 | - PHY_IAC_CMD_C45_READ | - PHY_IAC_REG(mdiobus_c45_devad(phy_reg)) | - PHY_IAC_ADDR(phy_addr), - MTK_PHY_IAC); - } else { - mtk_w32(eth, PHY_IAC_ACCESS | - PHY_IAC_START_C22 | - PHY_IAC_CMD_C22_READ | - PHY_IAC_REG(phy_reg) | - PHY_IAC_ADDR(phy_addr), - MTK_PHY_IAC); - } + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C45 | + PHY_IAC_CMD_C45_ADDR | + PHY_IAC_REG(devad) | + PHY_IAC_ADDR(phy_addr) | + PHY_IAC_DATA(phy_reg), + MTK_PHY_IAC); + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C45 | + PHY_IAC_CMD_WRITE | + PHY_IAC_REG(devad) | + PHY_IAC_ADDR(phy_addr) | + PHY_IAC_DATA(write_data), + MTK_PHY_IAC); + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + return 0; +} + +static int _mtk_mdio_read_c22(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg) +{ + int ret; + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C22 | + PHY_IAC_CMD_C22_READ | + PHY_IAC_REG(phy_reg) | + PHY_IAC_ADDR(phy_addr), + MTK_PHY_IAC); ret = mtk_mdio_busy_wait(eth); if (ret < 0) @@ -304,19 +303,70 @@ static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg) return mtk_r32(eth, MTK_PHY_IAC) & PHY_IAC_DATA_MASK; } -static int mtk_mdio_write(struct mii_bus *bus, int phy_addr, - int phy_reg, u16 val) +static int _mtk_mdio_read_c45(struct mtk_eth *eth, u32 phy_addr, + u32 devad, u32 phy_reg) +{ + int ret; + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C45 | + PHY_IAC_CMD_C45_ADDR | + PHY_IAC_REG(devad) | + PHY_IAC_ADDR(phy_addr) | + PHY_IAC_DATA(phy_reg), + MTK_PHY_IAC); + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + mtk_w32(eth, PHY_IAC_ACCESS | + PHY_IAC_START_C45 | + PHY_IAC_CMD_C45_READ | + PHY_IAC_REG(devad) | + PHY_IAC_ADDR(phy_addr), + MTK_PHY_IAC); + + ret = mtk_mdio_busy_wait(eth); + if (ret < 0) + return ret; + + return mtk_r32(eth, MTK_PHY_IAC) & PHY_IAC_DATA_MASK; +} + +static int mtk_mdio_write_c22(struct mii_bus *bus, int phy_addr, + int phy_reg, u16 val) { struct mtk_eth *eth = bus->priv; - return _mtk_mdio_write(eth, phy_addr, phy_reg, val); + return _mtk_mdio_write_c22(eth, phy_addr, phy_reg, val); } -static int mtk_mdio_read(struct mii_bus *bus, int phy_addr, int phy_reg) +static int mtk_mdio_write_c45(struct mii_bus *bus, int phy_addr, + int devad, int phy_reg, u16 val) { struct mtk_eth *eth = 
bus->priv; - return _mtk_mdio_read(eth, phy_addr, phy_reg); + return _mtk_mdio_write_c45(eth, phy_addr, devad, phy_reg, val); +} + +static int mtk_mdio_read_c22(struct mii_bus *bus, int phy_addr, int phy_reg) +{ + struct mtk_eth *eth = bus->priv; + + return _mtk_mdio_read_c22(eth, phy_addr, phy_reg); +} + +static int mtk_mdio_read_c45(struct mii_bus *bus, int phy_addr, int devad, + int phy_reg) +{ + struct mtk_eth *eth = bus->priv; + + return _mtk_mdio_read_c45(eth, phy_addr, devad, phy_reg); } static int mt7621_gmac0_rgmii_adjust(struct mtk_eth *eth, @@ -760,9 +810,10 @@ static int mtk_mdio_init(struct mtk_eth *eth) } eth->mii_bus->name = "mdio"; - eth->mii_bus->read = mtk_mdio_read; - eth->mii_bus->write = mtk_mdio_write; - eth->mii_bus->probe_capabilities = MDIOBUS_C22_C45; + eth->mii_bus->read = mtk_mdio_read_c22; + eth->mii_bus->write = mtk_mdio_write_c22; + eth->mii_bus->read_c45 = mtk_mdio_read_c45; + eth->mii_bus->write_c45 = mtk_mdio_write_c45; eth->mii_bus->priv = eth; eth->mii_bus->parent = eth->dev; @@ -2988,14 +3039,29 @@ static void mtk_dma_free(struct mtk_eth *eth) kfree(eth->scratch_head); } +static bool mtk_hw_reset_check(struct mtk_eth *eth) +{ + u32 val = mtk_r32(eth, MTK_INT_STATUS2); + + return (val & MTK_FE_INT_FQ_EMPTY) || (val & MTK_FE_INT_RFIFO_UF) || + (val & MTK_FE_INT_RFIFO_OV) || (val & MTK_FE_INT_TSO_FAIL) || + (val & MTK_FE_INT_TSO_ALIGN) || (val & MTK_FE_INT_TSO_ILLEGAL); +} + static void mtk_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mtk_mac *mac = netdev_priv(dev); struct mtk_eth *eth = mac->hw; + if (test_bit(MTK_RESETTING, &eth->state)) + return; + + if (!mtk_hw_reset_check(eth)) + return; + eth->netdev[mac->id]->stats.tx_errors++; - netif_err(eth, tx_err, dev, - "transmit timed out\n"); + netif_err(eth, tx_err, dev, "transmit timed out\n"); + schedule_work(&eth->pending_work); } @@ -3475,22 +3541,188 @@ static void mtk_set_mcr_max_rx(struct mtk_mac *mac, u32 val) mtk_w32(mac->hw, mcr_new, MTK_MAC_MCR(mac->id)); } -static int mtk_hw_init(struct mtk_eth *eth) +static void mtk_hw_reset(struct mtk_eth *eth) +{ + u32 val; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0); + val = RSTCTRL_PPE0_V2; + } else { + val = RSTCTRL_PPE0; + } + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) + val |= RSTCTRL_PPE1; + + ethsys_reset(eth, RSTCTRL_ETH | RSTCTRL_FE | val); + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, + 0x3ffffff); +} + +static u32 mtk_hw_reset_read(struct mtk_eth *eth) +{ + u32 val; + + regmap_read(eth->ethsys, ETHSYS_RSTCTRL, &val); + return val; +} + +static void mtk_hw_warm_reset(struct mtk_eth *eth) +{ + u32 rst_mask, val; + + regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, RSTCTRL_FE, + RSTCTRL_FE); + if (readx_poll_timeout_atomic(mtk_hw_reset_read, eth, val, + val & RSTCTRL_FE, 1, 1000)) { + dev_err(eth->dev, "warm reset failed\n"); + mtk_hw_reset(eth); + return; + } + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + rst_mask = RSTCTRL_ETH | RSTCTRL_PPE0_V2; + else + rst_mask = RSTCTRL_ETH | RSTCTRL_PPE0; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) + rst_mask |= RSTCTRL_PPE1; + + regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, rst_mask, rst_mask); + + udelay(1); + val = mtk_hw_reset_read(eth); + if (!(val & rst_mask)) + dev_err(eth->dev, "warm reset stage0 failed %08x (%08x)\n", + val, rst_mask); + + rst_mask |= RSTCTRL_FE; + regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, rst_mask,
~rst_mask); + + udelay(1); + val = mtk_hw_reset_read(eth); + if (val & rst_mask) + dev_err(eth->dev, "warm reset stage1 failed %08x (%08x)\n", + val, rst_mask); +} + +static bool mtk_hw_check_dma_hang(struct mtk_eth *eth) +{ + const struct mtk_reg_map *reg_map = eth->soc->reg_map; + bool gmac1_tx, gmac2_tx, gdm1_tx, gdm2_tx; + bool oq_hang, cdm1_busy, adma_busy; + bool wtx_busy, cdm_full, oq_free; + u32 wdidx, val, gdm1_fc, gdm2_fc; + bool qfsm_hang, qfwd_hang; + bool ret = false; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) + return false; + + /* WDMA sanity checks */ + wdidx = mtk_r32(eth, reg_map->wdma_base[0] + 0xc); + + val = mtk_r32(eth, reg_map->wdma_base[0] + 0x204); + wtx_busy = FIELD_GET(MTK_TX_DMA_BUSY, val); + + val = mtk_r32(eth, reg_map->wdma_base[0] + 0x230); + cdm_full = !FIELD_GET(MTK_CDM_TXFIFO_RDY, val); + + oq_free = (!(mtk_r32(eth, reg_map->pse_oq_sta) & GENMASK(24, 16)) && + !(mtk_r32(eth, reg_map->pse_oq_sta + 0x4) & GENMASK(8, 0)) && + !(mtk_r32(eth, reg_map->pse_oq_sta + 0x10) & GENMASK(24, 16))); + + if (wdidx == eth->reset.wdidx && wtx_busy && cdm_full && oq_free) { + if (++eth->reset.wdma_hang_count > 2) { + eth->reset.wdma_hang_count = 0; + ret = true; + } + goto out; + } + + /* QDMA sanity checks */ + qfsm_hang = !!mtk_r32(eth, reg_map->qdma.qtx_cfg + 0x234); + qfwd_hang = !mtk_r32(eth, reg_map->qdma.qtx_cfg + 0x308); + + gdm1_tx = FIELD_GET(GENMASK(31, 16), mtk_r32(eth, MTK_FE_GDM1_FSM)) > 0; + gdm2_tx = FIELD_GET(GENMASK(31, 16), mtk_r32(eth, MTK_FE_GDM2_FSM)) > 0; + gmac1_tx = FIELD_GET(GENMASK(31, 24), mtk_r32(eth, MTK_MAC_FSM(0))) != 1; + gmac2_tx = FIELD_GET(GENMASK(31, 24), mtk_r32(eth, MTK_MAC_FSM(1))) != 1; + gdm1_fc = mtk_r32(eth, reg_map->gdm1_cnt + 0x24); + gdm2_fc = mtk_r32(eth, reg_map->gdm1_cnt + 0x64); + + if (qfsm_hang && qfwd_hang && + ((gdm1_tx && gmac1_tx && gdm1_fc < 1) || + (gdm2_tx && gmac2_tx && gdm2_fc < 1))) { + if (++eth->reset.qdma_hang_count > 2) { + eth->reset.qdma_hang_count = 0; + ret = true; + } + goto out; + } + + /* ADMA sanity checks */ + oq_hang = !!(mtk_r32(eth, reg_map->pse_oq_sta) & GENMASK(8, 0)); + cdm1_busy = !!(mtk_r32(eth, MTK_FE_CDM1_FSM) & GENMASK(31, 16)); + adma_busy = !(mtk_r32(eth, reg_map->pdma.adma_rx_dbg0) & GENMASK(4, 0)) && + !(mtk_r32(eth, reg_map->pdma.adma_rx_dbg0) & BIT(6)); + + if (oq_hang && cdm1_busy && adma_busy) { + if (++eth->reset.adma_hang_count > 2) { + eth->reset.adma_hang_count = 0; + ret = true; + } + goto out; + } + + eth->reset.wdma_hang_count = 0; + eth->reset.qdma_hang_count = 0; + eth->reset.adma_hang_count = 0; +out: + eth->reset.wdidx = wdidx; + + return ret; +} + +static void mtk_hw_reset_monitor_work(struct work_struct *work) +{ + struct delayed_work *del_work = to_delayed_work(work); + struct mtk_eth *eth = container_of(del_work, struct mtk_eth, + reset.monitor_work); + + if (test_bit(MTK_RESETTING, &eth->state)) + goto out; + + /* DMA stuck checks */ + if (mtk_hw_check_dma_hang(eth)) + schedule_work(&eth->pending_work); + +out: + schedule_delayed_work(&eth->reset.monitor_work, + MTK_DMA_MONITOR_TIMEOUT); +} + +static int mtk_hw_init(struct mtk_eth *eth, bool reset) { u32 dma_mask = ETHSYS_DMA_AG_MAP_PDMA | ETHSYS_DMA_AG_MAP_QDMA | ETHSYS_DMA_AG_MAP_PPE; const struct mtk_reg_map *reg_map = eth->soc->reg_map; int i, val, ret; - if (!reset && test_and_set_bit(MTK_HW_INIT, &eth->state)) + if (!reset && test_and_set_bit(MTK_HW_INIT, &eth->state)) return 0; - pm_runtime_enable(eth->dev); - pm_runtime_get_sync(eth->dev); + if (!reset) { + pm_runtime_enable(eth->dev); + pm_runtime_get_sync(eth->dev); -
ret = mtk_clk_enable(eth); - if (ret) - goto err_disable_pm; + ret = mtk_clk_enable(eth); + if (ret) + goto err_disable_pm; + } if (eth->ethsys) regmap_update_bits(eth->ethsys, ETHSYS_DMA_AG_MAP, dma_mask, @@ -3514,22 +3746,14 @@ static int mtk_hw_init(struct mtk_eth *eth) return 0; } - if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { - regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0); - val = RSTCTRL_PPE0_V2; - } else { - val = RSTCTRL_PPE0; - } - - if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) - val |= RSTCTRL_PPE1; + msleep(100); - ethsys_reset(eth, RSTCTRL_ETH | RSTCTRL_FE | val); + if (reset) + mtk_hw_warm_reset(eth); + else + mtk_hw_reset(eth); if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { - regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, - 0x3ffffff); - /* Set FE to PDMAv2 if necessary */ val = mtk_r32(eth, MTK_FE_GLO_MISC); mtk_w32(eth, val | BIT(4), MTK_FE_GLO_MISC); @@ -3631,8 +3855,10 @@ static int mtk_hw_init(struct mtk_eth *eth) return 0; err_disable_pm: - pm_runtime_put_sync(eth->dev); - pm_runtime_disable(eth->dev); + if (!reset) { + pm_runtime_put_sync(eth->dev); + pm_runtime_disable(eth->dev); + } return ret; } @@ -3711,52 +3937,86 @@ static int mtk_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) return -EOPNOTSUPP; } +static void mtk_prepare_for_reset(struct mtk_eth *eth) +{ + u32 val; + int i; + + /* disable FE P3 and P4 */ + val = mtk_r32(eth, MTK_FE_GLO_CFG) | MTK_FE_LINK_DOWN_P3; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) + val |= MTK_FE_LINK_DOWN_P4; + mtk_w32(eth, val, MTK_FE_GLO_CFG); + + /* adjust PPE configurations to prepare for reset */ + for (i = 0; i < ARRAY_SIZE(eth->ppe); i++) + mtk_ppe_prepare_reset(eth->ppe[i]); + + /* disable NETSYS interrupts */ + mtk_w32(eth, 0, MTK_FE_INT_ENABLE); + + /* force link down GMAC */ + for (i = 0; i < 2; i++) { + val = mtk_r32(eth, MTK_MAC_MCR(i)) & ~MAC_MCR_FORCE_LINK; + mtk_w32(eth, val, MTK_MAC_MCR(i)); + } +} + static void mtk_pending_work(struct work_struct *work) { struct mtk_eth *eth = container_of(work, struct mtk_eth, pending_work); - int err, i; unsigned long restart = 0; + u32 val; + int i; rtnl_lock(); - - dev_dbg(eth->dev, "[%s][%d] reset\n", __func__, __LINE__); set_bit(MTK_RESETTING, &eth->state); + mtk_prepare_for_reset(eth); + mtk_wed_fe_reset(); + /* Run the reset preliminary configuration again to avoid any possible + * race during the FE reset, since the reset can run with the RTNL lock + * released. + */ + mtk_prepare_for_reset(eth); + /* stop all devices to make sure that dma is properly shut down */ for (i = 0; i < MTK_MAC_COUNT; i++) { - if (!eth->netdev[i]) + if (!eth->netdev[i] || !netif_running(eth->netdev[i])) continue; + mtk_stop(eth->netdev[i]); __set_bit(i, &restart); } - dev_dbg(eth->dev, "[%s][%d] mtk_stop ends\n", __func__, __LINE__); - /* restart underlying hardware such as power, clock, pin mux - * and the connected phy - */ - mtk_hw_deinit(eth); + usleep_range(15000, 16000); if (eth->dev->pins) pinctrl_select_state(eth->dev->pins->p, eth->dev->pins->default_state); - mtk_hw_init(eth); + mtk_hw_init(eth, true); /* restart DMA and enable IRQs */ for (i = 0; i < MTK_MAC_COUNT; i++) { if (!test_bit(i, &restart)) continue; - err = mtk_open(eth->netdev[i]); - if (err) { + + if (mtk_open(eth->netdev[i])) { netif_alert(eth, ifup, eth->netdev[i], - "Driver up/down cycle failed, closing device.\n"); + "Driver up/down cycle failed\n"); dev_close(eth->netdev[i]); } } - dev_dbg(eth->dev, "[%s][%d] reset done\n", __func__, __LINE__); + /* enable FE P3 and P4 */ + val = mtk_r32(eth, MTK_FE_GLO_CFG) & ~MTK_FE_LINK_DOWN_P3; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) + val &= ~MTK_FE_LINK_DOWN_P4; + mtk_w32(eth, val, MTK_FE_GLO_CFG); clear_bit(MTK_RESETTING, &eth->state); + mtk_wed_fe_reset_complete(); + rtnl_unlock(); } @@ -3801,6 +4061,7 @@ static int mtk_cleanup(struct mtk_eth *eth) mtk_unreg_dev(eth); mtk_free_dev(eth); cancel_work_sync(&eth->pending_work); + cancel_delayed_work_sync(&eth->reset.monitor_work); return 0; } @@ -4190,6 +4451,12 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np) register_netdevice_notifier(&mac->device_notifier); } + if (mtk_page_pool_enabled(eth)) + eth->netdev[id]->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG; + return 0; free_netdev: @@ -4255,6 +4522,7 @@ static int mtk_probe(struct platform_device *pdev) eth->rx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; INIT_WORK(&eth->rx_dim.work, mtk_dim_rx); + INIT_DELAYED_WORK(&eth->reset.monitor_work, mtk_hw_reset_monitor_work); eth->tx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; INIT_WORK(&eth->tx_dim.work, mtk_dim_tx); @@ -4368,7 +4636,7 @@ static int mtk_probe(struct platform_device *pdev) eth->msg_enable = netif_msg_init(mtk_msg_level, MTK_DEFAULT_MSG_ENABLE); INIT_WORK(&eth->pending_work, mtk_pending_work); - err = mtk_hw_init(eth); + err = mtk_hw_init(eth, false); if (err) goto err_wed_exit; @@ -4457,6 +4725,8 @@ static int mtk_probe(struct platform_device *pdev) netif_napi_add(&eth->dummy_dev, &eth->rx_napi, mtk_napi_rx); platform_set_drvdata(pdev, eth); + schedule_delayed_work(&eth->reset.monitor_work, + MTK_DMA_MONITOR_TIMEOUT); return 0;
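mtk_hw_reset_monitor_work() above is a self-rearming delayed work: once a second it runs the DMA sanity checks and, when a hang is detected, kicks the existing pending_work that performs the warm reset. Reduced to the bare scheme, with my_dev, hang_detected() and recover_work as placeholder names:

static void my_monitor_work(struct work_struct *work)
{
	struct my_dev *md = container_of(to_delayed_work(work),
					 struct my_dev, monitor_work);

	/* do not poke the hardware while a recovery is already in flight */
	if (!test_bit(MY_DEV_RESETTING, &md->state) && hang_detected(md))
		schedule_work(&md->recover_work);

	/* re-arm: check again in one second */
	schedule_delayed_work(&md->monitor_work, msecs_to_jiffies(1000));
}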
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 2d9186d32bc0..afc9d52e79bf 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -77,12 +77,24 @@ #define MTK_HW_LRO_REPLACE_DELTA 1000 #define MTK_HW_LRO_SDL_REMAIN_ROOM 1522 +/* Frame Engine Global Configuration */ +#define MTK_FE_GLO_CFG 0x00 +#define MTK_FE_LINK_DOWN_P3 BIT(11) +#define MTK_FE_LINK_DOWN_P4 BIT(12) + /* Frame Engine Global Reset Register */ #define MTK_RST_GL 0x04 #define RST_GL_PSE BIT(0) /* Frame Engine Interrupt Status Register */ #define MTK_INT_STATUS2 0x08 +#define MTK_FE_INT_ENABLE 0x0c +#define MTK_FE_INT_FQ_EMPTY BIT(8) +#define MTK_FE_INT_TSO_FAIL BIT(12) +#define MTK_FE_INT_TSO_ILLEGAL BIT(13)
+#define MTK_FE_INT_TSO_ALIGN BIT(14) +#define MTK_FE_INT_RFIFO_OV BIT(18) +#define MTK_FE_INT_RFIFO_UF BIT(19) #define MTK_GDM1_AF BIT(28) #define MTK_GDM2_AF BIT(29) @@ -272,6 +284,8 @@ #define MTK_RX_DONE_INT_V2 BIT(14) +#define MTK_CDM_TXFIFO_RDY BIT(7) + /* QDMA Interrupt grouping registers */ #define MTK_RLS_DONE_INT BIT(0) @@ -562,6 +576,17 @@ #define MT7628_SDM_RBCNT (MT7628_SDM_OFFSET + 0x10c) #define MT7628_SDM_CS_ERR (MT7628_SDM_OFFSET + 0x110) +#define MTK_FE_CDM1_FSM 0x220 +#define MTK_FE_CDM2_FSM 0x224 +#define MTK_FE_CDM3_FSM 0x238 +#define MTK_FE_CDM4_FSM 0x298 +#define MTK_FE_CDM5_FSM 0x318 +#define MTK_FE_CDM6_FSM 0x328 +#define MTK_FE_GDM1_FSM 0x228 +#define MTK_FE_GDM2_FSM 0x22C + +#define MTK_MAC_FSM(x) (0x1010C + ((x) * 0x100)) + struct mtk_rx_dma { unsigned int rxd1; unsigned int rxd2; @@ -958,6 +983,7 @@ struct mtk_reg_map { u32 delay_irq; /* delay interrupt */ u32 irq_status; /* interrupt status */ u32 irq_mask; /* interrupt mask */ + u32 adma_rx_dbg0; u32 int_grp; } pdma; struct { @@ -986,6 +1012,8 @@ struct mtk_reg_map { u32 gdma_to_ppe; u32 ppe_base; u32 wdma_base[2]; + u32 pse_iq_sta; + u32 pse_oq_sta; }; /* struct mtk_eth_data - This is the structure holding all differences @@ -1028,6 +1056,8 @@ struct mtk_soc_data { } txrx; }; +#define MTK_DMA_MONITOR_TIMEOUT msecs_to_jiffies(1000) + /* currently no SoC has more than 2 macs */ #define MTK_MAX_DEVS 2 @@ -1154,6 +1184,14 @@ struct mtk_eth { struct rhashtable flow_table; struct bpf_prog __rcu *prog; + + struct { + struct delayed_work monitor_work; + u32 wdidx; + u8 wdma_hang_count; + u8 qdma_hang_count; + u8 adma_hang_count; + } reset; }; /* struct mtk_mac - the structure that holds the info about the MACs of the diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c index 1ff024f42444..6883eb34cd8b 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe.c @@ -729,6 +729,33 @@ int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) return __mtk_foe_entry_idle_time(ppe, entry->data.ib1); } +int mtk_ppe_prepare_reset(struct mtk_ppe *ppe) +{ + if (!ppe) + return -EINVAL; + + /* disable KA */ + ppe_clear(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_KEEPALIVE); + ppe_clear(ppe, MTK_PPE_BIND_LMT1, MTK_PPE_NTU_KEEPALIVE); + ppe_w32(ppe, MTK_PPE_KEEPALIVE, 0); + usleep_range(10000, 11000); + + /* set KA timer to maximum */ + ppe_set(ppe, MTK_PPE_BIND_LMT1, MTK_PPE_NTU_KEEPALIVE); + ppe_w32(ppe, MTK_PPE_KEEPALIVE, 0xffffffff); + + /* set KA tick select */ + ppe_set(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_TICK_SEL); + ppe_set(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_KEEPALIVE); + usleep_range(10000, 11000); + + /* disable scan mode */ + ppe_clear(ppe, MTK_PPE_TB_CFG, MTK_PPE_TB_CFG_SCAN_MODE); + usleep_range(10000, 11000); + + return mtk_ppe_wait_busy(ppe); +} + struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, int version, int index) { diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h index b5e432031340..5e8bc48252b1 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe.h @@ -308,6 +308,7 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, void mtk_ppe_deinit(struct mtk_eth *eth); void mtk_ppe_start(struct mtk_ppe *ppe); int mtk_ppe_stop(struct mtk_ppe *ppe); +int mtk_ppe_prepare_reset(struct mtk_ppe *ppe); void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash); diff --git 
a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h index 59596d823d8b..0fdb983b0a88 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h @@ -58,6 +58,12 @@ #define MTK_PPE_TB_CFG_SCAN_MODE GENMASK(17, 16) #define MTK_PPE_TB_CFG_HASH_DEBUG GENMASK(19, 18) #define MTK_PPE_TB_CFG_INFO_SEL BIT(20) +#define MTK_PPE_TB_TICK_SEL BIT(24) + +#define MTK_PPE_BIND_LMT1 0x230 +#define MTK_PPE_NTU_KEEPALIVE GENMASK(23, 16) + +#define MTK_PPE_KEEPALIVE 0x234 enum { MTK_PPE_SCAN_MODE_DISABLED, diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c index 7050351250b7..02c03325911f 100644 --- a/drivers/net/ethernet/mediatek/mtk_star_emac.c +++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c @@ -1378,9 +1378,6 @@ static int mtk_star_mdio_read(struct mii_bus *mii, int phy_id, int regnum) unsigned int val, data; int ret; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - mtk_star_mdio_rwok_clear(priv); val = (regnum << MTK_STAR_OFF_PHY_CTRL0_PREG); @@ -1407,9 +1404,6 @@ static int mtk_star_mdio_write(struct mii_bus *mii, int phy_id, struct mtk_star_priv *priv = mii->priv; unsigned int val; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - mtk_star_mdio_rwok_clear(priv); val = data; diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index a6271449617f..95d890870984 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -206,6 +206,48 @@ mtk_wed_wo_reset(struct mtk_wed_device *dev) iounmap(reg); } +void mtk_wed_fe_reset(void) +{ + int i; + + mutex_lock(&hw_lock); + + for (i = 0; i < ARRAY_SIZE(hw_list); i++) { + struct mtk_wed_hw *hw = hw_list[i]; + struct mtk_wed_device *dev = hw->wed_dev; + int err; + + if (!dev || !dev->wlan.reset) + continue; + + /* reset callback blocks until WLAN reset is completed */ + err = dev->wlan.reset(dev); + if (err) + dev_err(dev->dev, "wlan reset failed: %d\n", err); + } + + mutex_unlock(&hw_lock); +} + +void mtk_wed_fe_reset_complete(void) +{ + int i; + + mutex_lock(&hw_lock); + + for (i = 0; i < ARRAY_SIZE(hw_list); i++) { + struct mtk_wed_hw *hw = hw_list[i]; + struct mtk_wed_device *dev = hw->wed_dev; + + if (!dev || !dev->wlan.reset_complete) + continue; + + dev->wlan.reset_complete(dev); + } + + mutex_unlock(&hw_lock); +} + static struct mtk_wed_hw * mtk_wed_assign(struct mtk_wed_device *dev) { @@ -745,7 +787,6 @@ mtk_wed_rro_ring_alloc(struct mtk_wed_device *dev, struct mtk_wed_ring *ring, ring->desc_size = sizeof(*ring->desc); ring->size = size; - memset(ring->desc, 0, size); return 0; } diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h index e012b8a82133..43ab77eaf683 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.h +++ b/drivers/net/ethernet/mediatek/mtk_wed.h @@ -128,6 +128,8 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, void mtk_wed_exit(void); int mtk_wed_flow_add(int index); void mtk_wed_flow_remove(int index); +void mtk_wed_fe_reset(void); +void mtk_wed_fe_reset_complete(void); #else static inline void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, @@ -147,6 +149,13 @@ static inline void mtk_wed_flow_remove(int index) { } +static inline void mtk_wed_fe_reset(void) +{ +} + +static inline void mtk_wed_fe_reset_complete(void) +{ +} #endif #ifdef CONFIG_DEBUG_FS diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c 
b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index a0a39643caf7..69fba29055e9 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -138,7 +138,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, enum dma_data_direction dir = rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE; int n_buf = 0; - spin_lock_bh(&q->lock); while (q->queued < q->n_desc) { struct mtk_wed_wo_queue_entry *entry; dma_addr_t addr; @@ -172,7 +171,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, q->queued++; n_buf++; } - spin_unlock_bh(&q->lock); return n_buf; } @@ -260,7 +258,6 @@ mtk_wed_wo_queue_alloc(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, int n_desc, int buf_size, int index, struct mtk_wed_wo_queue_regs *regs) { - spin_lock_init(&q->lock); q->regs = *regs; q->n_desc = n_desc; q->buf_size = buf_size; @@ -292,7 +289,6 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) struct page *page; int i; - spin_lock_bh(&q->lock); for (i = 0; i < q->n_desc; i++) { struct mtk_wed_wo_queue_entry *entry = &q->entry[i]; @@ -301,7 +297,6 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) skb_free_frag(entry->buf); entry->buf = NULL; } - spin_unlock_bh(&q->lock); if (!q->cache.va) return; @@ -316,7 +311,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) { struct page *page; - spin_lock_bh(&q->lock); for (;;) { void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true); @@ -325,7 +319,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) skb_free_frag(buf); } - spin_unlock_bh(&q->lock); if (!q->cache.va) return; @@ -351,8 +344,6 @@ int mtk_wed_wo_queue_tx_skb(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, int ret = 0, index; u32 ctrl; - spin_lock_bh(&q->lock); - q->tail = mtk_wed_mmio_r32(wo, q->regs.dma_idx); index = (q->head + 1) % q->n_desc; if (q->tail == index) { @@ -383,8 +374,6 @@ int mtk_wed_wo_queue_tx_skb(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, mtk_wed_wo_queue_kick(wo, q, q->head); mtk_wed_wo_kickout(wo); out: - spin_unlock_bh(&q->lock); - dev_kfree_skb(skb); return ret; diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h index c8fb85795864..dbcf42ce9173 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h @@ -211,7 +211,6 @@ struct mtk_wed_wo_queue { struct mtk_wed_wo_queue_regs regs; struct page_frag_cache cache; - spinlock_t lock; struct mtk_wed_wo_queue_desc *desc; dma_addr_t desc_dma; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_clock.c b/drivers/net/ethernet/mellanox/mlx4/en_clock.c index 98b5ffb4d729..9e3b76182088 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c @@ -58,9 +58,7 @@ u64 mlx4_en_get_cqe_ts(struct mlx4_cqe *cqe) return hi | lo; } -void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev, - struct skb_shared_hwtstamps *hwts, - u64 timestamp) +u64 mlx4_en_get_hwtstamp(struct mlx4_en_dev *mdev, u64 timestamp) { unsigned int seq; u64 nsec; @@ -70,8 +68,15 @@ void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev, nsec = timecounter_cyc2time(&mdev->clock, timestamp); } while (read_seqretry(&mdev->clock_lock, seq)); + return ns_to_ktime(nsec); +} + +void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev, + struct skb_shared_hwtstamps *hwts, + u64 timestamp) +{ memset(hwts, 0, sizeof(struct skb_shared_hwtstamps)); - 
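
/* [Editor's illustrative sketch, not part of the diff] The mlx4 refactor
 * here splits the seqlock-protected timecounter read out of
 * mlx4_en_fill_hwtstamps() into mlx4_en_get_hwtstamp(), so the skb path
 * and the new XDP rx_timestamp kfunc can share one conversion. A minimal
 * userspace model of that read side, using a C11 atomic sequence counter
 * (the parameter fields are read non-atomically; this is a sketch of the
 * retry discipline, not strictly race-free C): */
#include <stdatomic.h>
#include <stdint.h>

struct clock_state {
	atomic_uint seq;	/* even = stable, odd = writer active */
	uint64_t mult;		/* illustrative conversion parameters */
	unsigned int shift;
};

static uint64_t raw_to_ns(struct clock_state *cs, uint64_t raw)
{
	unsigned int start;
	uint64_t ns;

	do {
		while ((start = atomic_load(&cs->seq)) & 1)
			;				/* wait out the writer */
		ns = (raw * cs->mult) >> cs->shift;
	} while (atomic_load(&cs->seq) != start);	/* params changed: retry */

	return ns;
}
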
hwts->hwtstamp = ns_to_ktime(nsec); + hwts->hwtstamp = mlx4_en_get_hwtstamp(mdev, timestamp); } /** diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 8800d3f1f55c..e11bc0ac880e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -2889,6 +2889,11 @@ static const struct net_device_ops mlx4_netdev_ops_master = { .ndo_bpf = mlx4_xdp, }; +static const struct xdp_metadata_ops mlx4_xdp_metadata_ops = { + .xmo_rx_timestamp = mlx4_en_xdp_rx_timestamp, + .xmo_rx_hash = mlx4_en_xdp_rx_hash, +}; + struct mlx4_en_bond { struct work_struct work; struct mlx4_en_priv *priv; @@ -3310,6 +3315,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, dev->netdev_ops = &mlx4_netdev_ops_master; else dev->netdev_ops = &mlx4_netdev_ops; + dev->xdp_metadata_ops = &mlx4_xdp_metadata_ops; dev->watchdog_timeo = MLX4_EN_WATCHDOG_TIMEOUT; netif_set_real_num_tx_queues(dev, priv->tx_ring_num[TX]); netif_set_real_num_rx_queues(dev, priv->rx_ring_num); @@ -3410,6 +3416,8 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, priv->rss_hash_fn = ETH_RSS_HASH_TOP; } + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + /* MTU range: 68 - hw-specific max */ dev->min_mtu = ETH_MIN_MTU; dev->max_mtu = priv->max_mtu; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index 8f762fc170b3..0869d4fff17b 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -661,9 +661,41 @@ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va, #define MLX4_CQE_STATUS_IP_ANY (MLX4_CQE_STATUS_IPV4) #endif +struct mlx4_en_xdp_buff { + struct xdp_buff xdp; + struct mlx4_cqe *cqe; + struct mlx4_en_dev *mdev; + struct mlx4_en_rx_ring *ring; + struct net_device *dev; +}; + +int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) +{ + struct mlx4_en_xdp_buff *_ctx = (void *)ctx; + + if (unlikely(_ctx->ring->hwtstamp_rx_filter != HWTSTAMP_FILTER_ALL)) + return -EOPNOTSUPP; + + *timestamp = mlx4_en_get_hwtstamp(_ctx->mdev, + mlx4_en_get_cqe_ts(_ctx->cqe)); + return 0; +} + +int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash) +{ + struct mlx4_en_xdp_buff *_ctx = (void *)ctx; + + if (unlikely(!(_ctx->dev->features & NETIF_F_RXHASH))) + return -EOPNOTSUPP; + + *hash = be32_to_cpu(_ctx->cqe->immed_rss_invalid); + return 0; +} + int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget) { struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_en_xdp_buff mxbuf = {}; int factor = priv->cqe_factor; struct mlx4_en_rx_ring *ring; struct bpf_prog *xdp_prog; @@ -671,7 +703,6 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud bool doorbell_pending; bool xdp_redir_flush; struct mlx4_cqe *cqe; - struct xdp_buff xdp; int polled = 0; int index; @@ -681,7 +712,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud ring = priv->rx_ring[cq_ring]; xdp_prog = rcu_dereference_bh(ring->xdp_prog); - xdp_init_buff(&xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq); + xdp_init_buff(&mxbuf.xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq); doorbell_pending = false; xdp_redir_flush = false; @@ -776,24 +807,28 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud priv->frag_info[0].frag_size, DMA_FROM_DEVICE); - xdp_prepare_buff(&xdp, va - 
frags[0].page_offset, - frags[0].page_offset, length, false); - orig_data = xdp.data; - - act = bpf_prog_run_xdp(xdp_prog, &xdp); - - length = xdp.data_end - xdp.data; - if (xdp.data != orig_data) { - frags[0].page_offset = xdp.data - - xdp.data_hard_start; - va = xdp.data; + xdp_prepare_buff(&mxbuf.xdp, va - frags[0].page_offset, + frags[0].page_offset, length, true); + orig_data = mxbuf.xdp.data; + mxbuf.cqe = cqe; + mxbuf.mdev = priv->mdev; + mxbuf.ring = ring; + mxbuf.dev = dev; + + act = bpf_prog_run_xdp(xdp_prog, &mxbuf.xdp); + + length = mxbuf.xdp.data_end - mxbuf.xdp.data; + if (mxbuf.xdp.data != orig_data) { + frags[0].page_offset = mxbuf.xdp.data - + mxbuf.xdp.data_hard_start; + va = mxbuf.xdp.data; } switch (act) { case XDP_PASS: break; case XDP_REDIRECT: - if (likely(!xdp_do_redirect(dev, &xdp, xdp_prog))) { + if (likely(!xdp_do_redirect(dev, &mxbuf.xdp, xdp_prog))) { ring->xdp_redirect++; xdp_redir_flush = true; frags[0].page = NULL; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c index c5758637b7be..2f79378fbf6e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c @@ -699,32 +699,32 @@ static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc, inl->byte_count = cpu_to_be32(1 << 31 | skb->len); } else { inl->byte_count = cpu_to_be32(1 << 31 | MIN_PKT_LEN); - memset(((void *)(inl + 1)) + skb->len, 0, + memset(inl->data + skb->len, 0, MIN_PKT_LEN - skb->len); } - skb_copy_from_linear_data(skb, inl + 1, hlen); + skb_copy_from_linear_data(skb, inl->data, hlen); if (shinfo->nr_frags) - memcpy(((void *)(inl + 1)) + hlen, fragptr, + memcpy(inl->data + hlen, fragptr, skb_frag_size(&shinfo->frags[0])); } else { inl->byte_count = cpu_to_be32(1 << 31 | spc); if (hlen <= spc) { - skb_copy_from_linear_data(skb, inl + 1, hlen); + skb_copy_from_linear_data(skb, inl->data, hlen); if (hlen < spc) { - memcpy(((void *)(inl + 1)) + hlen, + memcpy(inl->data + hlen, fragptr, spc - hlen); fragptr += spc - hlen; } - inl = (void *) (inl + 1) + spc; - memcpy(((void *)(inl + 1)), fragptr, skb->len - spc); + inl = (void *)inl->data + spc; + memcpy(inl->data, fragptr, skb->len - spc); } else { - skb_copy_from_linear_data(skb, inl + 1, spc); - inl = (void *) (inl + 1) + spc; - skb_copy_from_linear_data_offset(skb, spc, inl + 1, + skb_copy_from_linear_data(skb, inl->data, spc); + inl = (void *)inl->data + spc; + skb_copy_from_linear_data_offset(skb, spc, inl->data, hlen - spc); if (shinfo->nr_frags) - memcpy(((void *)(inl + 1)) + hlen - spc, + memcpy(inl->data + hlen - spc, fragptr, skb_frag_size(&shinfo->frags[0])); } diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index 3ae246391549..277738c50c56 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -265,29 +265,29 @@ static void mlx4_devlink_set_params_init_values(struct devlink *devlink) union devlink_param_value value; value.vbool = !!mlx4_internal_err_reset; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET, + value); value.vu32 = 1UL << log_num_mac; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_MAX_MACS, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_MAX_MACS, + value); value.vbool = enable_64b_cqe_eqe; - devlink_param_driverinit_value_set(devlink, - 
MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE, - value); + devl_param_driverinit_value_set(devlink, + MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE, + value); value.vbool = enable_4k_uar; - devlink_param_driverinit_value_set(devlink, - MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR, - value); + devl_param_driverinit_value_set(devlink, + MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR, + value); value.vbool = false; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT, + value); } static inline void mlx4_set_num_reserved_uars(struct mlx4_dev *dev, @@ -3910,37 +3910,37 @@ static void mlx4_devlink_param_load_driverinit_values(struct devlink *devlink) union devlink_param_value saved_value; int err; - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET, - &saved_value); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET, + &saved_value); if (!err && mlx4_internal_err_reset != saved_value.vbool) { mlx4_internal_err_reset = saved_value.vbool; /* Notify on value changed on runtime configuration mode */ - devlink_param_value_changed(devlink, - DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET); + devl_param_value_changed(devlink, + DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET); } - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_MAX_MACS, - &saved_value); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_MAX_MACS, + &saved_value); if (!err) log_num_mac = order_base_2(saved_value.vu32); - err = devlink_param_driverinit_value_get(devlink, - MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE, - &saved_value); + err = devl_param_driverinit_value_get(devlink, + MLX4_DEVLINK_PARAM_ID_ENABLE_64B_CQE_EQE, + &saved_value); if (!err) enable_64b_cqe_eqe = saved_value.vbool; - err = devlink_param_driverinit_value_get(devlink, - MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR, - &saved_value); + err = devl_param_driverinit_value_get(devlink, + MLX4_DEVLINK_PARAM_ID_ENABLE_4K_UAR, + &saved_value); if (!err) enable_4k_uar = saved_value.vbool; - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT, - &saved_value); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT, + &saved_value); if (!err && crdump->snapshot_enable != saved_value.vbool) { crdump->snapshot_enable = saved_value.vbool; - devlink_param_value_changed(devlink, - DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT); + devl_param_value_changed(devlink, + DEVLINK_PARAM_GENERIC_ID_REGION_SNAPSHOT); } } @@ -4021,8 +4021,8 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id) mutex_init(&dev->persist->interface_state_mutex); mutex_init(&dev->persist->pci_status_mutex); - ret = devlink_params_register(devlink, mlx4_devlink_params, - ARRAY_SIZE(mlx4_devlink_params)); + ret = devl_params_register(devlink, mlx4_devlink_params, + ARRAY_SIZE(mlx4_devlink_params)); if (ret) goto err_devlink_unregister; mlx4_devlink_set_params_init_values(devlink); @@ -4031,14 +4031,13 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id) goto err_params_unregister; pci_save_state(pdev); - devlink_set_features(devlink, DEVLINK_F_RELOAD); devl_unlock(devlink); devlink_register(devlink); return 0; err_params_unregister: - devlink_params_unregister(devlink, mlx4_devlink_params, - ARRAY_SIZE(mlx4_devlink_params)); + devl_params_unregister(devlink, mlx4_devlink_params, + 
ARRAY_SIZE(mlx4_devlink_params)); err_devlink_unregister: kfree(dev->persist); err_devlink_free: @@ -4181,8 +4180,8 @@ static void mlx4_remove_one(struct pci_dev *pdev) pci_release_regions(pdev); mlx4_pci_disable_device(dev); - devlink_params_unregister(devlink, mlx4_devlink_params, - ARRAY_SIZE(mlx4_devlink_params)); + devl_params_unregister(devlink, mlx4_devlink_params, + ARRAY_SIZE(mlx4_devlink_params)); kfree(dev->persist); devl_unlock(devlink); devlink_free(devlink); diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h index 3d4226ddba5e..544e09b97483 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -796,10 +796,15 @@ void mlx4_en_update_pfc_stats_bitmap(struct mlx4_dev *dev, int mlx4_en_netdev_event(struct notifier_block *this, unsigned long event, void *ptr); +struct xdp_md; +int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp); +int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash); + /* * Functions for time stamping */ u64 mlx4_en_get_cqe_ts(struct mlx4_cqe *cqe); +u64 mlx4_en_get_hwtstamp(struct mlx4_en_dev *mdev, u64 timestamp); void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev, struct skb_shared_hwtstamps *hwts, u64 timestamp); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig index 26685fd0fdaa..bb1d7b039a7e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig +++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig @@ -85,7 +85,7 @@ config MLX5_BRIDGE config MLX5_CLS_ACT bool "MLX5 TC classifier action support" - depends on MLX5_ESWITCH && NET_CLS_ACT + depends on MLX5_ESWITCH && NET_CLS_ACT && NET_TC_SKB_EXT default y help mlx5 ConnectX offloads support for TC classifier action (NET_CLS_ACT), @@ -100,7 +100,7 @@ config MLX5_CLS_ACT config MLX5_TC_CT bool "MLX5 TC connection tracking offload support" - depends on MLX5_CLS_ACT && NF_FLOW_TABLE && NET_ACT_CT && NET_TC_SKB_EXT + depends on MLX5_CLS_ACT && NF_FLOW_TABLE && NET_ACT_CT default y help Say Y here if you want to support offloading connection tracking rules diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index cd4a1ab0ea78..8d4e25cc54ea 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -47,7 +47,7 @@ mlx5_core-$(CONFIG_MLX5_CLS_ACT) += en_tc.o en/rep/tc.o en/rep/neigh.o \ en/tc_tun_vxlan.o en/tc_tun_gre.o en/tc_tun_geneve.o \ en/tc_tun_mplsoudp.o diag/en_tc_tracepoint.o \ en/tc/post_act.o en/tc/int_port.o en/tc/meter.o \ - en/tc/post_meter.o + en/tc/post_meter.o en/tc/act_stats.o mlx5_core-$(CONFIG_MLX5_CLS_ACT) += en/tc/act/act.o en/tc/act/drop.o en/tc/act/trap.o \ en/tc/act/accept.o en/tc/act/mark.o en/tc/act/goto.o \ @@ -97,7 +97,7 @@ mlx5_core-$(CONFIG_MLX5_EN_MACSEC) += en_accel/macsec.o en_accel/macsec_fs.o \ mlx5_core-$(CONFIG_MLX5_EN_IPSEC) += en_accel/ipsec.o en_accel/ipsec_rxtx.o \ en_accel/ipsec_stats.o en_accel/ipsec_fs.o \ - en_accel/ipsec_offload.o + en_accel/ipsec_offload.o lib/ipsec_fs_roce.o mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/ktls_stats.o \ en_accel/fs_tcp.o en_accel/ktls.o en_accel/ktls_txrx.o \ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c index c837103a9ee3..b00e33ed05e9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c @@ -47,6 
+47,25 @@ #define CREATE_TRACE_POINTS #include "diag/cmd_tracepoint.h" +struct mlx5_ifc_mbox_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_mbox_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x40]; +}; + enum { CMD_IF_REV = 5, }; @@ -70,6 +89,27 @@ enum { MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10, }; +static u16 in_to_opcode(void *in) +{ + return MLX5_GET(mbox_in, in, opcode); +} + +/* Returns true for opcodes that might be triggered very frequently and throttle + * the command interface. Limit their command slots usage. + */ +static bool mlx5_cmd_is_throttle_opcode(u16 op) +{ + switch (op) { + case MLX5_CMD_OP_CREATE_GENERAL_OBJECT: + case MLX5_CMD_OP_DESTROY_GENERAL_OBJECT: + case MLX5_CMD_OP_MODIFY_GENERAL_OBJECT: + case MLX5_CMD_OP_QUERY_GENERAL_OBJECT: + case MLX5_CMD_OP_SYNC_CRYPTO: + return true; + } + return false; +} + static struct mlx5_cmd_work_ent * cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in, struct mlx5_cmd_msg *out, void *uout, int uout_size, @@ -91,6 +131,7 @@ cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in, ent->context = context; ent->cmd = cmd; ent->page_queue = page_queue; + ent->op = in_to_opcode(in->first.data); refcount_set(&ent->refcnt, 1); return ent; @@ -483,6 +524,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op, case MLX5_CMD_OP_QUERY_VHCA_MIGRATION_STATE: case MLX5_CMD_OP_SAVE_VHCA_STATE: case MLX5_CMD_OP_LOAD_VHCA_STATE: + case MLX5_CMD_OP_SYNC_CRYPTO: *status = MLX5_DRIVER_STATUS_ABORTED; *synd = MLX5_DRIVER_SYND; return -ENOLINK; @@ -685,6 +727,7 @@ const char *mlx5_command_str(int command) MLX5_COMMAND_STR_CASE(QUERY_VHCA_MIGRATION_STATE); MLX5_COMMAND_STR_CASE(SAVE_VHCA_STATE); MLX5_COMMAND_STR_CASE(LOAD_VHCA_STATE); + MLX5_COMMAND_STR_CASE(SYNC_CRYPTO); default: return "unknown command opcode"; } } @@ -752,25 +795,6 @@ static int cmd_status_to_err(u8 status) } } -struct mlx5_ifc_mbox_out_bits { - u8 status[0x8]; - u8 reserved_at_8[0x18]; - - u8 syndrome[0x20]; - - u8 reserved_at_40[0x40]; -}; - -struct mlx5_ifc_mbox_in_bits { - u8 opcode[0x10]; - u8 uid[0x10]; - - u8 reserved_at_20[0x10]; - u8 op_mod[0x10]; - - u8 reserved_at_40[0x40]; -}; - void mlx5_cmd_out_err(struct mlx5_core_dev *dev, u16 opcode, u16 op_mod, void *out) { u32 syndrome = MLX5_GET(mbox_out, out, syndrome); @@ -788,11 +812,12 @@ static void cmd_status_print(struct mlx5_core_dev *dev, void *in, void *out) u16 opcode, op_mod; u16 uid; - opcode = MLX5_GET(mbox_in, in, opcode); + opcode = in_to_opcode(in); op_mod = MLX5_GET(mbox_in, in, op_mod); uid = MLX5_GET(mbox_in, in, uid); - if (!uid && opcode != MLX5_CMD_OP_DESTROY_MKEY) + if (!uid && opcode != MLX5_CMD_OP_DESTROY_MKEY && + opcode != MLX5_CMD_OP_CREATE_UCTX) mlx5_cmd_out_err(dev, opcode, op_mod, out); } @@ -800,7 +825,7 @@ int mlx5_cmd_check(struct mlx5_core_dev *dev, int err, void *in, void *out) { /* aborted due to PCI error or via reset flow mlx5_cmd_trigger_completions() */ if (err == -ENXIO) { - u16 opcode = MLX5_GET(mbox_in, in, opcode); + u16 opcode = in_to_opcode(in); u32 syndrome; u8 status; @@ -829,9 +854,9 @@ static void dump_command(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent, int input) { struct mlx5_cmd_msg *msg = input ? 
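
/* [Editor's illustrative sketch, not part of the diff] in_to_opcode()
 * replaces the scattered MLX5_GET(mbox_in, in, opcode) calls and the
 * manual be32_to_cpu(lay->in[0]) >> 16 in cmd_work_handler(): per the
 * mbox_in layout above, the opcode is the top 16 bits of the first
 * big-endian 32-bit word. A userspace equivalent: */
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

static uint16_t in_to_opcode_model(const void *in)
{
	uint32_t w;

	/* first word, big endian: opcode[31:16] | uid[15:0] */
	memcpy(&w, in, sizeof(w));
	return (uint16_t)(ntohl(w) >> 16);
}
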
ent->in : ent->out; - u16 op = MLX5_GET(mbox_in, ent->lay->in, opcode); struct mlx5_cmd_mailbox *next = msg->next; int n = mlx5_calc_cmd_blocks(msg); + u16 op = ent->op; int data_only; u32 offset = 0; int dump_len; @@ -883,11 +908,6 @@ static void dump_command(struct mlx5_core_dev *dev, mlx5_core_dbg(dev, "cmd[%d]: end dump\n", ent->idx); } -static u16 msg_to_opcode(struct mlx5_cmd_msg *in) -{ - return MLX5_GET(mbox_in, in->first.data, opcode); -} - static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced); static void cb_timeout_handler(struct work_struct *work) @@ -905,13 +925,13 @@ static void cb_timeout_handler(struct work_struct *work) /* Maybe got handled by eq recover ? */ if (!test_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state)) { mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, recovered after timeout\n", ent->idx, - mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); + mlx5_command_str(ent->op), ent->op); goto out; /* phew, already handled */ } ent->ret = -ETIMEDOUT; mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n", - ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); + ent->idx, mlx5_command_str(ent->op), ent->op); mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true); out: @@ -985,7 +1005,6 @@ static void cmd_work_handler(struct work_struct *work) ent->lay = lay; memset(lay, 0, sizeof(*lay)); memcpy(lay->in, ent->in->first.data, sizeof(lay->in)); - ent->op = be32_to_cpu(lay->in[0]) >> 16; if (ent->in->next) lay->in_ptr = cpu_to_be64(ent->in->next->dma); lay->inlen = cpu_to_be32(ent->in->len); @@ -1098,12 +1117,12 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev, */ if (wait_for_completion_timeout(&ent->done, timeout)) { mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) recovered after timeout\n", ent->idx, - mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); + mlx5_command_str(ent->op), ent->op); return; } mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) No done completion\n", ent->idx, - mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); + mlx5_command_str(ent->op), ent->op); ent->ret = -ETIMEDOUT; mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true); @@ -1130,12 +1149,10 @@ out_err: if (err == -ETIMEDOUT) { mlx5_core_warn(dev, "%s(0x%x) timeout. 
Will cause a leak of a command resource\n", - mlx5_command_str(msg_to_opcode(ent->in)), - msg_to_opcode(ent->in)); + mlx5_command_str(ent->op), ent->op); } else if (err == -ECANCELED) { mlx5_core_warn(dev, "%s(0x%x) canceled on out of queue timeout.\n", - mlx5_command_str(msg_to_opcode(ent->in)), - msg_to_opcode(ent->in)); + mlx5_command_str(ent->op), ent->op); } mlx5_core_dbg(dev, "err %d, delivery status %s(%d)\n", err, deliv_status_to_str(ent->status), ent->status); @@ -1169,7 +1186,6 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, u8 status = 0; int err = 0; s64 ds; - u16 op; if (callback && page_queue) return -EINVAL; @@ -1209,9 +1225,8 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, goto out_free; ds = ent->ts2 - ent->ts1; - op = MLX5_GET(mbox_in, in->first.data, opcode); - if (op < MLX5_CMD_OP_MAX) { - stats = &cmd->stats[op]; + if (ent->op < MLX5_CMD_OP_MAX) { + stats = &cmd->stats[ent->op]; spin_lock_irq(&stats->lock); stats->sum += ds; ++stats->n; @@ -1219,7 +1234,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, } mlx5_core_dbg_mask(dev, 1 << MLX5_CMD_TIME, "fw exec time for %s is %lld nsec\n", - mlx5_command_str(op), ds); + mlx5_command_str(ent->op), ds); out_free: status = ent->status; @@ -1816,7 +1831,7 @@ cache_miss: static int is_manage_pages(void *in) { - return MLX5_GET(mbox_in, in, opcode) == MLX5_CMD_OP_MANAGE_PAGES; + return in_to_opcode(in) == MLX5_CMD_OP_MANAGE_PAGES; } /* Notes: @@ -1827,8 +1842,9 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out, int out_size, mlx5_cmd_cbk_t callback, void *context, bool force_polling) { - u16 opcode = MLX5_GET(mbox_in, in, opcode); struct mlx5_cmd_msg *inb, *outb; + u16 opcode = in_to_opcode(in); + bool throttle_op; int pages_queue; gfp_t gfp; u8 token; @@ -1837,13 +1853,21 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out, if (mlx5_cmd_is_down(dev) || !opcode_allowed(&dev->cmd, opcode)) return -ENXIO; + throttle_op = mlx5_cmd_is_throttle_opcode(opcode); + if (throttle_op) { + /* atomic context may not sleep */ + if (callback) + return -EINVAL; + down(&dev->cmd.throttle_sem); + } + pages_queue = is_manage_pages(in); gfp = callback ? 
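
/* [Editor's illustrative sketch, not part of the diff] The throttle added
 * here gives "heavy" opcodes only half of the command slots: cmd_exec()
 * takes throttle_sem before issuing such a command and releases it on the
 * way out, and rejects them in callback (atomic) context because down()
 * may sleep. A POSIX-semaphore model of the same shape: */
#include <semaphore.h>

#define MAX_REG_CMDS 16			/* illustrative slot count */

static sem_t throttle_sem;

static int is_heavy_opcode(int op)	/* stands in for mlx5_cmd_is_throttle_opcode() */
{
	return op == 1;			/* pretend opcode 1 is throttled */
}

void cmd_iface_init(void)
{
	/* DIV_ROUND_UP(max_reg_cmds, 2), as in mlx5_cmd_init() below */
	sem_init(&throttle_sem, 0, (MAX_REG_CMDS + 1) / 2);
}

int cmd_exec_model(int opcode, int async)
{
	int throttled = is_heavy_opcode(opcode);

	if (throttled) {
		if (async)
			return -1;	/* would sleep in atomic context */
		sem_wait(&throttle_sem);
	}
	/* ... build mailboxes, ring doorbell, wait for completion ... */
	if (throttled)
		sem_post(&throttle_sem);
	return 0;
}
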
GFP_ATOMIC : GFP_KERNEL; inb = alloc_msg(dev, in_size, gfp); if (IS_ERR(inb)) { err = PTR_ERR(inb); - return err; + goto out_up; } token = alloc_token(&dev->cmd); @@ -1877,6 +1901,9 @@ out_out: mlx5_free_cmd_msg(dev, outb); out_in: free_msg(dev, inb); +out_up: + if (throttle_op) + up(&dev->cmd.throttle_sem); return err; } @@ -1950,8 +1977,8 @@ static int cmd_status_err(struct mlx5_core_dev *dev, int err, u16 opcode, u16 op int mlx5_cmd_do(struct mlx5_core_dev *dev, void *in, int in_size, void *out, int out_size) { int err = cmd_exec(dev, in, in_size, out, out_size, NULL, NULL, false); - u16 opcode = MLX5_GET(mbox_in, in, opcode); u16 op_mod = MLX5_GET(mbox_in, in, op_mod); + u16 opcode = in_to_opcode(in); return cmd_status_err(dev, err, opcode, op_mod, out); } @@ -1996,8 +2023,8 @@ int mlx5_cmd_exec_polling(struct mlx5_core_dev *dev, void *in, int in_size, void *out, int out_size) { int err = cmd_exec(dev, in, in_size, out, out_size, NULL, NULL, true); - u16 opcode = MLX5_GET(mbox_in, in, opcode); u16 op_mod = MLX5_GET(mbox_in, in, op_mod); + u16 opcode = in_to_opcode(in); err = cmd_status_err(dev, err, opcode, op_mod, out); return mlx5_cmd_check(dev, err, in, out); @@ -2049,7 +2076,7 @@ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size, work->ctx = ctx; work->user_callback = callback; - work->opcode = MLX5_GET(mbox_in, in, opcode); + work->opcode = in_to_opcode(in); work->op_mod = MLX5_GET(mbox_in, in, op_mod); work->out = out; if (WARN_ON(!atomic_inc_not_zero(&ctx->num_inflight))) @@ -2220,6 +2247,7 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev) sema_init(&cmd->sem, cmd->max_reg_cmds); sema_init(&cmd->pages_sem, 1); + sema_init(&cmd->throttle_sem, DIV_ROUND_UP(cmd->max_reg_cmds, 2)); cmd_h = (u32)((u64)(cmd->dma) >> 32); cmd_l = (u32)(cmd->dma); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c index 0571e40c6ee5..445fe30c3d0b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c @@ -59,6 +59,9 @@ bool mlx5_eth_supported(struct mlx5_core_dev *dev) if (!IS_ENABLED(CONFIG_MLX5_CORE_EN)) return false; + if (mlx5_core_is_management_pf(dev)) + return false; + if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH) return false; @@ -111,9 +114,9 @@ static bool is_eth_enabled(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(priv_to_devlink(dev), - DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH, - &val); + err = devl_param_driverinit_value_get(priv_to_devlink(dev), + DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH, + &val); return err ? false : val.vbool; } @@ -144,9 +147,9 @@ static bool is_vnet_enabled(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(priv_to_devlink(dev), - DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET, - &val); + err = devl_param_driverinit_value_get(priv_to_devlink(dev), + DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET, + &val); return err ? 
false : val.vbool; } @@ -198,6 +201,9 @@ bool mlx5_rdma_supported(struct mlx5_core_dev *dev) if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND)) return false; + if (mlx5_core_is_management_pf(dev)) + return false; + if (dev->priv.flags & MLX5_PRIV_FLAGS_DISABLE_IB_ADEV) return false; @@ -215,9 +221,9 @@ static bool is_ib_enabled(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(priv_to_devlink(dev), - DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA, - &val); + err = devl_param_driverinit_value_get(priv_to_devlink(dev), + DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA, + &val); return err ? false : val.vbool; } @@ -343,7 +349,6 @@ int mlx5_attach_device(struct mlx5_core_dev *dev) devl_assert_locked(priv_to_devlink(dev)); mutex_lock(&mlx5_intf_mutex); priv->flags &= ~MLX5_PRIV_FLAGS_DETACH; - priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; for (i = 0; i < ARRAY_SIZE(mlx5_adev_devices); i++) { if (!priv->adev[i]) { bool is_supported = false; @@ -372,10 +377,6 @@ int mlx5_attach_device(struct mlx5_core_dev *dev) /* Pay attention that this is not PCI driver that * mlx5_core_dev is connected, but auxiliary driver. - * - * Here we can race of module unload with devlink - * reload, but we don't need to take extra lock because - * we are holding global mlx5_intf_mutex. */ if (!adev->dev.driver) continue; @@ -391,12 +392,11 @@ int mlx5_attach_device(struct mlx5_core_dev *dev) break; } } - priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; mutex_unlock(&mlx5_intf_mutex); return ret; } -void mlx5_detach_device(struct mlx5_core_dev *dev) +void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend) { struct mlx5_priv *priv = &dev->priv; struct auxiliary_device *adev; @@ -406,7 +406,6 @@ void mlx5_detach_device(struct mlx5_core_dev *dev) devl_assert_locked(priv_to_devlink(dev)); mutex_lock(&mlx5_intf_mutex); - priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; for (i = ARRAY_SIZE(mlx5_adev_devices) - 1; i >= 0; i--) { if (!priv->adev[i]) continue; @@ -426,7 +425,7 @@ void mlx5_detach_device(struct mlx5_core_dev *dev) adrv = to_auxiliary_drv(adev->dev.driver); - if (adrv->suspend) { + if (adrv->suspend && suspend) { adrv->suspend(adev, pm); continue; } @@ -435,7 +434,6 @@ skip_suspend: del_adev(&priv->adev[i]->adev); priv->adev[i] = NULL; } - priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; priv->flags |= MLX5_PRIV_FLAGS_DETACH; mutex_unlock(&mlx5_intf_mutex); } @@ -534,22 +532,16 @@ del_adev: int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev) { struct mlx5_priv *priv = &dev->priv; - int err = 0; lockdep_assert_held(&mlx5_intf_mutex); if (priv->flags & MLX5_PRIV_FLAGS_DETACH) return 0; - priv->flags |= MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; delete_drivers(dev); if (priv->flags & MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV) - goto out; - - err = add_drivers(dev); + return 0; -out: - priv->flags &= ~MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW; - return err; + return add_drivers(dev); } bool mlx5_same_hw_devs(struct mlx5_core_dev *dev, struct mlx5_core_dev *peer_dev) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c index 5bd83c0275f8..c5d2fdcabd56 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c @@ -7,6 +7,7 @@ #include "fw_reset.h" #include "fs_core.h" #include "eswitch.h" +#include "lag/lag.h" #include "esw/qos.h" #include "sf/dev/dev.h" #include "sf/sf.h" @@ -104,7 +105,7 @@ static int mlx5_devlink_reload_fw_activate(struct devlink *devlink, struct netli 
if (err) return err; - mlx5_unload_one_devl_locked(dev); + mlx5_unload_one_devl_locked(dev, true); err = mlx5_health_wait_pci_up(dev); if (err) NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset"); @@ -156,13 +157,18 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change, return -EOPNOTSUPP; } + if (mlx5_core_is_mp_slave(dev)) { + NL_SET_ERR_MSG_MOD(extack, "reload is unsupported for multi port slave"); + return -EOPNOTSUPP; + } + if (pci_num_vf(pdev)) { NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is unfavorable"); } switch (action) { case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: - mlx5_unload_one_devl_locked(dev); + mlx5_unload_one_devl_locked(dev, false); break; case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: if (limit == DEVLINK_RELOAD_LIMIT_NO_RESET) @@ -263,9 +269,10 @@ static int mlx5_devlink_trap_action_set(struct devlink *devlink, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(devlink); + struct mlx5_devlink_trap_event_ctx trap_event_ctx; enum devlink_trap_action action_orig; struct mlx5_devlink_trap *dl_trap; - int err = 0; + int err; if (is_mdev_switchdev_mode(dev)) { NL_SET_ERR_MSG_MOD(extack, "Devlink traps can't be set in switchdev mode"); @@ -275,26 +282,25 @@ static int mlx5_devlink_trap_action_set(struct devlink *devlink, dl_trap = mlx5_find_trap_by_id(dev, trap->id); if (!dl_trap) { mlx5_core_err(dev, "Devlink trap: Set action on invalid trap id 0x%x", trap->id); - err = -EINVAL; - goto out; + return -EINVAL; } - if (action != DEVLINK_TRAP_ACTION_DROP && action != DEVLINK_TRAP_ACTION_TRAP) { - err = -EOPNOTSUPP; - goto out; - } + if (action != DEVLINK_TRAP_ACTION_DROP && action != DEVLINK_TRAP_ACTION_TRAP) + return -EOPNOTSUPP; if (action == dl_trap->trap.action) - goto out; + return 0; action_orig = dl_trap->trap.action; dl_trap->trap.action = action; + trap_event_ctx.trap = &dl_trap->trap; + trap_event_ctx.err = 0; err = mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_TYPE_TRAP, - &dl_trap->trap); - if (err) + &trap_event_ctx); + if (err == NOTIFY_BAD) dl_trap->trap.action = action_orig; -out: - return err; + + return trap_event_ctx.err; } static const struct devlink_ops mlx5_devlink_ops = { @@ -396,70 +402,6 @@ void mlx5_devlink_free(struct devlink *devlink) devlink_free(devlink); } -static int mlx5_devlink_fs_mode_validate(struct devlink *devlink, u32 id, - union devlink_param_value val, - struct netlink_ext_ack *extack) -{ - struct mlx5_core_dev *dev = devlink_priv(devlink); - char *value = val.vstr; - int err = 0; - - if (!strcmp(value, "dmfs")) { - return 0; - } else if (!strcmp(value, "smfs")) { - u8 eswitch_mode; - bool smfs_cap; - - eswitch_mode = mlx5_eswitch_mode(dev); - smfs_cap = mlx5_fs_dr_is_supported(dev); - - if (!smfs_cap) { - err = -EOPNOTSUPP; - NL_SET_ERR_MSG_MOD(extack, - "Software managed steering is not supported by current device"); - } - - else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) { - NL_SET_ERR_MSG_MOD(extack, - "Software managed steering is not supported when eswitch offloads enabled."); - err = -EOPNOTSUPP; - } - } else { - NL_SET_ERR_MSG_MOD(extack, - "Bad parameter: supported values are [\"dmfs\", \"smfs\"]"); - err = -EINVAL; - } - - return err; -} - -static int mlx5_devlink_fs_mode_set(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlx5_core_dev *dev = devlink_priv(devlink); - enum mlx5_flow_steering_mode mode; - - if (!strcmp(ctx->val.vstr, "smfs")) - mode = MLX5_FLOW_STEERING_MODE_SMFS; - else - mode = 
MLX5_FLOW_STEERING_MODE_DMFS; - dev->priv.steering->mode = mode; - - return 0; -} - -static int mlx5_devlink_fs_mode_get(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlx5_core_dev *dev = devlink_priv(devlink); - - if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS) - strcpy(ctx->val.vstr, "smfs"); - else - strcpy(ctx->val.vstr, "dmfs"); - return 0; -} - static int mlx5_devlink_enable_roce_validate(struct devlink *devlink, u32 id, union devlink_param_value val, struct netlink_ext_ack *extack) @@ -496,68 +438,54 @@ static int mlx5_devlink_large_group_num_validate(struct devlink *devlink, u32 id return 0; } -static int mlx5_devlink_esw_port_metadata_set(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) +static int mlx5_devlink_esw_multiport_set(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx) { struct mlx5_core_dev *dev = devlink_priv(devlink); if (!MLX5_ESWITCH_MANAGER(dev)) return -EOPNOTSUPP; - return mlx5_esw_offloads_vport_metadata_set(dev->priv.eswitch, ctx->val.vbool); + if (ctx->val.vbool) + return mlx5_lag_mpesw_enable(dev); + + mlx5_lag_mpesw_disable(dev); + return 0; } -static int mlx5_devlink_esw_port_metadata_get(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) +static int mlx5_devlink_esw_multiport_get(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx) { struct mlx5_core_dev *dev = devlink_priv(devlink); if (!MLX5_ESWITCH_MANAGER(dev)) return -EOPNOTSUPP; - ctx->val.vbool = mlx5_eswitch_vport_match_metadata_enabled(dev->priv.eswitch); + ctx->val.vbool = mlx5_lag_is_mpesw(dev); return 0; } -static int mlx5_devlink_esw_port_metadata_validate(struct devlink *devlink, u32 id, - union devlink_param_value val, - struct netlink_ext_ack *extack) +static int mlx5_devlink_esw_multiport_validate(struct devlink *devlink, u32 id, + union devlink_param_value val, + struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(devlink); - u8 esw_mode; if (!MLX5_ESWITCH_MANAGER(dev)) { NL_SET_ERR_MSG_MOD(extack, "E-Switch is unsupported"); return -EOPNOTSUPP; } - esw_mode = mlx5_eswitch_mode(dev); - if (esw_mode == MLX5_ESWITCH_OFFLOADS) { + + if (mlx5_eswitch_mode(dev) != MLX5_ESWITCH_OFFLOADS) { NL_SET_ERR_MSG_MOD(extack, - "E-Switch must either disabled or non switchdev mode"); + "E-Switch must be in switchdev mode"); return -EBUSY; } - return 0; -} - -#endif - -static int mlx5_devlink_enable_remote_dev_reset_set(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlx5_core_dev *dev = devlink_priv(devlink); - mlx5_fw_reset_enable_remote_dev_reset_set(dev, ctx->val.vbool); return 0; } -static int mlx5_devlink_enable_remote_dev_reset_get(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlx5_core_dev *dev = devlink_priv(devlink); - - ctx->val.vbool = mlx5_fw_reset_enable_remote_dev_reset_get(dev); - return 0; -} +#endif static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id, union devlink_param_value val, @@ -567,11 +495,6 @@ static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id, } static const struct devlink_param mlx5_devlink_params[] = { - DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE, - "flow_steering_mode", DEVLINK_PARAM_TYPE_STRING, - BIT(DEVLINK_PARAM_CMODE_RUNTIME), - mlx5_devlink_fs_mode_get, mlx5_devlink_fs_mode_set, - mlx5_devlink_fs_mode_validate), DEVLINK_PARAM_GENERIC(ENABLE_ROCE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), 
NULL, NULL, mlx5_devlink_enable_roce_validate), #ifdef CONFIG_MLX5_ESWITCH @@ -580,16 +503,13 @@ static const struct devlink_param mlx5_devlink_params[] = { BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), NULL, NULL, mlx5_devlink_large_group_num_validate), - DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA, - "esw_port_metadata", DEVLINK_PARAM_TYPE_BOOL, + DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT, + "esw_multiport", DEVLINK_PARAM_TYPE_BOOL, BIT(DEVLINK_PARAM_CMODE_RUNTIME), - mlx5_devlink_esw_port_metadata_get, - mlx5_devlink_esw_port_metadata_set, - mlx5_devlink_esw_port_metadata_validate), + mlx5_devlink_esw_multiport_get, + mlx5_devlink_esw_multiport_set, + mlx5_devlink_esw_multiport_validate), #endif - DEVLINK_PARAM_GENERIC(ENABLE_REMOTE_DEV_RESET, BIT(DEVLINK_PARAM_CMODE_RUNTIME), - mlx5_devlink_enable_remote_dev_reset_get, - mlx5_devlink_enable_remote_dev_reset_set, NULL), DEVLINK_PARAM_GENERIC(IO_EQ_SIZE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), NULL, NULL, mlx5_devlink_eq_depth_validate), DEVLINK_PARAM_GENERIC(EVENT_EQ_SIZE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), @@ -602,33 +522,34 @@ static void mlx5_devlink_set_params_init_values(struct devlink *devlink) union devlink_param_value value; value.vbool = MLX5_CAP_GEN(dev, roce); - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, + value); #ifdef CONFIG_MLX5_ESWITCH value.vu32 = ESW_OFFLOADS_DEFAULT_NUM_GROUPS; - devlink_param_driverinit_value_set(devlink, - MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM, - value); + devl_param_driverinit_value_set(devlink, + MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM, + value); #endif value.vu32 = MLX5_COMP_EQ_SIZE; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE, + value); value.vu32 = MLX5_NUM_ASYNC_EQE; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE, + value); } -static const struct devlink_param enable_eth_param = +static const struct devlink_param mlx5_devlink_eth_params[] = { DEVLINK_PARAM_GENERIC(ENABLE_ETH, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), - NULL, NULL, NULL); + NULL, NULL, NULL), +}; -static int mlx5_devlink_eth_param_register(struct devlink *devlink) +static int mlx5_devlink_eth_params_register(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); union devlink_param_value value; @@ -637,25 +558,27 @@ static int mlx5_devlink_eth_param_register(struct devlink *devlink) if (!mlx5_eth_supported(dev)) return 0; - err = devlink_param_register(devlink, &enable_eth_param); + err = devl_params_register(devlink, mlx5_devlink_eth_params, + ARRAY_SIZE(mlx5_devlink_eth_params)); if (err) return err; value.vbool = true; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH, + value); return 0; } -static void mlx5_devlink_eth_param_unregister(struct devlink *devlink) +static void mlx5_devlink_eth_params_unregister(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); if (!mlx5_eth_supported(dev)) return; - devlink_param_unregister(devlink, &enable_eth_param); + devl_params_unregister(devlink, mlx5_devlink_eth_params, + 
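
/* [Editor's illustrative sketch, not part of the diff] Beyond the rename to
 * the devl_* variants (which expect the caller to already hold the devlink
 * instance lock), these hunks turn each single param into a one-element
 * array so registration and teardown are uniform. The unwind in
 * mlx5_devlink_auxdev_params_register() is the usual register-forward /
 * unregister-backward pattern, modeled generically here: */
typedef int  (*reg_fn)(void);
typedef void (*unreg_fn)(void);

static int register_all(const reg_fn *reg, const unreg_fn *unreg, int n)
{
	int i, err;

	for (i = 0; i < n; i++) {
		err = reg[i]();
		if (err)
			goto unwind;
	}
	return 0;

unwind:
	while (--i >= 0)	/* undo only what succeeded, in reverse */
		unreg[i]();
	return err;
}
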
ARRAY_SIZE(mlx5_devlink_eth_params)); } static int mlx5_devlink_enable_rdma_validate(struct devlink *devlink, u32 id, @@ -670,11 +593,12 @@ static int mlx5_devlink_enable_rdma_validate(struct devlink *devlink, u32 id, return 0; } -static const struct devlink_param enable_rdma_param = +static const struct devlink_param mlx5_devlink_rdma_params[] = { DEVLINK_PARAM_GENERIC(ENABLE_RDMA, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), - NULL, NULL, mlx5_devlink_enable_rdma_validate); + NULL, NULL, mlx5_devlink_enable_rdma_validate), +}; -static int mlx5_devlink_rdma_param_register(struct devlink *devlink) +static int mlx5_devlink_rdma_params_register(struct devlink *devlink) { union devlink_param_value value; int err; @@ -682,30 +606,33 @@ static int mlx5_devlink_rdma_param_register(struct devlink *devlink) if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND)) return 0; - err = devlink_param_register(devlink, &enable_rdma_param); + err = devl_params_register(devlink, mlx5_devlink_rdma_params, + ARRAY_SIZE(mlx5_devlink_rdma_params)); if (err) return err; value.vbool = true; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA, + value); return 0; } -static void mlx5_devlink_rdma_param_unregister(struct devlink *devlink) +static void mlx5_devlink_rdma_params_unregister(struct devlink *devlink) { if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND)) return; - devlink_param_unregister(devlink, &enable_rdma_param); + devl_params_unregister(devlink, mlx5_devlink_rdma_params, + ARRAY_SIZE(mlx5_devlink_rdma_params)); } -static const struct devlink_param enable_vnet_param = +static const struct devlink_param mlx5_devlink_vnet_params[] = { DEVLINK_PARAM_GENERIC(ENABLE_VNET, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), - NULL, NULL, NULL); + NULL, NULL, NULL), +}; -static int mlx5_devlink_vnet_param_register(struct devlink *devlink) +static int mlx5_devlink_vnet_params_register(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); union devlink_param_value value; @@ -714,56 +641,58 @@ static int mlx5_devlink_vnet_param_register(struct devlink *devlink) if (!mlx5_vnet_supported(dev)) return 0; - err = devlink_param_register(devlink, &enable_vnet_param); + err = devl_params_register(devlink, mlx5_devlink_vnet_params, + ARRAY_SIZE(mlx5_devlink_vnet_params)); if (err) return err; value.vbool = true; - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET, + value); return 0; } -static void mlx5_devlink_vnet_param_unregister(struct devlink *devlink) +static void mlx5_devlink_vnet_params_unregister(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); if (!mlx5_vnet_supported(dev)) return; - devlink_param_unregister(devlink, &enable_vnet_param); + devl_params_unregister(devlink, mlx5_devlink_vnet_params, + ARRAY_SIZE(mlx5_devlink_vnet_params)); } static int mlx5_devlink_auxdev_params_register(struct devlink *devlink) { int err; - err = mlx5_devlink_eth_param_register(devlink); + err = mlx5_devlink_eth_params_register(devlink); if (err) return err; - err = mlx5_devlink_rdma_param_register(devlink); + err = mlx5_devlink_rdma_params_register(devlink); if (err) goto rdma_err; - err = mlx5_devlink_vnet_param_register(devlink); + err = mlx5_devlink_vnet_params_register(devlink); if (err) goto vnet_err; return 0; vnet_err: - mlx5_devlink_rdma_param_unregister(devlink); + 
mlx5_devlink_rdma_params_unregister(devlink); rdma_err: - mlx5_devlink_eth_param_unregister(devlink); + mlx5_devlink_eth_params_unregister(devlink); return err; } static void mlx5_devlink_auxdev_params_unregister(struct devlink *devlink) { - mlx5_devlink_vnet_param_unregister(devlink); - mlx5_devlink_rdma_param_unregister(devlink); - mlx5_devlink_eth_param_unregister(devlink); + mlx5_devlink_vnet_params_unregister(devlink); + mlx5_devlink_rdma_params_unregister(devlink); + mlx5_devlink_eth_params_unregister(devlink); } static int mlx5_devlink_max_uc_list_validate(struct devlink *devlink, u32 id, @@ -791,11 +720,12 @@ static int mlx5_devlink_max_uc_list_validate(struct devlink *devlink, u32 id, return 0; } -static const struct devlink_param max_uc_list_param = +static const struct devlink_param mlx5_devlink_max_uc_list_params[] = { DEVLINK_PARAM_GENERIC(MAX_MACS, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), - NULL, NULL, mlx5_devlink_max_uc_list_validate); + NULL, NULL, mlx5_devlink_max_uc_list_validate), +}; -static int mlx5_devlink_max_uc_list_param_register(struct devlink *devlink) +static int mlx5_devlink_max_uc_list_params_register(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); union devlink_param_value value; @@ -804,26 +734,28 @@ static int mlx5_devlink_max_uc_list_param_register(struct devlink *devlink) if (!MLX5_CAP_GEN_MAX(dev, log_max_current_uc_list_wr_supported)) return 0; - err = devlink_param_register(devlink, &max_uc_list_param); + err = devl_params_register(devlink, mlx5_devlink_max_uc_list_params, + ARRAY_SIZE(mlx5_devlink_max_uc_list_params)); if (err) return err; value.vu32 = 1 << MLX5_CAP_GEN(dev, log_max_current_uc_list); - devlink_param_driverinit_value_set(devlink, - DEVLINK_PARAM_GENERIC_ID_MAX_MACS, - value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_MAX_MACS, + value); return 0; } static void -mlx5_devlink_max_uc_list_param_unregister(struct devlink *devlink) +mlx5_devlink_max_uc_list_params_unregister(struct devlink *devlink) { struct mlx5_core_dev *dev = devlink_priv(devlink); if (!MLX5_CAP_GEN_MAX(dev, log_max_current_uc_list_wr_supported)) return; - devlink_param_unregister(devlink, &max_uc_list_param); + devl_params_unregister(devlink, mlx5_devlink_max_uc_list_params, + ARRAY_SIZE(mlx5_devlink_max_uc_list_params)); } #define MLX5_TRAP_DROP(_id, _group_id) \ @@ -869,13 +801,12 @@ void mlx5_devlink_traps_unregister(struct devlink *devlink) ARRAY_SIZE(mlx5_trap_groups_arr)); } -int mlx5_devlink_register(struct devlink *devlink) +int mlx5_devlink_params_register(struct devlink *devlink) { - struct mlx5_core_dev *dev = devlink_priv(devlink); int err; - err = devlink_params_register(devlink, mlx5_devlink_params, - ARRAY_SIZE(mlx5_devlink_params)); + err = devl_params_register(devlink, mlx5_devlink_params, + ARRAY_SIZE(mlx5_devlink_params)); if (err) return err; @@ -885,27 +816,24 @@ int mlx5_devlink_register(struct devlink *devlink) if (err) goto auxdev_reg_err; - err = mlx5_devlink_max_uc_list_param_register(devlink); + err = mlx5_devlink_max_uc_list_params_register(devlink); if (err) goto max_uc_list_err; - if (!mlx5_core_is_mp_slave(dev)) - devlink_set_features(devlink, DEVLINK_F_RELOAD); - return 0; max_uc_list_err: mlx5_devlink_auxdev_params_unregister(devlink); auxdev_reg_err: - devlink_params_unregister(devlink, mlx5_devlink_params, - ARRAY_SIZE(mlx5_devlink_params)); + devl_params_unregister(devlink, mlx5_devlink_params, + ARRAY_SIZE(mlx5_devlink_params)); return err; } -void mlx5_devlink_unregister(struct 
devlink *devlink) +void mlx5_devlink_params_unregister(struct devlink *devlink) { - mlx5_devlink_max_uc_list_param_unregister(devlink); + mlx5_devlink_max_uc_list_params_unregister(devlink); mlx5_devlink_auxdev_params_unregister(devlink); - devlink_params_unregister(devlink, mlx5_devlink_params, - ARRAY_SIZE(mlx5_devlink_params)); + devl_params_unregister(devlink, mlx5_devlink_params, + ARRAY_SIZE(mlx5_devlink_params)); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.h b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h index fd033df24856..212b12424146 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h @@ -11,6 +11,7 @@ enum mlx5_devlink_param_id { MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE, MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM, MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA, + MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT, }; struct mlx5_trap_ctx { @@ -24,6 +25,11 @@ struct mlx5_devlink_trap { struct list_head list; }; +struct mlx5_devlink_trap_event_ctx { + struct mlx5_trap_ctx *trap; + int err; +}; + struct mlx5_core_dev; void mlx5_devlink_trap_report(struct mlx5_core_dev *dev, int trap_id, struct sk_buff *skb, struct devlink_port *dl_port); @@ -35,7 +41,7 @@ void mlx5_devlink_traps_unregister(struct devlink *devlink); struct devlink *mlx5_devlink_alloc(struct device *dev); void mlx5_devlink_free(struct devlink *devlink); -int mlx5_devlink_register(struct devlink *devlink); -void mlx5_devlink_unregister(struct devlink *devlink); +int mlx5_devlink_params_register(struct devlink *devlink); +void mlx5_devlink_params_unregister(struct devlink *devlink); #endif /* __MLX5_DEVLINK_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c index 2732128e7a6e..6d73127b7217 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c @@ -275,6 +275,10 @@ const char *parse_fs_dst(struct trace_seq *p, fs_dest_range_field_to_str(dst->range.field), dst->range.min, dst->range.max); break; + case MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE: + trace_seq_printf(p, "flow_table_type=%u id:%u\n", dst->ft->type, + dst->ft->id); + break; case MLX5_FLOW_DESTINATION_TYPE_NONE: trace_seq_printf(p, "none\n"); break; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c index 5b05b884b5fb..f40497823e65 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c @@ -234,6 +234,8 @@ static int mlx5_fw_tracer_allocate_strings_db(struct mlx5_fw_tracer *tracer) int i; for (i = 0; i < num_string_db; i++) { + if (!string_db_size_out[i]) + continue; tracer->str_db.buffer[i] = kzalloc(string_db_size_out[i], GFP_KERNEL); if (!tracer->str_db.buffer[i]) goto free_strings_db; @@ -279,6 +281,8 @@ static void mlx5_tracer_read_strings_db(struct work_struct *work) } for (i = 0; i < num_string_db; i++) { + if (!tracer->str_db.size_out[i]) + continue; offset = 0; MLX5_SET(mtrc_stdb, in, string_db_index, i); num_of_reads = tracer->str_db.size_out[i] / @@ -385,6 +389,8 @@ static struct tracer_string_format *mlx5_tracer_get_string(struct mlx5_fw_tracer str_ptr = tracer_event->string_event.string_param; for (i = 0; i < tracer->str_db.num_string_db; i++) { + if (!tracer->str_db.size_out[i]) + continue; if (str_ptr > tracer->str_db.base_address_out[i] && str_ptr < 
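
/* [Editor's illustrative sketch, not part of the diff]
 * mlx5_tracer_handle_raw_string(), added in the next hunk, keeps trace
 * events whose string pointer is not found in any strings database:
 * instead of returning -1 and dropping the event, it logs the raw 64-bit
 * event word as "0x%08x%08x" via upper/lower_32_bits(). The same split
 * into 32-bit halves, in portable C: */
#include <inttypes.h>
#include <stdio.h>

static void log_raw_event(uint64_t ev)
{
	printf("0x%08" PRIx32 "%08" PRIx32 "\n",
	       (uint32_t)(ev >> 32), (uint32_t)(ev & 0xffffffff));
}
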
tracer->str_db.base_address_out[i] + tracer->str_db.size_out[i]) { @@ -460,6 +466,7 @@ static void poll_trace(struct mlx5_fw_tracer *tracer, tracer_event->event_id = MLX5_GET(tracer_event, trace, event_id); tracer_event->lost_event = MLX5_GET(tracer_event, trace, lost); + tracer_event->out = trace; switch (tracer_event->event_id) { case TRACER_EVENT_TYPE_TIMESTAMP: @@ -582,6 +589,26 @@ void mlx5_tracer_print_trace(struct tracer_string_format *str_frmt, mlx5_tracer_clean_message(str_frmt); } +static int mlx5_tracer_handle_raw_string(struct mlx5_fw_tracer *tracer, + struct tracer_event *tracer_event) +{ + struct tracer_string_format *cur_string; + + cur_string = mlx5_tracer_message_insert(tracer, tracer_event); + if (!cur_string) + return -1; + + cur_string->event_id = tracer_event->event_id; + cur_string->timestamp = tracer_event->string_event.timestamp; + cur_string->lost = tracer_event->lost_event; + cur_string->string = "0x%08x%08x"; + cur_string->num_of_params = 2; + cur_string->params[0] = upper_32_bits(*tracer_event->out); + cur_string->params[1] = lower_32_bits(*tracer_event->out); + list_add_tail(&cur_string->list, &tracer->ready_strings_list); + return 0; +} + static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer, struct tracer_event *tracer_event) { @@ -590,7 +617,7 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer, if (tracer_event->string_event.tdsn == 0) { cur_string = mlx5_tracer_get_string(tracer, tracer_event); if (!cur_string) - return -1; + return mlx5_tracer_handle_raw_string(tracer, tracer_event); cur_string->num_of_params = mlx5_tracer_get_num_of_params(cur_string->string); cur_string->last_param_num = 0; @@ -603,9 +630,9 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer, } else { cur_string = mlx5_tracer_message_get(tracer, tracer_event); if (!cur_string) { - pr_debug("%s Got string event for unknown string tdsm: %d\n", + pr_debug("%s Got string event for unknown string tmsn: %d\n", __func__, tracer_event->string_event.tmsn); - return -1; + return mlx5_tracer_handle_raw_string(tracer, tracer_event); } cur_string->last_param_num += 1; if (cur_string->last_param_num > TRACER_MAX_PARAMS) { @@ -931,6 +958,14 @@ unlock: return err; } +static void mlx5_fw_tracer_update_db(struct work_struct *work) +{ + struct mlx5_fw_tracer *tracer = + container_of(work, struct mlx5_fw_tracer, update_db_work); + + mlx5_fw_tracer_reload(tracer); +} + /* Create software resources (Buffers, etc ..) 
 */
struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
{
@@ -958,6 +993,8 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
{
 	INIT_WORK(&tracer->ownership_change_work, mlx5_fw_tracer_ownership_change);
 	INIT_WORK(&tracer->read_fw_strings_work, mlx5_tracer_read_strings_db);
 	INIT_WORK(&tracer->handle_traces_work, mlx5_fw_tracer_handle_traces);
+	INIT_WORK(&tracer->update_db_work, mlx5_fw_tracer_update_db);
+	mutex_init(&tracer->state_lock);
 
 	err = mlx5_query_mtrc_caps(tracer);
@@ -1004,11 +1041,15 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
 	if (IS_ERR_OR_NULL(tracer))
 		return 0;
 
-	dev = tracer->dev;
-
 	if (!tracer->str_db.loaded)
 		queue_work(tracer->work_queue, &tracer->read_fw_strings_work);
 
+	mutex_lock(&tracer->state_lock);
+	if (test_and_set_bit(MLX5_TRACER_STATE_UP, &tracer->state))
+		goto unlock;
+
+	dev = tracer->dev;
+
 	err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
 	if (err) {
 		mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err);
@@ -1029,6 +1070,8 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
 		mlx5_core_warn(dev, "FWTracer: Failed to start tracer %d\n", err);
 		goto err_notifier_unregister;
 	}
+unlock:
+	mutex_unlock(&tracer->state_lock);
 	return 0;
 
 err_notifier_unregister:
@@ -1038,6 +1081,7 @@ err_dealloc_pd:
 	mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
 err_cancel_work:
 	cancel_work_sync(&tracer->read_fw_strings_work);
+	mutex_unlock(&tracer->state_lock);
 	return err;
 }
 
@@ -1047,17 +1091,27 @@ void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer)
 	if (IS_ERR_OR_NULL(tracer))
 		return;
 
+	mutex_lock(&tracer->state_lock);
+	if (!test_and_clear_bit(MLX5_TRACER_STATE_UP, &tracer->state))
+		goto unlock;
+
 	mlx5_core_dbg(tracer->dev, "FWTracer: Cleanup, is owner ? (%d)\n",
 		      tracer->owner);
 	mlx5_eq_notifier_unregister(tracer->dev, &tracer->nb);
 	cancel_work_sync(&tracer->ownership_change_work);
 	cancel_work_sync(&tracer->handle_traces_work);
+	/* It is valid to get here from update_db_work. Hence, don't wait for
+	 * update_db_work to finish.
+	 */
+	cancel_work(&tracer->update_db_work);
 
 	if (tracer->owner)
 		mlx5_fw_tracer_ownership_release(tracer);
 
 	mlx5_core_destroy_mkey(tracer->dev, tracer->buff.mkey);
 	mlx5_core_dealloc_pd(tracer->dev, tracer->buff.pdn);
+unlock:
+	mutex_unlock(&tracer->state_lock);
 }
 
/* Free software resources (Buffers, etc ..)
*/ @@ -1074,6 +1128,7 @@ void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer) mlx5_fw_tracer_clean_saved_traces_array(tracer); mlx5_fw_tracer_free_strings_db(tracer); mlx5_fw_tracer_destroy_log_buf(tracer); + mutex_destroy(&tracer->state_lock); destroy_workqueue(tracer->work_queue); kvfree(tracer); } @@ -1083,6 +1138,8 @@ static int mlx5_fw_tracer_recreate_strings_db(struct mlx5_fw_tracer *tracer) struct mlx5_core_dev *dev; int err; + if (test_and_set_bit(MLX5_TRACER_RECREATE_DB, &tracer->state)) + return 0; cancel_work_sync(&tracer->read_fw_strings_work); mlx5_fw_tracer_clean_ready_list(tracer); mlx5_fw_tracer_clean_print_hash(tracer); @@ -1093,17 +1150,18 @@ static int mlx5_fw_tracer_recreate_strings_db(struct mlx5_fw_tracer *tracer) err = mlx5_query_mtrc_caps(tracer); if (err) { mlx5_core_dbg(dev, "FWTracer: Failed to query capabilities %d\n", err); - return err; + goto out; } err = mlx5_fw_tracer_allocate_strings_db(tracer); if (err) { mlx5_core_warn(dev, "FWTracer: Allocate strings DB failed %d\n", err); - return err; + goto out; } mlx5_fw_tracer_init_saved_traces_array(tracer); - - return 0; +out: + clear_bit(MLX5_TRACER_RECREATE_DB, &tracer->state); + return err; } int mlx5_fw_tracer_reload(struct mlx5_fw_tracer *tracer) @@ -1143,6 +1201,9 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE: queue_work(tracer->work_queue, &tracer->handle_traces_work); break; + case MLX5_TRACER_SUBTYPE_STRINGS_DB_UPDATE: + queue_work(tracer->work_queue, &tracer->update_db_work); + break; default: mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n", eqe->sub_type); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h index 4762b55b0b0e..5c548bb74f07 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h @@ -63,6 +63,11 @@ struct mlx5_fw_trace_data { char msg[TRACE_STR_MSG]; }; +enum mlx5_fw_tracer_state { + MLX5_TRACER_STATE_UP = BIT(0), + MLX5_TRACER_RECREATE_DB = BIT(1), +}; + struct mlx5_fw_tracer { struct mlx5_core_dev *dev; struct mlx5_nb nb; @@ -104,6 +109,9 @@ struct mlx5_fw_tracer { struct work_struct handle_traces_work; struct hlist_head hash[MESSAGE_HASH_SIZE]; struct list_head ready_strings_list; + struct work_struct update_db_work; + struct mutex state_lock; /* Synchronize update work with reload flows */ + unsigned long state; }; struct tracer_string_format { @@ -158,6 +166,7 @@ struct tracer_event { struct tracer_string_event string_event; struct tracer_timestamp_event timestamp_event; }; + u64 *out; }; struct mlx5_ifc_tracer_event_bits { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c index cdc87ecae5d3..9a3878f9e582 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c @@ -75,6 +75,10 @@ int mlx5_ec_init(struct mlx5_core_dev *dev) if (!mlx5_core_is_ecpf(dev)) return 0; + /* Management PF don't have a peer PF */ + if (mlx5_core_is_management_pf(dev)) + return 0; + return mlx5_host_pf_init(dev); } @@ -85,6 +89,10 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev) if (!mlx5_core_is_ecpf(dev)) return; + /* Management PF don't have a peer PF */ + if (mlx5_core_is_management_pf(dev)) + return; + mlx5_host_pf_cleanup(dev); err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]); diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 2d77fb8a8a01..88460b7796e5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -247,7 +247,7 @@ struct mlx5e_rx_wqe_ll { }; struct mlx5e_rx_wqe_cyc { - struct mlx5_wqe_data_seg data[0]; + DECLARE_FLEX_ARRAY(struct mlx5_wqe_data_seg, data); }; struct mlx5e_umr_wqe { @@ -454,6 +454,7 @@ struct mlx5e_txqsq { struct mlx5_clock *clock; struct net_device *netdev; struct mlx5_core_dev *mdev; + struct mlx5e_channel *channel; struct mlx5e_priv *priv; /* control path */ @@ -626,10 +627,11 @@ struct mlx5e_rq; typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*); typedef struct sk_buff * (*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, - u16 cqe_bcnt, u32 head_offset, u32 page_idx); + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, + u32 head_offset, u32 page_idx); typedef struct sk_buff * (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, - u32 cqe_bcnt); + struct mlx5_cqe64 *cqe, u32 cqe_bcnt); typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq); typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16); typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool); @@ -968,6 +970,12 @@ struct mlx5e_priv { struct mlx5e_scratchpad scratchpad; struct mlx5e_htb *htb; struct mlx5e_mqprio_rl *mqprio_rl; + struct dentry *dfs_root; +}; + +struct mlx5e_dev { + struct mlx5e_priv *priv; + struct devlink_port dl_port; }; struct mlx5e_rx_handlers { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c index 83adaabf59f5..c6b6e290fd79 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c @@ -4,6 +4,31 @@ #include "en/devlink.h" #include "eswitch.h" +static const struct devlink_ops mlx5e_devlink_ops = { +}; + +struct mlx5e_dev *mlx5e_create_devlink(struct device *dev, + struct mlx5_core_dev *mdev) +{ + struct mlx5e_dev *mlx5e_dev; + struct devlink *devlink; + + devlink = devlink_alloc_ns(&mlx5e_devlink_ops, sizeof(*mlx5e_dev), + devlink_net(priv_to_devlink(mdev)), dev); + if (!devlink) + return ERR_PTR(-ENOMEM); + devlink_register(devlink); + return devlink_priv(devlink); +} + +void mlx5e_destroy_devlink(struct mlx5e_dev *mlx5e_dev) +{ + struct devlink *devlink = priv_to_devlink(mlx5e_dev); + + devlink_unregister(devlink); + devlink_free(devlink); +} + static void mlx5e_devlink_get_port_parent_id(struct mlx5_core_dev *dev, struct netdev_phys_item_id *ppid) { @@ -14,51 +39,36 @@ mlx5e_devlink_get_port_parent_id(struct mlx5_core_dev *dev, struct netdev_phys_i memcpy(ppid->id, &parent_id, sizeof(parent_id)); } -int mlx5e_devlink_port_register(struct mlx5e_priv *priv) +int mlx5e_devlink_port_register(struct mlx5e_dev *mlx5e_dev, + struct mlx5_core_dev *mdev) { - struct devlink *devlink = priv_to_devlink(priv->mdev); + struct devlink *devlink = priv_to_devlink(mlx5e_dev); struct devlink_port_attrs attrs = {}; struct netdev_phys_item_id ppid = {}; - struct devlink_port *dl_port; unsigned int dl_port_index; - int ret; - if (mlx5_core_is_pf(priv->mdev)) { + if (mlx5_core_is_pf(mdev)) { attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL; - attrs.phys.port_number = mlx5_get_dev_index(priv->mdev); - if (MLX5_ESWITCH_MANAGER(priv->mdev)) { - mlx5e_devlink_get_port_parent_id(priv->mdev, &ppid); + attrs.phys.port_number = 
mlx5_get_dev_index(mdev); + if (MLX5_ESWITCH_MANAGER(mdev)) { + mlx5e_devlink_get_port_parent_id(mdev, &ppid); memcpy(attrs.switch_id.id, ppid.id, ppid.id_len); attrs.switch_id.id_len = ppid.id_len; } - dl_port_index = mlx5_esw_vport_to_devlink_port_index(priv->mdev, + dl_port_index = mlx5_esw_vport_to_devlink_port_index(mdev, MLX5_VPORT_UPLINK); } else { attrs.flavour = DEVLINK_PORT_FLAVOUR_VIRTUAL; - dl_port_index = mlx5_esw_vport_to_devlink_port_index(priv->mdev, 0); + dl_port_index = mlx5_esw_vport_to_devlink_port_index(mdev, 0); } - dl_port = mlx5e_devlink_get_dl_port(priv); - memset(dl_port, 0, sizeof(*dl_port)); - devlink_port_attrs_set(dl_port, &attrs); + devlink_port_attrs_set(&mlx5e_dev->dl_port, &attrs); - if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW)) - devl_lock(devlink); - ret = devl_port_register(devlink, dl_port, dl_port_index); - if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW)) - devl_unlock(devlink); - - return ret; + return devlink_port_register(devlink, &mlx5e_dev->dl_port, + dl_port_index); } -void mlx5e_devlink_port_unregister(struct mlx5e_priv *priv) +void mlx5e_devlink_port_unregister(struct mlx5e_dev *mlx5e_dev) { - struct devlink_port *dl_port = mlx5e_devlink_get_dl_port(priv); - struct devlink *devlink = priv_to_devlink(priv->mdev); - - if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW)) - devl_lock(devlink); - devl_port_unregister(dl_port); - if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_MLX5E_LOCKED_FLOW)) - devl_unlock(devlink); + devlink_port_unregister(&mlx5e_dev->dl_port); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h index 4f238d4fff55..d5ec4461f300 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.h @@ -7,13 +7,11 @@ #include <net/devlink.h> #include "en.h" -int mlx5e_devlink_port_register(struct mlx5e_priv *priv); -void mlx5e_devlink_port_unregister(struct mlx5e_priv *priv); - -static inline struct devlink_port * -mlx5e_devlink_get_dl_port(struct mlx5e_priv *priv) -{ - return &priv->mdev->mlx5e_res.dl_port; -} +struct mlx5e_dev *mlx5e_create_devlink(struct device *dev, + struct mlx5_core_dev *mdev); +void mlx5e_destroy_devlink(struct mlx5e_dev *mlx5e_dev); +int mlx5e_devlink_port_register(struct mlx5e_dev *mlx5e_dev, + struct mlx5_core_dev *mdev); +void mlx5e_devlink_port_unregister(struct mlx5e_dev *mlx5e_dev); #endif diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h index 379c6dc9a3be..e5a44b0b9616 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h @@ -87,6 +87,7 @@ enum { MLX5E_ACCEL_FS_POL_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1, MLX5E_ACCEL_FS_ESP_FT_LEVEL, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL, + MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL, #endif }; @@ -145,7 +146,8 @@ void mlx5e_destroy_flow_steering(struct mlx5e_flow_steering *fs, bool ntuple, struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile, struct mlx5_core_dev *mdev, - bool state_destroy); + bool state_destroy, + struct dentry *dfs_root); void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs); struct mlx5e_vlan_table *mlx5e_fs_get_vlan(struct mlx5e_flow_steering *fs); void mlx5e_fs_set_tc(struct mlx5e_flow_steering *fs, struct mlx5e_tc_table *tc); @@ -189,6 +191,8 @@ int mlx5e_fs_vlan_rx_kill_vid(struct mlx5e_flow_steering *fs, __be16 proto, u16 vid); void 
mlx5e_fs_init_l2_addr(struct mlx5e_flow_steering *fs, struct net_device *netdev); +struct dentry *mlx5e_fs_get_debugfs_root(struct mlx5e_flow_steering *fs); + #define fs_err(fs, fmt, ...) \ mlx5_core_err(mlx5e_fs_get_mdev(fs), fmt, ##__VA_ARGS__) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c index 17325c5d6516..cf60f0a3ff23 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c @@ -47,6 +47,7 @@ void mlx5e_mod_hdr_tbl_init(struct mod_hdr_tbl *tbl) void mlx5e_mod_hdr_tbl_destroy(struct mod_hdr_tbl *tbl) { + WARN_ON(!hash_empty(tbl->hlist)); mutex_destroy(&tbl->lock); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c index 4ad19c981294..a21bd1179477 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c @@ -411,9 +411,14 @@ u8 mlx5e_mpwqe_get_log_num_strides(struct mlx5_core_dev *mdev, { enum mlx5e_mpwrq_umr_mode umr_mode = mlx5e_mpwrq_umr_mode(mdev, xsk); u8 page_shift = mlx5e_mpwrq_page_shift(mdev, xsk); + u8 log_wqe_size, log_stride_size; - return mlx5e_mpwrq_log_wqe_sz(mdev, page_shift, umr_mode) - - mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk); + log_wqe_size = mlx5e_mpwrq_log_wqe_sz(mdev, page_shift, umr_mode); + log_stride_size = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk); + WARN(log_wqe_size < log_stride_size, + "Log WQE size %u < log stride size %u (page shift %u, umr mode %d, xsk on? %d)\n", + log_wqe_size, log_stride_size, page_shift, umr_mode, !!xsk); + return log_wqe_size - log_stride_size; } u8 mlx5e_mpwqe_get_min_wqe_bulk(unsigned int wq_sz) @@ -580,11 +585,16 @@ int mlx5e_mpwrq_validate_xsk(struct mlx5_core_dev *mdev, struct mlx5e_params *pa u8 page_shift = mlx5e_mpwrq_page_shift(mdev, xsk); u16 max_mtu_pkts; - if (!mlx5e_check_fragmented_striding_rq_cap(mdev, page_shift, umr_mode)) + if (!mlx5e_check_fragmented_striding_rq_cap(mdev, page_shift, umr_mode)) { + mlx5_core_err(mdev, "Striding RQ for XSK can't be activated with page_shift %u and umr_mode %d\n", + page_shift, umr_mode); return -EOPNOTSUPP; + } - if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk)) + if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk)) { + mlx5_core_err(mdev, "Striding RQ linear mode for XSK can't be activated with current params\n"); return -EINVAL; + } /* Current RQ length is too big for the given frame size, the * needed number of WQEs exceeds the maximum. 
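The WARN added to mlx5e_mpwqe_get_log_num_strides() documents an invariant rather than changing behavior: a multi-packet WQE holds 2^(log_wqe_size - log_stride_size) strides, so the subtraction must never underflow. A minimal userspace sketch of that arithmetic, with invented sizes (the real values come from device capabilities and ring parameters):

#include <stdio.h>

int main(void)
{
	unsigned int log_wqe_size = 18;    /* 256 KB WQE, illustrative */
	unsigned int log_stride_size = 11; /* 2 KB stride, illustrative */

	/* Mirrors the driver's new WARN: a WQE smaller than one stride
	 * would make the shift count below wrap around.
	 */
	if (log_wqe_size < log_stride_size) {
		fprintf(stderr, "log WQE size < log stride size\n");
		return 1;
	}
	printf("strides per WQE: %u\n", 1u << (log_wqe_size - log_stride_size));
	return 0;
}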
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c index 89510cac46c2..505ba41195b9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c @@ -287,6 +287,78 @@ int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in) return err; } +int mlx5e_port_query_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir, + u8 pool_idx, void *out, int size_out) +{ + u32 in[MLX5_ST_SZ_DW(sbpr_reg)] = {}; + + MLX5_SET(sbpr_reg, in, desc, desc); + MLX5_SET(sbpr_reg, in, dir, dir); + MLX5_SET(sbpr_reg, in, pool, pool_idx); + + return mlx5_core_access_reg(mdev, in, sizeof(in), out, size_out, MLX5_REG_SBPR, 0, 0); +} + +int mlx5e_port_set_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir, + u8 pool_idx, u32 infi_size, u32 size) +{ + u32 out[MLX5_ST_SZ_DW(sbpr_reg)] = {}; + u32 in[MLX5_ST_SZ_DW(sbpr_reg)] = {}; + + MLX5_SET(sbpr_reg, in, desc, desc); + MLX5_SET(sbpr_reg, in, dir, dir); + MLX5_SET(sbpr_reg, in, pool, pool_idx); + MLX5_SET(sbpr_reg, in, infi_size, infi_size); + MLX5_SET(sbpr_reg, in, size, size); + MLX5_SET(sbpr_reg, in, mode, 1); + + return mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out), MLX5_REG_SBPR, 0, 1); +} + +static int mlx5e_port_query_sbcm(struct mlx5_core_dev *mdev, u32 desc, + u8 pg_buff_idx, u8 dir, void *out, + int size_out) +{ + u32 in[MLX5_ST_SZ_DW(sbcm_reg)] = {}; + + MLX5_SET(sbcm_reg, in, desc, desc); + MLX5_SET(sbcm_reg, in, local_port, 1); + MLX5_SET(sbcm_reg, in, pg_buff, pg_buff_idx); + MLX5_SET(sbcm_reg, in, dir, dir); + + return mlx5_core_access_reg(mdev, in, sizeof(in), out, size_out, MLX5_REG_SBCM, 0, 0); +} + +int mlx5e_port_set_sbcm(struct mlx5_core_dev *mdev, u32 desc, u8 pg_buff_idx, + u8 dir, u8 infi_size, u32 max_buff, u8 pool_idx) +{ + u32 out[MLX5_ST_SZ_DW(sbcm_reg)] = {}; + u32 in[MLX5_ST_SZ_DW(sbcm_reg)] = {}; + u32 min_buff; + int err; + u8 exc; + + err = mlx5e_port_query_sbcm(mdev, desc, pg_buff_idx, dir, out, + sizeof(out)); + if (err) + return err; + + exc = MLX5_GET(sbcm_reg, out, exc); + min_buff = MLX5_GET(sbcm_reg, out, min_buff); + + MLX5_SET(sbcm_reg, in, desc, desc); + MLX5_SET(sbcm_reg, in, local_port, 1); + MLX5_SET(sbcm_reg, in, pg_buff, pg_buff_idx); + MLX5_SET(sbcm_reg, in, dir, dir); + MLX5_SET(sbcm_reg, in, exc, exc); + MLX5_SET(sbcm_reg, in, min_buff, min_buff); + MLX5_SET(sbcm_reg, in, infi_max, infi_size); + MLX5_SET(sbcm_reg, in, max_buff, max_buff); + MLX5_SET(sbcm_reg, in, pool, pool_idx); + + return mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out), MLX5_REG_SBCM, 0, 1); +} + /* buffer[i]: buffer that priority i mapped to */ int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h index 7a7defe60792..3f474e370828 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h @@ -57,6 +57,12 @@ u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed, bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev); int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out); int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in); +int mlx5e_port_query_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir, + u8 pool_idx, void *out, int size_out); +int mlx5e_port_set_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir, + u8 pool_idx, u32 infi_size, u32 size); +int mlx5e_port_set_sbcm(struct 
mlx5_core_dev *mdev, u32 desc, u8 pg_buff_idx, + u8 dir, u8 infi_size, u32 max_buff, u8 pool_idx); int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer); int mlx5e_port_set_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c index c9d5d8d93994..7ac1ad9c46de 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c @@ -73,6 +73,7 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv, port_buffer->buffer[i].lossy); } + port_buffer->headroom_size = total_used; port_buffer->port_buffer_size = MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz; port_buffer->spare_buffer_size = @@ -86,16 +87,204 @@ out: return err; } +struct mlx5e_buffer_pool { + u32 infi_size; + u32 size; + u32 buff_occupancy; +}; + +static int mlx5e_port_query_pool(struct mlx5_core_dev *mdev, + struct mlx5e_buffer_pool *buffer_pool, + u32 desc, u8 dir, u8 pool_idx) +{ + u32 out[MLX5_ST_SZ_DW(sbpr_reg)] = {}; + int err; + + err = mlx5e_port_query_sbpr(mdev, desc, dir, pool_idx, out, + sizeof(out)); + if (err) + return err; + + buffer_pool->size = MLX5_GET(sbpr_reg, out, size); + buffer_pool->infi_size = MLX5_GET(sbpr_reg, out, infi_size); + buffer_pool->buff_occupancy = MLX5_GET(sbpr_reg, out, buff_occupancy); + + return err; +} + +enum { + MLX5_INGRESS_DIR = 0, + MLX5_EGRESS_DIR = 1, +}; + +enum { + MLX5_LOSSY_POOL = 0, + MLX5_LOSSLESS_POOL = 1, +}; + +/* No limit on usage of shared buffer pool (max_buff=0) */ +#define MLX5_SB_POOL_NO_THRESHOLD 0 +/* Shared buffer pool usage threshold when calculated + * dynamically in alpha units. alpha=13 is equivalent to + * HW_alpha of [(1/128) * 2 ^ (alpha-1)] = 32, where HW_alpha + * equates to the following portion of the shared buffer pool: + * [32 / (1 + n * 32)] While *n* is the number of buffers + * that are using the shared buffer pool. + */ +#define MLX5_SB_POOL_THRESHOLD 13 + +/* Shared buffer class management parameters */ +struct mlx5_sbcm_params { + u8 pool_idx; + u8 max_buff; + u8 infi_size; +}; + +static const struct mlx5_sbcm_params sbcm_default = { + .pool_idx = MLX5_LOSSY_POOL, + .max_buff = MLX5_SB_POOL_NO_THRESHOLD, + .infi_size = 0, +}; + +static const struct mlx5_sbcm_params sbcm_lossy = { + .pool_idx = MLX5_LOSSY_POOL, + .max_buff = MLX5_SB_POOL_NO_THRESHOLD, + .infi_size = 1, +}; + +static const struct mlx5_sbcm_params sbcm_lossless = { + .pool_idx = MLX5_LOSSLESS_POOL, + .max_buff = MLX5_SB_POOL_THRESHOLD, + .infi_size = 0, +}; + +static const struct mlx5_sbcm_params sbcm_lossless_no_threshold = { + .pool_idx = MLX5_LOSSLESS_POOL, + .max_buff = MLX5_SB_POOL_NO_THRESHOLD, + .infi_size = 1, +}; + +/** + * select_sbcm_params() - selects the shared buffer pool configuration + * + * @buffer: <input> port buffer to retrieve params of + * @lossless_buff_count: <input> number of lossless buffers in total + * + * The selection is based on the following rules: + * 1. If buffer size is 0, no shared buffer pool is used. + * 2. If buffer is lossy, use lossy shared buffer pool. + * 3. If there are more than 1 lossless buffers, use lossless shared buffer pool + * with threshold. + * 4. If there is only 1 lossless buffer, use lossless shared buffer pool + * without threshold. 
+ * + * @return const struct mlx5_sbcm_params* selected values + */ +static const struct mlx5_sbcm_params * +select_sbcm_params(struct mlx5e_bufferx_reg *buffer, u8 lossless_buff_count) +{ + if (buffer->size == 0) + return &sbcm_default; + + if (buffer->lossy) + return &sbcm_lossy; + + if (lossless_buff_count > 1) + return &sbcm_lossless; + + return &sbcm_lossless_no_threshold; +} + +static int port_update_pool_cfg(struct mlx5_core_dev *mdev, + struct mlx5e_port_buffer *port_buffer) +{ + const struct mlx5_sbcm_params *p; + u8 lossless_buff_count = 0; + int err; + int i; + + if (!MLX5_CAP_GEN(mdev, sbcam_reg)) + return 0; + + for (i = 0; i < MLX5E_MAX_BUFFER; i++) + lossless_buff_count += ((port_buffer->buffer[i].size) && + (!(port_buffer->buffer[i].lossy))); + + for (i = 0; i < MLX5E_MAX_BUFFER; i++) { + p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count); + err = mlx5e_port_set_sbcm(mdev, 0, i, + MLX5_INGRESS_DIR, + p->infi_size, + p->max_buff, + p->pool_idx); + if (err) + return err; + } + + return 0; +} + +static int port_update_shared_buffer(struct mlx5_core_dev *mdev, + u32 current_headroom_size, + u32 new_headroom_size) +{ + struct mlx5e_buffer_pool lossless_ipool; + struct mlx5e_buffer_pool lossy_epool; + u32 lossless_ipool_size; + u32 shared_buffer_size; + u32 total_buffer_size; + u32 lossy_epool_size; + int err; + + if (!MLX5_CAP_GEN(mdev, sbcam_reg)) + return 0; + + err = mlx5e_port_query_pool(mdev, &lossy_epool, 0, MLX5_EGRESS_DIR, + MLX5_LOSSY_POOL); + if (err) + return err; + + err = mlx5e_port_query_pool(mdev, &lossless_ipool, 0, MLX5_INGRESS_DIR, + MLX5_LOSSLESS_POOL); + if (err) + return err; + + total_buffer_size = current_headroom_size + lossy_epool.size + + lossless_ipool.size; + shared_buffer_size = total_buffer_size - new_headroom_size; + + if (shared_buffer_size < 4) { + pr_err("Requested port buffer is too large, not enough space left for shared buffer\n"); + return -EINVAL; + } + + /* Total shared buffer size is split in a ratio of 3:1 between + * lossy and lossless pools respectively. 
+ */ + lossy_epool_size = (shared_buffer_size / 4) * 3; + lossless_ipool_size = shared_buffer_size / 4; + + mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0, + lossy_epool_size); + mlx5e_port_set_sbpr(mdev, 0, MLX5_INGRESS_DIR, MLX5_LOSSLESS_POOL, 0, + lossless_ipool_size); + return 0; +} + static int port_set_buffer(struct mlx5e_priv *priv, struct mlx5e_port_buffer *port_buffer) { u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz; struct mlx5_core_dev *mdev = priv->mdev; int sz = MLX5_ST_SZ_BYTES(pbmc_reg); + u32 new_headroom_size = 0; + u32 current_headroom_size; void *in; int err; int i; + current_headroom_size = port_buffer->headroom_size; + in = kzalloc(sz, GFP_KERNEL); if (!in) return -ENOMEM; @@ -110,6 +299,7 @@ static int port_set_buffer(struct mlx5e_priv *priv, u64 xoff = port_buffer->buffer[i].xoff; u64 xon = port_buffer->buffer[i].xon; + new_headroom_size += size; do_div(size, port_buff_cell_sz); do_div(xoff, port_buff_cell_sz); do_div(xon, port_buff_cell_sz); @@ -119,6 +309,17 @@ static int port_set_buffer(struct mlx5e_priv *priv, MLX5_SET(bufferx_reg, buffer, xon_threshold, xon); } + new_headroom_size /= port_buff_cell_sz; + current_headroom_size /= port_buff_cell_sz; + err = port_update_shared_buffer(priv->mdev, current_headroom_size, + new_headroom_size); + if (err) + goto out; + + err = port_update_pool_cfg(priv->mdev, port_buffer); + if (err) + goto out; + err = mlx5e_port_set_pbmc(mdev, in); out: kfree(in); @@ -174,6 +375,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer, /** * update_buffer_lossy - Update buffer configuration based on pfc + * @mdev: port function core device * @max_mtu: netdev's max_mtu * @pfc_en: <input> current pfc configuration * @buffer: <input> current prio to buffer mapping @@ -192,7 +394,8 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer, * @return: 0 if no error, * sets change to true if buffer configuration was modified. 
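Two pieces of arithmetic in the hunks above are easy to check standalone. First, MLX5_SB_POOL_THRESHOLD = 13 means HW_alpha = (1/128) * 2^(13-1) = 32, so each of n buffers sharing the pool may occupy 32 / (1 + n * 32) of it. Second, port_update_shared_buffer() hands whatever the headroom leaves over to the pools in a 3:1 lossy:lossless split. A sketch with invented cell counts:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Dynamic threshold in alpha units: alpha = 13 -> HW_alpha = 32. */
	int alpha = 13;
	double hw_alpha = (1 << (alpha - 1)) / 128.0;
	int n;

	for (n = 1; n <= 4; n++)
		printf("n=%d buffers -> %.1f%% of the pool each\n",
		       n, 100.0 * hw_alpha / (1.0 + n * hw_alpha));

	/* 3:1 split of the space left after headroom (cell counts invented). */
	uint32_t total_buffer_size = 1000;
	uint32_t new_headroom_size = 200;
	uint32_t shared = total_buffer_size - new_headroom_size;

	if (shared < 4) /* same sanity check as the driver */
		return 1;
	printf("lossy epool: %u cells, lossless ipool: %u cells\n",
	       (shared / 4) * 3, shared / 4);
	return 0;
}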
*/ -static int update_buffer_lossy(unsigned int max_mtu, +static int update_buffer_lossy(struct mlx5_core_dev *mdev, + unsigned int max_mtu, u8 pfc_en, u8 *buffer, u32 xoff, u16 port_buff_cell_sz, struct mlx5e_port_buffer *port_buffer, bool *change) @@ -229,6 +432,10 @@ static int update_buffer_lossy(unsigned int max_mtu, } if (changed) { + err = port_update_pool_cfg(mdev, port_buffer); + if (err) + return err; + err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz); if (err) return err; @@ -293,23 +500,30 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, } if (change & MLX5E_PORT_BUFFER_PFC) { + mlx5e_dbg(HW, priv, "%s: requested PFC per priority bitmask: 0x%x\n", + __func__, pfc->pfc_en); err = mlx5e_port_query_priority2buffer(priv->mdev, buffer); if (err) return err; - err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff, port_buff_cell_sz, - &port_buffer, &update_buffer); + err = update_buffer_lossy(priv->mdev, max_mtu, pfc->pfc_en, buffer, xoff, + port_buff_cell_sz, &port_buffer, + &update_buffer); if (err) return err; } if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) { update_prio2buffer = true; + for (i = 0; i < MLX5E_MAX_BUFFER; i++) + mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n", + __func__, i, prio2buffer[i]); + err = fill_pfc_en(priv->mdev, &curr_pfc_en); if (err) return err; - err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, xoff, + err = update_buffer_lossy(priv->mdev, max_mtu, curr_pfc_en, prio2buffer, xoff, port_buff_cell_sz, &port_buffer, &update_buffer); if (err) return err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h index 80af7a5ac604..a6ef118de758 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h @@ -60,6 +60,7 @@ struct mlx5e_bufferx_reg { struct mlx5e_port_buffer { u32 port_buffer_size; u32 spare_buffer_size; + u32 headroom_size; struct mlx5e_bufferx_reg buffer[MLX5E_MAX_BUFFER]; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c index 8469e9c38670..9a1bc93b7dc6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c @@ -771,8 +771,8 @@ void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c) if (test_bit(MLX5E_PTP_STATE_RX, c->state)) { mlx5e_ptp_rx_set_fs(c->priv); mlx5e_activate_rq(&c->rq); - mlx5e_trigger_napi_sched(&c->napi); } + mlx5e_trigger_napi_sched(&c->napi); } void mlx5e_ptp_deactivate_channel(struct mlx5e_ptp *c) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c index b6f5c1bcdbcd..016a61c52c45 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c @@ -120,8 +120,8 @@ int mlx5e_rep_bond_enslave(struct mlx5_eswitch *esw, struct net_device *netdev, priv = netdev_priv(netdev); rpriv = priv->ppriv; - err = mlx5_esw_acl_ingress_vport_bond_update(esw, rpriv->rep->vport, - mdata->metadata_reg_c_0); + err = mlx5_esw_acl_ingress_vport_metadata_update(esw, rpriv->rep->vport, + mdata->metadata_reg_c_0); if (err) goto ingress_err; @@ -167,7 +167,7 @@ void mlx5e_rep_bond_unslave(struct mlx5_eswitch *esw, /* Reset bond_metadata to zero first then reset all ingress/egress * acls and rx rules of unslave representor's vport */ - mlx5_esw_acl_ingress_vport_bond_update(esw, 
rpriv->rep->vport, 0); + mlx5_esw_acl_ingress_vport_metadata_update(esw, rpriv->rep->vport, 0); mlx5_esw_acl_egress_vport_unbond(esw, rpriv->rep->vport); mlx5e_rep_bond_update(priv, false); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c index b08339d986d5..e24b46953542 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* Copyright (c) 2020 Mellanox Technologies. */ -#include <net/dst_metadata.h> #include <linux/netdevice.h> #include <linux/if_macvlan.h> #include <linux/list.h> @@ -589,7 +588,7 @@ mlx5e_rep_indr_stats_act(struct mlx5e_rep_priv *rpriv, act = mlx5e_tc_act_get(fl_act->id, ns_type); if (!act || !act->stats_action) - return -EOPNOTSUPP; + return mlx5e_tc_fill_action_stats(priv, fl_act); return act->stats_action(priv, fl_act); } @@ -665,232 +664,54 @@ void mlx5e_rep_tc_netdevice_event_unregister(struct mlx5e_rep_priv *rpriv) mlx5e_rep_indr_block_unbind); } -static bool mlx5e_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5e_tc_update_priv *tc_priv, - u32 tunnel_id) -{ - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - struct tunnel_match_enc_opts enc_opts = {}; - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - struct metadata_dst *tun_dst; - struct tunnel_match_key key; - u32 tun_id, enc_opts_id; - struct net_device *dev; - int err; - - enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK; - tun_id = tunnel_id >> ENC_OPTS_BITS; - - if (!tun_id) - return true; - - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - - err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel for tun_id: %d, err: %d\n", - tun_id, err); - return false; - } - - if (enc_opts_id) { - err = mapping_find(uplink_priv->tunnel_enc_opts_mapping, - enc_opts_id, &enc_opts); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n", - enc_opts_id, err); - return false; - } - } - - if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { - tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst, - key.enc_ip.tos, key.enc_ip.ttl, - key.enc_tp.dst, TUNNEL_KEY, - key32_to_tunnel_id(key.enc_key_id.keyid), - enc_opts.key.len); - } else if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { - tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst, - key.enc_ip.tos, key.enc_ip.ttl, - key.enc_tp.dst, 0, TUNNEL_KEY, - key32_to_tunnel_id(key.enc_key_id.keyid), - enc_opts.key.len); - } else { - netdev_dbg(priv->netdev, - "Couldn't restore tunnel, unsupported addr_type: %d\n", - key.enc_control.addr_type); - return false; - } - - if (!tun_dst) { - netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n"); - return false; - } - - tun_dst->u.tun_info.key.tp_src = key.enc_tp.src; - - if (enc_opts.key.len) - ip_tunnel_info_opts_set(&tun_dst->u.tun_info, - enc_opts.key.data, - enc_opts.key.len, - enc_opts.key.dst_opt_type); - - skb_dst_set(skb, (struct dst_entry *)tun_dst); - dev = dev_get_by_index(&init_net, key.filter_ifindex); - if (!dev) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel device with ifindex: %d\n", - key.filter_ifindex); - return false; - } - - /* Set fwd_dev so we do dev_put() after datapath */ - tc_priv->fwd_dev = dev; - - skb->dev = 
dev; - - return true; -} - -static bool mlx5e_restore_skb_chain(struct sk_buff *skb, u32 chain, u32 reg_c1, - struct mlx5e_tc_update_priv *tc_priv) -{ - struct mlx5e_priv *priv = netdev_priv(skb->dev); - u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - -#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - if (chain) { - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - struct tc_skb_ext *tc_skb_ext; - struct mlx5_eswitch *esw; - u32 zone_restore_id; - - tc_skb_ext = tc_skb_ext_alloc(skb); - if (!tc_skb_ext) { - WARN_ON(1); - return false; - } - tc_skb_ext->chain = chain; - zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK; - esw = priv->mdev->priv.eswitch; - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - if (!mlx5e_tc_ct_restore_flow(uplink_priv->ct_priv, skb, - zone_restore_id)) - return false; - } -#endif /* CONFIG_NET_TC_SKB_EXT */ - - return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id); -} - -static void mlx5_rep_tc_post_napi_receive(struct mlx5e_tc_update_priv *tc_priv) -{ - if (tc_priv->fwd_dev) - dev_put(tc_priv->fwd_dev); -} - -static void mlx5e_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5_mapped_obj *mapped_obj, - struct mlx5e_tc_update_priv *tc_priv) -{ - if (!mlx5e_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) { - netdev_dbg(priv->netdev, - "Failed to restore tunnel info for sampled packet\n"); - return; - } - mlx5e_tc_sample_skb(skb, mapped_obj); - mlx5_rep_tc_post_napi_receive(tc_priv); -} - -static bool mlx5e_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5_mapped_obj *mapped_obj, - struct mlx5e_tc_update_priv *tc_priv, - bool *forward_tx, - u32 reg_c1) -{ - u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - - /* Tunnel restore takes precedence over int port restore */ - if (tunnel_id) - return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id); - - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - - if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb, - mapped_obj->int_port_metadata, forward_tx)) { - /* Set fwd_dev for future dev_put */ - tc_priv->fwd_dev = skb->dev; - - return true; - } - - return false; -} - void mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq, struct sk_buff *skb) { - u32 reg_c1 = be32_to_cpu(cqe->ft_metadata); + u32 reg_c0, reg_c1, zone_restore_id, tunnel_id; struct mlx5e_tc_update_priv tc_priv = {}; - struct mlx5_mapped_obj mapped_obj; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + struct mlx5_tc_ct_priv *ct_priv; + struct mapping_ctx *mapping_ctx; struct mlx5_eswitch *esw; - bool forward_tx = false; struct mlx5e_priv *priv; - u32 reg_c0; - int err; reg_c0 = (be32_to_cpu(cqe->sop_drop_qpn) & MLX5E_TC_FLOW_ID_MASK); if (!reg_c0 || reg_c0 == MLX5_FS_DEFAULT_FLOW_TAG) goto forward; - /* If reg_c0 is not equal to the default flow tag then skb->mark + /* If mapped_obj_id is not equal to the default flow tag then skb->mark * is not supported and must be reset back to 0. 
*/ skb->mark = 0; priv = netdev_priv(skb->dev); esw = priv->mdev->priv.eswitch; - err = mapping_find(esw->offloads.reg_c0_obj_pool, reg_c0, &mapped_obj); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find mapped object for reg_c0: %d, err: %d\n", - reg_c0, err); - goto free_skb; - } + mapping_ctx = esw->offloads.reg_c0_obj_pool; + reg_c1 = be32_to_cpu(cqe->ft_metadata); + zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK; + tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) { - if (!mlx5e_restore_skb_chain(skb, mapped_obj.chain, reg_c1, &tc_priv) && - !mlx5_ipsec_is_rx_flow(cqe)) - goto free_skb; - } else if (mapped_obj.type == MLX5_MAPPED_OBJ_SAMPLE) { - mlx5e_restore_skb_sample(priv, skb, &mapped_obj, &tc_priv); - goto free_skb; - } else if (mapped_obj.type == MLX5_MAPPED_OBJ_INT_PORT_METADATA) { - if (!mlx5e_restore_skb_int_port(priv, skb, &mapped_obj, &tc_priv, - &forward_tx, reg_c1)) - goto free_skb; - } else { - netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type); + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + ct_priv = uplink_priv->ct_priv; + + if (!mlx5_ipsec_is_rx_flow(cqe) && + !mlx5e_tc_update_skb(cqe, skb, mapping_ctx, reg_c0, ct_priv, zone_restore_id, tunnel_id, + &tc_priv)) goto free_skb; - } forward: - if (forward_tx) + if (tc_priv.skb_done) + goto free_skb; + + if (tc_priv.forward_tx) dev_queue_xmit(skb); else napi_gro_receive(rq->cq.napi, skb); - mlx5_rep_tc_post_napi_receive(&tc_priv); + if (tc_priv.fwd_dev) + dev_put(tc_priv.fwd_dev); return; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c index 1ae15b8536a8..c462fe76495b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c @@ -736,10 +736,10 @@ static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = { void mlx5e_reporter_rx_create(struct mlx5e_priv *priv) { - struct devlink_port *dl_port = mlx5e_devlink_get_dl_port(priv); struct devlink_health_reporter *reporter; - reporter = devlink_port_health_reporter_create(dl_port, &mlx5_rx_reporter_ops, + reporter = devlink_port_health_reporter_create(priv->netdev->devlink_port, + &mlx5_rx_reporter_ops, MLX5E_REPORTER_RX_GRACEFUL_PERIOD, priv); if (IS_ERR(reporter)) { netdev_warn(priv->netdev, "Failed to create rx reporter, err = %ld\n", @@ -754,6 +754,6 @@ void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv) if (!priv->rx_reporter) return; - devlink_port_health_reporter_destroy(priv->rx_reporter); + devlink_health_reporter_destroy(priv->rx_reporter); priv->rx_reporter = NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c index 60bc5b577ab9..34666e2b3871 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c @@ -81,6 +81,10 @@ static int mlx5e_tx_reporter_err_cqe_recover(void *ctx) sq->stats->recover++; clear_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state); mlx5e_activate_txqsq(sq); + if (sq->channel) + mlx5e_trigger_napi_icosq(sq->channel); + else + mlx5e_trigger_napi_sched(sq->cq.napi); return 0; out: @@ -590,10 +594,10 @@ static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = { void mlx5e_reporter_tx_create(struct mlx5e_priv *priv) { - struct devlink_port *dl_port = 
mlx5e_devlink_get_dl_port(priv); struct devlink_health_reporter *reporter; - reporter = devlink_port_health_reporter_create(dl_port, &mlx5_tx_reporter_ops, + reporter = devlink_port_health_reporter_create(priv->netdev->devlink_port, + &mlx5_tx_reporter_ops, MLX5_REPORTER_TX_GRACEFUL_PERIOD, priv); if (IS_ERR(reporter)) { netdev_warn(priv->netdev, @@ -609,6 +613,6 @@ void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv) if (!priv->tx_reporter) return; - devlink_port_health_reporter_destroy(priv->tx_reporter); + devlink_health_reporter_destroy(priv->tx_reporter); priv->tx_reporter = NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c index 78c427b38048..07cc65596f89 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c @@ -216,7 +216,6 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state, struct net_device *uplink_dev; struct mlx5e_priv *out_priv; struct mlx5_eswitch *esw; - bool is_uplink_rep; int *ifindexes; int if_count; int err; @@ -231,10 +230,9 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state, parse_state->ifindexes[if_count] = out_dev->ifindex; parse_state->if_count++; - is_uplink_rep = mlx5e_eswitch_uplink_rep(out_dev); - err = mlx5_lag_do_mirred(priv->mdev, out_dev); - if (err) - return err; + + if (mlx5_lag_mpesw_do_mirred(priv->mdev, out_dev, extack)) + return -EOPNOTSUPP; out_dev = get_fdb_out_dev(uplink_dev, out_dev); if (!out_dev) @@ -275,13 +273,6 @@ parse_mirred(struct mlx5e_tc_act_parse_state *parse_state, esw_attr->dests[esw_attr->out_count].rep = rpriv->rep; esw_attr->dests[esw_attr->out_count].mdev = out_priv->mdev; - /* If output device is bond master then rules are not explicit - * so we don't attempt to count them. 
- */ - if (is_uplink_rep && MLX5_CAP_PORT_SELECTION(priv->mdev, port_select_flow_table) && - MLX5_CAP_GEN(priv->mdev, create_lag_when_not_master_up)) - attr->lag.count = true; - esw_attr->out_count++; return 0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c index b86ac604d0c2..2e0d88b513aa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/vlan.c @@ -44,19 +44,17 @@ parse_tc_vlan_action(struct mlx5e_priv *priv, return -EOPNOTSUPP; } + if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, vlan_idx)) { + NL_SET_ERR_MSG_MOD(extack, "firmware vlan actions is not supported"); + return -EOPNOTSUPP; + } + switch (act->id) { case FLOW_ACTION_VLAN_POP: - if (vlan_idx) { - if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, - MLX5_FS_VLAN_DEPTH)) { - NL_SET_ERR_MSG_MOD(extack, "vlan pop action is not supported"); - return -EOPNOTSUPP; - } - + if (vlan_idx) *action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP_2; - } else { + else *action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP; - } break; case FLOW_ACTION_VLAN_PUSH: attr->vlan_vid[vlan_idx] = act->vlan.vid; @@ -65,25 +63,10 @@ parse_tc_vlan_action(struct mlx5e_priv *priv, if (!attr->vlan_proto[vlan_idx]) attr->vlan_proto[vlan_idx] = htons(ETH_P_8021Q); - if (vlan_idx) { - if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, - MLX5_FS_VLAN_DEPTH)) { - NL_SET_ERR_MSG_MOD(extack, - "vlan push action is not supported for vlan depth > 1"); - return -EOPNOTSUPP; - } - + if (vlan_idx) *action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2; - } else { - if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, 1) && - (act->vlan.proto != htons(ETH_P_8021Q) || - act->vlan.prio)) { - NL_SET_ERR_MSG_MOD(extack, "vlan push action is not supported"); - return -EOPNOTSUPP; - } - + else *action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH; - } break; case FLOW_ACTION_VLAN_POP_ETH: parse_state->eth_pop = true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c new file mode 100644 index 000000000000..f71766dca660 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+ +#include <linux/rhashtable.h> +#include <net/flow_offload.h> +#include "en/tc_priv.h" +#include "act_stats.h" +#include "en/fs.h" + +struct mlx5e_tc_act_stats_handle { + struct rhashtable ht; + spinlock_t ht_lock; /* protects hashtable */ +}; + +struct mlx5e_tc_act_stats { + unsigned long tc_act_cookie; + + struct mlx5_fc *counter; + u64 lastpackets; + u64 lastbytes; + + struct rhash_head hash; + struct rcu_head rcu_head; +}; + +static const struct rhashtable_params act_counters_ht_params = { + .head_offset = offsetof(struct mlx5e_tc_act_stats, hash), + .key_offset = 0, + .key_len = offsetof(struct mlx5e_tc_act_stats, counter), + .automatic_shrinking = true, +}; + +struct mlx5e_tc_act_stats_handle * +mlx5e_tc_act_stats_create(void) +{ + struct mlx5e_tc_act_stats_handle *handle; + int err; + + handle = kvzalloc(sizeof(*handle), GFP_KERNEL); + if (IS_ERR(handle)) + return ERR_PTR(-ENOMEM); + + err = rhashtable_init(&handle->ht, &act_counters_ht_params); + if (err) + goto err; + + spin_lock_init(&handle->ht_lock); + return handle; +err: + kvfree(handle); + return ERR_PTR(err); +} + +void mlx5e_tc_act_stats_free(struct mlx5e_tc_act_stats_handle *handle) +{ + rhashtable_destroy(&handle->ht); + kvfree(handle); +} + +static int +mlx5e_tc_act_stats_add(struct mlx5e_tc_act_stats_handle *handle, + unsigned long act_cookie, + struct mlx5_fc *counter) +{ + struct mlx5e_tc_act_stats *act_stats, *old_act_stats; + struct rhashtable *ht = &handle->ht; + int err = 0; + + act_stats = kvzalloc(sizeof(*act_stats), GFP_KERNEL); + if (!act_stats) + return -ENOMEM; + + act_stats->tc_act_cookie = act_cookie; + act_stats->counter = counter; + + rcu_read_lock(); + old_act_stats = rhashtable_lookup_get_insert_fast(ht, + &act_stats->hash, + act_counters_ht_params); + if (IS_ERR(old_act_stats)) { + err = PTR_ERR(old_act_stats); + goto err_hash_insert; + } else if (old_act_stats) { + err = -EEXIST; + goto err_hash_insert; + } + rcu_read_unlock(); + + return 0; + +err_hash_insert: + rcu_read_unlock(); + kvfree(act_stats); + return err; +} + +void +mlx5e_tc_act_stats_del_flow(struct mlx5e_tc_act_stats_handle *handle, + struct mlx5e_tc_flow *flow) +{ + struct mlx5_flow_attr *attr; + struct mlx5e_tc_act_stats *act_stats; + int i; + + if (!flow_flag_test(flow, USE_ACT_STATS)) + return; + + list_for_each_entry(attr, &flow->attrs, list) { + for (i = 0; i < attr->tc_act_cookies_count; i++) { + struct rhashtable *ht = &handle->ht; + + spin_lock(&handle->ht_lock); + act_stats = rhashtable_lookup_fast(ht, + &attr->tc_act_cookies[i], + act_counters_ht_params); + if (act_stats && + rhashtable_remove_fast(ht, &act_stats->hash, + act_counters_ht_params) == 0) + kvfree_rcu(act_stats, rcu_head); + + spin_unlock(&handle->ht_lock); + } + } +} + +int +mlx5e_tc_act_stats_add_flow(struct mlx5e_tc_act_stats_handle *handle, + struct mlx5e_tc_flow *flow) +{ + struct mlx5_fc *curr_counter = NULL; + unsigned long last_cookie = 0; + struct mlx5_flow_attr *attr; + int err; + int i; + + if (!flow_flag_test(flow, USE_ACT_STATS)) + return 0; + + list_for_each_entry(attr, &flow->attrs, list) { + if (attr->counter) + curr_counter = attr->counter; + + for (i = 0; i < attr->tc_act_cookies_count; i++) { + /* jump over identical ids (e.g. 
pedit)*/ + if (last_cookie == attr->tc_act_cookies[i]) + continue; + + err = mlx5e_tc_act_stats_add(handle, attr->tc_act_cookies[i], curr_counter); + if (err) + goto out_err; + last_cookie = attr->tc_act_cookies[i]; + } + } + + return 0; +out_err: + mlx5e_tc_act_stats_del_flow(handle, flow); + return err; +} + +int +mlx5e_tc_act_stats_fill_stats(struct mlx5e_tc_act_stats_handle *handle, + struct flow_offload_action *fl_act) +{ + struct rhashtable *ht = &handle->ht; + struct mlx5e_tc_act_stats *item; + struct mlx5e_tc_act_stats key; + u64 pkts, bytes, lastused; + int err = 0; + + key.tc_act_cookie = fl_act->cookie; + + rcu_read_lock(); + item = rhashtable_lookup(ht, &key, act_counters_ht_params); + if (!item) { + rcu_read_unlock(); + err = -ENOENT; + goto err_out; + } + + mlx5_fc_query_cached_raw(item->counter, + &bytes, &pkts, &lastused); + + flow_stats_update(&fl_act->stats, + bytes - item->lastbytes, + pkts - item->lastpackets, + 0, lastused, FLOW_ACTION_HW_STATS_DELAYED); + + item->lastpackets = pkts; + item->lastbytes = bytes; + rcu_read_unlock(); + + return 0; + +err_out: + return err; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h new file mode 100644 index 000000000000..002292c2567c --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act_stats.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#ifndef __MLX5_EN_ACT_STATS_H__ +#define __MLX5_EN_ACT_STATS_H__ + +#include <net/flow_offload.h> +#include "en/tc_priv.h" + +struct mlx5e_tc_act_stats_handle; + +struct mlx5e_tc_act_stats_handle *mlx5e_tc_act_stats_create(void); +void mlx5e_tc_act_stats_free(struct mlx5e_tc_act_stats_handle *handle); + +int +mlx5e_tc_act_stats_add_flow(struct mlx5e_tc_act_stats_handle *handle, + struct mlx5e_tc_flow *flow); + +void +mlx5e_tc_act_stats_del_flow(struct mlx5e_tc_act_stats_handle *handle, + struct mlx5e_tc_flow *flow); + +int +mlx5e_tc_act_stats_fill_stats(struct mlx5e_tc_act_stats_handle *handle, + struct flow_offload_action *fl_act); + +#endif /* __MLX5_EN_ACT_STATS_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c index 78af8a3175bf..8218c892b161 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c @@ -28,7 +28,7 @@ struct mlx5e_flow_meter_aso_obj { int base_id; int total_meters; - unsigned long meters_map[0]; /* must be at the end of this struct */ + unsigned long meters_map[]; /* must be at the end of this struct */ }; struct mlx5e_flow_meters { @@ -204,13 +204,15 @@ mlx5e_flow_meter_create_aso_obj(struct mlx5e_flow_meters *flow_meters, int *obj_ u32 in[MLX5_ST_SZ_DW(create_flow_meter_aso_obj_in)] = {}; u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; struct mlx5_core_dev *mdev = flow_meters->mdev; - void *obj; + void *obj, *param; int err; MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_GENERAL_OBJECT_TYPES_FLOW_METER_ASO); - MLX5_SET(general_obj_in_cmd_hdr, in, log_obj_range, flow_meters->log_granularity); + param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param); + MLX5_SET(general_obj_create_param, param, log_obj_range, + flow_meters->log_granularity); obj = MLX5_ADDR_OF(create_flow_meter_aso_obj_in, in, flow_meter_aso_obj); 
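Returning to the act_stats scheme introduced just above: mlx5e_tc_act_stats_fill_stats() reports deltas rather than totals, caching the last hardware snapshot per action cookie and passing only the increment to flow_stats_update(). A reduced userspace model of that bookkeeping, with invented counter readings:

#include <stdio.h>
#include <stdint.h>

struct act_stats {
	uint64_t lastpackets;
	uint64_t lastbytes;
};

/* Report only the increase since the previous query, then advance the
 * snapshot, as the fill_stats path does.
 */
static void fill_stats(struct act_stats *s, uint64_t pkts, uint64_t bytes)
{
	printf("delta: %llu packets, %llu bytes\n",
	       (unsigned long long)(pkts - s->lastpackets),
	       (unsigned long long)(bytes - s->lastbytes));
	s->lastpackets = pkts;
	s->lastbytes = bytes;
}

int main(void)
{
	struct act_stats s = { 0, 0 };

	fill_stats(&s, 100, 64000);  /* first query: reports the totals */
	fill_stats(&s, 150, 96000);  /* next query: reports the increment */
	return 0;
}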
MLX5_SET(flow_meter_aso_obj, obj, meter_aso_access_pd, flow_meters->pdn); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c index f2c2c752bd1c..558a776359af 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c @@ -237,7 +237,7 @@ sample_modify_hdr_get(struct mlx5_core_dev *mdev, u32 obj_id, int err; err = mlx5e_tc_match_to_reg_set(mdev, mod_acts, MLX5_FLOW_NAMESPACE_FDB, - CHAIN_TO_REG, obj_id); + MAPPED_OBJ_TO_REG, obj_id); if (err) goto err_set_regc0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index 313df8232db7..314983bc6f08 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -35,6 +35,7 @@ #define MLX5_CT_STATE_REPLY_BIT BIT(4) #define MLX5_CT_STATE_RELATED_BIT BIT(5) #define MLX5_CT_STATE_INVALID_BIT BIT(6) +#define MLX5_CT_STATE_NEW_BIT BIT(7) #define MLX5_CT_LABELS_BITS MLX5_REG_MAPPING_MBITS(LABELS_TO_REG) #define MLX5_CT_LABELS_MASK MLX5_REG_MAPPING_MASK(LABELS_TO_REG) @@ -59,6 +60,7 @@ struct mlx5_tc_ct_debugfs { struct mlx5_tc_ct_priv { struct mlx5_core_dev *dev; + struct mlx5e_priv *priv; const struct net_device *netdev; struct mod_hdr_tbl *mod_hdr_tbl; struct xarray tuple_ids; @@ -85,7 +87,6 @@ struct mlx5_ct_flow { struct mlx5_flow_attr *pre_ct_attr; struct mlx5_flow_handle *pre_ct_rule; struct mlx5_ct_ft *ft; - u32 chain_mapping; }; struct mlx5_ct_zone_rule { @@ -721,12 +722,14 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv, DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS); DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr); struct flow_action_entry *meta; + enum ip_conntrack_info ctinfo; u16 ct_state = 0; int err; meta = mlx5_tc_ct_get_ct_metadata_action(flow_rule); if (!meta) return -EOPNOTSUPP; + ctinfo = meta->ct_metadata.cookie & NFCT_INFOMASK; err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels, &attr->ct_attr.ct_labels_id); @@ -742,7 +745,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv, ct_state |= MLX5_CT_STATE_NAT_BIT; } - ct_state |= MLX5_CT_STATE_ESTABLISHED_BIT | MLX5_CT_STATE_TRK_BIT; + ct_state |= MLX5_CT_STATE_TRK_BIT; + ct_state |= ctinfo == IP_CT_NEW ? MLX5_CT_STATE_NEW_BIT : MLX5_CT_STATE_ESTABLISHED_BIT; ct_state |= meta->ct_metadata.orig_dir ? 
0 : MLX5_CT_STATE_REPLY_BIT; err = mlx5_tc_ct_entry_set_registers(ct_priv, &mod_acts, ct_state, @@ -871,6 +875,68 @@ err_attr: return err; } +static int +mlx5_tc_ct_entry_replace_rule(struct mlx5_tc_ct_priv *ct_priv, + struct flow_rule *flow_rule, + struct mlx5_ct_entry *entry, + bool nat, u8 zone_restore_id) +{ + struct mlx5_ct_zone_rule *zone_rule = &entry->zone_rules[nat]; + struct mlx5_flow_attr *attr = zone_rule->attr, *old_attr; + struct mlx5e_mod_hdr_handle *mh; + struct mlx5_ct_fs_rule *rule; + struct mlx5_flow_spec *spec; + int err; + + spec = kvzalloc(sizeof(*spec), GFP_KERNEL); + if (!spec) + return -ENOMEM; + + old_attr = mlx5_alloc_flow_attr(ct_priv->ns_type); + if (!old_attr) { + err = -ENOMEM; + goto err_attr; + } + *old_attr = *attr; + + err = mlx5_tc_ct_entry_create_mod_hdr(ct_priv, attr, flow_rule, &mh, zone_restore_id, + nat, mlx5_tc_ct_entry_has_nat(entry)); + if (err) { + ct_dbg("Failed to create ct entry mod hdr"); + goto err_mod_hdr; + } + + mlx5_tc_ct_set_tuple_match(ct_priv, spec, flow_rule); + mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, entry->tuple.zone, MLX5_CT_ZONE_MASK); + + rule = ct_priv->fs_ops->ct_rule_add(ct_priv->fs, spec, attr, flow_rule); + if (IS_ERR(rule)) { + err = PTR_ERR(rule); + ct_dbg("Failed to add replacement ct entry rule, nat: %d", nat); + goto err_rule; + } + + ct_priv->fs_ops->ct_rule_del(ct_priv->fs, zone_rule->rule); + zone_rule->rule = rule; + mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, old_attr, zone_rule->mh); + zone_rule->mh = mh; + + kfree(old_attr); + kvfree(spec); + ct_dbg("Replaced ct entry rule in zone %d", entry->tuple.zone); + + return 0; + +err_rule: + mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, mh); + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); +err_mod_hdr: + kfree(old_attr); +err_attr: + kvfree(spec); + return err; +} + static bool mlx5_tc_ct_entry_valid(struct mlx5_ct_entry *entry) { @@ -1066,6 +1132,52 @@ err_orig: } static int +mlx5_tc_ct_entry_replace_rules(struct mlx5_tc_ct_priv *ct_priv, + struct flow_rule *flow_rule, + struct mlx5_ct_entry *entry, + u8 zone_restore_id) +{ + int err; + + err = mlx5_tc_ct_entry_replace_rule(ct_priv, flow_rule, entry, false, + zone_restore_id); + if (err) + return err; + + err = mlx5_tc_ct_entry_replace_rule(ct_priv, flow_rule, entry, true, + zone_restore_id); + if (err) + mlx5_tc_ct_entry_del_rule(ct_priv, entry, false); + return err; +} + +static int +mlx5_tc_ct_block_flow_offload_replace(struct mlx5_ct_ft *ft, struct flow_rule *flow_rule, + struct mlx5_ct_entry *entry, unsigned long cookie) +{ + struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + int err; + + err = mlx5_tc_ct_entry_replace_rules(ct_priv, flow_rule, entry, ft->zone_restore_id); + if (!err) + return 0; + + /* If failed to update the entry, then look it up again under ht_lock + * protection and properly delete it. 
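For reference, the new ct_state derivation above reduces to a small bit of logic: the conntrack info rides in the low bits of the metadata cookie (NFCT_INFOMASK), and IP_CT_NEW now selects a dedicated NEW bit instead of being refused at parse time. A userspace model follows; the REPLY and NEW bit positions match the hunk above, while the TRK and ESTABLISHED positions are assumptions for illustration:

#include <stdio.h>
#include <stdint.h>

#define MLX5_CT_STATE_ESTABLISHED_BIT (1 << 1) /* assumed position */
#define MLX5_CT_STATE_TRK_BIT         (1 << 2) /* assumed position */
#define MLX5_CT_STATE_REPLY_BIT       (1 << 4)
#define MLX5_CT_STATE_NEW_BIT         (1 << 7)

#define NFCT_INFOMASK 7UL
enum { IP_CT_ESTABLISHED = 0, IP_CT_NEW = 2 }; /* subset of ip_conntrack_info */

static uint16_t build_ct_state(unsigned long cookie, int orig_dir)
{
	unsigned long ctinfo = cookie & NFCT_INFOMASK;
	uint16_t ct_state = MLX5_CT_STATE_TRK_BIT;

	ct_state |= (ctinfo == IP_CT_NEW) ? MLX5_CT_STATE_NEW_BIT
					  : MLX5_CT_STATE_ESTABLISHED_BIT;
	ct_state |= orig_dir ? 0 : MLX5_CT_STATE_REPLY_BIT;
	return ct_state;
}

int main(void)
{
	printf("new, original dir:      0x%02x\n", build_ct_state(IP_CT_NEW, 1));
	printf("established, reply dir: 0x%02x\n", build_ct_state(IP_CT_ESTABLISHED, 0));
	return 0;
}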
+ */ + spin_lock_bh(&ct_priv->ht_lock); + entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params); + if (entry) { + rhashtable_remove_fast(&ft->ct_entries_ht, &entry->node, cts_ht_params); + spin_unlock_bh(&ct_priv->ht_lock); + mlx5_tc_ct_entry_put(entry); + } else { + spin_unlock_bh(&ct_priv->ht_lock); + } + return err; +} + +static int mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft, struct flow_cls_offload *flow) { @@ -1083,9 +1195,17 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft, spin_lock_bh(&ct_priv->ht_lock); entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params); if (entry && refcount_inc_not_zero(&entry->refcnt)) { + if (entry->restore_cookie == meta_action->ct_metadata.cookie) { + spin_unlock_bh(&ct_priv->ht_lock); + mlx5_tc_ct_entry_put(entry); + return -EEXIST; + } + entry->restore_cookie = meta_action->ct_metadata.cookie; spin_unlock_bh(&ct_priv->ht_lock); + + err = mlx5_tc_ct_block_flow_offload_replace(ft, flow_rule, entry, cookie); mlx5_tc_ct_entry_put(entry); - return -EEXIST; + return err; } spin_unlock_bh(&ct_priv->ht_lock); @@ -1323,7 +1443,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, struct mlx5_ct_attr *ct_attr, struct netlink_ext_ack *extack) { - bool trk, est, untrk, unest, new, rpl, unrpl, rel, unrel, inv, uninv; + bool trk, est, untrk, unnew, unest, new, rpl, unrpl, rel, unrel, inv, uninv; struct flow_rule *rule = flow_cls_offload_flow_rule(f); struct flow_dissector_key_ct *mask, *key; u32 ctstate = 0, ctstate_mask = 0; @@ -1369,15 +1489,18 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, rel = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_RELATED; inv = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_INVALID; untrk = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_TRACKED; + unnew = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_NEW; unest = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED; unrpl = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_REPLY; unrel = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_RELATED; uninv = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_INVALID; ctstate |= trk ? MLX5_CT_STATE_TRK_BIT : 0; + ctstate |= new ? MLX5_CT_STATE_NEW_BIT : 0; ctstate |= est ? MLX5_CT_STATE_ESTABLISHED_BIT : 0; ctstate |= rpl ? MLX5_CT_STATE_REPLY_BIT : 0; ctstate_mask |= (untrk || trk) ? MLX5_CT_STATE_TRK_BIT : 0; + ctstate_mask |= (unnew || new) ? MLX5_CT_STATE_NEW_BIT : 0; ctstate_mask |= (unest || est) ? MLX5_CT_STATE_ESTABLISHED_BIT : 0; ctstate_mask |= (unrpl || rpl) ? MLX5_CT_STATE_REPLY_BIT : 0; ctstate_mask |= unrel ? 
MLX5_CT_STATE_RELATED_BIT : 0; @@ -1395,12 +1518,6 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, return -EOPNOTSUPP; } - if (new) { - NL_SET_ERR_MSG_MOD(extack, - "matching on ct_state +new isn't supported"); - return -EOPNOTSUPP; - } - if (mask->ct_zone) mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, key->ct_zone, MLX5_CT_ZONE_MASK); @@ -1441,6 +1558,7 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv, attr->ct_attr.zone = act->ct.zone; attr->ct_attr.ct_action = act->ct.action; attr->ct_attr.nf_ft = act->ct.flow_table; + attr->ct_attr.act_miss_cookie = act->miss_cookie; return 0; } @@ -1778,7 +1896,7 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft) * + ft prio (tc chain) + * + original match + * +---------------------+ - * | set chain miss mapping + * | set act_miss_cookie mapping * | set fte_id * | set tunnel_id * | do decap @@ -1823,7 +1941,7 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_flow_attr *pre_ct_attr; struct mlx5_modify_hdr *mod_hdr; struct mlx5_ct_flow *ct_flow; - int chain_mapping = 0, err; + int act_miss_mapping = 0, err; struct mlx5_ct_ft *ft; u16 zone; @@ -1858,22 +1976,18 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, pre_ct_attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; - /* Write chain miss tag for miss in ct table as we - * don't go though all prios of this chain as normal tc rules - * miss. - */ - err = mlx5_chains_get_chain_mapping(ct_priv->chains, attr->chain, - &chain_mapping); + err = mlx5e_tc_action_miss_mapping_get(ct_priv->priv, attr, attr->ct_attr.act_miss_cookie, + &act_miss_mapping); if (err) { - ct_dbg("Failed to get chain register mapping for chain"); - goto err_get_chain; + ct_dbg("Failed to get register mapping for act miss"); + goto err_get_act_miss; } - ct_flow->chain_mapping = chain_mapping; + attr->ct_attr.act_miss_mapping = act_miss_mapping; err = mlx5e_tc_match_to_reg_set(priv->mdev, pre_mod_acts, ct_priv->ns_type, - CHAIN_TO_REG, chain_mapping); + MAPPED_OBJ_TO_REG, act_miss_mapping); if (err) { - ct_dbg("Failed to set chain register mapping"); + ct_dbg("Failed to set act miss register mapping"); goto err_mapping; } @@ -1937,8 +2051,8 @@ err_insert_orig: mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr); err_mapping: mlx5e_mod_hdr_dealloc(pre_mod_acts); - mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping); -err_get_chain: + mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, act_miss_mapping); +err_get_act_miss: kfree(ct_flow->pre_ct_attr); err_alloc_pre: mlx5_tc_ct_del_ft_cb(ct_priv, ft); @@ -1977,7 +2091,7 @@ __mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *ct_priv, mlx5_tc_rule_delete(priv, ct_flow->pre_ct_rule, pre_ct_attr); mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr); - mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping); + mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, attr->ct_attr.act_miss_mapping); mlx5_tc_ct_del_ft_cb(ct_priv, ct_flow->ft); kfree(ct_flow->pre_ct_attr); @@ -2074,13 +2188,6 @@ mlx5_tc_ct_init_check_support(struct mlx5e_priv *priv, const char *err_msg = NULL; int err = 0; -#if !IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - /* cannot restore chain ID on HW miss */ - - err_msg = "tc skb extension missing"; - err = -EOPNOTSUPP; - goto out_err; -#endif if (IS_ERR_OR_NULL(post_act)) { /* Ignore_flow_level support isn't supported by default for VFs and so post_act * won't be supported. Skip showing error msg. 
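With MLX5_CT_STATE_NEW_BIT wired into both the entry programming and the match parsing, ct_state "new" follows the same convention as the other flags: the mask bit records that the flag was specified at all, while the key bit records its polarity. A short worked example of the key/mask pairs the ctstate/ctstate_mask logic above produces:

	u32 key, mask;

	/* ct_state +trk+new: both bits set in key and mask */
	key  = MLX5_CT_STATE_TRK_BIT | MLX5_CT_STATE_NEW_BIT;
	mask = MLX5_CT_STATE_TRK_BIT | MLX5_CT_STATE_NEW_BIT;

	/* ct_state +trk-new: same mask, but the NEW bit is clear in the key */
	key  = MLX5_CT_STATE_TRK_BIT;
	mask = MLX5_CT_STATE_TRK_BIT | MLX5_CT_STATE_NEW_BIT;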
@@ -2157,6 +2264,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, } spin_lock_init(&ct_priv->ht_lock); + ct_priv->priv = priv; ct_priv->ns_type = ns_type; ct_priv->chains = chains; ct_priv->netdev = priv->netdev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h index 5bbd6b92840f..5c5ddaa83055 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h @@ -28,6 +28,8 @@ struct mlx5_ct_attr { struct mlx5_ct_flow *ct_flow; struct nf_flowtable *nf_ft; u32 ct_labels_id; + u32 act_miss_mapping; + u64 act_miss_cookie; }; #define zone_to_reg_ct {\ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h index 2b7fd1c0e643..451fd4342a5a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h @@ -30,6 +30,7 @@ enum { MLX5E_TC_FLOW_FLAG_TUN_RX = MLX5E_TC_FLOW_BASE + 9, MLX5E_TC_FLOW_FLAG_FAILED = MLX5E_TC_FLOW_BASE + 10, MLX5E_TC_FLOW_FLAG_SAMPLE = MLX5E_TC_FLOW_BASE + 11, + MLX5E_TC_FLOW_FLAG_USE_ACT_STATS = MLX5E_TC_FLOW_BASE + 12, }; struct mlx5e_tc_flow_parse_attr { @@ -95,8 +96,6 @@ struct mlx5e_tc_flow { */ struct encap_flow_item encaps[MLX5_MAX_FLOW_FWD_VPORTS]; struct mlx5e_tc_flow *peer_flow; - struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */ - struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */ struct mlx5e_hairpin_entry *hpe; /* attached hairpin instance */ struct list_head hairpin; /* flows sharing the same hairpin */ struct list_head peer; /* flows with peer flow */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c index e6f64d890fb3..00a04fdd756f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c @@ -93,11 +93,11 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv, else return -EOPNOTSUPP; - if (!(mlx5e_eswitch_rep(*out_dev) && - mlx5e_is_uplink_rep(netdev_priv(*out_dev)))) + if (!mlx5e_eswitch_uplink_rep(*out_dev)) return -EOPNOTSUPP; - if (mlx5e_eswitch_uplink_rep(priv->netdev) && *out_dev != priv->netdev) + if (mlx5e_eswitch_uplink_rep(priv->netdev) && *out_dev != priv->netdev && + !mlx5_lag_is_mpesw(priv->mdev)) return -EOPNOTSUPP; return 0; @@ -745,8 +745,6 @@ int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv, if (err) goto out; - esw_attr->rx_tun_attr->vni = MLX5_GET(fte_match_param, spec->match_value, - misc_parameters.vxlan_vni); esw_attr->rx_tun_attr->decap_vport = vport_num; } else if (netif_is_ovs_master(attr.route_dev) && mlx5e_tc_int_port_supported(esw)) { int_port = mlx5e_tc_int_port_get(mlx5e_get_int_port_priv(priv), diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c index 2aaf8ab857b8..780224fd67a1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c @@ -1349,7 +1349,8 @@ static void mlx5e_invalidate_encap(struct mlx5e_priv *priv, mlx5e_tc_unoffload_from_slow_path(esw, flow); else mlx5e_tc_unoffload_fdb_rules(esw, flow, flow->attr); - mlx5_modify_header_dealloc(priv->mdev, attr->modify_hdr); + + mlx5e_tc_detach_mod_hdr(priv, flow, attr); attr->modify_hdr = NULL; esw_attr->dests[flow->tmp_entry_index].flags &= @@ -1405,7 
+1406,7 @@ static void mlx5e_reoffload_encap(struct mlx5e_priv *priv, continue; } - err = mlx5e_tc_add_flow_mod_hdr(priv, flow, attr); + err = mlx5e_tc_attach_mod_hdr(priv, flow, attr); if (err) { mlx5_core_warn(priv->mdev, "Failed to update flow mod_hdr err=%d", err); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 853f312cd757..c067d2efab51 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -73,6 +73,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget); void mlx5e_free_rx_descs(struct mlx5e_rq *rq); void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq); +static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config) +{ + return config->rx_filter == HWTSTAMP_FILTER_ALL; +} + /* TX */ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev); bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget); @@ -315,7 +320,6 @@ mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma) } } -void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit_more); void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq); static inline bool mlx5e_tx_mpwqe_is_full(struct mlx5e_tx_mpwqe *session, u8 max_sq_mpw_wqebbs) @@ -445,7 +449,7 @@ mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg, static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_size) { - WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < mlx5e_get_max_sq_wqebbs(mdev)); + WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < (u16)mlx5e_get_max_sq_wqebbs(mdev)); /* A WQE must not cross the page boundary, hence two conditions: * 1. Its size must not exceed the page size. diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 20507ef2f956..bcd6370de440 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -57,8 +57,9 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk) static inline bool mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, - struct page *page, struct xdp_buff *xdp) + struct xdp_buff *xdp) { + struct page *page = virt_to_page(xdp->data); struct skb_shared_info *sinfo = NULL; struct mlx5e_xmit_data xdptxd; struct mlx5e_xdp_info xdpi; @@ -156,10 +157,39 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, return true; } +static int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) +{ + const struct mlx5e_xdp_buff *_ctx = (void *)ctx; + + if (unlikely(!mlx5e_rx_hw_stamp(_ctx->rq->tstamp))) + return -EOPNOTSUPP; + + *timestamp = mlx5e_cqe_ts_to_ns(_ctx->rq->ptp_cyc2time, + _ctx->rq->clock, get_cqe_ts(_ctx->cqe)); + return 0; +} + +static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash) +{ + const struct mlx5e_xdp_buff *_ctx = (void *)ctx; + + if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH))) + return -EOPNOTSUPP; + + *hash = be32_to_cpu(_ctx->cqe->rss_hash_result); + return 0; +} + +const struct xdp_metadata_ops mlx5e_xdp_metadata_ops = { + .xmo_rx_timestamp = mlx5e_xdp_rx_timestamp, + .xmo_rx_hash = mlx5e_xdp_rx_hash, +}; + /* returns true if packet was consumed by xdp */ -bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, - struct bpf_prog *prog, struct xdp_buff *xdp) +bool mlx5e_xdp_handle(struct mlx5e_rq *rq, + struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf) { + struct 
xdp_buff *xdp = &mxbuf->xdp; u32 act; int err; @@ -168,7 +198,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, case XDP_PASS: return false; case XDP_TX: - if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp))) + if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, xdp))) goto xdp_abort; __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */ return true; @@ -180,7 +210,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); __set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags); if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL) - mlx5e_page_dma_unmap(rq, page); + mlx5e_page_dma_unmap(rq, virt_to_page(xdp->data)); rq->stats->xdp_redirect++; return true; default: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index bc2d9034af5b..10bcfa6f88c1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -44,10 +44,16 @@ (MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \ sizeof(struct mlx5_wqe_inline_seg)) +struct mlx5e_xdp_buff { + struct xdp_buff xdp; + struct mlx5_cqe64 *cqe; + struct mlx5e_rq *rq; +}; + struct mlx5e_xsk_param; int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk); -bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, - struct bpf_prog *prog, struct xdp_buff *xdp); +bool mlx5e_xdp_handle(struct mlx5e_rq *rq, + struct bpf_prog *prog, struct mlx5e_xdp_buff *mlctx); void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq); bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq); void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq); @@ -56,6 +62,8 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq); int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, u32 flags); +extern const struct xdp_metadata_ops mlx5e_xdp_metadata_ops; + INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, struct skb_shared_info *sinfo, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c index c91b54d9ff27..fab787600459 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c @@ -8,6 +8,14 @@ /* RX data path */ +static struct mlx5e_xdp_buff *xsk_buff_to_mxbuf(struct xdp_buff *xdp) +{ + /* mlx5e_xdp_buff shares its layout with xdp_buff_xsk + * and private mlx5e_xdp_buff fields fall into xdp_buff_xsk->cb + */ + return (struct mlx5e_xdp_buff *)xdp; +} + int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) { struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix); @@ -22,6 +30,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) goto err; BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk)); + XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff); batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units, rq->mpwqe.pages_per_wqe); @@ -43,25 +52,30 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_ALIGNED)) { for (i = 0; i < batch; i++) { + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk); dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk); umr_wqe->inline_mtts[i] = (struct mlx5_mtt) { .ptag = cpu_to_be64(addr | MLX5_EN_WR), }; + mxbuf->rq = rq; } } else if (unlikely(rq->mpwqe.umr_mode == 
MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) { for (i = 0; i < batch; i++) { + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk); dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk); umr_wqe->inline_ksms[i] = (struct mlx5_ksm) { .key = rq->mkey_be, .va = cpu_to_be64(addr), }; + mxbuf->rq = rq; } } else if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_TRIPLE)) { u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2); for (i = 0; i < batch; i++) { + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk); dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk); umr_wqe->inline_ksms[i << 2] = (struct mlx5_ksm) { @@ -80,6 +94,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) .key = rq->mkey_be, .va = cpu_to_be64(rq->wqe_overflow.addr), }; + mxbuf->rq = rq; } } else { __be32 pad_size = cpu_to_be32((1 << rq->mpwqe.page_shift) - @@ -87,6 +102,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) __be32 frame_size = cpu_to_be32(rq->xsk_pool->chunk_size); for (i = 0; i < batch; i++) { + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk); dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk); umr_wqe->inline_klms[i << 1] = (struct mlx5_klm) { @@ -99,6 +115,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) .va = cpu_to_be64(rq->wqe_overflow.addr), .bcount = pad_size, }; + mxbuf->rq = rq; } } @@ -229,11 +246,12 @@ static struct sk_buff *mlx5e_xsk_construct_skb(struct mlx5e_rq *rq, struct xdp_b struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, u32 page_idx) { - struct xdp_buff *xdp = wi->alloc_units[page_idx].xsk; + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[page_idx].xsk); struct bpf_prog *prog; /* Check packet size. Note LRO doesn't use linear SKB */ @@ -249,9 +267,11 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, */ WARN_ON_ONCE(head_offset); - xsk_buff_set_size(xdp, cqe_bcnt); - xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool); - net_prefetch(xdp->data); + /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */ + mxbuf->cqe = cqe; + xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt); + xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool); + net_prefetch(mxbuf->xdp.data); /* Possible flows: * - XDP_REDIRECT to XSKMAP: @@ -269,7 +289,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, */ prog = rcu_dereference(rq->xdp_prog); - if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) { + if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) { if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))) __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ return NULL; /* page/packet was consumed by XDP */ @@ -278,14 +298,15 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, /* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the * frame. On SKB allocation failure, NULL is returned. 
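xsk_buff_to_mxbuf() above can be a plain cast rather than a container_of() because struct mlx5e_xdp_buff keeps the struct xdp_buff as its first member, and XSK_CHECK_PRIV_TYPE() asserts at build time that the whole wrapper fits in the cb[] scratch area that xdp_buff_xsk reserves for drivers. Restated compactly (types as in the hunks above; the helper name here is illustrative):

struct mlx5e_xdp_buff {
	struct xdp_buff xdp;	/* must remain the first member */
	struct mlx5_cqe64 *cqe;	/* per-packet: filled when the CQE is handled */
	struct mlx5e_rq *rq;	/* per-queue: filled once at buffer allocation */
};

static inline struct mlx5e_xdp_buff *to_mxbuf(struct xdp_buff *xdp)
{
	/* valid because offsetof(struct mlx5e_xdp_buff, xdp) == 0 */
	return (struct mlx5e_xdp_buff *)xdp;
}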
*/ - return mlx5e_xsk_construct_skb(rq, xdp); + return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp); } struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, + struct mlx5_cqe64 *cqe, u32 cqe_bcnt) { - struct xdp_buff *xdp = wi->au->xsk; + struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->au->xsk); struct bpf_prog *prog; /* wi->offset is not used in this function, because xdp->data and the @@ -295,17 +316,19 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq, */ WARN_ON_ONCE(wi->offset); - xsk_buff_set_size(xdp, cqe_bcnt); - xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool); - net_prefetch(xdp->data); + /* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */ + mxbuf->cqe = cqe; + xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt); + xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool); + net_prefetch(mxbuf->xdp.data); prog = rcu_dereference(rq->xdp_prog); - if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) + if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) return NULL; /* page/packet was consumed by XDP */ /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse * will be handled by mlx5e_free_rx_wqe. * On SKB allocation failure, NULL is returned. */ - return mlx5e_xsk_construct_skb(rq, xdp); + return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h index 087c943bd8e9..cefc0ef6105d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h @@ -13,11 +13,13 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk); int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk); struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, u32 page_idx); struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, + struct mlx5_cqe64 *cqe, u32 cqe_bcnt); #endif /* __MLX5_EN_XSK_RX_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c index ff03c43833bb..81a567e17264 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c @@ -7,6 +7,18 @@ #include "en/health.h" #include <net/xdp_sock_drv.h> +static int mlx5e_legacy_rq_validate_xsk(struct mlx5_core_dev *mdev, + struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk) +{ + if (!mlx5e_rx_is_linear_skb(mdev, params, xsk)) { + mlx5_core_err(mdev, "Legacy RQ linear mode for XSK can't be activated with current params\n"); + return -EINVAL; + } + + return 0; +} + /* The limitation of 2048 can be altered, but shouldn't go beyond the minimal * stride size of striding RQ. */ @@ -17,8 +29,11 @@ bool mlx5e_validate_xsk_param(struct mlx5e_params *params, struct mlx5_core_dev *mdev) { /* AF_XDP doesn't support frames larger than PAGE_SIZE. */ - if (xsk->chunk_size > PAGE_SIZE || xsk->chunk_size < MLX5E_MIN_XSK_CHUNK_SIZE) + if (xsk->chunk_size > PAGE_SIZE || xsk->chunk_size < MLX5E_MIN_XSK_CHUNK_SIZE) { + mlx5_core_err(mdev, "XSK chunk size %u out of bounds [%u, %lu]\n", xsk->chunk_size, + MLX5E_MIN_XSK_CHUNK_SIZE, PAGE_SIZE); return false; + } /* frag_sz is different for regular and XSK RQs, so ensure that linear * SKB mode is possible. 
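For completeness, the xmo_rx_timestamp/xmo_rx_hash hooks registered through mlx5e_xdp_metadata_ops earlier in this diff are consumed from BPF via metadata kfuncs. A sketch of the program side, assuming the kfunc signatures as merged in this cycle (such programs are loaded device-bound so the kfuncs resolve against the mlx5 netdev):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
				    __u32 *hash) __ksym;

SEC("xdp")
int rx_hints(struct xdp_md *ctx)
{
	__u64 ts = 0;
	__u32 hash = 0;

	/* both return 0 on success, -EOPNOTSUPP when the hint is unavailable */
	bpf_xdp_metadata_rx_timestamp(ctx, &ts);	/* CQE timestamp in ns */
	bpf_xdp_metadata_rx_hash(ctx, &hash);		/* CQE RSS hash result */

	/* e.g. sample or steer packets based on ts/hash here */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";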
@@ -27,7 +42,7 @@ bool mlx5e_validate_xsk_param(struct mlx5e_params *params, case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: return !mlx5e_mpwrq_validate_xsk(mdev, params, xsk); default: /* MLX5_WQ_TYPE_CYCLIC */ - return mlx5e_rx_is_linear_skb(mdev, params, xsk); + return !mlx5e_legacy_rq_validate_xsk(mdev, params, xsk); } } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h index 07187028f0d3..c964644ee866 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h @@ -124,7 +124,7 @@ static inline bool mlx5e_accel_tx_begin(struct net_device *dev, mlx5e_udp_gso_handle_tx_skb(skb); #ifdef CONFIG_MLX5_EN_TLS - /* May send SKBs and WQEs. */ + /* May send WQEs. */ if (mlx5e_ktls_skb_offloaded(skb)) if (unlikely(!mlx5e_ktls_handle_tx_skb(dev, sq, skb, &state->tls))) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c index d7c020f72401..88a5aed9d678 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c @@ -365,7 +365,7 @@ void mlx5e_accel_fs_tcp_destroy(struct mlx5e_flow_steering *fs) for (i = 0; i < ACCEL_FS_TCP_NUM_TYPES; i++) accel_fs_tcp_destroy_table(fs, i); - kvfree(accel_tcp); + kfree(accel_tcp); mlx5e_fs_set_accel_tcp(fs, NULL); } @@ -377,7 +377,7 @@ int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs) if (!MLX5_CAP_FLOWTABLE_NIC_RX(mlx5e_fs_get_mdev(fs), ft_field_support.outer_ip_version)) return -EOPNOTSUPP; - accel_tcp = kvzalloc(sizeof(*accel_tcp), GFP_KERNEL); + accel_tcp = kzalloc(sizeof(*accel_tcp), GFP_KERNEL); if (!accel_tcp) return -ENOMEM; mlx5e_fs_set_accel_tcp(fs, accel_tcp); @@ -397,7 +397,7 @@ int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs) err_destroy_tables: while (--i >= 0) accel_fs_tcp_destroy_table(fs, i); - kvfree(accel_tcp); + kfree(accel_tcp); mlx5e_fs_set_accel_tcp(fs, NULL); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index bb9023957f74..7b0d3de0ec6c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -158,95 +158,103 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, attrs->family = x->props.family; attrs->type = x->xso.type; attrs->reqid = x->props.reqid; + attrs->upspec.dport = ntohs(x->sel.dport); + attrs->upspec.dport_mask = ntohs(x->sel.dport_mask); + attrs->upspec.sport = ntohs(x->sel.sport); + attrs->upspec.sport_mask = ntohs(x->sel.sport_mask); + attrs->upspec.proto = x->sel.proto; mlx5e_ipsec_init_limits(sa_entry, attrs); } -static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x) +static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev, + struct xfrm_state *x, + struct netlink_ext_ack *extack) { - struct net_device *netdev = x->xso.real_dev; - struct mlx5e_priv *priv; - - priv = netdev_priv(netdev); - if (x->props.aalgo != SADB_AALG_NONE) { - netdev_info(netdev, "Cannot offload authenticated xfrm states\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload authenticated xfrm states"); return -EINVAL; } if (x->props.ealgo != SADB_X_EALG_AES_GCM_ICV16) { - netdev_info(netdev, "Only AES-GCM-ICV16 xfrm state may be offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only AES-GCM-ICV16 xfrm state may be 
offloaded"); return -EINVAL; } if (x->props.calgo != SADB_X_CALG_NONE) { - netdev_info(netdev, "Cannot offload compressed xfrm states\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload compressed xfrm states"); return -EINVAL; } if (x->props.flags & XFRM_STATE_ESN && - !(mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_ESN)) { - netdev_info(netdev, "Cannot offload ESN xfrm states\n"); + !(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ESN)) { + NL_SET_ERR_MSG_MOD(extack, "Cannot offload ESN xfrm states"); return -EINVAL; } if (x->props.family != AF_INET && x->props.family != AF_INET6) { - netdev_info(netdev, "Only IPv4/6 xfrm states may be offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only IPv4/6 xfrm states may be offloaded"); return -EINVAL; } if (x->id.proto != IPPROTO_ESP) { - netdev_info(netdev, "Only ESP xfrm state may be offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only ESP xfrm state may be offloaded"); return -EINVAL; } if (x->encap) { - netdev_info(netdev, "Encapsulated xfrm state may not be offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Encapsulated xfrm state may not be offloaded"); return -EINVAL; } if (!x->aead) { - netdev_info(netdev, "Cannot offload xfrm states without aead\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without aead"); return -EINVAL; } if (x->aead->alg_icv_len != 128) { - netdev_info(netdev, "Cannot offload xfrm states with AEAD ICV length other than 128bit\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD ICV length other than 128bit"); return -EINVAL; } if ((x->aead->alg_key_len != 128 + 32) && (x->aead->alg_key_len != 256 + 32)) { - netdev_info(netdev, "Cannot offload xfrm states with AEAD key length other than 128/256 bit\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with AEAD key length other than 128/256 bit"); return -EINVAL; } if (x->tfcpad) { - netdev_info(netdev, "Cannot offload xfrm states with tfc padding\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with tfc padding"); return -EINVAL; } if (!x->geniv) { - netdev_info(netdev, "Cannot offload xfrm states without geniv\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without geniv"); return -EINVAL; } if (strcmp(x->geniv, "seqiv")) { - netdev_info(netdev, "Cannot offload xfrm states with geniv other than seqiv\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states with geniv other than seqiv"); return -EINVAL; } + + if (x->sel.proto != IPPROTO_IP && + (x->sel.proto != IPPROTO_UDP || x->xso.dir != XFRM_DEV_OFFLOAD_OUT)) { + NL_SET_ERR_MSG_MOD(extack, "Device does not support upper protocol other than UDP, and only Tx direction"); + return -EINVAL; + } + switch (x->xso.type) { case XFRM_DEV_OFFLOAD_CRYPTO: - if (!(mlx5_ipsec_device_caps(priv->mdev) & - MLX5_IPSEC_CAP_CRYPTO)) { - netdev_info(netdev, "Crypto offload is not supported\n"); + if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_CRYPTO)) { + NL_SET_ERR_MSG_MOD(extack, "Crypto offload is not supported"); return -EINVAL; } if (x->props.mode != XFRM_MODE_TRANSPORT && x->props.mode != XFRM_MODE_TUNNEL) { - netdev_info(netdev, "Only transport and tunnel xfrm states may be offloaded\n"); + NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm states may be offloaded"); return -EINVAL; } break; case XFRM_DEV_OFFLOAD_PACKET: - if (!(mlx5_ipsec_device_caps(priv->mdev) & + if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD)) { - netdev_info(netdev, "Packet offload is not supported\n"); + NL_SET_ERR_MSG_MOD(extack, "Packet offload is not 
supported"); return -EINVAL; } if (x->props.mode != XFRM_MODE_TRANSPORT) { - netdev_info(netdev, "Only transport xfrm states may be offloaded in packet mode\n"); + NL_SET_ERR_MSG_MOD(extack, "Only transport xfrm states may be offloaded in packet mode"); return -EINVAL; } @@ -254,35 +262,30 @@ static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x) x->replay_esn->replay_window != 64 && x->replay_esn->replay_window != 128 && x->replay_esn->replay_window != 256) { - netdev_info(netdev, - "Unsupported replay window size %u\n", - x->replay_esn->replay_window); + NL_SET_ERR_MSG_MOD(extack, "Unsupported replay window size"); return -EINVAL; } if (!x->props.reqid) { - netdev_info(netdev, "Cannot offload without reqid\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload without reqid"); return -EINVAL; } if (x->lft.hard_byte_limit != XFRM_INF || x->lft.soft_byte_limit != XFRM_INF) { - netdev_info(netdev, - "Device doesn't support limits in bytes\n"); + NL_SET_ERR_MSG_MOD(extack, "Device doesn't support limits in bytes"); return -EINVAL; } if (x->lft.soft_packet_limit >= x->lft.hard_packet_limit && x->lft.hard_packet_limit != XFRM_INF) { /* XFRM stack doesn't prevent such configuration :(. */ - netdev_info(netdev, - "Hard packet limit must be greater than soft one\n"); + NL_SET_ERR_MSG_MOD(extack, "Hard packet limit must be greater than soft one"); return -EINVAL; } break; default: - netdev_info(netdev, "Unsupported xfrm offload type %d\n", - x->xso.type); + NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type"); return -EINVAL; } return 0; @@ -298,7 +301,8 @@ static void _update_xfrm_state(struct work_struct *work) mlx5_accel_esp_modify_xfrm(sa_entry, &modify_work->attrs); } -static int mlx5e_xfrm_add_state(struct xfrm_state *x) +static int mlx5e_xfrm_add_state(struct xfrm_state *x, + struct netlink_ext_ack *extack) { struct mlx5e_ipsec_sa_entry *sa_entry = NULL; struct net_device *netdev = x->xso.real_dev; @@ -311,15 +315,13 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x) return -EOPNOTSUPP; ipsec = priv->ipsec; - err = mlx5e_xfrm_validate_state(x); + err = mlx5e_xfrm_validate_state(priv->mdev, x, extack); if (err) return err; sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL); - if (!sa_entry) { - err = -ENOMEM; - goto out; - } + if (!sa_entry) + return -ENOMEM; sa_entry->x = x; sa_entry->ipsec = ipsec; @@ -360,7 +362,7 @@ err_hw_ctx: mlx5_ipsec_free_sa_ctx(sa_entry); err_xfrm: kfree(sa_entry); -out: + NL_SET_ERR_MSG_MOD(extack, "Device failed to offload this policy"); return err; } @@ -497,34 +499,39 @@ static void mlx5e_xfrm_update_curlft(struct xfrm_state *x) mlx5e_ipsec_aso_update_curlft(sa_entry, &x->curlft.packets); } -static int mlx5e_xfrm_validate_policy(struct xfrm_policy *x) +static int mlx5e_xfrm_validate_policy(struct xfrm_policy *x, + struct netlink_ext_ack *extack) { - struct net_device *netdev = x->xdo.real_dev; - if (x->type != XFRM_POLICY_TYPE_MAIN) { - netdev_info(netdev, "Cannot offload non-main policy types\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload non-main policy types"); return -EINVAL; } /* Please pay attention that we support only one template */ if (x->xfrm_nr > 1) { - netdev_info(netdev, "Cannot offload more than one template\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload more than one template"); return -EINVAL; } if (x->xdo.dir != XFRM_DEV_OFFLOAD_IN && x->xdo.dir != XFRM_DEV_OFFLOAD_OUT) { - netdev_info(netdev, "Cannot offload forward policy\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload forward policy"); return -EINVAL; } if 
(!x->xfrm_vec[0].reqid) { - netdev_info(netdev, "Cannot offload policy without reqid\n"); + NL_SET_ERR_MSG_MOD(extack, "Cannot offload policy without reqid"); return -EINVAL; } if (x->xdo.type != XFRM_DEV_OFFLOAD_PACKET) { - netdev_info(netdev, "Unsupported xfrm offload type\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type"); + return -EINVAL; + } + + if (x->selector.proto != IPPROTO_IP && + (x->selector.proto != IPPROTO_UDP || x->xdo.dir != XFRM_DEV_OFFLOAD_OUT)) { + NL_SET_ERR_MSG_MOD(extack, "Device does not support upper protocol other than UDP, and only Tx direction"); return -EINVAL; } @@ -548,9 +555,15 @@ mlx5e_ipsec_build_accel_pol_attrs(struct mlx5e_ipsec_pol_entry *pol_entry, attrs->action = x->action; attrs->type = XFRM_DEV_OFFLOAD_PACKET; attrs->reqid = x->xfrm_vec[0].reqid; + attrs->upspec.dport = ntohs(sel->dport); + attrs->upspec.dport_mask = ntohs(sel->dport_mask); + attrs->upspec.sport = ntohs(sel->sport); + attrs->upspec.sport_mask = ntohs(sel->sport_mask); + attrs->upspec.proto = sel->proto; } -static int mlx5e_xfrm_add_policy(struct xfrm_policy *x) +static int mlx5e_xfrm_add_policy(struct xfrm_policy *x, + struct netlink_ext_ack *extack) { struct net_device *netdev = x->xdo.real_dev; struct mlx5e_ipsec_pol_entry *pol_entry; @@ -558,10 +571,12 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x) int err; priv = netdev_priv(netdev); - if (!priv->ipsec) + if (!priv->ipsec) { + NL_SET_ERR_MSG_MOD(extack, "Device doesn't support IPsec packet offload"); return -EOPNOTSUPP; + } - err = mlx5e_xfrm_validate_policy(x); + err = mlx5e_xfrm_validate_policy(x, extack); if (err) return err; @@ -582,6 +597,7 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x) err_fs: kfree(pol_entry); + NL_SET_ERR_MSG_MOD(extack, "Device failed to offload this policy"); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 8bed9c361075..12f044330639 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -52,6 +52,14 @@ struct aes_gcm_keymat { u32 aes_key[256 / 32]; }; +struct upspec { + u16 dport; + u16 dport_mask; + u16 sport; + u16 sport_mask; + u8 proto; +}; + struct mlx5_accel_esp_xfrm_attrs { u32 esn; u32 spi; @@ -68,6 +76,7 @@ struct mlx5_accel_esp_xfrm_attrs { __be32 a6[4]; } daddr; + struct upspec upspec; u8 dir : 2; u8 esn_overlap : 1; u8 esn_trigger : 1; @@ -84,6 +93,7 @@ enum mlx5_ipsec_cap { MLX5_IPSEC_CAP_CRYPTO = 1 << 0, MLX5_IPSEC_CAP_ESN = 1 << 1, MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2, + MLX5_IPSEC_CAP_ROCE = 1 << 3, }; struct mlx5e_priv; @@ -119,7 +129,7 @@ struct mlx5e_ipsec_work { }; struct mlx5e_ipsec_aso { - u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)]; + u8 __aligned(64) ctx[MLX5_ST_SZ_BYTES(ipsec_aso)]; dma_addr_t dma_addr; struct mlx5_aso *aso; /* Protect ASO WQ access, as it is global to whole IPsec */ @@ -138,6 +148,7 @@ struct mlx5e_ipsec { struct mlx5e_ipsec_tx *tx; struct mlx5e_ipsec_aso *aso; struct notifier_block nb; + struct mlx5_ipsec_fs *roce; }; struct mlx5e_ipsec_esn_state { @@ -181,6 +192,7 @@ struct mlx5_accel_pol_xfrm_attrs { __be32 a6[4]; } daddr; + struct upspec upspec; u8 family; u8 action; u8 type : 2; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 9f19f4b59a70..9871ba1b25ff 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -6,6 +6,7 @@ #include "en/fs.h" #include "ipsec.h" #include "fs_core.h" +#include "lib/ipsec_fs_roce.h" #define NUM_IPSEC_FTE BIT(15) @@ -166,7 +167,8 @@ out: return err; } -static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) +static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, + struct mlx5e_ipsec_rx *rx, u32 family) { mlx5_del_flow_rules(rx->pol.rule); mlx5_destroy_flow_group(rx->pol.group); @@ -179,6 +181,8 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) mlx5_del_flow_rules(rx->status.rule); mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr); mlx5_destroy_flow_table(rx->ft.status); + + mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family); } static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, @@ -186,18 +190,35 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, { struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false); struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); + struct mlx5_flow_destination default_dest; struct mlx5_flow_destination dest[2]; struct mlx5_flow_table *ft; int err; + default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family)); + err = mlx5_ipsec_fs_roce_rx_create(mdev, ipsec->roce, ns, &default_dest, + family, MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL, + MLX5E_NIC_PRIO); + if (err) + return err; + ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL, MLX5E_NIC_PRIO, 1); - if (IS_ERR(ft)) - return PTR_ERR(ft); + if (IS_ERR(ft)) { + err = PTR_ERR(ft); + goto err_fs_ft_status; + } rx->ft.status = ft; - dest[0] = mlx5_ttc_get_default_dest(ttc, family2tt(family)); + ft = mlx5_ipsec_fs_roce_ft_get(ipsec->roce, family); + if (ft) { + dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; + dest[0].ft = ft; + } else { + dest[0] = default_dest; + } + dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER; dest[1].counter_id = mlx5_fc_id(rx->fc->cnt); err = ipsec_status_rule(mdev, rx, dest); @@ -245,6 +266,8 @@ err_fs_ft: mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr); err_add: mlx5_destroy_flow_table(rx->ft.status); +err_fs_ft_status: + mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family); return err; } @@ -304,14 +327,15 @@ static void rx_ft_put(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, mlx5_ttc_fwd_default_dest(ttc, family2tt(family)); /* remove FT */ - rx_destroy(mdev, rx); + rx_destroy(mdev, ipsec, rx, family); out: mutex_unlock(&rx->ft.mutex); } /* IPsec TX flow steering */ -static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx) +static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx, + struct mlx5_ipsec_fs *roce) { struct mlx5_flow_destination dest = {}; struct mlx5_flow_table *ft; @@ -334,8 +358,15 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx) err = ipsec_miss_create(mdev, tx->ft.pol, &tx->pol, &dest); if (err) goto err_pol_miss; + + err = mlx5_ipsec_fs_roce_tx_create(mdev, roce, tx->ft.pol); + if (err) + goto err_roce; return 0; +err_roce: + mlx5_del_flow_rules(tx->pol.rule); + mlx5_destroy_flow_group(tx->pol.group); err_pol_miss: mlx5_destroy_flow_table(tx->ft.pol); err_pol_ft: @@ -353,9 +384,10 @@ static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5_core_dev *mdev, if (tx->ft.refcnt) goto skip; - err = tx_create(mdev, tx); + err = tx_create(mdev, tx, ipsec->roce); if (err) goto out; + skip: tx->ft.refcnt++; out: @@ -374,6 +406,7 @@ static void 
tx_ft_put(struct mlx5e_ipsec *ipsec) if (tx->ft.refcnt) goto out; + mlx5_ipsec_fs_roce_tx_destroy(ipsec->roce); mlx5_del_flow_rules(tx->pol.rule); mlx5_destroy_flow_group(tx->pol.group); mlx5_destroy_flow_table(tx->ft.pol); @@ -467,6 +500,27 @@ static void setup_fte_reg_c0(struct mlx5_flow_spec *spec, u32 reqid) misc_parameters_2.metadata_reg_c_0, reqid); } +static void setup_fte_upper_proto_match(struct mlx5_flow_spec *spec, struct upspec *upspec) +{ + if (upspec->proto != IPPROTO_UDP) + return; + + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, spec->match_criteria, ip_protocol); + MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, ip_protocol, upspec->proto); + if (upspec->dport) { + MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_dport, + upspec->dport_mask); + MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_dport, upspec->dport); + } + + if (upspec->sport) { + MLX5_SET(fte_match_set_lyr_2_4, spec->match_criteria, udp_sport, + upspec->sport_mask); + MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, udp_sport, upspec->sport); + } +} + static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir, struct mlx5_flow_act *flow_act) { @@ -654,6 +708,7 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6); setup_fte_no_frags(spec); + setup_fte_upper_proto_match(spec, &attrs->upspec); switch (attrs->type) { case XFRM_DEV_OFFLOAD_CRYPTO: @@ -728,6 +783,7 @@ static int tx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry) setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6); setup_fte_no_frags(spec); + setup_fte_upper_proto_match(spec, &attrs->upspec); err = setup_modify_header(mdev, attrs->reqid, XFRM_DEV_OFFLOAD_OUT, &flow_act); @@ -1008,6 +1064,9 @@ void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec) if (!ipsec->tx) return; + if (mlx5_ipsec_device_caps(ipsec->mdev) & MLX5_IPSEC_CAP_ROCE) + mlx5_ipsec_fs_roce_cleanup(ipsec->roce); + ipsec_fs_destroy_counters(ipsec); mutex_destroy(&ipsec->tx->ft.mutex); WARN_ON(ipsec->tx->ft.refcnt); @@ -1024,6 +1083,7 @@ void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec) int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec) { + struct mlx5_core_dev *mdev = ipsec->mdev; struct mlx5_flow_namespace *ns; int err = -ENOMEM; @@ -1053,6 +1113,9 @@ int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec) mutex_init(&ipsec->rx_ipv6->ft.mutex); ipsec->tx->ns = ns; + if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ROCE) + ipsec->roce = mlx5_ipsec_fs_roce_init(mdev); + return 0; err_counters: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 2461462b7b99..5fa7a4c40429 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -4,7 +4,7 @@ #include "mlx5_core.h" #include "en.h" #include "ipsec.h" -#include "lib/mlx5.h" +#include "lib/crypto.h" enum { MLX5_IPSEC_ASO_REMOVE_FLOW_PKT_CNT_OFFSET, @@ -42,6 +42,11 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev) MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap)) caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD; + if (mlx5_get_roce_state(mdev) && + MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_RX_2_NIC_RX_RDMA && + MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_TX_RDMA_2_NIC_TX) + caps |= MLX5_IPSEC_CAP_ROCE; + if (!caps) return 0; @@ -92,7 +97,6 @@ 
static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn, MLX5_SET(ipsec_aso, aso_ctx, remove_flow_pkt_cnt, lower_32_bits(attrs->hard_packet_limit)); MLX5_SET(ipsec_aso, aso_ctx, hard_lft_arm, 1); - MLX5_SET(ipsec_aso, aso_ctx, remove_flow_enable, 1); } if (attrs->soft_packet_limit != XFRM_INF) { @@ -329,8 +333,7 @@ static void mlx5e_ipsec_handle_event(struct work_struct *_work) if (attrs->soft_packet_limit != XFRM_INF) if (!MLX5_GET(ipsec_aso, aso->ctx, soft_lft_arm) || - !MLX5_GET(ipsec_aso, aso->ctx, hard_lft_arm) || - !MLX5_GET(ipsec_aso, aso->ctx, remove_flow_enable)) + !MLX5_GET(ipsec_aso, aso->ctx, hard_lft_arm)) xfrm_state_check_expire(sa_entry->x); unlock: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c index da2184c94203..cf704f106b7c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c @@ -1,18 +1,19 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB // Copyright (c) 2019 Mellanox Technologies. +#include <linux/debugfs.h> #include "en.h" #include "lib/mlx5.h" +#include "lib/crypto.h" #include "en_accel/ktls.h" #include "en_accel/ktls_utils.h" #include "en_accel/fs_tcp.h" -int mlx5_ktls_create_key(struct mlx5_core_dev *mdev, - struct tls_crypto_info *crypto_info, - u32 *p_key_id) +struct mlx5_crypto_dek *mlx5_ktls_create_key(struct mlx5_crypto_dek_pool *dek_pool, + struct tls_crypto_info *crypto_info) { + const void *key; u32 sz_bytes; - void *key; switch (crypto_info->cipher_type) { case TLS_CIPHER_AES_GCM_128: { @@ -32,17 +33,16 @@ int mlx5_ktls_create_key(struct mlx5_core_dev *mdev, break; } default: - return -EINVAL; + return ERR_PTR(-EINVAL); } - return mlx5_create_encryption_key(mdev, key, sz_bytes, - MLX5_ACCEL_OBJ_TLS_KEY, - p_key_id); + return mlx5_crypto_dek_create(dek_pool, key, sz_bytes); } -void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id) +void mlx5_ktls_destroy_key(struct mlx5_crypto_dek_pool *dek_pool, + struct mlx5_crypto_dek *dek) { - mlx5_destroy_encryption_key(mdev, key_id); + mlx5_crypto_dek_destroy(dek_pool, dek); } static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk, @@ -177,8 +177,18 @@ void mlx5e_ktls_cleanup_rx(struct mlx5e_priv *priv) destroy_workqueue(priv->tls->rx_wq); } +static void mlx5e_tls_debugfs_init(struct mlx5e_tls *tls, + struct dentry *dfs_root) +{ + if (IS_ERR_OR_NULL(dfs_root)) + return; + + tls->debugfs.dfs = debugfs_create_dir("tls", dfs_root); +} + int mlx5e_ktls_init(struct mlx5e_priv *priv) { + struct mlx5_crypto_dek_pool *dek_pool; struct mlx5e_tls *tls; if (!mlx5e_is_ktls_device(priv->mdev)) @@ -187,13 +197,32 @@ int mlx5e_ktls_init(struct mlx5e_priv *priv) tls = kzalloc(sizeof(*tls), GFP_KERNEL); if (!tls) return -ENOMEM; + tls->mdev = priv->mdev; + dek_pool = mlx5_crypto_dek_pool_create(priv->mdev, MLX5_ACCEL_OBJ_TLS_KEY); + if (IS_ERR(dek_pool)) { + kfree(tls); + return PTR_ERR(dek_pool); + } + tls->dek_pool = dek_pool; priv->tls = tls; + + mlx5e_tls_debugfs_init(tls, priv->dfs_root); + return 0; } void mlx5e_ktls_cleanup(struct mlx5e_priv *priv) { + struct mlx5e_tls *tls = priv->tls; + + if (!mlx5e_is_ktls_device(priv->mdev)) + return; + + debugfs_remove_recursive(tls->debugfs.dfs); + tls->debugfs.dfs = NULL; + + mlx5_crypto_dek_pool_destroy(tls->dek_pool); kfree(priv->tls); priv->tls = NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h index 
1c35045e41fb..f11075e67658 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h @@ -4,15 +4,18 @@ #ifndef __MLX5E_KTLS_H__ #define __MLX5E_KTLS_H__ +#include <linux/debugfs.h> #include <linux/tls.h> #include <net/tls.h> #include "en.h" #ifdef CONFIG_MLX5_EN_TLS -int mlx5_ktls_create_key(struct mlx5_core_dev *mdev, - struct tls_crypto_info *crypto_info, - u32 *p_key_id); -void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id); +#include "lib/crypto.h" + +struct mlx5_crypto_dek *mlx5_ktls_create_key(struct mlx5_crypto_dek_pool *dek_pool, + struct tls_crypto_info *crypto_info); +void mlx5_ktls_destroy_key(struct mlx5_crypto_dek_pool *dek_pool, + struct mlx5_crypto_dek *dek); static inline bool mlx5e_is_ktls_device(struct mlx5_core_dev *mdev) { @@ -72,10 +75,18 @@ struct mlx5e_tls_sw_stats { atomic64_t rx_tls_del; }; +struct mlx5e_tls_debugfs { + struct dentry *dfs; + struct dentry *dfs_tx; +}; + struct mlx5e_tls { + struct mlx5_core_dev *mdev; struct mlx5e_tls_sw_stats sw_stats; struct workqueue_struct *rx_wq; struct mlx5e_tls_tx_pool *tx_pool; + struct mlx5_crypto_dek_pool *dek_pool; + struct mlx5e_tls_debugfs debugfs; }; int mlx5e_ktls_init(struct mlx5e_priv *priv); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c index 3e54834747ce..4be770443b0c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c @@ -50,7 +50,7 @@ struct mlx5e_ktls_offload_context_rx { struct mlx5e_tls_sw_stats *sw_stats; struct completion add_ctx; struct mlx5e_tir tir; - u32 key_id; + struct mlx5_crypto_dek *dek; u32 rxq; DECLARE_BITMAP(flags, MLX5E_NUM_PRIV_RX_FLAGS); @@ -148,7 +148,8 @@ post_static_params(struct mlx5e_icosq *sq, wqe = MLX5E_TLS_FETCH_SET_STATIC_PARAMS_WQE(sq, pi); mlx5e_ktls_build_static_params(wqe, sq->pc, sq->sqn, &priv_rx->crypto_info, mlx5e_tir_get_tirn(&priv_rx->tir), - priv_rx->key_id, priv_rx->resync.seq, false, + mlx5_crypto_dek_get_id(priv_rx->dek), + priv_rx->resync.seq, false, TLS_OFFLOAD_CTX_DIR_RX); wi = (struct mlx5e_icosq_wqe_info) { .wqe_type = MLX5E_ICOSQ_WQE_UMR_TLS, @@ -610,20 +611,22 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk, struct mlx5e_ktls_offload_context_rx *priv_rx; struct mlx5e_ktls_rx_resync_ctx *resync; struct tls_context *tls_ctx; - struct mlx5_core_dev *mdev; + struct mlx5_crypto_dek *dek; struct mlx5e_priv *priv; int rxq, err; tls_ctx = tls_get_ctx(sk); priv = netdev_priv(netdev); - mdev = priv->mdev; priv_rx = kzalloc(sizeof(*priv_rx), GFP_KERNEL); if (unlikely(!priv_rx)) return -ENOMEM; - err = mlx5_ktls_create_key(mdev, crypto_info, &priv_rx->key_id); - if (err) + dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info); + if (IS_ERR(dek)) { + err = PTR_ERR(dek); goto err_create_key; + } + priv_rx->dek = dek; INIT_LIST_HEAD(&priv_rx->list); spin_lock_init(&priv_rx->lock); @@ -673,7 +676,7 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk, err_post_wqes: mlx5e_tir_destroy(&priv_rx->tir); err_create_tir: - mlx5_ktls_destroy_key(mdev, priv_rx->key_id); + mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_rx->dek); err_create_key: kfree(priv_rx); return err; @@ -683,11 +686,9 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) { struct mlx5e_ktls_offload_context_rx *priv_rx; struct mlx5e_ktls_rx_resync_ctx *resync; - struct mlx5_core_dev *mdev; 
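Taken together, the ktls hunks swap direct mlx5_create_encryption_key()/mlx5_destroy_encryption_key() calls for a per-device DEK pool, and the u32 key id for an opaque handle tested with IS_ERR(). A condensed sketch of the resulting key lifecycle (calls as in the hunks above, surrounding context elided):

	struct mlx5_crypto_dek *dek;
	u32 key_id;

	dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
	if (IS_ERR(dek))
		return PTR_ERR(dek);

	/* the DEK id is what gets programmed into the static params WQE */
	key_id = mlx5_crypto_dek_get_id(dek);

	/* ... and on context teardown ... */
	mlx5_ktls_destroy_key(priv->tls->dek_pool, dek);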
struct mlx5e_priv *priv; priv = netdev_priv(netdev); - mdev = priv->mdev; priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx); set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags); @@ -707,7 +708,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) mlx5e_accel_fs_del_sk(priv_rx->rule.rule); mlx5e_tir_destroy(&priv_rx->tir); - mlx5_ktls_destroy_key(mdev, priv_rx->key_id); + mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_rx->dek); /* priv_rx should normally be freed here, but if there is an outstanding * GET_PSV, deallocation will be delayed until the CQE for GET_PSV is * processed. diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c index 78072bf93f3f..60b3e08a1028 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB // Copyright (c) 2019 Mellanox Technologies. +#include <linux/debugfs.h> #include "en_accel/ktls.h" #include "en_accel/ktls_txrx.h" #include "en_accel/ktls_utils.h" @@ -97,7 +98,7 @@ struct mlx5e_ktls_offload_context_tx { struct tls_offload_context_tx *tx_ctx; struct mlx5_core_dev *mdev; struct mlx5e_tls_sw_stats *sw_stats; - u32 key_id; + struct mlx5_crypto_dek *dek; u8 create_err : 1; }; @@ -456,6 +457,7 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk, struct mlx5e_ktls_offload_context_tx *priv_tx; struct mlx5e_tls_tx_pool *pool; struct tls_context *tls_ctx; + struct mlx5_crypto_dek *dek; struct mlx5e_priv *priv; int err; @@ -467,9 +469,12 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk, if (IS_ERR(priv_tx)) return PTR_ERR(priv_tx); - err = mlx5_ktls_create_key(pool->mdev, crypto_info, &priv_tx->key_id); - if (err) + dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info); + if (IS_ERR(dek)) { + err = PTR_ERR(dek); goto err_create_key; + } + priv_tx->dek = dek; priv_tx->expected_seq = start_offload_tcp_sn; switch (crypto_info->cipher_type) { @@ -511,7 +516,7 @@ void mlx5e_ktls_del_tx(struct net_device *netdev, struct tls_context *tls_ctx) pool = priv->tls->tx_pool; atomic64_inc(&priv_tx->sw_stats->tx_tls_del); - mlx5_ktls_destroy_key(priv_tx->mdev, priv_tx->key_id); + mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_tx->dek); pool_push(pool, priv_tx); } @@ -550,8 +555,9 @@ post_static_params(struct mlx5e_txqsq *sq, pi = mlx5e_txqsq_get_next_pi(sq, num_wqebbs); wqe = MLX5E_TLS_FETCH_SET_STATIC_PARAMS_WQE(sq, pi); mlx5e_ktls_build_static_params(wqe, sq->pc, sq->sqn, &priv_tx->crypto_info, - priv_tx->tisn, priv_tx->key_id, 0, fence, - TLS_OFFLOAD_CTX_DIR_TX); + priv_tx->tisn, + mlx5_crypto_dek_get_id(priv_tx->dek), + 0, fence, TLS_OFFLOAD_CTX_DIR_TX); tx_fill_wi(sq, pi, num_wqebbs, 0, NULL); sq->pc += num_wqebbs; } @@ -886,8 +892,22 @@ err_out: return false; } +static void mlx5e_tls_tx_debugfs_init(struct mlx5e_tls *tls, + struct dentry *dfs_root) +{ + if (IS_ERR_OR_NULL(dfs_root)) + return; + + tls->debugfs.dfs_tx = debugfs_create_dir("tx", dfs_root); + + debugfs_create_size_t("pool_size", 0400, tls->debugfs.dfs_tx, + &tls->tx_pool->size); +} + int mlx5e_ktls_init_tx(struct mlx5e_priv *priv) { + struct mlx5e_tls *tls = priv->tls; + if (!mlx5e_is_ktls_tx(priv->mdev)) return 0; @@ -895,6 +915,8 @@ int mlx5e_ktls_init_tx(struct mlx5e_priv *priv) if (!priv->tls->tx_pool) return -ENOMEM; + mlx5e_tls_tx_debugfs_init(tls, tls->debugfs.dfs); + return 0; } @@ -903,6 +925,9 @@ void 
mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv) if (!mlx5e_is_ktls_tx(priv->mdev)) return; + debugfs_remove_recursive(priv->tls->debugfs.dfs_tx); + priv->tls->debugfs.dfs_tx = NULL; + mlx5e_tls_tx_pool_cleanup(priv->tls->tx_pool); priv->tls->tx_pool = NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c index 7f6b940830b3..08d0929e8260 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c @@ -7,7 +7,7 @@ #include "en.h" #include "lib/aso.h" -#include "lib/mlx5.h" +#include "lib/crypto.h" #include "en_accel/macsec.h" #include "en_accel/macsec_fs.h" diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c index 68f19324db93..4c9a3210600c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c @@ -31,6 +31,7 @@ */ #include "en.h" +#include "lib/crypto.h" /* mlx5e global resources should be placed in this file. * Global resources are common to all the netdevices created on the same nic. @@ -104,6 +105,13 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev) INIT_LIST_HEAD(&res->td.tirs_list); mutex_init(&res->td.list_lock); + mdev->mlx5e_res.dek_priv = mlx5_crypto_dek_init(mdev); + if (IS_ERR(mdev->mlx5e_res.dek_priv)) { + mlx5_core_err(mdev, "crypto dek init failed, %ld\n", + PTR_ERR(mdev->mlx5e_res.dek_priv)); + mdev->mlx5e_res.dek_priv = NULL; + } + return 0; err_destroy_mkey: @@ -119,6 +127,8 @@ void mlx5e_destroy_mdev_resources(struct mlx5_core_dev *mdev) { struct mlx5e_hw_objs *res = &mdev->mlx5e_res.hw_objs; + mlx5_crypto_dek_cleanup(mdev->mlx5e_res.dek_priv); + mdev->mlx5e_res.dek_priv = NULL; mlx5_free_bfreg(mdev, &res->bfreg); mlx5_core_destroy_mkey(mdev, res->mkey); mlx5_core_dealloc_transport_domain(mdev, res->td.tdn); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c index 7cd36f4ac3ef..05796f8b1d7c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c @@ -30,6 +30,7 @@ * SOFTWARE. 
*/ +#include <linux/debugfs.h> #include <linux/list.h> #include <linux/ip.h> #include <linux/ipv6.h> @@ -67,6 +68,7 @@ struct mlx5e_flow_steering { struct mlx5e_fs_udp *udp; struct mlx5e_fs_any *any; struct mlx5e_ptp_fs *ptp_fs; + struct dentry *dfs_root; }; static int mlx5e_add_l2_flow_rule(struct mlx5e_flow_steering *fs, @@ -104,6 +106,11 @@ static inline int mlx5e_hash_l2(const u8 *addr) return addr[5]; } +struct dentry *mlx5e_fs_get_debugfs_root(struct mlx5e_flow_steering *fs) +{ + return fs->dfs_root; +} + static void mlx5e_add_l2_to_hash(struct hlist_head *hash, const u8 *addr) { struct mlx5e_l2_hash_node *hn; @@ -1429,9 +1436,19 @@ static int mlx5e_fs_ethtool_alloc(struct mlx5e_flow_steering *fs) static void mlx5e_fs_ethtool_free(struct mlx5e_flow_steering *fs) { } #endif +static void mlx5e_fs_debugfs_init(struct mlx5e_flow_steering *fs, + struct dentry *dfs_root) +{ + if (IS_ERR_OR_NULL(dfs_root)) + return; + + fs->dfs_root = debugfs_create_dir("fs", dfs_root); +} + struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile, struct mlx5_core_dev *mdev, - bool state_destroy) + bool state_destroy, + struct dentry *dfs_root) { struct mlx5e_flow_steering *fs; int err; @@ -1458,6 +1475,8 @@ struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile, if (err) goto err_free_tc; + mlx5e_fs_debugfs_init(fs, dfs_root); + return fs; err_free_tc: mlx5e_fs_tc_free(fs); @@ -1471,6 +1490,7 @@ err: void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs) { + debugfs_remove_recursive(fs->dfs_root); mlx5e_fs_ethtool_free(fs); mlx5e_fs_tc_free(fs); mlx5e_fs_vlan_free(fs); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 6c24f33a5ea5..53feb0529943 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -35,9 +35,11 @@ #include <net/vxlan.h> #include <net/geneve.h> #include <linux/bpf.h> +#include <linux/debugfs.h> #include <linux/if_bridge.h> #include <linux/filter.h> #include <net/page_pool.h> +#include <net/pkt_sched.h> #include <net/xdp_sock_drv.h> #include "eswitch.h" #include "en.h" @@ -179,17 +181,21 @@ static void mlx5e_disable_async_events(struct mlx5e_priv *priv) static int blocking_event(struct notifier_block *nb, unsigned long event, void *data) { struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, blocking_events_nb); + struct mlx5_devlink_trap_event_ctx *trap_event_ctx = data; int err; switch (event) { case MLX5_DRIVER_EVENT_TYPE_TRAP: - err = mlx5e_handle_trap_event(priv, data); + err = mlx5e_handle_trap_event(priv, trap_event_ctx->trap); + if (err) { + trap_event_ctx->err = err; + return NOTIFY_BAD; + } break; default: - netdev_warn(priv->netdev, "Sync event: Unknown event %ld\n", event); - err = -EINVAL; + return NOTIFY_DONE; } - return err; + return NOTIFY_OK; } static void mlx5e_enable_blocking_events(struct mlx5e_priv *priv) @@ -1441,6 +1447,7 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c, sq->mkey_be = c->mkey_be; sq->netdev = c->netdev; sq->mdev = c->mdev; + sq->channel = c; sq->priv = c->priv; sq->ch_ix = c->ix; sq->txq_ix = txq_ix; @@ -2453,8 +2460,6 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c) mlx5e_activate_xsk(c); else mlx5e_activate_rq(&c->rq); - - mlx5e_trigger_napi_icosq(c); } static void mlx5e_deactivate_channel(struct mlx5e_channel *c) @@ -2546,13 +2551,19 @@ err_free: return err; } -static void mlx5e_activate_channels(struct mlx5e_channels *chs) +static void 
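[Editor's note] The blocking_event() rewrite above is worth pausing on: notifier callbacks must return NOTIFY_* codes, not raw errnos, so the error now travels back through the mlx5_devlink_trap_event_ctx payload while NOTIFY_BAD stops the chain and unrecognized events fall through as NOTIFY_DONE. A generic sketch under assumed type names:

	#include <linux/errno.h>
	#include <linux/notifier.h>

	#define DEMO_EVENT_TRAP 1	/* assumed event id */

	struct demo_trap_event_ctx {
		void *trap;
		int err;	/* filled by the callback, read back by the notifier's caller */
	};

	static int demo_handle_trap(void *trap)
	{
		return trap ? 0 : -EINVAL;	/* trivial stand-in for real handling */
	}

	static int demo_blocking_event(struct notifier_block *nb, unsigned long event,
				       void *data)
	{
		struct demo_trap_event_ctx *ctx = data;
		int err;

		switch (event) {
		case DEMO_EVENT_TRAP:
			err = demo_handle_trap(ctx->trap);
			if (err) {
				ctx->err = err;		/* errno travels out-of-band */
				return NOTIFY_BAD;	/* veto: stop the chain */
			}
			return NOTIFY_OK;
		default:
			return NOTIFY_DONE;		/* not ours, don't veto */
		}
	}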
mlx5e_activate_channels(struct mlx5e_priv *priv, struct mlx5e_channels *chs) { int i; for (i = 0; i < chs->num; i++) mlx5e_activate_channel(chs->c[i]); + if (priv->htb) + mlx5e_qos_activate_queues(priv); + + for (i = 0; i < chs->num; i++) + mlx5e_trigger_napi_icosq(chs->c[i]); + if (chs->ptp) mlx5e_ptp_activate_channel(chs->ptp); } @@ -2859,9 +2870,7 @@ out: void mlx5e_activate_priv_channels(struct mlx5e_priv *priv) { mlx5e_build_txq_maps(priv); - mlx5e_activate_channels(&priv->channels); - if (priv->htb) - mlx5e_qos_activate_queues(priv); + mlx5e_activate_channels(priv, &priv->channels); mlx5e_xdp_tx_enable(priv); /* dev_watchdog() wants all TX queues to be started when the carrier is @@ -2969,32 +2978,37 @@ int mlx5e_safe_switch_params(struct mlx5e_priv *priv, mlx5e_fp_preactivate preactivate, void *context, bool reset) { - struct mlx5e_channels new_chs = {}; + struct mlx5e_channels *new_chs; int err; reset &= test_bit(MLX5E_STATE_OPENED, &priv->state); if (!reset) return mlx5e_switch_priv_params(priv, params, preactivate, context); - new_chs.params = *params; + new_chs = kzalloc(sizeof(*new_chs), GFP_KERNEL); + if (!new_chs) + return -ENOMEM; + new_chs->params = *params; - mlx5e_selq_prepare_params(&priv->selq, &new_chs.params); + mlx5e_selq_prepare_params(&priv->selq, &new_chs->params); - err = mlx5e_open_channels(priv, &new_chs); + err = mlx5e_open_channels(priv, new_chs); if (err) goto err_cancel_selq; - err = mlx5e_switch_priv_channels(priv, &new_chs, preactivate, context); + err = mlx5e_switch_priv_channels(priv, new_chs, preactivate, context); if (err) goto err_close; + kfree(new_chs); return 0; err_close: - mlx5e_close_channels(&new_chs); + mlx5e_close_channels(new_chs); err_cancel_selq: mlx5e_selq_cancel(&priv->selq); + kfree(new_chs); return err; } @@ -4727,6 +4741,13 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) if (old_prog) bpf_prog_put(old_prog); + if (reset) { + if (prog) + xdp_features_set_redirect_target(netdev, true); + else + xdp_features_clear_redirect_target(netdev); + } + if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset) goto unlock; @@ -5004,6 +5025,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev) SET_NETDEV_DEV(netdev, mdev->device); netdev->netdev_ops = &mlx5e_netdev_ops; + netdev->xdp_metadata_ops = &mlx5e_xdp_metadata_ops; mlx5e_dcbnl_build_netdev(netdev); @@ -5121,6 +5143,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev) netdev->features |= NETIF_F_HIGHDMA; netdev->features |= NETIF_F_HW_VLAN_STAG_FILTER; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY | + NETDEV_XDP_ACT_RX_SG; + netdev->priv_flags |= IFF_UNICAST_FLT; netif_set_tso_max_size(netdev, GSO_MAX_SIZE); @@ -5181,7 +5207,8 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev, mlx5e_timestamp_init(priv); fs = mlx5e_fs_init(priv->profile, mdev, - !test_bit(MLX5E_STATE_DESTROYING, &priv->state)); + !test_bit(MLX5E_STATE_DESTROYING, &priv->state), + priv->dfs_root); if (!fs) { err = -ENOMEM; mlx5_core_err(mdev, "FS initialization failed, %d\n", err); @@ -5822,7 +5849,8 @@ void mlx5e_destroy_netdev(struct mlx5e_priv *priv) static int mlx5e_resume(struct auxiliary_device *adev) { struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); - struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); + struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); + struct mlx5e_priv *priv = mlx5e_dev->priv; struct net_device *netdev = priv->netdev; struct mlx5_core_dev *mdev 
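[Editor's note] Two things happen in the en_main.c hunks above: mlx5e_safe_switch_params() moves its large mlx5e_channels scratch struct off the stack onto the heap, and the netdev starts advertising its XDP capabilities through the new xdp_features bitmap, toggling the redirect-target bit as programs attach and detach. A hedged sketch of the latter, with capabilities chosen purely for illustration:

	#include <linux/netdevice.h>
	#include <net/xdp.h>

	static void demo_set_xdp_caps(struct net_device *netdev, bool prog_attached)
	{
		/* Static capabilities, declared once when the netdev is built. */
		netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

		/* Redirect-target support is dynamic: only valid with a program. */
		if (prog_attached)
			xdp_features_set_redirect_target(netdev, false); /* no frag support in this sketch */
		else
			xdp_features_clear_redirect_target(netdev);
	}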
= edev->mdev; int err; @@ -5845,7 +5873,8 @@ static int mlx5e_resume(struct auxiliary_device *adev) static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state) { - struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); + struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); + struct mlx5e_priv *priv = mlx5e_dev->priv; struct net_device *netdev = priv->netdev; struct mlx5_core_dev *mdev = priv->mdev; @@ -5863,35 +5892,46 @@ static int mlx5e_probe(struct auxiliary_device *adev, struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); const struct mlx5e_profile *profile = &mlx5e_nic_profile; struct mlx5_core_dev *mdev = edev->mdev; + struct mlx5e_dev *mlx5e_dev; struct net_device *netdev; pm_message_t state = {}; struct mlx5e_priv *priv; int err; + mlx5e_dev = mlx5e_create_devlink(&adev->dev, mdev); + if (IS_ERR(mlx5e_dev)) + return PTR_ERR(mlx5e_dev); + auxiliary_set_drvdata(adev, mlx5e_dev); + + err = mlx5e_devlink_port_register(mlx5e_dev, mdev); + if (err) { + mlx5_core_err(mdev, "mlx5e_devlink_port_register failed, %d\n", err); + goto err_devlink_unregister; + } + netdev = mlx5e_create_netdev(mdev, profile); if (!netdev) { mlx5_core_err(mdev, "mlx5e_create_netdev failed\n"); - return -ENOMEM; + err = -ENOMEM; + goto err_devlink_port_unregister; } + SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port); mlx5e_build_nic_netdev(netdev); priv = netdev_priv(netdev); - auxiliary_set_drvdata(adev, priv); + mlx5e_dev->priv = priv; priv->profile = profile; priv->ppriv = NULL; - err = mlx5e_devlink_port_register(priv); - if (err) { - mlx5_core_err(mdev, "mlx5e_devlink_port_register failed, %d\n", err); - goto err_destroy_netdev; - } + priv->dfs_root = debugfs_create_dir("nic", + mlx5_debugfs_get_dev_root(priv->mdev)); err = profile->init(mdev, netdev); if (err) { mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err); - goto err_devlink_cleanup; + goto err_destroy_netdev; } err = mlx5e_resume(adev); @@ -5900,7 +5940,6 @@ static int mlx5e_probe(struct auxiliary_device *adev, goto err_profile_cleanup; } - SET_NETDEV_DEVLINK_PORT(netdev, mlx5e_devlink_get_dl_port(priv)); err = register_netdev(netdev); if (err) { mlx5_core_err(mdev, "register_netdev failed, %d\n", err); @@ -5908,7 +5947,7 @@ static int mlx5e_probe(struct auxiliary_device *adev, } mlx5e_dcbnl_init_app(priv); - mlx5_uplink_netdev_set(mdev, netdev); + mlx5_core_uplink_netdev_set(mdev, netdev); mlx5e_params_print_info(mdev, &priv->channels.params); return 0; @@ -5916,24 +5955,31 @@ err_resume: mlx5e_suspend(adev, state); err_profile_cleanup: profile->cleanup(priv); -err_devlink_cleanup: - mlx5e_devlink_port_unregister(priv); err_destroy_netdev: + debugfs_remove_recursive(priv->dfs_root); mlx5e_destroy_netdev(priv); +err_devlink_port_unregister: + mlx5e_devlink_port_unregister(mlx5e_dev); +err_devlink_unregister: + mlx5e_destroy_devlink(mlx5e_dev); return err; } static void mlx5e_remove(struct auxiliary_device *adev) { - struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); + struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); + struct mlx5e_priv *priv = mlx5e_dev->priv; pm_message_t state = {}; + mlx5_core_uplink_netdev_set(priv->mdev, NULL); mlx5e_dcbnl_delete_app(priv); unregister_netdev(priv->netdev); mlx5e_suspend(adev, state); priv->profile->cleanup(priv); - mlx5e_devlink_port_unregister(priv); + debugfs_remove_recursive(priv->dfs_root); mlx5e_destroy_netdev(priv); + mlx5e_devlink_port_unregister(mlx5e_dev); + mlx5e_destroy_devlink(mlx5e_dev); } static const struct auxiliary_device_id 
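[Editor's note] The probe rework reorders initialization — devlink instance first, then the devlink port, then the netdev — so that every error label can unwind strictly in reverse, and mlx5e_remove() tears down in the mirrored order. The idiom in isolation, with demo_* helpers assumed:

	struct demo_adev;

	/* Assumed helpers, each paired with a destructor. */
	int demo_devlink_create(struct demo_adev *adev);
	void demo_devlink_destroy(struct demo_adev *adev);
	int demo_port_register(struct demo_adev *adev);
	void demo_port_unregister(struct demo_adev *adev);
	int demo_netdev_setup(struct demo_adev *adev);

	static int demo_probe(struct demo_adev *adev)
	{
		int err;

		err = demo_devlink_create(adev);	/* step 1 */
		if (err)
			return err;

		err = demo_port_register(adev);		/* step 2 */
		if (err)
			goto err_devlink;

		err = demo_netdev_setup(adev);		/* step 3 */
		if (err)
			goto err_port;

		return 0;

	err_port:	/* labels unwind strictly in reverse creation order */
		demo_port_unregister(adev);
	err_devlink:
		demo_devlink_destroy(adev);
		return err;
	}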
mlx5e_id_table[] = { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 7d90e5b72854..9b9203443085 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -789,8 +789,10 @@ static int mlx5e_init_rep(struct mlx5_core_dev *mdev, { struct mlx5e_priv *priv = netdev_priv(netdev); - priv->fs = mlx5e_fs_init(priv->profile, mdev, - !test_bit(MLX5E_STATE_DESTROYING, &priv->state)); + priv->fs = + mlx5e_fs_init(priv->profile, mdev, + !test_bit(MLX5E_STATE_DESTROYING, &priv->state), + priv->dfs_root); if (!priv->fs) { netdev_err(priv->netdev, "FS allocation failed\n"); return -ENOMEM; @@ -808,7 +810,8 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv = netdev_priv(netdev); priv->fs = mlx5e_fs_init(priv->profile, mdev, - !test_bit(MLX5E_STATE_DESTROYING, &priv->state)); + !test_bit(MLX5E_STATE_DESTROYING, &priv->state), + priv->dfs_root); if (!priv->fs) { netdev_err(priv->netdev, "FS allocation failed\n"); return -ENOMEM; @@ -1004,8 +1007,23 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv) priv->rx_res = NULL; } +static void mlx5e_rep_mpesw_work(struct work_struct *work) +{ + struct mlx5_rep_uplink_priv *uplink_priv = + container_of(work, struct mlx5_rep_uplink_priv, + mpesw_work); + struct mlx5e_rep_priv *rpriv = + container_of(uplink_priv, struct mlx5e_rep_priv, + uplink_priv); + struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); + + rep_vport_rx_rule_destroy(priv); + mlx5e_create_rep_vport_rx_rule(priv); +} + static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv) { + struct mlx5e_rep_priv *rpriv = priv->ppriv; int err; mlx5e_create_q_counters(priv); @@ -1015,12 +1033,17 @@ static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv) mlx5e_tc_int_port_init_rep_rx(priv); + INIT_WORK(&rpriv->uplink_priv.mpesw_work, mlx5e_rep_mpesw_work); + out: return err; } static void mlx5e_cleanup_ul_rep_rx(struct mlx5e_priv *priv) { + struct mlx5e_rep_priv *rpriv = priv->ppriv; + + cancel_work_sync(&rpriv->uplink_priv.mpesw_work); mlx5e_tc_int_port_cleanup_rep_rx(priv); mlx5e_cleanup_rep_rx(priv); mlx5e_destroy_q_counters(priv); @@ -1129,6 +1152,19 @@ static int mlx5e_update_rep_rx(struct mlx5e_priv *priv) return 0; } +static int mlx5e_rep_event_mpesw(struct mlx5e_priv *priv) +{ + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *rep = rpriv->rep; + + if (rep->vport != MLX5_VPORT_UPLINK) + return NOTIFY_DONE; + + queue_work(priv->wq, &rpriv->uplink_priv.mpesw_work); + + return NOTIFY_OK; +} + static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event, void *data) { struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, events_nb); @@ -1150,6 +1186,8 @@ static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event if (event == MLX5_DEV_EVENT_PORT_AFFINITY) return mlx5e_rep_tc_event_port_affinity(priv); + else if (event == MLX5_DEV_EVENT_MULTIPORT_ESW) + return mlx5e_rep_event_mpesw(priv); return NOTIFY_DONE; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h index b4e691760da9..dcfad0bf0f45 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h @@ -100,6 +100,11 @@ struct mlx5_rep_uplink_priv { struct mlx5e_tc_int_port_priv *int_port_priv; struct mlx5e_flow_meters *flow_meters; + + /* tc action stats */ + struct mlx5e_tc_act_stats_handle 
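[Editor's note] In en_rep.c above, the MULTIPORT_ESW event arrives in notifier context, where rebuilding the vport RX rule would be unsafe, so the handler just queues work; cleanup calls cancel_work_sync() to guarantee no handler runs after teardown. The deferral skeleton, with demo_* names assumed:

	#include <linux/container_of.h>
	#include <linux/workqueue.h>

	struct demo_rep {
		struct work_struct mpesw_work;
	};

	static void demo_mpesw_work(struct work_struct *work)
	{
		struct demo_rep *rep = container_of(work, struct demo_rep, mpesw_work);

		/* Process context: sleeping / steering-table work is safe here. */
		(void)rep;
	}

	static void demo_rep_init(struct demo_rep *rep)
	{
		INIT_WORK(&rep->mpesw_work, demo_mpesw_work);
	}

	static int demo_rep_event(struct demo_rep *rep)
	{
		queue_work(system_wq, &rep->mpesw_work);	/* notifier context: defer */
		return 0;
	}

	static void demo_rep_cleanup(struct demo_rep *rep)
	{
		cancel_work_sync(&rep->mpesw_work);	/* nothing runs past this point */
	}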
*action_stats_handle; + + struct work_struct mpesw_work; }; struct mlx5e_rep_priv { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 3df455f6b168..3f7b63d6616b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -62,10 +62,12 @@ static struct sk_buff * mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, - u16 cqe_bcnt, u32 head_offset, u32 page_idx); + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, + u32 page_idx); static struct sk_buff * mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, - u16 cqe_bcnt, u32 head_offset, u32 page_idx); + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, + u32 page_idx); static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe); static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe); static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe); @@ -76,11 +78,6 @@ const struct mlx5e_rx_handlers mlx5e_rx_handlers_nic = { .handle_rx_cqe_mpwqe_shampo = mlx5e_handle_rx_cqe_mpwrq_shampo, }; -static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config) -{ - return config->rx_filter == HWTSTAMP_FILTER_ALL; -} - static inline void mlx5e_read_cqe_slot(struct mlx5_cqwq *wq, u32 cqcc, void *data) { @@ -1559,7 +1556,7 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va, u32 frag_size, u16 headroom, u32 cqe_bcnt, u32 metasize) { - struct sk_buff *skb = build_skb(va, frag_size); + struct sk_buff *skb = napi_build_skb(va, frag_size); if (unlikely(!skb)) { rq->stats->buff_alloc_err++; @@ -1575,16 +1572,19 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va, return skb; } -static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom, - u32 len, struct xdp_buff *xdp) +static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, + void *va, u16 headroom, u32 len, + struct mlx5e_xdp_buff *mxbuf) { - xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq); - xdp_prepare_buff(xdp, va, headroom, len, true); + xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq); + xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true); + mxbuf->cqe = cqe; + mxbuf->rq = rq; } static struct sk_buff * mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, - u32 cqe_bcnt) + struct mlx5_cqe64 *cqe, u32 cqe_bcnt) { union mlx5e_alloc_unit *au = wi->au; u16 rx_headroom = rq->buff.headroom; @@ -1606,16 +1606,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, prog = rcu_dereference(rq->xdp_prog); if (prog) { - struct xdp_buff xdp; + struct mlx5e_xdp_buff mxbuf; net_prefetchw(va); /* xdp_frame data area */ - mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); - if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf); + if (mlx5e_xdp_handle(rq, prog, &mxbuf)) return NULL; /* page/packet was consumed by XDP */ - rx_headroom = xdp.data - xdp.data_hard_start; - metasize = xdp.data - xdp.data_meta; - cqe_bcnt = xdp.data_end - xdp.data; + rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start; + metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta; + cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data; } frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize); @@ 
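[Editor's note] The en_rx.c refactor that follows threads the CQE into every skb_from_cqe helper by wrapping struct xdp_buff in a driver-private struct mlx5e_xdp_buff. That wrapper is what makes the new xdp_metadata_ops kfuncs possible: the program's context pointer is really the embedded xdp_buff, so the driver can cast back to reach per-packet hardware state. A hedged sketch of the pattern — demo_* names are assumed, and the CQE read is a stand-in:

	#include <linux/bpf.h>
	#include <linux/errno.h>
	#include <net/xdp.h>

	struct demo_cqe;

	struct demo_xdp_buff {
		struct xdp_buff xdp;	/* must stay first: the kfunc ctx points here */
		struct demo_cqe *cqe;	/* per-packet hardware descriptor for hints */
	};

	/* An RX-timestamp hint handler as wired through struct xdp_metadata_ops. */
	static int demo_xmo_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
	{
		const struct demo_xdp_buff *mxbuf = (void *)ctx;

		if (!mxbuf->cqe)
			return -ENODATA;

		*timestamp = 0;		/* stand-in for decoding the CQE timestamp */
		return 0;
	}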
-1630,16 +1630,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, static struct sk_buff * mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, - u32 cqe_bcnt) + struct mlx5_cqe64 *cqe, u32 cqe_bcnt) { struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0]; struct mlx5e_wqe_frag_info *head_wi = wi; union mlx5e_alloc_unit *au = wi->au; u16 rx_headroom = rq->buff.headroom; struct skb_shared_info *sinfo; + struct mlx5e_xdp_buff mxbuf; u32 frag_consumed_bytes; struct bpf_prog *prog; - struct xdp_buff xdp; struct sk_buff *skb; dma_addr_t addr; u32 truesize; @@ -1654,8 +1654,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi net_prefetchw(va); /* xdp_frame data area */ net_prefetch(va + rx_headroom); - mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp); - sinfo = xdp_get_shared_info_from_buff(&xdp); + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, frag_consumed_bytes, &mxbuf); + sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp); truesize = 0; cqe_bcnt -= frag_consumed_bytes; @@ -1673,13 +1673,13 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi dma_sync_single_for_cpu(rq->pdev, addr + wi->offset, frag_consumed_bytes, rq->buff.map_dir); - if (!xdp_buff_has_frags(&xdp)) { + if (!xdp_buff_has_frags(&mxbuf.xdp)) { /* Init on the first fragment to avoid cold cache access * when possible. */ sinfo->nr_frags = 0; sinfo->xdp_frags_size = 0; - xdp_buff_set_frags_flag(&xdp); + xdp_buff_set_frags_flag(&mxbuf.xdp); } frag = &sinfo->frags[sinfo->nr_frags++]; @@ -1688,7 +1688,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi skb_frag_size_set(frag, frag_consumed_bytes); if (page_is_pfmemalloc(au->page)) - xdp_buff_set_frag_pfmemalloc(&xdp); + xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp); sinfo->xdp_frags_size += frag_consumed_bytes; truesize += frag_info->frag_stride; @@ -1698,10 +1698,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi wi++; } - au = head_wi->au; - prog = rcu_dereference(rq->xdp_prog); - if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) { + if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { int i; @@ -1711,22 +1709,22 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi return NULL; /* page/packet was consumed by XDP */ } - skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz, - xdp.data - xdp.data_hard_start, - xdp.data_end - xdp.data, - xdp.data - xdp.data_meta); + skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz, + mxbuf.xdp.data - mxbuf.xdp.data_hard_start, + mxbuf.xdp.data_end - mxbuf.xdp.data, + mxbuf.xdp.data - mxbuf.xdp.data_meta); if (unlikely(!skb)) return NULL; - page_ref_inc(au->page); + page_ref_inc(head_wi->au->page); - if (unlikely(xdp_buff_has_frags(&xdp))) { + if (xdp_buff_has_frags(&mxbuf.xdp)) { int i; /* sinfo->nr_frags is reset by build_skb, calculate again. 
*/ xdp_update_skb_shared_info(skb, wi - head_wi - 1, sinfo->xdp_frags_size, truesize, - xdp_buff_is_frag_pfmemalloc(&xdp)); + xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp)); for (i = 0; i < sinfo->nr_frags; i++) { skb_frag_t *frag = &sinfo->frags[i]; @@ -1777,7 +1775,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) mlx5e_skb_from_cqe_linear, mlx5e_skb_from_cqe_nonlinear, mlx5e_xsk_skb_from_cqe_linear, - rq, wi, cqe_bcnt); + rq, wi, cqe, cqe_bcnt); if (!skb) { /* probably for XDP */ if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { @@ -1792,7 +1790,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); if (mlx5e_cqe_regb_chain(cqe)) - if (!mlx5e_tc_update_skb(cqe, skb)) { + if (!mlx5e_tc_update_skb_nic(cqe, skb)) { dev_kfree_skb_any(skb); goto free_wqe; } @@ -1830,7 +1828,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe, mlx5e_skb_from_cqe_linear, mlx5e_skb_from_cqe_nonlinear, - rq, wi, cqe_bcnt); + rq, wi, cqe, cqe_bcnt); if (!skb) { /* probably for XDP */ if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { @@ -1889,7 +1887,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq, mlx5e_skb_from_cqe_mpwrq_linear, mlx5e_skb_from_cqe_mpwrq_nonlinear, - rq, wi, cqe_bcnt, head_offset, page_idx); + rq, wi, cqe, cqe_bcnt, head_offset, page_idx); if (!skb) goto mpwrq_cqe_out; @@ -1940,7 +1938,8 @@ mlx5e_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq, static struct sk_buff * mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, - u16 cqe_bcnt, u32 head_offset, u32 page_idx) + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, + u32 page_idx) { union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx]; u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt); @@ -1979,7 +1978,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w static struct sk_buff * mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, - u16 cqe_bcnt, u32 head_offset, u32 page_idx) + struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset, + u32 page_idx) { union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx]; u16 rx_headroom = rq->buff.headroom; @@ -2007,19 +2007,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, prog = rcu_dereference(rq->xdp_prog); if (prog) { - struct xdp_buff xdp; + struct mlx5e_xdp_buff mxbuf; net_prefetchw(va); /* xdp_frame data area */ - mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); - if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) { + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf); + if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ return NULL; /* page/packet was consumed by XDP */ } - rx_headroom = xdp.data - xdp.data_hard_start; - metasize = xdp.data - xdp.data_meta; - cqe_bcnt = xdp.data_end - xdp.data; + rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start; + metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta; + cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data; } frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize); @@ -2174,8 +2174,8 @@ static void 
mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq if (likely(head_size)) *skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index); else - *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset, - page_idx); + *skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt, + data_offset, page_idx); if (unlikely(!*skb)) goto free_hd_entry; @@ -2249,14 +2249,15 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq mlx5e_skb_from_cqe_mpwrq_linear, mlx5e_skb_from_cqe_mpwrq_nonlinear, mlx5e_xsk_skb_from_cqe_mpwrq_linear, - rq, wi, cqe_bcnt, head_offset, page_idx); + rq, wi, cqe, cqe_bcnt, head_offset, + page_idx); if (!skb) goto mpwrq_cqe_out; mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); if (mlx5e_cqe_regb_chain(cqe)) - if (!mlx5e_tc_update_skb(cqe, skb)) { + if (!mlx5e_tc_update_skb_nic(cqe, skb)) { dev_kfree_skb_any(skb); goto mpwrq_cqe_out; } @@ -2494,7 +2495,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe, mlx5e_skb_from_cqe_linear, mlx5e_skb_from_cqe_nonlinear, - rq, wi, cqe_bcnt); + rq, wi, cqe, cqe_bcnt); if (!skb) goto wq_free_wqe; @@ -2567,10 +2568,8 @@ int mlx5e_rq_set_handlers(struct mlx5e_rq *rq, struct mlx5e_params *params, bool static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) { - struct mlx5e_priv *priv = netdev_priv(rq->netdev); struct mlx5_wq_cyc *wq = &rq->wqe.wq; struct mlx5e_wqe_frag_info *wi; - struct devlink_port *dl_port; struct sk_buff *skb; u32 cqe_bcnt; u16 trap_id; @@ -2586,15 +2585,15 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe goto free_wqe; } - skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe_bcnt); + skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt); if (!skb) goto free_wqe; mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); skb_push(skb, ETH_HLEN); - dl_port = mlx5e_devlink_get_dl_port(priv); - mlx5_devlink_trap_report(rq->mdev, trap_id, skb, dl_port); + mlx5_devlink_trap_report(rq->mdev, trap_id, skb, + rq->netdev->devlink_port); dev_kfree_skb_any(skb); free_wqe: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 243d5d7750be..e34d9b5fb504 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -43,8 +43,10 @@ #include <net/ipv6_stubs.h> #include <net/bareudp.h> #include <net/bonding.h> +#include <net/dst_metadata.h> #include "en.h" #include "en/tc/post_act.h" +#include "en/tc/act_stats.h" #include "en_rep.h" #include "en/rep/tc.h" #include "en/rep/neigh.h" @@ -71,6 +73,12 @@ #define MLX5E_TC_TABLE_NUM_GROUPS 4 #define MLX5E_TC_TABLE_MAX_GROUP_SIZE BIT(18) +struct mlx5e_hairpin_params { + struct mlx5_core_dev *mdev; + u32 num_queues; + u32 queue_size; +}; + struct mlx5e_tc_table { /* Protects the dynamic assignment of the t parameter * which is the nic tc root table. 
@@ -93,10 +101,15 @@ struct mlx5e_tc_table { struct mlx5_tc_ct_priv *ct; struct mapping_ctx *mapping; + struct mlx5e_hairpin_params hairpin_params; + struct dentry *dfs_root; + + /* tc action stats */ + struct mlx5e_tc_act_stats_handle *action_stats_handle; }; struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = { - [CHAIN_TO_REG] = { + [MAPPED_OBJ_TO_REG] = { .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_C_0, .moffset = 0, .mlen = 16, @@ -123,7 +136,7 @@ struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = { * into reg_b that is passed to SW since we don't * jump between steering domains. */ - [NIC_CHAIN_TO_REG] = { + [NIC_MAPPED_OBJ_TO_REG] = { .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_B, .moffset = 0, .mlen = 16, @@ -278,6 +291,24 @@ mlx5e_tc_match_to_reg_set_and_get_id(struct mlx5_core_dev *mdev, return err; } +static struct mlx5e_tc_act_stats_handle * +get_act_stats_handle(struct mlx5e_priv *priv) +{ + struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs); + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + + if (is_mdev_switchdev_mode(priv->mdev)) { + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + + return uplink_priv->action_stats_handle; + } + + return tc->action_stats_handle; +} + struct mlx5e_tc_int_port_priv * mlx5e_get_int_port_priv(struct mlx5e_priv *priv) { @@ -639,36 +670,36 @@ get_mod_hdr_table(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow) &tc->mod_hdr; } -static int mlx5e_attach_mod_hdr(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow, - struct mlx5e_tc_flow_parse_attr *parse_attr) +int mlx5e_tc_attach_mod_hdr(struct mlx5e_priv *priv, + struct mlx5e_tc_flow *flow, + struct mlx5_flow_attr *attr) { - struct mlx5_modify_hdr *modify_hdr; struct mlx5e_mod_hdr_handle *mh; mh = mlx5e_mod_hdr_attach(priv->mdev, get_mod_hdr_table(priv, flow), mlx5e_get_flow_namespace(flow), - &parse_attr->mod_hdr_acts); + &attr->parse_attr->mod_hdr_acts); if (IS_ERR(mh)) return PTR_ERR(mh); - modify_hdr = mlx5e_mod_hdr_get(mh); - flow->attr->modify_hdr = modify_hdr; - flow->mh = mh; + WARN_ON(attr->modify_hdr); + attr->modify_hdr = mlx5e_mod_hdr_get(mh); + attr->mh = mh; return 0; } -static void mlx5e_detach_mod_hdr(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow) +void mlx5e_tc_detach_mod_hdr(struct mlx5e_priv *priv, + struct mlx5e_tc_flow *flow, + struct mlx5_flow_attr *attr) { /* flow wasn't fully initialized */ - if (!flow->mh) + if (!attr->mh) return; mlx5e_mod_hdr_detach(priv->mdev, get_mod_hdr_table(priv, flow), - flow->mh); - flow->mh = NULL; + attr->mh); + attr->mh = NULL; } static @@ -1017,6 +1048,136 @@ static int mlx5e_hairpin_get_prio(struct mlx5e_priv *priv, return 0; } +static int debugfs_hairpin_queues_set(void *data, u64 val) +{ + struct mlx5e_hairpin_params *hp = data; + + if (!val) { + mlx5_core_err(hp->mdev, + "Number of hairpin queues must be > 0\n"); + return -EINVAL; + } + + hp->num_queues = val; + + return 0; +} + +static int debugfs_hairpin_queues_get(void *data, u64 *val) +{ + struct mlx5e_hairpin_params *hp = data; + + *val = hp->num_queues; + + return 0; +} +DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queues, debugfs_hairpin_queues_get, + debugfs_hairpin_queues_set, "%llu\n"); + +static int debugfs_hairpin_queue_size_set(void *data, u64 val) +{ + struct mlx5e_hairpin_params *hp = data; + + if (val > BIT(MLX5_CAP_GEN(hp->mdev, log_max_hairpin_num_packets))) { + 
mlx5_core_err(hp->mdev, + "Invalid hairpin queue size, must be <= %lu\n", + BIT(MLX5_CAP_GEN(hp->mdev, + log_max_hairpin_num_packets))); + return -EINVAL; + } + + hp->queue_size = roundup_pow_of_two(val); + + return 0; +} + +static int debugfs_hairpin_queue_size_get(void *data, u64 *val) +{ + struct mlx5e_hairpin_params *hp = data; + + *val = hp->queue_size; + + return 0; +} +DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queue_size, + debugfs_hairpin_queue_size_get, + debugfs_hairpin_queue_size_set, "%llu\n"); + +static int debugfs_hairpin_num_active_get(void *data, u64 *val) +{ + struct mlx5e_tc_table *tc = data; + struct mlx5e_hairpin_entry *hpe; + u32 cnt = 0; + u32 bkt; + + mutex_lock(&tc->hairpin_tbl_lock); + hash_for_each(tc->hairpin_tbl, bkt, hpe, hairpin_hlist) + cnt++; + mutex_unlock(&tc->hairpin_tbl_lock); + + *val = cnt; + + return 0; +} +DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_num_active, + debugfs_hairpin_num_active_get, NULL, "%llu\n"); + +static int debugfs_hairpin_table_dump_show(struct seq_file *file, void *priv) + +{ + struct mlx5e_tc_table *tc = file->private; + struct mlx5e_hairpin_entry *hpe; + u32 bkt; + + mutex_lock(&tc->hairpin_tbl_lock); + hash_for_each(tc->hairpin_tbl, bkt, hpe, hairpin_hlist) + seq_printf(file, "Hairpin peer_vhca_id %u prio %u refcnt %u\n", + hpe->peer_vhca_id, hpe->prio, + refcount_read(&hpe->refcnt)); + mutex_unlock(&tc->hairpin_tbl_lock); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(debugfs_hairpin_table_dump); + +static void mlx5e_tc_debugfs_init(struct mlx5e_tc_table *tc, + struct dentry *dfs_root) +{ + if (IS_ERR_OR_NULL(dfs_root)) + return; + + tc->dfs_root = debugfs_create_dir("tc", dfs_root); + + debugfs_create_file("hairpin_num_queues", 0644, tc->dfs_root, + &tc->hairpin_params, &fops_hairpin_queues); + debugfs_create_file("hairpin_queue_size", 0644, tc->dfs_root, + &tc->hairpin_params, &fops_hairpin_queue_size); + debugfs_create_file("hairpin_num_active", 0444, tc->dfs_root, tc, + &fops_hairpin_num_active); + debugfs_create_file("hairpin_table_dump", 0444, tc->dfs_root, tc, + &debugfs_hairpin_table_dump_fops); +} + +static void +mlx5e_hairpin_params_init(struct mlx5e_hairpin_params *hairpin_params, + struct mlx5_core_dev *mdev) +{ + u64 link_speed64; + u32 link_speed; + + hairpin_params->mdev = mdev; + /* set hairpin pair per each 50Gbs share of the link */ + mlx5e_port_max_linkspeed(mdev, &link_speed); + link_speed = max_t(u32, link_speed, 50000); + link_speed64 = link_speed; + do_div(link_speed64, 50000); + hairpin_params->num_queues = link_speed64; + + hairpin_params->queue_size = + BIT(min_t(u32, 16 - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(mdev), + MLX5_CAP_GEN(mdev, log_max_hairpin_num_packets))); +} + static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow, struct mlx5e_tc_flow_parse_attr *parse_attr, @@ -1028,8 +1189,6 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv, struct mlx5_core_dev *peer_mdev; struct mlx5e_hairpin_entry *hpe; struct mlx5e_hairpin *hp; - u64 link_speed64; - u32 link_speed; u8 match_prio; u16 peer_id; int err; @@ -1082,21 +1241,16 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv, hash_hairpin_info(peer_id, match_prio)); mutex_unlock(&tc->hairpin_tbl_lock); - params.log_data_size = clamp_t(u8, 16, - MLX5_CAP_GEN(priv->mdev, log_min_hairpin_wq_data_sz), - MLX5_CAP_GEN(priv->mdev, log_max_hairpin_wq_data_sz)); - params.log_num_packets = params.log_data_size - - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(priv->mdev); - params.log_num_packets = min_t(u8, params.log_num_packets, - 
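[Editor's note] The hairpin knobs above follow the stock DEFINE_DEBUGFS_ATTRIBUTE pattern: a get/set pair over u64 with validation in the setter, registered via debugfs_create_file(). Given the directories created elsewhere in this series, the files should land under the device's mlx5 debugfs root at nic/fs/tc/. The defaults come from link speed — for example, a 200 Gb/s uplink works out to 200000 / 50000 = 4 hairpin queues. A minimal validated knob for illustration, with demo_* names assumed:

	#include <linux/debugfs.h>
	#include <linux/errno.h>

	static u64 demo_num_queues = 4;

	static int demo_queues_set(void *data, u64 val)
	{
		if (!val)
			return -EINVAL;		/* validate before storing */
		*(u64 *)data = val;
		return 0;
	}

	static int demo_queues_get(void *data, u64 *val)
	{
		*val = *(u64 *)data;
		return 0;
	}
	DEFINE_DEBUGFS_ATTRIBUTE(demo_queues_fops, demo_queues_get, demo_queues_set,
				 "%llu\n");

	static void demo_knob_register(struct dentry *parent)
	{
		debugfs_create_file("num_queues", 0644, parent, &demo_num_queues,
				    &demo_queues_fops);
	}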
MLX5_CAP_GEN(priv->mdev, log_max_hairpin_num_packets)); + params.log_num_packets = ilog2(tc->hairpin_params.queue_size); + params.log_data_size = + clamp_t(u32, + params.log_num_packets + + MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(priv->mdev), + MLX5_CAP_GEN(priv->mdev, log_min_hairpin_wq_data_sz), + MLX5_CAP_GEN(priv->mdev, log_max_hairpin_wq_data_sz)); params.q_counter = priv->q_counter; - /* set hairpin pair per each 50Gbs share of the link */ - mlx5e_port_max_linkspeed(priv->mdev, &link_speed); - link_speed = max_t(u32, link_speed, 50000); - link_speed64 = link_speed; - do_div(link_speed64, 50000); - params.num_channels = link_speed64; + params.num_channels = tc->hairpin_params.num_queues; hp = mlx5e_hairpin_create(priv, ¶ms, peer_ifindex); hpe->hp = hp; @@ -1301,7 +1455,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv, } if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { - err = mlx5e_attach_mod_hdr(priv, flow, parse_attr); + err = mlx5e_tc_attach_mod_hdr(priv, flow, attr); if (err) return err; } @@ -1361,7 +1515,7 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts); - mlx5e_detach_mod_hdr(priv, flow); + mlx5e_tc_detach_mod_hdr(priv, flow, attr); } if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) @@ -1451,7 +1605,7 @@ mlx5e_tc_offload_to_slow_path(struct mlx5_eswitch *esw, goto err_get_chain; err = mlx5e_tc_match_to_reg_set(esw->dev, &mod_acts, MLX5_FLOW_NAMESPACE_FDB, - CHAIN_TO_REG, chain_mapping); + MAPPED_OBJ_TO_REG, chain_mapping); if (err) goto err_reg_set; @@ -1472,7 +1626,7 @@ skip_restore: goto err_offload; } - flow->slow_mh = mh; + flow->attr->slow_mh = mh; flow->chain_mapping = chain_mapping; flow_flag_set(flow, SLOW); @@ -1497,6 +1651,7 @@ err_get_chain: void mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw, struct mlx5e_tc_flow *flow) { + struct mlx5e_mod_hdr_handle *slow_mh = flow->attr->slow_mh; struct mlx5_flow_attr *slow_attr; slow_attr = mlx5_alloc_flow_attr(MLX5_FLOW_NAMESPACE_FDB); @@ -1509,16 +1664,16 @@ void mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw, slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; slow_attr->esw_attr->split_count = 0; slow_attr->flags |= MLX5_ATTR_FLAG_SLOW_PATH; - if (flow->slow_mh) { + if (slow_mh) { slow_attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; - slow_attr->modify_hdr = mlx5e_mod_hdr_get(flow->slow_mh); + slow_attr->modify_hdr = mlx5e_mod_hdr_get(slow_mh); } mlx5e_tc_unoffload_fdb_rules(esw, flow, slow_attr); - if (flow->slow_mh) { - mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), flow->slow_mh); + if (slow_mh) { + mlx5e_mod_hdr_detach(esw->dev, get_mod_hdr_table(flow->priv, flow), slow_mh); mlx5_chains_put_chain_mapping(esw_chains(esw), flow->chain_mapping); flow->chain_mapping = 0; - flow->slow_mh = NULL; + flow->attr->slow_mh = NULL; } flow_flag_clear(flow, SLOW); kfree(slow_attr); @@ -1629,26 +1784,6 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro return err; } -int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow, - struct mlx5_flow_attr *attr) -{ - struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts = &attr->parse_attr->mod_hdr_acts; - struct mlx5_modify_hdr *mod_hdr; - - mod_hdr = mlx5_modify_header_alloc(priv->mdev, - mlx5e_get_flow_namespace(flow), - mod_hdr_acts->num_actions, - mod_hdr_acts->actions); - if (IS_ERR(mod_hdr)) - return PTR_ERR(mod_hdr); - - WARN_ON(attr->modify_hdr); - attr->modify_hdr = 
mod_hdr; - - return 0; -} - static int set_encap_dests(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow, @@ -1768,10 +1903,8 @@ verify_attr_actions(u32 actions, struct netlink_ext_ack *extack) static int post_process_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr, - bool is_post_act_attr, struct netlink_ext_ack *extack) { - struct mlx5_eswitch *esw = flow->priv->mdev->priv.eswitch; bool vf_tun; int err = 0; @@ -1783,34 +1916,22 @@ post_process_attr(struct mlx5e_tc_flow *flow, if (err) goto err_out; - if (mlx5e_is_eswitch_flow(flow)) { - err = mlx5_eswitch_add_vlan_action(esw, attr); + if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { + err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr); if (err) goto err_out; } - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { - if (vf_tun || is_post_act_attr) { - err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr); - if (err) - goto err_out; - } else { - err = mlx5e_attach_mod_hdr(flow->priv, flow, attr->parse_attr); - if (err) - goto err_out; - } - } - if (attr->branch_true && attr->branch_true->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { - err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_true); + err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr->branch_true); if (err) goto err_out; } if (attr->branch_false && attr->branch_false->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { - err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_false); + err = mlx5e_tc_attach_mod_hdr(flow->priv, flow, attr->branch_false); if (err) goto err_out; } @@ -1924,7 +2045,11 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, esw_attr->int_port = int_port; } - err = post_process_attr(flow, attr, false, extack); + err = post_process_attr(flow, attr, extack); + if (err) + goto err_out; + + err = mlx5e_tc_act_stats_add_flow(get_act_stats_handle(priv), flow); if (err) goto err_out; @@ -1998,8 +2123,6 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, if (mlx5_flow_has_geneve_opt(flow)) mlx5_geneve_tlv_option_del(priv->mdev->geneve); - mlx5_eswitch_del_vlan_action(esw, attr); - if (flow->decap_route) mlx5e_detach_decap_route(priv, flow); @@ -2009,10 +2132,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts); - if (vf_tun && attr->modify_hdr) - mlx5_modify_header_dealloc(priv->mdev, attr->modify_hdr); - else - mlx5e_detach_mod_hdr(priv, flow); + mlx5e_tc_detach_mod_hdr(priv, flow, attr); } if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) @@ -2027,13 +2147,12 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, if (flow_flag_test(flow, L3_TO_L2_DECAP)) mlx5e_detach_decap(priv, flow); + mlx5e_tc_act_stats_del_flow(get_act_stats_handle(priv), flow); + free_flow_post_acts(flow); free_branch_attr(flow, attr->branch_true); free_branch_attr(flow, attr->branch_false); - if (flow->attr->lag.count) - mlx5_lag_del_mpesw_rule(esw->dev); - kvfree(attr->esw_attr->rx_tun_attr); kvfree(attr->parse_attr); kfree(flow->attr); @@ -2492,13 +2611,13 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv, err = mlx5e_tc_set_attr_rx_tun(flow, spec); if (err) return err; - } else if (tunnel && tunnel->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) { + } else if (tunnel) { struct mlx5_flow_spec *tmp_spec; tmp_spec = kvzalloc(sizeof(*tmp_spec), GFP_KERNEL); if (!tmp_spec) { - NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory for vxlan tmp spec"); - netdev_warn(priv->netdev, "Failed to allocate memory for 
vxlan tmp spec"); + NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory for tunnel tmp spec"); + netdev_warn(priv->netdev, "Failed to allocate memory for tunnel tmp spec"); return -ENOMEM; } memcpy(tmp_spec, spec, sizeof(*tmp_spec)); @@ -3560,7 +3679,6 @@ out_ok: static bool actions_match_supported_fdb(struct mlx5e_priv *priv, - struct mlx5e_tc_flow_parse_attr *parse_attr, struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack) { @@ -3609,7 +3727,7 @@ actions_match_supported(struct mlx5e_priv *priv, return false; if (mlx5e_is_eswitch_flow(flow) && - !actions_match_supported_fdb(priv, parse_attr, flow, extack)) + !actions_match_supported_fdb(priv, flow, extack)) return false; return true; @@ -3692,10 +3810,13 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr, INIT_LIST_HEAD(&attr2->list); parse_attr->filter_dev = attr->parse_attr->filter_dev; attr2->action = 0; + attr2->counter = NULL; + attr->tc_act_cookies_count = 0; attr2->flags = 0; attr2->parse_attr = parse_attr; attr2->dest_chain = 0; attr2->dest_ft = NULL; + attr2->act_id_restore_rule = NULL; if (ns_type == MLX5_FLOW_NAMESPACE_FDB) { attr2->esw_attr->out_count = 0; @@ -3831,7 +3952,7 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack) if (err) goto out_free; - err = post_process_attr(flow, attr, true, extack); + err = post_process_attr(flow, attr, extack); if (err) goto out_free; @@ -3991,6 +4112,11 @@ parse_branch_ctrl(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act, jump_state->jumping_attr = attr->branch_false; jump_state->jump_count = jump_count; + + /* branching action requires its own counter */ + attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT; + flow_flag_set(flow, USE_ACT_STATS); + return 0; err_branch_false: @@ -4051,6 +4177,8 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state, goto out_free; parse_state->actions |= attr->action; + if (!tc_act->stats_action) + attr->tc_act_cookies[attr->tc_act_cookies_count++] = act->cookie; /* Split attr for multi table act if not the last act. 
*/ if (jump_state.jump_target || @@ -4184,12 +4312,7 @@ static bool is_lag_dev(struct mlx5e_priv *priv, static bool is_multiport_eligible(struct mlx5e_priv *priv, struct net_device *out_dev) { - if (same_hw_reps(priv, out_dev) && - MLX5_CAP_PORT_SELECTION(priv->mdev, port_select_flow_table) && - MLX5_CAP_GEN(priv->mdev, create_lag_when_not_master_up)) - return true; - - return false; + return same_hw_reps(priv, out_dev) && mlx5_lag_is_mpesw(priv->mdev); } bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv, @@ -4360,6 +4483,9 @@ static bool is_peer_flow_needed(struct mlx5e_tc_flow *flow) (is_rep_ingress || act_is_encap)) return true; + if (mlx5_lag_is_mpesw(esw_attr->in_mdev)) + return true; + return false; } @@ -4398,8 +4524,7 @@ mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr) if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts); - if (attr->modify_hdr) - mlx5_modify_header_dealloc(flow->priv->mdev, attr->modify_hdr); + mlx5e_tc_detach_mod_hdr(flow->priv, flow, attr); } } @@ -4492,7 +4617,6 @@ __mlx5e_add_fdb_flow(struct mlx5e_priv *priv, struct mlx5_core_dev *in_mdev) { struct flow_rule *rule = flow_cls_offload_flow_rule(f); - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct netlink_ext_ack *extack = f->common.extack; struct mlx5e_tc_flow_parse_attr *parse_attr; struct mlx5e_tc_flow *flow; @@ -4521,33 +4645,21 @@ __mlx5e_add_fdb_flow(struct mlx5e_priv *priv, if (err) goto err_free; - /* always set IP version for indirect table handling */ - flow->attr->ip_version = mlx5e_tc_get_ip_version(&parse_attr->spec, true); - err = parse_tc_fdb_actions(priv, &rule->action, flow, extack); if (err) goto err_free; - if (flow->attr->lag.count) { - err = mlx5_lag_add_mpesw_rule(esw->dev); - if (err) - goto err_free; - } - err = mlx5e_tc_add_fdb_flow(priv, flow, extack); complete_all(&flow->init_done); if (err) { if (!(err == -ENETUNREACH && mlx5_lag_is_multipath(in_mdev))) - goto err_lag; + goto err_free; add_unready_flow(flow); } return flow; -err_lag: - if (flow->attr->lag.count) - mlx5_lag_del_mpesw_rule(esw->dev); err_free: mlx5e_flow_put(priv, flow); out: @@ -4579,8 +4691,10 @@ static int mlx5e_tc_add_fdb_peer_flow(struct flow_cls_offload *f, * So packets redirected to uplink use the same mdev of the * original flow and packets redirected from uplink use the * peer mdev. + * In multiport eswitch it's a special case that we need to + * keep the original mdev. 
*/ - if (attr->in_rep->vport == MLX5_VPORT_UPLINK) + if (attr->in_rep->vport == MLX5_VPORT_UPLINK && !mlx5_lag_is_mpesw(priv->mdev)) in_mdev = peer_priv->mdev; else in_mdev = priv->mdev; @@ -4842,6 +4956,12 @@ errout: return err; } +int mlx5e_tc_fill_action_stats(struct mlx5e_priv *priv, + struct flow_offload_action *fl_act) +{ + return mlx5e_tc_act_stats_fill_stats(get_act_stats_handle(priv), fl_act); +} + int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv, struct flow_cls_offload *f, unsigned long flags) { @@ -4868,11 +4988,15 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv, } if (mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, CT)) { - counter = mlx5e_tc_get_counter(flow); - if (!counter) - goto errout; + if (flow_flag_test(flow, USE_ACT_STATS)) { + f->use_act_stats = true; + } else { + counter = mlx5e_tc_get_counter(flow); + if (!counter) + goto errout; - mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse); + mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse); + } } /* Under multipath it's possible for one rule to be currently @@ -4888,14 +5012,18 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv, u64 packets2; u64 lastuse2; - counter = mlx5e_tc_get_counter(flow->peer_flow); - if (!counter) - goto no_peer_counter; - mlx5_fc_query_cached(counter, &bytes2, &packets2, &lastuse2); - - bytes += bytes2; - packets += packets2; - lastuse = max_t(u64, lastuse, lastuse2); + if (flow_flag_test(flow, USE_ACT_STATS)) { + f->use_act_stats = true; + } else { + counter = mlx5e_tc_get_counter(flow->peer_flow); + if (!counter) + goto no_peer_counter; + mlx5_fc_query_cached(counter, &bytes2, &packets2, &lastuse2); + + bytes += bytes2; + packets += packets2; + lastuse = max_t(u64, lastuse, lastuse2); + } } no_peer_counter: @@ -5220,6 +5348,8 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv) tc->ct = mlx5_tc_ct_init(priv, tc->chains, &tc->mod_hdr, MLX5_FLOW_NAMESPACE_KERNEL, tc->post_act); + mlx5e_hairpin_params_init(&tc->hairpin_params, dev); + tc->netdevice_nb.notifier_call = mlx5e_tc_netdev_event; err = register_netdevice_notifier_dev_net(priv->netdev, &tc->netdevice_nb, @@ -5230,8 +5360,18 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv) goto err_reg; } + mlx5e_tc_debugfs_init(tc, mlx5e_fs_get_debugfs_root(priv->fs)); + + tc->action_stats_handle = mlx5e_tc_act_stats_create(); + if (IS_ERR(tc->action_stats_handle)) + goto err_act_stats; + return 0; +err_act_stats: + unregister_netdevice_notifier_dev_net(priv->netdev, + &tc->netdevice_nb, + &tc->netdevice_nn); err_reg: mlx5_tc_ct_clean(tc->ct); mlx5e_tc_post_act_destroy(tc->post_act); @@ -5258,6 +5398,8 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv) { struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs); + debugfs_remove_recursive(tc->dfs_root); + if (tc->netdevice_nb.notifier_call) unregister_netdevice_notifier_dev_net(priv->netdev, &tc->netdevice_nb, @@ -5279,6 +5421,7 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv) mapping_destroy(tc->mapping); mlx5_chains_destroy(tc->chains); mlx5e_tc_nic_destroy_miss_table(priv); + mlx5e_tc_act_stats_free(tc->action_stats_handle); } int mlx5e_tc_ht_init(struct rhashtable *tc_ht) @@ -5355,8 +5498,14 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv) goto err_register_fib_notifier; } + uplink_priv->action_stats_handle = mlx5e_tc_act_stats_create(); + if (IS_ERR(uplink_priv->action_stats_handle)) + goto err_action_counter; + return 0; +err_action_counter: + mlx5e_tc_tun_cleanup(uplink_priv->encap); 
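[Editor's note] The action-stats plumbing records each action's cookie at parse time (attr->tc_act_cookies) and, for flows flagged USE_ACT_STATS, answers stats queries per action through mlx5e_tc_fill_action_stats() instead of the flow-level mlx5_fc_query_cached() path. Conceptually this is a cookie-to-counter map; the real handle's internals are not shown in this diff, so here is only a toy version built on an xarray:

	#include <linux/xarray.h>

	struct demo_counter {
		u64 bytes, packets, lastuse;
	};

	static DEFINE_XARRAY(demo_act_stats);	/* act cookie -> struct demo_counter * */

	static int demo_act_stats_add(unsigned long cookie, struct demo_counter *c)
	{
		return xa_err(xa_store(&demo_act_stats, cookie, c, GFP_KERNEL));
	}

	static struct demo_counter *demo_act_stats_lookup(unsigned long cookie)
	{
		return xa_load(&demo_act_stats, cookie);
	}

	static void demo_act_stats_del(unsigned long cookie)
	{
		xa_erase(&demo_act_stats, cookie);
	}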
err_register_fib_notifier: mapping_destroy(uplink_priv->tunnel_enc_opts_mapping); err_enc_opts_mapping: @@ -5383,6 +5532,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv) mlx5_tc_ct_clean(uplink_priv->ct_priv); mlx5e_flow_meters_cleanup(uplink_priv->flow_meters); mlx5e_tc_post_act_destroy(uplink_priv->post_act); + mlx5e_tc_act_stats_free(uplink_priv->action_stats_handle); } int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags) @@ -5456,48 +5606,268 @@ int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data, } } -bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, - struct sk_buff *skb) +static bool mlx5e_tc_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb, + struct mlx5e_tc_update_priv *tc_priv, + u32 tunnel_id) { -#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - u32 chain = 0, chain_tag, reg_b, zone_restore_id; - struct mlx5e_priv *priv = netdev_priv(skb->dev); - struct mlx5_mapped_obj mapped_obj; - struct tc_skb_ext *tc_skb_ext; - struct mlx5e_tc_table *tc; + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct tunnel_match_enc_opts enc_opts = {}; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + struct metadata_dst *tun_dst; + struct tunnel_match_key key; + u32 tun_id, enc_opts_id; + struct net_device *dev; int err; - reg_b = be32_to_cpu(cqe->ft_metadata); - tc = mlx5e_fs_get_tc(priv->fs); - chain_tag = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK; + enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK; + tun_id = tunnel_id >> ENC_OPTS_BITS; - err = mapping_find(tc->mapping, chain_tag, &mapped_obj); + if (!tun_id) + return true; + + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + + err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key); if (err) { netdev_dbg(priv->netdev, - "Couldn't find chain for chain tag: %d, err: %d\n", - chain_tag, err); + "Couldn't find tunnel for tun_id: %d, err: %d\n", + tun_id, err); return false; } - if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) { - chain = mapped_obj.chain; - tc_skb_ext = tc_skb_ext_alloc(skb); - if (WARN_ON(!tc_skb_ext)) + if (enc_opts_id) { + err = mapping_find(uplink_priv->tunnel_enc_opts_mapping, + enc_opts_id, &enc_opts); + if (err) { + netdev_dbg(priv->netdev, + "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n", + enc_opts_id, err); return false; + } + } + + switch (key.enc_control.addr_type) { + case FLOW_DISSECTOR_KEY_IPV4_ADDRS: + tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst, + key.enc_ip.tos, key.enc_ip.ttl, + key.enc_tp.dst, TUNNEL_KEY, + key32_to_tunnel_id(key.enc_key_id.keyid), + enc_opts.key.len); + break; + case FLOW_DISSECTOR_KEY_IPV6_ADDRS: + tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst, + key.enc_ip.tos, key.enc_ip.ttl, + key.enc_tp.dst, 0, TUNNEL_KEY, + key32_to_tunnel_id(key.enc_key_id.keyid), + enc_opts.key.len); + break; + default: + netdev_dbg(priv->netdev, + "Couldn't restore tunnel, unsupported addr_type: %d\n", + key.enc_control.addr_type); + return false; + } + + if (!tun_dst) { + netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n"); + return false; + } + + tun_dst->u.tun_info.key.tp_src = key.enc_tp.src; + + if (enc_opts.key.len) + ip_tunnel_info_opts_set(&tun_dst->u.tun_info, + enc_opts.key.data, + enc_opts.key.len, + enc_opts.key.dst_opt_type); + + skb_dst_set(skb, (struct dst_entry *)tun_dst); + dev = dev_get_by_index(&init_net, key.filter_ifindex); + if (!dev) { + netdev_dbg(priv->netdev, + "Couldn't find tunnel 
device with ifindex: %d\n", + key.filter_ifindex); + return false; + } - tc_skb_ext->chain = chain; + /* Set fwd_dev so we do dev_put() after datapath */ + tc_priv->fwd_dev = dev; - zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) & - ESW_ZONE_ID_MASK; + skb->dev = dev; - if (!mlx5e_tc_ct_restore_flow(tc->ct, skb, - zone_restore_id)) + return true; +} + +static bool mlx5e_tc_restore_skb_tc_meta(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv, + struct mlx5_mapped_obj *mapped_obj, u32 zone_restore_id, + u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv) +{ + struct mlx5e_priv *priv = netdev_priv(skb->dev); + struct tc_skb_ext *tc_skb_ext; + u64 act_miss_cookie; + u32 chain; + + chain = mapped_obj->type == MLX5_MAPPED_OBJ_CHAIN ? mapped_obj->chain : 0; + act_miss_cookie = mapped_obj->type == MLX5_MAPPED_OBJ_ACT_MISS ? + mapped_obj->act_miss_cookie : 0; + if (chain || act_miss_cookie) { + if (!mlx5e_tc_ct_restore_flow(ct_priv, skb, zone_restore_id)) return false; - } else { + + tc_skb_ext = tc_skb_ext_alloc(skb); + if (!tc_skb_ext) { + WARN_ON(1); + return false; + } + + if (act_miss_cookie) { + tc_skb_ext->act_miss_cookie = act_miss_cookie; + tc_skb_ext->act_miss = 1; + } else { + tc_skb_ext->chain = chain; + } + } + + if (tc_priv) + return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id); + + return true; +} + +static void mlx5e_tc_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb, + struct mlx5_mapped_obj *mapped_obj, + struct mlx5e_tc_update_priv *tc_priv) +{ + if (!mlx5e_tc_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) { + netdev_dbg(priv->netdev, + "Failed to restore tunnel info for sampled packet\n"); + return; + } + mlx5e_tc_sample_skb(skb, mapped_obj); +} + +static bool mlx5e_tc_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb, + struct mlx5_mapped_obj *mapped_obj, + struct mlx5e_tc_update_priv *tc_priv, + u32 tunnel_id) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + bool forward_tx = false; + + /* Tunnel restore takes precedence over int port restore */ + if (tunnel_id) + return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id); + + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + + if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb, + mapped_obj->int_port_metadata, &forward_tx)) { + /* Set fwd_dev for future dev_put */ + tc_priv->fwd_dev = skb->dev; + tc_priv->forward_tx = forward_tx; + + return true; + } + + return false; +} + +bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb, + struct mapping_ctx *mapping_ctx, u32 mapped_obj_id, + struct mlx5_tc_ct_priv *ct_priv, + u32 zone_restore_id, u32 tunnel_id, + struct mlx5e_tc_update_priv *tc_priv) +{ + struct mlx5e_priv *priv = netdev_priv(skb->dev); + struct mlx5_mapped_obj mapped_obj; + int err; + + err = mapping_find(mapping_ctx, mapped_obj_id, &mapped_obj); + if (err) { + netdev_dbg(skb->dev, + "Couldn't find mapped object for mapped_obj_id: %d, err: %d\n", + mapped_obj_id, err); + return false; + } + + switch (mapped_obj.type) { + case MLX5_MAPPED_OBJ_CHAIN: + case MLX5_MAPPED_OBJ_ACT_MISS: + return mlx5e_tc_restore_skb_tc_meta(skb, ct_priv, &mapped_obj, zone_restore_id, + tunnel_id, tc_priv); + case MLX5_MAPPED_OBJ_SAMPLE: + mlx5e_tc_restore_skb_sample(priv, skb, &mapped_obj, tc_priv); + tc_priv->skb_done = true; + return true; + case 
MLX5_MAPPED_OBJ_INT_PORT_METADATA: + return mlx5e_tc_restore_skb_int_port(priv, skb, &mapped_obj, tc_priv, tunnel_id); + default: netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type); return false; } -#endif /* CONFIG_NET_TC_SKB_EXT */ - return true; + return false; +} + +bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb) +{ + struct mlx5e_priv *priv = netdev_priv(skb->dev); + u32 mapped_obj_id, reg_b, zone_restore_id; + struct mlx5_tc_ct_priv *ct_priv; + struct mapping_ctx *mapping_ctx; + struct mlx5e_tc_table *tc; + + reg_b = be32_to_cpu(cqe->ft_metadata); + tc = mlx5e_fs_get_tc(priv->fs); + mapped_obj_id = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK; + zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) & + ESW_ZONE_ID_MASK; + ct_priv = tc->ct; + mapping_ctx = tc->mapping; + + return mlx5e_tc_update_skb(cqe, skb, mapping_ctx, mapped_obj_id, ct_priv, zone_restore_id, + 0, NULL); +} + +int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u64 act_miss_cookie, u32 *act_miss_mapping) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mlx5_mapped_obj mapped_obj = {}; + struct mapping_ctx *ctx; + int err; + + ctx = esw->offloads.reg_c0_obj_pool; + + mapped_obj.type = MLX5_MAPPED_OBJ_ACT_MISS; + mapped_obj.act_miss_cookie = act_miss_cookie; + err = mapping_add(ctx, &mapped_obj, act_miss_mapping); + if (err) + return err; + + attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping); + if (IS_ERR(attr->act_id_restore_rule)) + goto err_rule; + + return 0; + +err_rule: + mapping_remove(ctx, *act_miss_mapping); + return err; +} + +void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u32 act_miss_mapping) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mapping_ctx *ctx; + + ctx = esw->offloads.reg_c0_obj_pool; + mlx5_del_flow_rules(attr->act_id_restore_rule); + mapping_remove(ctx, act_miss_mapping); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index 50af70ef22f3..adb39e30f90f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -59,6 +59,8 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags); struct mlx5e_tc_update_priv { struct net_device *fwd_dev; + bool skb_done; + bool forward_tx; }; struct mlx5_nic_flow_attr { @@ -69,35 +71,33 @@ struct mlx5_nic_flow_attr { struct mlx5_flow_attr { u32 action; + unsigned long tc_act_cookies[TCA_ACT_MAX_PRIO]; struct mlx5_fc *counter; struct mlx5_modify_hdr *modify_hdr; + struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */ + struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */ struct mlx5_ct_attr ct_attr; struct mlx5e_sample_attr sample_attr; struct mlx5e_meter_attr meter_attr; struct mlx5e_tc_flow_parse_attr *parse_attr; u32 chain; u16 prio; + u16 tc_act_cookies_count; u32 dest_chain; struct mlx5_flow_table *ft; struct mlx5_flow_table *dest_ft; u8 inner_match_level; u8 outer_match_level; - u8 ip_version; u8 tun_ip_version; int tunnel_id; /* mapped tunnel id */ u32 flags; u32 exe_aso_type; struct list_head list; struct mlx5e_post_act_handle *post_act_handle; - struct { - /* Indicate whether the parsed flow should be counted for lag mode decision - * making - */ - bool count; - } lag; struct mlx5_flow_attr *branch_true; struct mlx5_flow_attr *branch_false; struct 
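[Editor's note] mlx5e_tc_update_skb() now resolves a single mapped-object id and dispatches on its type — chain restore, act-miss resume, sample, or internal port — where the old code only understood chains. For the first two cases the software datapath resumes via a tc_skb_ext. Simplified restore logic mirroring the helper above (requires CONFIG_NET_TC_SKB_EXT; error handling trimmed):

	#include <linux/skbuff.h>
	#include <net/pkt_cls.h>

	/* Hand chain / act-miss state back to the software datapath. */
	static bool demo_restore_tc_meta(struct sk_buff *skb, u32 chain,
					 u64 act_miss_cookie)
	{
		struct tc_skb_ext *ext = tc_skb_ext_alloc(skb);

		if (!ext)
			return false;

		if (act_miss_cookie) {
			ext->act_miss_cookie = act_miss_cookie;
			ext->act_miss = 1;	/* resume at the exact missed action */
		} else {
			ext->chain = chain;	/* resume at the start of the chain */
		}
		return true;
	}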
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 50af70ef22f3..adb39e30f90f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -59,6 +59,8 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags);
 
 struct mlx5e_tc_update_priv {
 	struct net_device *fwd_dev;
+	bool skb_done;
+	bool forward_tx;
 };
 
 struct mlx5_nic_flow_attr {
@@ -69,35 +71,33 @@ struct mlx5_nic_flow_attr {
 
 struct mlx5_flow_attr {
 	u32 action;
+	unsigned long tc_act_cookies[TCA_ACT_MAX_PRIO];
 	struct mlx5_fc *counter;
 	struct mlx5_modify_hdr *modify_hdr;
+	struct mlx5e_mod_hdr_handle *mh; /* attached mod header instance */
+	struct mlx5e_mod_hdr_handle *slow_mh; /* attached mod header instance for slow path */
 	struct mlx5_ct_attr ct_attr;
 	struct mlx5e_sample_attr sample_attr;
 	struct mlx5e_meter_attr meter_attr;
 	struct mlx5e_tc_flow_parse_attr *parse_attr;
 	u32 chain;
 	u16 prio;
+	u16 tc_act_cookies_count;
 	u32 dest_chain;
 	struct mlx5_flow_table *ft;
 	struct mlx5_flow_table *dest_ft;
 	u8 inner_match_level;
 	u8 outer_match_level;
-	u8 ip_version;
 	u8 tun_ip_version;
 	int tunnel_id; /* mapped tunnel id */
 	u32 flags;
 	u32 exe_aso_type;
 	struct list_head list;
 	struct mlx5e_post_act_handle *post_act_handle;
-	struct {
-		/* Indicate whether the parsed flow should be counted for lag mode decision
-		 * making
-		 */
-		bool count;
-	} lag;
 	struct mlx5_flow_attr *branch_true;
 	struct mlx5_flow_attr *branch_false;
 	struct mlx5_flow_attr *jumping_attr;
+	struct mlx5_flow_handle *act_id_restore_rule;
 	/* keep this union last */
 	union {
 		DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr);
@@ -134,7 +134,6 @@ struct mlx5_rx_tun_attr {
 		__be32 v4;
 		struct in6_addr v6;
 	} dst_ip; /* Valid if decap_vport is not zero */
-	u32 vni;
 };
 
 #define MLX5E_TC_TABLE_CHAIN_TAG_BITS 16
@@ -197,6 +196,8 @@ int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
 
 int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 		       struct flow_cls_offload *f, unsigned long flags);
+int mlx5e_tc_fill_action_stats(struct mlx5e_priv *priv,
+			       struct flow_offload_action *fl_act);
 
 int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv,
 				struct tc_cls_matchall_offload *f);
@@ -227,7 +228,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe);
 void mlx5e_tc_reoffload_flows_work(struct work_struct *work);
 
 enum mlx5e_tc_attr_to_reg {
-	CHAIN_TO_REG,
+	MAPPED_OBJ_TO_REG,
 	VPORT_TO_REG,
 	TUNNEL_TO_REG,
 	CTSTATE_TO_REG,
@@ -236,7 +237,7 @@ enum mlx5e_tc_attr_to_reg {
 	MARK_TO_REG,
 	LABELS_TO_REG,
 	FTEID_TO_REG,
-	NIC_CHAIN_TO_REG,
+	NIC_MAPPED_OBJ_TO_REG,
 	NIC_ZONE_RESTORE_TO_REG,
 	PACKET_COLOR_TO_REG,
 };
@@ -285,9 +286,13 @@ int mlx5e_tc_match_to_reg_set_and_get_id(struct mlx5_core_dev *mdev,
 					 enum mlx5e_tc_attr_to_reg type,
 					 u32 data);
 
-int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv,
-			      struct mlx5e_tc_flow *flow,
-			      struct mlx5_flow_attr *attr);
+int mlx5e_tc_attach_mod_hdr(struct mlx5e_priv *priv,
+			    struct mlx5e_tc_flow *flow,
+			    struct mlx5_flow_attr *attr);
+
+void mlx5e_tc_detach_mod_hdr(struct mlx5e_priv *priv,
+			     struct mlx5e_tc_flow *flow,
+			     struct mlx5_flow_attr *attr);
 
 void mlx5e_tc_set_ethertype(struct mlx5_core_dev *mdev,
 			    struct flow_match_basic *match, bool outer,
@@ -366,7 +371,6 @@ struct mlx5e_tc_table *mlx5e_tc_table_alloc(void);
 void mlx5e_tc_table_free(struct mlx5e_tc_table *tc);
 static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
 {
-#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
 	u32 chain, reg_b;
 
 	reg_b = be32_to_cpu(cqe->ft_metadata);
@@ -377,20 +381,29 @@ static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
 	chain = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK;
 	if (chain)
 		return true;
-#endif
 
 	return false;
 }
 
-bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb);
+bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb);
+bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb,
+			 struct mapping_ctx *mapping_ctx, u32 mapped_obj_id,
+			 struct mlx5_tc_ct_priv *ct_priv,
+			 u32 zone_restore_id, u32 tunnel_id,
+			 struct mlx5e_tc_update_priv *tc_priv);
 #else /* CONFIG_MLX5_CLS_ACT */
 static inline struct mlx5e_tc_table *mlx5e_tc_table_alloc(void) { return NULL; }
 static inline void mlx5e_tc_table_free(struct mlx5e_tc_table *tc) {}
 static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe)
 { return false; }
 static inline bool
-mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
+mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
 { return true; }
 #endif
 
+int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				     u64 act_miss_cookie, u32 *act_miss_mapping);
+void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				      u32 act_miss_mapping);
+
 #endif /* __MLX5_EN_TC_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index f7897ddb29c5..df5e780e8e6a 100644
---
a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -720,21 +720,6 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } -void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit_more) -{ - struct mlx5e_tx_wqe_attr wqe_attr; - struct mlx5e_tx_attr attr; - struct mlx5e_tx_wqe *wqe; - u16 pi; - - mlx5e_sq_xmit_prepare(sq, skb, NULL, &attr); - mlx5e_sq_calc_wqe_attr(skb, &attr, &wqe_attr); - pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs); - wqe = MLX5E_TX_FETCH_WQE(sq, pi); - mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, &wqe->eth); - mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, xmit_more); -} - static void mlx5e_tx_wi_dma_unmap(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi, u32 *dma_fifo_cc) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c index 8f7580fec193..38b32e98f3bd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c @@ -629,9 +629,9 @@ static u16 async_eq_depth_devlink_param_get(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE, - &val); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE, + &val); if (!err) return val.vu32; mlx5_core_dbg(dev, "Failed to get param. using default. err = %d\n", err); @@ -817,9 +817,12 @@ static void comp_irqs_release(struct mlx5_core_dev *dev) static int comp_irqs_request(struct mlx5_core_dev *dev) { struct mlx5_eq_table *table = dev->priv.eq_table; + const struct cpumask *prev = cpu_none_mask; + const struct cpumask *mask; int ncomp_eqs = table->num_comp_eqs; u16 *cpus; int ret; + int cpu; int i; ncomp_eqs = table->num_comp_eqs; @@ -838,8 +841,19 @@ static int comp_irqs_request(struct mlx5_core_dev *dev) ret = -ENOMEM; goto free_irqs; } - for (i = 0; i < ncomp_eqs; i++) - cpus[i] = cpumask_local_spread(i, dev->priv.numa_node); + + i = 0; + rcu_read_lock(); + for_each_numa_hop_mask(mask, dev->priv.numa_node) { + for_each_cpu_andnot(cpu, mask, prev) { + cpus[i] = cpu; + if (++i == ncomp_eqs) + goto spread_done; + } + prev = mask; + } +spread_done: + rcu_read_unlock(); ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs); kfree(cpus); if (ret < 0) @@ -874,9 +888,9 @@ static u16 comp_eq_depth_devlink_param_get(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE, - &val); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE, + &val); if (!err) return val.vu32; mlx5_core_dbg(dev, "Failed to get param. using default. 
err = %d\n", err); @@ -946,11 +960,11 @@ static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector, int *eqn, unsigned int *irqn) { struct mlx5_eq_table *table = dev->priv.eq_table; - struct mlx5_eq_comp *eq, *n; + struct mlx5_eq_comp *eq; int err = -ENOENT; int i = 0; - list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + list_for_each_entry(eq, &table->comp_eqs_list, list) { if (i++ == vector) { if (irqn) *irqn = eq->core.irqn; @@ -985,10 +999,10 @@ struct cpumask * mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector) { struct mlx5_eq_table *table = dev->priv.eq_table; - struct mlx5_eq_comp *eq, *n; + struct mlx5_eq_comp *eq; int i = 0; - list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + list_for_each_entry(eq, &table->comp_eqs_list, list) { if (i++ == vector) break; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c index a994e71e05c1..d55775627a47 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c @@ -356,8 +356,8 @@ void esw_acl_ingress_ofld_cleanup(struct mlx5_eswitch *esw, } /* Caller must hold rtnl_lock */ -int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_num, - u32 metadata) +int mlx5_esw_acl_ingress_vport_metadata_update(struct mlx5_eswitch *esw, u16 vport_num, + u32 metadata) { struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num); int err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h index 11d3d3978848..c9f8469e9a47 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ofld.h @@ -24,8 +24,8 @@ static inline bool mlx5_esw_acl_egress_fwd2vport_supported(struct mlx5_eswitch * /* Eswitch acl ingress external APIs */ int esw_acl_ingress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport); void esw_acl_ingress_ofld_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport); -int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_num, - u32 metadata); +int mlx5_esw_acl_ingress_vport_metadata_update(struct mlx5_eswitch *esw, u16 vport_num, + u32 metadata); void mlx5_esw_acl_ingress_vport_drop_rule_destroy(struct mlx5_eswitch *esw, u16 vport_num); int mlx5_esw_acl_ingress_vport_drop_rule_create(struct mlx5_eswitch *esw, u16 vport_num); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c index c9a91158e99c..9959e9fd15a1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c @@ -16,18 +16,12 @@ #include "lib/fs_chains.h" #include "en/mod_hdr.h" -#define MLX5_ESW_INDIR_TABLE_SIZE 128 -#define MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX (MLX5_ESW_INDIR_TABLE_SIZE - 2) +#define MLX5_ESW_INDIR_TABLE_SIZE 2 +#define MLX5_ESW_INDIR_TABLE_RECIRC_IDX (MLX5_ESW_INDIR_TABLE_SIZE - 2) #define MLX5_ESW_INDIR_TABLE_FWD_IDX (MLX5_ESW_INDIR_TABLE_SIZE - 1) struct mlx5_esw_indir_table_rule { - struct list_head list; struct mlx5_flow_handle *handle; - union { - __be32 v4; - struct in6_addr v6; - } dst_ip; - u32 vni; struct mlx5_modify_hdr *mh; refcount_t refcnt; }; @@ -38,12 +32,10 @@ struct mlx5_esw_indir_table_entry { struct mlx5_flow_group *recirc_grp; struct mlx5_flow_group *fwd_grp; struct 
mlx5_flow_handle *fwd_rule; - struct list_head recirc_rules; - int recirc_cnt; + struct mlx5_esw_indir_table_rule *recirc_rule; int fwd_ref; u16 vport; - u8 ip_version; }; struct mlx5_esw_indir_table { @@ -89,7 +81,6 @@ mlx5_esw_indir_table_needed(struct mlx5_eswitch *esw, return esw_attr->in_rep->vport == MLX5_VPORT_UPLINK && vf_sf_vport && esw->dev == dest_mdev && - attr->ip_version && attr->flags & MLX5_ATTR_FLAG_SRC_REWRITE; } @@ -101,27 +92,8 @@ mlx5_esw_indir_table_decap_vport(struct mlx5_flow_attr *attr) return esw_attr->rx_tun_attr ? esw_attr->rx_tun_attr->decap_vport : 0; } -static struct mlx5_esw_indir_table_rule * -mlx5_esw_indir_table_rule_lookup(struct mlx5_esw_indir_table_entry *e, - struct mlx5_esw_flow_attr *attr) -{ - struct mlx5_esw_indir_table_rule *rule; - - list_for_each_entry(rule, &e->recirc_rules, list) - if (rule->vni == attr->rx_tun_attr->vni && - !memcmp(&rule->dst_ip, &attr->rx_tun_attr->dst_ip, - sizeof(attr->rx_tun_attr->dst_ip))) - goto found; - return NULL; - -found: - refcount_inc(&rule->refcnt); - return rule; -} - static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, struct mlx5_esw_indir_table_entry *e) { struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; @@ -130,73 +102,18 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, struct mlx5_flow_destination dest = {}; struct mlx5_esw_indir_table_rule *rule; struct mlx5_flow_act flow_act = {}; - struct mlx5_flow_spec *rule_spec; struct mlx5_flow_handle *handle; int err = 0; u32 data; - rule = mlx5_esw_indir_table_rule_lookup(e, esw_attr); - if (rule) + if (e->recirc_rule) { + refcount_inc(&e->recirc_rule->refcnt); return 0; - - if (e->recirc_cnt == MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX) - return -EINVAL; - - rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); - if (!rule_spec) - return -ENOMEM; - - rule = kzalloc(sizeof(*rule), GFP_KERNEL); - if (!rule) { - err = -ENOMEM; - goto out; - } - - rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS | - MLX5_MATCH_MISC_PARAMETERS | - MLX5_MATCH_MISC_PARAMETERS_2; - if (MLX5_CAP_FLOWTABLE_NIC_RX(esw->dev, ft_field_support.outer_ip_version)) { - MLX5_SET(fte_match_param, rule_spec->match_criteria, - outer_headers.ip_version, 0xf); - MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version, - attr->ip_version); - } else if (attr->ip_version) { - MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, - outer_headers.ethertype); - MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ethertype, - (attr->ip_version == 4 ? 
ETH_P_IP : ETH_P_IPV6)); - } else { - err = -EOPNOTSUPP; - goto err_ethertype; } - if (attr->ip_version == 4) { - MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, - outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - MLX5_SET(fte_match_param, rule_spec->match_value, - outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4, - ntohl(esw_attr->rx_tun_attr->dst_ip.v4)); - } else if (attr->ip_version == 6) { - int len = sizeof(struct in6_addr); - - memset(MLX5_ADDR_OF(fte_match_param, rule_spec->match_criteria, - outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), - 0xff, len); - memcpy(MLX5_ADDR_OF(fte_match_param, rule_spec->match_value, - outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), - &esw_attr->rx_tun_attr->dst_ip.v6, len); - } - - MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, - misc_parameters.vxlan_vni); - MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters.vxlan_vni, - MLX5_GET(fte_match_param, spec->match_value, misc_parameters.vxlan_vni)); - - MLX5_SET(fte_match_param, rule_spec->match_criteria, - misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask()); - MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_2.metadata_reg_c_0, - mlx5_eswitch_get_vport_metadata_for_match(esw_attr->in_mdev->priv.eswitch, - MLX5_VPORT_UPLINK)); + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return -ENOMEM; /* Modify flow source to recirculate packet */ data = mlx5_eswitch_get_vport_metadata_for_set(esw, esw_attr->rx_tun_attr->decap_vport); @@ -219,13 +136,14 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; flow_act.flags = FLOW_ACT_IGNORE_FLOW_LEVEL | FLOW_ACT_NO_APPEND; + flow_act.fg = e->recirc_grp; dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; dest.ft = mlx5_chains_get_table(chains, 0, 1, 0); if (IS_ERR(dest.ft)) { err = PTR_ERR(dest.ft); goto err_table; } - handle = mlx5_add_flow_rules(e->ft, rule_spec, &flow_act, &dest, 1); + handle = mlx5_add_flow_rules(e->ft, NULL, &flow_act, &dest, 1); if (IS_ERR(handle)) { err = PTR_ERR(handle); goto err_handle; @@ -233,14 +151,10 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, mlx5e_mod_hdr_dealloc(&mod_acts); rule->handle = handle; - rule->vni = esw_attr->rx_tun_attr->vni; rule->mh = flow_act.modify_hdr; - memcpy(&rule->dst_ip, &esw_attr->rx_tun_attr->dst_ip, - sizeof(esw_attr->rx_tun_attr->dst_ip)); refcount_set(&rule->refcnt, 1); - list_add(&rule->list, &e->recirc_rules); - e->recirc_cnt++; - goto out; + e->recirc_rule = rule; + return 0; err_handle: mlx5_chains_put_table(chains, 0, 1, 0); @@ -250,89 +164,44 @@ err_mod_hdr_alloc: err_mod_hdr_regc1: mlx5e_mod_hdr_dealloc(&mod_acts); err_mod_hdr_regc0: -err_ethertype: kfree(rule); -out: - kvfree(rule_spec); return err; } static void mlx5_esw_indir_table_rule_put(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, struct mlx5_esw_indir_table_entry *e) { - struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; + struct mlx5_esw_indir_table_rule *rule = e->recirc_rule; struct mlx5_fs_chains *chains = esw_chains(esw); - struct mlx5_esw_indir_table_rule *rule; - list_for_each_entry(rule, &e->recirc_rules, list) - if (rule->vni == esw_attr->rx_tun_attr->vni && - !memcmp(&rule->dst_ip, &esw_attr->rx_tun_attr->dst_ip, - sizeof(esw_attr->rx_tun_attr->dst_ip))) - goto found; - - return; + if (!rule) + return; -found: if (!refcount_dec_and_test(&rule->refcnt)) return; 
mlx5_del_flow_rules(rule->handle); mlx5_chains_put_table(chains, 0, 1, 0); mlx5_modify_header_dealloc(esw->dev, rule->mh); - list_del(&rule->list); kfree(rule); - e->recirc_cnt--; + e->recirc_rule = NULL; } -static int mlx5_create_indir_recirc_group(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, - struct mlx5_esw_indir_table_entry *e) +static int mlx5_create_indir_recirc_group(struct mlx5_esw_indir_table_entry *e) { int err = 0, inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); - u32 *in, *match; + u32 *in; in = kvzalloc(inlen, GFP_KERNEL); if (!in) return -ENOMEM; - MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS | - MLX5_MATCH_MISC_PARAMETERS | MLX5_MATCH_MISC_PARAMETERS_2); - match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); - - if (MLX5_CAP_FLOWTABLE_NIC_RX(esw->dev, ft_field_support.outer_ip_version)) - MLX5_SET(fte_match_param, match, outer_headers.ip_version, 0xf); - else - MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ethertype); - - if (attr->ip_version == 4) { - MLX5_SET_TO_ONES(fte_match_param, match, - outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - } else if (attr->ip_version == 6) { - memset(MLX5_ADDR_OF(fte_match_param, match, - outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), - 0xff, sizeof(struct in6_addr)); - } else { - err = -EOPNOTSUPP; - goto out; - } - - MLX5_SET_TO_ONES(fte_match_param, match, misc_parameters.vxlan_vni); - MLX5_SET(fte_match_param, match, misc_parameters_2.metadata_reg_c_0, - mlx5_eswitch_get_vport_metadata_mask()); MLX5_SET(create_flow_group_in, in, start_flow_index, 0); - MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX); + MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_RECIRC_IDX); e->recirc_grp = mlx5_create_flow_group(e->ft, in); - if (IS_ERR(e->recirc_grp)) { + if (IS_ERR(e->recirc_grp)) err = PTR_ERR(e->recirc_grp); - goto out; - } - INIT_LIST_HEAD(&e->recirc_rules); - e->recirc_cnt = 0; - -out: kvfree(in); return err; } @@ -343,19 +212,12 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw, int err = 0, inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); struct mlx5_flow_destination dest = {}; struct mlx5_flow_act flow_act = {}; - struct mlx5_flow_spec *spec; u32 *in; in = kvzalloc(inlen, GFP_KERNEL); if (!in) return -ENOMEM; - spec = kvzalloc(sizeof(*spec), GFP_KERNEL); - if (!spec) { - kvfree(in); - return -ENOMEM; - } - /* Hold one entry */ MLX5_SET(create_flow_group_in, in, start_flow_index, MLX5_ESW_INDIR_TABLE_FWD_IDX); MLX5_SET(create_flow_group_in, in, end_flow_index, MLX5_ESW_INDIR_TABLE_FWD_IDX); @@ -366,25 +228,25 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw, } flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; + flow_act.fg = e->fwd_grp; dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT; dest.vport.num = e->vport; dest.vport.vhca_id = MLX5_CAP_GEN(esw->dev, vhca_id); dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; - e->fwd_rule = mlx5_add_flow_rules(e->ft, spec, &flow_act, &dest, 1); + e->fwd_rule = mlx5_add_flow_rules(e->ft, NULL, &flow_act, &dest, 1); if (IS_ERR(e->fwd_rule)) { mlx5_destroy_flow_group(e->fwd_grp); err = PTR_ERR(e->fwd_rule); } err_out: - kvfree(spec); kvfree(in); return err; } static struct mlx5_esw_indir_table_entry * mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, u16 vport, bool decap) + u16 vport, bool decap) { struct mlx5_flow_table_attr 
ft_attr = {}; struct mlx5_flow_namespace *root_ns; @@ -412,15 +274,14 @@ mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_att } e->ft = ft; e->vport = vport; - e->ip_version = attr->ip_version; e->fwd_ref = !decap; - err = mlx5_create_indir_recirc_group(esw, attr, spec, e); + err = mlx5_create_indir_recirc_group(e); if (err) goto recirc_grp_err; if (decap) { - err = mlx5_esw_indir_table_rule_get(esw, attr, spec, e); + err = mlx5_esw_indir_table_rule_get(esw, attr, e); if (err) goto recirc_rule_err; } @@ -430,13 +291,13 @@ mlx5_esw_indir_table_entry_create(struct mlx5_eswitch *esw, struct mlx5_flow_att goto fwd_grp_err; hash_add(esw->fdb_table.offloads.indir->table, &e->hlist, - vport << 16 | attr->ip_version); + vport << 16); return e; fwd_grp_err: if (decap) - mlx5_esw_indir_table_rule_put(esw, attr, e); + mlx5_esw_indir_table_rule_put(esw, e); recirc_rule_err: mlx5_destroy_flow_group(e->recirc_grp); recirc_grp_err: @@ -447,13 +308,13 @@ tbl_err: } static struct mlx5_esw_indir_table_entry * -mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport, u8 ip_version) +mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport) { struct mlx5_esw_indir_table_entry *e; - u32 key = vport << 16 | ip_version; + u32 key = vport << 16; hash_for_each_possible(esw->fdb_table.offloads.indir->table, e, hlist, key) - if (e->vport == vport && e->ip_version == ip_version) + if (e->vport == vport) return e; return NULL; @@ -461,24 +322,23 @@ mlx5_esw_indir_table_entry_lookup(struct mlx5_eswitch *esw, u16 vport, u8 ip_ver struct mlx5_flow_table *mlx5_esw_indir_table_get(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, u16 vport, bool decap) { struct mlx5_esw_indir_table_entry *e; int err; mutex_lock(&esw->fdb_table.offloads.indir->lock); - e = mlx5_esw_indir_table_entry_lookup(esw, vport, attr->ip_version); + e = mlx5_esw_indir_table_entry_lookup(esw, vport); if (e) { if (!decap) { e->fwd_ref++; } else { - err = mlx5_esw_indir_table_rule_get(esw, attr, spec, e); + err = mlx5_esw_indir_table_rule_get(esw, attr, e); if (err) goto out_err; } } else { - e = mlx5_esw_indir_table_entry_create(esw, attr, spec, vport, decap); + e = mlx5_esw_indir_table_entry_create(esw, attr, vport, decap); if (IS_ERR(e)) { err = PTR_ERR(e); esw_warn(esw->dev, "Failed to create indirection table, err %d.\n", err); @@ -494,22 +354,21 @@ out_err: } void mlx5_esw_indir_table_put(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, u16 vport, bool decap) { struct mlx5_esw_indir_table_entry *e; mutex_lock(&esw->fdb_table.offloads.indir->lock); - e = mlx5_esw_indir_table_entry_lookup(esw, vport, attr->ip_version); + e = mlx5_esw_indir_table_entry_lookup(esw, vport); if (!e) goto out; if (!decap) e->fwd_ref--; else - mlx5_esw_indir_table_rule_put(esw, attr, e); + mlx5_esw_indir_table_rule_put(esw, e); - if (e->fwd_ref || e->recirc_cnt) + if (e->fwd_ref || e->recirc_rule) goto out; hash_del(&e->hlist); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h index 21d56b49d14b..036f5b3a341b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.h @@ -13,10 +13,8 @@ mlx5_esw_indir_table_destroy(struct mlx5_esw_indir_table *indir); struct mlx5_flow_table *mlx5_esw_indir_table_get(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, u16 vport, bool decap); void 
mlx5_esw_indir_table_put(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, u16 vport, bool decap); bool @@ -44,7 +42,6 @@ mlx5_esw_indir_table_destroy(struct mlx5_esw_indir_table *indir) static inline struct mlx5_flow_table * mlx5_esw_indir_table_get(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, u16 vport, bool decap) { return ERR_PTR(-EOPNOTSUPP); @@ -52,7 +49,6 @@ mlx5_esw_indir_table_get(struct mlx5_eswitch *esw, static inline void mlx5_esw_indir_table_put(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, u16 vport, bool decap) { } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index 9daf55e90367..0f052513fefa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1190,9 +1190,9 @@ static void mlx5_eswitch_get_devlink_param(struct mlx5_eswitch *esw) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(devlink, - MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM, - &val); + err = devl_param_driverinit_value_get(devlink, + MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM, + &val); if (!err) { esw->params.large_group_num = val.vu32; } else { @@ -1250,7 +1250,7 @@ static int mlx5_esw_acls_ns_init(struct mlx5_eswitch *esw) if (err) return err; } else { - esw_warn(dev, "engress ACL is not supported by FW\n"); + esw_warn(dev, "egress ACL is not supported by FW\n"); } if (MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support)) { @@ -1406,9 +1406,7 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs); if (clear_vf) mlx5_eswitch_clear_vf_vports_info(esw); - /* If disabling sriov in switchdev mode, free meta rules here - * because it depends on num_vfs. 
- */ + if (esw->mode == MLX5_ESWITCH_OFFLOADS) { struct devlink *devlink = priv_to_devlink(esw->dev); @@ -1489,7 +1487,7 @@ int mlx5_esw_sf_max_hpf_functions(struct mlx5_core_dev *dev, u16 *max_sfs, u16 * void *hca_caps; int err; - if (!mlx5_core_is_ecpf(dev)) { + if (!mlx5_core_is_ecpf(dev) || mlx5_core_is_management_pf(dev)) { *max_sfs = 0; return 0; } @@ -1642,7 +1640,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev) if (err) goto abort; - err = esw_offloads_init_reps(esw); + err = esw_offloads_init(esw); if (err) goto reps_err; @@ -1708,7 +1706,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) mlx5e_mod_hdr_tbl_destroy(&esw->offloads.mod_hdr); mutex_destroy(&esw->offloads.encap_tbl_lock); mutex_destroy(&esw->offloads.decap_tbl_lock); - esw_offloads_cleanup_reps(esw); + esw_offloads_cleanup(esw); mlx5_esw_vports_cleanup(esw); kfree(esw); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 92644fbb5081..19e9a77c4633 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -52,12 +52,14 @@ enum mlx5_mapped_obj_type { MLX5_MAPPED_OBJ_CHAIN, MLX5_MAPPED_OBJ_SAMPLE, MLX5_MAPPED_OBJ_INT_PORT_METADATA, + MLX5_MAPPED_OBJ_ACT_MISS, }; struct mlx5_mapped_obj { enum mlx5_mapped_obj_type type; union { u32 chain; + u64 act_miss_cookie; struct { u32 group_id; u32 rate; @@ -222,7 +224,6 @@ struct mlx5_eswitch_fdb { struct mlx5_flow_handle **send_to_vport_meta_rules; struct mlx5_flow_handle *miss_rule_uni; struct mlx5_flow_handle *miss_rule_multi; - int vlan_push_pop_refcount; struct mlx5_fs_chains *esw_chains_priv; struct { @@ -346,8 +347,8 @@ struct mlx5_eswitch { void esw_offloads_disable(struct mlx5_eswitch *esw); int esw_offloads_enable(struct mlx5_eswitch *esw); -void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw); -int esw_offloads_init_reps(struct mlx5_eswitch *esw); +void esw_offloads_cleanup(struct mlx5_eswitch *esw); +int esw_offloads_init(struct mlx5_eswitch *esw); struct mlx5_flow_handle * mlx5_eswitch_add_send_to_vport_meta_rule(struct mlx5_eswitch *esw, u16 vport_num); @@ -520,10 +521,6 @@ int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable, struct netlink_ext_ack *extack); void *mlx5_eswitch_get_uplink_priv(struct mlx5_eswitch *esw, u8 rep_type); -int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr); -int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr); int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan, u8 qos, u8 set_flags); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index c981fa77f439..2a98375a0abf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -179,15 +179,14 @@ mlx5_eswitch_set_rule_source_port(struct mlx5_eswitch *esw, static int esw_setup_decap_indir(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec) + struct mlx5_flow_attr *attr) { struct mlx5_flow_table *ft; if (!(attr->flags & MLX5_ATTR_FLAG_SRC_REWRITE)) return -EOPNOTSUPP; - ft = mlx5_esw_indir_table_get(esw, attr, spec, + ft = mlx5_esw_indir_table_get(esw, attr, mlx5_esw_indir_table_decap_vport(attr), true); return PTR_ERR_OR_ZERO(ft); } @@ -197,7 +196,7 @@ esw_cleanup_decap_indir(struct mlx5_eswitch *esw, struct mlx5_flow_attr 
*attr) { if (mlx5_esw_indir_table_decap_vport(attr)) - mlx5_esw_indir_table_put(esw, attr, + mlx5_esw_indir_table_put(esw, mlx5_esw_indir_table_decap_vport(attr), true); } @@ -235,7 +234,6 @@ esw_setup_ft_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *flow_act, struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, int i) { flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; @@ -243,7 +241,7 @@ esw_setup_ft_dest(struct mlx5_flow_destination *dest, dest[i].ft = attr->dest_ft; if (mlx5_esw_indir_table_decap_vport(attr)) - return esw_setup_decap_indir(esw, attr, spec); + return esw_setup_decap_indir(esw, attr); return 0; } @@ -298,7 +296,7 @@ static void esw_put_dest_tables_loop(struct mlx5_eswitch *esw, struct mlx5_flow_ mlx5_chains_put_table(chains, 0, 1, 0); else if (mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport, esw_attr->dests[i].mdev)) - mlx5_esw_indir_table_put(esw, attr, esw_attr->dests[i].rep->vport, + mlx5_esw_indir_table_put(esw, esw_attr->dests[i].rep->vport, false); } @@ -384,7 +382,6 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest, struct mlx5_flow_act *flow_act, struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr, - struct mlx5_flow_spec *spec, bool ignore_flow_lvl, int *i) { @@ -399,7 +396,7 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest, flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; dest[*i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - dest[*i].ft = mlx5_esw_indir_table_get(esw, attr, spec, + dest[*i].ft = mlx5_esw_indir_table_get(esw, attr, esw_attr->dests[j].rep->vport, false); if (IS_ERR(dest[*i].ft)) { err = PTR_ERR(dest[*i].ft); @@ -408,7 +405,7 @@ esw_setup_indir_table(struct mlx5_flow_destination *dest, } if (mlx5_esw_indir_table_decap_vport(attr)) { - err = esw_setup_decap_indir(esw, attr, spec); + err = esw_setup_decap_indir(esw, attr); if (err) goto err_indir_tbl_get; } @@ -446,7 +443,7 @@ esw_setup_vport_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *f MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id); dest[dest_idx].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID; if (dest[dest_idx].vport.num == MLX5_VPORT_UPLINK && - mlx5_lag_mpesw_is_activated(esw->dev)) + mlx5_lag_is_mpesw(esw->dev)) dest[dest_idx].type = MLX5_FLOW_DESTINATION_TYPE_UPLINK; } if (esw_attr->dests[attr_idx].flags & MLX5_ESW_DEST_ENCAP_VALID) { @@ -511,14 +508,14 @@ esw_setup_dests(struct mlx5_flow_destination *dest, err = esw_setup_mtu_dest(dest, &attr->meter_attr, *i); (*i)++; } else if (esw_is_indir_table(esw, attr)) { - err = esw_setup_indir_table(dest, flow_act, esw, attr, spec, true, i); + err = esw_setup_indir_table(dest, flow_act, esw, attr, true, i); } else if (esw_is_chain_src_port_rewrite(esw, esw_attr)) { err = esw_setup_chain_src_port_rewrite(dest, flow_act, esw, chains, attr, i); } else { *i = esw_setup_vport_dests(dest, flow_act, esw, esw_attr, *i); if (attr->dest_ft) { - err = esw_setup_ft_dest(dest, flow_act, esw, attr, spec, *i); + err = esw_setup_ft_dest(dest, flow_act, esw, attr, *i); (*i)++; } else if (attr->dest_chain) { err = esw_setup_chain_dest(dest, flow_act, chains, attr->dest_chain, @@ -582,16 +579,16 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw, if (esw->mode != MLX5_ESWITCH_OFFLOADS) return ERR_PTR(-EOPNOTSUPP); + if (!mlx5_eswitch_vlan_actions_supported(esw->dev, 1)) + return ERR_PTR(-EOPNOTSUPP); + dest = kcalloc(MLX5_MAX_FLOW_FWD_VPORTS + 1, sizeof(*dest), GFP_KERNEL); if (!dest) return ERR_PTR(-ENOMEM); flow_act.action = attr->action; - /* 
if per flow vlan pop/push is emulated, don't set that into the firmware */ - if (!mlx5_eswitch_vlan_actions_supported(esw->dev, 1)) - flow_act.action &= ~(MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH | - MLX5_FLOW_CONTEXT_ACTION_VLAN_POP); - else if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) { + + if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) { flow_act.vlan[0].ethtype = ntohs(esw_attr->vlan_proto[0]); flow_act.vlan[0].vid = esw_attr->vlan_vid[0]; flow_act.vlan[0].prio = esw_attr->vlan_prio[0]; @@ -727,7 +724,7 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw, flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; for (i = 0; i < esw_attr->split_count; i++) { if (esw_is_indir_table(esw, attr)) - err = esw_setup_indir_table(dest, &flow_act, esw, attr, spec, false, &i); + err = esw_setup_indir_table(dest, &flow_act, esw, attr, false, &i); else if (esw_is_chain_src_port_rewrite(esw, esw_attr)) err = esw_setup_chain_src_port_rewrite(dest, &flow_act, esw, chains, attr, &i); @@ -832,204 +829,6 @@ mlx5_eswitch_del_fwd_rule(struct mlx5_eswitch *esw, __mlx5_eswitch_del_rule(esw, rule, attr, true); } -static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val) -{ - struct mlx5_eswitch_rep *rep; - unsigned long i; - int err = 0; - - esw_debug(esw->dev, "%s applying global %s policy\n", __func__, val ? "pop" : "none"); - mlx5_esw_for_each_host_func_vport(esw, i, rep, esw->esw_funcs.num_vfs) { - if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED) - continue; - - err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0, val); - if (err) - goto out; - } - -out: - return err; -} - -static struct mlx5_eswitch_rep * -esw_vlan_action_get_vport(struct mlx5_esw_flow_attr *attr, bool push, bool pop) -{ - struct mlx5_eswitch_rep *in_rep, *out_rep, *vport = NULL; - - in_rep = attr->in_rep; - out_rep = attr->dests[0].rep; - - if (push) - vport = in_rep; - else if (pop) - vport = out_rep; - else - vport = in_rep; - - return vport; -} - -static int esw_add_vlan_action_check(struct mlx5_esw_flow_attr *attr, - bool push, bool pop, bool fwd) -{ - struct mlx5_eswitch_rep *in_rep, *out_rep; - - if ((push || pop) && !fwd) - goto out_notsupp; - - in_rep = attr->in_rep; - out_rep = attr->dests[0].rep; - - if (push && in_rep->vport == MLX5_VPORT_UPLINK) - goto out_notsupp; - - if (pop && out_rep->vport == MLX5_VPORT_UPLINK) - goto out_notsupp; - - /* vport has vlan push configured, can't offload VF --> wire rules w.o it */ - if (!push && !pop && fwd) - if (in_rep->vlan && out_rep->vport == MLX5_VPORT_UPLINK) - goto out_notsupp; - - /* protects against (1) setting rules with different vlans to push and - * (2) setting rules w.o vlans (attr->vlan = 0) && w. 
vlans to push (!= 0) - */ - if (push && in_rep->vlan_refcount && (in_rep->vlan != attr->vlan_vid[0])) - goto out_notsupp; - - return 0; - -out_notsupp: - return -EOPNOTSUPP; -} - -int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr) -{ - struct offloads_fdb *offloads = &esw->fdb_table.offloads; - struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; - struct mlx5_eswitch_rep *vport = NULL; - bool push, pop, fwd; - int err = 0; - - /* nop if we're on the vlan push/pop non emulation mode */ - if (mlx5_eswitch_vlan_actions_supported(esw->dev, 1)) - return 0; - - push = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH); - pop = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP); - fwd = !!((attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) && - !attr->dest_chain); - - mutex_lock(&esw->state_lock); - - err = esw_add_vlan_action_check(esw_attr, push, pop, fwd); - if (err) - goto unlock; - - attr->flags &= ~MLX5_ATTR_FLAG_VLAN_HANDLED; - - vport = esw_vlan_action_get_vport(esw_attr, push, pop); - - if (!push && !pop && fwd) { - /* tracks VF --> wire rules without vlan push action */ - if (esw_attr->dests[0].rep->vport == MLX5_VPORT_UPLINK) { - vport->vlan_refcount++; - attr->flags |= MLX5_ATTR_FLAG_VLAN_HANDLED; - } - - goto unlock; - } - - if (!push && !pop) - goto unlock; - - if (!(offloads->vlan_push_pop_refcount)) { - /* it's the 1st vlan rule, apply global vlan pop policy */ - err = esw_set_global_vlan_pop(esw, SET_VLAN_STRIP); - if (err) - goto out; - } - offloads->vlan_push_pop_refcount++; - - if (push) { - if (vport->vlan_refcount) - goto skip_set_push; - - err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport, esw_attr->vlan_vid[0], - 0, SET_VLAN_INSERT | SET_VLAN_STRIP); - if (err) - goto out; - vport->vlan = esw_attr->vlan_vid[0]; -skip_set_push: - vport->vlan_refcount++; - } -out: - if (!err) - attr->flags |= MLX5_ATTR_FLAG_VLAN_HANDLED; -unlock: - mutex_unlock(&esw->state_lock); - return err; -} - -int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw, - struct mlx5_flow_attr *attr) -{ - struct offloads_fdb *offloads = &esw->fdb_table.offloads; - struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; - struct mlx5_eswitch_rep *vport = NULL; - bool push, pop, fwd; - int err = 0; - - /* nop if we're on the vlan push/pop non emulation mode */ - if (mlx5_eswitch_vlan_actions_supported(esw->dev, 1)) - return 0; - - if (!(attr->flags & MLX5_ATTR_FLAG_VLAN_HANDLED)) - return 0; - - push = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH); - pop = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP); - fwd = !!(attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); - - mutex_lock(&esw->state_lock); - - vport = esw_vlan_action_get_vport(esw_attr, push, pop); - - if (!push && !pop && fwd) { - /* tracks VF --> wire rules without vlan push action */ - if (esw_attr->dests[0].rep->vport == MLX5_VPORT_UPLINK) - vport->vlan_refcount--; - - goto out; - } - - if (push) { - vport->vlan_refcount--; - if (vport->vlan_refcount) - goto skip_unset_push; - - vport->vlan = 0; - err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport, - 0, 0, SET_VLAN_STRIP); - if (err) - goto out; - } - -skip_unset_push: - offloads->vlan_push_pop_refcount--; - if (offloads->vlan_push_pop_refcount) - goto out; - - /* no more vlan rules, stop global vlan pop policy */ - err = esw_set_global_vlan_pop(esw, 0); - -out: - mutex_unlock(&esw->state_lock); - return err; -} - struct mlx5_flow_handle * mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw, struct mlx5_eswitch 
*from_esw,
@@ -2406,7 +2205,7 @@ static void mlx5_esw_offloads_rep_cleanup(struct mlx5_eswitch *esw,
 	kfree(rep);
 }
 
-void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
+static void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
 {
 	struct mlx5_eswitch_rep *rep;
 	unsigned long i;
@@ -2416,7 +2215,7 @@ void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
 	xa_destroy(&esw->offloads.vport_reps);
 }
 
-int esw_offloads_init_reps(struct mlx5_eswitch *esw)
+static int esw_offloads_init_reps(struct mlx5_eswitch *esw)
 {
 	struct mlx5_vport *vport;
 	unsigned long i;
@@ -2436,6 +2235,94 @@ err:
 	return err;
 }
 
+static int esw_port_metadata_set(struct devlink *devlink, u32 id,
+				 struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	struct mlx5_eswitch *esw = dev->priv.eswitch;
+	int err = 0;
+
+	down_write(&esw->mode_lock);
+	if (mlx5_esw_is_fdb_created(esw)) {
+		err = -EBUSY;
+		goto done;
+	}
+	if (!mlx5_esw_vport_match_metadata_supported(esw)) {
+		err = -EOPNOTSUPP;
+		goto done;
+	}
+	if (ctx->val.vbool)
+		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
+	else
+		esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
+done:
+	up_write(&esw->mode_lock);
+	return err;
+}
+
+static int esw_port_metadata_get(struct devlink *devlink, u32 id,
+				 struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+	ctx->val.vbool = mlx5_eswitch_vport_match_metadata_enabled(dev->priv.eswitch);
+	return 0;
+}
+
+static int esw_port_metadata_validate(struct devlink *devlink, u32 id,
+				      union devlink_param_value val,
+				      struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	u8 esw_mode;
+
+	esw_mode = mlx5_eswitch_mode(dev);
+	if (esw_mode == MLX5_ESWITCH_OFFLOADS) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "E-Switch must be either disabled or in non-switchdev mode");
+		return -EBUSY;
+	}
+	return 0;
+}
+
+static const struct devlink_param esw_devlink_params[] = {
+	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA,
+			     "esw_port_metadata", DEVLINK_PARAM_TYPE_BOOL,
+			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+			     esw_port_metadata_get,
+			     esw_port_metadata_set,
+			     esw_port_metadata_validate),
+};
+
+int esw_offloads_init(struct mlx5_eswitch *esw)
+{
+	int err;
+
+	err = esw_offloads_init_reps(esw);
+	if (err)
+		return err;
+
+	err = devl_params_register(priv_to_devlink(esw->dev),
+				   esw_devlink_params,
+				   ARRAY_SIZE(esw_devlink_params));
+	if (err)
+		goto err_params;
+
+	return 0;
+
+err_params:
+	esw_offloads_cleanup_reps(esw);
+	return err;
+}
+
+void esw_offloads_cleanup(struct mlx5_eswitch *esw)
+{
+	devl_params_unregister(priv_to_devlink(esw->dev),
+			       esw_devlink_params,
+			       ARRAY_SIZE(esw_devlink_params));
+	esw_offloads_cleanup_reps(esw);
+}
+
 static void __esw_offloads_unload_rep(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep,
 				      u8 rep_type)
 {
@@ -3575,9 +3462,9 @@ int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
 	if (IS_ERR(esw))
 		return PTR_ERR(esw);
 
-	down_write(&esw->mode_lock);
+	down_read(&esw->mode_lock);
 	err = esw_mode_to_devlink(esw->mode, mode);
-	up_write(&esw->mode_lock);
+	up_read(&esw->mode_lock);
 	return err;
 }
 
@@ -3675,9 +3562,9 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
 	if (IS_ERR(esw))
 		return PTR_ERR(esw);
 
-	down_write(&esw->mode_lock);
+	down_read(&esw->mode_lock);
 	err = esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
-	up_write(&esw->mode_lock);
+	up_read(&esw->mode_lock);
 	return err;
 }
 
@@ -3749,9 +3636,9 @@ int
mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink, if (IS_ERR(esw)) return PTR_ERR(esw); - down_write(&esw->mode_lock); + down_read(&esw->mode_lock); *encap = esw->offloads.encap; - up_write(&esw->mode_lock); + up_read(&esw->mode_lock); return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c index 9459e56ee90a..718cf09c28ce 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/events.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c @@ -424,6 +424,7 @@ int mlx5_blocking_notifier_register(struct mlx5_core_dev *dev, struct notifier_b return blocking_notifier_chain_register(&events->sw_nh, nb); } +EXPORT_SYMBOL(mlx5_blocking_notifier_register); int mlx5_blocking_notifier_unregister(struct mlx5_core_dev *dev, struct notifier_block *nb) { @@ -431,6 +432,7 @@ int mlx5_blocking_notifier_unregister(struct mlx5_core_dev *dev, struct notifier return blocking_notifier_chain_unregister(&events->sw_nh, nb); } +EXPORT_SYMBOL(mlx5_blocking_notifier_unregister); int mlx5_blocking_notifier_call_chain(struct mlx5_core_dev *dev, unsigned int event, void *data) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c index 32d4c967469c..144e59480686 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c @@ -272,8 +272,6 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns, unsigned int size; int err; - if (ft_attr->max_fte != POOL_NEXT_SIZE) - size = roundup_pow_of_two(ft_attr->max_fte); size = mlx5_ft_pool_get_avail_sz(dev, ft->type, ft_attr->max_fte); if (!size) return -ENOSPC; @@ -412,11 +410,6 @@ static int mlx5_cmd_create_flow_group(struct mlx5_flow_root_namespace *ns, MLX5_CMD_OP_CREATE_FLOW_GROUP); MLX5_SET(create_flow_group_in, in, table_type, ft->type); MLX5_SET(create_flow_group_in, in, table_id, ft->id); - if (ft->vport) { - MLX5_SET(create_flow_group_in, in, vport_number, ft->vport); - MLX5_SET(create_flow_group_in, in, other_vport, 1); - } - MLX5_SET(create_flow_group_in, in, vport_number, ft->vport); MLX5_SET(create_flow_group_in, in, other_vport, !!(ft->flags & MLX5_FLOW_TABLE_OTHER_VPORT)); @@ -653,6 +646,12 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, id = dst->dest_attr.sampler_id; ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_FLOW_SAMPLER; break; + case MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE: + MLX5_SET(dest_format_struct, in_dests, + destination_table_type, dst->dest_attr.ft->type); + id = dst->dest_attr.ft->id; + ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_TABLE_TYPE; + break; default: id = dst->dest_attr.tir_num; ifc_type = MLX5_IFC_FLOW_DESTINATION_TYPE_TIR; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 5a85d8c1e797..731acbe22dc7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -34,12 +34,14 @@ #include <linux/mlx5/driver.h> #include <linux/mlx5/vport.h> #include <linux/mlx5/eswitch.h> +#include <net/devlink.h> #include "mlx5_core.h" #include "fs_core.h" #include "fs_cmd.h" #include "fs_ft_pool.h" #include "diag/fs_tracepoint.h" +#include "devlink.h" #define INIT_TREE_NODE_ARRAY_SIZE(...) 
(sizeof((struct init_tree_node[]){__VA_ARGS__}) /\ sizeof(struct init_tree_node)) @@ -111,8 +113,10 @@ #define ETHTOOL_PRIO_NUM_LEVELS 1 #define ETHTOOL_NUM_PRIOS 11 #define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS) -/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy */ -#define KERNEL_NIC_PRIO_NUM_LEVELS 8 +/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy, + * IPsec RoCE policy + */ +#define KERNEL_NIC_PRIO_NUM_LEVELS 9 #define KERNEL_NIC_NUM_PRIOS 1 /* One more level for tc */ #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1) @@ -219,19 +223,30 @@ static struct init_tree_node egress_root_fs = { }; enum { + RDMA_RX_IPSEC_PRIO, RDMA_RX_COUNTERS_PRIO, RDMA_RX_BYPASS_PRIO, RDMA_RX_KERNEL_PRIO, }; +#define RDMA_RX_IPSEC_NUM_PRIOS 1 +#define RDMA_RX_IPSEC_NUM_LEVELS 2 +#define RDMA_RX_IPSEC_MIN_LEVEL (RDMA_RX_IPSEC_NUM_LEVELS) + #define RDMA_RX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_REGULAR_PRIOS #define RDMA_RX_KERNEL_MIN_LEVEL (RDMA_RX_BYPASS_MIN_LEVEL + 1) #define RDMA_RX_COUNTERS_MIN_LEVEL (RDMA_RX_KERNEL_MIN_LEVEL + 2) static struct init_tree_node rdma_rx_root_fs = { .type = FS_TYPE_NAMESPACE, - .ar_size = 3, + .ar_size = 4, .children = (struct init_tree_node[]) { + [RDMA_RX_IPSEC_PRIO] = + ADD_PRIO(0, RDMA_RX_IPSEC_MIN_LEVEL, 0, + FS_CHAINING_CAPS, + ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF, + ADD_MULTIPLE_PRIO(RDMA_RX_IPSEC_NUM_PRIOS, + RDMA_RX_IPSEC_NUM_LEVELS))), [RDMA_RX_COUNTERS_PRIO] = ADD_PRIO(0, RDMA_RX_COUNTERS_MIN_LEVEL, 0, FS_CHAINING_CAPS, @@ -254,15 +269,20 @@ static struct init_tree_node rdma_rx_root_fs = { enum { RDMA_TX_COUNTERS_PRIO, + RDMA_TX_IPSEC_PRIO, RDMA_TX_BYPASS_PRIO, }; #define RDMA_TX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_PRIOS #define RDMA_TX_COUNTERS_MIN_LEVEL (RDMA_TX_BYPASS_MIN_LEVEL + 1) +#define RDMA_TX_IPSEC_NUM_PRIOS 1 +#define RDMA_TX_IPSEC_PRIO_NUM_LEVELS 1 +#define RDMA_TX_IPSEC_MIN_LEVEL (RDMA_TX_COUNTERS_MIN_LEVEL + RDMA_TX_IPSEC_NUM_PRIOS) + static struct init_tree_node rdma_tx_root_fs = { .type = FS_TYPE_NAMESPACE, - .ar_size = 2, + .ar_size = 3, .children = (struct init_tree_node[]) { [RDMA_TX_COUNTERS_PRIO] = ADD_PRIO(0, RDMA_TX_COUNTERS_MIN_LEVEL, 0, @@ -270,6 +290,13 @@ static struct init_tree_node rdma_tx_root_fs = { ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF, ADD_MULTIPLE_PRIO(MLX5_RDMA_TX_NUM_COUNTERS_PRIOS, RDMA_TX_COUNTERS_PRIO_NUM_LEVELS))), + [RDMA_TX_IPSEC_PRIO] = + ADD_PRIO(0, RDMA_TX_IPSEC_MIN_LEVEL, 0, + FS_CHAINING_CAPS, + ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF, + ADD_MULTIPLE_PRIO(RDMA_TX_IPSEC_NUM_PRIOS, + RDMA_TX_IPSEC_PRIO_NUM_LEVELS))), + [RDMA_TX_BYPASS_PRIO] = ADD_PRIO(0, RDMA_TX_BYPASS_MIN_LEVEL, 0, FS_CHAINING_CAPS_RDMA_TX, @@ -449,7 +476,8 @@ static bool is_fwd_dest_type(enum mlx5_flow_destination_type type) type == MLX5_FLOW_DESTINATION_TYPE_VPORT || type == MLX5_FLOW_DESTINATION_TYPE_FLOW_SAMPLER || type == MLX5_FLOW_DESTINATION_TYPE_TIR || - type == MLX5_FLOW_DESTINATION_TYPE_RANGE; + type == MLX5_FLOW_DESTINATION_TYPE_RANGE || + type == MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE; } static bool check_valid_spec(const struct mlx5_flow_spec *spec) @@ -1774,7 +1802,6 @@ static int build_match_list(struct match_list *match_head, { struct rhlist_head *tmp, *list; struct mlx5_flow_group *g; - int err = 0; rcu_read_lock(); INIT_LIST_HEAD(&match_head->list); @@ -1800,7 +1827,7 @@ static int build_match_list(struct match_list *match_head, list_add_tail(&curr_match->list, &match_head->list); } rcu_read_unlock(); - return err; + return 0; } 
 static u64 matched_fgs_get_version(struct list_head *match_head)
@@ -2367,6 +2394,14 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
 		root_ns = steering->rdma_tx_root_ns;
 		prio = RDMA_TX_COUNTERS_PRIO;
 		break;
+	case MLX5_FLOW_NAMESPACE_RDMA_RX_IPSEC:
+		root_ns = steering->rdma_rx_root_ns;
+		prio = RDMA_RX_IPSEC_PRIO;
+		break;
+	case MLX5_FLOW_NAMESPACE_RDMA_TX_IPSEC:
+		root_ns = steering->rdma_tx_root_ns;
+		prio = RDMA_TX_IPSEC_PRIO;
+		break;
 	default: /* Must be NIC RX */
 		WARN_ON(!is_nic_rx_ns(type));
 		root_ns = steering->root_ns;
@@ -3143,6 +3178,78 @@ cleanup:
 	return err;
 }
 
+static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
+				 union devlink_param_value val,
+				 struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	char *value = val.vstr;
+	int err = 0;
+
+	if (!strcmp(value, "dmfs")) {
+		return 0;
+	} else if (!strcmp(value, "smfs")) {
+		u8 eswitch_mode;
+		bool smfs_cap;
+
+		eswitch_mode = mlx5_eswitch_mode(dev);
+		smfs_cap = mlx5_fs_dr_is_supported(dev);
+
+		if (!smfs_cap) {
+			err = -EOPNOTSUPP;
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Software managed steering is not supported by current device");
+		} else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Software managed steering is not supported when eswitch offloads are enabled.");
+			err = -EOPNOTSUPP;
+		}
+	} else {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
+		err = -EINVAL;
+	}
+
+	return err;
+}
+
+static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
+			    struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	enum mlx5_flow_steering_mode mode;
+
+	if (!strcmp(ctx->val.vstr, "smfs"))
+		mode = MLX5_FLOW_STEERING_MODE_SMFS;
+	else
+		mode = MLX5_FLOW_STEERING_MODE_DMFS;
+	dev->priv.steering->mode = mode;
+
+	return 0;
+}
+
+static int mlx5_fs_mode_get(struct devlink *devlink, u32 id,
+			    struct devlink_param_gset_ctx *ctx)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+	if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
+		strcpy(ctx->val.vstr, "smfs");
+	else
+		strcpy(ctx->val.vstr, "dmfs");
+	return 0;
+}
+
+static const struct devlink_param mlx5_fs_params[] = {
+	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
+			     "flow_steering_mode", DEVLINK_PARAM_TYPE_STRING,
+			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+			     mlx5_fs_mode_get, mlx5_fs_mode_set,
+			     mlx5_fs_mode_validate),
+};
+
 void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev)
 {
 	struct mlx5_flow_steering *steering = dev->priv.steering;
@@ -3155,12 +3262,20 @@ void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev)
 	cleanup_root_ns(steering->rdma_rx_root_ns);
 	cleanup_root_ns(steering->rdma_tx_root_ns);
 	cleanup_root_ns(steering->egress_root_ns);
+
+	devl_params_unregister(priv_to_devlink(dev), mlx5_fs_params,
+			       ARRAY_SIZE(mlx5_fs_params));
 }
 
 int mlx5_fs_core_init(struct mlx5_core_dev *dev)
 {
 	struct mlx5_flow_steering *steering = dev->priv.steering;
-	int err = 0;
+	int err;
+
+	err = devl_params_register(priv_to_devlink(dev), mlx5_fs_params,
+				   ARRAY_SIZE(mlx5_fs_params));
+	if (err)
+		return err;
 
 	if ((((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_ETH) &&
 	      (MLX5_CAP_GEN(dev, nic_flow_table))) ||
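The fs_core.c hunk above moves the flow_steering_mode knob into the flow-steering core: validate rejects unknown strings and refuses "smfs" when the device lacks software-managed steering or switchdev offloads are active, while set and get translate between the devlink string and the internal enum. The same contract, reduced to a standalone sketch (plain C with hypothetical names; not the devlink API):

	#include <stdio.h>
	#include <string.h>

	enum steering_mode { MODE_DMFS, MODE_SMFS };

	static enum steering_mode cur_mode = MODE_DMFS;
	static int smfs_supported = 1;	/* stands in for mlx5_fs_dr_is_supported() */
	static int switchdev_on;	/* stands in for the eswitch offloads mode check */

	/* validate: accept only known strings, and smfs only when it can work */
	static int mode_validate(const char *val)
	{
		if (!strcmp(val, "dmfs"))
			return 0;
		if (!strcmp(val, "smfs"))
			return (smfs_supported && !switchdev_on) ? 0 : -1;
		return -1;	/* bad parameter */
	}

	/* set runs only after validate succeeded, so it cannot fail */
	static void mode_set(const char *val)
	{
		cur_mode = strcmp(val, "smfs") ? MODE_DMFS : MODE_SMFS;
	}

	static const char *mode_get(void)
	{
		return cur_mode == MODE_SMFS ? "smfs" : "dmfs";
	}

	int main(void)
	{
		const char *req = "smfs";

		if (mode_validate(req) == 0)
			mode_set(req);
		printf("flow_steering_mode: %s\n", mode_get());
		return 0;
	}

Splitting validate from set mirrors devlink's own sequencing: by the time the set callback runs, validate has already accepted the value, which is why the real mlx5_fs_mode_set() returns 0 unconditionally.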
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index b406e0367af6..17fe30a4c06c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -504,6 +504,16 @@ void mlx5_fc_query_cached(struct mlx5_fc *counter,
 	counter->lastpackets = c.packets;
 }
 
+void mlx5_fc_query_cached_raw(struct mlx5_fc *counter,
+			      u64 *bytes, u64 *packets, u64 *lastuse)
+{
+	struct mlx5_fc_cache c = counter->cache;
+
+	*bytes = c.bytes;
+	*packets = c.packets;
+	*lastuse = c.lastuse;
+}
+
 void mlx5_fc_queue_stats_work(struct mlx5_core_dev *dev,
 			      struct delayed_work *dwork,
 			      unsigned long delay)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
index f34e758a2f1f..7bb7be01225a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -267,6 +267,12 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
 			return err;
 	}
 
+	if (MLX5_CAP_GEN(dev, crypto)) {
+		err = mlx5_core_get_caps(dev, MLX5_CAP_CRYPTO);
+		if (err)
+			return err;
+	}
+
 	if (MLX5_CAP_GEN(dev, shampo)) {
 		err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_SHAMPO);
 		if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
index 1e46f9afa40e..4c2dad9d7cfb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /* Copyright (c) 2020, Mellanox Technologies inc. All rights reserved. */
 
+#include <devlink.h>
+
 #include "fw_reset.h"
 #include "diag/fw_tracer.h"
 #include "lib/tout.h"
@@ -28,21 +30,32 @@ struct mlx5_fw_reset {
 	int ret;
 };
 
-void mlx5_fw_reset_enable_remote_dev_reset_set(struct mlx5_core_dev *dev, bool enable)
+static int mlx5_fw_reset_enable_remote_dev_reset_set(struct devlink *devlink, u32 id,
+						     struct devlink_param_gset_ctx *ctx)
 {
-	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	struct mlx5_fw_reset *fw_reset;
 
-	if (enable)
+	fw_reset = dev->priv.fw_reset;
+
+	if (ctx->val.vbool)
 		clear_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
 	else
 		set_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
+	return 0;
 }
 
-bool mlx5_fw_reset_enable_remote_dev_reset_get(struct mlx5_core_dev *dev)
+static int mlx5_fw_reset_enable_remote_dev_reset_get(struct devlink *devlink, u32 id,
+						     struct devlink_param_gset_ctx *ctx)
 {
-	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+	struct mlx5_fw_reset *fw_reset;
+
+	fw_reset = dev->priv.fw_reset;
 
-	return !test_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, &fw_reset->reset_flags);
+	ctx->val.vbool = !test_bit(MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
+				   &fw_reset->reset_flags);
+	return 0;
 }
 
 static int mlx5_reg_mfrl_set(struct mlx5_core_dev *dev, u8 reset_level,
@@ -150,11 +163,11 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
 	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
 		complete(&fw_reset->done);
 	} else {
-		mlx5_unload_one(dev);
+		mlx5_unload_one(dev, false);
 		if (mlx5_health_wait_pci_up(dev))
 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
 		else
-			mlx5_load_one(dev, false);
+			mlx5_load_one(dev);
 		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
@@ -358,6 +371,7 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
 		mlx5_core_err(dev, "PCI link not ready (0x%04x) after %llu
ms\n", reg16, mlx5_tout_ms(dev, PCI_TOGGLE)); err = -ETIMEDOUT; + goto restore; } do { @@ -484,7 +498,7 @@ int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev) } err = fw_reset->ret; if (test_and_clear_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags)) { - mlx5_unload_one_devl_locked(dev); + mlx5_unload_one_devl_locked(dev, false); mlx5_load_one_devl_locked(dev, false); } out: @@ -517,9 +531,16 @@ void mlx5_drain_fw_reset(struct mlx5_core_dev *dev) cancel_work_sync(&fw_reset->reset_abort_work); } +static const struct devlink_param mlx5_fw_reset_devlink_params[] = { + DEVLINK_PARAM_GENERIC(ENABLE_REMOTE_DEV_RESET, BIT(DEVLINK_PARAM_CMODE_RUNTIME), + mlx5_fw_reset_enable_remote_dev_reset_get, + mlx5_fw_reset_enable_remote_dev_reset_set, NULL), +}; + int mlx5_fw_reset_init(struct mlx5_core_dev *dev) { struct mlx5_fw_reset *fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL); + int err; if (!fw_reset) return -ENOMEM; @@ -532,6 +553,15 @@ int mlx5_fw_reset_init(struct mlx5_core_dev *dev) fw_reset->dev = dev; dev->priv.fw_reset = fw_reset; + err = devl_params_register(priv_to_devlink(dev), + mlx5_fw_reset_devlink_params, + ARRAY_SIZE(mlx5_fw_reset_devlink_params)); + if (err) { + destroy_workqueue(fw_reset->wq); + kfree(fw_reset); + return err; + } + INIT_WORK(&fw_reset->fw_live_patch_work, mlx5_fw_live_patch_event); INIT_WORK(&fw_reset->reset_request_work, mlx5_sync_reset_request_event); INIT_WORK(&fw_reset->reset_reload_work, mlx5_sync_reset_reload_work); @@ -546,6 +576,9 @@ void mlx5_fw_reset_cleanup(struct mlx5_core_dev *dev) { struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; + devl_params_unregister(priv_to_devlink(dev), + mlx5_fw_reset_devlink_params, + ARRAY_SIZE(mlx5_fw_reset_devlink_params)); destroy_workqueue(fw_reset->wq); kfree(dev->priv.fw_reset); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h index dc141c7e641a..c57465595f7c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h @@ -6,8 +6,6 @@ #include "mlx5_core.h" -void mlx5_fw_reset_enable_remote_dev_reset_set(struct mlx5_core_dev *dev, bool enable); -bool mlx5_fw_reset_enable_remote_dev_reset_get(struct mlx5_core_dev *dev); int mlx5_fw_reset_query(struct mlx5_core_dev *dev, u8 *reset_level, u8 *reset_type); int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel, struct netlink_ext_ack *extack); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index 879555ba847d..f9438d4e43ca 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -62,7 +62,7 @@ enum { }; enum { - MLX5_DROP_NEW_HEALTH_WORK, + MLX5_DROP_HEALTH_WORK, }; enum { @@ -675,7 +675,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work) devlink = priv_to_devlink(dev); mutex_lock(&dev->intf_state_mutex); - if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) { + if (test_bit(MLX5_DROP_HEALTH_WORK, &health->flags)) { mlx5_core_err(dev, "health works are not permitted at this stage\n"); mutex_unlock(&dev->intf_state_mutex); return; @@ -699,7 +699,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work) * requests from the kernel. */ mlx5_core_err(dev, "Driver is in error state. 
Unloading\n"); - mlx5_unload_one(dev); + mlx5_unload_one(dev, false); } } @@ -771,14 +771,8 @@ static unsigned long get_next_poll_jiffies(struct mlx5_core_dev *dev) void mlx5_trigger_health_work(struct mlx5_core_dev *dev) { struct mlx5_core_health *health = &dev->priv.health; - unsigned long flags; - spin_lock_irqsave(&health->wq_lock, flags); - if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) - queue_work(health->wq, &health->fatal_report_work); - else - mlx5_core_err(dev, "new health works are not permitted at this stage\n"); - spin_unlock_irqrestore(&health->wq_lock, flags); + queue_work(health->wq, &health->fatal_report_work); } #define MLX5_MSEC_PER_HOUR (MSEC_PER_SEC * 60 * 60) @@ -858,7 +852,7 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev) timer_setup(&health->timer, poll_health, 0); health->fatal_error = MLX5_SENSOR_NO_ERR; - clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags); + clear_bit(MLX5_DROP_HEALTH_WORK, &health->flags); health->health = &dev->iseg->health; health->health_counter = &dev->iseg->health_counter; @@ -869,13 +863,9 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev) void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health) { struct mlx5_core_health *health = &dev->priv.health; - unsigned long flags; - if (disable_health) { - spin_lock_irqsave(&health->wq_lock, flags); - set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags); - spin_unlock_irqrestore(&health->wq_lock, flags); - } + if (disable_health) + set_bit(MLX5_DROP_HEALTH_WORK, &health->flags); del_timer_sync(&health->timer); } @@ -891,11 +881,8 @@ void mlx5_start_health_fw_log_up(struct mlx5_core_dev *dev) void mlx5_drain_health_wq(struct mlx5_core_dev *dev) { struct mlx5_core_health *health = &dev->priv.health; - unsigned long flags; - spin_lock_irqsave(&health->wq_lock, flags); - set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags); - spin_unlock_irqrestore(&health->wq_lock, flags); + set_bit(MLX5_DROP_HEALTH_WORK, &health->flags); cancel_delayed_work_sync(&health->update_fw_log_ts_work); cancel_work_sync(&health->report_work); cancel_work_sync(&health->fatal_report_work); @@ -928,7 +915,6 @@ int mlx5_health_init(struct mlx5_core_dev *dev) kfree(name); if (!health->wq) goto out_err; - spin_lock_init(&health->wq_lock); INIT_WORK(&health->fatal_report_work, mlx5_fw_fatal_reporter_err_work); INIT_WORK(&health->report_work, mlx5_fw_reporter_err_work); INIT_DELAYED_WORK(&health->update_fw_log_ts_work, mlx5_health_log_ts_update); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c index e09518f887a0..779d92b762d3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c @@ -172,6 +172,7 @@ enum mlx5_ptys_rate { MLX5_PTYS_RATE_EDR = 1 << 5, MLX5_PTYS_RATE_HDR = 1 << 6, MLX5_PTYS_RATE_NDR = 1 << 7, + MLX5_PTYS_RATE_XDR = 1 << 8, }; static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate) @@ -185,6 +186,7 @@ static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate) case MLX5_PTYS_RATE_EDR: return 25000; case MLX5_PTYS_RATE_HDR: return 50000; case MLX5_PTYS_RATE_NDR: return 100000; + case MLX5_PTYS_RATE_XDR: return 200000; default: return -1; } } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c index 911cf4d23964..c2a4f86bc890 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c @@ -412,7 +412,8 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv) int err; priv->fs = mlx5e_fs_init(priv->profile, mdev, - !test_bit(MLX5E_STATE_DESTROYING, &priv->state)); + !test_bit(MLX5E_STATE_DESTROYING, &priv->state), + priv->dfs_root); if (!priv->fs) { netdev_err(priv->netdev, "FS allocation failed\n"); return -ENOMEM; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c index b8feaf0f5c4c..f4b777d4e108 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c @@ -22,7 +22,7 @@ static int type_show(struct seq_file *file, void *priv) struct mlx5_lag *ldev; char *mode = NULL; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); if (__mlx5_lag_is_active(ldev)) mode = get_str_mode_type(ldev); @@ -41,7 +41,7 @@ static int port_sel_mode_show(struct seq_file *file, void *priv) int ret = 0; char *mode; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); if (__mlx5_lag_is_active(ldev)) mode = mlx5_get_str_port_sel_mode(ldev->mode, ldev->mode_flags); @@ -61,7 +61,7 @@ static int state_show(struct seq_file *file, void *priv) struct mlx5_lag *ldev; bool active; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); active = __mlx5_lag_is_active(ldev); mutex_unlock(&ldev->lock); @@ -77,7 +77,7 @@ static int flags_show(struct seq_file *file, void *priv) bool shared_fdb; bool lag_active; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); lag_active = __mlx5_lag_is_active(ldev); if (!lag_active) @@ -108,7 +108,7 @@ static int mapping_show(struct seq_file *file, void *priv) int num_ports; int i; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); lag_active = __mlx5_lag_is_active(ldev); if (lag_active) { @@ -142,7 +142,7 @@ static int members_show(struct seq_file *file, void *priv) struct mlx5_lag *ldev; int i; - ldev = dev->priv.lag; + ldev = mlx5_lag_dev(dev); mutex_lock(&ldev->lock); for (i = 0; i < ldev->ports; i++) { if (!ldev->pf[i].dev) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c index ad32b80e8501..5d331b940f4d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c @@ -230,7 +230,6 @@ static void mlx5_ldev_free(struct kref *ref) mlx5_lag_mp_cleanup(ldev); cancel_delayed_work_sync(&ldev->bond_work); destroy_workqueue(ldev->wq); - mlx5_lag_mpesw_cleanup(ldev); mutex_destroy(&ldev->lock); kfree(ldev); } @@ -276,7 +275,6 @@ static struct mlx5_lag *mlx5_lag_dev_alloc(struct mlx5_core_dev *dev) mlx5_core_err(dev, "Failed to init multipath lag err=%d\n", err); - mlx5_lag_mpesw_init(ldev); ldev->ports = MLX5_CAP_GEN(dev, num_lag_ports); ldev->buckets = 1; @@ -646,7 +644,7 @@ int mlx5_activate_lag(struct mlx5_lag *ldev, return 0; } -static int mlx5_deactivate_lag(struct mlx5_lag *ldev) +int mlx5_deactivate_lag(struct mlx5_lag *ldev) { struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev; struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev; @@ -688,7 +686,7 @@ static int mlx5_deactivate_lag(struct mlx5_lag *ldev) } #define MLX5_LAG_OFFLOADS_SUPPORTED_PORTS 2 -static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev) +bool mlx5_lag_check_prereq(struct mlx5_lag *ldev) { #ifdef CONFIG_MLX5_ESWITCH struct mlx5_core_dev *dev; @@ -723,7 +721,7 @@ static 
bool mlx5_lag_check_prereq(struct mlx5_lag *ldev) return true; } -static void mlx5_lag_add_devices(struct mlx5_lag *ldev) +void mlx5_lag_add_devices(struct mlx5_lag *ldev) { int i; @@ -740,7 +738,7 @@ static void mlx5_lag_add_devices(struct mlx5_lag *ldev) } } -static void mlx5_lag_remove_devices(struct mlx5_lag *ldev) +void mlx5_lag_remove_devices(struct mlx5_lag *ldev) { int i; @@ -1187,7 +1185,7 @@ static int __mlx5_lag_dev_add_mdev(struct mlx5_core_dev *dev) tmp_dev = mlx5_get_next_phys_dev_lag(dev); if (tmp_dev) - ldev = tmp_dev->priv.lag; + ldev = mlx5_lag_dev(tmp_dev); if (!ldev) { ldev = mlx5_lag_dev_alloc(dev); @@ -1386,8 +1384,7 @@ bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev) spin_lock_irqsave(&lag_lock, flags); ldev = mlx5_lag_dev(dev); - res = ldev && __mlx5_lag_is_sriov(ldev) && - test_bit(MLX5_LAG_MODE_FLAG_SHARED_FDB, &ldev->mode_flags); + res = ldev && test_bit(MLX5_LAG_MODE_FLAG_SHARED_FDB, &ldev->mode_flags); spin_unlock_irqrestore(&lag_lock, flags); return res; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h index f30ac2de639f..bc1f1dd3e283 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h @@ -50,19 +50,6 @@ struct lag_tracker { enum netdev_lag_hash hash_type; }; -enum mpesw_op { - MLX5_MPESW_OP_ENABLE, - MLX5_MPESW_OP_DISABLE, -}; - -struct mlx5_mpesw_work_st { - struct work_struct work; - struct mlx5_lag *lag; - enum mpesw_op op; - struct completion comp; - int result; -}; - /* LAG data of a ConnectX card. * It serves both its phys functions. */ @@ -115,6 +102,7 @@ mlx5_lag_is_ready(struct mlx5_lag *ldev) return test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags); } +bool mlx5_lag_check_prereq(struct mlx5_lag *ldev); void mlx5_modify_lag(struct mlx5_lag *ldev, struct lag_tracker *tracker); int mlx5_activate_lag(struct mlx5_lag *ldev, @@ -124,8 +112,6 @@ int mlx5_activate_lag(struct mlx5_lag *ldev, int mlx5_lag_dev_get_netdev_idx(struct mlx5_lag *ldev, struct net_device *ndev); bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev); -void mlx5_lag_del_mpesw_rule(struct mlx5_core_dev *dev); -int mlx5_lag_add_mpesw_rule(struct mlx5_core_dev *dev); char *mlx5_get_str_port_sel_mode(enum mlx5_lag_mode mode, unsigned long flags); void mlx5_infer_tx_enabled(struct lag_tracker *tracker, u8 num_ports, @@ -134,5 +120,8 @@ void mlx5_infer_tx_enabled(struct lag_tracker *tracker, u8 num_ports, void mlx5_ldev_add_debugfs(struct mlx5_core_dev *dev); void mlx5_ldev_remove_debugfs(struct dentry *dbg); void mlx5_disable_lag(struct mlx5_lag *ldev); +void mlx5_lag_remove_devices(struct mlx5_lag *ldev); +int mlx5_deactivate_lag(struct mlx5_lag *ldev); +void mlx5_lag_add_devices(struct mlx5_lag *ldev); #endif /* __MLX5_LAG_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c index d9fcb9ed726f..d85a8dfc153d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c @@ -28,13 +28,9 @@ static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev) bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev) { - struct mlx5_lag *ldev; - bool res; - - ldev = mlx5_lag_dev(dev); - res = ldev && __mlx5_lag_is_multipath(ldev); + struct mlx5_lag *ldev = mlx5_lag_dev(dev); - return res; + return ldev && __mlx5_lag_is_multipath(ldev); } /** diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c 
b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c index c17e8f1ec914..0c0ef600f643 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c @@ -5,39 +5,121 @@ #include <net/nexthop.h> #include "lag/lag.h" #include "eswitch.h" +#include "esw/acl/ofld.h" #include "lib/mlx5.h" -static int add_mpesw_rule(struct mlx5_lag *ldev) +static void mlx5_mpesw_metadata_cleanup(struct mlx5_lag *ldev) { - struct mlx5_core_dev *dev = ldev->pf[MLX5_LAG_P1].dev; - int err; + struct mlx5_core_dev *dev; + struct mlx5_eswitch *esw; + u32 pf_metadata; + int i; + + for (i = 0; i < ldev->ports; i++) { + dev = ldev->pf[i].dev; + esw = dev->priv.eswitch; + pf_metadata = ldev->lag_mpesw.pf_metadata[i]; + if (!pf_metadata) + continue; + mlx5_esw_acl_ingress_vport_metadata_update(esw, MLX5_VPORT_UPLINK, 0); + mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_MULTIPORT_ESW, + (void *)0); + mlx5_esw_match_metadata_free(esw, pf_metadata); + ldev->lag_mpesw.pf_metadata[i] = 0; + } +} - if (atomic_add_return(1, &ldev->lag_mpesw.mpesw_rule_count) != 1) - return 0; +static int mlx5_mpesw_metadata_set(struct mlx5_lag *ldev) +{ + struct mlx5_core_dev *dev; + struct mlx5_eswitch *esw; + u32 pf_metadata; + int i, err; + + for (i = 0; i < ldev->ports; i++) { + dev = ldev->pf[i].dev; + esw = dev->priv.eswitch; + pf_metadata = mlx5_esw_match_metadata_alloc(esw); + if (!pf_metadata) { + err = -ENOSPC; + goto err_metadata; + } + + ldev->lag_mpesw.pf_metadata[i] = pf_metadata; + err = mlx5_esw_acl_ingress_vport_metadata_update(esw, MLX5_VPORT_UPLINK, + pf_metadata); + if (err) + goto err_metadata; + } - if (ldev->mode != MLX5_LAG_MODE_NONE) { - err = -EINVAL; - goto out_err; + for (i = 0; i < ldev->ports; i++) { + dev = ldev->pf[i].dev; + mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_MULTIPORT_ESW, + (void *)0); } - err = mlx5_activate_lag(ldev, NULL, MLX5_LAG_MODE_MPESW, false); + return 0; + +err_metadata: + mlx5_mpesw_metadata_cleanup(ldev); + return err; +} + +static int enable_mpesw(struct mlx5_lag *ldev) +{ + struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev; + struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev; + int err; + + if (ldev->mode != MLX5_LAG_MODE_NONE) + return -EINVAL; + + if (mlx5_eswitch_mode(dev0) != MLX5_ESWITCH_OFFLOADS || + !MLX5_CAP_PORT_SELECTION(dev0, port_select_flow_table) || + !MLX5_CAP_GEN(dev0, create_lag_when_not_master_up) || + !mlx5_lag_check_prereq(ldev)) + return -EOPNOTSUPP; + + err = mlx5_mpesw_metadata_set(ldev); + if (err) + return err; + + mlx5_lag_remove_devices(ldev); + + err = mlx5_activate_lag(ldev, NULL, MLX5_LAG_MODE_MPESW, true); if (err) { - mlx5_core_warn(dev, "Failed to create LAG in MPESW mode (%d)\n", err); - goto out_err; + mlx5_core_warn(dev0, "Failed to create LAG in MPESW mode (%d)\n", err); + goto err_add_devices; } + dev0->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV; + mlx5_rescan_drivers_locked(dev0); + err = mlx5_eswitch_reload_reps(dev0->priv.eswitch); + if (!err) + err = mlx5_eswitch_reload_reps(dev1->priv.eswitch); + if (err) + goto err_rescan_drivers; + return 0; -out_err: - atomic_dec(&ldev->lag_mpesw.mpesw_rule_count); +err_rescan_drivers: + dev0->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_IB_ADEV; + mlx5_rescan_drivers_locked(dev0); + mlx5_deactivate_lag(ldev); +err_add_devices: + mlx5_lag_add_devices(ldev); + mlx5_eswitch_reload_reps(dev0->priv.eswitch); + mlx5_eswitch_reload_reps(dev1->priv.eswitch); + mlx5_mpesw_metadata_cleanup(ldev); return err; } -static void 
del_mpesw_rule(struct mlx5_lag *ldev) +static void disable_mpesw(struct mlx5_lag *ldev) { - if (!atomic_dec_return(&ldev->lag_mpesw.mpesw_rule_count) && - ldev->mode == MLX5_LAG_MODE_MPESW) + if (ldev->mode == MLX5_LAG_MODE_MPESW) { + mlx5_mpesw_metadata_cleanup(ldev); mlx5_disable_lag(ldev); + } } static void mlx5_mpesw_work(struct work_struct *work) @@ -45,20 +127,27 @@ static void mlx5_mpesw_work(struct work_struct *work) struct mlx5_mpesw_work_st *mpesww = container_of(work, struct mlx5_mpesw_work_st, work); struct mlx5_lag *ldev = mpesww->lag; + mlx5_dev_list_lock(); mutex_lock(&ldev->lock); + if (ldev->mode_changes_in_progress) { + mpesww->result = -EAGAIN; + goto unlock; + } + if (mpesww->op == MLX5_MPESW_OP_ENABLE) - mpesww->result = add_mpesw_rule(ldev); + mpesww->result = enable_mpesw(ldev); else if (mpesww->op == MLX5_MPESW_OP_DISABLE) - del_mpesw_rule(ldev); + disable_mpesw(ldev); +unlock: mutex_unlock(&ldev->lock); - + mlx5_dev_list_unlock(); complete(&mpesww->comp); } static int mlx5_lag_mpesw_queue_work(struct mlx5_core_dev *dev, enum mpesw_op op) { - struct mlx5_lag *ldev = dev->priv.lag; + struct mlx5_lag *ldev = mlx5_lag_dev(dev); struct mlx5_mpesw_work_st *work; int err = 0; @@ -86,43 +175,36 @@ out: return err; } -void mlx5_lag_del_mpesw_rule(struct mlx5_core_dev *dev) +void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev) { mlx5_lag_mpesw_queue_work(dev, MLX5_MPESW_OP_DISABLE); } -int mlx5_lag_add_mpesw_rule(struct mlx5_core_dev *dev) +int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev) { return mlx5_lag_mpesw_queue_work(dev, MLX5_MPESW_OP_ENABLE); } -int mlx5_lag_do_mirred(struct mlx5_core_dev *mdev, struct net_device *out_dev) +int mlx5_lag_mpesw_do_mirred(struct mlx5_core_dev *mdev, + struct net_device *out_dev, + struct netlink_ext_ack *extack) { - struct mlx5_lag *ldev = mdev->priv.lag; + struct mlx5_lag *ldev = mlx5_lag_dev(mdev); if (!netif_is_bond_master(out_dev) || !ldev) return 0; - if (ldev->mode == MLX5_LAG_MODE_MPESW) - return -EOPNOTSUPP; - - return 0; -} - -bool mlx5_lag_mpesw_is_activated(struct mlx5_core_dev *dev) -{ - bool ret; + if (ldev->mode != MLX5_LAG_MODE_MPESW) + return 0; - ret = dev->priv.lag && dev->priv.lag->mode == MLX5_LAG_MODE_MPESW; - return ret; + NL_SET_ERR_MSG_MOD(extack, "can't forward to bond in mpesw mode"); + return -EOPNOTSUPP; } -void mlx5_lag_mpesw_init(struct mlx5_lag *ldev) +bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev) { - atomic_set(&ldev->lag_mpesw.mpesw_rule_count, 0); -} + struct mlx5_lag *ldev = mlx5_lag_dev(dev); -void mlx5_lag_mpesw_cleanup(struct mlx5_lag *ldev) -{ - WARN_ON(atomic_read(&ldev->lag_mpesw.mpesw_rule_count)); + return ldev && ldev->mode == MLX5_LAG_MODE_MPESW; } +EXPORT_SYMBOL(mlx5_lag_is_mpesw); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h index 88e8daffcf92..02520f27a033 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h @@ -9,17 +9,27 @@ struct lag_mpesw { struct work_struct mpesw_work; - atomic_t mpesw_rule_count; + u32 pf_metadata[MLX5_MAX_PORTS]; }; -int mlx5_lag_do_mirred(struct mlx5_core_dev *mdev, struct net_device *out_dev); -bool mlx5_lag_mpesw_is_activated(struct mlx5_core_dev *dev); -#if IS_ENABLED(CONFIG_MLX5_ESWITCH) -void mlx5_lag_mpesw_init(struct mlx5_lag *ldev); -void mlx5_lag_mpesw_cleanup(struct mlx5_lag *ldev); -#else -static inline void mlx5_lag_mpesw_init(struct mlx5_lag *ldev) {} -static inline void mlx5_lag_mpesw_cleanup(struct 
mlx5_lag *ldev) {} -#endif +enum mpesw_op { + MLX5_MPESW_OP_ENABLE, + MLX5_MPESW_OP_DISABLE, +}; + +struct mlx5_mpesw_work_st { + struct work_struct work; + struct mlx5_lag *lag; + enum mpesw_op op; + struct completion comp; + int result; +}; + +int mlx5_lag_mpesw_do_mirred(struct mlx5_core_dev *mdev, + struct net_device *out_dev, + struct netlink_ext_ack *extack); +bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev); +void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev); +int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev); #endif /* __MLX5_LAG_MPESW_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c index 69318b143268..4c9a40211059 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c @@ -69,6 +69,13 @@ enum { MLX5_MTPPS_FS_OUT_PULSE_DURATION_NS = BIT(0xa), }; +enum { + MLX5_MTUTC_OPERATION_ADJUST_TIME_MIN = S16_MIN, + MLX5_MTUTC_OPERATION_ADJUST_TIME_MAX = S16_MAX, + MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MIN = -200000, + MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MAX = 200000, +}; + static bool mlx5_real_time_mode(struct mlx5_core_dev *mdev) { return (mlx5_is_real_time_rq(mdev) || mlx5_is_real_time_sq(mdev)); @@ -86,6 +93,22 @@ static bool mlx5_modify_mtutc_allowed(struct mlx5_core_dev *mdev) return MLX5_CAP_MCAM_FEATURE(mdev, ptpcyc2realtime_modify); } +static bool mlx5_is_mtutc_time_adj_cap(struct mlx5_core_dev *mdev, s64 delta) +{ + s64 min = MLX5_MTUTC_OPERATION_ADJUST_TIME_MIN; + s64 max = MLX5_MTUTC_OPERATION_ADJUST_TIME_MAX; + + if (MLX5_CAP_MCAM_FEATURE(mdev, mtutc_time_adjustment_extended_range)) { + min = MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MIN; + max = MLX5_MTUTC_OPERATION_ADJUST_TIME_EXTENDED_MAX; + } + + if (delta < min || delta > max) + return false; + + return true; +} + static int mlx5_set_mtutc(struct mlx5_core_dev *dev, u32 *mtutc, u32 size) { u32 out[MLX5_ST_SZ_DW(mtutc_reg)] = {}; @@ -288,8 +311,8 @@ static int mlx5_ptp_adjtime_real_time(struct mlx5_core_dev *mdev, s64 delta) if (!mlx5_modify_mtutc_allowed(mdev)) return 0; - /* HW time adjustment range is s16. If out of range, settime instead */ - if (delta < S16_MIN || delta > S16_MAX) { + /* HW time adjustment range is checked. 
If out of range, settime instead */ + if (!mlx5_is_mtutc_time_adj_cap(mdev, delta)) { struct timespec64 ts; s64 ns; @@ -326,7 +349,20 @@ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) return 0; } -static int mlx5_ptp_adjfreq_real_time(struct mlx5_core_dev *mdev, s32 freq) +static int mlx5_ptp_adjphase(struct ptp_clock_info *ptp, s32 delta) +{ + struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info); + struct mlx5_core_dev *mdev; + + mdev = container_of(clock, struct mlx5_core_dev, clock); + + if (!mlx5_is_mtutc_time_adj_cap(mdev, delta)) + return -ERANGE; + + return mlx5_ptp_adjtime(ptp, delta); +} + +static int mlx5_ptp_freq_adj_real_time(struct mlx5_core_dev *mdev, long scaled_ppm) { u32 in[MLX5_ST_SZ_DW(mtutc_reg)] = {}; @@ -334,7 +370,15 @@ static int mlx5_ptp_adjfreq_real_time(struct mlx5_core_dev *mdev, s32 freq) return 0; MLX5_SET(mtutc_reg, in, operation, MLX5_MTUTC_OPERATION_ADJUST_FREQ_UTC); - MLX5_SET(mtutc_reg, in, freq_adjustment, freq); + + if (MLX5_CAP_MCAM_FEATURE(mdev, mtutc_freq_adj_units)) { + MLX5_SET(mtutc_reg, in, freq_adj_units, + MLX5_MTUTC_FREQ_ADJ_UNITS_SCALED_PPM); + MLX5_SET(mtutc_reg, in, freq_adjustment, scaled_ppm); + } else { + MLX5_SET(mtutc_reg, in, freq_adj_units, MLX5_MTUTC_FREQ_ADJ_UNITS_PPB); + MLX5_SET(mtutc_reg, in, freq_adjustment, scaled_ppm_to_ppb(scaled_ppm)); + } return mlx5_set_mtutc(mdev, in, sizeof(in)); } @@ -349,7 +393,8 @@ static int mlx5_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) int err; mdev = container_of(clock, struct mlx5_core_dev, clock); - err = mlx5_ptp_adjfreq_real_time(mdev, scaled_ppm_to_ppb(scaled_ppm)); + + err = mlx5_ptp_freq_adj_real_time(mdev, scaled_ppm); if (err) return err; @@ -688,6 +733,7 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = { .n_pins = 0, .pps = 0, .adjfine = mlx5_ptp_adjfine, + .adjphase = mlx5_ptp_adjphase, .adjtime = mlx5_ptp_adjtime, .gettimex64 = mlx5_ptp_gettimex, .settime64 = mlx5_ptp_settime, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c index e995f8378df7..3a94b8f8031e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c @@ -2,53 +2,253 @@ // Copyright (c) 2019 Mellanox Technologies. #include "mlx5_core.h" -#include "lib/mlx5.h" +#include "lib/crypto.h" -int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, - void *key, u32 sz_bytes, - u32 key_type, u32 *p_key_id) -{ - u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {}; - u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; - u32 sz_bits = sz_bytes * BITS_PER_BYTE; - u8 general_obj_key_size; - u64 general_obj_types; - void *obj, *key_p; - int err; +#define MLX5_CRYPTO_DEK_POOLS_NUM (MLX5_ACCEL_OBJ_TYPE_KEY_NUM - 1) +#define type2idx(type) ((type) - 1) - obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object); - key_p = MLX5_ADDR_OF(encryption_key_obj, obj, key); +#define MLX5_CRYPTO_DEK_POOL_SYNC_THRESH 128 - general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types); - if (!(general_obj_types & - MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY)) - return -EINVAL; +/* calculate the num of DEKs, which are freed by any user + * (for example, TLS) after last revalidation in a pool or a bulk. 
+ */ +#define MLX5_CRYPTO_DEK_CALC_FREED(a) \ + ({ typeof(a) _a = (a); \ + _a->num_deks - _a->avail_deks - _a->in_use_deks; }) + +#define MLX5_CRYPTO_DEK_POOL_CALC_FREED(pool) MLX5_CRYPTO_DEK_CALC_FREED(pool) +#define MLX5_CRYPTO_DEK_BULK_CALC_FREED(bulk) MLX5_CRYPTO_DEK_CALC_FREED(bulk) + +#define MLX5_CRYPTO_DEK_BULK_IDLE(bulk) \ + ({ typeof(bulk) _bulk = (bulk); \ + _bulk->avail_deks == _bulk->num_deks; }) + +enum { + MLX5_CRYPTO_DEK_ALL_TYPE = BIT(0), +}; + +struct mlx5_crypto_dek_pool { + struct mlx5_core_dev *mdev; + u32 key_purpose; + int num_deks; /* the total number of keys in this pool */ + int avail_deks; /* the number of available keys in this pool */ + int in_use_deks; /* the number of being used keys in this pool */ + struct mutex lock; /* protect the following lists, and the bulks */ + struct list_head partial_list; /* some of keys are available */ + struct list_head full_list; /* no available keys */ + struct list_head avail_list; /* all keys are available to use */ + + /* No in-used keys, and all need to be synced. + * These bulks will be put to avail list after sync. + */ + struct list_head sync_list; + + bool syncing; + struct list_head wait_for_free; + struct work_struct sync_work; + + spinlock_t destroy_lock; /* protect destroy_list */ + struct list_head destroy_list; + struct work_struct destroy_work; +}; + +struct mlx5_crypto_dek_bulk { + struct mlx5_core_dev *mdev; + int base_obj_id; + int avail_start; /* the bit to start search */ + int num_deks; /* the total number of keys in a bulk */ + int avail_deks; /* the number of keys available, with need_sync bit 0 */ + int in_use_deks; /* the number of keys being used, with in_use bit 1 */ + struct list_head entry; + + /* 0: not being used by any user, 1: otherwise */ + unsigned long *in_use; + + /* The bits are set when they are used, and reset after crypto_sync + * is executed. So, the value 0 means the key is newly created, or not + * used after sync, and 1 means it is in use, or freed but not synced + */ + unsigned long *need_sync; +}; + +struct mlx5_crypto_dek_priv { + struct mlx5_core_dev *mdev; + int log_dek_obj_range; +}; + +struct mlx5_crypto_dek { + struct mlx5_crypto_dek_bulk *bulk; + struct list_head entry; + u32 obj_id; +}; + +u32 mlx5_crypto_dek_get_id(struct mlx5_crypto_dek *dek) +{ + return dek->obj_id; +} + +static int mlx5_crypto_dek_get_key_sz(struct mlx5_core_dev *mdev, + u32 sz_bytes, u8 *key_sz_p) +{ + u32 sz_bits = sz_bytes * BITS_PER_BYTE; switch (sz_bits) { case 128: - general_obj_key_size = - MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128; - key_p += sz_bytes; + *key_sz_p = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128; break; case 256: - general_obj_key_size = - MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256; + *key_sz_p = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256; break; default: + mlx5_core_err(mdev, "Crypto offload error, invalid key size (%u bits)\n", + sz_bits); return -EINVAL; } - memcpy(key_p, key, sz_bytes); + return 0; +} + +static int mlx5_crypto_dek_fill_key(struct mlx5_core_dev *mdev, u8 *key_obj, + const void *key, u32 sz_bytes) +{ + void *dst; + u8 key_sz; + int err; + + err = mlx5_crypto_dek_get_key_sz(mdev, sz_bytes, &key_sz); + if (err) + return err; + + MLX5_SET(encryption_key_obj, key_obj, key_size, key_sz); + + if (sz_bytes == 16) + /* For key size of 128b the MSBs are reserved. 
*/ + dst = MLX5_ADDR_OF(encryption_key_obj, key_obj, key[1]); + else + dst = MLX5_ADDR_OF(encryption_key_obj, key_obj, key); + + memcpy(dst, key, sz_bytes); + + return 0; +} + +static int mlx5_crypto_cmd_sync_crypto(struct mlx5_core_dev *mdev, + int crypto_type) +{ + u32 in[MLX5_ST_SZ_DW(sync_crypto_in)] = {}; + int err; + + mlx5_core_dbg(mdev, + "Execute SYNC_CRYPTO command with crypto_type(0x%x)\n", + crypto_type); + + MLX5_SET(sync_crypto_in, in, opcode, MLX5_CMD_OP_SYNC_CRYPTO); + MLX5_SET(sync_crypto_in, in, crypto_type, crypto_type); + + err = mlx5_cmd_exec_in(mdev, sync_crypto, in); + if (err) + mlx5_core_err(mdev, + "Failed to exec sync crypto, type=%d, err=%d\n", + crypto_type, err); + + return err; +} + +static int mlx5_crypto_create_dek_bulk(struct mlx5_core_dev *mdev, + u32 key_purpose, int log_obj_range, + u32 *obj_id) +{ + u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {}; + u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; + void *obj, *param; + int err; - MLX5_SET(encryption_key_obj, obj, key_size, general_obj_key_size); - MLX5_SET(encryption_key_obj, obj, key_type, key_type); MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY); + param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param); + MLX5_SET(general_obj_create_param, param, log_obj_range, log_obj_range); + + obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object); + MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose); MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn); err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); + if (err) + return err; + + *obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + mlx5_core_dbg(mdev, "DEK objects created, bulk=%d, obj_id=%d\n", + 1 << log_obj_range, *obj_id); + + return 0; +} + +static int mlx5_crypto_modify_dek_key(struct mlx5_core_dev *mdev, + const void *key, u32 sz_bytes, u32 key_purpose, + u32 obj_id, u32 obj_offset) +{ + u32 in[MLX5_ST_SZ_DW(modify_encryption_key_in)] = {}; + u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; + void *obj, *param; + int err; + + MLX5_SET(general_obj_in_cmd_hdr, in, opcode, + MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, + MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id); + + param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param); + MLX5_SET(general_obj_query_param, param, obj_offset, obj_offset); + + obj = MLX5_ADDR_OF(modify_encryption_key_in, in, encryption_key_object); + MLX5_SET64(encryption_key_obj, obj, modify_field_select, 1); + MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose); + MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn); + + err = mlx5_crypto_dek_fill_key(mdev, obj, key, sz_bytes); + if (err) + return err; + + err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); + + /* avoid leaking key on the stack */ + memzero_explicit(in, sizeof(in)); + + return err; +} + +static int mlx5_crypto_create_dek_key(struct mlx5_core_dev *mdev, + const void *key, u32 sz_bytes, + u32 key_purpose, u32 *p_key_id) +{ + u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {}; + u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; + u64 general_obj_types; + void *obj; + int err; + + general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types); + if (!(general_obj_types & + MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY)) + return -EINVAL; + + 
MLX5_SET(general_obj_in_cmd_hdr, in, opcode, + MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, + MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY); + + obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object); + MLX5_SET(encryption_key_obj, obj, key_purpose, key_purpose); + MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.hw_objs.pdn); + + err = mlx5_crypto_dek_fill_key(mdev, obj, key, sz_bytes); + if (err) + return err; + + err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); if (!err) *p_key_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); @@ -58,7 +258,7 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, return err; } -void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id) +static void mlx5_crypto_destroy_dek_key(struct mlx5_core_dev *mdev, u32 key_id) { u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {}; u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; @@ -71,3 +271,504 @@ void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id) mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); } + +int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, + const void *key, u32 sz_bytes, + u32 key_type, u32 *p_key_id) +{ + return mlx5_crypto_create_dek_key(mdev, key, sz_bytes, key_type, p_key_id); +} + +void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id) +{ + mlx5_crypto_destroy_dek_key(mdev, key_id); +} + +static struct mlx5_crypto_dek_bulk * +mlx5_crypto_dek_bulk_create(struct mlx5_crypto_dek_pool *pool) +{ + struct mlx5_crypto_dek_priv *dek_priv = pool->mdev->mlx5e_res.dek_priv; + struct mlx5_core_dev *mdev = pool->mdev; + struct mlx5_crypto_dek_bulk *bulk; + int num_deks, base_obj_id; + int err; + + bulk = kzalloc(sizeof(*bulk), GFP_KERNEL); + if (!bulk) + return ERR_PTR(-ENOMEM); + + num_deks = 1 << dek_priv->log_dek_obj_range; + bulk->need_sync = bitmap_zalloc(num_deks, GFP_KERNEL); + if (!bulk->need_sync) { + err = -ENOMEM; + goto err_out; + } + + bulk->in_use = bitmap_zalloc(num_deks, GFP_KERNEL); + if (!bulk->in_use) { + err = -ENOMEM; + goto err_out; + } + + err = mlx5_crypto_create_dek_bulk(mdev, pool->key_purpose, + dek_priv->log_dek_obj_range, + &base_obj_id); + if (err) + goto err_out; + + bulk->base_obj_id = base_obj_id; + bulk->num_deks = num_deks; + bulk->avail_deks = num_deks; + bulk->mdev = mdev; + + return bulk; + +err_out: + bitmap_free(bulk->in_use); + bitmap_free(bulk->need_sync); + kfree(bulk); + return ERR_PTR(err); +} + +static struct mlx5_crypto_dek_bulk * +mlx5_crypto_dek_pool_add_bulk(struct mlx5_crypto_dek_pool *pool) +{ + struct mlx5_crypto_dek_bulk *bulk; + + bulk = mlx5_crypto_dek_bulk_create(pool); + if (IS_ERR(bulk)) + return bulk; + + pool->avail_deks += bulk->num_deks; + pool->num_deks += bulk->num_deks; + list_add(&bulk->entry, &pool->partial_list); + + return bulk; +} + +static void mlx5_crypto_dek_bulk_free(struct mlx5_crypto_dek_bulk *bulk) +{ + mlx5_crypto_destroy_dek_key(bulk->mdev, bulk->base_obj_id); + bitmap_free(bulk->need_sync); + bitmap_free(bulk->in_use); + kfree(bulk); +} + +static void mlx5_crypto_dek_pool_remove_bulk(struct mlx5_crypto_dek_pool *pool, + struct mlx5_crypto_dek_bulk *bulk, + bool delay) +{ + pool->num_deks -= bulk->num_deks; + pool->avail_deks -= bulk->avail_deks; + pool->in_use_deks -= bulk->in_use_deks; + list_del(&bulk->entry); + if (!delay) + mlx5_crypto_dek_bulk_free(bulk); +} + +static struct mlx5_crypto_dek_bulk * +mlx5_crypto_dek_pool_pop(struct mlx5_crypto_dek_pool *pool, u32 *obj_offset) +{ + 
struct mlx5_crypto_dek_bulk *bulk; + int pos; + + mutex_lock(&pool->lock); + bulk = list_first_entry_or_null(&pool->partial_list, + struct mlx5_crypto_dek_bulk, entry); + + if (bulk) { + pos = find_next_zero_bit(bulk->need_sync, bulk->num_deks, + bulk->avail_start); + if (pos == bulk->num_deks) { + mlx5_core_err(pool->mdev, "Wrong DEK bulk avail_start.\n"); + pos = find_first_zero_bit(bulk->need_sync, bulk->num_deks); + } + WARN_ON(pos == bulk->num_deks); + } else { + bulk = list_first_entry_or_null(&pool->avail_list, + struct mlx5_crypto_dek_bulk, + entry); + if (bulk) { + list_move(&bulk->entry, &pool->partial_list); + } else { + bulk = mlx5_crypto_dek_pool_add_bulk(pool); + if (IS_ERR(bulk)) + goto out; + } + pos = 0; + } + + *obj_offset = pos; + bitmap_set(bulk->need_sync, pos, 1); + bitmap_set(bulk->in_use, pos, 1); + bulk->in_use_deks++; + bulk->avail_deks--; + if (!bulk->avail_deks) { + list_move(&bulk->entry, &pool->full_list); + bulk->avail_start = bulk->num_deks; + } else { + bulk->avail_start = pos + 1; + } + pool->avail_deks--; + pool->in_use_deks++; + +out: + mutex_unlock(&pool->lock); + return bulk; +} + +static bool mlx5_crypto_dek_need_sync(struct mlx5_crypto_dek_pool *pool) +{ + return !pool->syncing && + MLX5_CRYPTO_DEK_POOL_CALC_FREED(pool) > MLX5_CRYPTO_DEK_POOL_SYNC_THRESH; +} + +static int mlx5_crypto_dek_free_locked(struct mlx5_crypto_dek_pool *pool, + struct mlx5_crypto_dek *dek) +{ + struct mlx5_crypto_dek_bulk *bulk = dek->bulk; + int obj_offset; + bool old_val; + int err = 0; + + obj_offset = dek->obj_id - bulk->base_obj_id; + old_val = test_and_clear_bit(obj_offset, bulk->in_use); + WARN_ON_ONCE(!old_val); + if (!old_val) { + err = -ENOENT; + goto out_free; + } + pool->in_use_deks--; + bulk->in_use_deks--; + if (!bulk->avail_deks && !bulk->in_use_deks) + list_move(&bulk->entry, &pool->sync_list); + + if (mlx5_crypto_dek_need_sync(pool) && schedule_work(&pool->sync_work)) + pool->syncing = true; + +out_free: + kfree(dek); + return err; +} + +static int mlx5_crypto_dek_pool_push(struct mlx5_crypto_dek_pool *pool, + struct mlx5_crypto_dek *dek) +{ + int err = 0; + + mutex_lock(&pool->lock); + if (pool->syncing) + list_add(&dek->entry, &pool->wait_for_free); + else + err = mlx5_crypto_dek_free_locked(pool, dek); + mutex_unlock(&pool->lock); + + return err; +} + +/* Update the bits for a bulk while sync, and avail_next for search. + * As the combinations of (need_sync, in_use) of one DEK are + * - (0,0) means the key is ready for use, + * - (1,1) means the key is currently being used by a user, + * - (1,0) means the key is freed, and waiting for being synced, + * - (0,1) is invalid state. + * the number of revalidated DEKs can be calculated by + * hweight_long(need_sync XOR in_use), and the need_sync bits can be reset + * by simply copying from in_use bits. 
+ */ +static void mlx5_crypto_dek_bulk_reset_synced(struct mlx5_crypto_dek_pool *pool, + struct mlx5_crypto_dek_bulk *bulk) +{ + unsigned long *need_sync = bulk->need_sync; + unsigned long *in_use = bulk->in_use; + int i, freed, reused, avail_next; + bool first = true; + + freed = MLX5_CRYPTO_DEK_BULK_CALC_FREED(bulk); + + for (i = 0; freed && i < BITS_TO_LONGS(bulk->num_deks); + i++, need_sync++, in_use++) { + reused = hweight_long((*need_sync) ^ (*in_use)); + if (!reused) + continue; + + bulk->avail_deks += reused; + pool->avail_deks += reused; + *need_sync = *in_use; + if (first) { + avail_next = i * BITS_PER_TYPE(long); + if (bulk->avail_start > avail_next) + bulk->avail_start = avail_next; + first = false; + } + + freed -= reused; + } +} + +/* Return true if the bulk is reused, false if destroyed with delay */ +static bool mlx5_crypto_dek_bulk_handle_avail(struct mlx5_crypto_dek_pool *pool, + struct mlx5_crypto_dek_bulk *bulk, + struct list_head *destroy_list) +{ + if (list_empty(&pool->avail_list)) { + list_move(&bulk->entry, &pool->avail_list); + return true; + } + + mlx5_crypto_dek_pool_remove_bulk(pool, bulk, true); + list_add(&bulk->entry, destroy_list); + return false; +} + +static void mlx5_crypto_dek_pool_splice_destroy_list(struct mlx5_crypto_dek_pool *pool, + struct list_head *list, + struct list_head *head) +{ + spin_lock(&pool->destroy_lock); + list_splice_init(list, head); + spin_unlock(&pool->destroy_lock); +} + +static void mlx5_crypto_dek_pool_free_wait_keys(struct mlx5_crypto_dek_pool *pool) +{ + struct mlx5_crypto_dek *dek, *next; + + list_for_each_entry_safe(dek, next, &pool->wait_for_free, entry) { + list_del(&dek->entry); + mlx5_crypto_dek_free_locked(pool, dek); + } +} + +/* For all the bulks in each list, reset the bits while sync. + * Move them to different lists according to the number of available DEKs. + * Destrory all the idle bulks, except one for quick service. + * And free DEKs in the waiting list at the end of this func. 
+ */ +static void mlx5_crypto_dek_pool_reset_synced(struct mlx5_crypto_dek_pool *pool) +{ + struct mlx5_crypto_dek_bulk *bulk, *tmp; + LIST_HEAD(destroy_list); + + list_for_each_entry_safe(bulk, tmp, &pool->partial_list, entry) { + mlx5_crypto_dek_bulk_reset_synced(pool, bulk); + if (MLX5_CRYPTO_DEK_BULK_IDLE(bulk)) + mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list); + } + + list_for_each_entry_safe(bulk, tmp, &pool->full_list, entry) { + mlx5_crypto_dek_bulk_reset_synced(pool, bulk); + + if (!bulk->avail_deks) + continue; + + if (MLX5_CRYPTO_DEK_BULK_IDLE(bulk)) + mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list); + else + list_move(&bulk->entry, &pool->partial_list); + } + + list_for_each_entry_safe(bulk, tmp, &pool->sync_list, entry) { + bulk->avail_deks = bulk->num_deks; + pool->avail_deks += bulk->num_deks; + if (mlx5_crypto_dek_bulk_handle_avail(pool, bulk, &destroy_list)) { + memset(bulk->need_sync, 0, BITS_TO_BYTES(bulk->num_deks)); + bulk->avail_start = 0; + } + } + + mlx5_crypto_dek_pool_free_wait_keys(pool); + + if (!list_empty(&destroy_list)) { + mlx5_crypto_dek_pool_splice_destroy_list(pool, &destroy_list, + &pool->destroy_list); + schedule_work(&pool->destroy_work); + } +} + +static void mlx5_crypto_dek_sync_work_fn(struct work_struct *work) +{ + struct mlx5_crypto_dek_pool *pool = + container_of(work, struct mlx5_crypto_dek_pool, sync_work); + int err; + + err = mlx5_crypto_cmd_sync_crypto(pool->mdev, BIT(pool->key_purpose)); + mutex_lock(&pool->lock); + if (!err) + mlx5_crypto_dek_pool_reset_synced(pool); + pool->syncing = false; + mutex_unlock(&pool->lock); +} + +struct mlx5_crypto_dek *mlx5_crypto_dek_create(struct mlx5_crypto_dek_pool *dek_pool, + const void *key, u32 sz_bytes) +{ + struct mlx5_crypto_dek_priv *dek_priv = dek_pool->mdev->mlx5e_res.dek_priv; + struct mlx5_core_dev *mdev = dek_pool->mdev; + u32 key_purpose = dek_pool->key_purpose; + struct mlx5_crypto_dek_bulk *bulk; + struct mlx5_crypto_dek *dek; + int obj_offset; + int err; + + dek = kzalloc(sizeof(*dek), GFP_KERNEL); + if (!dek) + return ERR_PTR(-ENOMEM); + + if (!dek_priv) { + err = mlx5_crypto_create_dek_key(mdev, key, sz_bytes, + key_purpose, &dek->obj_id); + goto out; + } + + bulk = mlx5_crypto_dek_pool_pop(dek_pool, &obj_offset); + if (IS_ERR(bulk)) { + err = PTR_ERR(bulk); + goto out; + } + + dek->bulk = bulk; + dek->obj_id = bulk->base_obj_id + obj_offset; + err = mlx5_crypto_modify_dek_key(mdev, key, sz_bytes, key_purpose, + bulk->base_obj_id, obj_offset); + if (err) { + mlx5_crypto_dek_pool_push(dek_pool, dek); + return ERR_PTR(err); + } + +out: + if (err) { + kfree(dek); + return ERR_PTR(err); + } + + return dek; +} + +void mlx5_crypto_dek_destroy(struct mlx5_crypto_dek_pool *dek_pool, + struct mlx5_crypto_dek *dek) +{ + struct mlx5_crypto_dek_priv *dek_priv = dek_pool->mdev->mlx5e_res.dek_priv; + struct mlx5_core_dev *mdev = dek_pool->mdev; + + if (!dek_priv) { + mlx5_crypto_destroy_dek_key(mdev, dek->obj_id); + kfree(dek); + } else { + mlx5_crypto_dek_pool_push(dek_pool, dek); + } +} + +static void mlx5_crypto_dek_free_destroy_list(struct list_head *destroy_list) +{ + struct mlx5_crypto_dek_bulk *bulk, *tmp; + + list_for_each_entry_safe(bulk, tmp, destroy_list, entry) + mlx5_crypto_dek_bulk_free(bulk); +} + +static void mlx5_crypto_dek_destroy_work_fn(struct work_struct *work) +{ + struct mlx5_crypto_dek_pool *pool = + container_of(work, struct mlx5_crypto_dek_pool, destroy_work); + LIST_HEAD(destroy_list); + + mlx5_crypto_dek_pool_splice_destroy_list(pool, 
&pool->destroy_list, + &destroy_list); + mlx5_crypto_dek_free_destroy_list(&destroy_list); +} + +struct mlx5_crypto_dek_pool * +mlx5_crypto_dek_pool_create(struct mlx5_core_dev *mdev, int key_purpose) +{ + struct mlx5_crypto_dek_pool *pool; + + pool = kzalloc(sizeof(*pool), GFP_KERNEL); + if (!pool) + return ERR_PTR(-ENOMEM); + + pool->mdev = mdev; + pool->key_purpose = key_purpose; + + mutex_init(&pool->lock); + INIT_LIST_HEAD(&pool->avail_list); + INIT_LIST_HEAD(&pool->partial_list); + INIT_LIST_HEAD(&pool->full_list); + INIT_LIST_HEAD(&pool->sync_list); + INIT_LIST_HEAD(&pool->wait_for_free); + INIT_WORK(&pool->sync_work, mlx5_crypto_dek_sync_work_fn); + spin_lock_init(&pool->destroy_lock); + INIT_LIST_HEAD(&pool->destroy_list); + INIT_WORK(&pool->destroy_work, mlx5_crypto_dek_destroy_work_fn); + + return pool; +} + +void mlx5_crypto_dek_pool_destroy(struct mlx5_crypto_dek_pool *pool) +{ + struct mlx5_crypto_dek_bulk *bulk, *tmp; + + cancel_work_sync(&pool->sync_work); + cancel_work_sync(&pool->destroy_work); + + mlx5_crypto_dek_pool_free_wait_keys(pool); + + list_for_each_entry_safe(bulk, tmp, &pool->avail_list, entry) + mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false); + + list_for_each_entry_safe(bulk, tmp, &pool->full_list, entry) + mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false); + + list_for_each_entry_safe(bulk, tmp, &pool->sync_list, entry) + mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false); + + list_for_each_entry_safe(bulk, tmp, &pool->partial_list, entry) + mlx5_crypto_dek_pool_remove_bulk(pool, bulk, false); + + mlx5_crypto_dek_free_destroy_list(&pool->destroy_list); + + mutex_destroy(&pool->lock); + + kfree(pool); +} + +void mlx5_crypto_dek_cleanup(struct mlx5_crypto_dek_priv *dek_priv) +{ + if (!dek_priv) + return; + + kfree(dek_priv); +} + +struct mlx5_crypto_dek_priv *mlx5_crypto_dek_init(struct mlx5_core_dev *mdev) +{ + struct mlx5_crypto_dek_priv *dek_priv; + int err; + + if (!MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc)) + return NULL; + + dek_priv = kzalloc(sizeof(*dek_priv), GFP_KERNEL); + if (!dek_priv) + return ERR_PTR(-ENOMEM); + + dek_priv->mdev = mdev; + dek_priv->log_dek_obj_range = min_t(int, 12, + MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc)); + + /* sync all types of objects */ + err = mlx5_crypto_cmd_sync_crypto(mdev, MLX5_CRYPTO_DEK_ALL_TYPE); + if (err) + goto err_sync_crypto; + + mlx5_core_dbg(mdev, "Crypto DEK enabled, %d deks per alloc (max %d), total %d\n", + 1 << dek_priv->log_dek_obj_range, + 1 << MLX5_CAP_CRYPTO(mdev, log_dek_max_alloc), + 1 << MLX5_CAP_CRYPTO(mdev, log_max_num_deks)); + + return dek_priv; + +err_sync_crypto: + kfree(dek_priv); + return ERR_PTR(err); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h new file mode 100644 index 000000000000..c819c047bb9c --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
*/ + +#ifndef __MLX5_LIB_CRYPTO_H__ +#define __MLX5_LIB_CRYPTO_H__ + +enum { + MLX5_ACCEL_OBJ_TLS_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_TLS, + MLX5_ACCEL_OBJ_IPSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_IPSEC, + MLX5_ACCEL_OBJ_MACSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_PURPOSE_MACSEC, + MLX5_ACCEL_OBJ_TYPE_KEY_NUM, +}; + +int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, + const void *key, u32 sz_bytes, + u32 key_type, u32 *p_key_id); + +void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id); + +struct mlx5_crypto_dek_pool; +struct mlx5_crypto_dek; + +struct mlx5_crypto_dek_pool *mlx5_crypto_dek_pool_create(struct mlx5_core_dev *mdev, + int key_purpose); +void mlx5_crypto_dek_pool_destroy(struct mlx5_crypto_dek_pool *pool); +struct mlx5_crypto_dek *mlx5_crypto_dek_create(struct mlx5_crypto_dek_pool *dek_pool, + const void *key, u32 sz_bytes); +void mlx5_crypto_dek_destroy(struct mlx5_crypto_dek_pool *dek_pool, + struct mlx5_crypto_dek *dek); +u32 mlx5_crypto_dek_get_id(struct mlx5_crypto_dek *dek); + +struct mlx5_crypto_dek_priv *mlx5_crypto_dek_init(struct mlx5_core_dev *mdev); +void mlx5_crypto_dek_cleanup(struct mlx5_crypto_dek_priv *dek_priv); +#endif /* __MLX5_LIB_CRYPTO_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c index df58cba37930..81ed91fee59b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -214,7 +214,7 @@ create_chain_restore(struct fs_chain *chain) struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch; u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5_fs_chains *chains = chain->chains; - enum mlx5e_tc_attr_to_reg chain_to_reg; + enum mlx5e_tc_attr_to_reg mapped_obj_to_reg; struct mlx5_modify_hdr *mod_hdr; u32 index; int err; @@ -242,7 +242,7 @@ create_chain_restore(struct fs_chain *chain) chain->id = index; if (chains->ns == MLX5_FLOW_NAMESPACE_FDB) { - chain_to_reg = CHAIN_TO_REG; + mapped_obj_to_reg = MAPPED_OBJ_TO_REG; chain->restore_rule = esw_add_restore_rule(esw, chain->id); if (IS_ERR(chain->restore_rule)) { err = PTR_ERR(chain->restore_rule); @@ -253,7 +253,7 @@ create_chain_restore(struct fs_chain *chain) * since we write the metadata to reg_b * that is passed to SW directly. */ - chain_to_reg = NIC_CHAIN_TO_REG; + mapped_obj_to_reg = NIC_MAPPED_OBJ_TO_REG; } else { err = -EINVAL; goto err_rule; @@ -261,12 +261,12 @@ create_chain_restore(struct fs_chain *chain) MLX5_SET(set_action_in, modact, action_type, MLX5_ACTION_TYPE_SET); MLX5_SET(set_action_in, modact, field, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mfield); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mfield); MLX5_SET(set_action_in, modact, offset, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].moffset); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].moffset); MLX5_SET(set_action_in, modact, length, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen == 32 ? - 0 : mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen == 32 ? 
+ 0 : mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen); MLX5_SET(set_action_in, modact, data, chain->id); mod_hdr = mlx5_modify_header_alloc(chains->dev, chains->ns, 1, modact); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c new file mode 100644 index 000000000000..2c53589b765d --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c @@ -0,0 +1,368 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#include "fs_core.h" +#include "lib/ipsec_fs_roce.h" +#include "mlx5_core.h" + +struct mlx5_ipsec_miss { + struct mlx5_flow_group *group; + struct mlx5_flow_handle *rule; +}; + +struct mlx5_ipsec_rx_roce { + struct mlx5_flow_group *g; + struct mlx5_flow_table *ft; + struct mlx5_flow_handle *rule; + struct mlx5_ipsec_miss roce_miss; + + struct mlx5_flow_table *ft_rdma; + struct mlx5_flow_namespace *ns_rdma; +}; + +struct mlx5_ipsec_tx_roce { + struct mlx5_flow_group *g; + struct mlx5_flow_table *ft; + struct mlx5_flow_handle *rule; + struct mlx5_flow_namespace *ns; +}; + +struct mlx5_ipsec_fs { + struct mlx5_ipsec_rx_roce ipv4_rx; + struct mlx5_ipsec_rx_roce ipv6_rx; + struct mlx5_ipsec_tx_roce tx; +}; + +static void ipsec_fs_roce_setup_udp_dport(struct mlx5_flow_spec *spec, + u16 dport) +{ + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_protocol); + MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_protocol, IPPROTO_UDP); + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.udp_dport); + MLX5_SET(fte_match_param, spec->match_value, outer_headers.udp_dport, dport); +} + +static int +ipsec_fs_roce_rx_rule_setup(struct mlx5_core_dev *mdev, + struct mlx5_flow_destination *default_dst, + struct mlx5_ipsec_rx_roce *roce) +{ + struct mlx5_flow_destination dst = {}; + MLX5_DECLARE_FLOW_ACT(flow_act); + struct mlx5_flow_handle *rule; + struct mlx5_flow_spec *spec; + int err = 0; + + spec = kvzalloc(sizeof(*spec), GFP_KERNEL); + if (!spec) + return -ENOMEM; + + ipsec_fs_roce_setup_udp_dport(spec, ROCE_V2_UDP_DPORT); + + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; + dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE; + dst.ft = roce->ft_rdma; + rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, &dst, 1); + if (IS_ERR(rule)) { + err = PTR_ERR(rule); + mlx5_core_err(mdev, "Fail to add RX RoCE IPsec rule err=%d\n", + err); + goto fail_add_rule; + } + + roce->rule = rule; + + memset(spec, 0, sizeof(*spec)); + rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, default_dst, 1); + if (IS_ERR(rule)) { + err = PTR_ERR(rule); + mlx5_core_err(mdev, "Fail to add RX RoCE IPsec miss rule err=%d\n", + err); + goto fail_add_default_rule; + } + + roce->roce_miss.rule = rule; + + kvfree(spec); + return 0; + +fail_add_default_rule: + mlx5_del_flow_rules(roce->rule); +fail_add_rule: + kvfree(spec); + return err; +} + +static int ipsec_fs_roce_tx_rule_setup(struct mlx5_core_dev *mdev, + struct mlx5_ipsec_tx_roce *roce, + struct mlx5_flow_table *pol_ft) +{ + struct mlx5_flow_destination dst = {}; + MLX5_DECLARE_FLOW_ACT(flow_act); + struct mlx5_flow_handle *rule; + int err = 0; + + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; + dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE; + dst.ft = pol_ft; + rule = mlx5_add_flow_rules(roce->ft, NULL, &flow_act, &dst, + 1); + if (IS_ERR(rule)) { + err 
= PTR_ERR(rule); + mlx5_core_err(mdev, "Fail to add TX RoCE IPsec rule err=%d\n", + err); + goto out; + } + roce->rule = rule; + +out: + return err; +} + +void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce) +{ + struct mlx5_ipsec_tx_roce *tx_roce; + + if (!ipsec_roce) + return; + + tx_roce = &ipsec_roce->tx; + + mlx5_del_flow_rules(tx_roce->rule); + mlx5_destroy_flow_group(tx_roce->g); + mlx5_destroy_flow_table(tx_roce->ft); +} + +#define MLX5_TX_ROCE_GROUP_SIZE BIT(0) + +int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev, + struct mlx5_ipsec_fs *ipsec_roce, + struct mlx5_flow_table *pol_ft) +{ + struct mlx5_flow_table_attr ft_attr = {}; + struct mlx5_ipsec_tx_roce *roce; + struct mlx5_flow_table *ft; + struct mlx5_flow_group *g; + int ix = 0; + int err; + u32 *in; + + if (!ipsec_roce) + return 0; + + roce = &ipsec_roce->tx; + + in = kvzalloc(MLX5_ST_SZ_BYTES(create_flow_group_in), GFP_KERNEL); + if (!in) + return -ENOMEM; + + ft_attr.max_fte = 1; + ft = mlx5_create_flow_table(roce->ns, &ft_attr); + if (IS_ERR(ft)) { + err = PTR_ERR(ft); + mlx5_core_err(mdev, "Fail to create RoCE IPsec tx ft err=%d\n", err); + return err; + } + + roce->ft = ft; + + MLX5_SET_CFG(in, start_flow_index, ix); + ix += MLX5_TX_ROCE_GROUP_SIZE; + MLX5_SET_CFG(in, end_flow_index, ix - 1); + g = mlx5_create_flow_group(ft, in); + if (IS_ERR(g)) { + err = PTR_ERR(g); + mlx5_core_err(mdev, "Fail to create RoCE IPsec tx group err=%d\n", err); + goto fail; + } + roce->g = g; + + err = ipsec_fs_roce_tx_rule_setup(mdev, roce, pol_ft); + if (err) { + mlx5_core_err(mdev, "Fail to create RoCE IPsec tx rules err=%d\n", err); + goto rule_fail; + } + + return 0; + +rule_fail: + mlx5_destroy_flow_group(roce->g); +fail: + mlx5_destroy_flow_table(ft); + return err; +} + +struct mlx5_flow_table *mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_roce, u32 family) +{ + struct mlx5_ipsec_rx_roce *rx_roce; + + if (!ipsec_roce) + return NULL; + + rx_roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx : + &ipsec_roce->ipv6_rx; + + return rx_roce->ft; +} + +void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, u32 family) +{ + struct mlx5_ipsec_rx_roce *rx_roce; + + if (!ipsec_roce) + return; + + rx_roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx : + &ipsec_roce->ipv6_rx; + + mlx5_del_flow_rules(rx_roce->roce_miss.rule); + mlx5_del_flow_rules(rx_roce->rule); + mlx5_destroy_flow_table(rx_roce->ft_rdma); + mlx5_destroy_flow_group(rx_roce->roce_miss.group); + mlx5_destroy_flow_group(rx_roce->g); + mlx5_destroy_flow_table(rx_roce->ft); +} + +#define MLX5_RX_ROCE_GROUP_SIZE BIT(0) + +int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev, + struct mlx5_ipsec_fs *ipsec_roce, + struct mlx5_flow_namespace *ns, + struct mlx5_flow_destination *default_dst, + u32 family, u32 level, u32 prio) +{ + struct mlx5_flow_table_attr ft_attr = {}; + struct mlx5_ipsec_rx_roce *roce; + struct mlx5_flow_table *ft; + struct mlx5_flow_group *g; + void *outer_headers_c; + int ix = 0; + u32 *in; + int err; + u8 *mc; + + if (!ipsec_roce) + return 0; + + roce = (family == AF_INET) ? 
&ipsec_roce->ipv4_rx : + &ipsec_roce->ipv6_rx; + + ft_attr.max_fte = 2; + ft_attr.level = level; + ft_attr.prio = prio; + ft = mlx5_create_flow_table(ns, &ft_attr); + if (IS_ERR(ft)) { + err = PTR_ERR(ft); + mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at nic err=%d\n", err); + return err; + } + + roce->ft = ft; + + in = kvzalloc(MLX5_ST_SZ_BYTES(create_flow_group_in), GFP_KERNEL); + if (!in) { + err = -ENOMEM; + goto fail_nomem; + } + + mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); + outer_headers_c = MLX5_ADDR_OF(fte_match_param, mc, outer_headers); + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ip_protocol); + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_dport); + + MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS); + MLX5_SET_CFG(in, start_flow_index, ix); + ix += MLX5_RX_ROCE_GROUP_SIZE; + MLX5_SET_CFG(in, end_flow_index, ix - 1); + g = mlx5_create_flow_group(ft, in); + if (IS_ERR(g)) { + err = PTR_ERR(g); + mlx5_core_err(mdev, "Fail to create RoCE IPsec rx group at nic err=%d\n", err); + goto fail_group; + } + roce->g = g; + + memset(in, 0, MLX5_ST_SZ_BYTES(create_flow_group_in)); + MLX5_SET_CFG(in, start_flow_index, ix); + ix += MLX5_RX_ROCE_GROUP_SIZE; + MLX5_SET_CFG(in, end_flow_index, ix - 1); + g = mlx5_create_flow_group(ft, in); + if (IS_ERR(g)) { + err = PTR_ERR(g); + mlx5_core_err(mdev, "Fail to create RoCE IPsec rx miss group at nic err=%d\n", err); + goto fail_mgroup; + } + roce->roce_miss.group = g; + + memset(&ft_attr, 0, sizeof(ft_attr)); + if (family == AF_INET) + ft_attr.level = 1; + ft = mlx5_create_flow_table(roce->ns_rdma, &ft_attr); + if (IS_ERR(ft)) { + err = PTR_ERR(ft); + mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at rdma err=%d\n", err); + goto fail_rdma_table; + } + + roce->ft_rdma = ft; + + err = ipsec_fs_roce_rx_rule_setup(mdev, default_dst, roce); + if (err) { + mlx5_core_err(mdev, "Fail to create RoCE IPsec rx rules err=%d\n", err); + goto fail_setup_rule; + } + + kvfree(in); + return 0; + +fail_setup_rule: + mlx5_destroy_flow_table(roce->ft_rdma); +fail_rdma_table: + mlx5_destroy_flow_group(roce->roce_miss.group); +fail_mgroup: + mlx5_destroy_flow_group(roce->g); +fail_group: + kvfree(in); +fail_nomem: + mlx5_destroy_flow_table(roce->ft); + return err; +} + +void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce) +{ + kfree(ipsec_roce); +} + +struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev) +{ + struct mlx5_ipsec_fs *roce_ipsec; + struct mlx5_flow_namespace *ns; + + ns = mlx5_get_flow_namespace(mdev, MLX5_FLOW_NAMESPACE_RDMA_RX_IPSEC); + if (!ns) { + mlx5_core_err(mdev, "Failed to get RoCE rx ns\n"); + return NULL; + } + + roce_ipsec = kzalloc(sizeof(*roce_ipsec), GFP_KERNEL); + if (!roce_ipsec) + return NULL; + + roce_ipsec->ipv4_rx.ns_rdma = ns; + roce_ipsec->ipv6_rx.ns_rdma = ns; + + ns = mlx5_get_flow_namespace(mdev, MLX5_FLOW_NAMESPACE_RDMA_TX_IPSEC); + if (!ns) { + mlx5_core_err(mdev, "Failed to get RoCE tx ns\n"); + goto err_tx; + } + + roce_ipsec->tx.ns = ns; + + return roce_ipsec; + +err_tx: + kfree(roce_ipsec); + return NULL; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h new file mode 100644 index 000000000000..9712d705fe48 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved. */ + +#ifndef __MLX5_LIB_IPSEC_H__ +#define __MLX5_LIB_IPSEC_H__ + +struct mlx5_ipsec_fs; + +struct mlx5_flow_table * +mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_roce, u32 family); +void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, + u32 family); +int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev, + struct mlx5_ipsec_fs *ipsec_roce, + struct mlx5_flow_namespace *ns, + struct mlx5_flow_destination *default_dst, + u32 family, u32 level, u32 prio); +void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce); +int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev, + struct mlx5_ipsec_fs *ipsec_roce, + struct mlx5_flow_table *pol_ft); +void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce); +struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev); + +#endif /* __MLX5_LIB_IPSEC_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h index 032adb21ad4b..ccf12f7db6f0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h @@ -79,28 +79,11 @@ struct mlx5_pme_stats { void mlx5_get_pme_stats(struct mlx5_core_dev *dev, struct mlx5_pme_stats *stats); int mlx5_notifier_call_chain(struct mlx5_events *events, unsigned int event, void *data); -/* Crypto */ -enum { - MLX5_ACCEL_OBJ_TLS_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_TLS, - MLX5_ACCEL_OBJ_IPSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_IPSEC, - MLX5_ACCEL_OBJ_MACSEC_KEY = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_MACSEC, -}; - -int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, - void *key, u32 sz_bytes, - u32 key_type, u32 *p_key_id); -void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id); - static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev) { return devlink_net(priv_to_devlink(dev)); } -static inline void mlx5_uplink_netdev_set(struct mlx5_core_dev *mdev, struct net_device *netdev) -{ - mdev->mlx5e_res.uplink_netdev = netdev; -} - static inline struct net_device *mlx5_uplink_netdev_get(struct mlx5_core_dev *mdev) { return mdev->mlx5e_res.uplink_netdev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 4e1b5757528a..540840e80493 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -336,6 +336,24 @@ static u16 to_fw_pkey_sz(struct mlx5_core_dev *dev, u32 size) } } +void mlx5_core_uplink_netdev_set(struct mlx5_core_dev *dev, struct net_device *netdev) +{ + mutex_lock(&dev->mlx5e_res.uplink_netdev_lock); + dev->mlx5e_res.uplink_netdev = netdev; + mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_UPLINK_NETDEV, + netdev); + mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock); +} + +void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *dev) +{ + mutex_lock(&dev->mlx5e_res.uplink_netdev_lock); + mlx5_blocking_notifier_call_chain(dev, MLX5_DRIVER_EVENT_UPLINK_NETDEV, + dev->mlx5e_res.uplink_netdev); + mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock); +} +EXPORT_SYMBOL(mlx5_core_uplink_netdev_event_replay); + static int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type, enum mlx5_cap_mode cap_mode) @@ -484,9 +502,9 @@ static int max_uc_list_get_devlink_param(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(devlink, - 
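
The new ipsec_fs_roce.h exposes struct mlx5_ipsec_fs only as a forward declaration, so every consumer holds an opaque pointer and the layout stays private to the .c file. A small sketch of that opaque-handle idiom, using a hypothetical counter object rather than the mlx5 types:

    #include <stdio.h>
    #include <stdlib.h>

    /* "Header" half: callers see only a forward declaration. */
    struct counter;                            /* opaque handle */
    struct counter *counter_init(void);
    void counter_bump(struct counter *c);
    int counter_get(const struct counter *c);
    void counter_cleanup(struct counter *c);

    /* "Implementation" half: the only code that knows the layout,
     * free to change it without touching any caller. */
    struct counter { int val; };

    struct counter *counter_init(void)
    {
        return calloc(1, sizeof(struct counter));
    }

    void counter_bump(struct counter *c) { c->val++; }
    int counter_get(const struct counter *c) { return c->val; }
    void counter_cleanup(struct counter *c) { free(c); }

    int main(void)
    {
        struct counter *c = counter_init();

        counter_bump(c);
        printf("%d\n", counter_get(c));
        counter_cleanup(c);
        return 0;
    }
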
DEVLINK_PARAM_GENERIC_ID_MAX_MACS, - &val); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_MAX_MACS, + &val); if (!err) return val.vu32; mlx5_core_dbg(dev, "Failed to get param. err = %d\n", err); @@ -499,9 +517,9 @@ bool mlx5_is_roce_on(struct mlx5_core_dev *dev) union devlink_param_value val; int err; - err = devlink_param_driverinit_value_get(devlink, - DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, - &val); + err = devl_param_driverinit_value_get(devlink, + DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, + &val); if (!err) return val.vbool; @@ -1390,9 +1408,9 @@ int mlx5_init_one(struct mlx5_core_dev *dev) set_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state); - err = mlx5_devlink_register(priv_to_devlink(dev)); + err = mlx5_devlink_params_register(priv_to_devlink(dev)); if (err) - goto err_devlink_reg; + goto err_devlink_params_reg; err = mlx5_register_device(dev); if (err) @@ -1403,8 +1421,8 @@ int mlx5_init_one(struct mlx5_core_dev *dev) return 0; err_register: - mlx5_devlink_unregister(priv_to_devlink(dev)); -err_devlink_reg: + mlx5_devlink_params_unregister(priv_to_devlink(dev)); +err_devlink_params_reg: clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state); mlx5_unload(dev); err_load: @@ -1426,7 +1444,7 @@ void mlx5_uninit_one(struct mlx5_core_dev *dev) mutex_lock(&dev->intf_state_mutex); mlx5_unregister_device(dev); - mlx5_devlink_unregister(priv_to_devlink(dev)); + mlx5_devlink_params_unregister(priv_to_devlink(dev)); if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) { mlx5_core_warn(dev, "%s: interface is down, NOP\n", @@ -1491,23 +1509,23 @@ out: return err; } -int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery) +int mlx5_load_one(struct mlx5_core_dev *dev) { struct devlink *devlink = priv_to_devlink(dev); int ret; devl_lock(devlink); - ret = mlx5_load_one_devl_locked(dev, recovery); + ret = mlx5_load_one_devl_locked(dev, false); devl_unlock(devlink); return ret; } -void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev) +void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend) { devl_assert_locked(priv_to_devlink(dev)); mutex_lock(&dev->intf_state_mutex); - mlx5_detach_device(dev); + mlx5_detach_device(dev, suspend); if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) { mlx5_core_warn(dev, "%s: interface is down, NOP\n", @@ -1522,12 +1540,12 @@ out: mutex_unlock(&dev->intf_state_mutex); } -void mlx5_unload_one(struct mlx5_core_dev *dev) +void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend) { struct devlink *devlink = priv_to_devlink(dev); devl_lock(devlink); - mlx5_unload_one_devl_locked(dev); + mlx5_unload_one_devl_locked(dev, suspend); devl_unlock(devlink); } @@ -1555,6 +1573,7 @@ static const int types[] = { MLX5_CAP_DEV_SHAMPO, MLX5_CAP_MACSEC, MLX5_CAP_ADV_VIRTUALIZATION, + MLX5_CAP_CRYPTO, }; static void mlx5_hca_caps_free(struct mlx5_core_dev *dev) @@ -1608,6 +1627,7 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx) lockdep_register_key(&dev->lock_key); mutex_init(&dev->intf_state_mutex); lockdep_set_class(&dev->intf_state_mutex, &dev->lock_key); + mutex_init(&dev->mlx5e_res.uplink_netdev_lock); mutex_init(&priv->bfregs.reg_head.lock); mutex_init(&priv->bfregs.wc_head.lock); @@ -1696,6 +1716,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev) mutex_destroy(&priv->alloc_mutex); mutex_destroy(&priv->bfregs.wc_head.lock); mutex_destroy(&priv->bfregs.reg_head.lock); + mutex_destroy(&dev->mlx5e_res.uplink_netdev_lock); mutex_destroy(&dev->intf_state_mutex); lockdep_unregister_key(&dev->lock_key); } 
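
mlx5_core_uplink_netdev_set above publishes the netdev pointer and fires the blocking notifier chain while still holding uplink_netdev_lock, and the replay helper re-delivers the last value to late subscribers under the same lock. A simplified userspace sketch of that publish-and-replay shape, with the notifier chain reduced to a single callback slot (names hypothetical):

    #include <pthread.h>
    #include <stdio.h>

    typedef void (*watcher_fn)(void *obj);

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static void *current_obj;          /* the published pointer */
    static watcher_fn watcher;         /* stand-in for a notifier chain */

    /* Publish a new value and notify while still holding the lock, so a
     * watcher never observes a value newer than the event it handles. */
    static void obj_set(void *obj)
    {
        pthread_mutex_lock(&lock);
        current_obj = obj;
        if (watcher)
            watcher(current_obj);
        pthread_mutex_unlock(&lock);
    }

    /* Re-deliver the last published value, e.g. to a late subscriber. */
    static void obj_replay(void)
    {
        pthread_mutex_lock(&lock);
        if (watcher)
            watcher(current_obj);
        pthread_mutex_unlock(&lock);
    }
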
@@ -1809,7 +1830,7 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev, mlx5_enter_error_state(dev, false); mlx5_error_sw_reset(dev); - mlx5_unload_one(dev); + mlx5_unload_one(dev, true); mlx5_drain_health_wq(dev); mlx5_pci_disable_device(dev); @@ -1891,8 +1912,7 @@ static void mlx5_pci_resume(struct pci_dev *pdev) mlx5_pci_trace(dev, "Enter, loading driver..\n"); - err = mlx5_load_one(dev, false); - + err = mlx5_load_one(dev); if (!err) devlink_health_reporter_state_update(dev->priv.health.fw_fatal_reporter, DEVLINK_HEALTH_REPORTER_STATE_HEALTHY); @@ -1966,7 +1986,7 @@ static void shutdown(struct pci_dev *pdev) set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state); err = mlx5_try_fast_unload(dev); if (err) - mlx5_unload_one(dev); + mlx5_unload_one(dev, false); mlx5_pci_disable_device(dev); } @@ -1974,7 +1994,7 @@ static int mlx5_suspend(struct pci_dev *pdev, pm_message_t state) { struct mlx5_core_dev *dev = pci_get_drvdata(pdev); - mlx5_unload_one(dev); + mlx5_unload_one(dev, true); return 0; } @@ -1983,7 +2003,7 @@ static int mlx5_resume(struct pci_dev *pdev) { struct mlx5_core_dev *dev = pci_get_drvdata(pdev); - return mlx5_load_one(dev, false); + return mlx5_load_one(dev); } static const struct pci_device_id mlx5_core_pci_table[] = { @@ -2017,7 +2037,7 @@ MODULE_DEVICE_TABLE(pci, mlx5_core_pci_table); void mlx5_disable_device(struct mlx5_core_dev *dev) { mlx5_error_sw_reset(dev); - mlx5_unload_one_devl_locked(dev); + mlx5_unload_one_devl_locked(dev, false); } int mlx5_recover_device(struct mlx5_core_dev *dev) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h index 029305a8b80a..be0785f83083 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h @@ -236,7 +236,7 @@ void mlx5_adev_cleanup(struct mlx5_core_dev *dev); int mlx5_adev_init(struct mlx5_core_dev *dev); int mlx5_attach_device(struct mlx5_core_dev *dev); -void mlx5_detach_device(struct mlx5_core_dev *dev); +void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend); int mlx5_register_device(struct mlx5_core_dev *dev); void mlx5_unregister_device(struct mlx5_core_dev *dev); struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev); @@ -319,9 +319,9 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx); void mlx5_mdev_uninit(struct mlx5_core_dev *dev); int mlx5_init_one(struct mlx5_core_dev *dev); void mlx5_uninit_one(struct mlx5_core_dev *dev); -void mlx5_unload_one(struct mlx5_core_dev *dev); -void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev); -int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery); +void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend); +void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend); +int mlx5_load_one(struct mlx5_core_dev *dev); int mlx5_load_one_devl_locked(struct mlx5_core_dev *dev, bool recovery); int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap, u16 function_id, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c index 0eb50be175cc..64d4e7125e9b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c @@ -219,7 +219,8 @@ static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function) n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask)); if (n >= MLX5_NUM_4K_IN_PAGE) { - mlx5_core_warn(dev, "alloc 4k 
bug\n"); + mlx5_core_warn(dev, "alloc 4k bug: fw page = 0x%llx, n = %u, bitmask: %lu, max num of 4K pages: %d\n", + fp->addr, n, fp->bitmask, MLX5_NUM_4K_IN_PAGE); return -ENOENT; } clear_bit(n, &fp->bitmask); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c index 7b4783ce213e..a7377619ba6f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c @@ -74,7 +74,7 @@ static void mlx5_sf_dev_shutdown(struct auxiliary_device *adev) { struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev); - mlx5_unload_one(sf_dev->mdev); + mlx5_unload_one(sf_dev->mdev, false); } static const struct auxiliary_device_id mlx5_sf_dev_id_table[] = { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c index a4476cb4c3b3..fd2d31cdbcf9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c @@ -724,7 +724,6 @@ int mlx5dr_send_postsend_action(struct mlx5dr_domain *dmn, struct mlx5dr_action *action) { struct postsend_info send_info = {}; - int ret; send_info.write.addr = (uintptr_t)action->rewrite->data; send_info.write.length = action->rewrite->num_of_actions * @@ -734,9 +733,7 @@ int mlx5dr_send_postsend_action(struct mlx5dr_domain *dmn, mlx5dr_icm_pool_get_chunk_mr_addr(action->rewrite->chunk); send_info.rkey = mlx5dr_icm_pool_get_chunk_rkey(action->rewrite->chunk); - ret = dr_postsend_icm_data(dmn, &send_info); - - return ret; + return dr_postsend_icm_data(dmn, &send_info); } static int dr_modify_qp_rst2init(struct mlx5_core_dev *mdev, diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h index 5a1027b07215..a453b9cd9033 100644 --- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h @@ -14,6 +14,7 @@ #include <linux/irqreturn.h> #include <linux/netdevice.h> #include <linux/irq.h> +#include <linux/phy.h> /* The silicon design supports a maximum RX ring size of * 32K entries. 
Based on current testing this maximum size @@ -67,6 +68,29 @@ struct mlxbf_gige_stats { u64 rx_filter_discard_pkts; }; +struct mlxbf_gige_reg_param { + u32 mask; + u32 shift; +}; + +struct mlxbf_gige_mdio_gw { + u32 gw_address; + u32 read_data_address; + struct mlxbf_gige_reg_param busy; + struct mlxbf_gige_reg_param write_data; + struct mlxbf_gige_reg_param read_data; + struct mlxbf_gige_reg_param devad; + struct mlxbf_gige_reg_param partad; + struct mlxbf_gige_reg_param opcode; + struct mlxbf_gige_reg_param st1; +}; + +struct mlxbf_gige_link_cfg { + void (*set_phy_link_mode)(struct phy_device *phydev); + void (*adjust_link)(struct net_device *netdev); + phy_interface_t phy_mode; +}; + struct mlxbf_gige { void __iomem *base; void __iomem *llu_base; @@ -102,6 +126,9 @@ struct mlxbf_gige { u8 valid_polarity; struct napi_struct napi; struct mlxbf_gige_stats stats; + u8 hw_version; + struct mlxbf_gige_mdio_gw *mdio_gw; + int prev_speed; }; /* Rx Work Queue Element definitions */ diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c index 41ebef25a930..253d7ad9b809 100644 --- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_ethtool.c @@ -135,4 +135,5 @@ const struct ethtool_ops mlxbf_gige_ethtool_ops = { .nway_reset = phy_ethtool_nway_reset, .get_pauseparam = mlxbf_gige_get_pauseparam, .get_link_ksettings = phy_ethtool_get_link_ksettings, + .set_link_ksettings = phy_ethtool_set_link_ksettings, }; diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c index 2292d63a279c..694de9513b9f 100644 --- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c @@ -205,7 +205,7 @@ static int mlxbf_gige_stop(struct net_device *netdev) } static int mlxbf_gige_eth_ioctl(struct net_device *netdev, - struct ifreq *ifr, int cmd) + struct ifreq *ifr, int cmd) { if (!(netif_running(netdev))) return -EINVAL; @@ -263,13 +263,99 @@ static const struct net_device_ops mlxbf_gige_netdev_ops = { .ndo_get_stats64 = mlxbf_gige_get_stats64, }; -static void mlxbf_gige_adjust_link(struct net_device *netdev) +static void mlxbf_gige_bf2_adjust_link(struct net_device *netdev) { struct phy_device *phydev = netdev->phydev; phy_print_status(phydev); } +static void mlxbf_gige_bf3_adjust_link(struct net_device *netdev) +{ + struct mlxbf_gige *priv = netdev_priv(netdev); + struct phy_device *phydev = netdev->phydev; + u8 sgmii_mode; + u16 ipg_size; + u32 val; + + if (phydev->link && phydev->speed != priv->prev_speed) { + switch (phydev->speed) { + case 1000: + ipg_size = MLXBF_GIGE_1G_IPG_SIZE; + sgmii_mode = MLXBF_GIGE_1G_SGMII_MODE; + break; + case 100: + ipg_size = MLXBF_GIGE_100M_IPG_SIZE; + sgmii_mode = MLXBF_GIGE_100M_SGMII_MODE; + break; + case 10: + ipg_size = MLXBF_GIGE_10M_IPG_SIZE; + sgmii_mode = MLXBF_GIGE_10M_SGMII_MODE; + break; + default: + return; + } + + val = readl(priv->plu_base + MLXBF_GIGE_PLU_TX_REG0); + val &= ~(MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK | MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK); + val |= FIELD_PREP(MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK, ipg_size); + val |= FIELD_PREP(MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK, sgmii_mode); + writel(val, priv->plu_base + MLXBF_GIGE_PLU_TX_REG0); + + val = readl(priv->plu_base + MLXBF_GIGE_PLU_RX_REG0); + val &= ~MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK; + val |= 
FIELD_PREP(MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK, sgmii_mode); + writel(val, priv->plu_base + MLXBF_GIGE_PLU_RX_REG0); + + priv->prev_speed = phydev->speed; + } + + phy_print_status(phydev); +} + +static void mlxbf_gige_bf2_set_phy_link_mode(struct phy_device *phydev) +{ + /* MAC only supports 1000T full duplex mode */ + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT); + + /* Only symmetric pause with flow control enabled is supported so no + * need to negotiate pause. + */ + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising); +} + +static void mlxbf_gige_bf3_set_phy_link_mode(struct phy_device *phydev) +{ + /* MAC only supports full duplex mode */ + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT); + + /* Only symmetric pause with flow control enabled is supported so no + * need to negotiate pause. + */ + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising); +} + +static struct mlxbf_gige_link_cfg mlxbf_gige_link_cfgs[] = { + [MLXBF_GIGE_VERSION_BF2] = { + .set_phy_link_mode = mlxbf_gige_bf2_set_phy_link_mode, + .adjust_link = mlxbf_gige_bf2_adjust_link, + .phy_mode = PHY_INTERFACE_MODE_GMII + }, + [MLXBF_GIGE_VERSION_BF3] = { + .set_phy_link_mode = mlxbf_gige_bf3_set_phy_link_mode, + .adjust_link = mlxbf_gige_bf3_adjust_link, + .phy_mode = PHY_INTERFACE_MODE_SGMII + } +}; + static int mlxbf_gige_probe(struct platform_device *pdev) { struct phy_device *phydev; @@ -315,6 +401,8 @@ static int mlxbf_gige_probe(struct platform_device *pdev) spin_lock_init(&priv->lock); + priv->hw_version = readq(base + MLXBF_GIGE_VERSION); + /* Attach MDIO device */ err = mlxbf_gige_mdio_probe(pdev, priv); if (err) @@ -357,25 +445,14 @@ static int mlxbf_gige_probe(struct platform_device *pdev) phydev->irq = phy_irq; err = phy_connect_direct(netdev, phydev, - mlxbf_gige_adjust_link, - PHY_INTERFACE_MODE_GMII); + mlxbf_gige_link_cfgs[priv->hw_version].adjust_link, + mlxbf_gige_link_cfgs[priv->hw_version].phy_mode); if (err) { dev_err(&pdev->dev, "Could not attach to PHY\n"); goto out; } - /* MAC only supports 1000T full duplex mode */ - phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); - phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT); - phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); - phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT); - phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT); - - /* Only symmetric pause with flow control enabled is supported so no - * need to negotiate pause. 
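
The BlueField-3 adjust_link above is a classic masked read-modify-write: read the PLU register, clear only the IPG and SGMII-mode fields, FIELD_PREP the new values in, and write back. A standalone sketch of that pattern with hand-rolled GENMASK/FIELD_PREP analogues, reusing the GENMASK(11, 0) and GENMASK(15, 14) field positions that MLXBF_GIGE_PLU_TX_REG0 declares later in this series:

    #include <stdint.h>
    #include <stdio.h>

    /* Hand-rolled analogues of the kernel's GENMASK()/FIELD_PREP();
     * __builtin_ctz() is a GCC/Clang builtin. */
    #define GENMASK32(h, l)    (((~0u) >> (31 - (h))) & ~((1u << (l)) - 1u))
    #define FIELD_PREP32(m, v) (((uint32_t)(v) << __builtin_ctz(m)) & (m))

    #define IPG_SIZE_MASK   GENMASK32(11, 0)
    #define SGMII_MODE_MASK GENMASK32(15, 14)

    /* Update two fields of a 32-bit register image, leaving the rest intact. */
    static uint32_t set_speed_fields(uint32_t reg, uint32_t ipg, uint32_t mode)
    {
        reg &= ~(IPG_SIZE_MASK | SGMII_MODE_MASK);  /* clear old fields */
        reg |= FIELD_PREP32(IPG_SIZE_MASK, ipg);
        reg |= FIELD_PREP32(SGMII_MODE_MASK, mode);
        return reg;
    }

    int main(void)
    {
        uint32_t reg = 0xffffffff;

        printf("0x%08x\n", set_speed_fields(reg, 119, 2)); /* 100M values */
        return 0;
    }
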
- */ - linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->advertising); - linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->advertising); + mlxbf_gige_link_cfgs[priv->hw_version].set_phy_link_mode(phydev); /* Display information about attached PHY device */ phy_attached_info(phydev); diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c index aa780b1614a3..654190263535 100644 --- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c @@ -23,9 +23,75 @@ #include "mlxbf_gige.h" #include "mlxbf_gige_regs.h" +#include "mlxbf_gige_mdio_bf2.h" +#include "mlxbf_gige_mdio_bf3.h" -#define MLXBF_GIGE_MDIO_GW_OFFSET 0x0 -#define MLXBF_GIGE_MDIO_CFG_OFFSET 0x4 +static struct mlxbf_gige_mdio_gw mlxbf_gige_mdio_gw_t[] = { + [MLXBF_GIGE_VERSION_BF2] = { + .gw_address = MLXBF2_GIGE_MDIO_GW_OFFSET, + .read_data_address = MLXBF2_GIGE_MDIO_GW_OFFSET, + .busy = { + .mask = MLXBF2_GIGE_MDIO_GW_BUSY_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_BUSY_SHIFT, + }, + .read_data = { + .mask = MLXBF2_GIGE_MDIO_GW_AD_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_AD_SHIFT, + }, + .write_data = { + .mask = MLXBF2_GIGE_MDIO_GW_AD_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_AD_SHIFT, + }, + .devad = { + .mask = MLXBF2_GIGE_MDIO_GW_DEVAD_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_DEVAD_SHIFT, + }, + .partad = { + .mask = MLXBF2_GIGE_MDIO_GW_PARTAD_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_PARTAD_SHIFT, + }, + .opcode = { + .mask = MLXBF2_GIGE_MDIO_GW_OPCODE_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_OPCODE_SHIFT, + }, + .st1 = { + .mask = MLXBF2_GIGE_MDIO_GW_ST1_MASK, + .shift = MLXBF2_GIGE_MDIO_GW_ST1_SHIFT, + }, + }, + [MLXBF_GIGE_VERSION_BF3] = { + .gw_address = MLXBF3_GIGE_MDIO_GW_OFFSET, + .read_data_address = MLXBF3_GIGE_MDIO_DATA_READ, + .busy = { + .mask = MLXBF3_GIGE_MDIO_GW_BUSY_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_BUSY_SHIFT, + }, + .read_data = { + .mask = MLXBF3_GIGE_MDIO_GW_DATA_READ_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_DATA_READ_SHIFT, + }, + .write_data = { + .mask = MLXBF3_GIGE_MDIO_GW_DATA_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_DATA_SHIFT, + }, + .devad = { + .mask = MLXBF3_GIGE_MDIO_GW_DEVAD_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_DEVAD_SHIFT, + }, + .partad = { + .mask = MLXBF3_GIGE_MDIO_GW_PARTAD_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_PARTAD_SHIFT, + }, + .opcode = { + .mask = MLXBF3_GIGE_MDIO_GW_OPCODE_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_OPCODE_SHIFT, + }, + .st1 = { + .mask = MLXBF3_GIGE_MDIO_GW_ST1_MASK, + .shift = MLXBF3_GIGE_MDIO_GW_ST1_SHIFT, + }, + }, +}; #define MLXBF_GIGE_MDIO_FREQ_REFERENCE 156250000ULL #define MLXBF_GIGE_MDIO_COREPLL_CONST 16384ULL @@ -47,30 +113,10 @@ /* Busy bit is set by software and cleared by hardware */ #define MLXBF_GIGE_MDIO_SET_BUSY 0x1 -/* MDIO GW register bits */ -#define MLXBF_GIGE_MDIO_GW_AD_MASK GENMASK(15, 0) -#define MLXBF_GIGE_MDIO_GW_DEVAD_MASK GENMASK(20, 16) -#define MLXBF_GIGE_MDIO_GW_PARTAD_MASK GENMASK(25, 21) -#define MLXBF_GIGE_MDIO_GW_OPCODE_MASK GENMASK(27, 26) -#define MLXBF_GIGE_MDIO_GW_ST1_MASK GENMASK(28, 28) -#define MLXBF_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30) - -/* MDIO config register bits */ -#define MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0) -#define MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK GENMASK(2, 2) -#define MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(4, 4) -#define MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(15, 8) -#define MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(23, 16) -#define 
MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(31, 24) - -#define MLXBF_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \ - FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \ - FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \ - FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \ - FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13)) - #define MLXBF_GIGE_BF2_COREPLL_ADDR 0x02800c30 #define MLXBF_GIGE_BF2_COREPLL_SIZE 0x0000000c +#define MLXBF_GIGE_BF3_COREPLL_ADDR 0x13409824 +#define MLXBF_GIGE_BF3_COREPLL_SIZE 0x00000010 static struct resource corepll_params[] = { [MLXBF_GIGE_VERSION_BF2] = { @@ -78,6 +124,11 @@ static struct resource corepll_params[] = { .end = MLXBF_GIGE_BF2_COREPLL_ADDR + MLXBF_GIGE_BF2_COREPLL_SIZE - 1, .name = "COREPLL_RES" }, + [MLXBF_GIGE_VERSION_BF3] = { + .start = MLXBF_GIGE_BF3_COREPLL_ADDR, + .end = MLXBF_GIGE_BF3_COREPLL_ADDR + MLXBF_GIGE_BF3_COREPLL_SIZE - 1, + .name = "COREPLL_RES" + } }; /* Returns core clock i1clk in Hz */ @@ -134,19 +185,23 @@ static u8 mdio_period_map(struct mlxbf_gige *priv) return mdio_period; } -static u32 mlxbf_gige_mdio_create_cmd(u16 data, int phy_add, +static u32 mlxbf_gige_mdio_create_cmd(struct mlxbf_gige_mdio_gw *mdio_gw, u16 data, int phy_add, int phy_reg, u32 opcode) { u32 gw_reg = 0; - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_AD_MASK, data); - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_DEVAD_MASK, phy_reg); - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_PARTAD_MASK, phy_add); - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_OPCODE_MASK, opcode); - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_ST1_MASK, - MLXBF_GIGE_MDIO_CL22_ST1); - gw_reg |= FIELD_PREP(MLXBF_GIGE_MDIO_GW_BUSY_MASK, - MLXBF_GIGE_MDIO_SET_BUSY); + gw_reg |= ((data << mdio_gw->write_data.shift) & + mdio_gw->write_data.mask); + gw_reg |= ((phy_reg << mdio_gw->devad.shift) & + mdio_gw->devad.mask); + gw_reg |= ((phy_add << mdio_gw->partad.shift) & + mdio_gw->partad.mask); + gw_reg |= ((opcode << mdio_gw->opcode.shift) & + mdio_gw->opcode.mask); + gw_reg |= ((MLXBF_GIGE_MDIO_CL22_ST1 << mdio_gw->st1.shift) & + mdio_gw->st1.mask); + gw_reg |= ((MLXBF_GIGE_MDIO_SET_BUSY << mdio_gw->busy.shift) & + mdio_gw->busy.mask); return gw_reg; } @@ -158,29 +213,27 @@ static int mlxbf_gige_mdio_read(struct mii_bus *bus, int phy_add, int phy_reg) int ret; u32 val; - if (phy_reg & MII_ADDR_C45) - return -EOPNOTSUPP; - /* Send mdio read request */ - cmd = mlxbf_gige_mdio_create_cmd(0, phy_add, phy_reg, MLXBF_GIGE_MDIO_CL22_READ); + cmd = mlxbf_gige_mdio_create_cmd(priv->mdio_gw, 0, phy_add, phy_reg, + MLXBF_GIGE_MDIO_CL22_READ); - writel(cmd, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + writel(cmd, priv->mdio_io + priv->mdio_gw->gw_address); - ret = readl_poll_timeout_atomic(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET, - val, !(val & MLXBF_GIGE_MDIO_GW_BUSY_MASK), + ret = readl_poll_timeout_atomic(priv->mdio_io + priv->mdio_gw->gw_address, + val, !(val & priv->mdio_gw->busy.mask), 5, 1000000); if (ret) { - writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + writel(0, priv->mdio_io + priv->mdio_gw->gw_address); return ret; } - ret = readl(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + ret = readl(priv->mdio_io + priv->mdio_gw->read_data_address); /* Only return ad bits of the gw register */ - ret &= MLXBF_GIGE_MDIO_GW_AD_MASK; + ret &= priv->mdio_gw->read_data.mask; /* The MDIO lock is set on read. 
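
Because BF2 and BF3 place the same logical MDIO fields at different bit positions, mlxbf_gige_mdio_create_cmd now packs the command from a per-revision table of {mask, shift} pairs instead of compile-time FIELD_PREP constants. A compact sketch of that table-driven packing (the two layouts below are illustrative, not the real BF2/BF3 offsets):

    #include <stdint.h>
    #include <stdio.h>

    struct reg_param {
        uint32_t mask;
        uint32_t shift;
    };

    struct gw_layout {
        struct reg_param opcode;
        struct reg_param addr;
        struct reg_param data;
    };

    /* Two hardware revisions: same fields, different positions. */
    static const struct gw_layout layouts[] = {
        { .opcode = { 0x0c000000, 26 }, .addr = { 0x03e00000, 21 }, .data = { 0x0000ffff, 0 } },
        { .opcode = { 0x0000000c, 2 },  .addr = { 0x000001f0, 4 },  .data = { 0x3fffc000, 14 } },
    };

    static uint32_t pack_cmd(const struct gw_layout *gw, uint32_t op,
                             uint32_t addr, uint32_t data)
    {
        uint32_t cmd = 0;

        cmd |= (op << gw->opcode.shift) & gw->opcode.mask;
        cmd |= (addr << gw->addr.shift) & gw->addr.mask;
        cmd |= (data << gw->data.shift) & gw->data.mask;
        return cmd;
    }

    int main(void)
    {
        /* Same logical command, packed for each revision. */
        printf("v0: 0x%08x\n", pack_cmd(&layouts[0], 1, 5, 0xabcd));
        printf("v1: 0x%08x\n", pack_cmd(&layouts[1], 1, 5, 0xabcd));
        return 0;
    }
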
To release it, clear gw register */ - writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + writel(0, priv->mdio_io + priv->mdio_gw->gw_address); return ret; } @@ -193,21 +246,18 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add, u32 cmd; int ret; - if (phy_reg & MII_ADDR_C45) - return -EOPNOTSUPP; - /* Send mdio write request */ - cmd = mlxbf_gige_mdio_create_cmd(val, phy_add, phy_reg, + cmd = mlxbf_gige_mdio_create_cmd(priv->mdio_gw, val, phy_add, phy_reg, MLXBF_GIGE_MDIO_CL22_WRITE); - writel(cmd, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + writel(cmd, priv->mdio_io + priv->mdio_gw->gw_address); /* If the poll timed out, drop the request */ - ret = readl_poll_timeout_atomic(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET, - temp, !(temp & MLXBF_GIGE_MDIO_GW_BUSY_MASK), + ret = readl_poll_timeout_atomic(priv->mdio_io + priv->mdio_gw->gw_address, + temp, !(temp & priv->mdio_gw->busy.mask), 5, 1000000); /* The MDIO lock is set on read. To release it, clear gw register */ - writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET); + writel(0, priv->mdio_io + priv->mdio_gw->gw_address); return ret; } @@ -219,9 +269,20 @@ static void mlxbf_gige_mdio_cfg(struct mlxbf_gige *priv) mdio_period = mdio_period_map(priv); - val = MLXBF_GIGE_MDIO_CFG_VAL; - val |= FIELD_PREP(MLXBF_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period); - writel(val, priv->mdio_io + MLXBF_GIGE_MDIO_CFG_OFFSET); + if (priv->hw_version == MLXBF_GIGE_VERSION_BF2) { + val = MLXBF2_GIGE_MDIO_CFG_VAL; + val |= FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period); + writel(val, priv->mdio_io + MLXBF2_GIGE_MDIO_CFG_OFFSET); + } else { + val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | + FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1); + writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG0); + val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDC_PERIOD_MASK, mdio_period); + writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG1); + val = FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | + FIELD_PREP(MLXBF3_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13); + writel(val, priv->mdio_io + MLXBF3_GIGE_MDIO_CFG_REG2); + } } int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv) @@ -230,6 +291,9 @@ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv) struct resource *res; int ret; + if (priv->hw_version > MLXBF_GIGE_VERSION_BF3) + return -ENODEV; + priv->mdio_io = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MDIO9); if (IS_ERR(priv->mdio_io)) return PTR_ERR(priv->mdio_io); @@ -242,13 +306,15 @@ int mlxbf_gige_mdio_probe(struct platform_device *pdev, struct mlxbf_gige *priv) /* For backward compatibility with older ACPI tables, also keep * CLK resource internal to the driver. 
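
Both MDIO paths write the command with the busy bit set, then poll the gateway register until hardware clears it, giving readl_poll_timeout_atomic() a 5 us step and a 1 s budget, and drop the request by zeroing the register on timeout. A userspace approximation of that poll loop against a simulated register (the kernel helper does this bookkeeping internally):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define BUSY_BIT (1u << 30)

    static volatile uint32_t fake_gw;   /* stands in for the MMIO register */

    static void sleep_us(long us)
    {
        struct timespec ts = { us / 1000000, (us % 1000000) * 1000 };

        nanosleep(&ts, NULL);
    }

    /* Poll until the busy bit clears or the timeout budget is spent,
     * mirroring the readl_poll_timeout_atomic() shape. */
    static int poll_not_busy(long step_us, long timeout_us)
    {
        long waited = 0;

        for (;;) {
            if (!(fake_gw & BUSY_BIT))
                return 0;
            if (waited >= timeout_us)
                return -1;      /* caller drops the request */
            sleep_us(step_us);
            waited += step_us;
        }
    }

    int main(void)
    {
        fake_gw = 0;            /* hardware already idle */
        printf("poll: %d\n", poll_not_busy(5, 1000000));
        return 0;
    }
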
*/ - res = &corepll_params[MLXBF_GIGE_VERSION_BF2]; + res = &corepll_params[priv->hw_version]; } priv->clk_io = devm_ioremap(dev, res->start, resource_size(res)); if (!priv->clk_io) return -ENOMEM; + priv->mdio_gw = &mlxbf_gige_mdio_gw_t[priv->hw_version]; + mlxbf_gige_mdio_cfg(priv); priv->mdiobus = devm_mdiobus_alloc(dev); diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h new file mode 100644 index 000000000000..7f1ff0ac7699 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf2.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */ + +/* MDIO support for Mellanox Gigabit Ethernet driver + * + * Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES, ALL RIGHTS RESERVED. + * + * This software product is a proprietary product of NVIDIA CORPORATION & + * AFFILIATES (the "Company") and all right, title, and interest in and to the + * software product, including all associated intellectual property rights, are + * and shall remain exclusively with the Company. + * + * This software product is governed by the End User License Agreement + * provided with the software product. + */ + +#ifndef __MLXBF_GIGE_MDIO_BF2_H__ +#define __MLXBF_GIGE_MDIO_BF2_H__ + +#include <linux/bitfield.h> + +#define MLXBF2_GIGE_MDIO_GW_OFFSET 0x0 +#define MLXBF2_GIGE_MDIO_CFG_OFFSET 0x4 + +/* MDIO GW register bits */ +#define MLXBF2_GIGE_MDIO_GW_AD_MASK GENMASK(15, 0) +#define MLXBF2_GIGE_MDIO_GW_DEVAD_MASK GENMASK(20, 16) +#define MLXBF2_GIGE_MDIO_GW_PARTAD_MASK GENMASK(25, 21) +#define MLXBF2_GIGE_MDIO_GW_OPCODE_MASK GENMASK(27, 26) +#define MLXBF2_GIGE_MDIO_GW_ST1_MASK GENMASK(28, 28) +#define MLXBF2_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30) + +#define MLXBF2_GIGE_MDIO_GW_AD_SHIFT 0 +#define MLXBF2_GIGE_MDIO_GW_DEVAD_SHIFT 16 +#define MLXBF2_GIGE_MDIO_GW_PARTAD_SHIFT 21 +#define MLXBF2_GIGE_MDIO_GW_OPCODE_SHIFT 26 +#define MLXBF2_GIGE_MDIO_GW_ST1_SHIFT 28 +#define MLXBF2_GIGE_MDIO_GW_BUSY_SHIFT 30 + +/* MDIO config register bits */ +#define MLXBF2_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0) +#define MLXBF2_GIGE_MDIO_CFG_MDIO3_3_MASK GENMASK(2, 2) +#define MLXBF2_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(4, 4) +#define MLXBF2_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(15, 8) +#define MLXBF2_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(23, 16) +#define MLXBF2_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(31, 24) + +#define MLXBF2_GIGE_MDIO_CFG_VAL (FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_MODE_MASK, 1) | \ + FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO3_3_MASK, 1) | \ + FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK, 1) | \ + FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK, 6) | \ + FIELD_PREP(MLXBF2_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK, 13)) + +#endif /* __MLXBF_GIGE_MDIO_BF2_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h new file mode 100644 index 000000000000..9dd9144b9173 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio_bf3.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */ + +/* MDIO support for Mellanox Gigabit Ethernet driver + * + * Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES, ALL RIGHTS RESERVED. 
+ * + * This software product is a proprietary product of NVIDIA CORPORATION & + * AFFILIATES (the "Company") and all right, title, and interest in and to the + * software product, including all associated intellectual property rights, are + * and shall remain exclusively with the Company. + * + * This software product is governed by the End User License Agreement + * provided with the software product. + */ + +#ifndef __MLXBF_GIGE_MDIO_BF3_H__ +#define __MLXBF_GIGE_MDIO_BF3_H__ + +#include <linux/bitfield.h> + +#define MLXBF3_GIGE_MDIO_GW_OFFSET 0x80 +#define MLXBF3_GIGE_MDIO_DATA_READ 0x8c +#define MLXBF3_GIGE_MDIO_CFG_REG0 0x100 +#define MLXBF3_GIGE_MDIO_CFG_REG1 0x104 +#define MLXBF3_GIGE_MDIO_CFG_REG2 0x108 + +/* MDIO GW register bits */ +#define MLXBF3_GIGE_MDIO_GW_ST1_MASK GENMASK(1, 1) +#define MLXBF3_GIGE_MDIO_GW_OPCODE_MASK GENMASK(3, 2) +#define MLXBF3_GIGE_MDIO_GW_PARTAD_MASK GENMASK(8, 4) +#define MLXBF3_GIGE_MDIO_GW_DEVAD_MASK GENMASK(13, 9) +/* For BlueField-3, this field is only used for mdio write */ +#define MLXBF3_GIGE_MDIO_GW_DATA_MASK GENMASK(29, 14) +#define MLXBF3_GIGE_MDIO_GW_BUSY_MASK GENMASK(30, 30) + +#define MLXBF3_GIGE_MDIO_GW_DATA_READ_MASK GENMASK(15, 0) + +#define MLXBF3_GIGE_MDIO_GW_ST1_SHIFT 1 +#define MLXBF3_GIGE_MDIO_GW_OPCODE_SHIFT 2 +#define MLXBF3_GIGE_MDIO_GW_PARTAD_SHIFT 4 +#define MLXBF3_GIGE_MDIO_GW_DEVAD_SHIFT 9 +#define MLXBF3_GIGE_MDIO_GW_DATA_SHIFT 14 +#define MLXBF3_GIGE_MDIO_GW_BUSY_SHIFT 30 + +#define MLXBF3_GIGE_MDIO_GW_DATA_READ_SHIFT 0 + +/* MDIO config register bits */ +#define MLXBF3_GIGE_MDIO_CFG_MDIO_MODE_MASK GENMASK(1, 0) +#define MLXBF3_GIGE_MDIO_CFG_MDIO_FULL_DRIVE_MASK GENMASK(2, 2) +#define MLXBF3_GIGE_MDIO_CFG_MDC_PERIOD_MASK GENMASK(7, 0) +#define MLXBF3_GIGE_MDIO_CFG_MDIO_IN_SAMP_MASK GENMASK(7, 0) +#define MLXBF3_GIGE_MDIO_CFG_MDIO_OUT_SAMP_MASK GENMASK(15, 8) + +#endif /* __MLXBF_GIGE_MDIO_BF3_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h index 7be3a793984d..cd0973229c9b 100644 --- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h +++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h @@ -8,8 +8,11 @@ #ifndef __MLXBF_GIGE_REGS_H__ #define __MLXBF_GIGE_REGS_H__ +#include <linux/bitfield.h> + #define MLXBF_GIGE_VERSION 0x0000 #define MLXBF_GIGE_VERSION_BF2 0x0 +#define MLXBF_GIGE_VERSION_BF3 0x1 #define MLXBF_GIGE_STATUS 0x0010 #define MLXBF_GIGE_STATUS_READY BIT(0) #define MLXBF_GIGE_INT_STATUS 0x0028 @@ -77,4 +80,23 @@ */ #define MLXBF_GIGE_MMIO_REG_SZ (MLXBF_GIGE_MAC_CFG + 8) +#define MLXBF_GIGE_PLU_TX_REG0 0x80 +#define MLXBF_GIGE_PLU_TX_IPG_SIZE_MASK GENMASK(11, 0) +#define MLXBF_GIGE_PLU_TX_SGMII_MODE_MASK GENMASK(15, 14) + +#define MLXBF_GIGE_PLU_RX_REG0 0x10 +#define MLXBF_GIGE_PLU_RX_SGMII_MODE_MASK GENMASK(25, 24) + +#define MLXBF_GIGE_1G_SGMII_MODE 0x0 +#define MLXBF_GIGE_10M_SGMII_MODE 0x1 +#define MLXBF_GIGE_100M_SGMII_MODE 0x2 + +/* ipg_size default value for 1G is fixed by HW to 11 + End = 12. 
+ * So for 100M it is 12 * 10 - 1 = 119 + * For 10M, it is 12 * 100 - 1 = 1199 + */ +#define MLXBF_GIGE_1G_IPG_SIZE 11 +#define MLXBF_GIGE_100M_IPG_SIZE 119 +#define MLXBF_GIGE_10M_IPG_SIZE 1199 + #endif /* !defined(__MLXBF_GIGE_REGS_H__) */ diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c index a0a06e2eff82..22db0bb15c45 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core.c @@ -78,6 +78,7 @@ struct mlxsw_core { spinlock_t trans_list_lock; /* protects trans_list writes */ bool use_emad; bool enable_string_tlv; + bool enable_latency_tlv; } emad; struct { u16 *mapping; /* lag_id+port_index to local_port mapping */ @@ -378,6 +379,22 @@ MLXSW_ITEM32(emad, string_tlv, len, 0x00, 16, 11); MLXSW_ITEM_BUF(emad, string_tlv, string, 0x04, MLXSW_EMAD_STRING_TLV_STRING_LEN); +/* emad_latency_tlv_type + * Type of the TLV. + * Must be set to 0x4 (latency TLV). + */ +MLXSW_ITEM32(emad, latency_tlv, type, 0x00, 27, 5); + +/* emad_latency_tlv_len + * Length of the latency TLV in u32. + */ +MLXSW_ITEM32(emad, latency_tlv, len, 0x00, 16, 11); + +/* emad_latency_tlv_latency_time + * EMAD latency time in units of uSec. + */ +MLXSW_ITEM32(emad, latency_tlv, latency_time, 0x04, 0, 32); + /* emad_reg_tlv_type * Type of the TLV. * Must be set to 0x3 (register TLV). @@ -461,6 +478,12 @@ static void mlxsw_emad_pack_op_tlv(char *op_tlv, mlxsw_emad_op_tlv_tid_set(op_tlv, tid); } +static void mlxsw_emad_pack_latency_tlv(char *latency_tlv) +{ + mlxsw_emad_latency_tlv_type_set(latency_tlv, MLXSW_EMAD_TLV_TYPE_LATENCY); + mlxsw_emad_latency_tlv_len_set(latency_tlv, MLXSW_EMAD_LATENCY_TLV_LEN); +} + static int mlxsw_emad_construct_eth_hdr(struct sk_buff *skb) { char *eth_hdr = skb_push(skb, MLXSW_EMAD_ETH_HDR_LEN); @@ -476,11 +499,11 @@ static int mlxsw_emad_construct_eth_hdr(struct sk_buff *skb) return 0; } -static void mlxsw_emad_construct(struct sk_buff *skb, +static void mlxsw_emad_construct(const struct mlxsw_core *mlxsw_core, + struct sk_buff *skb, const struct mlxsw_reg_info *reg, char *payload, - enum mlxsw_core_reg_access_type type, - u64 tid, bool enable_string_tlv) + enum mlxsw_core_reg_access_type type, u64 tid) { char *buf; @@ -490,7 +513,12 @@ static void mlxsw_emad_construct(struct sk_buff *skb, buf = skb_push(skb, reg->len + sizeof(u32)); mlxsw_emad_pack_reg_tlv(buf, reg, payload); - if (enable_string_tlv) { + if (mlxsw_core->emad.enable_latency_tlv) { + buf = skb_push(skb, MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32)); + mlxsw_emad_pack_latency_tlv(buf); + } + + if (mlxsw_core->emad.enable_string_tlv) { buf = skb_push(skb, MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32)); mlxsw_emad_pack_string_tlv(buf); } @@ -504,6 +532,7 @@ static void mlxsw_emad_construct(struct sk_buff *skb, struct mlxsw_emad_tlv_offsets { u16 op_tlv; u16 string_tlv; + u16 latency_tlv; u16 reg_tlv; }; @@ -514,6 +543,13 @@ static bool mlxsw_emad_tlv_is_string_tlv(const char *tlv) return tlv_type == MLXSW_EMAD_TLV_TYPE_STRING; } +static bool mlxsw_emad_tlv_is_latency_tlv(const char *tlv) +{ + u8 tlv_type = mlxsw_emad_latency_tlv_type_get(tlv); + + return tlv_type == MLXSW_EMAD_TLV_TYPE_LATENCY; +} + static void mlxsw_emad_tlv_parse(struct sk_buff *skb) { struct mlxsw_emad_tlv_offsets *offsets = @@ -521,6 +557,8 @@ static void mlxsw_emad_tlv_parse(struct sk_buff *skb) offsets->op_tlv = MLXSW_EMAD_ETH_HDR_LEN; offsets->string_tlv = 0; + offsets->latency_tlv = 0; + offsets->reg_tlv = MLXSW_EMAD_ETH_HDR_LEN + MLXSW_EMAD_OP_TLV_LEN * sizeof(u32); @@ 
-529,6 +567,11 @@ static void mlxsw_emad_tlv_parse(struct sk_buff *skb) offsets->string_tlv = offsets->reg_tlv; offsets->reg_tlv += MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32); } + + if (mlxsw_emad_tlv_is_latency_tlv(skb->data + offsets->reg_tlv)) { + offsets->latency_tlv = offsets->reg_tlv; + offsets->reg_tlv += MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32); + } } static char *mlxsw_emad_op_tlv(const struct sk_buff *skb) @@ -794,6 +837,32 @@ static const struct mlxsw_listener mlxsw_emad_rx_listener = MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false, EMAD, DISCARD); +static int mlxsw_emad_tlv_enable(struct mlxsw_core *mlxsw_core) +{ + char mgir_pl[MLXSW_REG_MGIR_LEN]; + bool string_tlv, latency_tlv; + int err; + + mlxsw_reg_mgir_pack(mgir_pl); + err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(mgir), mgir_pl); + if (err) + return err; + + string_tlv = mlxsw_reg_mgir_fw_info_string_tlv_get(mgir_pl); + mlxsw_core->emad.enable_string_tlv = string_tlv; + + latency_tlv = mlxsw_reg_mgir_fw_info_latency_tlv_get(mgir_pl); + mlxsw_core->emad.enable_latency_tlv = latency_tlv; + + return 0; +} + +static void mlxsw_emad_tlv_disable(struct mlxsw_core *mlxsw_core) +{ + mlxsw_core->emad.enable_latency_tlv = false; + mlxsw_core->emad.enable_string_tlv = false; +} + static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core) { struct workqueue_struct *emad_wq; @@ -824,10 +893,17 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core) if (err) goto err_trap_register; + err = mlxsw_emad_tlv_enable(mlxsw_core); + if (err) + goto err_emad_tlv_enable; + mlxsw_core->emad.use_emad = true; return 0; +err_emad_tlv_enable: + mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener, + mlxsw_core); err_trap_register: destroy_workqueue(mlxsw_core->emad_wq); return err; @@ -840,13 +916,14 @@ static void mlxsw_emad_fini(struct mlxsw_core *mlxsw_core) return; mlxsw_core->emad.use_emad = false; + mlxsw_emad_tlv_disable(mlxsw_core); mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener, mlxsw_core); destroy_workqueue(mlxsw_core->emad_wq); } static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core, - u16 reg_len, bool enable_string_tlv) + u16 reg_len) { struct sk_buff *skb; u16 emad_len; @@ -854,8 +931,10 @@ static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core, emad_len = (reg_len + sizeof(u32) + MLXSW_EMAD_ETH_HDR_LEN + (MLXSW_EMAD_OP_TLV_LEN + MLXSW_EMAD_END_TLV_LEN) * sizeof(u32) + mlxsw_core->driver->txhdr_len); - if (enable_string_tlv) + if (mlxsw_core->emad.enable_string_tlv) emad_len += MLXSW_EMAD_STRING_TLV_LEN * sizeof(u32); + if (mlxsw_core->emad.enable_latency_tlv) + emad_len += MLXSW_EMAD_LATENCY_TLV_LEN * sizeof(u32); if (emad_len > MLXSW_EMAD_MAX_FRAME_LEN) return NULL; @@ -877,7 +956,6 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core, mlxsw_reg_trans_cb_t *cb, unsigned long cb_priv, u64 tid) { - bool enable_string_tlv; struct sk_buff *skb; int err; @@ -885,12 +963,7 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core, tid, reg->id, mlxsw_reg_id_str(reg->id), mlxsw_core_reg_access_type_str(type)); - /* Since this can be changed during emad_reg_access, read it once and - * use the value all the way. 
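
mlxsw_emad_tlv_enable queries MGIR once at EMAD init and caches which optional TLVs the firmware supports, so mlxsw_emad_alloc and mlxsw_emad_construct can consult the same cached flags instead of passing a snapshot around. A sketch of the resulting length accounting, using the string/latency/end word counts from this series (the OP-TLV length of 4 words is an assumption, not shown in these hunks):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define OP_TLV_WORDS       4   /* assumed; not shown in this hunk */
    #define STRING_TLV_WORDS  33
    #define LATENCY_TLV_WORDS  7
    #define END_TLV_WORDS      1

    struct emad_caps {
        bool string_tlv;
        bool latency_tlv;
    };

    /* Frame length in bytes for a register payload of reg_words u32s,
     * growing by whichever optional TLVs the device advertised. */
    static size_t emad_frame_bytes(const struct emad_caps *caps, size_t reg_words)
    {
        size_t words = OP_TLV_WORDS + 1 /* reg TLV header */ + reg_words +
                       END_TLV_WORDS;

        if (caps->string_tlv)
            words += STRING_TLV_WORDS;
        if (caps->latency_tlv)
            words += LATENCY_TLV_WORDS;
        return words * sizeof(uint32_t);
    }

    int main(void)
    {
        struct emad_caps caps = { .string_tlv = true, .latency_tlv = true };

        printf("%zu bytes\n", emad_frame_bytes(&caps, 16));
        return 0;
    }
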
- */ - enable_string_tlv = mlxsw_core->emad.enable_string_tlv; - - skb = mlxsw_emad_alloc(mlxsw_core, reg->len, enable_string_tlv); + skb = mlxsw_emad_alloc(mlxsw_core, reg->len); if (!skb) return -ENOMEM; @@ -907,8 +980,7 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core, trans->reg = reg; trans->type = type; - mlxsw_emad_construct(skb, reg, payload, type, trans->tid, - enable_string_tlv); + mlxsw_emad_construct(mlxsw_core, skb, reg, payload, type, trans->tid); mlxsw_core->driver->txhdr_construct(skb, &trans->tx_info); spin_lock_bh(&mlxsw_core->emad.trans_list_lock); @@ -1171,9 +1243,9 @@ static int mlxsw_core_fw_rev_validate(struct mlxsw_core *mlxsw_core, return 0; /* Don't check if devlink 'fw_load_policy' param is 'flash' */ - err = devlink_param_driverinit_value_get(priv_to_devlink(mlxsw_core), - DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, - &value); + err = devl_param_driverinit_value_get(priv_to_devlink(mlxsw_core), + DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, + &value); if (err) return err; if (value.vu8 == DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH) @@ -1244,20 +1316,22 @@ static int mlxsw_core_fw_params_register(struct mlxsw_core *mlxsw_core) union devlink_param_value value; int err; - err = devlink_params_register(devlink, mlxsw_core_fw_devlink_params, - ARRAY_SIZE(mlxsw_core_fw_devlink_params)); + err = devl_params_register(devlink, mlxsw_core_fw_devlink_params, + ARRAY_SIZE(mlxsw_core_fw_devlink_params)); if (err) return err; value.vu8 = DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER; - devlink_param_driverinit_value_set(devlink, DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, value); + devl_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, + value); return 0; } static void mlxsw_core_fw_params_unregister(struct mlxsw_core *mlxsw_core) { - devlink_params_unregister(priv_to_devlink(mlxsw_core), mlxsw_core_fw_devlink_params, - ARRAY_SIZE(mlxsw_core_fw_devlink_params)); + devl_params_unregister(priv_to_devlink(mlxsw_core), mlxsw_core_fw_devlink_params, + ARRAY_SIZE(mlxsw_core_fw_devlink_params)); } static void *__dl_port(struct devlink_port *devlink_port) @@ -1676,29 +1750,12 @@ static const struct devlink_ops mlxsw_devlink_ops = { static int mlxsw_core_params_register(struct mlxsw_core *mlxsw_core) { - int err; - - err = mlxsw_core_fw_params_register(mlxsw_core); - if (err) - return err; - - if (mlxsw_core->driver->params_register) { - err = mlxsw_core->driver->params_register(mlxsw_core); - if (err) - goto err_params_register; - } - return 0; - -err_params_register: - mlxsw_core_fw_params_unregister(mlxsw_core); - return err; + return mlxsw_core_fw_params_register(mlxsw_core); } static void mlxsw_core_params_unregister(struct mlxsw_core *mlxsw_core) { mlxsw_core_fw_params_unregister(mlxsw_core); - if (mlxsw_core->driver->params_register) - mlxsw_core->driver->params_unregister(mlxsw_core); } struct mlxsw_core_health_event { @@ -2051,8 +2108,8 @@ static int mlxsw_core_health_init(struct mlxsw_core *mlxsw_core) if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX)) return 0; - fw_fatal = devlink_health_reporter_create(devlink, &mlxsw_core_health_fw_fatal_ops, - 0, mlxsw_core); + fw_fatal = devl_health_reporter_create(devlink, &mlxsw_core_health_fw_fatal_ops, + 0, mlxsw_core); if (IS_ERR(fw_fatal)) { dev_err(mlxsw_core->bus_info->dev, "Failed to create fw fatal reporter"); return PTR_ERR(fw_fatal); @@ -2072,7 +2129,7 @@ static int mlxsw_core_health_init(struct mlxsw_core *mlxsw_core) err_fw_fatal_config: mlxsw_core_trap_unregister(mlxsw_core, 
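
The devlink_param_* to devl_param_* renames here follow the same convention as mlx5_load_one versus mlx5_load_one_devl_locked earlier in this merge: the devl_/_locked variant assumes the instance lock is already held, while the plain wrapper takes and releases it itself. A generic sketch of that paired-API convention (hypothetical names):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t instance_lock = PTHREAD_MUTEX_INITIALIZER;
    static int instance_state;

    /* _locked variant: caller already holds instance_lock. */
    static int state_bump_locked(void)
    {
        return ++instance_state;
    }

    /* Convenience wrapper: takes the lock, then delegates. */
    static int state_bump(void)
    {
        int v;

        pthread_mutex_lock(&instance_lock);
        v = state_bump_locked();
        pthread_mutex_unlock(&instance_lock);
        return v;
    }

    int main(void)
    {
        printf("%d\n", state_bump());

        /* A caller doing several operations takes the lock once and
         * uses the _locked variants throughout. */
        pthread_mutex_lock(&instance_lock);
        state_bump_locked();
        state_bump_locked();
        pthread_mutex_unlock(&instance_lock);
        printf("%d\n", instance_state);
        return 0;
    }
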
&mlxsw_core_health_listener, mlxsw_core); err_trap_register: - devlink_health_reporter_destroy(mlxsw_core->health.fw_fatal); + devl_health_reporter_destroy(mlxsw_core->health.fw_fatal); return err; } @@ -2085,7 +2142,7 @@ static void mlxsw_core_health_fini(struct mlxsw_core *mlxsw_core) mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_core_health_listener, mlxsw_core); /* Make sure there is no more event work scheduled */ mlxsw_core_flush_owq(); - devlink_health_reporter_destroy(mlxsw_core->health.fw_fatal); + devl_health_reporter_destroy(mlxsw_core->health.fw_fatal); } static void mlxsw_core_irq_event_handler_init(struct mlxsw_core *mlxsw_core) @@ -2127,6 +2184,7 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, goto err_devlink_alloc; } devl_lock(devlink); + devl_register(devlink); } mlxsw_core = devlink_priv(devlink); @@ -2210,11 +2268,8 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, goto err_driver_init; } - if (!reload) { - devlink_set_features(devlink, DEVLINK_F_RELOAD); + if (!reload) devl_unlock(devlink); - devlink_register(devlink); - } return 0; err_driver_init: @@ -2246,6 +2301,7 @@ err_register_resources: err_bus_init: mlxsw_core_irq_event_handler_fini(mlxsw_core); if (!reload) { + devl_unregister(devlink); devl_unlock(devlink); devlink_free(devlink); } @@ -2284,10 +2340,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, { struct devlink *devlink = priv_to_devlink(mlxsw_core); - if (!reload) { - devlink_unregister(devlink); + if (!reload) devl_lock(devlink); - } if (devlink_is_reload_failed(devlink)) { if (!reload) @@ -2316,6 +2370,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, mlxsw_core->bus->fini(mlxsw_core->bus_priv); mlxsw_core_irq_event_handler_fini(mlxsw_core); if (!reload) { + devl_unregister(devlink); devl_unlock(devlink); devlink_free(devlink); } @@ -2325,6 +2380,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, reload_fail_deinit: mlxsw_core_params_unregister(mlxsw_core); devl_resources_unregister(devlink); + devl_unregister(devlink); devl_unlock(devlink); devlink_free(devlink); } @@ -3377,12 +3433,6 @@ bool mlxsw_core_sdq_supports_cqe_v2(struct mlxsw_core *mlxsw_core) } EXPORT_SYMBOL(mlxsw_core_sdq_supports_cqe_v2); -void mlxsw_core_emad_string_tlv_enable(struct mlxsw_core *mlxsw_core) -{ - mlxsw_core->emad.enable_string_tlv = true; -} -EXPORT_SYMBOL(mlxsw_core_emad_string_tlv_enable); - static int __init mlxsw_core_module_init(void) { int err; diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h index e0a6fcbbcb19..e5474d3e34db 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.h +++ b/drivers/net/ethernet/mellanox/mlxsw/core.h @@ -421,8 +421,6 @@ struct mlxsw_driver { const struct mlxsw_config_profile *profile, u64 *p_single_size, u64 *p_double_size, u64 *p_linear_size); - int (*params_register)(struct mlxsw_core *mlxsw_core); - void (*params_unregister)(struct mlxsw_core *mlxsw_core); /* Notify a driver that a timestamped packet was transmitted. Driver * is responsible for freeing the passed-in SKB. 
@@ -448,8 +446,6 @@ u32 mlxsw_core_read_utc_nsec(struct mlxsw_core *mlxsw_core); bool mlxsw_core_sdq_supports_cqe_v2(struct mlxsw_core *mlxsw_core); -void mlxsw_core_emad_string_tlv_enable(struct mlxsw_core *mlxsw_core); - bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core, enum mlxsw_res_id res_id); diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c b/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c index 83d2dc91ba2c..025e0db983fe 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_linecards.c @@ -1259,9 +1259,9 @@ static int mlxsw_linecard_init(struct mlxsw_core *mlxsw_core, linecard->linecards = linecards; mutex_init(&linecard->lock); - devlink_linecard = devlink_linecard_create(priv_to_devlink(mlxsw_core), - slot_index, &mlxsw_linecard_ops, - linecard); + devlink_linecard = devl_linecard_create(priv_to_devlink(mlxsw_core), + slot_index, &mlxsw_linecard_ops, + linecard); if (IS_ERR(devlink_linecard)) return PTR_ERR(devlink_linecard); @@ -1285,7 +1285,7 @@ static void mlxsw_linecard_fini(struct mlxsw_core *mlxsw_core, if (linecard->active) mlxsw_linecard_active_clear(linecard); mlxsw_linecard_bdev_del(linecard); - devlink_linecard_destroy(linecard->devlink_linecard); + devl_linecard_destroy(linecard->devlink_linecard); mutex_destroy(&linecard->lock); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/emad.h b/drivers/net/ethernet/mellanox/mlxsw/emad.h index acfbbec52424..c51a61aa19b7 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/emad.h +++ b/drivers/net/ethernet/mellanox/mlxsw/emad.h @@ -21,6 +21,7 @@ enum { MLXSW_EMAD_TLV_TYPE_OP, MLXSW_EMAD_TLV_TYPE_STRING, MLXSW_EMAD_TLV_TYPE_REG, + MLXSW_EMAD_TLV_TYPE_LATENCY, }; /* OP TLV */ @@ -90,6 +91,9 @@ enum { /* STRING TLV */ #define MLXSW_EMAD_STRING_TLV_LEN 33 /* Length in u32 */ +/* LATENCY TLV */ +#define MLXSW_EMAD_LATENCY_TLV_LEN 7 /* Length in u32 */ + /* END TLV */ #define MLXSW_EMAD_END_TLV_LEN 1 /* Length in u32 */ diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h index f2d6f8654e04..8165bf31a99a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h @@ -10009,6 +10009,18 @@ MLXSW_REG_DEFINE(mgir, MLXSW_REG_MGIR_ID, MLXSW_REG_MGIR_LEN); */ MLXSW_ITEM32(reg, mgir, hw_info_device_hw_revision, 0x0, 16, 16); +/* reg_mgir_fw_info_latency_tlv + * When set, latency-TLV is supported. + * Access: RO + */ +MLXSW_ITEM32(reg, mgir, fw_info_latency_tlv, 0x20, 29, 1); + +/* reg_mgir_fw_info_string_tlv + * When set, string-TLV is supported. 
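
The MGIR capability bits below are declared with MLXSW_ITEM32(reg, mgir, ..., 0x20, 29, 1), i.e. a (byte offset, lsb, width) triple that macro-expands into typed get/set accessors. A plain-C approximation of the getter (the real accessors also handle the big-endian register image, which this host-order sketch glosses over):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Extract a field of `width` bits starting at bit `lsb` from the
     * 32-bit word at byte offset `off` of a register payload. */
    static uint32_t item32_get(const uint8_t *payload, size_t off,
                               unsigned int lsb, unsigned int width)
    {
        uint32_t word;
        uint32_t mask = (width == 32) ? ~0u : ((1u << width) - 1u);

        memcpy(&word, payload + off, sizeof(word));
        return (word >> lsb) & mask;
    }

    int main(void)
    {
        uint8_t payload[0x40] = {0};
        uint32_t word = (1u << 29) | (1u << 28); /* latency + string caps */

        memcpy(payload + 0x20, &word, sizeof(word));
        printf("latency=%u string=%u\n",
               item32_get(payload, 0x20, 29, 1),
               item32_get(payload, 0x20, 28, 1));
        return 0;
    }
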
+ * Access: RO + */ +MLXSW_ITEM32(reg, mgir, fw_info_string_tlv, 0x20, 28, 1); + #define MLXSW_REG_MGIR_FW_INFO_PSID_SIZE 16 /* reg_mgir_fw_info_psid diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index f5b2d965d476..a8f94b7544ee 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -3092,7 +3092,6 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core, mlxsw_sp->bus_info = mlxsw_bus_info; mlxsw_sp_parsing_init(mlxsw_sp); - mlxsw_core_emad_string_tlv_enable(mlxsw_core); err = mlxsw_sp_base_mac_get(mlxsw_sp); if (err) { @@ -3862,62 +3861,6 @@ static int mlxsw_sp_kvd_sizes_get(struct mlxsw_core *mlxsw_core, return 0; } -static int -mlxsw_sp_params_acl_region_rehash_intrvl_get(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlxsw_core *mlxsw_core = devlink_priv(devlink); - struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core); - - ctx->val.vu32 = mlxsw_sp_acl_region_rehash_intrvl_get(mlxsw_sp); - return 0; -} - -static int -mlxsw_sp_params_acl_region_rehash_intrvl_set(struct devlink *devlink, u32 id, - struct devlink_param_gset_ctx *ctx) -{ - struct mlxsw_core *mlxsw_core = devlink_priv(devlink); - struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core); - - return mlxsw_sp_acl_region_rehash_intrvl_set(mlxsw_sp, ctx->val.vu32); -} - -static const struct devlink_param mlxsw_sp2_devlink_params[] = { - DEVLINK_PARAM_DRIVER(MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL, - "acl_region_rehash_interval", - DEVLINK_PARAM_TYPE_U32, - BIT(DEVLINK_PARAM_CMODE_RUNTIME), - mlxsw_sp_params_acl_region_rehash_intrvl_get, - mlxsw_sp_params_acl_region_rehash_intrvl_set, - NULL), -}; - -static int mlxsw_sp2_params_register(struct mlxsw_core *mlxsw_core) -{ - struct devlink *devlink = priv_to_devlink(mlxsw_core); - union devlink_param_value value; - int err; - - err = devlink_params_register(devlink, mlxsw_sp2_devlink_params, - ARRAY_SIZE(mlxsw_sp2_devlink_params)); - if (err) - return err; - - value.vu32 = 0; - devlink_param_driverinit_value_set(devlink, - MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL, - value); - return 0; -} - -static void mlxsw_sp2_params_unregister(struct mlxsw_core *mlxsw_core) -{ - devlink_params_unregister(priv_to_devlink(mlxsw_core), - mlxsw_sp2_devlink_params, - ARRAY_SIZE(mlxsw_sp2_devlink_params)); -} - static void mlxsw_sp_ptp_transmitted(struct mlxsw_core *mlxsw_core, struct sk_buff *skb, u16 local_port) { @@ -3995,8 +3938,6 @@ static struct mlxsw_driver mlxsw_sp2_driver = { .trap_policer_counter_get = mlxsw_sp_trap_policer_counter_get, .txhdr_construct = mlxsw_sp_txhdr_construct, .resources_register = mlxsw_sp2_resources_register, - .params_register = mlxsw_sp2_params_register, - .params_unregister = mlxsw_sp2_params_unregister, .ptp_transmitted = mlxsw_sp_ptp_transmitted, .txhdr_len = MLXSW_TXHDR_LEN, .profile = &mlxsw_sp2_config_profile, @@ -4034,8 +3975,6 @@ static struct mlxsw_driver mlxsw_sp3_driver = { .trap_policer_counter_get = mlxsw_sp_trap_policer_counter_get, .txhdr_construct = mlxsw_sp_txhdr_construct, .resources_register = mlxsw_sp2_resources_register, - .params_register = mlxsw_sp2_params_register, - .params_unregister = mlxsw_sp2_params_unregister, .ptp_transmitted = mlxsw_sp_ptp_transmitted, .txhdr_len = MLXSW_TXHDR_LEN, .profile = &mlxsw_sp2_config_profile, @@ -4071,8 +4010,6 @@ static struct mlxsw_driver mlxsw_sp4_driver = { .trap_policer_counter_get = 
mlxsw_sp_trap_policer_counter_get, .txhdr_construct = mlxsw_sp_txhdr_construct, .resources_register = mlxsw_sp2_resources_register, - .params_register = mlxsw_sp2_params_register, - .params_unregister = mlxsw_sp2_params_unregister, .ptp_transmitted = mlxsw_sp_ptp_transmitted, .txhdr_len = MLXSW_TXHDR_LEN, .profile = &mlxsw_sp4_config_profile, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index bbc73324451d..4c22f8004514 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -973,6 +973,7 @@ enum mlxsw_sp_acl_profile { }; struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl); +struct mlxsw_sp_acl_tcam *mlxsw_sp_acl_to_tcam(struct mlxsw_sp_acl *acl); int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_flow_block *block, @@ -1096,8 +1097,6 @@ mlxsw_sp_acl_act_cookie_lookup(struct mlxsw_sp *mlxsw_sp, u32 cookie_index) int mlxsw_sp_acl_init(struct mlxsw_sp *mlxsw_sp); void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp); -u32 mlxsw_sp_acl_region_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp); -int mlxsw_sp_acl_region_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, u32 val); struct mlxsw_sp_acl_mangle_action; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c index 6c5af018546f..0423ac262d89 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c @@ -40,6 +40,11 @@ struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl) return acl->afk; } +struct mlxsw_sp_acl_tcam *mlxsw_sp_acl_to_tcam(struct mlxsw_sp_acl *acl) +{ + return &acl->tcam; +} + struct mlxsw_sp_acl_ruleset_ht_key { struct mlxsw_sp_flow_block *block; u32 chain_index; @@ -1099,22 +1104,6 @@ void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp) kfree(acl); } -u32 mlxsw_sp_acl_region_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp) -{ - struct mlxsw_sp_acl *acl = mlxsw_sp->acl; - - return mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(mlxsw_sp, - &acl->tcam); -} - -int mlxsw_sp_acl_region_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, u32 val) -{ - struct mlxsw_sp_acl *acl = mlxsw_sp->acl; - - return mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(mlxsw_sp, - &acl->tcam, val); -} - struct mlxsw_sp_acl_rulei_ops mlxsw_sp1_acl_rulei_ops = { .act_mangle_field = mlxsw_sp1_acl_rulei_act_mangle_field, }; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c index 3b9ba8fa247a..d50786b0a6ce 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c @@ -9,6 +9,7 @@ #include <linux/rhashtable.h> #include <linux/netdevice.h> #include <linux/mutex.h> +#include <net/devlink.h> #include <trace/events/mlxsw.h> #include "reg.h" @@ -29,67 +30,6 @@ size_t mlxsw_sp_acl_tcam_priv_size(struct mlxsw_sp *mlxsw_sp) #define MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN 3000 /* ms */ #define MLXSW_SP_ACL_TCAM_VREGION_REHASH_CREDITS 100 /* number of entries */ -int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam) -{ - const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; - u64 max_tcam_regions; - u64 max_regions; - u64 max_groups; - int err; - - mutex_init(&tcam->lock); - tcam->vregion_rehash_intrvl = - MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT; - INIT_LIST_HEAD(&tcam->vregion_list); - - max_tcam_regions = 
MLXSW_CORE_RES_GET(mlxsw_sp->core, - ACL_MAX_TCAM_REGIONS); - max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS); - - /* Use 1:1 mapping between ACL region and TCAM region */ - if (max_tcam_regions < max_regions) - max_regions = max_tcam_regions; - - tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL); - if (!tcam->used_regions) - return -ENOMEM; - tcam->max_regions = max_regions; - - max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS); - tcam->used_groups = bitmap_zalloc(max_groups, GFP_KERNEL); - if (!tcam->used_groups) { - err = -ENOMEM; - goto err_alloc_used_groups; - } - tcam->max_groups = max_groups; - tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core, - ACL_MAX_GROUP_SIZE); - - err = ops->init(mlxsw_sp, tcam->priv, tcam); - if (err) - goto err_tcam_init; - - return 0; - -err_tcam_init: - bitmap_free(tcam->used_groups); -err_alloc_used_groups: - bitmap_free(tcam->used_regions); - return err; -} - -void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam) -{ - const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; - - mutex_destroy(&tcam->lock); - ops->fini(mlxsw_sp, tcam->priv); - bitmap_free(tcam->used_groups); - bitmap_free(tcam->used_regions); -} - int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule_info *rulei, u32 *priority, bool fillup_priority) @@ -893,41 +833,6 @@ mlxsw_sp_acl_tcam_vregion_destroy(struct mlxsw_sp *mlxsw_sp, kfree(vregion); } -u32 mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam) -{ - const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; - u32 vregion_rehash_intrvl; - - if (WARN_ON(!ops->region_rehash_hints_get)) - return 0; - vregion_rehash_intrvl = tcam->vregion_rehash_intrvl; - return vregion_rehash_intrvl; -} - -int mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam, - u32 val) -{ - const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; - struct mlxsw_sp_acl_tcam_vregion *vregion; - - if (val < MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN && val) - return -EINVAL; - if (WARN_ON(!ops->region_rehash_hints_get)) - return -EOPNOTSUPP; - tcam->vregion_rehash_intrvl = val; - mutex_lock(&tcam->lock); - list_for_each_entry(vregion, &tcam->vregion_list, tlist) { - if (val) - mlxsw_core_schedule_dw(&vregion->rehash.dw, 0); - else - cancel_delayed_work_sync(&vregion->rehash.dw); - } - mutex_unlock(&tcam->lock); - return 0; -} - static struct mlxsw_sp_acl_tcam_vregion * mlxsw_sp_acl_tcam_vregion_get(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam_vgroup *vgroup, @@ -1542,6 +1447,153 @@ mlxsw_sp_acl_tcam_vregion_rehash(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_acl_tcam_vregion_rehash_end(mlxsw_sp, vregion, ctx); } +static int +mlxsw_sp_acl_tcam_region_rehash_intrvl_get(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx) +{ + struct mlxsw_core *mlxsw_core = devlink_priv(devlink); + struct mlxsw_sp_acl_tcam *tcam; + struct mlxsw_sp *mlxsw_sp; + + mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core); + tcam = mlxsw_sp_acl_to_tcam(mlxsw_sp->acl); + ctx->val.vu32 = tcam->vregion_rehash_intrvl; + + return 0; +} + +static int +mlxsw_sp_acl_tcam_region_rehash_intrvl_set(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx) +{ + struct mlxsw_core *mlxsw_core = devlink_priv(devlink); + struct mlxsw_sp_acl_tcam_vregion *vregion; + struct mlxsw_sp_acl_tcam *tcam; + struct mlxsw_sp *mlxsw_sp; + u32 val = 
ctx->val.vu32; + + if (val < MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_MIN && val) + return -EINVAL; + + mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core); + tcam = mlxsw_sp_acl_to_tcam(mlxsw_sp->acl); + tcam->vregion_rehash_intrvl = val; + mutex_lock(&tcam->lock); + list_for_each_entry(vregion, &tcam->vregion_list, tlist) { + if (val) + mlxsw_core_schedule_dw(&vregion->rehash.dw, 0); + else + cancel_delayed_work_sync(&vregion->rehash.dw); + } + mutex_unlock(&tcam->lock); + return 0; +} + +static const struct devlink_param mlxsw_sp_acl_tcam_rehash_params[] = { + DEVLINK_PARAM_DRIVER(MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL, + "acl_region_rehash_interval", + DEVLINK_PARAM_TYPE_U32, + BIT(DEVLINK_PARAM_CMODE_RUNTIME), + mlxsw_sp_acl_tcam_region_rehash_intrvl_get, + mlxsw_sp_acl_tcam_region_rehash_intrvl_set, + NULL), +}; + +static int mlxsw_sp_acl_tcam_rehash_params_register(struct mlxsw_sp *mlxsw_sp) +{ + struct devlink *devlink = priv_to_devlink(mlxsw_sp->core); + + if (!mlxsw_sp->acl_tcam_ops->region_rehash_hints_get) + return 0; + + return devl_params_register(devlink, mlxsw_sp_acl_tcam_rehash_params, + ARRAY_SIZE(mlxsw_sp_acl_tcam_rehash_params)); +} + +static void +mlxsw_sp_acl_tcam_rehash_params_unregister(struct mlxsw_sp *mlxsw_sp) +{ + struct devlink *devlink = priv_to_devlink(mlxsw_sp->core); + + if (!mlxsw_sp->acl_tcam_ops->region_rehash_hints_get) + return; + + devl_params_unregister(devlink, mlxsw_sp_acl_tcam_rehash_params, + ARRAY_SIZE(mlxsw_sp_acl_tcam_rehash_params)); +} + +int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_tcam *tcam) +{ + const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; + u64 max_tcam_regions; + u64 max_regions; + u64 max_groups; + int err; + + mutex_init(&tcam->lock); + tcam->vregion_rehash_intrvl = + MLXSW_SP_ACL_TCAM_VREGION_REHASH_INTRVL_DFLT; + INIT_LIST_HEAD(&tcam->vregion_list); + + err = mlxsw_sp_acl_tcam_rehash_params_register(mlxsw_sp); + if (err) + goto err_rehash_params_register; + + max_tcam_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, + ACL_MAX_TCAM_REGIONS); + max_regions = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_REGIONS); + + /* Use 1:1 mapping between ACL region and TCAM region */ + if (max_tcam_regions < max_regions) + max_regions = max_tcam_regions; + + tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL); + if (!tcam->used_regions) { + err = -ENOMEM; + goto err_alloc_used_regions; + } + tcam->max_regions = max_regions; + + max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS); + tcam->used_groups = bitmap_zalloc(max_groups, GFP_KERNEL); + if (!tcam->used_groups) { + err = -ENOMEM; + goto err_alloc_used_groups; + } + tcam->max_groups = max_groups; + tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core, + ACL_MAX_GROUP_SIZE); + + err = ops->init(mlxsw_sp, tcam->priv, tcam); + if (err) + goto err_tcam_init; + + return 0; + +err_tcam_init: + bitmap_free(tcam->used_groups); +err_alloc_used_groups: + bitmap_free(tcam->used_regions); +err_alloc_used_regions: + mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp); +err_rehash_params_register: + mutex_destroy(&tcam->lock); + return err; +} + +void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_tcam *tcam) +{ + const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; + + ops->fini(mlxsw_sp, tcam->priv); + bitmap_free(tcam->used_groups); + bitmap_free(tcam->used_regions); + mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp); + mutex_destroy(&tcam->lock); +} + static const enum mlxsw_afk_element 
mlxsw_sp_acl_tcam_pattern_ipv4[] = { MLXSW_AFK_ELEMENT_SRC_SYS_PORT, MLXSW_AFK_ELEMENT_DMAC_32_47, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h index edbbc89e7a71..462bf448497d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h @@ -29,11 +29,6 @@ int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam *tcam); void mlxsw_sp_acl_tcam_fini(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam *tcam); -u32 mlxsw_sp_acl_tcam_vregion_rehash_intrvl_get(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam); -int mlxsw_sp_acl_tcam_vregion_rehash_intrvl_set(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam *tcam, - u32 val); int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule_info *rulei, u32 *priority, bool fillup_priority); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c index e91fb205e0b4..594cdcb90b3d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c @@ -103,7 +103,7 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp, } ingress = mlxsw_sp_flow_block_is_ingress_bound(block); err = mlxsw_sp_acl_rulei_act_drop(rulei, ingress, - act->cookie, extack); + act->user_cookie, extack); if (err) { NL_SET_ERR_MSG_MOD(extack, "Cannot append drop action"); return err; diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c index 534840f9a7ca..7e0871b631e4 100644 --- a/drivers/net/ethernet/microchip/lan743x_main.c +++ b/drivers/net/ethernet/microchip/lan743x_main.c @@ -792,7 +792,7 @@ static int lan743x_mac_mii_wait_till_not_busy(struct lan743x_adapter *adapter) !(data & MAC_MII_ACC_MII_BUSY_), 0, 1000000); } -static int lan743x_mdiobus_read(struct mii_bus *bus, int phy_id, int index) +static int lan743x_mdiobus_read_c22(struct mii_bus *bus, int phy_id, int index) { struct lan743x_adapter *adapter = bus->priv; u32 val, mii_access; @@ -814,8 +814,8 @@ static int lan743x_mdiobus_read(struct mii_bus *bus, int phy_id, int index) return (int)(val & 0xFFFF); } -static int lan743x_mdiobus_write(struct mii_bus *bus, - int phy_id, int index, u16 regval) +static int lan743x_mdiobus_write_c22(struct mii_bus *bus, + int phy_id, int index, u16 regval) { struct lan743x_adapter *adapter = bus->priv; u32 val, mii_access; @@ -835,12 +835,10 @@ static int lan743x_mdiobus_write(struct mii_bus *bus, return ret; } -static u32 lan743x_mac_mmd_access(int id, int index, int op) +static u32 lan743x_mac_mmd_access(int id, int dev_addr, int op) { - u16 dev_addr; u32 ret; - dev_addr = (index >> 16) & 0x1f; ret = (id << MAC_MII_ACC_PHY_ADDR_SHIFT_) & MAC_MII_ACC_PHY_ADDR_MASK_; ret |= (dev_addr << MAC_MII_ACC_MIIMMD_SHIFT_) & @@ -858,7 +856,8 @@ static u32 lan743x_mac_mmd_access(int id, int index, int op) return ret; } -static int lan743x_mdiobus_c45_read(struct mii_bus *bus, int phy_id, int index) +static int lan743x_mdiobus_read_c45(struct mii_bus *bus, int phy_id, + int dev_addr, int index) { struct lan743x_adapter *adapter = bus->priv; u32 mmd_access; @@ -868,32 +867,30 @@ static int lan743x_mdiobus_c45_read(struct mii_bus *bus, int phy_id, int index) ret = lan743x_mac_mii_wait_till_not_busy(adapter); if (ret < 0) return ret; - if (index & MII_ADDR_C45) { - /* Load Register Address */ - 
lan743x_csr_write(adapter, MAC_MII_DATA, (u32)(index & 0xffff)); - mmd_access = lan743x_mac_mmd_access(phy_id, index, - MMD_ACCESS_ADDRESS); - lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); - ret = lan743x_mac_mii_wait_till_not_busy(adapter); - if (ret < 0) - return ret; - /* Read Data */ - mmd_access = lan743x_mac_mmd_access(phy_id, index, - MMD_ACCESS_READ); - lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); - ret = lan743x_mac_mii_wait_till_not_busy(adapter); - if (ret < 0) - return ret; - ret = lan743x_csr_read(adapter, MAC_MII_DATA); - return (int)(ret & 0xFFFF); - } - ret = lan743x_mdiobus_read(bus, phy_id, index); - return ret; + /* Load Register Address */ + lan743x_csr_write(adapter, MAC_MII_DATA, index); + mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr, + MMD_ACCESS_ADDRESS); + lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); + ret = lan743x_mac_mii_wait_till_not_busy(adapter); + if (ret < 0) + return ret; + + /* Read Data */ + mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr, + MMD_ACCESS_READ); + lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); + ret = lan743x_mac_mii_wait_till_not_busy(adapter); + if (ret < 0) + return ret; + + ret = lan743x_csr_read(adapter, MAC_MII_DATA); + return (int)(ret & 0xFFFF); } -static int lan743x_mdiobus_c45_write(struct mii_bus *bus, - int phy_id, int index, u16 regval) +static int lan743x_mdiobus_write_c45(struct mii_bus *bus, int phy_id, + int dev_addr, int index, u16 regval) { struct lan743x_adapter *adapter = bus->priv; u32 mmd_access; @@ -903,26 +900,23 @@ static int lan743x_mdiobus_c45_write(struct mii_bus *bus, ret = lan743x_mac_mii_wait_till_not_busy(adapter); if (ret < 0) return ret; - if (index & MII_ADDR_C45) { - /* Load Register Address */ - lan743x_csr_write(adapter, MAC_MII_DATA, (u32)(index & 0xffff)); - mmd_access = lan743x_mac_mmd_access(phy_id, index, - MMD_ACCESS_ADDRESS); - lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); - ret = lan743x_mac_mii_wait_till_not_busy(adapter); - if (ret < 0) - return ret; - /* Write Data */ - lan743x_csr_write(adapter, MAC_MII_DATA, (u32)regval); - mmd_access = lan743x_mac_mmd_access(phy_id, index, - MMD_ACCESS_WRITE); - lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); - ret = lan743x_mac_mii_wait_till_not_busy(adapter); - } else { - ret = lan743x_mdiobus_write(bus, phy_id, index, regval); - } - return ret; + /* Load Register Address */ + lan743x_csr_write(adapter, MAC_MII_DATA, (u32)index); + mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr, + MMD_ACCESS_ADDRESS); + lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); + ret = lan743x_mac_mii_wait_till_not_busy(adapter); + if (ret < 0) + return ret; + + /* Write Data */ + lan743x_csr_write(adapter, MAC_MII_DATA, (u32)regval); + mmd_access = lan743x_mac_mmd_access(phy_id, dev_addr, + MMD_ACCESS_WRITE); + lan743x_csr_write(adapter, MAC_MII_ACC, mmd_access); + + return lan743x_mac_mii_wait_till_not_busy(adapter); } static int lan743x_sgmii_wait_till_not_busy(struct lan743x_adapter *adapter) @@ -1424,14 +1418,6 @@ static void lan743x_phy_link_status_change(struct net_device *netdev) data = lan743x_csr_read(adapter, MAC_CR); - /* set interface mode */ - if (phy_interface_is_rgmii(phydev)) - /* RGMII */ - data &= ~MAC_CR_MII_EN_; - else - /* GMII */ - data |= MAC_CR_MII_EN_; - /* set duplex mode */ if (phydev->duplex) data |= MAC_CR_DPX_; @@ -1483,10 +1469,33 @@ static void lan743x_phy_close(struct lan743x_adapter *adapter) netdev->phydev = NULL; } +static void lan743x_phy_interface_select(struct lan743x_adapter 
*adapter) +{ + u32 id_rev; + u32 data; + + data = lan743x_csr_read(adapter, MAC_CR); + id_rev = adapter->csr.id_rev & ID_REV_ID_MASK_; + + if (adapter->is_pci11x1x && adapter->is_sgmii_en) + adapter->phy_interface = PHY_INTERFACE_MODE_SGMII; + else if (id_rev == ID_REV_ID_LAN7430_) + adapter->phy_interface = PHY_INTERFACE_MODE_GMII; + else if ((id_rev == ID_REV_ID_LAN7431_) && (data & MAC_CR_MII_EN_)) + adapter->phy_interface = PHY_INTERFACE_MODE_MII; + else + adapter->phy_interface = PHY_INTERFACE_MODE_RGMII; +} + static int lan743x_phy_open(struct lan743x_adapter *adapter) { struct net_device *netdev = adapter->netdev; struct lan743x_phy *phy = &adapter->phy; + struct fixed_phy_status fphy_status = { + .link = 1, + .speed = SPEED_1000, + .duplex = DUPLEX_FULL, + }; struct phy_device *phydev; int ret = -EIO; @@ -1497,17 +1506,25 @@ static int lan743x_phy_open(struct lan743x_adapter *adapter) if (!phydev) { /* try internal phy */ phydev = phy_find_first(adapter->mdiobus); - if (!phydev) - goto return_error; + if (!phydev) { + if ((adapter->csr.id_rev & ID_REV_ID_MASK_) == + ID_REV_ID_LAN7431_) { + phydev = fixed_phy_register(PHY_POLL, + &fphy_status, NULL); + if (IS_ERR(phydev)) { + netdev_err(netdev, "No PHY/fixed_PHY found\n"); + return -EIO; + } + } else { + goto return_error; + } + } - if (adapter->is_pci11x1x) - ret = phy_connect_direct(netdev, phydev, - lan743x_phy_link_status_change, - PHY_INTERFACE_MODE_RGMII); - else - ret = phy_connect_direct(netdev, phydev, - lan743x_phy_link_status_change, - PHY_INTERFACE_MODE_GMII); + lan743x_phy_interface_select(adapter); + + ret = phy_connect_direct(netdev, phydev, + lan743x_phy_link_status_change, + adapter->phy_interface); if (ret) goto return_error; } @@ -3285,9 +3302,10 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter) lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl); netif_dbg(adapter, drv, adapter->netdev, "SGMII operation\n"); - adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45; - adapter->mdiobus->read = lan743x_mdiobus_c45_read; - adapter->mdiobus->write = lan743x_mdiobus_c45_write; + adapter->mdiobus->read = lan743x_mdiobus_read_c22; + adapter->mdiobus->write = lan743x_mdiobus_write_c22; + adapter->mdiobus->read_c45 = lan743x_mdiobus_read_c45; + adapter->mdiobus->write_c45 = lan743x_mdiobus_write_c45; adapter->mdiobus->name = "lan743x-mdiobus-c45"; netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus-c45\n"); @@ -3299,16 +3317,15 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter) netif_dbg(adapter, drv, adapter->netdev, "RGMII operation\n"); // Only C22 support when RGMII I/F - adapter->mdiobus->probe_capabilities = MDIOBUS_C22; - adapter->mdiobus->read = lan743x_mdiobus_read; - adapter->mdiobus->write = lan743x_mdiobus_write; + adapter->mdiobus->read = lan743x_mdiobus_read_c22; + adapter->mdiobus->write = lan743x_mdiobus_write_c22; adapter->mdiobus->name = "lan743x-mdiobus"; netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus\n"); } } else { - adapter->mdiobus->read = lan743x_mdiobus_read; - adapter->mdiobus->write = lan743x_mdiobus_write; + adapter->mdiobus->read = lan743x_mdiobus_read_c22; + adapter->mdiobus->write = lan743x_mdiobus_write_c22; adapter->mdiobus->name = "lan743x-mdiobus"; netif_dbg(adapter, drv, adapter->netdev, "lan743x-mdiobus\n"); } diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h index 8438c3dbcf36..52609fc13ad9 100644 --- a/drivers/net/ethernet/microchip/lan743x_main.h +++ 
b/drivers/net/ethernet/microchip/lan743x_main.h @@ -1042,6 +1042,7 @@ struct lan743x_adapter { #define LAN743X_ADAPTER_FLAG_OTP BIT(0) u32 flags; u32 hw_cfg; + phy_interface_t phy_interface; }; #define LAN743X_COMPONENT_FLAG_RX(channel) BIT(20 + (channel)) diff --git a/drivers/net/ethernet/microchip/lan966x/Makefile b/drivers/net/ethernet/microchip/lan966x/Makefile index 56afd694f3c7..7b0cda4ffa6b 100644 --- a/drivers/net/ethernet/microchip/lan966x/Makefile +++ b/drivers/net/ethernet/microchip/lan966x/Makefile @@ -15,5 +15,7 @@ lan966x-switch-objs := lan966x_main.o lan966x_phylink.o lan966x_port.o \ lan966x_xdp.o lan966x_vcap_impl.o lan966x_vcap_ag_api.o \ lan966x_tc_flower.o lan966x_goto.o +lan966x-switch-$(CONFIG_DEBUG_FS) += lan966x_vcap_debugfs.o + # Provide include files ccflags-y += -I$(srctree)/drivers/net/ethernet/microchip/vcap diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c b/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c index bf0cfe24a8fc..9b18156eea1a 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_goto.c @@ -4,7 +4,7 @@ #include "vcap_api_client.h" int lan966x_goto_port_add(struct lan966x_port *port, - struct flow_action_entry *act, + int from_cid, int to_cid, unsigned long goto_id, struct netlink_ext_ack *extack) { @@ -12,7 +12,7 @@ int lan966x_goto_port_add(struct lan966x_port *port, int err; err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev, - act->chain_index, goto_id, + from_cid, to_cid, goto_id, true); if (err == -EFAULT) { NL_SET_ERR_MSG_MOD(extack, "Unsupported goto chain"); @@ -29,8 +29,6 @@ int lan966x_goto_port_add(struct lan966x_port *port, return err; } - port->tc.goto_id = goto_id; - return 0; } @@ -41,14 +39,12 @@ int lan966x_goto_port_del(struct lan966x_port *port, struct lan966x *lan966x = port->lan966x; int err; - err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev, 0, + err = vcap_enable_lookups(lan966x->vcap_ctrl, port->dev, 0, 0, goto_id, false); if (err) { NL_SET_ERR_MSG_MOD(extack, "Could not disable VCAP lookups"); return err; } - port->tc.goto_id = 0; - return 0; } diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c index 580c91d24a52..8b89de0541ff 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c @@ -823,6 +823,11 @@ static int lan966x_probe_port(struct lan966x *lan966x, u32 p, port->phylink = phylink; + if (lan966x->fdma) + dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + err = register_netdev(dev); if (err) { dev_err(lan966x->dev, "register_netdev failed\n"); @@ -1035,6 +1040,8 @@ static int lan966x_probe(struct platform_device *pdev) platform_set_drvdata(pdev, lan966x); lan966x->dev = &pdev->dev; + lan966x->debugfs_root = debugfs_create_dir("lan966x", NULL); + if (!device_get_mac_address(&pdev->dev, mac_addr)) { ether_addr_copy(lan966x->base_mac, mac_addr); } else { @@ -1223,6 +1230,8 @@ static int lan966x_remove(struct platform_device *pdev) lan966x_fdb_deinit(lan966x); lan966x_ptp_deinit(lan966x); + debugfs_remove_recursive(lan966x->debugfs_root); + return 0; } diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h index 3491f1961835..49f5159afbf3 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h @@ 
-3,6 +3,7 @@ #ifndef __LAN966X_MAIN_H__ #define __LAN966X_MAIN_H__ +#include <linux/debugfs.h> #include <linux/etherdevice.h> #include <linux/if_vlan.h> #include <linux/jiffies.h> @@ -14,6 +15,9 @@ #include <net/pkt_sched.h> #include <net/switchdev.h> +#include <vcap_api.h> +#include <vcap_api_client.h> + #include "lan966x_regs.h" #include "lan966x_ifh.h" @@ -128,6 +132,13 @@ enum LAN966X_PORT_MASK_MODE { LAN966X_PMM_REDIRECT, }; +enum vcap_is2_port_sel_ipv6 { + VCAP_IS2_PS_IPV6_TCPUDP_OTHER, + VCAP_IS2_PS_IPV6_STD, + VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER, + VCAP_IS2_PS_IPV6_MAC_ETYPE, +}; + struct lan966x_port; struct lan966x_db { @@ -315,6 +326,9 @@ struct lan966x { /* vcap */ struct vcap_control *vcap_ctrl; + + /* debugfs */ + struct dentry *debugfs_root; }; struct lan966x_port_config { @@ -332,7 +346,6 @@ struct lan966x_port_tc { unsigned long police_id; unsigned long ingress_mirror_id; unsigned long egress_mirror_id; - unsigned long goto_id; struct flow_stats police_stat; struct flow_stats mirror_stat; }; @@ -602,12 +615,25 @@ static inline bool lan966x_xdp_port_present(struct lan966x_port *port) int lan966x_vcap_init(struct lan966x *lan966x); void lan966x_vcap_deinit(struct lan966x *lan966x); +#if defined(CONFIG_DEBUG_FS) +int lan966x_vcap_port_info(struct net_device *dev, + struct vcap_admin *admin, + struct vcap_output_print *out); +#else +static inline int lan966x_vcap_port_info(struct net_device *dev, + struct vcap_admin *admin, + struct vcap_output_print *out) +{ + return 0; +} +#endif int lan966x_tc_flower(struct lan966x_port *port, - struct flow_cls_offload *f); + struct flow_cls_offload *f, + bool ingress); int lan966x_goto_port_add(struct lan966x_port *port, - struct flow_action_entry *act, + int from_cid, int to_cid, unsigned long goto_id, struct netlink_ext_ack *extack); int lan966x_goto_port_del(struct lan966x_port *port, diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c index a8348437dd87..931e37b9a0ad 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c @@ -83,8 +83,7 @@ static int lan966x_ptp_add_trap(struct lan966x_port *port, if (err) goto free_rule; - err = vcap_set_rule_set_actionset(vrule, VCAP_AFS_BASE_TYPE); - err |= vcap_rule_add_action_bit(vrule, VCAP_AF_CPU_COPY_ENA, VCAP_BIT_1); + err = vcap_rule_add_action_bit(vrule, VCAP_AF_CPU_COPY_ENA, VCAP_BIT_1); err |= vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE, LAN966X_PMM_REPLACE); err |= vcap_val_rule(vrule, proto); if (err) @@ -524,9 +523,9 @@ irqreturn_t lan966x_ptp_irq_handler(int irq, void *args) if (WARN_ON(!skb_match)) continue; - spin_lock(&lan966x->ptp_ts_id_lock); + spin_lock_irqsave(&lan966x->ptp_ts_id_lock, flags); lan966x->ptp_skbs--; - spin_unlock(&lan966x->ptp_ts_id_lock); + spin_unlock_irqrestore(&lan966x->ptp_ts_id_lock, flags); /* Get the h/w timestamp */ lan966x_get_hwtimestamp(lan966x, &ts, delay); diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c index 01072121c999..cf0cc7562d04 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include "lan966x_main.h" @@ -70,7 +71,7 @@ static int lan966x_tc_block_cb(enum tc_setup_type type, void *type_data, case TC_SETUP_CLSMATCHALL: return lan966x_tc_matchall(port, 
type_data, ingress); case TC_SETUP_CLSFLOWER: - return lan966x_tc_flower(port, type_data); + return lan966x_tc_flower(port, type_data, ingress); default: return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c index ba3fa917d6b7..f960727ecaee 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c @@ -3,53 +3,135 @@ #include "lan966x_main.h" #include "vcap_api.h" #include "vcap_api_client.h" +#include "vcap_tc.h" -struct lan966x_tc_flower_parse_usage { - struct flow_cls_offload *f; - struct flow_rule *frule; - struct vcap_rule *vrule; - unsigned int used_keys; - u16 l3_proto; -}; +static bool lan966x_tc_is_known_etype(u16 etype) +{ + switch (etype) { + case ETH_P_ALL: + case ETH_P_ARP: + case ETH_P_IP: + case ETH_P_IPV6: + return true; + } + + return false; +} -static int lan966x_tc_flower_handler_ethaddr_usage(struct lan966x_tc_flower_parse_usage *st) +static int +lan966x_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st) { - enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; - enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; - struct flow_match_eth_addrs match; - struct vcap_u48_key smac, dmac; + struct flow_match_control match; int err = 0; - flow_rule_match_eth_addrs(st->frule, &match); - - if (!is_zero_ether_addr(match.mask->src)) { - vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); - vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); + flow_rule_match_control(st->frule, &match); + if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) { + if (match.key->flags & FLOW_DIS_IS_FRAGMENT) + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAGMENT, + VCAP_BIT_1); + else + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAGMENT, + VCAP_BIT_0); if (err) goto out; } - if (!is_zero_ether_addr(match.mask->dst)) { - vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); - vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); + if (match.mask->flags & FLOW_DIS_FIRST_FRAG) { + if (match.key->flags & FLOW_DIS_FIRST_FRAG) + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAG_OFS_GT0, + VCAP_BIT_0); + else + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAG_OFS_GT0, + VCAP_BIT_1); if (err) goto out; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); return err; out: - NL_SET_ERR_MSG_MOD(st->f->common.extack, "eth_addr parse error"); + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); return err; } static int -(*lan966x_tc_flower_handlers_usage[])(struct lan966x_tc_flower_parse_usage *st) = { - [FLOW_DISSECTOR_KEY_ETH_ADDRS] = lan966x_tc_flower_handler_ethaddr_usage, +lan966x_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_basic match; + int err = 0; + + flow_rule_match_basic(st->frule, &match); + if (match.mask->n_proto) { + st->l3_proto = be16_to_cpu(match.key->n_proto); + if (!lan966x_tc_is_known_etype(st->l3_proto)) { + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE, + st->l3_proto, ~0); + if (err) + goto out; + } else if (st->l3_proto == ETH_P_IP) { + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_IP4_IS, + VCAP_BIT_1); + if (err) + goto out; + } + } + if (match.mask->ip_proto) { + st->l4_proto = match.key->ip_proto; + + if 
(st->l4_proto == IPPROTO_TCP) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_TCP_IS, + VCAP_BIT_1); + if (err) + goto out; + } else if (st->l4_proto == IPPROTO_UDP) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_TCP_IS, + VCAP_BIT_0); + if (err) + goto out; + } else { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP_PROTO, + st->l4_proto, ~0); + if (err) + goto out; + } + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_BASIC); + return err; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_proto parse error"); + return err; +} + +static int +lan966x_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st) +{ + return vcap_tc_flower_handler_vlan_usage(st, + VCAP_KF_8021Q_VID_CLS, + VCAP_KF_8021Q_PCP_CLS); +} + +static int +(*lan966x_tc_flower_handlers_usage[])(struct vcap_tc_flower_parse_usage *st) = { + [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage, + [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage, + [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage, + [FLOW_DISSECTOR_KEY_CONTROL] = lan966x_tc_flower_handler_control_usage, + [FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage, + [FLOW_DISSECTOR_KEY_BASIC] = lan966x_tc_flower_handler_basic_usage, + [FLOW_DISSECTOR_KEY_VLAN] = lan966x_tc_flower_handler_vlan_usage, + [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage, + [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage, + [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage, }; static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f, @@ -57,8 +139,8 @@ static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f, struct vcap_rule *vrule, u16 *l3_proto) { - struct lan966x_tc_flower_parse_usage state = { - .f = f, + struct vcap_tc_flower_parse_usage state = { + .fco = f, .vrule = vrule, .l3_proto = ETH_P_ALL, }; @@ -82,8 +164,9 @@ static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f, } static int lan966x_tc_flower_action_check(struct vcap_control *vctrl, + struct net_device *dev, struct flow_cls_offload *fco, - struct vcap_admin *admin) + bool ingress) { struct flow_rule *rule = flow_cls_offload_flow_rule(fco); struct flow_action_entry *actent, *last_actent = NULL; @@ -109,21 +192,24 @@ static int lan966x_tc_flower_action_check(struct vcap_control *vctrl, last_actent = actent; /* Save last action for later check */ } - /* Check that last action is a goto */ - if (last_actent->id != FLOW_ACTION_GOTO) { + /* Check that last action is a goto + * The last chain/lookup does not need to have goto action + */ + if (last_actent->id == FLOW_ACTION_GOTO) { + /* Check if the destination chain is in one of the VCAPs */ + if (!vcap_is_next_lookup(vctrl, fco->common.chain_index, + last_actent->chain_index)) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Invalid goto chain"); + return -EINVAL; + } + } else if (!vcap_is_last_chain(vctrl, fco->common.chain_index, + ingress)) { NL_SET_ERR_MSG_MOD(fco->common.extack, "Last action must be 'goto'"); return -EINVAL; } - /* Check if the goto chain is in the next lookup */ - if (!vcap_is_next_lookup(vctrl, fco->common.chain_index, - last_actent->chain_index)) { - NL_SET_ERR_MSG_MOD(fco->common.extack, - "Invalid goto chain"); - return -EINVAL; - } - /* Catch unsupported combinations of actions */ if (action_mask & BIT(FLOW_ACTION_TRAP) && action_mask & BIT(FLOW_ACTION_ACCEPT)) { @@ -137,7 +223,8 @@ static int lan966x_tc_flower_action_check(struct vcap_control *vctrl, static int 
lan966x_tc_flower_add(struct lan966x_port *port, struct flow_cls_offload *f, - struct vcap_admin *admin) + struct vcap_admin *admin, + bool ingress) { struct flow_action_entry *act; u16 l3_proto = ETH_P_ALL; @@ -145,8 +232,8 @@ static int lan966x_tc_flower_add(struct lan966x_port *port, struct vcap_rule *vrule; int err, idx; - err = lan966x_tc_flower_action_check(port->lan966x->vcap_ctrl, f, - admin); + err = lan966x_tc_flower_action_check(port->lan966x->vcap_ctrl, + port->dev, f, ingress); if (err) return err; @@ -174,8 +261,6 @@ static int lan966x_tc_flower_add(struct lan966x_port *port, 0); err |= vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE, LAN966X_PMM_REPLACE); - err |= vcap_set_rule_set_actionset(vrule, - VCAP_AFS_BASE_TYPE); if (err) goto out; @@ -229,8 +314,27 @@ static int lan966x_tc_flower_del(struct lan966x_port *port, return err; } +static int lan966x_tc_flower_stats(struct lan966x_port *port, + struct flow_cls_offload *f, + struct vcap_admin *admin) +{ + struct vcap_counter count = {}; + int err; + + err = vcap_get_rule_count_by_cookie(port->lan966x->vcap_ctrl, + &count, f->cookie); + if (err) + return err; + + flow_stats_update(&f->stats, 0x0, count.value, 0, 0, + FLOW_ACTION_HW_STATS_IMMEDIATE); + + return err; +} + int lan966x_tc_flower(struct lan966x_port *port, - struct flow_cls_offload *f) + struct flow_cls_offload *f, + bool ingress) { struct vcap_admin *admin; @@ -243,9 +347,11 @@ int lan966x_tc_flower(struct lan966x_port *port, switch (f->command) { case FLOW_CLS_REPLACE: - return lan966x_tc_flower_add(port, f, admin); + return lan966x_tc_flower_add(port, f, admin, ingress); case FLOW_CLS_DESTROY: return lan966x_tc_flower_del(port, f, admin); + case FLOW_CLS_STATS: + return lan966x_tc_flower_stats(port, f, admin); default: return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c index a539abaad9b6..20627323d656 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_matchall.c @@ -24,7 +24,8 @@ static int lan966x_tc_matchall_add(struct lan966x_port *port, return lan966x_mirror_port_add(port, act, f->cookie, ingress, f->common.extack); case FLOW_ACTION_GOTO: - return lan966x_goto_port_add(port, act, f->cookie, + return lan966x_goto_port_add(port, f->common.chain_index, + act->chain_index, f->cookie, f->common.extack); default: NL_SET_ERR_MSG_MOD(f->common.extack, @@ -46,13 +47,8 @@ static int lan966x_tc_matchall_del(struct lan966x_port *port, f->cookie == port->tc.egress_mirror_id) { return lan966x_mirror_port_del(port, ingress, f->common.extack); - } else if (f->cookie == port->tc.goto_id) { - return lan966x_goto_port_del(port, f->cookie, - f->common.extack); } else { - NL_SET_ERR_MSG_MOD(f->common.extack, - "Unsupported action"); - return -EOPNOTSUPP; + return lan966x_goto_port_del(port, f->cookie, f->common.extack); } return 0; @@ -80,12 +76,6 @@ int lan966x_tc_matchall(struct lan966x_port *port, struct tc_cls_matchall_offload *f, bool ingress) { - if (!tc_cls_can_offload_and_chain0(port->dev, &f->common)) { - NL_SET_ERR_MSG_MOD(f->common.extack, - "Only chain zero is supported"); - return -EOPNOTSUPP; - } - switch (f->command) { case TC_CLSMATCHALL_REPLACE: return lan966x_tc_matchall_add(port, f, ingress); diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c new file mode 100644 index 
000000000000..7a0db58f5513 --- /dev/null +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_debugfs.c @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0+ + +#include "lan966x_main.h" +#include "lan966x_vcap_ag_api.h" +#include "vcap_api.h" +#include "vcap_api_client.h" + +static void lan966x_vcap_port_keys(struct lan966x_port *port, + struct vcap_admin *admin, + struct vcap_output_print *out) +{ + struct lan966x *lan966x = port->lan966x; + u32 val; + + out->prf(out->dst, " port[%d] (%s): ", port->chip_port, + netdev_name(port->dev)); + + val = lan_rd(lan966x, ANA_VCAP_S2_CFG(port->chip_port)); + out->prf(out->dst, "\n state: "); + if (ANA_VCAP_S2_CFG_ENA_GET(val)) + out->prf(out->dst, "on"); + else + out->prf(out->dst, "off"); + + for (int l = 0; l < admin->lookups; ++l) { + out->prf(out->dst, "\n Lookup %d: ", l); + + out->prf(out->dst, "\n snap: "); + if (ANA_VCAP_S2_CFG_SNAP_DIS_GET(val) & (BIT(0) << l)) + out->prf(out->dst, "mac_llc"); + else + out->prf(out->dst, "mac_snap"); + + out->prf(out->dst, "\n oam: "); + if (ANA_VCAP_S2_CFG_OAM_DIS_GET(val) & (BIT(0) << l)) + out->prf(out->dst, "mac_etype"); + else + out->prf(out->dst, "mac_oam"); + + out->prf(out->dst, "\n arp: "); + if (ANA_VCAP_S2_CFG_ARP_DIS_GET(val) & (BIT(0) << l)) + out->prf(out->dst, "mac_etype"); + else + out->prf(out->dst, "mac_arp"); + + out->prf(out->dst, "\n ipv4_other: "); + if (ANA_VCAP_S2_CFG_IP_OTHER_DIS_GET(val) & (BIT(0) << l)) + out->prf(out->dst, "mac_etype"); + else + out->prf(out->dst, "ip4_other"); + + out->prf(out->dst, "\n ipv4_tcp_udp: "); + if (ANA_VCAP_S2_CFG_IP_TCPUDP_DIS_GET(val) & (BIT(0) << l)) + out->prf(out->dst, "mac_etype"); + else + out->prf(out->dst, "ipv4_tcp_udp"); + + out->prf(out->dst, "\n ipv6: "); + switch (ANA_VCAP_S2_CFG_IP6_CFG_GET(val) & (0x3 << l)) { + case VCAP_IS2_PS_IPV6_TCPUDP_OTHER: + out->prf(out->dst, "ipv6_tcp_udp ipv6_tcp_udp"); + break; + case VCAP_IS2_PS_IPV6_STD: + out->prf(out->dst, "ipv6_std"); + break; + case VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER: + out->prf(out->dst, "ipv4_tcp_udp ipv4_tcp_udp"); + break; + case VCAP_IS2_PS_IPV6_MAC_ETYPE: + out->prf(out->dst, "mac_etype"); + break; + } + } + + out->prf(out->dst, "\n"); +} + +int lan966x_vcap_port_info(struct net_device *dev, + struct vcap_admin *admin, + struct vcap_output_print *out) +{ + struct lan966x_port *port = netdev_priv(dev); + struct lan966x *lan966x = port->lan966x; + const struct vcap_info *vcap; + struct vcap_control *vctrl; + + vctrl = lan966x->vcap_ctrl; + vcap = &vctrl->vcaps[admin->vtype]; + + out->prf(out->dst, "%s:\n", vcap->name); + lan966x_vcap_port_keys(port, admin, out); + + return 0; +} diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c index a54c0426a35f..68f9d69fd37b 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c @@ -4,18 +4,12 @@ #include "lan966x_vcap_ag_api.h" #include "vcap_api.h" #include "vcap_api_client.h" +#include "vcap_api_debugfs.h" #define STREAMSIZE (64 * 4) #define LAN966X_IS2_LOOKUPS 2 -enum vcap_is2_port_sel_ipv6 { - VCAP_IS2_PS_IPV6_TCPUDP_OTHER, - VCAP_IS2_PS_IPV6_STD, - VCAP_IS2_PS_IPV6_IP4_TCPUDP_IP4_OTHER, - VCAP_IS2_PS_IPV6_MAC_ETYPE, -}; - static struct lan966x_vcap_inst { enum vcap_type vtype; /* type of vcap */ int tgt_inst; /* hardware instance number */ @@ -23,6 +17,7 @@ static struct lan966x_vcap_inst { int first_cid; /* first chain id in this vcap */ int last_cid; /* last chain 
id in this vcap */ int count; /* number of available addresses */ + bool ingress; /* is vcap in the ingress path */ } lan966x_vcap_inst_cfg[] = { { .vtype = VCAP_TYPE_IS2, /* IS2-0 */ @@ -31,6 +26,7 @@ static struct lan966x_vcap_inst { .first_cid = LAN966X_VCAP_CID_IS2_L0, .last_cid = LAN966X_VCAP_CID_IS2_MAX, .count = 256, + .ingress = true, }, }; @@ -383,27 +379,6 @@ static void lan966x_vcap_move(struct net_device *dev, lan966x_vcap_wait_update(lan966x, admin->tgt_inst); } -static int lan966x_vcap_port_info(struct net_device *dev, - struct vcap_admin *admin, - struct vcap_output_print *out) -{ - return 0; -} - -static int lan966x_vcap_enable(struct net_device *dev, - struct vcap_admin *admin, - bool enable) -{ - struct lan966x_port *port = netdev_priv(dev); - struct lan966x *lan966x = port->lan966x; - - lan_rmw(ANA_VCAP_S2_CFG_ENA_SET(enable), - ANA_VCAP_S2_CFG_ENA, - lan966x, ANA_VCAP_S2_CFG(port->chip_port)); - - return 0; -} - static struct vcap_operations lan966x_vcap_ops = { .validate_keyset = lan966x_vcap_validate_keyset, .add_default_fields = lan966x_vcap_add_default_fields, @@ -414,7 +389,6 @@ static struct vcap_operations lan966x_vcap_ops = { .update = lan966x_vcap_update, .move = lan966x_vcap_move, .port_info = lan966x_vcap_port_info, - .enable = lan966x_vcap_enable, }; static void lan966x_vcap_admin_free(struct vcap_admin *admin) @@ -446,6 +420,7 @@ lan966x_vcap_admin_alloc(struct lan966x *lan966x, struct vcap_control *ctrl, admin->vtype = cfg->vtype; admin->vinst = 0; + admin->ingress = cfg->ingress; admin->w32be = true; admin->tgt_inst = cfg->tgt_inst; @@ -498,6 +473,7 @@ int lan966x_vcap_init(struct lan966x *lan966x) struct lan966x_vcap_inst *cfg; struct vcap_control *ctrl; struct vcap_admin *admin; + struct dentry *dir; ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); if (!ctrl) @@ -521,6 +497,18 @@ int lan966x_vcap_init(struct lan966x *lan966x) list_add_tail(&admin->list, &ctrl->list); } + dir = vcap_debugfs(lan966x->dev, lan966x->debugfs_root, ctrl); + for (int p = 0; p < lan966x->num_phys_ports; ++p) { + if (lan966x->ports[p]) { + vcap_port_debugfs(lan966x->dev, dir, ctrl, + lan966x->ports[p]->dev); + + lan_rmw(ANA_VCAP_S2_CFG_ENA_SET(true), + ANA_VCAP_S2_CFG_ENA, lan966x, + ANA_VCAP_S2_CFG(lan966x->ports[p]->chip_port)); + } + } + lan966x->vcap_ctrl = ctrl; return 0; diff --git a/drivers/net/ethernet/microchip/sparx5/Makefile b/drivers/net/ethernet/microchip/sparx5/Makefile index d0ed7090aa54..1cb1cc3f1a85 100644 --- a/drivers/net/ethernet/microchip/sparx5/Makefile +++ b/drivers/net/ethernet/microchip/sparx5/Makefile @@ -9,7 +9,8 @@ sparx5-switch-y := sparx5_main.o sparx5_packet.o \ sparx5_netdev.o sparx5_phylink.o sparx5_port.o sparx5_mactable.o sparx5_vlan.o \ sparx5_switchdev.o sparx5_calendar.o sparx5_ethtool.o sparx5_fdma.o \ sparx5_ptp.o sparx5_pgid.o sparx5_tc.o sparx5_qos.o \ - sparx5_vcap_impl.o sparx5_vcap_ag_api.o sparx5_tc_flower.o sparx5_tc_matchall.o + sparx5_vcap_impl.o sparx5_vcap_ag_api.o sparx5_tc_flower.o \ + sparx5_tc_matchall.o sparx5_pool.o sparx5_sdlb.o sparx5_police.o sparx5_psfp.o sparx5-switch-$(CONFIG_SPARX5_DCB) += sparx5_dcb.o sparx5-switch-$(CONFIG_DEBUG_FS) += sparx5_vcap_debugfs.o diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c b/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c index 74abb946b2a3..871a3e62f852 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c @@ -133,12 +133,17 @@ static bool sparx5_dcb_apptrust_contains(int portno, u8 selector) static int 
sparx5_dcb_app_update(struct net_device *dev) { + struct dcb_ieee_app_prio_map dscp_rewr_map = {0}; + struct dcb_rewr_prio_pcp_map pcp_rewr_map = {0}; struct sparx5_port *port = netdev_priv(dev); struct sparx5_port_qos_dscp_map *dscp_map; struct sparx5_port_qos_pcp_map *pcp_map; struct sparx5_port_qos qos = {0}; struct dcb_app app_itr = {0}; int portno = port->portno; + bool dscp_rewr = false; + bool pcp_rewr = false; + u16 dscp; int i; dscp_map = &qos.dscp.map; @@ -163,31 +168,72 @@ static int sparx5_dcb_app_update(struct net_device *dev) pcp_map->map[i] = dcb_getapp(dev, &app_itr); } + /* Get pcp rewrite mapping */ + dcb_getrewr_prio_pcp_mask_map(dev, &pcp_rewr_map); + for (i = 0; i < ARRAY_SIZE(pcp_rewr_map.map); i++) { + if (!pcp_rewr_map.map[i]) + continue; + pcp_rewr = true; + qos.pcp_rewr.map.map[i] = fls(pcp_rewr_map.map[i]) - 1; + } + + /* Get dscp rewrite mapping */ + dcb_getrewr_prio_dscp_mask_map(dev, &dscp_rewr_map); + for (i = 0; i < ARRAY_SIZE(dscp_rewr_map.map); i++) { + if (!dscp_rewr_map.map[i]) + continue; + + /* The rewrite table of the switch has 32 entries; one for each + * priority for each DP level. Currently, the rewrite map does + * not indicate DP level, so we map classified QoS class to + * classified DSCP, for each classified DP level. Rewrite of + * DSCP is only enabled, if we have active mappings. + */ + dscp_rewr = true; + dscp = fls64(dscp_rewr_map.map[i]) - 1; + qos.dscp_rewr.map.map[i] = dscp; /* DP 0 */ + qos.dscp_rewr.map.map[i + 8] = dscp; /* DP 1 */ + qos.dscp_rewr.map.map[i + 16] = dscp; /* DP 2 */ + qos.dscp_rewr.map.map[i + 24] = dscp; /* DP 3 */ + } + /* Enable use of pcp for queue classification ? */ if (sparx5_dcb_apptrust_contains(portno, DCB_APP_SEL_PCP)) { qos.pcp.qos_enable = true; qos.pcp.dp_enable = qos.pcp.qos_enable; + /* Enable rewrite of PCP and DEI if PCP is trusted *and* rewrite + * table is not empty. + */ + if (pcp_rewr) + qos.pcp_rewr.enable = true; } /* Enable use of dscp for queue classification ? */ if (sparx5_dcb_apptrust_contains(portno, IEEE_8021QAZ_APP_SEL_DSCP)) { qos.dscp.qos_enable = true; qos.dscp.dp_enable = qos.dscp.qos_enable; + if (dscp_rewr) + /* Do not enable rewrite if no mappings are active, as + * classified DSCP will then be zero for all classified + * QoS class and DP combinations. + */ + qos.dscp_rewr.enable = true; } return sparx5_port_qos_set(port, &qos); } -/* Set or delete dscp app entry. +/* Set or delete DSCP app entry. * - * Dscp mapping is global for all ports, so set and delete app entries are + * DSCP mapping is global for all ports, so set and delete app entries are * replicated for each port. 
*/ -static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev, - struct dcb_app *app, bool del) +static int sparx5_dcb_ieee_dscp_setdel(struct net_device *dev, + struct dcb_app *app, + int (*setdel)(struct net_device *, + struct dcb_app *)) { struct sparx5_port *port = netdev_priv(dev); - struct dcb_app apps[SPX5_PORTS]; struct sparx5_port *port_itr; int err, i; @@ -195,11 +241,7 @@ static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev, port_itr = port->sparx5->ports[i]; if (!port_itr) continue; - memcpy(&apps[i], app, sizeof(struct dcb_app)); - if (del) - err = dcb_ieee_delapp(port_itr->ndev, &apps[i]); - else - err = dcb_ieee_setapp(port_itr->ndev, &apps[i]); + err = setdel(port_itr->ndev, app); if (err) return err; } @@ -226,7 +268,7 @@ static int sparx5_dcb_ieee_setapp(struct net_device *dev, struct dcb_app *app) } if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) - err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, false); + err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_setapp); else err = dcb_ieee_setapp(dev, app); @@ -244,7 +286,7 @@ static int sparx5_dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app) int err; if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) - err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, true); + err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_delapp); else err = dcb_ieee_delapp(dev, app); @@ -283,11 +325,60 @@ static int sparx5_dcb_getapptrust(struct net_device *dev, u8 *selectors, return 0; } +static int sparx5_dcb_delrewr(struct net_device *dev, struct dcb_app *app) +{ + int err; + + if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) + err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_delrewr); + else + err = dcb_delrewr(dev, app); + + if (err < 0) + return err; + + return sparx5_dcb_app_update(dev); +} + +static int sparx5_dcb_setrewr(struct net_device *dev, struct dcb_app *app) +{ + struct dcb_app app_itr; + int err = 0; + u16 proto; + + err = sparx5_dcb_app_validate(dev, app); + if (err) + goto out; + + /* Delete current mapping, if it exists. */ + proto = dcb_getrewr(dev, app); + if (proto) { + app_itr = *app; + app_itr.protocol = proto; + sparx5_dcb_delrewr(dev, &app_itr); + } + + if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) + err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_setrewr); + else + err = dcb_setrewr(dev, app); + + if (err) + goto out; + + sparx5_dcb_app_update(dev); + +out: + return err; +} + const struct dcbnl_rtnl_ops sparx5_dcbnl_ops = { .ieee_setapp = sparx5_dcb_ieee_setapp, .ieee_delapp = sparx5_dcb_ieee_delapp, .dcbnl_setapptrust = sparx5_dcb_setapptrust, .dcbnl_getapptrust = sparx5_dcb_getapptrust, + .dcbnl_setrewr = sparx5_dcb_setrewr, + .dcbnl_delrewr = sparx5_dcb_delrewr, }; int sparx5_dcb_init(struct sparx5 *sparx5) @@ -304,6 +395,12 @@ int sparx5_dcb_init(struct sparx5 *sparx5) sparx5_port_apptrust[port->portno] = &sparx5_dcb_apptrust_policies [SPARX5_DCB_APPTRUST_DSCP_PCP]; + + /* Enable DSCP classification based on classified QoS class and + * DP, for all DSCP values, for all ports. 
+ */ + sparx5_port_qos_dscp_rewr_mode_set(port, + SPARX5_PORT_REW_DSCP_ALL); } return 0; diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c index 3c5d4fe99373..42b77ba9b572 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c @@ -198,12 +198,15 @@ static const struct sparx5_main_io_resource sparx5_main_iomap[] = { { TARGET_QSYS, 0x110a0000, 2 }, /* 0x6110a0000 */ { TARGET_QFWD, 0x110b0000, 2 }, /* 0x6110b0000 */ { TARGET_XQS, 0x110c0000, 2 }, /* 0x6110c0000 */ + { TARGET_VCAP_ES2, 0x110d0000, 2 }, /* 0x6110d0000 */ + { TARGET_VCAP_ES0, 0x110e0000, 2 }, /* 0x6110e0000 */ { TARGET_CLKGEN, 0x11100000, 2 }, /* 0x611100000 */ { TARGET_ANA_AC_POL, 0x11200000, 2 }, /* 0x611200000 */ { TARGET_QRES, 0x11280000, 2 }, /* 0x611280000 */ { TARGET_EACL, 0x112c0000, 2 }, /* 0x6112c0000 */ { TARGET_ANA_CL, 0x11400000, 2 }, /* 0x611400000 */ { TARGET_ANA_L3, 0x11480000, 2 }, /* 0x611480000 */ + { TARGET_ANA_AC_SDLB, 0x11500000, 2 }, /* 0x611500000 */ { TARGET_HSCH, 0x11580000, 2 }, /* 0x611580000 */ { TARGET_REW, 0x11600000, 2 }, /* 0x611600000 */ { TARGET_ANA_L2, 0x11800000, 2 }, /* 0x611800000 */ @@ -500,8 +503,8 @@ static int sparx5_init_coreclock(struct sparx5 *sparx5) clk_period = sparx5_clk_period(freq); - spx5_rmw(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_SET(clk_period / 100), - HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS, + spx5_rmw(HSCH_SYS_CLK_PER_100PS_SET(clk_period / 100), + HSCH_SYS_CLK_PER_100PS, sparx5, HSCH_SYS_CLK_PER); diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h index 4a574cdcb584..72e7928912eb 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h @@ -396,6 +396,7 @@ int sparx5_ptp_txtstamp_request(struct sparx5_port *port, void sparx5_ptp_txtstamp_release(struct sparx5_port *port, struct sk_buff *skb); irqreturn_t sparx5_ptp_irq_handler(int irq, void *args); +int sparx5_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts); /* sparx5_vcap_impl.c */ int sparx5_vcap_init(struct sparx5 *sparx5); @@ -413,6 +414,129 @@ int sparx5_pgid_alloc_glag(struct sparx5 *spx5, u16 *idx); int sparx5_pgid_alloc_mcast(struct sparx5 *spx5, u16 *idx); int sparx5_pgid_free(struct sparx5 *spx5, u16 idx); +/* sparx5_pool.c */ +struct sparx5_pool_entry { + u16 ref_cnt; + u32 idx; /* tc index */ +}; + +u32 sparx5_pool_idx_to_id(u32 idx); +int sparx5_pool_put(struct sparx5_pool_entry *pool, int size, u32 id); +int sparx5_pool_get(struct sparx5_pool_entry *pool, int size, u32 *id); +int sparx5_pool_get_with_idx(struct sparx5_pool_entry *pool, int size, u32 idx, + u32 *id); + +/* sparx5_sdlb.c */ +#define SPX5_SDLB_PUP_TOKEN_DISABLE 0x1FFF +#define SPX5_SDLB_PUP_TOKEN_MAX (SPX5_SDLB_PUP_TOKEN_DISABLE - 1) +#define SPX5_SDLB_GROUP_RATE_MAX 25000000000ULL +#define SPX5_SDLB_2CYCLES_TYPE2_THRES_OFFSET 13 +#define SPX5_SDLB_CNT 4096 +#define SPX5_SDLB_GROUP_CNT 10 +#define SPX5_CLK_PER_100PS_DEFAULT 16 + +struct sparx5_sdlb_group { + u64 max_rate; + u32 min_burst; + u32 frame_size; + u32 pup_interval; + u32 nsets; +}; + +extern struct sparx5_sdlb_group sdlb_groups[SPX5_SDLB_GROUP_CNT]; +int sparx5_sdlb_pup_token_get(struct sparx5 *sparx5, u32 pup_interval, + u64 rate); + +int sparx5_sdlb_clk_hz_get(struct sparx5 *sparx5); +int sparx5_sdlb_group_get_by_rate(struct sparx5 *sparx5, u32 rate, u32 burst); +int sparx5_sdlb_group_get_by_index(struct 
sparx5 *sparx5, u32 idx, u32 *group); + +int sparx5_sdlb_group_add(struct sparx5 *sparx5, u32 group, u32 idx); +int sparx5_sdlb_group_del(struct sparx5 *sparx5, u32 group, u32 idx); + +void sparx5_sdlb_group_init(struct sparx5 *sparx5, u64 max_rate, u32 min_burst, + u32 frame_size, u32 idx); + +/* sparx5_police.c */ +enum { + /* More policer types will be added later */ + SPX5_POL_SERVICE +}; + +struct sparx5_policer { + u32 type; + u32 idx; + u64 rate; + u32 burst; + u32 group; + u8 event_mask; +}; + +int sparx5_policer_conf_set(struct sparx5 *sparx5, struct sparx5_policer *pol); + +/* sparx5_psfp.c */ +#define SPX5_PSFP_GCE_CNT 4 +#define SPX5_PSFP_SG_CNT 1024 +#define SPX5_PSFP_SG_MIN_CYCLE_TIME_NS (1 * NSEC_PER_USEC) +#define SPX5_PSFP_SG_MAX_CYCLE_TIME_NS ((1 * NSEC_PER_SEC) - 1) +#define SPX5_PSFP_SG_MAX_IPV (SPX5_PRIOS - 1) +#define SPX5_PSFP_SG_OPEN (SPX5_PSFP_SG_CNT - 1) +#define SPX5_PSFP_SG_CYCLE_TIME_DEFAULT 1000000 +#define SPX5_PSFP_SF_MAX_SDU 16383 + +struct sparx5_psfp_fm { + struct sparx5_policer pol; +}; + +struct sparx5_psfp_gce { + bool gate_state; /* StreamGateState */ + u32 interval; /* TimeInterval */ + u32 ipv; /* InternalPriorityValue */ + u32 maxoctets; /* IntervalOctetMax */ +}; + +struct sparx5_psfp_sg { + bool gate_state; /* PSFPAdminGateStates */ + bool gate_enabled; /* PSFPGateEnabled */ + u32 ipv; /* PSFPAdminIPV */ + struct timespec64 basetime; /* PSFPAdminBaseTime */ + u32 cycletime; /* PSFPAdminCycleTime */ + u32 cycletimeext; /* PSFPAdminCycleTimeExtension */ + u32 num_entries; /* PSFPAdminControlListLength */ + struct sparx5_psfp_gce gce[SPX5_PSFP_GCE_CNT]; +}; + +struct sparx5_psfp_sf { + bool sblock_osize_ena; + bool sblock_osize; + u32 max_sdu; + u32 sgid; /* Gate id */ + u32 fmid; /* Flow meter id */ +}; + +int sparx5_psfp_fm_add(struct sparx5 *sparx5, u32 uidx, + struct sparx5_psfp_fm *fm, u32 *id); +int sparx5_psfp_fm_del(struct sparx5 *sparx5, u32 id); + +int sparx5_psfp_sg_add(struct sparx5 *sparx5, u32 uidx, + struct sparx5_psfp_sg *sg, u32 *id); +int sparx5_psfp_sg_del(struct sparx5 *sparx5, u32 id); + +int sparx5_psfp_sf_add(struct sparx5 *sparx5, const struct sparx5_psfp_sf *sf, + u32 *id); +int sparx5_psfp_sf_del(struct sparx5 *sparx5, u32 id); + +u32 sparx5_psfp_isdx_get_sf(struct sparx5 *sparx5, u32 isdx); +u32 sparx5_psfp_isdx_get_fm(struct sparx5 *sparx5, u32 isdx); +u32 sparx5_psfp_sf_get_sg(struct sparx5 *sparx5, u32 sfid); +void sparx5_isdx_conf_set(struct sparx5 *sparx5, u32 isdx, u32 sfid, u32 fmid); + +void sparx5_psfp_init(struct sparx5 *sparx5); + +/* sparx5_qos.c */ +void sparx5_new_base_time(struct sparx5 *sparx5, const u32 cycle_time, + const ktime_t org_base_time, ktime_t *new_base_time); + /* Clock period in picoseconds */ static inline u32 sparx5_clk_period(enum sparx5_core_clockfreq cclock) { diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h b/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h index 6c93dd6b01b0..bd03a0a3c1da 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h @@ -4,8 +4,8 @@ * Copyright (c) 2021 Microchip Technology Inc. */ -/* This file is autogenerated by cml-utils 2022-09-28 11:17:02 +0200. - * Commit ID: 385c8a11d71a9f6a60368d3a3cb648fa257b479a +/* This file is autogenerated by cml-utils 2023-02-10 11:18:53 +0100. 
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada */ #ifndef _SPARX5_MAIN_REGS_H_ @@ -19,6 +19,7 @@ enum sparx5_target { TARGET_ANA_AC = 1, TARGET_ANA_ACL = 2, TARGET_ANA_AC_POL = 4, + TARGET_ANA_AC_SDLB = 5, TARGET_ANA_CL = 6, TARGET_ANA_L2 = 7, TARGET_ANA_L3 = 8, @@ -46,6 +47,8 @@ enum sparx5_target { TARGET_QS = 177, TARGET_QSYS = 178, TARGET_REW = 179, + TARGET_VCAP_ES0 = 323, + TARGET_VCAP_ES2 = 324, TARGET_VCAP_SUPER = 326, TARGET_VOP = 327, TARGET_XQS = 331, @@ -55,7 +58,8 @@ enum sparx5_target { #define __REG(...) __VA_ARGS__ /* ANA_AC:RAM_CTRL:RAM_INIT */ -#define ANA_AC_RAM_INIT __REG(TARGET_ANA_AC, 0, 1, 839108, 0, 1, 4, 0, 0, 1, 4) +#define ANA_AC_RAM_INIT __REG(TARGET_ANA_AC,\ + 0, 1, 839108, 0, 1, 4, 0, 0, 1, 4) #define ANA_AC_RAM_INIT_RAM_INIT BIT(1) #define ANA_AC_RAM_INIT_RAM_INIT_SET(x)\ @@ -70,7 +74,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_RAM_INIT_RAM_CFG_HOOK, x) /* ANA_AC:PS_COMMON:OWN_UPSID */ -#define ANA_AC_OWN_UPSID(r) __REG(TARGET_ANA_AC, 0, 1, 894472, 0, 1, 352, 52, r, 3, 4) +#define ANA_AC_OWN_UPSID(r) __REG(TARGET_ANA_AC,\ + 0, 1, 894472, 0, 1, 352, 52, r, 3, 4) #define ANA_AC_OWN_UPSID_OWN_UPSID GENMASK(4, 0) #define ANA_AC_OWN_UPSID_OWN_UPSID_SET(x)\ @@ -79,13 +84,16 @@ enum sparx5_target { FIELD_GET(ANA_AC_OWN_UPSID_OWN_UPSID, x) /* ANA_AC:SRC:SRC_CFG */ -#define ANA_AC_SRC_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 0, 0, 1, 4) +#define ANA_AC_SRC_CFG(g) __REG(TARGET_ANA_AC,\ + 0, 1, 849920, g, 102, 16, 0, 0, 1, 4) /* ANA_AC:SRC:SRC_CFG1 */ -#define ANA_AC_SRC_CFG1(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 4, 0, 1, 4) +#define ANA_AC_SRC_CFG1(g) __REG(TARGET_ANA_AC,\ + 0, 1, 849920, g, 102, 16, 4, 0, 1, 4) /* ANA_AC:SRC:SRC_CFG2 */ -#define ANA_AC_SRC_CFG2(g) __REG(TARGET_ANA_AC, 0, 1, 849920, g, 102, 16, 8, 0, 1, 4) +#define ANA_AC_SRC_CFG2(g) __REG(TARGET_ANA_AC,\ + 0, 1, 849920, g, 102, 16, 8, 0, 1, 4) #define ANA_AC_SRC_CFG2_PORT_MASK2 BIT(0) #define ANA_AC_SRC_CFG2_PORT_MASK2_SET(x)\ @@ -94,13 +102,16 @@ enum sparx5_target { FIELD_GET(ANA_AC_SRC_CFG2_PORT_MASK2, x) /* ANA_AC:PGID:PGID_CFG */ -#define ANA_AC_PGID_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 0, 0, 1, 4) +#define ANA_AC_PGID_CFG(g) __REG(TARGET_ANA_AC,\ + 0, 1, 786432, g, 3290, 16, 0, 0, 1, 4) /* ANA_AC:PGID:PGID_CFG1 */ -#define ANA_AC_PGID_CFG1(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 4, 0, 1, 4) +#define ANA_AC_PGID_CFG1(g) __REG(TARGET_ANA_AC,\ + 0, 1, 786432, g, 3290, 16, 4, 0, 1, 4) /* ANA_AC:PGID:PGID_CFG2 */ -#define ANA_AC_PGID_CFG2(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 8, 0, 1, 4) +#define ANA_AC_PGID_CFG2(g) __REG(TARGET_ANA_AC,\ + 0, 1, 786432, g, 3290, 16, 8, 0, 1, 4) #define ANA_AC_PGID_CFG2_PORT_MASK2 BIT(0) #define ANA_AC_PGID_CFG2_PORT_MASK2_SET(x)\ @@ -109,7 +120,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_PGID_CFG2_PORT_MASK2, x) /* ANA_AC:PGID:PGID_MISC_CFG */ -#define ANA_AC_PGID_MISC_CFG(g) __REG(TARGET_ANA_AC, 0, 1, 786432, g, 3290, 16, 12, 0, 1, 4) +#define ANA_AC_PGID_MISC_CFG(g) __REG(TARGET_ANA_AC,\ + 0, 1, 786432, g, 3290, 16, 12, 0, 1, 4) #define ANA_AC_PGID_MISC_CFG_PGID_CPU_QU GENMASK(6, 4) #define ANA_AC_PGID_MISC_CFG_PGID_CPU_QU_SET(x)\ @@ -129,8 +141,257 @@ enum sparx5_target { #define ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA_GET(x)\ FIELD_GET(ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA, x) +/* ANA_AC:TSN_SF:TSN_SF */ +#define ANA_AC_TSN_SF __REG(TARGET_ANA_AC,\ + 0, 1, 839136, 0, 1, 4, 0, 0, 1, 4) + +#define ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY BIT(9) +#define 
ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY, x) +#define ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_TSN_STREAM_BLOCK_OVERSIZE_STICKY, x) + +#define ANA_AC_TSN_SF_PORT_NUM GENMASK(8, 0) +#define ANA_AC_TSN_SF_PORT_NUM_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_PORT_NUM, x) +#define ANA_AC_TSN_SF_PORT_NUM_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_PORT_NUM, x) + +/* ANA_AC:TSN_SF_CFG:TSN_SF_CFG */ +#define ANA_AC_TSN_SF_CFG(g) __REG(TARGET_ANA_AC,\ + 0, 1, 839680, g, 1024, 4, 0, 0, 1, 4) + +#define ANA_AC_TSN_SF_CFG_TSN_SGID GENMASK(25, 16) +#define ANA_AC_TSN_SF_CFG_TSN_SGID_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_CFG_TSN_SGID, x) +#define ANA_AC_TSN_SF_CFG_TSN_SGID_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_CFG_TSN_SGID, x) + +#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU GENMASK(15, 2) +#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_CFG_TSN_MAX_SDU, x) +#define ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_CFG_TSN_MAX_SDU, x) + +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA BIT(1) +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA, x) +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA, x) + +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE BIT(0) +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE, x) +#define ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE, x) + +/* ANA_AC:TSN_SF_STATUS:TSN_SF_STATUS */ +#define ANA_AC_TSN_SF_STATUS __REG(TARGET_ANA_AC,\ + 0, 1, 839072, 0, 1, 16, 0, 0, 1, 4) + +#define ANA_AC_TSN_SF_STATUS_FRM_LEN GENMASK(25, 12) +#define ANA_AC_TSN_SF_STATUS_FRM_LEN_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_STATUS_FRM_LEN, x) +#define ANA_AC_TSN_SF_STATUS_FRM_LEN_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_STATUS_FRM_LEN, x) + +#define ANA_AC_TSN_SF_STATUS_DLB_DROP BIT(11) +#define ANA_AC_TSN_SF_STATUS_DLB_DROP_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_STATUS_DLB_DROP, x) +#define ANA_AC_TSN_SF_STATUS_DLB_DROP_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_STATUS_DLB_DROP, x) + +#define ANA_AC_TSN_SF_STATUS_TSN_SFID GENMASK(10, 1) +#define ANA_AC_TSN_SF_STATUS_TSN_SFID_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_STATUS_TSN_SFID, x) +#define ANA_AC_TSN_SF_STATUS_TSN_SFID_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_STATUS_TSN_SFID, x) + +#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD BIT(0) +#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD_SET(x)\ + FIELD_PREP(ANA_AC_TSN_SF_STATUS_TSTAMP_VLD, x) +#define ANA_AC_TSN_SF_STATUS_TSTAMP_VLD_GET(x)\ + FIELD_GET(ANA_AC_TSN_SF_STATUS_TSTAMP_VLD, x) + +/* ANA_AC:SG_ACCESS:SG_ACCESS_CTRL */ +#define ANA_AC_SG_ACCESS_CTRL __REG(TARGET_ANA_AC,\ + 0, 1, 839140, 0, 1, 12, 0, 0, 1, 4) + +#define ANA_AC_SG_ACCESS_CTRL_SGID GENMASK(9, 0) +#define ANA_AC_SG_ACCESS_CTRL_SGID_SET(x)\ + FIELD_PREP(ANA_AC_SG_ACCESS_CTRL_SGID, x) +#define ANA_AC_SG_ACCESS_CTRL_SGID_GET(x)\ + FIELD_GET(ANA_AC_SG_ACCESS_CTRL_SGID, x) + +#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE BIT(28) +#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_SET(x)\ + FIELD_PREP(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE, x) +#define ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_GET(x)\ + FIELD_GET(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE, x) + +/* ANA_AC:SG_ACCESS:SG_CYCLETIME_UPDATE_PERIOD */ +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD __REG(TARGET_ANA_AC,\ + 0, 1, 839140, 0, 1, 12, 8, 0, 1, 4) + +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS GENMASK(15, 0) 
+#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS_SET(x)\ + FIELD_PREP(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS, x) +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS_GET(x)\ + FIELD_GET(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_CLKS, x) + +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA BIT(31) +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA, x) +#define ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_GET(x)\ + FIELD_GET(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA, x) + +/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_1 */ +#define ANA_AC_SG_CONFIG_REG_1 __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 48, 0, 1, 4) + +/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_2 */ +#define ANA_AC_SG_CONFIG_REG_2 __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 52, 0, 1, 4) + +/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_3 */ +#define ANA_AC_SG_CONFIG_REG_3 __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 56, 0, 1, 4) + +#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB GENMASK(15, 0) +#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB, x) +#define ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB, x) + +#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH GENMASK(18, 16) +#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH, x) +#define ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH, x) + +#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE BIT(20) +#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE, x) +#define ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE, x) + +#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS GENMASK(24, 21) +#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INIT_IPS, x) +#define ANA_AC_SG_CONFIG_REG_3_INIT_IPS_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INIT_IPS, x) + +#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE BIT(25) +#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE, x) +#define ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE, x) + +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA BIT(26) +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA, x) +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INVALID_RX_ENA, x) + +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX BIT(27) +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_INVALID_RX, x) +#define ANA_AC_SG_CONFIG_REG_3_INVALID_RX_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_INVALID_RX, x) + +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA BIT(28) +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA, x) +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_ENA, x) + +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED BIT(29) +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_SET(x)\ + FIELD_PREP(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED, x) +#define ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED_GET(x)\ + FIELD_GET(ANA_AC_SG_CONFIG_REG_3_OCTETS_EXCEEDED, x) + +/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_4 */ +#define ANA_AC_SG_CONFIG_REG_4 
__REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 60, 0, 1, 4) + +/* ANA_AC:SG_CONFIG:SG_CONFIG_REG_5 */ +#define ANA_AC_SG_CONFIG_REG_5 __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 64, 0, 1, 4) + +/* ANA_AC:SG_CONFIG:SG_GCL_GS_CONFIG */ +#define ANA_AC_SG_GCL_GS_CONFIG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 0, r, 4, 4) + +#define ANA_AC_SG_GCL_GS_CONFIG_IPS GENMASK(3, 0) +#define ANA_AC_SG_GCL_GS_CONFIG_IPS_SET(x)\ + FIELD_PREP(ANA_AC_SG_GCL_GS_CONFIG_IPS, x) +#define ANA_AC_SG_GCL_GS_CONFIG_IPS_GET(x)\ + FIELD_GET(ANA_AC_SG_GCL_GS_CONFIG_IPS, x) + +#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE BIT(4) +#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_SET(x)\ + FIELD_PREP(ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE, x) +#define ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_GET(x)\ + FIELD_GET(ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE, x) + +/* ANA_AC:SG_CONFIG:SG_GCL_TI_CONFIG */ +#define ANA_AC_SG_GCL_TI_CONFIG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 16, r, 4, 4) + +/* ANA_AC:SG_CONFIG:SG_GCL_OCT_CONFIG */ +#define ANA_AC_SG_GCL_OCT_CONFIG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 851584, 0, 1, 128, 32, r, 4, 4) + +/* ANA_AC:SG_STATUS:SG_STATUS_REG_1 */ +#define ANA_AC_SG_STATUS_REG_1 __REG(TARGET_ANA_AC,\ + 0, 1, 839088, 0, 1, 16, 0, 0, 1, 4) + +/* ANA_AC:SG_STATUS:SG_STATUS_REG_2 */ +#define ANA_AC_SG_STATUS_REG_2 __REG(TARGET_ANA_AC,\ + 0, 1, 839088, 0, 1, 16, 4, 0, 1, 4) + +/* ANA_AC:SG_STATUS:SG_STATUS_REG_3 */ +#define ANA_AC_SG_STATUS_REG_3 __REG(TARGET_ANA_AC,\ + 0, 1, 839088, 0, 1, 16, 8, 0, 1, 4) + +#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB GENMASK(15, 0) +#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_SET(x)\ + FIELD_PREP(ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB, x) +#define ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_GET(x)\ + FIELD_GET(ANA_AC_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB, x) + +#define ANA_AC_SG_STATUS_REG_3_GATE_STATE BIT(16) +#define ANA_AC_SG_STATUS_REG_3_GATE_STATE_SET(x)\ + FIELD_PREP(ANA_AC_SG_STATUS_REG_3_GATE_STATE, x) +#define ANA_AC_SG_STATUS_REG_3_GATE_STATE_GET(x)\ + FIELD_GET(ANA_AC_SG_STATUS_REG_3_GATE_STATE, x) + +#define ANA_AC_SG_STATUS_REG_3_IPS GENMASK(23, 20) +#define ANA_AC_SG_STATUS_REG_3_IPS_SET(x)\ + FIELD_PREP(ANA_AC_SG_STATUS_REG_3_IPS, x) +#define ANA_AC_SG_STATUS_REG_3_IPS_GET(x)\ + FIELD_GET(ANA_AC_SG_STATUS_REG_3_IPS, x) + +#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING BIT(24) +#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING_SET(x)\ + FIELD_PREP(ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING, x) +#define ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING_GET(x)\ + FIELD_GET(ANA_AC_SG_STATUS_REG_3_CONFIG_PENDING, x) + +#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX GENMASK(27, 25) +#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX_SET(x)\ + FIELD_PREP(ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX, x) +#define ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX_GET(x)\ + FIELD_GET(ANA_AC_SG_STATUS_REG_3_GCL_OCTET_INDEX, x) + +/* ANA_AC:SG_STATUS:SG_STATUS_REG_4 */ +#define ANA_AC_SG_STATUS_REG_4 __REG(TARGET_ANA_AC,\ + 0, 1, 839088, 0, 1, 16, 12, 0, 1, 4) + /* ANA_AC:STAT_GLOBAL_CFG_PORT:STAT_GLOBAL_EVENT_MASK */ -#define ANA_AC_PORT_SGE_CFG(r) __REG(TARGET_ANA_AC, 0, 1, 851552, 0, 1, 20, 0, r, 4, 4) +#define ANA_AC_PORT_SGE_CFG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 851552, 0, 1, 20, 0, r, 4, 4) #define ANA_AC_PORT_SGE_CFG_MASK GENMASK(15, 0) #define ANA_AC_PORT_SGE_CFG_MASK_SET(x)\ @@ -139,7 +400,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_PORT_SGE_CFG_MASK, x) /* ANA_AC:STAT_GLOBAL_CFG_PORT:STAT_RESET */ -#define ANA_AC_STAT_RESET __REG(TARGET_ANA_AC, 0, 1, 851552, 0, 1, 20, 16, 0, 1, 4) 
+#define ANA_AC_STAT_RESET __REG(TARGET_ANA_AC,\ + 0, 1, 851552, 0, 1, 20, 16, 0, 1, 4) #define ANA_AC_STAT_RESET_RESET BIT(0) #define ANA_AC_STAT_RESET_RESET_SET(x)\ @@ -148,7 +410,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_STAT_RESET_RESET, x) /* ANA_AC:STAT_CNT_CFG_PORT:STAT_CFG */ -#define ANA_AC_PORT_STAT_CFG(g, r) __REG(TARGET_ANA_AC, 0, 1, 843776, g, 70, 64, 4, r, 4, 4) +#define ANA_AC_PORT_STAT_CFG(g, r) __REG(TARGET_ANA_AC,\ + 0, 1, 843776, g, 70, 64, 4, r, 4, 4) #define ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK GENMASK(11, 4) #define ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK_SET(x)\ @@ -169,10 +432,42 @@ enum sparx5_target { FIELD_GET(ANA_AC_PORT_STAT_CFG_CFG_CNT_BYTE, x) /* ANA_AC:STAT_CNT_CFG_PORT:STAT_LSB_CNT */ -#define ANA_AC_PORT_STAT_LSB_CNT(g, r) __REG(TARGET_ANA_AC, 0, 1, 843776, g, 70, 64, 20, r, 4, 4) +#define ANA_AC_PORT_STAT_LSB_CNT(g, r) __REG(TARGET_ANA_AC,\ + 0, 1, 843776, g, 70, 64, 20, r, 4, 4) + +/* ANA_AC:STAT_GLOBAL_CFG_ACL:GLOBAL_CNT_FRM_TYPE_CFG */ +#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 893792, 0, 1, 24, 0, r, 2, 4) + +#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE GENMASK(2, 0) +#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE_SET(x)\ + FIELD_PREP(ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE, x) +#define ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE_GET(x)\ + FIELD_GET(ANA_AC_ACL_GLOBAL_CNT_FRM_TYPE_CFG_GLOBAL_CFG_CNT_FRM_TYPE, x) + +/* ANA_AC:STAT_GLOBAL_CFG_ACL:STAT_GLOBAL_CFG */ +#define ANA_AC_ACL_STAT_GLOBAL_CFG(r) __REG(TARGET_ANA_AC,\ + 0, 1, 893792, 0, 1, 24, 8, r, 2, 4) + +#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE BIT(0) +#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE_SET(x)\ + FIELD_PREP(ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE, x) +#define ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE_GET(x)\ + FIELD_GET(ANA_AC_ACL_STAT_GLOBAL_CFG_GLOBAL_CFG_CNT_BYTE, x) + +/* ANA_AC:STAT_GLOBAL_CFG_ACL:STAT_GLOBAL_EVENT_MASK */ +#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK(r) __REG(TARGET_ANA_AC,\ + 0, 1, 893792, 0, 1, 24, 16, r, 2, 4) + +#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK GENMASK(3, 0) +#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK_SET(x)\ + FIELD_PREP(ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK, x) +#define ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK_GET(x)\ + FIELD_GET(ANA_AC_ACL_STAT_GLOBAL_EVENT_MASK_GLOBAL_EVENT_MASK, x) /* ANA_ACL:COMMON:VCAP_S2_CFG */ -#define ANA_ACL_VCAP_S2_CFG(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 0, r, 70, 4) +#define ANA_ACL_VCAP_S2_CFG(r) __REG(TARGET_ANA_ACL,\ + 0, 1, 32768, 0, 1, 592, 0, r, 70, 4) #define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA BIT(28) #define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA_SET(x)\ @@ -259,7 +554,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_ENA, x) /* ANA_ACL:COMMON:SWAP_IP_CTRL */ -#define ANA_ACL_SWAP_IP_CTRL __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 412, 0, 1, 4) +#define ANA_ACL_SWAP_IP_CTRL __REG(TARGET_ANA_ACL,\ + 0, 1, 32768, 0, 1, 592, 412, 0, 1, 4) #define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL GENMASK(23, 18) #define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL_SET(x)\ @@ -292,7 +588,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA, x) /* ANA_ACL:COMMON:VCAP_S2_RLEG_STAT */ -#define ANA_ACL_VCAP_S2_RLEG_STAT(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 424, r, 4, 4) +#define ANA_ACL_VCAP_S2_RLEG_STAT(r) __REG(TARGET_ANA_ACL,\ + 0, 1, 32768, 0, 1, 592, 424, r, 
4, 4) #define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK GENMASK(12, 6) #define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK_SET(x)\ @@ -307,7 +604,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK, x) /* ANA_ACL:COMMON:VCAP_S2_FRAGMENT_CFG */ -#define ANA_ACL_VCAP_S2_FRAGMENT_CFG __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 440, 0, 1, 4) +#define ANA_ACL_VCAP_S2_FRAGMENT_CFG __REG(TARGET_ANA_ACL,\ + 0, 1, 32768, 0, 1, 592, 440, 0, 1, 4) #define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN GENMASK(9, 5) #define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN_SET(x)\ @@ -328,7 +626,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES, x) /* ANA_ACL:COMMON:OWN_UPSID */ -#define ANA_ACL_OWN_UPSID(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 580, r, 3, 4) +#define ANA_ACL_OWN_UPSID(r) __REG(TARGET_ANA_ACL,\ + 0, 1, 32768, 0, 1, 592, 580, r, 3, 4) #define ANA_ACL_OWN_UPSID_OWN_UPSID GENMASK(4, 0) #define ANA_ACL_OWN_UPSID_OWN_UPSID_SET(x)\ @@ -337,7 +636,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_OWN_UPSID_OWN_UPSID, x) /* ANA_ACL:KEY_SEL:VCAP_S2_KEY_SEL */ -#define ANA_ACL_VCAP_S2_KEY_SEL(g, r) __REG(TARGET_ANA_ACL, 0, 1, 34200, g, 134, 16, 0, r, 4, 4) +#define ANA_ACL_VCAP_S2_KEY_SEL(g, r) __REG(TARGET_ANA_ACL,\ + 0, 1, 34200, g, 134, 16, 0, r, 4, 4) #define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA BIT(13) #define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_SET(x)\ @@ -388,13 +688,16 @@ enum sparx5_target { FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL, x) /* ANA_ACL:CNT_A:CNT_A */ -#define ANA_ACL_CNT_A(g) __REG(TARGET_ANA_ACL, 0, 1, 0, g, 4096, 4, 0, 0, 1, 4) +#define ANA_ACL_CNT_A(g) __REG(TARGET_ANA_ACL,\ + 0, 1, 0, g, 4096, 4, 0, 0, 1, 4) /* ANA_ACL:CNT_B:CNT_B */ -#define ANA_ACL_CNT_B(g) __REG(TARGET_ANA_ACL, 0, 1, 16384, g, 4096, 4, 0, 0, 1, 4) +#define ANA_ACL_CNT_B(g) __REG(TARGET_ANA_ACL,\ + 0, 1, 16384, g, 4096, 4, 0, 0, 1, 4) /* ANA_ACL:STICKY:SEC_LOOKUP_STICKY */ -#define ANA_ACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_ANA_ACL, 0, 1, 36408, 0, 1, 16, 0, r, 4, 4) +#define ANA_ACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_ANA_ACL,\ + 0, 1, 36408, 0, 1, 16, 0, r, 4, 4) #define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY BIT(17) #define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY_SET(x)\ @@ -505,7 +808,8 @@ enum sparx5_target { FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x) /* ANA_AC_POL:POL_ALL_CFG:POL_UPD_INT_CFG */ -#define ANA_AC_POL_POL_UPD_INT_CFG __REG(TARGET_ANA_AC_POL, 0, 1, 75968, 0, 1, 1160, 1148, 0, 1, 4) +#define ANA_AC_POL_POL_UPD_INT_CFG __REG(TARGET_ANA_AC_POL,\ + 0, 1, 75968, 0, 1, 1160, 1148, 0, 1, 4) #define ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT GENMASK(9, 0) #define ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT_SET(x)\ @@ -514,7 +818,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT, x) /* ANA_AC_POL:COMMON_BDLB:DLB_CTRL */ -#define ANA_AC_POL_BDLB_DLB_CTRL __REG(TARGET_ANA_AC_POL, 0, 1, 79048, 0, 1, 8, 0, 0, 1, 4) +#define ANA_AC_POL_BDLB_DLB_CTRL __REG(TARGET_ANA_AC_POL,\ + 0, 1, 79048, 0, 1, 8, 0, 0, 1, 4) #define ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS GENMASK(26, 19) #define ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS_SET(x)\ @@ -541,7 +846,8 @@ enum sparx5_target { FIELD_GET(ANA_AC_POL_BDLB_DLB_CTRL_DLB_ADD_ENA, x) /* ANA_AC_POL:COMMON_BUM_SLB:DLB_CTRL */ -#define ANA_AC_POL_SLB_DLB_CTRL __REG(TARGET_ANA_AC_POL, 0, 1, 79056, 0, 1, 20, 0, 0, 1, 4) +#define ANA_AC_POL_SLB_DLB_CTRL __REG(TARGET_ANA_AC_POL,\ + 0, 1, 79056, 0, 1, 20, 0, 0, 1, 4) #define ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS 
GENMASK(26, 19) #define ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS_SET(x)\ @@ -567,8 +873,235 @@ enum sparx5_target { #define ANA_AC_POL_SLB_DLB_CTRL_DLB_ADD_ENA_GET(x)\ FIELD_GET(ANA_AC_POL_SLB_DLB_CTRL_DLB_ADD_ENA, x) +/* ANA_AC_SDLB:LBGRP_TBL:XLB_START */ +#define ANA_AC_SDLB_XLB_START(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 0, 0, 1, 4) + +#define ANA_AC_SDLB_XLB_START_LBSET_START GENMASK(12, 0) +#define ANA_AC_SDLB_XLB_START_LBSET_START_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_XLB_START_LBSET_START, x) +#define ANA_AC_SDLB_XLB_START_LBSET_START_GET(x)\ + FIELD_GET(ANA_AC_SDLB_XLB_START_LBSET_START, x) + +/* ANA_AC_SDLB:LBGRP_TBL:PUP_INTERVAL */ +#define ANA_AC_SDLB_PUP_INTERVAL(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 4, 0, 1, 4) + +#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL GENMASK(19, 0) +#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL, x) +#define ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_GET(x)\ + FIELD_GET(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL, x) + +/* ANA_AC_SDLB:LBGRP_TBL:PUP_CTRL */ +#define ANA_AC_SDLB_PUP_CTRL(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 8, 0, 1, 4) + +#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT GENMASK(18, 0) +#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT, x) +#define ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT_GET(x)\ + FIELD_GET(ANA_AC_SDLB_PUP_CTRL_PUP_LB_DT, x) + +#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA BIT(24) +#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_PUP_CTRL_PUP_ENA, x) +#define ANA_AC_SDLB_PUP_CTRL_PUP_ENA_GET(x)\ + FIELD_GET(ANA_AC_SDLB_PUP_CTRL_PUP_ENA, x) + +/* ANA_AC_SDLB:LBGRP_TBL:LBGRP_MISC */ +#define ANA_AC_SDLB_LBGRP_MISC(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 12, 0, 1, 4) + +#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT GENMASK(12, 8) +#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT, x) +#define ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_GET(x)\ + FIELD_GET(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT, x) + +/* ANA_AC_SDLB:LBGRP_TBL:FRM_RATE_TOKENS */ +#define ANA_AC_SDLB_FRM_RATE_TOKENS(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 16, 0, 1, 4) + +#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS GENMASK(12, 0) +#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS, x) +#define ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_GET(x)\ + FIELD_GET(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS, x) + +/* ANA_AC_SDLB:LBGRP_TBL:LBGRP_STATE_TBL */ +#define ANA_AC_SDLB_LBGRP_STATE_TBL(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 295468, g, 10, 24, 20, 0, 1, 4) + +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING BIT(0) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING, x) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING_GET(x)\ + FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_ONGOING, x) + +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK BIT(1) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK, x) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK_GET(x)\ + FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_WAIT_ACK, x) + +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT GENMASK(28, 16) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT, x) +#define ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT_GET(x)\ + 
FIELD_GET(ANA_AC_SDLB_LBGRP_STATE_TBL_PUP_LBSET_NEXT, x) + +/* ANA_AC_SDLB:LBSET_TBL:PUP_TOKENS */ +#define ANA_AC_SDLB_PUP_TOKENS(g, r) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 0, r, 2, 4) + +#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS GENMASK(12, 0) +#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS, x) +#define ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_GET(x)\ + FIELD_GET(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS, x) + +/* ANA_AC_SDLB:LBSET_TBL:THRES */ +#define ANA_AC_SDLB_THRES(g, r) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 8, r, 2, 4) + +#define ANA_AC_SDLB_THRES_THRES GENMASK(9, 0) +#define ANA_AC_SDLB_THRES_THRES_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_THRES_THRES, x) +#define ANA_AC_SDLB_THRES_THRES_GET(x)\ + FIELD_GET(ANA_AC_SDLB_THRES_THRES, x) + +#define ANA_AC_SDLB_THRES_THRES_HYS GENMASK(25, 16) +#define ANA_AC_SDLB_THRES_THRES_HYS_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_THRES_THRES_HYS, x) +#define ANA_AC_SDLB_THRES_THRES_HYS_GET(x)\ + FIELD_GET(ANA_AC_SDLB_THRES_THRES_HYS, x) + +/* ANA_AC_SDLB:LBSET_TBL:XLB_NEXT */ +#define ANA_AC_SDLB_XLB_NEXT(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 16, 0, 1, 4) + +#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT GENMASK(12, 0) +#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT, x) +#define ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_GET(x)\ + FIELD_GET(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT, x) + +#define ANA_AC_SDLB_XLB_NEXT_LBGRP GENMASK(27, 24) +#define ANA_AC_SDLB_XLB_NEXT_LBGRP_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_XLB_NEXT_LBGRP, x) +#define ANA_AC_SDLB_XLB_NEXT_LBGRP_GET(x)\ + FIELD_GET(ANA_AC_SDLB_XLB_NEXT_LBGRP, x) + +/* ANA_AC_SDLB:LBSET_TBL:INH_CTRL */ +#define ANA_AC_SDLB_INH_CTRL(g, r) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 20, r, 2, 4) + +#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX GENMASK(12, 0) +#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, x) +#define ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_GET(x)\ + FIELD_GET(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, x) + +#define ANA_AC_SDLB_INH_CTRL_INH_MODE GENMASK(21, 20) +#define ANA_AC_SDLB_INH_CTRL_INH_MODE_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_INH_CTRL_INH_MODE, x) +#define ANA_AC_SDLB_INH_CTRL_INH_MODE_GET(x)\ + FIELD_GET(ANA_AC_SDLB_INH_CTRL_INH_MODE, x) + +#define ANA_AC_SDLB_INH_CTRL_INH_LB BIT(24) +#define ANA_AC_SDLB_INH_CTRL_INH_LB_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_INH_CTRL_INH_LB, x) +#define ANA_AC_SDLB_INH_CTRL_INH_LB_GET(x)\ + FIELD_GET(ANA_AC_SDLB_INH_CTRL_INH_LB, x) + +/* ANA_AC_SDLB:LBSET_TBL:INH_LBSET_ADDR */ +#define ANA_AC_SDLB_INH_LBSET_ADDR(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 28, 0, 1, 4) + +#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR GENMASK(12, 0) +#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR, x) +#define ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR_GET(x)\ + FIELD_GET(ANA_AC_SDLB_INH_LBSET_ADDR_INH_LBSET_ADDR, x) + +/* ANA_AC_SDLB:LBSET_TBL:DLB_MISC */ +#define ANA_AC_SDLB_DLB_MISC(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 32, 0, 1, 4) + +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA BIT(0) +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA, x) +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_MISC_DLB_FRM_RATE_ENA, x) + +#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA BIT(6) +#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA_SET(x)\ + 
FIELD_PREP(ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA, x) +#define ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_MISC_MARK_ALL_FRMS_RED_ENA, x) + +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ GENMASK(14, 8) +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ, x) +#define ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_MISC_DLB_FRM_ADJ, x) + +/* ANA_AC_SDLB:LBSET_TBL:DLB_CFG */ +#define ANA_AC_SDLB_DLB_CFG(g) __REG(TARGET_ANA_AC_SDLB,\ + 0, 1, 0, g, 4616, 64, 36, 0, 1, 4) + +#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA BIT(11) +#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA, x) +#define ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_DROP_ON_YELLOW_ENA, x) + +#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL GENMASK(10, 9) +#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL, x) +#define ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_DP_BYPASS_LVL, x) + +#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS BIT(8) +#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS, x) +#define ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_HIER_DLB_DIS, x) + +#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS BIT(7) +#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS, x) +#define ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_ENCAP_DATA_DIS, x) + +#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL GENMASK(6, 5) +#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL, x) +#define ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_COLOR_AWARE_LVL, x) + +#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL GENMASK(4, 3) +#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL, x) +#define ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_CIR_INC_DP_VAL, x) + +#define ANA_AC_SDLB_DLB_CFG_DLB_MODE BIT(2) +#define ANA_AC_SDLB_DLB_CFG_DLB_MODE_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_DLB_MODE, x) +#define ANA_AC_SDLB_DLB_CFG_DLB_MODE_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_DLB_MODE, x) + +#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK GENMASK(1, 0) +#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK_SET(x)\ + FIELD_PREP(ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK, x) +#define ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK_GET(x)\ + FIELD_GET(ANA_AC_SDLB_DLB_CFG_TRAFFIC_TYPE_MASK, x) + /* ANA_CL:PORT:FILTER_CTRL */ -#define ANA_CL_FILTER_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 4, 0, 1, 4) +#define ANA_CL_FILTER_CTRL(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 4, 0, 1, 4) #define ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS BIT(2) #define ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS_SET(x)\ @@ -589,7 +1122,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_FILTER_CTRL_FORCE_FCS_UPDATE_ENA, x) /* ANA_CL:PORT:VLAN_FILTER_CTRL */ -#define ANA_CL_VLAN_FILTER_CTRL(g, r) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 8, r, 3, 4) +#define ANA_CL_VLAN_FILTER_CTRL(g, r) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 8, r, 3, 4) #define ANA_CL_VLAN_FILTER_CTRL_TAG_REQUIRED_ENA BIT(10) #define ANA_CL_VLAN_FILTER_CTRL_TAG_REQUIRED_ENA_SET(x)\ @@ -658,7 +1192,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_VLAN_FILTER_CTRL_CUST3_STAG_DIS, x) /* 
ANA_CL:PORT:ETAG_FILTER_CTRL */ -#define ANA_CL_ETAG_FILTER_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 20, 0, 1, 4) +#define ANA_CL_ETAG_FILTER_CTRL(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 20, 0, 1, 4) #define ANA_CL_ETAG_FILTER_CTRL_ETAG_REQUIRED_ENA BIT(1) #define ANA_CL_ETAG_FILTER_CTRL_ETAG_REQUIRED_ENA_SET(x)\ @@ -673,7 +1208,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_ETAG_FILTER_CTRL_ETAG_DIS, x) /* ANA_CL:PORT:VLAN_CTRL */ -#define ANA_CL_VLAN_CTRL(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 32, 0, 1, 4) +#define ANA_CL_VLAN_CTRL(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 32, 0, 1, 4) #define ANA_CL_VLAN_CTRL_PORT_VOE_TPID_AWARE_DIS GENMASK(30, 26) #define ANA_CL_VLAN_CTRL_PORT_VOE_TPID_AWARE_DIS_SET(x)\ @@ -742,7 +1278,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_VLAN_CTRL_PORT_VID, x) /* ANA_CL:PORT:VLAN_CTRL_2 */ -#define ANA_CL_VLAN_CTRL_2(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 36, 0, 1, 4) +#define ANA_CL_VLAN_CTRL_2(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 36, 0, 1, 4) #define ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT GENMASK(1, 0) #define ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT_SET(x)\ @@ -751,7 +1288,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_VLAN_CTRL_2_VLAN_PUSH_CNT, x) /* ANA_CL:PORT:PCP_DEI_MAP_CFG */ -#define ANA_CL_PCP_DEI_MAP_CFG(g, r) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 108, r, 16, 4) +#define ANA_CL_PCP_DEI_MAP_CFG(g, r) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 108, r, 16, 4) #define ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_DP_VAL GENMASK(4, 3) #define ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_DP_VAL_SET(x)\ @@ -766,7 +1304,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_PCP_DEI_MAP_CFG_PCP_DEI_QOS_VAL, x) /* ANA_CL:PORT:QOS_CFG */ -#define ANA_CL_QOS_CFG(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 172, 0, 1, 4) +#define ANA_CL_QOS_CFG(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 172, 0, 1, 4) #define ANA_CL_QOS_CFG_DEFAULT_COSID_ENA BIT(17) #define ANA_CL_QOS_CFG_DEFAULT_COSID_ENA_SET(x)\ @@ -841,10 +1380,74 @@ enum sparx5_target { FIELD_GET(ANA_CL_QOS_CFG_DEFAULT_QOS_VAL, x) /* ANA_CL:PORT:CAPTURE_BPDU_CFG */ -#define ANA_CL_CAPTURE_BPDU_CFG(g) __REG(TARGET_ANA_CL, 0, 1, 131072, g, 70, 512, 196, 0, 1, 4) +#define ANA_CL_CAPTURE_BPDU_CFG(g) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 196, 0, 1, 4) + +/* ANA_CL:PORT:ADV_CL_CFG_2 */ +#define ANA_CL_ADV_CL_CFG_2(g, r) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 200, r, 6, 4) + +#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA BIT(1) +#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA, x) +#define ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_2_USE_CL_TCI0_ENA, x) + +#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA BIT(0) +#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA, x) +#define ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_2_USE_CL_DSCP_ENA, x) + +/* ANA_CL:PORT:ADV_CL_CFG */ +#define ANA_CL_ADV_CL_CFG(g, r) __REG(TARGET_ANA_CL,\ + 0, 1, 131072, g, 70, 512, 224, r, 6, 4) + +#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL GENMASK(30, 26) +#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL, x) +#define ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL GENMASK(25, 21) +#define ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL, x) +#define 
ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL GENMASK(20, 16) +#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL, x) +#define ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL GENMASK(15, 11) +#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL, x) +#define ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL GENMASK(10, 6) +#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL, x) +#define ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL GENMASK(5, 1) +#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL, x) +#define ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL, x) + +#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA BIT(0) +#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(x)\ + FIELD_PREP(ANA_CL_ADV_CL_CFG_LOOKUP_ENA, x) +#define ANA_CL_ADV_CL_CFG_LOOKUP_ENA_GET(x)\ + FIELD_GET(ANA_CL_ADV_CL_CFG_LOOKUP_ENA, x) /* ANA_CL:COMMON:OWN_UPSID */ -#define ANA_CL_OWN_UPSID(r) __REG(TARGET_ANA_CL, 0, 1, 166912, 0, 1, 756, 0, r, 3, 4) +#define ANA_CL_OWN_UPSID(r) __REG(TARGET_ANA_CL,\ + 0, 1, 166912, 0, 1, 756, 0, r, 3, 4) #define ANA_CL_OWN_UPSID_OWN_UPSID GENMASK(4, 0) #define ANA_CL_OWN_UPSID_OWN_UPSID_SET(x)\ @@ -853,7 +1456,8 @@ enum sparx5_target { FIELD_GET(ANA_CL_OWN_UPSID_OWN_UPSID, x) /* ANA_CL:COMMON:DSCP_CFG */ -#define ANA_CL_DSCP_CFG(r) __REG(TARGET_ANA_CL, 0, 1, 166912, 0, 1, 756, 256, r, 64, 4) +#define ANA_CL_DSCP_CFG(r) __REG(TARGET_ANA_CL,\ + 0, 1, 166912, 0, 1, 756, 256, r, 64, 4) #define ANA_CL_DSCP_CFG_DSCP_TRANSLATE_VAL GENMASK(12, 7) #define ANA_CL_DSCP_CFG_DSCP_TRANSLATE_VAL_SET(x)\ @@ -885,14 +1489,103 @@ enum sparx5_target { #define ANA_CL_DSCP_CFG_DSCP_TRUST_ENA_GET(x)\ FIELD_GET(ANA_CL_DSCP_CFG_DSCP_TRUST_ENA, x) +/* ANA_CL:COMMON:QOS_MAP_CFG */ +#define ANA_CL_QOS_MAP_CFG(r) __REG(TARGET_ANA_CL,\ + 0, 1, 166912, 0, 1, 756, 512, r, 32, 4) + +#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL GENMASK(9, 4) +#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(x)\ + FIELD_PREP(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x) +#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_GET(x)\ + FIELD_GET(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x) + +/* ANA_L2:COMMON:FWD_CFG */ +#define ANA_L2_FWD_CFG __REG(TARGET_ANA_L2,\ + 0, 1, 566024, 0, 1, 700, 0, 0, 1, 4) + +#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL GENMASK(21, 20) +#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL, x) +#define ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_MAC_TBL_SPLIT_SEL, x) + +#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA BIT(18) +#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA, x) +#define ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_PORT_DEFAULT_BDLB_ENA, x) + +#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA BIT(17) +#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA, x) +#define ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA_GET(x)\ + 
FIELD_GET(ANA_L2_FWD_CFG_QUEUE_DEFAULT_SDLB_ENA, x) + +#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA BIT(16) +#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, x) +#define ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, x) + +#define ANA_L2_FWD_CFG_CPU_DMAC_QU GENMASK(10, 8) +#define ANA_L2_FWD_CFG_CPU_DMAC_QU_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_CPU_DMAC_QU, x) +#define ANA_L2_FWD_CFG_CPU_DMAC_QU_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_CPU_DMAC_QU, x) + +#define ANA_L2_FWD_CFG_LOOPBACK_ENA BIT(7) +#define ANA_L2_FWD_CFG_LOOPBACK_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_LOOPBACK_ENA, x) +#define ANA_L2_FWD_CFG_LOOPBACK_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_LOOPBACK_ENA, x) + +#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA BIT(6) +#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA, x) +#define ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_CPU_DMAC_COPY_ENA, x) + +#define ANA_L2_FWD_CFG_FILTER_MODE_SEL BIT(4) +#define ANA_L2_FWD_CFG_FILTER_MODE_SEL_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_FILTER_MODE_SEL, x) +#define ANA_L2_FWD_CFG_FILTER_MODE_SEL_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_FILTER_MODE_SEL, x) + +#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA BIT(3) +#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA, x) +#define ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_FLOOD_MIRROR_ENA, x) + +#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA BIT(2) +#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA, x) +#define ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_FLOOD_IGNORE_VLAN_ENA, x) + +#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA BIT(1) +#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA, x) +#define ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_FLOOD_CPU_COPY_ENA, x) + +#define ANA_L2_FWD_CFG_FWD_ENA BIT(0) +#define ANA_L2_FWD_CFG_FWD_ENA_SET(x)\ + FIELD_PREP(ANA_L2_FWD_CFG_FWD_ENA, x) +#define ANA_L2_FWD_CFG_FWD_ENA_GET(x)\ + FIELD_GET(ANA_L2_FWD_CFG_FWD_ENA, x) + /* ANA_L2:COMMON:AUTO_LRN_CFG */ -#define ANA_L2_AUTO_LRN_CFG __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 24, 0, 1, 4) +#define ANA_L2_AUTO_LRN_CFG __REG(TARGET_ANA_L2,\ + 0, 1, 566024, 0, 1, 700, 24, 0, 1, 4) /* ANA_L2:COMMON:AUTO_LRN_CFG1 */ -#define ANA_L2_AUTO_LRN_CFG1 __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 28, 0, 1, 4) +#define ANA_L2_AUTO_LRN_CFG1 __REG(TARGET_ANA_L2,\ + 0, 1, 566024, 0, 1, 700, 28, 0, 1, 4) /* ANA_L2:COMMON:AUTO_LRN_CFG2 */ -#define ANA_L2_AUTO_LRN_CFG2 __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 32, 0, 1, 4) +#define ANA_L2_AUTO_LRN_CFG2 __REG(TARGET_ANA_L2,\ + 0, 1, 566024, 0, 1, 700, 32, 0, 1, 4) #define ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2 BIT(0) #define ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2_SET(x)\ @@ -901,7 +1594,8 @@ enum sparx5_target { FIELD_GET(ANA_L2_AUTO_LRN_CFG2_AUTO_LRN_ENA2, x) /* ANA_L2:COMMON:OWN_UPSID */ -#define ANA_L2_OWN_UPSID(r) __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 672, r, 3, 4) +#define ANA_L2_OWN_UPSID(r) __REG(TARGET_ANA_L2,\ + 0, 1, 566024, 0, 1, 700, 672, r, 3, 4) #define ANA_L2_OWN_UPSID_OWN_UPSID GENMASK(4, 0) #define ANA_L2_OWN_UPSID_OWN_UPSID_SET(x)\ @@ -909,8 +1603,29 @@ enum sparx5_target { #define ANA_L2_OWN_UPSID_OWN_UPSID_GET(x)\ FIELD_GET(ANA_L2_OWN_UPSID_OWN_UPSID, x) +/* ANA_L2:ISDX:DLB_CFG */ +#define ANA_L2_DLB_CFG(g) __REG(TARGET_ANA_L2,\ + 0, 
1, 0, g, 4096, 128, 56, 0, 1, 4) + +#define ANA_L2_DLB_CFG_DLB_IDX GENMASK(12, 0) +#define ANA_L2_DLB_CFG_DLB_IDX_SET(x)\ + FIELD_PREP(ANA_L2_DLB_CFG_DLB_IDX, x) +#define ANA_L2_DLB_CFG_DLB_IDX_GET(x)\ + FIELD_GET(ANA_L2_DLB_CFG_DLB_IDX, x) + +/* ANA_L2:ISDX:TSN_CFG */ +#define ANA_L2_TSN_CFG(g) __REG(TARGET_ANA_L2,\ + 0, 1, 0, g, 4096, 128, 100, 0, 1, 4) + +#define ANA_L2_TSN_CFG_TSN_SFID GENMASK(9, 0) +#define ANA_L2_TSN_CFG_TSN_SFID_SET(x)\ + FIELD_PREP(ANA_L2_TSN_CFG_TSN_SFID, x) +#define ANA_L2_TSN_CFG_TSN_SFID_GET(x)\ + FIELD_GET(ANA_L2_TSN_CFG_TSN_SFID, x) + /* ANA_L3:COMMON:VLAN_CTRL */ -#define ANA_L3_VLAN_CTRL __REG(TARGET_ANA_L3, 0, 1, 493632, 0, 1, 184, 4, 0, 1, 4) +#define ANA_L3_VLAN_CTRL __REG(TARGET_ANA_L3,\ + 0, 1, 493632, 0, 1, 184, 4, 0, 1, 4) #define ANA_L3_VLAN_CTRL_VLAN_ENA BIT(0) #define ANA_L3_VLAN_CTRL_VLAN_ENA_SET(x)\ @@ -919,7 +1634,8 @@ enum sparx5_target { FIELD_GET(ANA_L3_VLAN_CTRL_VLAN_ENA, x) /* ANA_L3:VLAN:VLAN_CFG */ -#define ANA_L3_VLAN_CFG(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 8, 0, 1, 4) +#define ANA_L3_VLAN_CFG(g) __REG(TARGET_ANA_L3,\ + 0, 1, 0, g, 5120, 64, 8, 0, 1, 4) #define ANA_L3_VLAN_CFG_VLAN_MSTP_PTR GENMASK(30, 24) #define ANA_L3_VLAN_CFG_VLAN_MSTP_PTR_SET(x)\ @@ -976,13 +1692,16 @@ enum sparx5_target { FIELD_GET(ANA_L3_VLAN_CFG_VLAN_MIRROR_ENA, x) /* ANA_L3:VLAN:VLAN_MASK_CFG */ -#define ANA_L3_VLAN_MASK_CFG(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 16, 0, 1, 4) +#define ANA_L3_VLAN_MASK_CFG(g) __REG(TARGET_ANA_L3,\ + 0, 1, 0, g, 5120, 64, 16, 0, 1, 4) /* ANA_L3:VLAN:VLAN_MASK_CFG1 */ -#define ANA_L3_VLAN_MASK_CFG1(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 20, 0, 1, 4) +#define ANA_L3_VLAN_MASK_CFG1(g) __REG(TARGET_ANA_L3,\ + 0, 1, 0, g, 5120, 64, 20, 0, 1, 4) /* ANA_L3:VLAN:VLAN_MASK_CFG2 */ -#define ANA_L3_VLAN_MASK_CFG2(g) __REG(TARGET_ANA_L3, 0, 1, 0, g, 5120, 64, 24, 0, 1, 4) +#define ANA_L3_VLAN_MASK_CFG2(g) __REG(TARGET_ANA_L3,\ + 0, 1, 0, g, 5120, 64, 24, 0, 1, 4) #define ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2 BIT(0) #define ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2_SET(x)\ @@ -991,274 +1710,364 @@ enum sparx5_target { FIELD_GET(ANA_L3_VLAN_MASK_CFG2_VLAN_PORT_MASK2, x) /* ASM:DEV_STATISTICS:RX_IN_BYTES_CNT */ -#define ASM_RX_IN_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 0, 0, 1, 4) +#define ASM_RX_IN_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 0, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SYMBOL_ERR_CNT */ -#define ASM_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 4, 0, 1, 4) +#define ASM_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 4, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_PAUSE_CNT */ -#define ASM_RX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 8, 0, 1, 4) +#define ASM_RX_PAUSE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 8, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_UNSUP_OPCODE_CNT */ -#define ASM_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 12, 0, 1, 4) +#define ASM_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 12, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_OK_BYTES_CNT */ -#define ASM_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 16, 0, 1, 4) +#define ASM_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 16, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_BAD_BYTES_CNT */ -#define ASM_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 20, 0, 1, 4) +#define ASM_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 20, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_UC_CNT */ -#define ASM_RX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 
24, 0, 1, 4) +#define ASM_RX_UC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 24, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_MC_CNT */ -#define ASM_RX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 28, 0, 1, 4) +#define ASM_RX_MC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 28, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_BC_CNT */ -#define ASM_RX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 32, 0, 1, 4) +#define ASM_RX_BC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 32, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_CRC_ERR_CNT */ -#define ASM_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 36, 0, 1, 4) +#define ASM_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 36, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_UNDERSIZE_CNT */ -#define ASM_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 40, 0, 1, 4) +#define ASM_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 40, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_FRAGMENTS_CNT */ -#define ASM_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 44, 0, 1, 4) +#define ASM_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 44, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_IN_RANGE_LEN_ERR_CNT */ -#define ASM_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 48, 0, 1, 4) +#define ASM_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 48, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_OUT_OF_RANGE_LEN_ERR_CNT */ -#define ASM_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 52, 0, 1, 4) +#define ASM_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 52, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_OVERSIZE_CNT */ -#define ASM_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 56, 0, 1, 4) +#define ASM_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 56, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_JABBERS_CNT */ -#define ASM_RX_JABBERS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 60, 0, 1, 4) +#define ASM_RX_JABBERS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 60, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE64_CNT */ -#define ASM_RX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 64, 0, 1, 4) +#define ASM_RX_SIZE64_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 64, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE65TO127_CNT */ -#define ASM_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 68, 0, 1, 4) +#define ASM_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 68, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE128TO255_CNT */ -#define ASM_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 72, 0, 1, 4) +#define ASM_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 72, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE256TO511_CNT */ -#define ASM_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 76, 0, 1, 4) +#define ASM_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 76, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE512TO1023_CNT */ -#define ASM_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 80, 0, 1, 4) +#define ASM_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 80, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE1024TO1518_CNT */ -#define ASM_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 84, 0, 1, 4) +#define ASM_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 84, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_SIZE1519TOMAX_CNT */ -#define ASM_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 88, 0, 1, 4) +#define ASM_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\ + 0, 
1, 0, g, 65, 512, 88, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_IPG_SHRINK_CNT */ -#define ASM_RX_IPG_SHRINK_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 92, 0, 1, 4) +#define ASM_RX_IPG_SHRINK_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 92, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_OUT_BYTES_CNT */ -#define ASM_TX_OUT_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 96, 0, 1, 4) +#define ASM_TX_OUT_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 96, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_PAUSE_CNT */ -#define ASM_TX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 100, 0, 1, 4) +#define ASM_TX_PAUSE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 100, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_OK_BYTES_CNT */ -#define ASM_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 104, 0, 1, 4) +#define ASM_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 104, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_UC_CNT */ -#define ASM_TX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 108, 0, 1, 4) +#define ASM_TX_UC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 108, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_MC_CNT */ -#define ASM_TX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 112, 0, 1, 4) +#define ASM_TX_MC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 112, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_BC_CNT */ -#define ASM_TX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 116, 0, 1, 4) +#define ASM_TX_BC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 116, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE64_CNT */ -#define ASM_TX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 120, 0, 1, 4) +#define ASM_TX_SIZE64_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 120, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE65TO127_CNT */ -#define ASM_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 124, 0, 1, 4) +#define ASM_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 124, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE128TO255_CNT */ -#define ASM_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 128, 0, 1, 4) +#define ASM_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 128, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE256TO511_CNT */ -#define ASM_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 132, 0, 1, 4) +#define ASM_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 132, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE512TO1023_CNT */ -#define ASM_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 136, 0, 1, 4) +#define ASM_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 136, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE1024TO1518_CNT */ -#define ASM_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 140, 0, 1, 4) +#define ASM_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 140, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_SIZE1519TOMAX_CNT */ -#define ASM_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 144, 0, 1, 4) +#define ASM_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 144, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_ALIGNMENT_LOST_CNT */ -#define ASM_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 148, 0, 1, 4) +#define ASM_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 148, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_TAGGED_FRMS_CNT */ -#define ASM_RX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 152, 0, 1, 4) +#define ASM_RX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 152, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_UNTAGGED_FRMS_CNT */ -#define 
ASM_RX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 156, 0, 1, 4) +#define ASM_RX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 156, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_TAGGED_FRMS_CNT */ -#define ASM_TX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 160, 0, 1, 4) +#define ASM_TX_TAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 160, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_UNTAGGED_FRMS_CNT */ -#define ASM_TX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 164, 0, 1, 4) +#define ASM_TX_UNTAGGED_FRMS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 164, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SYMBOL_ERR_CNT */ -#define ASM_PMAC_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 168, 0, 1, 4) +#define ASM_PMAC_RX_SYMBOL_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 168, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_PAUSE_CNT */ -#define ASM_PMAC_RX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 172, 0, 1, 4) +#define ASM_PMAC_RX_PAUSE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 172, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_UNSUP_OPCODE_CNT */ -#define ASM_PMAC_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 176, 0, 1, 4) +#define ASM_PMAC_RX_UNSUP_OPCODE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 176, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_OK_BYTES_CNT */ -#define ASM_PMAC_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 180, 0, 1, 4) +#define ASM_PMAC_RX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 180, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_BAD_BYTES_CNT */ -#define ASM_PMAC_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 184, 0, 1, 4) +#define ASM_PMAC_RX_BAD_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 184, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_UC_CNT */ -#define ASM_PMAC_RX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 188, 0, 1, 4) +#define ASM_PMAC_RX_UC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 188, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_MC_CNT */ -#define ASM_PMAC_RX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 192, 0, 1, 4) +#define ASM_PMAC_RX_MC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 192, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_BC_CNT */ -#define ASM_PMAC_RX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 196, 0, 1, 4) +#define ASM_PMAC_RX_BC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 196, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_CRC_ERR_CNT */ -#define ASM_PMAC_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 200, 0, 1, 4) +#define ASM_PMAC_RX_CRC_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 200, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_UNDERSIZE_CNT */ -#define ASM_PMAC_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 204, 0, 1, 4) +#define ASM_PMAC_RX_UNDERSIZE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 204, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_FRAGMENTS_CNT */ -#define ASM_PMAC_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 208, 0, 1, 4) +#define ASM_PMAC_RX_FRAGMENTS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 208, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_IN_RANGE_LEN_ERR_CNT */ -#define ASM_PMAC_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 212, 0, 1, 4) +#define ASM_PMAC_RX_IN_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 212, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT */ -#define ASM_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 216, 0, 1, 4) +#define 
ASM_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 216, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_OVERSIZE_CNT */ -#define ASM_PMAC_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 220, 0, 1, 4) +#define ASM_PMAC_RX_OVERSIZE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 220, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_JABBERS_CNT */ -#define ASM_PMAC_RX_JABBERS_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 224, 0, 1, 4) +#define ASM_PMAC_RX_JABBERS_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 224, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE64_CNT */ -#define ASM_PMAC_RX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 228, 0, 1, 4) +#define ASM_PMAC_RX_SIZE64_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 228, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE65TO127_CNT */ -#define ASM_PMAC_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 232, 0, 1, 4) +#define ASM_PMAC_RX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 232, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE128TO255_CNT */ -#define ASM_PMAC_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 236, 0, 1, 4) +#define ASM_PMAC_RX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 236, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE256TO511_CNT */ -#define ASM_PMAC_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 240, 0, 1, 4) +#define ASM_PMAC_RX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 240, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE512TO1023_CNT */ -#define ASM_PMAC_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 244, 0, 1, 4) +#define ASM_PMAC_RX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 244, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE1024TO1518_CNT */ -#define ASM_PMAC_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 248, 0, 1, 4) +#define ASM_PMAC_RX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 248, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_SIZE1519TOMAX_CNT */ -#define ASM_PMAC_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 252, 0, 1, 4) +#define ASM_PMAC_RX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 252, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_PAUSE_CNT */ -#define ASM_PMAC_TX_PAUSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 256, 0, 1, 4) +#define ASM_PMAC_TX_PAUSE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 256, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_OK_BYTES_CNT */ -#define ASM_PMAC_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 260, 0, 1, 4) +#define ASM_PMAC_TX_OK_BYTES_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 260, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_UC_CNT */ -#define ASM_PMAC_TX_UC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 264, 0, 1, 4) +#define ASM_PMAC_TX_UC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 264, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_MC_CNT */ -#define ASM_PMAC_TX_MC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 268, 0, 1, 4) +#define ASM_PMAC_TX_MC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 268, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_BC_CNT */ -#define ASM_PMAC_TX_BC_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 272, 0, 1, 4) +#define ASM_PMAC_TX_BC_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 272, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE64_CNT */ -#define ASM_PMAC_TX_SIZE64_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 276, 0, 1, 4) +#define ASM_PMAC_TX_SIZE64_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 276, 0, 1, 4) /* 
ASM:DEV_STATISTICS:PMAC_TX_SIZE65TO127_CNT */ -#define ASM_PMAC_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 280, 0, 1, 4) +#define ASM_PMAC_TX_SIZE65TO127_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 280, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE128TO255_CNT */ -#define ASM_PMAC_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 284, 0, 1, 4) +#define ASM_PMAC_TX_SIZE128TO255_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 284, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE256TO511_CNT */ -#define ASM_PMAC_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 288, 0, 1, 4) +#define ASM_PMAC_TX_SIZE256TO511_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 288, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE512TO1023_CNT */ -#define ASM_PMAC_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 292, 0, 1, 4) +#define ASM_PMAC_TX_SIZE512TO1023_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 292, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE1024TO1518_CNT */ -#define ASM_PMAC_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 296, 0, 1, 4) +#define ASM_PMAC_TX_SIZE1024TO1518_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 296, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_TX_SIZE1519TOMAX_CNT */ -#define ASM_PMAC_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 300, 0, 1, 4) +#define ASM_PMAC_TX_SIZE1519TOMAX_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 300, 0, 1, 4) /* ASM:DEV_STATISTICS:PMAC_RX_ALIGNMENT_LOST_CNT */ -#define ASM_PMAC_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 304, 0, 1, 4) +#define ASM_PMAC_RX_ALIGNMENT_LOST_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 304, 0, 1, 4) /* ASM:DEV_STATISTICS:MM_RX_ASSEMBLY_ERR_CNT */ -#define ASM_MM_RX_ASSEMBLY_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 308, 0, 1, 4) +#define ASM_MM_RX_ASSEMBLY_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 308, 0, 1, 4) /* ASM:DEV_STATISTICS:MM_RX_SMD_ERR_CNT */ -#define ASM_MM_RX_SMD_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 312, 0, 1, 4) +#define ASM_MM_RX_SMD_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 312, 0, 1, 4) /* ASM:DEV_STATISTICS:MM_RX_ASSEMBLY_OK_CNT */ -#define ASM_MM_RX_ASSEMBLY_OK_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 316, 0, 1, 4) +#define ASM_MM_RX_ASSEMBLY_OK_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 316, 0, 1, 4) /* ASM:DEV_STATISTICS:MM_RX_MERGE_FRAG_CNT */ -#define ASM_MM_RX_MERGE_FRAG_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 320, 0, 1, 4) +#define ASM_MM_RX_MERGE_FRAG_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 320, 0, 1, 4) /* ASM:DEV_STATISTICS:MM_TX_PFRAGMENT_CNT */ -#define ASM_MM_TX_PFRAGMENT_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 324, 0, 1, 4) +#define ASM_MM_TX_PFRAGMENT_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 324, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_MULTI_COLL_CNT */ -#define ASM_TX_MULTI_COLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 328, 0, 1, 4) +#define ASM_TX_MULTI_COLL_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 328, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_LATE_COLL_CNT */ -#define ASM_TX_LATE_COLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 332, 0, 1, 4) +#define ASM_TX_LATE_COLL_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 332, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_XCOLL_CNT */ -#define ASM_TX_XCOLL_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 336, 0, 1, 4) +#define ASM_TX_XCOLL_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 336, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_DEFER_CNT */ -#define ASM_TX_DEFER_CNT(g) 
__REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 340, 0, 1, 4) +#define ASM_TX_DEFER_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 340, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_XDEFER_CNT */ -#define ASM_TX_XDEFER_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 344, 0, 1, 4) +#define ASM_TX_XDEFER_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 344, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_BACKOFF1_CNT */ -#define ASM_TX_BACKOFF1_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 348, 0, 1, 4) +#define ASM_TX_BACKOFF1_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 348, 0, 1, 4) /* ASM:DEV_STATISTICS:TX_CSENSE_CNT */ -#define ASM_TX_CSENSE_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 352, 0, 1, 4) +#define ASM_TX_CSENSE_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 352, 0, 1, 4) /* ASM:DEV_STATISTICS:RX_IN_BYTES_MSB_CNT */ -#define ASM_RX_IN_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 356, 0, 1, 4) +#define ASM_RX_IN_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 356, 0, 1, 4) #define ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT_SET(x)\ @@ -1267,7 +2076,8 @@ enum sparx5_target { FIELD_GET(ASM_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:RX_OK_BYTES_MSB_CNT */ -#define ASM_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 360, 0, 1, 4) +#define ASM_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 360, 0, 1, 4) #define ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT_SET(x)\ @@ -1276,7 +2086,8 @@ enum sparx5_target { FIELD_GET(ASM_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:PMAC_RX_OK_BYTES_MSB_CNT */ -#define ASM_PMAC_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 364, 0, 1, 4) +#define ASM_PMAC_RX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 364, 0, 1, 4) #define ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT_SET(x)\ @@ -1285,7 +2096,8 @@ enum sparx5_target { FIELD_GET(ASM_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:RX_BAD_BYTES_MSB_CNT */ -#define ASM_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 368, 0, 1, 4) +#define ASM_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 368, 0, 1, 4) #define ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT_SET(x)\ @@ -1294,7 +2106,8 @@ enum sparx5_target { FIELD_GET(ASM_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:PMAC_RX_BAD_BYTES_MSB_CNT */ -#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 372, 0, 1, 4) +#define ASM_PMAC_RX_BAD_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 372, 0, 1, 4) #define ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT_SET(x)\ @@ -1303,7 +2116,8 @@ enum sparx5_target { FIELD_GET(ASM_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:TX_OUT_BYTES_MSB_CNT */ -#define ASM_TX_OUT_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 376, 0, 1, 4) +#define ASM_TX_OUT_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 376, 0, 1, 4) #define ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT_SET(x)\ @@ -1312,7 +2126,8 @@ enum sparx5_target { 
FIELD_GET(ASM_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:TX_OK_BYTES_MSB_CNT */ -#define ASM_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 380, 0, 1, 4) +#define ASM_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 380, 0, 1, 4) #define ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT_SET(x)\ @@ -1321,7 +2136,8 @@ enum sparx5_target { FIELD_GET(ASM_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:PMAC_TX_OK_BYTES_MSB_CNT */ -#define ASM_PMAC_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 384, 0, 1, 4) +#define ASM_PMAC_TX_OK_BYTES_MSB_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 384, 0, 1, 4) #define ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT GENMASK(3, 0) #define ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT_SET(x)\ @@ -1330,10 +2146,12 @@ enum sparx5_target { FIELD_GET(ASM_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT, x) /* ASM:DEV_STATISTICS:RX_SYNC_LOST_ERR_CNT */ -#define ASM_RX_SYNC_LOST_ERR_CNT(g) __REG(TARGET_ASM, 0, 1, 0, g, 65, 512, 388, 0, 1, 4) +#define ASM_RX_SYNC_LOST_ERR_CNT(g) __REG(TARGET_ASM,\ + 0, 1, 0, g, 65, 512, 388, 0, 1, 4) /* ASM:CFG:STAT_CFG */ -#define ASM_STAT_CFG __REG(TARGET_ASM, 0, 1, 33280, 0, 1, 1088, 0, 0, 1, 4) +#define ASM_STAT_CFG __REG(TARGET_ASM,\ + 0, 1, 33280, 0, 1, 1088, 0, 0, 1, 4) #define ASM_STAT_CFG_STAT_CNT_CLR_SHOT BIT(0) #define ASM_STAT_CFG_STAT_CNT_CLR_SHOT_SET(x)\ @@ -1342,7 +2160,8 @@ enum sparx5_target { FIELD_GET(ASM_STAT_CFG_STAT_CNT_CLR_SHOT, x) /* ASM:CFG:PORT_CFG */ -#define ASM_PORT_CFG(r) __REG(TARGET_ASM, 0, 1, 33280, 0, 1, 1088, 540, r, 67, 4) +#define ASM_PORT_CFG(r) __REG(TARGET_ASM,\ + 0, 1, 33280, 0, 1, 1088, 540, r, 67, 4) #define ASM_PORT_CFG_CSC_STAT_DIS BIT(12) #define ASM_PORT_CFG_CSC_STAT_DIS_SET(x)\ @@ -1411,7 +2230,8 @@ enum sparx5_target { FIELD_GET(ASM_PORT_CFG_PFRM_FLUSH, x) /* ASM:RAM_CTRL:RAM_INIT */ -#define ASM_RAM_INIT __REG(TARGET_ASM, 0, 1, 34832, 0, 1, 4, 0, 0, 1, 4) +#define ASM_RAM_INIT __REG(TARGET_ASM,\ + 0, 1, 34832, 0, 1, 4, 0, 0, 1, 4) #define ASM_RAM_INIT_RAM_INIT BIT(1) #define ASM_RAM_INIT_RAM_INIT_SET(x)\ @@ -1426,7 +2246,8 @@ enum sparx5_target { FIELD_GET(ASM_RAM_INIT_RAM_CFG_HOOK, x) /* CLKGEN:LCPLL1:LCPLL1_CORE_CLK_CFG */ -#define CLKGEN_LCPLL1_CORE_CLK_CFG __REG(TARGET_CLKGEN, 0, 1, 12, 0, 1, 36, 0, 0, 1, 4) +#define CLKGEN_LCPLL1_CORE_CLK_CFG __REG(TARGET_CLKGEN,\ + 0, 1, 12, 0, 1, 36, 0, 0, 1, 4) #define CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV GENMASK(7, 0) #define CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV_SET(x)\ @@ -1465,7 +2286,8 @@ enum sparx5_target { FIELD_GET(CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_ENA, x) /* CPU:CPU_REGS:PROC_CTRL */ -#define CPU_PROC_CTRL __REG(TARGET_CPU, 0, 1, 0, 0, 1, 204, 176, 0, 1, 4) +#define CPU_PROC_CTRL __REG(TARGET_CPU,\ + 0, 1, 0, 0, 1, 204, 176, 0, 1, 4) #define CPU_PROC_CTRL_AARCH64_MODE_ENA BIT(12) #define CPU_PROC_CTRL_AARCH64_MODE_ENA_SET(x)\ @@ -1546,7 +2368,8 @@ enum sparx5_target { FIELD_GET(CPU_PROC_CTRL_ACP_DISABLE, x) /* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */ -#define DEV10G_MAC_ENA_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 0, 0, 1, 4) +#define DEV10G_MAC_ENA_CFG(t) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 0, 0, 1, 4) #define DEV10G_MAC_ENA_CFG_RX_ENA BIT(4) #define DEV10G_MAC_ENA_CFG_RX_ENA_SET(x)\ @@ -1561,7 +2384,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_ENA_CFG_TX_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */ -#define DEV10G_MAC_MAXLEN_CFG(t) 
__REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 8, 0, 1, 4) +#define DEV10G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 8, 0, 1, 4) #define DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16) #define DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\ @@ -1576,7 +2400,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_MAXLEN_CFG_MAX_LEN, x) /* DEV10G:MAC_CFG_STATUS:MAC_NUM_TAGS_CFG */ -#define DEV10G_MAC_NUM_TAGS_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 12, 0, 1, 4) +#define DEV10G_MAC_NUM_TAGS_CFG(t) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 12, 0, 1, 4) #define DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS GENMASK(1, 0) #define DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS_SET(x)\ @@ -1585,7 +2410,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS, x) /* DEV10G:MAC_CFG_STATUS:MAC_TAGS_CFG */ -#define DEV10G_MAC_TAGS_CFG(t, r) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 16, r, 3, 4) +#define DEV10G_MAC_TAGS_CFG(t, r) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 16, r, 3, 4) #define DEV10G_MAC_TAGS_CFG_TAG_ID GENMASK(31, 16) #define DEV10G_MAC_TAGS_CFG_TAG_ID_SET(x)\ @@ -1600,7 +2426,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_TAGS_CFG_TAG_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */ -#define DEV10G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 28, 0, 1, 4) +#define DEV10G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 28, 0, 1, 4) #define DEV10G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24) #define DEV10G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\ @@ -1645,7 +2472,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_TX_MONITOR_STICKY */ -#define DEV10G_MAC_TX_MONITOR_STICKY(t) __REG(TARGET_DEV10G, t, 12, 0, 0, 1, 60, 48, 0, 1, 4) +#define DEV10G_MAC_TX_MONITOR_STICKY(t) __REG(TARGET_DEV10G,\ + t, 12, 0, 0, 1, 60, 48, 0, 1, 4) #define DEV10G_MAC_TX_MONITOR_STICKY_LOCAL_ERR_STATE_STICKY BIT(4) #define DEV10G_MAC_TX_MONITOR_STICKY_LOCAL_ERR_STATE_STICKY_SET(x)\ @@ -1678,7 +2506,8 @@ enum sparx5_target { FIELD_GET(DEV10G_MAC_TX_MONITOR_STICKY_DIS_STATE_STICKY, x) /* DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */ -#define DEV10G_DEV_RST_CTRL(t) __REG(TARGET_DEV10G, t, 12, 436, 0, 1, 52, 0, 0, 1, 4) +#define DEV10G_DEV_RST_CTRL(t) __REG(TARGET_DEV10G,\ + t, 12, 436, 0, 1, 52, 0, 0, 1, 4) #define DEV10G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28) #define DEV10G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\ @@ -1735,7 +2564,8 @@ enum sparx5_target { FIELD_GET(DEV10G_DEV_RST_CTRL_MAC_RX_RST, x) /* DEV10G:PCS25G_CFG_STATUS:PCS25G_CFG */ -#define DEV10G_PCS25G_CFG(t) __REG(TARGET_DEV10G, t, 12, 488, 0, 1, 32, 0, 0, 1, 4) +#define DEV10G_PCS25G_CFG(t) __REG(TARGET_DEV10G,\ + t, 12, 488, 0, 1, 32, 0, 0, 1, 4) #define DEV10G_PCS25G_CFG_PCS25G_ENA BIT(0) #define DEV10G_PCS25G_CFG_PCS25G_ENA_SET(x)\ @@ -1744,7 +2574,8 @@ enum sparx5_target { FIELD_GET(DEV10G_PCS25G_CFG_PCS25G_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */ -#define DEV25G_MAC_ENA_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 0, 0, 1, 4) +#define DEV25G_MAC_ENA_CFG(t) __REG(TARGET_DEV25G,\ + t, 8, 0, 0, 1, 60, 0, 0, 1, 4) #define DEV25G_MAC_ENA_CFG_RX_ENA BIT(4) #define DEV25G_MAC_ENA_CFG_RX_ENA_SET(x)\ @@ -1759,7 +2590,8 @@ enum sparx5_target { FIELD_GET(DEV25G_MAC_ENA_CFG_TX_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */ -#define DEV25G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 8, 0, 1, 4) +#define DEV25G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV25G,\ + t, 8, 0, 0, 1, 60, 8, 0, 1, 4) #define DEV25G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16) #define 
DEV25G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\ @@ -1774,7 +2606,8 @@ enum sparx5_target { FIELD_GET(DEV25G_MAC_MAXLEN_CFG_MAX_LEN, x) /* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */ -#define DEV25G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV25G, t, 8, 0, 0, 1, 60, 28, 0, 1, 4) +#define DEV25G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV25G,\ + t, 8, 0, 0, 1, 60, 28, 0, 1, 4) #define DEV25G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24) #define DEV25G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\ @@ -1819,7 +2652,8 @@ enum sparx5_target { FIELD_GET(DEV25G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x) /* DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */ -#define DEV25G_DEV_RST_CTRL(t) __REG(TARGET_DEV25G, t, 8, 436, 0, 1, 52, 0, 0, 1, 4) +#define DEV25G_DEV_RST_CTRL(t) __REG(TARGET_DEV25G,\ + t, 8, 436, 0, 1, 52, 0, 0, 1, 4) #define DEV25G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28) #define DEV25G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\ @@ -1876,7 +2710,8 @@ enum sparx5_target { FIELD_GET(DEV25G_DEV_RST_CTRL_MAC_RX_RST, x) /* DEV10G:PCS25G_CFG_STATUS:PCS25G_CFG */ -#define DEV25G_PCS25G_CFG(t) __REG(TARGET_DEV25G, t, 8, 488, 0, 1, 32, 0, 0, 1, 4) +#define DEV25G_PCS25G_CFG(t) __REG(TARGET_DEV25G,\ + t, 8, 488, 0, 1, 32, 0, 0, 1, 4) #define DEV25G_PCS25G_CFG_PCS25G_ENA BIT(0) #define DEV25G_PCS25G_CFG_PCS25G_ENA_SET(x)\ @@ -1885,7 +2720,8 @@ enum sparx5_target { FIELD_GET(DEV25G_PCS25G_CFG_PCS25G_ENA, x) /* DEV10G:PCS25G_CFG_STATUS:PCS25G_SD_CFG */ -#define DEV25G_PCS25G_SD_CFG(t) __REG(TARGET_DEV25G, t, 8, 488, 0, 1, 32, 4, 0, 1, 4) +#define DEV25G_PCS25G_SD_CFG(t) __REG(TARGET_DEV25G,\ + t, 8, 488, 0, 1, 32, 4, 0, 1, 4) #define DEV25G_PCS25G_SD_CFG_SD_SEL BIT(8) #define DEV25G_PCS25G_SD_CFG_SD_SEL_SET(x)\ @@ -1906,7 +2742,8 @@ enum sparx5_target { FIELD_GET(DEV25G_PCS25G_SD_CFG_SD_ENA, x) /* DEV1G:DEV_CFG_STATUS:DEV_RST_CTRL */ -#define DEV2G5_DEV_RST_CTRL(t) __REG(TARGET_DEV2G5, t, 65, 0, 0, 1, 36, 0, 0, 1, 4) +#define DEV2G5_DEV_RST_CTRL(t) __REG(TARGET_DEV2G5,\ + t, 65, 0, 0, 1, 36, 0, 0, 1, 4) #define DEV2G5_DEV_RST_CTRL_USXGMII_OSET_FILTER_DIS BIT(23) #define DEV2G5_DEV_RST_CTRL_USXGMII_OSET_FILTER_DIS_SET(x)\ @@ -1957,7 +2794,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_DEV_RST_CTRL_MAC_RX_RST, x) /* DEV1G:MAC_CFG_STATUS:MAC_ENA_CFG */ -#define DEV2G5_MAC_ENA_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 0, 0, 1, 4) +#define DEV2G5_MAC_ENA_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 0, 0, 1, 4) #define DEV2G5_MAC_ENA_CFG_RX_ENA BIT(4) #define DEV2G5_MAC_ENA_CFG_RX_ENA_SET(x)\ @@ -1972,7 +2810,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_ENA_CFG_TX_ENA, x) /* DEV1G:MAC_CFG_STATUS:MAC_MODE_CFG */ -#define DEV2G5_MAC_MODE_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 4, 0, 1, 4) +#define DEV2G5_MAC_MODE_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 4, 0, 1, 4) #define DEV2G5_MAC_MODE_CFG_FC_WORD_SYNC_ENA BIT(8) #define DEV2G5_MAC_MODE_CFG_FC_WORD_SYNC_ENA_SET(x)\ @@ -1993,7 +2832,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_MODE_CFG_FDX_ENA, x) /* DEV1G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */ -#define DEV2G5_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 8, 0, 1, 4) +#define DEV2G5_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 8, 0, 1, 4) #define DEV2G5_MAC_MAXLEN_CFG_MAX_LEN GENMASK(15, 0) #define DEV2G5_MAC_MAXLEN_CFG_MAX_LEN_SET(x)\ @@ -2002,7 +2842,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_MAXLEN_CFG_MAX_LEN, x) /* DEV1G:MAC_CFG_STATUS:MAC_TAGS_CFG */ -#define DEV2G5_MAC_TAGS_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 12, 0, 1, 4) +#define DEV2G5_MAC_TAGS_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 
0, 1, 36, 12, 0, 1, 4) #define DEV2G5_MAC_TAGS_CFG_TAG_ID GENMASK(31, 16) #define DEV2G5_MAC_TAGS_CFG_TAG_ID_SET(x)\ @@ -2029,7 +2870,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_TAGS_CFG_VLAN_AWR_ENA, x) /* DEV1G:MAC_CFG_STATUS:MAC_TAGS_CFG2 */ -#define DEV2G5_MAC_TAGS_CFG2(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 16, 0, 1, 4) +#define DEV2G5_MAC_TAGS_CFG2(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 16, 0, 1, 4) #define DEV2G5_MAC_TAGS_CFG2_TAG_ID3 GENMASK(31, 16) #define DEV2G5_MAC_TAGS_CFG2_TAG_ID3_SET(x)\ @@ -2044,7 +2886,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_TAGS_CFG2_TAG_ID2, x) /* DEV1G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */ -#define DEV2G5_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 20, 0, 1, 4) +#define DEV2G5_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 20, 0, 1, 4) #define DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA BIT(0) #define DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA_SET(x)\ @@ -2053,7 +2896,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_ADV_CHK_CFG_LEN_DROP_ENA, x) /* DEV1G:MAC_CFG_STATUS:MAC_IFG_CFG */ -#define DEV2G5_MAC_IFG_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 24, 0, 1, 4) +#define DEV2G5_MAC_IFG_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 24, 0, 1, 4) #define DEV2G5_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK BIT(17) #define DEV2G5_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK_SET(x)\ @@ -2080,7 +2924,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_IFG_CFG_RX_IFG1, x) /* DEV1G:MAC_CFG_STATUS:MAC_HDX_CFG */ -#define DEV2G5_MAC_HDX_CFG(t) __REG(TARGET_DEV2G5, t, 65, 52, 0, 1, 36, 28, 0, 1, 4) +#define DEV2G5_MAC_HDX_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 52, 0, 1, 36, 28, 0, 1, 4) #define DEV2G5_MAC_HDX_CFG_BYPASS_COL_SYNC BIT(26) #define DEV2G5_MAC_HDX_CFG_BYPASS_COL_SYNC_SET(x)\ @@ -2113,7 +2958,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_MAC_HDX_CFG_LATE_COL_POS, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_CFG */ -#define DEV2G5_PCS1G_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 0, 0, 1, 4) +#define DEV2G5_PCS1G_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 0, 0, 1, 4) #define DEV2G5_PCS1G_CFG_LINK_STATUS_TYPE BIT(4) #define DEV2G5_PCS1G_CFG_LINK_STATUS_TYPE_SET(x)\ @@ -2134,7 +2980,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_CFG_PCS_ENA, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_MODE_CFG */ -#define DEV2G5_PCS1G_MODE_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 4, 0, 1, 4) +#define DEV2G5_PCS1G_MODE_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 4, 0, 1, 4) #define DEV2G5_PCS1G_MODE_CFG_UNIDIR_MODE_ENA BIT(4) #define DEV2G5_PCS1G_MODE_CFG_UNIDIR_MODE_ENA_SET(x)\ @@ -2155,7 +3002,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_MODE_CFG_SGMII_MODE_ENA, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_SD_CFG */ -#define DEV2G5_PCS1G_SD_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 8, 0, 1, 4) +#define DEV2G5_PCS1G_SD_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 8, 0, 1, 4) #define DEV2G5_PCS1G_SD_CFG_SD_SEL BIT(8) #define DEV2G5_PCS1G_SD_CFG_SD_SEL_SET(x)\ @@ -2176,7 +3024,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_SD_CFG_SD_ENA, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_ANEG_CFG */ -#define DEV2G5_PCS1G_ANEG_CFG(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 12, 0, 1, 4) +#define DEV2G5_PCS1G_ANEG_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 12, 0, 1, 4) #define DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY GENMASK(31, 16) #define DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY_SET(x)\ @@ -2203,7 +3052,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_ANEG_CFG_ANEG_ENA, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_LB_CFG */ -#define DEV2G5_PCS1G_LB_CFG(t) 
__REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 20, 0, 1, 4) +#define DEV2G5_PCS1G_LB_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 20, 0, 1, 4) #define DEV2G5_PCS1G_LB_CFG_RA_ENA BIT(4) #define DEV2G5_PCS1G_LB_CFG_RA_ENA_SET(x)\ @@ -2224,7 +3074,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_LB_CFG_TBI_HOST_LB_ENA, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_ANEG_STATUS */ -#define DEV2G5_PCS1G_ANEG_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 32, 0, 1, 4) +#define DEV2G5_PCS1G_ANEG_STATUS(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 32, 0, 1, 4) #define DEV2G5_PCS1G_ANEG_STATUS_LP_ADV_ABILITY GENMASK(31, 16) #define DEV2G5_PCS1G_ANEG_STATUS_LP_ADV_ABILITY_SET(x)\ @@ -2251,7 +3102,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_ANEG_STATUS_ANEG_COMPLETE, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_LINK_STATUS */ -#define DEV2G5_PCS1G_LINK_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 40, 0, 1, 4) +#define DEV2G5_PCS1G_LINK_STATUS(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 40, 0, 1, 4) #define DEV2G5_PCS1G_LINK_STATUS_DELAY_VAR GENMASK(15, 12) #define DEV2G5_PCS1G_LINK_STATUS_DELAY_VAR_SET(x)\ @@ -2278,7 +3130,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_LINK_STATUS_SYNC_STATUS, x) /* DEV1G:PCS1G_CFG_STATUS:PCS1G_STICKY */ -#define DEV2G5_PCS1G_STICKY(t) __REG(TARGET_DEV2G5, t, 65, 88, 0, 1, 68, 48, 0, 1, 4) +#define DEV2G5_PCS1G_STICKY(t) __REG(TARGET_DEV2G5,\ + t, 65, 88, 0, 1, 68, 48, 0, 1, 4) #define DEV2G5_PCS1G_STICKY_LINK_DOWN_STICKY BIT(4) #define DEV2G5_PCS1G_STICKY_LINK_DOWN_STICKY_SET(x)\ @@ -2293,7 +3146,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS1G_STICKY_OUT_OF_SYNC_STICKY, x) /* DEV1G:PCS_FX100_CONFIGURATION:PCS_FX100_CFG */ -#define DEV2G5_PCS_FX100_CFG(t) __REG(TARGET_DEV2G5, t, 65, 164, 0, 1, 4, 0, 0, 1, 4) +#define DEV2G5_PCS_FX100_CFG(t) __REG(TARGET_DEV2G5,\ + t, 65, 164, 0, 1, 4, 0, 0, 1, 4) #define DEV2G5_PCS_FX100_CFG_SD_SEL BIT(26) #define DEV2G5_PCS_FX100_CFG_SD_SEL_SET(x)\ @@ -2374,7 +3228,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS_FX100_CFG_PCS_ENA, x) /* DEV1G:PCS_FX100_STATUS:PCS_FX100_STATUS */ -#define DEV2G5_PCS_FX100_STATUS(t) __REG(TARGET_DEV2G5, t, 65, 168, 0, 1, 4, 0, 0, 1, 4) +#define DEV2G5_PCS_FX100_STATUS(t) __REG(TARGET_DEV2G5,\ + t, 65, 168, 0, 1, 4, 0, 0, 1, 4) #define DEV2G5_PCS_FX100_STATUS_EDGE_POS_PTP GENMASK(11, 8) #define DEV2G5_PCS_FX100_STATUS_EDGE_POS_PTP_SET(x)\ @@ -2425,7 +3280,8 @@ enum sparx5_target { FIELD_GET(DEV2G5_PCS_FX100_STATUS_SYNC_STATUS, x) /* DEV10G:MAC_CFG_STATUS:MAC_ENA_CFG */ -#define DEV5G_MAC_ENA_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 0, 0, 1, 4) +#define DEV5G_MAC_ENA_CFG(t) __REG(TARGET_DEV5G,\ + t, 13, 0, 0, 1, 60, 0, 0, 1, 4) #define DEV5G_MAC_ENA_CFG_RX_ENA BIT(4) #define DEV5G_MAC_ENA_CFG_RX_ENA_SET(x)\ @@ -2440,7 +3296,8 @@ enum sparx5_target { FIELD_GET(DEV5G_MAC_ENA_CFG_TX_ENA, x) /* DEV10G:MAC_CFG_STATUS:MAC_MAXLEN_CFG */ -#define DEV5G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 8, 0, 1, 4) +#define DEV5G_MAC_MAXLEN_CFG(t) __REG(TARGET_DEV5G,\ + t, 13, 0, 0, 1, 60, 8, 0, 1, 4) #define DEV5G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16) #define DEV5G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(x)\ @@ -2455,7 +3312,8 @@ enum sparx5_target { FIELD_GET(DEV5G_MAC_MAXLEN_CFG_MAX_LEN, x) /* DEV10G:MAC_CFG_STATUS:MAC_ADV_CHK_CFG */ -#define DEV5G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV5G, t, 13, 0, 0, 1, 60, 28, 0, 1, 4) +#define DEV5G_MAC_ADV_CHK_CFG(t) __REG(TARGET_DEV5G,\ + t, 13, 0, 0, 1, 60, 28, 0, 1, 4) #define DEV5G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24) #define 
DEV5G_MAC_ADV_CHK_CFG_EXT_EOP_CHK_ENA_SET(x)\ @@ -2500,142 +3358,188 @@ enum sparx5_target { FIELD_GET(DEV5G_MAC_ADV_CHK_CFG_INR_ERR_ENA, x) /* DEV10G:DEV_STATISTICS_32BIT:RX_SYMBOL_ERR_CNT */ -#define DEV5G_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 0, 0, 1, 4) +#define DEV5G_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 0, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_PAUSE_CNT */ -#define DEV5G_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 4, 0, 1, 4) +#define DEV5G_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 4, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_UNSUP_OPCODE_CNT */ -#define DEV5G_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 8, 0, 1, 4) +#define DEV5G_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 8, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_UC_CNT */ -#define DEV5G_RX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 12, 0, 1, 4) +#define DEV5G_RX_UC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 12, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_MC_CNT */ -#define DEV5G_RX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 16, 0, 1, 4) +#define DEV5G_RX_MC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 16, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_BC_CNT */ -#define DEV5G_RX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 20, 0, 1, 4) +#define DEV5G_RX_BC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 20, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_CRC_ERR_CNT */ -#define DEV5G_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 24, 0, 1, 4) +#define DEV5G_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 24, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_UNDERSIZE_CNT */ -#define DEV5G_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 28, 0, 1, 4) +#define DEV5G_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 28, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_FRAGMENTS_CNT */ -#define DEV5G_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 32, 0, 1, 4) +#define DEV5G_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 32, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_IN_RANGE_LEN_ERR_CNT */ -#define DEV5G_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 36, 0, 1, 4) +#define DEV5G_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 36, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_OUT_OF_RANGE_LEN_ERR_CNT */ -#define DEV5G_RX_OUT_OF_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 40, 0, 1, 4) +#define DEV5G_RX_OUT_OF_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 40, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_OVERSIZE_CNT */ -#define DEV5G_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 44, 0, 1, 4) +#define DEV5G_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 44, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_JABBERS_CNT */ -#define DEV5G_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 48, 0, 1, 4) +#define DEV5G_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 48, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE64_CNT */ -#define DEV5G_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 52, 0, 1, 4) +#define DEV5G_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 52, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE65TO127_CNT */ -#define DEV5G_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 56, 0, 1, 4) +#define 
DEV5G_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 56, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE128TO255_CNT */ -#define DEV5G_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 60, 0, 1, 4) +#define DEV5G_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 60, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE256TO511_CNT */ -#define DEV5G_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 64, 0, 1, 4) +#define DEV5G_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 64, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE512TO1023_CNT */ -#define DEV5G_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 68, 0, 1, 4) +#define DEV5G_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 68, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE1024TO1518_CNT */ -#define DEV5G_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 72, 0, 1, 4) +#define DEV5G_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 72, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_SIZE1519TOMAX_CNT */ -#define DEV5G_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 76, 0, 1, 4) +#define DEV5G_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 76, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_IPG_SHRINK_CNT */ -#define DEV5G_RX_IPG_SHRINK_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 80, 0, 1, 4) +#define DEV5G_RX_IPG_SHRINK_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 80, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_PAUSE_CNT */ -#define DEV5G_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 84, 0, 1, 4) +#define DEV5G_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 84, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_UC_CNT */ -#define DEV5G_TX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 88, 0, 1, 4) +#define DEV5G_TX_UC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 88, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_MC_CNT */ -#define DEV5G_TX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 92, 0, 1, 4) +#define DEV5G_TX_MC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 92, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_BC_CNT */ -#define DEV5G_TX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 96, 0, 1, 4) +#define DEV5G_TX_BC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 96, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE64_CNT */ -#define DEV5G_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 100, 0, 1, 4) +#define DEV5G_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 100, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE65TO127_CNT */ -#define DEV5G_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 104, 0, 1, 4) +#define DEV5G_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 104, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE128TO255_CNT */ -#define DEV5G_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 108, 0, 1, 4) +#define DEV5G_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 108, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE256TO511_CNT */ -#define DEV5G_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 112, 0, 1, 4) +#define DEV5G_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 112, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE512TO1023_CNT */ -#define DEV5G_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 116, 0, 1, 4) +#define 
DEV5G_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 116, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE1024TO1518_CNT */ -#define DEV5G_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 120, 0, 1, 4) +#define DEV5G_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 120, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_SIZE1519TOMAX_CNT */ -#define DEV5G_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 124, 0, 1, 4) +#define DEV5G_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 124, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_ALIGNMENT_LOST_CNT */ -#define DEV5G_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 128, 0, 1, 4) +#define DEV5G_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 128, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_TAGGED_FRMS_CNT */ -#define DEV5G_RX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 132, 0, 1, 4) +#define DEV5G_RX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 132, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_UNTAGGED_FRMS_CNT */ -#define DEV5G_RX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 136, 0, 1, 4) +#define DEV5G_RX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 136, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_TAGGED_FRMS_CNT */ -#define DEV5G_TX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 140, 0, 1, 4) +#define DEV5G_TX_TAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 140, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:TX_UNTAGGED_FRMS_CNT */ -#define DEV5G_TX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 144, 0, 1, 4) +#define DEV5G_TX_UNTAGGED_FRMS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 144, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SYMBOL_ERR_CNT */ -#define DEV5G_PMAC_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 148, 0, 1, 4) +#define DEV5G_PMAC_RX_SYMBOL_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 148, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_PAUSE_CNT */ -#define DEV5G_PMAC_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 152, 0, 1, 4) +#define DEV5G_PMAC_RX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 152, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UNSUP_OPCODE_CNT */ -#define DEV5G_PMAC_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 156, 0, 1, 4) +#define DEV5G_PMAC_RX_UNSUP_OPCODE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 156, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UC_CNT */ -#define DEV5G_PMAC_RX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 160, 0, 1, 4) +#define DEV5G_PMAC_RX_UC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 160, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_MC_CNT */ -#define DEV5G_PMAC_RX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 164, 0, 1, 4) +#define DEV5G_PMAC_RX_MC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 164, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_BC_CNT */ -#define DEV5G_PMAC_RX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 168, 0, 1, 4) +#define DEV5G_PMAC_RX_BC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 168, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_CRC_ERR_CNT */ -#define DEV5G_PMAC_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 172, 0, 1, 4) +#define DEV5G_PMAC_RX_CRC_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 172, 0, 1, 4) /* 
DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_UNDERSIZE_CNT */ -#define DEV5G_PMAC_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 176, 0, 1, 4) +#define DEV5G_PMAC_RX_UNDERSIZE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 176, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_FRAGMENTS_CNT */ -#define DEV5G_PMAC_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 180, 0, 1, 4) +#define DEV5G_PMAC_RX_FRAGMENTS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 180, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_IN_RANGE_LEN_ERR_CNT */ #define DEV5G_PMAC_RX_IN_RANGE_LEN_ERR_CNT(t) __REG(TARGET_DEV5G,\ @@ -2646,100 +3550,132 @@ enum sparx5_target { t, 13, 60, 0, 1, 312, 188, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_OVERSIZE_CNT */ -#define DEV5G_PMAC_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 192, 0, 1, 4) +#define DEV5G_PMAC_RX_OVERSIZE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 192, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_JABBERS_CNT */ -#define DEV5G_PMAC_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 196, 0, 1, 4) +#define DEV5G_PMAC_RX_JABBERS_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 196, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE64_CNT */ -#define DEV5G_PMAC_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 200, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 200, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE65TO127_CNT */ -#define DEV5G_PMAC_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 204, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 204, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE128TO255_CNT */ -#define DEV5G_PMAC_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 208, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 208, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE256TO511_CNT */ -#define DEV5G_PMAC_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 212, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 212, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE512TO1023_CNT */ -#define DEV5G_PMAC_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 216, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 216, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE1024TO1518_CNT */ -#define DEV5G_PMAC_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 220, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 220, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_SIZE1519TOMAX_CNT */ -#define DEV5G_PMAC_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 224, 0, 1, 4) +#define DEV5G_PMAC_RX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 224, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_PAUSE_CNT */ -#define DEV5G_PMAC_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 228, 0, 1, 4) +#define DEV5G_PMAC_TX_PAUSE_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 228, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_UC_CNT */ -#define DEV5G_PMAC_TX_UC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 232, 0, 1, 4) +#define DEV5G_PMAC_TX_UC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 232, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_MC_CNT */ -#define 
DEV5G_PMAC_TX_MC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 236, 0, 1, 4) +#define DEV5G_PMAC_TX_MC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 236, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_BC_CNT */ -#define DEV5G_PMAC_TX_BC_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 240, 0, 1, 4) +#define DEV5G_PMAC_TX_BC_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 240, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE64_CNT */ -#define DEV5G_PMAC_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 244, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE64_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 244, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE65TO127_CNT */ -#define DEV5G_PMAC_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 248, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE65TO127_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 248, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE128TO255_CNT */ -#define DEV5G_PMAC_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 252, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE128TO255_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 252, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE256TO511_CNT */ -#define DEV5G_PMAC_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 256, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE256TO511_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 256, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE512TO1023_CNT */ -#define DEV5G_PMAC_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 260, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE512TO1023_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 260, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE1024TO1518_CNT */ -#define DEV5G_PMAC_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 264, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE1024TO1518_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 264, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_TX_SIZE1519TOMAX_CNT */ -#define DEV5G_PMAC_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 268, 0, 1, 4) +#define DEV5G_PMAC_TX_SIZE1519TOMAX_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 268, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_ALIGNMENT_LOST_CNT */ -#define DEV5G_PMAC_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 272, 0, 1, 4) +#define DEV5G_PMAC_RX_ALIGNMENT_LOST_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 272, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:MM_RX_ASSEMBLY_ERR_CNT */ -#define DEV5G_MM_RX_ASSEMBLY_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 276, 0, 1, 4) +#define DEV5G_MM_RX_ASSEMBLY_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 276, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:MM_RX_SMD_ERR_CNT */ -#define DEV5G_MM_RX_SMD_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 280, 0, 1, 4) +#define DEV5G_MM_RX_SMD_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 280, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:MM_RX_ASSEMBLY_OK_CNT */ -#define DEV5G_MM_RX_ASSEMBLY_OK_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 284, 0, 1, 4) +#define DEV5G_MM_RX_ASSEMBLY_OK_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 284, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:MM_RX_MERGE_FRAG_CNT */ -#define DEV5G_MM_RX_MERGE_FRAG_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 288, 0, 1, 4) +#define DEV5G_MM_RX_MERGE_FRAG_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 288, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:MM_TX_PFRAGMENT_CNT */ -#define 
DEV5G_MM_TX_PFRAGMENT_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 292, 0, 1, 4) +#define DEV5G_MM_TX_PFRAGMENT_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 292, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_HIH_CKSM_ERR_CNT */ -#define DEV5G_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 296, 0, 1, 4) +#define DEV5G_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 296, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:RX_XGMII_PROT_ERR_CNT */ -#define DEV5G_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 300, 0, 1, 4) +#define DEV5G_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 300, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_HIH_CKSM_ERR_CNT */ -#define DEV5G_PMAC_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 304, 0, 1, 4) +#define DEV5G_PMAC_RX_HIH_CKSM_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 304, 0, 1, 4) /* DEV10G:DEV_STATISTICS_32BIT:PMAC_RX_XGMII_PROT_ERR_CNT */ -#define DEV5G_PMAC_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G, t, 13, 60, 0, 1, 312, 308, 0, 1, 4) +#define DEV5G_PMAC_RX_XGMII_PROT_ERR_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 60, 0, 1, 312, 308, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:RX_IN_BYTES_CNT */ -#define DEV5G_RX_IN_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 0, 0, 1, 4) +#define DEV5G_RX_IN_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 0, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:RX_IN_BYTES_MSB_CNT */ -#define DEV5G_RX_IN_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 4, 0, 1, 4) +#define DEV5G_RX_IN_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 4, 0, 1, 4) #define DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT_SET(x)\ @@ -2748,10 +3684,12 @@ enum sparx5_target { FIELD_GET(DEV5G_RX_IN_BYTES_MSB_CNT_RX_IN_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:RX_OK_BYTES_CNT */ -#define DEV5G_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 8, 0, 1, 4) +#define DEV5G_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 8, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:RX_OK_BYTES_MSB_CNT */ -#define DEV5G_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 12, 0, 1, 4) +#define DEV5G_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 12, 0, 1, 4) #define DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT_SET(x)\ @@ -2760,10 +3698,12 @@ enum sparx5_target { FIELD_GET(DEV5G_RX_OK_BYTES_MSB_CNT_RX_OK_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:RX_BAD_BYTES_CNT */ -#define DEV5G_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 16, 0, 1, 4) +#define DEV5G_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 16, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:RX_BAD_BYTES_MSB_CNT */ -#define DEV5G_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 20, 0, 1, 4) +#define DEV5G_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 20, 0, 1, 4) #define DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT_SET(x)\ @@ -2772,10 +3712,12 @@ enum sparx5_target { FIELD_GET(DEV5G_RX_BAD_BYTES_MSB_CNT_RX_BAD_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:TX_OUT_BYTES_CNT */ -#define DEV5G_TX_OUT_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 24, 0, 1, 4) +#define DEV5G_TX_OUT_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 
64, 24, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:TX_OUT_BYTES_MSB_CNT */ -#define DEV5G_TX_OUT_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 28, 0, 1, 4) +#define DEV5G_TX_OUT_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 28, 0, 1, 4) #define DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT_SET(x)\ @@ -2784,10 +3726,12 @@ enum sparx5_target { FIELD_GET(DEV5G_TX_OUT_BYTES_MSB_CNT_TX_OUT_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:TX_OK_BYTES_CNT */ -#define DEV5G_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 32, 0, 1, 4) +#define DEV5G_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 32, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:TX_OK_BYTES_MSB_CNT */ -#define DEV5G_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 36, 0, 1, 4) +#define DEV5G_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 36, 0, 1, 4) #define DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT_SET(x)\ @@ -2796,10 +3740,12 @@ enum sparx5_target { FIELD_GET(DEV5G_TX_OK_BYTES_MSB_CNT_TX_OK_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_OK_BYTES_CNT */ -#define DEV5G_PMAC_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 40, 0, 1, 4) +#define DEV5G_PMAC_RX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 40, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_OK_BYTES_MSB_CNT */ -#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 44, 0, 1, 4) +#define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 44, 0, 1, 4) #define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT_SET(x)\ @@ -2808,10 +3754,12 @@ enum sparx5_target { FIELD_GET(DEV5G_PMAC_RX_OK_BYTES_MSB_CNT_PMAC_RX_OK_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_BAD_BYTES_CNT */ -#define DEV5G_PMAC_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 48, 0, 1, 4) +#define DEV5G_PMAC_RX_BAD_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 48, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_RX_BAD_BYTES_MSB_CNT */ -#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 52, 0, 1, 4) +#define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 52, 0, 1, 4) #define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT_SET(x)\ @@ -2820,10 +3768,12 @@ enum sparx5_target { FIELD_GET(DEV5G_PMAC_RX_BAD_BYTES_MSB_CNT_PMAC_RX_BAD_BYTES_MSB_CNT, x) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_TX_OK_BYTES_CNT */ -#define DEV5G_PMAC_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 56, 0, 1, 4) +#define DEV5G_PMAC_TX_OK_BYTES_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 56, 0, 1, 4) /* DEV10G:DEV_STATISTICS_40BIT:PMAC_TX_OK_BYTES_MSB_CNT */ -#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G, t, 13, 372, 0, 1, 64, 60, 0, 1, 4) +#define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT(t) __REG(TARGET_DEV5G,\ + t, 13, 372, 0, 1, 64, 60, 0, 1, 4) #define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT GENMASK(7, 0) #define DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT_SET(x)\ @@ -2832,7 +3782,8 @@ enum sparx5_target { FIELD_GET(DEV5G_PMAC_TX_OK_BYTES_MSB_CNT_PMAC_TX_OK_BYTES_MSB_CNT, x) /* 
DEV10G:DEV_CFG_STATUS:DEV_RST_CTRL */ -#define DEV5G_DEV_RST_CTRL(t) __REG(TARGET_DEV5G, t, 13, 436, 0, 1, 52, 0, 0, 1, 4) +#define DEV5G_DEV_RST_CTRL(t) __REG(TARGET_DEV5G,\ + t, 13, 436, 0, 1, 52, 0, 0, 1, 4) #define DEV5G_DEV_RST_CTRL_PARDET_MODE_ENA BIT(28) #define DEV5G_DEV_RST_CTRL_PARDET_MODE_ENA_SET(x)\ @@ -2889,7 +3840,8 @@ enum sparx5_target { FIELD_GET(DEV5G_DEV_RST_CTRL_MAC_RX_RST, x) /* DSM:RAM_CTRL:RAM_INIT */ -#define DSM_RAM_INIT __REG(TARGET_DSM, 0, 1, 0, 0, 1, 4, 0, 0, 1, 4) +#define DSM_RAM_INIT __REG(TARGET_DSM,\ + 0, 1, 0, 0, 1, 4, 0, 0, 1, 4) #define DSM_RAM_INIT_RAM_INIT BIT(1) #define DSM_RAM_INIT_RAM_INIT_SET(x)\ @@ -2904,7 +3856,8 @@ enum sparx5_target { FIELD_GET(DSM_RAM_INIT_RAM_CFG_HOOK, x) /* DSM:CFG:BUF_CFG */ -#define DSM_BUF_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 0, r, 67, 4) +#define DSM_BUF_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 0, r, 67, 4) #define DSM_BUF_CFG_CSC_STAT_DIS BIT(13) #define DSM_BUF_CFG_CSC_STAT_DIS_SET(x)\ @@ -2931,7 +3884,8 @@ enum sparx5_target { FIELD_GET(DSM_BUF_CFG_UNDERFLOW_WATCHDOG_TIMEOUT, x) /* DSM:CFG:DEV_TX_STOP_WM_CFG */ -#define DSM_DEV_TX_STOP_WM_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 1360, r, 67, 4) +#define DSM_DEV_TX_STOP_WM_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 1360, r, 67, 4) #define DSM_DEV_TX_STOP_WM_CFG_FAST_STARTUP_ENA BIT(9) #define DSM_DEV_TX_STOP_WM_CFG_FAST_STARTUP_ENA_SET(x)\ @@ -2958,7 +3912,8 @@ enum sparx5_target { FIELD_GET(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR, x) /* DSM:CFG:RX_PAUSE_CFG */ -#define DSM_RX_PAUSE_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 1628, r, 67, 4) +#define DSM_RX_PAUSE_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 1628, r, 67, 4) #define DSM_RX_PAUSE_CFG_RX_PAUSE_EN BIT(1) #define DSM_RX_PAUSE_CFG_RX_PAUSE_EN_SET(x)\ @@ -2973,7 +3928,8 @@ enum sparx5_target { FIELD_GET(DSM_RX_PAUSE_CFG_FC_OBEY_LOCAL, x) /* DSM:CFG:MAC_CFG */ -#define DSM_MAC_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2432, r, 67, 4) +#define DSM_MAC_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 2432, r, 67, 4) #define DSM_MAC_CFG_TX_PAUSE_VAL GENMASK(31, 16) #define DSM_MAC_CFG_TX_PAUSE_VAL_SET(x)\ @@ -3000,7 +3956,8 @@ enum sparx5_target { FIELD_GET(DSM_MAC_CFG_TX_PAUSE_XON_XOFF, x) /* DSM:CFG:MAC_ADDR_BASE_HIGH_CFG */ -#define DSM_MAC_ADDR_BASE_HIGH_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2700, r, 65, 4) +#define DSM_MAC_ADDR_BASE_HIGH_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 2700, r, 65, 4) #define DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH GENMASK(23, 0) #define DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH_SET(x)\ @@ -3009,7 +3966,8 @@ enum sparx5_target { FIELD_GET(DSM_MAC_ADDR_BASE_HIGH_CFG_MAC_ADDR_HIGH, x) /* DSM:CFG:MAC_ADDR_BASE_LOW_CFG */ -#define DSM_MAC_ADDR_BASE_LOW_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 2960, r, 65, 4) +#define DSM_MAC_ADDR_BASE_LOW_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 2960, r, 65, 4) #define DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW GENMASK(23, 0) #define DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW_SET(x)\ @@ -3018,7 +3976,8 @@ enum sparx5_target { FIELD_GET(DSM_MAC_ADDR_BASE_LOW_CFG_MAC_ADDR_LOW, x) /* DSM:CFG:TAXI_CAL_CFG */ -#define DSM_TAXI_CAL_CFG(r) __REG(TARGET_DSM, 0, 1, 20, 0, 1, 3528, 3224, r, 9, 4) +#define DSM_TAXI_CAL_CFG(r) __REG(TARGET_DSM,\ + 0, 1, 20, 0, 1, 3528, 3224, r, 9, 4) #define DSM_TAXI_CAL_CFG_CAL_IDX GENMASK(20, 15) #define DSM_TAXI_CAL_CFG_CAL_IDX_SET(x)\ @@ -3050,8 +4009,41 @@ enum sparx5_target { #define DSM_TAXI_CAL_CFG_CAL_PGM_ENA_GET(x)\ FIELD_GET(DSM_TAXI_CAL_CFG_CAL_PGM_ENA, 
x) +/* EACL:ES2_KEY_SELECT_PROFILE:VCAP_ES2_KEY_SEL */ +#define EACL_VCAP_ES2_KEY_SEL(g, r) __REG(TARGET_EACL,\ + 0, 1, 149504, g, 138, 8, 0, r, 2, 4) + +#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL GENMASK(7, 5) +#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_SET(x)\ + FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL, x) +#define EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(x)\ + FIELD_GET(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL, x) + +#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL GENMASK(4, 2) +#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(x)\ + FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL, x) +#define EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(x)\ + FIELD_GET(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL, x) + +#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL BIT(1) +#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_SET(x)\ + FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL, x) +#define EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(x)\ + FIELD_GET(EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL, x) + +#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA BIT(0) +#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(x)\ + FIELD_PREP(EACL_VCAP_ES2_KEY_SEL_KEY_ENA, x) +#define EACL_VCAP_ES2_KEY_SEL_KEY_ENA_GET(x)\ + FIELD_GET(EACL_VCAP_ES2_KEY_SEL_KEY_ENA, x) + +/* EACL:CNT_TBL:ES2_CNT */ +#define EACL_ES2_CNT(g) __REG(TARGET_EACL,\ + 0, 1, 122880, g, 2048, 4, 0, 0, 1, 4) + /* EACL:POL_CFG:POL_EACL_CFG */ -#define EACL_POL_EACL_CFG __REG(TARGET_EACL, 0, 1, 150608, 0, 1, 780, 768, 0, 1, 4) +#define EACL_POL_EACL_CFG __REG(TARGET_EACL,\ + 0, 1, 150608, 0, 1, 780, 768, 0, 1, 4) #define EACL_POL_EACL_CFG_EACL_CNT_MARKED_AS_DROPPED BIT(5) #define EACL_POL_EACL_CFG_EACL_CNT_MARKED_AS_DROPPED_SET(x)\ @@ -3089,8 +4081,61 @@ enum sparx5_target { #define EACL_POL_EACL_CFG_EACL_FORCE_INIT_GET(x)\ FIELD_GET(EACL_POL_EACL_CFG_EACL_FORCE_INIT, x) +/* EACL:ES2_STICKY:SEC_LOOKUP_STICKY */ +#define EACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_EACL,\ + 0, 1, 118696, 0, 1, 8, 0, r, 2, 4) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY BIT(7) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY BIT(6) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY BIT(5) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY BIT(4) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY BIT(3) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x) + +#define 
EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY BIT(2) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY BIT(1) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x) + +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY BIT(0) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_SET(x)\ + FIELD_PREP(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x) +#define EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_GET(x)\ + FIELD_GET(EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x) + /* EACL:RAM_CTRL:RAM_INIT */ -#define EACL_RAM_INIT __REG(TARGET_EACL, 0, 1, 118736, 0, 1, 4, 0, 0, 1, 4) +#define EACL_RAM_INIT __REG(TARGET_EACL,\ + 0, 1, 118736, 0, 1, 4, 0, 0, 1, 4) #define EACL_RAM_INIT_RAM_INIT BIT(1) #define EACL_RAM_INIT_RAM_INIT_SET(x)\ @@ -3105,7 +4150,8 @@ enum sparx5_target { FIELD_GET(EACL_RAM_INIT_RAM_CFG_HOOK, x) /* FDMA:FDMA:FDMA_CH_ACTIVATE */ -#define FDMA_CH_ACTIVATE __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 0, 0, 1, 4) +#define FDMA_CH_ACTIVATE __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 0, 0, 1, 4) #define FDMA_CH_ACTIVATE_CH_ACTIVATE GENMASK(7, 0) #define FDMA_CH_ACTIVATE_CH_ACTIVATE_SET(x)\ @@ -3114,7 +4160,8 @@ enum sparx5_target { FIELD_GET(FDMA_CH_ACTIVATE_CH_ACTIVATE, x) /* FDMA:FDMA:FDMA_CH_RELOAD */ -#define FDMA_CH_RELOAD __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 4, 0, 1, 4) +#define FDMA_CH_RELOAD __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 4, 0, 1, 4) #define FDMA_CH_RELOAD_CH_RELOAD GENMASK(7, 0) #define FDMA_CH_RELOAD_CH_RELOAD_SET(x)\ @@ -3123,7 +4170,8 @@ enum sparx5_target { FIELD_GET(FDMA_CH_RELOAD_CH_RELOAD, x) /* FDMA:FDMA:FDMA_CH_DISABLE */ -#define FDMA_CH_DISABLE __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 8, 0, 1, 4) +#define FDMA_CH_DISABLE __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 8, 0, 1, 4) #define FDMA_CH_DISABLE_CH_DISABLE GENMASK(7, 0) #define FDMA_CH_DISABLE_CH_DISABLE_SET(x)\ @@ -3132,19 +4180,24 @@ enum sparx5_target { FIELD_GET(FDMA_CH_DISABLE_CH_DISABLE, x) /* FDMA:FDMA:FDMA_DCB_LLP */ -#define FDMA_DCB_LLP(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 52, r, 8, 4) +#define FDMA_DCB_LLP(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 52, r, 8, 4) /* FDMA:FDMA:FDMA_DCB_LLP1 */ -#define FDMA_DCB_LLP1(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 84, r, 8, 4) +#define FDMA_DCB_LLP1(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 84, r, 8, 4) /* FDMA:FDMA:FDMA_DCB_LLP_PREV */ -#define FDMA_DCB_LLP_PREV(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 116, r, 8, 4) +#define FDMA_DCB_LLP_PREV(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 116, r, 8, 4) /* FDMA:FDMA:FDMA_DCB_LLP_PREV1 */ -#define FDMA_DCB_LLP_PREV1(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 148, r, 8, 4) +#define FDMA_DCB_LLP_PREV1(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 148, r, 8, 4) /* FDMA:FDMA:FDMA_CH_CFG */ -#define FDMA_CH_CFG(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 224, r, 8, 4) +#define FDMA_CH_CFG(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 224, r, 8, 4) #define FDMA_CH_CFG_CH_XTR_STATUS_MODE BIT(7) #define FDMA_CH_CFG_CH_XTR_STATUS_MODE_SET(x)\ @@ -3177,7 +4230,8 @@ enum sparx5_target { FIELD_GET(FDMA_CH_CFG_CH_MEM, x) /* 
FDMA:FDMA:FDMA_CH_TRANSLATE */ -#define FDMA_CH_TRANSLATE(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 256, r, 8, 4) +#define FDMA_CH_TRANSLATE(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 256, r, 8, 4) #define FDMA_CH_TRANSLATE_OFFSET GENMASK(15, 0) #define FDMA_CH_TRANSLATE_OFFSET_SET(x)\ @@ -3186,7 +4240,8 @@ enum sparx5_target { FIELD_GET(FDMA_CH_TRANSLATE_OFFSET, x) /* FDMA:FDMA:FDMA_XTR_CFG */ -#define FDMA_XTR_CFG __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 364, 0, 1, 4) +#define FDMA_XTR_CFG __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 364, 0, 1, 4) #define FDMA_XTR_CFG_XTR_FIFO_WM GENMASK(15, 11) #define FDMA_XTR_CFG_XTR_FIFO_WM_SET(x)\ @@ -3201,7 +4256,8 @@ enum sparx5_target { FIELD_GET(FDMA_XTR_CFG_XTR_ARB_SAT, x) /* FDMA:FDMA:FDMA_PORT_CTRL */ -#define FDMA_PORT_CTRL(r) __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 376, r, 2, 4) +#define FDMA_PORT_CTRL(r) __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 376, r, 2, 4) #define FDMA_PORT_CTRL_INJ_STOP BIT(4) #define FDMA_PORT_CTRL_INJ_STOP_SET(x)\ @@ -3234,7 +4290,8 @@ enum sparx5_target { FIELD_GET(FDMA_PORT_CTRL_XTR_BUF_RST, x) /* FDMA:FDMA:FDMA_INTR_DCB */ -#define FDMA_INTR_DCB __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 384, 0, 1, 4) +#define FDMA_INTR_DCB __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 384, 0, 1, 4) #define FDMA_INTR_DCB_INTR_DCB GENMASK(7, 0) #define FDMA_INTR_DCB_INTR_DCB_SET(x)\ @@ -3243,7 +4300,8 @@ enum sparx5_target { FIELD_GET(FDMA_INTR_DCB_INTR_DCB, x) /* FDMA:FDMA:FDMA_INTR_DCB_ENA */ -#define FDMA_INTR_DCB_ENA __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 388, 0, 1, 4) +#define FDMA_INTR_DCB_ENA __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 388, 0, 1, 4) #define FDMA_INTR_DCB_ENA_INTR_DCB_ENA GENMASK(7, 0) #define FDMA_INTR_DCB_ENA_INTR_DCB_ENA_SET(x)\ @@ -3252,7 +4310,8 @@ enum sparx5_target { FIELD_GET(FDMA_INTR_DCB_ENA_INTR_DCB_ENA, x) /* FDMA:FDMA:FDMA_INTR_DB */ -#define FDMA_INTR_DB __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 392, 0, 1, 4) +#define FDMA_INTR_DB __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 392, 0, 1, 4) #define FDMA_INTR_DB_INTR_DB GENMASK(7, 0) #define FDMA_INTR_DB_INTR_DB_SET(x)\ @@ -3261,7 +4320,8 @@ enum sparx5_target { FIELD_GET(FDMA_INTR_DB_INTR_DB, x) /* FDMA:FDMA:FDMA_INTR_DB_ENA */ -#define FDMA_INTR_DB_ENA __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 396, 0, 1, 4) +#define FDMA_INTR_DB_ENA __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 396, 0, 1, 4) #define FDMA_INTR_DB_ENA_INTR_DB_ENA GENMASK(7, 0) #define FDMA_INTR_DB_ENA_INTR_DB_ENA_SET(x)\ @@ -3270,7 +4330,8 @@ enum sparx5_target { FIELD_GET(FDMA_INTR_DB_ENA_INTR_DB_ENA, x) /* FDMA:FDMA:FDMA_INTR_ERR */ -#define FDMA_INTR_ERR __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 400, 0, 1, 4) +#define FDMA_INTR_ERR __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 400, 0, 1, 4) #define FDMA_INTR_ERR_INTR_PORT_ERR GENMASK(9, 8) #define FDMA_INTR_ERR_INTR_PORT_ERR_SET(x)\ @@ -3285,7 +4346,8 @@ enum sparx5_target { FIELD_GET(FDMA_INTR_ERR_INTR_CH_ERR, x) /* FDMA:FDMA:FDMA_ERRORS */ -#define FDMA_ERRORS __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 412, 0, 1, 4) +#define FDMA_ERRORS __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 412, 0, 1, 4) #define FDMA_ERRORS_ERR_XTR_WR GENMASK(31, 30) #define FDMA_ERRORS_ERR_XTR_WR_SET(x)\ @@ -3336,7 +4398,8 @@ enum sparx5_target { FIELD_GET(FDMA_ERRORS_ERR_CH_WR, x) /* FDMA:FDMA:FDMA_ERRORS_2 */ -#define FDMA_ERRORS_2 __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 416, 0, 1, 4) +#define FDMA_ERRORS_2 __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 416, 0, 1, 4) #define FDMA_ERRORS_2_ERR_XTR_FRAG GENMASK(1, 0) #define FDMA_ERRORS_2_ERR_XTR_FRAG_SET(x)\ @@ -3345,7 +4408,8 @@ enum 
sparx5_target { FIELD_GET(FDMA_ERRORS_2_ERR_XTR_FRAG, x) /* FDMA:FDMA:FDMA_CTRL */ -#define FDMA_CTRL __REG(TARGET_FDMA, 0, 1, 8, 0, 1, 428, 424, 0, 1, 4) +#define FDMA_CTRL __REG(TARGET_FDMA,\ + 0, 1, 8, 0, 1, 428, 424, 0, 1, 4) #define FDMA_CTRL_NRESET BIT(0) #define FDMA_CTRL_NRESET_SET(x)\ @@ -3354,7 +4418,8 @@ enum sparx5_target { FIELD_GET(FDMA_CTRL_NRESET, x) /* DEVCPU_GCB:CHIP_REGS:CHIP_ID */ -#define GCB_CHIP_ID __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 0, 0, 1, 4) +#define GCB_CHIP_ID __REG(TARGET_GCB,\ + 0, 1, 0, 0, 1, 424, 0, 0, 1, 4) #define GCB_CHIP_ID_REV_ID GENMASK(31, 28) #define GCB_CHIP_ID_REV_ID_SET(x)\ @@ -3381,7 +4446,8 @@ enum sparx5_target { FIELD_GET(GCB_CHIP_ID_ONE, x) /* DEVCPU_GCB:CHIP_REGS:SOFT_RST */ -#define GCB_SOFT_RST __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 8, 0, 1, 4) +#define GCB_SOFT_RST __REG(TARGET_GCB,\ + 0, 1, 0, 0, 1, 424, 8, 0, 1, 4) #define GCB_SOFT_RST_SOFT_NON_CFG_RST BIT(2) #define GCB_SOFT_RST_SOFT_NON_CFG_RST_SET(x)\ @@ -3402,7 +4468,8 @@ enum sparx5_target { FIELD_GET(GCB_SOFT_RST_SOFT_CHIP_RST, x) /* DEVCPU_GCB:CHIP_REGS:HW_SGPIO_SD_CFG */ -#define GCB_HW_SGPIO_SD_CFG __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 20, 0, 1, 4) +#define GCB_HW_SGPIO_SD_CFG __REG(TARGET_GCB,\ + 0, 1, 0, 0, 1, 424, 20, 0, 1, 4) #define GCB_HW_SGPIO_SD_CFG_SD_HIGH_ENA BIT(1) #define GCB_HW_SGPIO_SD_CFG_SD_HIGH_ENA_SET(x)\ @@ -3417,7 +4484,8 @@ enum sparx5_target { FIELD_GET(GCB_HW_SGPIO_SD_CFG_SD_MAP_SEL, x) /* DEVCPU_GCB:CHIP_REGS:HW_SGPIO_TO_SD_MAP_CFG */ -#define GCB_HW_SGPIO_TO_SD_MAP_CFG(r) __REG(TARGET_GCB, 0, 1, 0, 0, 1, 424, 24, r, 65, 4) +#define GCB_HW_SGPIO_TO_SD_MAP_CFG(r) __REG(TARGET_GCB,\ + 0, 1, 0, 0, 1, 424, 24, r, 65, 4) #define GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL GENMASK(8, 0) #define GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL_SET(x)\ @@ -3426,7 +4494,8 @@ enum sparx5_target { FIELD_GET(GCB_HW_SGPIO_TO_SD_MAP_CFG_SGPIO_TO_SD_SEL, x) /* DEVCPU_GCB:SIO_CTRL:SIO_CLOCK */ -#define GCB_SIO_CLOCK(g) __REG(TARGET_GCB, 0, 1, 876, g, 3, 280, 20, 0, 1, 4) +#define GCB_SIO_CLOCK(g) __REG(TARGET_GCB,\ + 0, 1, 876, g, 3, 280, 20, 0, 1, 4) #define GCB_SIO_CLOCK_SIO_CLK_FREQ GENMASK(19, 8) #define GCB_SIO_CLOCK_SIO_CLK_FREQ_SET(x)\ @@ -3441,7 +4510,8 @@ enum sparx5_target { FIELD_GET(GCB_SIO_CLOCK_SYS_CLK_PERIOD, x) /* HSCH:HSCH_CFG:CIR_CFG */ -#define HSCH_CIR_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 0, 0, 1, 4) +#define HSCH_CIR_CFG(g) __REG(TARGET_HSCH,\ + 0, 1, 0, g, 5040, 32, 0, 0, 1, 4) #define HSCH_CIR_CFG_CIR_RATE GENMASK(22, 6) #define HSCH_CIR_CFG_CIR_RATE_SET(x)\ @@ -3456,7 +4526,8 @@ enum sparx5_target { FIELD_GET(HSCH_CIR_CFG_CIR_BURST, x) /* HSCH:HSCH_CFG:EIR_CFG */ -#define HSCH_EIR_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 4, 0, 1, 4) +#define HSCH_EIR_CFG(g) __REG(TARGET_HSCH,\ + 0, 1, 0, g, 5040, 32, 4, 0, 1, 4) #define HSCH_EIR_CFG_EIR_RATE GENMASK(22, 6) #define HSCH_EIR_CFG_EIR_RATE_SET(x)\ @@ -3471,7 +4542,8 @@ enum sparx5_target { FIELD_GET(HSCH_EIR_CFG_EIR_BURST, x) /* HSCH:HSCH_CFG:SE_CFG */ -#define HSCH_SE_CFG(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 8, 0, 1, 4) +#define HSCH_SE_CFG(g) __REG(TARGET_HSCH,\ + 0, 1, 0, g, 5040, 32, 8, 0, 1, 4) #define HSCH_SE_CFG_SE_DWRR_CNT GENMASK(12, 6) #define HSCH_SE_CFG_SE_DWRR_CNT_SET(x)\ @@ -3504,7 +4576,8 @@ enum sparx5_target { FIELD_GET(HSCH_SE_CFG_SE_STOP, x) /* HSCH:HSCH_CFG:SE_CONNECT */ -#define HSCH_SE_CONNECT(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 12, 0, 1, 4) +#define HSCH_SE_CONNECT(g) __REG(TARGET_HSCH,\ + 0, 1, 0, g, 5040, 32, 12, 0, 1, 4) #define 
HSCH_SE_CONNECT_SE_LEAK_LINK GENMASK(15, 0) #define HSCH_SE_CONNECT_SE_LEAK_LINK_SET(x)\ @@ -3513,7 +4586,8 @@ enum sparx5_target { FIELD_GET(HSCH_SE_CONNECT_SE_LEAK_LINK, x) /* HSCH:HSCH_CFG:SE_DLB_SENSE */ -#define HSCH_SE_DLB_SENSE(g) __REG(TARGET_HSCH, 0, 1, 0, g, 5040, 32, 16, 0, 1, 4) +#define HSCH_SE_DLB_SENSE(g) __REG(TARGET_HSCH,\ + 0, 1, 0, g, 5040, 32, 16, 0, 1, 4) #define HSCH_SE_DLB_SENSE_SE_DLB_PRIO GENMASK(12, 10) #define HSCH_SE_DLB_SENSE_SE_DLB_PRIO_SET(x)\ @@ -3546,7 +4620,8 @@ enum sparx5_target { FIELD_GET(HSCH_SE_DLB_SENSE_SE_DLB_DPORT_ENA, x) /* HSCH:HSCH_DWRR:DWRR_ENTRY */ -#define HSCH_DWRR_ENTRY(g) __REG(TARGET_HSCH, 0, 1, 162816, g, 72, 4, 0, 0, 1, 4) +#define HSCH_DWRR_ENTRY(g) __REG(TARGET_HSCH,\ + 0, 1, 162816, g, 72, 4, 0, 0, 1, 4) #define HSCH_DWRR_ENTRY_DWRR_COST GENMASK(24, 20) #define HSCH_DWRR_ENTRY_DWRR_COST_SET(x)\ @@ -3561,7 +4636,8 @@ enum sparx5_target { FIELD_GET(HSCH_DWRR_ENTRY_DWRR_BALANCE, x) /* HSCH:HSCH_MISC:HSCH_CFG_CFG */ -#define HSCH_HSCH_CFG_CFG __REG(TARGET_HSCH, 0, 1, 163104, 0, 1, 648, 284, 0, 1, 4) +#define HSCH_HSCH_CFG_CFG __REG(TARGET_HSCH,\ + 0, 1, 163104, 0, 1, 648, 284, 0, 1, 4) #define HSCH_HSCH_CFG_CFG_CFG_SE_IDX GENMASK(26, 14) #define HSCH_HSCH_CFG_CFG_CFG_SE_IDX_SET(x)\ @@ -3582,16 +4658,18 @@ enum sparx5_target { FIELD_GET(HSCH_HSCH_CFG_CFG_CSR_GRANT, x) /* HSCH:HSCH_MISC:SYS_CLK_PER */ -#define HSCH_SYS_CLK_PER __REG(TARGET_HSCH, 0, 1, 163104, 0, 1, 648, 640, 0, 1, 4) +#define HSCH_SYS_CLK_PER __REG(TARGET_HSCH,\ + 0, 1, 163104, 0, 1, 648, 640, 0, 1, 4) -#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS GENMASK(7, 0) -#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_SET(x)\ - FIELD_PREP(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS, x) -#define HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_GET(x)\ - FIELD_GET(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS, x) +#define HSCH_SYS_CLK_PER_100PS GENMASK(7, 0) +#define HSCH_SYS_CLK_PER_100PS_SET(x)\ + FIELD_PREP(HSCH_SYS_CLK_PER_100PS, x) +#define HSCH_SYS_CLK_PER_100PS_GET(x)\ + FIELD_GET(HSCH_SYS_CLK_PER_100PS, x) /* HSCH:HSCH_LEAK_LISTS:HSCH_TIMER_CFG */ -#define HSCH_HSCH_TIMER_CFG(g, r) __REG(TARGET_HSCH, 0, 1, 161664, g, 4, 32, 0, r, 4, 4) +#define HSCH_HSCH_TIMER_CFG(g, r) __REG(TARGET_HSCH,\ + 0, 1, 161664, g, 4, 32, 0, r, 4, 4) #define HSCH_HSCH_TIMER_CFG_LEAK_TIME GENMASK(17, 0) #define HSCH_HSCH_TIMER_CFG_LEAK_TIME_SET(x)\ @@ -3600,7 +4678,8 @@ enum sparx5_target { FIELD_GET(HSCH_HSCH_TIMER_CFG_LEAK_TIME, x) /* HSCH:HSCH_LEAK_LISTS:HSCH_LEAK_CFG */ -#define HSCH_HSCH_LEAK_CFG(g, r) __REG(TARGET_HSCH, 0, 1, 161664, g, 4, 32, 16, r, 4, 4) +#define HSCH_HSCH_LEAK_CFG(g, r) __REG(TARGET_HSCH,\ + 0, 1, 161664, g, 4, 32, 16, r, 4, 4) #define HSCH_HSCH_LEAK_CFG_LEAK_FIRST GENMASK(16, 1) #define HSCH_HSCH_LEAK_CFG_LEAK_FIRST_SET(x)\ @@ -3615,7 +4694,8 @@ enum sparx5_target { FIELD_GET(HSCH_HSCH_LEAK_CFG_LEAK_ERR, x) /* HSCH:SYSTEM:FLUSH_CTRL */ -#define HSCH_FLUSH_CTRL __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 4, 0, 1, 4) +#define HSCH_FLUSH_CTRL __REG(TARGET_HSCH,\ + 0, 1, 184000, 0, 1, 312, 4, 0, 1, 4) #define HSCH_FLUSH_CTRL_FLUSH_ENA BIT(27) #define HSCH_FLUSH_CTRL_FLUSH_ENA_SET(x)\ @@ -3660,7 +4740,8 @@ enum sparx5_target { FIELD_GET(HSCH_FLUSH_CTRL_FLUSH_HIER, x) /* HSCH:SYSTEM:PORT_MODE */ -#define HSCH_PORT_MODE(r) __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 8, r, 70, 4) +#define HSCH_PORT_MODE(r) __REG(TARGET_HSCH,\ + 0, 1, 184000, 0, 1, 312, 8, r, 70, 4) #define HSCH_PORT_MODE_DEQUEUE_DIS BIT(4) #define HSCH_PORT_MODE_DEQUEUE_DIS_SET(x)\ @@ -3693,7 +4774,8 @@ enum sparx5_target { 
FIELD_GET(HSCH_PORT_MODE_CPU_PRIO_MODE, x) /* HSCH:SYSTEM:OUTB_SHARE_ENA */ -#define HSCH_OUTB_SHARE_ENA(r) __REG(TARGET_HSCH, 0, 1, 184000, 0, 1, 312, 288, r, 5, 4) +#define HSCH_OUTB_SHARE_ENA(r) __REG(TARGET_HSCH,\ + 0, 1, 184000, 0, 1, 312, 288, r, 5, 4) #define HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA GENMASK(7, 0) #define HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA_SET(x)\ @@ -3702,7 +4784,8 @@ enum sparx5_target { FIELD_GET(HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA, x) /* HSCH:MMGT:RESET_CFG */ -#define HSCH_RESET_CFG __REG(TARGET_HSCH, 0, 1, 162368, 0, 1, 16, 8, 0, 1, 4) +#define HSCH_RESET_CFG __REG(TARGET_HSCH,\ + 0, 1, 162368, 0, 1, 16, 8, 0, 1, 4) #define HSCH_RESET_CFG_CORE_ENA BIT(0) #define HSCH_RESET_CFG_CORE_ENA_SET(x)\ @@ -3711,7 +4794,8 @@ enum sparx5_target { FIELD_GET(HSCH_RESET_CFG_CORE_ENA, x) /* HSCH:TAS_CONFIG:TAS_STATEMACHINE_CFG */ -#define HSCH_TAS_STATEMACHINE_CFG __REG(TARGET_HSCH, 0, 1, 162384, 0, 1, 12, 8, 0, 1, 4) +#define HSCH_TAS_STATEMACHINE_CFG __REG(TARGET_HSCH,\ + 0, 1, 162384, 0, 1, 12, 8, 0, 1, 4) #define HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY GENMASK(7, 0) #define HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY_SET(x)\ @@ -3720,7 +4804,8 @@ enum sparx5_target { FIELD_GET(HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY, x) /* LRN:COMMON:COMMON_ACCESS_CTRL */ -#define LRN_COMMON_ACCESS_CTRL __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 0, 0, 1, 4) +#define LRN_COMMON_ACCESS_CTRL __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 0, 0, 1, 4) #define LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_DIRECT_COL GENMASK(21, 20) #define LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_DIRECT_COL_SET(x)\ @@ -3753,7 +4838,8 @@ enum sparx5_target { FIELD_GET(LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT, x) /* LRN:COMMON:MAC_ACCESS_CFG_0 */ -#define LRN_MAC_ACCESS_CFG_0 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 4, 0, 1, 4) +#define LRN_MAC_ACCESS_CFG_0 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 4, 0, 1, 4) #define LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_FID GENMASK(28, 16) #define LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_FID_SET(x)\ @@ -3768,10 +4854,12 @@ enum sparx5_target { FIELD_GET(LRN_MAC_ACCESS_CFG_0_MAC_ENTRY_MAC_MSB, x) /* LRN:COMMON:MAC_ACCESS_CFG_1 */ -#define LRN_MAC_ACCESS_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 8, 0, 1, 4) +#define LRN_MAC_ACCESS_CFG_1 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 8, 0, 1, 4) /* LRN:COMMON:MAC_ACCESS_CFG_2 */ -#define LRN_MAC_ACCESS_CFG_2 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 12, 0, 1, 4) +#define LRN_MAC_ACCESS_CFG_2 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 12, 0, 1, 4) #define LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_SRC_KILL_FWD BIT(28) #define LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_SRC_KILL_FWD_SET(x)\ @@ -3846,7 +4934,8 @@ enum sparx5_target { FIELD_GET(LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR, x) /* LRN:COMMON:MAC_ACCESS_CFG_3 */ -#define LRN_MAC_ACCESS_CFG_3 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 16, 0, 1, 4) +#define LRN_MAC_ACCESS_CFG_3 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 16, 0, 1, 4) #define LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX GENMASK(10, 0) #define LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX_SET(x)\ @@ -3855,7 +4944,8 @@ enum sparx5_target { FIELD_GET(LRN_MAC_ACCESS_CFG_3_MAC_ENTRY_ISDX_LIMIT_IDX, x) /* LRN:COMMON:SCAN_NEXT_CFG */ -#define LRN_SCAN_NEXT_CFG __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 20, 0, 1, 4) +#define LRN_SCAN_NEXT_CFG __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 20, 0, 1, 4) #define LRN_SCAN_NEXT_CFG_SCAN_AGE_FLAG_UPDATE_SEL GENMASK(21, 19) #define LRN_SCAN_NEXT_CFG_SCAN_AGE_FLAG_UPDATE_SEL_SET(x)\ @@ -3948,7 +5038,8 @@ enum sparx5_target { FIELD_GET(LRN_SCAN_NEXT_CFG_ADDR_FILTER_ENA, x) /* LRN:COMMON:SCAN_NEXT_CFG_1 */ 
-#define LRN_SCAN_NEXT_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 24, 0, 1, 4) +#define LRN_SCAN_NEXT_CFG_1 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 24, 0, 1, 4) #define LRN_SCAN_NEXT_CFG_1_PORT_MOVE_NEW_ADDR GENMASK(30, 16) #define LRN_SCAN_NEXT_CFG_1_PORT_MOVE_NEW_ADDR_SET(x)\ @@ -3963,7 +5054,8 @@ enum sparx5_target { FIELD_GET(LRN_SCAN_NEXT_CFG_1_SCAN_ENTRY_ADDR_MASK, x) /* LRN:COMMON:AUTOAGE_CFG */ -#define LRN_AUTOAGE_CFG(r) __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 36, r, 4, 4) +#define LRN_AUTOAGE_CFG(r) __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 36, r, 4, 4) #define LRN_AUTOAGE_CFG_UNIT_SIZE GENMASK(29, 28) #define LRN_AUTOAGE_CFG_UNIT_SIZE_SET(x)\ @@ -3978,7 +5070,8 @@ enum sparx5_target { FIELD_GET(LRN_AUTOAGE_CFG_PERIOD_VAL, x) /* LRN:COMMON:AUTOAGE_CFG_1 */ -#define LRN_AUTOAGE_CFG_1 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 52, 0, 1, 4) +#define LRN_AUTOAGE_CFG_1 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 52, 0, 1, 4) #define LRN_AUTOAGE_CFG_1_PAUSE_AUTO_AGE_ENA BIT(25) #define LRN_AUTOAGE_CFG_1_PAUSE_AUTO_AGE_ENA_SET(x)\ @@ -4023,7 +5116,8 @@ enum sparx5_target { FIELD_GET(LRN_AUTOAGE_CFG_1_FORCE_IDLE_ENA, x) /* LRN:COMMON:AUTOAGE_CFG_2 */ -#define LRN_AUTOAGE_CFG_2 __REG(TARGET_LRN, 0, 1, 0, 0, 1, 72, 56, 0, 1, 4) +#define LRN_AUTOAGE_CFG_2 __REG(TARGET_LRN,\ + 0, 1, 0, 0, 1, 72, 56, 0, 1, 4) #define LRN_AUTOAGE_CFG_2_NEXT_ROW GENMASK(17, 4) #define LRN_AUTOAGE_CFG_2_NEXT_ROW_SET(x)\ @@ -4038,7 +5132,8 @@ enum sparx5_target { FIELD_GET(LRN_AUTOAGE_CFG_2_SCAN_ONGOING_STATUS, x) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_REGION_CTRL_2_OFF_OUTBOUND_0 */ -#define PCEP_RCTRL_2_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 4, 0, 1, 4) +#define PCEP_RCTRL_2_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 4, 0, 1, 4) #define PCEP_RCTRL_2_OUT_0_MSG_CODE GENMASK(7, 0) #define PCEP_RCTRL_2_OUT_0_MSG_CODE_SET(x)\ @@ -4101,7 +5196,8 @@ enum sparx5_target { FIELD_GET(PCEP_RCTRL_2_OUT_0_REGION_EN, x) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_LWR_BASE_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_LWR_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 8, 0, 1, 4) +#define PCEP_ADDR_LWR_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 8, 0, 1, 4) #define PCEP_ADDR_LWR_OUT_0_LWR_BASE_HW GENMASK(15, 0) #define PCEP_ADDR_LWR_OUT_0_LWR_BASE_HW_SET(x)\ @@ -4116,10 +5212,12 @@ enum sparx5_target { FIELD_GET(PCEP_ADDR_LWR_OUT_0_LWR_BASE_RW, x) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPER_BASE_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_UPR_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 12, 0, 1, 4) +#define PCEP_ADDR_UPR_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 12, 0, 1, 4) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_LIMIT_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_LIM_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 16, 0, 1, 4) +#define PCEP_ADDR_LIM_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 16, 0, 1, 4) #define PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_HW GENMASK(15, 0) #define PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_HW_SET(x)\ @@ -4134,13 +5232,16 @@ enum sparx5_target { FIELD_GET(PCEP_ADDR_LIM_OUT_0_LIMIT_ADDR_RW, x) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_LWR_TARGET_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_LWR_TGT_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 20, 0, 1, 4) +#define PCEP_ADDR_LWR_TGT_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 20, 0, 1, 4) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPER_TARGET_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_UPR_TGT_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 24, 0, 1, 4) +#define PCEP_ADDR_UPR_TGT_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 
24, 0, 1, 4) /* PCIE_DM_EP:PF0_ATU_CAP:IATU_UPPR_LIMIT_ADDR_OFF_OUTBOUND_0 */ -#define PCEP_ADDR_UPR_LIM_OUT_0 __REG(TARGET_PCEP, 0, 1, 3145728, 0, 1, 130852, 32, 0, 1, 4) +#define PCEP_ADDR_UPR_LIM_OUT_0 __REG(TARGET_PCEP,\ + 0, 1, 3145728, 0, 1, 130852, 32, 0, 1, 4) #define PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_RW GENMASK(1, 0) #define PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_RW_SET(x)\ @@ -4155,7 +5256,8 @@ enum sparx5_target { FIELD_GET(PCEP_ADDR_UPR_LIM_OUT_0_UPPR_LIMIT_ADDR_HW, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */ -#define PCS10G_BR_PCS_CFG(t) __REG(TARGET_PCS10G_BR, t, 12, 0, 0, 1, 56, 0, 0, 1, 4) +#define PCS10G_BR_PCS_CFG(t) __REG(TARGET_PCS10G_BR,\ + t, 12, 0, 0, 1, 56, 0, 0, 1, 4) #define PCS10G_BR_PCS_CFG_PCS_ENA BIT(31) #define PCS10G_BR_PCS_CFG_PCS_ENA_SET(x)\ @@ -4230,7 +5332,8 @@ enum sparx5_target { FIELD_GET(PCS10G_BR_PCS_CFG_TX_SCR_DISABLE, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */ -#define PCS10G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS10G_BR, t, 12, 0, 0, 1, 56, 4, 0, 1, 4) +#define PCS10G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS10G_BR,\ + t, 12, 0, 0, 1, 56, 4, 0, 1, 4) #define PCS10G_BR_PCS_SD_CFG_SD_SEL BIT(8) #define PCS10G_BR_PCS_SD_CFG_SD_SEL_SET(x)\ @@ -4251,7 +5354,8 @@ enum sparx5_target { FIELD_GET(PCS10G_BR_PCS_SD_CFG_SD_ENA, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */ -#define PCS25G_BR_PCS_CFG(t) __REG(TARGET_PCS25G_BR, t, 8, 0, 0, 1, 56, 0, 0, 1, 4) +#define PCS25G_BR_PCS_CFG(t) __REG(TARGET_PCS25G_BR,\ + t, 8, 0, 0, 1, 56, 0, 0, 1, 4) #define PCS25G_BR_PCS_CFG_PCS_ENA BIT(31) #define PCS25G_BR_PCS_CFG_PCS_ENA_SET(x)\ @@ -4326,7 +5430,8 @@ enum sparx5_target { FIELD_GET(PCS25G_BR_PCS_CFG_TX_SCR_DISABLE, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */ -#define PCS25G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS25G_BR, t, 8, 0, 0, 1, 56, 4, 0, 1, 4) +#define PCS25G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS25G_BR,\ + t, 8, 0, 0, 1, 56, 4, 0, 1, 4) #define PCS25G_BR_PCS_SD_CFG_SD_SEL BIT(8) #define PCS25G_BR_PCS_SD_CFG_SD_SEL_SET(x)\ @@ -4347,7 +5452,8 @@ enum sparx5_target { FIELD_GET(PCS25G_BR_PCS_SD_CFG_SD_ENA, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_CFG */ -#define PCS5G_BR_PCS_CFG(t) __REG(TARGET_PCS5G_BR, t, 13, 0, 0, 1, 56, 0, 0, 1, 4) +#define PCS5G_BR_PCS_CFG(t) __REG(TARGET_PCS5G_BR,\ + t, 13, 0, 0, 1, 56, 0, 0, 1, 4) #define PCS5G_BR_PCS_CFG_PCS_ENA BIT(31) #define PCS5G_BR_PCS_CFG_PCS_ENA_SET(x)\ @@ -4422,7 +5528,8 @@ enum sparx5_target { FIELD_GET(PCS5G_BR_PCS_CFG_TX_SCR_DISABLE, x) /* PCS_10GBASE_R:PCS_10GBR_CFG:PCS_SD_CFG */ -#define PCS5G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS5G_BR, t, 13, 0, 0, 1, 56, 4, 0, 1, 4) +#define PCS5G_BR_PCS_SD_CFG(t) __REG(TARGET_PCS5G_BR,\ + t, 13, 0, 0, 1, 56, 4, 0, 1, 4) #define PCS5G_BR_PCS_SD_CFG_SD_SEL BIT(8) #define PCS5G_BR_PCS_SD_CFG_SD_SEL_SET(x)\ @@ -4443,7 +5550,8 @@ enum sparx5_target { FIELD_GET(PCS5G_BR_PCS_SD_CFG_SD_ENA, x) /* PORT_CONF:HW_CFG:DEV5G_MODES */ -#define PORT_CONF_DEV5G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 0, 0, 1, 4) +#define PORT_CONF_DEV5G_MODES __REG(TARGET_PORT_CONF,\ + 0, 1, 0, 0, 1, 24, 0, 0, 1, 4) #define PORT_CONF_DEV5G_MODES_DEV5G_D0_MODE BIT(0) #define PORT_CONF_DEV5G_MODES_DEV5G_D0_MODE_SET(x)\ @@ -4524,7 +5632,8 @@ enum sparx5_target { FIELD_GET(PORT_CONF_DEV5G_MODES_DEV5G_D64_MODE, x) /* PORT_CONF:HW_CFG:DEV10G_MODES */ -#define PORT_CONF_DEV10G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 4, 0, 1, 4) +#define PORT_CONF_DEV10G_MODES __REG(TARGET_PORT_CONF,\ + 0, 1, 0, 0, 1, 24, 4, 0, 1, 4) #define PORT_CONF_DEV10G_MODES_DEV10G_D12_MODE BIT(0) #define 
PORT_CONF_DEV10G_MODES_DEV10G_D12_MODE_SET(x)\ @@ -4599,7 +5708,8 @@ enum sparx5_target { FIELD_GET(PORT_CONF_DEV10G_MODES_DEV10G_D55_MODE, x) /* PORT_CONF:HW_CFG:DEV25G_MODES */ -#define PORT_CONF_DEV25G_MODES __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 8, 0, 1, 4) +#define PORT_CONF_DEV25G_MODES __REG(TARGET_PORT_CONF,\ + 0, 1, 0, 0, 1, 24, 8, 0, 1, 4) #define PORT_CONF_DEV25G_MODES_DEV25G_D56_MODE BIT(0) #define PORT_CONF_DEV25G_MODES_DEV25G_D56_MODE_SET(x)\ @@ -4650,7 +5760,8 @@ enum sparx5_target { FIELD_GET(PORT_CONF_DEV25G_MODES_DEV25G_D63_MODE, x) /* PORT_CONF:HW_CFG:QSGMII_ENA */ -#define PORT_CONF_QSGMII_ENA __REG(TARGET_PORT_CONF, 0, 1, 0, 0, 1, 24, 12, 0, 1, 4) +#define PORT_CONF_QSGMII_ENA __REG(TARGET_PORT_CONF,\ + 0, 1, 0, 0, 1, 24, 12, 0, 1, 4) #define PORT_CONF_QSGMII_ENA_QSGMII_ENA_0 BIT(0) #define PORT_CONF_QSGMII_ENA_QSGMII_ENA_0_SET(x)\ @@ -4725,7 +5836,8 @@ enum sparx5_target { FIELD_GET(PORT_CONF_QSGMII_ENA_QSGMII_ENA_11, x) /* PORT_CONF:USGMII_CFG_STAT:USGMII_CFG */ -#define PORT_CONF_USGMII_CFG(g) __REG(TARGET_PORT_CONF, 0, 1, 72, g, 6, 8, 0, 0, 1, 4) +#define PORT_CONF_USGMII_CFG(g) __REG(TARGET_PORT_CONF,\ + 0, 1, 72, g, 6, 8, 0, 0, 1, 4) #define PORT_CONF_USGMII_CFG_BYPASS_SCRAM BIT(9) #define PORT_CONF_USGMII_CFG_BYPASS_SCRAM_SET(x)\ @@ -4770,7 +5882,8 @@ enum sparx5_target { FIELD_GET(PORT_CONF_USGMII_CFG_QUAD_MODE, x) /* DEVCPU_PTP:PTP_CFG:PTP_PIN_INTR */ -#define PTP_PTP_PIN_INTR __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 0, 0, 1, 4) +#define PTP_PTP_PIN_INTR __REG(TARGET_PTP,\ + 0, 1, 320, 0, 1, 16, 0, 0, 1, 4) #define PTP_PTP_PIN_INTR_INTR_PTP GENMASK(4, 0) #define PTP_PTP_PIN_INTR_INTR_PTP_SET(x)\ @@ -4779,7 +5892,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_PIN_INTR_INTR_PTP, x) /* DEVCPU_PTP:PTP_CFG:PTP_PIN_INTR_ENA */ -#define PTP_PTP_PIN_INTR_ENA __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 4, 0, 1, 4) +#define PTP_PTP_PIN_INTR_ENA __REG(TARGET_PTP,\ + 0, 1, 320, 0, 1, 16, 4, 0, 1, 4) #define PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA GENMASK(4, 0) #define PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA_SET(x)\ @@ -4788,7 +5902,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_PIN_INTR_ENA_INTR_PTP_ENA, x) /* DEVCPU_PTP:PTP_CFG:PTP_INTR_IDENT */ -#define PTP_PTP_INTR_IDENT __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 8, 0, 1, 4) +#define PTP_PTP_INTR_IDENT __REG(TARGET_PTP,\ + 0, 1, 320, 0, 1, 16, 8, 0, 1, 4) #define PTP_PTP_INTR_IDENT_INTR_PTP_IDENT GENMASK(4, 0) #define PTP_PTP_INTR_IDENT_INTR_PTP_IDENT_SET(x)\ @@ -4797,7 +5912,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_INTR_IDENT_INTR_PTP_IDENT, x) /* DEVCPU_PTP:PTP_CFG:PTP_DOM_CFG */ -#define PTP_PTP_DOM_CFG __REG(TARGET_PTP, 0, 1, 320, 0, 1, 16, 12, 0, 1, 4) +#define PTP_PTP_DOM_CFG __REG(TARGET_PTP,\ + 0, 1, 320, 0, 1, 16, 12, 0, 1, 4) #define PTP_PTP_DOM_CFG_PTP_ENA GENMASK(11, 9) #define PTP_PTP_DOM_CFG_PTP_ENA_SET(x)\ @@ -4824,10 +5940,12 @@ enum sparx5_target { FIELD_GET(PTP_PTP_DOM_CFG_PTP_CLKCFG_DIS, x) /* DEVCPU_PTP:PTP_TOD_DOMAINS:CLK_PER_CFG */ -#define PTP_CLK_PER_CFG(g, r) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 0, r, 2, 4) +#define PTP_CLK_PER_CFG(g, r) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 0, r, 2, 4) /* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_NSEC */ -#define PTP_PTP_CUR_NSEC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 8, 0, 1, 4) +#define PTP_PTP_CUR_NSEC(g) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 8, 0, 1, 4) #define PTP_PTP_CUR_NSEC_PTP_CUR_NSEC GENMASK(29, 0) #define PTP_PTP_CUR_NSEC_PTP_CUR_NSEC_SET(x)\ @@ -4836,7 +5954,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_CUR_NSEC_PTP_CUR_NSEC, x) /* 
DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_NSEC_FRAC */ -#define PTP_PTP_CUR_NSEC_FRAC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 12, 0, 1, 4) +#define PTP_PTP_CUR_NSEC_FRAC(g) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 12, 0, 1, 4) #define PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC GENMASK(7, 0) #define PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC_SET(x)\ @@ -4845,10 +5964,12 @@ enum sparx5_target { FIELD_GET(PTP_PTP_CUR_NSEC_FRAC_PTP_CUR_NSEC_FRAC, x) /* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_SEC_LSB */ -#define PTP_PTP_CUR_SEC_LSB(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 16, 0, 1, 4) +#define PTP_PTP_CUR_SEC_LSB(g) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 16, 0, 1, 4) /* DEVCPU_PTP:PTP_TOD_DOMAINS:PTP_CUR_SEC_MSB */ -#define PTP_PTP_CUR_SEC_MSB(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 20, 0, 1, 4) +#define PTP_PTP_CUR_SEC_MSB(g) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 20, 0, 1, 4) #define PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB GENMASK(15, 0) #define PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB_SET(x)\ @@ -4857,10 +5978,12 @@ enum sparx5_target { FIELD_GET(PTP_PTP_CUR_SEC_MSB_PTP_CUR_SEC_MSB, x) /* DEVCPU_PTP:PTP_TOD_DOMAINS:NTP_CUR_NSEC */ -#define PTP_NTP_CUR_NSEC(g) __REG(TARGET_PTP, 0, 1, 336, g, 3, 28, 24, 0, 1, 4) +#define PTP_NTP_CUR_NSEC(g) __REG(TARGET_PTP,\ + 0, 1, 336, g, 3, 28, 24, 0, 1, 4) /* DEVCPU_PTP:PTP_PINS:PTP_PIN_CFG */ -#define PTP_PTP_PIN_CFG(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 0, 0, 1, 4) +#define PTP_PTP_PIN_CFG(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 0, 0, 1, 4) #define PTP_PTP_PIN_CFG_PTP_PIN_ACTION GENMASK(28, 26) #define PTP_PTP_PIN_CFG_PTP_PIN_ACTION_SET(x)\ @@ -4917,7 +6040,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_PIN_CFG_PTP_PIN_OUTP_OFS, x) /* DEVCPU_PTP:PTP_PINS:PTP_TOD_SEC_MSB */ -#define PTP_PTP_TOD_SEC_MSB(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 4, 0, 1, 4) +#define PTP_PTP_TOD_SEC_MSB(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 4, 0, 1, 4) #define PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB GENMASK(15, 0) #define PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB_SET(x)\ @@ -4926,10 +6050,12 @@ enum sparx5_target { FIELD_GET(PTP_PTP_TOD_SEC_MSB_PTP_TOD_SEC_MSB, x) /* DEVCPU_PTP:PTP_PINS:PTP_TOD_SEC_LSB */ -#define PTP_PTP_TOD_SEC_LSB(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 8, 0, 1, 4) +#define PTP_PTP_TOD_SEC_LSB(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 8, 0, 1, 4) /* DEVCPU_PTP:PTP_PINS:PTP_TOD_NSEC */ -#define PTP_PTP_TOD_NSEC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 12, 0, 1, 4) +#define PTP_PTP_TOD_NSEC(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 12, 0, 1, 4) #define PTP_PTP_TOD_NSEC_PTP_TOD_NSEC GENMASK(29, 0) #define PTP_PTP_TOD_NSEC_PTP_TOD_NSEC_SET(x)\ @@ -4938,7 +6064,8 @@ enum sparx5_target { FIELD_GET(PTP_PTP_TOD_NSEC_PTP_TOD_NSEC, x) /* DEVCPU_PTP:PTP_PINS:PTP_TOD_NSEC_FRAC */ -#define PTP_PTP_TOD_NSEC_FRAC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 16, 0, 1, 4) +#define PTP_PTP_TOD_NSEC_FRAC(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 16, 0, 1, 4) #define PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC GENMASK(7, 0) #define PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC_SET(x)\ @@ -4947,10 +6074,12 @@ enum sparx5_target { FIELD_GET(PTP_PTP_TOD_NSEC_FRAC_PTP_TOD_NSEC_FRAC, x) /* DEVCPU_PTP:PTP_PINS:NTP_NSEC */ -#define PTP_NTP_NSEC(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 20, 0, 1, 4) +#define PTP_NTP_NSEC(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 20, 0, 1, 4) /* DEVCPU_PTP:PTP_PINS:PIN_WF_HIGH_PERIOD */ -#define PTP_PIN_WF_HIGH_PERIOD(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 24, 0, 1, 4) +#define PTP_PIN_WF_HIGH_PERIOD(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 24, 0, 1, 4) #define 
PTP_PIN_WF_HIGH_PERIOD_PIN_WFH GENMASK(29, 0) #define PTP_PIN_WF_HIGH_PERIOD_PIN_WFH_SET(x)\ @@ -4959,7 +6088,8 @@ enum sparx5_target { FIELD_GET(PTP_PIN_WF_HIGH_PERIOD_PIN_WFH, x) /* DEVCPU_PTP:PTP_PINS:PIN_WF_LOW_PERIOD */ -#define PTP_PIN_WF_LOW_PERIOD(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 28, 0, 1, 4) +#define PTP_PIN_WF_LOW_PERIOD(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 28, 0, 1, 4) #define PTP_PIN_WF_LOW_PERIOD_PIN_WFL GENMASK(29, 0) #define PTP_PIN_WF_LOW_PERIOD_PIN_WFL_SET(x)\ @@ -4968,7 +6098,8 @@ enum sparx5_target { FIELD_GET(PTP_PIN_WF_LOW_PERIOD_PIN_WFL, x) /* DEVCPU_PTP:PTP_PINS:PIN_IOBOUNCH_DELAY */ -#define PTP_PIN_IOBOUNCH_DELAY(g) __REG(TARGET_PTP, 0, 1, 0, g, 5, 64, 32, 0, 1, 4) +#define PTP_PIN_IOBOUNCH_DELAY(g) __REG(TARGET_PTP,\ + 0, 1, 0, g, 5, 64, 32, 0, 1, 4) #define PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_VAL GENMASK(18, 3) #define PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_VAL_SET(x)\ @@ -4983,7 +6114,8 @@ enum sparx5_target { FIELD_GET(PTP_PIN_IOBOUNCH_DELAY_PIN_IOBOUNCH_CFG, x) /* DEVCPU_PTP:PHASE_DETECTOR_CTRL:PHAD_CTRL */ -#define PTP_PHAD_CTRL(g) __REG(TARGET_PTP, 0, 1, 420, g, 5, 8, 0, 0, 1, 4) +#define PTP_PHAD_CTRL(g) __REG(TARGET_PTP,\ + 0, 1, 420, g, 5, 8, 0, 0, 1, 4) #define PTP_PHAD_CTRL_PHAD_ENA BIT(7) #define PTP_PHAD_CTRL_PHAD_ENA_SET(x)\ @@ -5010,10 +6142,12 @@ enum sparx5_target { FIELD_GET(PTP_PHAD_CTRL_LOCK_ACC, x) /* DEVCPU_PTP:PHASE_DETECTOR_CTRL:PHAD_CYC_STAT */ -#define PTP_PHAD_CYC_STAT(g) __REG(TARGET_PTP, 0, 1, 420, g, 5, 8, 4, 0, 1, 4) +#define PTP_PHAD_CYC_STAT(g) __REG(TARGET_PTP,\ + 0, 1, 420, g, 5, 8, 4, 0, 1, 4) /* QFWD:SYSTEM:SWITCH_PORT_MODE */ -#define QFWD_SWITCH_PORT_MODE(r) __REG(TARGET_QFWD, 0, 1, 0, 0, 1, 340, 0, r, 70, 4) +#define QFWD_SWITCH_PORT_MODE(r) __REG(TARGET_QFWD,\ + 0, 1, 0, 0, 1, 340, 0, r, 70, 4) #define QFWD_SWITCH_PORT_MODE_PORT_ENA BIT(19) #define QFWD_SWITCH_PORT_MODE_PORT_ENA_SET(x)\ @@ -5070,7 +6204,8 @@ enum sparx5_target { FIELD_GET(QFWD_SWITCH_PORT_MODE_LEARNALL_MORE, x) /* QRES:RES_CTRL:RES_CFG */ -#define QRES_RES_CFG(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 0, 0, 1, 4) +#define QRES_RES_CFG(g) __REG(TARGET_QRES,\ + 0, 1, 0, g, 5120, 16, 0, 0, 1, 4) #define QRES_RES_CFG_WM_HIGH GENMASK(11, 0) #define QRES_RES_CFG_WM_HIGH_SET(x)\ @@ -5079,7 +6214,8 @@ enum sparx5_target { FIELD_GET(QRES_RES_CFG_WM_HIGH, x) /* QRES:RES_CTRL:RES_STAT */ -#define QRES_RES_STAT(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 4, 0, 1, 4) +#define QRES_RES_STAT(g) __REG(TARGET_QRES,\ + 0, 1, 0, g, 5120, 16, 4, 0, 1, 4) #define QRES_RES_STAT_MAXUSE GENMASK(20, 0) #define QRES_RES_STAT_MAXUSE_SET(x)\ @@ -5088,7 +6224,8 @@ enum sparx5_target { FIELD_GET(QRES_RES_STAT_MAXUSE, x) /* QRES:RES_CTRL:RES_STAT_CUR */ -#define QRES_RES_STAT_CUR(g) __REG(TARGET_QRES, 0, 1, 0, g, 5120, 16, 8, 0, 1, 4) +#define QRES_RES_STAT_CUR(g) __REG(TARGET_QRES,\ + 0, 1, 0, g, 5120, 16, 8, 0, 1, 4) #define QRES_RES_STAT_CUR_INUSE GENMASK(20, 0) #define QRES_RES_STAT_CUR_INUSE_SET(x)\ @@ -5097,7 +6234,8 @@ enum sparx5_target { FIELD_GET(QRES_RES_STAT_CUR_INUSE, x) /* DEVCPU_QS:XTR:XTR_GRP_CFG */ -#define QS_XTR_GRP_CFG(r) __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 0, r, 2, 4) +#define QS_XTR_GRP_CFG(r) __REG(TARGET_QS,\ + 0, 1, 0, 0, 1, 36, 0, r, 2, 4) #define QS_XTR_GRP_CFG_MODE GENMASK(3, 2) #define QS_XTR_GRP_CFG_MODE_SET(x)\ @@ -5118,10 +6256,12 @@ enum sparx5_target { FIELD_GET(QS_XTR_GRP_CFG_BYTE_SWAP, x) /* DEVCPU_QS:XTR:XTR_RD */ -#define QS_XTR_RD(r) __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 8, r, 2, 4) +#define QS_XTR_RD(r) __REG(TARGET_QS,\ + 0, 1, 0, 0, 1, 
36, 8, r, 2, 4) /* DEVCPU_QS:XTR:XTR_FLUSH */ -#define QS_XTR_FLUSH __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 24, 0, 1, 4) +#define QS_XTR_FLUSH __REG(TARGET_QS,\ + 0, 1, 0, 0, 1, 36, 24, 0, 1, 4) #define QS_XTR_FLUSH_FLUSH GENMASK(1, 0) #define QS_XTR_FLUSH_FLUSH_SET(x)\ @@ -5130,7 +6270,8 @@ enum sparx5_target { FIELD_GET(QS_XTR_FLUSH_FLUSH, x) /* DEVCPU_QS:XTR:XTR_DATA_PRESENT */ -#define QS_XTR_DATA_PRESENT __REG(TARGET_QS, 0, 1, 0, 0, 1, 36, 28, 0, 1, 4) +#define QS_XTR_DATA_PRESENT __REG(TARGET_QS,\ + 0, 1, 0, 0, 1, 36, 28, 0, 1, 4) #define QS_XTR_DATA_PRESENT_DATA_PRESENT GENMASK(1, 0) #define QS_XTR_DATA_PRESENT_DATA_PRESENT_SET(x)\ @@ -5139,7 +6280,8 @@ enum sparx5_target { FIELD_GET(QS_XTR_DATA_PRESENT_DATA_PRESENT, x) /* DEVCPU_QS:INJ:INJ_GRP_CFG */ -#define QS_INJ_GRP_CFG(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 0, r, 2, 4) +#define QS_INJ_GRP_CFG(r) __REG(TARGET_QS,\ + 0, 1, 36, 0, 1, 40, 0, r, 2, 4) #define QS_INJ_GRP_CFG_MODE GENMASK(3, 2) #define QS_INJ_GRP_CFG_MODE_SET(x)\ @@ -5154,10 +6296,12 @@ enum sparx5_target { FIELD_GET(QS_INJ_GRP_CFG_BYTE_SWAP, x) /* DEVCPU_QS:INJ:INJ_WR */ -#define QS_INJ_WR(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 8, r, 2, 4) +#define QS_INJ_WR(r) __REG(TARGET_QS,\ + 0, 1, 36, 0, 1, 40, 8, r, 2, 4) /* DEVCPU_QS:INJ:INJ_CTRL */ -#define QS_INJ_CTRL(r) __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 16, r, 2, 4) +#define QS_INJ_CTRL(r) __REG(TARGET_QS,\ + 0, 1, 36, 0, 1, 40, 16, r, 2, 4) #define QS_INJ_CTRL_GAP_SIZE GENMASK(24, 21) #define QS_INJ_CTRL_GAP_SIZE_SET(x)\ @@ -5190,7 +6334,8 @@ enum sparx5_target { FIELD_GET(QS_INJ_CTRL_VLD_BYTES, x) /* DEVCPU_QS:INJ:INJ_STATUS */ -#define QS_INJ_STATUS __REG(TARGET_QS, 0, 1, 36, 0, 1, 40, 24, 0, 1, 4) +#define QS_INJ_STATUS __REG(TARGET_QS,\ + 0, 1, 36, 0, 1, 40, 24, 0, 1, 4) #define QS_INJ_STATUS_WMARK_REACHED GENMASK(5, 4) #define QS_INJ_STATUS_WMARK_REACHED_SET(x)\ @@ -5211,7 +6356,8 @@ enum sparx5_target { FIELD_GET(QS_INJ_STATUS_INJ_IN_PROGRESS, x) /* QSYS:PAUSE_CFG:PAUSE_CFG */ -#define QSYS_PAUSE_CFG(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 0, r, 70, 4) +#define QSYS_PAUSE_CFG(r) __REG(TARGET_QSYS,\ + 0, 1, 544, 0, 1, 1128, 0, r, 70, 4) #define QSYS_PAUSE_CFG_PAUSE_START GENMASK(25, 14) #define QSYS_PAUSE_CFG_PAUSE_START_SET(x)\ @@ -5238,7 +6384,8 @@ enum sparx5_target { FIELD_GET(QSYS_PAUSE_CFG_AGGRESSIVE_TAILDROP_ENA, x) /* QSYS:PAUSE_CFG:ATOP */ -#define QSYS_ATOP(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 284, r, 70, 4) +#define QSYS_ATOP(r) __REG(TARGET_QSYS,\ + 0, 1, 544, 0, 1, 1128, 284, r, 70, 4) #define QSYS_ATOP_ATOP GENMASK(11, 0) #define QSYS_ATOP_ATOP_SET(x)\ @@ -5247,7 +6394,8 @@ enum sparx5_target { FIELD_GET(QSYS_ATOP_ATOP, x) /* QSYS:PAUSE_CFG:FWD_PRESSURE */ -#define QSYS_FWD_PRESSURE(r) __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 564, r, 70, 4) +#define QSYS_FWD_PRESSURE(r) __REG(TARGET_QSYS,\ + 0, 1, 544, 0, 1, 1128, 564, r, 70, 4) #define QSYS_FWD_PRESSURE_FWD_PRESSURE GENMASK(11, 1) #define QSYS_FWD_PRESSURE_FWD_PRESSURE_SET(x)\ @@ -5262,7 +6410,8 @@ enum sparx5_target { FIELD_GET(QSYS_FWD_PRESSURE_FWD_PRESSURE_DIS, x) /* QSYS:PAUSE_CFG:ATOP_TOT_CFG */ -#define QSYS_ATOP_TOT_CFG __REG(TARGET_QSYS, 0, 1, 544, 0, 1, 1128, 844, 0, 1, 4) +#define QSYS_ATOP_TOT_CFG __REG(TARGET_QSYS,\ + 0, 1, 544, 0, 1, 1128, 844, 0, 1, 4) #define QSYS_ATOP_TOT_CFG_ATOP_TOT GENMASK(11, 0) #define QSYS_ATOP_TOT_CFG_ATOP_TOT_SET(x)\ @@ -5271,7 +6420,8 @@ enum sparx5_target { FIELD_GET(QSYS_ATOP_TOT_CFG_ATOP_TOT, x) /* QSYS:CALCFG:CAL_AUTO */ -#define QSYS_CAL_AUTO(r) __REG(TARGET_QSYS, 0, 1, 2304, 0, 1, 40, 0, 
r, 7, 4) +#define QSYS_CAL_AUTO(r) __REG(TARGET_QSYS,\ + 0, 1, 2304, 0, 1, 40, 0, r, 7, 4) #define QSYS_CAL_AUTO_CAL_AUTO GENMASK(29, 0) #define QSYS_CAL_AUTO_CAL_AUTO_SET(x)\ @@ -5280,7 +6430,8 @@ enum sparx5_target { FIELD_GET(QSYS_CAL_AUTO_CAL_AUTO, x) /* QSYS:CALCFG:CAL_CTRL */ -#define QSYS_CAL_CTRL __REG(TARGET_QSYS, 0, 1, 2304, 0, 1, 40, 36, 0, 1, 4) +#define QSYS_CAL_CTRL __REG(TARGET_QSYS,\ + 0, 1, 2304, 0, 1, 40, 36, 0, 1, 4) #define QSYS_CAL_CTRL_CAL_MODE GENMASK(14, 11) #define QSYS_CAL_CTRL_CAL_MODE_SET(x)\ @@ -5301,7 +6452,8 @@ enum sparx5_target { FIELD_GET(QSYS_CAL_CTRL_CAL_AUTO_ERROR, x) /* QSYS:RAM_CTRL:RAM_INIT */ -#define QSYS_RAM_INIT __REG(TARGET_QSYS, 0, 1, 2344, 0, 1, 4, 0, 0, 1, 4) +#define QSYS_RAM_INIT __REG(TARGET_QSYS,\ + 0, 1, 2344, 0, 1, 4, 0, 0, 1, 4) #define QSYS_RAM_INIT_RAM_INIT BIT(1) #define QSYS_RAM_INIT_RAM_INIT_SET(x)\ @@ -5316,7 +6468,8 @@ enum sparx5_target { FIELD_GET(QSYS_RAM_INIT_RAM_CFG_HOOK, x) /* REW:COMMON:OWN_UPSID */ -#define REW_OWN_UPSID(r) __REG(TARGET_REW, 0, 1, 387264, 0, 1, 1232, 0, r, 3, 4) +#define REW_OWN_UPSID(r) __REG(TARGET_REW,\ + 0, 1, 387264, 0, 1, 1232, 0, r, 3, 4) #define REW_OWN_UPSID_OWN_UPSID GENMASK(4, 0) #define REW_OWN_UPSID_OWN_UPSID_SET(x)\ @@ -5324,8 +6477,71 @@ enum sparx5_target { #define REW_OWN_UPSID_OWN_UPSID_GET(x)\ FIELD_GET(REW_OWN_UPSID_OWN_UPSID, x) +/* REW:COMMON:RTAG_ETAG_CTRL */ +#define REW_RTAG_ETAG_CTRL(r) __REG(TARGET_REW,\ + 0, 1, 387264, 0, 1, 1232, 560, r, 70, 4) + +#define REW_RTAG_ETAG_CTRL_IPE_TBL GENMASK(9, 3) +#define REW_RTAG_ETAG_CTRL_IPE_TBL_SET(x)\ + FIELD_PREP(REW_RTAG_ETAG_CTRL_IPE_TBL, x) +#define REW_RTAG_ETAG_CTRL_IPE_TBL_GET(x)\ + FIELD_GET(REW_RTAG_ETAG_CTRL_IPE_TBL, x) + +#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA GENMASK(2, 1) +#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_SET(x)\ + FIELD_PREP(REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA, x) +#define REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(x)\ + FIELD_GET(REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA, x) + +#define REW_RTAG_ETAG_CTRL_KEEP_ETAG BIT(0) +#define REW_RTAG_ETAG_CTRL_KEEP_ETAG_SET(x)\ + FIELD_PREP(REW_RTAG_ETAG_CTRL_KEEP_ETAG, x) +#define REW_RTAG_ETAG_CTRL_KEEP_ETAG_GET(x)\ + FIELD_GET(REW_RTAG_ETAG_CTRL_KEEP_ETAG, x) + +/* REW:COMMON:ES0_CTRL */ +#define REW_ES0_CTRL __REG(TARGET_REW,\ + 0, 1, 387264, 0, 1, 1232, 852, 0, 1, 4) + +#define REW_ES0_CTRL_ES0_BY_RT_FWD BIT(5) +#define REW_ES0_CTRL_ES0_BY_RT_FWD_SET(x)\ + FIELD_PREP(REW_ES0_CTRL_ES0_BY_RT_FWD, x) +#define REW_ES0_CTRL_ES0_BY_RT_FWD_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_BY_RT_FWD, x) + +#define REW_ES0_CTRL_ES0_BY_RLEG BIT(4) +#define REW_ES0_CTRL_ES0_BY_RLEG_SET(x)\ + FIELD_PREP(REW_ES0_CTRL_ES0_BY_RLEG, x) +#define REW_ES0_CTRL_ES0_BY_RLEG_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_BY_RLEG, x) + +#define REW_ES0_CTRL_ES0_DPORT_ENA BIT(3) +#define REW_ES0_CTRL_ES0_DPORT_ENA_SET(x)\ + FIELD_PREP(REW_ES0_CTRL_ES0_DPORT_ENA, x) +#define REW_ES0_CTRL_ES0_DPORT_ENA_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_DPORT_ENA, x) + +#define REW_ES0_CTRL_ES0_FRM_LBK_CFG BIT(2) +#define REW_ES0_CTRL_ES0_FRM_LBK_CFG_SET(x)\ + FIELD_PREP(REW_ES0_CTRL_ES0_FRM_LBK_CFG, x) +#define REW_ES0_CTRL_ES0_FRM_LBK_CFG_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_FRM_LBK_CFG, x) + +#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA BIT(1) +#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA_SET(x)\ + FIELD_PREP(REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA, x) +#define REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_VD2_ENCAP_ID_ENA, x) + +#define REW_ES0_CTRL_ES0_LU_ENA BIT(0) +#define REW_ES0_CTRL_ES0_LU_ENA_SET(x)\ + 
FIELD_PREP(REW_ES0_CTRL_ES0_LU_ENA, x) +#define REW_ES0_CTRL_ES0_LU_ENA_GET(x)\ + FIELD_GET(REW_ES0_CTRL_ES0_LU_ENA, x) + /* REW:PORT:PORT_VLAN_CFG */ -#define REW_PORT_VLAN_CFG(g) __REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 0, 0, 1, 4) +#define REW_PORT_VLAN_CFG(g) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 0, 0, 1, 4) #define REW_PORT_VLAN_CFG_PORT_PCP GENMASK(15, 13) #define REW_PORT_VLAN_CFG_PORT_PCP_SET(x)\ @@ -5345,8 +6561,49 @@ enum sparx5_target { #define REW_PORT_VLAN_CFG_PORT_VID_GET(x)\ FIELD_GET(REW_PORT_VLAN_CFG_PORT_VID, x) +/* REW:PORT:PCP_MAP_DE0 */ +#define REW_PCP_MAP_DE0(g, r) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 4, r, 8, 4) + +#define REW_PCP_MAP_DE0_PCP_DE0 GENMASK(2, 0) +#define REW_PCP_MAP_DE0_PCP_DE0_SET(x)\ + FIELD_PREP(REW_PCP_MAP_DE0_PCP_DE0, x) +#define REW_PCP_MAP_DE0_PCP_DE0_GET(x)\ + FIELD_GET(REW_PCP_MAP_DE0_PCP_DE0, x) + +/* REW:PORT:PCP_MAP_DE1 */ +#define REW_PCP_MAP_DE1(g, r) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 36, r, 8, 4) + +#define REW_PCP_MAP_DE1_PCP_DE1 GENMASK(2, 0) +#define REW_PCP_MAP_DE1_PCP_DE1_SET(x)\ + FIELD_PREP(REW_PCP_MAP_DE1_PCP_DE1, x) +#define REW_PCP_MAP_DE1_PCP_DE1_GET(x)\ + FIELD_GET(REW_PCP_MAP_DE1_PCP_DE1, x) + +/* REW:PORT:DEI_MAP_DE0 */ +#define REW_DEI_MAP_DE0(g, r) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 68, r, 8, 4) + +#define REW_DEI_MAP_DE0_DEI_DE0 BIT(0) +#define REW_DEI_MAP_DE0_DEI_DE0_SET(x)\ + FIELD_PREP(REW_DEI_MAP_DE0_DEI_DE0, x) +#define REW_DEI_MAP_DE0_DEI_DE0_GET(x)\ + FIELD_GET(REW_DEI_MAP_DE0_DEI_DE0, x) + +/* REW:PORT:DEI_MAP_DE1 */ +#define REW_DEI_MAP_DE1(g, r) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 100, r, 8, 4) + +#define REW_DEI_MAP_DE1_DEI_DE1 BIT(0) +#define REW_DEI_MAP_DE1_DEI_DE1_SET(x)\ + FIELD_PREP(REW_DEI_MAP_DE1_DEI_DE1, x) +#define REW_DEI_MAP_DE1_DEI_DE1_GET(x)\ + FIELD_GET(REW_DEI_MAP_DE1_DEI_DE1, x) + /* REW:PORT:TAG_CTRL */ -#define REW_TAG_CTRL(g) __REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 132, 0, 1, 4) +#define REW_TAG_CTRL(g) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 132, 0, 1, 4) #define REW_TAG_CTRL_TAG_CFG_OBEY_WAS_TAGGED BIT(13) #define REW_TAG_CTRL_TAG_CFG_OBEY_WAS_TAGGED_SET(x)\ @@ -5384,8 +6641,25 @@ enum sparx5_target { #define REW_TAG_CTRL_TAG_DEI_CFG_GET(x)\ FIELD_GET(REW_TAG_CTRL_TAG_DEI_CFG, x) +/* REW:PORT:DSCP_MAP */ +#define REW_DSCP_MAP(g) __REG(TARGET_REW,\ + 0, 1, 360448, g, 70, 256, 136, 0, 1, 4) + +#define REW_DSCP_MAP_DSCP_UPDATE_ENA BIT(1) +#define REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(x)\ + FIELD_PREP(REW_DSCP_MAP_DSCP_UPDATE_ENA, x) +#define REW_DSCP_MAP_DSCP_UPDATE_ENA_GET(x)\ + FIELD_GET(REW_DSCP_MAP_DSCP_UPDATE_ENA, x) + +#define REW_DSCP_MAP_DSCP_REMAP_ENA BIT(0) +#define REW_DSCP_MAP_DSCP_REMAP_ENA_SET(x)\ + FIELD_PREP(REW_DSCP_MAP_DSCP_REMAP_ENA, x) +#define REW_DSCP_MAP_DSCP_REMAP_ENA_GET(x)\ + FIELD_GET(REW_DSCP_MAP_DSCP_REMAP_ENA, x) + /* REW:PTP_CTRL:PTP_TWOSTEP_CTRL */ -#define REW_PTP_TWOSTEP_CTRL __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 0, 0, 1, 4) +#define REW_PTP_TWOSTEP_CTRL __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 0, 0, 1, 4) #define REW_PTP_TWOSTEP_CTRL_PTP_OVWR_ENA BIT(12) #define REW_PTP_TWOSTEP_CTRL_PTP_OVWR_ENA_SET(x)\ @@ -5424,7 +6698,8 @@ enum sparx5_target { FIELD_GET(REW_PTP_TWOSTEP_CTRL_PTP_OVFL, x) /* REW:PTP_CTRL:PTP_TWOSTEP_STAMP */ -#define REW_PTP_TWOSTEP_STAMP __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 4, 0, 1, 4) +#define REW_PTP_TWOSTEP_STAMP __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 4, 0, 1, 4) #define REW_PTP_TWOSTEP_STAMP_STAMP_NSEC GENMASK(29, 0) #define 
REW_PTP_TWOSTEP_STAMP_STAMP_NSEC_SET(x)\ @@ -5433,7 +6708,8 @@ enum sparx5_target { FIELD_GET(REW_PTP_TWOSTEP_STAMP_STAMP_NSEC, x) /* REW:PTP_CTRL:PTP_TWOSTEP_STAMP_SUBNS */ -#define REW_PTP_TWOSTEP_STAMP_SUBNS __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 8, 0, 1, 4) +#define REW_PTP_TWOSTEP_STAMP_SUBNS __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 8, 0, 1, 4) #define REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC GENMASK(7, 0) #define REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC_SET(x)\ @@ -5442,13 +6718,16 @@ enum sparx5_target { FIELD_GET(REW_PTP_TWOSTEP_STAMP_SUBNS_STAMP_SUB_NSEC, x) /* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO */ -#define REW_PTP_RSRV_NOT_ZERO __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 12, 0, 1, 4) +#define REW_PTP_RSRV_NOT_ZERO __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 12, 0, 1, 4) /* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO1 */ -#define REW_PTP_RSRV_NOT_ZERO1 __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 16, 0, 1, 4) +#define REW_PTP_RSRV_NOT_ZERO1 __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 16, 0, 1, 4) /* REW:PTP_CTRL:PTP_RSRV_NOT_ZERO2 */ -#define REW_PTP_RSRV_NOT_ZERO2 __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 20, 0, 1, 4) +#define REW_PTP_RSRV_NOT_ZERO2 __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 20, 0, 1, 4) #define REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2 GENMASK(5, 0) #define REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2_SET(x)\ @@ -5457,7 +6736,8 @@ enum sparx5_target { FIELD_GET(REW_PTP_RSRV_NOT_ZERO2_PTP_RSRV_NOT_ZERO2, x) /* REW:PTP_CTRL:PTP_GEN_STAMP_FMT */ -#define REW_PTP_GEN_STAMP_FMT(r) __REG(TARGET_REW, 0, 1, 378368, 0, 1, 40, 24, r, 4, 4) +#define REW_PTP_GEN_STAMP_FMT(r) __REG(TARGET_REW,\ + 0, 1, 378368, 0, 1, 40, 24, r, 4, 4) #define REW_PTP_GEN_STAMP_FMT_RT_OFS GENMASK(6, 2) #define REW_PTP_GEN_STAMP_FMT_RT_OFS_SET(x)\ @@ -5472,7 +6752,8 @@ enum sparx5_target { FIELD_GET(REW_PTP_GEN_STAMP_FMT_RT_FMT, x) /* REW:RAM_CTRL:RAM_INIT */ -#define REW_RAM_INIT __REG(TARGET_REW, 0, 1, 378696, 0, 1, 4, 0, 0, 1, 4) +#define REW_RAM_INIT __REG(TARGET_REW,\ + 0, 1, 378696, 0, 1, 4, 0, 0, 1, 4) #define REW_RAM_INIT_RAM_INIT BIT(1) #define REW_RAM_INIT_RAM_INIT_SET(x)\ @@ -5486,8 +6767,333 @@ enum sparx5_target { #define REW_RAM_INIT_RAM_CFG_HOOK_GET(x)\ FIELD_GET(REW_RAM_INIT_RAM_CFG_HOOK, x) +/* VCAP_ES0:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */ +#define VCAP_ES0_CTRL __REG(TARGET_VCAP_ES0,\ + 0, 1, 0, 0, 1, 8, 0, 0, 1, 4) + +#define VCAP_ES0_CTRL_UPDATE_CMD GENMASK(24, 22) +#define VCAP_ES0_CTRL_UPDATE_CMD_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_CMD, x) +#define VCAP_ES0_CTRL_UPDATE_CMD_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_CMD, x) + +#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS BIT(21) +#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ENTRY_DIS, x) +#define VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_ENTRY_DIS, x) + +#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS BIT(20) +#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ACTION_DIS, x) +#define VCAP_ES0_CTRL_UPDATE_ACTION_DIS_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_ACTION_DIS, x) + +#define VCAP_ES0_CTRL_UPDATE_CNT_DIS BIT(19) +#define VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_CNT_DIS, x) +#define VCAP_ES0_CTRL_UPDATE_CNT_DIS_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_CNT_DIS, x) + +#define VCAP_ES0_CTRL_UPDATE_ADDR GENMASK(18, 3) +#define VCAP_ES0_CTRL_UPDATE_ADDR_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_ADDR, x) +#define VCAP_ES0_CTRL_UPDATE_ADDR_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_ADDR, x) + +#define VCAP_ES0_CTRL_UPDATE_SHOT 
BIT(2) +#define VCAP_ES0_CTRL_UPDATE_SHOT_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_UPDATE_SHOT, x) +#define VCAP_ES0_CTRL_UPDATE_SHOT_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_UPDATE_SHOT, x) + +#define VCAP_ES0_CTRL_CLEAR_CACHE BIT(1) +#define VCAP_ES0_CTRL_CLEAR_CACHE_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_CLEAR_CACHE, x) +#define VCAP_ES0_CTRL_CLEAR_CACHE_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_CLEAR_CACHE, x) + +#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN BIT(0) +#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN_SET(x)\ + FIELD_PREP(VCAP_ES0_CTRL_MV_TRAFFIC_IGN, x) +#define VCAP_ES0_CTRL_MV_TRAFFIC_IGN_GET(x)\ + FIELD_GET(VCAP_ES0_CTRL_MV_TRAFFIC_IGN, x) + +/* VCAP_ES0:VCAP_CORE_CFG:VCAP_MV_CFG */ +#define VCAP_ES0_CFG __REG(TARGET_VCAP_ES0,\ + 0, 1, 0, 0, 1, 8, 4, 0, 1, 4) + +#define VCAP_ES0_CFG_MV_NUM_POS GENMASK(31, 16) +#define VCAP_ES0_CFG_MV_NUM_POS_SET(x)\ + FIELD_PREP(VCAP_ES0_CFG_MV_NUM_POS, x) +#define VCAP_ES0_CFG_MV_NUM_POS_GET(x)\ + FIELD_GET(VCAP_ES0_CFG_MV_NUM_POS, x) + +#define VCAP_ES0_CFG_MV_SIZE GENMASK(15, 0) +#define VCAP_ES0_CFG_MV_SIZE_SET(x)\ + FIELD_PREP(VCAP_ES0_CFG_MV_SIZE, x) +#define VCAP_ES0_CFG_MV_SIZE_GET(x)\ + FIELD_GET(VCAP_ES0_CFG_MV_SIZE, x) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */ +#define VCAP_ES0_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 0, r, 64, 4) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_MASK_DAT */ +#define VCAP_ES0_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 256, r, 64, 4) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_ACTION_DAT */ +#define VCAP_ES0_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 512, r, 64, 4) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_CNT_DAT */ +#define VCAP_ES0_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 768, r, 32, 4) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */ +#define VCAP_ES0_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 896, 0, 1, 4) + +/* VCAP_ES0:VCAP_CORE_CACHE:VCAP_TG_DAT */ +#define VCAP_ES0_VCAP_TG_DAT __REG(TARGET_VCAP_ES0,\ + 0, 1, 8, 0, 1, 904, 900, 0, 1, 4) + +/* VCAP_ES0:VCAP_CORE_MAP:VCAP_CORE_IDX */ +#define VCAP_ES0_IDX __REG(TARGET_VCAP_ES0,\ + 0, 1, 912, 0, 1, 8, 0, 0, 1, 4) + +#define VCAP_ES0_IDX_CORE_IDX GENMASK(3, 0) +#define VCAP_ES0_IDX_CORE_IDX_SET(x)\ + FIELD_PREP(VCAP_ES0_IDX_CORE_IDX, x) +#define VCAP_ES0_IDX_CORE_IDX_GET(x)\ + FIELD_GET(VCAP_ES0_IDX_CORE_IDX, x) + +/* VCAP_ES0:VCAP_CORE_MAP:VCAP_CORE_MAP */ +#define VCAP_ES0_MAP __REG(TARGET_VCAP_ES0,\ + 0, 1, 912, 0, 1, 8, 4, 0, 1, 4) + +#define VCAP_ES0_MAP_CORE_MAP GENMASK(2, 0) +#define VCAP_ES0_MAP_CORE_MAP_SET(x)\ + FIELD_PREP(VCAP_ES0_MAP_CORE_MAP, x) +#define VCAP_ES0_MAP_CORE_MAP_GET(x)\ + FIELD_GET(VCAP_ES0_MAP_CORE_MAP, x) + +/* VCAP_ES0:VCAP_CORE_STICKY:VCAP_STICKY */ +#define VCAP_ES0_VCAP_STICKY __REG(TARGET_VCAP_ES0,\ + 0, 1, 920, 0, 1, 4, 0, 0, 1, 4) + +#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY BIT(0) +#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_SET(x)\ + FIELD_PREP(VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x) +#define VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_GET(x)\ + FIELD_GET(VCAP_ES0_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x) + +/* VCAP_ES0:VCAP_CONST:VCAP_VER */ +#define VCAP_ES0_VCAP_VER __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 0, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ENTRY_WIDTH */ +#define VCAP_ES0_ENTRY_WIDTH __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 4, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ENTRY_CNT */ +#define VCAP_ES0_ENTRY_CNT __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 8, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ENTRY_SWCNT */ +#define 
VCAP_ES0_ENTRY_SWCNT __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 12, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ENTRY_TG_WIDTH */ +#define VCAP_ES0_ENTRY_TG_WIDTH __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 16, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ACTION_DEF_CNT */ +#define VCAP_ES0_ACTION_DEF_CNT __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 20, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:ACTION_WIDTH */ +#define VCAP_ES0_ACTION_WIDTH __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 24, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:CNT_WIDTH */ +#define VCAP_ES0_CNT_WIDTH __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 28, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:CORE_CNT */ +#define VCAP_ES0_CORE_CNT __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 32, 0, 1, 4) + +/* VCAP_ES0:VCAP_CONST:IF_CNT */ +#define VCAP_ES0_IF_CNT __REG(TARGET_VCAP_ES0,\ + 0, 1, 924, 0, 1, 40, 36, 0, 1, 4) + +/* VCAP_ES2:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */ +#define VCAP_ES2_CTRL __REG(TARGET_VCAP_ES2,\ + 0, 1, 0, 0, 1, 8, 0, 0, 1, 4) + +#define VCAP_ES2_CTRL_UPDATE_CMD GENMASK(24, 22) +#define VCAP_ES2_CTRL_UPDATE_CMD_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_CMD, x) +#define VCAP_ES2_CTRL_UPDATE_CMD_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_CMD, x) + +#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS BIT(21) +#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ENTRY_DIS, x) +#define VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_ENTRY_DIS, x) + +#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS BIT(20) +#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ACTION_DIS, x) +#define VCAP_ES2_CTRL_UPDATE_ACTION_DIS_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_ACTION_DIS, x) + +#define VCAP_ES2_CTRL_UPDATE_CNT_DIS BIT(19) +#define VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_CNT_DIS, x) +#define VCAP_ES2_CTRL_UPDATE_CNT_DIS_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_CNT_DIS, x) + +#define VCAP_ES2_CTRL_UPDATE_ADDR GENMASK(18, 3) +#define VCAP_ES2_CTRL_UPDATE_ADDR_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_ADDR, x) +#define VCAP_ES2_CTRL_UPDATE_ADDR_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_ADDR, x) + +#define VCAP_ES2_CTRL_UPDATE_SHOT BIT(2) +#define VCAP_ES2_CTRL_UPDATE_SHOT_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_UPDATE_SHOT, x) +#define VCAP_ES2_CTRL_UPDATE_SHOT_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_UPDATE_SHOT, x) + +#define VCAP_ES2_CTRL_CLEAR_CACHE BIT(1) +#define VCAP_ES2_CTRL_CLEAR_CACHE_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_CLEAR_CACHE, x) +#define VCAP_ES2_CTRL_CLEAR_CACHE_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_CLEAR_CACHE, x) + +#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN BIT(0) +#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN_SET(x)\ + FIELD_PREP(VCAP_ES2_CTRL_MV_TRAFFIC_IGN, x) +#define VCAP_ES2_CTRL_MV_TRAFFIC_IGN_GET(x)\ + FIELD_GET(VCAP_ES2_CTRL_MV_TRAFFIC_IGN, x) + +/* VCAP_ES2:VCAP_CORE_CFG:VCAP_MV_CFG */ +#define VCAP_ES2_CFG __REG(TARGET_VCAP_ES2,\ + 0, 1, 0, 0, 1, 8, 4, 0, 1, 4) + +#define VCAP_ES2_CFG_MV_NUM_POS GENMASK(31, 16) +#define VCAP_ES2_CFG_MV_NUM_POS_SET(x)\ + FIELD_PREP(VCAP_ES2_CFG_MV_NUM_POS, x) +#define VCAP_ES2_CFG_MV_NUM_POS_GET(x)\ + FIELD_GET(VCAP_ES2_CFG_MV_NUM_POS, x) + +#define VCAP_ES2_CFG_MV_SIZE GENMASK(15, 0) +#define VCAP_ES2_CFG_MV_SIZE_SET(x)\ + FIELD_PREP(VCAP_ES2_CFG_MV_SIZE, x) +#define VCAP_ES2_CFG_MV_SIZE_GET(x)\ + FIELD_GET(VCAP_ES2_CFG_MV_SIZE, x) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */ +#define VCAP_ES2_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 0, r, 64, 4) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_MASK_DAT */ +#define 
VCAP_ES2_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 256, r, 64, 4) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_ACTION_DAT */ +#define VCAP_ES2_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 512, r, 64, 4) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_CNT_DAT */ +#define VCAP_ES2_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 768, r, 32, 4) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */ +#define VCAP_ES2_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 896, 0, 1, 4) + +/* VCAP_ES2:VCAP_CORE_CACHE:VCAP_TG_DAT */ +#define VCAP_ES2_VCAP_TG_DAT __REG(TARGET_VCAP_ES2,\ + 0, 1, 8, 0, 1, 904, 900, 0, 1, 4) + +/* VCAP_ES2:VCAP_CORE_MAP:VCAP_CORE_IDX */ +#define VCAP_ES2_IDX __REG(TARGET_VCAP_ES2,\ + 0, 1, 912, 0, 1, 8, 0, 0, 1, 4) + +#define VCAP_ES2_IDX_CORE_IDX GENMASK(3, 0) +#define VCAP_ES2_IDX_CORE_IDX_SET(x)\ + FIELD_PREP(VCAP_ES2_IDX_CORE_IDX, x) +#define VCAP_ES2_IDX_CORE_IDX_GET(x)\ + FIELD_GET(VCAP_ES2_IDX_CORE_IDX, x) + +/* VCAP_ES2:VCAP_CORE_MAP:VCAP_CORE_MAP */ +#define VCAP_ES2_MAP __REG(TARGET_VCAP_ES2,\ + 0, 1, 912, 0, 1, 8, 4, 0, 1, 4) + +#define VCAP_ES2_MAP_CORE_MAP GENMASK(2, 0) +#define VCAP_ES2_MAP_CORE_MAP_SET(x)\ + FIELD_PREP(VCAP_ES2_MAP_CORE_MAP, x) +#define VCAP_ES2_MAP_CORE_MAP_GET(x)\ + FIELD_GET(VCAP_ES2_MAP_CORE_MAP, x) + +/* VCAP_ES2:VCAP_CORE_STICKY:VCAP_STICKY */ +#define VCAP_ES2_VCAP_STICKY __REG(TARGET_VCAP_ES2,\ + 0, 1, 920, 0, 1, 4, 0, 0, 1, 4) + +#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY BIT(0) +#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_SET(x)\ + FIELD_PREP(VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x) +#define VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY_GET(x)\ + FIELD_GET(VCAP_ES2_VCAP_STICKY_VCAP_ROW_DELETED_STICKY, x) + +/* VCAP_ES2:VCAP_CONST:VCAP_VER */ +#define VCAP_ES2_VCAP_VER __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 0, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ENTRY_WIDTH */ +#define VCAP_ES2_ENTRY_WIDTH __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 4, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ENTRY_CNT */ +#define VCAP_ES2_ENTRY_CNT __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 8, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ENTRY_SWCNT */ +#define VCAP_ES2_ENTRY_SWCNT __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 12, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ENTRY_TG_WIDTH */ +#define VCAP_ES2_ENTRY_TG_WIDTH __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 16, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ACTION_DEF_CNT */ +#define VCAP_ES2_ACTION_DEF_CNT __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 20, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:ACTION_WIDTH */ +#define VCAP_ES2_ACTION_WIDTH __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 24, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:CNT_WIDTH */ +#define VCAP_ES2_CNT_WIDTH __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 28, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:CORE_CNT */ +#define VCAP_ES2_CORE_CNT __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 32, 0, 1, 4) + +/* VCAP_ES2:VCAP_CONST:IF_CNT */ +#define VCAP_ES2_IF_CNT __REG(TARGET_VCAP_ES2,\ + 0, 1, 924, 0, 1, 40, 36, 0, 1, 4) + /* VCAP_SUPER:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */ -#define VCAP_SUPER_CTRL __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 0, 0, 1, 4) +#define VCAP_SUPER_CTRL __REG(TARGET_VCAP_SUPER,\ + 0, 1, 0, 0, 1, 8, 0, 0, 1, 4) #define VCAP_SUPER_CTRL_UPDATE_CMD GENMASK(24, 22) #define VCAP_SUPER_CTRL_UPDATE_CMD_SET(x)\ @@ -5538,7 +7144,8 @@ enum sparx5_target { FIELD_GET(VCAP_SUPER_CTRL_MV_TRAFFIC_IGN, x) /* VCAP_SUPER:VCAP_CORE_CFG:VCAP_MV_CFG */ -#define VCAP_SUPER_CFG __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 
4, 0, 1, 4) +#define VCAP_SUPER_CFG __REG(TARGET_VCAP_SUPER,\ + 0, 1, 0, 0, 1, 8, 4, 0, 1, 4) #define VCAP_SUPER_CFG_MV_NUM_POS GENMASK(31, 16) #define VCAP_SUPER_CFG_MV_NUM_POS_SET(x)\ @@ -5553,25 +7160,32 @@ enum sparx5_target { FIELD_GET(VCAP_SUPER_CFG_MV_SIZE, x) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */ -#define VCAP_SUPER_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 0, r, 64, 4) +#define VCAP_SUPER_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 0, r, 64, 4) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_MASK_DAT */ -#define VCAP_SUPER_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 256, r, 64, 4) +#define VCAP_SUPER_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 256, r, 64, 4) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ACTION_DAT */ -#define VCAP_SUPER_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 512, r, 64, 4) +#define VCAP_SUPER_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 512, r, 64, 4) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_DAT */ -#define VCAP_SUPER_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 768, r, 32, 4) +#define VCAP_SUPER_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 768, r, 32, 4) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */ -#define VCAP_SUPER_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 896, 0, 1, 4) +#define VCAP_SUPER_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 896, 0, 1, 4) /* VCAP_SUPER:VCAP_CORE_CACHE:VCAP_TG_DAT */ -#define VCAP_SUPER_VCAP_TG_DAT __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 900, 0, 1, 4) +#define VCAP_SUPER_VCAP_TG_DAT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 8, 0, 1, 904, 900, 0, 1, 4) /* VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_IDX */ -#define VCAP_SUPER_IDX __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 0, 0, 1, 4) +#define VCAP_SUPER_IDX __REG(TARGET_VCAP_SUPER,\ + 0, 1, 912, 0, 1, 8, 0, 0, 1, 4) #define VCAP_SUPER_IDX_CORE_IDX GENMASK(3, 0) #define VCAP_SUPER_IDX_CORE_IDX_SET(x)\ @@ -5580,7 +7194,8 @@ enum sparx5_target { FIELD_GET(VCAP_SUPER_IDX_CORE_IDX, x) /* VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_MAP */ -#define VCAP_SUPER_MAP __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 4, 0, 1, 4) +#define VCAP_SUPER_MAP __REG(TARGET_VCAP_SUPER,\ + 0, 1, 912, 0, 1, 8, 4, 0, 1, 4) #define VCAP_SUPER_MAP_CORE_MAP GENMASK(2, 0) #define VCAP_SUPER_MAP_CORE_MAP_SET(x)\ @@ -5589,37 +7204,48 @@ enum sparx5_target { FIELD_GET(VCAP_SUPER_MAP_CORE_MAP, x) /* VCAP_SUPER:VCAP_CONST:VCAP_VER */ -#define VCAP_SUPER_VCAP_VER __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 0, 0, 1, 4) +#define VCAP_SUPER_VCAP_VER __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 0, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ENTRY_WIDTH */ -#define VCAP_SUPER_ENTRY_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 4, 0, 1, 4) +#define VCAP_SUPER_ENTRY_WIDTH __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 4, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ENTRY_CNT */ -#define VCAP_SUPER_ENTRY_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 8, 0, 1, 4) +#define VCAP_SUPER_ENTRY_CNT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 8, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ENTRY_SWCNT */ -#define VCAP_SUPER_ENTRY_SWCNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 12, 0, 1, 4) +#define VCAP_SUPER_ENTRY_SWCNT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 12, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ENTRY_TG_WIDTH */ -#define VCAP_SUPER_ENTRY_TG_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 16, 0, 1, 4) +#define VCAP_SUPER_ENTRY_TG_WIDTH __REG(TARGET_VCAP_SUPER,\ + 
0, 1, 924, 0, 1, 40, 16, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ACTION_DEF_CNT */ -#define VCAP_SUPER_ACTION_DEF_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 20, 0, 1, 4) +#define VCAP_SUPER_ACTION_DEF_CNT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 20, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:ACTION_WIDTH */ -#define VCAP_SUPER_ACTION_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 24, 0, 1, 4) +#define VCAP_SUPER_ACTION_WIDTH __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 24, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:CNT_WIDTH */ -#define VCAP_SUPER_CNT_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 28, 0, 1, 4) +#define VCAP_SUPER_CNT_WIDTH __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 28, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:CORE_CNT */ -#define VCAP_SUPER_CORE_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 32, 0, 1, 4) +#define VCAP_SUPER_CORE_CNT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 32, 0, 1, 4) /* VCAP_SUPER:VCAP_CONST:IF_CNT */ -#define VCAP_SUPER_IF_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 36, 0, 1, 4) +#define VCAP_SUPER_IF_CNT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 924, 0, 1, 40, 36, 0, 1, 4) /* VCAP_SUPER:RAM_CTRL:RAM_INIT */ -#define VCAP_SUPER_RAM_INIT __REG(TARGET_VCAP_SUPER, 0, 1, 1120, 0, 1, 4, 0, 0, 1, 4) +#define VCAP_SUPER_RAM_INIT __REG(TARGET_VCAP_SUPER,\ + 0, 1, 1120, 0, 1, 4, 0, 0, 1, 4) #define VCAP_SUPER_RAM_INIT_RAM_INIT BIT(1) #define VCAP_SUPER_RAM_INIT_RAM_INIT_SET(x)\ @@ -5634,7 +7260,8 @@ enum sparx5_target { FIELD_GET(VCAP_SUPER_RAM_INIT_RAM_CFG_HOOK, x) /* VOP:RAM_CTRL:RAM_INIT */ -#define VOP_RAM_INIT __REG(TARGET_VOP, 0, 1, 279176, 0, 1, 4, 0, 0, 1, 4) +#define VOP_RAM_INIT __REG(TARGET_VOP,\ + 0, 1, 279176, 0, 1, 4, 0, 0, 1, 4) #define VOP_RAM_INIT_RAM_INIT BIT(1) #define VOP_RAM_INIT_RAM_INIT_SET(x)\ @@ -5649,7 +7276,8 @@ enum sparx5_target { FIELD_GET(VOP_RAM_INIT_RAM_CFG_HOOK, x) /* XQS:SYSTEM:STAT_CFG */ -#define XQS_STAT_CFG __REG(TARGET_XQS, 0, 1, 6768, 0, 1, 872, 860, 0, 1, 4) +#define XQS_STAT_CFG __REG(TARGET_XQS,\ + 0, 1, 6768, 0, 1, 872, 860, 0, 1, 4) #define XQS_STAT_CFG_STAT_CLEAR_SHOT GENMASK(21, 18) #define XQS_STAT_CFG_STAT_CLEAR_SHOT_SET(x)\ @@ -5676,7 +7304,8 @@ enum sparx5_target { FIELD_GET(XQS_STAT_CFG_STAT_WRAP_DIS, x) /* XQS:QLIMIT_SHR:QLIMIT_SHR_TOP_CFG */ -#define XQS_QLIMIT_SHR_TOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 0, 0, 1, 4) +#define XQS_QLIMIT_SHR_TOP_CFG(g) __REG(TARGET_XQS,\ + 0, 1, 7936, g, 4, 48, 0, 0, 1, 4) #define XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP GENMASK(14, 0) #define XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP_SET(x)\ @@ -5685,7 +7314,8 @@ enum sparx5_target { FIELD_GET(XQS_QLIMIT_SHR_TOP_CFG_QLIMIT_SHR_TOP, x) /* XQS:QLIMIT_SHR:QLIMIT_SHR_ATOP_CFG */ -#define XQS_QLIMIT_SHR_ATOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 4, 0, 1, 4) +#define XQS_QLIMIT_SHR_ATOP_CFG(g) __REG(TARGET_XQS,\ + 0, 1, 7936, g, 4, 48, 4, 0, 1, 4) #define XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP GENMASK(14, 0) #define XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP_SET(x)\ @@ -5694,7 +7324,8 @@ enum sparx5_target { FIELD_GET(XQS_QLIMIT_SHR_ATOP_CFG_QLIMIT_SHR_ATOP, x) /* XQS:QLIMIT_SHR:QLIMIT_SHR_CTOP_CFG */ -#define XQS_QLIMIT_SHR_CTOP_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 8, 0, 1, 4) +#define XQS_QLIMIT_SHR_CTOP_CFG(g) __REG(TARGET_XQS,\ + 0, 1, 7936, g, 4, 48, 8, 0, 1, 4) #define XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP GENMASK(14, 0) #define XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP_SET(x)\ @@ -5703,7 +7334,8 @@ enum sparx5_target { FIELD_GET(XQS_QLIMIT_SHR_CTOP_CFG_QLIMIT_SHR_CTOP, x) /* 
XQS:QLIMIT_SHR:QLIMIT_SHR_QLIM_CFG */ -#define XQS_QLIMIT_SHR_QLIM_CFG(g) __REG(TARGET_XQS, 0, 1, 7936, g, 4, 48, 12, 0, 1, 4) +#define XQS_QLIMIT_SHR_QLIM_CFG(g) __REG(TARGET_XQS,\ + 0, 1, 7936, g, 4, 48, 12, 0, 1, 4) #define XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM GENMASK(14, 0) #define XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM_SET(x)\ @@ -5712,6 +7344,7 @@ enum sparx5_target { FIELD_GET(XQS_QLIMIT_SHR_QLIM_CFG_QLIMIT_SHR_QLIM, x) /* XQS:STAT:CNT */ -#define XQS_CNT(g) __REG(TARGET_XQS, 0, 1, 0, g, 1024, 4, 0, 0, 1, 4) +#define XQS_CNT(g) __REG(TARGET_XQS,\ + 0, 1, 0, g, 1024, 4, 0, 0, 1, 4) #endif /* _SPARX5_MAIN_REGS_H_ */ diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_police.c b/drivers/net/ethernet/microchip/sparx5/sparx5_police.c new file mode 100644 index 000000000000..8ada5cee1342 --- /dev/null +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_police.c @@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip Sparx5 Switch driver + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. + */ + +#include "sparx5_main_regs.h" +#include "sparx5_main.h" + +static int sparx5_policer_service_conf_set(struct sparx5 *sparx5, + struct sparx5_policer *pol) +{ + u32 idx, pup_tokens, max_pup_tokens, burst, thres; + struct sparx5_sdlb_group *g; + u64 rate; + + g = &sdlb_groups[pol->group]; + idx = pol->idx; + + rate = pol->rate * 1000; + burst = pol->burst; + + pup_tokens = sparx5_sdlb_pup_token_get(sparx5, g->pup_interval, rate); + max_pup_tokens = + sparx5_sdlb_pup_token_get(sparx5, g->pup_interval, g->max_rate); + + thres = DIV_ROUND_UP(burst, g->min_burst); + + spx5_wr(ANA_AC_SDLB_PUP_TOKENS_PUP_TOKENS_SET(pup_tokens), sparx5, + ANA_AC_SDLB_PUP_TOKENS(idx, 0)); + + spx5_rmw(ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX_SET(max_pup_tokens), + ANA_AC_SDLB_INH_CTRL_PUP_TOKENS_MAX, sparx5, + ANA_AC_SDLB_INH_CTRL(idx, 0)); + + spx5_rmw(ANA_AC_SDLB_THRES_THRES_SET(thres), ANA_AC_SDLB_THRES_THRES, + sparx5, ANA_AC_SDLB_THRES(idx, 0)); + + return 0; +} + +int sparx5_policer_conf_set(struct sparx5 *sparx5, struct sparx5_policer *pol) +{ + /* More policer types will be added later */ + switch (pol->type) { + case SPX5_POL_SERVICE: + return sparx5_policer_service_conf_set(sparx5, pol); + default: + break; + } + + return 0; +} diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c b/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c new file mode 100644 index 000000000000..b4b280c6138b --- /dev/null +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_pool.c @@ -0,0 +1,81 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip Sparx5 Switch driver + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. + */ + +#include "sparx5_main_regs.h" +#include "sparx5_main.h" + +static u32 sparx5_pool_id_to_idx(u32 id) +{ + return --id; +} + +u32 sparx5_pool_idx_to_id(u32 idx) +{ + return ++idx; +} + +/* Release resource from pool. + * Return reference count on success, otherwise return error. + */ +int sparx5_pool_put(struct sparx5_pool_entry *pool, int size, u32 id) +{ + struct sparx5_pool_entry *e_itr; + + e_itr = (pool + sparx5_pool_id_to_idx(id)); + if (e_itr->ref_cnt == 0) + return -EINVAL; + + return --e_itr->ref_cnt; +} + +/* Get resource from pool. + * Return reference count on success, otherwise return error. 
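The pool helpers being introduced in sparx5_pool.c implement a minimal reference-counted allocator: an entry is free while its ref_cnt is zero, the ids handed to callers are 1-based (idx + 1), and both get and put return the updated reference count. The sketch below is a standalone illustration of that contract, not the driver code; pool_entry, pool_get and pool_put are invented names, and plain -1 stands in for the kernel's -EINVAL/-ENOSPC.

#include <stdio.h>

struct pool_entry { int ref_cnt; unsigned int idx; };

/* ids are 1-based: id = idx + 1, mirroring sparx5_pool_idx_to_id() */
static int pool_get(struct pool_entry *pool, int size, unsigned int *id)
{
	for (int i = 0; i < size; i++) {
		if (pool[i].ref_cnt == 0) {
			*id = i + 1;
			return ++pool[i].ref_cnt; /* now 1 */
		}
	}
	return -1; /* no free entry (-ENOSPC in the driver) */
}

static int pool_put(struct pool_entry *pool, int size, unsigned int id)
{
	struct pool_entry *e = &pool[id - 1];

	(void)size;
	if (e->ref_cnt == 0)
		return -1; /* already free (-EINVAL in the driver) */
	return --e->ref_cnt; /* 0 means the entry is free again */
}

int main(void)
{
	static struct pool_entry pool[4];
	unsigned int id;
	int ret;

	ret = pool_get(pool, 4, &id);
	printf("get -> %d, id %u\n", ret, id); /* 1, id 1 */
	printf("put -> %d\n", pool_put(pool, 4, id)); /* 0: free again */
	return 0;
}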
+ */ +int sparx5_pool_get(struct sparx5_pool_entry *pool, int size, u32 *id) +{ + struct sparx5_pool_entry *e_itr; + int i; + + for (i = 0, e_itr = pool; i < size; i++, e_itr++) { + if (e_itr->ref_cnt == 0) { + *id = sparx5_pool_idx_to_id(i); + return ++e_itr->ref_cnt; + } + } + + return -ENOSPC; +} + +/* Get resource from pool that matches index. + * Return reference count on success, otherwise return error. + */ +int sparx5_pool_get_with_idx(struct sparx5_pool_entry *pool, int size, u32 idx, + u32 *id) +{ + struct sparx5_pool_entry *e_itr; + int i, ret = -ENOSPC; + + for (i = 0, e_itr = pool; i < size; i++, e_itr++) { + /* Pool index of first free entry */ + if (e_itr->ref_cnt == 0 && ret == -ENOSPC) + ret = i; + /* Tc index already in use ? */ + if (e_itr->idx == idx && e_itr->ref_cnt > 0) { + ret = i; + break; + } + } + + /* Did we find a free entry? */ + if (ret >= 0) { + *id = sparx5_pool_idx_to_id(ret); + e_itr = (pool + ret); + e_itr->idx = idx; + return ++e_itr->ref_cnt; + } + + return ret; +} diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c index 107b9cd931c0..3a1b1a1f5a19 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c @@ -1071,6 +1071,11 @@ int sparx5_port_init(struct sparx5 *sparx5, /* Discard pause frame 01-80-C2-00-00-01 */ spx5_wr(PAUSE_DISCARD, sparx5, ANA_CL_CAPTURE_BPDU_CFG(port->portno)); + /* Discard SMAC multicast */ + spx5_rmw(ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS_SET(0), + ANA_CL_FILTER_CTRL_FILTER_SMAC_MC_DIS, + sparx5, ANA_CL_FILTER_CTRL(port->portno)); + if (conf->portmode == PHY_INTERFACE_MODE_QSGMII || conf->portmode == PHY_INTERFACE_MODE_SGMII) { err = sparx5_serdes_set(sparx5, port, conf); @@ -1151,11 +1156,69 @@ int sparx5_port_qos_set(struct sparx5_port *port, { sparx5_port_qos_dscp_set(port, &qos->dscp); sparx5_port_qos_pcp_set(port, &qos->pcp); + sparx5_port_qos_pcp_rewr_set(port, &qos->pcp_rewr); + sparx5_port_qos_dscp_rewr_set(port, &qos->dscp_rewr); sparx5_port_qos_default_set(port, qos); return 0; } +int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port, + struct sparx5_port_qos_pcp_rewr *qos) +{ + int i, mode = SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED; + struct sparx5 *sparx5 = port->sparx5; + u8 pcp, dei; + + /* Use mapping table, with classified QoS as index, to map QoS and DP + * to tagged PCP and DEI, if PCP is trusted. Otherwise use classified + * PCP. Classified PCP equals frame PCP. + */ + if (qos->enable) + mode = SPARX5_PORT_REW_TAG_CTRL_MAPPED; + + spx5_rmw(REW_TAG_CTRL_TAG_PCP_CFG_SET(mode) | + REW_TAG_CTRL_TAG_DEI_CFG_SET(mode), + REW_TAG_CTRL_TAG_PCP_CFG | REW_TAG_CTRL_TAG_DEI_CFG, + port->sparx5, REW_TAG_CTRL(port->portno)); + + for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) { + /* Extract PCP and DEI */ + pcp = qos->map.map[i]; + if (pcp > SPARX5_PORT_QOS_PCP_COUNT) + dei = 1; + else + dei = 0; + + /* Rewrite PCP and DEI, for each classified QoS class and DP + * level. This table is only used if tag ctrl mode is set to + * 'mapped'. 
+ * + * 0:0nd - prio=0 and dp:0 => pcp=0 and dei=0 + * 0:0de - prio=0 and dp:1 => pcp=0 and dei=1 + */ + if (dei) { + spx5_rmw(REW_PCP_MAP_DE1_PCP_DE1_SET(pcp), + REW_PCP_MAP_DE1_PCP_DE1, sparx5, + REW_PCP_MAP_DE1(port->portno, i)); + + spx5_rmw(REW_DEI_MAP_DE1_DEI_DE1_SET(dei), + REW_DEI_MAP_DE1_DEI_DE1, port->sparx5, + REW_DEI_MAP_DE1(port->portno, i)); + } else { + spx5_rmw(REW_PCP_MAP_DE0_PCP_DE0_SET(pcp), + REW_PCP_MAP_DE0_PCP_DE0, sparx5, + REW_PCP_MAP_DE0(port->portno, i)); + + spx5_rmw(REW_DEI_MAP_DE0_DEI_DE0_SET(dei), + REW_DEI_MAP_DE0_DEI_DE0, port->sparx5, + REW_DEI_MAP_DE0(port->portno, i)); + } + } + + return 0; +} + int sparx5_port_qos_pcp_set(const struct sparx5_port *port, struct sparx5_port_qos_pcp *qos) { @@ -1184,6 +1247,45 @@ int sparx5_port_qos_pcp_set(const struct sparx5_port *port, return 0; } +void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port, + int mode) +{ + spx5_rmw(ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL_SET(mode), + ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL, port->sparx5, + ANA_CL_QOS_CFG(port->portno)); +} + +int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port, + struct sparx5_port_qos_dscp_rewr *qos) +{ + struct sparx5 *sparx5 = port->sparx5; + bool rewr = false; + u16 dscp; + int i; + + /* On egress, rewrite DSCP value to either classified DSCP or frame + * DSCP. If enabled; classified DSCP, if disabled; frame DSCP. + */ + if (qos->enable) + rewr = true; + + spx5_rmw(REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(rewr), + REW_DSCP_MAP_DSCP_UPDATE_ENA, sparx5, + REW_DSCP_MAP(port->portno)); + + /* On ingress, map each classified QoS class and DP to classified DSCP + * value. This mapping table is global for all ports. + */ + for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) { + dscp = qos->map.map[i]; + spx5_rmw(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(dscp), + ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, sparx5, + ANA_CL_QOS_MAP_CFG(i)); + } + + return 0; +} + int sparx5_port_qos_dscp_set(const struct sparx5_port *port, struct sparx5_port_qos_dscp *qos) { diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.h b/drivers/net/ethernet/microchip/sparx5/sparx5_port.h index fbafe22e25cc..607c4ff1df6b 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.h +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.h @@ -9,6 +9,17 @@ #include "sparx5_main.h" +/* Port PCP rewrite mode */ +#define SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED 0 +#define SPARX5_PORT_REW_TAG_CTRL_DEFAULT 1 +#define SPARX5_PORT_REW_TAG_CTRL_MAPPED 2 + +/* Port DSCP rewrite mode */ +#define SPARX5_PORT_REW_DSCP_NONE 0 +#define SPARX5_PORT_REW_DSCP_IF_ZERO 1 +#define SPARX5_PORT_REW_DSCP_SELECTED 2 +#define SPARX5_PORT_REW_DSCP_ALL 3 + static inline bool sparx5_port_is_2g5(int portno) { return portno >= 16 && portno <= 47; @@ -99,6 +110,15 @@ struct sparx5_port_qos_pcp_map { u8 map[SPARX5_PORT_QOS_PCP_DEI_COUNT]; }; +struct sparx5_port_qos_pcp_rewr_map { + u16 map[SPX5_PRIOS]; +}; + +#define SPARX5_PORT_QOS_DP_NUM 4 +struct sparx5_port_qos_dscp_rewr_map { + u16 map[SPX5_PRIOS * SPARX5_PORT_QOS_DP_NUM]; +}; + #define SPARX5_PORT_QOS_DSCP_COUNT 64 struct sparx5_port_qos_dscp_map { u8 map[SPARX5_PORT_QOS_DSCP_COUNT]; @@ -110,15 +130,27 @@ struct sparx5_port_qos_pcp { bool dp_enable; }; +struct sparx5_port_qos_pcp_rewr { + struct sparx5_port_qos_pcp_rewr_map map; + bool enable; +}; + struct sparx5_port_qos_dscp { struct sparx5_port_qos_dscp_map map; bool qos_enable; bool dp_enable; }; +struct sparx5_port_qos_dscp_rewr { + struct sparx5_port_qos_dscp_rewr_map map; + bool enable; +}; + struct 
sparx5_port_qos { struct sparx5_port_qos_pcp pcp; + struct sparx5_port_qos_pcp_rewr pcp_rewr; struct sparx5_port_qos_dscp dscp; + struct sparx5_port_qos_dscp_rewr dscp_rewr; u8 default_prio; }; @@ -127,9 +159,18 @@ int sparx5_port_qos_set(struct sparx5_port *port, struct sparx5_port_qos *qos); int sparx5_port_qos_pcp_set(const struct sparx5_port *port, struct sparx5_port_qos_pcp *qos); +int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port, + struct sparx5_port_qos_pcp_rewr *qos); + int sparx5_port_qos_dscp_set(const struct sparx5_port *port, struct sparx5_port_qos_dscp *qos); +void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port, + int mode); + +int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port, + struct sparx5_port_qos_dscp_rewr *qos); + int sparx5_port_qos_default_set(const struct sparx5_port *port, const struct sparx5_port_qos *qos); diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c new file mode 100644 index 000000000000..8dee1ab1fa75 --- /dev/null +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_psfp.c @@ -0,0 +1,332 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip Sparx5 Switch driver + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. + */ + +#include "sparx5_main_regs.h" +#include "sparx5_main.h" + +#define SPX5_PSFP_SF_CNT 1024 +#define SPX5_PSFP_SG_CONFIG_CHANGE_SLEEP 1000 +#define SPX5_PSFP_SG_CONFIG_CHANGE_TIMEO 100000 + +/* Pool of available service policers */ +static struct sparx5_pool_entry sparx5_psfp_fm_pool[SPX5_SDLB_CNT]; + +/* Pool of available stream gates */ +static struct sparx5_pool_entry sparx5_psfp_sg_pool[SPX5_PSFP_SG_CNT]; + +/* Pool of available stream filters */ +static struct sparx5_pool_entry sparx5_psfp_sf_pool[SPX5_PSFP_SF_CNT]; + +static int sparx5_psfp_sf_get(u32 *id) +{ + return sparx5_pool_get(sparx5_psfp_sf_pool, SPX5_PSFP_SF_CNT, id); +} + +static int sparx5_psfp_sf_put(u32 id) +{ + return sparx5_pool_put(sparx5_psfp_sf_pool, SPX5_PSFP_SF_CNT, id); +} + +static int sparx5_psfp_sg_get(u32 idx, u32 *id) +{ + return sparx5_pool_get_with_idx(sparx5_psfp_sg_pool, SPX5_PSFP_SG_CNT, + idx, id); +} + +static int sparx5_psfp_sg_put(u32 id) +{ + return sparx5_pool_put(sparx5_psfp_sg_pool, SPX5_PSFP_SG_CNT, id); +} + +static int sparx5_psfp_fm_get(u32 idx, u32 *id) +{ + return sparx5_pool_get_with_idx(sparx5_psfp_fm_pool, SPX5_SDLB_CNT, idx, + id); +} + +static int sparx5_psfp_fm_put(u32 id) +{ + return sparx5_pool_put(sparx5_psfp_fm_pool, SPX5_SDLB_CNT, id); +} + +u32 sparx5_psfp_isdx_get_sf(struct sparx5 *sparx5, u32 isdx) +{ + return ANA_L2_TSN_CFG_TSN_SFID_GET(spx5_rd(sparx5, + ANA_L2_TSN_CFG(isdx))); +} + +u32 sparx5_psfp_isdx_get_fm(struct sparx5 *sparx5, u32 isdx) +{ + return ANA_L2_DLB_CFG_DLB_IDX_GET(spx5_rd(sparx5, + ANA_L2_DLB_CFG(isdx))); +} + +u32 sparx5_psfp_sf_get_sg(struct sparx5 *sparx5, u32 sfid) +{ + return ANA_AC_TSN_SF_CFG_TSN_SGID_GET(spx5_rd(sparx5, + ANA_AC_TSN_SF_CFG(sfid))); +} + +void sparx5_isdx_conf_set(struct sparx5 *sparx5, u32 isdx, u32 sfid, u32 fmid) +{ + spx5_rmw(ANA_L2_TSN_CFG_TSN_SFID_SET(sfid), ANA_L2_TSN_CFG_TSN_SFID, + sparx5, ANA_L2_TSN_CFG(isdx)); + + spx5_rmw(ANA_L2_DLB_CFG_DLB_IDX_SET(fmid), ANA_L2_DLB_CFG_DLB_IDX, + sparx5, ANA_L2_DLB_CFG(isdx)); +} + +/* Internal priority value to internal priority selector */ +static u32 sparx5_psfp_ipv_to_ips(s32 ipv) +{ + return ipv > 0 ? 
(ipv | BIT(3)) : 0; +} + +static int sparx5_psfp_sgid_get_status(struct sparx5 *sparx5) +{ + return spx5_rd(sparx5, ANA_AC_SG_ACCESS_CTRL); +} + +static int sparx5_psfp_sgid_wait_for_completion(struct sparx5 *sparx5) +{ + u32 val; + + return readx_poll_timeout(sparx5_psfp_sgid_get_status, sparx5, val, + !ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_GET(val), + SPX5_PSFP_SG_CONFIG_CHANGE_SLEEP, + SPX5_PSFP_SG_CONFIG_CHANGE_TIMEO); +} + +static void sparx5_psfp_sg_config_change(struct sparx5 *sparx5, u32 id) +{ + spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(id), sparx5, + ANA_AC_SG_ACCESS_CTRL); + + spx5_wr(ANA_AC_SG_ACCESS_CTRL_CONFIG_CHANGE_SET(1) | + ANA_AC_SG_ACCESS_CTRL_SGID_SET(id), + sparx5, ANA_AC_SG_ACCESS_CTRL); + + if (sparx5_psfp_sgid_wait_for_completion(sparx5) < 0) + pr_debug("%s:%d timed out waiting for sgid completion", + __func__, __LINE__); +} + +static void sparx5_psfp_sf_set(struct sparx5 *sparx5, u32 id, + const struct sparx5_psfp_sf *sf) +{ + /* Configure stream filter */ + spx5_rmw(ANA_AC_TSN_SF_CFG_TSN_SGID_SET(sf->sgid) | + ANA_AC_TSN_SF_CFG_TSN_MAX_SDU_SET(sf->max_sdu) | + ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE_SET(sf->sblock_osize) | + ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA_SET(sf->sblock_osize_ena), + ANA_AC_TSN_SF_CFG_TSN_SGID | ANA_AC_TSN_SF_CFG_TSN_MAX_SDU | + ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_STATE | + ANA_AC_TSN_SF_CFG_BLOCK_OVERSIZE_ENA, + sparx5, ANA_AC_TSN_SF_CFG(id)); +} + +static int sparx5_psfp_sg_set(struct sparx5 *sparx5, u32 id, + const struct sparx5_psfp_sg *sg) +{ + u32 ips, base_lsb, base_msb, accum_time_interval = 0; + const struct sparx5_psfp_gce *gce; + int i; + + ips = sparx5_psfp_ipv_to_ips(sg->ipv); + base_lsb = sg->basetime.tv_sec & 0xffffffff; + base_msb = sg->basetime.tv_sec >> 32; + + /* Set stream gate id */ + spx5_wr(ANA_AC_SG_ACCESS_CTRL_SGID_SET(id), sparx5, + ANA_AC_SG_ACCESS_CTRL); + + /* Write AdminPSFP values */ + spx5_wr(sg->basetime.tv_nsec, sparx5, ANA_AC_SG_CONFIG_REG_1); + spx5_wr(base_lsb, sparx5, ANA_AC_SG_CONFIG_REG_2); + + spx5_rmw(ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_SET(base_msb) | + ANA_AC_SG_CONFIG_REG_3_INIT_IPS_SET(ips) | + ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH_SET(sg->num_entries) | + ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE_SET(sg->gate_state) | + ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE_SET(1), + ANA_AC_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB | + ANA_AC_SG_CONFIG_REG_3_INIT_IPS | + ANA_AC_SG_CONFIG_REG_3_LIST_LENGTH | + ANA_AC_SG_CONFIG_REG_3_INIT_GATE_STATE | + ANA_AC_SG_CONFIG_REG_3_GATE_ENABLE, + sparx5, ANA_AC_SG_CONFIG_REG_3); + + spx5_wr(sg->cycletime, sparx5, ANA_AC_SG_CONFIG_REG_4); + spx5_wr(sg->cycletimeext, sparx5, ANA_AC_SG_CONFIG_REG_5); + + /* For each scheduling entry */ + for (i = 0; i < sg->num_entries; i++) { + gce = &sg->gce[i]; + ips = sparx5_psfp_ipv_to_ips(gce->ipv); + /* hardware needs TimeInterval to be cumulative */ + accum_time_interval += gce->interval; + /* Set gate state */ + spx5_wr(ANA_AC_SG_GCL_GS_CONFIG_IPS_SET(ips) | + ANA_AC_SG_GCL_GS_CONFIG_GATE_STATE_SET(gce->gate_state), + sparx5, ANA_AC_SG_GCL_GS_CONFIG(i)); + + /* Set time interval */ + spx5_wr(accum_time_interval, sparx5, + ANA_AC_SG_GCL_TI_CONFIG(i)); + + /* Set maximum octets */ + spx5_wr(gce->maxoctets, sparx5, ANA_AC_SG_GCL_OCT_CONFIG(i)); + } + + return 0; +} + +static int sparx5_sdlb_conf_set(struct sparx5 *sparx5, + struct sparx5_psfp_fm *fm) +{ + int (*sparx5_sdlb_group_action)(struct sparx5 *sparx5, u32 group, + u32 idx); + + if (!fm->pol.rate && !fm->pol.burst) + sparx5_sdlb_group_action = &sparx5_sdlb_group_del; + else + 
sparx5_sdlb_group_action = &sparx5_sdlb_group_add; + + sparx5_policer_conf_set(sparx5, &fm->pol); + + return sparx5_sdlb_group_action(sparx5, fm->pol.group, fm->pol.idx); +} + +int sparx5_psfp_sf_add(struct sparx5 *sparx5, const struct sparx5_psfp_sf *sf, + u32 *id) +{ + int ret; + + ret = sparx5_psfp_sf_get(id); + if (ret < 0) + return ret; + + sparx5_psfp_sf_set(sparx5, *id, sf); + + return 0; +} + +int sparx5_psfp_sf_del(struct sparx5 *sparx5, u32 id) +{ + const struct sparx5_psfp_sf sf = { 0 }; + + sparx5_psfp_sf_set(sparx5, id, &sf); + + return sparx5_psfp_sf_put(id); +} + +int sparx5_psfp_sg_add(struct sparx5 *sparx5, u32 uidx, + struct sparx5_psfp_sg *sg, u32 *id) +{ + ktime_t basetime; + int ret; + + ret = sparx5_psfp_sg_get(uidx, id); + if (ret < 0) + return ret; + /* Was already in use, no need to reconfigure */ + if (ret > 1) + return 0; + + /* Calculate basetime for this stream gate */ + sparx5_new_base_time(sparx5, sg->cycletime, 0, &basetime); + sg->basetime = ktime_to_timespec64(basetime); + + sparx5_psfp_sg_set(sparx5, *id, sg); + + /* Signal hardware to copy AdminPSFP values into OperPSFP values */ + sparx5_psfp_sg_config_change(sparx5, *id); + + return 0; +} + +int sparx5_psfp_sg_del(struct sparx5 *sparx5, u32 id) +{ + const struct sparx5_psfp_sg sg = { 0 }; + int ret; + + ret = sparx5_psfp_sg_put(id); + if (ret < 0) + return ret; + /* Stream gate still in use ? */ + if (ret > 0) + return 0; + + return sparx5_psfp_sg_set(sparx5, id, &sg); +} + +int sparx5_psfp_fm_add(struct sparx5 *sparx5, u32 uidx, + struct sparx5_psfp_fm *fm, u32 *id) +{ + struct sparx5_policer *pol = &fm->pol; + int ret; + + /* Get flow meter */ + ret = sparx5_psfp_fm_get(uidx, &fm->pol.idx); + if (ret < 0) + return ret; + /* Was already in use, no need to reconfigure */ + if (ret > 1) + return 0; + + ret = sparx5_sdlb_group_get_by_rate(sparx5, pol->rate, pol->burst); + if (ret < 0) + return ret; + + fm->pol.group = ret; + + ret = sparx5_sdlb_conf_set(sparx5, fm); + if (ret < 0) + return ret; + + *id = fm->pol.idx; + + return 0; +} + +int sparx5_psfp_fm_del(struct sparx5 *sparx5, u32 id) +{ + struct sparx5_psfp_fm fm = { .pol.idx = id, + .pol.type = SPX5_POL_SERVICE }; + int ret; + + /* Find the group that this lb belongs to */ + ret = sparx5_sdlb_group_get_by_index(sparx5, id, &fm.pol.group); + if (ret < 0) + return ret; + + ret = sparx5_psfp_fm_put(id); + if (ret < 0) + return ret; + /* Do not reset flow-meter if still in use. 
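A detail of sparx5_psfp_sg_set() above that is easy to miss: the gate control list is written with cumulative time intervals, per the "hardware needs TimeInterval to be cumulative" comment. Below is a standalone illustration of that accumulation; the interval values are invented for the example.

#include <stdio.h>

int main(void)
{
	unsigned int interval[] = { 30000, 20000, 50000 }; /* example GCL, ns */
	unsigned int accum = 0;
	int i;

	for (i = 0; i < 3; i++) {
		accum += interval[i];
		/* the running sum is what each TI register would receive */
		printf("entry %d: interval %u -> register value %u\n",
		       i, interval[i], accum);
	}
	/* prints 30000, 50000, 100000; the final value equals the cycle
	 * time when the entries cover the whole cycle */
	return 0;
}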
*/ + if (ret > 0) + return 0; + + return sparx5_sdlb_conf_set(sparx5, &fm); +} + +void sparx5_psfp_init(struct sparx5 *sparx5) +{ + const struct sparx5_sdlb_group *group; + int i; + + for (i = 0; i < SPX5_SDLB_GROUP_CNT; i++) { + group = &sdlb_groups[i]; + sparx5_sdlb_group_init(sparx5, group->max_rate, + group->min_burst, group->frame_size, i); + } + + spx5_wr(ANA_AC_SG_CYCLETIME_UPDATE_PERIOD_SG_CT_UPDATE_ENA_SET(1), + sparx5, ANA_AC_SG_CYCLETIME_UPDATE_PERIOD); + + spx5_rmw(ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA_SET(1), + ANA_L2_FWD_CFG_ISDX_LOOKUP_ENA, sparx5, ANA_L2_FWD_CFG); +} diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c index 69e76634f9aa..0edb98cef7e4 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c @@ -476,8 +476,7 @@ static int sparx5_ptp_settime64(struct ptp_clock_info *ptp, return 0; } -static int sparx5_ptp_gettime64(struct ptp_clock_info *ptp, - struct timespec64 *ts) +int sparx5_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts) { struct sparx5_phc *phc = container_of(ptp, struct sparx5_phc, info); struct sparx5 *sparx5 = phc->sparx5; diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c index 379e540e5e6a..5f34febaee6b 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.c @@ -9,6 +9,63 @@ #include "sparx5_main.h" #include "sparx5_qos.h" +/* Calculate new base_time based on cycle_time. + * + * The hardware requires a base_time that is always in the future. + * We define threshold_time as current_time + (2 * cycle_time). + * If base_time is below threshold_time this function recalculates it to be in + * the interval: + * threshold_time <= base_time < (threshold_time + cycle_time) + * + * A very simple algorithm could be like this: + * new_base_time = org_base_time + N * cycle_time + * using the lowest N so (new_base_time >= threshold_time) + */ +void sparx5_new_base_time(struct sparx5 *sparx5, const u32 cycle_time, + const ktime_t org_base_time, ktime_t *new_base_time) +{ + ktime_t current_time, threshold_time, new_time; + struct timespec64 ts; + u64 nr_of_cycles_p2; + u64 nr_of_cycles; + u64 diff_time; + + new_time = org_base_time; + + sparx5_ptp_gettime64(&sparx5->phc[SPARX5_PHC_PORT].info, &ts); + current_time = timespec64_to_ktime(ts); + threshold_time = current_time + (2 * cycle_time); + diff_time = threshold_time - new_time; + nr_of_cycles = div_u64(diff_time, cycle_time); + nr_of_cycles_p2 = 1; /* Use 2^0 as start value */ + + if (new_time >= threshold_time) { + *new_base_time = new_time; + return; + } + + /* Calculate the smallest power of 2 (nr_of_cycles_p2) + * that is larger than nr_of_cycles. 
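The power-of-2 stepping in the loop that follows bounds the iteration count; for an org_base_time at or below threshold_time it produces the same result as the closed-form computation sketched below with plain 64-bit nanosecond values. This is an illustration only, not the driver code.

#include <stdio.h>
#include <stdint.h>

/* Smallest org_base + N * cycle that is >= now + 2 * cycle, with N >= 0 */
static uint64_t new_base_time(uint64_t org_base, uint64_t cycle, uint64_t now)
{
	uint64_t threshold = now + 2 * cycle;
	uint64_t n;

	if (org_base >= threshold)
		return org_base;
	n = (threshold - org_base + cycle - 1) / cycle; /* ceiling division */
	return org_base + n * cycle;
}

int main(void)
{
	/* org_base 1000, cycle 300, now 1500: threshold is 2100,
	 * N = ceil(1100 / 300) = 4, so the new base time is 2200,
	 * which lies in [2100, 2400) as the contract requires */
	printf("%llu\n", (unsigned long long)new_base_time(1000, 300, 1500));
	return 0;
}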
+ */ + while (nr_of_cycles_p2 < nr_of_cycles) + nr_of_cycles_p2 <<= 1; /* Next (higher) power of 2 */ + + /* Add as big chunks (power of 2 * cycle_time) + * as possible for each power of 2 + */ + while (nr_of_cycles_p2) { + if (new_time < threshold_time) { + new_time += cycle_time * nr_of_cycles_p2; + while (new_time < threshold_time) + new_time += cycle_time * nr_of_cycles_p2; + new_time -= cycle_time * nr_of_cycles_p2; + } + nr_of_cycles_p2 >>= 1; /* Next (lower) power of 2 */ + } + new_time += cycle_time; + *new_base_time = new_time; +} + /* Max rates for leak groups */ static const u32 spx5_hsch_max_group_rate[SPX5_HSCH_LEAK_GRP_CNT] = { 1048568, /* 1.049 Gbps */ @@ -393,6 +450,8 @@ int sparx5_qos_init(struct sparx5 *sparx5) if (ret < 0) return ret; + sparx5_psfp_init(sparx5); + return 0; } diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c b/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c new file mode 100644 index 000000000000..f5267218caeb --- /dev/null +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_sdlb.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip Sparx5 Switch driver + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. + */ + +#include "sparx5_main_regs.h" +#include "sparx5_main.h" + +struct sparx5_sdlb_group sdlb_groups[SPX5_SDLB_GROUP_CNT] = { + { SPX5_SDLB_GROUP_RATE_MAX, 8192 / 1, 64 }, /* 25 G */ + { 15000000000ULL, 8192 / 1, 64 }, /* 15 G */ + { 10000000000ULL, 8192 / 1, 64 }, /* 10 G */ + { 5000000000ULL, 8192 / 1, 64 }, /* 5 G */ + { 2500000000ULL, 8192 / 1, 64 }, /* 2.5 G */ + { 1000000000ULL, 8192 / 2, 64 }, /* 1 G */ + { 500000000ULL, 8192 / 2, 64 }, /* 500 M */ + { 100000000ULL, 8192 / 4, 64 }, /* 100 M */ + { 50000000ULL, 8192 / 4, 64 }, /* 50 M */ + { 5000000ULL, 8192 / 8, 64 } /* 5 M */ +}; + +int sparx5_sdlb_clk_hz_get(struct sparx5 *sparx5) +{ + u32 clk_per_100ps; + u64 clk_hz; + + clk_per_100ps = HSCH_SYS_CLK_PER_100PS_GET(spx5_rd(sparx5, + HSCH_SYS_CLK_PER)); + if (!clk_per_100ps) + clk_per_100ps = SPX5_CLK_PER_100PS_DEFAULT; + + clk_hz = (10 * 1000 * 1000) / clk_per_100ps; + return clk_hz *= 1000; +} + +static int sparx5_sdlb_pup_interval_get(struct sparx5 *sparx5, u32 max_token, + u64 max_rate) +{ + u64 clk_hz; + + clk_hz = sparx5_sdlb_clk_hz_get(sparx5); + + return div64_u64((8 * clk_hz * max_token), max_rate); +} + +int sparx5_sdlb_pup_token_get(struct sparx5 *sparx5, u32 pup_interval, u64 rate) +{ + u64 clk_hz; + + if (!rate) + return SPX5_SDLB_PUP_TOKEN_DISABLE; + + clk_hz = sparx5_sdlb_clk_hz_get(sparx5); + + return DIV64_U64_ROUND_UP((rate * pup_interval), (clk_hz * 8)); +} + +static void sparx5_sdlb_group_disable(struct sparx5 *sparx5, u32 group) +{ + spx5_rmw(ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(0), + ANA_AC_SDLB_PUP_CTRL_PUP_ENA, sparx5, + ANA_AC_SDLB_PUP_CTRL(group)); +} + +static void sparx5_sdlb_group_enable(struct sparx5 *sparx5, u32 group) +{ + spx5_rmw(ANA_AC_SDLB_PUP_CTRL_PUP_ENA_SET(1), + ANA_AC_SDLB_PUP_CTRL_PUP_ENA, sparx5, + ANA_AC_SDLB_PUP_CTRL(group)); +} + +static u32 sparx5_sdlb_group_get_first(struct sparx5 *sparx5, u32 group) +{ + u32 val; + + val = spx5_rd(sparx5, ANA_AC_SDLB_XLB_START(group)); + + return ANA_AC_SDLB_XLB_START_LBSET_START_GET(val); +} + +static u32 sparx5_sdlb_group_get_next(struct sparx5 *sparx5, u32 group, + u32 lb) +{ + u32 val; + + val = spx5_rd(sparx5, ANA_AC_SDLB_XLB_NEXT(lb)); + + return ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_GET(val); +} + +static bool sparx5_sdlb_group_is_first(struct sparx5 *sparx5, u32 group, + u32 lb) +{ + return lb == 
sparx5_sdlb_group_get_first(sparx5, group); +} + +static bool sparx5_sdlb_group_is_last(struct sparx5 *sparx5, u32 group, + u32 lb) +{ + return lb == sparx5_sdlb_group_get_next(sparx5, group, lb); +} + +static bool sparx5_sdlb_group_is_empty(struct sparx5 *sparx5, u32 group) +{ + u32 val; + + val = spx5_rd(sparx5, ANA_AC_SDLB_PUP_CTRL(group)); + + return ANA_AC_SDLB_PUP_CTRL_PUP_ENA_GET(val) == 0; +} + +static u32 sparx5_sdlb_group_get_last(struct sparx5 *sparx5, u32 group) +{ + u32 itr, next; + + itr = sparx5_sdlb_group_get_first(sparx5, group); + + for (;;) { + next = sparx5_sdlb_group_get_next(sparx5, group, itr); + if (itr == next) + return itr; + + itr = next; + } +} + +static bool sparx5_sdlb_group_is_singular(struct sparx5 *sparx5, u32 group) +{ + if (sparx5_sdlb_group_is_empty(sparx5, group)) + return false; + + return sparx5_sdlb_group_get_first(sparx5, group) == + sparx5_sdlb_group_get_last(sparx5, group); +} + +static int sparx5_sdlb_group_get_adjacent(struct sparx5 *sparx5, u32 group, + u32 idx, u32 *prev, u32 *next, + u32 *first) +{ + u32 itr; + + *first = sparx5_sdlb_group_get_first(sparx5, group); + *prev = *first; + *next = *first; + itr = *first; + + for (;;) { + *next = sparx5_sdlb_group_get_next(sparx5, group, itr); + + if (itr == idx) + return 0; /* Found it */ + + if (itr == *next) + return -EINVAL; /* Was not found */ + + *prev = itr; + itr = *next; + } +} + +static int sparx5_sdlb_group_get_count(struct sparx5 *sparx5, u32 group) +{ + u32 itr, next; + int count = 0; + + itr = sparx5_sdlb_group_get_first(sparx5, group); + + for (;;) { + next = sparx5_sdlb_group_get_next(sparx5, group, itr); + if (itr == next) + return count; + + itr = next; + count++; + } +} + +int sparx5_sdlb_group_get_by_rate(struct sparx5 *sparx5, u32 rate, u32 burst) +{ + const struct sparx5_sdlb_group *group; + u64 rate_bps; + int i, count; + + rate_bps = rate * 1000; + + for (i = SPX5_SDLB_GROUP_CNT - 1; i >= 0; i--) { + group = &sdlb_groups[i]; + + count = sparx5_sdlb_group_get_count(sparx5, i); + + /* Check that this group is not full. + * According to LB group configuration rules: the number of XLBs + * in a group must not exceed PUP_INTERVAL/4 - 1. 
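sparx5_sdlb_group_get_by_rate() walks the group table from the slowest group upward and returns the first group that still has room and whose max_rate lies above the requested rate. The standalone sketch below illustrates that selection rule; the rates mirror part of the sdlb_groups[] table, while the counts and pup intervals are invented example numbers (the real pup_interval is computed at init time).

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t max_rate[] = { 25000000000ULL, 15000000000ULL,
				1000000000ULL, 500000000ULL, 100000000ULL };
	int count[] = { 0, 0, 2047, 3, 0 };	/* XLBs already in each group */
	int pup_interval[] = { 8192, 8192, 4096, 4096, 2048 };
	uint64_t rate_bps = 400ULL * 1000 * 1000;	/* request: 400 Mbps */
	int i;

	for (i = 4; i >= 0; i--) {
		if (count[i] > (pup_interval[i] / 4 - 1))
			continue;	/* group is full, same rule as above */
		if (rate_bps < max_rate[i]) {
			printf("-> group %d (max %llu bps)\n", i,
			       (unsigned long long)max_rate[i]);
			return 0;	/* picks group 3 (500 Mbps) here */
		}
	}
	printf("no group fits (-ENOSPC)\n");
	return 0;
}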
+ */ + if (count > ((group->pup_interval / 4) - 1)) + continue; + + if (rate_bps < group->max_rate) + return i; + } + + return -ENOSPC; +} + +int sparx5_sdlb_group_get_by_index(struct sparx5 *sparx5, u32 idx, u32 *group) +{ + u32 itr, next; + int i; + + for (i = 0; i < SPX5_SDLB_GROUP_CNT; i++) { + if (sparx5_sdlb_group_is_empty(sparx5, i)) + continue; + + itr = sparx5_sdlb_group_get_first(sparx5, i); + + for (;;) { + next = sparx5_sdlb_group_get_next(sparx5, i, itr); + + if (itr == idx) { + *group = i; + return 0; /* Found it */ + } + if (itr == next) + break; /* Was not found */ + + itr = next; + } + } + + return -EINVAL; +} + +static int sparx5_sdlb_group_link(struct sparx5 *sparx5, u32 group, u32 idx, + u32 first, u32 next, bool empty) +{ + /* Stop leaking */ + sparx5_sdlb_group_disable(sparx5, group); + + if (empty) + return 0; + + /* Link insertion lb to next lb */ + spx5_wr(ANA_AC_SDLB_XLB_NEXT_LBSET_NEXT_SET(next) | + ANA_AC_SDLB_XLB_NEXT_LBGRP_SET(group), + sparx5, ANA_AC_SDLB_XLB_NEXT(idx)); + + /* Set the first lb */ + spx5_wr(ANA_AC_SDLB_XLB_START_LBSET_START_SET(first), sparx5, + ANA_AC_SDLB_XLB_START(group)); + + /* Start leaking */ + sparx5_sdlb_group_enable(sparx5, group); + + return 0; +}; + +int sparx5_sdlb_group_add(struct sparx5 *sparx5, u32 group, u32 idx) +{ + u32 first, next; + + /* We always add to head of the list */ + first = idx; + + if (sparx5_sdlb_group_is_empty(sparx5, group)) + next = idx; + else + next = sparx5_sdlb_group_get_first(sparx5, group); + + return sparx5_sdlb_group_link(sparx5, group, idx, first, next, false); +} + +int sparx5_sdlb_group_del(struct sparx5 *sparx5, u32 group, u32 idx) +{ + u32 first, next, prev; + bool empty = false; + + if (sparx5_sdlb_group_get_adjacent(sparx5, group, idx, &prev, &next, + &first) < 0) { + pr_err("%s:%d Could not find idx: %d in group: %d", __func__, + __LINE__, idx, group); + return -EINVAL; + } + + if (sparx5_sdlb_group_is_singular(sparx5, group)) { + empty = true; + } else if (sparx5_sdlb_group_is_last(sparx5, group, idx)) { + /* idx is removed, prev is now last */ + idx = prev; + next = prev; + } else if (sparx5_sdlb_group_is_first(sparx5, group, idx)) { + /* idx is removed and points to itself, first is next */ + first = next; + next = idx; + } else { + /* Next is not touched */ + idx = prev; + } + + return sparx5_sdlb_group_link(sparx5, group, idx, first, next, empty); +} + +void sparx5_sdlb_group_init(struct sparx5 *sparx5, u64 max_rate, u32 min_burst, + u32 frame_size, u32 idx) +{ + u32 thres_shift, mask = 0x01, power = 0; + struct sparx5_sdlb_group *group; + u64 max_token; + + group = &sdlb_groups[idx]; + + /* Number of positions to right-shift LB's threshold value. */ + while ((min_burst & mask) == 0) { + power++; + mask <<= 1; + } + thres_shift = SPX5_SDLB_2CYCLES_TYPE2_THRES_OFFSET - power; + + max_token = (min_burst > SPX5_SDLB_PUP_TOKEN_MAX) ? 
+ SPX5_SDLB_PUP_TOKEN_MAX : + min_burst; + group->pup_interval = + sparx5_sdlb_pup_interval_get(sparx5, max_token, max_rate); + + group->frame_size = frame_size; + + spx5_wr(ANA_AC_SDLB_PUP_INTERVAL_PUP_INTERVAL_SET(group->pup_interval), + sparx5, ANA_AC_SDLB_PUP_INTERVAL(idx)); + + spx5_wr(ANA_AC_SDLB_FRM_RATE_TOKENS_FRM_RATE_TOKENS_SET(frame_size), + sparx5, ANA_AC_SDLB_FRM_RATE_TOKENS(idx)); + + spx5_wr(ANA_AC_SDLB_LBGRP_MISC_THRES_SHIFT_SET(thres_shift), sparx5, + ANA_AC_SDLB_LBGRP_MISC(idx)); +} diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c index 205246b5af82..e80f3166db7d 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.c @@ -5,6 +5,7 @@ */ #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include "sparx5_tc.h" #include "sparx5_main.h" diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h index adab88e6b21f..7ef470b28566 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc.h @@ -21,6 +21,80 @@ enum SPX5_PORT_MASK_MODE { SPX5_PMM_OR_PGID_MASK, }; +/* Controls ES0 forwarding */ +enum SPX5_FORWARDING_SEL { + SPX5_FWSEL_NO_ACTION, + SPX5_FWSEL_COPY_TO_LOOPBACK, + SPX5_FWSEL_REDIRECT_TO_LOOPBACK, + SPX5_FWSEL_DISCARD, +}; + +/* Controls tag A (outer tagging) */ +enum SPX5_OUTER_TAG_SEL { + SPX5_OTAG_PORT, + SPX5_OTAG_TAG_A, + SPX5_OTAG_FORCED_PORT, + SPX5_OTAG_UNTAG, +}; + +/* Selects TPID for ES0 tag A */ +enum SPX5_TPID_A_SEL { + SPX5_TPID_A_8100, + SPX5_TPID_A_88A8, + SPX5_TPID_A_CUST1, + SPX5_TPID_A_CUST2, + SPX5_TPID_A_CUST3, + SPX5_TPID_A_CLASSIFIED, +}; + +/* Selects VID for ES0 tag A */ +enum SPX5_VID_A_SEL { + SPX5_VID_A_CLASSIFIED, + SPX5_VID_A_VAL, + SPX5_VID_A_IFH, + SPX5_VID_A_RESERVED, +}; + +/* Select PCP source for ES0 tag A */ +enum SPX5_PCP_A_SEL { + SPX5_PCP_A_CLASSIFIED, + SPX5_PCP_A_VAL, + SPX5_PCP_A_RESERVED, + SPX5_PCP_A_POPPED, + SPX5_PCP_A_MAPPED_0, + SPX5_PCP_A_MAPPED_1, + SPX5_PCP_A_MAPPED_2, + SPX5_PCP_A_MAPPED_3, +}; + +/* Select DEI source for ES0 tag A */ +enum SPX5_DEI_A_SEL { + SPX5_DEI_A_CLASSIFIED, + SPX5_DEI_A_VAL, + SPX5_DEI_A_REW, + SPX5_DEI_A_POPPED, + SPX5_DEI_A_MAPPED_0, + SPX5_DEI_A_MAPPED_1, + SPX5_DEI_A_MAPPED_2, + SPX5_DEI_A_MAPPED_3, +}; + +/* Controls tag B (inner tagging) */ +enum SPX5_INNER_TAG_SEL { + SPX5_ITAG_NO_PUSH, + SPX5_ITAG_PUSH_B_TAG, +}; + +/* Selects TPID for ES0 tag B. */ +enum SPX5_TPID_B_SEL { + SPX5_TPID_B_8100, + SPX5_TPID_B_88A8, + SPX5_TPID_B_CUST1, + SPX5_TPID_B_CUST2, + SPX5_TPID_B_CUST3, + SPX5_TPID_B_CLASSIFIED, +}; + int sparx5_port_setup_tc(struct net_device *ndev, enum tc_setup_type type, void *type_data); diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c index 1ed304a816cc..b36819aafaca 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c @@ -4,11 +4,13 @@ * Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries. 
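Back in sparx5_sdlb_group_init() above, the threshold shift is derived by counting the trailing zero bits of min_burst, which is always a power of two in the sdlb_groups[] table. A standalone worked example of that bit loop follows; the value 2048 corresponds to the 8192 / 4 table entries.

#include <stdio.h>

int main(void)
{
	unsigned int min_burst = 2048;	/* 8192 / 4, as in the table */
	unsigned int mask = 0x01, power = 0;

	while ((min_burst & mask) == 0) {
		power++;
		mask <<= 1;
	}
	/* power is 11 here; the driver subtracts it from
	 * SPX5_SDLB_2CYCLES_TYPE2_THRES_OFFSET to get thres_shift */
	printf("trailing zero bits: %u\n", power);
	return 0;
}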
*/ +#include <net/tc_act/tc_gate.h> #include <net/tcp.h> #include "sparx5_tc.h" #include "vcap_api.h" #include "vcap_api_client.h" +#include "vcap_tc.h" #include "sparx5_main.h" #include "sparx5_vcap_impl.h" @@ -26,249 +28,33 @@ struct sparx5_multiple_rules { struct sparx5_wildcard_rule rule[SPX5_MAX_RULE_SIZE]; }; -struct sparx5_tc_flower_parse_usage { - struct flow_cls_offload *fco; - struct flow_rule *frule; - struct vcap_rule *vrule; - u16 l3_proto; - u8 l4_proto; - unsigned int used_keys; -}; - -struct sparx5_tc_rule_pkt_cnt { - u64 cookie; - u32 pkts; -}; - -/* These protocols have dedicated keysets in IS2 and a TC dissector - * ETH_P_ARP does not have a TC dissector - */ -static u16 sparx5_tc_known_etypes[] = { - ETH_P_ALL, - ETH_P_ARP, - ETH_P_IP, - ETH_P_IPV6, -}; - -enum sparx5_is2_arp_opcode { - SPX5_IS2_ARP_REQUEST, - SPX5_IS2_ARP_REPLY, - SPX5_IS2_RARP_REQUEST, - SPX5_IS2_RARP_REPLY, -}; - -enum tc_arp_opcode { - TC_ARP_OP_RESERVED, - TC_ARP_OP_REQUEST, - TC_ARP_OP_REPLY, -}; - -static bool sparx5_tc_is_known_etype(u16 etype) -{ - int idx; - - /* For now this only knows about IS2 traffic classification */ - for (idx = 0; idx < ARRAY_SIZE(sparx5_tc_known_etypes); ++idx) - if (sparx5_tc_known_etypes[idx] == etype) - return true; - - return false; -} - -static int sparx5_tc_flower_handler_ethaddr_usage(struct sparx5_tc_flower_parse_usage *st) -{ - enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; - enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; - struct flow_match_eth_addrs match; - struct vcap_u48_key smac, dmac; - int err = 0; - - flow_rule_match_eth_addrs(st->frule, &match); - - if (!is_zero_ether_addr(match.mask->src)) { - vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); - vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); - if (err) - goto out; - } - - if (!is_zero_ether_addr(match.mask->dst)) { - vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); - vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_ipv4_usage(struct sparx5_tc_flower_parse_usage *st) -{ - int err = 0; - - if (st->l3_proto == ETH_P_IP) { - struct flow_match_ipv4_addrs mt; - - flow_rule_match_ipv4_addrs(st->frule, &mt); - if (mt.mask->src) { - err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_IP4_SIP, - be32_to_cpu(mt.key->src), - be32_to_cpu(mt.mask->src)); - if (err) - goto out; - } - if (mt.mask->dst) { - err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_IP4_DIP, - be32_to_cpu(mt.key->dst), - be32_to_cpu(mt.mask->dst)); - if (err) - goto out; - } - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_ipv6_usage(struct sparx5_tc_flower_parse_usage *st) -{ - int err = 0; - - if (st->l3_proto == ETH_P_IPV6) { - struct flow_match_ipv6_addrs mt; - struct vcap_u128_key sip; - struct vcap_u128_key dip; - - flow_rule_match_ipv6_addrs(st->frule, &mt); - /* Check if address masks are non-zero */ - if (!ipv6_addr_any(&mt.mask->src)) { - vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16); - vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16); - err = 
vcap_rule_add_key_u128(st->vrule, - VCAP_KF_L3_IP6_SIP, &sip); - if (err) - goto out; - } - if (!ipv6_addr_any(&mt.mask->dst)) { - vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16); - vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16); - err = vcap_rule_add_key_u128(st->vrule, - VCAP_KF_L3_IP6_DIP, &dip); - if (err) - goto out; - } - } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS); - return err; -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error"); - return err; -} - static int -sparx5_tc_flower_handler_control_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_es0_tpid(struct vcap_tc_flower_parse_usage *st) { - struct flow_match_control mt; - u32 value, mask; int err = 0; - flow_rule_match_control(st->frule, &mt); - - if (mt.mask->flags) { - if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) { - if (mt.key->flags & FLOW_DIS_FIRST_FRAG) { - value = 1; /* initial fragment */ - mask = 0x3; - } else { - if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { - value = 3; /* follow up fragment */ - mask = 0x3; - } else { - value = 0; /* no fragment */ - mask = 0x3; - } - } - } else { - if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { - value = 3; /* follow up fragment */ - mask = 0x3; - } else { - value = 0; /* no fragment */ - mask = 0x3; - } - } - + switch (st->tpid) { + case ETH_P_8021Q: err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_FRAGMENT_TYPE, - value, mask); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_portnum_usage(struct sparx5_tc_flower_parse_usage *st) -{ - struct flow_match_ports mt; - u16 value, mask; - int err = 0; - - flow_rule_match_ports(st->frule, &mt); - - if (mt.mask->src) { - value = be16_to_cpu(mt.key->src); - mask = be16_to_cpu(mt.mask->src); - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value, - mask); - if (err) - goto out; - } - - if (mt.mask->dst) { - value = be16_to_cpu(mt.key->dst); - mask = be16_to_cpu(mt.mask->dst); - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value, - mask); - if (err) - goto out; + VCAP_KF_8021Q_TPID, + SPX5_TPID_SEL_8100, ~0); + break; + case ETH_P_8021AD: + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_8021Q_TPID, + SPX5_TPID_SEL_88A8, ~0); + break; + default: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, + "Invalid vlan proto"); + err = -EINVAL; + break; } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error"); return err; } static int -sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st) { struct flow_match_basic mt; int err = 0; @@ -277,7 +63,7 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) if (mt.mask->n_proto) { st->l3_proto = be16_to_cpu(mt.key->n_proto); - if (!sparx5_tc_is_known_etype(st->l3_proto)) { + if (!sparx5_vcap_is_known_etype(st->admin, st->l3_proto)) { err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE, st->l3_proto, ~0); if (err) @@ -292,6 +78,13 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) VCAP_BIT_0); if (err) goto out; + if (st->admin->vtype == VCAP_TYPE_IS0) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_IP_SNAP_IS, + VCAP_BIT_1); + if (err) + goto out; + } } } @@ -309,6 +102,13 @@ 
sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) VCAP_BIT_0); if (err) goto out; + if (st->admin->vtype == VCAP_TYPE_IS0) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_TCP_UDP_IS, + VCAP_BIT_1); + if (err) + goto out; + } } else { err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP_PROTO, @@ -328,253 +128,131 @@ out: } static int -sparx5_tc_flower_handler_vlan_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st) { - enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS; - enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS; - struct flow_match_vlan mt; - int err; - - flow_rule_match_vlan(st->frule, &mt); - - if (mt.mask->vlan_id) { - err = vcap_rule_add_key_u32(st->vrule, vid_key, - mt.key->vlan_id, - mt.mask->vlan_id); - if (err) - goto out; - } - - if (mt.mask->vlan_priority) { - err = vcap_rule_add_key_u32(st->vrule, pcp_key, - mt.key->vlan_priority, - mt.mask->vlan_priority); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN); - - return 0; -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_tcp_usage(struct sparx5_tc_flower_parse_usage *st) -{ - struct flow_match_tcp mt; - u16 tcp_flags_mask; - u16 tcp_flags_key; - enum vcap_bit val; + struct flow_match_control mt; + u32 value, mask; int err = 0; - flow_rule_match_tcp(st->frule, &mt); - tcp_flags_key = be16_to_cpu(mt.key->flags); - tcp_flags_mask = be16_to_cpu(mt.mask->flags); - - if (tcp_flags_mask & TCPHDR_FIN) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_FIN) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_SYN) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_SYN) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_RST) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_RST) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_PSH) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_PSH) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val); - if (err) - goto out; - } + flow_rule_match_control(st->frule, &mt); - if (tcp_flags_mask & TCPHDR_ACK) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_ACK) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val); - if (err) - goto out; - } + if (mt.mask->flags) { + if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) { + if (mt.key->flags & FLOW_DIS_FIRST_FRAG) { + value = 1; /* initial fragment */ + mask = 0x3; + } else { + if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { + value = 3; /* follow up fragment */ + mask = 0x3; + } else { + value = 0; /* no fragment */ + mask = 0x3; + } + } + } else { + if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { + value = 3; /* follow up fragment */ + mask = 0x3; + } else { + value = 0; /* no fragment */ + mask = 0x3; + } + } - if (tcp_flags_mask & TCPHDR_URG) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_URG) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val); + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_FRAGMENT_TYPE, + value, mask); if (err) goto out; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP); + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); return err; out: - 
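The control-key handler above encodes the flower fragment flags into VCAP_KF_L3_FRAGMENT_TYPE as a 2-bit field with mask 0x3: 0 for not fragmented, 1 for an initial fragment, 3 for a follow-up fragment. A simplified standalone sketch of that mapping, assuming the corresponding mask bits are set for the flags consulted:

#include <stdio.h>

int main(void)
{
	int first_frag = 0, is_fragment = 1;	/* example key flags */
	unsigned int value = first_frag ? 1 : (is_fragment ? 3 : 0);

	printf("L3_FRAGMENT_TYPE value %u, mask 0x3\n", value); /* 3 */
	return 0;
}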
NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error"); + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); return err; } static int -sparx5_tc_flower_handler_arp_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st) { - struct flow_match_arp mt; - u16 value, mask; - u32 ipval, ipmsk; - int err; - - flow_rule_match_arp(st->frule, &mt); - - if (mt.mask->op) { - mask = 0x3; - if (st->l3_proto == ETH_P_ARP) { - value = mt.key->op == TC_ARP_OP_REQUEST ? - SPX5_IS2_ARP_REQUEST : - SPX5_IS2_ARP_REPLY; - } else { /* RARP */ - value = mt.key->op == TC_ARP_OP_REQUEST ? - SPX5_IS2_RARP_REQUEST : - SPX5_IS2_RARP_REPLY; - } - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE, - value, mask); - if (err) - goto out; - } - - /* The IS2 ARP keyset does not support ARP hardware addresses */ - if (!is_zero_ether_addr(mt.mask->sha) || - !is_zero_ether_addr(mt.mask->tha)) { - err = -EINVAL; - goto out; - } - - if (mt.mask->sip) { - ipval = be32_to_cpu((__force __be32)mt.key->sip); - ipmsk = be32_to_cpu((__force __be32)mt.mask->sip); - - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP, - ipval, ipmsk); - if (err) - goto out; - } - - if (mt.mask->tip) { - ipval = be32_to_cpu((__force __be32)mt.key->tip); - ipmsk = be32_to_cpu((__force __be32)mt.mask->tip); - - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP, - ipval, ipmsk); - if (err) - goto out; + if (st->admin->vtype != VCAP_TYPE_IS0) { + NL_SET_ERR_MSG_MOD(st->fco->common.extack, + "cvlan not supported in this VCAP"); + return -EINVAL; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP); - - return 0; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error"); - return err; + return vcap_tc_flower_handler_cvlan_usage(st); } static int -sparx5_tc_flower_handler_ip_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st) { - struct flow_match_ip mt; - int err = 0; - - flow_rule_match_ip(st->frule, &mt); + enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS; + enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS; + int err; - if (mt.mask->tos) { - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS, - mt.key->tos, - mt.mask->tos); - if (err) - goto out; + if (st->admin->vtype == VCAP_TYPE_IS0) { + vid_key = VCAP_KF_8021Q_VID0; + pcp_key = VCAP_KF_8021Q_PCP0; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP); + err = vcap_tc_flower_handler_vlan_usage(st, vid_key, pcp_key); + if (err) + return err; - return err; + if (st->admin->vtype == VCAP_TYPE_ES0 && st->tpid) + err = sparx5_tc_flower_es0_tpid(st); -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error"); return err; } -static int (*sparx5_tc_flower_usage_handlers[])(struct sparx5_tc_flower_parse_usage *st) = { - [FLOW_DISSECTOR_KEY_ETH_ADDRS] = sparx5_tc_flower_handler_ethaddr_usage, - [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = sparx5_tc_flower_handler_ipv4_usage, - [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = sparx5_tc_flower_handler_ipv6_usage, +static int (*sparx5_tc_flower_usage_handlers[])(struct vcap_tc_flower_parse_usage *st) = { + [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage, + [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage, + [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage, [FLOW_DISSECTOR_KEY_CONTROL] = sparx5_tc_flower_handler_control_usage, - [FLOW_DISSECTOR_KEY_PORTS] = sparx5_tc_flower_handler_portnum_usage, + 
[FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage, [FLOW_DISSECTOR_KEY_BASIC] = sparx5_tc_flower_handler_basic_usage, + [FLOW_DISSECTOR_KEY_CVLAN] = sparx5_tc_flower_handler_cvlan_usage, [FLOW_DISSECTOR_KEY_VLAN] = sparx5_tc_flower_handler_vlan_usage, - [FLOW_DISSECTOR_KEY_TCP] = sparx5_tc_flower_handler_tcp_usage, - [FLOW_DISSECTOR_KEY_ARP] = sparx5_tc_flower_handler_arp_usage, - [FLOW_DISSECTOR_KEY_IP] = sparx5_tc_flower_handler_ip_usage, + [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage, + [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage, + [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage, }; -static int sparx5_tc_use_dissectors(struct flow_cls_offload *fco, +static int sparx5_tc_use_dissectors(struct vcap_tc_flower_parse_usage *st, struct vcap_admin *admin, - struct vcap_rule *vrule, - u16 *l3_proto) + struct vcap_rule *vrule) { - struct sparx5_tc_flower_parse_usage state = { - .fco = fco, - .vrule = vrule, - .l3_proto = ETH_P_ALL, - }; int idx, err = 0; - state.frule = flow_cls_offload_flow_rule(fco); for (idx = 0; idx < ARRAY_SIZE(sparx5_tc_flower_usage_handlers); ++idx) { - if (!flow_rule_match_key(state.frule, idx)) + if (!flow_rule_match_key(st->frule, idx)) continue; if (!sparx5_tc_flower_usage_handlers[idx]) continue; - err = sparx5_tc_flower_usage_handlers[idx](&state); + err = sparx5_tc_flower_usage_handlers[idx](st); if (err) return err; } - if (state.frule->match.dissector->used_keys ^ state.used_keys) { - NL_SET_ERR_MSG_MOD(fco->common.extack, + if (st->frule->match.dissector->used_keys ^ st->used_keys) { + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "Unsupported match item"); return -ENOENT; } - if (l3_proto) - *l3_proto = state.l3_proto; return err; } static int sparx5_tc_flower_action_check(struct vcap_control *vctrl, + struct net_device *ndev, struct flow_cls_offload *fco, - struct vcap_admin *admin) + bool ingress) { struct flow_rule *rule = flow_cls_offload_flow_rule(fco); struct flow_action_entry *actent, *last_actent = NULL; @@ -600,21 +278,24 @@ static int sparx5_tc_flower_action_check(struct vcap_control *vctrl, last_actent = actent; /* Save last action for later check */ } - /* Check that last action is a goto */ - if (last_actent->id != FLOW_ACTION_GOTO) { + /* Check if last action is a goto + * The last chain/lookup does not need to have a goto action + */ + if (last_actent->id == FLOW_ACTION_GOTO) { + /* Check if the destination chain is in one of the VCAPs */ + if (!vcap_is_next_lookup(vctrl, fco->common.chain_index, + last_actent->chain_index)) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Invalid goto chain"); + return -EINVAL; + } + } else if (!vcap_is_last_chain(vctrl, fco->common.chain_index, + ingress)) { NL_SET_ERR_MSG_MOD(fco->common.extack, "Last action must be 'goto'"); return -EINVAL; } - /* Check if the goto chain is in the next lookup */ - if (!vcap_is_next_lookup(vctrl, fco->common.chain_index, - last_actent->chain_index)) { - NL_SET_ERR_MSG_MOD(fco->common.extack, - "Invalid goto chain"); - return -EINVAL; - } - /* Catch unsupported combinations of actions */ if (action_mask & BIT(FLOW_ACTION_TRAP) && action_mask & BIT(FLOW_ACTION_ACCEPT)) { @@ -623,21 +304,60 @@ static int sparx5_tc_flower_action_check(struct vcap_control *vctrl, return -EOPNOTSUPP; } + if (action_mask & BIT(FLOW_ACTION_VLAN_PUSH) && + action_mask & BIT(FLOW_ACTION_VLAN_POP)) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Cannot combine vlan push and pop action"); + return -EOPNOTSUPP; + } + + if (action_mask & 
BIT(FLOW_ACTION_VLAN_PUSH) && + action_mask & BIT(FLOW_ACTION_VLAN_MANGLE)) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Cannot combine vlan push and modify action"); + return -EOPNOTSUPP; + } + + if (action_mask & BIT(FLOW_ACTION_VLAN_POP) && + action_mask & BIT(FLOW_ACTION_VLAN_MANGLE)) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Cannot combine vlan pop and modify action"); + return -EOPNOTSUPP; + } + return 0; } -/* Add a rule counter action - only IS2 is considered for now */ +/* Add a rule counter action */ static int sparx5_tc_add_rule_counter(struct vcap_admin *admin, struct vcap_rule *vrule) { int err; - err = vcap_rule_mod_action_u32(vrule, VCAP_AF_CNT_ID, vrule->id); - if (err) - return err; - - vcap_rule_set_counter_id(vrule, vrule->id); - return err; + switch (admin->vtype) { + case VCAP_TYPE_IS0: + break; + case VCAP_TYPE_ES0: + err = vcap_rule_mod_action_u32(vrule, VCAP_AF_ESDX, + vrule->id); + if (err) + return err; + vcap_rule_set_counter_id(vrule, vrule->id); + break; + case VCAP_TYPE_IS2: + case VCAP_TYPE_ES2: + err = vcap_rule_mod_action_u32(vrule, VCAP_AF_CNT_ID, + vrule->id); + if (err) + return err; + vcap_rule_set_counter_id(vrule, vrule->id); + break; + default: + pr_err("%s:%d: vcap type: %d not supported\n", + __func__, __LINE__, admin->vtype); + break; + } + return 0; } /* Collect all port keysets and apply the first of them, possibly wildcarded */ @@ -818,22 +538,490 @@ static int sparx5_tc_add_remaining_rules(struct vcap_control *vctrl, return err; } +/* Add the actionset that is the default for the VCAP type */ +static int sparx5_tc_set_actionset(struct vcap_admin *admin, + struct vcap_rule *vrule) +{ + enum vcap_actionfield_set aset; + int err = 0; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + aset = VCAP_AFS_CLASSIFICATION; + break; + case VCAP_TYPE_IS2: + aset = VCAP_AFS_BASE_TYPE; + break; + case VCAP_TYPE_ES0: + aset = VCAP_AFS_ES0; + break; + case VCAP_TYPE_ES2: + aset = VCAP_AFS_BASE_TYPE; + break; + default: + pr_err("%s:%d: %s\n", __func__, __LINE__, "Invalid VCAP type"); + return -EINVAL; + } + /* Do not overwrite any current actionset */ + if (vrule->actionset == VCAP_AFS_NO_VALUE) + err = vcap_set_rule_set_actionset(vrule, aset); + return err; +} + +/* Add the VCAP key to match on for a rule target value */ +static int sparx5_tc_add_rule_link_target(struct vcap_admin *admin, + struct vcap_rule *vrule, + int target_cid) +{ + int link_val = target_cid % VCAP_CID_LOOKUP_SIZE; + int err; + + if (!link_val) + return 0; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + /* Add NXT_IDX key for chaining rules between IS0 instances */ + err = vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX_SEL, + 1, /* enable */ + ~0); + if (err) + return err; + return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX, + link_val, /* target */ + ~0); + case VCAP_TYPE_IS2: + /* Add PAG key for chaining rules from IS0 */ + return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_PAG, + link_val, /* target */ + ~0); + case VCAP_TYPE_ES0: + case VCAP_TYPE_ES2: + /* Add ISDX key for chaining rules from IS0 */ + return vcap_rule_add_key_u32(vrule, VCAP_KF_ISDX_CLS, link_val, + ~0); + default: + break; + } + return 0; +} + +/* Add the VCAP action that adds a target value to a rule */ +static int sparx5_tc_add_rule_link(struct vcap_control *vctrl, + struct vcap_admin *admin, + struct vcap_rule *vrule, + int from_cid, int to_cid) +{ + struct vcap_admin *to_admin = vcap_find_admin(vctrl, to_cid); + int diff, err = 0; + + if (!to_admin) { + pr_err("%s:%d: unsupported chain 
direction: %d\n", + __func__, __LINE__, to_cid); + return -EINVAL; + } + + diff = vcap_chain_offset(vctrl, from_cid, to_cid); + if (!diff) + return 0; + + if (admin->vtype == VCAP_TYPE_IS0 && + to_admin->vtype == VCAP_TYPE_IS0) { + /* Between IS0 instances the G_IDX value is used */ + err = vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX, diff); + if (err) + goto out; + err = vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX_CTRL, + 1); /* Replace */ + if (err) + goto out; + } else if (admin->vtype == VCAP_TYPE_IS0 && + to_admin->vtype == VCAP_TYPE_IS2) { + /* Between IS0 and IS2 the PAG value is used */ + err = vcap_rule_add_action_u32(vrule, VCAP_AF_PAG_VAL, diff); + if (err) + goto out; + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_PAG_OVERRIDE_MASK, + 0xff); + if (err) + goto out; + } else if (admin->vtype == VCAP_TYPE_IS0 && + (to_admin->vtype == VCAP_TYPE_ES0 || + to_admin->vtype == VCAP_TYPE_ES2)) { + /* Between IS0 and ES0/ES2 the ISDX value is used */ + err = vcap_rule_add_action_u32(vrule, VCAP_AF_ISDX_VAL, + diff); + if (err) + goto out; + err = vcap_rule_add_action_bit(vrule, + VCAP_AF_ISDX_ADD_REPLACE_SEL, + VCAP_BIT_1); + if (err) + goto out; + } else { + pr_err("%s:%d: unsupported chain destination: %d\n", + __func__, __LINE__, to_cid); + err = -EOPNOTSUPP; + } +out: + return err; +} + +static int sparx5_tc_flower_parse_act_gate(struct sparx5_psfp_sg *sg, + struct flow_action_entry *act, + struct netlink_ext_ack *extack) +{ + int i; + + if (act->gate.prio < -1 || act->gate.prio > SPX5_PSFP_SG_MAX_IPV) { + NL_SET_ERR_MSG_MOD(extack, "Invalid gate priority"); + return -EINVAL; + } + + if (act->gate.cycletime < SPX5_PSFP_SG_MIN_CYCLE_TIME_NS || + act->gate.cycletime > SPX5_PSFP_SG_MAX_CYCLE_TIME_NS) { + NL_SET_ERR_MSG_MOD(extack, "Invalid gate cycletime"); + return -EINVAL; + } + + if (act->gate.cycletimeext > SPX5_PSFP_SG_MAX_CYCLE_TIME_NS) { + NL_SET_ERR_MSG_MOD(extack, "Invalid gate cycletimeext"); + return -EINVAL; + } + + if (act->gate.num_entries >= SPX5_PSFP_GCE_CNT) { + NL_SET_ERR_MSG_MOD(extack, "Invalid number of gate entries"); + return -EINVAL; + } + + sg->gate_state = true; + sg->ipv = act->gate.prio; + sg->num_entries = act->gate.num_entries; + sg->cycletime = act->gate.cycletime; + sg->cycletimeext = act->gate.cycletimeext; + + for (i = 0; i < sg->num_entries; i++) { + sg->gce[i].gate_state = !!act->gate.entries[i].gate_state; + sg->gce[i].interval = act->gate.entries[i].interval; + sg->gce[i].ipv = act->gate.entries[i].ipv; + sg->gce[i].maxoctets = act->gate.entries[i].maxoctets; + } + + return 0; +} + +static int sparx5_tc_flower_parse_act_police(struct sparx5_policer *pol, + struct flow_action_entry *act, + struct netlink_ext_ack *extack) +{ + pol->type = SPX5_POL_SERVICE; + pol->rate = div_u64(act->police.rate_bytes_ps, 1000) * 8; + pol->burst = act->police.burst; + pol->idx = act->hw_index; + + /* rate is now in kbit */ + if (pol->rate > DIV_ROUND_UP(SPX5_SDLB_GROUP_RATE_MAX, 1000)) { + NL_SET_ERR_MSG_MOD(extack, "Maximum rate exceeded"); + return -EINVAL; + } + + if (act->police.exceed.act_id != FLOW_ACTION_DROP) { + NL_SET_ERR_MSG_MOD(extack, "Offload not supported when exceed action is not drop"); + return -EOPNOTSUPP; + } + + if (act->police.notexceed.act_id != FLOW_ACTION_PIPE && + act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) { + NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform action is not pipe or ok"); + return -EOPNOTSUPP; + } + + return 0; +} + +static int sparx5_tc_flower_psfp_setup(struct sparx5 *sparx5, + struct vcap_rule 
*vrule, int sg_idx,
+				       int pol_idx, struct sparx5_psfp_sg *sg,
+				       struct sparx5_psfp_fm *fm,
+				       struct sparx5_psfp_sf *sf)
+{
+	u32 psfp_sfid = 0, psfp_fmid = 0, psfp_sgid = 0;
+	int ret;
+
+	/* Must always have a stream gate - max sdu (filter option) is evaluated
+	 * after frames have passed the gate, so in case of only a policer, we
+	 * allocate a stream gate that is always open.
+	 */
+	if (sg_idx < 0) {
+		sg_idx = sparx5_pool_idx_to_id(SPX5_PSFP_SG_OPEN);
+		sg->ipv = 0; /* Disabled */
+		sg->cycletime = SPX5_PSFP_SG_CYCLE_TIME_DEFAULT;
+		sg->num_entries = 1;
+		sg->gate_state = 1; /* Open */
+		sg->gate_enabled = 1;
+		sg->gce[0].gate_state = 1;
+		sg->gce[0].interval = SPX5_PSFP_SG_CYCLE_TIME_DEFAULT;
+		sg->gce[0].ipv = 0;
+		sg->gce[0].maxoctets = 0; /* Disabled */
+	}
+
+	ret = sparx5_psfp_sg_add(sparx5, sg_idx, sg, &psfp_sgid);
+	if (ret < 0)
+		return ret;
+
+	if (pol_idx >= 0) {
+		/* Add new flow-meter */
+		ret = sparx5_psfp_fm_add(sparx5, pol_idx, fm, &psfp_fmid);
+		if (ret < 0)
+			return ret;
+	}
+
+	/* Map stream filter to stream gate */
+	sf->sgid = psfp_sgid;
+
+	/* Add new stream-filter and map it to a stream gate */
+	ret = sparx5_psfp_sf_add(sparx5, sf, &psfp_sfid);
+	if (ret < 0)
+		return ret;
+
+	/* Streams are classified by ISDX - map ISDX 1:1 to sfid for now. */
+	sparx5_isdx_conf_set(sparx5, psfp_sfid, psfp_sfid, psfp_fmid);
+
+	ret = vcap_rule_add_action_bit(vrule, VCAP_AF_ISDX_ADD_REPLACE_SEL,
+				       VCAP_BIT_1);
+	if (ret)
+		return ret;
+
+	ret = vcap_rule_add_action_u32(vrule, VCAP_AF_ISDX_VAL, psfp_sfid);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/* Handle the action trap for a VCAP rule */
+static int sparx5_tc_action_trap(struct vcap_admin *admin,
+				 struct vcap_rule *vrule,
+				 struct flow_cls_offload *fco)
+{
+	int err = 0;
+
+	switch (admin->vtype) {
+	case VCAP_TYPE_IS2:
+		err = vcap_rule_add_action_bit(vrule,
+					       VCAP_AF_CPU_COPY_ENA,
+					       VCAP_BIT_1);
+		if (err)
+			break;
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_CPU_QUEUE_NUM, 0);
+		if (err)
+			break;
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_MASK_MODE,
+					       SPX5_PMM_REPLACE_ALL);
+		break;
+	case VCAP_TYPE_ES0:
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_FWD_SEL,
+					       SPX5_FWSEL_REDIRECT_TO_LOOPBACK);
+		break;
+	case VCAP_TYPE_ES2:
+		err = vcap_rule_add_action_bit(vrule,
+					       VCAP_AF_CPU_COPY_ENA,
+					       VCAP_BIT_1);
+		if (err)
+			break;
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_CPU_QUEUE_NUM, 0);
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(fco->common.extack,
+				   "Trap action not supported in this VCAP");
+		err = -EOPNOTSUPP;
+		break;
+	}
+	return err;
+}
+
+static int sparx5_tc_action_vlan_pop(struct vcap_admin *admin,
+				     struct vcap_rule *vrule,
+				     struct flow_cls_offload *fco,
+				     u16 tpid)
+{
+	int err = 0;
+
+	switch (admin->vtype) {
+	case VCAP_TYPE_ES0:
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(fco->common.extack,
+				   "VLAN pop action not supported in this VCAP");
+		return -EOPNOTSUPP;
+	}
+
+	switch (tpid) {
+	case ETH_P_8021Q:
+	case ETH_P_8021AD:
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_PUSH_OUTER_TAG,
+					       SPX5_OTAG_UNTAG);
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(fco->common.extack,
+				   "Invalid vlan proto");
+		err = -EINVAL;
+	}
+	return err;
+}
+
+static int sparx5_tc_action_vlan_modify(struct vcap_admin *admin,
+					struct vcap_rule *vrule,
+					struct flow_cls_offload *fco,
+					struct flow_action_entry *act,
+					u16 tpid)
+{
+	int err = 0;
+
+	switch (admin->vtype) {
+	case VCAP_TYPE_ES0:
+		err = vcap_rule_add_action_u32(vrule,
+					       VCAP_AF_PUSH_OUTER_TAG,
+					       SPX5_OTAG_TAG_A);
+		if (err)
+			return err;
+		break;
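+		/* Editor's note (inferred from this patch, not authoritative):
+		 * ES0 rewrites are modelled by first selecting which tag to
+		 * emit (tag A here, via VCAP_AF_PUSH_OUTER_TAG) and then
+		 * programming that tag's TPID/VID/PCP/DEI sources one by one,
+		 * which is what the SPX5_*_A_* selectors below do.
+		 */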
default: + NL_SET_ERR_MSG_MOD(fco->common.extack, + "VLAN modify action not supported in this VCAP"); + return -EOPNOTSUPP; + } + + switch (tpid) { + case ETH_P_8021Q: + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_A_TPID_SEL, + SPX5_TPID_A_8100); + break; + case ETH_P_8021AD: + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_A_TPID_SEL, + SPX5_TPID_A_88A8); + break; + default: + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Invalid vlan proto"); + err = -EINVAL; + } + if (err) + return err; + + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_A_VID_SEL, + SPX5_VID_A_VAL); + if (err) + return err; + + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_VID_A_VAL, + act->vlan.vid); + if (err) + return err; + + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_A_PCP_SEL, + SPX5_PCP_A_VAL); + if (err) + return err; + + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_PCP_A_VAL, + act->vlan.prio); + if (err) + return err; + + return vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_A_DEI_SEL, + SPX5_DEI_A_CLASSIFIED); +} + +static int sparx5_tc_action_vlan_push(struct vcap_admin *admin, + struct vcap_rule *vrule, + struct flow_cls_offload *fco, + struct flow_action_entry *act, + u16 tpid) +{ + u16 act_tpid = be16_to_cpu(act->vlan.proto); + int err = 0; + + switch (admin->vtype) { + case VCAP_TYPE_ES0: + break; + default: + NL_SET_ERR_MSG_MOD(fco->common.extack, + "VLAN push action not supported in this VCAP"); + return -EOPNOTSUPP; + } + + if (tpid == ETH_P_8021AD) { + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Cannot push on double tagged frames"); + return -EOPNOTSUPP; + } + + err = sparx5_tc_action_vlan_modify(admin, vrule, fco, act, act_tpid); + if (err) + return err; + + switch (act_tpid) { + case ETH_P_8021Q: + break; + case ETH_P_8021AD: + /* Push classified tag as inner tag */ + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_PUSH_INNER_TAG, + SPX5_ITAG_PUSH_B_TAG); + if (err) + break; + err = vcap_rule_add_action_u32(vrule, + VCAP_AF_TAG_B_TPID_SEL, + SPX5_TPID_B_CLASSIFIED); + break; + default: + NL_SET_ERR_MSG_MOD(fco->common.extack, + "Invalid vlan proto"); + err = -EINVAL; + } + return err; +} + static int sparx5_tc_flower_replace(struct net_device *ndev, struct flow_cls_offload *fco, - struct vcap_admin *admin) + struct vcap_admin *admin, + bool ingress) { + struct sparx5_psfp_sf sf = { .max_sdu = SPX5_PSFP_SF_MAX_SDU }; + struct netlink_ext_ack *extack = fco->common.extack; + int err, idx, tc_sg_idx = -1, tc_pol_idx = -1; + struct vcap_tc_flower_parse_usage state = { + .fco = fco, + .l3_proto = ETH_P_ALL, + .admin = admin, + }; struct sparx5_port *port = netdev_priv(ndev); struct sparx5_multiple_rules multi = {}; + struct sparx5 *sparx5 = port->sparx5; + struct sparx5_psfp_sg sg = { 0 }; + struct sparx5_psfp_fm fm = { 0 }; struct flow_action_entry *act; struct vcap_control *vctrl; struct flow_rule *frule; struct vcap_rule *vrule; - u16 l3_proto; - int err, idx; vctrl = port->sparx5->vcap_ctrl; - err = sparx5_tc_flower_action_check(vctrl, fco, admin); + err = sparx5_tc_flower_action_check(vctrl, ndev, fco, ingress); if (err) return err; @@ -844,8 +1032,9 @@ static int sparx5_tc_flower_replace(struct net_device *ndev, vrule->cookie = fco->cookie; - l3_proto = ETH_P_ALL; - err = sparx5_tc_use_dissectors(fco, admin, vrule, &l3_proto); + state.vrule = vrule; + state.frule = flow_cls_offload_flow_rule(fco); + err = sparx5_tc_use_dissectors(&state, admin, vrule); if (err) goto out; @@ -853,38 +1042,69 @@ static int sparx5_tc_flower_replace(struct net_device *ndev, if (err) goto out; 
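+	/* Editor's sketch, illustrative values only: chains are linked by
+	 * encoding the numeric distance between the source and destination
+	 * chain ids into a lookup-specific action. Assuming a hypothetical
+	 * layout where an IS0 lookup owns chain 1000000 and a filter does
+	 * "goto chain 1000005":
+	 *
+	 *	diff = vcap_chain_offset(vctrl, 1000000, 1000005);	// 5
+	 *	// IS0 -> IS0: next-index action pair
+	 *	vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX, diff);
+	 *	vcap_rule_add_action_u32(vrule, VCAP_AF_NXT_IDX_CTRL, 1);
+	 *	// IS0 -> IS2 carries the offset in VCAP_AF_PAG_VAL instead,
+	 *	// and IS0 -> ES0/ES2 carries it in VCAP_AF_ISDX_VAL,
+	 *	// as in the chain-link helper earlier in this patch.
+	 */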
+ err = sparx5_tc_add_rule_link_target(admin, vrule, + fco->common.chain_index); + if (err) + goto out; + frule = flow_cls_offload_flow_rule(fco); flow_action_for_each(idx, act, &frule->action) { switch (act->id) { + case FLOW_ACTION_GATE: { + err = sparx5_tc_flower_parse_act_gate(&sg, act, extack); + if (err < 0) + goto out; + + tc_sg_idx = act->hw_index; + + break; + } + case FLOW_ACTION_POLICE: { + err = sparx5_tc_flower_parse_act_police(&fm.pol, act, + extack); + if (err < 0) + goto out; + + tc_pol_idx = fm.pol.idx; + sf.max_sdu = act->police.mtu; + + break; + } case FLOW_ACTION_TRAP: - err = vcap_rule_add_action_bit(vrule, - VCAP_AF_CPU_COPY_ENA, - VCAP_BIT_1); + err = sparx5_tc_action_trap(admin, vrule, fco); if (err) goto out; - err = vcap_rule_add_action_u32(vrule, - VCAP_AF_CPU_QUEUE_NUM, 0); + break; + case FLOW_ACTION_ACCEPT: + err = sparx5_tc_set_actionset(admin, vrule); if (err) goto out; - err = vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE, - SPX5_PMM_REPLACE_ALL); + break; + case FLOW_ACTION_GOTO: + err = sparx5_tc_set_actionset(admin, vrule); if (err) goto out; - /* For now the actionset is hardcoded */ - err = vcap_set_rule_set_actionset(vrule, - VCAP_AFS_BASE_TYPE); + sparx5_tc_add_rule_link(vctrl, admin, vrule, + fco->common.chain_index, + act->chain_index); + break; + case FLOW_ACTION_VLAN_POP: + err = sparx5_tc_action_vlan_pop(admin, vrule, fco, + state.tpid); if (err) goto out; break; - case FLOW_ACTION_ACCEPT: - /* For now the actionset is hardcoded */ - err = vcap_set_rule_set_actionset(vrule, - VCAP_AFS_BASE_TYPE); + case FLOW_ACTION_VLAN_PUSH: + err = sparx5_tc_action_vlan_push(admin, vrule, fco, + act, state.tpid); if (err) goto out; break; - case FLOW_ACTION_GOTO: - /* Links between VCAPs will be added later */ + case FLOW_ACTION_VLAN_MANGLE: + err = sparx5_tc_action_vlan_modify(admin, vrule, fco, + act, state.tpid); + if (err) + goto out; break; default: NL_SET_ERR_MSG_MOD(fco->common.extack, @@ -894,8 +1114,16 @@ static int sparx5_tc_flower_replace(struct net_device *ndev, } } - err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin, l3_proto, - &multi); + /* Setup PSFP */ + if (tc_sg_idx >= 0 || tc_pol_idx >= 0) { + err = sparx5_tc_flower_psfp_setup(sparx5, vrule, tc_sg_idx, + tc_pol_idx, &sg, &fm, &sf); + if (err) + goto out; + } + + err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin, + state.l3_proto, &multi); if (err) { NL_SET_ERR_MSG_MOD(fco->common.extack, "No matching port keyset for filter protocol and keys"); @@ -903,7 +1131,7 @@ static int sparx5_tc_flower_replace(struct net_device *ndev, } /* provide the l3 protocol to guide the keyset selection */ - err = vcap_val_rule(vrule, l3_proto); + err = vcap_val_rule(vrule, state.l3_proto); if (err) { vcap_set_tc_exterr(fco, vrule); goto out; @@ -913,7 +1141,7 @@ static int sparx5_tc_flower_replace(struct net_device *ndev, NL_SET_ERR_MSG_MOD(fco->common.extack, "Could not add the filter"); - if (l3_proto == ETH_P_ALL) + if (state.l3_proto == ETH_P_ALL) err = sparx5_tc_add_remaining_rules(vctrl, fco, vrule, admin, &multi); @@ -922,19 +1150,86 @@ out: return err; } +static void sparx5_tc_free_psfp_resources(struct sparx5 *sparx5, + struct vcap_rule *vrule) +{ + struct vcap_client_actionfield *afield; + u32 isdx, sfid, sgid, fmid; + + /* Check if VCAP_AF_ISDX_VAL action is set for this rule - and if + * it is used for stream and/or flow-meter classification. 
+ */ + afield = vcap_find_actionfield(vrule, VCAP_AF_ISDX_VAL); + if (!afield) + return; + + isdx = afield->data.u32.value; + sfid = sparx5_psfp_isdx_get_sf(sparx5, isdx); + + if (!sfid) + return; + + fmid = sparx5_psfp_isdx_get_fm(sparx5, isdx); + sgid = sparx5_psfp_sf_get_sg(sparx5, sfid); + + if (fmid && sparx5_psfp_fm_del(sparx5, fmid) < 0) + pr_err("%s:%d Could not delete invalid fmid: %d", __func__, + __LINE__, fmid); + + if (sgid && sparx5_psfp_sg_del(sparx5, sgid) < 0) + pr_err("%s:%d Could not delete invalid sgid: %d", __func__, + __LINE__, sgid); + + if (sparx5_psfp_sf_del(sparx5, sfid) < 0) + pr_err("%s:%d Could not delete invalid sfid: %d", __func__, + __LINE__, sfid); + + sparx5_isdx_conf_set(sparx5, isdx, 0, 0); +} + +static int sparx5_tc_free_rule_resources(struct net_device *ndev, + struct vcap_control *vctrl, + int rule_id) +{ + struct sparx5_port *port = netdev_priv(ndev); + struct sparx5 *sparx5 = port->sparx5; + struct vcap_rule *vrule; + int ret = 0; + + vrule = vcap_get_rule(vctrl, rule_id); + if (!vrule || IS_ERR(vrule)) + return -EINVAL; + + sparx5_tc_free_psfp_resources(sparx5, vrule); + + vcap_free_rule(vrule); + return ret; +} + static int sparx5_tc_flower_destroy(struct net_device *ndev, struct flow_cls_offload *fco, struct vcap_admin *admin) { struct sparx5_port *port = netdev_priv(ndev); + int err = -ENOENT, count = 0, rule_id; struct vcap_control *vctrl; - int err = -ENOENT, rule_id; vctrl = port->sparx5->vcap_ctrl; while (true) { rule_id = vcap_lookup_rule_by_cookie(vctrl, fco->cookie); if (rule_id <= 0) break; + if (count == 0) { + /* Resources are attached to the first rule of + * a set of rules. Only works if the rules are + * in the correct order. + */ + err = sparx5_tc_free_rule_resources(ndev, vctrl, + rule_id); + if (err) + pr_err("%s:%d: could not free resources %d\n", + __func__, __LINE__, rule_id); + } err = vcap_del_rule(vctrl, ndev, rule_id); if (err) { pr_err("%s:%d: could not delete rule %d\n", @@ -945,44 +1240,21 @@ static int sparx5_tc_flower_destroy(struct net_device *ndev, return err; } -/* Collect packet counts from all rules with the same cookie */ -static int sparx5_tc_rule_counter_cb(void *arg, struct vcap_rule *rule) -{ - struct sparx5_tc_rule_pkt_cnt *rinfo = arg; - struct vcap_counter counter; - int err = 0; - - if (rule->cookie == rinfo->cookie) { - err = vcap_rule_get_counter(rule, &counter); - if (err) - return err; - rinfo->pkts += counter.value; - /* Reset the rule counter */ - counter.value = 0; - vcap_rule_set_counter(rule, &counter); - } - return err; -} - static int sparx5_tc_flower_stats(struct net_device *ndev, struct flow_cls_offload *fco, struct vcap_admin *admin) { struct sparx5_port *port = netdev_priv(ndev); - struct sparx5_tc_rule_pkt_cnt rinfo = {}; + struct vcap_counter ctr = {}; struct vcap_control *vctrl; ulong lastused = 0; - u64 drops = 0; - u32 pkts = 0; int err; - rinfo.cookie = fco->cookie; vctrl = port->sparx5->vcap_ctrl; - err = vcap_rule_iter(vctrl, sparx5_tc_rule_counter_cb, &rinfo); + err = vcap_get_rule_count_by_cookie(vctrl, &ctr, fco->cookie); if (err) return err; - pkts = rinfo.pkts; - flow_stats_update(&fco->stats, 0x0, pkts, drops, lastused, + flow_stats_update(&fco->stats, 0x0, ctr.value, 0, lastused, FLOW_ACTION_HW_STATS_IMMEDIATE); return err; } @@ -1005,7 +1277,7 @@ int sparx5_tc_flower(struct net_device *ndev, struct flow_cls_offload *fco, switch (fco->command) { case FLOW_CLS_REPLACE: - return sparx5_tc_flower_replace(ndev, fco, admin); + return sparx5_tc_flower_replace(ndev, fco, admin, 
ingress); case FLOW_CLS_DESTROY: return sparx5_tc_flower_destroy(ndev, fco, admin); case FLOW_CLS_STATS: diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c index 30dd61e5d150..d88a93f22606 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_matchall.c @@ -31,6 +31,7 @@ static int sparx5_tc_matchall_replace(struct net_device *ndev, switch (action->id) { case FLOW_ACTION_GOTO: err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev, + tmo->common.chain_index, action->chain_index, tmo->cookie, true); if (err == -EFAULT) { @@ -43,6 +44,11 @@ static int sparx5_tc_matchall_replace(struct net_device *ndev, "VCAP already enabled"); return -EOPNOTSUPP; } + if (err == -EADDRNOTAVAIL) { + NL_SET_ERR_MSG_MOD(tmo->common.extack, + "Already matching this chain"); + return -EOPNOTSUPP; + } if (err) { NL_SET_ERR_MSG_MOD(tmo->common.extack, "Could not enable VCAP lookups"); @@ -66,8 +72,8 @@ static int sparx5_tc_matchall_destroy(struct net_device *ndev, sparx5 = port->sparx5; if (!tmo->rule && tmo->cookie) { - err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev, 0, - tmo->cookie, false); + err = vcap_enable_lookups(sparx5->vcap_ctrl, ndev, + 0, 0, tmo->cookie, false); if (err) return err; return 0; @@ -80,12 +86,6 @@ int sparx5_tc_matchall(struct net_device *ndev, struct tc_cls_matchall_offload *tmo, bool ingress) { - if (!tc_cls_can_offload_and_chain0(ndev, &tmo->common)) { - NL_SET_ERR_MSG_MOD(tmo->common.extack, - "Only chain zero is supported"); - return -EOPNOTSUPP; - } - switch (tmo->command) { case TC_CLSMATCHALL_REPLACE: return sparx5_tc_matchall_replace(ndev, tmo, ingress); diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c index 1bd987c664e8..556d6ea0acd1 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c @@ -1,10 +1,10 @@ // SPDX-License-Identifier: BSD-3-Clause -/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries. +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. * Microchip VCAP API */ -/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200. - * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86 +/* This file is autogenerated by cml-utils 2023-02-10 11:15:56 +0100. 
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada */ #include <linux/types.h> @@ -14,6 +14,372 @@ #include "sparx5_vcap_ag_api.h" /* keyfields */ +static const struct vcap_field is0_normal_7tuple_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 1, + .width = 1, + }, + [VCAP_KF_LOOKUP_GEN_IDX_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 2, + .width = 2, + }, + [VCAP_KF_LOOKUP_GEN_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 4, + .width = 12, + }, + [VCAP_KF_IF_IGR_PORT_MASK_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 2, + }, + [VCAP_KF_IF_IGR_PORT_MASK] = { + .type = VCAP_FIELD_U72, + .offset = 18, + .width = 65, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 83, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 84, + .width = 1, + }, + [VCAP_KF_8021Q_VLAN_TAGS] = { + .type = VCAP_FIELD_U32, + .offset = 85, + .width = 3, + }, + [VCAP_KF_8021Q_TPID0] = { + .type = VCAP_FIELD_U32, + .offset = 88, + .width = 3, + }, + [VCAP_KF_8021Q_PCP0] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_8021Q_DEI0] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_8021Q_VID0] = { + .type = VCAP_FIELD_U32, + .offset = 95, + .width = 12, + }, + [VCAP_KF_8021Q_TPID1] = { + .type = VCAP_FIELD_U32, + .offset = 107, + .width = 3, + }, + [VCAP_KF_8021Q_PCP1] = { + .type = VCAP_FIELD_U32, + .offset = 110, + .width = 3, + }, + [VCAP_KF_8021Q_DEI1] = { + .type = VCAP_FIELD_BIT, + .offset = 113, + .width = 1, + }, + [VCAP_KF_8021Q_VID1] = { + .type = VCAP_FIELD_U32, + .offset = 114, + .width = 12, + }, + [VCAP_KF_8021Q_TPID2] = { + .type = VCAP_FIELD_U32, + .offset = 126, + .width = 3, + }, + [VCAP_KF_8021Q_PCP2] = { + .type = VCAP_FIELD_U32, + .offset = 129, + .width = 3, + }, + [VCAP_KF_8021Q_DEI2] = { + .type = VCAP_FIELD_BIT, + .offset = 132, + .width = 1, + }, + [VCAP_KF_8021Q_VID2] = { + .type = VCAP_FIELD_U32, + .offset = 133, + .width = 12, + }, + [VCAP_KF_L2_DMAC] = { + .type = VCAP_FIELD_U48, + .offset = 145, + .width = 48, + }, + [VCAP_KF_L2_SMAC] = { + .type = VCAP_FIELD_U48, + .offset = 193, + .width = 48, + }, + [VCAP_KF_IP_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 241, + .width = 1, + }, + [VCAP_KF_ETYPE_LEN_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 242, + .width = 1, + }, + [VCAP_KF_ETYPE] = { + .type = VCAP_FIELD_U32, + .offset = 243, + .width = 16, + }, + [VCAP_KF_IP_SNAP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 259, + .width = 1, + }, + [VCAP_KF_IP4_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 260, + .width = 1, + }, + [VCAP_KF_L3_FRAGMENT_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 261, + .width = 2, + }, + [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = { + .type = VCAP_FIELD_BIT, + .offset = 263, + .width = 1, + }, + [VCAP_KF_L3_OPTIONS_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 264, + .width = 1, + }, + [VCAP_KF_L3_DSCP] = { + .type = VCAP_FIELD_U32, + .offset = 265, + .width = 6, + }, + [VCAP_KF_L3_IP6_DIP] = { + .type = VCAP_FIELD_U128, + .offset = 271, + .width = 128, + }, + [VCAP_KF_L3_IP6_SIP] = { + .type = VCAP_FIELD_U128, + .offset = 399, + .width = 128, + }, + [VCAP_KF_TCP_UDP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 527, + .width = 1, + }, + [VCAP_KF_TCP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 528, + .width = 1, + }, + [VCAP_KF_L4_SPORT] = { + .type = VCAP_FIELD_U32, + .offset = 529, + .width = 16, + }, + [VCAP_KF_L4_RNG] = 
{ + .type = VCAP_FIELD_U32, + .offset = 545, + .width = 8, + }, +}; + +static const struct vcap_field is0_normal_5tuple_ip4_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 2, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 2, + .width = 1, + }, + [VCAP_KF_LOOKUP_GEN_IDX_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 3, + .width = 2, + }, + [VCAP_KF_LOOKUP_GEN_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 5, + .width = 12, + }, + [VCAP_KF_IF_IGR_PORT_MASK_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 17, + .width = 2, + }, + [VCAP_KF_IF_IGR_PORT_MASK] = { + .type = VCAP_FIELD_U72, + .offset = 19, + .width = 65, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 84, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 85, + .width = 1, + }, + [VCAP_KF_8021Q_VLAN_TAGS] = { + .type = VCAP_FIELD_U32, + .offset = 86, + .width = 3, + }, + [VCAP_KF_8021Q_TPID0] = { + .type = VCAP_FIELD_U32, + .offset = 89, + .width = 3, + }, + [VCAP_KF_8021Q_PCP0] = { + .type = VCAP_FIELD_U32, + .offset = 92, + .width = 3, + }, + [VCAP_KF_8021Q_DEI0] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_8021Q_VID0] = { + .type = VCAP_FIELD_U32, + .offset = 96, + .width = 12, + }, + [VCAP_KF_8021Q_TPID1] = { + .type = VCAP_FIELD_U32, + .offset = 108, + .width = 3, + }, + [VCAP_KF_8021Q_PCP1] = { + .type = VCAP_FIELD_U32, + .offset = 111, + .width = 3, + }, + [VCAP_KF_8021Q_DEI1] = { + .type = VCAP_FIELD_BIT, + .offset = 114, + .width = 1, + }, + [VCAP_KF_8021Q_VID1] = { + .type = VCAP_FIELD_U32, + .offset = 115, + .width = 12, + }, + [VCAP_KF_8021Q_TPID2] = { + .type = VCAP_FIELD_U32, + .offset = 127, + .width = 3, + }, + [VCAP_KF_8021Q_PCP2] = { + .type = VCAP_FIELD_U32, + .offset = 130, + .width = 3, + }, + [VCAP_KF_8021Q_DEI2] = { + .type = VCAP_FIELD_BIT, + .offset = 133, + .width = 1, + }, + [VCAP_KF_8021Q_VID2] = { + .type = VCAP_FIELD_U32, + .offset = 134, + .width = 12, + }, + [VCAP_KF_IP_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 146, + .width = 1, + }, + [VCAP_KF_IP4_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 147, + .width = 1, + }, + [VCAP_KF_L3_FRAGMENT_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 148, + .width = 2, + }, + [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = { + .type = VCAP_FIELD_BIT, + .offset = 150, + .width = 1, + }, + [VCAP_KF_L3_OPTIONS_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 151, + .width = 1, + }, + [VCAP_KF_L3_DSCP] = { + .type = VCAP_FIELD_U32, + .offset = 152, + .width = 6, + }, + [VCAP_KF_L3_IP4_DIP] = { + .type = VCAP_FIELD_U32, + .offset = 158, + .width = 32, + }, + [VCAP_KF_L3_IP4_SIP] = { + .type = VCAP_FIELD_U32, + .offset = 190, + .width = 32, + }, + [VCAP_KF_L3_IP_PROTO] = { + .type = VCAP_FIELD_U32, + .offset = 222, + .width = 8, + }, + [VCAP_KF_TCP_UDP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 230, + .width = 1, + }, + [VCAP_KF_TCP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 231, + .width = 1, + }, + [VCAP_KF_L4_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 232, + .width = 8, + }, + [VCAP_KF_IP_PAYLOAD_5TUPLE] = { + .type = VCAP_FIELD_U32, + .offset = 240, + .width = 32, + }, +}; + static const struct vcap_field is2_mac_etype_keyfield[] = { [VCAP_KF_TYPE] = { .type = VCAP_FIELD_U32, @@ -967,7 +1333,971 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = { }, }; +static const struct vcap_field es0_isdx_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + 
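+	/* Editor's note: each entry in these autogenerated tables records
+	 * the bit position and width of one field inside the packed
+	 * key/action stream. A consuming sketch (illustrative only):
+	 *
+	 *	const struct vcap_field *f =
+	 *		&es0_isdx_keyfield[VCAP_KF_8021Q_VID_CLS];
+	 *	// f->offset == 8, f->width == 13: an encoder copies 13 bits
+	 *	// of the value/mask pair into the ISDX key at bit 8.
+	 */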
[VCAP_KF_IF_EGR_PORT_NO] = { + .type = VCAP_FIELD_U32, + .offset = 1, + .width = 7, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 8, + .width = 13, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 21, + .width = 3, + }, + [VCAP_KF_8021Q_TPID] = { + .type = VCAP_FIELD_U32, + .offset = 24, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 27, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_PROT_ACTIVE] = { + .type = VCAP_FIELD_BIT, + .offset = 29, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 39, + .width = 12, + }, +}; + +static const struct vcap_field es2_mac_etype_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_L2_DMAC] = { + .type = VCAP_FIELD_U48, + .offset = 99, + .width = 48, + }, + [VCAP_KF_L2_SMAC] = { + .type = VCAP_FIELD_U48, + .offset = 147, + .width = 48, + }, + [VCAP_KF_ETYPE_LEN_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 195, + .width = 1, + }, + [VCAP_KF_ETYPE] = { + .type = VCAP_FIELD_U32, + .offset = 196, + .width = 16, + }, + [VCAP_KF_L2_PAYLOAD_ETYPE] = { + .type = VCAP_FIELD_U64, + .offset = 212, + .width = 64, + }, + [VCAP_KF_OAM_CCM_CNTS_EQ0] = { + .type = VCAP_FIELD_BIT, + .offset = 276, + .width = 1, + }, + [VCAP_KF_OAM_Y1731_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 277, + .width = 1, + }, +}; + +static const struct vcap_field es2_arp_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { 
+ .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L2_SMAC] = { + .type = VCAP_FIELD_U48, + .offset = 98, + .width = 48, + }, + [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 146, + .width = 1, + }, + [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 147, + .width = 1, + }, + [VCAP_KF_ARP_LEN_OK_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 148, + .width = 1, + }, + [VCAP_KF_ARP_TGT_MATCH_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 149, + .width = 1, + }, + [VCAP_KF_ARP_SENDER_MATCH_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 150, + .width = 1, + }, + [VCAP_KF_ARP_OPCODE_UNKNOWN_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 151, + .width = 1, + }, + [VCAP_KF_ARP_OPCODE] = { + .type = VCAP_FIELD_U32, + .offset = 152, + .width = 2, + }, + [VCAP_KF_L3_IP4_DIP] = { + .type = VCAP_FIELD_U32, + .offset = 154, + .width = 32, + }, + [VCAP_KF_L3_IP4_SIP] = { + .type = VCAP_FIELD_U32, + .offset = 186, + .width = 32, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 218, + .width = 1, + }, +}; + +static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_IP4_IS] = { + .type = 
VCAP_FIELD_BIT, + .offset = 99, + .width = 1, + }, + [VCAP_KF_L3_FRAGMENT_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 100, + .width = 2, + }, + [VCAP_KF_L3_OPTIONS_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 102, + .width = 1, + }, + [VCAP_KF_L3_TTL_GT0] = { + .type = VCAP_FIELD_BIT, + .offset = 103, + .width = 1, + }, + [VCAP_KF_L3_TOS] = { + .type = VCAP_FIELD_U32, + .offset = 104, + .width = 8, + }, + [VCAP_KF_L3_IP4_DIP] = { + .type = VCAP_FIELD_U32, + .offset = 112, + .width = 32, + }, + [VCAP_KF_L3_IP4_SIP] = { + .type = VCAP_FIELD_U32, + .offset = 144, + .width = 32, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 176, + .width = 1, + }, + [VCAP_KF_TCP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 177, + .width = 1, + }, + [VCAP_KF_L4_DPORT] = { + .type = VCAP_FIELD_U32, + .offset = 178, + .width = 16, + }, + [VCAP_KF_L4_SPORT] = { + .type = VCAP_FIELD_U32, + .offset = 194, + .width = 16, + }, + [VCAP_KF_L4_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 210, + .width = 16, + }, + [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 226, + .width = 1, + }, + [VCAP_KF_L4_SEQUENCE_EQ0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 227, + .width = 1, + }, + [VCAP_KF_L4_FIN] = { + .type = VCAP_FIELD_BIT, + .offset = 228, + .width = 1, + }, + [VCAP_KF_L4_SYN] = { + .type = VCAP_FIELD_BIT, + .offset = 229, + .width = 1, + }, + [VCAP_KF_L4_RST] = { + .type = VCAP_FIELD_BIT, + .offset = 230, + .width = 1, + }, + [VCAP_KF_L4_PSH] = { + .type = VCAP_FIELD_BIT, + .offset = 231, + .width = 1, + }, + [VCAP_KF_L4_ACK] = { + .type = VCAP_FIELD_BIT, + .offset = 232, + .width = 1, + }, + [VCAP_KF_L4_URG] = { + .type = VCAP_FIELD_BIT, + .offset = 233, + .width = 1, + }, + [VCAP_KF_L4_PAYLOAD] = { + .type = VCAP_FIELD_U64, + .offset = 234, + .width = 64, + }, +}; + +static const struct vcap_field es2_ip4_other_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_IP4_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 99, + .width = 1, + }, + [VCAP_KF_L3_FRAGMENT_TYPE] = { + .type = 
VCAP_FIELD_U32, + .offset = 100, + .width = 2, + }, + [VCAP_KF_L3_OPTIONS_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 102, + .width = 1, + }, + [VCAP_KF_L3_TTL_GT0] = { + .type = VCAP_FIELD_BIT, + .offset = 103, + .width = 1, + }, + [VCAP_KF_L3_TOS] = { + .type = VCAP_FIELD_U32, + .offset = 104, + .width = 8, + }, + [VCAP_KF_L3_IP4_DIP] = { + .type = VCAP_FIELD_U32, + .offset = 112, + .width = 32, + }, + [VCAP_KF_L3_IP4_SIP] = { + .type = VCAP_FIELD_U32, + .offset = 144, + .width = 32, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 176, + .width = 1, + }, + [VCAP_KF_L3_IP_PROTO] = { + .type = VCAP_FIELD_U32, + .offset = 177, + .width = 8, + }, + [VCAP_KF_L3_PAYLOAD] = { + .type = VCAP_FIELD_U112, + .offset = 185, + .width = 96, + }, +}; + +static const struct vcap_field es2_ip_7tuple_keyfield[] = { + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 10, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 11, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 12, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 13, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 25, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 26, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 39, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 74, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 75, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 84, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 87, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 88, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 91, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 92, + .width = 1, + }, + [VCAP_KF_L2_DMAC] = { + .type = VCAP_FIELD_U48, + .offset = 96, + .width = 48, + }, + [VCAP_KF_L2_SMAC] = { + .type = VCAP_FIELD_U48, + .offset = 144, + .width = 48, + }, + [VCAP_KF_IP4_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 192, + .width = 1, + }, + [VCAP_KF_L3_TTL_GT0] = { + .type = VCAP_FIELD_BIT, + .offset = 193, + .width = 1, + }, + [VCAP_KF_L3_TOS] = { + .type = VCAP_FIELD_U32, + .offset = 194, + .width = 8, + }, + [VCAP_KF_L3_IP6_DIP] = { + .type = VCAP_FIELD_U128, + .offset = 202, + .width = 128, + }, + [VCAP_KF_L3_IP6_SIP] = { + .type = VCAP_FIELD_U128, + .offset = 330, + .width = 128, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 458, + .width = 1, + }, + [VCAP_KF_TCP_UDP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 459, + .width = 1, + }, + [VCAP_KF_TCP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 460, + .width = 1, + }, + [VCAP_KF_L4_DPORT] = { + .type = VCAP_FIELD_U32, + .offset = 461, + .width = 16, + }, + [VCAP_KF_L4_SPORT] = { + .type = VCAP_FIELD_U32, + .offset = 477, + .width = 16, + }, + [VCAP_KF_L4_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 493, + .width = 16, + }, + [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 509, + .width = 1, + }, + 
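+	/* Editor's note: IP_7TUPLE is the only full-width (X12) ES2 key in
+	 * this set; it carries no VCAP_KF_TYPE field and is identified by
+	 * the X12 typegroup bits alone, which matches
+	 * es2_keyfield_set[VCAP_KFS_IP_7TUPLE].type_id == -1 further down.
+	 */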
[VCAP_KF_L4_SEQUENCE_EQ0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 510, + .width = 1, + }, + [VCAP_KF_L4_FIN] = { + .type = VCAP_FIELD_BIT, + .offset = 511, + .width = 1, + }, + [VCAP_KF_L4_SYN] = { + .type = VCAP_FIELD_BIT, + .offset = 512, + .width = 1, + }, + [VCAP_KF_L4_RST] = { + .type = VCAP_FIELD_BIT, + .offset = 513, + .width = 1, + }, + [VCAP_KF_L4_PSH] = { + .type = VCAP_FIELD_BIT, + .offset = 514, + .width = 1, + }, + [VCAP_KF_L4_ACK] = { + .type = VCAP_FIELD_BIT, + .offset = 515, + .width = 1, + }, + [VCAP_KF_L4_URG] = { + .type = VCAP_FIELD_BIT, + .offset = 516, + .width = 1, + }, + [VCAP_KF_L4_PAYLOAD] = { + .type = VCAP_FIELD_U64, + .offset = 517, + .width = 64, + }, +}; + +static const struct vcap_field es2_ip6_std_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + [VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_L3_TTL_GT0] = { + .type = VCAP_FIELD_BIT, + .offset = 99, + .width = 1, + }, + [VCAP_KF_L3_IP6_SIP] = { + .type = VCAP_FIELD_U128, + .offset = 100, + .width = 128, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 228, + .width = 1, + }, + [VCAP_KF_L3_IP_PROTO] = { + .type = VCAP_FIELD_U32, + .offset = 229, + .width = 8, + }, + [VCAP_KF_L4_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 237, + .width = 16, + }, + [VCAP_KF_L3_PAYLOAD] = { + .type = VCAP_FIELD_U48, + .offset = 253, + .width = 40, + }, +}; + /* keyfield_set */ +static const struct vcap_set is0_keyfield_set[] = { + [VCAP_KFS_NORMAL_7TUPLE] = { + .type_id = 0, + .sw_per_item = 12, + .sw_cnt = 1, + }, + [VCAP_KFS_NORMAL_5TUPLE_IP4] = { + .type_id = 2, + .sw_per_item = 6, + .sw_cnt = 2, + }, +}; + static const struct vcap_set is2_keyfield_set[] = { [VCAP_KFS_MAC_ETYPE] = { .type_id = 0, @@ -1001,7 +2331,53 @@ static const struct vcap_set is2_keyfield_set[] = { }, }; +static const struct vcap_set es0_keyfield_set[] = { + [VCAP_KFS_ISDX] = { + .type_id = 0, + .sw_per_item = 1, + .sw_cnt = 1, + }, +}; + +static const struct vcap_set es2_keyfield_set[] = { + [VCAP_KFS_MAC_ETYPE] = { + .type_id = 0, + .sw_per_item = 6, + .sw_cnt = 2, + }, + [VCAP_KFS_ARP] 
= { + .type_id = 1, + .sw_per_item = 6, + .sw_cnt = 2, + }, + [VCAP_KFS_IP4_TCP_UDP] = { + .type_id = 2, + .sw_per_item = 6, + .sw_cnt = 2, + }, + [VCAP_KFS_IP4_OTHER] = { + .type_id = 3, + .sw_per_item = 6, + .sw_cnt = 2, + }, + [VCAP_KFS_IP_7TUPLE] = { + .type_id = -1, + .sw_per_item = 12, + .sw_cnt = 1, + }, + [VCAP_KFS_IP6_STD] = { + .type_id = 4, + .sw_per_item = 6, + .sw_cnt = 2, + }, +}; + /* keyfield_set map */ +static const struct vcap_field *is0_keyfield_set_map[] = { + [VCAP_KFS_NORMAL_7TUPLE] = is0_normal_7tuple_keyfield, + [VCAP_KFS_NORMAL_5TUPLE_IP4] = is0_normal_5tuple_ip4_keyfield, +}; + static const struct vcap_field *is2_keyfield_set_map[] = { [VCAP_KFS_MAC_ETYPE] = is2_mac_etype_keyfield, [VCAP_KFS_ARP] = is2_arp_keyfield, @@ -1011,7 +2387,25 @@ static const struct vcap_field *is2_keyfield_set_map[] = { [VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield, }; +static const struct vcap_field *es0_keyfield_set_map[] = { + [VCAP_KFS_ISDX] = es0_isdx_keyfield, +}; + +static const struct vcap_field *es2_keyfield_set_map[] = { + [VCAP_KFS_MAC_ETYPE] = es2_mac_etype_keyfield, + [VCAP_KFS_ARP] = es2_arp_keyfield, + [VCAP_KFS_IP4_TCP_UDP] = es2_ip4_tcp_udp_keyfield, + [VCAP_KFS_IP4_OTHER] = es2_ip4_other_keyfield, + [VCAP_KFS_IP_7TUPLE] = es2_ip_7tuple_keyfield, + [VCAP_KFS_IP6_STD] = es2_ip6_std_keyfield, +}; + /* keyfield_set map sizes */ +static int is0_keyfield_set_map_size[] = { + [VCAP_KFS_NORMAL_7TUPLE] = ARRAY_SIZE(is0_normal_7tuple_keyfield), + [VCAP_KFS_NORMAL_5TUPLE_IP4] = ARRAY_SIZE(is0_normal_5tuple_ip4_keyfield), +}; + static int is2_keyfield_set_map_size[] = { [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(is2_mac_etype_keyfield), [VCAP_KFS_ARP] = ARRAY_SIZE(is2_arp_keyfield), @@ -1021,7 +2415,319 @@ static int is2_keyfield_set_map_size[] = { [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield), }; +static int es0_keyfield_set_map_size[] = { + [VCAP_KFS_ISDX] = ARRAY_SIZE(es0_isdx_keyfield), +}; + +static int es2_keyfield_set_map_size[] = { + [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(es2_mac_etype_keyfield), + [VCAP_KFS_ARP] = ARRAY_SIZE(es2_arp_keyfield), + [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(es2_ip4_tcp_udp_keyfield), + [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(es2_ip4_other_keyfield), + [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(es2_ip_7tuple_keyfield), + [VCAP_KFS_IP6_STD] = ARRAY_SIZE(es2_ip6_std_keyfield), +}; + /* actionfields */ +static const struct vcap_field is0_classification_actionfield[] = { + [VCAP_AF_TYPE] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_AF_DSCP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 1, + .width = 1, + }, + [VCAP_AF_DSCP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 2, + .width = 6, + }, + [VCAP_AF_QOS_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 12, + .width = 1, + }, + [VCAP_AF_QOS_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 13, + .width = 3, + }, + [VCAP_AF_DP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 16, + .width = 1, + }, + [VCAP_AF_DP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 17, + .width = 2, + }, + [VCAP_AF_DEI_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 19, + .width = 1, + }, + [VCAP_AF_DEI_VAL] = { + .type = VCAP_FIELD_BIT, + .offset = 20, + .width = 1, + }, + [VCAP_AF_PCP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 21, + .width = 1, + }, + [VCAP_AF_PCP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 22, + .width = 3, + }, + [VCAP_AF_MAP_LOOKUP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 25, + .width = 2, + }, + [VCAP_AF_MAP_KEY] = { + .type = VCAP_FIELD_U32, + .offset = 27, + .width = 3, + }, + 
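+	/* Editor's sketch: the IS0 link actions added earlier in this patch
+	 * are encoded through this table; e.g. after
+	 *
+	 *	vcap_rule_add_action_u32(vrule, VCAP_AF_PAG_VAL, diff);
+	 *
+	 * the encoder writes an 8-bit value at bit offset 117 of the
+	 * CLASSIFICATION action word (see VCAP_AF_PAG_VAL below).
+	 */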
[VCAP_AF_MAP_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 30, + .width = 9, + }, + [VCAP_AF_CLS_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 39, + .width = 3, + }, + [VCAP_AF_VID_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 13, + }, + [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 68, + .width = 1, + }, + [VCAP_AF_ISDX_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 69, + .width = 12, + }, + [VCAP_AF_PAG_OVERRIDE_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 109, + .width = 8, + }, + [VCAP_AF_PAG_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 117, + .width = 8, + }, + [VCAP_AF_NXT_IDX_CTRL] = { + .type = VCAP_FIELD_U32, + .offset = 171, + .width = 3, + }, + [VCAP_AF_NXT_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 174, + .width = 12, + }, +}; + +static const struct vcap_field is0_full_actionfield[] = { + [VCAP_AF_DSCP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_AF_DSCP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 1, + .width = 6, + }, + [VCAP_AF_QOS_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 11, + .width = 1, + }, + [VCAP_AF_QOS_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 12, + .width = 3, + }, + [VCAP_AF_DP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_AF_DP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 2, + }, + [VCAP_AF_DEI_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 18, + .width = 1, + }, + [VCAP_AF_DEI_VAL] = { + .type = VCAP_FIELD_BIT, + .offset = 19, + .width = 1, + }, + [VCAP_AF_PCP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 20, + .width = 1, + }, + [VCAP_AF_PCP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 21, + .width = 3, + }, + [VCAP_AF_MAP_LOOKUP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 24, + .width = 2, + }, + [VCAP_AF_MAP_KEY] = { + .type = VCAP_FIELD_U32, + .offset = 26, + .width = 3, + }, + [VCAP_AF_MAP_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 9, + }, + [VCAP_AF_CLS_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 38, + .width = 3, + }, + [VCAP_AF_VID_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 44, + .width = 13, + }, + [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 67, + .width = 1, + }, + [VCAP_AF_ISDX_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 68, + .width = 12, + }, + [VCAP_AF_MASK_MODE] = { + .type = VCAP_FIELD_U32, + .offset = 80, + .width = 3, + }, + [VCAP_AF_PORT_MASK] = { + .type = VCAP_FIELD_U72, + .offset = 83, + .width = 65, + }, + [VCAP_AF_PAG_OVERRIDE_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 204, + .width = 8, + }, + [VCAP_AF_PAG_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 212, + .width = 8, + }, + [VCAP_AF_NXT_IDX_CTRL] = { + .type = VCAP_FIELD_U32, + .offset = 298, + .width = 3, + }, + [VCAP_AF_NXT_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 301, + .width = 12, + }, +}; + +static const struct vcap_field is0_class_reduced_actionfield[] = { + [VCAP_AF_TYPE] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_AF_QOS_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 5, + .width = 1, + }, + [VCAP_AF_QOS_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 6, + .width = 3, + }, + [VCAP_AF_DP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 9, + .width = 1, + }, + [VCAP_AF_DP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 10, + .width = 2, + }, + [VCAP_AF_MAP_LOOKUP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 12, + .width = 2, + }, + [VCAP_AF_MAP_KEY] = { + .type = VCAP_FIELD_U32, + .offset = 14, + .width = 3, + }, + 
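+	/* Editor's note: IS0 offers the same logical actions in three
+	 * widths (FULL, CLASSIFICATION, CLASS_REDUCED), and a field moves
+	 * between them: VCAP_AF_QOS_VAL sits at offset 12 in FULL, 13 in
+	 * CLASSIFICATION and 6 here. Lookups therefore always go through
+	 * the per-set maps, e.g.:
+	 *
+	 *	is0_actionfield_set_map[VCAP_AFS_CLASS_REDUCED]
+	 *		[VCAP_AF_QOS_VAL].offset	// == 6
+	 */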
[VCAP_AF_CLS_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 17, + .width = 3, + }, + [VCAP_AF_VID_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 23, + .width = 13, + }, + [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 46, + .width = 1, + }, + [VCAP_AF_ISDX_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 47, + .width = 12, + }, + [VCAP_AF_NXT_IDX_CTRL] = { + .type = VCAP_FIELD_U32, + .offset = 90, + .width = 3, + }, + [VCAP_AF_NXT_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 93, + .width = 12, + }, +}; + static const struct vcap_field is2_base_type_actionfield[] = { [VCAP_AF_PIPELINE_FORCE_ENA] = { .type = VCAP_FIELD_BIT, @@ -1110,7 +2816,276 @@ static const struct vcap_field is2_base_type_actionfield[] = { }, }; +static const struct vcap_field es0_es0_actionfield[] = { + [VCAP_AF_PUSH_OUTER_TAG] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 2, + }, + [VCAP_AF_PUSH_INNER_TAG] = { + .type = VCAP_FIELD_BIT, + .offset = 2, + .width = 1, + }, + [VCAP_AF_TAG_A_TPID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 3, + .width = 3, + }, + [VCAP_AF_TAG_A_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 6, + .width = 2, + }, + [VCAP_AF_TAG_A_PCP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 8, + .width = 3, + }, + [VCAP_AF_TAG_A_DEI_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 11, + .width = 3, + }, + [VCAP_AF_TAG_B_TPID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 14, + .width = 3, + }, + [VCAP_AF_TAG_B_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 17, + .width = 2, + }, + [VCAP_AF_TAG_B_PCP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 19, + .width = 3, + }, + [VCAP_AF_TAG_B_DEI_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 22, + .width = 3, + }, + [VCAP_AF_TAG_C_TPID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 25, + .width = 3, + }, + [VCAP_AF_TAG_C_PCP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 28, + .width = 3, + }, + [VCAP_AF_TAG_C_DEI_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 31, + .width = 3, + }, + [VCAP_AF_VID_A_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 34, + .width = 12, + }, + [VCAP_AF_PCP_A_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 46, + .width = 3, + }, + [VCAP_AF_DEI_A_VAL] = { + .type = VCAP_FIELD_BIT, + .offset = 49, + .width = 1, + }, + [VCAP_AF_VID_B_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 50, + .width = 12, + }, + [VCAP_AF_PCP_B_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 62, + .width = 3, + }, + [VCAP_AF_DEI_B_VAL] = { + .type = VCAP_FIELD_BIT, + .offset = 65, + .width = 1, + }, + [VCAP_AF_VID_C_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 66, + .width = 12, + }, + [VCAP_AF_PCP_C_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 3, + }, + [VCAP_AF_DEI_C_VAL] = { + .type = VCAP_FIELD_BIT, + .offset = 81, + .width = 1, + }, + [VCAP_AF_POP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 82, + .width = 2, + }, + [VCAP_AF_UNTAG_VID_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 84, + .width = 1, + }, + [VCAP_AF_PUSH_CUSTOMER_TAG] = { + .type = VCAP_FIELD_U32, + .offset = 85, + .width = 2, + }, + [VCAP_AF_TAG_C_VID_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 2, + }, + [VCAP_AF_DSCP_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 127, + .width = 3, + }, + [VCAP_AF_DSCP_VAL] = { + .type = VCAP_FIELD_U32, + .offset = 130, + .width = 6, + }, + [VCAP_AF_ESDX] = { + .type = VCAP_FIELD_U32, + .offset = 323, + .width = 13, + }, + [VCAP_AF_FWD_SEL] = { + .type = VCAP_FIELD_U32, + .offset = 459, + .width = 2, + }, + [VCAP_AF_CPU_QU] = { + .type = VCAP_FIELD_U32, + .offset = 461, + 
.width = 3, + }, + [VCAP_AF_PIPELINE_PT] = { + .type = VCAP_FIELD_U32, + .offset = 464, + .width = 2, + }, + [VCAP_AF_PIPELINE_ACT] = { + .type = VCAP_FIELD_BIT, + .offset = 466, + .width = 1, + }, + [VCAP_AF_SWAP_MACS_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 475, + .width = 1, + }, + [VCAP_AF_LOOP_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 476, + .width = 1, + }, +}; + +static const struct vcap_field es2_base_type_actionfield[] = { + [VCAP_AF_HIT_ME_ONCE] = { + .type = VCAP_FIELD_BIT, + .offset = 0, + .width = 1, + }, + [VCAP_AF_INTR_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 1, + .width = 1, + }, + [VCAP_AF_FWD_MODE] = { + .type = VCAP_FIELD_U32, + .offset = 2, + .width = 2, + }, + [VCAP_AF_COPY_QUEUE_NUM] = { + .type = VCAP_FIELD_U32, + .offset = 4, + .width = 16, + }, + [VCAP_AF_COPY_PORT_NUM] = { + .type = VCAP_FIELD_U32, + .offset = 20, + .width = 7, + }, + [VCAP_AF_MIRROR_PROBE_ID] = { + .type = VCAP_FIELD_U32, + .offset = 27, + .width = 2, + }, + [VCAP_AF_CPU_COPY_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 29, + .width = 1, + }, + [VCAP_AF_CPU_QUEUE_NUM] = { + .type = VCAP_FIELD_U32, + .offset = 30, + .width = 3, + }, + [VCAP_AF_POLICE_ENA] = { + .type = VCAP_FIELD_BIT, + .offset = 33, + .width = 1, + }, + [VCAP_AF_POLICE_REMARK] = { + .type = VCAP_FIELD_BIT, + .offset = 34, + .width = 1, + }, + [VCAP_AF_POLICE_IDX] = { + .type = VCAP_FIELD_U32, + .offset = 35, + .width = 6, + }, + [VCAP_AF_ES2_REW_CMD] = { + .type = VCAP_FIELD_U32, + .offset = 41, + .width = 3, + }, + [VCAP_AF_CNT_ID] = { + .type = VCAP_FIELD_U32, + .offset = 44, + .width = 11, + }, + [VCAP_AF_IGNORE_PIPELINE_CTRL] = { + .type = VCAP_FIELD_BIT, + .offset = 55, + .width = 1, + }, +}; + /* actionfield_set */ +static const struct vcap_set is0_actionfield_set[] = { + [VCAP_AFS_CLASSIFICATION] = { + .type_id = 1, + .sw_per_item = 2, + .sw_cnt = 6, + }, + [VCAP_AFS_FULL] = { + .type_id = -1, + .sw_per_item = 3, + .sw_cnt = 4, + }, + [VCAP_AFS_CLASS_REDUCED] = { + .type_id = 1, + .sw_per_item = 1, + .sw_cnt = 12, + }, +}; + static const struct vcap_set is2_actionfield_set[] = { [VCAP_AFS_BASE_TYPE] = { .type_id = -1, @@ -1119,17 +3094,196 @@ static const struct vcap_set is2_actionfield_set[] = { }, }; +static const struct vcap_set es0_actionfield_set[] = { + [VCAP_AFS_ES0] = { + .type_id = -1, + .sw_per_item = 1, + .sw_cnt = 1, + }, +}; + +static const struct vcap_set es2_actionfield_set[] = { + [VCAP_AFS_BASE_TYPE] = { + .type_id = -1, + .sw_per_item = 3, + .sw_cnt = 4, + }, +}; + /* actionfield_set map */ +static const struct vcap_field *is0_actionfield_set_map[] = { + [VCAP_AFS_CLASSIFICATION] = is0_classification_actionfield, + [VCAP_AFS_FULL] = is0_full_actionfield, + [VCAP_AFS_CLASS_REDUCED] = is0_class_reduced_actionfield, +}; + static const struct vcap_field *is2_actionfield_set_map[] = { [VCAP_AFS_BASE_TYPE] = is2_base_type_actionfield, }; +static const struct vcap_field *es0_actionfield_set_map[] = { + [VCAP_AFS_ES0] = es0_es0_actionfield, +}; + +static const struct vcap_field *es2_actionfield_set_map[] = { + [VCAP_AFS_BASE_TYPE] = es2_base_type_actionfield, +}; + /* actionfield_set map size */ +static int is0_actionfield_set_map_size[] = { + [VCAP_AFS_CLASSIFICATION] = ARRAY_SIZE(is0_classification_actionfield), + [VCAP_AFS_FULL] = ARRAY_SIZE(is0_full_actionfield), + [VCAP_AFS_CLASS_REDUCED] = ARRAY_SIZE(is0_class_reduced_actionfield), +}; + static int is2_actionfield_set_map_size[] = { [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(is2_base_type_actionfield), }; +static int es0_actionfield_set_map_size[] 
= { + [VCAP_AFS_ES0] = ARRAY_SIZE(es0_es0_actionfield), +}; + +static int es2_actionfield_set_map_size[] = { + [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(es2_base_type_actionfield), +}; + /* Type Groups */ +static const struct vcap_typegroup is0_x12_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 5, + .value = 16, + }, + { + .offset = 52, + .width = 1, + .value = 0, + }, + { + .offset = 104, + .width = 2, + .value = 0, + }, + { + .offset = 156, + .width = 3, + .value = 0, + }, + { + .offset = 208, + .width = 2, + .value = 0, + }, + { + .offset = 260, + .width = 1, + .value = 0, + }, + { + .offset = 312, + .width = 4, + .value = 0, + }, + { + .offset = 364, + .width = 1, + .value = 0, + }, + { + .offset = 416, + .width = 2, + .value = 0, + }, + { + .offset = 468, + .width = 3, + .value = 0, + }, + { + .offset = 520, + .width = 2, + .value = 0, + }, + { + .offset = 572, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x6_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 4, + .value = 8, + }, + { + .offset = 52, + .width = 1, + .value = 0, + }, + { + .offset = 104, + .width = 2, + .value = 0, + }, + { + .offset = 156, + .width = 3, + .value = 0, + }, + { + .offset = 208, + .width = 2, + .value = 0, + }, + { + .offset = 260, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x3_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 3, + .value = 4, + }, + { + .offset = 52, + .width = 2, + .value = 0, + }, + { + .offset = 104, + .width = 2, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x2_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 2, + .value = 2, + }, + { + .offset = 52, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x1_keyfield_set_typegroups[] = { + {} +}; + static const struct vcap_typegroup is2_x12_keyfield_set_typegroups[] = { { .offset = 0, @@ -1176,6 +3330,70 @@ static const struct vcap_typegroup is2_x1_keyfield_set_typegroups[] = { {} }; +static const struct vcap_typegroup es0_x1_keyfield_set_typegroups[] = { + {} +}; + +static const struct vcap_typegroup es2_x12_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 3, + .value = 4, + }, + { + .offset = 156, + .width = 1, + .value = 0, + }, + { + .offset = 312, + .width = 2, + .value = 0, + }, + { + .offset = 468, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup es2_x6_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 2, + .value = 2, + }, + { + .offset = 156, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup es2_x3_keyfield_set_typegroups[] = { + { + .offset = 0, + .width = 1, + .value = 1, + }, + {} +}; + +static const struct vcap_typegroup es2_x1_keyfield_set_typegroups[] = { + {} +}; + +static const struct vcap_typegroup *is0_keyfield_set_typegroups[] = { + [12] = is0_x12_keyfield_set_typegroups, + [6] = is0_x6_keyfield_set_typegroups, + [3] = is0_x3_keyfield_set_typegroups, + [2] = is0_x2_keyfield_set_typegroups, + [1] = is0_x1_keyfield_set_typegroups, + [13] = NULL, +}; + static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = { [12] = is2_x12_keyfield_set_typegroups, [6] = is2_x6_keyfield_set_typegroups, @@ -1184,6 +3402,61 @@ static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = { [13] = NULL, }; +static const struct vcap_typegroup *es0_keyfield_set_typegroups[] = { + [1] = es0_x1_keyfield_set_typegroups, + [2] = NULL, +}; + +static const struct vcap_typegroup 
*es2_keyfield_set_typegroups[] = { + [12] = es2_x12_keyfield_set_typegroups, + [6] = es2_x6_keyfield_set_typegroups, + [3] = es2_x3_keyfield_set_typegroups, + [1] = es2_x1_keyfield_set_typegroups, + [13] = NULL, +}; + +static const struct vcap_typegroup is0_x3_actionfield_set_typegroups[] = { + { + .offset = 0, + .width = 3, + .value = 4, + }, + { + .offset = 110, + .width = 2, + .value = 0, + }, + { + .offset = 220, + .width = 2, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x2_actionfield_set_typegroups[] = { + { + .offset = 0, + .width = 2, + .value = 2, + }, + { + .offset = 110, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup is0_x1_actionfield_set_typegroups[] = { + { + .offset = 0, + .width = 1, + .value = 1, + }, + {} +}; + static const struct vcap_typegroup is2_x3_actionfield_set_typegroups[] = { { .offset = 0, @@ -1207,36 +3480,122 @@ static const struct vcap_typegroup is2_x1_actionfield_set_typegroups[] = { {} }; +static const struct vcap_typegroup es0_x1_actionfield_set_typegroups[] = { + {} +}; + +static const struct vcap_typegroup es2_x3_actionfield_set_typegroups[] = { + { + .offset = 0, + .width = 2, + .value = 2, + }, + { + .offset = 21, + .width = 1, + .value = 0, + }, + { + .offset = 42, + .width = 1, + .value = 0, + }, + {} +}; + +static const struct vcap_typegroup es2_x1_actionfield_set_typegroups[] = { + {} +}; + +static const struct vcap_typegroup *is0_actionfield_set_typegroups[] = { + [3] = is0_x3_actionfield_set_typegroups, + [2] = is0_x2_actionfield_set_typegroups, + [1] = is0_x1_actionfield_set_typegroups, + [13] = NULL, +}; + static const struct vcap_typegroup *is2_actionfield_set_typegroups[] = { [3] = is2_x3_actionfield_set_typegroups, [1] = is2_x1_actionfield_set_typegroups, [13] = NULL, }; +static const struct vcap_typegroup *es0_actionfield_set_typegroups[] = { + [1] = es0_x1_actionfield_set_typegroups, + [2] = NULL, +}; + +static const struct vcap_typegroup *es2_actionfield_set_typegroups[] = { + [3] = es2_x3_actionfield_set_typegroups, + [1] = es2_x1_actionfield_set_typegroups, + [13] = NULL, +}; + /* Keyfieldset names */ static const char * const vcap_keyfield_set_names[] = { [VCAP_KFS_NO_VALUE] = "(None)", [VCAP_KFS_ARP] = "VCAP_KFS_ARP", + [VCAP_KFS_ETAG] = "VCAP_KFS_ETAG", [VCAP_KFS_IP4_OTHER] = "VCAP_KFS_IP4_OTHER", [VCAP_KFS_IP4_TCP_UDP] = "VCAP_KFS_IP4_TCP_UDP", + [VCAP_KFS_IP4_VID] = "VCAP_KFS_IP4_VID", + [VCAP_KFS_IP6_OTHER] = "VCAP_KFS_IP6_OTHER", [VCAP_KFS_IP6_STD] = "VCAP_KFS_IP6_STD", + [VCAP_KFS_IP6_TCP_UDP] = "VCAP_KFS_IP6_TCP_UDP", + [VCAP_KFS_IP6_VID] = "VCAP_KFS_IP6_VID", [VCAP_KFS_IP_7TUPLE] = "VCAP_KFS_IP_7TUPLE", + [VCAP_KFS_ISDX] = "VCAP_KFS_ISDX", + [VCAP_KFS_LL_FULL] = "VCAP_KFS_LL_FULL", [VCAP_KFS_MAC_ETYPE] = "VCAP_KFS_MAC_ETYPE", + [VCAP_KFS_MAC_LLC] = "VCAP_KFS_MAC_LLC", + [VCAP_KFS_MAC_SNAP] = "VCAP_KFS_MAC_SNAP", + [VCAP_KFS_NORMAL_5TUPLE_IP4] = "VCAP_KFS_NORMAL_5TUPLE_IP4", + [VCAP_KFS_NORMAL_7TUPLE] = "VCAP_KFS_NORMAL_7TUPLE", + [VCAP_KFS_OAM] = "VCAP_KFS_OAM", + [VCAP_KFS_PURE_5TUPLE_IP4] = "VCAP_KFS_PURE_5TUPLE_IP4", + [VCAP_KFS_SMAC_SIP4] = "VCAP_KFS_SMAC_SIP4", + [VCAP_KFS_SMAC_SIP6] = "VCAP_KFS_SMAC_SIP6", }; /* Actionfieldset names */ static const char * const vcap_actionfield_set_names[] = { [VCAP_AFS_NO_VALUE] = "(None)", [VCAP_AFS_BASE_TYPE] = "VCAP_AFS_BASE_TYPE", + [VCAP_AFS_CLASSIFICATION] = "VCAP_AFS_CLASSIFICATION", + [VCAP_AFS_CLASS_REDUCED] = "VCAP_AFS_CLASS_REDUCED", + [VCAP_AFS_ES0] = "VCAP_AFS_ES0", + [VCAP_AFS_FULL] = "VCAP_AFS_FULL", + 
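These typegroup pointer arrays are deliberately sparse: they are indexed by subword count, so a keyset spanning N subwords selects entry [N], and the trailing [13] = NULL presumably pads the table one slot past the largest possible index (12) for the benefit of generic iteration code. A sketch of the lookup under that indexing convention (vcap_select_typegroups() is illustrative, not a driver function):

struct vcap_typegroup;	/* as in the tables above */

static const struct vcap_typegroup *
vcap_select_typegroups(const struct vcap_typegroup * const tables[],
		       int sw_per_item)
{
	if (sw_per_item < 1 || sw_per_item > 12)
		return NULL;
	return tables[sw_per_item];	/* NULL for unused sizes */
}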
[VCAP_AFS_SMAC_SIP] = "VCAP_AFS_SMAC_SIP", }; /* Keyfield names */ static const char * const vcap_keyfield_names[] = { [VCAP_KF_NO_VALUE] = "(None)", + [VCAP_KF_8021BR_ECID_BASE] = "8021BR_ECID_BASE", + [VCAP_KF_8021BR_ECID_EXT] = "8021BR_ECID_EXT", + [VCAP_KF_8021BR_E_TAGGED] = "8021BR_E_TAGGED", + [VCAP_KF_8021BR_GRP] = "8021BR_GRP", + [VCAP_KF_8021BR_IGR_ECID_BASE] = "8021BR_IGR_ECID_BASE", + [VCAP_KF_8021BR_IGR_ECID_EXT] = "8021BR_IGR_ECID_EXT", + [VCAP_KF_8021Q_DEI0] = "8021Q_DEI0", + [VCAP_KF_8021Q_DEI1] = "8021Q_DEI1", + [VCAP_KF_8021Q_DEI2] = "8021Q_DEI2", [VCAP_KF_8021Q_DEI_CLS] = "8021Q_DEI_CLS", + [VCAP_KF_8021Q_PCP0] = "8021Q_PCP0", + [VCAP_KF_8021Q_PCP1] = "8021Q_PCP1", + [VCAP_KF_8021Q_PCP2] = "8021Q_PCP2", [VCAP_KF_8021Q_PCP_CLS] = "8021Q_PCP_CLS", + [VCAP_KF_8021Q_TPID] = "8021Q_TPID", + [VCAP_KF_8021Q_TPID0] = "8021Q_TPID0", + [VCAP_KF_8021Q_TPID1] = "8021Q_TPID1", + [VCAP_KF_8021Q_TPID2] = "8021Q_TPID2", + [VCAP_KF_8021Q_VID0] = "8021Q_VID0", + [VCAP_KF_8021Q_VID1] = "8021Q_VID1", + [VCAP_KF_8021Q_VID2] = "8021Q_VID2", [VCAP_KF_8021Q_VID_CLS] = "8021Q_VID_CLS", [VCAP_KF_8021Q_VLAN_TAGGED_IS] = "8021Q_VLAN_TAGGED_IS", + [VCAP_KF_8021Q_VLAN_TAGS] = "8021Q_VLAN_TAGS", + [VCAP_KF_ACL_GRP_ID] = "ACL_GRP_ID", [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = "ARP_ADDR_SPACE_OK_IS", [VCAP_KF_ARP_LEN_OK_IS] = "ARP_LEN_OK_IS", [VCAP_KF_ARP_OPCODE] = "ARP_OPCODE", @@ -1244,25 +3603,46 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = "ARP_PROTO_SPACE_OK_IS", [VCAP_KF_ARP_SENDER_MATCH_IS] = "ARP_SENDER_MATCH_IS", [VCAP_KF_ARP_TGT_MATCH_IS] = "ARP_TGT_MATCH_IS", + [VCAP_KF_COSID_CLS] = "COSID_CLS", + [VCAP_KF_ES0_ISDX_KEY_ENA] = "ES0_ISDX_KEY_ENA", [VCAP_KF_ETYPE] = "ETYPE", [VCAP_KF_ETYPE_LEN_IS] = "ETYPE_LEN_IS", + [VCAP_KF_HOST_MATCH] = "HOST_MATCH", + [VCAP_KF_IF_EGR_PORT_MASK] = "IF_EGR_PORT_MASK", + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = "IF_EGR_PORT_MASK_RNG", + [VCAP_KF_IF_EGR_PORT_NO] = "IF_EGR_PORT_NO", + [VCAP_KF_IF_IGR_PORT] = "IF_IGR_PORT", [VCAP_KF_IF_IGR_PORT_MASK] = "IF_IGR_PORT_MASK", [VCAP_KF_IF_IGR_PORT_MASK_L3] = "IF_IGR_PORT_MASK_L3", [VCAP_KF_IF_IGR_PORT_MASK_RNG] = "IF_IGR_PORT_MASK_RNG", [VCAP_KF_IF_IGR_PORT_MASK_SEL] = "IF_IGR_PORT_MASK_SEL", + [VCAP_KF_IF_IGR_PORT_SEL] = "IF_IGR_PORT_SEL", [VCAP_KF_IP4_IS] = "IP4_IS", + [VCAP_KF_IP_MC_IS] = "IP_MC_IS", + [VCAP_KF_IP_PAYLOAD_5TUPLE] = "IP_PAYLOAD_5TUPLE", + [VCAP_KF_IP_SNAP_IS] = "IP_SNAP_IS", [VCAP_KF_ISDX_CLS] = "ISDX_CLS", [VCAP_KF_ISDX_GT0_IS] = "ISDX_GT0_IS", [VCAP_KF_L2_BC_IS] = "L2_BC_IS", [VCAP_KF_L2_DMAC] = "L2_DMAC", + [VCAP_KF_L2_FRM_TYPE] = "L2_FRM_TYPE", [VCAP_KF_L2_FWD_IS] = "L2_FWD_IS", + [VCAP_KF_L2_LLC] = "L2_LLC", [VCAP_KF_L2_MC_IS] = "L2_MC_IS", + [VCAP_KF_L2_PAYLOAD0] = "L2_PAYLOAD0", + [VCAP_KF_L2_PAYLOAD1] = "L2_PAYLOAD1", + [VCAP_KF_L2_PAYLOAD2] = "L2_PAYLOAD2", [VCAP_KF_L2_PAYLOAD_ETYPE] = "L2_PAYLOAD_ETYPE", [VCAP_KF_L2_SMAC] = "L2_SMAC", + [VCAP_KF_L2_SNAP] = "L2_SNAP", [VCAP_KF_L3_DIP_EQ_SIP_IS] = "L3_DIP_EQ_SIP_IS", + [VCAP_KF_L3_DPL_CLS] = "L3_DPL_CLS", + [VCAP_KF_L3_DSCP] = "L3_DSCP", [VCAP_KF_L3_DST_IS] = "L3_DST_IS", + [VCAP_KF_L3_FRAGMENT] = "L3_FRAGMENT", [VCAP_KF_L3_FRAGMENT_TYPE] = "L3_FRAGMENT_TYPE", [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = "L3_FRAG_INVLD_L4_LEN", + [VCAP_KF_L3_FRAG_OFS_GT0] = "L3_FRAG_OFS_GT0", [VCAP_KF_L3_IP4_DIP] = "L3_IP4_DIP", [VCAP_KF_L3_IP4_SIP] = "L3_IP4_SIP", [VCAP_KF_L3_IP6_DIP] = "L3_IP6_DIP", @@ -1273,6 +3653,8 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_L3_RT_IS] = "L3_RT_IS", [VCAP_KF_L3_TOS] = "L3_TOS", 
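The string tables in this area are indexed directly by the enums shared with the VCAP API core, so debugfs can print symbolic key/action names with a plain array lookup. Since designated initializers leave unused slots as NULL, a defensive lookup should bounds-check and fall back; a minimal sketch (vcap_name() is an illustrative helper, not a driver function):

#include <stddef.h>

static const char *vcap_name(const char * const names[], size_t count,
			     unsigned int id)
{
	if (id >= count || !names[id])
		return "(unknown)";
	return names[id];
}

Called as, say, vcap_name(vcap_keyfield_names, ARRAY_SIZE(vcap_keyfield_names), key).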
[VCAP_KF_L3_TTL_GT0] = "L3_TTL_GT0", + [VCAP_KF_L4_1588_DOM] = "L4_1588_DOM", + [VCAP_KF_L4_1588_VER] = "L4_1588_VER", [VCAP_KF_L4_ACK] = "L4_ACK", [VCAP_KF_L4_DPORT] = "L4_DPORT", [VCAP_KF_L4_FIN] = "L4_FIN", @@ -1286,9 +3668,19 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_L4_SYN] = "L4_SYN", [VCAP_KF_L4_URG] = "L4_URG", [VCAP_KF_LOOKUP_FIRST_IS] = "LOOKUP_FIRST_IS", + [VCAP_KF_LOOKUP_GEN_IDX] = "LOOKUP_GEN_IDX", + [VCAP_KF_LOOKUP_GEN_IDX_SEL] = "LOOKUP_GEN_IDX_SEL", [VCAP_KF_LOOKUP_PAG] = "LOOKUP_PAG", + [VCAP_KF_MIRROR_PROBE] = "MIRROR_PROBE", [VCAP_KF_OAM_CCM_CNTS_EQ0] = "OAM_CCM_CNTS_EQ0", + [VCAP_KF_OAM_DETECTED] = "OAM_DETECTED", + [VCAP_KF_OAM_FLAGS] = "OAM_FLAGS", + [VCAP_KF_OAM_MEL_FLAGS] = "OAM_MEL_FLAGS", + [VCAP_KF_OAM_MEPID] = "OAM_MEPID", + [VCAP_KF_OAM_OPCODE] = "OAM_OPCODE", + [VCAP_KF_OAM_VER] = "OAM_VER", [VCAP_KF_OAM_Y1731_IS] = "OAM_Y1731_IS", + [VCAP_KF_PROT_ACTIVE] = "PROT_ACTIVE", [VCAP_KF_TCP_IS] = "TCP_IS", [VCAP_KF_TCP_UDP_IS] = "TCP_UDP_IS", [VCAP_KF_TYPE] = "TYPE", @@ -1297,27 +3689,116 @@ static const char * const vcap_keyfield_names[] = { /* Actionfield names */ static const char * const vcap_actionfield_names[] = { [VCAP_AF_NO_VALUE] = "(None)", + [VCAP_AF_ACL_ID] = "ACL_ID", + [VCAP_AF_CLS_VID_SEL] = "CLS_VID_SEL", [VCAP_AF_CNT_ID] = "CNT_ID", + [VCAP_AF_COPY_PORT_NUM] = "COPY_PORT_NUM", + [VCAP_AF_COPY_QUEUE_NUM] = "COPY_QUEUE_NUM", [VCAP_AF_CPU_COPY_ENA] = "CPU_COPY_ENA", + [VCAP_AF_CPU_QU] = "CPU_QU", [VCAP_AF_CPU_QUEUE_NUM] = "CPU_QUEUE_NUM", + [VCAP_AF_DEI_A_VAL] = "DEI_A_VAL", + [VCAP_AF_DEI_B_VAL] = "DEI_B_VAL", + [VCAP_AF_DEI_C_VAL] = "DEI_C_VAL", + [VCAP_AF_DEI_ENA] = "DEI_ENA", + [VCAP_AF_DEI_VAL] = "DEI_VAL", + [VCAP_AF_DP_ENA] = "DP_ENA", + [VCAP_AF_DP_VAL] = "DP_VAL", + [VCAP_AF_DSCP_ENA] = "DSCP_ENA", + [VCAP_AF_DSCP_SEL] = "DSCP_SEL", + [VCAP_AF_DSCP_VAL] = "DSCP_VAL", + [VCAP_AF_ES2_REW_CMD] = "ES2_REW_CMD", + [VCAP_AF_ESDX] = "ESDX", + [VCAP_AF_FWD_KILL_ENA] = "FWD_KILL_ENA", + [VCAP_AF_FWD_MODE] = "FWD_MODE", + [VCAP_AF_FWD_SEL] = "FWD_SEL", [VCAP_AF_HIT_ME_ONCE] = "HIT_ME_ONCE", + [VCAP_AF_HOST_MATCH] = "HOST_MATCH", [VCAP_AF_IGNORE_PIPELINE_CTRL] = "IGNORE_PIPELINE_CTRL", [VCAP_AF_INTR_ENA] = "INTR_ENA", + [VCAP_AF_ISDX_ADD_REPLACE_SEL] = "ISDX_ADD_REPLACE_SEL", + [VCAP_AF_ISDX_ENA] = "ISDX_ENA", + [VCAP_AF_ISDX_VAL] = "ISDX_VAL", + [VCAP_AF_LOOP_ENA] = "LOOP_ENA", [VCAP_AF_LRN_DIS] = "LRN_DIS", + [VCAP_AF_MAP_IDX] = "MAP_IDX", + [VCAP_AF_MAP_KEY] = "MAP_KEY", + [VCAP_AF_MAP_LOOKUP_SEL] = "MAP_LOOKUP_SEL", [VCAP_AF_MASK_MODE] = "MASK_MODE", [VCAP_AF_MATCH_ID] = "MATCH_ID", [VCAP_AF_MATCH_ID_MASK] = "MATCH_ID_MASK", + [VCAP_AF_MIRROR_ENA] = "MIRROR_ENA", [VCAP_AF_MIRROR_PROBE] = "MIRROR_PROBE", + [VCAP_AF_MIRROR_PROBE_ID] = "MIRROR_PROBE_ID", + [VCAP_AF_NXT_IDX] = "NXT_IDX", + [VCAP_AF_NXT_IDX_CTRL] = "NXT_IDX_CTRL", + [VCAP_AF_PAG_OVERRIDE_MASK] = "PAG_OVERRIDE_MASK", + [VCAP_AF_PAG_VAL] = "PAG_VAL", + [VCAP_AF_PCP_A_VAL] = "PCP_A_VAL", + [VCAP_AF_PCP_B_VAL] = "PCP_B_VAL", + [VCAP_AF_PCP_C_VAL] = "PCP_C_VAL", + [VCAP_AF_PCP_ENA] = "PCP_ENA", + [VCAP_AF_PCP_VAL] = "PCP_VAL", + [VCAP_AF_PIPELINE_ACT] = "PIPELINE_ACT", [VCAP_AF_PIPELINE_FORCE_ENA] = "PIPELINE_FORCE_ENA", [VCAP_AF_PIPELINE_PT] = "PIPELINE_PT", [VCAP_AF_POLICE_ENA] = "POLICE_ENA", [VCAP_AF_POLICE_IDX] = "POLICE_IDX", + [VCAP_AF_POLICE_REMARK] = "POLICE_REMARK", + [VCAP_AF_POLICE_VCAP_ONLY] = "POLICE_VCAP_ONLY", + [VCAP_AF_POP_VAL] = "POP_VAL", [VCAP_AF_PORT_MASK] = "PORT_MASK", + [VCAP_AF_PUSH_CUSTOMER_TAG] = "PUSH_CUSTOMER_TAG", + [VCAP_AF_PUSH_INNER_TAG] 
= "PUSH_INNER_TAG", + [VCAP_AF_PUSH_OUTER_TAG] = "PUSH_OUTER_TAG", + [VCAP_AF_QOS_ENA] = "QOS_ENA", + [VCAP_AF_QOS_VAL] = "QOS_VAL", + [VCAP_AF_REW_OP] = "REW_OP", [VCAP_AF_RT_DIS] = "RT_DIS", + [VCAP_AF_SWAP_MACS_ENA] = "SWAP_MACS_ENA", + [VCAP_AF_TAG_A_DEI_SEL] = "TAG_A_DEI_SEL", + [VCAP_AF_TAG_A_PCP_SEL] = "TAG_A_PCP_SEL", + [VCAP_AF_TAG_A_TPID_SEL] = "TAG_A_TPID_SEL", + [VCAP_AF_TAG_A_VID_SEL] = "TAG_A_VID_SEL", + [VCAP_AF_TAG_B_DEI_SEL] = "TAG_B_DEI_SEL", + [VCAP_AF_TAG_B_PCP_SEL] = "TAG_B_PCP_SEL", + [VCAP_AF_TAG_B_TPID_SEL] = "TAG_B_TPID_SEL", + [VCAP_AF_TAG_B_VID_SEL] = "TAG_B_VID_SEL", + [VCAP_AF_TAG_C_DEI_SEL] = "TAG_C_DEI_SEL", + [VCAP_AF_TAG_C_PCP_SEL] = "TAG_C_PCP_SEL", + [VCAP_AF_TAG_C_TPID_SEL] = "TAG_C_TPID_SEL", + [VCAP_AF_TAG_C_VID_SEL] = "TAG_C_VID_SEL", + [VCAP_AF_TYPE] = "TYPE", + [VCAP_AF_UNTAG_VID_ENA] = "UNTAG_VID_ENA", + [VCAP_AF_VID_A_VAL] = "VID_A_VAL", + [VCAP_AF_VID_B_VAL] = "VID_B_VAL", + [VCAP_AF_VID_C_VAL] = "VID_C_VAL", + [VCAP_AF_VID_VAL] = "VID_VAL", }; /* VCAPs */ const struct vcap_info sparx5_vcaps[] = { + [VCAP_TYPE_IS0] = { + .name = "is0", + .rows = 1024, + .sw_count = 12, + .sw_width = 52, + .sticky_width = 1, + .act_width = 110, + .default_cnt = 140, + .require_cnt_dis = 0, + .version = 1, + .keyfield_set = is0_keyfield_set, + .keyfield_set_size = ARRAY_SIZE(is0_keyfield_set), + .actionfield_set = is0_actionfield_set, + .actionfield_set_size = ARRAY_SIZE(is0_actionfield_set), + .keyfield_set_map = is0_keyfield_set_map, + .keyfield_set_map_size = is0_keyfield_set_map_size, + .actionfield_set_map = is0_actionfield_set_map, + .actionfield_set_map_size = is0_actionfield_set_map_size, + .keyfield_set_typegroups = is0_keyfield_set_typegroups, + .actionfield_set_typegroups = is0_actionfield_set_typegroups, + }, [VCAP_TYPE_IS2] = { .name = "is2", .rows = 256, @@ -1339,11 +3820,53 @@ const struct vcap_info sparx5_vcaps[] = { .keyfield_set_typegroups = is2_keyfield_set_typegroups, .actionfield_set_typegroups = is2_actionfield_set_typegroups, }, + [VCAP_TYPE_ES0] = { + .name = "es0", + .rows = 4096, + .sw_count = 1, + .sw_width = 52, + .sticky_width = 1, + .act_width = 489, + .default_cnt = 70, + .require_cnt_dis = 0, + .version = 1, + .keyfield_set = es0_keyfield_set, + .keyfield_set_size = ARRAY_SIZE(es0_keyfield_set), + .actionfield_set = es0_actionfield_set, + .actionfield_set_size = ARRAY_SIZE(es0_actionfield_set), + .keyfield_set_map = es0_keyfield_set_map, + .keyfield_set_map_size = es0_keyfield_set_map_size, + .actionfield_set_map = es0_actionfield_set_map, + .actionfield_set_map_size = es0_actionfield_set_map_size, + .keyfield_set_typegroups = es0_keyfield_set_typegroups, + .actionfield_set_typegroups = es0_actionfield_set_typegroups, + }, + [VCAP_TYPE_ES2] = { + .name = "es2", + .rows = 1024, + .sw_count = 12, + .sw_width = 52, + .sticky_width = 1, + .act_width = 21, + .default_cnt = 74, + .require_cnt_dis = 0, + .version = 1, + .keyfield_set = es2_keyfield_set, + .keyfield_set_size = ARRAY_SIZE(es2_keyfield_set), + .actionfield_set = es2_actionfield_set, + .actionfield_set_size = ARRAY_SIZE(es2_actionfield_set), + .keyfield_set_map = es2_keyfield_set_map, + .keyfield_set_map_size = es2_keyfield_set_map_size, + .actionfield_set_map = es2_actionfield_set_map, + .actionfield_set_map_size = es2_actionfield_set_map_size, + .keyfield_set_typegroups = es2_keyfield_set_typegroups, + .actionfield_set_typegroups = es2_actionfield_set_typegroups, + }, }; const struct vcap_statistics sparx5_vcap_stats = { .name = "sparx5", - .count = 1, + .count = 4, 
.keyfield_set_names = vcap_keyfield_set_names, .actionfield_set_names = vcap_actionfield_set_names, .keyfield_names = vcap_keyfield_names, diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c index b91e05ffe2f4..07b472c84a47 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_debugfs.c @@ -13,10 +13,113 @@ #include "sparx5_vcap_impl.h" #include "sparx5_vcap_ag_api.h" -static void sparx5_vcap_port_keys(struct sparx5 *sparx5, - struct vcap_admin *admin, - struct sparx5_port *port, - struct vcap_output_print *out) +static const char *sparx5_vcap_is0_etype_str(u32 value) +{ + switch (value) { + case VCAP_IS0_PS_ETYPE_DEFAULT: + return "default"; + case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE: + return "normal_7tuple"; + case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4: + return "normal_5tuple_ip4"; + case VCAP_IS0_PS_ETYPE_MLL: + return "mll"; + case VCAP_IS0_PS_ETYPE_LL_FULL: + return "ll_full"; + case VCAP_IS0_PS_ETYPE_PURE_5TUPLE_IP4: + return "pure_5tuple_ip4"; + case VCAP_IS0_PS_ETYPE_ETAG: + return "etag"; + case VCAP_IS0_PS_ETYPE_NO_LOOKUP: + return "no lookup"; + default: + return "unknown"; + } +} + +static const char *sparx5_vcap_is0_mpls_str(u32 value) +{ + switch (value) { + case VCAP_IS0_PS_MPLS_FOLLOW_ETYPE: + return "follow_etype"; + case VCAP_IS0_PS_MPLS_NORMAL_7TUPLE: + return "normal_7tuple"; + case VCAP_IS0_PS_MPLS_NORMAL_5TUPLE_IP4: + return "normal_5tuple_ip4"; + case VCAP_IS0_PS_MPLS_MLL: + return "mll"; + case VCAP_IS0_PS_MPLS_LL_FULL: + return "ll_full"; + case VCAP_IS0_PS_MPLS_PURE_5TUPLE_IP4: + return "pure_5tuple_ip4"; + case VCAP_IS0_PS_MPLS_ETAG: + return "etag"; + case VCAP_IS0_PS_MPLS_NO_LOOKUP: + return "no lookup"; + default: + return "unknown"; + } +} + +static const char *sparx5_vcap_is0_mlbs_str(u32 value) +{ + switch (value) { + case VCAP_IS0_PS_MLBS_FOLLOW_ETYPE: + return "follow_etype"; + case VCAP_IS0_PS_MLBS_NO_LOOKUP: + return "no lookup"; + default: + return "unknown"; + } +} + +static void sparx5_vcap_is0_port_keys(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct sparx5_port *port, + struct vcap_output_print *out) +{ + int lookup; + u32 value, val; + + out->prf(out->dst, " port[%02d] (%s): ", port->portno, + netdev_name(port->ndev)); + for (lookup = 0; lookup < admin->lookups; ++lookup) { + out->prf(out->dst, "\n Lookup %d: ", lookup); + + /* Get lookup state */ + value = spx5_rd(sparx5, + ANA_CL_ADV_CL_CFG(port->portno, lookup)); + out->prf(out->dst, "\n state: "); + if (ANA_CL_ADV_CL_CFG_LOOKUP_ENA_GET(value)) + out->prf(out->dst, "on"); + else + out->prf(out->dst, "off"); + val = ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n etype: %s", + sparx5_vcap_is0_etype_str(val)); + val = ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n ipv4: %s", + sparx5_vcap_is0_etype_str(val)); + val = ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n ipv6: %s", + sparx5_vcap_is0_etype_str(val)); + val = ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n mpls_uc: %s", + sparx5_vcap_is0_mpls_str(val)); + val = ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n mpls_mc: %s", + sparx5_vcap_is0_mpls_str(val)); + val = ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_GET(value); + out->prf(out->dst, "\n mlbs: %s", + sparx5_vcap_is0_mlbs_str(val)); + } + out->prf(out->dst, "\n"); +} + +static void 
sparx5_vcap_is2_port_keys(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct sparx5_port *port, + struct vcap_output_print *out) { int lookup; u32 value; @@ -29,7 +132,7 @@ static void sparx5_vcap_port_keys(struct sparx5 *sparx5, /* Get lookup state */ value = spx5_rd(sparx5, ANA_ACL_VCAP_S2_CFG(port->portno)); out->prf(out->dst, "\n state: "); - if (ANA_ACL_VCAP_S2_CFG_SEC_ENA_GET(value)) + if (ANA_ACL_VCAP_S2_CFG_SEC_ENA_GET(value) & BIT(lookup)) out->prf(out->dst, "on"); else out->prf(out->dst, "off"); @@ -126,9 +229,9 @@ static void sparx5_vcap_port_keys(struct sparx5 *sparx5, out->prf(out->dst, "\n"); } -static void sparx5_vcap_port_stickies(struct sparx5 *sparx5, - struct vcap_admin *admin, - struct vcap_output_print *out) +static void sparx5_vcap_is2_port_stickies(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct vcap_output_print *out) { int lookup; u32 value; @@ -181,6 +284,157 @@ static void sparx5_vcap_port_stickies(struct sparx5 *sparx5, out->prf(out->dst, "\n"); } +static void sparx5_vcap_es0_port_keys(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct sparx5_port *port, + struct vcap_output_print *out) +{ + u32 value; + + out->prf(out->dst, " port[%02d] (%s): ", port->portno, + netdev_name(port->ndev)); + out->prf(out->dst, "\n Lookup 0: "); + + /* Get lookup state */ + value = spx5_rd(sparx5, REW_ES0_CTRL); + out->prf(out->dst, "\n state: "); + if (REW_ES0_CTRL_ES0_LU_ENA_GET(value)) + out->prf(out->dst, "on"); + else + out->prf(out->dst, "off"); + + out->prf(out->dst, "\n keyset: "); + value = spx5_rd(sparx5, REW_RTAG_ETAG_CTRL(port->portno)); + switch (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(value)) { + case VCAP_ES0_PS_NORMAL_SELECTION: + out->prf(out->dst, "normal"); + break; + case VCAP_ES0_PS_FORCE_ISDX_LOOKUPS: + out->prf(out->dst, "isdx"); + break; + case VCAP_ES0_PS_FORCE_VID_LOOKUPS: + out->prf(out->dst, "vid"); + break; + case VCAP_ES0_PS_RESERVED: + out->prf(out->dst, "reserved"); + break; + } + out->prf(out->dst, "\n"); +} + +static void sparx5_vcap_es2_port_keys(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct sparx5_port *port, + struct vcap_output_print *out) +{ + int lookup; + u32 value; + + out->prf(out->dst, " port[%02d] (%s): ", port->portno, + netdev_name(port->ndev)); + for (lookup = 0; lookup < admin->lookups; ++lookup) { + out->prf(out->dst, "\n Lookup %d: ", lookup); + + /* Get lookup state */ + value = spx5_rd(sparx5, EACL_VCAP_ES2_KEY_SEL(port->portno, + lookup)); + out->prf(out->dst, "\n state: "); + if (EACL_VCAP_ES2_KEY_SEL_KEY_ENA_GET(value)) + out->prf(out->dst, "on"); + else + out->prf(out->dst, "off"); + + out->prf(out->dst, "\n arp: "); + switch (EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_ARP_MAC_ETYPE: + out->prf(out->dst, "mac_etype"); + break; + case VCAP_ES2_PS_ARP_ARP: + out->prf(out->dst, "arp"); + break; + } + out->prf(out->dst, "\n ipv4: "); + switch (EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_IPV4_MAC_ETYPE: + out->prf(out->dst, "mac_etype"); + break; + case VCAP_ES2_PS_IPV4_IP_7TUPLE: + out->prf(out->dst, "ip_7tuple"); + break; + case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID: + out->prf(out->dst, "ip4_tcp_udp ip4_vid"); + break; + case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER: + out->prf(out->dst, "ip4_tcp_udp ip4_other"); + break; + case VCAP_ES2_PS_IPV4_IP4_VID: + out->prf(out->dst, "ip4_vid"); + break; + case VCAP_ES2_PS_IPV4_IP4_OTHER: + out->prf(out->dst, "ip4_other"); + break; + } + out->prf(out->dst, "\n ipv6: "); + switch 
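One real fix hides in the IS2 hunk above: the SEC_ENA field of ANA_ACL_VCAP_S2_CFG is a per-lookup enable bitmask, so the debugfs state line must test BIT(lookup); testing the whole field, as before, reported every lookup as "on" once any single one was enabled. A standalone illustration:

#include <stdbool.h>
#include <stdint.h>

#define BIT(nr)	(1u << (nr))	/* user-space stand-in */

static bool is2_lookup_enabled(uint32_t sec_ena, int lookup)
{
	return (sec_ena & BIT(lookup)) != 0;
}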
(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_IPV6_MAC_ETYPE: + out->prf(out->dst, "mac_etype"); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE: + out->prf(out->dst, "ip_7tuple"); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE_VID: + out->prf(out->dst, "ip_7tuple ip6_vid"); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE_STD: + out->prf(out->dst, "ip_7tuple ip6_std"); + break; + case VCAP_ES2_PS_IPV6_IP6_VID: + out->prf(out->dst, "ip6_vid"); + break; + case VCAP_ES2_PS_IPV6_IP6_STD: + out->prf(out->dst, "ip6_std"); + break; + case VCAP_ES2_PS_IPV6_IP4_DOWNGRADE: + out->prf(out->dst, "ip4_downgrade"); + break; + } + } + out->prf(out->dst, "\n"); +} + +static void sparx5_vcap_es2_port_stickies(struct sparx5 *sparx5, + struct vcap_admin *admin, + struct vcap_output_print *out) +{ + int lookup; + u32 value; + + out->prf(out->dst, " Sticky bits: "); + for (lookup = 0; lookup < admin->lookups; ++lookup) { + value = spx5_rd(sparx5, EACL_SEC_LOOKUP_STICKY(lookup)); + out->prf(out->dst, "\n Lookup %d: ", lookup); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_GET(value)) + out->prf(out->dst, " ip_7tuple"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_GET(value)) + out->prf(out->dst, " ip6_vid"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_GET(value)) + out->prf(out->dst, " ip6_std"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_GET(value)) + out->prf(out->dst, " ip4_tcp_udp"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_GET(value)) + out->prf(out->dst, " ip4_vid"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_GET(value)) + out->prf(out->dst, " ip4_other"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_GET(value)) + out->prf(out->dst, " arp"); + if (EACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_GET(value)) + out->prf(out->dst, " mac_etype"); + /* Clear stickies */ + spx5_wr(value, sparx5, EACL_SEC_LOOKUP_STICKY(lookup)); + } + out->prf(out->dst, "\n"); +} + /* Provide port information via a callback interface */ int sparx5_port_info(struct net_device *ndev, struct vcap_admin *admin, @@ -194,7 +448,24 @@ int sparx5_port_info(struct net_device *ndev, vctrl = sparx5->vcap_ctrl; vcap = &vctrl->vcaps[admin->vtype]; out->prf(out->dst, "%s:\n", vcap->name); - sparx5_vcap_port_keys(sparx5, admin, port, out); - sparx5_vcap_port_stickies(sparx5, admin, out); + switch (admin->vtype) { + case VCAP_TYPE_IS0: + sparx5_vcap_is0_port_keys(sparx5, admin, port, out); + break; + case VCAP_TYPE_IS2: + sparx5_vcap_is2_port_keys(sparx5, admin, port, out); + sparx5_vcap_is2_port_stickies(sparx5, admin, out); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_port_keys(sparx5, admin, port, out); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_port_keys(sparx5, admin, port, out); + sparx5_vcap_es2_port_stickies(sparx5, admin, out); + break; + default: + out->prf(out->dst, " no info\n"); + break; + } return 0; } diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c index a0c126ba9a87..d0d4e0385ac7 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c @@ -27,6 +27,28 @@ ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_SET(_v6_uc) | \ ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_SET(_arp)) +#define SPARX5_IS0_LOOKUPS 6 +#define VCAP_IS0_KEYSEL(_ena, _etype, _ipv4, _ipv6, _mpls_uc, _mpls_mc, _mlbs) \ + (ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(_ena) | \ + ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_SET(_etype) | \ + 
ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_SET(_ipv4) | \ + ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_SET(_ipv6) | \ + ANA_CL_ADV_CL_CFG_MPLS_UC_CLM_KEY_SEL_SET(_mpls_uc) | \ + ANA_CL_ADV_CL_CFG_MPLS_MC_CLM_KEY_SEL_SET(_mpls_mc) | \ + ANA_CL_ADV_CL_CFG_MLBS_CLM_KEY_SEL_SET(_mlbs)) + +#define SPARX5_ES0_LOOKUPS 1 +#define VCAP_ES0_KEYSEL(_key) (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_SET(_key)) +#define SPARX5_STAT_ESDX_GRN_PKTS 0x300 +#define SPARX5_STAT_ESDX_YEL_PKTS 0x301 + +#define SPARX5_ES2_LOOKUPS 2 +#define VCAP_ES2_KEYSEL(_ena, _arp, _ipv4, _ipv6) \ + (EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(_ena) | \ + EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_SET(_arp) | \ + EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(_ipv4) | \ + EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_SET(_ipv6)) + static struct sparx5_vcap_inst { enum vcap_type vtype; /* type of vcap */ int vinst; /* instance number within the same type */ @@ -38,8 +60,45 @@ static struct sparx5_vcap_inst { int map_id; /* id in the super vcap block mapping (if applicable) */ int blockno; /* starting block in super vcap (if applicable) */ int blocks; /* number of blocks in super vcap (if applicable) */ + bool ingress; /* is vcap in the ingress path */ } sparx5_vcap_inst_cfg[] = { { + .vtype = VCAP_TYPE_IS0, /* CLM-0 */ + .vinst = 0, + .map_id = 1, + .lookups = SPARX5_IS0_LOOKUPS, + .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3, + .first_cid = SPARX5_VCAP_CID_IS0_L0, + .last_cid = SPARX5_VCAP_CID_IS0_L2 - 1, + .blockno = 8, /* Maps block 8-9 */ + .blocks = 2, + .ingress = true, + }, + { + .vtype = VCAP_TYPE_IS0, /* CLM-1 */ + .vinst = 1, + .map_id = 2, + .lookups = SPARX5_IS0_LOOKUPS, + .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3, + .first_cid = SPARX5_VCAP_CID_IS0_L2, + .last_cid = SPARX5_VCAP_CID_IS0_L4 - 1, + .blockno = 6, /* Maps block 6-7 */ + .blocks = 2, + .ingress = true, + }, + { + .vtype = VCAP_TYPE_IS0, /* CLM-2 */ + .vinst = 2, + .map_id = 3, + .lookups = SPARX5_IS0_LOOKUPS, + .lookups_per_instance = SPARX5_IS0_LOOKUPS / 3, + .first_cid = SPARX5_VCAP_CID_IS0_L4, + .last_cid = SPARX5_VCAP_CID_IS0_MAX, + .blockno = 4, /* Maps block 4-5 */ + .blocks = 2, + .ingress = true, + }, + { .vtype = VCAP_TYPE_IS2, /* IS2-0 */ .vinst = 0, .map_id = 4, @@ -49,6 +108,7 @@ static struct sparx5_vcap_inst { .last_cid = SPARX5_VCAP_CID_IS2_L2 - 1, .blockno = 0, /* Maps block 0-1 */ .blocks = 2, + .ingress = true, }, { .vtype = VCAP_TYPE_IS2, /* IS2-1 */ @@ -60,9 +120,59 @@ static struct sparx5_vcap_inst { .last_cid = SPARX5_VCAP_CID_IS2_MAX, .blockno = 2, /* Maps block 2-3 */ .blocks = 2, + .ingress = true, + }, + { + .vtype = VCAP_TYPE_ES0, + .lookups = SPARX5_ES0_LOOKUPS, + .lookups_per_instance = SPARX5_ES0_LOOKUPS, + .first_cid = SPARX5_VCAP_CID_ES0_L0, + .last_cid = SPARX5_VCAP_CID_ES0_MAX, + .count = 4096, /* Addresses according to datasheet */ + .ingress = false, + }, + { + .vtype = VCAP_TYPE_ES2, + .lookups = SPARX5_ES2_LOOKUPS, + .lookups_per_instance = SPARX5_ES2_LOOKUPS, + .first_cid = SPARX5_VCAP_CID_ES2_L0, + .last_cid = SPARX5_VCAP_CID_ES2_MAX, + .count = 12288, /* Addresses according to datasheet */ + .ingress = false, }, }; +/* These protocols have dedicated keysets in IS0 and a TC dissector */ +static u16 sparx5_vcap_is0_known_etypes[] = { + ETH_P_ALL, + ETH_P_IP, + ETH_P_IPV6, +}; + +/* These protocols have dedicated keysets in IS2 and a TC dissector */ +static u16 sparx5_vcap_is2_known_etypes[] = { + ETH_P_ALL, + ETH_P_ARP, + ETH_P_IP, + ETH_P_IPV6, +}; + +/* These protocols have dedicated keysets in ES2 and a TC dissector */ +static u16 sparx5_vcap_es2_known_etypes[] = { + 
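The instance table above spreads the six IS0 (CLM) lookups over three super-VCAP instances, two lookups each (lookups_per_instance = SPARX5_IS0_LOOKUPS / 3), and gives every instance its own chain-id window plus a pair of super blocks. A sketch of resolving the owning instance from a chain id, assuming the non-overlapping first_cid/last_cid windows shown:

struct vcap_inst_range {
	int first_cid;
	int last_cid;
};

static int vcap_cid_to_instance(const struct vcap_inst_range *inst,
				int count, int cid)
{
	for (int i = 0; i < count; i++)
		if (cid >= inst[i].first_cid && cid <= inst[i].last_cid)
			return i;
	return -1;	/* chain id outside every window */
}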
ETH_P_ALL, + ETH_P_ARP, + ETH_P_IP, + ETH_P_IPV6, +}; + +static void sparx5_vcap_type_err(struct sparx5 *sparx5, + struct vcap_admin *admin, + const char *fname) +{ + pr_err("%s: vcap type: %s not supported\n", + fname, sparx5_vcaps[admin->vtype].name); +} + /* Await the super VCAP completion of the current operation */ static void sparx5_vcap_wait_super_update(struct sparx5 *sparx5) { @@ -73,25 +183,81 @@ static void sparx5_vcap_wait_super_update(struct sparx5 *sparx5) false, sparx5, VCAP_SUPER_CTRL); } -/* Initializing a VCAP address range: only IS2 for now */ +/* Await the ES0 VCAP completion of the current operation */ +static void sparx5_vcap_wait_es0_update(struct sparx5 *sparx5) +{ + u32 value; + + read_poll_timeout(spx5_rd, value, + !VCAP_ES0_CTRL_UPDATE_SHOT_GET(value), 500, 10000, + false, sparx5, VCAP_ES0_CTRL); +} + +/* Await the ES2 VCAP completion of the current operation */ +static void sparx5_vcap_wait_es2_update(struct sparx5 *sparx5) +{ + u32 value; + + read_poll_timeout(spx5_rd, value, + !VCAP_ES2_CTRL_UPDATE_SHOT_GET(value), 500, 10000, + false, sparx5, VCAP_ES2_CTRL); +} + +/* Initializing a VCAP address range */ static void _sparx5_vcap_range_init(struct sparx5 *sparx5, struct vcap_admin *admin, u32 addr, u32 count) { u32 size = count - 1; - spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) | - VCAP_SUPER_CFG_MV_SIZE_SET(size), - sparx5, VCAP_SUPER_CFG); - spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) | - VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(0) | - VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(0) | - VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(0) | - VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) | - VCAP_SUPER_CTRL_CLEAR_CACHE_SET(true) | - VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true), - sparx5, VCAP_SUPER_CTRL); - sparx5_vcap_wait_super_update(sparx5); + switch (admin->vtype) { + case VCAP_TYPE_IS0: + case VCAP_TYPE_IS2: + spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) | + VCAP_SUPER_CFG_MV_SIZE_SET(size), + sparx5, VCAP_SUPER_CFG); + spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) | + VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(0) | + VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(0) | + VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(0) | + VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_SUPER_CTRL_CLEAR_CACHE_SET(true) | + VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_SUPER_CTRL); + sparx5_vcap_wait_super_update(sparx5); + break; + case VCAP_TYPE_ES0: + spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(0) | + VCAP_ES0_CFG_MV_SIZE_SET(size), + sparx5, VCAP_ES0_CFG); + spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) | + VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES0_CTRL_CLEAR_CACHE_SET(true) | + VCAP_ES0_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES0_CTRL); + sparx5_vcap_wait_es0_update(sparx5); + break; + case VCAP_TYPE_ES2: + spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(0) | + VCAP_ES2_CFG_MV_SIZE_SET(size), + sparx5, VCAP_ES2_CFG); + spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) | + VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES2_CTRL_CLEAR_CACHE_SET(true) | + VCAP_ES2_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES2_CTRL); + sparx5_vcap_wait_es2_update(sparx5); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } } /* Initializing VCAP rule data area */ @@ -112,6 +278,17 @@ static const char *sparx5_vcap_keyset_name(struct 
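The new ES0/ES2 wait helpers above mirror the super-VCAP one: kick the operation by setting UPDATE_SHOT, then poll with read_poll_timeout() at a 500 us period under a 10 ms budget until the hardware clears the bit. A plain user-space sketch of the same loop; reg_read() is a hypothetical stand-in for spx5_rd() that pretends the operation completes after two polls:

#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

#define UPDATE_SHOT	(1u << 0)	/* illustrative bit position */

static uint32_t reg_read(void)
{
	static int reads;

	return ++reads < 3 ? UPDATE_SHOT : 0;
}

static bool wait_update_shot(void)
{
	for (int waited_us = 0; waited_us <= 10000; waited_us += 500) {
		if (!(reg_read() & UPDATE_SHOT))
			return true;	/* hardware finished */
		usleep(500);
	}
	return false;	/* timed out */
}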
net_device *ndev, return vcap_keyset_name(port->sparx5->vcap_ctrl, keyset); } +/* Check if this is the first lookup of IS0 */ +static bool sparx5_vcap_is0_is_first_chain(struct vcap_rule *rule) +{ + return (rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L0 && + rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L1) || + ((rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L2 && + rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L3)) || + ((rule->vcap_chain_id >= SPARX5_VCAP_CID_IS0_L4 && + rule->vcap_chain_id < SPARX5_VCAP_CID_IS0_L5)); +} + /* Check if this is the first lookup of IS2 */ static bool sparx5_vcap_is2_is_first_chain(struct vcap_rule *rule) { @@ -121,9 +298,15 @@ static bool sparx5_vcap_is2_is_first_chain(struct vcap_rule *rule) rule->vcap_chain_id < SPARX5_VCAP_CID_IS2_L3)); } +static bool sparx5_vcap_es2_is_first_chain(struct vcap_rule *rule) +{ + return (rule->vcap_chain_id >= SPARX5_VCAP_CID_ES2_L0 && + rule->vcap_chain_id < SPARX5_VCAP_CID_ES2_L1); +} + /* Set the narrow range ingress port mask on a rule */ -static void sparx5_vcap_add_range_port_mask(struct vcap_rule *rule, - struct net_device *ndev) +static void sparx5_vcap_add_ingress_range_port_mask(struct vcap_rule *rule, + struct net_device *ndev) { struct sparx5_port *port = netdev_priv(ndev); u32 port_mask; @@ -153,12 +336,51 @@ static void sparx5_vcap_add_wide_port_mask(struct vcap_rule *rule, vcap_rule_add_key_u72(rule, VCAP_KF_IF_IGR_PORT_MASK, &port_mask); } -/* Convert chain id to vcap lookup id */ -static int sparx5_vcap_cid_to_lookup(int cid) +static void sparx5_vcap_add_egress_range_port_mask(struct vcap_rule *rule, + struct net_device *ndev) +{ + struct sparx5_port *port = netdev_priv(ndev); + u32 port_mask; + u32 range; + + /* Mask range selects: + * 0-2: Physical/Logical egress port number 0-31, 32–63, 64. + * 3-5: Virtual Interface Number 0-31, 32-63, 64. + * 6: CPU queue Number 0-7. 
+ * + * Use physical/logical port ranges (0-2) + */ + range = port->portno / BITS_PER_TYPE(u32); + /* Port bit set to match-any */ + port_mask = ~BIT(port->portno % BITS_PER_TYPE(u32)); + vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_MASK_RNG, range, 0xf); + vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_MASK, 0, port_mask); +} + +/* Convert IS0 chain id to vcap lookup id */ +static int sparx5_vcap_is0_cid_to_lookup(int cid) +{ + int lookup = 0; + + if (cid >= SPARX5_VCAP_CID_IS0_L1 && cid < SPARX5_VCAP_CID_IS0_L2) + lookup = 1; + else if (cid >= SPARX5_VCAP_CID_IS0_L2 && cid < SPARX5_VCAP_CID_IS0_L3) + lookup = 2; + else if (cid >= SPARX5_VCAP_CID_IS0_L3 && cid < SPARX5_VCAP_CID_IS0_L4) + lookup = 3; + else if (cid >= SPARX5_VCAP_CID_IS0_L4 && cid < SPARX5_VCAP_CID_IS0_L5) + lookup = 4; + else if (cid >= SPARX5_VCAP_CID_IS0_L5 && cid < SPARX5_VCAP_CID_IS0_MAX) + lookup = 5; + + return lookup; +} + +/* Convert IS2 chain id to vcap lookup id */ +static int sparx5_vcap_is2_cid_to_lookup(int cid) { int lookup = 0; - /* For now only handle IS2 */ if (cid >= SPARX5_VCAP_CID_IS2_L1 && cid < SPARX5_VCAP_CID_IS2_L2) lookup = 1; else if (cid >= SPARX5_VCAP_CID_IS2_L2 && cid < SPARX5_VCAP_CID_IS2_L3) @@ -169,6 +391,86 @@ static int sparx5_vcap_cid_to_lookup(int cid) return lookup; } +/* Convert ES2 chain id to vcap lookup id */ +static int sparx5_vcap_es2_cid_to_lookup(int cid) +{ + int lookup = 0; + + if (cid >= SPARX5_VCAP_CID_ES2_L1) + lookup = 1; + + return lookup; +} + +/* Add ethernet type IS0 keyset to a list */ +static void +sparx5_vcap_is0_get_port_etype_keysets(struct vcap_keyset_list *keysetlist, + u32 value) +{ + switch (ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_GET(value)) { + case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL_7TUPLE); + break; + case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4: + vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL_5TUPLE_IP4); + break; + } +} + +/* Return the list of keysets for the vcap port configuration */ +static int sparx5_vcap_is0_get_port_keysets(struct net_device *ndev, + int lookup, + struct vcap_keyset_list *keysetlist, + u16 l3_proto) +{ + struct sparx5_port *port = netdev_priv(ndev); + struct sparx5 *sparx5 = port->sparx5; + int portno = port->portno; + u32 value; + + value = spx5_rd(sparx5, ANA_CL_ADV_CL_CFG(portno, lookup)); + + /* Collect all keysets for the port in a list */ + if (l3_proto == ETH_P_ALL) + sparx5_vcap_is0_get_port_etype_keysets(keysetlist, value); + + if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IP) + switch (ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_GET(value)) { + case VCAP_IS0_PS_ETYPE_DEFAULT: + sparx5_vcap_is0_get_port_etype_keysets(keysetlist, + value); + break; + case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE: + vcap_keyset_list_add(keysetlist, + VCAP_KFS_NORMAL_7TUPLE); + break; + case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4: + vcap_keyset_list_add(keysetlist, + VCAP_KFS_NORMAL_5TUPLE_IP4); + break; + } + + if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IPV6) + switch (ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_GET(value)) { + case VCAP_IS0_PS_ETYPE_DEFAULT: + sparx5_vcap_is0_get_port_etype_keysets(keysetlist, + value); + break; + case VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE: + vcap_keyset_list_add(keysetlist, + VCAP_KFS_NORMAL_7TUPLE); + break; + case VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4: + vcap_keyset_list_add(keysetlist, + VCAP_KFS_NORMAL_5TUPLE_IP4); + break; + } + + if (l3_proto != ETH_P_IP && l3_proto != ETH_P_IPV6) + sparx5_vcap_is0_get_port_etype_keysets(keysetlist, value); + return 0; +} + /* Return the list of 
keysets for the vcap port configuration */ static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev, int lookup, @@ -180,10 +482,7 @@ static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev, int portno = port->portno; u32 value; - /* Check if the port keyset selection is enabled */ value = spx5_rd(sparx5, ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup)); - if (!ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_GET(value)) - return -ENOENT; /* Collect all keysets for the port in a list */ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_ARP) { @@ -274,6 +573,121 @@ static int sparx5_vcap_is2_get_port_keysets(struct net_device *ndev, return 0; } +/* Return the keysets for the vcap port IP4 traffic class configuration */ +static void +sparx5_vcap_es2_get_port_ipv4_keysets(struct vcap_keyset_list *keysetlist, + u32 value) +{ + switch (EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_IPV4_MAC_ETYPE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE); + break; + case VCAP_ES2_PS_IPV4_IP_7TUPLE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE); + break; + case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_TCP_UDP); + break; + case VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_TCP_UDP); + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_OTHER); + break; + case VCAP_ES2_PS_IPV4_IP4_VID: + /* Not used */ + break; + case VCAP_ES2_PS_IPV4_IP4_OTHER: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP4_OTHER); + break; + } +} + +/* Return the list of keysets for the vcap port configuration */ +static int sparx5_vcap_es0_get_port_keysets(struct net_device *ndev, + struct vcap_keyset_list *keysetlist, + u16 l3_proto) +{ + struct sparx5_port *port = netdev_priv(ndev); + struct sparx5 *sparx5 = port->sparx5; + int portno = port->portno; + u32 value; + + value = spx5_rd(sparx5, REW_RTAG_ETAG_CTRL(portno)); + + /* Collect all keysets for the port in a list */ + switch (REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA_GET(value)) { + case VCAP_ES0_PS_NORMAL_SELECTION: + case VCAP_ES0_PS_FORCE_ISDX_LOOKUPS: + vcap_keyset_list_add(keysetlist, VCAP_KFS_ISDX); + break; + default: + break; + } + return 0; +} + +/* Return the list of keysets for the vcap port configuration */ +static int sparx5_vcap_es2_get_port_keysets(struct net_device *ndev, + int lookup, + struct vcap_keyset_list *keysetlist, + u16 l3_proto) +{ + struct sparx5_port *port = netdev_priv(ndev); + struct sparx5 *sparx5 = port->sparx5; + int portno = port->portno; + u32 value; + + value = spx5_rd(sparx5, EACL_VCAP_ES2_KEY_SEL(portno, lookup)); + + /* Collect all keysets for the port in a list */ + if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_ARP) { + switch (EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_ARP_MAC_ETYPE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE); + break; + case VCAP_ES2_PS_ARP_ARP: + vcap_keyset_list_add(keysetlist, VCAP_KFS_ARP); + break; + } + } + + if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IP) + sparx5_vcap_es2_get_port_ipv4_keysets(keysetlist, value); + + if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IPV6) { + switch (EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_GET(value)) { + case VCAP_ES2_PS_IPV6_MAC_ETYPE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE_VID: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE); + break; + case VCAP_ES2_PS_IPV6_IP_7TUPLE_STD: + 
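The cid-to-lookup helpers above all encode the same layout: each lookup of a VCAP owns one contiguous chain-id window starting at the VCAP's first chain id. If the windows are equally sized, which the SPARX5_VCAP_CID_* spacing appears to be, the if/else ladder collapses to a single division; a hedged sketch of that equivalent form:

/* Equivalent only under the equal-window assumption stated above */
static int vcap_cid_to_lookup(int cid, int first_cid, int window,
			      int num_lookups)
{
	int lookup = (cid - first_cid) / window;

	if (lookup < 0 || lookup >= num_lookups)
		return 0;	/* out-of-range cids fall back to lookup 0 */
	return lookup;
}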
vcap_keyset_list_add(keysetlist, VCAP_KFS_IP_7TUPLE); + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP6_STD); + break; + case VCAP_ES2_PS_IPV6_IP6_VID: + /* Not used */ + break; + case VCAP_ES2_PS_IPV6_IP6_STD: + vcap_keyset_list_add(keysetlist, VCAP_KFS_IP6_STD); + break; + case VCAP_ES2_PS_IPV6_IP4_DOWNGRADE: + sparx5_vcap_es2_get_port_ipv4_keysets(keysetlist, + value); + break; + } + } + + if (l3_proto != ETH_P_ARP && l3_proto != ETH_P_IP && + l3_proto != ETH_P_IPV6) { + vcap_keyset_list_add(keysetlist, VCAP_KFS_MAC_ETYPE); + } + return 0; +} + /* Get the port keyset for the vcap lookup */ int sparx5_vcap_get_port_keyset(struct net_device *ndev, struct vcap_admin *admin, @@ -281,10 +695,64 @@ int sparx5_vcap_get_port_keyset(struct net_device *ndev, u16 l3_proto, struct vcap_keyset_list *kslist) { - int lookup; + int lookup, err = -EINVAL; + struct sparx5_port *port; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + lookup = sparx5_vcap_is0_cid_to_lookup(cid); + err = sparx5_vcap_is0_get_port_keysets(ndev, lookup, kslist, + l3_proto); + break; + case VCAP_TYPE_IS2: + lookup = sparx5_vcap_is2_cid_to_lookup(cid); + err = sparx5_vcap_is2_get_port_keysets(ndev, lookup, kslist, + l3_proto); + break; + case VCAP_TYPE_ES0: + err = sparx5_vcap_es0_get_port_keysets(ndev, kslist, l3_proto); + break; + case VCAP_TYPE_ES2: + lookup = sparx5_vcap_es2_cid_to_lookup(cid); + err = sparx5_vcap_es2_get_port_keysets(ndev, lookup, kslist, + l3_proto); + break; + default: + port = netdev_priv(ndev); + sparx5_vcap_type_err(port->sparx5, admin, __func__); + break; + } + return err; +} + +/* Check if the ethertype is supported by the vcap port classification */ +bool sparx5_vcap_is_known_etype(struct vcap_admin *admin, u16 etype) +{ + const u16 *known_etypes; + int size, idx; - lookup = sparx5_vcap_cid_to_lookup(cid); - return sparx5_vcap_is2_get_port_keysets(ndev, lookup, kslist, l3_proto); + switch (admin->vtype) { + case VCAP_TYPE_IS0: + known_etypes = sparx5_vcap_is0_known_etypes; + size = ARRAY_SIZE(sparx5_vcap_is0_known_etypes); + break; + case VCAP_TYPE_IS2: + known_etypes = sparx5_vcap_is2_known_etypes; + size = ARRAY_SIZE(sparx5_vcap_is2_known_etypes); + break; + case VCAP_TYPE_ES0: + return true; + case VCAP_TYPE_ES2: + known_etypes = sparx5_vcap_es2_known_etypes; + size = ARRAY_SIZE(sparx5_vcap_es2_known_etypes); + break; + default: + return false; + } + for (idx = 0; idx < size; ++idx) + if (known_etypes[idx] == etype) + return true; + return false; } /* API callback used for validating a field keyset (check the port keysets) */ @@ -297,16 +765,40 @@ sparx5_vcap_validate_keyset(struct net_device *ndev, { struct vcap_keyset_list keysetlist = {}; enum vcap_keyfield_set keysets[10] = {}; + struct sparx5_port *port; int idx, jdx, lookup; if (!kslist || kslist->cnt == 0) return VCAP_KFS_NO_VALUE; - /* Get a list of currently configured keysets in the lookups */ - lookup = sparx5_vcap_cid_to_lookup(rule->vcap_chain_id); keysetlist.max = ARRAY_SIZE(keysets); keysetlist.keysets = keysets; - sparx5_vcap_is2_get_port_keysets(ndev, lookup, &keysetlist, l3_proto); + + /* Get a list of currently configured keysets in the lookups */ + switch (admin->vtype) { + case VCAP_TYPE_IS0: + lookup = sparx5_vcap_is0_cid_to_lookup(rule->vcap_chain_id); + sparx5_vcap_is0_get_port_keysets(ndev, lookup, &keysetlist, + l3_proto); + break; + case VCAP_TYPE_IS2: + lookup = sparx5_vcap_is2_cid_to_lookup(rule->vcap_chain_id); + sparx5_vcap_is2_get_port_keysets(ndev, lookup, &keysetlist, + l3_proto); + break; + case VCAP_TYPE_ES0: + 
sparx5_vcap_es0_get_port_keysets(ndev, &keysetlist, l3_proto); + break; + case VCAP_TYPE_ES2: + lookup = sparx5_vcap_es2_cid_to_lookup(rule->vcap_chain_id); + sparx5_vcap_es2_get_port_keysets(ndev, lookup, &keysetlist, + l3_proto); + break; + default: + port = netdev_priv(ndev); + sparx5_vcap_type_err(port->sparx5, admin, __func__); + break; + } /* Check if there is a match and return the match */ for (idx = 0; idx < kslist->cnt; ++idx) @@ -321,27 +813,97 @@ sparx5_vcap_validate_keyset(struct net_device *ndev, return -ENOENT; } -/* API callback used for adding default fields to a rule */ -static void sparx5_vcap_add_default_fields(struct net_device *ndev, - struct vcap_admin *admin, - struct vcap_rule *rule) +static void sparx5_vcap_ingress_add_default_fields(struct net_device *ndev, + struct vcap_admin *admin, + struct vcap_rule *rule) { const struct vcap_field *field; + bool is_first; + /* Add ingress port mask matching the net device */ field = vcap_lookup_keyfield(rule, VCAP_KF_IF_IGR_PORT_MASK); if (field && field->width == SPX5_PORTS) sparx5_vcap_add_wide_port_mask(rule, ndev); else if (field && field->width == BITS_PER_TYPE(u32)) - sparx5_vcap_add_range_port_mask(rule, ndev); + sparx5_vcap_add_ingress_range_port_mask(rule, ndev); else pr_err("%s:%d: %s: could not add an ingress port mask for: %s\n", __func__, __LINE__, netdev_name(ndev), sparx5_vcap_keyset_name(ndev, rule->keyset)); - /* add the lookup bit */ - if (sparx5_vcap_is2_is_first_chain(rule)) - vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1); + + if (admin->vtype == VCAP_TYPE_IS0) + is_first = sparx5_vcap_is0_is_first_chain(rule); else - vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_0); + is_first = sparx5_vcap_is2_is_first_chain(rule); + + /* Add key that selects the first/second lookup */ + if (is_first) + vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, + VCAP_BIT_1); + else + vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, + VCAP_BIT_0); +} + +static void sparx5_vcap_es0_add_default_fields(struct net_device *ndev, + struct vcap_admin *admin, + struct vcap_rule *rule) +{ + struct sparx5_port *port = netdev_priv(ndev); + + vcap_rule_add_key_u32(rule, VCAP_KF_IF_EGR_PORT_NO, port->portno, ~0); + /* Match untagged frames if there was no VLAN key */ + vcap_rule_add_key_u32(rule, VCAP_KF_8021Q_TPID, SPX5_TPID_SEL_UNTAGGED, + ~0); +} + +static void sparx5_vcap_es2_add_default_fields(struct net_device *ndev, + struct vcap_admin *admin, + struct vcap_rule *rule) +{ + const struct vcap_field *field; + bool is_first; + + /* Add egress port mask matching the net device */ + field = vcap_lookup_keyfield(rule, VCAP_KF_IF_EGR_PORT_MASK); + if (field) + sparx5_vcap_add_egress_range_port_mask(rule, ndev); + + /* Add key that selects the first/second lookup */ + is_first = sparx5_vcap_es2_is_first_chain(rule); + + if (is_first) + vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, + VCAP_BIT_1); + else + vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, + VCAP_BIT_0); +} + +/* API callback used for adding default fields to a rule */ +static void sparx5_vcap_add_default_fields(struct net_device *ndev, + struct vcap_admin *admin, + struct vcap_rule *rule) +{ + struct sparx5_port *port; + + /* add the lookup bit */ + switch (admin->vtype) { + case VCAP_TYPE_IS0: + case VCAP_TYPE_IS2: + sparx5_vcap_ingress_add_default_fields(ndev, admin, rule); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_add_default_fields(ndev, admin, rule); + break; + case VCAP_TYPE_ES2: + 
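sparx5_vcap_add_egress_range_port_mask(), called from the ES2 default fields above, expresses "egress on exactly this port" through an inverted mask: in the VCAP API a set mask bit means "must match", so clearing only the port's own bit (port_mask = ~BIT(portno % 32)) leaves that bit don't-care while pinning every other port bit in the selected 32-port range to zero. A tiny standalone model of the match this produces:

#include <stdbool.h>
#include <stdint.h>

#define BIT(nr)	(1u << (nr))	/* user-space stand-in */

/* value = 0, mask = ~BIT(port): the rule hits when no other port
 * bit in this 32-port range is set, regardless of the port's own bit.
 */
static bool egr_mask_hits(uint32_t egress_bits, int port_in_range)
{
	uint32_t value = 0;
	uint32_t mask = ~BIT(port_in_range);

	return (egress_bits & mask) == (value & mask);
}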
sparx5_vcap_es2_add_default_fields(ndev, admin, rule); + break; + default: + port = netdev_priv(ndev); + sparx5_vcap_type_err(port->sparx5, admin, __func__); + break; + } } /* API callback used for erasing the vcap cache area (not the register area) */ @@ -353,21 +915,60 @@ static void sparx5_vcap_cache_erase(struct vcap_admin *admin) memset(&admin->cache.counter, 0, sizeof(admin->cache.counter)); } -/* API callback used for writing to the VCAP cache */ -static void sparx5_vcap_cache_write(struct net_device *ndev, - struct vcap_admin *admin, - enum vcap_selection sel, - u32 start, - u32 count) +static void sparx5_vcap_is0_cache_write(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + switch (sel) { + case VCAP_SEL_ENTRY: + for (idx = 0; idx < count; ++idx) { + /* Avoid 'match-off' by setting value & mask */ + spx5_wr(keystr[idx] & mskstr[idx], sparx5, + VCAP_SUPER_VCAP_ENTRY_DAT(idx)); + spx5_wr(~mskstr[idx], sparx5, + VCAP_SUPER_VCAP_MASK_DAT(idx)); + } + break; + case VCAP_SEL_ACTION: + for (idx = 0; idx < count; ++idx) + spx5_wr(actstr[idx], sparx5, + VCAP_SUPER_VCAP_ACTION_DAT(idx)); + break; + case VCAP_SEL_ALL: + pr_err("%s:%d: cannot write all streams at once\n", + __func__, __LINE__); + break; + default: + break; + } + + if (sel & VCAP_SEL_COUNTER) + spx5_wr(admin->cache.counter, sparx5, + VCAP_SUPER_VCAP_CNT_DAT(0)); +} + +static void sparx5_vcap_is2_cache_write(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) { - struct sparx5_port *port = netdev_priv(ndev); - struct sparx5 *sparx5 = port->sparx5; u32 *keystr, *mskstr, *actstr; int idx; keystr = &admin->cache.keystream[start]; mskstr = &admin->cache.maskstream[start]; actstr = &admin->cache.actionstream[start]; + switch (sel) { case VCAP_SEL_ENTRY: for (idx = 0; idx < count; ++idx) { @@ -403,21 +1004,143 @@ static void sparx5_vcap_cache_write(struct net_device *ndev, } } -/* API callback used for reading from the VCAP into the VCAP cache */ -static void sparx5_vcap_cache_read(struct net_device *ndev, - struct vcap_admin *admin, - enum vcap_selection sel, - u32 start, - u32 count) +/* Use ESDX counters located in the XQS */ +static void sparx5_es0_write_esdx_counter(struct sparx5 *sparx5, + struct vcap_admin *admin, u32 id) +{ + mutex_lock(&sparx5->queue_stats_lock); + spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(id), sparx5, XQS_STAT_CFG); + spx5_wr(admin->cache.counter, sparx5, + XQS_CNT(SPARX5_STAT_ESDX_GRN_PKTS)); + spx5_wr(0, sparx5, XQS_CNT(SPARX5_STAT_ESDX_YEL_PKTS)); + mutex_unlock(&sparx5->queue_stats_lock); +} + +static void sparx5_vcap_es0_cache_write(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + switch (sel) { + case VCAP_SEL_ENTRY: + for (idx = 0; idx < count; ++idx) { + /* Avoid 'match-off' by setting value & mask */ + spx5_wr(keystr[idx] & mskstr[idx], sparx5, + VCAP_ES0_VCAP_ENTRY_DAT(idx)); + spx5_wr(~mskstr[idx], sparx5, + VCAP_ES0_VCAP_MASK_DAT(idx)); + } + break; + case VCAP_SEL_ACTION: + for (idx = 0; idx < count; ++idx) + spx5_wr(actstr[idx], sparx5, + VCAP_ES0_VCAP_ACTION_DAT(idx)); + 
break; + case VCAP_SEL_ALL: + pr_err("%s:%d: cannot write all streams at once\n", + __func__, __LINE__); + break; + default: + break; + } + if (sel & VCAP_SEL_COUNTER) { + spx5_wr(admin->cache.counter, sparx5, VCAP_ES0_VCAP_CNT_DAT(0)); + sparx5_es0_write_esdx_counter(sparx5, admin, start); + } +} + +static void sparx5_vcap_es2_cache_write(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + switch (sel) { + case VCAP_SEL_ENTRY: + for (idx = 0; idx < count; ++idx) { + /* Avoid 'match-off' by setting value & mask */ + spx5_wr(keystr[idx] & mskstr[idx], sparx5, + VCAP_ES2_VCAP_ENTRY_DAT(idx)); + spx5_wr(~mskstr[idx], sparx5, + VCAP_ES2_VCAP_MASK_DAT(idx)); + } + break; + case VCAP_SEL_ACTION: + for (idx = 0; idx < count; ++idx) + spx5_wr(actstr[idx], sparx5, + VCAP_ES2_VCAP_ACTION_DAT(idx)); + break; + case VCAP_SEL_ALL: + pr_err("%s:%d: cannot write all streams at once\n", + __func__, __LINE__); + break; + default: + break; + } + if (sel & VCAP_SEL_COUNTER) { + start = start & 0x7ff; /* counter limit */ + spx5_wr(admin->cache.counter, sparx5, EACL_ES2_CNT(start)); + spx5_wr(admin->cache.sticky, sparx5, VCAP_ES2_VCAP_CNT_DAT(0)); + } +} + +/* API callback used for writing to the VCAP cache */ +static void sparx5_vcap_cache_write(struct net_device *ndev, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) { struct sparx5_port *port = netdev_priv(ndev); struct sparx5 *sparx5 = port->sparx5; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + sparx5_vcap_is0_cache_write(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_IS2: + sparx5_vcap_is2_cache_write(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_cache_write(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_cache_write(sparx5, admin, sel, start, count); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } +} + +static void sparx5_vcap_is0_cache_read(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ u32 *keystr, *mskstr, *actstr; int idx; keystr = &admin->cache.keystream[start]; mskstr = &admin->cache.maskstream[start]; actstr = &admin->cache.actionstream[start]; + if (sel & VCAP_SEL_ENTRY) { for (idx = 0; idx < count; ++idx) { keystr[idx] = spx5_rd(sparx5, @@ -426,11 +1149,47 @@ static void sparx5_vcap_cache_read(struct net_device *ndev, VCAP_SUPER_VCAP_MASK_DAT(idx)); } } - if (sel & VCAP_SEL_ACTION) { + + if (sel & VCAP_SEL_ACTION) for (idx = 0; idx < count; ++idx) actstr[idx] = spx5_rd(sparx5, VCAP_SUPER_VCAP_ACTION_DAT(idx)); + + if (sel & VCAP_SEL_COUNTER) { + admin->cache.counter = + spx5_rd(sparx5, VCAP_SUPER_VCAP_CNT_DAT(0)); + admin->cache.sticky = + spx5_rd(sparx5, VCAP_SUPER_VCAP_CNT_DAT(0)); + } +} + +static void sparx5_vcap_is2_cache_read(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + if (sel & VCAP_SEL_ENTRY) { + for (idx = 0; idx < count; ++idx) { + keystr[idx] = spx5_rd(sparx5, + VCAP_SUPER_VCAP_ENTRY_DAT(idx)); + mskstr[idx] = ~spx5_rd(sparx5, + 
VCAP_SUPER_VCAP_MASK_DAT(idx)); + } } + + if (sel & VCAP_SEL_ACTION) + for (idx = 0; idx < count; ++idx) + actstr[idx] = spx5_rd(sparx5, + VCAP_SUPER_VCAP_ACTION_DAT(idx)); + if (sel & VCAP_SEL_COUNTER) { start = start & 0xfff; /* counter limit */ if (admin->vinst == 0) @@ -444,6 +1203,121 @@ static void sparx5_vcap_cache_read(struct net_device *ndev, } } +/* Use ESDX counters located in the XQS */ +static void sparx5_es0_read_esdx_counter(struct sparx5 *sparx5, + struct vcap_admin *admin, u32 id) +{ + u32 counter; + + mutex_lock(&sparx5->queue_stats_lock); + spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(id), sparx5, XQS_STAT_CFG); + counter = spx5_rd(sparx5, XQS_CNT(SPARX5_STAT_ESDX_GRN_PKTS)) + + spx5_rd(sparx5, XQS_CNT(SPARX5_STAT_ESDX_YEL_PKTS)); + mutex_unlock(&sparx5->queue_stats_lock); + if (counter) + admin->cache.counter = counter; +} + +static void sparx5_vcap_es0_cache_read(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + if (sel & VCAP_SEL_ENTRY) { + for (idx = 0; idx < count; ++idx) { + keystr[idx] = + spx5_rd(sparx5, VCAP_ES0_VCAP_ENTRY_DAT(idx)); + mskstr[idx] = + ~spx5_rd(sparx5, VCAP_ES0_VCAP_MASK_DAT(idx)); + } + } + + if (sel & VCAP_SEL_ACTION) + for (idx = 0; idx < count; ++idx) + actstr[idx] = + spx5_rd(sparx5, VCAP_ES0_VCAP_ACTION_DAT(idx)); + + if (sel & VCAP_SEL_COUNTER) { + admin->cache.counter = + spx5_rd(sparx5, VCAP_ES0_VCAP_CNT_DAT(0)); + admin->cache.sticky = admin->cache.counter; + sparx5_es0_read_esdx_counter(sparx5, admin, start); + } +} + +static void sparx5_vcap_es2_cache_read(struct sparx5 *sparx5, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + u32 *keystr, *mskstr, *actstr; + int idx; + + keystr = &admin->cache.keystream[start]; + mskstr = &admin->cache.maskstream[start]; + actstr = &admin->cache.actionstream[start]; + + if (sel & VCAP_SEL_ENTRY) { + for (idx = 0; idx < count; ++idx) { + keystr[idx] = + spx5_rd(sparx5, VCAP_ES2_VCAP_ENTRY_DAT(idx)); + mskstr[idx] = + ~spx5_rd(sparx5, VCAP_ES2_VCAP_MASK_DAT(idx)); + } + } + + if (sel & VCAP_SEL_ACTION) + for (idx = 0; idx < count; ++idx) + actstr[idx] = + spx5_rd(sparx5, VCAP_ES2_VCAP_ACTION_DAT(idx)); + + if (sel & VCAP_SEL_COUNTER) { + start = start & 0x7ff; /* counter limit */ + admin->cache.counter = + spx5_rd(sparx5, EACL_ES2_CNT(start)); + admin->cache.sticky = + spx5_rd(sparx5, VCAP_ES2_VCAP_CNT_DAT(0)); + } +} + +/* API callback used for reading from the VCAP into the VCAP cache */ +static void sparx5_vcap_cache_read(struct net_device *ndev, + struct vcap_admin *admin, + enum vcap_selection sel, + u32 start, + u32 count) +{ + struct sparx5_port *port = netdev_priv(ndev); + struct sparx5 *sparx5 = port->sparx5; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + sparx5_vcap_is0_cache_read(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_IS2: + sparx5_vcap_is2_cache_read(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_cache_read(sparx5, admin, sel, start, count); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_cache_read(sparx5, admin, sel, start, count); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } +} + /* API callback used for initializing a VCAP address range */ static void sparx5_vcap_range_init(struct net_device *ndev, struct vcap_admin *admin, 
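All the cache read/write helpers in this area share one convention: the driver keeps masks with 1 = care, while the hardware register apparently stores the inverse, so writes emit (key & mask) plus ~mask (the "& mask" preventing a "match-off" where a key bit is set under a don't-care mask), and reads invert MASK_DAT back again. A round-trip sketch of that encoding:

#include <stdint.h>

struct hw_entry {
	uint32_t entry;	/* key bits as written to ENTRY_DAT */
	uint32_t mask;	/* inverted care mask, as in MASK_DAT */
};

static struct hw_entry encode(uint32_t key, uint32_t care_mask)
{
	struct hw_entry hw = {
		.entry = key & care_mask,	/* no 'match-off' bits */
		.mask = ~care_mask,
	};

	return hw;
}

static uint32_t decode_care_mask(const struct hw_entry *hw)
{
	return ~hw->mask;	/* back to the driver's 1 = care form */
}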
u32 addr, @@ -455,16 +1329,12 @@ static void sparx5_vcap_range_init(struct net_device *ndev, _sparx5_vcap_range_init(sparx5, admin, addr, count); } -/* API callback used for updating the VCAP cache */ -static void sparx5_vcap_update(struct net_device *ndev, - struct vcap_admin *admin, enum vcap_command cmd, - enum vcap_selection sel, u32 addr) +static void sparx5_vcap_super_update(struct sparx5 *sparx5, + enum vcap_command cmd, + enum vcap_selection sel, u32 addr) { - struct sparx5_port *port = netdev_priv(ndev); - struct sparx5 *sparx5 = port->sparx5; - bool clear; + bool clear = (cmd == VCAP_CMD_INITIALIZE); - clear = (cmd == VCAP_CMD_INITIALIZE); spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) | VCAP_SUPER_CFG_MV_SIZE_SET(0), sparx5, VCAP_SUPER_CFG); spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(cmd) | @@ -478,24 +1348,75 @@ static void sparx5_vcap_update(struct net_device *ndev, sparx5_vcap_wait_super_update(sparx5); } -/* API callback used for moving a block of rules in the VCAP */ -static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin, - u32 addr, int offset, int count) +static void sparx5_vcap_es0_update(struct sparx5 *sparx5, + enum vcap_command cmd, + enum vcap_selection sel, u32 addr) +{ + bool clear = (cmd == VCAP_CMD_INITIALIZE); + + spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(0) | + VCAP_ES0_CFG_MV_SIZE_SET(0), sparx5, VCAP_ES0_CFG); + spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(cmd) | + VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET((VCAP_SEL_ENTRY & sel) == 0) | + VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET((VCAP_SEL_ACTION & sel) == 0) | + VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET((VCAP_SEL_COUNTER & sel) == 0) | + VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES0_CTRL_CLEAR_CACHE_SET(clear) | + VCAP_ES0_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES0_CTRL); + sparx5_vcap_wait_es0_update(sparx5); +} + +static void sparx5_vcap_es2_update(struct sparx5 *sparx5, + enum vcap_command cmd, + enum vcap_selection sel, u32 addr) +{ + bool clear = (cmd == VCAP_CMD_INITIALIZE); + + spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(0) | + VCAP_ES2_CFG_MV_SIZE_SET(0), sparx5, VCAP_ES2_CFG); + spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(cmd) | + VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET((VCAP_SEL_ENTRY & sel) == 0) | + VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET((VCAP_SEL_ACTION & sel) == 0) | + VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET((VCAP_SEL_COUNTER & sel) == 0) | + VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES2_CTRL_CLEAR_CACHE_SET(clear) | + VCAP_ES2_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES2_CTRL); + sparx5_vcap_wait_es2_update(sparx5); +} + +/* API callback used for updating the VCAP cache */ +static void sparx5_vcap_update(struct net_device *ndev, + struct vcap_admin *admin, enum vcap_command cmd, + enum vcap_selection sel, u32 addr) { struct sparx5_port *port = netdev_priv(ndev); struct sparx5 *sparx5 = port->sparx5; - enum vcap_command cmd; - u16 mv_num_pos; - u16 mv_size; - mv_size = count - 1; - if (offset > 0) { - mv_num_pos = offset - 1; - cmd = VCAP_CMD_MOVE_DOWN; - } else { - mv_num_pos = -offset - 1; - cmd = VCAP_CMD_MOVE_UP; + switch (admin->vtype) { + case VCAP_TYPE_IS0: + case VCAP_TYPE_IS2: + sparx5_vcap_super_update(sparx5, cmd, sel, addr); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_update(sparx5, cmd, sel, addr); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_update(sparx5, cmd, sel, addr); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; } +} + +static void sparx5_vcap_super_move(struct sparx5 *sparx5, + u32 addr, + enum vcap_command cmd, + u16 mv_num_pos, + u16 mv_size) +{ 
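Note the inversion in the update helpers above: the hardware takes *disable* bits (UPDATE_ENTRY_DIS, UPDATE_ACTION_DIS, UPDATE_CNT_DIS) while the API passes an *enable* selection mask, hence the repeated `(VCAP_SEL_x & sel) == 0` terms. A small standalone sketch of that conversion, assuming the selection values are single bits as the code implies:

	#include <stdbool.h>
	#include <stdio.h>

	/* Assumed single-bit selection flags, mirroring enum vcap_selection. */
	enum sel { SEL_ENTRY = 1, SEL_ACTION = 2, SEL_COUNTER = 4 };

	struct dis_bits {
		bool entry_dis;
		bool action_dis;
		bool cnt_dis;
	};

	/* The hardware wants "disable" bits; the API passes "enable" bits. */
	static struct dis_bits sel_to_dis(int sel)
	{
		struct dis_bits d = {
			.entry_dis  = (sel & SEL_ENTRY) == 0,
			.action_dis = (sel & SEL_ACTION) == 0,
			.cnt_dis    = (sel & SEL_COUNTER) == 0,
		};
		return d;
	}

	int main(void)
	{
		/* Reading only counters leaves entry/action transfers disabled. */
		struct dis_bits d = sel_to_dis(SEL_COUNTER);
		printf("entry_dis=%d action_dis=%d cnt_dis=%d\n",
		       d.entry_dis, d.action_dis, d.cnt_dis);  /* 1 1 0 */
		return 0;
	}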
spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(mv_num_pos) | VCAP_SUPER_CFG_MV_SIZE_SET(mv_size), sparx5, VCAP_SUPER_CFG); @@ -510,29 +1431,82 @@ static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin, sparx5_vcap_wait_super_update(sparx5); } -/* Enable all lookups in the VCAP instance */ -static int sparx5_vcap_enable(struct net_device *ndev, - struct vcap_admin *admin, - bool enable) +static void sparx5_vcap_es0_move(struct sparx5 *sparx5, + u32 addr, + enum vcap_command cmd, + u16 mv_num_pos, + u16 mv_size) +{ + spx5_wr(VCAP_ES0_CFG_MV_NUM_POS_SET(mv_num_pos) | + VCAP_ES0_CFG_MV_SIZE_SET(mv_size), + sparx5, VCAP_ES0_CFG); + spx5_wr(VCAP_ES0_CTRL_UPDATE_CMD_SET(cmd) | + VCAP_ES0_CTRL_UPDATE_ENTRY_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_ACTION_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_CNT_DIS_SET(0) | + VCAP_ES0_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES0_CTRL_CLEAR_CACHE_SET(false) | + VCAP_ES0_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES0_CTRL); + sparx5_vcap_wait_es0_update(sparx5); +} + +static void sparx5_vcap_es2_move(struct sparx5 *sparx5, + u32 addr, + enum vcap_command cmd, + u16 mv_num_pos, + u16 mv_size) +{ + spx5_wr(VCAP_ES2_CFG_MV_NUM_POS_SET(mv_num_pos) | + VCAP_ES2_CFG_MV_SIZE_SET(mv_size), + sparx5, VCAP_ES2_CFG); + spx5_wr(VCAP_ES2_CTRL_UPDATE_CMD_SET(cmd) | + VCAP_ES2_CTRL_UPDATE_ENTRY_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_ACTION_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_CNT_DIS_SET(0) | + VCAP_ES2_CTRL_UPDATE_ADDR_SET(addr) | + VCAP_ES2_CTRL_CLEAR_CACHE_SET(false) | + VCAP_ES2_CTRL_UPDATE_SHOT_SET(true), + sparx5, VCAP_ES2_CTRL); + sparx5_vcap_wait_es2_update(sparx5); +} + +/* API callback used for moving a block of rules in the VCAP */ +static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin, + u32 addr, int offset, int count) { struct sparx5_port *port = netdev_priv(ndev); - struct sparx5 *sparx5; - int portno; + struct sparx5 *sparx5 = port->sparx5; + enum vcap_command cmd; + u16 mv_num_pos; + u16 mv_size; - sparx5 = port->sparx5; - portno = port->portno; + mv_size = count - 1; + if (offset > 0) { + mv_num_pos = offset - 1; + cmd = VCAP_CMD_MOVE_DOWN; + } else { + mv_num_pos = -offset - 1; + cmd = VCAP_CMD_MOVE_UP; + } - /* For now we only consider IS2 */ - if (enable) - spx5_wr(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0xf), sparx5, - ANA_ACL_VCAP_S2_CFG(portno)); - else - spx5_wr(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0), sparx5, - ANA_ACL_VCAP_S2_CFG(portno)); - return 0; + switch (admin->vtype) { + case VCAP_TYPE_IS0: + case VCAP_TYPE_IS2: + sparx5_vcap_super_move(sparx5, addr, cmd, mv_num_pos, mv_size); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_move(sparx5, addr, cmd, mv_num_pos, mv_size); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_move(sparx5, addr, cmd, mv_num_pos, mv_size); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } } -/* API callback operations: only IS2 is supported for now */ static struct vcap_operations sparx5_vcap_ops = { .validate_keyset = sparx5_vcap_validate_keyset, .add_default_fields = sparx5_vcap_add_default_fields, @@ -543,19 +1517,41 @@ static struct vcap_operations sparx5_vcap_ops = { .update = sparx5_vcap_update, .move = sparx5_vcap_move, .port_info = sparx5_port_info, - .enable = sparx5_vcap_enable, }; -/* Enable lookups per port and set the keyset generation: only IS2 for now */ -static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5, - struct vcap_admin *admin) +/* Enable IS0 lookups per port and set the keyset generation */ +static void sparx5_vcap_is0_port_key_selection(struct 
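The move callback above encodes a signed rule offset into an unsigned MV_NUM_POS field plus a direction command, and the block size into MV_SIZE, both biased by one. The arithmetic is easy to get wrong, so here is the same computation lifted into a standalone check (the command names are local stand-ins for the VCAP_CMD_* values):

	#include <stdio.h>

	enum cmd { CMD_MOVE_DOWN, CMD_MOVE_UP };

	struct move {
		enum cmd cmd;
		unsigned short mv_num_pos;  /* |offset| - 1 */
		unsigned short mv_size;     /* count - 1    */
	};

	static struct move encode_move(int offset, int count)
	{
		struct move m = { .mv_size = (unsigned short)(count - 1) };

		if (offset > 0) {            /* towards higher addresses */
			m.mv_num_pos = (unsigned short)(offset - 1);
			m.cmd = CMD_MOVE_DOWN;
		} else {                     /* towards lower addresses */
			m.mv_num_pos = (unsigned short)(-offset - 1);
			m.cmd = CMD_MOVE_UP;
		}
		return m;
	}

	int main(void)
	{
		struct move m = encode_move(-6, 3); /* move 3 addresses up by 6 */
		printf("cmd=%s num_pos=%u size=%u\n",
		       m.cmd == CMD_MOVE_UP ? "UP" : "DOWN",
		       m.mv_num_pos, m.mv_size);
		/* prints: cmd=UP num_pos=5 size=2 */
		return 0;
	}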
sparx5 *sparx5, + struct vcap_admin *admin) +{ + int portno, lookup; + u32 keysel; + + keysel = VCAP_IS0_KEYSEL(false, + VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE, + VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4, + VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE, + VCAP_IS0_PS_MPLS_FOLLOW_ETYPE, + VCAP_IS0_PS_MPLS_FOLLOW_ETYPE, + VCAP_IS0_PS_MLBS_FOLLOW_ETYPE); + for (lookup = 0; lookup < admin->lookups; ++lookup) { + for (portno = 0; portno < SPX5_PORTS; ++portno) { + spx5_wr(keysel, sparx5, + ANA_CL_ADV_CL_CFG(portno, lookup)); + spx5_rmw(ANA_CL_ADV_CL_CFG_LOOKUP_ENA, + ANA_CL_ADV_CL_CFG_LOOKUP_ENA, + sparx5, + ANA_CL_ADV_CL_CFG(portno, lookup)); + } + } +} + +/* Enable IS2 lookups per port and set the keyset generation */ +static void sparx5_vcap_is2_port_key_selection(struct sparx5 *sparx5, + struct vcap_admin *admin) { int portno, lookup; u32 keysel; - /* all traffic types generate the MAC_ETYPE keyset for now in all - * lookups on all ports - */ keysel = VCAP_IS2_KEYSEL(true, VCAP_IS2_PS_NONETH_MAC_ETYPE, VCAP_IS2_PS_IPV4_MC_IP4_TCP_UDP_OTHER, VCAP_IS2_PS_IPV4_UC_IP4_TCP_UDP_OTHER, @@ -568,19 +1564,107 @@ static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5, ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup)); } } + /* IS2 lookups are in bit 0:3 */ + for (portno = 0; portno < SPX5_PORTS; ++portno) + spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0xf), + ANA_ACL_VCAP_S2_CFG_SEC_ENA, + sparx5, + ANA_ACL_VCAP_S2_CFG(portno)); } -/* Disable lookups per port and set the keyset generation: only IS2 for now */ -static void sparx5_vcap_port_key_deselection(struct sparx5 *sparx5, - struct vcap_admin *admin) +/* Enable ES0 lookups per port and set the keyset generation */ +static void sparx5_vcap_es0_port_key_selection(struct sparx5 *sparx5, + struct vcap_admin *admin) { int portno; + u32 keysel; + keysel = VCAP_ES0_KEYSEL(VCAP_ES0_PS_FORCE_ISDX_LOOKUPS); for (portno = 0; portno < SPX5_PORTS; ++portno) - spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0), - ANA_ACL_VCAP_S2_CFG_SEC_ENA, - sparx5, - ANA_ACL_VCAP_S2_CFG(portno)); + spx5_rmw(keysel, REW_RTAG_ETAG_CTRL_ES0_ISDX_KEY_ENA, + sparx5, REW_RTAG_ETAG_CTRL(portno)); + + spx5_rmw(REW_ES0_CTRL_ES0_LU_ENA_SET(1), REW_ES0_CTRL_ES0_LU_ENA, + sparx5, REW_ES0_CTRL); +} + +/* Enable ES2 lookups per port and set the keyset generation */ +static void sparx5_vcap_es2_port_key_selection(struct sparx5 *sparx5, + struct vcap_admin *admin) +{ + int portno, lookup; + u32 keysel; + + keysel = VCAP_ES2_KEYSEL(true, VCAP_ES2_PS_ARP_MAC_ETYPE, + VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER, + VCAP_ES2_PS_IPV6_IP_7TUPLE); + for (lookup = 0; lookup < admin->lookups; ++lookup) + for (portno = 0; portno < SPX5_PORTS; ++portno) + spx5_wr(keysel, sparx5, + EACL_VCAP_ES2_KEY_SEL(portno, lookup)); +} + +/* Enable lookups per port and set the keyset generation */ +static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5, + struct vcap_admin *admin) +{ + switch (admin->vtype) { + case VCAP_TYPE_IS0: + sparx5_vcap_is0_port_key_selection(sparx5, admin); + break; + case VCAP_TYPE_IS2: + sparx5_vcap_is2_port_key_selection(sparx5, admin); + break; + case VCAP_TYPE_ES0: + sparx5_vcap_es0_port_key_selection(sparx5, admin); + break; + case VCAP_TYPE_ES2: + sparx5_vcap_es2_port_key_selection(sparx5, admin); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } +} + +/* Disable lookups per port */ +static void sparx5_vcap_port_key_deselection(struct sparx5 *sparx5, + struct vcap_admin *admin) +{ + int portno, lookup; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + for (lookup = 0; lookup < 
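As the IS0/IS2 selection functions above show, each VCAP stage programs its keyset selector once per (port, lookup) pair, so enabling a stage is an O(ports x lookups) sweep over per-port registers. A compact model of that sweep follows; the port count (65, as the W65 port-mask width elsewhere in this patch suggests), the register array, and the bit-0 enable flag are illustrative, not the real register map.

	#include <stdint.h>
	#include <stdio.h>

	#define PORTS   65  /* assumed SPX5_PORTS */
	#define LOOKUPS 6   /* e.g. the six IS0/CLM lookups */

	/* Models one 32-bit selector register per (port, lookup). */
	static uint32_t adv_cl_cfg[PORTS][LOOKUPS];

	static void port_key_selection(uint32_t keysel)
	{
		for (int lookup = 0; lookup < LOOKUPS; ++lookup)
			for (int portno = 0; portno < PORTS; ++portno)
				adv_cl_cfg[portno][lookup] = keysel | 1; /* enable */
	}

	static void port_key_deselection(void)
	{
		for (int lookup = 0; lookup < LOOKUPS; ++lookup)
			for (int portno = 0; portno < PORTS; ++portno)
				adv_cl_cfg[portno][lookup] &= ~1u; /* clear enable */
	}

	int main(void)
	{
		port_key_selection(0xabc0);
		printf("port 0, lookup 0: 0x%x\n", adv_cl_cfg[0][0]);
		port_key_deselection();
		printf("after deselect:   0x%x\n", adv_cl_cfg[0][0]);
		return 0;
	}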
admin->lookups; ++lookup) + for (portno = 0; portno < SPX5_PORTS; ++portno) + spx5_rmw(ANA_CL_ADV_CL_CFG_LOOKUP_ENA_SET(0), + ANA_CL_ADV_CL_CFG_LOOKUP_ENA, + sparx5, + ANA_CL_ADV_CL_CFG(portno, lookup)); + break; + case VCAP_TYPE_IS2: + for (portno = 0; portno < SPX5_PORTS; ++portno) + spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0), + ANA_ACL_VCAP_S2_CFG_SEC_ENA, + sparx5, + ANA_ACL_VCAP_S2_CFG(portno)); + break; + case VCAP_TYPE_ES0: + spx5_rmw(REW_ES0_CTRL_ES0_LU_ENA_SET(0), + REW_ES0_CTRL_ES0_LU_ENA, sparx5, REW_ES0_CTRL); + break; + case VCAP_TYPE_ES2: + for (lookup = 0; lookup < admin->lookups; ++lookup) + for (portno = 0; portno < SPX5_PORTS; ++portno) + spx5_rmw(EACL_VCAP_ES2_KEY_SEL_KEY_ENA_SET(0), + EACL_VCAP_ES2_KEY_SEL_KEY_ENA, + sparx5, + EACL_VCAP_ES2_KEY_SEL(portno, lookup)); + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } } static void sparx5_vcap_admin_free(struct vcap_admin *admin) @@ -610,6 +1694,7 @@ sparx5_vcap_admin_alloc(struct sparx5 *sparx5, struct vcap_control *ctrl, mutex_init(&admin->lock); admin->vtype = cfg->vtype; admin->vinst = cfg->vinst; + admin->ingress = cfg->ingress; admin->lookups = cfg->lookups; admin->lookups_per_instance = cfg->lookups_per_instance; admin->first_cid = cfg->first_cid; @@ -633,22 +1718,55 @@ static void sparx5_vcap_block_alloc(struct sparx5 *sparx5, struct vcap_admin *admin, const struct sparx5_vcap_inst *cfg) { - int idx; - - /* Super VCAP block mapping and address configuration. Block 0 - * is assigned addresses 0 through 3071, block 1 is assigned - * addresses 3072 though 6143, and so on. - */ - for (idx = cfg->blockno; idx < cfg->blockno + cfg->blocks; ++idx) { - spx5_wr(VCAP_SUPER_IDX_CORE_IDX_SET(idx), sparx5, - VCAP_SUPER_IDX); - spx5_wr(VCAP_SUPER_MAP_CORE_MAP_SET(cfg->map_id), sparx5, - VCAP_SUPER_MAP); - } - admin->first_valid_addr = cfg->blockno * SUPER_VCAP_BLK_SIZE; - admin->last_used_addr = admin->first_valid_addr + - cfg->blocks * SUPER_VCAP_BLK_SIZE; - admin->last_valid_addr = admin->last_used_addr - 1; + int idx, cores; + + switch (admin->vtype) { + case VCAP_TYPE_IS0: + case VCAP_TYPE_IS2: + /* Super VCAP block mapping and address configuration. Block 0 + * is assigned addresses 0 through 3071, block 1 is assigned + * addresses 3072 though 6143, and so on. 
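The Super VCAP address layout in the block-allocation code above is plain arithmetic on a fixed block size: since block 0 covers addresses 0 through 3071 per the comment, SUPER_VCAP_BLK_SIZE is evidently 3072. A quick standalone model of the first/last address computation:

	#include <stdio.h>

	#define SUPER_VCAP_BLK_SIZE 3072 /* inferred from "0 through 3071" */

	struct range {
		int first_valid_addr;
		int last_used_addr;  /* one past the last usable address */
		int last_valid_addr;
	};

	static struct range block_range(int blockno, int blocks)
	{
		struct range r;

		r.first_valid_addr = blockno * SUPER_VCAP_BLK_SIZE;
		r.last_used_addr = r.first_valid_addr +
				   blocks * SUPER_VCAP_BLK_SIZE;
		r.last_valid_addr = r.last_used_addr - 1;
		return r;
	}

	int main(void)
	{
		struct range r = block_range(1, 2); /* blocks 1 and 2 */
		printf("first=%d last=%d\n",
		       r.first_valid_addr, r.last_valid_addr);
		/* prints: first=3072 last=9215 */
		return 0;
	}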
+ */ + for (idx = cfg->blockno; idx < cfg->blockno + cfg->blocks; + ++idx) { + spx5_wr(VCAP_SUPER_IDX_CORE_IDX_SET(idx), sparx5, + VCAP_SUPER_IDX); + spx5_wr(VCAP_SUPER_MAP_CORE_MAP_SET(cfg->map_id), + sparx5, VCAP_SUPER_MAP); + } + admin->first_valid_addr = cfg->blockno * SUPER_VCAP_BLK_SIZE; + admin->last_used_addr = admin->first_valid_addr + + cfg->blocks * SUPER_VCAP_BLK_SIZE; + admin->last_valid_addr = admin->last_used_addr - 1; + break; + case VCAP_TYPE_ES0: + admin->first_valid_addr = 0; + admin->last_used_addr = cfg->count; + admin->last_valid_addr = cfg->count - 1; + cores = spx5_rd(sparx5, VCAP_ES0_CORE_CNT); + for (idx = 0; idx < cores; ++idx) { + spx5_wr(VCAP_ES0_IDX_CORE_IDX_SET(idx), sparx5, + VCAP_ES0_IDX); + spx5_wr(VCAP_ES0_MAP_CORE_MAP_SET(1), sparx5, + VCAP_ES0_MAP); + } + break; + case VCAP_TYPE_ES2: + admin->first_valid_addr = 0; + admin->last_used_addr = cfg->count; + admin->last_valid_addr = cfg->count - 1; + cores = spx5_rd(sparx5, VCAP_ES2_CORE_CNT); + for (idx = 0; idx < cores; ++idx) { + spx5_wr(VCAP_ES2_IDX_CORE_IDX_SET(idx), sparx5, + VCAP_ES2_IDX); + spx5_wr(VCAP_ES2_MAP_CORE_MAP_SET(1), sparx5, + VCAP_ES2_MAP); + } + break; + default: + sparx5_vcap_type_err(sparx5, admin, __func__); + break; + } } /* Allocate a vcap control and vcap instances and configure the system */ diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h index 0a0f2412c980..3260ab5e3a82 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h @@ -16,6 +16,15 @@ #include "vcap_api.h" #include "vcap_api_client.h" +#define SPARX5_VCAP_CID_IS0_L0 VCAP_CID_INGRESS_L0 /* IS0/CLM lookup 0 */ +#define SPARX5_VCAP_CID_IS0_L1 VCAP_CID_INGRESS_L1 /* IS0/CLM lookup 1 */ +#define SPARX5_VCAP_CID_IS0_L2 VCAP_CID_INGRESS_L2 /* IS0/CLM lookup 2 */ +#define SPARX5_VCAP_CID_IS0_L3 VCAP_CID_INGRESS_L3 /* IS0/CLM lookup 3 */ +#define SPARX5_VCAP_CID_IS0_L4 VCAP_CID_INGRESS_L4 /* IS0/CLM lookup 4 */ +#define SPARX5_VCAP_CID_IS0_L5 VCAP_CID_INGRESS_L5 /* IS0/CLM lookup 5 */ +#define SPARX5_VCAP_CID_IS0_MAX \ + (VCAP_CID_INGRESS_L5 + VCAP_CID_LOOKUP_SIZE - 1) /* IS0/CLM Max */ + #define SPARX5_VCAP_CID_IS2_L0 VCAP_CID_INGRESS_STAGE2_L0 /* IS2 lookup 0 */ #define SPARX5_VCAP_CID_IS2_L1 VCAP_CID_INGRESS_STAGE2_L1 /* IS2 lookup 1 */ #define SPARX5_VCAP_CID_IS2_L2 VCAP_CID_INGRESS_STAGE2_L2 /* IS2 lookup 2 */ @@ -23,6 +32,63 @@ #define SPARX5_VCAP_CID_IS2_MAX \ (VCAP_CID_INGRESS_STAGE2_L3 + VCAP_CID_LOOKUP_SIZE - 1) /* IS2 Max */ +#define SPARX5_VCAP_CID_ES0_L0 VCAP_CID_EGRESS_L0 /* ES0 lookup 0 */ +#define SPARX5_VCAP_CID_ES0_MAX (VCAP_CID_EGRESS_L1 - 1) /* ES0 Max */ + +#define SPARX5_VCAP_CID_ES2_L0 VCAP_CID_EGRESS_STAGE2_L0 /* ES2 lookup 0 */ +#define SPARX5_VCAP_CID_ES2_L1 VCAP_CID_EGRESS_STAGE2_L1 /* ES2 lookup 1 */ +#define SPARX5_VCAP_CID_ES2_MAX \ + (VCAP_CID_EGRESS_STAGE2_L1 + VCAP_CID_LOOKUP_SIZE - 1) /* ES2 Max */ + +/* IS0 port keyset selection control */ + +/* IS0 ethernet, IPv4, IPv6 traffic type keyset generation */ +enum vcap_is0_port_sel_etype { + VCAP_IS0_PS_ETYPE_DEFAULT, /* None or follow depending on class */ + VCAP_IS0_PS_ETYPE_MLL, + VCAP_IS0_PS_ETYPE_SGL_MLBS, + VCAP_IS0_PS_ETYPE_DBL_MLBS, + VCAP_IS0_PS_ETYPE_TRI_MLBS, + VCAP_IS0_PS_ETYPE_TRI_VID, + VCAP_IS0_PS_ETYPE_LL_FULL, + VCAP_IS0_PS_ETYPE_NORMAL_SRC, + VCAP_IS0_PS_ETYPE_NORMAL_DST, + VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE, + VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4, + 
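On the new SPARX5_VCAP_CID_* defines above: each lookup is spaced VCAP_CID_LOOKUP_SIZE chain ids apart, so a chain id decomposes into a lookup index plus a priority offset within that lookup. A hedged sketch of that decomposition; the contiguous-multiples assumption follows from the "*_MAX = L<n> + VCAP_CID_LOOKUP_SIZE - 1" pattern, and the constant below is illustrative (the real value lives in vcap_api.h).

	#include <stdio.h>

	#define VCAP_CID_LOOKUP_SIZE 100000 /* illustrative value */

	/* Decompose a chain id relative to an instance's first chain id. */
	static void cid_decompose(int cid, int first_cid,
				  int *lookup, int *offset)
	{
		*lookup = (cid - first_cid) / VCAP_CID_LOOKUP_SIZE;
		*offset = (cid - first_cid) % VCAP_CID_LOOKUP_SIZE;
	}

	int main(void)
	{
		int lookup, offset;

		/* a chain id 42 into the third lookup of an instance at 0 */
		cid_decompose(2 * VCAP_CID_LOOKUP_SIZE + 42, 0,
			      &lookup, &offset);
		printf("lookup=%d offset=%d\n", lookup, offset);
		/* prints: lookup=2 offset=42 */
		return 0;
	}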
VCAP_IS0_PS_ETYPE_PURE_5TUPLE_IP4, + VCAP_IS0_PS_ETYPE_DBL_VID_IDX, + VCAP_IS0_PS_ETYPE_ETAG, + VCAP_IS0_PS_ETYPE_NO_LOOKUP, +}; + +/* IS0 MPLS traffic type keyset generation */ +enum vcap_is0_port_sel_mpls_uc_mc { + VCAP_IS0_PS_MPLS_FOLLOW_ETYPE, + VCAP_IS0_PS_MPLS_MLL, + VCAP_IS0_PS_MPLS_SGL_MLBS, + VCAP_IS0_PS_MPLS_DBL_MLBS, + VCAP_IS0_PS_MPLS_TRI_MLBS, + VCAP_IS0_PS_MPLS_TRI_VID, + VCAP_IS0_PS_MPLS_LL_FULL, + VCAP_IS0_PS_MPLS_NORMAL_SRC, + VCAP_IS0_PS_MPLS_NORMAL_DST, + VCAP_IS0_PS_MPLS_NORMAL_7TUPLE, + VCAP_IS0_PS_MPLS_NORMAL_5TUPLE_IP4, + VCAP_IS0_PS_MPLS_PURE_5TUPLE_IP4, + VCAP_IS0_PS_MPLS_DBL_VID_IDX, + VCAP_IS0_PS_MPLS_ETAG, + VCAP_IS0_PS_MPLS_NO_LOOKUP, +}; + +/* IS0 MBLS traffic type keyset generation */ +enum vcap_is0_port_sel_mlbs { + VCAP_IS0_PS_MLBS_FOLLOW_ETYPE, + VCAP_IS0_PS_MLBS_SGL_MLBS, + VCAP_IS0_PS_MLBS_DBL_MLBS, + VCAP_IS0_PS_MLBS_TRI_MLBS, + VCAP_IS0_PS_MLBS_NO_LOOKUP = 17, +}; + /* IS2 port keyset selection control */ /* IS2 non-ethernet traffic type keyset generation */ @@ -71,6 +137,57 @@ enum vcap_is2_port_sel_arp { VCAP_IS2_PS_ARP_ARP, }; +/* ES0 port keyset selection control */ + +/* ES0 Egress port traffic type classification */ +enum vcap_es0_port_sel { + VCAP_ES0_PS_NORMAL_SELECTION, + VCAP_ES0_PS_FORCE_ISDX_LOOKUPS, + VCAP_ES0_PS_FORCE_VID_LOOKUPS, + VCAP_ES0_PS_RESERVED, +}; + +/* ES2 port keyset selection control */ + +/* ES2 IPv4 traffic type keyset generation */ +enum vcap_es2_port_sel_ipv4 { + VCAP_ES2_PS_IPV4_MAC_ETYPE, + VCAP_ES2_PS_IPV4_IP_7TUPLE, + VCAP_ES2_PS_IPV4_IP4_TCP_UDP_VID, + VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER, + VCAP_ES2_PS_IPV4_IP4_VID, + VCAP_ES2_PS_IPV4_IP4_OTHER, +}; + +/* ES2 IPv6 traffic type keyset generation */ +enum vcap_es2_port_sel_ipv6 { + VCAP_ES2_PS_IPV6_MAC_ETYPE, + VCAP_ES2_PS_IPV6_IP_7TUPLE, + VCAP_ES2_PS_IPV6_IP_7TUPLE_VID, + VCAP_ES2_PS_IPV6_IP_7TUPLE_STD, + VCAP_ES2_PS_IPV6_IP6_VID, + VCAP_ES2_PS_IPV6_IP6_STD, + VCAP_ES2_PS_IPV6_IP4_DOWNGRADE, +}; + +/* ES2 ARP traffic type keyset generation */ +enum vcap_es2_port_sel_arp { + VCAP_ES2_PS_ARP_MAC_ETYPE, + VCAP_ES2_PS_ARP_ARP, +}; + +/* Selects TPID for ES0 matching */ +enum SPX5_TPID_SEL { + SPX5_TPID_SEL_UNTAGGED, + SPX5_TPID_SEL_8100, + SPX5_TPID_SEL_UNUSED_0, + SPX5_TPID_SEL_UNUSED_1, + SPX5_TPID_SEL_88A8, + SPX5_TPID_SEL_TPIDCFG_1, + SPX5_TPID_SEL_TPIDCFG_2, + SPX5_TPID_SEL_TPIDCFG_3, +}; + /* Get the port keyset for the vcap lookup */ int sparx5_vcap_get_port_keyset(struct net_device *ndev, struct vcap_admin *admin, @@ -78,4 +195,7 @@ int sparx5_vcap_get_port_keyset(struct net_device *ndev, u16 l3_proto, struct vcap_keyset_list *kslist); +/* Check if the ethertype is supported by the vcap port classification */ +bool sparx5_vcap_is_known_etype(struct vcap_admin *admin, u16 etype); + #endif /* __SPARX5_VCAP_IMPL_H__ */ diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c index 34f954bbf815..ac001ae59a38 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c @@ -219,8 +219,8 @@ void sparx5_vlan_port_apply(struct sparx5 *sparx5, spx5_wr(val, sparx5, ANA_CL_VLAN_FILTER_CTRL(port->portno, 0)); - /* Egress configuration (REW_TAG_CFG): VLAN tag type to 8021Q */ - val = REW_TAG_CTRL_TAG_TPID_CFG_SET(0); + /* Egress configuration (REW_TAG_CFG): VLAN tag selected via IFH */ + val = REW_TAG_CTRL_TAG_TPID_CFG_SET(5); if (port->vlan_aware) { if (port->vid) /* Tag all frames except when VID == DEFAULT_VLAN */ diff --git 
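For orientation on the SPX5_TPID_SEL enum above (used for ES0 tag matching): selectors 1 and 4 correspond to the two well-known TPIDs, while the TPIDCFG_x selectors refer to device-programmable values. A tiny lookup written out in C, with the custom entries left as placeholders since their values come from device configuration:

	#include <stdio.h>

	/* Mirrors enum SPX5_TPID_SEL; TPIDCFG_x values are programmable
	 * per device, so they are only named here, not resolved.
	 */
	static const char *tpid_sel_name(unsigned int sel)
	{
		switch (sel) {
		case 0: return "untagged";
		case 1: return "TPID 0x8100";
		case 4: return "TPID 0x88A8";
		case 5: return "custom TPID 1";
		case 6: return "custom TPID 2";
		case 7: return "custom TPID 3";
		default: return "unused/reserved";
		}
	}

	int main(void)
	{
		for (unsigned int sel = 0; sel < 8; ++sel)
			printf("sel %u: %s\n", sel, tpid_sel_name(sel));
		return 0;
	}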
a/drivers/net/ethernet/microchip/vcap/Makefile b/drivers/net/ethernet/microchip/vcap/Makefile index 0adb8f5a8735..c86f20e6491f 100644 --- a/drivers/net/ethernet/microchip/vcap/Makefile +++ b/drivers/net/ethernet/microchip/vcap/Makefile @@ -7,4 +7,4 @@ obj-$(CONFIG_VCAP) += vcap.o obj-$(CONFIG_VCAP_KUNIT_TEST) += vcap_model_kunit.o vcap-$(CONFIG_DEBUG_FS) += vcap_api_debugfs.o -vcap-y += vcap_api.o +vcap-y += vcap_api.o vcap_tc.o diff --git a/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h b/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h index 84de2aee4169..0844fcaeee68 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h +++ b/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h @@ -1,16 +1,17 @@ /* SPDX-License-Identifier: BSD-3-Clause */ -/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries. +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. * Microchip VCAP API */ -/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200. - * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86 +/* This file is autogenerated by cml-utils 2023-02-10 11:15:56 +0100. + * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada */ #ifndef __VCAP_AG_API__ #define __VCAP_AG_API__ enum vcap_type { + VCAP_TYPE_ES0, VCAP_TYPE_ES2, VCAP_TYPE_IS0, VCAP_TYPE_IS2, @@ -20,27 +21,25 @@ enum vcap_type { /* Keyfieldset names with origin information */ enum vcap_keyfield_set { VCAP_KFS_NO_VALUE, /* initial value */ - VCAP_KFS_ARP, /* sparx5 is2 X6, sparx5 es2 X6 */ + VCAP_KFS_ARP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */ VCAP_KFS_ETAG, /* sparx5 is0 X2 */ - VCAP_KFS_IP4_OTHER, /* sparx5 is2 X6, sparx5 es2 X6 */ - VCAP_KFS_IP4_TCP_UDP, /* sparx5 is2 X6, sparx5 es2 X6 */ + VCAP_KFS_IP4_OTHER, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */ + VCAP_KFS_IP4_TCP_UDP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */ VCAP_KFS_IP4_VID, /* sparx5 es2 X3 */ - VCAP_KFS_IP6_STD, /* sparx5 is2 X6 */ - VCAP_KFS_IP6_VID, /* sparx5 is2 X6, sparx5 es2 X6 */ + VCAP_KFS_IP6_OTHER, /* lan966x is2 X4 */ + VCAP_KFS_IP6_STD, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */ + VCAP_KFS_IP6_TCP_UDP, /* lan966x is2 X4 */ + VCAP_KFS_IP6_VID, /* sparx5 es2 X6 */ VCAP_KFS_IP_7TUPLE, /* sparx5 is2 X12, sparx5 es2 X12 */ + VCAP_KFS_ISDX, /* sparx5 es0 X1 */ VCAP_KFS_LL_FULL, /* sparx5 is0 X6 */ - VCAP_KFS_MAC_ETYPE, /* sparx5 is2 X6, sparx5 es2 X6 */ - VCAP_KFS_MLL, /* sparx5 is0 X3 */ - VCAP_KFS_NORMAL, /* sparx5 is0 X6 */ - VCAP_KFS_NORMAL_5TUPLE_IP4, /* sparx5 is0 X6 */ - VCAP_KFS_NORMAL_7TUPLE, /* sparx5 is0 X12 */ - VCAP_KFS_PURE_5TUPLE_IP4, /* sparx5 is0 X3 */ - VCAP_KFS_TRI_VID, /* sparx5 is0 X2 */ + VCAP_KFS_MAC_ETYPE, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */ VCAP_KFS_MAC_LLC, /* lan966x is2 X2 */ VCAP_KFS_MAC_SNAP, /* lan966x is2 X2 */ + VCAP_KFS_NORMAL_5TUPLE_IP4, /* sparx5 is0 X6 */ + VCAP_KFS_NORMAL_7TUPLE, /* sparx5 is0 X12 */ VCAP_KFS_OAM, /* lan966x is2 X2 */ - VCAP_KFS_IP6_TCP_UDP, /* lan966x is2 X4 */ - VCAP_KFS_IP6_OTHER, /* lan966x is2 X4 */ + VCAP_KFS_PURE_5TUPLE_IP4, /* sparx5 is0 X3 */ VCAP_KFS_SMAC_SIP4, /* lan966x is2 X1 */ VCAP_KFS_SMAC_SIP6, /* lan966x is2 X2 */ }; @@ -78,6 +77,8 @@ enum vcap_keyfield_set { * Third PCP in multiple vlan tags (not always available) * VCAP_KF_8021Q_PCP_CLS: W3, sparx5: is2/es2, lan966x: is2 * Classified PCP + * VCAP_KF_8021Q_TPID: W3, sparx5: es0 + * TPID for outer tag: 0: Customer TPID 1: Service TPID (88A8 or programmable) * VCAP_KF_8021Q_TPID0: W3, sparx5: is0 * First TPIC in multiple vlan tags (outer tag or 
default port tag) * VCAP_KF_8021Q_TPID1: W3, sparx5: is0 @@ -90,7 +91,8 @@ enum vcap_keyfield_set { * Second VID in multiple vlan tags (inner tag) * VCAP_KF_8021Q_VID2: W12, sparx5: is0 * Third VID in multiple vlan tags (not always available) - * VCAP_KF_8021Q_VID_CLS: W13, sparx5: is2/es2, lan966x is2 W12 + * VCAP_KF_8021Q_VID_CLS: sparx5 is2 W13, sparx5 es0 W13, sparx5 es2 W13, + * lan966x is2 W12 * Classified VID * VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2/es2, lan966x: is2 * Sparx5: Set if frame was received with a VLAN tag, LAN966x: Set if frame has @@ -104,7 +106,7 @@ enum vcap_keyfield_set { * Set if hardware address is Ethernet * VCAP_KF_ARP_LEN_OK_IS: W1, sparx5: is2/es2, lan966x: is2 * Set if hardware address length = 6 (Ethernet) and IP address length = 4 (IP). - * VCAP_KF_ARP_OPCODE: W2, sparx5: is2/es2, lan966x: i2 + * VCAP_KF_ARP_OPCODE: W2, sparx5: is2/es2, lan966x: is2 * ARP opcode * VCAP_KF_ARP_OPCODE_UNKNOWN_IS: W1, sparx5: is2/es2, lan966x: is2 * Set if not one of the codes defined in VCAP_KF_ARP_OPCODE @@ -114,25 +116,25 @@ enum vcap_keyfield_set { * Sender Hardware Address = SMAC (ARP) * VCAP_KF_ARP_TGT_MATCH_IS: W1, sparx5: is2/es2, lan966x: is2 * Target Hardware Address = SMAC (RARP) - * VCAP_KF_COSID_CLS: W3, sparx5: es2 + * VCAP_KF_COSID_CLS: W3, sparx5: es0/es2 * Class of service - * VCAP_KF_DST_ENTRY: W1, sparx5: is0 - * Selects whether the frame’s destination or source information is used for - * fields L2_SMAC and L3_IP4_SIP * VCAP_KF_ES0_ISDX_KEY_ENA: W1, sparx5: es2 * The value taken from the IFH .FWD.ES0_ISDX_KEY_ENA * VCAP_KF_ETYPE: W16, sparx5: is0/is2/es2, lan966x: is2 * Ethernet type * VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is0/is2/es2 * Set if frame has EtherType >= 0x600 - * VCAP_KF_ETYPE_MPLS: W2, sparx5: is0 - * Type of MPLS Ethertype (or not) + * VCAP_KF_HOST_MATCH: W1, lan966x: is2 + * The action from the SMAC_SIP4 or SMAC_SIP6 lookups. Used for IP source + * guarding. * VCAP_KF_IF_EGR_PORT_MASK: W32, sparx5: es2 * Egress port mask, one bit per port * VCAP_KF_IF_EGR_PORT_MASK_RNG: W3, sparx5: es2 * Select which 32 port group is available in IF_EGR_PORT (or virtual ports or * CPU queue) - * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9 + * VCAP_KF_IF_EGR_PORT_NO: W7, sparx5: es0 + * Egress port number + * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9, lan966x is2 W4 * Sparx5: Logical ingress port number retrieved from * ANA_CL::PORT_ID_CFG.LPORT_NUM or ERLEG, LAN966x: ingress port nunmber * VCAP_KF_IF_IGR_PORT_MASK: sparx5 is0 W65, sparx5 is2 W32, sparx5 is2 W65, @@ -152,45 +154,61 @@ enum vcap_keyfield_set { * VCAP_KF_IP4_IS: W1, sparx5: is0/is2/es2, lan966x: is2 * Set if frame has EtherType = 0x800 and IP version = 4 * VCAP_KF_IP_MC_IS: W1, sparx5: is0 - * Set if frame is IPv4 frame and frame’s destination MAC address is an IPv4 - * multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame’s + * Set if frame is IPv4 frame and frame's destination MAC address is an IPv4 + * multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame's * destination MAC address is an IPv6 multicast address (0x3333/16). 
* VCAP_KF_IP_PAYLOAD_5TUPLE: W32, sparx5: is0 * Payload bytes after IP header * VCAP_KF_IP_SNAP_IS: W1, sparx5: is0 * Set if frame is IPv4, IPv6, or SNAP frame - * VCAP_KF_ISDX_CLS: W12, sparx5: is2/es2 + * VCAP_KF_ISDX_CLS: W12, sparx5: is2/es0/es2 * Classified ISDX - * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es2, lan966x: is2 + * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es0/es2, lan966x: is2 * Set if classified ISDX > 0 * VCAP_KF_L2_BC_IS: W1, sparx5: is0/is2/es2, lan966x: is2 - * Set if frame’s destination MAC address is the broadcast address + * Set if frame's destination MAC address is the broadcast address * (FF-FF-FF-FF-FF-FF). * VCAP_KF_L2_DMAC: W48, sparx5: is0/is2/es2, lan966x: is2 * Destination MAC address + * VCAP_KF_L2_FRM_TYPE: W4, lan966x: is2 + * Frame subtype for specific EtherTypes (MRP, DLR) * VCAP_KF_L2_FWD_IS: W1, sparx5: is2 * Set if the frame is allowed to be forwarded to front ports - * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan9966x is2 - * Set if frame’s destination MAC address is a multicast address (bit 40 = 1). + * VCAP_KF_L2_LLC: W40, lan966x: is2 + * LLC header and data after up to two VLAN tags and the type/length field + * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan966x: is2 + * Set if frame's destination MAC address is a multicast address (bit 40 = 1). + * VCAP_KF_L2_PAYLOAD0: W16, lan966x: is2 + * Payload bytes 0-1 after the frame's EtherType + * VCAP_KF_L2_PAYLOAD1: W8, lan966x: is2 + * Payload byte 4 after the frame's EtherType. This is specifically for PTP + * frames. + * VCAP_KF_L2_PAYLOAD2: W3, lan966x: is2 + * Bits 7, 2, and 1 from payload byte 6 after the frame's EtherType. This is + * specifically for PTP frames. * VCAP_KF_L2_PAYLOAD_ETYPE: W64, sparx5: is2/es2 * Byte 0-7 of L2 payload after Type/Len field and overloading for OAM - * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x is2 + * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x: is2 * Source MAC address + * VCAP_KF_L2_SNAP: W40, lan966x: is2 + * SNAP header after LLC header (AA-AA-03) * VCAP_KF_L3_DIP_EQ_SIP_IS: W1, sparx5: is2/es2, lan966x: is2 * Set if Src IP matches Dst IP address - * VCAP_KF_L3_DMAC_DIP_MATCH: W1, sparx5: is2 - * Match found in DIP security lookup in ANA_L3 - * VCAP_KF_L3_DPL_CLS: W1, sparx5: es2 + * VCAP_KF_L3_DPL_CLS: W1, sparx5: es0/es2 * The frames drop precedence level * VCAP_KF_L3_DSCP: W6, sparx5: is0 - * Frame’s DSCP value + * Frame's DSCP value * VCAP_KF_L3_DST_IS: W1, sparx5: is2 * Set if lookup is done for egress router leg + * VCAP_KF_L3_FRAGMENT: W1, lan966x: is2 + * Set if IPv4 frame is fragmented * VCAP_KF_L3_FRAGMENT_TYPE: W2, sparx5: is0/is2/es2 * L3 Fragmentation type (none, initial, suspicious, valid follow up) * VCAP_KF_L3_FRAG_INVLD_L4_LEN: W1, sparx5: is0/is2 * Set if frame's L4 length is less than ANA_CL:COMMON:CLM_FRAGMENT_CFG.L4_MIN_L * EN + * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is2 + * Set if IPv4 frame is fragmented and it is not the first fragment * VCAP_KF_L3_IP4_DIP: W32, sparx5: is0/is2/es2, lan966x: is2 * Destination IPv4 Address * VCAP_KF_L3_IP4_SIP: W32, sparx5: is0/is2/es2, lan966x: is2 @@ -205,36 +223,38 @@ enum vcap_keyfield_set { * IPv4 frames: IP protocol. 
IPv6 frames: Next header, same as for IPV4 * VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is0/is2/es2, lan966x: is2 * Set if IPv4 frame contains options (IP len > 5) - * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96, - * lan966x is2 W56 + * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96, sparx5 + * es2 W40, lan966x is2 W56 * Sparx5: Payload bytes after IP header. IPv4: IPv4 options are not parsed so * payload is always taken 20 bytes after the start of the IPv4 header, LAN966x: * Bytes 0-6 after IP header * VCAP_KF_L3_RT_IS: W1, sparx5: is2/es2 * Set if frame has hit a router leg - * VCAP_KF_L3_SMAC_SIP_MATCH: W1, sparx5: is2 - * Match found in SIP security lookup in ANA_L3 * VCAP_KF_L3_TOS: W8, sparx5: is2/es2, lan966x: is2 * Sparx5: Frame's IPv4/IPv6 DSCP and ECN fields, LAN966x: IP TOS field * VCAP_KF_L3_TTL_GT0: W1, sparx5: is2/es2, lan966x: is2 * Set if IPv4 TTL / IPv6 hop limit is greater than 0 + * VCAP_KF_L4_1588_DOM: W8, lan966x: is2 + * PTP over UDP: domainNumber + * VCAP_KF_L4_1588_VER: W4, lan966x: is2 + * PTP over UDP: version * VCAP_KF_L4_ACK: W1, sparx5: is2/es2, lan966x: is2 * Sparx5 and LAN966x: TCP flag ACK, LAN966x only: PTP over UDP: flagField bit 2 * (unicastFlag) * VCAP_KF_L4_DPORT: W16, sparx5: is2/es2, lan966x: is2 * Sparx5: TCP/UDP destination port. Overloading for IP_7TUPLE: Non-TCP/UDP IP * frames: L4_DPORT = L3_IP_PROTO, LAN966x: TCP/UDP destination port - * VCAP_KF_L4_FIN: W1, sparx5: is2/es2 + * VCAP_KF_L4_FIN: W1, sparx5: is2/es2, lan966x: is2 * TCP flag FIN, LAN966x: TCP flag FIN, and for PTP over UDP: messageType bit 1 * VCAP_KF_L4_PAYLOAD: W64, sparx5: is2/es2 * Payload bytes after TCP/UDP header Overloading for IP_7TUPLE: Non TCP/UDP - * frames: Payload bytes 0–7 after IP header. IPv4 options are not parsed so + * frames: Payload bytes 0-7 after IP header. IPv4 options are not parsed so * payload is always taken 20 bytes after the start of the IPv4 header for non * TCP/UDP IPv4 frames * VCAP_KF_L4_PSH: W1, sparx5: is2/es2, lan966x: is2 * Sparx5: TCP flag PSH, LAN966x: TCP: TCP flag PSH. PTP over UDP: flagField bit * 1 (twoStepFlag) - * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x: is2 + * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x is2 W8 * Range checker bitmask (one for each range checker). 
Input into range checkers * is taken from classified results (VID, DSCP) and frame (SPORT, DPORT, ETYPE, * outer VID, inner VID) @@ -263,58 +283,35 @@ enum vcap_keyfield_set { * Select the mode of the Generic Index * VCAP_KF_LOOKUP_PAG: W8, sparx5: is2, lan966x: is2 * Classified Policy Association Group: chains rules from IS1/CLM to IS2 + * VCAP_KF_MIRROR_PROBE: W2, sparx5: es2 + * Identifies frame copies generated as a result of mirroring * VCAP_KF_OAM_CCM_CNTS_EQ0: W1, sparx5: is2/es2, lan966x: is2 * Dual-ended loss measurement counters in CCM frames are all zero - * VCAP_KF_OAM_MEL_FLAGS: W7, sparx5: is0, lan966x: is2 + * VCAP_KF_OAM_DETECTED: W1, lan966x: is2 + * This is missing in the datasheet, but present in the OAM keyset in XML + * VCAP_KF_OAM_FLAGS: W8, lan966x: is2 + * Frame's OAM flags + * VCAP_KF_OAM_MEL_FLAGS: W7, lan966x: is2 * Encoding of MD level/MEG level (MEL) - * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is0/is2/es2, lan966x: is2 - * Set if frame’s EtherType = 0x8902 - * VCAP_KF_PROT_ACTIVE: W1, sparx5: es2 + * VCAP_KF_OAM_MEPID: W16, lan966x: is2 + * CCM frame's OAM MEP ID + * VCAP_KF_OAM_OPCODE: W8, lan966x: is2 + * Frame's OAM opcode + * VCAP_KF_OAM_VER: W5, lan966x: is2 + * Frame's OAM version + * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is2/es2, lan966x: is2 + * Set if frame's EtherType = 0x8902 + * VCAP_KF_PROT_ACTIVE: W1, sparx5: es0/es2 * Protection is active * VCAP_KF_TCP_IS: W1, sparx5: is0/is2/es2, lan966x: is2 * Set if frame is IPv4 TCP frame (IP protocol = 6) or IPv6 TCP frames (Next * header = 6) - * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2, lan966x: is2 + * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2 * Set if frame is IPv4/IPv6 TCP or UDP frame (IP protocol/next header equals 6 * or 17) * VCAP_KF_TYPE: sparx5 is0 W2, sparx5 is0 W1, sparx5 is2 W4, sparx5 is2 W2, - * sparx5 es2 W3, lan966x: is2 + * sparx5 es0 W1, sparx5 es2 W3, lan966x is2 W4, lan966x is2 W2 * Keyset type id - set by the API - * VCAP_KF_HOST_MATCH: W1, lan966x: is2 - * The action from the SMAC_SIP4 or SMAC_SIP6 lookups. Used for IP source - * guarding. - * VCAP_KF_L2_FRM_TYPE: W4, lan966x: is2 - * Frame subtype for specific EtherTypes (MRP, DLR) - * VCAP_KF_L2_PAYLOAD0: W16, lan966x: is2 - * Payload bytes 0-1 after the frame’s EtherType - * VCAP_KF_L2_PAYLOAD1: W8, lan966x: is2 - * Payload byte 4 after the frame’s EtherType. This is specifically for PTP - * frames. - * VCAP_KF_L2_PAYLOAD2: W3, lan966x: is2 - * Bits 7, 2, and 1 from payload byte 6 after the frame’s EtherType. This is - * specifically for PTP frames. 
- * VCAP_KF_L2_LLC: W40, lan966x: is2 - * LLC header and data after up to two VLAN tags and the type/length field - * VCAP_KF_L3_FRAGMENT: W1, lan966x: is2 - * Set if IPv4 frame is fragmented - * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is2 - * Set if IPv4 frame is fragmented and it is not the first fragment - * VCAP_KF_L2_SNAP: W40, lan966x: is2 - * SNAP header after LLC header (AA-AA-03) - * VCAP_KF_L4_1588_DOM: W8, lan966x: is2 - * PTP over UDP: domainNumber - * VCAP_KF_L4_1588_VER: W4, lan966x: is2 - * PTP over UDP: version - * VCAP_KF_OAM_MEPID: W16, lan966x: is2 - * CCM frame’s OAM MEP ID - * VCAP_KF_OAM_OPCODE: W8, lan966x: is2 - * Frame’s OAM opcode - * VCAP_KF_OAM_VER: W5, lan966x: is2 - * Frame’s OAM version - * VCAP_KF_OAM_FLAGS: W8, lan966x: is2 - * Frame’s OAM flags - * VCAP_KF_OAM_DETECTED: W1, lan966x: is2 - * This is missing in the datasheet, but present in the OAM keyset in XML */ /* Keyfield names */ @@ -334,6 +331,7 @@ enum vcap_key_field { VCAP_KF_8021Q_PCP1, VCAP_KF_8021Q_PCP2, VCAP_KF_8021Q_PCP_CLS, + VCAP_KF_8021Q_TPID, VCAP_KF_8021Q_TPID0, VCAP_KF_8021Q_TPID1, VCAP_KF_8021Q_TPID2, @@ -352,13 +350,13 @@ enum vcap_key_field { VCAP_KF_ARP_SENDER_MATCH_IS, VCAP_KF_ARP_TGT_MATCH_IS, VCAP_KF_COSID_CLS, - VCAP_KF_DST_ENTRY, VCAP_KF_ES0_ISDX_KEY_ENA, VCAP_KF_ETYPE, VCAP_KF_ETYPE_LEN_IS, - VCAP_KF_ETYPE_MPLS, + VCAP_KF_HOST_MATCH, VCAP_KF_IF_EGR_PORT_MASK, VCAP_KF_IF_EGR_PORT_MASK_RNG, + VCAP_KF_IF_EGR_PORT_NO, VCAP_KF_IF_IGR_PORT, VCAP_KF_IF_IGR_PORT_MASK, VCAP_KF_IF_IGR_PORT_MASK_L3, @@ -373,17 +371,24 @@ enum vcap_key_field { VCAP_KF_ISDX_GT0_IS, VCAP_KF_L2_BC_IS, VCAP_KF_L2_DMAC, + VCAP_KF_L2_FRM_TYPE, VCAP_KF_L2_FWD_IS, + VCAP_KF_L2_LLC, VCAP_KF_L2_MC_IS, + VCAP_KF_L2_PAYLOAD0, + VCAP_KF_L2_PAYLOAD1, + VCAP_KF_L2_PAYLOAD2, VCAP_KF_L2_PAYLOAD_ETYPE, VCAP_KF_L2_SMAC, + VCAP_KF_L2_SNAP, VCAP_KF_L3_DIP_EQ_SIP_IS, - VCAP_KF_L3_DMAC_DIP_MATCH, VCAP_KF_L3_DPL_CLS, VCAP_KF_L3_DSCP, VCAP_KF_L3_DST_IS, + VCAP_KF_L3_FRAGMENT, VCAP_KF_L3_FRAGMENT_TYPE, VCAP_KF_L3_FRAG_INVLD_L4_LEN, + VCAP_KF_L3_FRAG_OFS_GT0, VCAP_KF_L3_IP4_DIP, VCAP_KF_L3_IP4_SIP, VCAP_KF_L3_IP6_DIP, @@ -392,9 +397,10 @@ enum vcap_key_field { VCAP_KF_L3_OPTIONS_IS, VCAP_KF_L3_PAYLOAD, VCAP_KF_L3_RT_IS, - VCAP_KF_L3_SMAC_SIP_MATCH, VCAP_KF_L3_TOS, VCAP_KF_L3_TTL_GT0, + VCAP_KF_L4_1588_DOM, + VCAP_KF_L4_1588_VER, VCAP_KF_L4_ACK, VCAP_KF_L4_DPORT, VCAP_KF_L4_FIN, @@ -411,30 +417,19 @@ enum vcap_key_field { VCAP_KF_LOOKUP_GEN_IDX, VCAP_KF_LOOKUP_GEN_IDX_SEL, VCAP_KF_LOOKUP_PAG, - VCAP_KF_MIRROR_ENA, + VCAP_KF_MIRROR_PROBE, VCAP_KF_OAM_CCM_CNTS_EQ0, + VCAP_KF_OAM_DETECTED, + VCAP_KF_OAM_FLAGS, VCAP_KF_OAM_MEL_FLAGS, + VCAP_KF_OAM_MEPID, + VCAP_KF_OAM_OPCODE, + VCAP_KF_OAM_VER, VCAP_KF_OAM_Y1731_IS, VCAP_KF_PROT_ACTIVE, VCAP_KF_TCP_IS, VCAP_KF_TCP_UDP_IS, VCAP_KF_TYPE, - VCAP_KF_HOST_MATCH, - VCAP_KF_L2_FRM_TYPE, - VCAP_KF_L2_PAYLOAD0, - VCAP_KF_L2_PAYLOAD1, - VCAP_KF_L2_PAYLOAD2, - VCAP_KF_L2_LLC, - VCAP_KF_L3_FRAGMENT, - VCAP_KF_L3_FRAG_OFS_GT0, - VCAP_KF_L2_SNAP, - VCAP_KF_L4_1588_DOM, - VCAP_KF_L4_1588_VER, - VCAP_KF_OAM_MEPID, - VCAP_KF_OAM_OPCODE, - VCAP_KF_OAM_VER, - VCAP_KF_OAM_FLAGS, - VCAP_KF_OAM_DETECTED, }; /* Actionset names with origin information */ @@ -443,14 +438,17 @@ enum vcap_actionfield_set { VCAP_AFS_BASE_TYPE, /* sparx5 is2 X3, sparx5 es2 X3, lan966x is2 X2 */ VCAP_AFS_CLASSIFICATION, /* sparx5 is0 X2 */ VCAP_AFS_CLASS_REDUCED, /* sparx5 is0 X1 */ + VCAP_AFS_ES0, /* sparx5 es0 X1 */ VCAP_AFS_FULL, /* sparx5 is0 X3 */ - VCAP_AFS_MLBS, /* sparx5 is0 X2 */ - VCAP_AFS_MLBS_REDUCED, /* sparx5 is0 X1 */ - 
VCAP_AFS_SMAC_SIP, /* lan966x is2 x1 */ + VCAP_AFS_SMAC_SIP, /* lan966x is2 X1 */ }; /* List of actionfields with description * + * VCAP_AF_ACL_ID: W6, lan966x: is2 + * Logical ID for the entry. This ID is extracted together with the frame in the + * CPU extraction header. Only applicable to actions with CPU_COPY_ENA or + * HIT_ME_ONCE set. * VCAP_AF_CLS_VID_SEL: W3, sparx5: is0 * Controls the classified VID: 0: VID_NONE: No action. 1: VID_ADD: New VID = * old VID + VID_VAL. 2: VID_REPLACE: New VID = VID_VAL. 3: VID_FIRST_TAG: New @@ -468,8 +466,16 @@ enum vcap_actionfield_set { * VCAP_AF_CPU_COPY_ENA: W1, sparx5: is2/es2, lan966x: is2 * Setting this bit to 1 causes all frames that hit this action to be copied to * the CPU extraction queue specified in CPU_QUEUE_NUM. + * VCAP_AF_CPU_QU: W3, sparx5: es0 + * CPU extraction queue. Used when FWD_SEL >0 and PIPELINE_ACT = XTR. * VCAP_AF_CPU_QUEUE_NUM: W3, sparx5: is2/es2, lan966x: is2 * CPU queue number. Used when CPU_COPY_ENA is set. + * VCAP_AF_DEI_A_VAL: W1, sparx5: es0 + * DEI used in ES0 tag A. See TAG_A_DEI_SEL. + * VCAP_AF_DEI_B_VAL: W1, sparx5: es0 + * DEI used in ES0 tag B. See TAG_B_DEI_SEL. + * VCAP_AF_DEI_C_VAL: W1, sparx5: es0 + * DEI used in ES0 tag C. See TAG_C_DEI_SEL. * VCAP_AF_DEI_ENA: W1, sparx5: is0 * If set, use DEI_VAL as classified DEI value. Otherwise, DEI from basic * classification is used @@ -483,19 +489,38 @@ enum vcap_actionfield_set { * VCAP_AF_DSCP_ENA: W1, sparx5: is0 * If set, use DSCP_VAL as classified DSCP value. Otherwise, DSCP value from * basic classification is used. - * VCAP_AF_DSCP_VAL: W6, sparx5: is0 + * VCAP_AF_DSCP_SEL: W3, sparx5: es0 + * Selects source for DSCP. 0: Controlled by port configuration and IFH. 1: + * Classified DSCP via IFH. 2: DSCP_VAL. 3: Reserved. 4: Mapped using mapping + * table 0, otherwise use DSCP_VAL. 5: Mapped using mapping table 1, otherwise + * use mapping table 0. 6: Mapped using mapping table 2, otherwise use DSCP_VAL. + * 7: Mapped using mapping table 3, otherwise use mapping table 2 + * VCAP_AF_DSCP_VAL: W6, sparx5: is0/es0 * See DSCP_ENA. * VCAP_AF_ES2_REW_CMD: W3, sparx5: es2 * Command forwarded to REW: 0: No action. 1: SWAP MAC addresses. 2: Do L2CP * DMAC translation when entering or leaving a tunnel. + * VCAP_AF_ESDX: W13, sparx5: es0 + * Egress counter index. Used to index egress counter set as defined in + * REW::STAT_CFG. + * VCAP_AF_FWD_KILL_ENA: W1, lan966x: is2 + * Setting this bit to 1 denies forwarding of the frame forwarding to any front + * port. The frame can still be copied to the CPU by other actions. * VCAP_AF_FWD_MODE: W2, sparx5: es2 * Forward selector: 0: Forward. 1: Discard. 2: Redirect. 3: Copy. + * VCAP_AF_FWD_SEL: W2, sparx5: es0 + * ES0 Forward selector. 0: No action. 1: Copy to loopback interface. 2: + * Redirect to loopback interface. 3: Discard * VCAP_AF_HIT_ME_ONCE: W1, sparx5: is2/es2, lan966x: is2 * Setting this bit to 1 causes the first frame that hits this action where the * HIT_CNT counter is zero to be copied to the CPU extraction queue specified in * CPU_QUEUE_NUM. The HIT_CNT counter is then incremented and any frames that * hit this action later are not copied to the CPU. To re-enable the HIT_ME_ONCE * functionality, the HIT_CNT counter must be cleared. + * VCAP_AF_HOST_MATCH: W1, lan966x: is2 + * Used for IP source guarding. If set, it signals that the host is a valid (for + * instance a valid combination of source MAC address and source IP address). + * HOST_MATCH is input to the IS2 keys. 
* VCAP_AF_IGNORE_PIPELINE_CTRL: W1, sparx5: is2/es2 * Ignore ingress pipeline control. This enforces the use of the VCAP IS2 action * even when the pipeline control has terminated the frame before VCAP IS2. @@ -504,8 +529,13 @@ enum vcap_actionfield_set { * VCAP_AF_ISDX_ADD_REPLACE_SEL: W1, sparx5: is0 * Controls the classified ISDX. 0: New ISDX = old ISDX + ISDX_VAL. 1: New ISDX * = ISDX_VAL. + * VCAP_AF_ISDX_ENA: W1, lan966x: is2 + * Setting this bit to 1 causes the classified ISDX to be set to the value of + * POLICE_IDX[8:0]. * VCAP_AF_ISDX_VAL: W12, sparx5: is0 * See isdx_add_replace_sel + * VCAP_AF_LOOP_ENA: W1, sparx5: es0 + * 0: Forward based on PIPELINE_PT and FWD_SEL * VCAP_AF_LRN_DIS: W1, sparx5: is2, lan966x: is2 * Setting this bit to 1 disables learning of frames hitting this action. * VCAP_AF_MAP_IDX: W9, sparx5: is0 @@ -521,136 +551,190 @@ enum vcap_actionfield_set { * are applied to. 0: No changes to the QoS Mapping Table lookup. 1: Update key * type and index for QoS Mapping Table lookup #0. 2: Update key type and index * for QoS Mapping Table lookup #1. 3: Reserved. - * VCAP_AF_MASK_MODE: W3, sparx5: is0/is2, lan966x is2 W2 + * VCAP_AF_MASK_MODE: sparx5 is0 W3, sparx5 is2 W3, lan966x is2 W2 * Controls the PORT_MASK use. Sparx5: 0: OR_DSTMASK, 1: AND_VLANMASK, 2: * REPLACE_PGID, 3: REPLACE_ALL, 4: REDIR_PGID, 5: OR_PGID_MASK, 6: VSTAX, 7: * Not applicable. LAN966X: 0: No action, 1: Permit/deny (AND), 2: Policy * forwarding (DMAC lookup), 3: Redirect. The CPU port is untouched by * MASK_MODE. - * VCAP_AF_MATCH_ID: W16, sparx5: is0/is2 + * VCAP_AF_MATCH_ID: W16, sparx5: is2 * Logical ID for the entry. The MATCH_ID is extracted together with the frame * if the frame is forwarded to the CPU (CPU_COPY_ENA). The result is placed in * IFH.CL_RSLT. - * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is0/is2 + * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is2 * Mask used by MATCH_ID. + * VCAP_AF_MIRROR_ENA: W1, lan966x: is2 + * Setting this bit to 1 causes frames to be mirrored to the mirror target port + * (ANA::MIRRPORPORTS). * VCAP_AF_MIRROR_PROBE: W2, sparx5: is2 * Mirroring performed according to configuration of a mirror probe. 0: No * mirroring. 1: Mirror probe 0. 2: Mirror probe 1. 3: Mirror probe 2 * VCAP_AF_MIRROR_PROBE_ID: W2, sparx5: es2 * Signals a mirror probe to be placed in the IFH. Only possible when FWD_MODE - * is copy. 0: No mirroring. 1–3: Use mirror probe 0-2. + * is copy. 0: No mirroring. 1-3: Use mirror probe 0-2. * VCAP_AF_NXT_IDX: W12, sparx5: is0 * Index used as part of key (field G_IDX) in the next lookup. * VCAP_AF_NXT_IDX_CTRL: W3, sparx5: is0 * Controls the generation of the G_IDX used in the VCAP CLM next lookup * VCAP_AF_PAG_OVERRIDE_MASK: W8, sparx5: is0 - * Bits set in this mask will override PAG_VAL from port profile. ï€ New PAG = - * (PAG (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK) + * Bits set in this mask will override PAG_VAL from port profile. New PAG = (PAG + * (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK) * VCAP_AF_PAG_VAL: W8, sparx5: is0 * See PAG_OVERRIDE_MASK. + * VCAP_AF_PCP_A_VAL: W3, sparx5: es0 + * PCP used in ES0 tag A. See TAG_A_PCP_SEL. + * VCAP_AF_PCP_B_VAL: W3, sparx5: es0 + * PCP used in ES0 tag B. See TAG_B_PCP_SEL. + * VCAP_AF_PCP_C_VAL: W3, sparx5: es0 + * PCP used in ES0 tag C. See TAG_C_PCP_SEL. * VCAP_AF_PCP_ENA: W1, sparx5: is0 * If set, use PCP_VAL as classified PCP value. Otherwise, PCP from basic * classification is used. * VCAP_AF_PCP_VAL: W3, sparx5: is0 * See PCP_ENA. 
- * VCAP_AF_PIPELINE_FORCE_ENA: sparx5 is0 W2, sparx5 is2 W1 + * VCAP_AF_PIPELINE_ACT: W1, sparx5: es0 + * Pipeline action when FWD_SEL > 0. 0: XTR. CPU_QU selects CPU extraction queue + * 1: LBK_ASM. + * VCAP_AF_PIPELINE_FORCE_ENA: W1, sparx5: is2 * If set, use PIPELINE_PT unconditionally and set PIPELINE_ACT = NONE if * PIPELINE_PT == NONE. Overrules previous settings of pipeline point. - * VCAP_AF_PIPELINE_PT: W5, sparx5: is0/is2 + * VCAP_AF_PIPELINE_PT: sparx5 is2 W5, sparx5 es0 W2 * Pipeline point used if PIPELINE_FORCE_ENA is set * VCAP_AF_POLICE_ENA: W1, sparx5: is2/es2, lan966x: is2 * Setting this bit to 1 causes frames that hit this action to be policed by the * ACL policer specified in POLICE_IDX. Only applies to the first lookup. - * VCAP_AF_POLICE_IDX: W6, sparx5: is2/es2, lan966x: is2 W9 + * VCAP_AF_POLICE_IDX: sparx5 is2 W6, sparx5 es2 W6, lan966x is2 W9 * Selects VCAP policer used when policing frames (POLICE_ENA) * VCAP_AF_POLICE_REMARK: W1, sparx5: es2 * If set, frames exceeding policer rates are marked as yellow but not * discarded. + * VCAP_AF_POLICE_VCAP_ONLY: W1, lan966x: is2 + * Disable policing from QoS, and port policers. Only the VCAP policer selected + * by POLICE_IDX is active. Only applies to the second lookup. + * VCAP_AF_POP_VAL: W2, sparx5: es0 + * Controls popping of Q-tags. The final number of Q-tags popped is calculated + * as shown in section 4.28.7.2 VLAN Pop Decision. * VCAP_AF_PORT_MASK: sparx5 is0 W65, sparx5 is2 W68, lan966x is2 W8 * Port mask applied to the forwarding decision based on MASK_MODE. + * VCAP_AF_PUSH_CUSTOMER_TAG: W2, sparx5: es0 + * Selects tag C mode: 0: Do not push tag C. 1: Push tag C if + * IFH.VSTAX.TAG.WAS_TAGGED = 1. 2: Push tag C if IFH.VSTAX.TAG.WAS_TAGGED = 0. + * 3: Push tag C if UNTAG_VID_ENA = 0 or (C-TAG.VID ! = VID_C_VAL). + * VCAP_AF_PUSH_INNER_TAG: W1, sparx5: es0 + * Controls inner tagging. 0: Do not push ES0 tag B as inner tag. 1: Push ES0 + * tag B as inner tag. + * VCAP_AF_PUSH_OUTER_TAG: W2, sparx5: es0 + * Controls outer tagging. 0: No ES0 tag A: Port tag is allowed if enabled on + * port. 1: ES0 tag A: Push ES0 tag A. No port tag. 2: Force port tag: Always + * push port tag. No ES0 tag A. 3: Force untag: Never push port tag or ES0 tag + * A. * VCAP_AF_QOS_ENA: W1, sparx5: is0 * If set, use QOS_VAL as classified QoS class. Otherwise, QoS class from basic * classification is used. * VCAP_AF_QOS_VAL: W3, sparx5: is0 * See QOS_ENA. + * VCAP_AF_REW_OP: W16, lan966x: is2 + * Rewriter operation command. * VCAP_AF_RT_DIS: W1, sparx5: is2 * If set, routing is disallowed. Only applies when IS_INNER_ACL is 0. See also * IGR_ACL_ENA, EGR_ACL_ENA, and RLEG_STAT_IDX. + * VCAP_AF_SWAP_MACS_ENA: W1, sparx5: es0 + * This setting is only active when FWD_SEL = 1 or FWD_SEL = 2 and PIPELINE_ACT + * = LBK_ASM. 0: No action. 1: Swap MACs and clear bit 40 in new SMAC. + * VCAP_AF_TAG_A_DEI_SEL: W3, sparx5: es0 + * Selects PCP for ES0 tag A. 0: Classified DEI. 1: DEI_A_VAL. 2: DP and QoS + * mapped to PCP (per port table). 3: DP. + * VCAP_AF_TAG_A_PCP_SEL: W3, sparx5: es0 + * Selects PCP for ES0 tag A. 0: Classified PCP. 1: PCP_A_VAL. 2: DP and QoS + * mapped to PCP (per port table). 3: QoS class. + * VCAP_AF_TAG_A_TPID_SEL: W3, sparx5: es0 + * Selects TPID for ES0 tag A: 0: 0x8100. 1: 0x88A8. 2: Custom + * (REW:PORT:PORT_VLAN_CFG.PORT_TPID). 3: If IFH.TAG_TYPE = 0 then 0x8100 else + * custom. + * VCAP_AF_TAG_A_VID_SEL: W2, sparx5: es0 + * Selects VID for ES0 tag A. 0: Classified VID + VID_A_VAL. 1: VID_A_VAL. 
+ * VCAP_AF_TAG_B_DEI_SEL: W3, sparx5: es0 + * Selects PCP for ES0 tag B. 0: Classified DEI. 1: DEI_B_VAL. 2: DP and QoS + * mapped to PCP (per port table). 3: DP. + * VCAP_AF_TAG_B_PCP_SEL: W3, sparx5: es0 + * Selects PCP for ES0 tag B. 0: Classified PCP. 1: PCP_B_VAL. 2: DP and QoS + * mapped to PCP (per port table). 3: QoS class. + * VCAP_AF_TAG_B_TPID_SEL: W3, sparx5: es0 + * Selects TPID for ES0 tag B. 0: 0x8100. 1: 0x88A8. 2: Custom + * (REW:PORT:PORT_VLAN_CFG.PORT_TPID). 3: If IFH.TAG_TYPE = 0 then 0x8100 else + * custom. + * VCAP_AF_TAG_B_VID_SEL: W2, sparx5: es0 + * Selects VID for ES0 tag B. 0: Classified VID + VID_B_VAL. 1: VID_B_VAL. + * VCAP_AF_TAG_C_DEI_SEL: W3, sparx5: es0 + * Selects DEI source for ES0 tag C. 0: Classified DEI. 1: DEI_C_VAL. 2: + * REW::DP_MAP.DP [IFH.VSTAX.QOS.DP]. 3: DEI of popped VLAN tag if available + * (IFH.VSTAX.TAG.WAS_TAGGED = 1 and tot_pop_cnt>0) else DEI_C_VAL. 4: Mapped + * using mapping table 0, otherwise use DEI_C_VAL. 5: Mapped using mapping table + * 1, otherwise use mapping table 0. 6: Mapped using mapping table 2, otherwise + * use DEI_C_VAL. 7: Mapped using mapping table 3, otherwise use mapping table + * 2. + * VCAP_AF_TAG_C_PCP_SEL: W3, sparx5: es0 + * Selects PCP source for ES0 tag C. 0: Classified PCP. 1: PCP_C_VAL. 2: + * Reserved. 3: PCP of popped VLAN tag if available (IFH.VSTAX.TAG.WAS_TAGGED=1 + * and tot_pop_cnt>0) else PCP_C_VAL. 4: Mapped using mapping table 0, otherwise + * use PCP_C_VAL. 5: Mapped using mapping table 1, otherwise use mapping table + * 0. 6: Mapped using mapping table 2, otherwise use PCP_C_VAL. 7: Mapped using + * mapping table 3, otherwise use mapping table 2. + * VCAP_AF_TAG_C_TPID_SEL: W3, sparx5: es0 + * Selects TPID for ES0 tag C. 0: 0x8100. 1: 0x88A8. 2: Custom 1. 3: Custom 2. + * 4: Custom 3. 5: See TAG_A_TPID_SEL. + * VCAP_AF_TAG_C_VID_SEL: W2, sparx5: es0 + * Selects VID for ES0 tag C. The resulting VID is termed C-TAG.VID. 0: + * Classified VID. 1: VID_C_VAL. 2: IFH.ENCAP.GVID. 3: Reserved. * VCAP_AF_TYPE: W1, sparx5: is0 * Actionset type id - Set by the API + * VCAP_AF_UNTAG_VID_ENA: W1, sparx5: es0 + * Controls insertion of tag C. Untag or insert mode can be selected. See + * PUSH_CUSTOMER_TAG. + * VCAP_AF_VID_A_VAL: W12, sparx5: es0 + * VID used in ES0 tag A. See TAG_A_VID_SEL. + * VCAP_AF_VID_B_VAL: W12, sparx5: es0 + * VID used in ES0 tag B. See TAG_B_VID_SEL. + * VCAP_AF_VID_C_VAL: W12, sparx5: es0 + * VID used in ES0 tag C. See TAG_C_VID_SEL. * VCAP_AF_VID_VAL: W13, sparx5: is0 * New VID Value - * VCAP_AF_MIRROR_ENA: W1, lan966x: is2 - * Setting this bit to 1 causes frames to be mirrored to the mirror target - * port (ANA::MIRRPORPORTS). - * VCAP_AF_POLICE_VCAP_ONLY: W1, lan966x: is2 - * Disable policing from QoS, and port policers. Only the VCAP policer - * selected by POLICE_IDX is active. Only applies to the second lookup. - * VCAP_AF_REW_OP: W16, lan966x: is2 - * Rewriter operation command. - * VCAP_AF_ISDX_ENA: W1, lan966x: is2 - * Setting this bit to 1 causes the classified ISDX to be set to the value of - * POLICE_IDX[8:0]. - * VCAP_AF_ACL_ID: W6, lan966x: is2 - * Logical ID for the entry. This ID is extracted together with the frame in - * the CPU extraction header. Only applicable to actions with CPU_COPY_ENA or - * HIT_ME_ONCE set. - * VCAP_AF_FWD_KILL_ENA: W1, lan966x: is2 - * Setting this bit to 1 denies forwarding of the frame forwarding to any - * front port. The frame can still be copied to the CPU by other actions. 
- * VCAP_AF_HOST_MATCH: W1, lan966x: is2 - * Used for IP source guarding. If set, it signals that the host is a valid - * (for instance a valid combination of source MAC address and source IP - * address). HOST_MATCH is input to the IS2 keys. */ /* Actionfield names */ enum vcap_action_field { VCAP_AF_NO_VALUE, /* initial value */ - VCAP_AF_ACL_MAC, - VCAP_AF_ACL_RT_MODE, + VCAP_AF_ACL_ID, VCAP_AF_CLS_VID_SEL, VCAP_AF_CNT_ID, VCAP_AF_COPY_PORT_NUM, VCAP_AF_COPY_QUEUE_NUM, - VCAP_AF_COSID_ENA, - VCAP_AF_COSID_VAL, VCAP_AF_CPU_COPY_ENA, - VCAP_AF_CPU_DIS, - VCAP_AF_CPU_ENA, - VCAP_AF_CPU_Q, + VCAP_AF_CPU_QU, VCAP_AF_CPU_QUEUE_NUM, - VCAP_AF_CUSTOM_ACE_ENA, - VCAP_AF_CUSTOM_ACE_OFFSET, + VCAP_AF_DEI_A_VAL, + VCAP_AF_DEI_B_VAL, + VCAP_AF_DEI_C_VAL, VCAP_AF_DEI_ENA, VCAP_AF_DEI_VAL, - VCAP_AF_DLB_OFFSET, - VCAP_AF_DMAC_OFFSET_ENA, VCAP_AF_DP_ENA, VCAP_AF_DP_VAL, VCAP_AF_DSCP_ENA, + VCAP_AF_DSCP_SEL, VCAP_AF_DSCP_VAL, - VCAP_AF_EGR_ACL_ENA, VCAP_AF_ES2_REW_CMD, - VCAP_AF_FWD_DIS, + VCAP_AF_ESDX, + VCAP_AF_FWD_KILL_ENA, VCAP_AF_FWD_MODE, - VCAP_AF_FWD_TYPE, - VCAP_AF_GVID_ADD_REPLACE_SEL, + VCAP_AF_FWD_SEL, VCAP_AF_HIT_ME_ONCE, + VCAP_AF_HOST_MATCH, VCAP_AF_IGNORE_PIPELINE_CTRL, - VCAP_AF_IGR_ACL_ENA, - VCAP_AF_INJ_MASQ_ENA, - VCAP_AF_INJ_MASQ_LPORT, - VCAP_AF_INJ_MASQ_PORT, VCAP_AF_INTR_ENA, VCAP_AF_ISDX_ADD_REPLACE_SEL, + VCAP_AF_ISDX_ENA, VCAP_AF_ISDX_VAL, - VCAP_AF_IS_INNER_ACL, - VCAP_AF_L3_MAC_UPDATE_DIS, - VCAP_AF_LOG_MSG_INTERVAL, - VCAP_AF_LPM_AFFIX_ENA, - VCAP_AF_LPM_AFFIX_VAL, - VCAP_AF_LPORT_ENA, + VCAP_AF_LOOP_ENA, VCAP_AF_LRN_DIS, VCAP_AF_MAP_IDX, VCAP_AF_MAP_KEY, @@ -658,78 +742,53 @@ enum vcap_action_field { VCAP_AF_MASK_MODE, VCAP_AF_MATCH_ID, VCAP_AF_MATCH_ID_MASK, - VCAP_AF_MIP_SEL, + VCAP_AF_MIRROR_ENA, VCAP_AF_MIRROR_PROBE, VCAP_AF_MIRROR_PROBE_ID, - VCAP_AF_MPLS_IP_CTRL_ENA, - VCAP_AF_MPLS_MEP_ENA, - VCAP_AF_MPLS_MIP_ENA, - VCAP_AF_MPLS_OAM_FLAVOR, - VCAP_AF_MPLS_OAM_TYPE, - VCAP_AF_NUM_VLD_LABELS, VCAP_AF_NXT_IDX, VCAP_AF_NXT_IDX_CTRL, - VCAP_AF_NXT_KEY_TYPE, - VCAP_AF_NXT_NORMALIZE, - VCAP_AF_NXT_NORM_W16_OFFSET, - VCAP_AF_NXT_NORM_W32_OFFSET, - VCAP_AF_NXT_OFFSET_FROM_TYPE, - VCAP_AF_NXT_TYPE_AFTER_OFFSET, - VCAP_AF_OAM_IP_BFD_ENA, - VCAP_AF_OAM_TWAMP_ENA, - VCAP_AF_OAM_Y1731_SEL, VCAP_AF_PAG_OVERRIDE_MASK, VCAP_AF_PAG_VAL, + VCAP_AF_PCP_A_VAL, + VCAP_AF_PCP_B_VAL, + VCAP_AF_PCP_C_VAL, VCAP_AF_PCP_ENA, VCAP_AF_PCP_VAL, - VCAP_AF_PIPELINE_ACT_SEL, + VCAP_AF_PIPELINE_ACT, VCAP_AF_PIPELINE_FORCE_ENA, VCAP_AF_PIPELINE_PT, - VCAP_AF_PIPELINE_PT_REDUCED, VCAP_AF_POLICE_ENA, VCAP_AF_POLICE_IDX, VCAP_AF_POLICE_REMARK, + VCAP_AF_POLICE_VCAP_ONLY, + VCAP_AF_POP_VAL, VCAP_AF_PORT_MASK, - VCAP_AF_PTP_MASTER_SEL, + VCAP_AF_PUSH_CUSTOMER_TAG, + VCAP_AF_PUSH_INNER_TAG, + VCAP_AF_PUSH_OUTER_TAG, VCAP_AF_QOS_ENA, VCAP_AF_QOS_VAL, - VCAP_AF_REW_CMD, - VCAP_AF_RLEG_DMAC_CHK_DIS, - VCAP_AF_RLEG_STAT_IDX, - VCAP_AF_RSDX_ENA, - VCAP_AF_RSDX_VAL, - VCAP_AF_RSVD_LBL_VAL, + VCAP_AF_REW_OP, VCAP_AF_RT_DIS, - VCAP_AF_RT_SEL, - VCAP_AF_S2_KEY_SEL_ENA, - VCAP_AF_S2_KEY_SEL_IDX, - VCAP_AF_SAM_SEQ_ENA, - VCAP_AF_SIP_IDX, - VCAP_AF_SWAP_MAC_ENA, - VCAP_AF_TCP_UDP_DPORT, - VCAP_AF_TCP_UDP_ENA, - VCAP_AF_TCP_UDP_SPORT, - VCAP_AF_TC_ENA, - VCAP_AF_TC_LABEL, - VCAP_AF_TPID_SEL, - VCAP_AF_TTL_DECR_DIS, - VCAP_AF_TTL_ENA, - VCAP_AF_TTL_LABEL, - VCAP_AF_TTL_UPDATE_ENA, + VCAP_AF_SWAP_MACS_ENA, + VCAP_AF_TAG_A_DEI_SEL, + VCAP_AF_TAG_A_PCP_SEL, + VCAP_AF_TAG_A_TPID_SEL, + VCAP_AF_TAG_A_VID_SEL, + VCAP_AF_TAG_B_DEI_SEL, + VCAP_AF_TAG_B_PCP_SEL, + VCAP_AF_TAG_B_TPID_SEL, + VCAP_AF_TAG_B_VID_SEL, + 
VCAP_AF_TAG_C_DEI_SEL, + VCAP_AF_TAG_C_PCP_SEL, + VCAP_AF_TAG_C_TPID_SEL, + VCAP_AF_TAG_C_VID_SEL, VCAP_AF_TYPE, + VCAP_AF_UNTAG_VID_ENA, + VCAP_AF_VID_A_VAL, + VCAP_AF_VID_B_VAL, + VCAP_AF_VID_C_VAL, VCAP_AF_VID_VAL, - VCAP_AF_VLAN_POP_CNT, - VCAP_AF_VLAN_POP_CNT_ENA, - VCAP_AF_VLAN_PUSH_CNT, - VCAP_AF_VLAN_PUSH_CNT_ENA, - VCAP_AF_VLAN_WAS_TAGGED, - VCAP_AF_MIRROR_ENA, - VCAP_AF_POLICE_VCAP_ONLY, - VCAP_AF_REW_OP, - VCAP_AF_ISDX_ENA, - VCAP_AF_ACL_ID, - VCAP_AF_FWD_KILL_ENA, - VCAP_AF_HOST_MATCH, }; #endif /* __VCAP_AG_API__ */ diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.c b/drivers/net/ethernet/microchip/vcap/vcap_api.c index 664aae3e2acd..4847d0d99ec9 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_api.c @@ -37,11 +37,13 @@ struct vcap_rule_move { int count; /* blocksize of addresses to move */ }; -/* Stores the filter cookie that enabled the port */ +/* Stores the filter cookie and chain id that enabled the port */ struct vcap_enabled_port { struct list_head list; /* for insertion in enabled ports list */ struct net_device *ndev; /* the enabled port */ unsigned long cookie; /* filter that enabled the port */ + int src_cid; /* source chain id */ + int dst_cid; /* destination chain id */ }; void vcap_iter_set(struct vcap_stream_iter *itr, int sw_width, @@ -508,10 +510,133 @@ static void vcap_encode_keyfield_typegroups(struct vcap_control *vctrl, vcap_encode_typegroups(cache->maskstream, sw_width, tgt, true); } +/* Copy data from src to dst but reverse the data in chunks of 32bits. + * For example if src is 00:11:22:33:44:55 where 55 is LSB the dst will + * have the value 22:33:44:55:00:11. + */ +static void vcap_copy_to_w32be(u8 *dst, const u8 *src, int size) +{ + for (int idx = 0; idx < size; ++idx) { + int first_byte_index = 0; + int nidx; + + first_byte_index = size - (((idx >> 2) + 1) << 2); + if (first_byte_index < 0) + first_byte_index = 0; + nidx = idx + first_byte_index - (idx & ~0x3); + dst[nidx] = src[idx]; + } +} + +static void +vcap_copy_from_client_keyfield(struct vcap_rule *rule, + struct vcap_client_keyfield *dst, + const struct vcap_client_keyfield *src) +{ + struct vcap_rule_internal *ri = to_intrule(rule); + const struct vcap_client_keyfield_data *sdata; + struct vcap_client_keyfield_data *ddata; + int size; + + dst->ctrl.type = src->ctrl.type; + dst->ctrl.key = src->ctrl.key; + INIT_LIST_HEAD(&dst->ctrl.list); + sdata = &src->data; + ddata = &dst->data; + + if (!ri->admin->w32be) { + memcpy(ddata, sdata, sizeof(dst->data)); + return; + } + + size = keyfield_size_table[dst->ctrl.type] / 2; + + switch (dst->ctrl.type) { + case VCAP_FIELD_BIT: + case VCAP_FIELD_U32: + memcpy(ddata, sdata, sizeof(dst->data)); + break; + case VCAP_FIELD_U48: + vcap_copy_to_w32be(ddata->u48.value, src->data.u48.value, size); + vcap_copy_to_w32be(ddata->u48.mask, src->data.u48.mask, size); + break; + case VCAP_FIELD_U56: + vcap_copy_to_w32be(ddata->u56.value, sdata->u56.value, size); + vcap_copy_to_w32be(ddata->u56.mask, sdata->u56.mask, size); + break; + case VCAP_FIELD_U64: + vcap_copy_to_w32be(ddata->u64.value, sdata->u64.value, size); + vcap_copy_to_w32be(ddata->u64.mask, sdata->u64.mask, size); + break; + case VCAP_FIELD_U72: + vcap_copy_to_w32be(ddata->u72.value, sdata->u72.value, size); + vcap_copy_to_w32be(ddata->u72.mask, sdata->u72.mask, size); + break; + case VCAP_FIELD_U112: + vcap_copy_to_w32be(ddata->u112.value, sdata->u112.value, size); + vcap_copy_to_w32be(ddata->u112.mask, sdata->u112.mask, size); + 
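The index gymnastics in vcap_copy_to_w32be() above are easiest to verify by running them. Below is the same loop lifted into a userspace program that prints the mapping for a 6-byte (MAC-sized) field; running it shows the trailing 16-bit chunk moving ahead of the first 32-bit chunk, i.e. the array is re-ordered chunk-wise rather than byte-reversed.

	#include <stdint.h>
	#include <stdio.h>

	/* Same loop as vcap_copy_to_w32be(), lifted out for inspection. */
	static void copy_to_w32be(uint8_t *dst, const uint8_t *src, int size)
	{
		for (int idx = 0; idx < size; ++idx) {
			int first_byte_index =
				size - (((idx >> 2) + 1) << 2);
			int nidx;

			if (first_byte_index < 0)
				first_byte_index = 0;
			nidx = idx + first_byte_index - (idx & ~0x3);
			dst[nidx] = src[idx];
		}
	}

	int main(void)
	{
		const uint8_t src[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
		uint8_t dst[6] = { 0 };

		copy_to_w32be(dst, src, 6);
		for (int i = 0; i < 6; ++i)
			printf("%02x%s", dst[i], i < 5 ? ":" : "\n");
		return 0;
	}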
break; + case VCAP_FIELD_U128: + vcap_copy_to_w32be(ddata->u128.value, sdata->u128.value, size); + vcap_copy_to_w32be(ddata->u128.mask, sdata->u128.mask, size); + break; + } +} + +static void +vcap_copy_from_client_actionfield(struct vcap_rule *rule, + struct vcap_client_actionfield *dst, + const struct vcap_client_actionfield *src) +{ + struct vcap_rule_internal *ri = to_intrule(rule); + const struct vcap_client_actionfield_data *sdata; + struct vcap_client_actionfield_data *ddata; + int size; + + dst->ctrl.type = src->ctrl.type; + dst->ctrl.action = src->ctrl.action; + INIT_LIST_HEAD(&dst->ctrl.list); + sdata = &src->data; + ddata = &dst->data; + + if (!ri->admin->w32be) { + memcpy(ddata, sdata, sizeof(dst->data)); + return; + } + + size = actionfield_size_table[dst->ctrl.type]; + + switch (dst->ctrl.type) { + case VCAP_FIELD_BIT: + case VCAP_FIELD_U32: + memcpy(ddata, sdata, sizeof(dst->data)); + break; + case VCAP_FIELD_U48: + vcap_copy_to_w32be(ddata->u48.value, sdata->u48.value, size); + break; + case VCAP_FIELD_U56: + vcap_copy_to_w32be(ddata->u56.value, sdata->u56.value, size); + break; + case VCAP_FIELD_U64: + vcap_copy_to_w32be(ddata->u64.value, sdata->u64.value, size); + break; + case VCAP_FIELD_U72: + vcap_copy_to_w32be(ddata->u72.value, sdata->u72.value, size); + break; + case VCAP_FIELD_U112: + vcap_copy_to_w32be(ddata->u112.value, sdata->u112.value, size); + break; + case VCAP_FIELD_U128: + vcap_copy_to_w32be(ddata->u128.value, sdata->u128.value, size); + break; + } +} + static int vcap_encode_rule_keyset(struct vcap_rule_internal *ri) { const struct vcap_client_keyfield *ckf; const struct vcap_typegroup *tg_table; + struct vcap_client_keyfield tempkf; const struct vcap_field *kf_table; int keyset_size; @@ -552,7 +677,9 @@ static int vcap_encode_rule_keyset(struct vcap_rule_internal *ri) __func__, __LINE__, ckf->ctrl.key); return -EINVAL; } - vcap_encode_keyfield(ri, ckf, &kf_table[ckf->ctrl.key], tg_table); + vcap_copy_from_client_keyfield(&ri->data, &tempkf, ckf); + vcap_encode_keyfield(ri, &tempkf, &kf_table[ckf->ctrl.key], + tg_table); } /* Add typegroup bits to the key/mask bitstreams */ vcap_encode_keyfield_typegroups(ri->vctrl, ri, tg_table); @@ -667,6 +794,7 @@ static int vcap_encode_rule_actionset(struct vcap_rule_internal *ri) { const struct vcap_client_actionfield *caf; const struct vcap_typegroup *tg_table; + struct vcap_client_actionfield tempaf; const struct vcap_field *af_table; int actionset_size; @@ -707,8 +835,9 @@ static int vcap_encode_rule_actionset(struct vcap_rule_internal *ri) __func__, __LINE__, caf->ctrl.action); return -EINVAL; } - vcap_encode_actionfield(ri, caf, &af_table[caf->ctrl.action], - tg_table); + vcap_copy_from_client_actionfield(&ri->data, &tempaf, caf); + vcap_encode_actionfield(ri, &tempaf, + &af_table[caf->ctrl.action], tg_table); } /* Add typegroup bits to the entry bitstreams */ vcap_encode_actionfield_typegroups(ri, tg_table); @@ -738,7 +867,7 @@ int vcap_api_check(struct vcap_control *ctrl) !ctrl->ops->add_default_fields || !ctrl->ops->cache_erase || !ctrl->ops->cache_write || !ctrl->ops->cache_read || !ctrl->ops->init || !ctrl->ops->update || !ctrl->ops->move || - !ctrl->ops->port_info || !ctrl->ops->enable) { + !ctrl->ops->port_info) { pr_err("%s:%d: client operations are missing\n", __func__, __LINE__); return -ENOENT; @@ -791,9 +920,8 @@ int vcap_set_rule_set_actionset(struct vcap_rule *rule, } EXPORT_SYMBOL_GPL(vcap_set_rule_set_actionset); -/* Find a rule with a provided rule id */ -static struct vcap_rule_internal 
*vcap_lookup_rule(struct vcap_control *vctrl, - u32 id) +/* Check if a rule with this id exists */ +static bool vcap_rule_exists(struct vcap_control *vctrl, u32 id) { struct vcap_rule_internal *ri; struct vcap_admin *admin; @@ -802,7 +930,25 @@ static struct vcap_rule_internal *vcap_lookup_rule(struct vcap_control *vctrl, list_for_each_entry(admin, &vctrl->list, list) list_for_each_entry(ri, &admin->rules, list) if (ri->data.id == id) + return true; + return false; +} + +/* Find a rule with a provided rule id return a locked vcap */ +static struct vcap_rule_internal * +vcap_get_locked_rule(struct vcap_control *vctrl, u32 id) +{ + struct vcap_rule_internal *ri; + struct vcap_admin *admin; + + /* Look for the rule id in all vcaps */ + list_for_each_entry(admin, &vctrl->list, list) { + mutex_lock(&admin->lock); + list_for_each_entry(ri, &admin->rules, list) + if (ri->data.id == id) return ri; + mutex_unlock(&admin->lock); + } return NULL; } @@ -811,19 +957,31 @@ int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie) { struct vcap_rule_internal *ri; struct vcap_admin *admin; + int id = 0; /* Look for the rule id in all vcaps */ - list_for_each_entry(admin, &vctrl->list, list) - list_for_each_entry(ri, &admin->rules, list) - if (ri->data.cookie == cookie) - return ri->data.id; + list_for_each_entry(admin, &vctrl->list, list) { + mutex_lock(&admin->lock); + list_for_each_entry(ri, &admin->rules, list) { + if (ri->data.cookie == cookie) { + id = ri->data.id; + break; + } + } + mutex_unlock(&admin->lock); + if (id) + return id; + } return -ENOENT; } EXPORT_SYMBOL_GPL(vcap_lookup_rule_by_cookie); -/* Make a shallow copy of the rule without the fields */ -struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri) +/* Make a copy of the rule, shallow or full */ +static struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri, + bool full) { + struct vcap_client_actionfield *caf, *newcaf; + struct vcap_client_keyfield *ckf, *newckf; struct vcap_rule_internal *duprule; /* Allocate the client part */ @@ -836,6 +994,25 @@ struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri) /* No elements in these lists */ INIT_LIST_HEAD(&duprule->data.keyfields); INIT_LIST_HEAD(&duprule->data.actionfields); + + /* A full rule copy includes keys and actions */ + if (!full) + return duprule; + + list_for_each_entry(ckf, &ri->data.keyfields, ctrl.list) { + newckf = kmemdup(ckf, sizeof(*newckf), GFP_KERNEL); + if (!newckf) + return ERR_PTR(-ENOMEM); + list_add_tail(&newckf->ctrl.list, &duprule->data.keyfields); + } + + list_for_each_entry(caf, &ri->data.actionfields, ctrl.list) { + newcaf = kmemdup(caf, sizeof(*newcaf), GFP_KERNEL); + if (!newcaf) + return ERR_PTR(-ENOMEM); + list_add_tail(&newcaf->ctrl.list, &duprule->data.actionfields); + } + return duprule; } @@ -1424,39 +1601,65 @@ struct vcap_admin *vcap_find_admin(struct vcap_control *vctrl, int cid) } EXPORT_SYMBOL_GPL(vcap_find_admin); -/* Is the next chain id in the following lookup, possible in another VCAP */ -bool vcap_is_next_lookup(struct vcap_control *vctrl, int cur_cid, int next_cid) +/* Is this the last admin instance ordered by chain id and direction */ +static bool vcap_admin_is_last(struct vcap_control *vctrl, + struct vcap_admin *admin, + bool ingress) { - struct vcap_admin *admin, *next_admin; - int lookup, next_lookup; + struct vcap_admin *iter, *last = NULL; + int max_cid = 0; - /* The offset must be at least one lookup */ - if (next_cid < cur_cid + VCAP_CID_LOOKUP_SIZE) + 
list_for_each_entry(iter, &vctrl->list, list) { + if (iter->first_cid > max_cid && + iter->ingress == ingress) { + last = iter; + max_cid = iter->first_cid; + } + } + if (!last) return false; - if (vcap_api_check(vctrl)) - return false; + return admin == last; +} - admin = vcap_find_admin(vctrl, cur_cid); - if (!admin) +/* Calculate the value used for chaining VCAP rules */ +int vcap_chain_offset(struct vcap_control *vctrl, int from_cid, int to_cid) +{ + int diff = to_cid - from_cid; + + if (diff < 0) /* Wrong direction */ + return diff; + to_cid %= VCAP_CID_LOOKUP_SIZE; + if (to_cid == 0) /* Destination aligned to a lookup == no chaining */ + return 0; + diff %= VCAP_CID_LOOKUP_SIZE; /* Limit to a value within a lookup */ + return diff; +} +EXPORT_SYMBOL_GPL(vcap_chain_offset); + +/* Is the next chain id in one of the following lookups + * For now this does not support filters linked to other filters using + * keys and actions. That will be added later. + */ +bool vcap_is_next_lookup(struct vcap_control *vctrl, int src_cid, int dst_cid) +{ + struct vcap_admin *admin; + int next_cid; + + if (vcap_api_check(vctrl)) return false; - /* If no VCAP contains the next chain, the next chain must be beyond - * the last chain in the current VCAP - */ - next_admin = vcap_find_admin(vctrl, next_cid); - if (!next_admin) - return next_cid > admin->last_cid; + /* The offset must be at least one lookup so round up one chain */ + next_cid = roundup(src_cid + 1, VCAP_CID_LOOKUP_SIZE); - lookup = vcap_chain_id_to_lookup(admin, cur_cid); - next_lookup = vcap_chain_id_to_lookup(next_admin, next_cid); + if (dst_cid < next_cid) + return false; - /* Next lookup must be the following lookup */ - if (admin == next_admin || admin->vtype == next_admin->vtype) - return next_lookup == lookup + 1; + admin = vcap_find_admin(vctrl, dst_cid); + if (!admin) + return false; - /* Must be the first lookup in the next VCAP instance */ - return next_lookup == 0; + return true; } EXPORT_SYMBOL_GPL(vcap_is_next_lookup); @@ -1504,6 +1707,39 @@ static int vcap_add_type_keyfield(struct vcap_rule *rule) return 0; } +/* Add the actionset typefield to the list of rule actionfields */ +static int vcap_add_type_actionfield(struct vcap_rule *rule) +{ + enum vcap_actionfield_set actionset = rule->actionset; + struct vcap_rule_internal *ri = to_intrule(rule); + enum vcap_type vt = ri->admin->vtype; + const struct vcap_field *fields; + const struct vcap_set *aset; + int ret = -EINVAL; + + aset = vcap_actionfieldset(ri->vctrl, vt, actionset); + if (!aset) + return ret; + if (aset->type_id == (u8)-1) /* No type field is needed */ + return 0; + + fields = vcap_actionfields(ri->vctrl, vt, actionset); + if (!fields) + return -EINVAL; + if (fields[VCAP_AF_TYPE].width > 1) { + ret = vcap_rule_add_action_u32(rule, VCAP_AF_TYPE, + aset->type_id); + } else { + if (aset->type_id) + ret = vcap_rule_add_action_bit(rule, VCAP_AF_TYPE, + VCAP_BIT_1); + else + ret = vcap_rule_add_action_bit(rule, VCAP_AF_TYPE, + VCAP_BIT_0); + } + return ret; +} + /* Add a keyset to a keyset list */ bool vcap_keyset_list_add(struct vcap_keyset_list *keysetlist, enum vcap_keyfield_set keyset) @@ -1521,6 +1757,22 @@ bool vcap_keyset_list_add(struct vcap_keyset_list *keysetlist, } EXPORT_SYMBOL_GPL(vcap_keyset_list_add); +/* Add a actionset to a actionset list */ +static bool vcap_actionset_list_add(struct vcap_actionset_list *actionsetlist, + enum vcap_actionfield_set actionset) +{ + int idx; + + if (actionsetlist->cnt < actionsetlist->max) { + /* Avoid duplicates */ + for 
(idx = 0; idx < actionsetlist->cnt; ++idx) + if (actionsetlist->actionsets[idx] == actionset) + return actionsetlist->cnt < actionsetlist->max; + actionsetlist->actionsets[actionsetlist->cnt++] = actionset; + } + return actionsetlist->cnt < actionsetlist->max; +} + /* map keyset id to a string with the keyset name */ const char *vcap_keyset_name(struct vcap_control *vctrl, enum vcap_keyfield_set keyset) @@ -1629,6 +1881,75 @@ bool vcap_rule_find_keysets(struct vcap_rule *rule, } EXPORT_SYMBOL_GPL(vcap_rule_find_keysets); +/* Return the actionfield that matches a action in a actionset */ +static const struct vcap_field * +vcap_find_actionset_actionfield(struct vcap_control *vctrl, + enum vcap_type vtype, + enum vcap_actionfield_set actionset, + enum vcap_action_field action) +{ + const struct vcap_field *fields; + int idx, count; + + fields = vcap_actionfields(vctrl, vtype, actionset); + if (!fields) + return NULL; + + /* Iterate the actionfields of the actionset */ + count = vcap_actionfield_count(vctrl, vtype, actionset); + for (idx = 0; idx < count; ++idx) { + if (fields[idx].width == 0) + continue; + + if (action == idx) + return &fields[idx]; + } + + return NULL; +} + +/* Match a list of actions against the actionsets available in a vcap type */ +static bool vcap_rule_find_actionsets(struct vcap_rule_internal *ri, + struct vcap_actionset_list *matches) +{ + int actionset, found, actioncount, map_size; + const struct vcap_client_actionfield *ckf; + const struct vcap_field **map; + enum vcap_type vtype; + + vtype = ri->admin->vtype; + map = ri->vctrl->vcaps[vtype].actionfield_set_map; + map_size = ri->vctrl->vcaps[vtype].actionfield_set_size; + + /* Get a count of the actionfields we want to match */ + actioncount = 0; + list_for_each_entry(ckf, &ri->data.actionfields, ctrl.list) + ++actioncount; + + matches->cnt = 0; + /* Iterate the actionsets of the VCAP */ + for (actionset = 0; actionset < map_size; ++actionset) { + if (!map[actionset]) + continue; + + /* Iterate the actions in the rule */ + found = 0; + list_for_each_entry(ckf, &ri->data.actionfields, ctrl.list) + if (vcap_find_actionset_actionfield(ri->vctrl, vtype, + actionset, + ckf->ctrl.action)) + ++found; + + /* Save the actionset if all actionfields were found */ + if (found == actioncount) + if (!vcap_actionset_list_add(matches, actionset)) + /* bail out when the quota is filled */ + break; + } + + return matches->cnt > 0; +} + /* Validate a rule with respect to available port keys */ int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto) { @@ -1680,13 +2001,26 @@ int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto) return ret; } if (ri->data.actionset == VCAP_AFS_NO_VALUE) { - /* Later also actionsets will be matched against actions in - * the rule, and the type will be set accordingly - */ - ri->data.exterr = VCAP_ERR_NO_ACTIONSET_MATCH; - return -EINVAL; + struct vcap_actionset_list matches = {}; + enum vcap_actionfield_set actionsets[10]; + + matches.actionsets = actionsets; + matches.max = ARRAY_SIZE(actionsets); + + /* Find an actionset that fits the rule actions */ + if (!vcap_rule_find_actionsets(ri, &matches)) { + ri->data.exterr = VCAP_ERR_NO_ACTIONSET_MATCH; + return -EINVAL; + } + ret = vcap_set_rule_set_actionset(rule, actionsets[0]); + if (ret < 0) { + pr_err("%s:%d: actionset was not updated: %d\n", + __func__, __LINE__, ret); + return ret; + } } vcap_add_type_keyfield(rule); + vcap_add_type_actionfield(rule); /* Add default fields to this rule */ ri->vctrl->ops->add_default_fields(ri->ndev, ri->admin, 
rule); @@ -1721,7 +2055,7 @@ static u32 vcap_set_rule_id(struct vcap_rule_internal *ri) return ri->data.id; for (u32 next_id = 1; next_id < ~0; ++next_id) { - if (!vcap_lookup_rule(ri->vctrl, next_id)) { + if (!vcap_rule_exists(ri->vctrl, next_id)) { ri->data.id = next_id; break; } @@ -1756,8 +2090,8 @@ static int vcap_insert_rule(struct vcap_rule_internal *ri, ri->addr = vcap_next_rule_addr(admin->last_used_addr, ri); admin->last_used_addr = ri->addr; - /* Add a shallow copy of the rule to the VCAP list */ - duprule = vcap_dup_rule(ri); + /* Add a copy of the rule to the VCAP list */ + duprule = vcap_dup_rule(ri, ri->state == VCAP_RS_DISABLED); if (IS_ERR(duprule)) return PTR_ERR(duprule); @@ -1770,8 +2104,8 @@ static int vcap_insert_rule(struct vcap_rule_internal *ri, ri->addr = vcap_next_rule_addr(addr, ri); addr = ri->addr; - /* Add a shallow copy of the rule to the VCAP list */ - duprule = vcap_dup_rule(ri); + /* Add a copy of the rule to the VCAP list */ + duprule = vcap_dup_rule(ri, ri->state == VCAP_RS_DISABLED); if (IS_ERR(duprule)) return PTR_ERR(duprule); @@ -1803,11 +2137,96 @@ static void vcap_move_rules(struct vcap_rule_internal *ri, move->offset, move->count); } +/* Check if the chain is already used to enable a VCAP lookup for this port */ +static bool vcap_is_chain_used(struct vcap_control *vctrl, + struct net_device *ndev, int src_cid) +{ + struct vcap_enabled_port *eport; + struct vcap_admin *admin; + + list_for_each_entry(admin, &vctrl->list, list) + list_for_each_entry(eport, &admin->enabled, list) + if (eport->src_cid == src_cid && eport->ndev == ndev) + return true; + + return false; +} + +/* Fetch the next chain in the enabled list for the port */ +static int vcap_get_next_chain(struct vcap_control *vctrl, + struct net_device *ndev, + int dst_cid) +{ + struct vcap_enabled_port *eport; + struct vcap_admin *admin; + + list_for_each_entry(admin, &vctrl->list, list) { + list_for_each_entry(eport, &admin->enabled, list) { + if (eport->ndev != ndev) + continue; + if (eport->src_cid == dst_cid) + return eport->dst_cid; + } + } + + return 0; +} + +static bool vcap_path_exist(struct vcap_control *vctrl, struct net_device *ndev, + int dst_cid) +{ + int cid = rounddown(dst_cid, VCAP_CID_LOOKUP_SIZE); + struct vcap_enabled_port *eport = NULL; + struct vcap_enabled_port *elem; + struct vcap_admin *admin; + int tmp; + + if (cid == 0) /* Chain zero is always available */ + return true; + + /* Find first entry that starts from chain 0*/ + list_for_each_entry(admin, &vctrl->list, list) { + list_for_each_entry(elem, &admin->enabled, list) { + if (elem->src_cid == 0 && elem->ndev == ndev) { + eport = elem; + break; + } + } + if (eport) + break; + } + + if (!eport) + return false; + + tmp = eport->dst_cid; + while (tmp != cid && tmp != 0) + tmp = vcap_get_next_chain(vctrl, ndev, tmp); + + return !!tmp; +} + +/* Internal clients can always store their rules in HW + * External clients can store their rules if the chain is enabled all + * the way from chain 0, otherwise the rule will be cached until + * the chain is enabled. 
+ */ +static void vcap_rule_set_state(struct vcap_rule_internal *ri) +{ + if (ri->data.user <= VCAP_USER_QOS) + ri->state = VCAP_RS_PERMANENT; + else if (vcap_path_exist(ri->vctrl, ri->ndev, ri->data.vcap_chain_id)) + ri->state = VCAP_RS_ENABLED; + else + ri->state = VCAP_RS_DISABLED; +} + /* Encode and write a validated rule to the VCAP */ int vcap_add_rule(struct vcap_rule *rule) { struct vcap_rule_internal *ri = to_intrule(rule); struct vcap_rule_move move = {0}; + struct vcap_counter ctr = {0}; int ret; ret = vcap_api_check(ri->vctrl); @@ -1815,6 +2234,8 @@ int vcap_add_rule(struct vcap_rule *rule) return ret; /* Insert the new rule in the list of vcap rules */ mutex_lock(&ri->admin->lock); + + vcap_rule_set_state(ri); ret = vcap_insert_rule(ri, &move); if (ret < 0) { pr_err("%s:%d: could not insert rule in vcap list: %d\n", @@ -1823,6 +2244,19 @@ int vcap_add_rule(struct vcap_rule *rule) } if (move.count > 0) vcap_move_rules(ri, &move); + + /* Set the counter to zero */ + ret = vcap_write_counter(ri, &ctr); + if (ret) + goto out; + + if (ri->state == VCAP_RS_DISABLED) { + /* Erase the rule area */ + ri->vctrl->ops->init(ri->ndev, ri->admin, ri->addr, ri->size); + goto out; + } + + vcap_erase_cache(ri); ret = vcap_encode_rule(ri); if (ret) { pr_err("%s:%d: rule encoding error: %d\n", __func__, __LINE__, ret); @@ -1830,8 +2264,10 @@ int vcap_add_rule(struct vcap_rule *rule) } ret = vcap_write_rule(ri); - if (ret) + if (ret) { pr_err("%s:%d: rule write error: %d\n", __func__, __LINE__, ret); + goto out; + } out: mutex_unlock(&ri->admin->lock); return ret; @@ -1860,17 +2296,28 @@ struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl, /* Sanity check that this VCAP is supported on this platform */ if (vctrl->vcaps[admin->vtype].rows == 0) return ERR_PTR(-EINVAL); + + mutex_lock(&admin->lock); /* Check if a rule with this id already exists */ - if (vcap_lookup_rule(vctrl, id)) - return ERR_PTR(-EEXIST); + if (vcap_rule_exists(vctrl, id)) { + err = -EINVAL; + goto out_unlock; + } + /* Check if there is room for the rule in the block(s) of the VCAP */ maxsize = vctrl->vcaps[admin->vtype].sw_count; /* worst case rule size */ - if (vcap_rule_space(admin, maxsize)) - return ERR_PTR(-ENOSPC); + if (vcap_rule_space(admin, maxsize)) { + err = -ENOSPC; + goto out_unlock; + } + /* Create a container for the rule and return it */ ri = kzalloc(sizeof(*ri), GFP_KERNEL); - if (!ri) - return ERR_PTR(-ENOMEM); + if (!ri) { + err = -ENOMEM; + goto out_unlock; + } + ri->data.vcap_chain_id = vcap_chain_id; ri->data.user = user; ri->data.priority = priority; @@ -1883,14 +2330,21 @@ struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl, ri->ndev = ndev; ri->admin = admin; /* refer to the vcap instance */ ri->vctrl = vctrl; /* refer to the client */ - if (vcap_set_rule_id(ri) == 0) + + if (vcap_set_rule_id(ri) == 0) { + err = -EINVAL; goto out_free; - vcap_erase_cache(ri); + } + + mutex_unlock(&admin->lock); return (struct vcap_rule *)ri; out_free: kfree(ri); - return ERR_PTR(-EINVAL); +out_unlock: + mutex_unlock(&admin->lock); + return ERR_PTR(err); + } EXPORT_SYMBOL_GPL(vcap_alloc_rule); @@ -1915,43 +2369,52 @@ void vcap_free_rule(struct vcap_rule *rule) } EXPORT_SYMBOL_GPL(vcap_free_rule); -struct vcap_rule *vcap_get_rule(struct vcap_control *vctrl, u32 id) +/* Decode a rule from the VCAP cache and return a copy */ +struct vcap_rule *vcap_decode_rule(struct vcap_rule_internal *elem) { - struct vcap_rule_internal *elem; struct vcap_rule_internal *ri; int err; - ri = NULL; + ri = 
vcap_dup_rule(elem, elem->state == VCAP_RS_DISABLED); + if (IS_ERR(ri)) + return ERR_PTR(PTR_ERR(ri)); + + if (ri->state == VCAP_RS_DISABLED) + goto out; + + err = vcap_read_rule(ri); + if (err) + return ERR_PTR(err); + + err = vcap_decode_keyset(ri); + if (err) + return ERR_PTR(err); + + err = vcap_decode_actionset(ri); + if (err) + return ERR_PTR(err); + +out: + return &ri->data; +} + +struct vcap_rule *vcap_get_rule(struct vcap_control *vctrl, u32 id) +{ + struct vcap_rule_internal *elem; + struct vcap_rule *rule; + int err; err = vcap_api_check(vctrl); if (err) return ERR_PTR(err); - elem = vcap_lookup_rule(vctrl, id); + + elem = vcap_get_locked_rule(vctrl, id); if (!elem) return NULL; - mutex_lock(&elem->admin->lock); - ri = vcap_dup_rule(elem); - if (IS_ERR(ri)) - goto unlock; - err = vcap_read_rule(ri); - if (err) { - ri = ERR_PTR(err); - goto unlock; - } - err = vcap_decode_keyset(ri); - if (err) { - ri = ERR_PTR(err); - goto unlock; - } - err = vcap_decode_actionset(ri); - if (err) { - ri = ERR_PTR(err); - goto unlock; - } -unlock: + rule = vcap_decode_rule(elem); mutex_unlock(&elem->admin->lock); - return (struct vcap_rule *)ri; + return rule; } EXPORT_SYMBOL_GPL(vcap_get_rule); @@ -1966,10 +2429,13 @@ int vcap_mod_rule(struct vcap_rule *rule) if (err) return err; - if (!vcap_lookup_rule(ri->vctrl, ri->data.id)) + if (!vcap_get_locked_rule(ri->vctrl, ri->data.id)) return -ENOENT; - mutex_lock(&ri->admin->lock); + vcap_rule_set_state(ri); + if (ri->state == VCAP_RS_DISABLED) + goto out; + /* Encode the bitstreams to the VCAP cache */ vcap_erase_cache(ri); err = vcap_encode_rule(ri); @@ -1982,8 +2448,6 @@ int vcap_mod_rule(struct vcap_rule *rule) memset(&ctr, 0, sizeof(ctr)); err = vcap_write_counter(ri, &ctr); - if (err) - goto out; out: mutex_unlock(&ri->admin->lock); @@ -2050,20 +2514,19 @@ int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id) if (err) return err; /* Look for the rule id in all vcaps */ - ri = vcap_lookup_rule(vctrl, id); + ri = vcap_get_locked_rule(vctrl, id); if (!ri) - return -EINVAL; + return -ENOENT; + admin = ri->admin; if (ri->addr > admin->last_used_addr) gap = vcap_fill_rule_gap(ri); /* Delete the rule from the list of rules and the cache */ - mutex_lock(&admin->lock); list_del(&ri->list); vctrl->ops->init(ndev, admin, admin->last_used_addr, ri->size + gap); - kfree(ri); - mutex_unlock(&admin->lock); + vcap_free_rule(&ri->data); /* Update the last used address, set to default when no rules */ if (list_empty(&admin->rules)) { @@ -2073,7 +2536,9 @@ int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id) list); admin->last_used_addr = elem->addr; } - return 0; + + mutex_unlock(&admin->lock); + return err; } EXPORT_SYMBOL_GPL(vcap_del_rule); @@ -2091,7 +2556,7 @@ int vcap_del_rules(struct vcap_control *vctrl, struct vcap_admin *admin) list_for_each_entry_safe(ri, next_ri, &admin->rules, list) { vctrl->ops->init(ri->ndev, admin, ri->addr, ri->size); list_del(&ri->list); - kfree(ri); + vcap_free_rule(&ri->data); } admin->last_used_addr = admin->last_valid_addr; @@ -2137,69 +2602,6 @@ const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule, } EXPORT_SYMBOL_GPL(vcap_lookup_keyfield); -/* Copy data from src to dst but reverse the data in chunks of 32bits. - * For example if src is 00:11:22:33:44:55 where 55 is LSB the dst will - * have the value 22:33:44:55:00:11. 
- */ -static void vcap_copy_to_w32be(u8 *dst, u8 *src, int size) -{ - for (int idx = 0; idx < size; ++idx) { - int first_byte_index = 0; - int nidx; - - first_byte_index = size - (((idx >> 2) + 1) << 2); - if (first_byte_index < 0) - first_byte_index = 0; - nidx = idx + first_byte_index - (idx & ~0x3); - dst[nidx] = src[idx]; - } -} - -static void vcap_copy_from_client_keyfield(struct vcap_rule *rule, - struct vcap_client_keyfield *field, - struct vcap_client_keyfield_data *data) -{ - struct vcap_rule_internal *ri = to_intrule(rule); - int size; - - if (!ri->admin->w32be) { - memcpy(&field->data, data, sizeof(field->data)); - return; - } - - size = keyfield_size_table[field->ctrl.type] / 2; - switch (field->ctrl.type) { - case VCAP_FIELD_BIT: - case VCAP_FIELD_U32: - memcpy(&field->data, data, sizeof(field->data)); - break; - case VCAP_FIELD_U48: - vcap_copy_to_w32be(field->data.u48.value, data->u48.value, size); - vcap_copy_to_w32be(field->data.u48.mask, data->u48.mask, size); - break; - case VCAP_FIELD_U56: - vcap_copy_to_w32be(field->data.u56.value, data->u56.value, size); - vcap_copy_to_w32be(field->data.u56.mask, data->u56.mask, size); - break; - case VCAP_FIELD_U64: - vcap_copy_to_w32be(field->data.u64.value, data->u64.value, size); - vcap_copy_to_w32be(field->data.u64.mask, data->u64.mask, size); - break; - case VCAP_FIELD_U72: - vcap_copy_to_w32be(field->data.u72.value, data->u72.value, size); - vcap_copy_to_w32be(field->data.u72.mask, data->u72.mask, size); - break; - case VCAP_FIELD_U112: - vcap_copy_to_w32be(field->data.u112.value, data->u112.value, size); - vcap_copy_to_w32be(field->data.u112.mask, data->u112.mask, size); - break; - case VCAP_FIELD_U128: - vcap_copy_to_w32be(field->data.u128.value, data->u128.value, size); - vcap_copy_to_w32be(field->data.u128.mask, data->u128.mask, size); - break; - } -} - /* Check if the keyfield is already in the rule */ static bool vcap_keyfield_unique(struct vcap_rule *rule, enum vcap_key_field key) @@ -2257,9 +2659,9 @@ static int vcap_rule_add_key(struct vcap_rule *rule, field = kzalloc(sizeof(*field), GFP_KERNEL); if (!field) return -ENOMEM; + memcpy(&field->data, data, sizeof(field->data)); field->ctrl.key = key; field->ctrl.type = ftype; - vcap_copy_from_client_keyfield(rule, field, data); list_add_tail(&field->ctrl.list, &rule->keyfields); return 0; } @@ -2355,7 +2757,7 @@ int vcap_rule_get_key_u32(struct vcap_rule *rule, enum vcap_key_field key, EXPORT_SYMBOL_GPL(vcap_rule_get_key_u32); /* Find a client action field in a rule */ -static struct vcap_client_actionfield * +struct vcap_client_actionfield * vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act) { struct vcap_rule_internal *ri = (struct vcap_rule_internal *)rule; @@ -2366,45 +2768,7 @@ vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act) return caf; return NULL; } - -static void vcap_copy_from_client_actionfield(struct vcap_rule *rule, - struct vcap_client_actionfield *field, - struct vcap_client_actionfield_data *data) -{ - struct vcap_rule_internal *ri = to_intrule(rule); - int size; - - if (!ri->admin->w32be) { - memcpy(&field->data, data, sizeof(field->data)); - return; - } - - size = actionfield_size_table[field->ctrl.type]; - switch (field->ctrl.type) { - case VCAP_FIELD_BIT: - case VCAP_FIELD_U32: - memcpy(&field->data, data, sizeof(field->data)); - break; - case VCAP_FIELD_U48: - vcap_copy_to_w32be(field->data.u48.value, data->u48.value, size); - break; - case VCAP_FIELD_U56: - vcap_copy_to_w32be(field->data.u56.value, 
data->u56.value, size); - break; - case VCAP_FIELD_U64: - vcap_copy_to_w32be(field->data.u64.value, data->u64.value, size); - break; - case VCAP_FIELD_U72: - vcap_copy_to_w32be(field->data.u72.value, data->u72.value, size); - break; - case VCAP_FIELD_U112: - vcap_copy_to_w32be(field->data.u112.value, data->u112.value, size); - break; - case VCAP_FIELD_U128: - vcap_copy_to_w32be(field->data.u128.value, data->u128.value, size); - break; - } -} +EXPORT_SYMBOL_GPL(vcap_find_actionfield); /* Check if the actionfield is already in the rule */ static bool vcap_actionfield_unique(struct vcap_rule *rule, @@ -2463,9 +2827,9 @@ static int vcap_rule_add_action(struct vcap_rule *rule, field = kzalloc(sizeof(*field), GFP_KERNEL); if (!field) return -ENOMEM; + memcpy(&field->data, data, sizeof(field->data)); field->ctrl.action = action; field->ctrl.type = ftype; - vcap_copy_from_client_actionfield(rule, field, data); list_add_tail(&field->ctrl.list, &rule->actionfields); return 0; } @@ -2564,24 +2928,157 @@ void vcap_set_tc_exterr(struct flow_cls_offload *fco, struct vcap_rule *vrule) } EXPORT_SYMBOL_GPL(vcap_set_tc_exterr); +/* Write a rule to VCAP HW to enable it */ +static int vcap_enable_rule(struct vcap_rule_internal *ri) +{ + struct vcap_client_actionfield *af, *naf; + struct vcap_client_keyfield *kf, *nkf; + int err; + + vcap_erase_cache(ri); + err = vcap_encode_rule(ri); + if (err) + goto out; + err = vcap_write_rule(ri); + if (err) + goto out; + + /* Deallocate the list of keys and actions */ + list_for_each_entry_safe(kf, nkf, &ri->data.keyfields, ctrl.list) { + list_del(&kf->ctrl.list); + kfree(kf); + } + list_for_each_entry_safe(af, naf, &ri->data.actionfields, ctrl.list) { + list_del(&af->ctrl.list); + kfree(af); + } + ri->state = VCAP_RS_ENABLED; +out: + return err; +} + +/* Enable all disabled rules for a specific chain/port in the VCAP HW */ +static int vcap_enable_rules(struct vcap_control *vctrl, + struct net_device *ndev, int chain) +{ + int next_chain = chain + VCAP_CID_LOOKUP_SIZE; + struct vcap_rule_internal *ri; + struct vcap_admin *admin; + int err = 0; + + list_for_each_entry(admin, &vctrl->list, list) { + if (!(chain >= admin->first_cid && chain <= admin->last_cid)) + continue; + + /* Found the admin, now find the offloadable rules */ + mutex_lock(&admin->lock); + list_for_each_entry(ri, &admin->rules, list) { + /* Is the rule in the lookup defined by the chain */ + if (!(ri->data.vcap_chain_id >= chain && + ri->data.vcap_chain_id < next_chain)) { + continue; + } + + if (ri->ndev != ndev) + continue; + + if (ri->state != VCAP_RS_DISABLED) + continue; + + err = vcap_enable_rule(ri); + if (err) + break; + } + mutex_unlock(&admin->lock); + if (err) + break; + } + return err; +} + +/* Read and erase a rule from VCAP HW to disable it */ +static int vcap_disable_rule(struct vcap_rule_internal *ri) +{ + int err; + + err = vcap_read_rule(ri); + if (err) + return err; + err = vcap_decode_keyset(ri); + if (err) + return err; + err = vcap_decode_actionset(ri); + if (err) + return err; + + ri->state = VCAP_RS_DISABLED; + ri->vctrl->ops->init(ri->ndev, ri->admin, ri->addr, ri->size); + return 0; +} + +/* Disable all enabled rules for a specific chain/port in the VCAP HW */ +static int vcap_disable_rules(struct vcap_control *vctrl, + struct net_device *ndev, int chain) +{ + struct vcap_rule_internal *ri; + struct vcap_admin *admin; + int err = 0; + + list_for_each_entry(admin, &vctrl->list, list) { + if (!(chain >= admin->first_cid && chain <= admin->last_cid)) + continue; + + /* Found the 
admin, now find the rules on the chain */ + mutex_lock(&admin->lock); + list_for_each_entry(ri, &admin->rules, list) { + if (ri->data.vcap_chain_id != chain) + continue; + + if (ri->ndev != ndev) + continue; + + if (ri->state != VCAP_RS_ENABLED) + continue; + + err = vcap_disable_rule(ri); + if (err) + break; + } + mutex_unlock(&admin->lock); + if (err) + break; + } + return err; +} + /* Check if this port is already enabled for this VCAP instance */ -static bool vcap_is_enabled(struct vcap_admin *admin, struct net_device *ndev, - unsigned long cookie) +static bool vcap_is_enabled(struct vcap_control *vctrl, struct net_device *ndev, + int dst_cid) { struct vcap_enabled_port *eport; + struct vcap_admin *admin; - list_for_each_entry(eport, &admin->enabled, list) - if (eport->cookie == cookie || eport->ndev == ndev) - return true; + list_for_each_entry(admin, &vctrl->list, list) + list_for_each_entry(eport, &admin->enabled, list) + if (eport->dst_cid == dst_cid && eport->ndev == ndev) + return true; return false; } -/* Enable this port for this VCAP instance */ -static int vcap_enable(struct vcap_admin *admin, struct net_device *ndev, - unsigned long cookie) +/* Enable this port and chain id in a VCAP instance */ +static int vcap_enable(struct vcap_control *vctrl, struct net_device *ndev, + unsigned long cookie, int src_cid, int dst_cid) { struct vcap_enabled_port *eport; + struct vcap_admin *admin; + + if (src_cid >= dst_cid) + return -EFAULT; + + admin = vcap_find_admin(vctrl, dst_cid); + if (!admin) + return -ENOENT; eport = kzalloc(sizeof(*eport), GFP_KERNEL); if (!eport) @@ -2589,48 +3086,72 @@ static int vcap_enable(struct vcap_admin *admin, struct net_device *ndev, eport->ndev = ndev; eport->cookie = cookie; + eport->src_cid = src_cid; + eport->dst_cid = dst_cid; + mutex_lock(&admin->lock); list_add_tail(&eport->list, &admin->enabled); + mutex_unlock(&admin->lock); + if (vcap_path_exist(vctrl, ndev, src_cid)) { + /* Enable chained lookups */ + while (dst_cid) { + admin = vcap_find_admin(vctrl, dst_cid); + if (!admin) + return -ENOENT; + + vcap_enable_rules(vctrl, ndev, dst_cid); + dst_cid = vcap_get_next_chain(vctrl, ndev, dst_cid); + } + } return 0; } -/* Disable this port for this VCAP instance */ -static int vcap_disable(struct vcap_admin *admin, struct net_device *ndev, +/* Disable this port and chain id for a VCAP instance */ +static int vcap_disable(struct vcap_control *vctrl, struct net_device *ndev, unsigned long cookie) { - struct vcap_enabled_port *eport; + struct vcap_enabled_port *elem, *eport = NULL; + struct vcap_admin *found = NULL, *admin; + int dst_cid; - list_for_each_entry(eport, &admin->enabled, list) { - if (eport->cookie == cookie && eport->ndev == ndev) { - list_del(&eport->list); - kfree(eport); - return 0; + list_for_each_entry(admin, &vctrl->list, list) { + list_for_each_entry(elem, &admin->enabled, list) { + if (elem->cookie == cookie && elem->ndev == ndev) { + eport = elem; + found = admin; + break; + } } + if (eport) + break; } - return -ENOENT; -} + if (!eport) + return -ENOENT; -/* Find the VCAP instance that enabled the port using a specific filter */ -static struct vcap_admin *vcap_find_admin_by_cookie(struct vcap_control *vctrl, - unsigned long cookie) -{ - struct vcap_enabled_port *eport; - struct vcap_admin *admin; + /* Disable chained lookups */ + dst_cid = eport->dst_cid; + while (dst_cid) { + admin = vcap_find_admin(vctrl, dst_cid); + if (!admin) + return -ENOENT; - list_for_each_entry(admin, &vctrl->list, list) - list_for_each_entry(eport, 
&admin->enabled, list) - if (eport->cookie == cookie) - return admin; + vcap_disable_rules(vctrl, ndev, dst_cid); + dst_cid = vcap_get_next_chain(vctrl, ndev, dst_cid); + } - return NULL; + mutex_lock(&found->lock); + list_del(&eport->list); + mutex_unlock(&found->lock); + kfree(eport); + return 0; } -/* Enable/Disable the VCAP instance lookups. Chain id 0 means disable */ +/* Enable/Disable the VCAP instance lookups */ int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev, - int chain_id, unsigned long cookie, bool enable) + int src_cid, int dst_cid, unsigned long cookie, + bool enable) { - struct vcap_admin *admin; int err; err = vcap_api_check(vctrl); @@ -2640,36 +3161,48 @@ int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev, if (!ndev) return -ENODEV; - if (chain_id) - admin = vcap_find_admin(vctrl, chain_id); - else - admin = vcap_find_admin_by_cookie(vctrl, cookie); - if (!admin) - return -ENOENT; - - /* first instance and first chain */ - if (admin->vinst || chain_id > admin->first_cid) + /* Source and destination must be the first chain in a lookup */ + if (src_cid % VCAP_CID_LOOKUP_SIZE) + return -EFAULT; + if (dst_cid % VCAP_CID_LOOKUP_SIZE) return -EFAULT; - err = vctrl->ops->enable(ndev, admin, enable); - if (err) - return err; - - if (chain_id) { - if (vcap_is_enabled(admin, ndev, cookie)) + if (enable) { + if (vcap_is_enabled(vctrl, ndev, dst_cid)) return -EADDRINUSE; - mutex_lock(&admin->lock); - vcap_enable(admin, ndev, cookie); + if (vcap_is_chain_used(vctrl, ndev, src_cid)) + return -EADDRNOTAVAIL; + err = vcap_enable(vctrl, ndev, cookie, src_cid, dst_cid); } else { - mutex_lock(&admin->lock); - vcap_disable(admin, ndev, cookie); + err = vcap_disable(vctrl, ndev, cookie); } - mutex_unlock(&admin->lock); - return 0; + return err; } EXPORT_SYMBOL_GPL(vcap_enable_lookups); +/* Is this chain id the last lookup of all VCAPs */ +bool vcap_is_last_chain(struct vcap_control *vctrl, int cid, bool ingress) +{ + struct vcap_admin *admin; + int lookup; + + if (vcap_api_check(vctrl)) + return false; + + admin = vcap_find_admin(vctrl, cid); + if (!admin) + return false; + + if (!vcap_admin_is_last(vctrl, admin, ingress)) + return false; + + /* This must be the last lookup in this VCAP type */ + lookup = vcap_chain_id_to_lookup(admin, cid); + return lookup == admin->lookups - 1; +} +EXPORT_SYMBOL_GPL(vcap_is_last_chain); + /* Set a rule counter id (for certain vcaps only) */ void vcap_rule_set_counter_id(struct vcap_rule *rule, u32 counter_id) { @@ -2679,31 +3212,6 @@ void vcap_rule_set_counter_id(struct vcap_rule *rule, u32 counter_id) } EXPORT_SYMBOL_GPL(vcap_rule_set_counter_id); -/* Provide all rules via a callback interface */ -int vcap_rule_iter(struct vcap_control *vctrl, - int (*callback)(void *, struct vcap_rule *), void *arg) -{ - struct vcap_rule_internal *ri; - struct vcap_admin *admin; - int ret; - - ret = vcap_api_check(vctrl); - if (ret) - return ret; - - /* Iterate all rules in each VCAP instance */ - list_for_each_entry(admin, &vctrl->list, list) { - list_for_each_entry(ri, &admin->rules, list) { - ret = callback(arg, &ri->data); - if (ret) - return ret; - } - } - - return 0; -} -EXPORT_SYMBOL_GPL(vcap_rule_iter); - int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr) { struct vcap_rule_internal *ri = to_intrule(rule); @@ -2716,7 +3224,12 @@ int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr) pr_err("%s:%d: counter is missing\n", __func__, __LINE__); return -EINVAL; } - return 
vcap_write_counter(ri, ctr); + + mutex_lock(&ri->admin->lock); + err = vcap_write_counter(ri, ctr); + mutex_unlock(&ri->admin->lock); + + return err; } EXPORT_SYMBOL_GPL(vcap_rule_set_counter); @@ -2732,10 +3245,138 @@ int vcap_rule_get_counter(struct vcap_rule *rule, struct vcap_counter *ctr) pr_err("%s:%d: counter is missing\n", __func__, __LINE__); return -EINVAL; } - return vcap_read_counter(ri, ctr); + + mutex_lock(&ri->admin->lock); + err = vcap_read_counter(ri, ctr); + mutex_unlock(&ri->admin->lock); + + return err; } EXPORT_SYMBOL_GPL(vcap_rule_get_counter); +/* Get a copy of a client key field */ +static int vcap_rule_get_key(struct vcap_rule *rule, + enum vcap_key_field key, + struct vcap_client_keyfield *ckf) +{ + struct vcap_client_keyfield *field; + + field = vcap_find_keyfield(rule, key); + if (!field) + return -EINVAL; + memcpy(ckf, field, sizeof(*ckf)); + INIT_LIST_HEAD(&ckf->ctrl.list); + return 0; +} + +/* Find a keyset having the same size as the provided rule, where the keyset + * does not have a type id. + */ +static int vcap_rule_get_untyped_keyset(struct vcap_rule_internal *ri, + struct vcap_keyset_list *matches) +{ + struct vcap_control *vctrl = ri->vctrl; + enum vcap_type vt = ri->admin->vtype; + const struct vcap_set *keyfield_set; + int idx; + + keyfield_set = vctrl->vcaps[vt].keyfield_set; + for (idx = 0; idx < vctrl->vcaps[vt].keyfield_set_size; ++idx) { + if (keyfield_set[idx].sw_per_item == ri->keyset_sw && + keyfield_set[idx].type_id == (u8)-1) { + vcap_keyset_list_add(matches, idx); + return 0; + } + } + return -EINVAL; +} + +/* Get the keysets that matches the rule key type/mask */ +int vcap_rule_get_keysets(struct vcap_rule_internal *ri, + struct vcap_keyset_list *matches) +{ + struct vcap_control *vctrl = ri->vctrl; + enum vcap_type vt = ri->admin->vtype; + const struct vcap_set *keyfield_set; + struct vcap_client_keyfield kf = {}; + u32 value, mask; + int err, idx; + + err = vcap_rule_get_key(&ri->data, VCAP_KF_TYPE, &kf); + if (err) + return vcap_rule_get_untyped_keyset(ri, matches); + + if (kf.ctrl.type == VCAP_FIELD_BIT) { + value = kf.data.u1.value; + mask = kf.data.u1.mask; + } else if (kf.ctrl.type == VCAP_FIELD_U32) { + value = kf.data.u32.value; + mask = kf.data.u32.mask; + } else { + return -EINVAL; + } + + keyfield_set = vctrl->vcaps[vt].keyfield_set; + for (idx = 0; idx < vctrl->vcaps[vt].keyfield_set_size; ++idx) { + if (keyfield_set[idx].sw_per_item != ri->keyset_sw) + continue; + + if (keyfield_set[idx].type_id == (u8)-1) { + vcap_keyset_list_add(matches, idx); + continue; + } + + if ((keyfield_set[idx].type_id & mask) == value) + vcap_keyset_list_add(matches, idx); + } + if (matches->cnt > 0) + return 0; + + return -EINVAL; +} + +/* Collect packet counts from all rules with the same cookie */ +int vcap_get_rule_count_by_cookie(struct vcap_control *vctrl, + struct vcap_counter *ctr, u64 cookie) +{ + struct vcap_rule_internal *ri; + struct vcap_counter temp = {}; + struct vcap_admin *admin; + int err; + + err = vcap_api_check(vctrl); + if (err) + return err; + + /* Iterate all rules in each VCAP instance */ + list_for_each_entry(admin, &vctrl->list, list) { + mutex_lock(&admin->lock); + list_for_each_entry(ri, &admin->rules, list) { + if (ri->data.cookie != cookie) + continue; + + err = vcap_read_counter(ri, &temp); + if (err) + goto unlock; + ctr->value += temp.value; + + /* Reset the rule counter */ + temp.value = 0; + temp.sticky = 0; + err = vcap_write_counter(ri, &temp); + if (err) + goto unlock; + } + mutex_unlock(&admin->lock); + } + 
return err; + +unlock: + mutex_unlock(&admin->lock); + return err; +} +EXPORT_SYMBOL_GPL(vcap_get_rule_count_by_cookie); + static int vcap_rule_mod_key(struct vcap_rule *rule, enum vcap_key_field key, enum vcap_field_type ftype, @@ -2746,7 +3387,7 @@ static int vcap_rule_mod_key(struct vcap_rule *rule, field = vcap_find_keyfield(rule, key); if (!field) return vcap_rule_add_key(rule, key, ftype, data); - vcap_copy_from_client_keyfield(rule, field, data); + memcpy(&field->data, data, sizeof(field->data)); return 0; } @@ -2772,7 +3413,7 @@ static int vcap_rule_mod_action(struct vcap_rule *rule, field = vcap_find_actionfield(rule, action); if (!field) return vcap_rule_add_action(rule, action, ftype, data); - vcap_copy_from_client_actionfield(rule, field, data); + memcpy(&field->data, data, sizeof(field->data)); return 0; } diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.h b/drivers/net/ethernet/microchip/vcap/vcap_api.h index 689c7270f2a8..62db270f65af 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api.h +++ b/drivers/net/ethernet/microchip/vcap/vcap_api.h @@ -176,6 +176,7 @@ struct vcap_admin { int first_valid_addr; /* bottom of address range to be used */ int last_used_addr; /* address of lowest added rule */ bool w32be; /* vcap uses "32bit-word big-endian" encoding */ + bool ingress; /* chain traffic direction */ struct vcap_cache_data cache; /* encoded rule data */ }; @@ -201,6 +202,13 @@ struct vcap_keyset_list { enum vcap_keyfield_set *keysets; /* the list of keysets */ }; +/* List of actionsets */ +struct vcap_actionset_list { + int max; /* size of the actionset list */ + int cnt; /* count of actionsets actually in the list */ + enum vcap_actionfield_set *actionsets; /* the list of actionsets */ +}; + /* Client output printf-like function with destination */ struct vcap_output_print { __printf(2, 3) @@ -259,11 +267,6 @@ struct vcap_operations { (struct net_device *ndev, struct vcap_admin *admin, struct vcap_output_print *out); - /* enable/disable the lookups in a vcap instance */ - int (*enable) - (struct net_device *ndev, - struct vcap_admin *admin, - bool enable); }; /* VCAP API Client control interface */ diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_client.h b/drivers/net/ethernet/microchip/vcap/vcap_api_client.h index 0319866f9c94..417af9754bcc 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_client.h +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_client.h @@ -148,9 +148,10 @@ struct vcap_counter { bool sticky; }; -/* Enable/Disable the VCAP instance lookups. 
Chain id 0 means disable */ +/* Enable/Disable the VCAP instance lookups */ int vcap_enable_lookups(struct vcap_control *vctrl, struct net_device *ndev, - int chain_id, unsigned long cookie, bool enable); + int from_cid, int to_cid, unsigned long cookie, + bool enable); /* VCAP rule operations */ /* Allocate a rule and fill in the basic information */ @@ -201,6 +202,8 @@ int vcap_rule_add_action_u32(struct vcap_rule *rule, enum vcap_action_field action, u32 value); /* VCAP rule counter operations */ +int vcap_get_rule_count_by_cookie(struct vcap_control *vctrl, + struct vcap_counter *ctr, u64 cookie); int vcap_rule_set_counter(struct vcap_rule *rule, struct vcap_counter *ctr); int vcap_rule_get_counter(struct vcap_rule *rule, struct vcap_counter *ctr); @@ -214,8 +217,12 @@ const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule, enum vcap_key_field key); /* Find a rule id with a provided cookie */ int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie); +/* Calculate the value used for chaining VCAP rules */ +int vcap_chain_offset(struct vcap_control *vctrl, int from_cid, int to_cid); /* Is the next chain id in the following lookup, possible in another VCAP */ bool vcap_is_next_lookup(struct vcap_control *vctrl, int cur_cid, int next_cid); +/* Is this chain id the last lookup of all VCAPs */ +bool vcap_is_last_chain(struct vcap_control *vctrl, int cid, bool ingress); /* Provide all rules via a callback interface */ int vcap_rule_iter(struct vcap_control *vctrl, int (*callback)(void *, struct vcap_rule *), void *arg); @@ -262,4 +269,6 @@ int vcap_rule_mod_action_u32(struct vcap_rule *rule, int vcap_rule_get_key_u32(struct vcap_rule *rule, enum vcap_key_field key, u32 *value, u32 *mask); +struct vcap_client_actionfield * +vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act); #endif /* __VCAP_API_CLIENT__ */ diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c index e0b206247f2e..c2c3397c5898 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c @@ -44,11 +44,14 @@ static void vcap_debugfs_show_rule_keyfield(struct vcap_control *vctrl, out->prf(out->dst, "%pI4h/%pI4h", &data->u32.value, &data->u32.mask); } else if (key == VCAP_KF_ETYPE || - key == VCAP_KF_IF_IGR_PORT_MASK) { + key == VCAP_KF_IF_IGR_PORT_MASK || + key == VCAP_KF_IF_EGR_PORT_MASK) { hex = true; } else { u32 fmsk = (1 << keyfield[key].width) - 1; + if (keyfield[key].width == 32) + fmsk = ~0; out->prf(out->dst, "%u/%u", data->u32.value & fmsk, data->u32.mask & fmsk); } @@ -152,37 +155,48 @@ vcap_debugfs_show_rule_actionfield(struct vcap_control *vctrl, out->prf(out->dst, "\n"); } -static int vcap_debugfs_show_rule_keyset(struct vcap_rule_internal *ri, - struct vcap_output_print *out) +static int vcap_debugfs_show_keysets(struct vcap_rule_internal *ri, + struct vcap_output_print *out) { - struct vcap_control *vctrl = ri->vctrl; struct vcap_admin *admin = ri->admin; enum vcap_keyfield_set keysets[10]; - const struct vcap_field *keyfield; - enum vcap_type vt = admin->vtype; - struct vcap_client_keyfield *ckf; struct vcap_keyset_list matches; - u32 *maskstream; - u32 *keystream; - int res; + int err; - keystream = admin->cache.keystream; - maskstream = admin->cache.maskstream; matches.keysets = keysets; matches.cnt = 0; matches.max = ARRAY_SIZE(keysets); - res = vcap_find_keystream_keysets(vctrl, vt, keystream, maskstream, - false, 0, 
&matches); - if (res < 0) { + + if (ri->state == VCAP_RS_DISABLED) + err = vcap_rule_get_keysets(ri, &matches); + else + err = vcap_find_keystream_keysets(ri->vctrl, admin->vtype, + admin->cache.keystream, + admin->cache.maskstream, + false, 0, &matches); + if (err) { pr_err("%s:%d: could not find valid keysets: %d\n", - __func__, __LINE__, res); - return -EINVAL; + __func__, __LINE__, err); + return err; } + out->prf(out->dst, " keysets:"); for (int idx = 0; idx < matches.cnt; ++idx) out->prf(out->dst, " %s", - vcap_keyset_name(vctrl, matches.keysets[idx])); + vcap_keyset_name(ri->vctrl, matches.keysets[idx])); out->prf(out->dst, "\n"); + return 0; +} + +static int vcap_debugfs_show_rule_keyset(struct vcap_rule_internal *ri, + struct vcap_output_print *out) +{ + struct vcap_control *vctrl = ri->vctrl; + struct vcap_admin *admin = ri->admin; + const struct vcap_field *keyfield; + struct vcap_client_keyfield *ckf; + + vcap_debugfs_show_keysets(ri, out); out->prf(out->dst, " keyset_sw: %d\n", ri->keyset_sw); out->prf(out->dst, " keyset_sw_regs: %d\n", ri->keyset_sw_regs); @@ -233,6 +247,18 @@ static void vcap_show_admin_rule(struct vcap_control *vctrl, out->prf(out->dst, " chain_id: %d\n", ri->data.vcap_chain_id); out->prf(out->dst, " user: %d\n", ri->data.user); out->prf(out->dst, " priority: %d\n", ri->data.priority); + out->prf(out->dst, " state: "); + switch (ri->state) { + case VCAP_RS_PERMANENT: + out->prf(out->dst, "permanent\n"); + break; + case VCAP_RS_DISABLED: + out->prf(out->dst, "disabled\n"); + break; + case VCAP_RS_ENABLED: + out->prf(out->dst, "enabled\n"); + break; + } vcap_debugfs_show_rule_keyset(ri, out); vcap_debugfs_show_rule_actionset(ri, out); } @@ -254,6 +280,7 @@ static void vcap_show_admin_info(struct vcap_control *vctrl, out->prf(out->dst, "version: %d\n", vcap->version); out->prf(out->dst, "vtype: %d\n", admin->vtype); out->prf(out->dst, "vinst: %d\n", admin->vinst); + out->prf(out->dst, "ingress: %d\n", admin->ingress); out->prf(out->dst, "first_cid: %d\n", admin->first_cid); out->prf(out->dst, "last_cid: %d\n", admin->last_cid); out->prf(out->dst, "lookups: %d\n", admin->lookups); @@ -272,7 +299,7 @@ static int vcap_show_admin(struct vcap_control *vctrl, vcap_show_admin_info(vctrl, admin, out); list_for_each_entry(elem, &admin->rules, list) { - vrule = vcap_get_rule(vctrl, elem->data.id); + vrule = vcap_decode_rule(elem); if (IS_ERR_OR_NULL(vrule)) { ret = PTR_ERR(vrule); break; @@ -381,8 +408,12 @@ static int vcap_debugfs_show(struct seq_file *m, void *unused) .prf = (void *)seq_printf, .dst = m, }; + int ret; - return vcap_show_admin(info->vctrl, info->admin, &out); + mutex_lock(&info->admin->lock); + ret = vcap_show_admin(info->vctrl, info->admin, &out); + mutex_unlock(&info->admin->lock); + return ret; } DEFINE_SHOW_ATTRIBUTE(vcap_debugfs); @@ -394,8 +425,12 @@ static int vcap_raw_debugfs_show(struct seq_file *m, void *unused) .prf = (void *)seq_printf, .dst = m, }; + int ret; - return vcap_show_admin_raw(info->vctrl, info->admin, &out); + mutex_lock(&info->admin->lock); + ret = vcap_show_admin_raw(info->vctrl, info->admin, &out); + mutex_unlock(&info->admin->lock); + return ret; } DEFINE_SHOW_ATTRIBUTE(vcap_raw_debugfs); diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c index cf594668d5d9..0de3f677135a 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs_kunit.c @@ -221,13 +221,6 @@ static 
int vcap_test_port_info(struct net_device *ndev, return 0; } -static int vcap_test_enable(struct net_device *ndev, - struct vcap_admin *admin, - bool enable) -{ - return 0; -} - static struct vcap_operations test_callbacks = { .validate_keyset = test_val_keyset, .add_default_fields = test_add_def_fields, @@ -238,7 +231,6 @@ static struct vcap_operations test_callbacks = { .update = test_cache_update, .move = test_cache_move, .port_info = vcap_test_port_info, - .enable = vcap_test_enable, }; static struct vcap_control test_vctrl = { @@ -253,6 +245,8 @@ static void vcap_test_api_init(struct vcap_admin *admin) INIT_LIST_HEAD(&test_vctrl.list); INIT_LIST_HEAD(&admin->list); INIT_LIST_HEAD(&admin->rules); + INIT_LIST_HEAD(&admin->enabled); + mutex_init(&admin->lock); list_add_tail(&admin->list, &test_vctrl.list); memset(test_updateaddr, 0, sizeof(test_updateaddr)); test_updateaddridx = 0; @@ -393,8 +387,9 @@ static const char * const test_admin_info_expect[] = { "default_cnt: 73\n", "require_cnt_dis: 0\n", "version: 1\n", - "vtype: 2\n", + "vtype: 3\n", "vinst: 0\n", + "ingress: 1\n", "first_cid: 10000\n", "last_cid: 19999\n", "lookups: 4\n", @@ -413,6 +408,7 @@ static void vcap_api_show_admin_test(struct kunit *test) .last_valid_addr = 3071, .first_valid_addr = 0, .last_used_addr = 794, + .ingress = true, }; struct vcap_output_print out = { .prf = (void *)test_prf, @@ -439,8 +435,9 @@ static const char * const test_admin_expect[] = { "default_cnt: 73\n", "require_cnt_dis: 0\n", "version: 1\n", - "vtype: 2\n", + "vtype: 3\n", "vinst: 0\n", + "ingress: 1\n", "first_cid: 8000000\n", "last_cid: 8199999\n", "lookups: 4\n", @@ -452,6 +449,7 @@ static const char * const test_admin_expect[] = { " chain_id: 0\n", " user: 0\n", " priority: 0\n", + " state: permanent\n", " keysets: VCAP_KFS_MAC_ETYPE\n", " keyset_sw: 6\n", " keyset_sw_regs: 2\n", @@ -501,6 +499,7 @@ static void vcap_api_show_admin_rule_test(struct kunit *test) .last_valid_addr = 3071, .first_valid_addr = 0, .last_used_addr = 794, + .ingress = true, .cache = { .keystream = keydata, .maskstream = mskdata, diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c index 76a31215ebfb..c07f25e791c7 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c @@ -211,13 +211,6 @@ static int vcap_test_port_info(struct net_device *ndev, return 0; } -static int vcap_test_enable(struct net_device *ndev, - struct vcap_admin *admin, - bool enable) -{ - return 0; -} - static struct vcap_operations test_callbacks = { .validate_keyset = test_val_keyset, .add_default_fields = test_add_def_fields, @@ -228,7 +221,6 @@ static struct vcap_operations test_callbacks = { .update = test_cache_update, .move = test_cache_move, .port_info = vcap_test_port_info, - .enable = vcap_test_enable, }; static struct vcap_control test_vctrl = { @@ -243,6 +235,8 @@ static void vcap_test_api_init(struct vcap_admin *admin) INIT_LIST_HEAD(&test_vctrl.list); INIT_LIST_HEAD(&admin->list); INIT_LIST_HEAD(&admin->rules); + INIT_LIST_HEAD(&admin->enabled); + mutex_init(&admin->lock); list_add_tail(&admin->list, &test_vctrl.list); memset(test_updateaddr, 0, sizeof(test_updateaddr)); test_updateaddridx = 0; @@ -302,7 +296,7 @@ test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user, ret = vcap_set_rule_set_keyset(rule, keyset); /* Add rule actions : there must be at least one action */ - ret = vcap_rule_add_action_u32(rule, VCAP_AF_COSID_VAL, 
0); + ret = vcap_rule_add_action_u32(rule, VCAP_AF_ISDX_VAL, 0); /* Override rule actionset */ ret = vcap_set_rule_set_actionset(rule, actionset); @@ -1312,8 +1306,8 @@ static void vcap_api_encode_rule_test(struct kunit *test) struct vcap_admin is2_admin = { .vtype = VCAP_TYPE_IS2, - .first_cid = 10000, - .last_cid = 19999, + .first_cid = 8000000, + .last_cid = 8099999, .lookups = 4, .last_valid_addr = 3071, .first_valid_addr = 0, @@ -1326,7 +1320,7 @@ static void vcap_api_encode_rule_test(struct kunit *test) }; struct vcap_rule *rule; struct vcap_rule_internal *ri; - int vcap_chain_id = 10005; + int vcap_chain_id = 8000000; enum vcap_user user = VCAP_USER_VCAP_UTIL; u16 priority = 10; int id = 100; @@ -1343,8 +1337,8 @@ static void vcap_api_encode_rule_test(struct kunit *test) u32 port_mask_rng_mask = 0x0f; u32 igr_port_mask_value = 0xffabcd01; u32 igr_port_mask_mask = ~0; - /* counter is not written yet, so it is not in expwriteaddr */ - u32 expwriteaddr[] = {792, 793, 794, 795, 796, 797, 0}; + /* counter is written as the first operation */ + u32 expwriteaddr[] = {792, 792, 793, 794, 795, 796, 797}; int idx; vcap_test_api_init(&is2_admin); @@ -1398,6 +1392,11 @@ static void vcap_api_encode_rule_test(struct kunit *test) KUNIT_EXPECT_EQ(test, 2, ri->keyset_sw_regs); KUNIT_EXPECT_EQ(test, 4, ri->actionset_sw_regs); + /* Enable lookup, so the rule will be written */ + ret = vcap_enable_lookups(&test_vctrl, &test_netdev, 0, + rule->vcap_chain_id, rule->cookie, true); + KUNIT_EXPECT_EQ(test, 0, ret); + /* Add rule with write callback */ ret = vcap_add_rule(rule); KUNIT_EXPECT_EQ(test, 0, ret); @@ -1872,58 +1871,56 @@ static void vcap_api_next_lookup_basic_test(struct kunit *test) ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8301000); KUNIT_EXPECT_EQ(test, false, ret); ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8401000); - KUNIT_EXPECT_EQ(test, true, ret); + KUNIT_EXPECT_EQ(test, false, ret); } static void vcap_api_next_lookup_advanced_test(struct kunit *test) { - struct vcap_admin admin1 = { + struct vcap_admin admin[] = { + { .vtype = VCAP_TYPE_IS0, .vinst = 0, .first_cid = 1000000, .last_cid = 1199999, .lookups = 6, .lookups_per_instance = 2, - }; - struct vcap_admin admin2 = { + }, { .vtype = VCAP_TYPE_IS0, .vinst = 1, .first_cid = 1200000, .last_cid = 1399999, .lookups = 6, .lookups_per_instance = 2, - }; - struct vcap_admin admin3 = { + }, { .vtype = VCAP_TYPE_IS0, .vinst = 2, .first_cid = 1400000, .last_cid = 1599999, .lookups = 6, .lookups_per_instance = 2, - }; - struct vcap_admin admin4 = { + }, { .vtype = VCAP_TYPE_IS2, .vinst = 0, .first_cid = 8000000, .last_cid = 8199999, .lookups = 4, .lookups_per_instance = 2, - }; - struct vcap_admin admin5 = { + }, { .vtype = VCAP_TYPE_IS2, .vinst = 1, .first_cid = 8200000, .last_cid = 8399999, .lookups = 4, .lookups_per_instance = 2, + } }; bool ret; - vcap_test_api_init(&admin1); - list_add_tail(&admin2.list, &test_vctrl.list); - list_add_tail(&admin3.list, &test_vctrl.list); - list_add_tail(&admin4.list, &test_vctrl.list); - list_add_tail(&admin5.list, &test_vctrl.list); + vcap_test_api_init(&admin[0]); + list_add_tail(&admin[1].list, &test_vctrl.list); + list_add_tail(&admin[2].list, &test_vctrl.list); + list_add_tail(&admin[3].list, &test_vctrl.list); + list_add_tail(&admin[4].list, &test_vctrl.list); ret = vcap_is_next_lookup(&test_vctrl, 1000000, 1001000); KUNIT_EXPECT_EQ(test, false, ret); @@ -1933,9 +1930,9 @@ static void vcap_api_next_lookup_advanced_test(struct kunit *test) ret = vcap_is_next_lookup(&test_vctrl, 1100000, 
1201000); KUNIT_EXPECT_EQ(test, true, ret); ret = vcap_is_next_lookup(&test_vctrl, 1100000, 1301000); - KUNIT_EXPECT_EQ(test, false, ret); + KUNIT_EXPECT_EQ(test, true, ret); ret = vcap_is_next_lookup(&test_vctrl, 1100000, 8101000); - KUNIT_EXPECT_EQ(test, false, ret); + KUNIT_EXPECT_EQ(test, true, ret); ret = vcap_is_next_lookup(&test_vctrl, 1300000, 1401000); KUNIT_EXPECT_EQ(test, true, ret); ret = vcap_is_next_lookup(&test_vctrl, 1400000, 1501000); @@ -1951,7 +1948,7 @@ static void vcap_api_next_lookup_advanced_test(struct kunit *test) ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8301000); KUNIT_EXPECT_EQ(test, false, ret); ret = vcap_is_next_lookup(&test_vctrl, 8300000, 8401000); - KUNIT_EXPECT_EQ(test, true, ret); + KUNIT_EXPECT_EQ(test, false, ret); } static void vcap_api_filter_unsupported_keys_test(struct kunit *test) @@ -2146,6 +2143,71 @@ static void vcap_api_filter_keylist_test(struct kunit *test) KUNIT_EXPECT_EQ(test, 26, idx); } +static void vcap_api_rule_chain_path_test(struct kunit *test) +{ + struct vcap_admin admin1 = { + .vtype = VCAP_TYPE_IS0, + .vinst = 0, + .first_cid = 1000000, + .last_cid = 1199999, + .lookups = 6, + .lookups_per_instance = 2, + }; + struct vcap_enabled_port eport3 = { + .ndev = &test_netdev, + .cookie = 0x100, + .src_cid = 0, + .dst_cid = 1000000, + }; + struct vcap_enabled_port eport2 = { + .ndev = &test_netdev, + .cookie = 0x200, + .src_cid = 1000000, + .dst_cid = 1100000, + }; + struct vcap_enabled_port eport1 = { + .ndev = &test_netdev, + .cookie = 0x300, + .src_cid = 1100000, + .dst_cid = 8000000, + }; + bool ret; + int chain; + + vcap_test_api_init(&admin1); + list_add_tail(&eport1.list, &admin1.enabled); + list_add_tail(&eport2.list, &admin1.enabled); + list_add_tail(&eport3.list, &admin1.enabled); + + ret = vcap_path_exist(&test_vctrl, &test_netdev, 1000000); + KUNIT_EXPECT_EQ(test, true, ret); + + ret = vcap_path_exist(&test_vctrl, &test_netdev, 1100000); + KUNIT_EXPECT_EQ(test, true, ret); + + ret = vcap_path_exist(&test_vctrl, &test_netdev, 1200000); + KUNIT_EXPECT_EQ(test, false, ret); + + chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 0); + KUNIT_EXPECT_EQ(test, 1000000, chain); + + chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 1000000); + KUNIT_EXPECT_EQ(test, 1100000, chain); + + chain = vcap_get_next_chain(&test_vctrl, &test_netdev, 1100000); + KUNIT_EXPECT_EQ(test, 8000000, chain); +} + +static struct kunit_case vcap_api_rule_enable_test_cases[] = { + KUNIT_CASE(vcap_api_rule_chain_path_test), + {} +}; + +static struct kunit_suite vcap_api_rule_enable_test_suite = { + .name = "VCAP_API_Rule_Enable_Testsuite", + .test_cases = vcap_api_rule_enable_test_cases, +}; + static struct kunit_suite vcap_api_rule_remove_test_suite = { .name = "VCAP_API_Rule_Remove_Testsuite", .test_cases = vcap_api_rule_remove_test_cases, @@ -2236,6 +2298,7 @@ static struct kunit_suite vcap_api_encoding_test_suite = { .test_cases = vcap_api_encoding_test_cases, }; +kunit_test_suite(vcap_api_rule_enable_test_suite); kunit_test_suite(vcap_api_rule_remove_test_suite); kunit_test_suite(vcap_api_rule_insert_test_suite); kunit_test_suite(vcap_api_rule_counter_test_suite); diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_private.h b/drivers/net/ethernet/microchip/vcap/vcap_api_private.h index 4fd21da97679..df81d9ff502b 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_private.h +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_private.h @@ -13,6 +13,12 @@ #define to_intrule(rule) container_of((rule), struct vcap_rule_internal, 
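The new vcap_api_rule_chain_path_test() above shows how enabled ports are tracked: each struct vcap_enabled_port records one src_cid to dst_cid hop, vcap_path_exist() answers whether a chain is reachable from the port, and vcap_get_next_chain() returns the next hop. A sketch of walking the whole path, assuming (the test does not exercise this) that vcap_get_next_chain() returns a non-positive value once no further hop is registered:

/* Hypothetical walk of the enabled chain path, starting at the port
 * (chain 0). The hop bound guards the assumed end-of-path behavior.
 */
static void example_walk_chain_path(struct vcap_control *vctrl,
				    struct net_device *ndev)
{
	int cid = 0;
	int hops;

	for (hops = 0; hops < 8; ++hops) {
		cid = vcap_get_next_chain(vctrl, ndev, cid);
		if (cid <= 0)
			break;
		pr_info("next enabled chain: %d\n", cid);
	}
}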
data) +enum vcap_rule_state { + VCAP_RS_PERMANENT, /* the rule is always stored in HW */ + VCAP_RS_ENABLED, /* enabled in HW but can be disabled */ + VCAP_RS_DISABLED, /* disabled (stored in SW) and can be enabled */ +}; + /* Private VCAP API rule data */ struct vcap_rule_internal { struct vcap_rule data; /* provided by the client */ @@ -29,6 +35,7 @@ struct vcap_rule_internal { u32 addr; /* address in the VCAP at insertion */ u32 counter_id; /* counter id (if a dedicated counter is available) */ struct vcap_counter counter; /* last read counter value */ + enum vcap_rule_state state; /* rule storage state */ }; /* Bit iterator for the VCAP cache streams */ @@ -43,8 +50,6 @@ struct vcap_stream_iter { /* Check that the control has a valid set of callbacks */ int vcap_api_check(struct vcap_control *ctrl); -/* Make a shallow copy of the rule without the fields */ -struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri); /* Erase the VCAP cache area used or encoding and decoding */ void vcap_erase_cache(struct vcap_rule_internal *ri); @@ -110,4 +115,10 @@ int vcap_find_keystream_keysets(struct vcap_control *vctrl, enum vcap_type vt, u32 *keystream, u32 *mskstream, bool mask, int sw_max, struct vcap_keyset_list *kslist); +/* Get the keysets that matches the rule key type/mask */ +int vcap_rule_get_keysets(struct vcap_rule_internal *ri, + struct vcap_keyset_list *matches); +/* Decode a rule from the VCAP cache and return a copy */ +struct vcap_rule *vcap_decode_rule(struct vcap_rule_internal *elem); + #endif /* __VCAP_API_PRIVATE__ */ diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c index 5d681d2697cd..5dbfc0d0c369 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c @@ -1,6 +1,10 @@ // SPDX-License-Identifier: BSD-3-Clause -/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries. - * Microchip VCAP API Test VCAP Model Data +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. + * Microchip VCAP test model interface for kunit testing + */ + +/* This file is autogenerated by cml-utils 2023-02-10 11:16:00 +0100. 
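The enum vcap_rule_state added to vcap_api_private.h above gives every cached rule a storage state, matching the "state: permanent" line now expected in the debugfs output. A small illustrative helper, derived only from the enum comments (the function itself is hypothetical):

/* Assumed meaning of the states: PERMANENT and ENABLED rules live in
 * hardware, DISABLED rules are kept in software until re-enabled.
 */
static bool example_rule_in_hw(enum vcap_rule_state state)
{
	return state == VCAP_RS_PERMANENT || state == VCAP_RS_ENABLED;
}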
+ * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada */ #include <linux/types.h> @@ -10,177 +14,6 @@ #include "vcap_model_kunit.h" /* keyfields */ -static const struct vcap_field is0_mll_keyfield[] = { - [VCAP_KF_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 0, - .width = 2, - }, - [VCAP_KF_LOOKUP_FIRST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 2, - .width = 1, - }, - [VCAP_KF_IF_IGR_PORT] = { - .type = VCAP_FIELD_U32, - .offset = 3, - .width = 7, - }, - [VCAP_KF_8021Q_VLAN_TAGS] = { - .type = VCAP_FIELD_U32, - .offset = 10, - .width = 3, - }, - [VCAP_KF_8021Q_TPID0] = { - .type = VCAP_FIELD_U32, - .offset = 13, - .width = 3, - }, - [VCAP_KF_8021Q_VID0] = { - .type = VCAP_FIELD_U32, - .offset = 16, - .width = 12, - }, - [VCAP_KF_8021Q_TPID1] = { - .type = VCAP_FIELD_U32, - .offset = 28, - .width = 3, - }, - [VCAP_KF_8021Q_VID1] = { - .type = VCAP_FIELD_U32, - .offset = 31, - .width = 12, - }, - [VCAP_KF_L2_DMAC] = { - .type = VCAP_FIELD_U48, - .offset = 43, - .width = 48, - }, - [VCAP_KF_L2_SMAC] = { - .type = VCAP_FIELD_U48, - .offset = 91, - .width = 48, - }, - [VCAP_KF_ETYPE_MPLS] = { - .type = VCAP_FIELD_U32, - .offset = 139, - .width = 2, - }, - [VCAP_KF_L4_RNG] = { - .type = VCAP_FIELD_U32, - .offset = 141, - .width = 8, - }, -}; - -static const struct vcap_field is0_tri_vid_keyfield[] = { - [VCAP_KF_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 0, - .width = 2, - }, - [VCAP_KF_LOOKUP_FIRST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 2, - .width = 1, - }, - [VCAP_KF_IF_IGR_PORT] = { - .type = VCAP_FIELD_U32, - .offset = 3, - .width = 7, - }, - [VCAP_KF_LOOKUP_GEN_IDX_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 10, - .width = 2, - }, - [VCAP_KF_LOOKUP_GEN_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 12, - .width = 12, - }, - [VCAP_KF_8021Q_VLAN_TAGS] = { - .type = VCAP_FIELD_U32, - .offset = 24, - .width = 3, - }, - [VCAP_KF_8021Q_TPID0] = { - .type = VCAP_FIELD_U32, - .offset = 27, - .width = 3, - }, - [VCAP_KF_8021Q_PCP0] = { - .type = VCAP_FIELD_U32, - .offset = 30, - .width = 3, - }, - [VCAP_KF_8021Q_DEI0] = { - .type = VCAP_FIELD_BIT, - .offset = 33, - .width = 1, - }, - [VCAP_KF_8021Q_VID0] = { - .type = VCAP_FIELD_U32, - .offset = 34, - .width = 12, - }, - [VCAP_KF_8021Q_TPID1] = { - .type = VCAP_FIELD_U32, - .offset = 46, - .width = 3, - }, - [VCAP_KF_8021Q_PCP1] = { - .type = VCAP_FIELD_U32, - .offset = 49, - .width = 3, - }, - [VCAP_KF_8021Q_DEI1] = { - .type = VCAP_FIELD_BIT, - .offset = 52, - .width = 1, - }, - [VCAP_KF_8021Q_VID1] = { - .type = VCAP_FIELD_U32, - .offset = 53, - .width = 12, - }, - [VCAP_KF_8021Q_TPID2] = { - .type = VCAP_FIELD_U32, - .offset = 65, - .width = 3, - }, - [VCAP_KF_8021Q_PCP2] = { - .type = VCAP_FIELD_U32, - .offset = 68, - .width = 3, - }, - [VCAP_KF_8021Q_DEI2] = { - .type = VCAP_FIELD_BIT, - .offset = 71, - .width = 1, - }, - [VCAP_KF_8021Q_VID2] = { - .type = VCAP_FIELD_U32, - .offset = 72, - .width = 12, - }, - [VCAP_KF_L4_RNG] = { - .type = VCAP_FIELD_U32, - .offset = 84, - .width = 8, - }, - [VCAP_KF_OAM_Y1731_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 92, - .width = 1, - }, - [VCAP_KF_OAM_MEL_FLAGS] = { - .type = VCAP_FIELD_U32, - .offset = 93, - .width = 7, - }, -}; - static const struct vcap_field is0_ll_full_keyfield[] = { [VCAP_KF_TYPE] = { .type = VCAP_FIELD_U32, @@ -344,194 +177,6 @@ static const struct vcap_field is0_ll_full_keyfield[] = { }, }; -static const struct vcap_field is0_normal_keyfield[] = { - [VCAP_KF_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 0, - .width = 2, - }, - 
[VCAP_KF_LOOKUP_FIRST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 2, - .width = 1, - }, - [VCAP_KF_LOOKUP_GEN_IDX_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 3, - .width = 2, - }, - [VCAP_KF_LOOKUP_GEN_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 5, - .width = 12, - }, - [VCAP_KF_IF_IGR_PORT_MASK_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 17, - .width = 2, - }, - [VCAP_KF_IF_IGR_PORT_MASK] = { - .type = VCAP_FIELD_U72, - .offset = 19, - .width = 65, - }, - [VCAP_KF_L2_MC_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 84, - .width = 1, - }, - [VCAP_KF_L2_BC_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 85, - .width = 1, - }, - [VCAP_KF_8021Q_VLAN_TAGS] = { - .type = VCAP_FIELD_U32, - .offset = 86, - .width = 3, - }, - [VCAP_KF_8021Q_TPID0] = { - .type = VCAP_FIELD_U32, - .offset = 89, - .width = 3, - }, - [VCAP_KF_8021Q_PCP0] = { - .type = VCAP_FIELD_U32, - .offset = 92, - .width = 3, - }, - [VCAP_KF_8021Q_DEI0] = { - .type = VCAP_FIELD_BIT, - .offset = 95, - .width = 1, - }, - [VCAP_KF_8021Q_VID0] = { - .type = VCAP_FIELD_U32, - .offset = 96, - .width = 12, - }, - [VCAP_KF_8021Q_TPID1] = { - .type = VCAP_FIELD_U32, - .offset = 108, - .width = 3, - }, - [VCAP_KF_8021Q_PCP1] = { - .type = VCAP_FIELD_U32, - .offset = 111, - .width = 3, - }, - [VCAP_KF_8021Q_DEI1] = { - .type = VCAP_FIELD_BIT, - .offset = 114, - .width = 1, - }, - [VCAP_KF_8021Q_VID1] = { - .type = VCAP_FIELD_U32, - .offset = 115, - .width = 12, - }, - [VCAP_KF_8021Q_TPID2] = { - .type = VCAP_FIELD_U32, - .offset = 127, - .width = 3, - }, - [VCAP_KF_8021Q_PCP2] = { - .type = VCAP_FIELD_U32, - .offset = 130, - .width = 3, - }, - [VCAP_KF_8021Q_DEI2] = { - .type = VCAP_FIELD_BIT, - .offset = 133, - .width = 1, - }, - [VCAP_KF_8021Q_VID2] = { - .type = VCAP_FIELD_U32, - .offset = 134, - .width = 12, - }, - [VCAP_KF_DST_ENTRY] = { - .type = VCAP_FIELD_BIT, - .offset = 146, - .width = 1, - }, - [VCAP_KF_L2_SMAC] = { - .type = VCAP_FIELD_U48, - .offset = 147, - .width = 48, - }, - [VCAP_KF_IP_MC_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 195, - .width = 1, - }, - [VCAP_KF_ETYPE_LEN_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 196, - .width = 1, - }, - [VCAP_KF_ETYPE] = { - .type = VCAP_FIELD_U32, - .offset = 197, - .width = 16, - }, - [VCAP_KF_IP_SNAP_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 213, - .width = 1, - }, - [VCAP_KF_IP4_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 214, - .width = 1, - }, - [VCAP_KF_L3_FRAGMENT_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 215, - .width = 2, - }, - [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = { - .type = VCAP_FIELD_BIT, - .offset = 217, - .width = 1, - }, - [VCAP_KF_L3_OPTIONS_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 218, - .width = 1, - }, - [VCAP_KF_L3_DSCP] = { - .type = VCAP_FIELD_U32, - .offset = 219, - .width = 6, - }, - [VCAP_KF_L3_IP4_SIP] = { - .type = VCAP_FIELD_U32, - .offset = 225, - .width = 32, - }, - [VCAP_KF_TCP_UDP_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 257, - .width = 1, - }, - [VCAP_KF_TCP_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 258, - .width = 1, - }, - [VCAP_KF_L4_SPORT] = { - .type = VCAP_FIELD_U32, - .offset = 259, - .width = 16, - }, - [VCAP_KF_L4_RNG] = { - .type = VCAP_FIELD_U32, - .offset = 275, - .width = 8, - }, -}; - static const struct vcap_field is0_normal_7tuple_keyfield[] = { [VCAP_KF_TYPE] = { .type = VCAP_FIELD_BIT, @@ -1095,16 +740,6 @@ static const struct vcap_field is2_mac_etype_keyfield[] = { .offset = 85, .width = 1, }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 86, - .width = 1, - }, - 
[VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 87, - .width = 1, - }, [VCAP_KF_L3_RT_IS] = { .type = VCAP_FIELD_BIT, .offset = 88, @@ -1381,16 +1016,6 @@ static const struct vcap_field is2_ip4_tcp_udp_keyfield[] = { .offset = 85, .width = 1, }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 86, - .width = 1, - }, - [VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 87, - .width = 1, - }, [VCAP_KF_L3_RT_IS] = { .type = VCAP_FIELD_BIT, .offset = 88, @@ -1594,16 +1219,6 @@ static const struct vcap_field is2_ip4_other_keyfield[] = { .offset = 85, .width = 1, }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 86, - .width = 1, - }, - [VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 87, - .width = 1, - }, [VCAP_KF_L3_RT_IS] = { .type = VCAP_FIELD_BIT, .offset = 88, @@ -1757,26 +1372,11 @@ static const struct vcap_field is2_ip6_std_keyfield[] = { .offset = 85, .width = 1, }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 86, - .width = 1, - }, - [VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 87, - .width = 1, - }, [VCAP_KF_L3_RT_IS] = { .type = VCAP_FIELD_BIT, .offset = 88, .width = 1, }, - [VCAP_KF_L3_DST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 89, - .width = 1, - }, [VCAP_KF_L3_TTL_GT0] = { .type = VCAP_FIELD_BIT, .offset = 90, @@ -1890,16 +1490,6 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = { .offset = 116, .width = 1, }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 117, - .width = 1, - }, - [VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 118, - .width = 1, - }, [VCAP_KF_L3_RT_IS] = { .type = VCAP_FIELD_BIT, .offset = 119, @@ -2022,69 +1612,6 @@ static const struct vcap_field is2_ip_7tuple_keyfield[] = { }, }; -static const struct vcap_field is2_ip6_vid_keyfield[] = { - [VCAP_KF_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 0, - .width = 4, - }, - [VCAP_KF_LOOKUP_FIRST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 4, - .width = 1, - }, - [VCAP_KF_LOOKUP_PAG] = { - .type = VCAP_FIELD_U32, - .offset = 5, - .width = 8, - }, - [VCAP_KF_ISDX_GT0_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 13, - .width = 1, - }, - [VCAP_KF_ISDX_CLS] = { - .type = VCAP_FIELD_U32, - .offset = 14, - .width = 12, - }, - [VCAP_KF_8021Q_VID_CLS] = { - .type = VCAP_FIELD_U32, - .offset = 26, - .width = 13, - }, - [VCAP_KF_L3_SMAC_SIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 39, - .width = 1, - }, - [VCAP_KF_L3_DMAC_DIP_MATCH] = { - .type = VCAP_FIELD_BIT, - .offset = 40, - .width = 1, - }, - [VCAP_KF_L3_RT_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 41, - .width = 1, - }, - [VCAP_KF_L3_DST_IS] = { - .type = VCAP_FIELD_BIT, - .offset = 42, - .width = 1, - }, - [VCAP_KF_L3_IP6_DIP] = { - .type = VCAP_FIELD_U128, - .offset = 43, - .width = 128, - }, - [VCAP_KF_L3_IP6_SIP] = { - .type = VCAP_FIELD_U128, - .offset = 171, - .width = 128, - }, -}; - static const struct vcap_field es2_mac_etype_keyfield[] = { [VCAP_KF_TYPE] = { .type = VCAP_FIELD_U32, @@ -2096,16 +1623,6 @@ static const struct vcap_field es2_mac_etype_keyfield[] = { .offset = 3, .width = 1, }, - [VCAP_KF_ACL_GRP_ID] = { - .type = VCAP_FIELD_U32, - .offset = 4, - .width = 8, - }, - [VCAP_KF_PROT_ACTIVE] = { - .type = VCAP_FIELD_BIT, - .offset = 12, - .width = 1, - }, [VCAP_KF_L2_MC_IS] = { .type = VCAP_FIELD_BIT, .offset = 13, @@ -2181,16 +1698,6 @@ static const struct vcap_field es2_mac_etype_keyfield[] = { .offset = 95, 
.width = 1, }, - [VCAP_KF_ES0_ISDX_KEY_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 96, - .width = 1, - }, - [VCAP_KF_MIRROR_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 97, - .width = 2, - }, [VCAP_KF_L2_DMAC] = { .type = VCAP_FIELD_U48, .offset = 99, @@ -2239,16 +1746,6 @@ static const struct vcap_field es2_arp_keyfield[] = { .offset = 3, .width = 1, }, - [VCAP_KF_ACL_GRP_ID] = { - .type = VCAP_FIELD_U32, - .offset = 4, - .width = 8, - }, - [VCAP_KF_PROT_ACTIVE] = { - .type = VCAP_FIELD_BIT, - .offset = 12, - .width = 1, - }, [VCAP_KF_L2_MC_IS] = { .type = VCAP_FIELD_BIT, .offset = 13, @@ -2319,16 +1816,6 @@ static const struct vcap_field es2_arp_keyfield[] = { .offset = 94, .width = 1, }, - [VCAP_KF_ES0_ISDX_KEY_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 95, - .width = 1, - }, - [VCAP_KF_MIRROR_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 96, - .width = 2, - }, [VCAP_KF_L2_SMAC] = { .type = VCAP_FIELD_U48, .offset = 98, @@ -2397,16 +1884,6 @@ static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = { .offset = 3, .width = 1, }, - [VCAP_KF_ACL_GRP_ID] = { - .type = VCAP_FIELD_U32, - .offset = 4, - .width = 8, - }, - [VCAP_KF_PROT_ACTIVE] = { - .type = VCAP_FIELD_BIT, - .offset = 12, - .width = 1, - }, [VCAP_KF_L2_MC_IS] = { .type = VCAP_FIELD_BIT, .offset = 13, @@ -2482,16 +1959,6 @@ static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = { .offset = 95, .width = 1, }, - [VCAP_KF_ES0_ISDX_KEY_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 96, - .width = 1, - }, - [VCAP_KF_MIRROR_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 97, - .width = 2, - }, [VCAP_KF_IP4_IS] = { .type = VCAP_FIELD_BIT, .offset = 99, @@ -2610,16 +2077,6 @@ static const struct vcap_field es2_ip4_other_keyfield[] = { .offset = 3, .width = 1, }, - [VCAP_KF_ACL_GRP_ID] = { - .type = VCAP_FIELD_U32, - .offset = 4, - .width = 8, - }, - [VCAP_KF_PROT_ACTIVE] = { - .type = VCAP_FIELD_BIT, - .offset = 12, - .width = 1, - }, [VCAP_KF_L2_MC_IS] = { .type = VCAP_FIELD_BIT, .offset = 13, @@ -2695,16 +2152,6 @@ static const struct vcap_field es2_ip4_other_keyfield[] = { .offset = 95, .width = 1, }, - [VCAP_KF_ES0_ISDX_KEY_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 96, - .width = 1, - }, - [VCAP_KF_MIRROR_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 97, - .width = 2, - }, [VCAP_KF_IP4_IS] = { .type = VCAP_FIELD_BIT, .offset = 99, @@ -2763,16 +2210,6 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = { .offset = 0, .width = 1, }, - [VCAP_KF_ACL_GRP_ID] = { - .type = VCAP_FIELD_U32, - .offset = 1, - .width = 8, - }, - [VCAP_KF_PROT_ACTIVE] = { - .type = VCAP_FIELD_BIT, - .offset = 9, - .width = 1, - }, [VCAP_KF_L2_MC_IS] = { .type = VCAP_FIELD_BIT, .offset = 10, @@ -2848,16 +2285,6 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = { .offset = 92, .width = 1, }, - [VCAP_KF_ES0_ISDX_KEY_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 93, - .width = 1, - }, - [VCAP_KF_MIRROR_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 94, - .width = 2, - }, [VCAP_KF_L2_DMAC] = { .type = VCAP_FIELD_U48, .offset = 96, @@ -2970,6 +2397,124 @@ static const struct vcap_field es2_ip_7tuple_keyfield[] = { }, }; +static const struct vcap_field es2_ip6_std_keyfield[] = { + [VCAP_KF_TYPE] = { + .type = VCAP_FIELD_U32, + .offset = 0, + .width = 3, + }, + [VCAP_KF_LOOKUP_FIRST_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 3, + .width = 1, + }, + [VCAP_KF_L2_MC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 13, + .width = 1, + }, + [VCAP_KF_L2_BC_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 14, + .width = 1, + }, + 
[VCAP_KF_ISDX_GT0_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 15, + .width = 1, + }, + [VCAP_KF_ISDX_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 16, + .width = 12, + }, + [VCAP_KF_8021Q_VLAN_TAGGED_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 28, + .width = 1, + }, + [VCAP_KF_8021Q_VID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 29, + .width = 13, + }, + [VCAP_KF_IF_EGR_PORT_MASK_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 42, + .width = 3, + }, + [VCAP_KF_IF_EGR_PORT_MASK] = { + .type = VCAP_FIELD_U32, + .offset = 45, + .width = 32, + }, + [VCAP_KF_IF_IGR_PORT_SEL] = { + .type = VCAP_FIELD_BIT, + .offset = 77, + .width = 1, + }, + [VCAP_KF_IF_IGR_PORT] = { + .type = VCAP_FIELD_U32, + .offset = 78, + .width = 9, + }, + [VCAP_KF_8021Q_PCP_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 87, + .width = 3, + }, + [VCAP_KF_8021Q_DEI_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 90, + .width = 1, + }, + [VCAP_KF_COSID_CLS] = { + .type = VCAP_FIELD_U32, + .offset = 91, + .width = 3, + }, + [VCAP_KF_L3_DPL_CLS] = { + .type = VCAP_FIELD_BIT, + .offset = 94, + .width = 1, + }, + [VCAP_KF_L3_RT_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 95, + .width = 1, + }, + [VCAP_KF_L3_TTL_GT0] = { + .type = VCAP_FIELD_BIT, + .offset = 99, + .width = 1, + }, + [VCAP_KF_L3_IP6_SIP] = { + .type = VCAP_FIELD_U128, + .offset = 100, + .width = 128, + }, + [VCAP_KF_L3_DIP_EQ_SIP_IS] = { + .type = VCAP_FIELD_BIT, + .offset = 228, + .width = 1, + }, + [VCAP_KF_L3_IP_PROTO] = { + .type = VCAP_FIELD_U32, + .offset = 229, + .width = 8, + }, + [VCAP_KF_L4_RNG] = { + .type = VCAP_FIELD_U32, + .offset = 237, + .width = 16, + }, + [VCAP_KF_L3_PAYLOAD] = { + .type = VCAP_FIELD_U48, + .offset = 253, + .width = 40, + }, +}; + static const struct vcap_field es2_ip4_vid_keyfield[] = { [VCAP_KF_LOOKUP_FIRST_IS] = { .type = VCAP_FIELD_BIT, @@ -3046,7 +2591,7 @@ static const struct vcap_field es2_ip4_vid_keyfield[] = { .offset = 48, .width = 1, }, - [VCAP_KF_MIRROR_ENA] = { + [VCAP_KF_MIRROR_PROBE] = { .type = VCAP_FIELD_U32, .offset = 49, .width = 2, @@ -3143,26 +2688,11 @@ static const struct vcap_field es2_ip6_vid_keyfield[] = { /* keyfield_set */ static const struct vcap_set is0_keyfield_set[] = { - [VCAP_KFS_MLL] = { - .type_id = 0, - .sw_per_item = 3, - .sw_cnt = 4, - }, - [VCAP_KFS_TRI_VID] = { - .type_id = 0, - .sw_per_item = 2, - .sw_cnt = 6, - }, [VCAP_KFS_LL_FULL] = { .type_id = 0, .sw_per_item = 6, .sw_cnt = 2, }, - [VCAP_KFS_NORMAL] = { - .type_id = 1, - .sw_per_item = 6, - .sw_cnt = 2, - }, [VCAP_KFS_NORMAL_7TUPLE] = { .type_id = 0, .sw_per_item = 12, @@ -3216,11 +2746,6 @@ static const struct vcap_set is2_keyfield_set[] = { .sw_per_item = 12, .sw_cnt = 1, }, - [VCAP_KFS_IP6_VID] = { - .type_id = 9, - .sw_per_item = 6, - .sw_cnt = 2, - }, }; static const struct vcap_set es2_keyfield_set[] = { @@ -3249,6 +2774,11 @@ static const struct vcap_set es2_keyfield_set[] = { .sw_per_item = 12, .sw_cnt = 1, }, + [VCAP_KFS_IP6_STD] = { + .type_id = 4, + .sw_per_item = 6, + .sw_cnt = 2, + }, [VCAP_KFS_IP4_VID] = { .type_id = -1, .sw_per_item = 3, @@ -3263,10 +2793,7 @@ static const struct vcap_set es2_keyfield_set[] = { /* keyfield_set map */ static const struct vcap_field *is0_keyfield_set_map[] = { - [VCAP_KFS_MLL] = is0_mll_keyfield, - [VCAP_KFS_TRI_VID] = is0_tri_vid_keyfield, [VCAP_KFS_LL_FULL] = is0_ll_full_keyfield, - [VCAP_KFS_NORMAL] = is0_normal_keyfield, [VCAP_KFS_NORMAL_7TUPLE] = is0_normal_7tuple_keyfield, [VCAP_KFS_NORMAL_5TUPLE_IP4] = is0_normal_5tuple_ip4_keyfield, [VCAP_KFS_PURE_5TUPLE_IP4] = 
is0_pure_5tuple_ip4_keyfield, @@ -3280,7 +2807,6 @@ static const struct vcap_field *is2_keyfield_set_map[] = { [VCAP_KFS_IP4_OTHER] = is2_ip4_other_keyfield, [VCAP_KFS_IP6_STD] = is2_ip6_std_keyfield, [VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield, - [VCAP_KFS_IP6_VID] = is2_ip6_vid_keyfield, }; static const struct vcap_field *es2_keyfield_set_map[] = { @@ -3289,16 +2815,14 @@ static const struct vcap_field *es2_keyfield_set_map[] = { [VCAP_KFS_IP4_TCP_UDP] = es2_ip4_tcp_udp_keyfield, [VCAP_KFS_IP4_OTHER] = es2_ip4_other_keyfield, [VCAP_KFS_IP_7TUPLE] = es2_ip_7tuple_keyfield, + [VCAP_KFS_IP6_STD] = es2_ip6_std_keyfield, [VCAP_KFS_IP4_VID] = es2_ip4_vid_keyfield, [VCAP_KFS_IP6_VID] = es2_ip6_vid_keyfield, }; /* keyfield_set map sizes */ static int is0_keyfield_set_map_size[] = { - [VCAP_KFS_MLL] = ARRAY_SIZE(is0_mll_keyfield), - [VCAP_KFS_TRI_VID] = ARRAY_SIZE(is0_tri_vid_keyfield), [VCAP_KFS_LL_FULL] = ARRAY_SIZE(is0_ll_full_keyfield), - [VCAP_KFS_NORMAL] = ARRAY_SIZE(is0_normal_keyfield), [VCAP_KFS_NORMAL_7TUPLE] = ARRAY_SIZE(is0_normal_7tuple_keyfield), [VCAP_KFS_NORMAL_5TUPLE_IP4] = ARRAY_SIZE(is0_normal_5tuple_ip4_keyfield), [VCAP_KFS_PURE_5TUPLE_IP4] = ARRAY_SIZE(is0_pure_5tuple_ip4_keyfield), @@ -3312,7 +2836,6 @@ static int is2_keyfield_set_map_size[] = { [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(is2_ip4_other_keyfield), [VCAP_KFS_IP6_STD] = ARRAY_SIZE(is2_ip6_std_keyfield), [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield), - [VCAP_KFS_IP6_VID] = ARRAY_SIZE(is2_ip6_vid_keyfield), }; static int es2_keyfield_set_map_size[] = { @@ -3321,387 +2844,12 @@ static int es2_keyfield_set_map_size[] = { [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(es2_ip4_tcp_udp_keyfield), [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(es2_ip4_other_keyfield), [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(es2_ip_7tuple_keyfield), + [VCAP_KFS_IP6_STD] = ARRAY_SIZE(es2_ip6_std_keyfield), [VCAP_KFS_IP4_VID] = ARRAY_SIZE(es2_ip4_vid_keyfield), [VCAP_KFS_IP6_VID] = ARRAY_SIZE(es2_ip6_vid_keyfield), }; /* actionfields */ -static const struct vcap_field is0_mlbs_actionfield[] = { - [VCAP_AF_TYPE] = { - .type = VCAP_FIELD_BIT, - .offset = 0, - .width = 1, - }, - [VCAP_AF_COSID_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 1, - .width = 1, - }, - [VCAP_AF_COSID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 2, - .width = 3, - }, - [VCAP_AF_QOS_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 5, - .width = 1, - }, - [VCAP_AF_QOS_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 6, - .width = 3, - }, - [VCAP_AF_DP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 9, - .width = 1, - }, - [VCAP_AF_DP_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 10, - .width = 2, - }, - [VCAP_AF_MAP_LOOKUP_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 12, - .width = 2, - }, - [VCAP_AF_MAP_KEY] = { - .type = VCAP_FIELD_U32, - .offset = 14, - .width = 3, - }, - [VCAP_AF_MAP_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 17, - .width = 9, - }, - [VCAP_AF_CLS_VID_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 26, - .width = 3, - }, - [VCAP_AF_GVID_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 29, - .width = 3, - }, - [VCAP_AF_VID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 32, - .width = 13, - }, - [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 45, - .width = 1, - }, - [VCAP_AF_ISDX_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 46, - .width = 12, - }, - [VCAP_AF_FWD_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 58, - .width = 1, - }, - [VCAP_AF_CPU_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 59, - .width = 1, - }, - [VCAP_AF_CPU_Q] = 
{ - .type = VCAP_FIELD_U32, - .offset = 60, - .width = 3, - }, - [VCAP_AF_OAM_Y1731_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 63, - .width = 3, - }, - [VCAP_AF_OAM_TWAMP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 66, - .width = 1, - }, - [VCAP_AF_OAM_IP_BFD_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 67, - .width = 1, - }, - [VCAP_AF_TC_LABEL] = { - .type = VCAP_FIELD_U32, - .offset = 68, - .width = 3, - }, - [VCAP_AF_TTL_LABEL] = { - .type = VCAP_FIELD_U32, - .offset = 71, - .width = 3, - }, - [VCAP_AF_NUM_VLD_LABELS] = { - .type = VCAP_FIELD_U32, - .offset = 74, - .width = 2, - }, - [VCAP_AF_FWD_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 76, - .width = 3, - }, - [VCAP_AF_MPLS_OAM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 79, - .width = 3, - }, - [VCAP_AF_MPLS_MEP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 82, - .width = 1, - }, - [VCAP_AF_MPLS_MIP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 83, - .width = 1, - }, - [VCAP_AF_MPLS_OAM_FLAVOR] = { - .type = VCAP_FIELD_BIT, - .offset = 84, - .width = 1, - }, - [VCAP_AF_MPLS_IP_CTRL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 85, - .width = 1, - }, - [VCAP_AF_PAG_OVERRIDE_MASK] = { - .type = VCAP_FIELD_U32, - .offset = 86, - .width = 8, - }, - [VCAP_AF_PAG_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 94, - .width = 8, - }, - [VCAP_AF_S2_KEY_SEL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 102, - .width = 1, - }, - [VCAP_AF_S2_KEY_SEL_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 103, - .width = 6, - }, - [VCAP_AF_PIPELINE_FORCE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 109, - .width = 2, - }, - [VCAP_AF_PIPELINE_ACT_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 111, - .width = 1, - }, - [VCAP_AF_PIPELINE_PT] = { - .type = VCAP_FIELD_U32, - .offset = 112, - .width = 5, - }, - [VCAP_AF_NXT_KEY_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 117, - .width = 5, - }, - [VCAP_AF_NXT_NORM_W16_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 122, - .width = 5, - }, - [VCAP_AF_NXT_OFFSET_FROM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 127, - .width = 2, - }, - [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 129, - .width = 2, - }, - [VCAP_AF_NXT_NORMALIZE] = { - .type = VCAP_FIELD_BIT, - .offset = 131, - .width = 1, - }, - [VCAP_AF_NXT_IDX_CTRL] = { - .type = VCAP_FIELD_U32, - .offset = 132, - .width = 3, - }, - [VCAP_AF_NXT_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 135, - .width = 12, - }, -}; - -static const struct vcap_field is0_mlbs_reduced_actionfield[] = { - [VCAP_AF_TYPE] = { - .type = VCAP_FIELD_BIT, - .offset = 0, - .width = 1, - }, - [VCAP_AF_COSID_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 1, - .width = 1, - }, - [VCAP_AF_COSID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 2, - .width = 3, - }, - [VCAP_AF_QOS_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 5, - .width = 1, - }, - [VCAP_AF_QOS_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 6, - .width = 3, - }, - [VCAP_AF_DP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 9, - .width = 1, - }, - [VCAP_AF_DP_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 10, - .width = 2, - }, - [VCAP_AF_MAP_LOOKUP_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 12, - .width = 2, - }, - [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 14, - .width = 1, - }, - [VCAP_AF_ISDX_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 15, - .width = 12, - }, - [VCAP_AF_FWD_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 27, - .width = 1, - }, - [VCAP_AF_CPU_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 28, - .width = 1, 
- }, - [VCAP_AF_CPU_Q] = { - .type = VCAP_FIELD_U32, - .offset = 29, - .width = 3, - }, - [VCAP_AF_TC_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 32, - .width = 1, - }, - [VCAP_AF_TTL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 33, - .width = 1, - }, - [VCAP_AF_FWD_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 34, - .width = 3, - }, - [VCAP_AF_MPLS_OAM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 37, - .width = 3, - }, - [VCAP_AF_MPLS_MEP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 40, - .width = 1, - }, - [VCAP_AF_MPLS_MIP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 41, - .width = 1, - }, - [VCAP_AF_MPLS_OAM_FLAVOR] = { - .type = VCAP_FIELD_BIT, - .offset = 42, - .width = 1, - }, - [VCAP_AF_MPLS_IP_CTRL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 43, - .width = 1, - }, - [VCAP_AF_PIPELINE_FORCE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 44, - .width = 2, - }, - [VCAP_AF_PIPELINE_ACT_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 46, - .width = 1, - }, - [VCAP_AF_PIPELINE_PT_REDUCED] = { - .type = VCAP_FIELD_U32, - .offset = 47, - .width = 3, - }, - [VCAP_AF_NXT_KEY_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 50, - .width = 5, - }, - [VCAP_AF_NXT_NORM_W32_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 55, - .width = 2, - }, - [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 57, - .width = 2, - }, - [VCAP_AF_NXT_NORMALIZE] = { - .type = VCAP_FIELD_BIT, - .offset = 59, - .width = 1, - }, - [VCAP_AF_NXT_IDX_CTRL] = { - .type = VCAP_FIELD_U32, - .offset = 60, - .width = 3, - }, - [VCAP_AF_NXT_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 63, - .width = 12, - }, -}; - static const struct vcap_field is0_classification_actionfield[] = { [VCAP_AF_TYPE] = { .type = VCAP_FIELD_BIT, @@ -3718,16 +2866,6 @@ static const struct vcap_field is0_classification_actionfield[] = { .offset = 2, .width = 6, }, - [VCAP_AF_COSID_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 8, - .width = 1, - }, - [VCAP_AF_COSID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 9, - .width = 3, - }, [VCAP_AF_QOS_ENA] = { .type = VCAP_FIELD_BIT, .offset = 12, @@ -3788,46 +2926,11 @@ static const struct vcap_field is0_classification_actionfield[] = { .offset = 39, .width = 3, }, - [VCAP_AF_GVID_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 42, - .width = 3, - }, [VCAP_AF_VID_VAL] = { .type = VCAP_FIELD_U32, .offset = 45, .width = 13, }, - [VCAP_AF_VLAN_POP_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 58, - .width = 1, - }, - [VCAP_AF_VLAN_POP_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 59, - .width = 2, - }, - [VCAP_AF_VLAN_PUSH_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 61, - .width = 1, - }, - [VCAP_AF_VLAN_PUSH_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 62, - .width = 2, - }, - [VCAP_AF_TPID_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 64, - .width = 2, - }, - [VCAP_AF_VLAN_WAS_TAGGED] = { - .type = VCAP_FIELD_U32, - .offset = 66, - .width = 2, - }, [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { .type = VCAP_FIELD_BIT, .offset = 68, @@ -3838,71 +2941,6 @@ static const struct vcap_field is0_classification_actionfield[] = { .offset = 69, .width = 12, }, - [VCAP_AF_RT_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 81, - .width = 2, - }, - [VCAP_AF_LPM_AFFIX_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 83, - .width = 1, - }, - [VCAP_AF_LPM_AFFIX_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 84, - .width = 10, - }, - [VCAP_AF_RLEG_DMAC_CHK_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 94, - .width = 1, - }, - [VCAP_AF_TTL_DECR_DIS] = { - .type = 
VCAP_FIELD_BIT, - .offset = 95, - .width = 1, - }, - [VCAP_AF_L3_MAC_UPDATE_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 96, - .width = 1, - }, - [VCAP_AF_FWD_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 97, - .width = 1, - }, - [VCAP_AF_CPU_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 98, - .width = 1, - }, - [VCAP_AF_CPU_Q] = { - .type = VCAP_FIELD_U32, - .offset = 99, - .width = 3, - }, - [VCAP_AF_MIP_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 102, - .width = 2, - }, - [VCAP_AF_OAM_Y1731_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 104, - .width = 3, - }, - [VCAP_AF_OAM_TWAMP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 107, - .width = 1, - }, - [VCAP_AF_OAM_IP_BFD_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 108, - .width = 1, - }, [VCAP_AF_PAG_OVERRIDE_MASK] = { .type = VCAP_FIELD_U32, .offset = 109, @@ -3913,76 +2951,6 @@ static const struct vcap_field is0_classification_actionfield[] = { .offset = 117, .width = 8, }, - [VCAP_AF_S2_KEY_SEL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 125, - .width = 1, - }, - [VCAP_AF_S2_KEY_SEL_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 126, - .width = 6, - }, - [VCAP_AF_INJ_MASQ_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 132, - .width = 1, - }, - [VCAP_AF_INJ_MASQ_PORT] = { - .type = VCAP_FIELD_U32, - .offset = 133, - .width = 7, - }, - [VCAP_AF_LPORT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 140, - .width = 1, - }, - [VCAP_AF_INJ_MASQ_LPORT] = { - .type = VCAP_FIELD_U32, - .offset = 141, - .width = 7, - }, - [VCAP_AF_PIPELINE_FORCE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 148, - .width = 2, - }, - [VCAP_AF_PIPELINE_ACT_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 150, - .width = 1, - }, - [VCAP_AF_PIPELINE_PT] = { - .type = VCAP_FIELD_U32, - .offset = 151, - .width = 5, - }, - [VCAP_AF_NXT_KEY_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 156, - .width = 5, - }, - [VCAP_AF_NXT_NORM_W16_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 161, - .width = 5, - }, - [VCAP_AF_NXT_OFFSET_FROM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 166, - .width = 2, - }, - [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 168, - .width = 2, - }, - [VCAP_AF_NXT_NORMALIZE] = { - .type = VCAP_FIELD_BIT, - .offset = 170, - .width = 1, - }, [VCAP_AF_NXT_IDX_CTRL] = { .type = VCAP_FIELD_U32, .offset = 171, @@ -4006,16 +2974,6 @@ static const struct vcap_field is0_full_actionfield[] = { .offset = 1, .width = 6, }, - [VCAP_AF_COSID_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 7, - .width = 1, - }, - [VCAP_AF_COSID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 8, - .width = 3, - }, [VCAP_AF_QOS_ENA] = { .type = VCAP_FIELD_BIT, .offset = 11, @@ -4076,46 +3034,11 @@ static const struct vcap_field is0_full_actionfield[] = { .offset = 38, .width = 3, }, - [VCAP_AF_GVID_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 41, - .width = 3, - }, [VCAP_AF_VID_VAL] = { .type = VCAP_FIELD_U32, .offset = 44, .width = 13, }, - [VCAP_AF_VLAN_POP_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 57, - .width = 1, - }, - [VCAP_AF_VLAN_POP_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 58, - .width = 2, - }, - [VCAP_AF_VLAN_PUSH_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 60, - .width = 1, - }, - [VCAP_AF_VLAN_PUSH_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 61, - .width = 2, - }, - [VCAP_AF_TPID_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 63, - .width = 2, - }, - [VCAP_AF_VLAN_WAS_TAGGED] = { - .type = VCAP_FIELD_U32, - .offset = 65, - .width = 2, - }, [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { .type = 
VCAP_FIELD_BIT, .offset = 67, @@ -4136,126 +3059,6 @@ static const struct vcap_field is0_full_actionfield[] = { .offset = 83, .width = 65, }, - [VCAP_AF_RT_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 148, - .width = 2, - }, - [VCAP_AF_LPM_AFFIX_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 150, - .width = 1, - }, - [VCAP_AF_LPM_AFFIX_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 151, - .width = 10, - }, - [VCAP_AF_RLEG_DMAC_CHK_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 161, - .width = 1, - }, - [VCAP_AF_TTL_DECR_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 162, - .width = 1, - }, - [VCAP_AF_L3_MAC_UPDATE_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 163, - .width = 1, - }, - [VCAP_AF_CPU_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 164, - .width = 1, - }, - [VCAP_AF_CPU_Q] = { - .type = VCAP_FIELD_U32, - .offset = 165, - .width = 3, - }, - [VCAP_AF_MIP_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 168, - .width = 2, - }, - [VCAP_AF_OAM_Y1731_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 170, - .width = 3, - }, - [VCAP_AF_OAM_TWAMP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 173, - .width = 1, - }, - [VCAP_AF_OAM_IP_BFD_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 174, - .width = 1, - }, - [VCAP_AF_RSVD_LBL_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 175, - .width = 4, - }, - [VCAP_AF_TC_LABEL] = { - .type = VCAP_FIELD_U32, - .offset = 179, - .width = 3, - }, - [VCAP_AF_TTL_LABEL] = { - .type = VCAP_FIELD_U32, - .offset = 182, - .width = 3, - }, - [VCAP_AF_NUM_VLD_LABELS] = { - .type = VCAP_FIELD_U32, - .offset = 185, - .width = 2, - }, - [VCAP_AF_FWD_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 187, - .width = 3, - }, - [VCAP_AF_MPLS_OAM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 190, - .width = 3, - }, - [VCAP_AF_MPLS_MEP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 193, - .width = 1, - }, - [VCAP_AF_MPLS_MIP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 194, - .width = 1, - }, - [VCAP_AF_MPLS_OAM_FLAVOR] = { - .type = VCAP_FIELD_BIT, - .offset = 195, - .width = 1, - }, - [VCAP_AF_MPLS_IP_CTRL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 196, - .width = 1, - }, - [VCAP_AF_CUSTOM_ACE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 197, - .width = 5, - }, - [VCAP_AF_CUSTOM_ACE_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 202, - .width = 2, - }, [VCAP_AF_PAG_OVERRIDE_MASK] = { .type = VCAP_FIELD_U32, .offset = 204, @@ -4266,86 +3069,6 @@ static const struct vcap_field is0_full_actionfield[] = { .offset = 212, .width = 8, }, - [VCAP_AF_S2_KEY_SEL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 220, - .width = 1, - }, - [VCAP_AF_S2_KEY_SEL_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 221, - .width = 6, - }, - [VCAP_AF_INJ_MASQ_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 227, - .width = 1, - }, - [VCAP_AF_INJ_MASQ_PORT] = { - .type = VCAP_FIELD_U32, - .offset = 228, - .width = 7, - }, - [VCAP_AF_LPORT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 235, - .width = 1, - }, - [VCAP_AF_INJ_MASQ_LPORT] = { - .type = VCAP_FIELD_U32, - .offset = 236, - .width = 7, - }, - [VCAP_AF_MATCH_ID] = { - .type = VCAP_FIELD_U32, - .offset = 243, - .width = 16, - }, - [VCAP_AF_MATCH_ID_MASK] = { - .type = VCAP_FIELD_U32, - .offset = 259, - .width = 16, - }, - [VCAP_AF_PIPELINE_FORCE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 275, - .width = 2, - }, - [VCAP_AF_PIPELINE_ACT_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 277, - .width = 1, - }, - [VCAP_AF_PIPELINE_PT] = { - .type = VCAP_FIELD_U32, - .offset = 278, - .width = 5, - }, - [VCAP_AF_NXT_KEY_TYPE] = { - 
.type = VCAP_FIELD_U32, - .offset = 283, - .width = 5, - }, - [VCAP_AF_NXT_NORM_W16_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 288, - .width = 5, - }, - [VCAP_AF_NXT_OFFSET_FROM_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 293, - .width = 2, - }, - [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 295, - .width = 2, - }, - [VCAP_AF_NXT_NORMALIZE] = { - .type = VCAP_FIELD_BIT, - .offset = 297, - .width = 1, - }, [VCAP_AF_NXT_IDX_CTRL] = { .type = VCAP_FIELD_U32, .offset = 298, @@ -4364,16 +3087,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = { .offset = 0, .width = 1, }, - [VCAP_AF_COSID_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 1, - .width = 1, - }, - [VCAP_AF_COSID_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 2, - .width = 3, - }, [VCAP_AF_QOS_ENA] = { .type = VCAP_FIELD_BIT, .offset = 5, @@ -4409,46 +3122,11 @@ static const struct vcap_field is0_class_reduced_actionfield[] = { .offset = 17, .width = 3, }, - [VCAP_AF_GVID_ADD_REPLACE_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 20, - .width = 3, - }, [VCAP_AF_VID_VAL] = { .type = VCAP_FIELD_U32, .offset = 23, .width = 13, }, - [VCAP_AF_VLAN_POP_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 36, - .width = 1, - }, - [VCAP_AF_VLAN_POP_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 37, - .width = 2, - }, - [VCAP_AF_VLAN_PUSH_CNT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 39, - .width = 1, - }, - [VCAP_AF_VLAN_PUSH_CNT] = { - .type = VCAP_FIELD_U32, - .offset = 40, - .width = 2, - }, - [VCAP_AF_TPID_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 42, - .width = 2, - }, - [VCAP_AF_VLAN_WAS_TAGGED] = { - .type = VCAP_FIELD_U32, - .offset = 44, - .width = 2, - }, [VCAP_AF_ISDX_ADD_REPLACE_SEL] = { .type = VCAP_FIELD_BIT, .offset = 46, @@ -4459,61 +3137,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = { .offset = 47, .width = 12, }, - [VCAP_AF_FWD_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 59, - .width = 1, - }, - [VCAP_AF_CPU_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 60, - .width = 1, - }, - [VCAP_AF_CPU_Q] = { - .type = VCAP_FIELD_U32, - .offset = 61, - .width = 3, - }, - [VCAP_AF_MIP_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 64, - .width = 2, - }, - [VCAP_AF_OAM_Y1731_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 66, - .width = 3, - }, - [VCAP_AF_LPORT_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 69, - .width = 1, - }, - [VCAP_AF_INJ_MASQ_LPORT] = { - .type = VCAP_FIELD_U32, - .offset = 70, - .width = 7, - }, - [VCAP_AF_PIPELINE_FORCE_ENA] = { - .type = VCAP_FIELD_U32, - .offset = 77, - .width = 2, - }, - [VCAP_AF_PIPELINE_ACT_SEL] = { - .type = VCAP_FIELD_BIT, - .offset = 79, - .width = 1, - }, - [VCAP_AF_PIPELINE_PT] = { - .type = VCAP_FIELD_U32, - .offset = 80, - .width = 5, - }, - [VCAP_AF_NXT_KEY_TYPE] = { - .type = VCAP_FIELD_U32, - .offset = 85, - .width = 5, - }, [VCAP_AF_NXT_IDX_CTRL] = { .type = VCAP_FIELD_U32, .offset = 90, @@ -4527,11 +3150,6 @@ static const struct vcap_field is0_class_reduced_actionfield[] = { }; static const struct vcap_field is2_base_type_actionfield[] = { - [VCAP_AF_IS_INNER_ACL] = { - .type = VCAP_FIELD_BIT, - .offset = 0, - .width = 1, - }, [VCAP_AF_PIPELINE_FORCE_ENA] = { .type = VCAP_FIELD_BIT, .offset = 1, @@ -4562,11 +3180,6 @@ static const struct vcap_field is2_base_type_actionfield[] = { .offset = 10, .width = 3, }, - [VCAP_AF_CPU_DIS] = { - .type = VCAP_FIELD_BIT, - .offset = 13, - .width = 1, - }, [VCAP_AF_LRN_DIS] = { .type = VCAP_FIELD_BIT, .offset = 14, @@ -4592,11 +3205,6 @@ static const 
struct vcap_field is2_base_type_actionfield[] = { .offset = 23, .width = 1, }, - [VCAP_AF_DLB_OFFSET] = { - .type = VCAP_FIELD_U32, - .offset = 24, - .width = 3, - }, [VCAP_AF_MASK_MODE] = { .type = VCAP_FIELD_U32, .offset = 27, @@ -4607,51 +3215,11 @@ static const struct vcap_field is2_base_type_actionfield[] = { .offset = 30, .width = 68, }, - [VCAP_AF_RSDX_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 98, - .width = 1, - }, - [VCAP_AF_RSDX_VAL] = { - .type = VCAP_FIELD_U32, - .offset = 99, - .width = 12, - }, [VCAP_AF_MIRROR_PROBE] = { .type = VCAP_FIELD_U32, .offset = 111, .width = 2, }, - [VCAP_AF_REW_CMD] = { - .type = VCAP_FIELD_U32, - .offset = 113, - .width = 11, - }, - [VCAP_AF_TTL_UPDATE_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 124, - .width = 1, - }, - [VCAP_AF_SAM_SEQ_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 125, - .width = 1, - }, - [VCAP_AF_TCP_UDP_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 126, - .width = 1, - }, - [VCAP_AF_TCP_UDP_DPORT] = { - .type = VCAP_FIELD_U32, - .offset = 127, - .width = 16, - }, - [VCAP_AF_TCP_UDP_SPORT] = { - .type = VCAP_FIELD_U32, - .offset = 143, - .width = 16, - }, [VCAP_AF_MATCH_ID] = { .type = VCAP_FIELD_U32, .offset = 159, @@ -4667,56 +3235,6 @@ static const struct vcap_field is2_base_type_actionfield[] = { .offset = 191, .width = 12, }, - [VCAP_AF_SWAP_MAC_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 203, - .width = 1, - }, - [VCAP_AF_ACL_RT_MODE] = { - .type = VCAP_FIELD_U32, - .offset = 204, - .width = 4, - }, - [VCAP_AF_ACL_MAC] = { - .type = VCAP_FIELD_U48, - .offset = 208, - .width = 48, - }, - [VCAP_AF_DMAC_OFFSET_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 256, - .width = 1, - }, - [VCAP_AF_PTP_MASTER_SEL] = { - .type = VCAP_FIELD_U32, - .offset = 257, - .width = 2, - }, - [VCAP_AF_LOG_MSG_INTERVAL] = { - .type = VCAP_FIELD_U32, - .offset = 259, - .width = 4, - }, - [VCAP_AF_SIP_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 263, - .width = 5, - }, - [VCAP_AF_RLEG_STAT_IDX] = { - .type = VCAP_FIELD_U32, - .offset = 268, - .width = 3, - }, - [VCAP_AF_IGR_ACL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 271, - .width = 1, - }, - [VCAP_AF_EGR_ACL_ENA] = { - .type = VCAP_FIELD_BIT, - .offset = 272, - .width = 1, - }, }; static const struct vcap_field es2_base_type_actionfield[] = { @@ -4794,16 +3312,6 @@ static const struct vcap_field es2_base_type_actionfield[] = { /* actionfield_set */ static const struct vcap_set is0_actionfield_set[] = { - [VCAP_AFS_MLBS] = { - .type_id = 0, - .sw_per_item = 2, - .sw_cnt = 6, - }, - [VCAP_AFS_MLBS_REDUCED] = { - .type_id = 0, - .sw_per_item = 1, - .sw_cnt = 12, - }, [VCAP_AFS_CLASSIFICATION] = { .type_id = 1, .sw_per_item = 2, @@ -4839,8 +3347,6 @@ static const struct vcap_set es2_actionfield_set[] = { /* actionfield_set map */ static const struct vcap_field *is0_actionfield_set_map[] = { - [VCAP_AFS_MLBS] = is0_mlbs_actionfield, - [VCAP_AFS_MLBS_REDUCED] = is0_mlbs_reduced_actionfield, [VCAP_AFS_CLASSIFICATION] = is0_classification_actionfield, [VCAP_AFS_FULL] = is0_full_actionfield, [VCAP_AFS_CLASS_REDUCED] = is0_class_reduced_actionfield, @@ -4856,8 +3362,6 @@ static const struct vcap_field *es2_actionfield_set_map[] = { /* actionfield_set map size */ static int is0_actionfield_set_map_size[] = { - [VCAP_AFS_MLBS] = ARRAY_SIZE(is0_mlbs_actionfield), - [VCAP_AFS_MLBS_REDUCED] = ARRAY_SIZE(is0_mlbs_reduced_actionfield), [VCAP_AFS_CLASSIFICATION] = ARRAY_SIZE(is0_classification_actionfield), [VCAP_AFS_FULL] = ARRAY_SIZE(is0_full_actionfield), [VCAP_AFS_CLASS_REDUCED] = 
ARRAY_SIZE(is0_class_reduced_actionfield), @@ -5244,17 +3748,22 @@ static const char * const vcap_keyfield_set_names[] = { [VCAP_KFS_IP4_OTHER] = "VCAP_KFS_IP4_OTHER", [VCAP_KFS_IP4_TCP_UDP] = "VCAP_KFS_IP4_TCP_UDP", [VCAP_KFS_IP4_VID] = "VCAP_KFS_IP4_VID", + [VCAP_KFS_IP6_OTHER] = "VCAP_KFS_IP6_OTHER", [VCAP_KFS_IP6_STD] = "VCAP_KFS_IP6_STD", + [VCAP_KFS_IP6_TCP_UDP] = "VCAP_KFS_IP6_TCP_UDP", [VCAP_KFS_IP6_VID] = "VCAP_KFS_IP6_VID", [VCAP_KFS_IP_7TUPLE] = "VCAP_KFS_IP_7TUPLE", + [VCAP_KFS_ISDX] = "VCAP_KFS_ISDX", [VCAP_KFS_LL_FULL] = "VCAP_KFS_LL_FULL", [VCAP_KFS_MAC_ETYPE] = "VCAP_KFS_MAC_ETYPE", - [VCAP_KFS_MLL] = "VCAP_KFS_MLL", - [VCAP_KFS_NORMAL] = "VCAP_KFS_NORMAL", + [VCAP_KFS_MAC_LLC] = "VCAP_KFS_MAC_LLC", + [VCAP_KFS_MAC_SNAP] = "VCAP_KFS_MAC_SNAP", [VCAP_KFS_NORMAL_5TUPLE_IP4] = "VCAP_KFS_NORMAL_5TUPLE_IP4", [VCAP_KFS_NORMAL_7TUPLE] = "VCAP_KFS_NORMAL_7TUPLE", + [VCAP_KFS_OAM] = "VCAP_KFS_OAM", [VCAP_KFS_PURE_5TUPLE_IP4] = "VCAP_KFS_PURE_5TUPLE_IP4", - [VCAP_KFS_TRI_VID] = "VCAP_KFS_TRI_VID", + [VCAP_KFS_SMAC_SIP4] = "VCAP_KFS_SMAC_SIP4", + [VCAP_KFS_SMAC_SIP6] = "VCAP_KFS_SMAC_SIP6", }; /* Actionfieldset names */ @@ -5263,9 +3772,9 @@ static const char * const vcap_actionfield_set_names[] = { [VCAP_AFS_BASE_TYPE] = "VCAP_AFS_BASE_TYPE", [VCAP_AFS_CLASSIFICATION] = "VCAP_AFS_CLASSIFICATION", [VCAP_AFS_CLASS_REDUCED] = "VCAP_AFS_CLASS_REDUCED", + [VCAP_AFS_ES0] = "VCAP_AFS_ES0", [VCAP_AFS_FULL] = "VCAP_AFS_FULL", - [VCAP_AFS_MLBS] = "VCAP_AFS_MLBS", - [VCAP_AFS_MLBS_REDUCED] = "VCAP_AFS_MLBS_REDUCED", + [VCAP_AFS_SMAC_SIP] = "VCAP_AFS_SMAC_SIP", }; /* Keyfield names */ @@ -5285,6 +3794,7 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_8021Q_PCP1] = "8021Q_PCP1", [VCAP_KF_8021Q_PCP2] = "8021Q_PCP2", [VCAP_KF_8021Q_PCP_CLS] = "8021Q_PCP_CLS", + [VCAP_KF_8021Q_TPID] = "8021Q_TPID", [VCAP_KF_8021Q_TPID0] = "8021Q_TPID0", [VCAP_KF_8021Q_TPID1] = "8021Q_TPID1", [VCAP_KF_8021Q_TPID2] = "8021Q_TPID2", @@ -5303,13 +3813,13 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_ARP_SENDER_MATCH_IS] = "ARP_SENDER_MATCH_IS", [VCAP_KF_ARP_TGT_MATCH_IS] = "ARP_TGT_MATCH_IS", [VCAP_KF_COSID_CLS] = "COSID_CLS", - [VCAP_KF_DST_ENTRY] = "DST_ENTRY", [VCAP_KF_ES0_ISDX_KEY_ENA] = "ES0_ISDX_KEY_ENA", [VCAP_KF_ETYPE] = "ETYPE", [VCAP_KF_ETYPE_LEN_IS] = "ETYPE_LEN_IS", - [VCAP_KF_ETYPE_MPLS] = "ETYPE_MPLS", + [VCAP_KF_HOST_MATCH] = "HOST_MATCH", [VCAP_KF_IF_EGR_PORT_MASK] = "IF_EGR_PORT_MASK", [VCAP_KF_IF_EGR_PORT_MASK_RNG] = "IF_EGR_PORT_MASK_RNG", + [VCAP_KF_IF_EGR_PORT_NO] = "IF_EGR_PORT_NO", [VCAP_KF_IF_IGR_PORT] = "IF_IGR_PORT", [VCAP_KF_IF_IGR_PORT_MASK] = "IF_IGR_PORT_MASK", [VCAP_KF_IF_IGR_PORT_MASK_L3] = "IF_IGR_PORT_MASK_L3", @@ -5324,17 +3834,24 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_ISDX_GT0_IS] = "ISDX_GT0_IS", [VCAP_KF_L2_BC_IS] = "L2_BC_IS", [VCAP_KF_L2_DMAC] = "L2_DMAC", + [VCAP_KF_L2_FRM_TYPE] = "L2_FRM_TYPE", [VCAP_KF_L2_FWD_IS] = "L2_FWD_IS", + [VCAP_KF_L2_LLC] = "L2_LLC", [VCAP_KF_L2_MC_IS] = "L2_MC_IS", + [VCAP_KF_L2_PAYLOAD0] = "L2_PAYLOAD0", + [VCAP_KF_L2_PAYLOAD1] = "L2_PAYLOAD1", + [VCAP_KF_L2_PAYLOAD2] = "L2_PAYLOAD2", [VCAP_KF_L2_PAYLOAD_ETYPE] = "L2_PAYLOAD_ETYPE", [VCAP_KF_L2_SMAC] = "L2_SMAC", + [VCAP_KF_L2_SNAP] = "L2_SNAP", [VCAP_KF_L3_DIP_EQ_SIP_IS] = "L3_DIP_EQ_SIP_IS", - [VCAP_KF_L3_DMAC_DIP_MATCH] = "L3_DMAC_DIP_MATCH", [VCAP_KF_L3_DPL_CLS] = "L3_DPL_CLS", [VCAP_KF_L3_DSCP] = "L3_DSCP", [VCAP_KF_L3_DST_IS] = "L3_DST_IS", + [VCAP_KF_L3_FRAGMENT] = "L3_FRAGMENT", [VCAP_KF_L3_FRAGMENT_TYPE] = "L3_FRAGMENT_TYPE", 
[VCAP_KF_L3_FRAG_INVLD_L4_LEN] = "L3_FRAG_INVLD_L4_LEN", + [VCAP_KF_L3_FRAG_OFS_GT0] = "L3_FRAG_OFS_GT0", [VCAP_KF_L3_IP4_DIP] = "L3_IP4_DIP", [VCAP_KF_L3_IP4_SIP] = "L3_IP4_SIP", [VCAP_KF_L3_IP6_DIP] = "L3_IP6_DIP", @@ -5343,9 +3860,10 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_L3_OPTIONS_IS] = "L3_OPTIONS_IS", [VCAP_KF_L3_PAYLOAD] = "L3_PAYLOAD", [VCAP_KF_L3_RT_IS] = "L3_RT_IS", - [VCAP_KF_L3_SMAC_SIP_MATCH] = "L3_SMAC_SIP_MATCH", [VCAP_KF_L3_TOS] = "L3_TOS", [VCAP_KF_L3_TTL_GT0] = "L3_TTL_GT0", + [VCAP_KF_L4_1588_DOM] = "L4_1588_DOM", + [VCAP_KF_L4_1588_VER] = "L4_1588_VER", [VCAP_KF_L4_ACK] = "L4_ACK", [VCAP_KF_L4_DPORT] = "L4_DPORT", [VCAP_KF_L4_FIN] = "L4_FIN", @@ -5362,9 +3880,14 @@ static const char * const vcap_keyfield_names[] = { [VCAP_KF_LOOKUP_GEN_IDX] = "LOOKUP_GEN_IDX", [VCAP_KF_LOOKUP_GEN_IDX_SEL] = "LOOKUP_GEN_IDX_SEL", [VCAP_KF_LOOKUP_PAG] = "LOOKUP_PAG", - [VCAP_KF_MIRROR_ENA] = "MIRROR_ENA", + [VCAP_KF_MIRROR_PROBE] = "MIRROR_PROBE", [VCAP_KF_OAM_CCM_CNTS_EQ0] = "OAM_CCM_CNTS_EQ0", + [VCAP_KF_OAM_DETECTED] = "OAM_DETECTED", + [VCAP_KF_OAM_FLAGS] = "OAM_FLAGS", [VCAP_KF_OAM_MEL_FLAGS] = "OAM_MEL_FLAGS", + [VCAP_KF_OAM_MEPID] = "OAM_MEPID", + [VCAP_KF_OAM_OPCODE] = "OAM_OPCODE", + [VCAP_KF_OAM_VER] = "OAM_VER", [VCAP_KF_OAM_Y1731_IS] = "OAM_Y1731_IS", [VCAP_KF_PROT_ACTIVE] = "PROT_ACTIVE", [VCAP_KF_TCP_IS] = "TCP_IS", @@ -5375,50 +3898,37 @@ static const char * const vcap_keyfield_names[] = { /* Actionfield names */ static const char * const vcap_actionfield_names[] = { [VCAP_AF_NO_VALUE] = "(None)", - [VCAP_AF_ACL_MAC] = "ACL_MAC", - [VCAP_AF_ACL_RT_MODE] = "ACL_RT_MODE", + [VCAP_AF_ACL_ID] = "ACL_ID", [VCAP_AF_CLS_VID_SEL] = "CLS_VID_SEL", [VCAP_AF_CNT_ID] = "CNT_ID", [VCAP_AF_COPY_PORT_NUM] = "COPY_PORT_NUM", [VCAP_AF_COPY_QUEUE_NUM] = "COPY_QUEUE_NUM", - [VCAP_AF_COSID_ENA] = "COSID_ENA", - [VCAP_AF_COSID_VAL] = "COSID_VAL", [VCAP_AF_CPU_COPY_ENA] = "CPU_COPY_ENA", - [VCAP_AF_CPU_DIS] = "CPU_DIS", - [VCAP_AF_CPU_ENA] = "CPU_ENA", - [VCAP_AF_CPU_Q] = "CPU_Q", + [VCAP_AF_CPU_QU] = "CPU_QU", [VCAP_AF_CPU_QUEUE_NUM] = "CPU_QUEUE_NUM", - [VCAP_AF_CUSTOM_ACE_ENA] = "CUSTOM_ACE_ENA", - [VCAP_AF_CUSTOM_ACE_OFFSET] = "CUSTOM_ACE_OFFSET", + [VCAP_AF_DEI_A_VAL] = "DEI_A_VAL", + [VCAP_AF_DEI_B_VAL] = "DEI_B_VAL", + [VCAP_AF_DEI_C_VAL] = "DEI_C_VAL", [VCAP_AF_DEI_ENA] = "DEI_ENA", [VCAP_AF_DEI_VAL] = "DEI_VAL", - [VCAP_AF_DLB_OFFSET] = "DLB_OFFSET", - [VCAP_AF_DMAC_OFFSET_ENA] = "DMAC_OFFSET_ENA", [VCAP_AF_DP_ENA] = "DP_ENA", [VCAP_AF_DP_VAL] = "DP_VAL", [VCAP_AF_DSCP_ENA] = "DSCP_ENA", + [VCAP_AF_DSCP_SEL] = "DSCP_SEL", [VCAP_AF_DSCP_VAL] = "DSCP_VAL", - [VCAP_AF_EGR_ACL_ENA] = "EGR_ACL_ENA", [VCAP_AF_ES2_REW_CMD] = "ES2_REW_CMD", - [VCAP_AF_FWD_DIS] = "FWD_DIS", + [VCAP_AF_ESDX] = "ESDX", + [VCAP_AF_FWD_KILL_ENA] = "FWD_KILL_ENA", [VCAP_AF_FWD_MODE] = "FWD_MODE", - [VCAP_AF_FWD_TYPE] = "FWD_TYPE", - [VCAP_AF_GVID_ADD_REPLACE_SEL] = "GVID_ADD_REPLACE_SEL", + [VCAP_AF_FWD_SEL] = "FWD_SEL", [VCAP_AF_HIT_ME_ONCE] = "HIT_ME_ONCE", + [VCAP_AF_HOST_MATCH] = "HOST_MATCH", [VCAP_AF_IGNORE_PIPELINE_CTRL] = "IGNORE_PIPELINE_CTRL", - [VCAP_AF_IGR_ACL_ENA] = "IGR_ACL_ENA", - [VCAP_AF_INJ_MASQ_ENA] = "INJ_MASQ_ENA", - [VCAP_AF_INJ_MASQ_LPORT] = "INJ_MASQ_LPORT", - [VCAP_AF_INJ_MASQ_PORT] = "INJ_MASQ_PORT", [VCAP_AF_INTR_ENA] = "INTR_ENA", [VCAP_AF_ISDX_ADD_REPLACE_SEL] = "ISDX_ADD_REPLACE_SEL", + [VCAP_AF_ISDX_ENA] = "ISDX_ENA", [VCAP_AF_ISDX_VAL] = "ISDX_VAL", - [VCAP_AF_IS_INNER_ACL] = "IS_INNER_ACL", - [VCAP_AF_L3_MAC_UPDATE_DIS] = "L3_MAC_UPDATE_DIS", - 
[VCAP_AF_LOG_MSG_INTERVAL] = "LOG_MSG_INTERVAL", - [VCAP_AF_LPM_AFFIX_ENA] = "LPM_AFFIX_ENA", - [VCAP_AF_LPM_AFFIX_VAL] = "LPM_AFFIX_VAL", - [VCAP_AF_LPORT_ENA] = "LPORT_ENA", + [VCAP_AF_LOOP_ENA] = "LOOP_ENA", [VCAP_AF_LRN_DIS] = "LRN_DIS", [VCAP_AF_MAP_IDX] = "MAP_IDX", [VCAP_AF_MAP_KEY] = "MAP_KEY", @@ -5426,71 +3936,53 @@ static const char * const vcap_actionfield_names[] = { [VCAP_AF_MASK_MODE] = "MASK_MODE", [VCAP_AF_MATCH_ID] = "MATCH_ID", [VCAP_AF_MATCH_ID_MASK] = "MATCH_ID_MASK", - [VCAP_AF_MIP_SEL] = "MIP_SEL", + [VCAP_AF_MIRROR_ENA] = "MIRROR_ENA", [VCAP_AF_MIRROR_PROBE] = "MIRROR_PROBE", [VCAP_AF_MIRROR_PROBE_ID] = "MIRROR_PROBE_ID", - [VCAP_AF_MPLS_IP_CTRL_ENA] = "MPLS_IP_CTRL_ENA", - [VCAP_AF_MPLS_MEP_ENA] = "MPLS_MEP_ENA", - [VCAP_AF_MPLS_MIP_ENA] = "MPLS_MIP_ENA", - [VCAP_AF_MPLS_OAM_FLAVOR] = "MPLS_OAM_FLAVOR", - [VCAP_AF_MPLS_OAM_TYPE] = "MPLS_OAM_TYPE", - [VCAP_AF_NUM_VLD_LABELS] = "NUM_VLD_LABELS", [VCAP_AF_NXT_IDX] = "NXT_IDX", [VCAP_AF_NXT_IDX_CTRL] = "NXT_IDX_CTRL", - [VCAP_AF_NXT_KEY_TYPE] = "NXT_KEY_TYPE", - [VCAP_AF_NXT_NORMALIZE] = "NXT_NORMALIZE", - [VCAP_AF_NXT_NORM_W16_OFFSET] = "NXT_NORM_W16_OFFSET", - [VCAP_AF_NXT_NORM_W32_OFFSET] = "NXT_NORM_W32_OFFSET", - [VCAP_AF_NXT_OFFSET_FROM_TYPE] = "NXT_OFFSET_FROM_TYPE", - [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = "NXT_TYPE_AFTER_OFFSET", - [VCAP_AF_OAM_IP_BFD_ENA] = "OAM_IP_BFD_ENA", - [VCAP_AF_OAM_TWAMP_ENA] = "OAM_TWAMP_ENA", - [VCAP_AF_OAM_Y1731_SEL] = "OAM_Y1731_SEL", [VCAP_AF_PAG_OVERRIDE_MASK] = "PAG_OVERRIDE_MASK", [VCAP_AF_PAG_VAL] = "PAG_VAL", + [VCAP_AF_PCP_A_VAL] = "PCP_A_VAL", + [VCAP_AF_PCP_B_VAL] = "PCP_B_VAL", + [VCAP_AF_PCP_C_VAL] = "PCP_C_VAL", [VCAP_AF_PCP_ENA] = "PCP_ENA", [VCAP_AF_PCP_VAL] = "PCP_VAL", - [VCAP_AF_PIPELINE_ACT_SEL] = "PIPELINE_ACT_SEL", + [VCAP_AF_PIPELINE_ACT] = "PIPELINE_ACT", [VCAP_AF_PIPELINE_FORCE_ENA] = "PIPELINE_FORCE_ENA", [VCAP_AF_PIPELINE_PT] = "PIPELINE_PT", - [VCAP_AF_PIPELINE_PT_REDUCED] = "PIPELINE_PT_REDUCED", [VCAP_AF_POLICE_ENA] = "POLICE_ENA", [VCAP_AF_POLICE_IDX] = "POLICE_IDX", [VCAP_AF_POLICE_REMARK] = "POLICE_REMARK", + [VCAP_AF_POLICE_VCAP_ONLY] = "POLICE_VCAP_ONLY", + [VCAP_AF_POP_VAL] = "POP_VAL", [VCAP_AF_PORT_MASK] = "PORT_MASK", - [VCAP_AF_PTP_MASTER_SEL] = "PTP_MASTER_SEL", + [VCAP_AF_PUSH_CUSTOMER_TAG] = "PUSH_CUSTOMER_TAG", + [VCAP_AF_PUSH_INNER_TAG] = "PUSH_INNER_TAG", + [VCAP_AF_PUSH_OUTER_TAG] = "PUSH_OUTER_TAG", [VCAP_AF_QOS_ENA] = "QOS_ENA", [VCAP_AF_QOS_VAL] = "QOS_VAL", - [VCAP_AF_REW_CMD] = "REW_CMD", - [VCAP_AF_RLEG_DMAC_CHK_DIS] = "RLEG_DMAC_CHK_DIS", - [VCAP_AF_RLEG_STAT_IDX] = "RLEG_STAT_IDX", - [VCAP_AF_RSDX_ENA] = "RSDX_ENA", - [VCAP_AF_RSDX_VAL] = "RSDX_VAL", - [VCAP_AF_RSVD_LBL_VAL] = "RSVD_LBL_VAL", + [VCAP_AF_REW_OP] = "REW_OP", [VCAP_AF_RT_DIS] = "RT_DIS", - [VCAP_AF_RT_SEL] = "RT_SEL", - [VCAP_AF_S2_KEY_SEL_ENA] = "S2_KEY_SEL_ENA", - [VCAP_AF_S2_KEY_SEL_IDX] = "S2_KEY_SEL_IDX", - [VCAP_AF_SAM_SEQ_ENA] = "SAM_SEQ_ENA", - [VCAP_AF_SIP_IDX] = "SIP_IDX", - [VCAP_AF_SWAP_MAC_ENA] = "SWAP_MAC_ENA", - [VCAP_AF_TCP_UDP_DPORT] = "TCP_UDP_DPORT", - [VCAP_AF_TCP_UDP_ENA] = "TCP_UDP_ENA", - [VCAP_AF_TCP_UDP_SPORT] = "TCP_UDP_SPORT", - [VCAP_AF_TC_ENA] = "TC_ENA", - [VCAP_AF_TC_LABEL] = "TC_LABEL", - [VCAP_AF_TPID_SEL] = "TPID_SEL", - [VCAP_AF_TTL_DECR_DIS] = "TTL_DECR_DIS", - [VCAP_AF_TTL_ENA] = "TTL_ENA", - [VCAP_AF_TTL_LABEL] = "TTL_LABEL", - [VCAP_AF_TTL_UPDATE_ENA] = "TTL_UPDATE_ENA", + [VCAP_AF_SWAP_MACS_ENA] = "SWAP_MACS_ENA", + [VCAP_AF_TAG_A_DEI_SEL] = "TAG_A_DEI_SEL", + [VCAP_AF_TAG_A_PCP_SEL] = "TAG_A_PCP_SEL", + [VCAP_AF_TAG_A_TPID_SEL] = 
"TAG_A_TPID_SEL", + [VCAP_AF_TAG_A_VID_SEL] = "TAG_A_VID_SEL", + [VCAP_AF_TAG_B_DEI_SEL] = "TAG_B_DEI_SEL", + [VCAP_AF_TAG_B_PCP_SEL] = "TAG_B_PCP_SEL", + [VCAP_AF_TAG_B_TPID_SEL] = "TAG_B_TPID_SEL", + [VCAP_AF_TAG_B_VID_SEL] = "TAG_B_VID_SEL", + [VCAP_AF_TAG_C_DEI_SEL] = "TAG_C_DEI_SEL", + [VCAP_AF_TAG_C_PCP_SEL] = "TAG_C_PCP_SEL", + [VCAP_AF_TAG_C_TPID_SEL] = "TAG_C_TPID_SEL", + [VCAP_AF_TAG_C_VID_SEL] = "TAG_C_VID_SEL", [VCAP_AF_TYPE] = "TYPE", + [VCAP_AF_UNTAG_VID_ENA] = "UNTAG_VID_ENA", + [VCAP_AF_VID_A_VAL] = "VID_A_VAL", + [VCAP_AF_VID_B_VAL] = "VID_B_VAL", + [VCAP_AF_VID_C_VAL] = "VID_C_VAL", [VCAP_AF_VID_VAL] = "VID_VAL", - [VCAP_AF_VLAN_POP_CNT] = "VLAN_POP_CNT", - [VCAP_AF_VLAN_POP_CNT_ENA] = "VLAN_POP_CNT_ENA", - [VCAP_AF_VLAN_PUSH_CNT] = "VLAN_PUSH_CNT", - [VCAP_AF_VLAN_PUSH_CNT_ENA] = "VLAN_PUSH_CNT_ENA", - [VCAP_AF_VLAN_WAS_TAGGED] = "VLAN_WAS_TAGGED", }; /* VCAPs */ diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h index b5a74f0eef9b..55762f24e196 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h +++ b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h @@ -1,10 +1,18 @@ /* SPDX-License-Identifier: BSD-3-Clause */ -/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries. +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. * Microchip VCAP test model interface for kunit testing */ +/* This file is autogenerated by cml-utils 2023-02-10 11:16:00 +0100. + * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada + */ + #ifndef __VCAP_MODEL_KUNIT_H__ #define __VCAP_MODEL_KUNIT_H__ + +/* VCAPs */ extern const struct vcap_info kunit_test_vcaps[]; extern const struct vcap_statistics kunit_test_vcap_stats; + #endif /* __VCAP_MODEL_KUNIT_H__ */ + diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.c b/drivers/net/ethernet/microchip/vcap/vcap_tc.c new file mode 100644 index 000000000000..09abe7944af6 --- /dev/null +++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.c @@ -0,0 +1,412 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip VCAP TC + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. 
+ */ + +#include <net/flow_offload.h> +#include <net/ipv6.h> +#include <net/tcp.h> + +#include "vcap_api_client.h" +#include "vcap_tc.h" + +enum vcap_is2_arp_opcode { + VCAP_IS2_ARP_REQUEST, + VCAP_IS2_ARP_REPLY, + VCAP_IS2_RARP_REQUEST, + VCAP_IS2_RARP_REPLY, +}; + +enum vcap_arp_opcode { + VCAP_ARP_OP_RESERVED, + VCAP_ARP_OP_REQUEST, + VCAP_ARP_OP_REPLY, +}; + +int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st) +{ + enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; + enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; + struct flow_match_eth_addrs match; + struct vcap_u48_key smac, dmac; + int err = 0; + + flow_rule_match_eth_addrs(st->frule, &match); + + if (!is_zero_ether_addr(match.mask->src)) { + vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); + vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); + err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); + if (err) + goto out; + } + + if (!is_zero_ether_addr(match.mask->dst)) { + vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); + vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); + err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ethaddr_usage); + +int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st) +{ + int err = 0; + + if (st->l3_proto == ETH_P_IP) { + struct flow_match_ipv4_addrs mt; + + flow_rule_match_ipv4_addrs(st->frule, &mt); + if (mt.mask->src) { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP4_SIP, + be32_to_cpu(mt.key->src), + be32_to_cpu(mt.mask->src)); + if (err) + goto out; + } + if (mt.mask->dst) { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP4_DIP, + be32_to_cpu(mt.key->dst), + be32_to_cpu(mt.mask->dst)); + if (err) + goto out; + } + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv4_usage); + +int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st) +{ + int err = 0; + + if (st->l3_proto == ETH_P_IPV6) { + struct flow_match_ipv6_addrs mt; + struct vcap_u128_key sip; + struct vcap_u128_key dip; + + flow_rule_match_ipv6_addrs(st->frule, &mt); + /* Check if address masks are non-zero */ + if (!ipv6_addr_any(&mt.mask->src)) { + vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16); + vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16); + err = vcap_rule_add_key_u128(st->vrule, + VCAP_KF_L3_IP6_SIP, &sip); + if (err) + goto out; + } + if (!ipv6_addr_any(&mt.mask->dst)) { + vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16); + vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16); + err = vcap_rule_add_key_u128(st->vrule, + VCAP_KF_L3_IP6_DIP, &dip); + if (err) + goto out; + } + } + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS); + return err; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv6_usage); + +int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_ports mt; + u16 value, mask; + int err = 0; + + flow_rule_match_ports(st->frule, &mt); + + if (mt.mask->src) { + value = be16_to_cpu(mt.key->src); + mask = be16_to_cpu(mt.mask->src); + err = 
vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value, + mask); + if (err) + goto out; + } + + if (mt.mask->dst) { + value = be16_to_cpu(mt.key->dst); + mask = be16_to_cpu(mt.mask->dst); + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value, + mask); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_portnum_usage); + +int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st) +{ + enum vcap_key_field vid_key = VCAP_KF_8021Q_VID0; + enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP0; + struct flow_match_vlan mt; + u16 tpid; + int err; + + flow_rule_match_cvlan(st->frule, &mt); + + tpid = be16_to_cpu(mt.key->vlan_tpid); + + if (tpid == ETH_P_8021Q) { + vid_key = VCAP_KF_8021Q_VID1; + pcp_key = VCAP_KF_8021Q_PCP1; + } + + if (mt.mask->vlan_id) { + err = vcap_rule_add_key_u32(st->vrule, vid_key, + mt.key->vlan_id, + mt.mask->vlan_id); + if (err) + goto out; + } + + if (mt.mask->vlan_priority) { + err = vcap_rule_add_key_u32(st->vrule, pcp_key, + mt.key->vlan_priority, + mt.mask->vlan_priority); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN); + + return 0; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "cvlan parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_cvlan_usage); + +int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st, + enum vcap_key_field vid_key, + enum vcap_key_field pcp_key) +{ + struct flow_match_vlan mt; + int err; + + flow_rule_match_vlan(st->frule, &mt); + + if (mt.mask->vlan_id) { + err = vcap_rule_add_key_u32(st->vrule, vid_key, + mt.key->vlan_id, + mt.mask->vlan_id); + if (err) + goto out; + } + + if (mt.mask->vlan_priority) { + err = vcap_rule_add_key_u32(st->vrule, pcp_key, + mt.key->vlan_priority, + mt.mask->vlan_priority); + if (err) + goto out; + } + + if (mt.mask->vlan_tpid) + st->tpid = be16_to_cpu(mt.key->vlan_tpid); + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN); + + return 0; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_vlan_usage); + +int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_tcp mt; + u16 tcp_flags_mask; + u16 tcp_flags_key; + enum vcap_bit val; + int err = 0; + + flow_rule_match_tcp(st->frule, &mt); + tcp_flags_key = be16_to_cpu(mt.key->flags); + tcp_flags_mask = be16_to_cpu(mt.mask->flags); + + if (tcp_flags_mask & TCPHDR_FIN) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_FIN) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_SYN) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_SYN) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_RST) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_RST) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_PSH) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_PSH) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_ACK) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_ACK) + val = VCAP_BIT_1; + err = 
vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_URG) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_URG) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_tcp_usage); + +int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_arp mt; + u16 value, mask; + u32 ipval, ipmsk; + int err; + + flow_rule_match_arp(st->frule, &mt); + + if (mt.mask->op) { + mask = 0x3; + if (st->l3_proto == ETH_P_ARP) { + value = mt.key->op == VCAP_ARP_OP_REQUEST ? + VCAP_IS2_ARP_REQUEST : + VCAP_IS2_ARP_REPLY; + } else { /* RARP */ + value = mt.key->op == VCAP_ARP_OP_REQUEST ? + VCAP_IS2_RARP_REQUEST : + VCAP_IS2_RARP_REPLY; + } + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE, + value, mask); + if (err) + goto out; + } + + /* The IS2 ARP keyset does not support ARP hardware addresses */ + if (!is_zero_ether_addr(mt.mask->sha) || + !is_zero_ether_addr(mt.mask->tha)) { + err = -EINVAL; + goto out; + } + + if (mt.mask->sip) { + ipval = be32_to_cpu((__force __be32)mt.key->sip); + ipmsk = be32_to_cpu((__force __be32)mt.mask->sip); + + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP, + ipval, ipmsk); + if (err) + goto out; + } + + if (mt.mask->tip) { + ipval = be32_to_cpu((__force __be32)mt.key->tip); + ipmsk = be32_to_cpu((__force __be32)mt.mask->tip); + + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP, + ipval, ipmsk); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP); + + return 0; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_arp_usage); + +int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_ip mt; + int err = 0; + + flow_rule_match_ip(st->frule, &mt); + + if (mt.mask->tos) { + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS, + mt.key->tos, + mt.mask->tos); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ip_usage); diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.h b/drivers/net/ethernet/microchip/vcap/vcap_tc.h new file mode 100644 index 000000000000..071f892f9aa4 --- /dev/null +++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. 
+ * Microchip VCAP TC + */ + +#ifndef __VCAP_TC__ +#define __VCAP_TC__ + +struct vcap_tc_flower_parse_usage { + struct flow_cls_offload *fco; + struct flow_rule *frule; + struct vcap_rule *vrule; + struct vcap_admin *admin; + u16 l3_proto; + u8 l4_proto; + u16 tpid; + unsigned int used_keys; +}; + +int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st, + enum vcap_key_field vid_key, + enum vcap_key_field pcp_key); +int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st); + +#endif /* __VCAP_TC__ */ diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 2f6a048dee90..6120f2b6684f 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -2160,6 +2160,8 @@ static int mana_probe_port(struct mana_context *ac, int port_idx, ndev->hw_features |= NETIF_F_RXHASH; ndev->features = ndev->hw_features; ndev->vlan_features = 0; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; err = register_netdev(ndev); if (err) { diff --git a/drivers/net/ethernet/mscc/Kconfig b/drivers/net/ethernet/mscc/Kconfig index 8dd8c7f425d2..81e605691bb8 100644 --- a/drivers/net/ethernet/mscc/Kconfig +++ b/drivers/net/ethernet/mscc/Kconfig @@ -13,6 +13,7 @@ if NET_VENDOR_MICROSEMI # Users should depend on NET_SWITCHDEV, HAS_IOMEM, BRIDGE config MSCC_OCELOT_SWITCH_LIB + depends on PTP_1588_CLOCK_OPTIONAL select NET_DEVLINK select REGMAP_MMIO select PACKING diff --git a/drivers/net/ethernet/mscc/Makefile b/drivers/net/ethernet/mscc/Makefile index 5d435a565d4c..16987b72dfc0 100644 --- a/drivers/net/ethernet/mscc/Makefile +++ b/drivers/net/ethernet/mscc/Makefile @@ -5,6 +5,7 @@ mscc_ocelot_switch_lib-y := \ ocelot_devlink.o \ ocelot_flower.o \ ocelot_io.o \ + ocelot_mm.o \ ocelot_police.o \ ocelot_ptp.o \ ocelot_stats.o \ diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index da56f9bfeaf0..08acb7b89086 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -6,12 +6,16 @@ */ #include <linux/dsa/ocelot.h> #include <linux/if_bridge.h> +#include <linux/iopoll.h> #include <soc/mscc/ocelot_vcap.h> #include "ocelot.h" #include "ocelot_vcap.h" -#define TABLE_UPDATE_SLEEP_US 10 -#define TABLE_UPDATE_TIMEOUT_US 100000 +#define TABLE_UPDATE_SLEEP_US 10 +#define TABLE_UPDATE_TIMEOUT_US 100000 +#define MEM_INIT_SLEEP_US 1000 +#define MEM_INIT_TIMEOUT_US 100000 + #define OCELOT_RSV_VLAN_RANGE_START 4000 struct ocelot_mact_entry { @@ -2713,6 +2717,46 @@ static void ocelot_detect_features(struct ocelot *ocelot) ocelot->num_frame_refs = QSYS_MMGT_EQ_CTRL_FP_FREE_CNT(eq_ctrl); } +static int ocelot_mem_init_status(struct ocelot *ocelot) +{ + unsigned int val; + int err; + + err = regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], + &val); + + return err ?: val; +} + +int ocelot_reset(struct ocelot *ocelot) +{ + int err; 
+ u32 val; + + err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1); + if (err) + return err; + + err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1); + if (err) + return err; + + /* MEM_INIT is a self-clearing bit. Wait for it to be cleared (should be + * 100us) before enabling the switch core. + */ + err = readx_poll_timeout(ocelot_mem_init_status, ocelot, val, !val, + MEM_INIT_SLEEP_US, MEM_INIT_TIMEOUT_US); + if (err) + return err; + + err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1); + if (err) + return err; + + return regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1); +} +EXPORT_SYMBOL(ocelot_reset); + int ocelot_init(struct ocelot *ocelot) { int i, ret; @@ -2738,10 +2782,8 @@ int ocelot_init(struct ocelot *ocelot) return -ENOMEM; ret = ocelot_stats_init(ocelot); - if (ret) { - destroy_workqueue(ocelot->owq); - return ret; - } + if (ret) + goto err_stats_init; INIT_LIST_HEAD(&ocelot->multicast); INIT_LIST_HEAD(&ocelot->pgids); @@ -2756,6 +2798,12 @@ int ocelot_init(struct ocelot *ocelot) if (ocelot->ops->psfp_init) ocelot->ops->psfp_init(ocelot); + if (ocelot->mm_supported) { + ret = ocelot_mm_init(ocelot); + if (ret) + goto err_mm_init; + } + for (port = 0; port < ocelot->num_phys_ports; port++) { /* Clear all counters (5 groups) */ ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port) | @@ -2853,6 +2901,12 @@ int ocelot_init(struct ocelot *ocelot) ANA_CPUQ_8021_CFG, i); return 0; + +err_mm_init: + ocelot_stats_deinit(ocelot); +err_stats_init: + destroy_workqueue(ocelot->owq); + return ret; } EXPORT_SYMBOL(ocelot_init); diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h index 70dbd9c4e512..e9a0179448bf 100644 --- a/drivers/net/ethernet/mscc/ocelot.h +++ b/drivers/net/ethernet/mscc/ocelot.h @@ -109,6 +109,8 @@ void ocelot_mirror_put(struct ocelot *ocelot); int ocelot_stats_init(struct ocelot *ocelot); void ocelot_stats_deinit(struct ocelot *ocelot); +int ocelot_mm_init(struct ocelot *ocelot); + extern struct notifier_block ocelot_netdevice_nb; extern struct notifier_block ocelot_switchdev_nb; extern struct notifier_block ocelot_switchdev_blocking_nb; diff --git a/drivers/net/ethernet/mscc/ocelot_devlink.c b/drivers/net/ethernet/mscc/ocelot_devlink.c index b8737efd2a85..d9ea75a14f2f 100644 --- a/drivers/net/ethernet/mscc/ocelot_devlink.c +++ b/drivers/net/ethernet/mscc/ocelot_devlink.c @@ -487,6 +487,37 @@ static void ocelot_watermark_init(struct ocelot *ocelot) ocelot_setup_sharing_watermarks(ocelot); } +/* Watermark encode + * Bit 8: Unit; 0:1, 1:16 + * Bit 7-0: Value to be multiplied with unit + */ +u16 ocelot_wm_enc(u16 value) +{ + WARN_ON(value >= 16 * BIT(8)); + + if (value >= BIT(8)) + return BIT(8) | (value / 16); + + return value; +} +EXPORT_SYMBOL(ocelot_wm_enc); + +u16 ocelot_wm_dec(u16 wm) +{ + if (wm & BIT(8)) + return (wm & GENMASK(7, 0)) * 16; + + return wm; +} +EXPORT_SYMBOL(ocelot_wm_dec); + +void ocelot_wm_stat(u32 val, u32 *inuse, u32 *maxuse) +{ + *inuse = (val & GENMASK(23, 12)) >> 12; + *maxuse = val & GENMASK(11, 0); +} +EXPORT_SYMBOL(ocelot_wm_stat); + /* Pool size and type are fixed up at runtime. Keeping this structure to * look up the cell size multipliers. 
*/ diff --git a/drivers/net/ethernet/mscc/ocelot_mm.c b/drivers/net/ethernet/mscc/ocelot_mm.c new file mode 100644 index 000000000000..0a8f21ae23f0 --- /dev/null +++ b/drivers/net/ethernet/mscc/ocelot_mm.c @@ -0,0 +1,215 @@ +// SPDX-License-Identifier: (GPL-2.0 OR MIT) +/* + * Hardware library for MAC Merge Layer and Frame Preemption on TSN-capable + * switches (VSC9959) + * + * Copyright 2022-2023 NXP + */ +#include <linux/ethtool.h> +#include <soc/mscc/ocelot.h> +#include <soc/mscc/ocelot_dev.h> +#include <soc/mscc/ocelot_qsys.h> + +#include "ocelot.h" + +static const char * +mm_verify_state_to_string(enum ethtool_mm_verify_status state) +{ + switch (state) { + case ETHTOOL_MM_VERIFY_STATUS_INITIAL: + return "INITIAL"; + case ETHTOOL_MM_VERIFY_STATUS_VERIFYING: + return "VERIFYING"; + case ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED: + return "SUCCEEDED"; + case ETHTOOL_MM_VERIFY_STATUS_FAILED: + return "FAILED"; + case ETHTOOL_MM_VERIFY_STATUS_DISABLED: + return "DISABLED"; + default: + return "UNKNOWN"; + } +} + +static enum ethtool_mm_verify_status ocelot_mm_verify_status(u32 val) +{ + switch (DEV_MM_STAT_MM_STATUS_PRMPT_VERIFY_STATE_X(val)) { + case 0: + return ETHTOOL_MM_VERIFY_STATUS_INITIAL; + case 1: + return ETHTOOL_MM_VERIFY_STATUS_VERIFYING; + case 2: + return ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED; + case 3: + return ETHTOOL_MM_VERIFY_STATUS_FAILED; + case 4: + return ETHTOOL_MM_VERIFY_STATUS_DISABLED; + default: + return ETHTOOL_MM_VERIFY_STATUS_UNKNOWN; + } +} + +void ocelot_port_mm_irq(struct ocelot *ocelot, int port) +{ + struct ocelot_port *ocelot_port = ocelot->ports[port]; + struct ocelot_mm_state *mm = &ocelot->mm[port]; + enum ethtool_mm_verify_status verify_status; + u32 val; + + mutex_lock(&mm->lock); + + val = ocelot_port_readl(ocelot_port, DEV_MM_STATUS); + + verify_status = ocelot_mm_verify_status(val); + if (mm->verify_status != verify_status) { + dev_dbg(ocelot->dev, + "Port %d MAC Merge verification state %s\n", + port, mm_verify_state_to_string(verify_status)); + mm->verify_status = verify_status; + } + + if (val & DEV_MM_STAT_MM_STATUS_PRMPT_ACTIVE_STICKY) { + mm->tx_active = !!(val & DEV_MM_STAT_MM_STATUS_PRMPT_ACTIVE_STATUS); + + dev_dbg(ocelot->dev, "Port %d TX preemption %s\n", + port, mm->tx_active ? 
"active" : "inactive"); + } + + if (val & DEV_MM_STAT_MM_STATUS_UNEXP_RX_PFRM_STICKY) { + dev_err(ocelot->dev, + "Unexpected P-frame received on port %d while verification was unsuccessful or not yet verified\n", + port); + } + + if (val & DEV_MM_STAT_MM_STATUS_UNEXP_TX_PFRM_STICKY) { + dev_err(ocelot->dev, + "Unexpected P-frame requested to be transmitted on port %d while verification was unsuccessful or not yet verified, or MM_TX_ENA=0\n", + port); + } + + ocelot_port_writel(ocelot_port, val, DEV_MM_STATUS); + + mutex_unlock(&mm->lock); +} +EXPORT_SYMBOL_GPL(ocelot_port_mm_irq); + +int ocelot_port_set_mm(struct ocelot *ocelot, int port, + struct ethtool_mm_cfg *cfg, + struct netlink_ext_ack *extack) +{ + struct ocelot_port *ocelot_port = ocelot->ports[port]; + u32 mm_enable = 0, verify_disable = 0, add_frag_size; + struct ocelot_mm_state *mm; + int err; + + if (!ocelot->mm_supported) + return -EOPNOTSUPP; + + mm = &ocelot->mm[port]; + + err = ethtool_mm_frag_size_min_to_add(cfg->tx_min_frag_size, + &add_frag_size, extack); + if (err) + return err; + + if (cfg->pmac_enabled) + mm_enable |= DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA; + + if (cfg->tx_enabled) + mm_enable |= DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA; + + if (!cfg->verify_enabled) + verify_disable = DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS; + + mutex_lock(&mm->lock); + + ocelot_port_rmwl(ocelot_port, mm_enable, + DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA | + DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA, + DEV_MM_ENABLE_CONFIG); + + ocelot_port_rmwl(ocelot_port, verify_disable | + DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME(cfg->verify_time), + DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS | + DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME_M, + DEV_MM_VERIF_CONFIG); + + ocelot_rmw_rix(ocelot, + QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE(add_frag_size), + QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_M, + QSYS_PREEMPTION_CFG, + port); + + mutex_unlock(&mm->lock); + + return 0; +} +EXPORT_SYMBOL_GPL(ocelot_port_set_mm); + +int ocelot_port_get_mm(struct ocelot *ocelot, int port, + struct ethtool_mm_state *state) +{ + struct ocelot_port *ocelot_port = ocelot->ports[port]; + struct ocelot_mm_state *mm; + u32 val, add_frag_size; + + if (!ocelot->mm_supported) + return -EOPNOTSUPP; + + mm = &ocelot->mm[port]; + + mutex_lock(&mm->lock); + + val = ocelot_port_readl(ocelot_port, DEV_MM_ENABLE_CONFIG); + state->pmac_enabled = !!(val & DEV_MM_CONFIG_ENABLE_CONFIG_MM_RX_ENA); + state->tx_enabled = !!(val & DEV_MM_CONFIG_ENABLE_CONFIG_MM_TX_ENA); + + val = ocelot_port_readl(ocelot_port, DEV_MM_VERIF_CONFIG); + state->verify_enabled = !(val & DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_DIS); + state->verify_time = DEV_MM_CONFIG_VERIF_CONFIG_PRM_VERIFY_TIME_X(val); + state->max_verify_time = 128; + + val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port); + add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val); + state->tx_min_frag_size = ethtool_mm_frag_size_add_to_min(add_frag_size); + state->rx_min_frag_size = ETH_ZLEN; + + state->verify_status = mm->verify_status; + state->tx_active = mm->tx_active; + + mutex_unlock(&mm->lock); + + return 0; +} +EXPORT_SYMBOL_GPL(ocelot_port_get_mm); + +int ocelot_mm_init(struct ocelot *ocelot) +{ + struct ocelot_port *ocelot_port; + struct ocelot_mm_state *mm; + int port; + + if (!ocelot->mm_supported) + return 0; + + ocelot->mm = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports, + sizeof(*ocelot->mm), GFP_KERNEL); + if (!ocelot->mm) + return -ENOMEM; + + for (port = 0; port < ocelot->num_phys_ports; port++) { + u32 val; + + mm = 
&ocelot->mm[port]; + mutex_init(&mm->lock); + ocelot_port = ocelot->ports[port]; + + /* Update initial status variable for the + * verification state machine + */ + val = ocelot_port_readl(ocelot_port, DEV_MM_STATUS); + mm->verify_status = ocelot_mm_verify_status(val); + } + + return 0; +} diff --git a/drivers/net/ethernet/mscc/ocelot_stats.c b/drivers/net/ethernet/mscc/ocelot_stats.c index 1478c3b21af1..bdb893476832 100644 --- a/drivers/net/ethernet/mscc/ocelot_stats.c +++ b/drivers/net/ethernet/mscc/ocelot_stats.c @@ -4,6 +4,7 @@ * Copyright (c) 2017 Microsemi Corporation * Copyright 2022 NXP */ +#include <linux/ethtool_netlink.h> #include <linux/spinlock.h> #include <linux/mutex.h> #include <linux/workqueue.h> @@ -54,6 +55,29 @@ enum ocelot_stat { OCELOT_STAT_RX_GREEN_PRIO_5, OCELOT_STAT_RX_GREEN_PRIO_6, OCELOT_STAT_RX_GREEN_PRIO_7, + OCELOT_STAT_RX_ASSEMBLY_ERRS, + OCELOT_STAT_RX_SMD_ERRS, + OCELOT_STAT_RX_ASSEMBLY_OK, + OCELOT_STAT_RX_MERGE_FRAGMENTS, + OCELOT_STAT_RX_PMAC_OCTETS, + OCELOT_STAT_RX_PMAC_UNICAST, + OCELOT_STAT_RX_PMAC_MULTICAST, + OCELOT_STAT_RX_PMAC_BROADCAST, + OCELOT_STAT_RX_PMAC_SHORTS, + OCELOT_STAT_RX_PMAC_FRAGMENTS, + OCELOT_STAT_RX_PMAC_JABBERS, + OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS, + OCELOT_STAT_RX_PMAC_SYM_ERRS, + OCELOT_STAT_RX_PMAC_64, + OCELOT_STAT_RX_PMAC_65_127, + OCELOT_STAT_RX_PMAC_128_255, + OCELOT_STAT_RX_PMAC_256_511, + OCELOT_STAT_RX_PMAC_512_1023, + OCELOT_STAT_RX_PMAC_1024_1526, + OCELOT_STAT_RX_PMAC_1527_MAX, + OCELOT_STAT_RX_PMAC_PAUSE, + OCELOT_STAT_RX_PMAC_CONTROL, + OCELOT_STAT_RX_PMAC_LONGS, OCELOT_STAT_TX_OCTETS, OCELOT_STAT_TX_UNICAST, OCELOT_STAT_TX_MULTICAST, @@ -85,6 +109,20 @@ enum ocelot_stat { OCELOT_STAT_TX_GREEN_PRIO_6, OCELOT_STAT_TX_GREEN_PRIO_7, OCELOT_STAT_TX_AGED, + OCELOT_STAT_TX_MM_HOLD, + OCELOT_STAT_TX_MERGE_FRAGMENTS, + OCELOT_STAT_TX_PMAC_OCTETS, + OCELOT_STAT_TX_PMAC_UNICAST, + OCELOT_STAT_TX_PMAC_MULTICAST, + OCELOT_STAT_TX_PMAC_BROADCAST, + OCELOT_STAT_TX_PMAC_PAUSE, + OCELOT_STAT_TX_PMAC_64, + OCELOT_STAT_TX_PMAC_65_127, + OCELOT_STAT_TX_PMAC_128_255, + OCELOT_STAT_TX_PMAC_256_511, + OCELOT_STAT_TX_PMAC_512_1023, + OCELOT_STAT_TX_PMAC_1024_1526, + OCELOT_STAT_TX_PMAC_1527_MAX, OCELOT_STAT_DROP_LOCAL, OCELOT_STAT_DROP_TAIL, OCELOT_STAT_DROP_YELLOW_PRIO_0, @@ -228,6 +266,55 @@ static const struct ocelot_stat_layout ocelot_stats_layout[OCELOT_NUM_STATS] = { OCELOT_COMMON_STATS, }; +static const struct ocelot_stat_layout ocelot_mm_stats_layout[OCELOT_NUM_STATS] = { + OCELOT_COMMON_STATS, + OCELOT_STAT(RX_ASSEMBLY_ERRS), + OCELOT_STAT(RX_SMD_ERRS), + OCELOT_STAT(RX_ASSEMBLY_OK), + OCELOT_STAT(RX_MERGE_FRAGMENTS), + OCELOT_STAT(TX_MERGE_FRAGMENTS), + OCELOT_STAT(RX_PMAC_OCTETS), + OCELOT_STAT(RX_PMAC_UNICAST), + OCELOT_STAT(RX_PMAC_MULTICAST), + OCELOT_STAT(RX_PMAC_BROADCAST), + OCELOT_STAT(RX_PMAC_SHORTS), + OCELOT_STAT(RX_PMAC_FRAGMENTS), + OCELOT_STAT(RX_PMAC_JABBERS), + OCELOT_STAT(RX_PMAC_CRC_ALIGN_ERRS), + OCELOT_STAT(RX_PMAC_SYM_ERRS), + OCELOT_STAT(RX_PMAC_64), + OCELOT_STAT(RX_PMAC_65_127), + OCELOT_STAT(RX_PMAC_128_255), + OCELOT_STAT(RX_PMAC_256_511), + OCELOT_STAT(RX_PMAC_512_1023), + OCELOT_STAT(RX_PMAC_1024_1526), + OCELOT_STAT(RX_PMAC_1527_MAX), + OCELOT_STAT(RX_PMAC_PAUSE), + OCELOT_STAT(RX_PMAC_CONTROL), + OCELOT_STAT(RX_PMAC_LONGS), + OCELOT_STAT(TX_PMAC_OCTETS), + OCELOT_STAT(TX_PMAC_UNICAST), + OCELOT_STAT(TX_PMAC_MULTICAST), + OCELOT_STAT(TX_PMAC_BROADCAST), + OCELOT_STAT(TX_PMAC_PAUSE), + OCELOT_STAT(TX_PMAC_64), + OCELOT_STAT(TX_PMAC_65_127), + OCELOT_STAT(TX_PMAC_128_255), + 
OCELOT_STAT(TX_PMAC_256_511), + OCELOT_STAT(TX_PMAC_512_1023), + OCELOT_STAT(TX_PMAC_1024_1526), + OCELOT_STAT(TX_PMAC_1527_MAX), +}; + +static const struct ocelot_stat_layout * +ocelot_get_stats_layout(struct ocelot *ocelot) +{ + if (ocelot->mm_supported) + return ocelot_mm_stats_layout; + + return ocelot_stats_layout; +} + /* Read the counters from hardware and keep them in region->buf. * Caller must hold &ocelot->stat_view_lock. */ @@ -306,17 +393,20 @@ static void ocelot_check_stats_work(struct work_struct *work) void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data) { + const struct ocelot_stat_layout *layout; int i; if (sset != ETH_SS_STATS) return; + layout = ocelot_get_stats_layout(ocelot); + for (i = 0; i < OCELOT_NUM_STATS; i++) { - if (ocelot_stats_layout[i].name[0] == '\0') + if (layout[i].name[0] == '\0') continue; - memcpy(data + i * ETH_GSTRING_LEN, ocelot_stats_layout[i].name, - ETH_GSTRING_LEN); + memcpy(data, layout[i].name, ETH_GSTRING_LEN); + data += ETH_GSTRING_LEN; } } EXPORT_SYMBOL(ocelot_get_strings); @@ -350,13 +440,16 @@ out_unlock: int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset) { + const struct ocelot_stat_layout *layout; int i, num_stats = 0; if (sset != ETH_SS_STATS) return -EOPNOTSUPP; + layout = ocelot_get_stats_layout(ocelot); + for (i = 0; i < OCELOT_NUM_STATS; i++) - if (ocelot_stats_layout[i].name[0] != '\0') + if (layout[i].name[0] != '\0') num_stats++; return num_stats; @@ -366,14 +459,17 @@ EXPORT_SYMBOL(ocelot_get_sset_count); static void ocelot_port_ethtool_stats_cb(struct ocelot *ocelot, int port, void *priv) { + const struct ocelot_stat_layout *layout; u64 *data = priv; int i; + layout = ocelot_get_stats_layout(ocelot); + /* Copy all supported counters */ for (i = 0; i < OCELOT_NUM_STATS; i++) { int index = port * OCELOT_NUM_STATS + i; - if (ocelot_stats_layout[i].name[0] == '\0') + if (layout[i].name[0] == '\0') continue; *data++ = ocelot->stats[index]; @@ -395,14 +491,63 @@ static void ocelot_port_pause_stats_cb(struct ocelot *ocelot, int port, void *pr pause_stats->rx_pause_frames = s[OCELOT_STAT_RX_PAUSE]; } +static void ocelot_port_pmac_pause_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_pause_stats *pause_stats = priv; + + pause_stats->tx_pause_frames = s[OCELOT_STAT_TX_PMAC_PAUSE]; + pause_stats->rx_pause_frames = s[OCELOT_STAT_RX_PMAC_PAUSE]; +} + +static void ocelot_port_mm_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_mm_stats *stats = priv; + + stats->MACMergeFrameAssErrorCount = s[OCELOT_STAT_RX_ASSEMBLY_ERRS]; + stats->MACMergeFrameSmdErrorCount = s[OCELOT_STAT_RX_SMD_ERRS]; + stats->MACMergeFrameAssOkCount = s[OCELOT_STAT_RX_ASSEMBLY_OK]; + stats->MACMergeFragCountRx = s[OCELOT_STAT_RX_MERGE_FRAGMENTS]; + stats->MACMergeFragCountTx = s[OCELOT_STAT_TX_MERGE_FRAGMENTS]; + stats->MACMergeHoldCount = s[OCELOT_STAT_TX_MM_HOLD]; +} + void ocelot_port_get_pause_stats(struct ocelot *ocelot, int port, struct ethtool_pause_stats *pause_stats) { - ocelot_port_stats_run(ocelot, port, pause_stats, - ocelot_port_pause_stats_cb); + struct net_device *dev; + + switch (pause_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + ocelot_port_stats_run(ocelot, port, pause_stats, + ocelot_port_pause_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (ocelot->mm_supported) + ocelot_port_stats_run(ocelot, port, pause_stats, + 
ocelot_port_pmac_pause_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + dev = ocelot->ops->port_to_netdev(ocelot, port); + ethtool_aggregate_pause_stats(dev, pause_stats); + break; + } } EXPORT_SYMBOL_GPL(ocelot_port_get_pause_stats); +void ocelot_port_get_mm_stats(struct ocelot *ocelot, int port, + struct ethtool_mm_stats *stats) +{ + if (!ocelot->mm_supported) + return; + + ocelot_port_stats_run(ocelot, port, stats, ocelot_port_mm_stats_cb); +} +EXPORT_SYMBOL_GPL(ocelot_port_get_mm_stats); + static const struct ethtool_rmon_hist_range ocelot_rmon_ranges[] = { { 64, 64 }, { 65, 127 }, @@ -441,14 +586,57 @@ static void ocelot_port_rmon_stats_cb(struct ocelot *ocelot, int port, void *pri rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_1024_1526]; } +static void ocelot_port_pmac_rmon_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_rmon_stats *rmon_stats = priv; + + rmon_stats->undersize_pkts = s[OCELOT_STAT_RX_PMAC_SHORTS]; + rmon_stats->oversize_pkts = s[OCELOT_STAT_RX_PMAC_LONGS]; + rmon_stats->fragments = s[OCELOT_STAT_RX_PMAC_FRAGMENTS]; + rmon_stats->jabbers = s[OCELOT_STAT_RX_PMAC_JABBERS]; + + rmon_stats->hist[0] = s[OCELOT_STAT_RX_PMAC_64]; + rmon_stats->hist[1] = s[OCELOT_STAT_RX_PMAC_65_127]; + rmon_stats->hist[2] = s[OCELOT_STAT_RX_PMAC_128_255]; + rmon_stats->hist[3] = s[OCELOT_STAT_RX_PMAC_256_511]; + rmon_stats->hist[4] = s[OCELOT_STAT_RX_PMAC_512_1023]; + rmon_stats->hist[5] = s[OCELOT_STAT_RX_PMAC_1024_1526]; + rmon_stats->hist[6] = s[OCELOT_STAT_RX_PMAC_1527_MAX]; + + rmon_stats->hist_tx[0] = s[OCELOT_STAT_TX_PMAC_64]; + rmon_stats->hist_tx[1] = s[OCELOT_STAT_TX_PMAC_65_127]; + rmon_stats->hist_tx[2] = s[OCELOT_STAT_TX_PMAC_128_255]; + rmon_stats->hist_tx[3] = s[OCELOT_STAT_TX_PMAC_128_255]; + rmon_stats->hist_tx[4] = s[OCELOT_STAT_TX_PMAC_256_511]; + rmon_stats->hist_tx[5] = s[OCELOT_STAT_TX_PMAC_512_1023]; + rmon_stats->hist_tx[6] = s[OCELOT_STAT_TX_PMAC_1024_1526]; +} + void ocelot_port_get_rmon_stats(struct ocelot *ocelot, int port, struct ethtool_rmon_stats *rmon_stats, const struct ethtool_rmon_hist_range **ranges) { + struct net_device *dev; + *ranges = ocelot_rmon_ranges; - ocelot_port_stats_run(ocelot, port, rmon_stats, - ocelot_port_rmon_stats_cb); + switch (rmon_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + ocelot_port_stats_run(ocelot, port, rmon_stats, + ocelot_port_rmon_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (ocelot->mm_supported) + ocelot_port_stats_run(ocelot, port, rmon_stats, + ocelot_port_pmac_rmon_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + dev = ocelot->ops->port_to_netdev(ocelot, port); + ethtool_aggregate_rmon_stats(dev, rmon_stats); + break; + } } EXPORT_SYMBOL_GPL(ocelot_port_get_rmon_stats); @@ -460,11 +648,35 @@ static void ocelot_port_ctrl_stats_cb(struct ocelot *ocelot, int port, void *pri ctrl_stats->MACControlFramesReceived = s[OCELOT_STAT_RX_CONTROL]; } +static void ocelot_port_pmac_ctrl_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_eth_ctrl_stats *ctrl_stats = priv; + + ctrl_stats->MACControlFramesReceived = s[OCELOT_STAT_RX_PMAC_CONTROL]; +} + void ocelot_port_get_eth_ctrl_stats(struct ocelot *ocelot, int port, struct ethtool_eth_ctrl_stats *ctrl_stats) { - ocelot_port_stats_run(ocelot, port, ctrl_stats, - ocelot_port_ctrl_stats_cb); + struct net_device *dev; + + switch (ctrl_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + 
ocelot_port_stats_run(ocelot, port, ctrl_stats, + ocelot_port_ctrl_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (ocelot->mm_supported) + ocelot_port_stats_run(ocelot, port, ctrl_stats, + ocelot_port_pmac_ctrl_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + dev = ocelot->ops->port_to_netdev(ocelot, port); + ethtool_aggregate_ctrl_stats(dev, ctrl_stats); + break; + } } EXPORT_SYMBOL_GPL(ocelot_port_get_eth_ctrl_stats); @@ -510,11 +722,60 @@ static void ocelot_port_mac_stats_cb(struct ocelot *ocelot, int port, void *priv mac_stats->AlignmentErrors = s[OCELOT_STAT_RX_CRC_ALIGN_ERRS]; } +static void ocelot_port_pmac_mac_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_eth_mac_stats *mac_stats = priv; + + mac_stats->OctetsTransmittedOK = s[OCELOT_STAT_TX_PMAC_OCTETS]; + mac_stats->FramesTransmittedOK = s[OCELOT_STAT_TX_PMAC_64] + + s[OCELOT_STAT_TX_PMAC_65_127] + + s[OCELOT_STAT_TX_PMAC_128_255] + + s[OCELOT_STAT_TX_PMAC_256_511] + + s[OCELOT_STAT_TX_PMAC_512_1023] + + s[OCELOT_STAT_TX_PMAC_1024_1526] + + s[OCELOT_STAT_TX_PMAC_1527_MAX]; + mac_stats->OctetsReceivedOK = s[OCELOT_STAT_RX_PMAC_OCTETS]; + mac_stats->FramesReceivedOK = s[OCELOT_STAT_RX_PMAC_64] + + s[OCELOT_STAT_RX_PMAC_65_127] + + s[OCELOT_STAT_RX_PMAC_128_255] + + s[OCELOT_STAT_RX_PMAC_256_511] + + s[OCELOT_STAT_RX_PMAC_512_1023] + + s[OCELOT_STAT_RX_PMAC_1024_1526] + + s[OCELOT_STAT_RX_PMAC_1527_MAX]; + mac_stats->MulticastFramesXmittedOK = s[OCELOT_STAT_TX_PMAC_MULTICAST]; + mac_stats->BroadcastFramesXmittedOK = s[OCELOT_STAT_TX_PMAC_BROADCAST]; + mac_stats->MulticastFramesReceivedOK = s[OCELOT_STAT_RX_PMAC_MULTICAST]; + mac_stats->BroadcastFramesReceivedOK = s[OCELOT_STAT_RX_PMAC_BROADCAST]; + mac_stats->FrameTooLongErrors = s[OCELOT_STAT_RX_PMAC_LONGS]; + /* Sadly, C_RX_CRC is the sum of FCS and alignment errors, they are not + * counted individually. 
+ */ + mac_stats->FrameCheckSequenceErrors = s[OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS]; + mac_stats->AlignmentErrors = s[OCELOT_STAT_RX_PMAC_CRC_ALIGN_ERRS]; +} + void ocelot_port_get_eth_mac_stats(struct ocelot *ocelot, int port, struct ethtool_eth_mac_stats *mac_stats) { - ocelot_port_stats_run(ocelot, port, mac_stats, - ocelot_port_mac_stats_cb); + struct net_device *dev; + + switch (mac_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + ocelot_port_stats_run(ocelot, port, mac_stats, + ocelot_port_mac_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (ocelot->mm_supported) + ocelot_port_stats_run(ocelot, port, mac_stats, + ocelot_port_pmac_mac_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + dev = ocelot->ops->port_to_netdev(ocelot, port); + ethtool_aggregate_mac_stats(dev, mac_stats); + break; + } } EXPORT_SYMBOL_GPL(ocelot_port_get_eth_mac_stats); @@ -526,11 +787,35 @@ static void ocelot_port_phy_stats_cb(struct ocelot *ocelot, int port, void *priv phy_stats->SymbolErrorDuringCarrier = s[OCELOT_STAT_RX_SYM_ERRS]; } +static void ocelot_port_pmac_phy_stats_cb(struct ocelot *ocelot, int port, + void *priv) +{ + u64 *s = &ocelot->stats[port * OCELOT_NUM_STATS]; + struct ethtool_eth_phy_stats *phy_stats = priv; + + phy_stats->SymbolErrorDuringCarrier = s[OCELOT_STAT_RX_PMAC_SYM_ERRS]; +} + void ocelot_port_get_eth_phy_stats(struct ocelot *ocelot, int port, struct ethtool_eth_phy_stats *phy_stats) { - ocelot_port_stats_run(ocelot, port, phy_stats, - ocelot_port_phy_stats_cb); + struct net_device *dev; + + switch (phy_stats->src) { + case ETHTOOL_MAC_STATS_SRC_EMAC: + ocelot_port_stats_run(ocelot, port, phy_stats, + ocelot_port_phy_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_PMAC: + if (ocelot->mm_supported) + ocelot_port_stats_run(ocelot, port, phy_stats, + ocelot_port_pmac_phy_stats_cb); + break; + case ETHTOOL_MAC_STATS_SRC_AGGREGATE: + dev = ocelot->ops->port_to_netdev(ocelot, port); + ethtool_aggregate_phy_stats(dev, phy_stats); + break; + } } EXPORT_SYMBOL_GPL(ocelot_port_get_eth_phy_stats); @@ -602,16 +887,19 @@ EXPORT_SYMBOL(ocelot_port_get_stats64); static int ocelot_prepare_stats_regions(struct ocelot *ocelot) { struct ocelot_stats_region *region = NULL; + const struct ocelot_stat_layout *layout; unsigned int last = 0; int i; INIT_LIST_HEAD(&ocelot->stats_regions); + layout = ocelot_get_stats_layout(ocelot); + for (i = 0; i < OCELOT_NUM_STATS; i++) { - if (!ocelot_stats_layout[i].reg) + if (!layout[i].reg) continue; - if (region && ocelot_stats_layout[i].reg == last + 4) { + if (region && layout[i].reg == last + 4) { region->count++; } else { region = devm_kzalloc(ocelot->dev, sizeof(*region), @@ -620,17 +908,17 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot) return -ENOMEM; /* enum ocelot_stat must be kept sorted in the same - * order as ocelot_stats_layout[i].reg in order to have - * efficient bulking + * order as layout[i].reg in order to have efficient + * bulking */ - WARN_ON(last >= ocelot_stats_layout[i].reg); + WARN_ON(last >= layout[i].reg); - region->base = ocelot_stats_layout[i].reg; + region->base = layout[i].reg; region->count = 1; list_add_tail(®ion->node, &ocelot->stats_regions); } - last = ocelot_stats_layout[i].reg; + last = layout[i].reg; } list_for_each_entry(region, &ocelot->stats_regions, node) { diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c index b097fd4a4061..7388c3b0535c 100644 --- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c +++ 
b/drivers/net/ethernet/mscc/ocelot_vsc7514.c @@ -6,7 +6,6 @@ */ #include <linux/dsa/ocelot.h> #include <linux/interrupt.h> -#include <linux/iopoll.h> #include <linux/module.h> #include <linux/of_net.h> #include <linux/netdevice.h> @@ -17,6 +16,7 @@ #include <linux/skbuff.h> #include <net/switchdev.h> +#include <soc/mscc/ocelot.h> #include <soc/mscc/ocelot_vcap.h> #include <soc/mscc/ocelot_hsio.h> #include <soc/mscc/vsc7514_regs.h> @@ -26,80 +26,6 @@ #define VSC7514_VCAP_POLICER_BASE 128 #define VSC7514_VCAP_POLICER_MAX 191 -#define MEM_INIT_SLEEP_US 1000 -#define MEM_INIT_TIMEOUT_US 100000 - -static const u32 *ocelot_regmap[TARGET_MAX] = { - [ANA] = vsc7514_ana_regmap, - [QS] = vsc7514_qs_regmap, - [QSYS] = vsc7514_qsys_regmap, - [REW] = vsc7514_rew_regmap, - [SYS] = vsc7514_sys_regmap, - [S0] = vsc7514_vcap_regmap, - [S1] = vsc7514_vcap_regmap, - [S2] = vsc7514_vcap_regmap, - [PTP] = vsc7514_ptp_regmap, - [DEV_GMII] = vsc7514_dev_gmii_regmap, -}; - -static const struct reg_field ocelot_regfields[REGFIELD_MAX] = { - [ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 11, 11), - [ANA_ADVLEARN_LEARN_MIRROR] = REG_FIELD(ANA_ADVLEARN, 0, 10), - [ANA_ANEVENTS_MSTI_DROP] = REG_FIELD(ANA_ANEVENTS, 27, 27), - [ANA_ANEVENTS_ACLKILL] = REG_FIELD(ANA_ANEVENTS, 26, 26), - [ANA_ANEVENTS_ACLUSED] = REG_FIELD(ANA_ANEVENTS, 25, 25), - [ANA_ANEVENTS_AUTOAGE] = REG_FIELD(ANA_ANEVENTS, 24, 24), - [ANA_ANEVENTS_VS2TTL1] = REG_FIELD(ANA_ANEVENTS, 23, 23), - [ANA_ANEVENTS_STORM_DROP] = REG_FIELD(ANA_ANEVENTS, 22, 22), - [ANA_ANEVENTS_LEARN_DROP] = REG_FIELD(ANA_ANEVENTS, 21, 21), - [ANA_ANEVENTS_AGED_ENTRY] = REG_FIELD(ANA_ANEVENTS, 20, 20), - [ANA_ANEVENTS_CPU_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 19, 19), - [ANA_ANEVENTS_AUTO_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 18, 18), - [ANA_ANEVENTS_LEARN_REMOVE] = REG_FIELD(ANA_ANEVENTS, 17, 17), - [ANA_ANEVENTS_AUTO_LEARNED] = REG_FIELD(ANA_ANEVENTS, 16, 16), - [ANA_ANEVENTS_AUTO_MOVED] = REG_FIELD(ANA_ANEVENTS, 15, 15), - [ANA_ANEVENTS_DROPPED] = REG_FIELD(ANA_ANEVENTS, 14, 14), - [ANA_ANEVENTS_CLASSIFIED_DROP] = REG_FIELD(ANA_ANEVENTS, 13, 13), - [ANA_ANEVENTS_CLASSIFIED_COPY] = REG_FIELD(ANA_ANEVENTS, 12, 12), - [ANA_ANEVENTS_VLAN_DISCARD] = REG_FIELD(ANA_ANEVENTS, 11, 11), - [ANA_ANEVENTS_FWD_DISCARD] = REG_FIELD(ANA_ANEVENTS, 10, 10), - [ANA_ANEVENTS_MULTICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 9, 9), - [ANA_ANEVENTS_UNICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 8, 8), - [ANA_ANEVENTS_DEST_KNOWN] = REG_FIELD(ANA_ANEVENTS, 7, 7), - [ANA_ANEVENTS_BUCKET3_MATCH] = REG_FIELD(ANA_ANEVENTS, 6, 6), - [ANA_ANEVENTS_BUCKET2_MATCH] = REG_FIELD(ANA_ANEVENTS, 5, 5), - [ANA_ANEVENTS_BUCKET1_MATCH] = REG_FIELD(ANA_ANEVENTS, 4, 4), - [ANA_ANEVENTS_BUCKET0_MATCH] = REG_FIELD(ANA_ANEVENTS, 3, 3), - [ANA_ANEVENTS_CPU_OPERATION] = REG_FIELD(ANA_ANEVENTS, 2, 2), - [ANA_ANEVENTS_DMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 1, 1), - [ANA_ANEVENTS_SMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 0, 0), - [ANA_TABLES_MACACCESS_B_DOM] = REG_FIELD(ANA_TABLES_MACACCESS, 18, 18), - [ANA_TABLES_MACTINDX_BUCKET] = REG_FIELD(ANA_TABLES_MACTINDX, 10, 11), - [ANA_TABLES_MACTINDX_M_INDEX] = REG_FIELD(ANA_TABLES_MACTINDX, 0, 9), - [QSYS_TIMED_FRAME_ENTRY_TFRM_VLD] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 20, 20), - [QSYS_TIMED_FRAME_ENTRY_TFRM_FP] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 8, 19), - [QSYS_TIMED_FRAME_ENTRY_TFRM_PORTNO] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 4, 7), - [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_SEL] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 1, 3), - [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_T] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 
0, 0), - [SYS_RESET_CFG_CORE_ENA] = REG_FIELD(SYS_RESET_CFG, 2, 2), - [SYS_RESET_CFG_MEM_ENA] = REG_FIELD(SYS_RESET_CFG, 1, 1), - [SYS_RESET_CFG_MEM_INIT] = REG_FIELD(SYS_RESET_CFG, 0, 0), - /* Replicated per number of ports (12), register size 4 per port */ - [QSYS_SWITCH_PORT_MODE_PORT_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 14, 14, 12, 4), - [QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 11, 13, 12, 4), - [QSYS_SWITCH_PORT_MODE_YEL_RSRVD] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 10, 10, 12, 4), - [QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 9, 9, 12, 4), - [QSYS_SWITCH_PORT_MODE_TX_PFC_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 1, 8, 12, 4), - [QSYS_SWITCH_PORT_MODE_TX_PFC_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 0, 0, 12, 4), - [SYS_PORT_MODE_DATA_WO_TS] = REG_FIELD_ID(SYS_PORT_MODE, 5, 6, 12, 4), - [SYS_PORT_MODE_INCL_INJ_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 3, 4, 12, 4), - [SYS_PORT_MODE_INCL_XTR_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 1, 2, 12, 4), - [SYS_PORT_MODE_INCL_HDR_ERR] = REG_FIELD_ID(SYS_PORT_MODE, 0, 0, 12, 4), - [SYS_PAUSE_CFG_PAUSE_START] = REG_FIELD_ID(SYS_PAUSE_CFG, 10, 18, 12, 4), - [SYS_PAUSE_CFG_PAUSE_STOP] = REG_FIELD_ID(SYS_PAUSE_CFG, 1, 9, 12, 4), - [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4), -}; - static void ocelot_pll5_init(struct ocelot *ocelot) { /* Configure PLL5. This will need a proper CCF driver @@ -133,11 +59,11 @@ static int ocelot_chip_init(struct ocelot *ocelot, const struct ocelot_ops *ops) { int ret; - ocelot->map = ocelot_regmap; + ocelot->map = vsc7514_regmap; ocelot->num_mact_rows = 1024; ocelot->ops = ops; - ret = ocelot_regfields_init(ocelot, ocelot_regfields); + ret = ocelot_regfields_init(ocelot, vsc7514_regfields); if (ret) return ret; @@ -190,73 +116,6 @@ static const struct of_device_id mscc_ocelot_match[] = { }; MODULE_DEVICE_TABLE(of, mscc_ocelot_match); -static int ocelot_mem_init_status(struct ocelot *ocelot) -{ - unsigned int val; - int err; - - err = regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], - &val); - - return err ?: val; -} - -static int ocelot_reset(struct ocelot *ocelot) -{ - int err; - u32 val; - - err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1); - if (err) - return err; - - err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1); - if (err) - return err; - - /* MEM_INIT is a self-clearing bit. Wait for it to be cleared (should be - * 100us) before enabling the switch core. 
- */ - err = readx_poll_timeout(ocelot_mem_init_status, ocelot, val, !val, - MEM_INIT_SLEEP_US, MEM_INIT_TIMEOUT_US); - if (err) - return err; - - err = regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1); - if (err) - return err; - - return regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1); -} - -/* Watermark encode - * Bit 8: Unit; 0:1, 1:16 - * Bit 7-0: Value to be multiplied with unit - */ -static u16 ocelot_wm_enc(u16 value) -{ - WARN_ON(value >= 16 * BIT(8)); - - if (value >= BIT(8)) - return BIT(8) | (value / 16); - - return value; -} - -static u16 ocelot_wm_dec(u16 wm) -{ - if (wm & BIT(8)) - return (wm & GENMASK(7, 0)) * 16; - - return wm; -} - -static void ocelot_wm_stat(u32 val, u32 *inuse, u32 *maxuse) -{ - *inuse = (val & GENMASK(23, 12)) >> 12; - *maxuse = val & GENMASK(11, 0); -} - static const struct ocelot_ops ocelot_ops = { .reset = ocelot_reset, .wm_enc = ocelot_wm_enc, @@ -266,49 +125,6 @@ static const struct ocelot_ops ocelot_ops = { .netdev_to_port = ocelot_netdev_to_port, }; -static struct vcap_props vsc7514_vcap_props[] = { - [VCAP_ES0] = { - .action_type_width = 0, - .action_table = { - [ES0_ACTION_TYPE_NORMAL] = { - .width = 73, /* HIT_STICKY not included */ - .count = 1, - }, - }, - .target = S0, - .keys = vsc7514_vcap_es0_keys, - .actions = vsc7514_vcap_es0_actions, - }, - [VCAP_IS1] = { - .action_type_width = 0, - .action_table = { - [IS1_ACTION_TYPE_NORMAL] = { - .width = 78, /* HIT_STICKY not included */ - .count = 4, - }, - }, - .target = S1, - .keys = vsc7514_vcap_is1_keys, - .actions = vsc7514_vcap_is1_actions, - }, - [VCAP_IS2] = { - .action_type_width = 1, - .action_table = { - [IS2_ACTION_TYPE_NORMAL] = { - .width = 49, - .count = 2 - }, - [IS2_ACTION_TYPE_SMAC_SIP] = { - .width = 6, - .count = 4 - }, - }, - .target = S2, - .keys = vsc7514_vcap_is2_keys, - .actions = vsc7514_vcap_is2_actions, - }, -}; - static struct ptp_clock_info ocelot_ptp_clock_info = { .owner = THIS_MODULE, .name = "ocelot ptp", diff --git a/drivers/net/ethernet/mscc/vsc7514_regs.c b/drivers/net/ethernet/mscc/vsc7514_regs.c index 9d2d3e13cacf..ef6fd3f6be30 100644 --- a/drivers/net/ethernet/mscc/vsc7514_regs.c +++ b/drivers/net/ethernet/mscc/vsc7514_regs.c @@ -9,7 +9,66 @@ #include <soc/mscc/vsc7514_regs.h> #include "ocelot.h" -const u32 vsc7514_ana_regmap[] = { +const struct reg_field vsc7514_regfields[REGFIELD_MAX] = { + [ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 11, 11), + [ANA_ADVLEARN_LEARN_MIRROR] = REG_FIELD(ANA_ADVLEARN, 0, 10), + [ANA_ANEVENTS_MSTI_DROP] = REG_FIELD(ANA_ANEVENTS, 27, 27), + [ANA_ANEVENTS_ACLKILL] = REG_FIELD(ANA_ANEVENTS, 26, 26), + [ANA_ANEVENTS_ACLUSED] = REG_FIELD(ANA_ANEVENTS, 25, 25), + [ANA_ANEVENTS_AUTOAGE] = REG_FIELD(ANA_ANEVENTS, 24, 24), + [ANA_ANEVENTS_VS2TTL1] = REG_FIELD(ANA_ANEVENTS, 23, 23), + [ANA_ANEVENTS_STORM_DROP] = REG_FIELD(ANA_ANEVENTS, 22, 22), + [ANA_ANEVENTS_LEARN_DROP] = REG_FIELD(ANA_ANEVENTS, 21, 21), + [ANA_ANEVENTS_AGED_ENTRY] = REG_FIELD(ANA_ANEVENTS, 20, 20), + [ANA_ANEVENTS_CPU_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 19, 19), + [ANA_ANEVENTS_AUTO_LEARN_FAILED] = REG_FIELD(ANA_ANEVENTS, 18, 18), + [ANA_ANEVENTS_LEARN_REMOVE] = REG_FIELD(ANA_ANEVENTS, 17, 17), + [ANA_ANEVENTS_AUTO_LEARNED] = REG_FIELD(ANA_ANEVENTS, 16, 16), + [ANA_ANEVENTS_AUTO_MOVED] = REG_FIELD(ANA_ANEVENTS, 15, 15), + [ANA_ANEVENTS_DROPPED] = REG_FIELD(ANA_ANEVENTS, 14, 14), + [ANA_ANEVENTS_CLASSIFIED_DROP] = REG_FIELD(ANA_ANEVENTS, 13, 13), + [ANA_ANEVENTS_CLASSIFIED_COPY] = REG_FIELD(ANA_ANEVENTS, 12, 12), + 
[ANA_ANEVENTS_VLAN_DISCARD] = REG_FIELD(ANA_ANEVENTS, 11, 11), + [ANA_ANEVENTS_FWD_DISCARD] = REG_FIELD(ANA_ANEVENTS, 10, 10), + [ANA_ANEVENTS_MULTICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 9, 9), + [ANA_ANEVENTS_UNICAST_FLOOD] = REG_FIELD(ANA_ANEVENTS, 8, 8), + [ANA_ANEVENTS_DEST_KNOWN] = REG_FIELD(ANA_ANEVENTS, 7, 7), + [ANA_ANEVENTS_BUCKET3_MATCH] = REG_FIELD(ANA_ANEVENTS, 6, 6), + [ANA_ANEVENTS_BUCKET2_MATCH] = REG_FIELD(ANA_ANEVENTS, 5, 5), + [ANA_ANEVENTS_BUCKET1_MATCH] = REG_FIELD(ANA_ANEVENTS, 4, 4), + [ANA_ANEVENTS_BUCKET0_MATCH] = REG_FIELD(ANA_ANEVENTS, 3, 3), + [ANA_ANEVENTS_CPU_OPERATION] = REG_FIELD(ANA_ANEVENTS, 2, 2), + [ANA_ANEVENTS_DMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 1, 1), + [ANA_ANEVENTS_SMAC_LOOKUP] = REG_FIELD(ANA_ANEVENTS, 0, 0), + [ANA_TABLES_MACACCESS_B_DOM] = REG_FIELD(ANA_TABLES_MACACCESS, 18, 18), + [ANA_TABLES_MACTINDX_BUCKET] = REG_FIELD(ANA_TABLES_MACTINDX, 10, 11), + [ANA_TABLES_MACTINDX_M_INDEX] = REG_FIELD(ANA_TABLES_MACTINDX, 0, 9), + [QSYS_TIMED_FRAME_ENTRY_TFRM_VLD] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 20, 20), + [QSYS_TIMED_FRAME_ENTRY_TFRM_FP] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 8, 19), + [QSYS_TIMED_FRAME_ENTRY_TFRM_PORTNO] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 4, 7), + [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_SEL] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 1, 3), + [QSYS_TIMED_FRAME_ENTRY_TFRM_TM_T] = REG_FIELD(QSYS_TIMED_FRAME_ENTRY, 0, 0), + [SYS_RESET_CFG_CORE_ENA] = REG_FIELD(SYS_RESET_CFG, 2, 2), + [SYS_RESET_CFG_MEM_ENA] = REG_FIELD(SYS_RESET_CFG, 1, 1), + [SYS_RESET_CFG_MEM_INIT] = REG_FIELD(SYS_RESET_CFG, 0, 0), + /* Replicated per number of ports (12), register size 4 per port */ + [QSYS_SWITCH_PORT_MODE_PORT_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 14, 14, 12, 4), + [QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 11, 13, 12, 4), + [QSYS_SWITCH_PORT_MODE_YEL_RSRVD] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 10, 10, 12, 4), + [QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 9, 9, 12, 4), + [QSYS_SWITCH_PORT_MODE_TX_PFC_ENA] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 1, 8, 12, 4), + [QSYS_SWITCH_PORT_MODE_TX_PFC_MODE] = REG_FIELD_ID(QSYS_SWITCH_PORT_MODE, 0, 0, 12, 4), + [SYS_PORT_MODE_DATA_WO_TS] = REG_FIELD_ID(SYS_PORT_MODE, 5, 6, 12, 4), + [SYS_PORT_MODE_INCL_INJ_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 3, 4, 12, 4), + [SYS_PORT_MODE_INCL_XTR_HDR] = REG_FIELD_ID(SYS_PORT_MODE, 1, 2, 12, 4), + [SYS_PORT_MODE_INCL_HDR_ERR] = REG_FIELD_ID(SYS_PORT_MODE, 0, 0, 12, 4), + [SYS_PAUSE_CFG_PAUSE_START] = REG_FIELD_ID(SYS_PAUSE_CFG, 10, 18, 12, 4), + [SYS_PAUSE_CFG_PAUSE_STOP] = REG_FIELD_ID(SYS_PAUSE_CFG, 1, 9, 12, 4), + [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4), +}; +EXPORT_SYMBOL(vsc7514_regfields); + +static const u32 vsc7514_ana_regmap[] = { REG(ANA_ADVLEARN, 0x009000), REG(ANA_VLANMASK, 0x009004), REG(ANA_PORT_B_DOMAIN, 0x009008), @@ -89,9 +148,8 @@ const u32 vsc7514_ana_regmap[] = { REG(ANA_POL_HYST, 0x008bec), REG(ANA_POL_MISC_CFG, 0x008bf0), }; -EXPORT_SYMBOL(vsc7514_ana_regmap); -const u32 vsc7514_qs_regmap[] = { +static const u32 vsc7514_qs_regmap[] = { REG(QS_XTR_GRP_CFG, 0x000000), REG(QS_XTR_RD, 0x000008), REG(QS_XTR_FRM_PRUNING, 0x000010), @@ -105,9 +163,8 @@ const u32 vsc7514_qs_regmap[] = { REG(QS_INJ_ERR, 0x000040), REG(QS_INH_DBG, 0x000048), }; -EXPORT_SYMBOL(vsc7514_qs_regmap); -const u32 vsc7514_qsys_regmap[] = { +static const u32 vsc7514_qsys_regmap[] = { REG(QSYS_PORT_MODE, 0x011200), REG(QSYS_SWITCH_PORT_MODE, 0x011234), REG(QSYS_STAT_CNT_CFG, 0x011264), @@ -150,9 +207,8 @@ 
const u32 vsc7514_qsys_regmap[] = { REG(QSYS_SE_STATE, 0x00004c), REG(QSYS_HSCH_MISC_CFG, 0x011388), }; -EXPORT_SYMBOL(vsc7514_qsys_regmap); -const u32 vsc7514_rew_regmap[] = { +static const u32 vsc7514_rew_regmap[] = { REG(REW_PORT_VLAN_CFG, 0x000000), REG(REW_TAG_CFG, 0x000004), REG(REW_PORT_CFG, 0x000008), @@ -165,9 +221,8 @@ const u32 vsc7514_rew_regmap[] = { REG(REW_STAT_CFG, 0x000890), REG(REW_PPT, 0x000680), }; -EXPORT_SYMBOL(vsc7514_rew_regmap); -const u32 vsc7514_sys_regmap[] = { +static const u32 vsc7514_sys_regmap[] = { REG(SYS_COUNT_RX_OCTETS, 0x000000), REG(SYS_COUNT_RX_UNICAST, 0x000004), REG(SYS_COUNT_RX_MULTICAST, 0x000008), @@ -288,9 +343,8 @@ const u32 vsc7514_sys_regmap[] = { REG(SYS_PTP_NXT, 0x0006c0), REG(SYS_PTP_CFG, 0x0006c4), }; -EXPORT_SYMBOL(vsc7514_sys_regmap); -const u32 vsc7514_vcap_regmap[] = { +static const u32 vsc7514_vcap_regmap[] = { /* VCAP_CORE_CFG */ REG(VCAP_CORE_UPDATE_CTRL, 0x000000), REG(VCAP_CORE_MV_CFG, 0x000004), @@ -312,9 +366,8 @@ const u32 vsc7514_vcap_regmap[] = { REG(VCAP_CONST_CORE_CNT, 0x0003b8), REG(VCAP_CONST_IF_CNT, 0x0003bc), }; -EXPORT_SYMBOL(vsc7514_vcap_regmap); -const u32 vsc7514_ptp_regmap[] = { +static const u32 vsc7514_ptp_regmap[] = { REG(PTP_PIN_CFG, 0x000000), REG(PTP_PIN_TOD_SEC_MSB, 0x000004), REG(PTP_PIN_TOD_SEC_LSB, 0x000008), @@ -325,9 +378,8 @@ const u32 vsc7514_ptp_regmap[] = { REG(PTP_CLK_CFG_ADJ_CFG, 0x0000a4), REG(PTP_CLK_CFG_ADJ_FREQ, 0x0000a8), }; -EXPORT_SYMBOL(vsc7514_ptp_regmap); -const u32 vsc7514_dev_gmii_regmap[] = { +static const u32 vsc7514_dev_gmii_regmap[] = { REG(DEV_CLOCK_CFG, 0x0), REG(DEV_PORT_MISC, 0x4), REG(DEV_EVENTS, 0x8), @@ -368,9 +420,22 @@ const u32 vsc7514_dev_gmii_regmap[] = { REG(DEV_PCS_FX100_CFG, 0x94), REG(DEV_PCS_FX100_STATUS, 0x98), }; -EXPORT_SYMBOL(vsc7514_dev_gmii_regmap); -const struct vcap_field vsc7514_vcap_es0_keys[] = { +const u32 *vsc7514_regmap[TARGET_MAX] = { + [ANA] = vsc7514_ana_regmap, + [QS] = vsc7514_qs_regmap, + [QSYS] = vsc7514_qsys_regmap, + [REW] = vsc7514_rew_regmap, + [SYS] = vsc7514_sys_regmap, + [S0] = vsc7514_vcap_regmap, + [S1] = vsc7514_vcap_regmap, + [S2] = vsc7514_vcap_regmap, + [PTP] = vsc7514_ptp_regmap, + [DEV_GMII] = vsc7514_dev_gmii_regmap, +}; +EXPORT_SYMBOL(vsc7514_regmap); + +static const struct vcap_field vsc7514_vcap_es0_keys[] = { [VCAP_ES0_EGR_PORT] = { 0, 4 }, [VCAP_ES0_IGR_PORT] = { 4, 4 }, [VCAP_ES0_RSV] = { 8, 2 }, @@ -380,9 +445,8 @@ const struct vcap_field vsc7514_vcap_es0_keys[] = { [VCAP_ES0_DP] = { 24, 1 }, [VCAP_ES0_PCP] = { 25, 3 }, }; -EXPORT_SYMBOL(vsc7514_vcap_es0_keys); -const struct vcap_field vsc7514_vcap_es0_actions[] = { +static const struct vcap_field vsc7514_vcap_es0_actions[] = { [VCAP_ES0_ACT_PUSH_OUTER_TAG] = { 0, 2 }, [VCAP_ES0_ACT_PUSH_INNER_TAG] = { 2, 1 }, [VCAP_ES0_ACT_TAG_A_TPID_SEL] = { 3, 2 }, @@ -402,9 +466,8 @@ const struct vcap_field vsc7514_vcap_es0_actions[] = { [VCAP_ES0_ACT_RSV] = { 49, 24 }, [VCAP_ES0_ACT_HIT_STICKY] = { 73, 1 }, }; -EXPORT_SYMBOL(vsc7514_vcap_es0_actions); -const struct vcap_field vsc7514_vcap_is1_keys[] = { +static const struct vcap_field vsc7514_vcap_is1_keys[] = { [VCAP_IS1_HK_TYPE] = { 0, 1 }, [VCAP_IS1_HK_LOOKUP] = { 1, 2 }, [VCAP_IS1_HK_IGR_PORT_MASK] = { 3, 12 }, @@ -454,9 +517,8 @@ const struct vcap_field vsc7514_vcap_is1_keys[] = { [VCAP_IS1_HK_IP4_L4_RNG] = { 148, 8 }, [VCAP_IS1_HK_IP4_IP_PAYLOAD_S1_5TUPLE] = { 156, 32 }, }; -EXPORT_SYMBOL(vsc7514_vcap_is1_keys); -const struct vcap_field vsc7514_vcap_is1_actions[] = { +static const struct vcap_field vsc7514_vcap_is1_actions[] = 
{ [VCAP_IS1_ACT_DSCP_ENA] = { 0, 1 }, [VCAP_IS1_ACT_DSCP_VAL] = { 1, 6 }, [VCAP_IS1_ACT_QOS_ENA] = { 7, 1 }, @@ -479,9 +541,8 @@ const struct vcap_field vsc7514_vcap_is1_actions[] = { [VCAP_IS1_ACT_CUSTOM_ACE_TYPE_ENA] = { 74, 4 }, [VCAP_IS1_ACT_HIT_STICKY] = { 78, 1 }, }; -EXPORT_SYMBOL(vsc7514_vcap_is1_actions); -const struct vcap_field vsc7514_vcap_is2_keys[] = { +static const struct vcap_field vsc7514_vcap_is2_keys[] = { /* Common: 46 bits */ [VCAP_IS2_TYPE] = { 0, 4 }, [VCAP_IS2_HK_FIRST] = { 4, 1 }, @@ -560,9 +621,8 @@ const struct vcap_field vsc7514_vcap_is2_keys[] = { [VCAP_IS2_HK_OAM_CCM_CNTS_EQ0] = { 186, 1 }, [VCAP_IS2_HK_OAM_IS_Y1731] = { 187, 1 }, }; -EXPORT_SYMBOL(vsc7514_vcap_is2_keys); -const struct vcap_field vsc7514_vcap_is2_actions[] = { +static const struct vcap_field vsc7514_vcap_is2_actions[] = { [VCAP_IS2_ACT_HIT_ME_ONCE] = { 0, 1 }, [VCAP_IS2_ACT_CPU_COPY_ENA] = { 1, 1 }, [VCAP_IS2_ACT_CPU_QU_NUM] = { 2, 3 }, @@ -579,4 +639,47 @@ const struct vcap_field vsc7514_vcap_is2_actions[] = { [VCAP_IS2_ACT_ACL_ID] = { 43, 6 }, [VCAP_IS2_ACT_HIT_CNT] = { 49, 32 }, }; -EXPORT_SYMBOL(vsc7514_vcap_is2_actions); + +struct vcap_props vsc7514_vcap_props[] = { + [VCAP_ES0] = { + .action_type_width = 0, + .action_table = { + [ES0_ACTION_TYPE_NORMAL] = { + .width = 73, /* HIT_STICKY not included */ + .count = 1, + }, + }, + .target = S0, + .keys = vsc7514_vcap_es0_keys, + .actions = vsc7514_vcap_es0_actions, + }, + [VCAP_IS1] = { + .action_type_width = 0, + .action_table = { + [IS1_ACTION_TYPE_NORMAL] = { + .width = 78, /* HIT_STICKY not included */ + .count = 4, + }, + }, + .target = S1, + .keys = vsc7514_vcap_is1_keys, + .actions = vsc7514_vcap_is1_actions, + }, + [VCAP_IS2] = { + .action_type_width = 1, + .action_table = { + [IS2_ACTION_TYPE_NORMAL] = { + .width = 49, + .count = 2 + }, + [IS2_ACTION_TYPE_SMAC_SIP] = { + .width = 6, + .count = 4 + }, + }, + .target = S2, + .keys = vsc7514_vcap_is2_keys, + .actions = vsc7514_vcap_is2_actions, + }, +}; +EXPORT_SYMBOL(vsc7514_vcap_props); diff --git a/drivers/net/ethernet/netronome/Kconfig b/drivers/net/ethernet/netronome/Kconfig index e785c00b5845..d03d6e96f730 100644 --- a/drivers/net/ethernet/netronome/Kconfig +++ b/drivers/net/ethernet/netronome/Kconfig @@ -18,7 +18,7 @@ if NET_VENDOR_NETRONOME config NFP tristate "Netronome(R) NFP4000/NFP6000 NIC driver" - depends on PCI && PCI_MSI + depends on PCI_MSI depends on VXLAN || VXLAN=n depends on TLS && TLS_DEVICE || TLS_DEVICE=n select NET_DEVLINK diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index 8a250214e289..808599b8066e 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -80,6 +80,8 @@ nfp-objs += \ abm/main.o endif -nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o +nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o nfdk/ipsec.o nfp-$(CONFIG_NFP_DEBUG) += nfp_net_debugfs.o + +nfp-$(CONFIG_DCB) += nic/dcb.o diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c index 063cd371033a..c0dcce8ae437 100644 --- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c +++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c @@ -10,6 +10,7 @@ #include <linux/ktime.h> #include <net/xfrm.h> +#include "../nfpcore/nfp_dev.h" #include "../nfp_net_ctrl.h" #include "../nfp_net.h" #include "crypto.h" @@ -265,7 +266,8 @@ static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len) } } -static int 
nfp_net_xfrm_add_state(struct xfrm_state *x) +static int nfp_net_xfrm_add_state(struct xfrm_state *x, + struct netlink_ext_ack *extack) { struct net_device *netdev = x->xso.dev; struct nfp_ipsec_cfg_mssg msg = {}; @@ -286,7 +288,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) cfg->ctrl_word.mode = NFP_IPSEC_PROTMODE_TRANSPORT; break; default: - nn_err(nn, "Unsupported mode for xfrm offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported mode for xfrm offload"); return -EINVAL; } @@ -298,17 +300,17 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) cfg->ctrl_word.proto = NFP_IPSEC_PROTOCOL_AH; break; default: - nn_err(nn, "Unsupported protocol for xfrm offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for xfrm offload"); return -EINVAL; } if (x->props.flags & XFRM_STATE_ESN) { - nn_err(nn, "Unsupported XFRM_REPLAY_MODE_ESN for xfrm offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported XFRM_REPLAY_MODE_ESN for xfrm offload"); return -EINVAL; } if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { - nn_err(nn, "Unsupported xfrm offload tyoe\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported xfrm offload type"); return -EINVAL; } @@ -325,7 +327,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) if (x->aead) { trunc_len = -1; } else { - nn_err(nn, "Unsupported authentication algorithm\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm"); return -EINVAL; } break; @@ -334,6 +336,10 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) trunc_len = -1; break; case SADB_AALG_MD5HMAC: + if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) { + NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm"); + return -EINVAL; + } set_md5hmac(cfg, &trunc_len); break; case SADB_AALG_SHA1HMAC: @@ -349,19 +355,19 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) set_sha2_512hmac(cfg, &trunc_len); break; default: - nn_err(nn, "Unsupported authentication algorithm\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm"); return -EINVAL; } if (!trunc_len) { - nn_err(nn, "Unsupported authentication algorithm trunc length\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm trunc length"); return -EINVAL; } if (x->aalg) { key_len = DIV_ROUND_UP(x->aalg->alg_key_len, BITS_PER_BYTE); if (key_len > sizeof(cfg->auth_key)) { - nn_err(nn, "Insufficient space for offloaded auth key\n"); + NL_SET_ERR_MSG_MOD(extack, "Insufficient space for offloaded auth key"); return -EINVAL; } for (i = 0; i < key_len / sizeof(cfg->auth_key[0]) ; i++) @@ -377,18 +383,22 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_NULL; break; case SADB_EALG_3DESCBC: + if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) { + NL_SET_ERR_MSG_MOD(extack, "Unsupported encryption algorithm for offload"); + return -EINVAL; + } cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_3DES; break; case SADB_X_EALG_AES_GCM_ICV16: case SADB_X_EALG_NULL_AES_GMAC: if (!x->aead) { - nn_err(nn, "Invalid AES key data\n"); + NL_SET_ERR_MSG_MOD(extack, "Invalid AES key data"); return -EINVAL; } if (x->aead->alg_icv_len != 128) { - nn_err(nn, "ICV must be 128bit with SADB_X_EALG_AES_GCM_ICV16\n"); + NL_SET_ERR_MSG_MOD(extack, "ICV must be 128bit with SADB_X_EALG_AES_GCM_ICV16"); return -EINVAL; } cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CTR; @@ -396,23 +406,23 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) /* Aead->alg_key_len includes 32-bit salt */ if (set_aes_keylen(cfg, 
x->props.ealgo, x->aead->alg_key_len - 32)) { - nn_err(nn, "Unsupported AES key length %d\n", x->aead->alg_key_len); + NL_SET_ERR_MSG_MOD(extack, "Unsupported AES key length"); return -EINVAL; } break; case SADB_X_EALG_AESCBC: cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; if (!x->ealg) { - nn_err(nn, "Invalid AES key data\n"); + NL_SET_ERR_MSG_MOD(extack, "Invalid AES key data"); return -EINVAL; } if (set_aes_keylen(cfg, x->props.ealgo, x->ealg->alg_key_len) < 0) { - nn_err(nn, "Unsupported AES key length %d\n", x->ealg->alg_key_len); + NL_SET_ERR_MSG_MOD(extack, "Unsupported AES key length"); return -EINVAL; } break; default: - nn_err(nn, "Unsupported encryption algorithm for offload\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported encryption algorithm for offload"); return -EINVAL; } @@ -423,7 +433,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) key_len -= salt_len; if (key_len > sizeof(cfg->ciph_key)) { - nn_err(nn, "aead: Insufficient space for offloaded key\n"); + NL_SET_ERR_MSG_MOD(extack, "aead: Insufficient space for offloaded key"); return -EINVAL; } @@ -439,7 +449,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) key_len = DIV_ROUND_UP(x->ealg->alg_key_len, BITS_PER_BYTE); if (key_len > sizeof(cfg->ciph_key)) { - nn_err(nn, "ealg: Insufficient space for offloaded key\n"); + NL_SET_ERR_MSG_MOD(extack, "ealg: Insufficient space for offloaded key"); return -EINVAL; } for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]) ; i++) @@ -462,7 +472,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) } break; default: - nn_err(nn, "Unsupported address family\n"); + NL_SET_ERR_MSG_MOD(extack, "Unsupported address family"); return -EINVAL; } @@ -477,7 +487,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) err = xa_alloc(&nn->xa_ipsec, &saidx, x, XA_LIMIT(0, NFP_NET_IPSEC_MAX_SA_CNT - 1), GFP_KERNEL); if (err < 0) { - nn_err(nn, "Unable to get sa_data number for IPsec\n"); + NL_SET_ERR_MSG_MOD(extack, "Unable to get sa_data number for IPsec"); return err; } @@ -488,7 +498,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x) sizeof(msg), nfp_net_ipsec_cfg); if (err) { xa_erase(&nn->xa_ipsec, saidx); - nn_err(nn, "Failed to issue IPsec command err ret=%d\n", err); + NL_SET_ERR_MSG_MOD(extack, "Failed to issue IPsec command"); return err; } diff --git a/drivers/net/ethernet/netronome/nfp/devlink_param.c b/drivers/net/ethernet/netronome/nfp/devlink_param.c index db297ee4d7ad..a655f9e69a7b 100644 --- a/drivers/net/ethernet/netronome/nfp/devlink_param.c +++ b/drivers/net/ethernet/netronome/nfp/devlink_param.c @@ -233,8 +233,8 @@ int nfp_devlink_params_register(struct nfp_pf *pf) if (err <= 0) return err; - return devlink_params_register(devlink, nfp_devlink_params, - ARRAY_SIZE(nfp_devlink_params)); + return devl_params_register(devlink, nfp_devlink_params, + ARRAY_SIZE(nfp_devlink_params)); } void nfp_devlink_params_unregister(struct nfp_pf *pf) @@ -245,6 +245,6 @@ void nfp_devlink_params_unregister(struct nfp_pf *pf) if (err <= 0) return; - devlink_params_unregister(priv_to_devlink(pf), nfp_devlink_params, - ARRAY_SIZE(nfp_devlink_params)); + devl_params_unregister(priv_to_devlink(pf), nfp_devlink_params, + ARRAY_SIZE(nfp_devlink_params)); } diff --git a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c index f693119541d5..d23830b5bcb8 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c +++ b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c @@ -1964,6 +1964,27 @@ int 
nfp_fl_ct_stats(struct flow_cls_offload *flow, return 0; } +static bool +nfp_fl_ct_offload_nft_supported(struct flow_cls_offload *flow) +{ + struct flow_rule *flow_rule = flow->rule; + struct flow_action *flow_action = + &flow_rule->action; + struct flow_action_entry *act; + int i; + + flow_action_for_each(i, act, flow_action) { + if (act->id == FLOW_ACTION_CT_METADATA) { + enum ip_conntrack_info ctinfo = + act->ct_metadata.cookie & NFCT_INFOMASK; + + return ctinfo != IP_CT_NEW; + } + } + + return false; +} + static int nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offload *flow) { @@ -1976,6 +1997,9 @@ nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offl extack = flow->common.extack; switch (flow->command) { case FLOW_CLS_REPLACE: + if (!nfp_fl_ct_offload_nft_supported(flow)) + return -EOPNOTSUPP; + /* Netfilter can request offload multiple times for the same * flow - protect against adding duplicates. */ diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c index 861082c5dbff..59fb0583cc08 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -192,10 +192,10 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, return 0; md_bytes = sizeof(meta_id) + - !!md_dst * NFP_NET_META_PORTID_SIZE + - !!tls_handle * NFP_NET_META_CONN_HANDLE_SIZE + - vlan_insert * NFP_NET_META_VLAN_SIZE + - *ipsec * NFP_NET_META_IPSEC_FIELD_SIZE; /* IPsec has 12 bytes of metadata */ + (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) + + (!!tls_handle ? NFP_NET_META_CONN_HANDLE_SIZE : 0) + + (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) + + (*ipsec ? NFP_NET_META_IPSEC_FIELD_SIZE : 0); if (unlikely(skb_cow_head(skb, md_bytes))) return -ENOMEM; @@ -226,9 +226,6 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, meta_id |= NFP_NET_META_VLAN; } if (*ipsec) { - /* IPsec has three consecutive 4-bit IPsec metadata types, - * so in total IPsec has three 4 bytes of metadata. - */ data -= NFP_NET_META_IPSEC_SIZE; put_unaligned_be32(offload_info.seq_hi, data); data -= NFP_NET_META_IPSEC_SIZE; diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c index ccacb6ab6c39..d60c0e991a91 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c @@ -6,6 +6,7 @@ #include <linux/overflow.h> #include <linux/sizes.h> #include <linux/bitfield.h> +#include <net/xfrm.h> #include "../nfp_app.h" #include "../nfp_net.h" @@ -172,25 +173,32 @@ close_block: static int nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app, - struct sk_buff *skb) + struct sk_buff *skb, bool *ipsec) { struct metadata_dst *md_dst = skb_metadata_dst(skb); + struct nfp_ipsec_offload offload_info; unsigned char *data; bool vlan_insert; u32 meta_id = 0; int md_bytes; +#ifdef CONFIG_NFP_NET_IPSEC + if (xfrm_offload(skb)) + *ipsec = nfp_net_ipsec_tx_prep(dp, skb, &offload_info); +#endif + if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) md_dst = NULL; vlan_insert = skb_vlan_tag_present(skb) && (dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2); - if (!(md_dst || vlan_insert)) + if (!(md_dst || vlan_insert || *ipsec)) return 0; md_bytes = sizeof(meta_id) + - !!md_dst * NFP_NET_META_PORTID_SIZE + - vlan_insert * NFP_NET_META_VLAN_SIZE; + (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) + + (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) + + (*ipsec ? 
NFP_NET_META_IPSEC_FIELD_SIZE : 0); if (unlikely(skb_cow_head(skb, md_bytes))) return -ENOMEM; @@ -212,6 +220,17 @@ nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app, meta_id |= NFP_NET_META_VLAN; } + if (*ipsec) { + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_hi, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_low, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.handle - 1, data); + meta_id <<= NFP_NET_META_IPSEC_FIELD_SIZE; + meta_id |= NFP_NET_META_IPSEC << 8 | NFP_NET_META_IPSEC << 4 | NFP_NET_META_IPSEC; + } + meta_id = FIELD_PREP(NFDK_META_LEN, md_bytes) | FIELD_PREP(NFDK_META_FIELDS, meta_id); @@ -243,6 +262,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) struct nfp_net_dp *dp; int nr_frags, wr_idx; dma_addr_t dma_addr; + bool ipsec = false; u64 metadata; dp = &nn->dp; @@ -263,7 +283,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_BUSY; } - metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb); + metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb, &ipsec); if (unlikely((int)metadata < 0)) goto err_flush; @@ -361,6 +381,9 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) (txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP); + if (ipsec) + metadata = nfp_nfdk_ipsec_tx(metadata, skb); + if (!skb_is_gso(skb)) { real_len = skb->len; /* Metadata desc */ @@ -760,6 +783,15 @@ nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, return false; data += sizeof(struct nfp_net_tls_resync_req); break; +#ifdef CONFIG_NFP_NET_IPSEC + case NFP_NET_META_IPSEC: + /* Note: an IPsec packet could have a zero saidx, so add 1 + * to mark the packet as an IPsec packet within the driver.
+ */ + meta->ipsec_saidx = get_unaligned_be32(data) + 1; + data += 4; + break; +#endif default: return true; } @@ -1186,6 +1218,13 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget) continue; } +#ifdef CONFIG_NFP_NET_IPSEC + if (meta.ipsec_saidx != 0 && unlikely(nfp_net_ipsec_rx(&meta, skb))) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, NULL, skb); + continue; + } +#endif + if (meta_len_xdp) skb_metadata_set(skb, meta_len_xdp); diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c new file mode 100644 index 000000000000..58d8f59eb885 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c @@ -0,0 +1,17 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2023 Corigine, Inc */ + +#include <net/xfrm.h> + +#include "../nfp_net.h" +#include "nfdk.h" + +u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb) +{ + struct xfrm_state *x = xfrm_input_state(skb); + + if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)) + flags |= NFDK_DESC_TX_L3_CSUM | NFDK_DESC_TX_L4_CSUM; + + return flags; +} diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h index 0ea51d9f2325..fe55980348e9 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h +++ b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h @@ -125,4 +125,12 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, void nfp_nfdk_ctrl_poll(struct tasklet_struct *t); void nfp_nfdk_rx_ring_fill_freelist(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring); +#ifndef CONFIG_NFP_NET_IPSEC +static inline u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb) +{ + return flags; +} +#else +u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb); +#endif #endif diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 70d7484c82af..81b7ca0ad222 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -2531,10 +2531,15 @@ static void nfp_net_netdev_init(struct nfp_net *nn) netdev->features &= ~NETIF_F_HW_VLAN_STAG_RX; nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; + if (nn->app && nn->app->type->id == NFP_APP_BPF_NIC) + netdev->xdp_features |= NETDEV_XDP_ACT_HW_OFFLOAD; + /* Finalise the netdev setup */ switch (nn->dp.ops->version) { case NFP_NFD_VER_NFD3: netdev->netdev_ops = &nfp_nfd3_netdev_ops; + netdev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY; break; case NFP_NFD_VER_NFDK: netdev->netdev_ops = &nfp_nfdk_netdev_ops; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index f03dcadff738..669b9dccb6a9 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -412,6 +412,7 @@ #define NFP_NET_CFG_MBOX_CMD_IPSEC 3 #define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET 5 #define NFP_NET_CFG_MBOX_CMD_TLV_CMSG 6 +#define NFP_NET_CFG_MBOX_CMD_DCB_UPDATE 7 #define NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD 8 #define NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL 9 diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index cc97b3d00414..dfedb52b7e70 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -313,6 +313,10 @@ static const struct 
nfp_eth_media_link_mode { .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, .speed = NFP_SPEED_10G, }, + [NFP_MEDIA_10GBASE_LR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT, + .speed = NFP_SPEED_10G, + }, [NFP_MEDIA_10GBASE_CX4] = { .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, .speed = NFP_SPEED_10G, @@ -349,6 +353,14 @@ static const struct nfp_eth_media_link_mode { .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, .speed = NFP_SPEED_25G, }, + [NFP_MEDIA_25GBASE_LR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_25GBASE_ER] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, + .speed = NFP_SPEED_25G, + }, [NFP_MEDIA_40GBASE_CR4] = { .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, .speed = NFP_SPEED_40G, @@ -2027,16 +2039,16 @@ static int nfp_net_get_eeprom(struct net_device *netdev, struct ethtool_eeprom *eeprom, u8 *bytes) { - struct nfp_net *nn = netdev_priv(netdev); + struct nfp_app *app = nfp_app_from_netdev(netdev); u8 buf[NFP_EEPROM_LEN] = {}; - if (eeprom->len == 0) - return -EINVAL; - if (nfp_net_get_port_mac_by_hwinfo(netdev, buf)) return -EOPNOTSUPP; - eeprom->magic = nn->pdev->vendor | (nn->pdev->device << 16); + if (eeprom->len == 0) + return -EINVAL; + + eeprom->magic = app->pdev->vendor | (app->pdev->device << 16); memcpy(bytes, buf + eeprom->offset, eeprom->len); return 0; @@ -2046,18 +2058,18 @@ static int nfp_net_set_eeprom(struct net_device *netdev, struct ethtool_eeprom *eeprom, u8 *bytes) { - struct nfp_net *nn = netdev_priv(netdev); + struct nfp_app *app = nfp_app_from_netdev(netdev); u8 buf[NFP_EEPROM_LEN] = {}; + if (nfp_net_get_port_mac_by_hwinfo(netdev, buf)) + return -EOPNOTSUPP; + if (eeprom->len == 0) return -EINVAL; - if (eeprom->magic != (nn->pdev->vendor | nn->pdev->device << 16)) + if (eeprom->magic != (app->pdev->vendor | app->pdev->device << 16)) return -EINVAL; - if (nfp_net_get_port_mac_by_hwinfo(netdev, buf)) - return -EOPNOTSUPP; - memcpy(buf + eeprom->offset, bytes, eeprom->len); if (nfp_net_set_port_mac_by_hwinfo(netdev, buf)) return -EOPNOTSUPP; @@ -2117,6 +2129,9 @@ const struct ethtool_ops nfp_port_ethtool_ops = { .set_dump = nfp_app_set_dump, .get_dump_flag = nfp_app_get_dump_flag, .get_dump_data = nfp_app_get_dump_data, + .get_eeprom_len = nfp_net_get_eeprom_len, + .get_eeprom = nfp_net_get_eeprom, + .set_eeprom = nfp_net_set_eeprom, .get_module_info = nfp_port_get_module_info, .get_module_eeprom = nfp_port_get_module_eeprom, .get_link_ksettings = nfp_net_get_link_ksettings, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c index abfe788d558f..cbe4972ba104 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c @@ -754,11 +754,11 @@ int nfp_net_pci_probe(struct nfp_pf *pf) if (err) goto err_devlink_unreg; + devl_lock(devlink); err = nfp_devlink_params_register(pf); if (err) goto err_shared_buf_unreg; - devl_lock(devlink); pf->ddir = nfp_net_debugfs_device_add(pf->pdev); /* Allocate the vnics and do basic init */ @@ -791,9 +791,9 @@ err_free_vnics: nfp_net_pf_free_vnics(pf); err_clean_ddir: nfp_net_debugfs_dir_clean(&pf->ddir); - devl_unlock(devlink); nfp_devlink_params_unregister(pf); err_shared_buf_unreg: + devl_unlock(devlink); nfp_shared_buf_unregister(pf); err_devlink_unreg: cancel_work_sync(&pf->port_refresh_work); @@ -821,9 +821,10 @@ void 
nfp_net_pci_remove(struct nfp_pf *pf) /* stop app first, to avoid double free of ctrl vNIC's ddir */ nfp_net_debugfs_dir_clean(&pf->ddir); + nfp_devlink_params_unregister(pf); + devl_unlock(devlink); - nfp_devlink_params_unregister(pf); nfp_shared_buf_unregister(pf); nfp_net_pf_free_irqs(pf); diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h index 8f5cab0032d0..781edc451bd4 100644 --- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h +++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h @@ -140,6 +140,9 @@ enum nfp_ethtool_link_mode_list { NFP_MEDIA_100GBASE_CR4, NFP_MEDIA_100GBASE_KP4, NFP_MEDIA_100GBASE_CR10, + NFP_MEDIA_10GBASE_LR, + NFP_MEDIA_25GBASE_LR, + NFP_MEDIA_25GBASE_ER, NFP_MEDIA_LINK_MODES_NUMBER }; diff --git a/drivers/net/ethernet/netronome/nfp/nic/dcb.c b/drivers/net/ethernet/netronome/nfp/nic/dcb.c new file mode 100644 index 000000000000..bb498ac6bd7d --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nic/dcb.c @@ -0,0 +1,571 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2023 Corigine, Inc. */ + +#include <linux/device.h> +#include <linux/netdevice.h> +#include <net/dcbnl.h> + +#include "../nfp_app.h" +#include "../nfp_net.h" +#include "../nfp_main.h" +#include "../nfpcore/nfp_cpp.h" +#include "../nfpcore/nfp_nffw.h" +#include "../nfp_net_sriov.h" + +#include "main.h" + +#define NFP_DCB_TRUST_PCP 1 +#define NFP_DCB_TRUST_DSCP 2 +#define NFP_DCB_TRUST_INVALID 0xff + +#define NFP_DCB_TSA_VENDOR 1 +#define NFP_DCB_TSA_STRICT 2 +#define NFP_DCB_TSA_ETS 3 + +#define NFP_DCB_GBL_ENABLE BIT(0) +#define NFP_DCB_QOS_ENABLE BIT(1) +#define NFP_DCB_DISABLE 0 +#define NFP_DCB_ALL_QOS_ENABLE (NFP_DCB_GBL_ENABLE | NFP_DCB_QOS_ENABLE) + +#define NFP_DCB_UPDATE_MSK_SZ 4 +#define NFP_DCB_TC_RATE_MAX 0xffff + +#define NFP_DCB_DATA_OFF_DSCP2IDX 0 +#define NFP_DCB_DATA_OFF_PCP2IDX 64 +#define NFP_DCB_DATA_OFF_TSA 80 +#define NFP_DCB_DATA_OFF_IDX_BW_PCT 88 +#define NFP_DCB_DATA_OFF_RATE 96 +#define NFP_DCB_DATA_OFF_CAP 112 +#define NFP_DCB_DATA_OFF_ENABLE 116 +#define NFP_DCB_DATA_OFF_TRUST 120 + +#define NFP_DCB_MSG_MSK_ENABLE BIT(31) +#define NFP_DCB_MSG_MSK_TRUST BIT(30) +#define NFP_DCB_MSG_MSK_TSA BIT(29) +#define NFP_DCB_MSG_MSK_DSCP BIT(28) +#define NFP_DCB_MSG_MSK_PCP BIT(27) +#define NFP_DCB_MSG_MSK_RATE BIT(26) +#define NFP_DCB_MSG_MSK_PCT BIT(25) + +static struct nfp_dcb *get_dcb_priv(struct nfp_net *nn) +{ + struct nfp_dcb *dcb = &((struct nfp_app_nic_private *)nn->app_priv)->dcb; + + return dcb; +} + +static u8 nfp_tsa_ieee2nfp(u8 tsa) +{ + switch (tsa) { + case IEEE_8021QAZ_TSA_STRICT: + return NFP_DCB_TSA_STRICT; + case IEEE_8021QAZ_TSA_ETS: + return NFP_DCB_TSA_ETS; + default: + return NFP_DCB_TSA_VENDOR; + } +} + +static int nfp_nic_dcbnl_ieee_getets(struct net_device *dev, + struct ieee_ets *ets) +{ + struct nfp_net *nn = netdev_priv(dev); + struct nfp_dcb *dcb; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + ets->prio_tc[i] = dcb->prio2tc[i]; + ets->tc_tx_bw[i] = dcb->tc_tx_pct[i]; + ets->tc_tsa[i] = dcb->tc_tsa[i]; + } + + return 0; +} + +static bool nfp_refresh_tc2idx(struct nfp_net *nn) +{ + u8 tc2idx[IEEE_8021QAZ_MAX_TCS]; + bool change = false; + struct nfp_dcb *dcb; + int maxstrict = 0; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + tc2idx[i] = i; + if (dcb->tc_tsa[i] == IEEE_8021QAZ_TSA_STRICT) + maxstrict = i; + } + + if (maxstrict > 0 && dcb->tc_tsa[0] != 
IEEE_8021QAZ_TSA_STRICT) { + tc2idx[0] = maxstrict; + tc2idx[maxstrict] = 0; + } + + for (unsigned int j = 0; j < IEEE_8021QAZ_MAX_TCS; j++) { + if (dcb->tc2idx[j] != tc2idx[j]) { + change = true; + dcb->tc2idx[j] = tc2idx[j]; + } + } + + return change; +} + +static int nfp_fill_maxrate(struct nfp_net *nn, u64 *max_rate_array) +{ + struct nfp_app *app = nn->app; + struct nfp_dcb *dcb; + u32 ratembps; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + /* Convert bandwidth from kbps to mbps. */ + ratembps = max_rate_array[i] / 1024; + + /* Reject input values >= NFP_DCB_TC_RATE_MAX */ + if (ratembps >= NFP_DCB_TC_RATE_MAX) { + nfp_warn(app->cpp, "ratembps(%d) must be less than %d.", + ratembps, NFP_DCB_TC_RATE_MAX); + return -EINVAL; + } + /* Input value 0 mapped to NFP_DCB_TC_RATE_MAX for firmware. */ + if (ratembps == 0) + ratembps = NFP_DCB_TC_RATE_MAX; + + writew((u16)ratembps, dcb->dcbcfg_tbl + + dcb->cfg_offset + NFP_DCB_DATA_OFF_RATE + dcb->tc2idx[i] * 2); + /* For rate values from user space, sync them to the dcb structure. */ + if (dcb->tc_maxrate != max_rate_array) + dcb->tc_maxrate[i] = max_rate_array[i]; + } + + return 0; +} + +static int update_dscp_maxrate(struct net_device *dev, u32 *update) +{ + struct nfp_net *nn = netdev_priv(dev); + struct nfp_dcb *dcb; + int err; + + dcb = get_dcb_priv(nn); + + err = nfp_fill_maxrate(nn, dcb->tc_maxrate); + if (err) + return err; + + *update |= NFP_DCB_MSG_MSK_RATE; + + /* We only refresh dscp in dscp trust mode. */ + if (dcb->dscp_cnt > 0) { + for (unsigned int i = 0; i < NFP_NET_MAX_DSCP; i++) { + writeb(dcb->tc2idx[dcb->prio2tc[dcb->dscp2prio[i]]], + dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_DSCP2IDX + i); + } + *update |= NFP_DCB_MSG_MSK_DSCP; + } + + return 0; +} + +static void nfp_nic_set_trust(struct nfp_net *nn, u32 *update) +{ + struct nfp_dcb *dcb; + u8 trust; + + dcb = get_dcb_priv(nn); + + if (dcb->trust_status != NFP_DCB_TRUST_INVALID) + return; + + trust = dcb->dscp_cnt > 0 ? NFP_DCB_TRUST_DSCP : NFP_DCB_TRUST_PCP; + writeb(trust, dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_TRUST); + + dcb->trust_status = trust; + *update |= NFP_DCB_MSG_MSK_TRUST; +} + +static void nfp_nic_set_enable(struct nfp_net *nn, u32 enable, u32 *update) +{ + struct nfp_dcb *dcb; + u32 value = 0; + + dcb = get_dcb_priv(nn); + + value = readl(dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_ENABLE); + if (value != enable) { + writel(enable, dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_ENABLE); + *update |= NFP_DCB_MSG_MSK_ENABLE; + } +} + +static int dcb_ets_check(struct net_device *dev, struct ieee_ets *ets) +{ + struct nfp_net *nn = netdev_priv(dev); + struct nfp_app *app = nn->app; + bool ets_exists = false; + int sum = 0; + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + /* For ets mode, check bw percentage sum.
*/ + if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) { + ets_exists = true; + sum += ets->tc_tx_bw[i]; + } else if (ets->tc_tx_bw[i]) { + nfp_warn(app->cpp, "ETS BW for strict/vendor TC must be 0."); + return -EINVAL; + } + } + + if (ets_exists && sum != 100) { + nfp_warn(app->cpp, "Failed to validate ETS BW: sum must be 100."); + return -EINVAL; + } + + return 0; +} + +static void nfp_nic_fill_ets(struct nfp_net *nn) +{ + struct nfp_dcb *dcb; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + writeb(dcb->tc2idx[dcb->prio2tc[i]], + dcb->dcbcfg_tbl + dcb->cfg_offset + NFP_DCB_DATA_OFF_PCP2IDX + i); + writeb(dcb->tc_tx_pct[i], dcb->dcbcfg_tbl + + dcb->cfg_offset + NFP_DCB_DATA_OFF_IDX_BW_PCT + dcb->tc2idx[i]); + writeb(nfp_tsa_ieee2nfp(dcb->tc_tsa[i]), dcb->dcbcfg_tbl + + dcb->cfg_offset + NFP_DCB_DATA_OFF_TSA + dcb->tc2idx[i]); + } +} + +static void nfp_nic_ets_init(struct nfp_net *nn, u32 *update) +{ + struct nfp_dcb *dcb = get_dcb_priv(nn); + + if (dcb->ets_init) + return; + + nfp_nic_fill_ets(nn); + dcb->ets_init = true; + *update |= NFP_DCB_MSG_MSK_TSA | NFP_DCB_MSG_MSK_PCT | NFP_DCB_MSG_MSK_PCP; +} + +static int nfp_nic_dcbnl_ieee_setets(struct net_device *dev, + struct ieee_ets *ets) +{ + const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE; + struct nfp_net *nn = netdev_priv(dev); + struct nfp_app *app = nn->app; + struct nfp_dcb *dcb; + u32 update = 0; + bool change; + int err; + + err = dcb_ets_check(dev, ets); + if (err) + return err; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + dcb->prio2tc[i] = ets->prio_tc[i]; + dcb->tc_tx_pct[i] = ets->tc_tx_bw[i]; + dcb->tc_tsa[i] = ets->tc_tsa[i]; + } + + change = nfp_refresh_tc2idx(nn); + nfp_nic_fill_ets(nn); + dcb->ets_init = true; + if (change || !dcb->rate_init) { + err = update_dscp_maxrate(dev, &update); + if (err) { + nfp_warn(app->cpp, + "nfp dcbnl ieee setets ERROR:%d.", + err); + return err; + } + + dcb->rate_init = true; + } + nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update); + nfp_nic_set_trust(nn, &update); + err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ); + if (err) + return err; + + nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL, + update | NFP_DCB_MSG_MSK_TSA | NFP_DCB_MSG_MSK_PCT | + NFP_DCB_MSG_MSK_PCP); + + return nfp_net_mbox_reconfig_and_unlock(nn, cmd); +} + +static int nfp_nic_dcbnl_ieee_getmaxrate(struct net_device *dev, + struct ieee_maxrate *maxrate) +{ + struct nfp_net *nn = netdev_priv(dev); + struct nfp_dcb *dcb; + + dcb = get_dcb_priv(nn); + + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) + maxrate->tc_maxrate[i] = dcb->tc_maxrate[i]; + + return 0; +} + +static int nfp_nic_dcbnl_ieee_setmaxrate(struct net_device *dev, + struct ieee_maxrate *maxrate) +{ + const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE; + struct nfp_net *nn = netdev_priv(dev); + struct nfp_app *app = nn->app; + struct nfp_dcb *dcb; + u32 update = 0; + int err; + + err = nfp_fill_maxrate(nn, maxrate->tc_maxrate); + if (err) { + nfp_warn(app->cpp, + "nfp dcbnl ieee setmaxrate ERROR:%d.", + err); + return err; + } + + dcb = get_dcb_priv(nn); + + dcb->rate_init = true; + nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update); + nfp_nic_set_trust(nn, &update); + nfp_nic_ets_init(nn, &update); + + err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ); + if (err) + return err; + + nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL, + update | NFP_DCB_MSG_MSK_RATE); + + return nfp_net_mbox_reconfig_and_unlock(nn, cmd); +} + 
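/* Aside (illustrative sketch, not part of the patch): every dcbnl handler
 * in this file follows the same commit pattern - write the new settings into
 * the BAR-mapped dcbcfg_tbl, then post a bitmask of the touched sections
 * (the NFP_DCB_MSG_MSK_* bits) through the vNIC mailbox so firmware re-reads
 * only those sections. The helper name nfp_nic_dcb_commit is hypothetical;
 * the mailbox calls are the ones the surrounding code uses.
 */
static int nfp_nic_dcb_commit(struct nfp_net *nn, u32 update)
{
	int err;

	/* Serialize mailbox access, reserving room for the 4-byte update mask */
	err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ);
	if (err)
		return err;

	/* Tell firmware which parts of dcbcfg_tbl changed */
	nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL,
		  update);

	/* Issue the DCB_UPDATE command and drop the mailbox lock */
	return nfp_net_mbox_reconfig_and_unlock(nn,
						NFP_NET_CFG_MBOX_CMD_DCB_UPDATE);
}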
+static int nfp_nic_set_trust_status(struct nfp_net *nn, u8 status) +{ + const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE; + struct nfp_dcb *dcb; + u32 update = 0; + int err; + + dcb = get_dcb_priv(nn); + if (!dcb->rate_init) { + err = nfp_fill_maxrate(nn, dcb->tc_maxrate); + if (err) + return err; + + update |= NFP_DCB_MSG_MSK_RATE; + dcb->rate_init = true; + } + + err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ); + if (err) + return err; + + nfp_nic_ets_init(nn, &update); + writeb(status, dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_TRUST); + nfp_nic_set_enable(nn, NFP_DCB_ALL_QOS_ENABLE, &update); + nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL, + update | NFP_DCB_MSG_MSK_TRUST); + + err = nfp_net_mbox_reconfig_and_unlock(nn, cmd); + if (err) + return err; + + dcb->trust_status = status; + + return 0; +} + +static int nfp_nic_set_dscp2prio(struct nfp_net *nn, u8 dscp, u8 prio) +{ + const u32 cmd = NFP_NET_CFG_MBOX_CMD_DCB_UPDATE; + struct nfp_dcb *dcb; + u8 idx, tc; + int err; + + err = nfp_net_mbox_lock(nn, NFP_DCB_UPDATE_MSK_SZ); + if (err) + return err; + + dcb = get_dcb_priv(nn); + + tc = dcb->prio2tc[prio]; + idx = dcb->tc2idx[tc]; + + writeb(idx, dcb->dcbcfg_tbl + dcb->cfg_offset + + NFP_DCB_DATA_OFF_DSCP2IDX + dscp); + + nn_writel(nn, nn->tlv_caps.mbox_off + + NFP_NET_CFG_MBOX_SIMPLE_VAL, NFP_DCB_MSG_MSK_DSCP); + + err = nfp_net_mbox_reconfig_and_unlock(nn, cmd); + if (err) + return err; + + dcb->dscp2prio[dscp] = prio; + + return 0; +} + +static int nfp_nic_dcbnl_ieee_setapp(struct net_device *dev, + struct dcb_app *app) +{ + struct nfp_net *nn = netdev_priv(dev); + struct dcb_app old_app; + struct nfp_dcb *dcb; + bool is_new; + int err; + + if (app->selector != IEEE_8021QAZ_APP_SEL_DSCP) + return -EINVAL; + + dcb = get_dcb_priv(nn); + + /* Save the old entry info */ + old_app.selector = IEEE_8021QAZ_APP_SEL_DSCP; + old_app.protocol = app->protocol; + old_app.priority = dcb->dscp2prio[app->protocol]; + + /* Check trust status */ + if (!dcb->dscp_cnt) { + err = nfp_nic_set_trust_status(nn, NFP_DCB_TRUST_DSCP); + if (err) + return err; + } + + /* Check if the new mapping is the same as the old one, or if we are in the init stage */ + if (app->priority != old_app.priority || app->priority == 0) { + err = nfp_nic_set_dscp2prio(nn, app->protocol, app->priority); + if (err) + return err; + } + + /* Delete the old entry if it exists */ + is_new = !!dcb_ieee_delapp(dev, &old_app); + + /* Add new entry and update counter */ + err = dcb_ieee_setapp(dev, app); + if (err) + return err; + + if (is_new) + dcb->dscp_cnt++; + + return 0; +} + +static int nfp_nic_dcbnl_ieee_delapp(struct net_device *dev, + struct dcb_app *app) +{ + struct nfp_net *nn = netdev_priv(dev); + struct nfp_dcb *dcb; + int err; + + if (app->selector != IEEE_8021QAZ_APP_SEL_DSCP) + return -EINVAL; + + dcb = get_dcb_priv(nn); + + /* Check that the dcb_app param matches the fw state */ + if (app->priority != dcb->dscp2prio[app->protocol]) + return -ENOENT; + + /* Set fw dscp mapping to 0 */ + err = nfp_nic_set_dscp2prio(nn, app->protocol, 0); + if (err) + return err; + + /* Delete app from dcb list */ + err = dcb_ieee_delapp(dev, app); + if (err) + return err; + + /* Decrease dscp counter */ + dcb->dscp_cnt--; + + /* If no dscp mapping is configured, trust pcp */ + if (dcb->dscp_cnt == 0) + return nfp_nic_set_trust_status(nn, NFP_DCB_TRUST_PCP); + + return 0; +} + +static const struct dcbnl_rtnl_ops nfp_nic_dcbnl_ops = { + /* ieee 802.1Qaz std */ + .ieee_getets = nfp_nic_dcbnl_ieee_getets, + .ieee_setets = nfp_nic_dcbnl_ieee_setets, +
.ieee_getmaxrate = nfp_nic_dcbnl_ieee_getmaxrate, + .ieee_setmaxrate = nfp_nic_dcbnl_ieee_setmaxrate, + .ieee_setapp = nfp_nic_dcbnl_ieee_setapp, + .ieee_delapp = nfp_nic_dcbnl_ieee_delapp, +}; + +int nfp_nic_dcb_init(struct nfp_net *nn) +{ + struct nfp_app *app = nn->app; + struct nfp_dcb *dcb; + int err; + + dcb = get_dcb_priv(nn); + dcb->cfg_offset = NFP_DCB_CFG_STRIDE * nn->id; + dcb->dcbcfg_tbl = nfp_pf_map_rtsym(app->pf, "net.dcbcfg_tbl", + "_abi_dcb_cfg", + dcb->cfg_offset, &dcb->dcbcfg_tbl_area); + if (IS_ERR(dcb->dcbcfg_tbl)) { + if (PTR_ERR(dcb->dcbcfg_tbl) != -ENOENT) { + err = PTR_ERR(dcb->dcbcfg_tbl); + dcb->dcbcfg_tbl = NULL; + nfp_err(app->cpp, + "Failed to map dcbcfg_tbl area, min_size %u.\n", + dcb->cfg_offset); + return err; + } + dcb->dcbcfg_tbl = NULL; + } + + if (dcb->dcbcfg_tbl) { + for (unsigned int i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + dcb->prio2tc[i] = i; + dcb->tc2idx[i] = i; + dcb->tc_tx_pct[i] = 0; + dcb->tc_maxrate[i] = 0; + dcb->tc_tsa[i] = IEEE_8021QAZ_TSA_VENDOR; + } + dcb->trust_status = NFP_DCB_TRUST_INVALID; + dcb->rate_init = false; + dcb->ets_init = false; + + nn->dp.netdev->dcbnl_ops = &nfp_nic_dcbnl_ops; + } + + return 0; +} + +void nfp_nic_dcb_clean(struct nfp_net *nn) +{ + struct nfp_dcb *dcb; + + dcb = get_dcb_priv(nn); + if (dcb->dcbcfg_tbl_area) + nfp_cpp_area_release_free(dcb->dcbcfg_tbl_area); +} diff --git a/drivers/net/ethernet/netronome/nfp/nic/main.c b/drivers/net/ethernet/netronome/nfp/nic/main.c index aea8579206ee..9dd5afe37f6e 100644 --- a/drivers/net/ethernet/netronome/nfp/nic/main.c +++ b/drivers/net/ethernet/netronome/nfp/nic/main.c @@ -5,6 +5,8 @@ #include "../nfpcore/nfp_nsp.h" #include "../nfp_app.h" #include "../nfp_main.h" +#include "../nfp_net.h" +#include "main.h" static int nfp_nic_init(struct nfp_app *app) { @@ -28,13 +30,50 @@ static void nfp_nic_sriov_disable(struct nfp_app *app) { } +static int nfp_nic_vnic_init(struct nfp_app *app, struct nfp_net *nn) +{ + return nfp_nic_dcb_init(nn); +} + +static void nfp_nic_vnic_clean(struct nfp_app *app, struct nfp_net *nn) +{ + nfp_nic_dcb_clean(nn); +} + +static int nfp_nic_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, + unsigned int id) +{ + struct nfp_app_nic_private *app_pri = nn->app_priv; + int err; + + err = nfp_app_nic_vnic_alloc(app, nn, id); + if (err) + return err; + + if (sizeof(*app_pri)) { + nn->app_priv = kzalloc(sizeof(*app_pri), GFP_KERNEL); + if (!nn->app_priv) + return -ENOMEM; + } + + return 0; +} + +static void nfp_nic_vnic_free(struct nfp_app *app, struct nfp_net *nn) +{ + kfree(nn->app_priv); +} + const struct nfp_app_type app_nic = { .id = NFP_APP_CORE_NIC, .name = "nic", .init = nfp_nic_init, - .vnic_alloc = nfp_app_nic_vnic_alloc, - + .vnic_alloc = nfp_nic_vnic_alloc, + .vnic_free = nfp_nic_vnic_free, .sriov_enable = nfp_nic_sriov_enable, .sriov_disable = nfp_nic_sriov_disable, + + .vnic_init = nfp_nic_vnic_init, + .vnic_clean = nfp_nic_vnic_clean, }; diff --git a/drivers/net/ethernet/netronome/nfp/nic/main.h b/drivers/net/ethernet/netronome/nfp/nic/main.h new file mode 100644 index 000000000000..094374df42b8 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nic/main.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2023 Corigine, Inc. 
*/ + +#ifndef __NFP_NIC_H__ +#define __NFP_NIC_H__ 1 + +#include <linux/netdevice.h> + +#ifdef CONFIG_DCB +/* DCB feature definitions */ +#define NFP_NET_MAX_DSCP 4 +#define NFP_NET_MAX_TC IEEE_8021QAZ_MAX_TCS +#define NFP_NET_MAX_PRIO 8 +#define NFP_DCB_CFG_STRIDE 256 + +struct nfp_dcb { + u8 dscp2prio[NFP_NET_MAX_DSCP]; + u8 prio2tc[NFP_NET_MAX_PRIO]; + u8 tc2idx[IEEE_8021QAZ_MAX_TCS]; + u64 tc_maxrate[IEEE_8021QAZ_MAX_TCS]; + u8 tc_tx_pct[IEEE_8021QAZ_MAX_TCS]; + u8 tc_tsa[IEEE_8021QAZ_MAX_TCS]; + u8 dscp_cnt; + u8 trust_status; + bool rate_init; + bool ets_init; + + struct nfp_cpp_area *dcbcfg_tbl_area; + u8 __iomem *dcbcfg_tbl; + u32 cfg_offset; +}; + +int nfp_nic_dcb_init(struct nfp_net *nn); +void nfp_nic_dcb_clean(struct nfp_net *nn); +#else +static inline int nfp_nic_dcb_init(struct nfp_net *nn) { return 0; } +static inline void nfp_nic_dcb_clean(struct nfp_net *nn) {} +#endif + +struct nfp_app_nic_private { +#ifdef CONFIG_DCB + struct nfp_dcb dcb; +#endif +}; + +#endif diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c index 62320be4de5a..56e02cba0b8a 100644 --- a/drivers/net/ethernet/ni/nixge.c +++ b/drivers/net/ethernet/ni/nixge.c @@ -1081,40 +1081,59 @@ static const struct ethtool_ops nixge_ethtool_ops = { .get_link = ethtool_op_get_link, }; -static int nixge_mdio_read(struct mii_bus *bus, int phy_id, int reg) +static int nixge_mdio_read_c22(struct mii_bus *bus, int phy_id, int reg) { struct nixge_priv *priv = bus->priv; u32 status, tmp; int err; u16 device; - if (reg & MII_ADDR_C45) { - device = (reg >> 16) & 0x1f; + device = reg & 0x1f; - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff); + tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_READ) | + NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); - tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) - | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); + err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, + !status, 10, 1000); + if (err) { + dev_err(priv->dev, "timeout setting read command"); + return err; + } - err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, - !status, 10, 1000); - if (err) { - dev_err(priv->dev, "timeout setting address"); - return err; - } + status = nixge_ctrl_read_reg(priv, NIXGE_REG_MDIO_DATA); - tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_READ) | - NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); - } else { - device = reg & 0x1f; + return status; +} - tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_READ) | - NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); +static int nixge_mdio_read_c45(struct mii_bus *bus, int phy_id, int device, + int reg) +{ + struct nixge_priv *priv = bus->priv; + u32 status, tmp; + int err; + + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff); + + tmp = NIXGE_MDIO_CLAUSE45 | + NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) | + NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); + + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); + + err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, + !status, 10, 1000); + if (err) { + dev_err(priv->dev, "timeout setting address"); + return err; } + tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_READ) | + NIXGE_MDIO_ADDR(phy_id) | 
NIXGE_MDIO_MMD(device); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); @@ -1130,57 +1149,65 @@ static int nixge_mdio_read(struct mii_bus *bus, int phy_id, int reg) return status; } -static int nixge_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val) +static int nixge_mdio_write_c22(struct mii_bus *bus, int phy_id, int reg, + u16 val) { struct nixge_priv *priv = bus->priv; u32 status, tmp; u16 device; int err; - if (reg & MII_ADDR_C45) { - device = (reg >> 16) & 0x1f; + device = reg & 0x1f; - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff); + tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(NIXGE_MDIO_C22_WRITE) | + NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); - tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) - | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); + err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, + !status, 10, 1000); + if (err) + dev_err(priv->dev, "timeout setting write command"); - err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, - !status, 10, 1000); - if (err) { - dev_err(priv->dev, "timeout setting address"); - return err; - } + return err; +} - tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_WRITE) - | NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); +static int nixge_mdio_write_c45(struct mii_bus *bus, int phy_id, + int device, int reg, u16 val) +{ + struct nixge_priv *priv = bus->priv; + u32 status, tmp; + int err; - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); - err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, - !status, 10, 1000); - if (err) - dev_err(priv->dev, "timeout setting write command"); - } else { - device = reg & 0x1f; + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff); - tmp = NIXGE_MDIO_CLAUSE22 | - NIXGE_MDIO_OP(NIXGE_MDIO_C22_WRITE) | - NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); + tmp = NIXGE_MDIO_CLAUSE45 | + NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) | + NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); - nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1); - err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, - !status, 10, 1000); - if (err) - dev_err(priv->dev, "timeout setting write command"); + err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, + !status, 10, 1000); + if (err) { + dev_err(priv->dev, "timeout setting address"); + return err; } + tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_C45_WRITE) | + NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device); + + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val); + nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp); + + err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status, + !status, 10, 1000); + if (err) + dev_err(priv->dev, "timeout setting write command"); + return err; } @@ -1195,8 +1222,10 @@ static int nixge_mdio_setup(struct nixge_priv *priv, struct device_node *np) snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", dev_name(priv->dev)); 
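/* Annotation (not part of the patch): the MDIO core now dispatches Clause 45
 * accesses through dedicated read_c45/write_c45 bus callbacks, so drivers no
 * longer decode an MII_ADDR_C45 flag out of the register argument. Each
 * helper above handles exactly one access type, and the setup code below
 * simply registers the C22 and C45 pairs separately.
 */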
bus->priv = priv; bus->name = "nixge_mii_bus"; - bus->read = nixge_mdio_read; - bus->write = nixge_mdio_write; + bus->read = nixge_mdio_read_c22; + bus->write = nixge_mdio_write_c22; + bus->read_c45 = nixge_mdio_read_c45; + bus->write_c45 = nixge_mdio_write_c45; bus->parent = priv->dev; priv->mii_bus = bus; diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c index ce436e97324a..e508f8eb43bf 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c @@ -121,7 +121,7 @@ static void ionic_vf_dealloc_locked(struct ionic *ionic) if (v->stats_pa) { vfc.stats_pa = 0; - (void)ionic_set_vf_config(ionic, i, &vfc); + ionic_set_vf_config(ionic, i, &vfc); dma_unmap_single(ionic->dev, v->stats_pa, sizeof(v->stats), DMA_FROM_DEVICE); v->stats_pa = 0; @@ -169,7 +169,7 @@ static int ionic_vf_alloc(struct ionic *ionic, int num_vfs) /* ignore failures from older FW, we just won't get stats */ vfc.stats_pa = cpu_to_le64(v->stats_pa); - (void)ionic_set_vf_config(ionic, i, &vfc); + ionic_set_vf_config(ionic, i, &vfc); } out: @@ -352,6 +352,7 @@ err_out_port_reset: err_out_reset: ionic_reset(ionic); err_out_teardown: + ionic_dev_teardown(ionic); pci_clear_master(pdev); /* Don't fail the probe for these errors, keep * the hw interface around for inspection @@ -390,6 +391,7 @@ static void ionic_remove(struct pci_dev *pdev) ionic_port_reset(ionic); ionic_reset(ionic); + ionic_dev_teardown(ionic); pci_clear_master(pdev); ionic_unmap_bars(ionic); pci_release_regions(pdev); diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c index d911f4fd9af6..c06576f43916 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c @@ -92,6 +92,7 @@ int ionic_dev_setup(struct ionic *ionic) unsigned int num_bars = ionic->num_bars; struct ionic_dev *idev = &ionic->idev; struct device *dev = ionic->dev; + int size; u32 sig; /* BAR0: dev_cmd and interrupts */ @@ -133,9 +134,36 @@ int ionic_dev_setup(struct ionic *ionic) idev->db_pages = bar->vaddr; idev->phy_db_pages = bar->bus_addr; + /* BAR2: optional controller memory mapping */ + bar++; + mutex_init(&idev->cmb_inuse_lock); + if (num_bars < 3 || !ionic->bars[IONIC_PCI_BAR_CMB].len) { + idev->cmb_inuse = NULL; + return 0; + } + + idev->phy_cmb_pages = bar->bus_addr; + idev->cmb_npages = bar->len / PAGE_SIZE; + size = BITS_TO_LONGS(idev->cmb_npages) * sizeof(long); + idev->cmb_inuse = kzalloc(size, GFP_KERNEL); + if (!idev->cmb_inuse) + dev_warn(dev, "No memory for CMB, disabling\n"); + return 0; } +void ionic_dev_teardown(struct ionic *ionic) +{ + struct ionic_dev *idev = &ionic->idev; + + kfree(idev->cmb_inuse); + idev->cmb_inuse = NULL; + idev->phy_cmb_pages = 0; + idev->cmb_npages = 0; + + mutex_destroy(&idev->cmb_inuse_lock); +} + /* Devcmd Interface */ bool ionic_is_fw_running(struct ionic_dev *idev) { @@ -571,6 +599,33 @@ int ionic_db_page_num(struct ionic_lif *lif, int pid) return (lif->hw_index * lif->dbid_count) + pid; } +int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order) +{ + struct ionic_dev *idev = &lif->ionic->idev; + int ret; + + mutex_lock(&idev->cmb_inuse_lock); + ret = bitmap_find_free_region(idev->cmb_inuse, idev->cmb_npages, order); + mutex_unlock(&idev->cmb_inuse_lock); + + if (ret < 0) + return ret; + + *pgid = ret; + *pgaddr = idev->phy_cmb_pages + ret * PAGE_SIZE; + + return 0; +} + +void 
ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order) +{ + struct ionic_dev *idev = &lif->ionic->idev; + + mutex_lock(&idev->cmb_inuse_lock); + bitmap_release_region(idev->cmb_inuse, pgid, order); + mutex_unlock(&idev->cmb_inuse_lock); +} + int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq, struct ionic_intr_info *intr, unsigned int num_descs, size_t desc_size) @@ -679,6 +734,18 @@ void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa) cur->desc = base + (i * q->desc_size); } +void ionic_q_cmb_map(struct ionic_queue *q, void __iomem *base, dma_addr_t base_pa) +{ + struct ionic_desc_info *cur; + unsigned int i; + + q->cmb_base = base; + q->cmb_base_pa = base_pa; + + for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) + cur->cmb_desc = base + (i * q->desc_size); +} + void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa) { struct ionic_desc_info *cur; diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h index bce3ca38669b..0bea208bfba2 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h @@ -159,6 +159,11 @@ struct ionic_dev { struct ionic_intr __iomem *intr_ctrl; u64 __iomem *intr_status; + struct mutex cmb_inuse_lock; /* for cmb_inuse */ + unsigned long *cmb_inuse; + dma_addr_t phy_cmb_pages; + u32 cmb_npages; + u32 port_info_sz; struct ionic_port_info *port_info; dma_addr_t port_info_pa; @@ -203,6 +208,7 @@ struct ionic_desc_info { struct ionic_rxq_desc *rxq_desc; struct ionic_admin_cmd *adminq_desc; }; + void __iomem *cmb_desc; union { void *sg_desc; struct ionic_txq_sg_desc *txq_sg_desc; @@ -241,12 +247,14 @@ struct ionic_queue { struct ionic_rxq_desc *rxq; struct ionic_admin_cmd *adminq; }; + void __iomem *cmb_base; union { void *sg_base; struct ionic_txq_sg_desc *txq_sgl; struct ionic_rxq_sg_desc *rxq_sgl; }; dma_addr_t base_pa; + dma_addr_t cmb_base_pa; dma_addr_t sg_base_pa; unsigned int desc_size; unsigned int sg_desc_size; @@ -309,6 +317,7 @@ static inline bool ionic_q_has_space(struct ionic_queue *q, unsigned int want) void ionic_init_devinfo(struct ionic *ionic); int ionic_dev_setup(struct ionic *ionic); +void ionic_dev_teardown(struct ionic *ionic); void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd); u8 ionic_dev_cmd_status(struct ionic_dev *idev); @@ -344,6 +353,9 @@ void ionic_dev_cmd_adminq_init(struct ionic_dev *idev, struct ionic_qcq *qcq, int ionic_db_page_num(struct ionic_lif *lif, int pid); +int ionic_get_cmb(struct ionic_lif *lif, u32 *pgid, phys_addr_t *pgaddr, int order); +void ionic_put_cmb(struct ionic_lif *lif, u32 pgid, int order); + int ionic_cq_init(struct ionic_lif *lif, struct ionic_cq *cq, struct ionic_intr_info *intr, unsigned int num_descs, size_t desc_size); @@ -360,6 +372,7 @@ int ionic_q_init(struct ionic_lif *lif, struct ionic_dev *idev, unsigned int num_descs, size_t desc_size, size_t sg_desc_size, unsigned int pid); void ionic_q_map(struct ionic_queue *q, void *base, dma_addr_t base_pa); +void ionic_q_cmb_map(struct ionic_queue *q, void __iomem *base, dma_addr_t base_pa); void ionic_q_sg_map(struct ionic_queue *q, void *base, dma_addr_t base_pa); void ionic_q_post(struct ionic_queue *q, bool ring_doorbell, ionic_desc_cb cb, void *cb_arg); diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c index 01c22701482d..cf33503468a3 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c +++ 
b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c @@ -511,6 +511,87 @@ static int ionic_set_coalesce(struct net_device *netdev, return 0; } +static int ionic_validate_cmb_config(struct ionic_lif *lif, + struct ionic_queue_params *qparam) +{ + int pages_have, pages_required = 0; + unsigned long sz; + + if (!lif->ionic->idev.cmb_inuse && + (qparam->cmb_tx || qparam->cmb_rx)) { + netdev_info(lif->netdev, "CMB rings are not supported on this device\n"); + return -EOPNOTSUPP; + } + + if (qparam->cmb_tx) { + if (!(lif->qtype_info[IONIC_QTYPE_TXQ].features & IONIC_QIDENT_F_CMB)) { + netdev_info(lif->netdev, + "CMB rings for tx-push are not supported on this device\n"); + return -EOPNOTSUPP; + } + + sz = sizeof(struct ionic_txq_desc) * qparam->ntxq_descs * qparam->nxqs; + pages_required += ALIGN(sz, PAGE_SIZE) / PAGE_SIZE; + } + + if (qparam->cmb_rx) { + if (!(lif->qtype_info[IONIC_QTYPE_RXQ].features & IONIC_QIDENT_F_CMB)) { + netdev_info(lif->netdev, + "CMB rings for rx-push are not supported on this device\n"); + return -EOPNOTSUPP; + } + + sz = sizeof(struct ionic_rxq_desc) * qparam->nrxq_descs * qparam->nxqs; + pages_required += ALIGN(sz, PAGE_SIZE) / PAGE_SIZE; + } + + pages_have = lif->ionic->bars[IONIC_PCI_BAR_CMB].len / PAGE_SIZE; + if (pages_required > pages_have) { + netdev_info(lif->netdev, + "Not enough CMB pages for number of queues and size of descriptor rings, need %d have %d", + pages_required, pages_have); + return -ENOMEM; + } + + return pages_required; +} + +static int ionic_cmb_rings_toggle(struct ionic_lif *lif, bool cmb_tx, bool cmb_rx) +{ + struct ionic_queue_params qparam; + int pages_used; + + if (netif_running(lif->netdev)) { + netdev_info(lif->netdev, "Please stop device to toggle CMB for tx/rx-push\n"); + return -EBUSY; + } + + ionic_init_queue_params(lif, &qparam); + qparam.cmb_tx = cmb_tx; + qparam.cmb_rx = cmb_rx; + pages_used = ionic_validate_cmb_config(lif, &qparam); + if (pages_used < 0) + return pages_used; + + if (cmb_tx) + set_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + else + clear_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + + if (cmb_rx) + set_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); + else + clear_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); + + if (cmb_tx || cmb_rx) + netdev_info(lif->netdev, "Enabling CMB %s %s rings - %d pages\n", + cmb_tx ? "TX" : "", cmb_rx ? 
"RX" : "", pages_used); + else + netdev_info(lif->netdev, "Disabling CMB rings\n"); + + return 0; +} + static void ionic_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring, struct kernel_ethtool_ringparam *kernel_ring, @@ -522,6 +603,8 @@ static void ionic_get_ringparam(struct net_device *netdev, ring->tx_pending = lif->ntxq_descs; ring->rx_max_pending = IONIC_MAX_RX_DESC; ring->rx_pending = lif->nrxq_descs; + kernel_ring->tx_push = test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + kernel_ring->rx_push = test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); } static int ionic_set_ringparam(struct net_device *netdev, @@ -551,9 +634,28 @@ static int ionic_set_ringparam(struct net_device *netdev, /* if nothing to do return success */ if (ring->tx_pending == lif->ntxq_descs && - ring->rx_pending == lif->nrxq_descs) + ring->rx_pending == lif->nrxq_descs && + kernel_ring->tx_push == test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) && + kernel_ring->rx_push == test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state)) return 0; + qparam.ntxq_descs = ring->tx_pending; + qparam.nrxq_descs = ring->rx_pending; + qparam.cmb_tx = kernel_ring->tx_push; + qparam.cmb_rx = kernel_ring->rx_push; + + err = ionic_validate_cmb_config(lif, &qparam); + if (err < 0) + return err; + + if (kernel_ring->tx_push != test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) || + kernel_ring->rx_push != test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state)) { + err = ionic_cmb_rings_toggle(lif, kernel_ring->tx_push, + kernel_ring->rx_push); + if (err < 0) + return err; + } + if (ring->tx_pending != lif->ntxq_descs) netdev_info(netdev, "Changing Tx ring size from %d to %d\n", lif->ntxq_descs, ring->tx_pending); @@ -569,9 +671,6 @@ static int ionic_set_ringparam(struct net_device *netdev, return 0; } - qparam.ntxq_descs = ring->tx_pending; - qparam.nrxq_descs = ring->rx_pending; - mutex_lock(&lif->queue_lock); err = ionic_reconfigure_queues(lif, &qparam); mutex_unlock(&lif->queue_lock); @@ -638,7 +737,7 @@ static int ionic_set_channels(struct net_device *netdev, lif->nxqs, ch->combined_count); qparam.nxqs = ch->combined_count; - qparam.intr_split = 0; + qparam.intr_split = false; } else { max_cnt /= 2; if (ch->rx_count > max_cnt) @@ -654,9 +753,13 @@ static int ionic_set_channels(struct net_device *netdev, lif->nxqs, ch->rx_count); qparam.nxqs = ch->rx_count; - qparam.intr_split = 1; + qparam.intr_split = true; } + err = ionic_validate_cmb_config(lif, &qparam); + if (err < 0) + return err; + /* if we're not running, just set the values and return */ if (!netif_running(lif->netdev)) { lif->nxqs = qparam.nxqs; @@ -965,6 +1068,8 @@ static const struct ethtool_ops ionic_ethtool_ops = { .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_USE_ADAPTIVE_RX | ETHTOOL_COALESCE_USE_ADAPTIVE_TX, + .supported_ring_params = ETHTOOL_RING_USE_TX_PUSH | + ETHTOOL_RING_USE_RX_PUSH, .get_drvinfo = ionic_get_drvinfo, .get_regs_len = ionic_get_regs_len, .get_regs = ionic_get_regs, diff --git a/drivers/net/ethernet/pensando/ionic/ionic_if.h b/drivers/net/ethernet/pensando/ionic/ionic_if.h index eac09b2375b8..9a1825edf0d0 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_if.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_if.h @@ -3073,9 +3073,10 @@ union ionic_adminq_comp { #define IONIC_BARS_MAX 6 #define IONIC_PCI_BAR_DBELL 1 +#define IONIC_PCI_BAR_CMB 2 -/* BAR0 */ #define IONIC_BAR0_SIZE 0x8000 +#define IONIC_BAR2_SIZE 0x800000 #define IONIC_BAR0_DEV_INFO_REGS_OFFSET 0x0000 #define IONIC_BAR0_DEV_CMD_REGS_OFFSET 0x0800 diff --git 
a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c index 63a78a9ac241..957027e546b3 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -26,9 +26,12 @@ static const u8 ionic_qtype_versions[IONIC_QTYPE_MAX] = { [IONIC_QTYPE_ADMINQ] = 0, /* 0 = Base version with CQ support */ [IONIC_QTYPE_NOTIFYQ] = 0, /* 0 = Base version */ - [IONIC_QTYPE_RXQ] = 0, /* 0 = Base version with CQ+SG support */ - [IONIC_QTYPE_TXQ] = 1, /* 0 = Base version with CQ+SG support - * 1 = ... with Tx SG version 1 + [IONIC_QTYPE_RXQ] = 2, /* 0 = Base version with CQ+SG support + * 2 = ... with CMB rings + */ + [IONIC_QTYPE_TXQ] = 3, /* 0 = Base version with CQ+SG support + * 1 = ... with Tx SG version 1 + * 3 = ... with CMB rings */ }; @@ -149,7 +152,7 @@ static void ionic_link_status_check(struct ionic_lif *lif) mutex_lock(&lif->queue_lock); err = ionic_start_queues(lif); if (err && err != -EBUSY) { - netdev_err(lif->netdev, + netdev_err(netdev, "Failed to start queues: %d\n", err); set_bit(IONIC_LIF_F_BROKEN, lif->state); netif_carrier_off(lif->netdev); @@ -397,6 +400,15 @@ static void ionic_qcq_free(struct ionic_lif *lif, struct ionic_qcq *qcq) qcq->q_base_pa = 0; } + if (qcq->cmb_q_base) { + iounmap(qcq->cmb_q_base); + ionic_put_cmb(lif, qcq->cmb_pgid, qcq->cmb_order); + qcq->cmb_pgid = 0; + qcq->cmb_order = 0; + qcq->cmb_q_base = NULL; + qcq->cmb_q_base_pa = 0; + } + if (qcq->cq_base) { dma_free_coherent(dev, qcq->cq_size, qcq->cq_base, qcq->cq_base_pa); qcq->cq_base = NULL; @@ -608,6 +620,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type, ionic_cq_map(&new->cq, cq_base, cq_base_pa); ionic_cq_bind(&new->cq, &new->q); } else { + /* regular DMA q descriptors */ new->q_size = PAGE_SIZE + (num_descs * desc_size); new->q_base = dma_alloc_coherent(dev, new->q_size, &new->q_base_pa, GFP_KERNEL); @@ -620,6 +633,33 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type, q_base_pa = ALIGN(new->q_base_pa, PAGE_SIZE); ionic_q_map(&new->q, q_base, q_base_pa); + if (flags & IONIC_QCQ_F_CMB_RINGS) { + /* on-chip CMB q descriptors */ + new->cmb_q_size = num_descs * desc_size; + new->cmb_order = order_base_2(new->cmb_q_size / PAGE_SIZE); + + err = ionic_get_cmb(lif, &new->cmb_pgid, &new->cmb_q_base_pa, + new->cmb_order); + if (err) { + netdev_err(lif->netdev, + "Cannot allocate queue order %d from cmb: err %d\n", + new->cmb_order, err); + goto err_out_free_q; + } + + new->cmb_q_base = ioremap_wc(new->cmb_q_base_pa, new->cmb_q_size); + if (!new->cmb_q_base) { + netdev_err(lif->netdev, "Cannot map queue from cmb\n"); + ionic_put_cmb(lif, new->cmb_pgid, new->cmb_order); + err = -ENOMEM; + goto err_out_free_q; + } + + new->cmb_q_base_pa -= idev->phy_cmb_pages; + ionic_q_cmb_map(&new->q, new->cmb_q_base, new->cmb_q_base_pa); + } + + /* cq DMA descriptors */ new->cq_size = PAGE_SIZE + (num_descs * cq_desc_size); new->cq_base = dma_alloc_coherent(dev, new->cq_size, &new->cq_base_pa, GFP_KERNEL); @@ -658,6 +698,10 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type, err_out_free_cq: dma_free_coherent(dev, new->cq_size, new->cq_base, new->cq_base_pa); err_out_free_q: + if (new->cmb_q_base) { + iounmap(new->cmb_q_base); + ionic_put_cmb(lif, new->cmb_pgid, new->cmb_order); + } dma_free_coherent(dev, new->q_size, new->q_base, new->q_base_pa); err_out_free_cq_info: vfree(new->cq.info); @@ -739,6 +783,8 @@ static void ionic_qcq_sanitize(struct ionic_qcq *qcq) qcq->cq.tail_idx 
= 0; qcq->cq.done_color = 1; memset(qcq->q_base, 0, qcq->q_size); + if (qcq->cmb_q_base) + memset_io(qcq->cmb_q_base, 0, qcq->cmb_q_size); memset(qcq->cq_base, 0, qcq->cq_size); memset(qcq->sg_base, 0, qcq->sg_size); } @@ -758,6 +804,7 @@ static int ionic_lif_txq_init(struct ionic_lif *lif, struct ionic_qcq *qcq) .index = cpu_to_le32(q->index), .flags = cpu_to_le16(IONIC_QINIT_F_IRQ | IONIC_QINIT_F_SG), + .intr_index = cpu_to_le16(qcq->intr.index), .pid = cpu_to_le16(q->pid), .ring_size = ilog2(q->num_descs), .ring_base = cpu_to_le64(q->base_pa), @@ -766,17 +813,19 @@ static int ionic_lif_txq_init(struct ionic_lif *lif, struct ionic_qcq *qcq) .features = cpu_to_le64(q->features), }, }; - unsigned int intr_index; int err; - intr_index = qcq->intr.index; - - ctx.cmd.q_init.intr_index = cpu_to_le16(intr_index); + if (qcq->flags & IONIC_QCQ_F_CMB_RINGS) { + ctx.cmd.q_init.flags |= cpu_to_le16(IONIC_QINIT_F_CMB); + ctx.cmd.q_init.ring_base = cpu_to_le64(qcq->cmb_q_base_pa); + } dev_dbg(dev, "txq_init.pid %d\n", ctx.cmd.q_init.pid); dev_dbg(dev, "txq_init.index %d\n", ctx.cmd.q_init.index); dev_dbg(dev, "txq_init.ring_base 0x%llx\n", ctx.cmd.q_init.ring_base); dev_dbg(dev, "txq_init.ring_size %d\n", ctx.cmd.q_init.ring_size); + dev_dbg(dev, "txq_init.cq_ring_base 0x%llx\n", ctx.cmd.q_init.cq_ring_base); + dev_dbg(dev, "txq_init.sg_ring_base 0x%llx\n", ctx.cmd.q_init.sg_ring_base); dev_dbg(dev, "txq_init.flags 0x%x\n", ctx.cmd.q_init.flags); dev_dbg(dev, "txq_init.ver %d\n", ctx.cmd.q_init.ver); dev_dbg(dev, "txq_init.intr_index %d\n", ctx.cmd.q_init.intr_index); @@ -834,6 +883,11 @@ static int ionic_lif_rxq_init(struct ionic_lif *lif, struct ionic_qcq *qcq) }; int err; + if (qcq->flags & IONIC_QCQ_F_CMB_RINGS) { + ctx.cmd.q_init.flags |= cpu_to_le16(IONIC_QINIT_F_CMB); + ctx.cmd.q_init.ring_base = cpu_to_le64(qcq->cmb_q_base_pa); + } + dev_dbg(dev, "rxq_init.pid %d\n", ctx.cmd.q_init.pid); dev_dbg(dev, "rxq_init.index %d\n", ctx.cmd.q_init.index); dev_dbg(dev, "rxq_init.ring_base 0x%llx\n", ctx.cmd.q_init.ring_base); @@ -2010,8 +2064,13 @@ static int ionic_txrx_alloc(struct ionic_lif *lif) sg_desc_sz = sizeof(struct ionic_txq_sg_desc); flags = IONIC_QCQ_F_TX_STATS | IONIC_QCQ_F_SG; + + if (test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state)) + flags |= IONIC_QCQ_F_CMB_RINGS; + if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) flags |= IONIC_QCQ_F_INTR; + for (i = 0; i < lif->nxqs; i++) { err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags, num_desc, desc_sz, comp_sz, sg_desc_sz, @@ -2032,6 +2091,9 @@ static int ionic_txrx_alloc(struct ionic_lif *lif) flags = IONIC_QCQ_F_RX_STATS | IONIC_QCQ_F_SG | IONIC_QCQ_F_INTR; + if (test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state)) + flags |= IONIC_QCQ_F_CMB_RINGS; + num_desc = lif->nrxq_descs; desc_sz = sizeof(struct ionic_rxq_desc); comp_sz = sizeof(struct ionic_rxq_comp); @@ -2507,7 +2569,7 @@ static int ionic_set_vf_rate(struct net_device *netdev, int vf, ret = ionic_set_vf_config(ionic, vf, &vfc); if (!ret) - lif->ionic->vfs[vf].maxrate = cpu_to_le32(tx_max); + ionic->vfs[vf].maxrate = cpu_to_le32(tx_max); } up_write(&ionic->vf_op_lock); @@ -2707,6 +2769,55 @@ static const struct net_device_ops ionic_netdev_ops = { .ndo_get_vf_stats = ionic_get_vf_stats, }; +static int ionic_cmb_reconfig(struct ionic_lif *lif, + struct ionic_queue_params *qparam) +{ + struct ionic_queue_params start_qparams; + int err = 0; + + /* When changing CMB queue parameters, we're using limited + * on-device memory and don't have extra memory to use for + * duplicate 
allocations, so we free it all first then + * re-allocate with the new parameters. + */ + + /* Checkpoint for possible unwind */ + ionic_init_queue_params(lif, &start_qparams); + + /* Stop and free the queues */ + ionic_stop_queues_reconfig(lif); + ionic_txrx_free(lif); + + /* Set up new qparams */ + ionic_set_queue_params(lif, qparam); + + if (netif_running(lif->netdev)) { + /* Alloc and start the new configuration */ + err = ionic_txrx_alloc(lif); + if (err) { + dev_warn(lif->ionic->dev, + "CMB reconfig failed, restoring values: %d\n", err); + + /* Back out the changes */ + ionic_set_queue_params(lif, &start_qparams); + err = ionic_txrx_alloc(lif); + if (err) { + dev_err(lif->ionic->dev, + "CMB restore failed: %d\n", err); + goto errout; + } + } + + ionic_start_queues_reconfig(lif); + } else { + /* This was detached in ionic_stop_queues_reconfig() */ + netif_device_attach(lif->netdev); + } + +errout: + return err; +} + static void ionic_swap_queues(struct ionic_qcq *a, struct ionic_qcq *b) { /* only swapping the queues, not the napi, flags, or other stuff */ @@ -2749,6 +2860,11 @@ int ionic_reconfigure_queues(struct ionic_lif *lif, unsigned int flags, i; int err = 0; + /* Are we changing q params while CMB is on */ + if ((test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state) && qparam->cmb_tx) || + (test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state) && qparam->cmb_rx)) + return ionic_cmb_reconfig(lif, qparam); + /* allocate temporary qcq arrays to hold new queue structs */ if (qparam->nxqs != lif->nxqs || qparam->ntxq_descs != lif->ntxq_descs) { tx_qcqs = devm_kcalloc(lif->ionic->dev, lif->ionic->ntxqs_per_lif, @@ -2785,6 +2901,16 @@ int ionic_reconfigure_queues(struct ionic_lif *lif, sg_desc_sz = sizeof(struct ionic_txq_sg_desc); for (i = 0; i < qparam->nxqs; i++) { + /* If missing, short placeholder qcq needed for swap */ + if (!lif->txqcqs[i]) { + flags = IONIC_QCQ_F_TX_STATS | IONIC_QCQ_F_SG; + err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags, + 4, desc_sz, comp_sz, sg_desc_sz, + lif->kern_pid, &lif->txqcqs[i]); + if (err) + goto err_out; + } + flags = lif->txqcqs[i]->flags & ~IONIC_QCQ_F_INTR; err = ionic_qcq_alloc(lif, IONIC_QTYPE_TXQ, i, "tx", flags, num_desc, desc_sz, comp_sz, sg_desc_sz, @@ -2804,6 +2930,16 @@ int ionic_reconfigure_queues(struct ionic_lif *lif, comp_sz *= 2; for (i = 0; i < qparam->nxqs; i++) { + /* If missing, short placeholder qcq needed for swap */ + if (!lif->rxqcqs[i]) { + flags = IONIC_QCQ_F_RX_STATS | IONIC_QCQ_F_SG; + err = ionic_qcq_alloc(lif, IONIC_QTYPE_RXQ, i, "rx", flags, + 4, desc_sz, comp_sz, sg_desc_sz, + lif->kern_pid, &lif->rxqcqs[i]); + if (err) + goto err_out; + } + flags = lif->rxqcqs[i]->flags & ~IONIC_QCQ_F_INTR; err = ionic_qcq_alloc(lif, IONIC_QTYPE_RXQ, i, "rx", flags, num_desc, desc_sz, comp_sz, sg_desc_sz, @@ -2853,10 +2989,15 @@ int ionic_reconfigure_queues(struct ionic_lif *lif, lif->tx_coalesce_hw = lif->rx_coalesce_hw; } - /* clear existing interrupt assignments */ + /* Clear existing interrupt assignments. We check for NULL here + * because we're checking the whole array for potential qcqs, not + * just those qcqs that have just been set up. 
+ */ for (i = 0; i < lif->ionic->ntxqs_per_lif; i++) { - ionic_qcq_intr_free(lif, lif->txqcqs[i]); - ionic_qcq_intr_free(lif, lif->rxqcqs[i]); + if (lif->txqcqs[i]) + ionic_qcq_intr_free(lif, lif->txqcqs[i]); + if (lif->rxqcqs[i]) + ionic_qcq_intr_free(lif, lif->rxqcqs[i]); } /* re-assign the interrupts */ diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h index 734519895614..c9c4c46d5a16 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h @@ -59,6 +59,7 @@ struct ionic_rx_stats { #define IONIC_QCQ_F_TX_STATS BIT(3) #define IONIC_QCQ_F_RX_STATS BIT(4) #define IONIC_QCQ_F_NOTIFYQ BIT(5) +#define IONIC_QCQ_F_CMB_RINGS BIT(6) struct ionic_qcq { void *q_base; @@ -70,6 +71,11 @@ struct ionic_qcq { void *sg_base; dma_addr_t sg_base_pa; u32 sg_size; + void __iomem *cmb_q_base; + phys_addr_t cmb_q_base_pa; + u32 cmb_q_size; + u32 cmb_pgid; + u32 cmb_order; struct dim dim; struct ionic_queue q; struct ionic_cq cq; @@ -142,6 +148,8 @@ enum ionic_lif_state_flags { IONIC_LIF_F_BROKEN, IONIC_LIF_F_TX_DIM_INTR, IONIC_LIF_F_RX_DIM_INTR, + IONIC_LIF_F_CMB_TX_RINGS, + IONIC_LIF_F_CMB_RX_RINGS, /* leave this as last */ IONIC_LIF_F_STATE_SIZE @@ -245,8 +253,10 @@ struct ionic_queue_params { unsigned int nxqs; unsigned int ntxq_descs; unsigned int nrxq_descs; - unsigned int intr_split; u64 rxq_features; + bool intr_split; + bool cmb_tx; + bool cmb_rx; }; static inline void ionic_init_queue_params(struct ionic_lif *lif, @@ -255,8 +265,34 @@ static inline void ionic_init_queue_params(struct ionic_lif *lif, qparam->nxqs = lif->nxqs; qparam->ntxq_descs = lif->ntxq_descs; qparam->nrxq_descs = lif->nrxq_descs; - qparam->intr_split = test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state); qparam->rxq_features = lif->rxq_features; + qparam->intr_split = test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state); + qparam->cmb_tx = test_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + qparam->cmb_rx = test_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); +} + +static inline void ionic_set_queue_params(struct ionic_lif *lif, + struct ionic_queue_params *qparam) +{ + lif->nxqs = qparam->nxqs; + lif->ntxq_descs = qparam->ntxq_descs; + lif->nrxq_descs = qparam->nrxq_descs; + lif->rxq_features = qparam->rxq_features; + + if (qparam->intr_split) + set_bit(IONIC_LIF_F_SPLIT_INTR, lif->state); + else + clear_bit(IONIC_LIF_F_SPLIT_INTR, lif->state); + + if (qparam->cmb_tx) + set_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + else + clear_bit(IONIC_LIF_F_CMB_TX_RINGS, lif->state); + + if (qparam->cmb_rx) + set_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); + else + clear_bit(IONIC_LIF_F_CMB_RX_RINGS, lif->state); } static inline u32 ionic_coal_usec_to_hw(struct ionic *ionic, u32 usecs) diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c index 08c42b039d92..1dc79cecc5cc 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_main.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c @@ -388,7 +388,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx, break; /* force a check of FW status and break out if FW reset */ - (void)ionic_heartbeat_check(lif->ionic); + ionic_heartbeat_check(lif->ionic); if ((test_bit(IONIC_LIF_F_FW_RESET, lif->state) && !lif->ionic->idev.fw_status_ready) || test_bit(IONIC_LIF_F_FW_STOPPING, lif->state)) { @@ -676,7 +676,7 @@ int ionic_port_init(struct ionic *ionic) err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT); 
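The ionic_set_queue_params() helper above is the write half of a checkpoint/restore pair: ionic_cmb_reconfig() snapshots the live parameters with ionic_init_queue_params(), applies the requested ones, and re-applies the snapshot if the new allocation fails. A minimal standalone model of that unwind idiom (the struct fields and the failure rule are invented for illustration, not the driver's):

#include <stdio.h>
#include <stdbool.h>

/* Illustrative stand-ins for the driver's queue parameters */
struct qparams {
        unsigned int ntxq_descs;
        unsigned int nrxq_descs;
        bool cmb_tx;
        bool cmb_rx;
};

/* Pretend allocator: large rings don't fit in controller memory */
static bool rings_fit(const struct qparams *p)
{
        return !(p->cmb_tx && p->ntxq_descs > 1024);
}

static int reconfig(struct qparams *live, const struct qparams *want)
{
        struct qparams start = *live;   /* checkpoint for possible unwind */

        *live = *want;                  /* commit the requested parameters */
        if (rings_fit(live))
                return 0;

        *live = start;                  /* back out the changes */
        return rings_fit(live) ? -1 : -2;   /* -2: restore failed too */
}

int main(void)
{
        struct qparams live = { 512, 512, false, false };
        struct qparams want = { 4096, 512, true, false };

        printf("reconfig: %d (ntxq_descs now %u)\n",
               reconfig(&live, &want), live.ntxq_descs);
        return 0;
}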
ionic_dev_cmd_port_state(&ionic->idev, IONIC_PORT_ADMIN_STATE_UP); - (void)ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT); + ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT); mutex_unlock(&ionic->dev_cmd_lock); if (err) { diff --git a/drivers/net/ethernet/pensando/ionic/ionic_phc.c b/drivers/net/ethernet/pensando/ionic/ionic_phc.c index 887046838b3b..eac2f0e3576e 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_phc.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_phc.c @@ -268,7 +268,7 @@ static u64 ionic_hwstamp_read(struct ionic *ionic, u32 tick_high_before, tick_high, tick_low; /* read and discard low part to defeat hw staging of high part */ - (void)ioread32(&ionic->idev.hwstamp_regs->tick_low); + ioread32(&ionic->idev.hwstamp_regs->tick_low); tick_high_before = ioread32(&ionic->idev.hwstamp_regs->tick_high); diff --git a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c index b7363376dfc8..1ee2f285cb42 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c @@ -604,14 +604,14 @@ loop_out: * they can clear room for some new filters */ list_for_each_entry_safe(sync_item, spos, &sync_del_list, list) { - (void)ionic_lif_filter_del(lif, &sync_item->f.cmd); + ionic_lif_filter_del(lif, &sync_item->f.cmd); list_del(&sync_item->list); devm_kfree(dev, sync_item); } list_for_each_entry_safe(sync_item, spos, &sync_add_list, list) { - (void)ionic_lif_filter_add(lif, &sync_item->f.cmd); + ionic_lif_filter_add(lif, &sync_item->f.cmd); list_del(&sync_item->list); devm_kfree(dev, sync_item); diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c index f761780f0162..26798fc635db 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c @@ -402,6 +402,14 @@ bool ionic_rx_service(struct ionic_cq *cq, struct ionic_cq_info *cq_info) return true; } +static inline void ionic_write_cmb_desc(struct ionic_queue *q, + void __iomem *cmb_desc, + void *desc) +{ + if (q_to_qcq(q)->flags & IONIC_QCQ_F_CMB_RINGS) + memcpy_toio(cmb_desc, desc, q->desc_size); +} + void ionic_rx_fill(struct ionic_queue *q) { struct net_device *netdev = q->lif->netdev; @@ -480,6 +488,8 @@ void ionic_rx_fill(struct ionic_queue *q) IONIC_RXQ_DESC_OPCODE_SIMPLE; desc_info->nbufs = nfrags; + ionic_write_cmb_desc(q, desc_info->cmb_desc, desc); + ionic_rxq_post(q, false, ionic_rx_clean, NULL); } @@ -943,7 +953,8 @@ static int ionic_tx_tcp_pseudo_csum(struct sk_buff *skb) return 0; } -static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc, +static void ionic_tx_tso_post(struct ionic_queue *q, + struct ionic_desc_info *desc_info, struct sk_buff *skb, dma_addr_t addr, u8 nsge, u16 len, unsigned int hdrlen, unsigned int mss, @@ -951,6 +962,7 @@ static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc u16 vlan_tci, bool has_vlan, bool start, bool done) { + struct ionic_txq_desc *desc = desc_info->desc; u8 flags = 0; u64 cmd; @@ -966,6 +978,8 @@ static void ionic_tx_tso_post(struct ionic_queue *q, struct ionic_txq_desc *desc desc->hdr_len = cpu_to_le16(hdrlen); desc->mss = cpu_to_le16(mss); + ionic_write_cmb_desc(q, desc_info->cmb_desc, desc); + if (start) { skb_tx_timestamp(skb); if (!unlikely(q->features & IONIC_TXQ_F_HWSTAMP)) @@ -1084,7 +1098,7 @@ static int ionic_tx_tso(struct ionic_queue *q, struct sk_buff *skb) seg_rem = min(tso_rem, mss); done = (tso_rem 
== 0); /* post descriptor */ - ionic_tx_tso_post(q, desc, skb, + ionic_tx_tso_post(q, desc_info, skb, desc_addr, desc_nsge, desc_len, hdrlen, mss, outer_csum, vlan_tci, has_vlan, start, done); @@ -1133,6 +1147,8 @@ static void ionic_tx_calc_csum(struct ionic_queue *q, struct sk_buff *skb, desc->csum_start = cpu_to_le16(skb_checksum_start_offset(skb)); desc->csum_offset = cpu_to_le16(skb->csum_offset); + ionic_write_cmb_desc(q, desc_info->cmb_desc, desc); + if (skb_csum_is_sctp(skb)) stats->crc32_csum++; else @@ -1170,6 +1186,8 @@ static void ionic_tx_calc_no_csum(struct ionic_queue *q, struct sk_buff *skb, desc->csum_start = 0; desc->csum_offset = 0; + ionic_write_cmb_desc(q, desc_info->cmb_desc, desc); + stats->csum_none++; } diff --git a/drivers/net/ethernet/qlogic/qed/qed_devlink.c b/drivers/net/ethernet/qlogic/qed/qed_devlink.c index 922c47797af6..be5cc8b79bd5 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_devlink.c +++ b/drivers/net/ethernet/qlogic/qed/qed_devlink.c @@ -198,7 +198,6 @@ static const struct devlink_ops qed_dl_ops = { struct devlink *qed_devlink_register(struct qed_dev *cdev) { - union devlink_param_value value; struct qed_devlink *qdevlink; struct devlink *dl; int rc; @@ -216,11 +215,6 @@ struct devlink *qed_devlink_register(struct qed_dev *cdev) if (rc) goto err_unregister; - value.vbool = false; - devlink_param_driverinit_value_set(dl, - QED_DEVLINK_PARAM_ID_IWARP_CMT, - value); - cdev->iwarp_cmt = false; qed_fw_reporters_create(dl); diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c index 0848b5529d48..2bf18748581d 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c +++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c @@ -831,7 +831,7 @@ static int qed_iov_enable_vf_access(struct qed_hwfn *p_hwfn, * @p_hwfn: HW device data. * @p_ptt: PTT window for writing the registers. * @vf: VF info data. - * @enable: The actual permision for this VF. + * @enable: The actual permission for this VF. * * In E4, queue zone permission table size is 320x9. 
There * are 320 VF queues for single engine device (256 for dual diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index 953f304b8588..4a3c3b5fb4a1 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -892,6 +892,9 @@ static void qede_init_ndev(struct qede_dev *edev) ndev->hw_features = hw_features; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + /* MTU range: 46 - 9600 */ ndev->min_mtu = ETH_ZLEN - ETH_HLEN; ndev->max_mtu = QEDE_MAX_JUMBO_PACKET_SIZE; @@ -970,8 +973,15 @@ static int qede_alloc_fp_array(struct qede_dev *edev) goto err; } - mem = krealloc(edev->coal_entry, QEDE_QUEUE_CNT(edev) * - sizeof(*edev->coal_entry), GFP_KERNEL); + if (!edev->coal_entry) { + mem = kcalloc(QEDE_MAX_RSS_CNT(edev), + sizeof(*edev->coal_entry), GFP_KERNEL); + } else { + mem = krealloc(edev->coal_entry, + QEDE_QUEUE_CNT(edev) * sizeof(*edev->coal_entry), + GFP_KERNEL); + } + if (!mem) { DP_ERR(edev, "coalesce entry allocation failed\n"); kfree(edev->coal_entry); diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c index 27b1663c476e..39d24e07f306 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c @@ -12,6 +12,7 @@ #include "rmnet_handlers.h" #include "rmnet_vnd.h" #include "rmnet_private.h" +#include "rmnet_map.h" /* Local Definitions and Declarations */ @@ -39,6 +40,8 @@ static int rmnet_unregister_real_device(struct net_device *real_dev) if (port->nr_rmnet_devs) return -EINVAL; + rmnet_map_tx_aggregate_exit(port); + netdev_rx_handler_unregister(real_dev); kfree(port); @@ -79,6 +82,8 @@ static int rmnet_register_real_device(struct net_device *real_dev, for (entry = 0; entry < RMNET_MAX_LOGICAL_EP; entry++) INIT_HLIST_HEAD(&port->muxed_ep[entry]); + rmnet_map_tx_aggregate_init(port); + netdev_dbg(real_dev, "registered with rmnet\n"); return 0; } diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h index 3d3cba56c516..ed112d51ac5a 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h @@ -6,6 +6,7 @@ */ #include <linux/skbuff.h> +#include <linux/time.h> #include <net/gro_cells.h> #ifndef _RMNET_CONFIG_H_ @@ -19,6 +20,12 @@ struct rmnet_endpoint { struct hlist_node hlnode; }; +struct rmnet_egress_agg_params { + u32 bytes; + u32 count; + u64 time_nsec; +}; + /* One instance of this structure is instantiated for each real_dev associated * with rmnet. 
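The egress aggregation fields added below (skbagg_head/skbagg_tail, agg_count, the agg_time/agg_last stamps, and the flush hrtimer/work pair) are all protected by agg_lock.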
*/ @@ -30,6 +37,19 @@ struct rmnet_port { struct hlist_head muxed_ep[RMNET_MAX_LOGICAL_EP]; struct net_device *bridge_ep; struct net_device *rmnet_dev; + + /* Egress aggregation information */ + struct rmnet_egress_agg_params egress_agg_params; + /* Protect aggregation related elements */ + spinlock_t agg_lock; + struct sk_buff *skbagg_head; + struct sk_buff *skbagg_tail; + int agg_state; + u8 agg_count; + struct timespec64 agg_time; + struct timespec64 agg_last; + struct hrtimer hrtimer; + struct work_struct agg_wq; }; extern struct rtnl_link_ops rmnet_link_ops; diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c index a313242a762e..9f3479500f85 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c @@ -164,8 +164,18 @@ static int rmnet_map_egress_handler(struct sk_buff *skb, map_header->mux_id = mux_id; - skb->protocol = htons(ETH_P_MAP); + if (READ_ONCE(port->egress_agg_params.count) > 1) { + unsigned int len; + + len = rmnet_map_tx_aggregate(skb, port, orig_dev); + if (likely(len)) { + rmnet_vnd_tx_fixup_len(len, orig_dev); + return -EINPROGRESS; + } + return -ENOMEM; + } + skb->protocol = htons(ETH_P_MAP); return 0; } @@ -235,6 +245,7 @@ void rmnet_egress_handler(struct sk_buff *skb) struct rmnet_port *port; struct rmnet_priv *priv; u8 mux_id; + int err; sk_pacing_shift_update(skb->sk, 8); @@ -247,8 +258,11 @@ void rmnet_egress_handler(struct sk_buff *skb) if (!port) goto drop; - if (rmnet_map_egress_handler(skb, port, mux_id, orig_dev)) + err = rmnet_map_egress_handler(skb, port, mux_id, orig_dev); + if (err == -ENOMEM) goto drop; + else if (err == -EINPROGRESS) + return; rmnet_vnd_tx_fixup(skb, orig_dev); diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h index 2b033060fc20..b70284095568 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h @@ -53,5 +53,11 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb, struct net_device *orig_dev, int csum_type); int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len); +unsigned int rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port, + struct net_device *orig_dev); +void rmnet_map_tx_aggregate_init(struct rmnet_port *port); +void rmnet_map_tx_aggregate_exit(struct rmnet_port *port); +void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u32 size, + u32 count, u32 time); #endif /* _RMNET_MAP_H_ */ diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c index ba194698cc14..a5e3d1a88305 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c @@ -12,6 +12,7 @@ #include "rmnet_config.h" #include "rmnet_map.h" #include "rmnet_private.h" +#include "rmnet_vnd.h" #define RMNET_MAP_DEAGGR_SPACING 64 #define RMNET_MAP_DEAGGR_HEADROOM (RMNET_MAP_DEAGGR_SPACING / 2) @@ -518,3 +519,193 @@ int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, return 0; } + +#define RMNET_AGG_BYPASS_TIME_NSEC 10000000L + +static void reset_aggr_params(struct rmnet_port *port) +{ + port->skbagg_head = NULL; + port->agg_count = 0; + port->agg_state = 0; + memset(&port->agg_time, 0, sizeof(struct timespec64)); +} + +static void rmnet_send_skb(struct rmnet_port *port, struct sk_buff *skb) +{ + if (skb_needs_linearize(skb, port->dev->features)) 
{ + if (unlikely(__skb_linearize(skb))) { + struct rmnet_priv *priv; + + priv = netdev_priv(port->rmnet_dev); + this_cpu_inc(priv->pcpu_stats->stats.tx_drops); + dev_kfree_skb_any(skb); + return; + } + } + + dev_queue_xmit(skb); +} + +static void rmnet_map_flush_tx_packet_work(struct work_struct *work) +{ + struct sk_buff *skb = NULL; + struct rmnet_port *port; + + port = container_of(work, struct rmnet_port, agg_wq); + + spin_lock_bh(&port->agg_lock); + if (likely(port->agg_state == -EINPROGRESS)) { + /* Buffer may have already been shipped out */ + if (likely(port->skbagg_head)) { + skb = port->skbagg_head; + reset_aggr_params(port); + } + port->agg_state = 0; + } + + spin_unlock_bh(&port->agg_lock); + if (skb) + rmnet_send_skb(port, skb); +} + +static enum hrtimer_restart rmnet_map_flush_tx_packet_queue(struct hrtimer *t) +{ + struct rmnet_port *port; + + port = container_of(t, struct rmnet_port, hrtimer); + + schedule_work(&port->agg_wq); + + return HRTIMER_NORESTART; +} + +unsigned int rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port, + struct net_device *orig_dev) +{ + struct timespec64 diff, last; + unsigned int len = skb->len; + struct sk_buff *agg_skb; + int size; + + spin_lock_bh(&port->agg_lock); + memcpy(&last, &port->agg_last, sizeof(struct timespec64)); + ktime_get_real_ts64(&port->agg_last); + + if (!port->skbagg_head) { + /* Check to see if we should agg first. If the traffic is very + * sparse, don't aggregate. + */ +new_packet: + diff = timespec64_sub(port->agg_last, last); + size = port->egress_agg_params.bytes - skb->len; + + if (size < 0) { + /* dropped */ + spin_unlock_bh(&port->agg_lock); + return 0; + } + + if (diff.tv_sec > 0 || diff.tv_nsec > RMNET_AGG_BYPASS_TIME_NSEC || + size == 0) + goto no_aggr; + + port->skbagg_head = skb_copy_expand(skb, 0, size, GFP_ATOMIC); + if (!port->skbagg_head) + goto no_aggr; + + dev_kfree_skb_any(skb); + port->skbagg_head->protocol = htons(ETH_P_MAP); + port->agg_count = 1; + ktime_get_real_ts64(&port->agg_time); + skb_frag_list_init(port->skbagg_head); + goto schedule; + } + diff = timespec64_sub(port->agg_last, port->agg_time); + size = port->egress_agg_params.bytes - port->skbagg_head->len; + + if (skb->len > size) { + agg_skb = port->skbagg_head; + reset_aggr_params(port); + spin_unlock_bh(&port->agg_lock); + hrtimer_cancel(&port->hrtimer); + rmnet_send_skb(port, agg_skb); + spin_lock_bh(&port->agg_lock); + goto new_packet; + } + + if (skb_has_frag_list(port->skbagg_head)) + port->skbagg_tail->next = skb; + else + skb_shinfo(port->skbagg_head)->frag_list = skb; + + port->skbagg_head->len += skb->len; + port->skbagg_head->data_len += skb->len; + port->skbagg_head->truesize += skb->truesize; + port->skbagg_tail = skb; + port->agg_count++; + + if (diff.tv_sec > 0 || diff.tv_nsec > port->egress_agg_params.time_nsec || + port->agg_count >= port->egress_agg_params.count || + port->skbagg_head->len == port->egress_agg_params.bytes) { + agg_skb = port->skbagg_head; + reset_aggr_params(port); + spin_unlock_bh(&port->agg_lock); + hrtimer_cancel(&port->hrtimer); + rmnet_send_skb(port, agg_skb); + return len; + } + +schedule: + if (!hrtimer_active(&port->hrtimer) && port->agg_state != -EINPROGRESS) { + port->agg_state = -EINPROGRESS; + hrtimer_start(&port->hrtimer, + ns_to_ktime(port->egress_agg_params.time_nsec), + HRTIMER_MODE_REL); + } + spin_unlock_bh(&port->agg_lock); + + return len; + +no_aggr: + spin_unlock_bh(&port->agg_lock); + skb->protocol = htons(ETH_P_MAP); + dev_queue_xmit(skb); + + return len; +} + +void 
rmnet_map_update_ul_agg_config(struct rmnet_port *port, u32 size, + u32 count, u32 time) +{ + spin_lock_bh(&port->agg_lock); + port->egress_agg_params.bytes = size; + WRITE_ONCE(port->egress_agg_params.count, count); + port->egress_agg_params.time_nsec = time * NSEC_PER_USEC; + spin_unlock_bh(&port->agg_lock); +} + +void rmnet_map_tx_aggregate_init(struct rmnet_port *port) +{ + hrtimer_init(&port->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + port->hrtimer.function = rmnet_map_flush_tx_packet_queue; + spin_lock_init(&port->agg_lock); + rmnet_map_update_ul_agg_config(port, 4096, 1, 800); + INIT_WORK(&port->agg_wq, rmnet_map_flush_tx_packet_work); +} + +void rmnet_map_tx_aggregate_exit(struct rmnet_port *port) +{ + hrtimer_cancel(&port->hrtimer); + cancel_work_sync(&port->agg_wq); + + spin_lock_bh(&port->agg_lock); + if (port->agg_state == -EINPROGRESS) { + if (port->skbagg_head) { + dev_kfree_skb_any(port->skbagg_head); + reset_aggr_params(port); + } + + port->agg_state = 0; + } + spin_unlock_bh(&port->agg_lock); +} diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c index 3f5e6572d20e..046b5f7d8e7c 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c @@ -29,7 +29,7 @@ void rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev) u64_stats_update_end(&pcpu_ptr->syncp); } -void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev) +void rmnet_vnd_tx_fixup_len(unsigned int len, struct net_device *dev) { struct rmnet_priv *priv = netdev_priv(dev); struct rmnet_pcpu_stats *pcpu_ptr; @@ -38,10 +38,15 @@ void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev) u64_stats_update_begin(&pcpu_ptr->syncp); pcpu_ptr->stats.tx_pkts++; - pcpu_ptr->stats.tx_bytes += skb->len; + pcpu_ptr->stats.tx_bytes += len; u64_stats_update_end(&pcpu_ptr->syncp); } +void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev) +{ + rmnet_vnd_tx_fixup_len(skb->len, dev); +} + /* Network Device Operations */ static netdev_tx_t rmnet_vnd_start_xmit(struct sk_buff *skb, @@ -210,7 +215,52 @@ static void rmnet_get_ethtool_stats(struct net_device *dev, memcpy(data, st, ARRAY_SIZE(rmnet_gstrings_stats) * sizeof(u64)); } +static int rmnet_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + struct rmnet_priv *priv = netdev_priv(dev); + struct rmnet_port *port; + + port = rmnet_get_port_rtnl(priv->real_dev); + + memset(kernel_coal, 0, sizeof(*kernel_coal)); + kernel_coal->tx_aggr_max_bytes = port->egress_agg_params.bytes; + kernel_coal->tx_aggr_max_frames = port->egress_agg_params.count; + kernel_coal->tx_aggr_time_usecs = div_u64(port->egress_agg_params.time_nsec, + NSEC_PER_USEC); + + return 0; +} + +static int rmnet_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + struct rmnet_priv *priv = netdev_priv(dev); + struct rmnet_port *port; + + port = rmnet_get_port_rtnl(priv->real_dev); + + if (kernel_coal->tx_aggr_max_frames < 1 || kernel_coal->tx_aggr_max_frames > 64) + return -EINVAL; + + if (kernel_coal->tx_aggr_max_bytes > 32768) + return -EINVAL; + + rmnet_map_update_ul_agg_config(port, kernel_coal->tx_aggr_max_bytes, + kernel_coal->tx_aggr_max_frames, + kernel_coal->tx_aggr_time_usecs); + + return 0; +} + static const struct ethtool_ops rmnet_ethtool_ops = { + 
.supported_coalesce_params = ETHTOOL_COALESCE_TX_AGGR, + .get_coalesce = rmnet_get_coalesce, + .set_coalesce = rmnet_set_coalesce, .get_ethtool_stats = rmnet_get_ethtool_stats, .get_strings = rmnet_get_strings, .get_sset_count = rmnet_get_sset_count, diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h index dc3a4443ef0a..c2b2baf86894 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.h @@ -16,6 +16,7 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev, int rmnet_vnd_dellink(u8 id, struct rmnet_port *port, struct rmnet_endpoint *ep); void rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev); +void rmnet_vnd_tx_fixup_len(unsigned int len, struct net_device *dev); void rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev); void rmnet_vnd_setup(struct net_device *dev); int rmnet_vnd_validate_real_dev_mtu(struct net_device *real_dev); diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index dadd61bccfe7..45147a1016be 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -576,6 +576,7 @@ struct rtl8169_tc_offsets { enum rtl_flag { RTL_FLAG_TASK_ENABLED = 0, RTL_FLAG_TASK_RESET_PENDING, + RTL_FLAG_TASK_TX_TIMEOUT, RTL_FLAG_MAX }; @@ -3928,7 +3929,7 @@ static void rtl8169_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct rtl8169_private *tp = netdev_priv(dev); - rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING); + rtl_schedule_task(tp, RTL_FLAG_TASK_TX_TIMEOUT); } static int rtl8169_tx_map(struct rtl8169_private *tp, const u32 *opts, u32 len, @@ -4522,6 +4523,7 @@ static void rtl_task(struct work_struct *work) { struct rtl8169_private *tp = container_of(work, struct rtl8169_private, wk.work); + int ret; rtnl_lock(); @@ -4529,7 +4531,27 @@ static void rtl_task(struct work_struct *work) !test_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags)) goto out_unlock; + if (test_and_clear_bit(RTL_FLAG_TASK_TX_TIMEOUT, tp->wk.flags)) { + /* if chip isn't accessible, reset bus to revive it */ + if (RTL_R32(tp, TxConfig) == ~0) { + ret = pci_reset_bus(tp->pci_dev); + if (ret < 0) { + netdev_err(tp->dev, "Can't reset secondary PCI bus, detach NIC\n"); + netif_device_detach(tp->dev); + goto out_unlock; + } + } + + /* ASPM compatibility issues are a typical reason for tx timeouts */ + ret = pci_disable_link_state(tp->pci_dev, PCIE_LINK_STATE_L1 | + PCIE_LINK_STATE_L0S); + if (!ret) + netdev_warn_once(tp->dev, "ASPM disabled on Tx timeout\n"); + goto reset; + } + if (test_and_clear_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags)) { +reset: rtl_reset_work(tp); netif_wake_queue(tp->dev); } diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c index 2370c7797a0a..853394e5bb8b 100644 --- a/drivers/net/ethernet/renesas/rswitch.c +++ b/drivers/net/ethernet/renesas/rswitch.c @@ -16,7 +16,6 @@ #include <linux/of_irq.h> #include <linux/of_mdio.h> #include <linux/of_net.h> -#include <linux/phylink.h> #include <linux/phy/phy.h> #include <linux/pm_runtime.h> #include <linux/rtnetlink.h> @@ -124,13 +123,6 @@ static void rswitch_fwd_init(struct rswitch_private *priv) iowrite32(GENMASK(RSWITCH_NUM_PORTS - 1, 0), priv->addr + FWPBFC(priv->gwca.index)); } -/* gPTP timer (gPTP) */ -static void rswitch_get_timestamp(struct rswitch_private *priv, - struct timespec64 *ts) -{ - priv->ptp_priv->info.gettime64(&priv->ptp_priv->info, ts); -} - /* 
Gateway CPU agent block (GWCA) */ static int rswitch_gwca_change_mode(struct rswitch_private *priv, enum rswitch_gwca_mode mode) @@ -241,7 +233,7 @@ static int rswitch_get_num_cur_queues(struct rswitch_gwca_queue *gq) static bool rswitch_is_queue_rxed(struct rswitch_gwca_queue *gq) { - struct rswitch_ext_ts_desc *desc = &gq->ts_ring[gq->dirty]; + struct rswitch_ext_ts_desc *desc = &gq->rx_ring[gq->dirty]; if ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) return true; @@ -281,36 +273,43 @@ static void rswitch_gwca_queue_free(struct net_device *ndev, { int i; - if (gq->gptp) { + if (!gq->dir_tx) { dma_free_coherent(ndev->dev.parent, sizeof(struct rswitch_ext_ts_desc) * - (gq->ring_size + 1), gq->ts_ring, gq->ring_dma); - gq->ts_ring = NULL; - } else { - dma_free_coherent(ndev->dev.parent, - sizeof(struct rswitch_ext_desc) * - (gq->ring_size + 1), gq->ring, gq->ring_dma); - gq->ring = NULL; - } + (gq->ring_size + 1), gq->rx_ring, gq->ring_dma); + gq->rx_ring = NULL; - if (!gq->dir_tx) { for (i = 0; i < gq->ring_size; i++) dev_kfree_skb(gq->skbs[i]); + } else { + dma_free_coherent(ndev->dev.parent, + sizeof(struct rswitch_ext_desc) * + (gq->ring_size + 1), gq->tx_ring, gq->ring_dma); + gq->tx_ring = NULL; } kfree(gq->skbs); gq->skbs = NULL; } +static void rswitch_gwca_ts_queue_free(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + + dma_free_coherent(&priv->pdev->dev, + sizeof(struct rswitch_ts_desc) * (gq->ring_size + 1), + gq->ts_ring, gq->ring_dma); + gq->ts_ring = NULL; +} + static int rswitch_gwca_queue_alloc(struct net_device *ndev, struct rswitch_private *priv, struct rswitch_gwca_queue *gq, - bool dir_tx, bool gptp, int ring_size) + bool dir_tx, int ring_size) { int i, bit; gq->dir_tx = dir_tx; - gq->gptp = gptp; gq->ring_size = ring_size; gq->ndev = ndev; @@ -318,18 +317,19 @@ static int rswitch_gwca_queue_alloc(struct net_device *ndev, if (!gq->skbs) return -ENOMEM; - if (!dir_tx) + if (!dir_tx) { rswitch_gwca_queue_alloc_skb(gq, 0, gq->ring_size); - if (gptp) - gq->ts_ring = dma_alloc_coherent(ndev->dev.parent, + gq->rx_ring = dma_alloc_coherent(ndev->dev.parent, sizeof(struct rswitch_ext_ts_desc) * (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); - else - gq->ring = dma_alloc_coherent(ndev->dev.parent, - sizeof(struct rswitch_ext_desc) * - (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); - if (!gq->ts_ring && !gq->ring) + } else { + gq->tx_ring = dma_alloc_coherent(ndev->dev.parent, + sizeof(struct rswitch_ext_desc) * + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); + } + + if (!gq->rx_ring && !gq->tx_ring) goto out; i = gq->index / 32; @@ -347,6 +347,17 @@ out: return -ENOMEM; } +static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + + gq->ring_size = TS_RING_SIZE; + gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, + sizeof(struct rswitch_ts_desc) * + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); + return !gq->ts_ring ? 
-ENOMEM : 0; +} + static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr) { desc->dptrl = cpu_to_le32(lower_32_bits(addr)); @@ -362,14 +373,14 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, struct rswitch_private *priv, struct rswitch_gwca_queue *gq) { - int tx_ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size; + int ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size; struct rswitch_ext_desc *desc; struct rswitch_desc *linkfix; dma_addr_t dma_addr; int i; - memset(gq->ring, 0, tx_ring_size); - for (i = 0, desc = gq->ring; i < gq->ring_size; i++, desc++) { + memset(gq->tx_ring, 0, ring_size); + for (i = 0, desc = gq->tx_ring; i < gq->ring_size; i++, desc++) { if (!gq->dir_tx) { dma_addr = dma_map_single(ndev->dev.parent, gq->skbs[i]->data, PKT_BUF_SZ, @@ -387,7 +398,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); desc->desc.die_dt = DT_LINKFIX; - linkfix = &priv->linkfix_table[gq->index]; + linkfix = &priv->gwca.linkfix_table[gq->index]; linkfix->die_dt = DT_LINKFIX; rswitch_desc_set_dptr(linkfix, gq->ring_dma); @@ -398,7 +409,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, err: if (!gq->dir_tx) { - for (i--, desc = gq->ring; i >= 0; i--, desc++) { + for (i--, desc = gq->tx_ring; i >= 0; i--, desc++) { dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ, DMA_FROM_DEVICE); @@ -408,9 +419,23 @@ err: return -ENOMEM; } -static int rswitch_gwca_queue_ts_fill(struct net_device *ndev, - struct rswitch_gwca_queue *gq, - int start_index, int num) +static void rswitch_gwca_ts_queue_fill(struct rswitch_private *priv, + int start_index, int num) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + struct rswitch_ts_desc *desc; + int i, index; + + for (i = 0; i < num; i++) { + index = (i + start_index) % gq->ring_size; + desc = &gq->ts_ring[index]; + desc->desc.die_dt = DT_FEMPTY_ND | DIE; + } +} + +static int rswitch_gwca_queue_ext_ts_fill(struct net_device *ndev, + struct rswitch_gwca_queue *gq, + int start_index, int num) { struct rswitch_device *rdev = netdev_priv(ndev); struct rswitch_ext_ts_desc *desc; @@ -419,7 +444,7 @@ static int rswitch_gwca_queue_ts_fill(struct net_device *ndev, for (i = 0; i < num; i++) { index = (i + start_index) % gq->ring_size; - desc = &gq->ts_ring[index]; + desc = &gq->rx_ring[index]; if (!gq->dir_tx) { dma_addr = dma_map_single(ndev->dev.parent, gq->skbs[index]->data, PKT_BUF_SZ, @@ -443,7 +468,7 @@ err: if (!gq->dir_tx) { for (i--; i >= 0; i--) { index = (i + start_index) % gq->ring_size; - desc = &gq->ts_ring[index]; + desc = &gq->rx_ring[index]; dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ, DMA_FROM_DEVICE); @@ -453,25 +478,25 @@ err: return -ENOMEM; } -static int rswitch_gwca_queue_ts_format(struct net_device *ndev, - struct rswitch_private *priv, - struct rswitch_gwca_queue *gq) +static int rswitch_gwca_queue_ext_ts_format(struct net_device *ndev, + struct rswitch_private *priv, + struct rswitch_gwca_queue *gq) { - int tx_ts_ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size; + int ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size; struct rswitch_ext_ts_desc *desc; struct rswitch_desc *linkfix; int err; - memset(gq->ts_ring, 0, tx_ts_ring_size); - err = rswitch_gwca_queue_ts_fill(ndev, gq, 0, gq->ring_size); + memset(gq->rx_ring, 0, ring_size); + err = 
rswitch_gwca_queue_ext_ts_fill(ndev, gq, 0, gq->ring_size); if (err < 0) return err; - desc = &gq->ts_ring[gq->ring_size]; /* Last */ + desc = &gq->rx_ring[gq->ring_size]; /* Last */ rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); desc->desc.die_dt = DT_LINKFIX; - linkfix = &priv->linkfix_table[gq->index]; + linkfix = &priv->gwca.linkfix_table[gq->index]; linkfix->die_dt = DT_LINKFIX; rswitch_desc_set_dptr(linkfix, gq->ring_dma); @@ -481,28 +506,31 @@ static int rswitch_gwca_queue_ts_format(struct net_device *ndev, return 0; } -static int rswitch_gwca_desc_alloc(struct rswitch_private *priv) +static int rswitch_gwca_linkfix_alloc(struct rswitch_private *priv) { int i, num_queues = priv->gwca.num_queues; + struct rswitch_gwca *gwca = &priv->gwca; struct device *dev = &priv->pdev->dev; - priv->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues; - priv->linkfix_table = dma_alloc_coherent(dev, priv->linkfix_table_size, - &priv->linkfix_table_dma, GFP_KERNEL); - if (!priv->linkfix_table) + gwca->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues; + gwca->linkfix_table = dma_alloc_coherent(dev, gwca->linkfix_table_size, + &gwca->linkfix_table_dma, GFP_KERNEL); + if (!gwca->linkfix_table) return -ENOMEM; for (i = 0; i < num_queues; i++) - priv->linkfix_table[i].die_dt = DT_EOS; + gwca->linkfix_table[i].die_dt = DT_EOS; return 0; } -static void rswitch_gwca_desc_free(struct rswitch_private *priv) +static void rswitch_gwca_linkfix_free(struct rswitch_private *priv) { - if (priv->linkfix_table) - dma_free_coherent(&priv->pdev->dev, priv->linkfix_table_size, - priv->linkfix_table, priv->linkfix_table_dma); - priv->linkfix_table = NULL; + struct rswitch_gwca *gwca = &priv->gwca; + + if (gwca->linkfix_table) + dma_free_coherent(&priv->pdev->dev, gwca->linkfix_table_size, + gwca->linkfix_table, gwca->linkfix_table_dma); + gwca->linkfix_table = NULL; } static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv) @@ -537,8 +565,7 @@ static int rswitch_txdmac_alloc(struct net_device *ndev) if (!rdev->tx_queue) return -EBUSY; - err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, false, - TX_RING_SIZE); + err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, TX_RING_SIZE); if (err < 0) { rswitch_gwca_put(priv, rdev->tx_queue); return err; @@ -572,8 +599,7 @@ static int rswitch_rxdmac_alloc(struct net_device *ndev) if (!rdev->rx_queue) return -EBUSY; - err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, true, - RX_RING_SIZE); + err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, RX_RING_SIZE); if (err < 0) { rswitch_gwca_put(priv, rdev->rx_queue); return err; @@ -595,7 +621,7 @@ static int rswitch_rxdmac_init(struct rswitch_private *priv, int index) struct rswitch_device *rdev = priv->rdev[index]; struct net_device *ndev = rdev->ndev; - return rswitch_gwca_queue_ts_format(ndev, priv, rdev->rx_queue); + return rswitch_gwca_queue_ext_ts_format(ndev, priv, rdev->rx_queue); } static int rswitch_gwca_hw_init(struct rswitch_private *priv) @@ -618,8 +644,11 @@ static int rswitch_gwca_hw_init(struct rswitch_private *priv) iowrite32(GWVCC_VEM_SC_TAG, priv->addr + GWVCC); iowrite32(0, priv->addr + GWTTFC); - iowrite32(lower_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC1); - iowrite32(upper_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC0); + iowrite32(lower_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC1); + iowrite32(upper_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC0); + 
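A recurring detail in the queue-format helpers above: every ring is allocated with ring_size + 1 descriptors, and the extra slot is written as a DT_LINKFIX descriptor whose pointer targets the ring's own base address, so the DMA engine wraps around without software intervention. A small sketch of that circular layout, with simplified types and hypothetical DT_* values (the real encodings live in rswitch.h):

#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor-type encodings, for illustration only */
enum { DT_FEMPTY = 0x67, DT_LINKFIX = 0x00 };

struct desc {
        uint64_t dptr;          /* bus address this descriptor points at */
        uint8_t die_dt;         /* descriptor type */
};

/* n data slots plus one trailing link descriptor that closes the circle */
static void ring_format(struct desc *ring, int n, uint64_t ring_dma)
{
        int i;

        memset(ring, 0, sizeof(*ring) * (n + 1));
        for (i = 0; i < n; i++)
                ring[i].die_dt = DT_FEMPTY;     /* empty, owned by hardware */

        ring[n].dptr = ring_dma;                /* last entry -> ring head */
        ring[n].die_dt = DT_LINKFIX;
}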
iowrite32(lower_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC10); + iowrite32(upper_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC00); + iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDCC0); rswitch_gwca_set_rate_limit(priv, priv->gwca.speed); for (i = 0; i < RSWITCH_NUM_PORTS; i++) { @@ -676,7 +705,7 @@ static bool rswitch_rx(struct net_device *ndev, int *quota) boguscnt = min_t(int, gq->ring_size, *quota); limit = boguscnt; - desc = &gq->ts_ring[gq->cur]; + desc = &gq->rx_ring[gq->cur]; while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) { if (--boguscnt < 0) break; @@ -704,14 +733,14 @@ static bool rswitch_rx(struct net_device *ndev, int *quota) rdev->ndev->stats.rx_bytes += pkt_len; gq->cur = rswitch_next_queue_index(gq, true, 1); - desc = &gq->ts_ring[gq->cur]; + desc = &gq->rx_ring[gq->cur]; } num = rswitch_get_num_cur_queues(gq); ret = rswitch_gwca_queue_alloc_skb(gq, gq->dirty, num); if (ret < 0) goto err; - ret = rswitch_gwca_queue_ts_fill(ndev, gq, gq->dirty, num); + ret = rswitch_gwca_queue_ext_ts_fill(ndev, gq, gq->dirty, num); if (ret < 0) goto err; gq->dirty = rswitch_next_queue_index(gq, false, num); @@ -738,7 +767,7 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only) for (; rswitch_get_num_cur_queues(gq) > 0; gq->dirty = rswitch_next_queue_index(gq, false, 1)) { - desc = &gq->ring[gq->dirty]; + desc = &gq->tx_ring[gq->dirty]; if (free_txed_only && (desc->desc.die_dt & DT_MASK) != DT_FEMPTY) break; @@ -746,15 +775,6 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only) size = le16_to_cpu(desc->desc.info_ds) & TX_DS; skb = gq->skbs[gq->dirty]; if (skb) { - if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { - struct skb_shared_hwtstamps shhwtstamps; - struct timespec64 ts; - - rswitch_get_timestamp(rdev->priv, &ts); - memset(&shhwtstamps, 0, sizeof(shhwtstamps)); - shhwtstamps.hwtstamp = timespec64_to_ktime(ts); - skb_tstamp_tx(skb, &shhwtstamps); - } dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, size, DMA_TO_DEVICE); @@ -880,6 +900,73 @@ static int rswitch_gwca_request_irqs(struct rswitch_private *priv) return 0; } +static void rswitch_ts(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + struct rswitch_gwca_ts_info *ts_info, *ts_info2; + struct skb_shared_hwtstamps shhwtstamps; + struct rswitch_ts_desc *desc; + struct timespec64 ts; + u32 tag, port; + int num; + + desc = &gq->ts_ring[gq->cur]; + while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY_ND) { + dma_rmb(); + + port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl)); + tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl)); + + list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) { + if (!(ts_info->port == port && ts_info->tag == tag)) + continue; + + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); + ts.tv_sec = __le32_to_cpu(desc->ts_sec); + ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff)); + shhwtstamps.hwtstamp = timespec64_to_ktime(ts); + skb_tstamp_tx(ts_info->skb, &shhwtstamps); + dev_consume_skb_irq(ts_info->skb); + list_del(&ts_info->list); + kfree(ts_info); + break; + } + + gq->cur = rswitch_next_queue_index(gq, true, 1); + desc = &gq->ts_ring[gq->cur]; + } + + num = rswitch_get_num_cur_queues(gq); + rswitch_gwca_ts_queue_fill(priv, gq->dirty, num); + gq->dirty = rswitch_next_queue_index(gq, false, num); +} + +static irqreturn_t rswitch_gwca_ts_irq(int irq, void *dev_id) +{ + struct rswitch_private *priv = dev_id; + + if 
(ioread32(priv->addr + GWTSDIS) & GWCA_TS_IRQ_BIT) { + iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDIS); + rswitch_ts(priv); + + return IRQ_HANDLED; + } + + return IRQ_NONE; +} + +static int rswitch_gwca_ts_request_irqs(struct rswitch_private *priv) +{ + int irq; + + irq = platform_get_irq_byname(priv->pdev, GWCA_TS_IRQ_RESOURCE_NAME); + if (irq < 0) + return irq; + + return devm_request_irq(&priv->pdev->dev, irq, rswitch_gwca_ts_irq, + 0, GWCA_TS_IRQ_NAME, priv); +} + /* Ethernet TSN Agent block (ETHA) and Ethernet MAC IP block (RMAC) */ static int rswitch_etha_change_mode(struct rswitch_etha *etha, enum rswitch_etha_mode mode) @@ -1024,34 +1111,18 @@ static int rswitch_etha_set_access(struct rswitch_etha *etha, bool read, return ret; } -static int rswitch_etha_mii_read(struct mii_bus *bus, int addr, int regnum) +static int rswitch_etha_mii_read_c45(struct mii_bus *bus, int addr, int devad, + int regad) { struct rswitch_etha *etha = bus->priv; - int mode, devad, regad; - - mode = regnum & MII_ADDR_C45; - devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f; - regad = regnum & MII_REGADDR_C45_MASK; - - /* Not support Clause 22 access method */ - if (!mode) - return -EOPNOTSUPP; return rswitch_etha_set_access(etha, true, addr, devad, regad, 0); } -static int rswitch_etha_mii_write(struct mii_bus *bus, int addr, int regnum, u16 val) +static int rswitch_etha_mii_write_c45(struct mii_bus *bus, int addr, int devad, + int regad, u16 val) { struct rswitch_etha *etha = bus->priv; - int mode, devad, regad; - - mode = regnum & MII_ADDR_C45; - devad = (regnum >> MII_DEVADDR_C45_SHIFT) & 0x1f; - regad = regnum & MII_REGADDR_C45_MASK; - - /* Not support Clause 22 access method */ - if (!mode) - return -EOPNOTSUPP; return rswitch_etha_set_access(etha, false, addr, devad, regad, val); } @@ -1087,33 +1158,25 @@ out: return port; } -/* Call of_node_put(mdio) after done */ -static struct device_node *rswitch_get_mdio_node(struct rswitch_device *rdev) -{ - struct device_node *port, *mdio; - - port = rswitch_get_port_node(rdev); - if (!port) - return NULL; - - mdio = of_get_child_by_name(port, "mdio"); - of_node_put(port); - - return mdio; -} - static int rswitch_etha_get_params(struct rswitch_device *rdev) { - struct device_node *port; + u32 max_speed; int err; - port = rswitch_get_port_node(rdev); - if (!port) + if (!rdev->np_port) return 0; /* ignored */ - err = of_get_phy_mode(port, &rdev->etha->phy_interface); - of_node_put(port); + err = of_get_phy_mode(rdev->np_port, &rdev->etha->phy_interface); + if (err) + return err; + err = of_property_read_u32(rdev->np_port, "max-speed", &max_speed); + if (!err) { + rdev->etha->speed = max_speed; + return 0; + } + + /* if no "max-speed" property, let's use default speed */ switch (rdev->etha->phy_interface) { case PHY_INTERFACE_MODE_MII: rdev->etha->speed = SPEED_100; @@ -1125,11 +1188,10 @@ static int rswitch_etha_get_params(struct rswitch_device *rdev) rdev->etha->speed = SPEED_2500; break; default: - err = -EINVAL; - break; + return -EINVAL; } - return err; + return 0; } static int rswitch_mii_register(struct rswitch_device *rdev) @@ -1145,11 +1207,11 @@ static int rswitch_mii_register(struct rswitch_device *rdev) mii_bus->name = "rswitch_mii"; sprintf(mii_bus->id, "etha%d", rdev->etha->index); mii_bus->priv = rdev->etha; - mii_bus->read = rswitch_etha_mii_read; - mii_bus->write = rswitch_etha_mii_write; + mii_bus->read_c45 = rswitch_etha_mii_read_c45; + mii_bus->write_c45 = rswitch_etha_mii_write_c45; mii_bus->parent = &rdev->priv->pdev->dev; - mdio_np = 
rswitch_get_mdio_node(rdev); + mdio_np = of_get_child_by_name(rdev->np_port, "mdio"); err = of_mdiobus_register(mii_bus, mdio_np); if (err < 0) { mdiobus_free(mii_bus); @@ -1173,114 +1235,107 @@ static void rswitch_mii_unregister(struct rswitch_device *rdev) } } -static void rswitch_mac_config(struct phylink_config *config, - unsigned int mode, - const struct phylink_link_state *state) +static void rswitch_adjust_link(struct net_device *ndev) { -} + struct rswitch_device *rdev = netdev_priv(ndev); + struct phy_device *phydev = ndev->phydev; -static void rswitch_mac_link_down(struct phylink_config *config, - unsigned int mode, - phy_interface_t interface) -{ + /* Current hardware has a restriction not to change speed at runtime */ + if (phydev->link != rdev->etha->link) { + phy_print_status(phydev); + if (phydev->link) + phy_power_on(rdev->serdes); + else + phy_power_off(rdev->serdes); + + rdev->etha->link = phydev->link; + } } -static void rswitch_mac_link_up(struct phylink_config *config, - struct phy_device *phydev, unsigned int mode, - phy_interface_t interface, int speed, - int duplex, bool tx_pause, bool rx_pause) +static void rswitch_phy_remove_link_mode(struct rswitch_device *rdev, + struct phy_device *phydev) { - /* Current hardware cannot change speed at runtime */ -} + /* Current hardware has a restriction not to change speed at runtime */ + switch (rdev->etha->speed) { + case SPEED_2500: + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT); + break; + case SPEED_1000: + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_2500baseX_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Full_BIT); + break; + case SPEED_100: + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_2500baseX_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Full_BIT); + break; + default: + break; + } -static const struct phylink_mac_ops rswitch_phylink_ops = { - .mac_config = rswitch_mac_config, - .mac_link_down = rswitch_mac_link_down, - .mac_link_up = rswitch_mac_link_up, -}; + phy_set_max_speed(phydev, rdev->etha->speed); +} -static int rswitch_phylink_init(struct rswitch_device *rdev) +static int rswitch_phy_device_init(struct rswitch_device *rdev) { - struct device_node *port; - struct phylink *phylink; - int err; + struct phy_device *phydev; + struct device_node *phy; + int err = -ENOENT; - port = rswitch_get_port_node(rdev); - if (!port) + if (!rdev->np_port) return -ENODEV; - rdev->phylink_config.dev = &rdev->ndev->dev; - rdev->phylink_config.type = PHYLINK_NETDEV; - __set_bit(PHY_INTERFACE_MODE_SGMII, rdev->phylink_config.supported_interfaces); - __set_bit(PHY_INTERFACE_MODE_USXGMII, rdev->phylink_config.supported_interfaces); - rdev->phylink_config.mac_capabilities = MAC_100FD | MAC_1000FD | MAC_2500FD; + phy = of_parse_phandle(rdev->np_port, "phy-handle", 0); + if (!phy) + return -ENODEV; - phylink = phylink_create(&rdev->phylink_config, &port->fwnode, - rdev->etha->phy_interface, &rswitch_phylink_ops); - if (IS_ERR(phylink)) { - err = PTR_ERR(phylink); + /* Set phydev->host_interfaces before calling of_phy_connect() to + * configure the PHY with the information of host_interfaces. 
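of_phy_find_device() is used to look the PHY up early here precisely so that this bit is already set by the time of_phy_connect() attaches and configures the device.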
+ */ + phydev = of_phy_find_device(phy); + if (!phydev) goto out; - } + __set_bit(rdev->etha->phy_interface, phydev->host_interfaces); - rdev->phylink = phylink; - err = phylink_of_phy_connect(rdev->phylink, port, rdev->etha->phy_interface); + phydev = of_phy_connect(rdev->ndev, phy, rswitch_adjust_link, 0, + rdev->etha->phy_interface); + if (!phydev) + goto out; + + phy_set_max_speed(phydev, SPEED_2500); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Full_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); + rswitch_phy_remove_link_mode(rdev, phydev); + + phy_attached_info(phydev); + + err = 0; out: - of_node_put(port); + of_node_put(phy); return err; } -static void rswitch_phylink_deinit(struct rswitch_device *rdev) +static void rswitch_phy_device_deinit(struct rswitch_device *rdev) { - rtnl_lock(); - phylink_disconnect_phy(rdev->phylink); - rtnl_unlock(); - phylink_destroy(rdev->phylink); + if (rdev->ndev->phydev) { + phy_disconnect(rdev->ndev->phydev); + rdev->ndev->phydev = NULL; + } } static int rswitch_serdes_set_params(struct rswitch_device *rdev) { - struct device_node *port = rswitch_get_port_node(rdev); - struct phy *serdes; int err; - serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL); - of_node_put(port); - if (IS_ERR(serdes)) - return PTR_ERR(serdes); - - err = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET, + err = phy_set_mode_ext(rdev->serdes, PHY_MODE_ETHERNET, rdev->etha->phy_interface); if (err < 0) return err; - return phy_set_speed(serdes, rdev->etha->speed); -} - -static int rswitch_serdes_init(struct rswitch_device *rdev) -{ - struct device_node *port = rswitch_get_port_node(rdev); - struct phy *serdes; - - serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL); - of_node_put(port); - if (IS_ERR(serdes)) - return PTR_ERR(serdes); - - return phy_init(serdes); -} - -static int rswitch_serdes_deinit(struct rswitch_device *rdev) -{ - struct device_node *port = rswitch_get_port_node(rdev); - struct phy *serdes; - - serdes = devm_of_phy_get(&rdev->priv->pdev->dev, port, NULL); - of_node_put(port); - if (IS_ERR(serdes)) - return PTR_ERR(serdes); - - return phy_exit(serdes); + return phy_set_speed(rdev->serdes, rdev->etha->speed); } static int rswitch_ether_port_init_one(struct rswitch_device *rdev) @@ -1298,9 +1353,15 @@ static int rswitch_ether_port_init_one(struct rswitch_device *rdev) if (err < 0) return err; - err = rswitch_phylink_init(rdev); + err = rswitch_phy_device_init(rdev); if (err < 0) - goto err_phylink_init; + goto err_phy_device_init; + + rdev->serdes = devm_of_phy_get(&rdev->priv->pdev->dev, rdev->np_port, NULL); + if (IS_ERR(rdev->serdes)) { + err = PTR_ERR(rdev->serdes); + goto err_serdes_phy_get; + } err = rswitch_serdes_set_params(rdev); if (err < 0) @@ -1309,9 +1370,10 @@ static int rswitch_ether_port_init_one(struct rswitch_device *rdev) return 0; err_serdes_set_params: - rswitch_phylink_deinit(rdev); +err_serdes_phy_get: + rswitch_phy_device_deinit(rdev); -err_phylink_init: +err_phy_device_init: rswitch_mii_unregister(rdev); return err; @@ -1319,7 +1381,7 @@ err_phylink_init: static void rswitch_ether_port_deinit_one(struct rswitch_device *rdev) { - rswitch_phylink_deinit(rdev); + rswitch_phy_device_deinit(rdev); rswitch_mii_unregister(rdev); } @@ -1334,7 +1396,7 @@ static int rswitch_ether_port_init_all(struct rswitch_private *priv) } 
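rswitch_ether_port_init_all() below layers two loops (per-port init, then serdes bring-up for enabled ports) and unwinds them in reverse on failure, first via rswitch_for_each_enabled_port_continue_reverse() and then by re-running the deinit loop over all ports. A compact model of that two-stage unwind, with invented init_one()/start_one() helpers standing in for the driver's routines:

static int init_one(int port)    { (void)port; return 0; }
static int start_one(int port)   { (void)port; return 0; }
static void stop_one(int port)   { (void)port; }
static void deinit_one(int port) { (void)port; }

static int init_all(int nports)
{
        int i, err;

        for (i = 0; i < nports; i++) {
                err = init_one(i);
                if (err)
                        goto err_init_one;
        }

        for (i = 0; i < nports; i++) {
                err = start_one(i);
                if (err)
                        goto err_start;
        }
        return 0;

err_start:
        while (--i >= 0)        /* reverse over ports already started */
                stop_one(i);
        i = nports;             /* then unwind init for every port */
err_init_one:
        while (--i >= 0)
                deinit_one(i);
        return err;
}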
rswitch_for_each_enabled_port(priv, i) { - err = rswitch_serdes_init(priv->rdev[i]); + err = phy_init(priv->rdev[i]->serdes); if (err) goto err_serdes; } @@ -1343,7 +1405,7 @@ static int rswitch_ether_port_init_all(struct rswitch_private *priv) err_serdes: rswitch_for_each_enabled_port_continue_reverse(priv, i) - rswitch_serdes_deinit(priv->rdev[i]); + phy_exit(priv->rdev[i]->serdes); i = RSWITCH_NUM_PORTS; err_init_one: @@ -1358,7 +1420,7 @@ static void rswitch_ether_port_deinit_all(struct rswitch_private *priv) int i; for (i = 0; i < RSWITCH_NUM_PORTS; i++) { - rswitch_serdes_deinit(priv->rdev[i]); + phy_exit(priv->rdev[i]->serdes); rswitch_ether_port_deinit_one(priv->rdev[i]); } } @@ -1367,7 +1429,7 @@ static int rswitch_open(struct net_device *ndev) { struct rswitch_device *rdev = netdev_priv(ndev); - phylink_start(rdev->phylink); + phy_start(ndev->phydev); napi_enable(&rdev->napi); netif_start_queue(ndev); @@ -1375,19 +1437,32 @@ static int rswitch_open(struct net_device *ndev) rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true); rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true); + iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE); + return 0; }; static int rswitch_stop(struct net_device *ndev) { struct rswitch_device *rdev = netdev_priv(ndev); + struct rswitch_gwca_ts_info *ts_info, *ts_info2; netif_tx_stop_all_queues(ndev); + iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID); + + list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) { + if (ts_info->port != rdev->port) + continue; + dev_kfree_skb_irq(ts_info->skb); + list_del(&ts_info->list); + kfree(ts_info); + } + rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false); rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false); - phylink_stop(rdev->phylink); + phy_stop(ndev->phydev); napi_disable(&rdev->napi); return 0; @@ -1416,17 +1491,31 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd } gq->skbs[gq->cur] = skb; - desc = &gq->ring[gq->cur]; + desc = &gq->tx_ring[gq->cur]; rswitch_desc_set_dptr(&desc->desc, dma_addr); desc->desc.info_ds = cpu_to_le16(skb->len); desc->info1 = cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) | INFO1_FMT); if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { + struct rswitch_gwca_ts_info *ts_info; + + ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC); + if (!ts_info) { + dma_unmap_single(ndev->dev.parent, dma_addr, skb->len, DMA_TO_DEVICE); + return -ENOMEM; + } + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; rdev->ts_tag++; desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC); + + ts_info->skb = skb_get(skb); + ts_info->port = rdev->port; + ts_info->tag = rdev->ts_tag; + list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list); + + skb_tx_timestamp(skb); } - skb_tx_timestamp(skb); dma_wmb(); @@ -1515,8 +1604,6 @@ static int rswitch_hwstamp_set(struct net_device *ndev, struct ifreq *req) static int rswitch_eth_ioctl(struct net_device *ndev, struct ifreq *req, int cmd) { - struct rswitch_device *rdev = netdev_priv(ndev); - if (!netif_running(ndev)) return -EINVAL; @@ -1526,7 +1613,7 @@ static int rswitch_eth_ioctl(struct net_device *ndev, struct ifreq *req, int cmd case SIOCSHWTSTAMP: return rswitch_hwstamp_set(ndev, req); default: - return phylink_mii_ioctl(rdev->phylink, req, cmd); + return phy_mii_ioctl(ndev->phydev, req, cmd); } } @@ -1581,7 +1668,6 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index) { struct platform_device *pdev = priv->pdev; 
struct rswitch_device *rdev; - struct device_node *port; struct net_device *ndev; int err; @@ -1610,10 +1696,10 @@ static int rswitch_device_alloc(struct rswitch_private *priv, int index) netif_napi_add(ndev, &rdev->napi, rswitch_poll); - port = rswitch_get_port_node(rdev); - rdev->disabled = !port; - err = of_get_ethdev_address(port, ndev); - of_node_put(port); + rdev->np_port = rswitch_get_port_node(rdev); + rdev->disabled = !rdev->np_port; + err = of_get_ethdev_address(rdev->np_port, ndev); + of_node_put(rdev->np_port); if (err) { if (is_valid_ether_addr(rdev->etha->mac_addr)) eth_hw_addr_set(ndev, rdev->etha->mac_addr); @@ -1679,10 +1765,17 @@ static int rswitch_init(struct rswitch_private *priv) if (err < 0) return err; - err = rswitch_gwca_desc_alloc(priv); + err = rswitch_gwca_linkfix_alloc(priv); if (err < 0) return -ENOMEM; + err = rswitch_gwca_ts_queue_alloc(priv); + if (err < 0) + goto err_ts_queue_alloc; + + rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); + INIT_LIST_HEAD(&priv->gwca.ts_info_list); + for (i = 0; i < RSWITCH_NUM_PORTS; i++) { err = rswitch_device_alloc(priv, i); if (err < 0) { @@ -1703,6 +1796,10 @@ static int rswitch_init(struct rswitch_private *priv) if (err < 0) goto err_gwca_request_irq; + err = rswitch_gwca_ts_request_irqs(priv); + if (err < 0) + goto err_gwca_ts_request_irq; + err = rswitch_gwca_hw_init(priv); if (err < 0) goto err_gwca_hw_init; @@ -1733,6 +1830,7 @@ err_ether_port_init_all: rswitch_gwca_hw_deinit(priv); err_gwca_hw_init: +err_gwca_ts_request_irq: err_gwca_request_irq: rcar_gen4_ptp_unregister(priv->ptp_priv); @@ -1741,7 +1839,10 @@ err_ptp_register: rswitch_device_free(priv, i); err_device_alloc: - rswitch_gwca_desc_free(priv); + rswitch_gwca_ts_queue_free(priv); + +err_ts_queue_alloc: + rswitch_gwca_linkfix_free(priv); return err; } @@ -1814,13 +1915,14 @@ static void rswitch_deinit(struct rswitch_private *priv) for (i = 0; i < RSWITCH_NUM_PORTS; i++) { struct rswitch_device *rdev = priv->rdev[i]; - rswitch_serdes_deinit(rdev); + phy_exit(priv->rdev[i]->serdes); rswitch_ether_port_deinit_one(rdev); unregister_netdev(rdev->ndev); rswitch_device_free(priv, i); } - rswitch_gwca_desc_free(priv); + rswitch_gwca_ts_queue_free(priv); + rswitch_gwca_linkfix_free(priv); rswitch_clock_disable(priv); } diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h index 49efb0f31c77..27d3d38c055f 100644 --- a/drivers/net/ethernet/renesas/rswitch.h +++ b/drivers/net/ethernet/renesas/rswitch.h @@ -27,6 +27,7 @@ #define TX_RING_SIZE 1024 #define RX_RING_SIZE 1024 +#define TS_RING_SIZE (TX_RING_SIZE * RSWITCH_NUM_PORTS) #define PKT_BUF_SZ 1584 #define RSWITCH_ALIGN 128 @@ -49,6 +50,10 @@ #define AGENT_INDEX_GWCA 3 #define GWRO RSWITCH_GWCA0_OFFSET +#define GWCA_TS_IRQ_RESOURCE_NAME "gwca0_rxts0" +#define GWCA_TS_IRQ_NAME "rswitch: gwca0_rxts0" +#define GWCA_TS_IRQ_BIT BIT(0) + #define FWRO 0 #define TPRO RSWITCH_TOP_OFFSET #define CARO RSWITCH_COMA_OFFSET @@ -831,7 +836,7 @@ enum DIE_DT { DT_FSINGLE = 0x80, DT_FSTART = 0x90, DT_FMID = 0xa0, - DT_FEND = 0xb8, + DT_FEND = 0xb0, /* Chain control */ DT_LEMPTY = 0xc0, @@ -843,7 +848,7 @@ enum DIE_DT { DT_FEMPTY = 0x40, DT_FEMPTY_IS = 0x10, DT_FEMPTY_IC = 0x20, - DT_FEMPTY_ND = 0x38, + DT_FEMPTY_ND = 0x30, DT_FEMPTY_START = 0x50, DT_FEMPTY_MID = 0x60, DT_FEMPTY_END = 0x70, @@ -865,6 +870,12 @@ enum DIE_DT { /* For reception */ #define INFO1_SPN(port) ((u64)(port) << 36ULL) +/* For timestamp descriptor in dptrl (Byte 4 to 7) */ +#define TS_DESC_TSUN(dptrl) ((dptrl) & 
GENMASK(7, 0)) +#define TS_DESC_SPN(dptrl) (((dptrl) & GENMASK(10, 8)) >> 8) +#define TS_DESC_DPN(dptrl) (((dptrl) & GENMASK(17, 16)) >> 16) +#define TS_DESC_TN(dptrl) ((dptrl) & BIT(24)) + struct rswitch_desc { __le16 info_ds; /* Descriptor size */ u8 die_dt; /* Descriptor interrupt enable and type */ @@ -911,27 +922,43 @@ struct rswitch_etha { * name, this driver calls "queue". */ struct rswitch_gwca_queue { - int index; - bool dir_tx; - bool gptp; union { - struct rswitch_ext_desc *ring; - struct rswitch_ext_ts_desc *ts_ring; + struct rswitch_ext_desc *tx_ring; + struct rswitch_ext_ts_desc *rx_ring; + struct rswitch_ts_desc *ts_ring; }; + + /* Common */ dma_addr_t ring_dma; int ring_size; int cur; int dirty; - struct sk_buff **skbs; + /* For [rt]_ring */ + int index; + bool dir_tx; + struct sk_buff **skbs; struct net_device *ndev; /* queue to ndev for irq */ }; +struct rswitch_gwca_ts_info { + struct sk_buff *skb; + struct list_head list; + + int port; + u8 tag; +}; + #define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32)) struct rswitch_gwca { int index; + struct rswitch_desc *linkfix_table; + dma_addr_t linkfix_table_dma; + u32 linkfix_table_size; struct rswitch_gwca_queue *queues; int num_queues; + struct rswitch_gwca_queue ts_queue; + struct list_head ts_info_list; DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES); u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS]; u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS]; @@ -943,8 +970,6 @@ struct rswitch_device { struct rswitch_private *priv; struct net_device *ndev; struct napi_struct napi; - struct phylink *phylink; - struct phylink_config phylink_config; void __iomem *addr; struct rswitch_gwca_queue *tx_queue; struct rswitch_gwca_queue *rx_queue; @@ -953,6 +978,8 @@ struct rswitch_device { int port; struct rswitch_etha *etha; + struct device_node *np_port; + struct phy *serdes; }; struct rswitch_mfwd_mac_table_entry { @@ -969,9 +996,6 @@ struct rswitch_private { struct platform_device *pdev; void __iomem *addr; struct rcar_gen4_ptp_private *ptp_priv; - struct rswitch_desc *linkfix_table; - dma_addr_t linkfix_table_dma; - u32 linkfix_table_size; struct rswitch_device *rdev[RSWITCH_NUM_PORTS]; diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c index 71a499113308..ed17163d7811 100644 --- a/drivers/net/ethernet/renesas/sh_eth.c +++ b/drivers/net/ethernet/renesas/sh_eth.c @@ -3044,23 +3044,46 @@ static int sh_mdio_release(struct sh_eth_private *mdp) return 0; } -static int sh_mdiobb_read(struct mii_bus *bus, int phy, int reg) +static int sh_mdiobb_read_c22(struct mii_bus *bus, int phy, int reg) { int res; pm_runtime_get_sync(bus->parent); - res = mdiobb_read(bus, phy, reg); + res = mdiobb_read_c22(bus, phy, reg); pm_runtime_put(bus->parent); return res; } -static int sh_mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val) +static int sh_mdiobb_write_c22(struct mii_bus *bus, int phy, int reg, u16 val) { int res; pm_runtime_get_sync(bus->parent); - res = mdiobb_write(bus, phy, reg, val); + res = mdiobb_write_c22(bus, phy, reg, val); + pm_runtime_put(bus->parent); + + return res; +} + +static int sh_mdiobb_read_c45(struct mii_bus *bus, int phy, int devad, int reg) +{ + int res; + + pm_runtime_get_sync(bus->parent); + res = mdiobb_read_c45(bus, phy, devad, reg); + pm_runtime_put(bus->parent); + + return res; +} + +static int sh_mdiobb_write_c45(struct mii_bus *bus, int phy, int devad, + int reg, u16 val) +{ + int res; + + pm_runtime_get_sync(bus->parent); + res = mdiobb_write_c45(bus, phy, devad, reg, 
val); pm_runtime_put(bus->parent); return res; @@ -3091,8 +3114,10 @@ static int sh_mdio_init(struct sh_eth_private *mdp, return -ENOMEM; /* Wrap accessors with Runtime PM-aware ops */ - mdp->mii_bus->read = sh_mdiobb_read; - mdp->mii_bus->write = sh_mdiobb_write; + mdp->mii_bus->read = sh_mdiobb_read_c22; + mdp->mii_bus->write = sh_mdiobb_write_c22; + mdp->mii_bus->read_c45 = sh_mdiobb_read_c45; + mdp->mii_bus->write_c45 = sh_mdiobb_write_c45; /* Hook up MII support for ethtool */ mdp->mii_bus->name = "sh_mii"; diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c index fceb6d637235..0227223c06fa 100644 --- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c +++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c @@ -50,12 +50,12 @@ static void sxgbe_mdio_ctrl_data(struct sxgbe_priv_data *sp, u32 cmd, } static void sxgbe_mdio_c45(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr, - int phyreg, u16 phydata) + int devad, int phyreg, u16 phydata) { u32 reg; /* set mdio address register */ - reg = ((phyreg >> 16) & 0x1f) << 21; + reg = (devad & 0x1f) << 21; reg |= (phyaddr << 16) | (phyreg & 0xffff); writel(reg, sp->ioaddr + sp->hw->mii.addr); @@ -76,8 +76,8 @@ static void sxgbe_mdio_c22(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr, sxgbe_mdio_ctrl_data(sp, cmd, phydata); } -static int sxgbe_mdio_access(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr, - int phyreg, u16 phydata) +static int sxgbe_mdio_access_c22(struct sxgbe_priv_data *sp, u32 cmd, + int phyaddr, int phyreg, u16 phydata) { const struct mii_regs *mii = &sp->hw->mii; int rc; @@ -86,33 +86,46 @@ static int sxgbe_mdio_access(struct sxgbe_priv_data *sp, u32 cmd, int phyaddr, if (rc < 0) return rc; - if (phyreg & MII_ADDR_C45) { - sxgbe_mdio_c45(sp, cmd, phyaddr, phyreg, phydata); - } else { - /* Ports 0-3 only support C22. */ - if (phyaddr >= 4) - return -ENODEV; + /* Ports 0-3 only support C22. 
*/
+ if (phyaddr >= 4)
+ return -ENODEV;
- sxgbe_mdio_c22(sp, cmd, phyaddr, phyreg, phydata);
- }
+ sxgbe_mdio_c22(sp, cmd, phyaddr, phyreg, phydata);
+
+ return sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
+}
+
+static int sxgbe_mdio_access_c45(struct sxgbe_priv_data *sp, u32 cmd,
+ int phyaddr, int devad, int phyreg,
+ u16 phydata)
+{
+ const struct mii_regs *mii = &sp->hw->mii;
+ int rc;
+
+ rc = sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
+ if (rc < 0)
+ return rc;
+
+ sxgbe_mdio_c45(sp, cmd, phyaddr, devad, phyreg, phydata);
return sxgbe_mdio_busy_wait(sp->ioaddr, mii->data);
}
/**
- * sxgbe_mdio_read
+ * sxgbe_mdio_read_c22
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: address of register within the phy
- * Description: this function used for C45 and C22 MDIO Read
+ * Description: this function is used for C22 MDIO Read
*/
-static int sxgbe_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+static int sxgbe_mdio_read_c22(struct mii_bus *bus, int phyaddr, int phyreg)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
int rc;
- rc = sxgbe_mdio_access(priv, SXGBE_SMA_READ_CMD, phyaddr, phyreg, 0);
+ rc = sxgbe_mdio_access_c22(priv, SXGBE_SMA_READ_CMD, phyaddr,
+ phyreg, 0);
if (rc < 0)
return rc;
@@ -120,21 +133,63 @@ static int sxgbe_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
}
/**
- * sxgbe_mdio_write
+ * sxgbe_mdio_read_c45
+ * @bus: points to the mii_bus structure
+ * @phyaddr: address of phy port
+ * @devad: device (MMD) address
+ * @phyreg: address of register within the phy
+ * Description: this function is used for C45 MDIO Read
+ */
+static int sxgbe_mdio_read_c45(struct mii_bus *bus, int phyaddr, int devad,
+ int phyreg)
+{
+ struct net_device *ndev = bus->priv;
+ struct sxgbe_priv_data *priv = netdev_priv(ndev);
+ int rc;
+
+ rc = sxgbe_mdio_access_c45(priv, SXGBE_SMA_READ_CMD, phyaddr,
+ devad, phyreg, 0);
+ if (rc < 0)
+ return rc;
+
+ return readl(priv->ioaddr + priv->hw->mii.data) & 0xffff;
+}
+
+/**
+ * sxgbe_mdio_write_c22
+ * @bus: points to the mii_bus structure
+ * @phyaddr: address of phy port
+ * @phyreg: address of phy registers
+ * @phydata: data to be written into phy register
+ * Description: this function is used for C22 MDIO write
+ */
+static int sxgbe_mdio_write_c22(struct mii_bus *bus, int phyaddr, int phyreg,
+ u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct sxgbe_priv_data *priv = netdev_priv(ndev);
+
+ return sxgbe_mdio_access_c22(priv, SXGBE_SMA_WRITE_CMD, phyaddr, phyreg,
+ phydata);
+}
+
+/**
+ * sxgbe_mdio_write_c45
* @bus: points to the mii_bus structure
* @phyaddr: address of phy port
* @phyreg: address of phy registers
+ * @devad: device (MMD) address
* @phydata: data to be written into phy register
- * Description: this function is used for C45 and C22 MDIO write
+ * Description: this function is used for C45 MDIO write
*/
-static int sxgbe_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
- u16 phydata)
+static int sxgbe_mdio_write_c45(struct mii_bus *bus, int phyaddr, int devad,
+ int phyreg, u16 phydata)
{
struct net_device *ndev = bus->priv;
struct sxgbe_priv_data *priv = netdev_priv(ndev);
- return sxgbe_mdio_access(priv, SXGBE_SMA_WRITE_CMD, phyaddr, phyreg,
- phydata);
+ return sxgbe_mdio_access_c45(priv, SXGBE_SMA_WRITE_CMD, phyaddr,
+ devad, phyreg, phydata);
}
int sxgbe_mdio_register(struct net_device *ndev)
@@ -161,8 +216,10 @@ int sxgbe_mdio_register(struct net_device *ndev)
/* assign mii bus fields */
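/* the core now calls dedicated C22 and C45 accessors instead of
 * multiplexing both through one callback flagged with MII_ADDR_C45 */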
mdio_bus->name = "sxgbe"; - mdio_bus->read = &sxgbe_mdio_read; - mdio_bus->write = &sxgbe_mdio_write; + mdio_bus->read = sxgbe_mdio_read_c22; + mdio_bus->write = sxgbe_mdio_write_c22; + mdio_bus->read_c45 = sxgbe_mdio_read_c45; + mdio_bus->write_c45 = sxgbe_mdio_write_c45; snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%x", mdio_bus->name, priv->plat->bus_id); mdio_bus->priv = ndev; diff --git a/drivers/net/ethernet/sfc/Kconfig b/drivers/net/ethernet/sfc/Kconfig index 0950e6b0508f..4af36ba8906b 100644 --- a/drivers/net/ethernet/sfc/Kconfig +++ b/drivers/net/ethernet/sfc/Kconfig @@ -22,6 +22,7 @@ config SFC depends on PTP_1588_CLOCK_OPTIONAL select MDIO select CRC32 + select NET_DEVLINK help This driver supports 10/40-gigabit Ethernet cards based on the Solarflare SFC9100-family controllers. diff --git a/drivers/net/ethernet/sfc/Makefile b/drivers/net/ethernet/sfc/Makefile index 712a48d00069..55b9c73cd8ef 100644 --- a/drivers/net/ethernet/sfc/Makefile +++ b/drivers/net/ethernet/sfc/Makefile @@ -6,7 +6,8 @@ sfc-y += efx.o efx_common.o efx_channels.o nic.o \ mcdi.o mcdi_port.o mcdi_port_common.o \ mcdi_functions.o mcdi_filters.o mcdi_mon.o \ ef100.o ef100_nic.o ef100_netdev.o \ - ef100_ethtool.o ef100_rx.o ef100_tx.o + ef100_ethtool.o ef100_rx.o ef100_tx.o \ + efx_devlink.o sfc-$(CONFIG_SFC_MTD) += mtd.o sfc-$(CONFIG_SFC_SRIOV) += sriov.o ef10_sriov.o ef100_sriov.o ef100_rep.o \ mae.o tc.o tc_bindings.o tc_counters.o diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c index ddcc325ed570..d916877b5a9a 100644 --- a/drivers/net/ethernet/sfc/ef100_netdev.c +++ b/drivers/net/ethernet/sfc/ef100_netdev.c @@ -24,6 +24,7 @@ #include "rx_common.h" #include "ef100_sriov.h" #include "tc_bindings.h" +#include "efx_devlink.h" static void ef100_update_name(struct efx_nic *efx) { @@ -332,9 +333,11 @@ void ef100_remove_netdev(struct efx_probe_data *probe_data) efx_ef100_pci_sriov_disable(efx, true); #endif + efx_fini_devlink_lock(efx); ef100_unregister_netdev(efx); #ifdef CONFIG_SFC_SRIOV + ef100_pf_unset_devlink_port(efx); efx_fini_tc(efx); #endif @@ -345,6 +348,8 @@ void ef100_remove_netdev(struct efx_probe_data *probe_data) kfree(efx->phy_data); efx->phy_data = NULL; + efx_fini_devlink_and_unlock(efx); + free_netdev(efx->net_dev); efx->net_dev = NULL; efx->state = STATE_PROBED; @@ -354,6 +359,7 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data) { struct efx_nic *efx = &probe_data->efx; struct efx_probe_data **probe_ptr; + struct ef100_nic_data *nic_data; struct net_device *net_dev; int rc; @@ -405,6 +411,20 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data) /* Don't fail init if RSS setup doesn't work. 
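The NIC still passes traffic without it, just with default RX spreading.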
*/
efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
+ nic_data = efx->nic_data;
+ rc = ef100_get_mac_address(efx, net_dev->perm_addr, CLIENT_HANDLE_SELF,
+ efx->type->is_vf);
+ if (rc)
+ return rc;
+ /* Assign MAC address */
+ eth_hw_addr_set(net_dev, net_dev->perm_addr);
+ ether_addr_copy(nic_data->port_id, net_dev->perm_addr);
+
+ /* devlink creation, registration and lock */
+ rc = efx_probe_devlink_and_lock(efx);
+ if (rc)
+ pci_info(efx->pci_dev, "devlink registration failed");
+
rc = ef100_register_netdev(efx);
if (rc)
goto fail;
@@ -413,6 +433,9 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
rc = ef100_probe_netdev_pf(efx);
if (rc)
goto fail;
+#ifdef CONFIG_SFC_SRIOV
+ ef100_pf_set_devlink_port(efx);
+#endif
}
efx->netdev_notifier.notifier_call = ef100_netdev_event;
@@ -423,6 +446,13 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
goto fail;
}
+ efx_probe_devlink_unlock(efx);
+ return rc;
fail:
+#ifdef CONFIG_SFC_SRIOV
+ /* remove the devlink port if one exists */
+ ef100_pf_unset_devlink_port(efx);
+#endif
+ efx_probe_devlink_unlock(efx);
return rc;
}
diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c index ad686c671ab8..4dc643b0d2db 100644 --- a/drivers/net/ethernet/sfc/ef100_nic.c +++ b/drivers/net/ethernet/sfc/ef100_nic.c
@@ -130,23 +130,34 @@ static void ef100_mcdi_reboot_detected(struct efx_nic *efx)
/* MCDI calls */
-static int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address)
+int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address,
+ int client_handle, bool empty_ok)
{
- MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_MAC_ADDRESSES_OUT_LEN);
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CLIENT_MAC_ADDRESSES_OUT_LEN(1));
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_CLIENT_MAC_ADDRESSES_IN_LEN);
size_t outlen;
int rc;
BUILD_BUG_ON(MC_CMD_GET_MAC_ADDRESSES_IN_LEN != 0);
+ MCDI_SET_DWORD(inbuf, GET_CLIENT_MAC_ADDRESSES_IN_CLIENT_HANDLE,
+ client_handle);
- rc = efx_mcdi_rpc(efx, MC_CMD_GET_MAC_ADDRESSES, NULL, 0,
- outbuf, sizeof(outbuf), &outlen);
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_CLIENT_MAC_ADDRESSES, inbuf,
+ sizeof(inbuf), outbuf, sizeof(outbuf), &outlen);
if (rc)
return rc;
- if (outlen < MC_CMD_GET_MAC_ADDRESSES_OUT_LEN)
- return -EIO;
- ether_addr_copy(mac_address,
- MCDI_PTR(outbuf, GET_MAC_ADDRESSES_OUT_MAC_ADDR_BASE));
+ if (outlen >= MC_CMD_GET_CLIENT_MAC_ADDRESSES_OUT_LEN(1)) {
+ ether_addr_copy(mac_address,
+ MCDI_PTR(outbuf, GET_CLIENT_MAC_ADDRESSES_OUT_MAC_ADDRS));
+ } else if (empty_ok) {
+ pci_warn(efx->pci_dev,
+ "No MAC address provisioned for client ID %#x.\n",
+ client_handle);
+ eth_zero_addr(mac_address);
+ } else {
+ return -ENOENT;
+ }
return 0;
}
@@ -388,14 +399,14 @@ static int ef100_filter_table_up(struct efx_nic *efx)
* filter insertion will need to take the lock for read.
*/
up_write(&efx->filter_sem);
-#ifdef CONFIG_SFC_SRIOV
- rc = efx_tc_insert_rep_filters(efx);
+ if (IS_ENABLED(CONFIG_SFC_SRIOV))
+ rc = efx_tc_insert_rep_filters(efx);
+
/* Rep filter failure is nonfatal */
if (rc)
netif_warn(efx, drv, efx->net_dev,
"Failed to insert representor filters, rc %d\n",
rc);
-#endif
return 0;
fail_vlan0:
@@ -408,9 +419,8 @@ fail_unspec:
static void ef100_filter_table_down(struct efx_nic *efx)
{
-#ifdef CONFIG_SFC_SRIOV
- efx_tc_remove_rep_filters(efx);
-#endif
+ if (IS_ENABLED(CONFIG_SFC_SRIOV))
+ efx_tc_remove_rep_filters(efx);
down_write(&efx->filter_sem);
efx_mcdi_filter_del_vlan(efx, 0);
efx_mcdi_filter_del_vlan(efx, EFX_FILTER_VID_UNSPEC);
@@ -726,7 +736,6 @@ static unsigned int efx_ef100_recycle_ring_size(const struct efx_nic *efx)
return 10 * EFX_RECYCLE_RING_SIZE_10G;
}
-#ifdef CONFIG_SFC_SRIOV
static int efx_ef100_get_base_mport(struct efx_nic *efx)
{
struct ef100_nic_data *nic_data = efx->nic_data;
@@ -736,7 +745,7 @@ static int efx_ef100_get_base_mport(struct efx_nic *efx)
/* Construct mport selector for "physical network port" */
efx_mae_mport_wire(efx, &selector);
/* Look up actual mport ID */
- rc = efx_mae_lookup_mport(efx, selector, &id);
+ rc = efx_mae_fw_lookup_mport(efx, selector, &id);
if (rc)
return rc;
/* The ID should always fit in 16 bits, because that's how wide the
@@ -747,9 +756,21 @@ static int efx_ef100_get_base_mport(struct efx_nic *efx)
id);
nic_data->base_mport = id;
nic_data->have_mport = true;
+
+ /* Construct mport selector for "calling PF" */
+ efx_mae_mport_uplink(efx, &selector);
+ /* Look up actual mport ID */
+ rc = efx_mae_fw_lookup_mport(efx, selector, &id);
+ if (rc)
+ return rc;
+ if (id >> 16)
+ netif_warn(efx, probe, efx->net_dev, "Bad own m-port id %#x\n",
+ id);
+ nic_data->own_mport = id;
+ nic_data->have_own_mport = true;
+
return 0;
}
-#endif
static int compare_versions(const char *a, const char *b)
{
@@ -1098,23 +1119,42 @@ fail:
return rc;
}
+/* MCDI commands normally act on the function that issues them. This function
+ * lets a caller issue an MCDI command on behalf of another function, mainly a
+ * PF configuring its VFs.
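+ * The target function is identified by a client handle, which is looked up
+ * from the MC via MC_CMD_GET_CLIENT_HANDLE.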
+ */ +int efx_ef100_lookup_client_id(struct efx_nic *efx, efx_qword_t pciefn, u32 *id) +{ + MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CLIENT_HANDLE_OUT_LEN); + MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_CLIENT_HANDLE_IN_LEN); + u64 pciefn_flat = le64_to_cpu(pciefn.u64[0]); + size_t outlen; + int rc; + + MCDI_SET_DWORD(inbuf, GET_CLIENT_HANDLE_IN_TYPE, + MC_CMD_GET_CLIENT_HANDLE_IN_TYPE_FUNC); + MCDI_SET_QWORD(inbuf, GET_CLIENT_HANDLE_IN_FUNC, + pciefn_flat); + + rc = efx_mcdi_rpc(efx, MC_CMD_GET_CLIENT_HANDLE, inbuf, sizeof(inbuf), + outbuf, sizeof(outbuf), &outlen); + if (rc) + return rc; + if (outlen < sizeof(outbuf)) + return -EIO; + *id = MCDI_DWORD(outbuf, GET_CLIENT_HANDLE_OUT_HANDLE); + return 0; +} + int ef100_probe_netdev_pf(struct efx_nic *efx) { struct ef100_nic_data *nic_data = efx->nic_data; struct net_device *net_dev = efx->net_dev; int rc; - rc = ef100_get_mac_address(efx, net_dev->perm_addr); - if (rc) - goto fail; - /* Assign MAC address */ - eth_hw_addr_set(net_dev, net_dev->perm_addr); - memcpy(nic_data->port_id, net_dev->perm_addr, ETH_ALEN); - - if (!nic_data->grp_mae) + if (!IS_ENABLED(CONFIG_SFC_SRIOV) || !nic_data->grp_mae) return 0; -#ifdef CONFIG_SFC_SRIOV rc = efx_init_struct_tc(efx); if (rc) return rc; @@ -1126,6 +1166,14 @@ int ef100_probe_netdev_pf(struct efx_nic *efx) rc); } + rc = efx_init_mae(efx); + if (rc) + netif_warn(efx, probe, net_dev, + "Failed to init MAE rc %d; representors will not function\n", + rc); + else + efx_ef100_init_reps(efx); + rc = efx_init_tc(efx); if (rc) { /* Either we don't have an MAE at all (i.e. legacy v-switching), @@ -1141,10 +1189,6 @@ int ef100_probe_netdev_pf(struct efx_nic *efx) net_dev->features |= NETIF_F_HW_TC; efx->fixed_features |= NETIF_F_HW_TC; } -#endif - return 0; - -fail: return rc; } @@ -1157,6 +1201,11 @@ void ef100_remove(struct efx_nic *efx) { struct ef100_nic_data *nic_data = efx->nic_data; + if (IS_ENABLED(CONFIG_SFC_SRIOV) && efx->mae) { + efx_ef100_fini_reps(efx); + efx_fini_mae(efx); + } + efx_mcdi_detach(efx); efx_mcdi_fini(efx); if (nic_data) @@ -1249,9 +1298,8 @@ const struct efx_nic_type ef100_pf_nic_type = { .update_stats = ef100_update_stats, .pull_stats = efx_mcdi_mac_pull_stats, .stop_stats = efx_mcdi_mac_stop_stats, -#ifdef CONFIG_SFC_SRIOV - .sriov_configure = efx_ef100_sriov_configure, -#endif + .sriov_configure = IS_ENABLED(CONFIG_SFC_SRIOV) ? + efx_ef100_sriov_configure : NULL, /* Per-type bar/size configuration not used on ef100. Location of * registers is defined by extended capabilities. 
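Several of the ef100_nic.c hunks above replace #ifdef CONFIG_SFC_SRIOV blocks with IS_ENABLED(CONFIG_SFC_SRIOV) checks. A minimal sketch of the idiom, using hypothetical names rather than the driver's own:

	int rc = 0;

	/* IS_ENABLED(CONFIG_FOO) is a compile-time constant 0 or 1, so the
	 * compiler still parses and type-checks the guarded call and then
	 * discards it as dead code when CONFIG_FOO is off; an #ifdef block
	 * would hide the code from the compiler entirely. This relies on
	 * foo_insert_filters() having a declaration (typically a static
	 * inline stub) in the disabled configuration.
	 */
	if (IS_ENABLED(CONFIG_FOO))
		rc = foo_insert_filters(nic);

The ternary used for .sriov_configure above follows the same pattern: the callback is always compiled, and the pointer is simply NULL when SR-IOV support is not built in.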
diff --git a/drivers/net/ethernet/sfc/ef100_nic.h b/drivers/net/ethernet/sfc/ef100_nic.h index 0295933145fa..f1ed481c1260 100644 --- a/drivers/net/ethernet/sfc/ef100_nic.h +++ b/drivers/net/ethernet/sfc/ef100_nic.h @@ -74,6 +74,10 @@ struct ef100_nic_data { u64 stats[EF100_STAT_COUNT]; u32 base_mport; bool have_mport; /* base_mport was populated successfully */ + u32 own_mport; + u32 local_mae_intf; /* interface_idx that corresponds to us, in mport enumerate */ + bool have_own_mport; /* own_mport was populated successfully */ + bool have_local_intf; /* local_mae_intf was populated successfully */ bool grp_mae; /* MAE Privilege */ u16 tso_max_hdr_len; u16 tso_max_payload_num_segs; @@ -88,4 +92,7 @@ int efx_ef100_init_datapath_caps(struct efx_nic *efx); int ef100_phy_probe(struct efx_nic *efx); int ef100_filter_table_probe(struct efx_nic *efx); +int ef100_get_mac_address(struct efx_nic *efx, u8 *mac_address, + int client_handle, bool empty_ok); +int efx_ef100_lookup_client_id(struct efx_nic *efx, efx_qword_t pciefn, u32 *id); #endif /* EFX_EF100_NIC_H */ diff --git a/drivers/net/ethernet/sfc/ef100_rep.c b/drivers/net/ethernet/sfc/ef100_rep.c index 81ab22c74635..0b3083ef0ead 100644 --- a/drivers/net/ethernet/sfc/ef100_rep.c +++ b/drivers/net/ethernet/sfc/ef100_rep.c @@ -9,12 +9,14 @@ * by the Free Software Foundation, incorporated herein by reference. */ +#include <linux/rhashtable.h> #include "ef100_rep.h" #include "ef100_netdev.h" #include "ef100_nic.h" #include "mae.h" #include "rx_common.h" #include "tc_bindings.h" +#include "efx_devlink.h" #define EFX_EF100_REP_DRIVER "efx_ef100_rep" @@ -242,14 +244,11 @@ fail1: static int efx_ef100_configure_rep(struct efx_rep *efv) { struct efx_nic *efx = efv->parent; - u32 selector; int rc; efv->rx_pring_size = EFX_REP_DEFAULT_PSEUDO_RING_SIZE; - /* Construct mport selector for corresponding VF */ - efx_mae_mport_vf(efx, efv->idx, &selector); /* Look up actual mport ID */ - rc = efx_mae_lookup_mport(efx, selector, &efv->mport); + rc = efx_mae_lookup_mport(efx, efv->idx, &efv->mport); if (rc) return rc; pci_dbg(efx->pci_dev, "VF %u has mport ID %#x\n", efv->idx, efv->mport); @@ -299,6 +298,7 @@ int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i) i, rc); goto fail1; } + ef100_rep_set_devlink_port(efv); rc = register_netdev(efv->net_dev); if (rc) { pci_err(efx->pci_dev, @@ -310,6 +310,7 @@ int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i) efv->net_dev->name); return 0; fail2: + ef100_rep_unset_devlink_port(efv); efx_ef100_deconfigure_rep(efv); fail1: efx_ef100_rep_destroy_netdev(efv); @@ -325,6 +326,7 @@ void efx_ef100_vfrep_destroy(struct efx_nic *efx, struct efx_rep *efv) return; netif_dbg(efx, drv, rep_dev, "Removing VF representor\n"); unregister_netdev(rep_dev); + ef100_rep_unset_devlink_port(efv); efx_ef100_deconfigure_rep(efv); efx_ef100_rep_destroy_netdev(efv); } @@ -341,6 +343,53 @@ void efx_ef100_fini_vfreps(struct efx_nic *efx) efx_ef100_vfrep_destroy(efx, efv); } +static bool ef100_mport_is_pcie_vnic(struct mae_mport_desc *mport_desc) +{ + return mport_desc->mport_type == MAE_MPORT_DESC_MPORT_TYPE_VNIC && + mport_desc->vnic_client_type == MAE_MPORT_DESC_VNIC_CLIENT_TYPE_FUNCTION; +} + +bool ef100_mport_on_local_intf(struct efx_nic *efx, + struct mae_mport_desc *mport_desc) +{ + struct ef100_nic_data *nic_data = efx->nic_data; + bool pcie_func; + + pcie_func = ef100_mport_is_pcie_vnic(mport_desc); + + return nic_data->have_local_intf && pcie_func && + mport_desc->interface_idx == nic_data->local_mae_intf; +} + +bool 
ef100_mport_is_vf(struct mae_mport_desc *mport_desc) +{ + bool pcie_func; + + pcie_func = ef100_mport_is_pcie_vnic(mport_desc); + return pcie_func && (mport_desc->vf_idx != MAE_MPORT_DESC_VF_IDX_NULL); +} + +void efx_ef100_init_reps(struct efx_nic *efx) +{ + struct ef100_nic_data *nic_data = efx->nic_data; + int rc; + + nic_data->have_local_intf = false; + rc = efx_mae_enumerate_mports(efx); + if (rc) + pci_warn(efx->pci_dev, + "Could not enumerate mports (rc=%d), are we admin?", + rc); +} + +void efx_ef100_fini_reps(struct efx_nic *efx) +{ + struct efx_mae *mae = efx->mae; + + rhashtable_free_and_destroy(&mae->mports_ht, efx_mae_remove_mport, + NULL); +} + static int efx_ef100_rep_poll(struct napi_struct *napi, int weight) { struct efx_rep *efv = container_of(napi, struct efx_rep, napi); diff --git a/drivers/net/ethernet/sfc/ef100_rep.h b/drivers/net/ethernet/sfc/ef100_rep.h index c21bc716f847..a042525a2240 100644 --- a/drivers/net/ethernet/sfc/ef100_rep.h +++ b/drivers/net/ethernet/sfc/ef100_rep.h @@ -22,6 +22,8 @@ struct efx_rep_sw_stats { atomic64_t rx_dropped, tx_errors; }; +struct devlink_port; + /** * struct efx_rep - Private data for an Efx representor * @@ -39,6 +41,7 @@ struct efx_rep_sw_stats { * @rx_lock: protects @rx_list * @napi: NAPI control structure * @stats: software traffic counters for netdev stats + * @dl_port: devlink port associated to this netdev representor */ struct efx_rep { struct efx_nic *parent; @@ -54,6 +57,7 @@ struct efx_rep { spinlock_t rx_lock; struct napi_struct napi; struct efx_rep_sw_stats stats; + struct devlink_port *dl_port; }; int efx_ef100_vfrep_create(struct efx_nic *efx, unsigned int i); @@ -67,4 +71,10 @@ void efx_ef100_rep_rx_packet(struct efx_rep *efv, struct efx_rx_buffer *rx_buf); */ struct efx_rep *efx_ef100_find_rep_by_mport(struct efx_nic *efx, u16 mport); extern const struct net_device_ops efx_ef100_rep_netdev_ops; +void efx_ef100_init_reps(struct efx_nic *efx); +void efx_ef100_fini_reps(struct efx_nic *efx); +struct mae_mport_desc; +bool ef100_mport_on_local_intf(struct efx_nic *efx, + struct mae_mport_desc *mport_desc); +bool ef100_mport_is_vf(struct mae_mport_desc *mport_desc); #endif /* EF100_REP_H */ diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c index 3a86f1213a05..02c2adeb0a12 100644 --- a/drivers/net/ethernet/sfc/efx.c +++ b/drivers/net/ethernet/sfc/efx.c @@ -1028,6 +1028,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx) net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; net_dev->features |= efx->fixed_features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + rc = efx_register_netdev(efx); if (!rc) return 0; diff --git a/drivers/net/ethernet/sfc/efx_devlink.c b/drivers/net/ethernet/sfc/efx_devlink.c new file mode 100644 index 000000000000..381b805659d3 --- /dev/null +++ b/drivers/net/ethernet/sfc/efx_devlink.c @@ -0,0 +1,731 @@ +// SPDX-License-Identifier: GPL-2.0-only +/**************************************************************************** + * Driver for AMD network controllers and boards + * Copyright (C) 2023, Advanced Micro Devices, Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published + * by the Free Software Foundation, incorporated herein by reference. 
+ */ + +#include "net_driver.h" +#include "ef100_nic.h" +#include "efx_devlink.h" +#include <linux/rtc.h> +#include "mcdi.h" +#include "mcdi_functions.h" +#include "mcdi_pcol.h" +#ifdef CONFIG_SFC_SRIOV +#include "mae.h" +#include "ef100_rep.h" +#endif + +struct efx_devlink { + struct efx_nic *efx; +}; + +#ifdef CONFIG_SFC_SRIOV +static void efx_devlink_del_port(struct devlink_port *dl_port) +{ + if (!dl_port) + return; + devl_port_unregister(dl_port); +} + +static int efx_devlink_add_port(struct efx_nic *efx, + struct mae_mport_desc *mport) +{ + bool external = false; + + if (!ef100_mport_on_local_intf(efx, mport)) + external = true; + + switch (mport->mport_type) { + case MAE_MPORT_DESC_MPORT_TYPE_VNIC: + if (mport->vf_idx != MAE_MPORT_DESC_VF_IDX_NULL) + devlink_port_attrs_pci_vf_set(&mport->dl_port, 0, mport->pf_idx, + mport->vf_idx, + external); + else + devlink_port_attrs_pci_pf_set(&mport->dl_port, 0, mport->pf_idx, + external); + break; + default: + /* MAE_MPORT_DESC_MPORT_ALIAS and UNDEFINED */ + return 0; + } + + mport->dl_port.index = mport->mport_id; + + return devl_port_register(efx->devlink, &mport->dl_port, mport->mport_id); +} + +static int efx_devlink_port_addr_get(struct devlink_port *port, u8 *hw_addr, + int *hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct efx_devlink *devlink = devlink_priv(port->devlink); + struct mae_mport_desc *mport_desc; + efx_qword_t pciefn; + u32 client_id; + int rc = 0; + + mport_desc = container_of(port, struct mae_mport_desc, dl_port); + + if (!ef100_mport_on_local_intf(devlink->efx, mport_desc)) { + rc = -EINVAL; + NL_SET_ERR_MSG_FMT(extack, + "Port not on local interface (mport: %u)", + mport_desc->mport_id); + goto out; + } + + if (ef100_mport_is_vf(mport_desc)) + EFX_POPULATE_QWORD_3(pciefn, + PCIE_FUNCTION_PF, PCIE_FUNCTION_PF_NULL, + PCIE_FUNCTION_VF, mport_desc->vf_idx, + PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER); + else + EFX_POPULATE_QWORD_3(pciefn, + PCIE_FUNCTION_PF, mport_desc->pf_idx, + PCIE_FUNCTION_VF, PCIE_FUNCTION_VF_NULL, + PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER); + + rc = efx_ef100_lookup_client_id(devlink->efx, pciefn, &client_id); + if (rc) { + NL_SET_ERR_MSG_FMT(extack, + "No internal client_ID for port (mport: %u)", + mport_desc->mport_id); + goto out; + } + + rc = ef100_get_mac_address(devlink->efx, hw_addr, client_id, true); + if (rc != 0) + NL_SET_ERR_MSG_FMT(extack, + "No available MAC for port (mport: %u)", + mport_desc->mport_id); +out: + *hw_addr_len = ETH_ALEN; + return rc; +} + +static int efx_devlink_port_addr_set(struct devlink_port *port, + const u8 *hw_addr, int hw_addr_len, + struct netlink_ext_ack *extack) +{ + MCDI_DECLARE_BUF(inbuf, MC_CMD_SET_CLIENT_MAC_ADDRESSES_IN_LEN(1)); + struct efx_devlink *devlink = devlink_priv(port->devlink); + struct mae_mport_desc *mport_desc; + efx_qword_t pciefn; + u32 client_id; + int rc; + + mport_desc = container_of(port, struct mae_mport_desc, dl_port); + + if (!ef100_mport_is_vf(mport_desc)) { + NL_SET_ERR_MSG_FMT(extack, + "port mac change not allowed (mport: %u)", + mport_desc->mport_id); + return -EPERM; + } + + EFX_POPULATE_QWORD_3(pciefn, + PCIE_FUNCTION_PF, PCIE_FUNCTION_PF_NULL, + PCIE_FUNCTION_VF, mport_desc->vf_idx, + PCIE_FUNCTION_INTF, PCIE_INTERFACE_CALLER); + + rc = efx_ef100_lookup_client_id(devlink->efx, pciefn, &client_id); + if (rc) { + NL_SET_ERR_MSG_FMT(extack, + "No internal client_ID for port (mport: %u)", + mport_desc->mport_id); + return rc; + } + + MCDI_SET_DWORD(inbuf, SET_CLIENT_MAC_ADDRESSES_IN_CLIENT_HANDLE, + client_id); + + 
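/* the new MAC address to install for that client */
+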
ether_addr_copy(MCDI_PTR(inbuf, SET_CLIENT_MAC_ADDRESSES_IN_MAC_ADDRS), + hw_addr); + + rc = efx_mcdi_rpc(devlink->efx, MC_CMD_SET_CLIENT_MAC_ADDRESSES, inbuf, + sizeof(inbuf), NULL, 0, NULL); + if (rc) + NL_SET_ERR_MSG_FMT(extack, + "sfc MC_CMD_SET_CLIENT_MAC_ADDRESSES mcdi error (mport: %u)", + mport_desc->mport_id); + + return rc; +} + +#endif + +static int efx_devlink_info_nvram_partition(struct efx_nic *efx, + struct devlink_info_req *req, + unsigned int partition_type, + const char *version_name) +{ + char buf[EFX_MAX_VERSION_INFO_LEN]; + u16 version[4]; + int rc; + + rc = efx_mcdi_nvram_metadata(efx, partition_type, NULL, version, NULL, + 0); + if (rc) { + netif_err(efx, drv, efx->net_dev, "mcdi nvram %s: failed\n", + version_name); + return rc; + } + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", version[0], + version[1], version[2], version[3]); + devlink_info_version_stored_put(req, version_name, buf); + + return 0; +} + +static int efx_devlink_info_stored_versions(struct efx_nic *efx, + struct devlink_info_req *req) +{ + int rc; + + rc = efx_devlink_info_nvram_partition(efx, req, + NVRAM_PARTITION_TYPE_BUNDLE, + DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID); + if (rc) + return rc; + + rc = efx_devlink_info_nvram_partition(efx, req, + NVRAM_PARTITION_TYPE_MC_FIRMWARE, + DEVLINK_INFO_VERSION_GENERIC_FW_MGMT); + if (rc) + return rc; + + rc = efx_devlink_info_nvram_partition(efx, req, + NVRAM_PARTITION_TYPE_SUC_FIRMWARE, + EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC); + if (rc) + return rc; + + rc = efx_devlink_info_nvram_partition(efx, req, + NVRAM_PARTITION_TYPE_EXPANSION_ROM, + EFX_DEVLINK_INFO_VERSION_FW_EXPROM); + if (rc) + return rc; + + rc = efx_devlink_info_nvram_partition(efx, req, + NVRAM_PARTITION_TYPE_EXPANSION_UEFI, + EFX_DEVLINK_INFO_VERSION_FW_UEFI); + return rc; +} + +#define EFX_VER_FLAG(_f) \ + (MC_CMD_GET_VERSION_V5_OUT_ ## _f ## _PRESENT_LBN) + +static void efx_devlink_info_running_v2(struct efx_nic *efx, + struct devlink_info_req *req, + unsigned int flags, efx_dword_t *outbuf) +{ + char buf[EFX_MAX_VERSION_INFO_LEN]; + union { + const __le32 *dwords; + const __le16 *words; + const char *str; + } ver; + struct rtc_time build_date; + unsigned int build_id; + size_t offset; + __maybe_unused u64 tstamp; + + if (flags & BIT(EFX_VER_FLAG(BOARD_EXT_INFO))) { + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%s", + MCDI_PTR(outbuf, GET_VERSION_V2_OUT_BOARD_NAME)); + devlink_info_version_fixed_put(req, + DEVLINK_INFO_VERSION_GENERIC_BOARD_ID, + buf); + + /* Favour full board version if present (in V5 or later) */ + if (~flags & BIT(EFX_VER_FLAG(BOARD_VERSION))) { + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u", + MCDI_DWORD(outbuf, + GET_VERSION_V2_OUT_BOARD_REVISION)); + devlink_info_version_fixed_put(req, + DEVLINK_INFO_VERSION_GENERIC_BOARD_REV, + buf); + } + + ver.str = MCDI_PTR(outbuf, GET_VERSION_V2_OUT_BOARD_SERIAL); + if (ver.str[0]) + devlink_info_board_serial_number_put(req, ver.str); + } + + if (flags & BIT(EFX_VER_FLAG(FPGA_EXT_INFO))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V2_OUT_FPGA_VERSION); + offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u_%c%u", + le32_to_cpu(ver.dwords[0]), + 'A' + le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2])); + + ver.str = MCDI_PTR(outbuf, GET_VERSION_V2_OUT_FPGA_EXTRA); + if (ver.str[0]) + snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset, + " (%s)", ver.str); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_FPGA_REV, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(CMC_EXT_INFO))) 
{ + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V2_OUT_CMCFW_VERSION); + offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), + le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + +#ifdef CONFIG_RTC_LIB + tstamp = MCDI_QWORD(outbuf, + GET_VERSION_V2_OUT_CMCFW_BUILD_DATE); + if (tstamp) { + rtc_time64_to_tm(tstamp, &build_date); + snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset, + " (%ptRd)", &build_date); + } +#endif + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_FW_MGMT_CMC, + buf); + } + + ver.words = (__le16 *)MCDI_PTR(outbuf, GET_VERSION_V2_OUT_VERSION); + offset = snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le16_to_cpu(ver.words[0]), le16_to_cpu(ver.words[1]), + le16_to_cpu(ver.words[2]), le16_to_cpu(ver.words[3])); + if (flags & BIT(EFX_VER_FLAG(MCFW_EXT_INFO))) { + build_id = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_MCFW_BUILD_ID); + snprintf(&buf[offset], EFX_MAX_VERSION_INFO_LEN - offset, + " (%x) %s", build_id, + MCDI_PTR(outbuf, GET_VERSION_V2_OUT_MCFW_BUILD_NAME)); + } + devlink_info_version_running_put(req, + DEVLINK_INFO_VERSION_GENERIC_FW_MGMT, + buf); + + if (flags & BIT(EFX_VER_FLAG(SUCFW_EXT_INFO))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V2_OUT_SUCFW_VERSION); +#ifdef CONFIG_RTC_LIB + tstamp = MCDI_QWORD(outbuf, + GET_VERSION_V2_OUT_SUCFW_BUILD_DATE); + rtc_time64_to_tm(tstamp, &build_date); +#else + memset(&build_date, 0, sizeof(build_date)); +#endif + build_id = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_SUCFW_CHIP_ID); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, + "%u.%u.%u.%u type %x (%ptRd)", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), le32_to_cpu(ver.dwords[3]), + build_id, &build_date); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC, + buf); + } +} + +static void efx_devlink_info_running_v3(struct efx_nic *efx, + struct devlink_info_req *req, + unsigned int flags, efx_dword_t *outbuf) +{ + char buf[EFX_MAX_VERSION_INFO_LEN]; + union { + const __le32 *dwords; + const __le16 *words; + const char *str; + } ver; + + if (flags & BIT(EFX_VER_FLAG(DATAPATH_HW_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V3_OUT_DATAPATH_HW_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_DATAPATH_HW, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(DATAPATH_FW_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V3_OUT_DATAPATH_FW_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_DATAPATH_FW, + buf); + } +} + +static void efx_devlink_info_running_v4(struct efx_nic *efx, + struct devlink_info_req *req, + unsigned int flags, efx_dword_t *outbuf) +{ + char buf[EFX_MAX_VERSION_INFO_LEN]; + union { + const __le32 *dwords; + const __le16 *words; + const char *str; + } ver; + + if (flags & BIT(EFX_VER_FLAG(SOC_BOOT_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V4_OUT_SOC_BOOT_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + 
devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_SOC_BOOT, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(SOC_UBOOT_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V4_OUT_SOC_UBOOT_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_SOC_UBOOT, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(SOC_MAIN_ROOTFS_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V4_OUT_SOC_MAIN_ROOTFS_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_SOC_MAIN, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(SOC_RECOVERY_BUILDROOT_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V4_OUT_SOC_RECOVERY_BUILDROOT_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_SOC_RECOVERY, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(SUCFW_VERSION)) && + ~flags & BIT(EFX_VER_FLAG(SUCFW_EXT_INFO))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V4_OUT_SUCFW_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC, + buf); + } +} + +static void efx_devlink_info_running_v5(struct efx_nic *efx, + struct devlink_info_req *req, + unsigned int flags, efx_dword_t *outbuf) +{ + char buf[EFX_MAX_VERSION_INFO_LEN]; + union { + const __le32 *dwords; + const __le16 *words; + const char *str; + } ver; + + if (flags & BIT(EFX_VER_FLAG(BOARD_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V5_OUT_BOARD_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + DEVLINK_INFO_VERSION_GENERIC_BOARD_REV, + buf); + } + + if (flags & BIT(EFX_VER_FLAG(BUNDLE_VERSION))) { + ver.dwords = (__le32 *)MCDI_PTR(outbuf, + GET_VERSION_V5_OUT_BUNDLE_VERSION); + + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le32_to_cpu(ver.dwords[0]), le32_to_cpu(ver.dwords[1]), + le32_to_cpu(ver.dwords[2]), + le32_to_cpu(ver.dwords[3])); + + devlink_info_version_running_put(req, + DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID, + buf); + } +} + +static int efx_devlink_info_running_versions(struct efx_nic *efx, + struct devlink_info_req *req) +{ + MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_VERSION_V5_OUT_LEN); + MCDI_DECLARE_BUF(inbuf, MC_CMD_GET_VERSION_EXT_IN_LEN); + char buf[EFX_MAX_VERSION_INFO_LEN]; + union { + const __le32 *dwords; + const __le16 *words; + const char *str; + } ver; + size_t outlength; + unsigned int flags; + int rc; + + rc = efx_mcdi_rpc(efx, MC_CMD_GET_VERSION, inbuf, sizeof(inbuf), + outbuf, sizeof(outbuf), &outlength); + if (rc || outlength < MC_CMD_GET_VERSION_OUT_LEN) { + netif_err(efx, drv, efx->net_dev, + "mcdi MC_CMD_GET_VERSION failed\n"); + return rc; + } + + /* Handle 
previous output */ + if (outlength < MC_CMD_GET_VERSION_V2_OUT_LEN) { + ver.words = (__le16 *)MCDI_PTR(outbuf, + GET_VERSION_EXT_OUT_VERSION); + snprintf(buf, EFX_MAX_VERSION_INFO_LEN, "%u.%u.%u.%u", + le16_to_cpu(ver.words[0]), + le16_to_cpu(ver.words[1]), + le16_to_cpu(ver.words[2]), + le16_to_cpu(ver.words[3])); + + devlink_info_version_running_put(req, + DEVLINK_INFO_VERSION_GENERIC_FW_MGMT, + buf); + return 0; + } + + /* Handle V2 additions */ + flags = MCDI_DWORD(outbuf, GET_VERSION_V2_OUT_FLAGS); + efx_devlink_info_running_v2(efx, req, flags, outbuf); + + if (outlength < MC_CMD_GET_VERSION_V3_OUT_LEN) + return 0; + + /* Handle V3 additions */ + efx_devlink_info_running_v3(efx, req, flags, outbuf); + + if (outlength < MC_CMD_GET_VERSION_V4_OUT_LEN) + return 0; + + /* Handle V4 additions */ + efx_devlink_info_running_v4(efx, req, flags, outbuf); + + if (outlength < MC_CMD_GET_VERSION_V5_OUT_LEN) + return 0; + + /* Handle V5 additions */ + efx_devlink_info_running_v5(efx, req, flags, outbuf); + + return 0; +} + +#define EFX_MAX_SERIALNUM_LEN (ETH_ALEN * 2 + 1) + +static int efx_devlink_info_board_cfg(struct efx_nic *efx, + struct devlink_info_req *req) +{ + char sn[EFX_MAX_SERIALNUM_LEN]; + u8 mac_address[ETH_ALEN]; + int rc; + + rc = efx_mcdi_get_board_cfg(efx, (u8 *)mac_address, NULL, NULL); + if (!rc) { + snprintf(sn, EFX_MAX_SERIALNUM_LEN, "%pm", mac_address); + devlink_info_serial_number_put(req, sn); + } + return rc; +} + +static int efx_devlink_info_get(struct devlink *devlink, + struct devlink_info_req *req, + struct netlink_ext_ack *extack) +{ + struct efx_devlink *devlink_private = devlink_priv(devlink); + struct efx_nic *efx = devlink_private->efx; + int rc; + + /* Several different MCDI commands are used. We report first error + * through extack returning at that point. Specific error + * information via system messages. + */ + rc = efx_devlink_info_board_cfg(efx, req); + if (rc) { + NL_SET_ERR_MSG_MOD(extack, "Getting board info failed"); + return rc; + } + rc = efx_devlink_info_stored_versions(efx, req); + if (rc) { + NL_SET_ERR_MSG_MOD(extack, "Getting stored versions failed"); + return rc; + } + rc = efx_devlink_info_running_versions(efx, req); + if (rc) { + NL_SET_ERR_MSG_MOD(extack, "Getting running versions failed"); + return rc; + } + + return 0; +} + +static const struct devlink_ops sfc_devlink_ops = { + .info_get = efx_devlink_info_get, +#ifdef CONFIG_SFC_SRIOV + .port_function_hw_addr_get = efx_devlink_port_addr_get, + .port_function_hw_addr_set = efx_devlink_port_addr_set, +#endif +}; + +#ifdef CONFIG_SFC_SRIOV +static struct devlink_port *ef100_set_devlink_port(struct efx_nic *efx, u32 idx) +{ + struct mae_mport_desc *mport; + u32 id; + int rc; + + if (efx_mae_lookup_mport(efx, idx, &id)) { + /* This should not happen. */ + if (idx == MAE_MPORT_DESC_VF_IDX_NULL) + pci_warn_once(efx->pci_dev, "No mport ID found for PF.\n"); + else + pci_warn_once(efx->pci_dev, "No mport ID found for VF %u.\n", + idx); + return NULL; + } + + mport = efx_mae_get_mport(efx, id); + if (!mport) { + /* This should not happen. 
*/ + if (idx == MAE_MPORT_DESC_VF_IDX_NULL) + pci_warn_once(efx->pci_dev, "No mport found for PF.\n"); + else + pci_warn_once(efx->pci_dev, "No mport found for VF %u.\n", + idx); + return NULL; + } + + rc = efx_devlink_add_port(efx, mport); + if (rc) { + if (idx == MAE_MPORT_DESC_VF_IDX_NULL) + pci_warn(efx->pci_dev, + "devlink port creation for PF failed.\n"); + else + pci_warn(efx->pci_dev, + "devlink_port creation for VF %u failed.\n", + idx); + return NULL; + } + + return &mport->dl_port; +} + +void ef100_rep_set_devlink_port(struct efx_rep *efv) +{ + efv->dl_port = ef100_set_devlink_port(efv->parent, efv->idx); +} + +void ef100_pf_set_devlink_port(struct efx_nic *efx) +{ + efx->dl_port = ef100_set_devlink_port(efx, MAE_MPORT_DESC_VF_IDX_NULL); +} + +void ef100_rep_unset_devlink_port(struct efx_rep *efv) +{ + efx_devlink_del_port(efv->dl_port); +} + +void ef100_pf_unset_devlink_port(struct efx_nic *efx) +{ + efx_devlink_del_port(efx->dl_port); +} +#endif + +void efx_fini_devlink_lock(struct efx_nic *efx) +{ + if (efx->devlink) + devl_lock(efx->devlink); +} + +void efx_fini_devlink_and_unlock(struct efx_nic *efx) +{ + if (efx->devlink) { + devl_unregister(efx->devlink); + devl_unlock(efx->devlink); + devlink_free(efx->devlink); + efx->devlink = NULL; + } +} + +int efx_probe_devlink_and_lock(struct efx_nic *efx) +{ + struct efx_devlink *devlink_private; + + if (efx->type->is_vf) + return 0; + + efx->devlink = devlink_alloc(&sfc_devlink_ops, + sizeof(struct efx_devlink), + &efx->pci_dev->dev); + if (!efx->devlink) + return -ENOMEM; + + devl_lock(efx->devlink); + devlink_private = devlink_priv(efx->devlink); + devlink_private->efx = efx; + + devl_register(efx->devlink); + + return 0; +} + +void efx_probe_devlink_unlock(struct efx_nic *efx) +{ + if (!efx->devlink) + return; + + devl_unlock(efx->devlink); +} diff --git a/drivers/net/ethernet/sfc/efx_devlink.h b/drivers/net/ethernet/sfc/efx_devlink.h new file mode 100644 index 000000000000..e5fd5e1dcc27 --- /dev/null +++ b/drivers/net/ethernet/sfc/efx_devlink.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/**************************************************************************** + * Driver for AMD network controllers and boards + * Copyright (C) 2023, Advanced Micro Devices, Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published + * by the Free Software Foundation, incorporated herein by reference. + */ + +#ifndef _EFX_DEVLINK_H +#define _EFX_DEVLINK_H + +#include "net_driver.h" +#include <net/devlink.h> + +/* Custom devlink-info version object names for details that do not map to the + * generic standardized names. 
+ */ +#define EFX_DEVLINK_INFO_VERSION_FW_MGMT_SUC "fw.mgmt.suc" +#define EFX_DEVLINK_INFO_VERSION_FW_MGMT_CMC "fw.mgmt.cmc" +#define EFX_DEVLINK_INFO_VERSION_FPGA_REV "fpga.rev" +#define EFX_DEVLINK_INFO_VERSION_DATAPATH_HW "fpga.app" +#define EFX_DEVLINK_INFO_VERSION_DATAPATH_FW DEVLINK_INFO_VERSION_GENERIC_FW_APP +#define EFX_DEVLINK_INFO_VERSION_SOC_BOOT "coproc.boot" +#define EFX_DEVLINK_INFO_VERSION_SOC_UBOOT "coproc.uboot" +#define EFX_DEVLINK_INFO_VERSION_SOC_MAIN "coproc.main" +#define EFX_DEVLINK_INFO_VERSION_SOC_RECOVERY "coproc.recovery" +#define EFX_DEVLINK_INFO_VERSION_FW_EXPROM "fw.exprom" +#define EFX_DEVLINK_INFO_VERSION_FW_UEFI "fw.uefi" + +#define EFX_MAX_VERSION_INFO_LEN 64 + +int efx_probe_devlink_and_lock(struct efx_nic *efx); +void efx_probe_devlink_unlock(struct efx_nic *efx); +void efx_fini_devlink_lock(struct efx_nic *efx); +void efx_fini_devlink_and_unlock(struct efx_nic *efx); + +#ifdef CONFIG_SFC_SRIOV +struct efx_rep; + +void ef100_pf_set_devlink_port(struct efx_nic *efx); +void ef100_rep_set_devlink_port(struct efx_rep *efv); +void ef100_pf_unset_devlink_port(struct efx_nic *efx); +void ef100_rep_unset_devlink_port(struct efx_rep *efv); +#endif +#endif /* _EFX_DEVLINK_H */ diff --git a/drivers/net/ethernet/sfc/mae.c b/drivers/net/ethernet/sfc/mae.c index 583baf69981c..2d32abe5f478 100644 --- a/drivers/net/ethernet/sfc/mae.c +++ b/drivers/net/ethernet/sfc/mae.c @@ -9,8 +9,11 @@ * by the Free Software Foundation, incorporated herein by reference. */ +#include <linux/rhashtable.h> +#include "ef100_nic.h" #include "mae.h" #include "mcdi.h" +#include "mcdi_pcol.h" #include "mcdi_pcol_mae.h" int efx_mae_allocate_mport(struct efx_nic *efx, u32 *id, u32 *label) @@ -94,7 +97,7 @@ void efx_mae_mport_mport(struct efx_nic *efx __always_unused, u32 mport_id, u32 } /* id is really only 24 bits wide */ -int efx_mae_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id) +int efx_mae_fw_lookup_mport(struct efx_nic *efx, u32 selector, u32 *id) { MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN); MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_MPORT_LOOKUP_IN_LEN); @@ -485,11 +488,193 @@ int efx_mae_free_counter(struct efx_nic *efx, struct efx_tc_counter *cnt) return 0; } +int efx_mae_lookup_mport(struct efx_nic *efx, u32 vf_idx, u32 *id) +{ + struct ef100_nic_data *nic_data = efx->nic_data; + struct efx_mae *mae = efx->mae; + struct rhashtable_iter walk; + struct mae_mport_desc *m; + int rc = -ENOENT; + + rhashtable_walk_enter(&mae->mports_ht, &walk); + rhashtable_walk_start(&walk); + while ((m = rhashtable_walk_next(&walk)) != NULL) { + if (m->mport_type == MAE_MPORT_DESC_MPORT_TYPE_VNIC && + m->interface_idx == nic_data->local_mae_intf && + m->pf_idx == 0 && + m->vf_idx == vf_idx) { + *id = m->mport_id; + rc = 0; + break; + } + } + rhashtable_walk_stop(&walk); + rhashtable_walk_exit(&walk); + return rc; +} + static bool efx_mae_asl_id(u32 id) { return !!(id & BIT(31)); } +/* mport handling */ +static const struct rhashtable_params efx_mae_mports_ht_params = { + .key_len = sizeof(u32), + .key_offset = offsetof(struct mae_mport_desc, mport_id), + .head_offset = offsetof(struct mae_mport_desc, linkage), +}; + +struct mae_mport_desc *efx_mae_get_mport(struct efx_nic *efx, u32 mport_id) +{ + return rhashtable_lookup_fast(&efx->mae->mports_ht, &mport_id, + efx_mae_mports_ht_params); +} + +static int efx_mae_add_mport(struct efx_nic *efx, struct mae_mport_desc *desc) +{ + struct efx_mae *mae = efx->mae; + int rc; + + rc = rhashtable_insert_fast(&mae->mports_ht, &desc->linkage, + 
efx_mae_mports_ht_params);
+
+ if (rc) {
+ pci_err(efx->pci_dev, "Failed to insert MPORT %08x, rc %d\n",
+ desc->mport_id, rc);
+ kfree(desc);
+ return rc;
+ }
+
+ return rc;
+}
+
+void efx_mae_remove_mport(void *desc, void *arg)
+{
+ struct mae_mport_desc *mport = desc;
+
+ synchronize_rcu();
+ kfree(mport);
+}
+
+static int efx_mae_process_mport(struct efx_nic *efx,
+ struct mae_mport_desc *desc)
+{
+ struct ef100_nic_data *nic_data = efx->nic_data;
+ struct mae_mport_desc *mport;
+
+ mport = efx_mae_get_mport(efx, desc->mport_id);
+ if (!IS_ERR_OR_NULL(mport)) {
+ netif_err(efx, drv, efx->net_dev,
+ "mport with id %u already exists\n", desc->mport_id);
+ return -EEXIST;
+ }
+
+ if (nic_data->have_own_mport &&
+ desc->mport_id == nic_data->own_mport) {
+ WARN_ON(desc->mport_type != MAE_MPORT_DESC_MPORT_TYPE_VNIC);
+ WARN_ON(desc->vnic_client_type !=
+ MAE_MPORT_DESC_VNIC_CLIENT_TYPE_FUNCTION);
+ nic_data->local_mae_intf = desc->interface_idx;
+ nic_data->have_local_intf = true;
+ pci_dbg(efx->pci_dev, "MAE interface_idx is %u\n",
+ nic_data->local_mae_intf);
+ }
+
+ return efx_mae_add_mport(efx, desc);
+}
+
+#define MCDI_MPORT_JOURNAL_LEN \
+ ALIGN(MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2, 4)
+
+int efx_mae_enumerate_mports(struct efx_nic *efx)
+{
+ efx_dword_t *outbuf = kzalloc(MCDI_MPORT_JOURNAL_LEN, GFP_KERNEL);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN);
+ MCDI_DECLARE_STRUCT_PTR(desc);
+ size_t outlen, stride, count;
+ int rc = 0, i;
+
+ if (!outbuf)
+ return -ENOMEM;
+ do {
+ rc = efx_mcdi_rpc(efx, MC_CMD_MAE_MPORT_READ_JOURNAL, inbuf,
+ sizeof(inbuf), outbuf,
+ MCDI_MPORT_JOURNAL_LEN, &outlen);
+ if (rc)
+ goto fail;
+ if (outlen < MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA_OFST) {
+ rc = -EIO;
+ goto fail;
+ }
+ count = MCDI_DWORD(outbuf, MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_COUNT);
+ if (!count)
+ continue; /* not break; we want to look at MORE flag */
+ stride = MCDI_DWORD(outbuf, MAE_MPORT_READ_JOURNAL_OUT_SIZEOF_MPORT_DESC);
+ if (stride < MAE_MPORT_DESC_LEN) {
+ rc = -EIO;
+ goto fail;
+ }
+ if (outlen < MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LEN(count * stride)) {
+ rc = -EIO;
+ goto fail;
+ }
+
+ for (i = 0; i < count; i++) {
+ struct mae_mport_desc *d;
+
+ d = kzalloc(sizeof(*d), GFP_KERNEL);
+ if (!d) {
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ desc = (efx_dword_t *)
+ _MCDI_PTR(outbuf, MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA_OFST +
+ i * stride);
+ d->mport_id = MCDI_STRUCT_DWORD(desc, MAE_MPORT_DESC_MPORT_ID);
+ d->flags = MCDI_STRUCT_DWORD(desc, MAE_MPORT_DESC_FLAGS);
+ d->caller_flags = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_CALLER_FLAGS);
+ d->mport_type = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_MPORT_TYPE);
+ switch (d->mport_type) {
+ case MAE_MPORT_DESC_MPORT_TYPE_NET_PORT:
+ d->port_idx = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_NET_PORT_IDX);
+ break;
+ case MAE_MPORT_DESC_MPORT_TYPE_ALIAS:
+ d->alias_mport_id = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_ALIAS_DELIVER_MPORT_ID);
+ break;
+ case MAE_MPORT_DESC_MPORT_TYPE_VNIC:
+ d->vnic_client_type = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_VNIC_CLIENT_TYPE);
+ d->interface_idx = MCDI_STRUCT_DWORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_INTERFACE);
+ d->pf_idx = MCDI_STRUCT_WORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_PF_IDX);
+ d->vf_idx = MCDI_STRUCT_WORD(desc,
+ MAE_MPORT_DESC_VNIC_FUNCTION_VF_IDX);
+ break;
+ default:
+ /* Unknown mport_type, just accept it */
+ break;
+ }
+ rc = efx_mae_process_mport(efx, d);
+ /* Any failure will be due to memory allocation failure,
+ * so there
diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c index af338208eae9..a7f2c31071e8 100644 --- a/drivers/net/ethernet/sfc/mcdi.c +++ b/drivers/net/ethernet/sfc/mcdi.c @@ -2175,6 +2175,78 @@ int efx_mcdi_get_privilege_mask(struct efx_nic *efx, u32 *mask) return 0; } +int efx_mcdi_nvram_metadata(struct efx_nic *efx, 
unsigned int type, + u32 *subtype, u16 version[4], char *desc, + size_t descsize) +{ + MCDI_DECLARE_BUF(inbuf, MC_CMD_NVRAM_METADATA_IN_LEN); + efx_dword_t *outbuf; + size_t outlen; + u32 flags; + int rc; + + outbuf = kzalloc(MC_CMD_NVRAM_METADATA_OUT_LENMAX_MCDI2, GFP_KERNEL); + if (!outbuf) + return -ENOMEM; + + MCDI_SET_DWORD(inbuf, NVRAM_METADATA_IN_TYPE, type); + + rc = efx_mcdi_rpc_quiet(efx, MC_CMD_NVRAM_METADATA, inbuf, + sizeof(inbuf), outbuf, + MC_CMD_NVRAM_METADATA_OUT_LENMAX_MCDI2, + &outlen); + if (rc) + goto out_free; + if (outlen < MC_CMD_NVRAM_METADATA_OUT_LENMIN) { + rc = -EIO; + goto out_free; + } + + flags = MCDI_DWORD(outbuf, NVRAM_METADATA_OUT_FLAGS); + + if (desc && descsize > 0) { + if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_VALID_LBN)) { + if (descsize <= + MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)) { + rc = -E2BIG; + goto out_free; + } + + strncpy(desc, + MCDI_PTR(outbuf, NVRAM_METADATA_OUT_DESCRIPTION), + MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)); + desc[MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)] = '\0'; + } else { + desc[0] = '\0'; + } + } + + if (subtype) { + if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_SUBTYPE_VALID_LBN)) + *subtype = MCDI_DWORD(outbuf, NVRAM_METADATA_OUT_SUBTYPE); + else + *subtype = 0; + } + + if (version) { + if (flags & BIT(MC_CMD_NVRAM_METADATA_OUT_VERSION_VALID_LBN)) { + version[0] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_W); + version[1] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_X); + version[2] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_Y); + version[3] = MCDI_WORD(outbuf, NVRAM_METADATA_OUT_VERSION_Z); + } else { + version[0] = 0; + version[1] = 0; + version[2] = 0; + version[3] = 0; + } + } + +out_free: + kfree(outbuf); + return rc; +} + #ifdef CONFIG_SFC_MTD #define EFX_MCDI_NVRAM_LEN_MAX 128 diff --git a/drivers/net/ethernet/sfc/mcdi.h b/drivers/net/ethernet/sfc/mcdi.h index 7e35fec9da35..b139b76febff 100644 --- a/drivers/net/ethernet/sfc/mcdi.h +++ b/drivers/net/ethernet/sfc/mcdi.h @@ -229,6 +229,9 @@ void efx_mcdi_sensor_event(struct efx_nic *efx, efx_qword_t *ev); #define MCDI_WORD(_buf, _field) \ ((u16)BUILD_BUG_ON_ZERO(MC_CMD_ ## _field ## _LEN != 2) + \ le16_to_cpu(*(__force const __le16 *)MCDI_PTR(_buf, _field))) +#define MCDI_STRUCT_WORD(_buf, _field) \ + ((void)BUILD_BUG_ON_ZERO(_field ## _LEN != 2), \ + le16_to_cpu(*(__force const __le16 *)MCDI_STRUCT_PTR(_buf, _field))) /* Write a 16-bit field defined in the protocol as being big-endian. */ #define MCDI_STRUCT_SET_WORD_BE(_buf, _field, _value) do { \ BUILD_BUG_ON(_field ## _LEN != 2); \ @@ -241,6 +244,8 @@ void efx_mcdi_sensor_event(struct efx_nic *efx, efx_qword_t *ev); EFX_POPULATE_DWORD_1(*_MCDI_STRUCT_DWORD(_buf, _field), EFX_DWORD_0, _value) #define MCDI_DWORD(_buf, _field) \ EFX_DWORD_FIELD(*_MCDI_DWORD(_buf, _field), EFX_DWORD_0) +#define MCDI_STRUCT_DWORD(_buf, _field) \ + EFX_DWORD_FIELD(*_MCDI_STRUCT_DWORD(_buf, _field), EFX_DWORD_0) /* Write a 32-bit field defined in the protocol as being big-endian. 
*/ #define MCDI_STRUCT_SET_DWORD_BE(_buf, _field, _value) do { \ BUILD_BUG_ON(_field ## _LEN != 4); \ @@ -378,6 +383,9 @@ int efx_mcdi_nvram_info(struct efx_nic *efx, unsigned int type, size_t *size_out, size_t *erase_size_out, bool *protected_out); int efx_new_mcdi_nvram_test_all(struct efx_nic *efx); +int efx_mcdi_nvram_metadata(struct efx_nic *efx, unsigned int type, + u32 *subtype, u16 version[4], char *desc, + size_t descsize); int efx_mcdi_nvram_test_all(struct efx_nic *efx); int efx_mcdi_handle_assertion(struct efx_nic *efx); int efx_mcdi_set_id_led(struct efx_nic *efx, enum efx_led_mode mode); diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h index 3b49e216768b..fcd51d3992fa 100644 --- a/drivers/net/ethernet/sfc/net_driver.h +++ b/drivers/net/ethernet/sfc/net_driver.h @@ -845,6 +845,8 @@ enum efx_xdp_tx_queues_mode { EFX_XDP_TX_QUEUES_BORROWED /* queues borrowed from net stack */ }; +struct efx_mae; + /** * struct efx_nic - an Efx NIC * @name: Device name (net device name or bus id before net device registered) @@ -881,6 +883,7 @@ enum efx_xdp_tx_queues_mode { * @msi_context: Context for each MSI * @extra_channel_types: Types of extra (non-traffic) channels that * should be allocated for this NIC + * @mae: Details of the Match Action Engine * @xdp_tx_queue_count: Number of entries in %xdp_tx_queues. * @xdp_tx_queues: Array of pointers to tx queues used for XDP transmit. * @xdp_txq_queues_mode: XDP TX queues sharing strategy. @@ -994,6 +997,8 @@ enum efx_xdp_tx_queues_mode { * xdp_rxq_info structures? * @netdev_notifier: Netdevice notifier. * @tc: state for TC offload (EF100). + * @devlink: reference to devlink structure owned by this device + * @dl_port: devlink port associated with the PF * @mem_bar: The BAR that is mapped into membase. * @reg_base: Offset from the start of the bar to the function control window. 
* @monitor_work: Hardware monitor workitem @@ -1043,6 +1048,7 @@ struct efx_nic { struct efx_msi_context msi_context[EFX_MAX_CHANNELS]; const struct efx_channel_type * extra_channel_type[EFX_MAX_EXTRA_CHANNELS]; + struct efx_mae *mae; unsigned int xdp_tx_queue_count; struct efx_tx_queue **xdp_tx_queues; @@ -1179,6 +1185,8 @@ struct efx_nic { struct notifier_block netdev_notifier; struct efx_tc_state *tc; + struct devlink *devlink; + struct devlink_port *dl_port; unsigned int mem_bar; u32 reg_base; diff --git a/drivers/net/ethernet/sfc/siena/efx.c b/drivers/net/ethernet/sfc/siena/efx.c index 60e5b7c8ccf9..ef52ec71d197 100644 --- a/drivers/net/ethernet/sfc/siena/efx.c +++ b/drivers/net/ethernet/sfc/siena/efx.c @@ -1007,6 +1007,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx) net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; net_dev->features |= efx->fixed_features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + rc = efx_register_netdev(efx); if (!rc) return 0; diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 9b46579b5a10..2d7347b71c41 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -2104,6 +2104,9 @@ static int netsec_probe(struct platform_device *pdev) NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; ndev->hw_features = ndev->features; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + priv->rx_cksum_offload_flag = true; ret = netsec_register_mdio(priv, phy_addr); diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c index 80efdeeb0b59..18acf7dd74e5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c @@ -159,15 +159,13 @@ disable: return err; } -static int dwc_qos_remove(struct platform_device *pdev) +static void dwc_qos_remove(struct platform_device *pdev) { struct net_device *ndev = platform_get_drvdata(pdev); struct stmmac_priv *priv = netdev_priv(ndev); clk_disable_unprepare(priv->plat->pclk); clk_disable_unprepare(priv->plat->stmmac_clk); - - return 0; } #define SDMEMCOMPPADCTRL 0x8800 @@ -384,7 +382,7 @@ error: return err; } -static int tegra_eqos_remove(struct platform_device *pdev) +static void tegra_eqos_remove(struct platform_device *pdev) { struct tegra_eqos *eqos = get_stmmac_bsp_priv(&pdev->dev); @@ -394,15 +392,13 @@ static int tegra_eqos_remove(struct platform_device *pdev) clk_disable_unprepare(eqos->clk_rx); clk_disable_unprepare(eqos->clk_slave); clk_disable_unprepare(eqos->clk_master); - - return 0; } struct dwc_eth_dwmac_data { int (*probe)(struct platform_device *pdev, struct plat_stmmacenet_data *data, struct stmmac_resources *res); - int (*remove)(struct platform_device *pdev); + void (*remove)(struct platform_device *pdev); }; static const struct dwc_eth_dwmac_data dwc_qos_data = { @@ -473,21 +469,16 @@ static int dwc_eth_dwmac_remove(struct platform_device *pdev) struct net_device *ndev = platform_get_drvdata(pdev); struct stmmac_priv *priv = netdev_priv(ndev); const struct dwc_eth_dwmac_data *data; - int err; data = device_get_match_data(&pdev->dev); - err = stmmac_dvr_remove(&pdev->dev); - if (err < 0) - dev_err(&pdev->dev, "failed to remove platform: %d\n", err); + stmmac_dvr_remove(&pdev->dev); - err = data->remove(pdev); - if (err < 0) - dev_err(&pdev->dev, "failed to remove subdriver: %d\n", err); + 
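This hunk and the stmmac glue-driver hunks that follow all apply one conversion: stmmac_dvr_remove() becomes void (see the stmmac_main.c change further down), so remove callbacks whose only remaining work is clock unwinding stop returning error codes that nobody can act on. A minimal sketch of the resulting shape, with a hypothetical glue-driver name:

	static void example_glue_remove(struct platform_device *pdev)
	{
		struct net_device *ndev = platform_get_drvdata(pdev);
		struct stmmac_priv *priv = netdev_priv(ndev);

		stmmac_dvr_remove(&pdev->dev);	/* void after this series */
		clk_disable_unprepare(priv->plat->pclk);
	}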
data->remove(pdev); stmmac_remove_config_dt(pdev, priv->plat); - return err; + return 0; } static const struct of_device_id dwc_eth_dwmac_match[] = { diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c index bd52fb7cf486..ac8580f501e2 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c @@ -31,6 +31,12 @@ #define GPR_ENET_QOS_CLK_TX_CLK_SEL (0x1 << 20) #define GPR_ENET_QOS_RGMII_EN (0x1 << 21) +#define MX93_GPR_ENET_QOS_INTF_MODE_MASK GENMASK(3, 0) +#define MX93_GPR_ENET_QOS_INTF_SEL_MII (0x0 << 1) +#define MX93_GPR_ENET_QOS_INTF_SEL_RMII (0x4 << 1) +#define MX93_GPR_ENET_QOS_INTF_SEL_RGMII (0x1 << 1) +#define MX93_GPR_ENET_QOS_CLK_GEN_EN (0x1 << 0) + struct imx_dwmac_ops { u32 addr_width; bool mac_rgmii_txclk_auto_adj; @@ -90,6 +96,35 @@ imx8dxl_set_intf_mode(struct plat_stmmacenet_data *plat_dat) return ret; } +static int imx93_set_intf_mode(struct plat_stmmacenet_data *plat_dat) +{ + struct imx_priv_data *dwmac = plat_dat->bsp_priv; + int val; + + switch (plat_dat->interface) { + case PHY_INTERFACE_MODE_MII: + val = MX93_GPR_ENET_QOS_INTF_SEL_MII; + break; + case PHY_INTERFACE_MODE_RMII: + val = MX93_GPR_ENET_QOS_INTF_SEL_RMII; + break; + case PHY_INTERFACE_MODE_RGMII: + case PHY_INTERFACE_MODE_RGMII_ID: + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_TXID: + val = MX93_GPR_ENET_QOS_INTF_SEL_RGMII; + break; + default: + dev_dbg(dwmac->dev, "imx dwmac doesn't support %d interface\n", + plat_dat->interface); + return -EINVAL; + } + + val |= MX93_GPR_ENET_QOS_CLK_GEN_EN; + return regmap_update_bits(dwmac->intf_regmap, dwmac->intf_reg_off, + MX93_GPR_ENET_QOS_INTF_MODE_MASK, val); +}; + static int imx_dwmac_clks_config(void *priv, bool enabled) { struct imx_priv_data *dwmac = priv; @@ -188,7 +223,9 @@ imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev) } dwmac->clk_mem = NULL; - if (of_machine_is_compatible("fsl,imx8dxl")) { + + if (of_machine_is_compatible("fsl,imx8dxl") || + of_machine_is_compatible("fsl,imx93")) { dwmac->clk_mem = devm_clk_get(dev, "mem"); if (IS_ERR(dwmac->clk_mem)) { dev_err(dev, "failed to get mem clock\n"); @@ -196,10 +233,11 @@ imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev) } } - if (of_machine_is_compatible("fsl,imx8mp")) { - /* Binding doc describes the property: - is required by i.MX8MP. - is optional for i.MX8DXL. + if (of_machine_is_compatible("fsl,imx8mp") || + of_machine_is_compatible("fsl,imx93")) { + /* Binding doc describes the property: + * is required by i.MX8MP, i.MX93. + * is optional for i.MX8DXL. 
*/ dwmac->intf_regmap = syscon_regmap_lookup_by_phandle(np, "intf_mode"); if (IS_ERR(dwmac->intf_regmap)) @@ -296,9 +334,16 @@ static struct imx_dwmac_ops imx8dxl_dwmac_data = { .set_intf_mode = imx8dxl_set_intf_mode, }; +static struct imx_dwmac_ops imx93_dwmac_data = { + .addr_width = 32, + .mac_rgmii_txclk_auto_adj = true, + .set_intf_mode = imx93_set_intf_mode, +}; + static const struct of_device_id imx_dwmac_match[] = { { .compatible = "nxp,imx8mp-dwmac-eqos", .data = &imx8mp_dwmac_data }, { .compatible = "nxp,imx8dxl-dwmac-eqos", .data = &imx8dxl_dwmac_data }, + { .compatible = "nxp,imx93-dwmac-eqos", .data = &imx93_dwmac_data }, { } }; MODULE_DEVICE_TABLE(of, imx_dwmac_match); diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c index 6656d76b6766..4b8fd11563e4 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c @@ -1915,11 +1915,12 @@ err_remove_config_dt: static int rk_gmac_remove(struct platform_device *pdev) { struct rk_priv_data *bsp_priv = get_stmmac_bsp_priv(&pdev->dev); - int ret = stmmac_dvr_remove(&pdev->dev); + + stmmac_dvr_remove(&pdev->dev); rk_gmac_powerdown(bsp_priv); - return ret; + return 0; } #ifdef CONFIG_PM_SLEEP diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c index 710d7435733e..be3b1ebc06ab 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c @@ -371,11 +371,12 @@ err_remove_config_dt: static int sti_dwmac_remove(struct platform_device *pdev) { struct sti_dwmac *dwmac = get_stmmac_bsp_priv(&pdev->dev); - int ret = stmmac_dvr_remove(&pdev->dev); + + stmmac_dvr_remove(&pdev->dev); clk_disable_unprepare(dwmac->clk); - return ret; + return 0; } #ifdef CONFIG_PM_SLEEP diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c index 2b38a499a404..0616b3a04ff3 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c @@ -421,9 +421,10 @@ static int stm32_dwmac_remove(struct platform_device *pdev) { struct net_device *ndev = platform_get_drvdata(pdev); struct stmmac_priv *priv = netdev_priv(ndev); - int ret = stmmac_dvr_remove(&pdev->dev); struct stm32_dwmac *dwmac = priv->plat->bsp_priv; + stmmac_dvr_remove(&pdev->dev); + stm32_dwmac_clk_disable(priv->plat->bsp_priv); if (dwmac->irq_pwr_wakeup >= 0) { @@ -431,7 +432,7 @@ static int stm32_dwmac_remove(struct platform_device *pdev) device_init_wakeup(&pdev->dev, false); } - return ret; + return 0; } static int stm32mp1_suspend(struct stm32_dwmac *dwmac) diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h index 592b4067f9b8..16a7421715cb 100644 --- a/drivers/net/ethernet/stmicro/stmmac/hwif.h +++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h @@ -567,6 +567,7 @@ struct tc_cbs_qopt_offload; struct flow_cls_offload; struct tc_taprio_qopt_offload; struct tc_etf_qopt_offload; +struct tc_query_caps_base; struct stmmac_tc_ops { int (*init)(struct stmmac_priv *priv); @@ -580,6 +581,8 @@ struct stmmac_tc_ops { struct tc_taprio_qopt_offload *qopt); int (*setup_etf)(struct stmmac_priv *priv, struct tc_etf_qopt_offload *qopt); + int (*query_caps)(struct stmmac_priv *priv, + struct tc_query_caps_base *base); }; #define stmmac_tc_init(__priv, __args...) 
\ @@ -594,6 +597,8 @@ struct stmmac_tc_ops { stmmac_do_callback(__priv, tc, setup_taprio, __args) #define stmmac_tc_setup_etf(__priv, __args...) \ stmmac_do_callback(__priv, tc, setup_etf, __args) +#define stmmac_tc_query_caps(__priv, __args...) \ + stmmac_do_callback(__priv, tc, query_caps, __args) struct stmmac_counters; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index bdbf86cb102a..3d15e1e92e18 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -345,7 +345,7 @@ int stmmac_xdp_open(struct net_device *dev); void stmmac_xdp_release(struct net_device *dev); int stmmac_resume(struct device *dev); int stmmac_suspend(struct device *dev); -int stmmac_dvr_remove(struct device *dev); +void stmmac_dvr_remove(struct device *dev); int stmmac_dvr_probe(struct device *device, struct plat_stmmacenet_data *plat_dat, struct stmmac_resources *res); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 1a5b8dab5e9b..e4902a7bb61e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -5992,6 +5992,8 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type, struct stmmac_priv *priv = netdev_priv(ndev); switch (type) { + case TC_QUERY_CAPS: + return stmmac_tc_query_caps(priv, priv, type_data); case TC_SETUP_BLOCK: return flow_block_cb_setup_simple(type_data, &stmmac_block_cb_list, @@ -7151,6 +7153,9 @@ int stmmac_dvr_probe(struct device *device, ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY | + NETDEV_XDP_ACT_NDO_XMIT; ret = stmmac_tc_init(priv, priv); if (!ret) { @@ -7347,7 +7352,7 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_probe); * Description: this function resets the TX/RX processes, disables the MAC RX/TX * changes the link status, releases the DMA descriptor rings. 
*/ -int stmmac_dvr_remove(struct device *dev) +void stmmac_dvr_remove(struct device *dev) { struct net_device *ndev = dev_get_drvdata(dev); struct stmmac_priv *priv = netdev_priv(ndev); @@ -7383,8 +7388,6 @@ int stmmac_dvr_remove(struct device *dev) pm_runtime_disable(dev); pm_runtime_put_noidle(dev); - - return 0; } EXPORT_SYMBOL_GPL(stmmac_dvr_remove); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c index 5f177ea80725..21aaa2730ac8 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c @@ -45,8 +45,8 @@ #define MII_XGMAC_PA_SHIFT 16 #define MII_XGMAC_DA_SHIFT 21 -static int stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr, - int phyreg, u32 *hw_addr) +static void stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr, + int devad, int phyreg, u32 *hw_addr) { u32 tmp; @@ -56,19 +56,14 @@ static int stmmac_xgmac2_c45_format(struct stmmac_priv *priv, int phyaddr, writel(tmp, priv->ioaddr + XGMAC_MDIO_C22P); *hw_addr = (phyaddr << MII_XGMAC_PA_SHIFT) | (phyreg & 0xffff); - *hw_addr |= (phyreg >> MII_DEVADDR_C45_SHIFT) << MII_XGMAC_DA_SHIFT; - return 0; + *hw_addr |= devad << MII_XGMAC_DA_SHIFT; } -static int stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr, - int phyreg, u32 *hw_addr) +static void stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr, + int phyreg, u32 *hw_addr) { u32 tmp; - /* HW does not support C22 addr >= 4 */ - if (phyaddr > MII_XGMAC_MAX_C22ADDR) - return -ENODEV; - /* Set port as Clause 22 */ tmp = readl(priv->ioaddr + XGMAC_MDIO_C22P); tmp &= ~MII_XGMAC_C22P_MASK; @@ -76,16 +71,14 @@ static int stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr, writel(tmp, priv->ioaddr + XGMAC_MDIO_C22P); *hw_addr = (phyaddr << MII_XGMAC_PA_SHIFT) | (phyreg & 0x1f); - return 0; } -static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg) +static int stmmac_xgmac2_mdio_read(struct stmmac_priv *priv, u32 addr, + u32 value) { - struct net_device *ndev = bus->priv; - struct stmmac_priv *priv = netdev_priv(ndev); unsigned int mii_address = priv->hw->mii.addr; unsigned int mii_data = priv->hw->mii.data; - u32 tmp, addr, value = MII_XGMAC_BUSY; + u32 tmp; int ret; ret = pm_runtime_resume_and_get(priv->device); @@ -99,20 +92,6 @@ static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg) goto err_disable_clks; } - if (phyreg & MII_ADDR_C45) { - phyreg &= ~MII_ADDR_C45; - - ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr); - if (ret) - goto err_disable_clks; - } else { - ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr); - if (ret) - goto err_disable_clks; - - value |= MII_XGMAC_SADDR; - } - value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift) & priv->hw->mii.clk_csr_mask; value |= MII_XGMAC_READ; @@ -144,14 +123,44 @@ err_disable_clks: return ret; } -static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr, - int phyreg, u16 phydata) +static int stmmac_xgmac2_mdio_read_c22(struct mii_bus *bus, int phyaddr, + int phyreg) { struct net_device *ndev = bus->priv; - struct stmmac_priv *priv = netdev_priv(ndev); + struct stmmac_priv *priv; + u32 addr; + + priv = netdev_priv(ndev); + + /* HW does not support C22 addr >= 4 */ + if (phyaddr > MII_XGMAC_MAX_C22ADDR) + return -ENODEV; + + stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr); + + return stmmac_xgmac2_mdio_read(priv, addr, MII_XGMAC_BUSY); +} + +static int 
stmmac_xgmac2_mdio_read_c45(struct mii_bus *bus, int phyaddr, + int devad, int phyreg) +{ + struct net_device *ndev = bus->priv; + struct stmmac_priv *priv; + u32 addr; + + priv = netdev_priv(ndev); + + stmmac_xgmac2_c45_format(priv, phyaddr, devad, phyreg, &addr); + + return stmmac_xgmac2_mdio_read(priv, addr, MII_XGMAC_BUSY); +} + +static int stmmac_xgmac2_mdio_write(struct stmmac_priv *priv, u32 addr, + u32 value, u16 phydata) +{ unsigned int mii_address = priv->hw->mii.addr; unsigned int mii_data = priv->hw->mii.data; - u32 addr, tmp, value = MII_XGMAC_BUSY; + u32 tmp; int ret; ret = pm_runtime_resume_and_get(priv->device); @@ -165,20 +174,6 @@ static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr, goto err_disable_clks; } - if (phyreg & MII_ADDR_C45) { - phyreg &= ~MII_ADDR_C45; - - ret = stmmac_xgmac2_c45_format(priv, phyaddr, phyreg, &addr); - if (ret) - goto err_disable_clks; - } else { - ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr); - if (ret) - goto err_disable_clks; - - value |= MII_XGMAC_SADDR; - } - value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift) & priv->hw->mii.clk_csr_mask; value |= phydata; @@ -205,8 +200,63 @@ err_disable_clks: return ret; } +static int stmmac_xgmac2_mdio_write_c22(struct mii_bus *bus, int phyaddr, + int phyreg, u16 phydata) +{ + struct net_device *ndev = bus->priv; + struct stmmac_priv *priv; + u32 addr; + + priv = netdev_priv(ndev); + + /* HW does not support C22 addr >= 4 */ + if (phyaddr > MII_XGMAC_MAX_C22ADDR) + return -ENODEV; + + stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr); + + return stmmac_xgmac2_mdio_write(priv, addr, + MII_XGMAC_BUSY | MII_XGMAC_SADDR, phydata); +} + +static int stmmac_xgmac2_mdio_write_c45(struct mii_bus *bus, int phyaddr, + int devad, int phyreg, u16 phydata) +{ + struct net_device *ndev = bus->priv; + struct stmmac_priv *priv; + u32 addr; + + priv = netdev_priv(ndev); + + stmmac_xgmac2_c45_format(priv, phyaddr, devad, phyreg, &addr); + + return stmmac_xgmac2_mdio_write(priv, addr, MII_XGMAC_BUSY, + phydata); +} + +static int stmmac_mdio_read(struct stmmac_priv *priv, int data, u32 value) +{ + unsigned int mii_address = priv->hw->mii.addr; + unsigned int mii_data = priv->hw->mii.data; + u32 v; + + if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), + 100, 10000)) + return -EBUSY; + + writel(data, priv->ioaddr + mii_data); + writel(value, priv->ioaddr + mii_address); + + if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), + 100, 10000)) + return -EBUSY; + + /* Read the data from the MII data register */ + return readl(priv->ioaddr + mii_data) & MII_DATA_MASK; +} + /** - * stmmac_mdio_read + * stmmac_mdio_read_c22 * @bus: points to the mii_bus structure * @phyaddr: MII addr * @phyreg: MII reg @@ -215,15 +265,12 @@ err_disable_clks: * accessing the PHY registers. * Fortunately, it seems this has no drawback for the 7109 MAC. 
*/ -static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg) +static int stmmac_mdio_read_c22(struct mii_bus *bus, int phyaddr, int phyreg) { struct net_device *ndev = bus->priv; struct stmmac_priv *priv = netdev_priv(ndev); - unsigned int mii_address = priv->hw->mii.addr; - unsigned int mii_data = priv->hw->mii.data; u32 value = MII_BUSY; int data = 0; - u32 v; data = pm_runtime_resume_and_get(priv->device); if (data < 0) @@ -236,60 +283,94 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg) & priv->hw->mii.clk_csr_mask; if (priv->plat->has_gmac4) { value |= MII_GMAC4_READ; - if (phyreg & MII_ADDR_C45) { - value |= MII_GMAC4_C45E; - value &= ~priv->hw->mii.reg_mask; - value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) << - priv->hw->mii.reg_shift) & - priv->hw->mii.reg_mask; - - data |= (phyreg & MII_REGADDR_C45_MASK) << - MII_GMAC4_REG_ADDR_SHIFT; - } } - if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), - 100, 10000)) { - data = -EBUSY; - goto err_disable_clks; - } + data = stmmac_mdio_read(priv, data, value); - writel(data, priv->ioaddr + mii_data); - writel(value, priv->ioaddr + mii_address); + pm_runtime_put(priv->device); - if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), - 100, 10000)) { - data = -EBUSY; - goto err_disable_clks; + return data; +} + +/** + * stmmac_mdio_read_c45 + * @bus: points to the mii_bus structure + * @phyaddr: MII addr + * @devad: device address to read + * @phyreg: MII reg + * Description: it reads data from the MII register from within the phy device. + * For the 7111 GMAC, we must set the bit 0 in the MII address register while + * accessing the PHY registers. + * Fortunately, it seems this has no drawback for the 7109 MAC. + */ +static int stmmac_mdio_read_c45(struct mii_bus *bus, int phyaddr, int devad, + int phyreg) +{ + struct net_device *ndev = bus->priv; + struct stmmac_priv *priv = netdev_priv(ndev); + u32 value = MII_BUSY; + int data = 0; + + data = pm_runtime_get_sync(priv->device); + if (data < 0) { + pm_runtime_put_noidle(priv->device); + return data; } - /* Read the data from the MII data register */ - data = (int)readl(priv->ioaddr + mii_data) & MII_DATA_MASK; + value |= (phyaddr << priv->hw->mii.addr_shift) + & priv->hw->mii.addr_mask; + value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask; + value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift) + & priv->hw->mii.clk_csr_mask; + value |= MII_GMAC4_READ; + value |= MII_GMAC4_C45E; + value &= ~priv->hw->mii.reg_mask; + value |= (devad << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask; + + data |= phyreg << MII_GMAC4_REG_ADDR_SHIFT; + + data = stmmac_mdio_read(priv, data, value); -err_disable_clks: pm_runtime_put(priv->device); return data; } +static int stmmac_mdio_write(struct stmmac_priv *priv, int data, u32 value) +{ + unsigned int mii_address = priv->hw->mii.addr; + unsigned int mii_data = priv->hw->mii.data; + u32 v; + + /* Wait until any existing MII operation is complete */ + if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), + 100, 10000)) + return -EBUSY; + + /* Set the MII address register to write */ + writel(data, priv->ioaddr + mii_data); + writel(value, priv->ioaddr + mii_address); + + /* Wait until any existing MII operation is complete */ + return readl_poll_timeout(priv->ioaddr + mii_address, v, + !(v & MII_BUSY), 100, 10000); +} + /** - * stmmac_mdio_write + * stmmac_mdio_write_c22 * @bus: points to the mii_bus structure * @phyaddr: MII addr * @phyreg: MII 
reg * @phydata: phy data * Description: it writes the data into the MII register from within the device. */ -static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg, - u16 phydata) +static int stmmac_mdio_write_c22(struct mii_bus *bus, int phyaddr, int phyreg, + u16 phydata) { struct net_device *ndev = bus->priv; struct stmmac_priv *priv = netdev_priv(ndev); - unsigned int mii_address = priv->hw->mii.addr; - unsigned int mii_data = priv->hw->mii.data; int ret, data = phydata; u32 value = MII_BUSY; - u32 v; ret = pm_runtime_resume_and_get(priv->device); if (ret < 0) @@ -301,38 +382,57 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg, value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift) & priv->hw->mii.clk_csr_mask; - if (priv->plat->has_gmac4) { + if (priv->plat->has_gmac4) value |= MII_GMAC4_WRITE; - if (phyreg & MII_ADDR_C45) { - value |= MII_GMAC4_C45E; - value &= ~priv->hw->mii.reg_mask; - value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) << - priv->hw->mii.reg_shift) & - priv->hw->mii.reg_mask; - - data |= (phyreg & MII_REGADDR_C45_MASK) << - MII_GMAC4_REG_ADDR_SHIFT; - } - } else { + else value |= MII_WRITE; - } - /* Wait until any existing MII operation is complete */ - if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), - 100, 10000)) { - ret = -EBUSY; - goto err_disable_clks; + ret = stmmac_mdio_write(priv, data, value); + + pm_runtime_put(priv->device); + + return ret; +} + +/** + * stmmac_mdio_write_c45 + * @bus: points to the mii_bus structure + * @phyaddr: MII addr + * @phyreg: MII reg + * @devad: device address to write + * @phydata: phy data + * Description: it writes the data into the MII register from within the device. + */ +static int stmmac_mdio_write_c45(struct mii_bus *bus, int phyaddr, + int devad, int phyreg, u16 phydata) +{ + struct net_device *ndev = bus->priv; + struct stmmac_priv *priv = netdev_priv(ndev); + int ret, data = phydata; + u32 value = MII_BUSY; + + ret = pm_runtime_get_sync(priv->device); + if (ret < 0) { + pm_runtime_put_noidle(priv->device); + return ret; } - /* Set the MII address register to write */ - writel(data, priv->ioaddr + mii_data); - writel(value, priv->ioaddr + mii_address); + value |= (phyaddr << priv->hw->mii.addr_shift) + & priv->hw->mii.addr_mask; + value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask; - /* Wait until any existing MII operation is complete */ - ret = readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY), - 100, 10000); + value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift) + & priv->hw->mii.clk_csr_mask; + + value |= MII_GMAC4_WRITE; + value |= MII_GMAC4_C45E; + value &= ~priv->hw->mii.reg_mask; + value |= (devad << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask; + + data |= phyreg << MII_GMAC4_REG_ADDR_SHIFT; + + ret = stmmac_mdio_write(priv, data, value); -err_disable_clks: pm_runtime_put(priv->device); return ret; @@ -453,12 +553,11 @@ int stmmac_mdio_register(struct net_device *ndev) new_bus->name = "stmmac"; - if (priv->plat->has_gmac4) - new_bus->probe_capabilities = MDIOBUS_C22_C45; - if (priv->plat->has_xgmac) { - new_bus->read = &stmmac_xgmac2_mdio_read; - new_bus->write = &stmmac_xgmac2_mdio_write; + new_bus->read = &stmmac_xgmac2_mdio_read_c22; + new_bus->write = &stmmac_xgmac2_mdio_write_c22; + new_bus->read_c45 = &stmmac_xgmac2_mdio_read_c45; + new_bus->write_c45 = &stmmac_xgmac2_mdio_write_c45; /* Right now only C22 phys are supported */ max_addr = MII_XGMAC_MAX_C22ADDR + 1; @@ -468,8 +567,13 @@ int 
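The split visible throughout this file mirrors the new mii_bus contract: buses publish separate read/write (Clause 22) and read_c45/write_c45 (Clause 45) callbacks, and the MMD device address travels as an explicit devad argument instead of being folded into regnum via MII_ADDR_C45. A hedged caller-side sketch of the difference (example_c45_read is a hypothetical name):

	#include <linux/mdio.h>

	/* Read CTRL1 from the PMA/PMD MMD of a Clause 45 PHY; the devad is
	 * now passed explicitly rather than packed into regnum.
	 */
	static int example_c45_read(struct mii_bus *bus, int addr)
	{
		return mdiobus_c45_read(bus, addr, MDIO_MMD_PMAPMD, MDIO_CTRL1);
	}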
stmmac_mdio_register(struct net_device *ndev) dev_err(dev, "Unsupported phy_addr (max=%d)\n", MII_XGMAC_MAX_C22ADDR); } else { - new_bus->read = &stmmac_mdio_read; - new_bus->write = &stmmac_mdio_write; + new_bus->read = &stmmac_mdio_read_c22; + new_bus->write = &stmmac_mdio_write_c22; + if (priv->plat->has_gmac4) { + new_bus->read_c45 = &stmmac_mdio_read_c45; + new_bus->write_c45 = &stmmac_mdio_write_c45; + } + max_addr = PHY_MAX_ADDR; } @@ -490,7 +594,7 @@ int stmmac_mdio_register(struct net_device *ndev) /* Looks like we need a dummy read for XGMAC only and C45 PHYs */ if (priv->plat->has_xgmac) - stmmac_xgmac2_mdio_read(new_bus, 0, MII_ADDR_C45); + stmmac_xgmac2_mdio_read_c45(new_bus, 0, 0, 0); /* If fixed-link is set, skip PHY scanning */ if (!fwnode) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index 0046a4ee6e64..067a40fe0a23 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -711,14 +711,15 @@ int stmmac_pltfr_remove(struct platform_device *pdev) struct net_device *ndev = platform_get_drvdata(pdev); struct stmmac_priv *priv = netdev_priv(ndev); struct plat_stmmacenet_data *plat = priv->plat; - int ret = stmmac_dvr_remove(&pdev->dev); + + stmmac_dvr_remove(&pdev->dev); if (plat->exit) plat->exit(pdev, plat->bsp_priv); stmmac_remove_config_dt(pdev, plat); - return ret; + return 0; } EXPORT_SYMBOL_GPL(stmmac_pltfr_remove); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c index 2cfb18cef1d4..9d55226479b4 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c @@ -1107,6 +1107,25 @@ static int tc_setup_etf(struct stmmac_priv *priv, return 0; } +static int tc_query_caps(struct stmmac_priv *priv, + struct tc_query_caps_base *base) +{ + switch (base->type) { + case TC_SETUP_QDISC_TAPRIO: { + struct tc_taprio_caps *caps = base->caps; + + if (!priv->dma_cap.estsel) + return -EOPNOTSUPP; + + caps->gate_mask_per_txq = true; + + return 0; + } + default: + return -EOPNOTSUPP; + } +} + const struct stmmac_tc_ops dwmac510_tc_ops = { .init = tc_init, .setup_cls_u32 = tc_setup_cls_u32, @@ -1114,4 +1133,5 @@ const struct stmmac_tc_ops dwmac510_tc_ops = { .setup_cls = tc_setup_cls, .setup_taprio = tc_setup_taprio, .setup_etf = tc_setup_etf, + .query_caps = tc_query_caps, }; diff --git a/drivers/net/ethernet/sunplus/spl2sw_mdio.c b/drivers/net/ethernet/sunplus/spl2sw_mdio.c index 733ae1704269..c8ef17e34f3c 100644 --- a/drivers/net/ethernet/sunplus/spl2sw_mdio.c +++ b/drivers/net/ethernet/sunplus/spl2sw_mdio.c @@ -61,9 +61,6 @@ static int spl2sw_mii_read(struct mii_bus *bus, int addr, int regnum) { struct spl2sw_common *comm = bus->priv; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - return spl2sw_mdio_access(comm, SPL2SW_MDIO_READ_CMD, addr, regnum, 0); } @@ -72,9 +69,6 @@ static int spl2sw_mii_write(struct mii_bus *bus, int addr, int regnum, u16 val) struct spl2sw_common *comm = bus->priv; int ret; - if (regnum & MII_ADDR_C45) - return -EOPNOTSUPP; - ret = spl2sw_mdio_access(comm, SPL2SW_MDIO_WRITE_CMD, addr, regnum, val); if (ret < 0) return ret; diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c index 6cda4b7c10cb..4e3861c47708 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -1426,6 +1426,70 @@ static const struct 
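tc_query_caps() above and am65_cpsw_tc_query_caps() below answer the same question: before programming a taprio schedule, the core asks the driver for its capabilities through ndo_setup_tc(TC_QUERY_CAPS). A simplified sketch of the calling side (the in-tree helper is qdisc_offload_query_caps(); the example_ name is hypothetical):

	#include <linux/netdevice.h>
	#include <net/pkt_cls.h>
	#include <net/pkt_sched.h>

	static bool example_gate_mask_per_txq(struct net_device *dev)
	{
		struct tc_taprio_caps caps = {};
		struct tc_query_caps_base base = {
			.type = TC_SETUP_QDISC_TAPRIO,
			.caps = &caps,
		};

		if (dev->netdev_ops->ndo_setup_tc)
			dev->netdev_ops->ndo_setup_tc(dev, TC_QUERY_CAPS, &base);

		return caps.gate_mask_per_txq;
	}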
net_device_ops am65_cpsw_nuss_netdev_ops = { .ndo_setup_tc = am65_cpsw_qos_ndo_setup_tc, }; +static void am65_cpsw_disable_phy(struct phy *phy) +{ + phy_power_off(phy); + phy_exit(phy); +} + +static int am65_cpsw_enable_phy(struct phy *phy) +{ + int ret; + + ret = phy_init(phy); + if (ret < 0) + return ret; + + ret = phy_power_on(phy); + if (ret < 0) { + phy_exit(phy); + return ret; + } + + return 0; +} + +static void am65_cpsw_disable_serdes_phy(struct am65_cpsw_common *common) +{ + struct am65_cpsw_port *port; + struct phy *phy; + int i; + + for (i = 0; i < common->port_num; i++) { + port = &common->ports[i]; + phy = port->slave.serdes_phy; + if (phy) + am65_cpsw_disable_phy(phy); + } +} + +static int am65_cpsw_init_serdes_phy(struct device *dev, struct device_node *port_np, + struct am65_cpsw_port *port) +{ + const char *name = "serdes-phy"; + struct phy *phy; + int ret; + + phy = devm_of_phy_get(dev, port_np, name); + if (PTR_ERR(phy) == -ENODEV) + return 0; + if (IS_ERR(phy)) + return PTR_ERR(phy); + + /* Serdes PHY exists. Store it. */ + port->slave.serdes_phy = phy; + + ret = am65_cpsw_enable_phy(phy); + if (ret < 0) + goto err_phy; + + return 0; + +err_phy: + devm_phy_put(dev, phy); + return ret; +} + static void am65_cpsw_nuss_mac_config(struct phylink_config *config, unsigned int mode, const struct phylink_link_state *state) { @@ -1883,11 +1947,6 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common) int ret = PTR_ERR(cpts); of_node_put(node); - if (ret == -EOPNOTSUPP) { - dev_info(dev, "cpts disabled\n"); - return 0; - } - dev_err(dev, "cpts create err %d\n", ret); return ret; } @@ -1969,6 +2028,11 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common) goto of_node_put; } + /* Initialize the Serdes PHY for the port */ + ret = am65_cpsw_init_serdes_phy(dev, port_np, port); + if (ret) + return ret; + port->slave.mac_only = of_property_read_bool(port_np, "ti,mac-only"); @@ -2694,11 +2758,19 @@ static const struct am65_cpsw_pdata j7200_cpswxg_pdata = { .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII), }; +static const struct am65_cpsw_pdata j721e_cpswxg_pdata = { + .quirks = 0, + .ale_dev_id = "am64-cpswxg", + .fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE, + .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII), +}; + static const struct of_device_id am65_cpsw_nuss_of_mtable[] = { { .compatible = "ti,am654-cpsw-nuss", .data = &am65x_sr1_0}, { .compatible = "ti,j721e-cpsw-nuss", .data = &j721e_pdata}, { .compatible = "ti,am642-cpsw-nuss", .data = &am64x_cpswxg_pdata}, { .compatible = "ti,j7200-cpswxg-nuss", .data = &j7200_cpswxg_pdata}, + { .compatible = "ti,j721e-cpswxg-nuss", .data = &j721e_cpswxg_pdata}, { /* sentinel */ }, }; MODULE_DEVICE_TABLE(of, am65_cpsw_nuss_of_mtable); @@ -2852,6 +2924,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev) err_free_phylink: am65_cpsw_nuss_phylink_cleanup(common); + am65_cpts_release(common->cpts); err_of_clear: of_platform_device_destroy(common->mdio_dev, NULL); err_pm_clear: @@ -2880,6 +2953,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev) */ am65_cpsw_nuss_cleanup_ndev(common); am65_cpsw_nuss_phylink_cleanup(common); + am65_cpts_release(common->cpts); + am65_cpsw_disable_serdes_phy(common); of_platform_device_destroy(common->mdio_dev, NULL); diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h index e5f1c44788c1..cad04662739c 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h @@ -32,6 +32,7 
@@ struct am65_cpsw_slave_data { struct device_node *phy_node; phy_interface_t phy_if; struct phy *ifphy; + struct phy *serdes_phy; bool rx_pause; bool tx_pause; u8 mac_addr[ETH_ALEN]; diff --git a/drivers/net/ethernet/ti/am65-cpsw-qos.c b/drivers/net/ethernet/ti/am65-cpsw-qos.c index e162771893af..8dc2c3085dcf 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-qos.c +++ b/drivers/net/ethernet/ti/am65-cpsw-qos.c @@ -585,6 +585,26 @@ static int am65_cpsw_setup_taprio(struct net_device *ndev, void *type_data) return am65_cpsw_set_taprio(ndev, type_data); } +static int am65_cpsw_tc_query_caps(struct net_device *ndev, void *type_data) +{ + struct tc_query_caps_base *base = type_data; + + switch (base->type) { + case TC_SETUP_QDISC_TAPRIO: { + struct tc_taprio_caps *caps = base->caps; + + if (!IS_ENABLED(CONFIG_TI_AM65_CPSW_TAS)) + return -EOPNOTSUPP; + + caps->gate_mask_per_txq = true; + + return 0; + } + default: + return -EOPNOTSUPP; + } +} + static int am65_cpsw_qos_clsflower_add_policer(struct am65_cpsw_port *port, struct netlink_ext_ack *extack, struct flow_cls_offload *cls, @@ -765,6 +785,8 @@ int am65_cpsw_qos_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type, void *type_data) { switch (type) { + case TC_QUERY_CAPS: + return am65_cpsw_tc_query_caps(ndev, type_data); case TC_SETUP_QDISC_TAPRIO: return am65_cpsw_setup_taprio(ndev, type_data); case TC_SETUP_BLOCK: diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c index 9535396b28cd..16ee9c29cb35 100644 --- a/drivers/net/ethernet/ti/am65-cpts.c +++ b/drivers/net/ethernet/ti/am65-cpts.c @@ -176,6 +176,10 @@ struct am65_cpts { u32 genf_enable; u32 hw_ts_enable; struct sk_buff_head txq; + bool pps_enabled; + bool pps_present; + u32 pps_hw_ts_idx; + u32 pps_genf_idx; /* context save/restore */ u64 sr_cpts_ns; u64 sr_ktime_ns; @@ -319,8 +323,15 @@ static int am65_cpts_fifo_read(struct am65_cpts *cpts) case AM65_CPTS_EV_HW: pevent.index = am65_cpts_event_get_port(event) - 1; pevent.timestamp = event->timestamp; - pevent.type = PTP_CLOCK_EXTTS; - dev_dbg(cpts->dev, "AM65_CPTS_EV_HW p:%d t:%llu\n", + if (cpts->pps_enabled && pevent.index == cpts->pps_hw_ts_idx) { + pevent.type = PTP_CLOCK_PPSUSR; + pevent.pps_times.ts_real = ns_to_timespec64(pevent.timestamp); + } else { + pevent.type = PTP_CLOCK_EXTTS; + } + dev_dbg(cpts->dev, "AM65_CPTS_EV_HW:%s p:%d t:%llu\n", + pevent.type == PTP_CLOCK_EXTTS ? 
+ "extts" : "pps", pevent.index, event->timestamp); ptp_clock_event(cpts->ptp_clock, &pevent); @@ -394,10 +405,13 @@ static irqreturn_t am65_cpts_interrupt(int irq, void *dev_id) static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) { struct am65_cpts *cpts = container_of(ptp, struct am65_cpts, ptp_info); + u32 pps_ctrl_val = 0, pps_ppm_hi = 0, pps_ppm_low = 0; s32 ppb = scaled_ppm_to_ppb(scaled_ppm); + int pps_index = cpts->pps_genf_idx; + u64 adj_period, pps_adj_period; + u32 ctrl_val, ppm_hi, ppm_low; + unsigned long flags; int neg_adj = 0; - u64 adj_period; - u32 val; if (ppb < 0) { neg_adj = 1; @@ -417,17 +431,53 @@ static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) mutex_lock(&cpts->ptp_clk_lock); - val = am65_cpts_read32(cpts, control); + ctrl_val = am65_cpts_read32(cpts, control); if (neg_adj) - val |= AM65_CPTS_CONTROL_TS_PPM_DIR; + ctrl_val |= AM65_CPTS_CONTROL_TS_PPM_DIR; else - val &= ~AM65_CPTS_CONTROL_TS_PPM_DIR; - am65_cpts_write32(cpts, val, control); + ctrl_val &= ~AM65_CPTS_CONTROL_TS_PPM_DIR; + + ppm_hi = upper_32_bits(adj_period) & 0x3FF; + ppm_low = lower_32_bits(adj_period); + + if (cpts->pps_enabled) { + pps_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control); + if (neg_adj) + pps_ctrl_val &= ~BIT(1); + else + pps_ctrl_val |= BIT(1); + + /* GenF PPM will do correction using cpts refclk tick which is + * (cpts->ts_add_val + 1) ns, so GenF length PPM adj period + * need to be corrected. + */ + pps_adj_period = adj_period * (cpts->ts_add_val + 1); + pps_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF; + pps_ppm_low = lower_32_bits(pps_adj_period); + } + + spin_lock_irqsave(&cpts->lock, flags); + + /* All below writes must be done extremely fast: + * - delay between PPM dir and PPM value changes can cause err due old + * PPM correction applied in wrong direction + * - delay between CPTS-clock PPM cfg and GenF PPM cfg can cause err + * due CPTS-clock PPM working with new cfg while GenF PPM cfg still + * with old for short period of time + */ + + am65_cpts_write32(cpts, ctrl_val, control); + am65_cpts_write32(cpts, ppm_hi, ts_ppm_hi); + am65_cpts_write32(cpts, ppm_low, ts_ppm_low); + + if (cpts->pps_enabled) { + am65_cpts_write32(cpts, pps_ctrl_val, genf[pps_index].control); + am65_cpts_write32(cpts, pps_ppm_hi, genf[pps_index].ppm_hi); + am65_cpts_write32(cpts, pps_ppm_low, genf[pps_index].ppm_low); + } - val = upper_32_bits(adj_period) & 0x3FF; - am65_cpts_write32(cpts, val, ts_ppm_hi); - val = lower_32_bits(adj_period); - am65_cpts_write32(cpts, val, ts_ppm_low); + /* All GenF/EstF can be updated here the same way */ + spin_unlock_irqrestore(&cpts->lock, flags); mutex_unlock(&cpts->ptp_clk_lock); @@ -507,7 +557,13 @@ static void am65_cpts_extts_enable_hw(struct am65_cpts *cpts, u32 index, int on) static int am65_cpts_extts_enable(struct am65_cpts *cpts, u32 index, int on) { - if (!!(cpts->hw_ts_enable & BIT(index)) == !!on) + if (index >= cpts->ptp_info.n_ext_ts) + return -ENXIO; + + if (cpts->pps_present && index == cpts->pps_hw_ts_idx) + return -EINVAL; + + if (((cpts->hw_ts_enable & BIT(index)) >> index) == on) return 0; mutex_lock(&cpts->ptp_clk_lock); @@ -591,6 +647,12 @@ static void am65_cpts_perout_enable_hw(struct am65_cpts *cpts, static int am65_cpts_perout_enable(struct am65_cpts *cpts, struct ptp_perout_request *req, int on) { + if (req->index >= cpts->ptp_info.n_per_out) + return -ENXIO; + + if (cpts->pps_present && req->index == cpts->pps_genf_idx) + return -EINVAL; + if (!!(cpts->genf_enable & 
BIT(req->index)) == !!on) return 0; @@ -604,6 +666,48 @@ static int am65_cpts_perout_enable(struct am65_cpts *cpts, return 0; } +static int am65_cpts_pps_enable(struct am65_cpts *cpts, int on) +{ + int ret = 0; + struct timespec64 ts; + struct ptp_clock_request rq; + u64 ns; + + if (!cpts->pps_present) + return -EINVAL; + + if (cpts->pps_enabled == !!on) + return 0; + + mutex_lock(&cpts->ptp_clk_lock); + + if (on) { + am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on); + + ns = am65_cpts_gettime(cpts, NULL); + ts = ns_to_timespec64(ns); + rq.perout.period.sec = 1; + rq.perout.period.nsec = 0; + rq.perout.start.sec = ts.tv_sec + 2; + rq.perout.start.nsec = 0; + rq.perout.index = cpts->pps_genf_idx; + + am65_cpts_perout_enable_hw(cpts, &rq.perout, on); + cpts->pps_enabled = true; + } else { + rq.perout.index = cpts->pps_genf_idx; + am65_cpts_perout_enable_hw(cpts, &rq.perout, on); + am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on); + cpts->pps_enabled = false; + } + + mutex_unlock(&cpts->ptp_clk_lock); + + dev_dbg(cpts->dev, "%s: pps: %s\n", + __func__, on ? "enabled" : "disabled"); + return ret; +} + static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *rq, int on) { @@ -614,6 +718,8 @@ static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp, return am65_cpts_extts_enable(cpts, rq->extts.index, on); case PTP_CLK_REQ_PEROUT: return am65_cpts_perout_enable(cpts, &rq->perout, on); + case PTP_CLK_REQ_PPS: + return am65_cpts_pps_enable(cpts, on); default: break; } @@ -926,17 +1032,33 @@ static int am65_cpts_of_parse(struct am65_cpts *cpts, struct device_node *node) if (!of_property_read_u32(node, "ti,cpts-periodic-outputs", &prop[0])) cpts->genf_num = prop[0]; + if (!of_property_read_u32_array(node, "ti,pps", prop, 2)) { + cpts->pps_present = true; + + if (prop[0] > 7) { + dev_err(cpts->dev, "invalid HWx_TS_PUSH index: %u provided\n", prop[0]); + cpts->pps_present = false; + } + if (prop[1] > 1) { + dev_err(cpts->dev, "invalid GENFy index: %u provided\n", prop[1]); + cpts->pps_present = false; + } + if (cpts->pps_present) { + cpts->pps_hw_ts_idx = prop[0]; + cpts->pps_genf_idx = prop[1]; + } + } + return cpts_of_mux_clk_setup(cpts, node); } -static void am65_cpts_release(void *data) +void am65_cpts_release(struct am65_cpts *cpts) { - struct am65_cpts *cpts = data; - ptp_clock_unregister(cpts->ptp_clock); am65_cpts_disable(cpts); clk_disable_unprepare(cpts->refclk); } +EXPORT_SYMBOL_GPL(am65_cpts_release); struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, struct device_node *node) @@ -993,6 +1115,8 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, cpts->ptp_info.n_ext_ts = cpts->ext_ts_inputs; if (cpts->genf_num) cpts->ptp_info.n_per_out = cpts->genf_num; + if (cpts->pps_present) + cpts->ptp_info.pps = 1; am65_cpts_set_add_val(cpts); @@ -1014,26 +1138,22 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, } cpts->phc_index = ptp_clock_index(cpts->ptp_clock); - ret = devm_add_action_or_reset(dev, am65_cpts_release, cpts); - if (ret) { - dev_err(dev, "failed to add ptpclk reset action %d", ret); - return ERR_PTR(ret); - } - ret = devm_request_threaded_irq(dev, cpts->irq, NULL, am65_cpts_interrupt, IRQF_ONESHOT, dev_name(dev), cpts); if (ret < 0) { dev_err(cpts->dev, "error attaching irq %d\n", ret); - return ERR_PTR(ret); + goto reset_ptpclk; } - dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u\n", + dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u pps:%d\n", 
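Once ptp_info.pps is set as above, the new PPS support can be exercised from userspace through the PTP character device; a minimal sketch (the clock index in /dev/ptp0 is an assumption, error handling kept to the bare minimum):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/ptp_clock.h>

	int main(void)
	{
		int fd = open("/dev/ptp0", O_RDWR);

		if (fd < 0)
			return 1;
		/* start delivery of PTP_CLOCK_PPSUSR events from this clock */
		return ioctl(fd, PTP_ENABLE_PPS, 1) ? 1 : 0;
	}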
am65_cpts_read32(cpts, idver), - cpts->refclk_freq, cpts->ts_add_val); + cpts->refclk_freq, cpts->ts_add_val, cpts->pps_present); return cpts; +reset_ptpclk: + am65_cpts_release(cpts); refclk_disable: clk_disable_unprepare(cpts->refclk); return ERR_PTR(ret); diff --git a/drivers/net/ethernet/ti/am65-cpts.h b/drivers/net/ethernet/ti/am65-cpts.h index bd08f4b2edd2..6e14df0be113 100644 --- a/drivers/net/ethernet/ti/am65-cpts.h +++ b/drivers/net/ethernet/ti/am65-cpts.h @@ -18,6 +18,7 @@ struct am65_cpts_estf_cfg { }; #if IS_ENABLED(CONFIG_TI_K3_AM65_CPTS) +void am65_cpts_release(struct am65_cpts *cpts); struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, struct device_node *node); int am65_cpts_phc_index(struct am65_cpts *cpts); @@ -31,6 +32,10 @@ void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx); void am65_cpts_suspend(struct am65_cpts *cpts); void am65_cpts_resume(struct am65_cpts *cpts); #else +static inline void am65_cpts_release(struct am65_cpts *cpts) +{ +} + static inline struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, struct device_node *node) diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 13c9c2d6b79b..37f0b62ec5d6 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -1458,6 +1458,8 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv) priv_sl2->emac_port = 1; cpsw->slaves[1].ndev = ndev; ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; @@ -1635,6 +1637,8 @@ static int cpsw_probe(struct platform_device *pdev) cpsw->slaves[0].ndev = ndev; ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c index 83596ec0c7cb..35128dd45ffc 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -1405,6 +1405,10 @@ static int cpsw_create_ports(struct cpsw_common *cpsw) ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_NETNS_LOCAL | NETIF_F_HW_TC; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; SET_NETDEV_DEV(ndev, dev); diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c index 758295c898ac..e966dd47e2db 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.c +++ b/drivers/net/ethernet/ti/cpsw_priv.c @@ -20,6 +20,7 @@ #include <linux/skbuff.h> #include <net/page_pool.h> #include <net/pkt_cls.h> +#include <net/pkt_sched.h> #include "cpsw.h" #include "cpts.h" diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c index 946b9753ccfb..23169e36a3d4 100644 --- a/drivers/net/ethernet/ti/davinci_mdio.c +++ b/drivers/net/ethernet/ti/davinci_mdio.c @@ -225,7 +225,7 @@ static int davinci_get_mdio_data(struct mdiobb_ctrl *ctrl) return test_bit(MDIO_PIN, ®); } -static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg) +static int davinci_mdiobb_read_c22(struct mii_bus *bus, int phy, int reg) { int ret; @@ -233,7 +233,7 @@ static int 
davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg) if (ret < 0) return ret; - ret = mdiobb_read(bus, phy, reg); + ret = mdiobb_read_c22(bus, phy, reg); pm_runtime_mark_last_busy(bus->parent); pm_runtime_put_autosuspend(bus->parent); @@ -241,8 +241,8 @@ static int davinci_mdiobb_read(struct mii_bus *bus, int phy, int reg) return ret; } -static int davinci_mdiobb_write(struct mii_bus *bus, int phy, int reg, - u16 val) +static int davinci_mdiobb_write_c22(struct mii_bus *bus, int phy, int reg, + u16 val) { int ret; @@ -250,7 +250,41 @@ static int davinci_mdiobb_write(struct mii_bus *bus, int phy, int reg, if (ret < 0) return ret; - ret = mdiobb_write(bus, phy, reg, val); + ret = mdiobb_write_c22(bus, phy, reg, val); + + pm_runtime_mark_last_busy(bus->parent); + pm_runtime_put_autosuspend(bus->parent); + + return ret; +} + +static int davinci_mdiobb_read_c45(struct mii_bus *bus, int phy, int devad, + int reg) +{ + int ret; + + ret = pm_runtime_resume_and_get(bus->parent); + if (ret < 0) + return ret; + + ret = mdiobb_read_c45(bus, phy, devad, reg); + + pm_runtime_mark_last_busy(bus->parent); + pm_runtime_put_autosuspend(bus->parent); + + return ret; +} + +static int davinci_mdiobb_write_c45(struct mii_bus *bus, int phy, int devad, + int reg, u16 val) +{ + int ret; + + ret = pm_runtime_resume_and_get(bus->parent); + if (ret < 0) + return ret; + + ret = mdiobb_write_c45(bus, phy, devad, reg, val); pm_runtime_mark_last_busy(bus->parent); pm_runtime_put_autosuspend(bus->parent); @@ -573,8 +607,10 @@ static int davinci_mdio_probe(struct platform_device *pdev) data->bus->name = dev_name(dev); if (data->manual_mode) { - data->bus->read = davinci_mdiobb_read; - data->bus->write = davinci_mdiobb_write; + data->bus->read = davinci_mdiobb_read_c22; + data->bus->write = davinci_mdiobb_write_c22; + data->bus->read_c45 = davinci_mdiobb_read_c45; + data->bus->write_c45 = davinci_mdiobb_write_c45; data->bus->reset = davinci_mdiobb_reset; dev_info(dev, "Configuring MDIO in manual mode\n"); diff --git a/drivers/net/ethernet/wangxun/Kconfig b/drivers/net/ethernet/wangxun/Kconfig index 86310588c6c1..c9d88673d306 100644 --- a/drivers/net/ethernet/wangxun/Kconfig +++ b/drivers/net/ethernet/wangxun/Kconfig @@ -18,6 +18,7 @@ if NET_VENDOR_WANGXUN config LIBWX tristate + select PAGE_POOL help Common library for Wangxun(R) Ethernet drivers. @@ -25,6 +26,7 @@ config NGBE tristate "Wangxun(R) GbE PCI Express adapters support" depends on PCI select LIBWX + select PHYLIB help This driver supports Wangxun(R) GbE PCI Express family of adapters. diff --git a/drivers/net/ethernet/wangxun/libwx/Makefile b/drivers/net/ethernet/wangxun/libwx/Makefile index 1ed5e23af944..42ccd6e4052e 100644 --- a/drivers/net/ethernet/wangxun/libwx/Makefile +++ b/drivers/net/ethernet/wangxun/libwx/Makefile @@ -4,4 +4,4 @@ obj-$(CONFIG_LIBWX) += libwx.o -libwx-objs := wx_hw.o +libwx-objs := wx_hw.o wx_lib.o wx_ethtool.o diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c new file mode 100644 index 000000000000..93cb6f2294e7 --- /dev/null +++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. 
*/ + +#include <linux/pci.h> +#include <linux/phy.h> + +#include "wx_type.h" +#include "wx_ethtool.h" + +void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info) +{ + struct wx *wx = netdev_priv(netdev); + + strscpy(info->driver, wx->driver_name, sizeof(info->driver)); + strscpy(info->fw_version, wx->eeprom_id, sizeof(info->fw_version)); + strscpy(info->bus_info, pci_name(wx->pdev), sizeof(info->bus_info)); +} +EXPORT_SYMBOL(wx_get_drvinfo); diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h new file mode 100644 index 000000000000..e85538c69454 --- /dev/null +++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */ + +#ifndef _WX_ETHTOOL_H_ +#define _WX_ETHTOOL_H_ + +void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info); +#endif /* _WX_ETHTOOL_H_ */ diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c index c57dc3238b3f..7db57f934a91 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c +++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c @@ -2,59 +2,100 @@ /* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */ #include <linux/etherdevice.h> +#include <linux/netdevice.h> #include <linux/if_ether.h> #include <linux/iopoll.h> #include <linux/pci.h> #include "wx_type.h" +#include "wx_lib.h" #include "wx_hw.h" -static void wx_intr_disable(struct wx_hw *wxhw, u64 qmask) +static void wx_intr_disable(struct wx *wx, u64 qmask) { u32 mask; - mask = (qmask & 0xFFFFFFFF); + mask = (qmask & U32_MAX); if (mask) - wr32(wxhw, WX_PX_IMS(0), mask); + wr32(wx, WX_PX_IMS(0), mask); - if (wxhw->mac.type == wx_mac_sp) { + if (wx->mac.type == wx_mac_sp) { mask = (qmask >> 32); if (mask) - wr32(wxhw, WX_PX_IMS(1), mask); + wr32(wx, WX_PX_IMS(1), mask); } } +void wx_intr_enable(struct wx *wx, u64 qmask) +{ + u32 mask; + + mask = (qmask & U32_MAX); + if (mask) + wr32(wx, WX_PX_IMC(0), mask); + if (wx->mac.type == wx_mac_sp) { + mask = (qmask >> 32); + if (mask) + wr32(wx, WX_PX_IMC(1), mask); + } +} +EXPORT_SYMBOL(wx_intr_enable); + +/** + * wx_irq_disable - Mask off interrupt generation on the NIC + * @wx: board private structure + **/ +void wx_irq_disable(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + + wr32(wx, WX_PX_MISC_IEN, 0); + wx_intr_disable(wx, WX_INTR_ALL); + + if (pdev->msix_enabled) { + int vector; + + for (vector = 0; vector < wx->num_q_vectors; vector++) + synchronize_irq(wx->msix_entries[vector].vector); + + synchronize_irq(wx->msix_entries[vector].vector); + } else { + synchronize_irq(pdev->irq); + } +} +EXPORT_SYMBOL(wx_irq_disable); + /* cmd_addr is used for some special command: * 1. to be sector address, when implemented erase sector command * 2. 
to be flash address when implemented read, write flash address */ -static int wx_fmgr_cmd_op(struct wx_hw *wxhw, u32 cmd, u32 cmd_addr) +static int wx_fmgr_cmd_op(struct wx *wx, u32 cmd, u32 cmd_addr) { u32 cmd_val = 0, val = 0; cmd_val = WX_SPI_CMD_CMD(cmd) | WX_SPI_CMD_CLK(WX_SPI_CLK_DIV) | cmd_addr; - wr32(wxhw, WX_SPI_CMD, cmd_val); + wr32(wx, WX_SPI_CMD, cmd_val); return read_poll_timeout(rd32, val, (val & 0x1), 10, 100000, - false, wxhw, WX_SPI_STATUS); + false, wx, WX_SPI_STATUS); } -static int wx_flash_read_dword(struct wx_hw *wxhw, u32 addr, u32 *data) +static int wx_flash_read_dword(struct wx *wx, u32 addr, u32 *data) { int ret = 0; - ret = wx_fmgr_cmd_op(wxhw, WX_SPI_CMD_READ_DWORD, addr); + ret = wx_fmgr_cmd_op(wx, WX_SPI_CMD_READ_DWORD, addr); if (ret < 0) return ret; - *data = rd32(wxhw, WX_SPI_DATA); + *data = rd32(wx, WX_SPI_DATA); return ret; } -int wx_check_flash_load(struct wx_hw *hw, u32 check_bit) +int wx_check_flash_load(struct wx *hw, u32 check_bit) { u32 reg = 0; int err = 0; @@ -73,29 +114,25 @@ int wx_check_flash_load(struct wx_hw *hw, u32 check_bit) } EXPORT_SYMBOL(wx_check_flash_load); -void wx_control_hw(struct wx_hw *wxhw, bool drv) +void wx_control_hw(struct wx *wx, bool drv) { - if (drv) { - /* Let firmware know the driver has taken over */ - wr32m(wxhw, WX_CFG_PORT_CTL, - WX_CFG_PORT_CTL_DRV_LOAD, WX_CFG_PORT_CTL_DRV_LOAD); - } else { - /* Let firmware take over control of hw */ - wr32m(wxhw, WX_CFG_PORT_CTL, - WX_CFG_PORT_CTL_DRV_LOAD, 0); - } + /* True : Let firmware know the driver has taken over + * False : Let firmware take over control of hw + */ + wr32m(wx, WX_CFG_PORT_CTL, WX_CFG_PORT_CTL_DRV_LOAD, + drv ? WX_CFG_PORT_CTL_DRV_LOAD : 0); } EXPORT_SYMBOL(wx_control_hw); /** * wx_mng_present - returns 0 when management capability is present - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure */ -int wx_mng_present(struct wx_hw *wxhw) +int wx_mng_present(struct wx *wx) { u32 fwsm; - fwsm = rd32(wxhw, WX_MIS_ST); + fwsm = rd32(wx, WX_MIS_ST); if (fwsm & WX_MIS_ST_MNG_INIT_DN) return 0; else @@ -108,40 +145,40 @@ static DEFINE_MUTEX(wx_sw_sync_lock); /** * wx_release_sw_sync - Release SW semaphore - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @mask: Mask to specify which semaphore to release * * Releases the SW semaphore for the specified * function (CSR, PHY0, PHY1, EEPROM, Flash) **/ -static void wx_release_sw_sync(struct wx_hw *wxhw, u32 mask) +static void wx_release_sw_sync(struct wx *wx, u32 mask) { mutex_lock(&wx_sw_sync_lock); - wr32m(wxhw, WX_MNG_SWFW_SYNC, mask, 0); + wr32m(wx, WX_MNG_SWFW_SYNC, mask, 0); mutex_unlock(&wx_sw_sync_lock); } /** * wx_acquire_sw_sync - Acquire SW semaphore - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @mask: Mask to specify which semaphore to acquire * * Acquires the SW semaphore for the specified * function (CSR, PHY0, PHY1, EEPROM, Flash) **/ -static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask) +static int wx_acquire_sw_sync(struct wx *wx, u32 mask) { u32 sem = 0; int ret = 0; mutex_lock(&wx_sw_sync_lock); ret = read_poll_timeout(rd32, sem, !(sem & mask), - 5000, 2000000, false, wxhw, WX_MNG_SWFW_SYNC); + 5000, 2000000, false, wx, WX_MNG_SWFW_SYNC); if (!ret) { sem |= mask; - wr32(wxhw, WX_MNG_SWFW_SYNC, sem); + wr32(wx, WX_MNG_SWFW_SYNC, sem); } else { - wx_err(wxhw, "SW Semaphore not granted: 0x%x.\n", sem); + wx_err(wx, "SW Semaphore not granted: 0x%x.\n", sem); } mutex_unlock(&wx_sw_sync_lock); @@ -150,7 
+187,7 @@ static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask) /** * wx_host_interface_command - Issue command to manageability block - * @wxhw: pointer to the HW structure + * @wx: pointer to the HW structure * @buffer: contains the command to write and where the return status will * be placed * @length: length of buffer, must be multiple of 4 bytes @@ -162,7 +199,7 @@ static int wx_acquire_sw_sync(struct wx_hw *wxhw, u32 mask) * So we will leave this up to the caller to read back the data * in these cases. **/ -int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, +int wx_host_interface_command(struct wx *wx, u32 *buffer, u32 length, u32 timeout, bool return_data) { u32 hdr_size = sizeof(struct wx_hic_hdr); @@ -172,17 +209,17 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, u16 buf_len; if (length == 0 || length > WX_HI_MAX_BLOCK_BYTE_LENGTH) { - wx_err(wxhw, "Buffer length failure buffersize=%d.\n", length); + wx_err(wx, "Buffer length failure buffersize=%d.\n", length); return -EINVAL; } - status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_MB); + status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB); if (status != 0) return status; /* Calculate length in DWORDs. We must be DWORD aligned */ if ((length % (sizeof(u32))) != 0) { - wx_err(wxhw, "Buffer length failure, not aligned to dword"); + wx_err(wx, "Buffer length failure, not aligned to dword"); status = -EINVAL; goto rel_out; } @@ -193,38 +230,38 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, * into the ram area. */ for (i = 0; i < dword_len; i++) { - wr32a(wxhw, WX_MNG_MBOX, i, (__force u32)cpu_to_le32(buffer[i])); + wr32a(wx, WX_MNG_MBOX, i, (__force u32)cpu_to_le32(buffer[i])); /* write flush */ - buf[i] = rd32a(wxhw, WX_MNG_MBOX, i); + buf[i] = rd32a(wx, WX_MNG_MBOX, i); } /* Setting this bit tells the ARC that a new command is pending. 
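* Software then polls WX_MNG_MBOX_CTL for FWRDY, so the firmware side
* of the handshake is bounded by the caller-supplied timeout.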
*/ - wr32m(wxhw, WX_MNG_MBOX_CTL, + wr32m(wx, WX_MNG_MBOX_CTL, WX_MNG_MBOX_CTL_SWRDY, WX_MNG_MBOX_CTL_SWRDY); status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000, - timeout * 1000, false, wxhw, WX_MNG_MBOX_CTL); + timeout * 1000, false, wx, WX_MNG_MBOX_CTL); /* Check command completion */ if (status) { - wx_dbg(wxhw, "Command has failed with no status valid.\n"); + wx_dbg(wx, "Command has failed with no status valid.\n"); - buf[0] = rd32(wxhw, WX_MNG_MBOX); + buf[0] = rd32(wx, WX_MNG_MBOX); if ((buffer[0] & 0xff) != (~buf[0] >> 24)) { status = -EINVAL; goto rel_out; } if ((buf[0] & 0xff0000) >> 16 == 0x80) { - wx_dbg(wxhw, "It's unknown cmd.\n"); + wx_dbg(wx, "It's unknown cmd.\n"); status = -EINVAL; goto rel_out; } - wx_dbg(wxhw, "write value:\n"); + wx_dbg(wx, "write value:\n"); for (i = 0; i < dword_len; i++) - wx_dbg(wxhw, "%x ", buffer[i]); - wx_dbg(wxhw, "read value:\n"); + wx_dbg(wx, "%x ", buffer[i]); + wx_dbg(wx, "read value:\n"); for (i = 0; i < dword_len; i++) - wx_dbg(wxhw, "%x ", buf[i]); + wx_dbg(wx, "%x ", buf[i]); } if (!return_data) @@ -235,7 +272,7 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, /* first pull in the header so we know the buffer length */ for (bi = 0; bi < dword_len; bi++) { - buffer[bi] = rd32a(wxhw, WX_MNG_MBOX, bi); + buffer[bi] = rd32a(wx, WX_MNG_MBOX, bi); le32_to_cpus(&buffer[bi]); } @@ -245,7 +282,7 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, goto rel_out; if (length < buf_len + hdr_size) { - wx_err(wxhw, "Buffer not large enough for reply message.\n"); + wx_err(wx, "Buffer not large enough for reply message.\n"); status = -EFAULT; goto rel_out; } @@ -255,12 +292,12 @@ int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, /* Pull in the rest of the buffer (bi is where we left off) */ for (; bi <= dword_len; bi++) { - buffer[bi] = rd32a(wxhw, WX_MNG_MBOX, bi); + buffer[bi] = rd32a(wx, WX_MNG_MBOX, bi); le32_to_cpus(&buffer[bi]); } rel_out: - wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_MB); + wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_MB); return status; } EXPORT_SYMBOL(wx_host_interface_command); @@ -268,13 +305,13 @@ EXPORT_SYMBOL(wx_host_interface_command); /** * wx_read_ee_hostif_data - Read EEPROM word using a host interface cmd * assuming that the semaphore is already obtained. - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @offset: offset of word in the EEPROM to read * @data: word read from the EEPROM * * Reads a 16 bit word from the EEPROM using the hostif. 
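* The word comes back through the WX_MNG_MBOX data area at
* FW_NVM_DATA_OFFSET once the host interface command completes.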
**/ -static int wx_read_ee_hostif_data(struct wx_hw *wxhw, u16 offset, u16 *data) +static int wx_read_ee_hostif_data(struct wx *wx, u16 offset, u16 *data) { struct wx_hic_read_shadow_ram buffer; int status; @@ -289,33 +326,33 @@ static int wx_read_ee_hostif_data(struct wx_hw *wxhw, u16 offset, u16 *data) /* one word */ buffer.length = (__force u16)cpu_to_be16(sizeof(u16)); - status = wx_host_interface_command(wxhw, (u32 *)&buffer, sizeof(buffer), + status = wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer), WX_HI_COMMAND_TIMEOUT, false); if (status != 0) return status; - *data = (u16)rd32a(wxhw, WX_MNG_MBOX, FW_NVM_DATA_OFFSET); + *data = (u16)rd32a(wx, WX_MNG_MBOX, FW_NVM_DATA_OFFSET); return status; } /** * wx_read_ee_hostif - Read EEPROM word using a host interface cmd - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @offset: offset of word in the EEPROM to read * @data: word read from the EEPROM * * Reads a 16 bit word from the EEPROM using the hostif. **/ -int wx_read_ee_hostif(struct wx_hw *wxhw, u16 offset, u16 *data) +int wx_read_ee_hostif(struct wx *wx, u16 offset, u16 *data) { int status = 0; - status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH); + status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH); if (status == 0) { - status = wx_read_ee_hostif_data(wxhw, offset, data); - wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH); + status = wx_read_ee_hostif_data(wx, offset, data); + wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH); } return status; @@ -324,14 +361,14 @@ EXPORT_SYMBOL(wx_read_ee_hostif); /** * wx_read_ee_hostif_buffer- Read EEPROM word(s) using hostif - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @offset: offset of word in the EEPROM to read * @words: number of words * @data: word(s) read from the EEPROM * * Reads a 16 bit word(s) from the EEPROM using the hostif. **/ -int wx_read_ee_hostif_buffer(struct wx_hw *wxhw, +int wx_read_ee_hostif_buffer(struct wx *wx, u16 offset, u16 words, u16 *data) { struct wx_hic_read_shadow_ram buffer; @@ -342,7 +379,7 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw, u32 i; /* Take semaphore for the entire operation. 
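* Acquiring WX_MNG_SWFW_SYNC_SW_FLASH once for the whole buffer avoids
* re-arbitrating the flash semaphore for every chunk read below.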
*/ - status = wx_acquire_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH); + status = wx_acquire_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH); if (status != 0) return status; @@ -361,20 +398,20 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw, buffer.address = (__force u32)cpu_to_be32((offset + current_word) * 2); buffer.length = (__force u16)cpu_to_be16(words_to_read * 2); - status = wx_host_interface_command(wxhw, (u32 *)&buffer, + status = wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer), WX_HI_COMMAND_TIMEOUT, false); if (status != 0) { - wx_err(wxhw, "Host interface command failed\n"); + wx_err(wx, "Host interface command failed\n"); goto out; } for (i = 0; i < words_to_read; i++) { u32 reg = WX_MNG_MBOX + (FW_NVM_DATA_OFFSET << 2) + 2 * i; - value = rd32(wxhw, reg); + value = rd32(wx, reg); data[current_word] = (u16)(value & 0xffff); current_word++; i++; @@ -388,7 +425,7 @@ int wx_read_ee_hostif_buffer(struct wx_hw *wxhw, } out: - wx_release_sw_sync(wxhw, WX_MNG_SWFW_SYNC_SW_FLASH); + wx_release_sw_sync(wx, WX_MNG_SWFW_SYNC_SW_FLASH); return status; } EXPORT_SYMBOL(wx_read_ee_hostif_buffer); @@ -416,12 +453,12 @@ static u8 wx_calculate_checksum(u8 *buffer, u32 length) /** * wx_reset_hostif - send reset cmd to fw - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * * Sends reset cmd to firmware through the manageability * block. **/ -int wx_reset_hostif(struct wx_hw *wxhw) +int wx_reset_hostif(struct wx *wx) { struct wx_hic_reset reset_cmd; int ret_val = 0; @@ -430,15 +467,15 @@ int wx_reset_hostif(struct wx_hw *wxhw) reset_cmd.hdr.cmd = FW_RESET_CMD; reset_cmd.hdr.buf_len = FW_RESET_LEN; reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; - reset_cmd.lan_id = wxhw->bus.func; - reset_cmd.reset_type = (u16)wxhw->reset_type; + reset_cmd.lan_id = wx->bus.func; + reset_cmd.reset_type = (u16)wx->reset_type; reset_cmd.hdr.checksum = 0; reset_cmd.hdr.checksum = wx_calculate_checksum((u8 *)&reset_cmd, (FW_CEM_HDR_LEN + reset_cmd.hdr.buf_len)); for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) { - ret_val = wx_host_interface_command(wxhw, (u32 *)&reset_cmd, + ret_val = wx_host_interface_command(wx, (u32 *)&reset_cmd, sizeof(reset_cmd), WX_HI_COMMAND_TIMEOUT, true); @@ -460,14 +497,14 @@ EXPORT_SYMBOL(wx_reset_hostif); /** * wx_init_eeprom_params - Initialize EEPROM params - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * * Initializes the EEPROM parameters wx_eeprom_info within the * wx_hw struct in order to set up EEPROM access. 
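* When WX_SPI_STATUS_FLASH_BYPASS is clear, the part is treated as a
* 4096-byte flash, giving a word_size of 2048 16-bit words.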
**/ -void wx_init_eeprom_params(struct wx_hw *wxhw) +void wx_init_eeprom_params(struct wx *wx) { - struct wx_eeprom_info *eeprom = &wxhw->eeprom; + struct wx_eeprom_info *eeprom = &wx->eeprom; u16 eeprom_size; u16 data = 0x80; @@ -475,21 +512,21 @@ void wx_init_eeprom_params(struct wx_hw *wxhw) eeprom->semaphore_delay = 10; eeprom->type = wx_eeprom_none; - if (!(rd32(wxhw, WX_SPI_STATUS) & + if (!(rd32(wx, WX_SPI_STATUS) & WX_SPI_STATUS_FLASH_BYPASS)) { eeprom->type = wx_flash; eeprom_size = 4096; eeprom->word_size = eeprom_size >> 1; - wx_dbg(wxhw, "Eeprom params: type = %d, size = %d\n", + wx_dbg(wx, "Eeprom params: type = %d, size = %d\n", eeprom->type, eeprom->word_size); } } - if (wxhw->mac.type == wx_mac_sp) { - if (wx_read_ee_hostif(wxhw, WX_SW_REGION_PTR, &data)) { - wx_err(wxhw, "NVM Read Error\n"); + if (wx->mac.type == wx_mac_sp) { + if (wx_read_ee_hostif(wx, WX_SW_REGION_PTR, &data)) { + wx_err(wx, "NVM Read Error\n"); return; } data = data >> 1; @@ -501,22 +538,22 @@ EXPORT_SYMBOL(wx_init_eeprom_params); /** * wx_get_mac_addr - Generic get MAC address - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @mac_addr: Adapter MAC address * * Reads the adapter's MAC address from first Receive Address Register (RAR0) * A reset of the adapter must be performed prior to calling this function * in order for the MAC address to have been loaded from the EEPROM into RAR0 **/ -void wx_get_mac_addr(struct wx_hw *wxhw, u8 *mac_addr) +void wx_get_mac_addr(struct wx *wx, u8 *mac_addr) { u32 rar_high; u32 rar_low; u16 i; - wr32(wxhw, WX_PSR_MAC_SWC_IDX, 0); - rar_high = rd32(wxhw, WX_PSR_MAC_SWC_AD_H); - rar_low = rd32(wxhw, WX_PSR_MAC_SWC_AD_L); + wr32(wx, WX_PSR_MAC_SWC_IDX, 0); + rar_high = rd32(wx, WX_PSR_MAC_SWC_AD_H); + rar_low = rd32(wx, WX_PSR_MAC_SWC_AD_L); for (i = 0; i < 2; i++) mac_addr[i] = (u8)(rar_high >> (1 - i) * 8); @@ -528,7 +565,7 @@ EXPORT_SYMBOL(wx_get_mac_addr); /** * wx_set_rar - Set Rx address register - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @index: Receive address register to write * @addr: Address to put into receive address register * @pools: VMDq "set" or "pool" index @@ -536,25 +573,25 @@ EXPORT_SYMBOL(wx_get_mac_addr); * * Puts an ethernet address into a receive address register. 
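* The first two bytes of the address sit in the low half of AD_H and
* the remaining four in AD_L, reversed from network byte order
* (cf. wx_get_mac_addr() above).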
**/ -int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools, - u32 enable_addr) +static int wx_set_rar(struct wx *wx, u32 index, u8 *addr, u64 pools, + u32 enable_addr) { - u32 rar_entries = wxhw->mac.num_rar_entries; + u32 rar_entries = wx->mac.num_rar_entries; u32 rar_low, rar_high; /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { - wx_err(wxhw, "RAR index %d is out of range.\n", index); + wx_err(wx, "RAR index %d is out of range.\n", index); return -EINVAL; } /* select the MAC address */ - wr32(wxhw, WX_PSR_MAC_SWC_IDX, index); + wr32(wx, WX_PSR_MAC_SWC_IDX, index); /* setup VMDq pool mapping */ - wr32(wxhw, WX_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF); - if (wxhw->mac.type == wx_mac_sp) - wr32(wxhw, WX_PSR_MAC_SWC_VM_H, pools >> 32); + wr32(wx, WX_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF); + if (wx->mac.type == wx_mac_sp) + wr32(wx, WX_PSR_MAC_SWC_VM_H, pools >> 32); /* HW expects these in little endian so we reverse the byte * order from network order (big endian) to little endian @@ -572,31 +609,30 @@ int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools, if (enable_addr != 0) rar_high |= WX_PSR_MAC_SWC_AD_H_AV; - wr32(wxhw, WX_PSR_MAC_SWC_AD_L, rar_low); - wr32m(wxhw, WX_PSR_MAC_SWC_AD_H, - (WX_PSR_MAC_SWC_AD_H_AD(~0) | - WX_PSR_MAC_SWC_AD_H_ADTYPE(~0) | + wr32(wx, WX_PSR_MAC_SWC_AD_L, rar_low); + wr32m(wx, WX_PSR_MAC_SWC_AD_H, + (WX_PSR_MAC_SWC_AD_H_AD(U16_MAX) | + WX_PSR_MAC_SWC_AD_H_ADTYPE(1) | WX_PSR_MAC_SWC_AD_H_AV), rar_high); return 0; } -EXPORT_SYMBOL(wx_set_rar); /** * wx_clear_rar - Remove Rx address register - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @index: Receive address register to write * * Clears an ethernet address from a receive address register. **/ -int wx_clear_rar(struct wx_hw *wxhw, u32 index) +static int wx_clear_rar(struct wx *wx, u32 index) { - u32 rar_entries = wxhw->mac.num_rar_entries; + u32 rar_entries = wx->mac.num_rar_entries; /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { - wx_err(wxhw, "RAR index %d is out of range.\n", index); + wx_err(wx, "RAR index %d is out of range.\n", index); return -EINVAL; } @@ -604,78 +640,77 @@ int wx_clear_rar(struct wx_hw *wxhw, u32 index) * so save everything except the lower 16 bits that hold part * of the address and the address valid bit. 
*/ - wr32(wxhw, WX_PSR_MAC_SWC_IDX, index); + wr32(wx, WX_PSR_MAC_SWC_IDX, index); - wr32(wxhw, WX_PSR_MAC_SWC_VM_L, 0); - wr32(wxhw, WX_PSR_MAC_SWC_VM_H, 0); + wr32(wx, WX_PSR_MAC_SWC_VM_L, 0); + wr32(wx, WX_PSR_MAC_SWC_VM_H, 0); - wr32(wxhw, WX_PSR_MAC_SWC_AD_L, 0); - wr32m(wxhw, WX_PSR_MAC_SWC_AD_H, - (WX_PSR_MAC_SWC_AD_H_AD(~0) | - WX_PSR_MAC_SWC_AD_H_ADTYPE(~0) | + wr32(wx, WX_PSR_MAC_SWC_AD_L, 0); + wr32m(wx, WX_PSR_MAC_SWC_AD_H, + (WX_PSR_MAC_SWC_AD_H_AD(U16_MAX) | + WX_PSR_MAC_SWC_AD_H_ADTYPE(1) | WX_PSR_MAC_SWC_AD_H_AV), 0); return 0; } -EXPORT_SYMBOL(wx_clear_rar); /** * wx_clear_vmdq - Disassociate a VMDq pool index from a rx address - * @wxhw: pointer to hardware struct + * @wx: pointer to hardware struct * @rar: receive address register index to disassociate * @vmdq: VMDq pool index to remove from the rar **/ -static int wx_clear_vmdq(struct wx_hw *wxhw, u32 rar, u32 __maybe_unused vmdq) +static int wx_clear_vmdq(struct wx *wx, u32 rar, u32 __maybe_unused vmdq) { - u32 rar_entries = wxhw->mac.num_rar_entries; + u32 rar_entries = wx->mac.num_rar_entries; u32 mpsar_lo, mpsar_hi; /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { - wx_err(wxhw, "RAR index %d is out of range.\n", rar); + wx_err(wx, "RAR index %d is out of range.\n", rar); return -EINVAL; } - wr32(wxhw, WX_PSR_MAC_SWC_IDX, rar); - mpsar_lo = rd32(wxhw, WX_PSR_MAC_SWC_VM_L); - mpsar_hi = rd32(wxhw, WX_PSR_MAC_SWC_VM_H); + wr32(wx, WX_PSR_MAC_SWC_IDX, rar); + mpsar_lo = rd32(wx, WX_PSR_MAC_SWC_VM_L); + mpsar_hi = rd32(wx, WX_PSR_MAC_SWC_VM_H); if (!mpsar_lo && !mpsar_hi) return 0; /* was that the last pool using this rar? */ if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0) - wx_clear_rar(wxhw, rar); + wx_clear_rar(wx, rar); return 0; } /** * wx_init_uta_tables - Initialize the Unicast Table Array - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure **/ -static void wx_init_uta_tables(struct wx_hw *wxhw) +static void wx_init_uta_tables(struct wx *wx) { int i; - wx_dbg(wxhw, " Clearing UTA\n"); + wx_dbg(wx, " Clearing UTA\n"); for (i = 0; i < 128; i++) - wr32(wxhw, WX_PSR_UC_TBL(i), 0); + wr32(wx, WX_PSR_UC_TBL(i), 0); } /** * wx_init_rx_addrs - Initializes receive address filters. - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * * Places the MAC address in receive address register 0 and clears the rest * of the receive address registers. Clears the multicast table. Assumes * the receiver is in reset when the routine is called. **/ -void wx_init_rx_addrs(struct wx_hw *wxhw) +void wx_init_rx_addrs(struct wx *wx) { - u32 rar_entries = wxhw->mac.num_rar_entries; + u32 rar_entries = wx->mac.num_rar_entries; u32 psrctl; int i; @@ -683,97 +718,829 @@ void wx_init_rx_addrs(struct wx_hw *wxhw) * to the permanent address. * Otherwise, use the permanent address from the eeprom. */ - if (!is_valid_ether_addr(wxhw->mac.addr)) { + if (!is_valid_ether_addr(wx->mac.addr)) { /* Get the MAC address from the RAR0 for later reference */ - wx_get_mac_addr(wxhw, wxhw->mac.addr); - wx_dbg(wxhw, "Keeping Current RAR0 Addr = %pM\n", wxhw->mac.addr); + wx_get_mac_addr(wx, wx->mac.addr); + wx_dbg(wx, "Keeping Current RAR0 Addr = %pM\n", wx->mac.addr); } else { /* Setup the receive address. 
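* RAR[0] gets the permanent station address; on sp-class MACs any
* stale VMDq pool mapping for that entry is cleared afterwards.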
*/ - wx_dbg(wxhw, "Overriding MAC Address in RAR[0]\n"); - wx_dbg(wxhw, "New MAC Addr = %pM\n", wxhw->mac.addr); + wx_dbg(wx, "Overriding MAC Address in RAR[0]\n"); + wx_dbg(wx, "New MAC Addr = %pM\n", wx->mac.addr); - wx_set_rar(wxhw, 0, wxhw->mac.addr, 0, WX_PSR_MAC_SWC_AD_H_AV); + wx_set_rar(wx, 0, wx->mac.addr, 0, WX_PSR_MAC_SWC_AD_H_AV); - if (wxhw->mac.type == wx_mac_sp) { + if (wx->mac.type == wx_mac_sp) { /* clear VMDq pool/queue selection for RAR 0 */ - wx_clear_vmdq(wxhw, 0, WX_CLEAR_VMDQ_ALL); + wx_clear_vmdq(wx, 0, WX_CLEAR_VMDQ_ALL); } } /* Zero out the other receive addresses. */ - wx_dbg(wxhw, "Clearing RAR[1-%d]\n", rar_entries - 1); + wx_dbg(wx, "Clearing RAR[1-%d]\n", rar_entries - 1); for (i = 1; i < rar_entries; i++) { - wr32(wxhw, WX_PSR_MAC_SWC_IDX, i); - wr32(wxhw, WX_PSR_MAC_SWC_AD_L, 0); - wr32(wxhw, WX_PSR_MAC_SWC_AD_H, 0); + wr32(wx, WX_PSR_MAC_SWC_IDX, i); + wr32(wx, WX_PSR_MAC_SWC_AD_L, 0); + wr32(wx, WX_PSR_MAC_SWC_AD_H, 0); } /* Clear the MTA */ - wxhw->addr_ctrl.mta_in_use = 0; - psrctl = rd32(wxhw, WX_PSR_CTL); + wx->addr_ctrl.mta_in_use = 0; + psrctl = rd32(wx, WX_PSR_CTL); psrctl &= ~(WX_PSR_CTL_MO | WX_PSR_CTL_MFE); - psrctl |= wxhw->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT; - wr32(wxhw, WX_PSR_CTL, psrctl); - wx_dbg(wxhw, " Clearing MTA\n"); - for (i = 0; i < wxhw->mac.mcft_size; i++) - wr32(wxhw, WX_PSR_MC_TBL(i), 0); + psrctl |= wx->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT; + wr32(wx, WX_PSR_CTL, psrctl); + wx_dbg(wx, " Clearing MTA\n"); + for (i = 0; i < wx->mac.mcft_size; i++) + wr32(wx, WX_PSR_MC_TBL(i), 0); - wx_init_uta_tables(wxhw); + wx_init_uta_tables(wx); } EXPORT_SYMBOL(wx_init_rx_addrs); -void wx_disable_rx(struct wx_hw *wxhw) +static void wx_sync_mac_table(struct wx *wx) +{ + int i; + + for (i = 0; i < wx->mac.num_rar_entries; i++) { + if (wx->mac_table[i].state & WX_MAC_STATE_MODIFIED) { + if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE) { + wx_set_rar(wx, i, + wx->mac_table[i].addr, + wx->mac_table[i].pools, + WX_PSR_MAC_SWC_AD_H_AV); + } else { + wx_clear_rar(wx, i); + } + wx->mac_table[i].state &= ~(WX_MAC_STATE_MODIFIED); + } + } +} + +/* this function destroys the first RAR entry */ +void wx_mac_set_default_filter(struct wx *wx, u8 *addr) +{ + memcpy(&wx->mac_table[0].addr, addr, ETH_ALEN); + wx->mac_table[0].pools = 1ULL; + wx->mac_table[0].state = (WX_MAC_STATE_DEFAULT | WX_MAC_STATE_IN_USE); + wx_set_rar(wx, 0, wx->mac_table[0].addr, + wx->mac_table[0].pools, + WX_PSR_MAC_SWC_AD_H_AV); +} +EXPORT_SYMBOL(wx_mac_set_default_filter); + +void wx_flush_sw_mac_table(struct wx *wx) +{ + u32 i; + + for (i = 0; i < wx->mac.num_rar_entries; i++) { + if (!(wx->mac_table[i].state & WX_MAC_STATE_IN_USE)) + continue; + + wx->mac_table[i].state |= WX_MAC_STATE_MODIFIED; + wx->mac_table[i].state &= ~WX_MAC_STATE_IN_USE; + memset(wx->mac_table[i].addr, 0, ETH_ALEN); + wx->mac_table[i].pools = 0; + } + wx_sync_mac_table(wx); +} +EXPORT_SYMBOL(wx_flush_sw_mac_table); + +static int wx_add_mac_filter(struct wx *wx, u8 *addr, u16 pool) +{ + u32 i; + + if (is_zero_ether_addr(addr)) + return -EINVAL; + + for (i = 0; i < wx->mac.num_rar_entries; i++) { + if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE) { + if (ether_addr_equal(addr, wx->mac_table[i].addr)) { + if (wx->mac_table[i].pools != (1ULL << pool)) { + memcpy(wx->mac_table[i].addr, addr, ETH_ALEN); + wx->mac_table[i].pools |= (1ULL << pool); + wx_sync_mac_table(wx); + return i; + } + } + } + + if (wx->mac_table[i].state & WX_MAC_STATE_IN_USE) + continue; + wx->mac_table[i].state |= 
(WX_MAC_STATE_MODIFIED | + WX_MAC_STATE_IN_USE); + memcpy(wx->mac_table[i].addr, addr, ETH_ALEN); + wx->mac_table[i].pools |= (1ULL << pool); + wx_sync_mac_table(wx); + return i; + } + return -ENOMEM; +} + +static int wx_del_mac_filter(struct wx *wx, u8 *addr, u16 pool) +{ + u32 i; + + if (is_zero_ether_addr(addr)) + return -EINVAL; + + /* search table for addr, if found, set to 0 and sync */ + for (i = 0; i < wx->mac.num_rar_entries; i++) { + if (!ether_addr_equal(addr, wx->mac_table[i].addr)) + continue; + + wx->mac_table[i].state |= WX_MAC_STATE_MODIFIED; + wx->mac_table[i].pools &= ~(1ULL << pool); + if (!wx->mac_table[i].pools) { + wx->mac_table[i].state &= ~WX_MAC_STATE_IN_USE; + memset(wx->mac_table[i].addr, 0, ETH_ALEN); + } + wx_sync_mac_table(wx); + return 0; + } + return -ENOMEM; +} + +static int wx_available_rars(struct wx *wx) +{ + u32 i, count = 0; + + for (i = 0; i < wx->mac.num_rar_entries; i++) { + if (wx->mac_table[i].state == 0) + count++; + } + + return count; +} + +/** + * wx_write_uc_addr_list - write unicast addresses to RAR table + * @netdev: network interface device structure + * @pool: index for mac table + * + * Writes unicast address list to the RAR table. + * Returns: -ENOMEM on failure/insufficient address space + * 0 on no addresses written + * X on writing X addresses to the RAR table + **/ +static int wx_write_uc_addr_list(struct net_device *netdev, int pool) +{ + struct wx *wx = netdev_priv(netdev); + int count = 0; + + /* return ENOMEM indicating insufficient memory for addresses */ + if (netdev_uc_count(netdev) > wx_available_rars(wx)) + return -ENOMEM; + + if (!netdev_uc_empty(netdev)) { + struct netdev_hw_addr *ha; + + netdev_for_each_uc_addr(ha, netdev) { + wx_del_mac_filter(wx, ha->addr, pool); + wx_add_mac_filter(wx, ha->addr, pool); + count++; + } + } + return count; +} + +/** + * wx_mta_vector - Determines bit-vector in multicast table to set + * @wx: pointer to private structure + * @mc_addr: the multicast address + * + * Extracts the 12 bits, from a multicast address, to determine which + * bit-vector to set in the multicast table. The hardware uses 12 bits, from + * incoming rx multicast addresses, to determine the bit-vector to check in + * the MTA. Which of the 4 combination, of 12-bits, the hardware uses is set + * by the MO field of the MCSTCTRL. The MO field is set during initialization + * to mc_filter_type. + **/ +static u32 wx_mta_vector(struct wx *wx, u8 *mc_addr) +{ + u32 vector = 0; + + switch (wx->mac.mc_filter_type) { + case 0: /* use bits [47:36] of the address */ + vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4)); + break; + case 1: /* use bits [46:35] of the address */ + vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5)); + break; + case 2: /* use bits [45:34] of the address */ + vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6)); + break; + case 3: /* use bits [43:32] of the address */ + vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8)); + break; + default: /* Invalid mc_filter_type */ + wx_err(wx, "MC filter type param set incorrectly\n"); + break; + } + + /* vector can only be 12-bits or boundary will be exceeded */ + vector &= 0xFFF; + return vector; +} + +/** + * wx_set_mta - Set bit-vector in multicast table + * @wx: pointer to private structure + * @mc_addr: Multicast address + * + * Sets the bit-vector in the multicast table. 
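+ * For example, with mc_filter_type 0 the multicast address
+ * 01:00:5e:00:00:01 hashes to vector 0x010, i.e. bit 16 of MTA
+ * shadow register 0.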
+ **/ +static void wx_set_mta(struct wx *wx, u8 *mc_addr) +{ + u32 vector, vector_bit, vector_reg; + + wx->addr_ctrl.mta_in_use++; + + vector = wx_mta_vector(wx, mc_addr); + wx_dbg(wx, " bit-vector = 0x%03X\n", vector); + + /* The MTA is a register array of 128 32-bit registers. It is treated + * like an array of 4096 bits. We want to set bit + * BitArray[vector_value]. So we figure out what register the bit is + * in, read it, OR in the new bit, then write back the new value. The + * register is determined by the upper 7 bits of the vector value and + * the bit within that register are determined by the lower 5 bits of + * the value. + */ + vector_reg = (vector >> 5) & 0x7F; + vector_bit = vector & 0x1F; + wx->mac.mta_shadow[vector_reg] |= (1 << vector_bit); +} + +/** + * wx_update_mc_addr_list - Updates MAC list of multicast addresses + * @wx: pointer to private structure + * @netdev: pointer to net device structure + * + * The given list replaces any existing list. Clears the MC addrs from receive + * address registers and the multicast table. Uses unused receive address + * registers for the first multicast addresses, and hashes the rest into the + * multicast table. + **/ +static void wx_update_mc_addr_list(struct wx *wx, struct net_device *netdev) +{ + struct netdev_hw_addr *ha; + u32 i, psrctl; + + /* Set the new number of MC addresses that we are being requested to + * use. + */ + wx->addr_ctrl.num_mc_addrs = netdev_mc_count(netdev); + wx->addr_ctrl.mta_in_use = 0; + + /* Clear mta_shadow */ + wx_dbg(wx, " Clearing MTA\n"); + memset(&wx->mac.mta_shadow, 0, sizeof(wx->mac.mta_shadow)); + + /* Update mta_shadow */ + netdev_for_each_mc_addr(ha, netdev) { + wx_dbg(wx, " Adding the multicast addresses:\n"); + wx_set_mta(wx, ha->addr); + } + + /* Enable mta */ + for (i = 0; i < wx->mac.mcft_size; i++) + wr32a(wx, WX_PSR_MC_TBL(0), i, + wx->mac.mta_shadow[i]); + + if (wx->addr_ctrl.mta_in_use > 0) { + psrctl = rd32(wx, WX_PSR_CTL); + psrctl &= ~(WX_PSR_CTL_MO | WX_PSR_CTL_MFE); + psrctl |= WX_PSR_CTL_MFE | + (wx->mac.mc_filter_type << WX_PSR_CTL_MO_SHIFT); + wr32(wx, WX_PSR_CTL, psrctl); + } + + wx_dbg(wx, "Update mc addr list Complete\n"); +} + +/** + * wx_write_mc_addr_list - write multicast addresses to MTA + * @netdev: network interface device structure + * + * Writes multicast address list to the MTA hash table. 
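+ * Skipped while the interface is down, in which case nothing is
+ * written and 0 is returned.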
+ * Returns: 0 on no addresses written + * X on writing X addresses to MTA + **/ +static int wx_write_mc_addr_list(struct net_device *netdev) +{ + struct wx *wx = netdev_priv(netdev); + + if (!netif_running(netdev)) + return 0; + + wx_update_mc_addr_list(wx, netdev); + + return netdev_mc_count(netdev); +} + +/** + * wx_set_mac - Change the Ethernet Address of the NIC + * @netdev: network interface device structure + * @p: pointer to an address structure + * + * Returns 0 on success, negative on failure + **/ +int wx_set_mac(struct net_device *netdev, void *p) +{ + struct wx *wx = netdev_priv(netdev); + struct sockaddr *addr = p; + int retval; + + retval = eth_prepare_mac_addr_change(netdev, addr); + if (retval) + return retval; + + wx_del_mac_filter(wx, wx->mac.addr, 0); + eth_hw_addr_set(netdev, addr->sa_data); + memcpy(wx->mac.addr, addr->sa_data, netdev->addr_len); + + wx_mac_set_default_filter(wx, wx->mac.addr); + + return 0; +} +EXPORT_SYMBOL(wx_set_mac); + +void wx_disable_rx(struct wx *wx) { u32 pfdtxgswc; u32 rxctrl; - rxctrl = rd32(wxhw, WX_RDB_PB_CTL); + rxctrl = rd32(wx, WX_RDB_PB_CTL); if (rxctrl & WX_RDB_PB_CTL_RXEN) { - pfdtxgswc = rd32(wxhw, WX_PSR_CTL); + pfdtxgswc = rd32(wx, WX_PSR_CTL); if (pfdtxgswc & WX_PSR_CTL_SW_EN) { pfdtxgswc &= ~WX_PSR_CTL_SW_EN; - wr32(wxhw, WX_PSR_CTL, pfdtxgswc); - wxhw->mac.set_lben = true; + wr32(wx, WX_PSR_CTL, pfdtxgswc); + wx->mac.set_lben = true; } else { - wxhw->mac.set_lben = false; + wx->mac.set_lben = false; } rxctrl &= ~WX_RDB_PB_CTL_RXEN; - wr32(wxhw, WX_RDB_PB_CTL, rxctrl); + wr32(wx, WX_RDB_PB_CTL, rxctrl); - if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || - ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) { + if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || + ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) { /* disable mac receiver */ - wr32m(wxhw, WX_MAC_RX_CFG, + wr32m(wx, WX_MAC_RX_CFG, WX_MAC_RX_CFG_RE, 0); } } } EXPORT_SYMBOL(wx_disable_rx); +static void wx_enable_rx(struct wx *wx) +{ + u32 psrctl; + + /* enable mac receiver */ + wr32m(wx, WX_MAC_RX_CFG, + WX_MAC_RX_CFG_RE, WX_MAC_RX_CFG_RE); + + wr32m(wx, WX_RDB_PB_CTL, + WX_RDB_PB_CTL_RXEN, WX_RDB_PB_CTL_RXEN); + + if (wx->mac.set_lben) { + psrctl = rd32(wx, WX_PSR_CTL); + psrctl |= WX_PSR_CTL_SW_EN; + wr32(wx, WX_PSR_CTL, psrctl); + wx->mac.set_lben = false; + } +} + +/** + * wx_set_rxpba - Initialize Rx packet buffer + * @wx: pointer to private structure + **/ +static void wx_set_rxpba(struct wx *wx) +{ + u32 rxpktsize, txpktsize, txpbthresh; + + rxpktsize = wx->mac.rx_pb_size << WX_RDB_PB_SZ_SHIFT; + wr32(wx, WX_RDB_PB_SZ(0), rxpktsize); + + /* Only support an equally distributed Tx packet buffer strategy. 
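+ * The threshold written to WX_TDM_PB_THRE is the buffer size in
+ * 1KB units less WX_TXPKT_SIZE_MAX.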
*/ + txpktsize = wx->mac.tx_pb_size; + txpbthresh = (txpktsize / 1024) - WX_TXPKT_SIZE_MAX; + wr32(wx, WX_TDB_PB_SZ(0), txpktsize); + wr32(wx, WX_TDM_PB_THRE(0), txpbthresh); +} + +static void wx_configure_port(struct wx *wx) +{ + u32 value, i; + + value = WX_CFG_PORT_CTL_D_VLAN | WX_CFG_PORT_CTL_QINQ; + wr32m(wx, WX_CFG_PORT_CTL, + WX_CFG_PORT_CTL_D_VLAN | + WX_CFG_PORT_CTL_QINQ, + value); + + wr32(wx, WX_CFG_TAG_TPID(0), + ETH_P_8021Q | ETH_P_8021AD << 16); + wx->tpid[0] = ETH_P_8021Q; + wx->tpid[1] = ETH_P_8021AD; + for (i = 1; i < 4; i++) + wr32(wx, WX_CFG_TAG_TPID(i), + ETH_P_8021Q | ETH_P_8021Q << 16); + for (i = 2; i < 8; i++) + wx->tpid[i] = ETH_P_8021Q; +} + +/** + * wx_disable_sec_rx_path - Stops the receive data path + * @wx: pointer to private structure + * + * Stops the receive data path and waits for the HW to internally empty + * the Rx security block + **/ +static int wx_disable_sec_rx_path(struct wx *wx) +{ + u32 secrx; + + wr32m(wx, WX_RSC_CTL, + WX_RSC_CTL_RX_DIS, WX_RSC_CTL_RX_DIS); + + return read_poll_timeout(rd32, secrx, secrx & WX_RSC_ST_RSEC_RDY, + 1000, 40000, false, wx, WX_RSC_ST); +} + +/** + * wx_enable_sec_rx_path - Enables the receive data path + * @wx: pointer to private structure + * + * Enables the receive data path. + **/ +static void wx_enable_sec_rx_path(struct wx *wx) +{ + wr32m(wx, WX_RSC_CTL, WX_RSC_CTL_RX_DIS, 0); + WX_WRITE_FLUSH(wx); +} + +void wx_set_rx_mode(struct net_device *netdev) +{ + struct wx *wx = netdev_priv(netdev); + u32 fctrl, vmolr, vlnctrl; + int count; + + /* Check for Promiscuous and All Multicast modes */ + fctrl = rd32(wx, WX_PSR_CTL); + fctrl &= ~(WX_PSR_CTL_UPE | WX_PSR_CTL_MPE); + vmolr = rd32(wx, WX_PSR_VM_L2CTL(0)); + vmolr &= ~(WX_PSR_VM_L2CTL_UPE | + WX_PSR_VM_L2CTL_MPE | + WX_PSR_VM_L2CTL_ROPE | + WX_PSR_VM_L2CTL_ROMPE); + vlnctrl = rd32(wx, WX_PSR_VLAN_CTL); + vlnctrl &= ~(WX_PSR_VLAN_CTL_VFE | WX_PSR_VLAN_CTL_CFIEN); + + /* set all bits that we expect to always be set */ + fctrl |= WX_PSR_CTL_BAM | WX_PSR_CTL_MFE; + vmolr |= WX_PSR_VM_L2CTL_BAM | + WX_PSR_VM_L2CTL_AUPE | + WX_PSR_VM_L2CTL_VACC; + vlnctrl |= WX_PSR_VLAN_CTL_VFE; + + wx->addr_ctrl.user_set_promisc = false; + if (netdev->flags & IFF_PROMISC) { + wx->addr_ctrl.user_set_promisc = true; + fctrl |= WX_PSR_CTL_UPE | WX_PSR_CTL_MPE; + /* pf don't want packets routing to vf, so clear UPE */ + vmolr |= WX_PSR_VM_L2CTL_MPE; + vlnctrl &= ~WX_PSR_VLAN_CTL_VFE; + } + + if (netdev->flags & IFF_ALLMULTI) { + fctrl |= WX_PSR_CTL_MPE; + vmolr |= WX_PSR_VM_L2CTL_MPE; + } + + if (netdev->features & NETIF_F_RXALL) { + vmolr |= (WX_PSR_VM_L2CTL_UPE | WX_PSR_VM_L2CTL_MPE); + vlnctrl &= ~WX_PSR_VLAN_CTL_VFE; + /* receive bad packets */ + wr32m(wx, WX_RSC_CTL, + WX_RSC_CTL_SAVE_MAC_ERR, + WX_RSC_CTL_SAVE_MAC_ERR); + } else { + vmolr |= WX_PSR_VM_L2CTL_ROPE | WX_PSR_VM_L2CTL_ROMPE; + } + + /* Write addresses to available RAR registers, if there is not + * sufficient space to store all the addresses then enable + * unicast promiscuous mode + */ + count = wx_write_uc_addr_list(netdev, 0); + if (count < 0) { + vmolr &= ~WX_PSR_VM_L2CTL_ROPE; + vmolr |= WX_PSR_VM_L2CTL_UPE; + } + + /* Write addresses to the MTA, if the attempt fails + * then we should just turn on promiscuous mode so + * that we can at least receive multicast traffic + */ + count = wx_write_mc_addr_list(netdev); + if (count < 0) { + vmolr &= ~WX_PSR_VM_L2CTL_ROMPE; + vmolr |= WX_PSR_VM_L2CTL_MPE; + } + + wr32(wx, WX_PSR_VLAN_CTL, vlnctrl); + wr32(wx, WX_PSR_CTL, fctrl); + wr32(wx, WX_PSR_VM_L2CTL(0), vmolr); +} 
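+
+/* Filtering precedence above: IFF_PROMISC wins outright, IFF_ALLMULTI
+ * opens up multicast only, and the promiscuous bits are otherwise set
+ * just as a fallback when RAR or MTA space runs out.
+ */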
+EXPORT_SYMBOL(wx_set_rx_mode); + +static void wx_set_rx_buffer_len(struct wx *wx) +{ + struct net_device *netdev = wx->netdev; + u32 mhadd, max_frame; + + max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN; + /* adjust max frame to be at least the size of a standard frame */ + if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN)) + max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN); + + mhadd = rd32(wx, WX_PSR_MAX_SZ); + if (max_frame != mhadd) + wr32(wx, WX_PSR_MAX_SZ, max_frame); +} + +/* Disable the specified rx queue */ +void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring) +{ + u8 reg_idx = ring->reg_idx; + u32 rxdctl; + int ret; + + /* write value back with RRCFG.EN bit cleared */ + wr32m(wx, WX_PX_RR_CFG(reg_idx), + WX_PX_RR_CFG_RR_EN, 0); + + /* the hardware may take up to 100us to really disable the rx queue */ + ret = read_poll_timeout(rd32, rxdctl, !(rxdctl & WX_PX_RR_CFG_RR_EN), + 10, 100, true, wx, WX_PX_RR_CFG(reg_idx)); + + if (ret == -ETIMEDOUT) { + /* Just for information */ + wx_err(wx, + "RRCFG.EN on Rx queue %d not cleared within the polling period\n", + reg_idx); + } +} +EXPORT_SYMBOL(wx_disable_rx_queue); + +static void wx_enable_rx_queue(struct wx *wx, struct wx_ring *ring) +{ + u8 reg_idx = ring->reg_idx; + u32 rxdctl; + int ret; + + ret = read_poll_timeout(rd32, rxdctl, rxdctl & WX_PX_RR_CFG_RR_EN, + 1000, 10000, true, wx, WX_PX_RR_CFG(reg_idx)); + + if (ret == -ETIMEDOUT) { + /* Just for information */ + wx_err(wx, + "RRCFG.EN on Rx queue %d not set within the polling period\n", + reg_idx); + } +} + +static void wx_configure_srrctl(struct wx *wx, + struct wx_ring *rx_ring) +{ + u16 reg_idx = rx_ring->reg_idx; + u32 srrctl; + + srrctl = rd32(wx, WX_PX_RR_CFG(reg_idx)); + srrctl &= ~(WX_PX_RR_CFG_RR_HDR_SZ | + WX_PX_RR_CFG_RR_BUF_SZ | + WX_PX_RR_CFG_SPLIT_MODE); + /* configure header buffer length, needed for RSC */ + srrctl |= WX_RXBUFFER_256 << WX_PX_RR_CFG_BHDRSIZE_SHIFT; + + /* configure the packet buffer length */ + srrctl |= WX_RX_BUFSZ >> WX_PX_RR_CFG_BSIZEPKT_SHIFT; + + wr32(wx, WX_PX_RR_CFG(reg_idx), srrctl); +} + +static void wx_configure_tx_ring(struct wx *wx, + struct wx_ring *ring) +{ + u32 txdctl = WX_PX_TR_CFG_ENABLE; + u8 reg_idx = ring->reg_idx; + u64 tdba = ring->dma; + int ret; + + /* disable queue to avoid issues while updating state */ + wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH); + WX_WRITE_FLUSH(wx); + + wr32(wx, WX_PX_TR_BAL(reg_idx), tdba & DMA_BIT_MASK(32)); + wr32(wx, WX_PX_TR_BAH(reg_idx), upper_32_bits(tdba)); + + /* reset head and tail pointers */ + wr32(wx, WX_PX_TR_RP(reg_idx), 0); + wr32(wx, WX_PX_TR_WP(reg_idx), 0); + ring->tail = wx->hw_addr + WX_PX_TR_WP(reg_idx); + + if (ring->count < WX_MAX_TXD) + txdctl |= ring->count / 128 << WX_PX_TR_CFG_TR_SIZE_SHIFT; + txdctl |= 0x20 << WX_PX_TR_CFG_WTHRESH_SHIFT; + + /* reinitialize tx_buffer_info */ + memset(ring->tx_buffer_info, 0, + sizeof(struct wx_tx_buffer) * ring->count); + + /* enable queue */ + wr32(wx, WX_PX_TR_CFG(reg_idx), txdctl); + + /* poll to verify queue is enabled */ + ret = read_poll_timeout(rd32, txdctl, txdctl & WX_PX_TR_CFG_ENABLE, + 1000, 10000, true, wx, WX_PX_TR_CFG(reg_idx)); + if (ret == -ETIMEDOUT) + wx_err(wx, "Could not enable Tx Queue %d\n", reg_idx); +} + +static void wx_configure_rx_ring(struct wx *wx, + struct wx_ring *ring) +{ + u16 reg_idx = ring->reg_idx; + union wx_rx_desc *rx_desc; + u64 rdba = ring->dma; + u32 rxdctl; + + /* disable queue to avoid issues while updating state */ + rxdctl = rd32(wx, WX_PX_RR_CFG(reg_idx)); + wx_disable_rx_queue(wx, 
ring); + + wr32(wx, WX_PX_RR_BAL(reg_idx), rdba & DMA_BIT_MASK(32)); + wr32(wx, WX_PX_RR_BAH(reg_idx), upper_32_bits(rdba)); + + if (ring->count == WX_MAX_RXD) + rxdctl |= 0 << WX_PX_RR_CFG_RR_SIZE_SHIFT; + else + rxdctl |= (ring->count / 128) << WX_PX_RR_CFG_RR_SIZE_SHIFT; + + rxdctl |= 0x1 << WX_PX_RR_CFG_RR_THER_SHIFT; + wr32(wx, WX_PX_RR_CFG(reg_idx), rxdctl); + + /* reset head and tail pointers */ + wr32(wx, WX_PX_RR_RP(reg_idx), 0); + wr32(wx, WX_PX_RR_WP(reg_idx), 0); + ring->tail = wx->hw_addr + WX_PX_RR_WP(reg_idx); + + wx_configure_srrctl(wx, ring); + + /* initialize rx_buffer_info */ + memset(ring->rx_buffer_info, 0, + sizeof(struct wx_rx_buffer) * ring->count); + + /* initialize Rx descriptor 0 */ + rx_desc = WX_RX_DESC(ring, 0); + rx_desc->wb.upper.length = 0; + + /* enable receive descriptor ring */ + wr32m(wx, WX_PX_RR_CFG(reg_idx), + WX_PX_RR_CFG_RR_EN, WX_PX_RR_CFG_RR_EN); + + wx_enable_rx_queue(wx, ring); + wx_alloc_rx_buffers(ring, wx_desc_unused(ring)); +} + +/** + * wx_configure_tx - Configure Transmit Unit after Reset + * @wx: pointer to private structure + * + * Configure the Tx unit of the MAC after a reset. + **/ +static void wx_configure_tx(struct wx *wx) +{ + u32 i; + + /* TDM_CTL.TE must be before Tx queues are enabled */ + wr32m(wx, WX_TDM_CTL, + WX_TDM_CTL_TE, WX_TDM_CTL_TE); + + /* Setup the HW Tx Head and Tail descriptor pointers */ + for (i = 0; i < wx->num_tx_queues; i++) + wx_configure_tx_ring(wx, wx->tx_ring[i]); + + wr32m(wx, WX_TSC_BUF_AE, WX_TSC_BUF_AE_THR, 0x10); + + if (wx->mac.type == wx_mac_em) + wr32m(wx, WX_TSC_CTL, WX_TSC_CTL_TX_DIS | WX_TSC_CTL_TSEC_DIS, 0x1); + + /* enable mac transmitter */ + wr32m(wx, WX_MAC_TX_CFG, + WX_MAC_TX_CFG_TE, WX_MAC_TX_CFG_TE); +} + +/** + * wx_configure_rx - Configure Receive Unit after Reset + * @wx: pointer to private structure + * + * Configure the Rx unit of the MAC after a reset. 
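+ * Receives are held off until the rings are programmed and the Rx
+ * security path has drained, and are only then re-enabled.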
+ **/ +static void wx_configure_rx(struct wx *wx) +{ + u32 psrtype, i; + int ret; + + wx_disable_rx(wx); + + psrtype = WX_RDB_PL_CFG_L4HDR | + WX_RDB_PL_CFG_L3HDR | + WX_RDB_PL_CFG_L2HDR | + WX_RDB_PL_CFG_TUN_TUNHDR | + WX_RDB_PL_CFG_TUN_TUNHDR; + wr32(wx, WX_RDB_PL_CFG(0), psrtype); + + /* enable hw crc stripping */ + wr32m(wx, WX_RSC_CTL, WX_RSC_CTL_CRC_STRIP, WX_RSC_CTL_CRC_STRIP); + + if (wx->mac.type == wx_mac_sp) { + u32 psrctl; + + /* RSC Setup */ + psrctl = rd32(wx, WX_PSR_CTL); + psrctl |= WX_PSR_CTL_RSC_ACK; /* Disable RSC for ACK packets */ + psrctl |= WX_PSR_CTL_RSC_DIS; + wr32(wx, WX_PSR_CTL, psrctl); + } + + /* set_rx_buffer_len must be called before ring initialization */ + wx_set_rx_buffer_len(wx); + + /* Setup the HW Rx Head and Tail Descriptor Pointers and + * the Base and Length of the Rx Descriptor Ring + */ + for (i = 0; i < wx->num_rx_queues; i++) + wx_configure_rx_ring(wx, wx->rx_ring[i]); + + /* Enable all receives, disable security engine prior to block traffic */ + ret = wx_disable_sec_rx_path(wx); + if (ret < 0) + wx_err(wx, "The register status is abnormal, please check device."); + + wx_enable_rx(wx); + wx_enable_sec_rx_path(wx); +} + +static void wx_configure_isb(struct wx *wx) +{ + /* set ISB Address */ + wr32(wx, WX_PX_ISB_ADDR_L, wx->isb_dma & DMA_BIT_MASK(32)); + if (IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT)) + wr32(wx, WX_PX_ISB_ADDR_H, upper_32_bits(wx->isb_dma)); +} + +void wx_configure(struct wx *wx) +{ + wx_set_rxpba(wx); + wx_configure_port(wx); + + wx_set_rx_mode(wx->netdev); + + wx_enable_sec_rx_path(wx); + + wx_configure_tx(wx); + wx_configure_rx(wx); + wx_configure_isb(wx); +} +EXPORT_SYMBOL(wx_configure); + /** * wx_disable_pcie_master - Disable PCI-express master access - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * * Disables PCI-Express master access and verifies there are no pending * requests. **/ -int wx_disable_pcie_master(struct wx_hw *wxhw) +int wx_disable_pcie_master(struct wx *wx) { int status = 0; u32 val; /* Always set this bit to ensure any future transactions are blocked */ - pci_clear_master(wxhw->pdev); + pci_clear_master(wx->pdev); /* Exit if master requests are blocked */ - if (!(rd32(wxhw, WX_PX_TRANSACTION_PENDING))) + if (!(rd32(wx, WX_PX_TRANSACTION_PENDING))) return 0; /* Poll for master request bit to clear */ status = read_poll_timeout(rd32, val, !val, 100, WX_PCI_MASTER_DISABLE_TIMEOUT, - false, wxhw, WX_PX_TRANSACTION_PENDING); + false, wx, WX_PX_TRANSACTION_PENDING); if (status < 0) - wx_err(wxhw, "PCIe transaction pending bit did not clear.\n"); + wx_err(wx, "PCIe transaction pending bit did not clear.\n"); return status; } @@ -781,106 +1548,106 @@ EXPORT_SYMBOL(wx_disable_pcie_master); /** * wx_stop_adapter - Generic stop Tx/Rx units - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * * Sets the adapter_stopped flag within wx_hw struct. Clears interrupts, * disables transmit and receive units. The adapter_stopped flag is used by * the shared code and drivers to determine if the adapter is in a stopped * state and should not touch the hardware. 
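* Each Tx queue is flushed via WX_PX_TR_CFG_SWFLSH and each Rx queue
* is stopped individually before PCIe master access is disabled.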
**/ -int wx_stop_adapter(struct wx_hw *wxhw) +int wx_stop_adapter(struct wx *wx) { u16 i; /* Set the adapter_stopped flag so other driver functions stop touching * the hardware */ - wxhw->adapter_stopped = true; + wx->adapter_stopped = true; /* Disable the receive unit */ - wx_disable_rx(wxhw); + wx_disable_rx(wx); /* Set interrupt mask to stop interrupts from being generated */ - wx_intr_disable(wxhw, WX_INTR_ALL); + wx_intr_disable(wx, WX_INTR_ALL); /* Clear any pending interrupts, flush previous writes */ - wr32(wxhw, WX_PX_MISC_IC, 0xffffffff); - wr32(wxhw, WX_BME_CTL, 0x3); + wr32(wx, WX_PX_MISC_IC, 0xffffffff); + wr32(wx, WX_BME_CTL, 0x3); /* Disable the transmit unit. Each queue must be disabled. */ - for (i = 0; i < wxhw->mac.max_tx_queues; i++) { - wr32m(wxhw, WX_PX_TR_CFG(i), + for (i = 0; i < wx->mac.max_tx_queues; i++) { + wr32m(wx, WX_PX_TR_CFG(i), WX_PX_TR_CFG_SWFLSH | WX_PX_TR_CFG_ENABLE, WX_PX_TR_CFG_SWFLSH); } /* Disable the receive unit by stopping each queue */ - for (i = 0; i < wxhw->mac.max_rx_queues; i++) { - wr32m(wxhw, WX_PX_RR_CFG(i), + for (i = 0; i < wx->mac.max_rx_queues; i++) { + wr32m(wx, WX_PX_RR_CFG(i), WX_PX_RR_CFG_RR_EN, 0); } /* flush all queues disables */ - WX_WRITE_FLUSH(wxhw); + WX_WRITE_FLUSH(wx); /* Prevent the PCI-E bus from hanging by disabling PCI-E master * access and verify no pending requests */ - return wx_disable_pcie_master(wxhw); + return wx_disable_pcie_master(wx); } EXPORT_SYMBOL(wx_stop_adapter); -void wx_reset_misc(struct wx_hw *wxhw) +void wx_reset_misc(struct wx *wx) { int i; /* receive packets that size > 2048 */ - wr32m(wxhw, WX_MAC_RX_CFG, WX_MAC_RX_CFG_JE, WX_MAC_RX_CFG_JE); + wr32m(wx, WX_MAC_RX_CFG, WX_MAC_RX_CFG_JE, WX_MAC_RX_CFG_JE); /* clear counters on read */ - wr32m(wxhw, WX_MMC_CONTROL, + wr32m(wx, WX_MMC_CONTROL, WX_MMC_CONTROL_RSTONRD, WX_MMC_CONTROL_RSTONRD); - wr32m(wxhw, WX_MAC_RX_FLOW_CTRL, + wr32m(wx, WX_MAC_RX_FLOW_CTRL, WX_MAC_RX_FLOW_CTRL_RFE, WX_MAC_RX_FLOW_CTRL_RFE); - wr32(wxhw, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR); + wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR); - wr32m(wxhw, WX_MIS_RST_ST, + wr32m(wx, WX_MIS_RST_ST, WX_MIS_RST_ST_RST_INIT, 0x1E00); /* errata 4: initialize mng flex tbl and wakeup flex tbl*/ - wr32(wxhw, WX_PSR_MNG_FLEX_SEL, 0); + wr32(wx, WX_PSR_MNG_FLEX_SEL, 0); for (i = 0; i < 16; i++) { - wr32(wxhw, WX_PSR_MNG_FLEX_DW_L(i), 0); - wr32(wxhw, WX_PSR_MNG_FLEX_DW_H(i), 0); - wr32(wxhw, WX_PSR_MNG_FLEX_MSK(i), 0); + wr32(wx, WX_PSR_MNG_FLEX_DW_L(i), 0); + wr32(wx, WX_PSR_MNG_FLEX_DW_H(i), 0); + wr32(wx, WX_PSR_MNG_FLEX_MSK(i), 0); } - wr32(wxhw, WX_PSR_LAN_FLEX_SEL, 0); + wr32(wx, WX_PSR_LAN_FLEX_SEL, 0); for (i = 0; i < 16; i++) { - wr32(wxhw, WX_PSR_LAN_FLEX_DW_L(i), 0); - wr32(wxhw, WX_PSR_LAN_FLEX_DW_H(i), 0); - wr32(wxhw, WX_PSR_LAN_FLEX_MSK(i), 0); + wr32(wx, WX_PSR_LAN_FLEX_DW_L(i), 0); + wr32(wx, WX_PSR_LAN_FLEX_DW_H(i), 0); + wr32(wx, WX_PSR_LAN_FLEX_MSK(i), 0); } /* set pause frame dst mac addr */ - wr32(wxhw, WX_RDB_PFCMACDAL, 0xC2000001); - wr32(wxhw, WX_RDB_PFCMACDAH, 0x0180); + wr32(wx, WX_RDB_PFCMACDAL, 0xC2000001); + wr32(wx, WX_RDB_PFCMACDAH, 0x0180); } EXPORT_SYMBOL(wx_reset_misc); /** * wx_get_pcie_msix_counts - Gets MSI-X vector count - * @wxhw: pointer to hardware structure + * @wx: pointer to hardware structure * @msix_count: number of MSI interrupts that can be obtained * @max_msix_count: number of MSI interrupts that mac need * * Read PCIe configuration space, and get the MSI-X vector count from * the capabilities table. 
**/ -int wx_get_pcie_msix_counts(struct wx_hw *wxhw, u16 *msix_count, u16 max_msix_count) +int wx_get_pcie_msix_counts(struct wx *wx, u16 *msix_count, u16 max_msix_count) { - struct pci_dev *pdev = wxhw->pdev; + struct pci_dev *pdev = wx->pdev; struct device *dev = &pdev->dev; int pos; @@ -904,31 +1671,39 @@ int wx_get_pcie_msix_counts(struct wx_hw *wxhw, u16 *msix_count, u16 max_msix_co } EXPORT_SYMBOL(wx_get_pcie_msix_counts); -int wx_sw_init(struct wx_hw *wxhw) +int wx_sw_init(struct wx *wx) { - struct pci_dev *pdev = wxhw->pdev; + struct pci_dev *pdev = wx->pdev; u32 ssid = 0; int err = 0; - wxhw->vendor_id = pdev->vendor; - wxhw->device_id = pdev->device; - wxhw->revision_id = pdev->revision; - wxhw->oem_svid = pdev->subsystem_vendor; - wxhw->oem_ssid = pdev->subsystem_device; - wxhw->bus.device = PCI_SLOT(pdev->devfn); - wxhw->bus.func = PCI_FUNC(pdev->devfn); - - if (wxhw->oem_svid == PCI_VENDOR_ID_WANGXUN) { - wxhw->subsystem_vendor_id = pdev->subsystem_vendor; - wxhw->subsystem_device_id = pdev->subsystem_device; + wx->vendor_id = pdev->vendor; + wx->device_id = pdev->device; + wx->revision_id = pdev->revision; + wx->oem_svid = pdev->subsystem_vendor; + wx->oem_ssid = pdev->subsystem_device; + wx->bus.device = PCI_SLOT(pdev->devfn); + wx->bus.func = PCI_FUNC(pdev->devfn); + + if (wx->oem_svid == PCI_VENDOR_ID_WANGXUN) { + wx->subsystem_vendor_id = pdev->subsystem_vendor; + wx->subsystem_device_id = pdev->subsystem_device; } else { - err = wx_flash_read_dword(wxhw, 0xfffdc, &ssid); + err = wx_flash_read_dword(wx, 0xfffdc, &ssid); if (!err) - wxhw->subsystem_device_id = swab16((u16)ssid); + wx->subsystem_device_id = swab16((u16)ssid); return err; } + wx->mac_table = kcalloc(wx->mac.num_rar_entries, + sizeof(struct wx_mac_addr), + GFP_KERNEL); + if (!wx->mac_table) { + wx_err(wx, "mac_table allocation failed\n"); + return -ENOMEM; + } + return 0; } EXPORT_SYMBOL(wx_sw_init); diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.h b/drivers/net/ethernet/wangxun/libwx/wx_hw.h index a0652f5e9939..44dfd6ea442a 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_hw.h +++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.h @@ -4,25 +4,31 @@ #ifndef _WX_HW_H_ #define _WX_HW_H_ -int wx_check_flash_load(struct wx_hw *hw, u32 check_bit); -void wx_control_hw(struct wx_hw *wxhw, bool drv); -int wx_mng_present(struct wx_hw *wxhw); -int wx_host_interface_command(struct wx_hw *wxhw, u32 *buffer, +void wx_intr_enable(struct wx *wx, u64 qmask); +void wx_irq_disable(struct wx *wx); +int wx_check_flash_load(struct wx *wx, u32 check_bit); +void wx_control_hw(struct wx *wx, bool drv); +int wx_mng_present(struct wx *wx); +int wx_host_interface_command(struct wx *wx, u32 *buffer, u32 length, u32 timeout, bool return_data); -int wx_read_ee_hostif(struct wx_hw *wxhw, u16 offset, u16 *data); -int wx_read_ee_hostif_buffer(struct wx_hw *wxhw, +int wx_read_ee_hostif(struct wx *wx, u16 offset, u16 *data); +int wx_read_ee_hostif_buffer(struct wx *wx, u16 offset, u16 words, u16 *data); -int wx_reset_hostif(struct wx_hw *wxhw); -void wx_init_eeprom_params(struct wx_hw *wxhw); -void wx_get_mac_addr(struct wx_hw *wxhw, u8 *mac_addr); -int wx_set_rar(struct wx_hw *wxhw, u32 index, u8 *addr, u64 pools, u32 enable_addr); -int wx_clear_rar(struct wx_hw *wxhw, u32 index); -void wx_init_rx_addrs(struct wx_hw *wxhw); -void wx_disable_rx(struct wx_hw *wxhw); -int wx_disable_pcie_master(struct wx_hw *wxhw); -int wx_stop_adapter(struct wx_hw *wxhw); -void wx_reset_misc(struct wx_hw *wxhw); -int wx_get_pcie_msix_counts(struct wx_hw 
*wxhw, u16 *msix_count, u16 max_msix_count); -int wx_sw_init(struct wx_hw *wxhw); +int wx_reset_hostif(struct wx *wx); +void wx_init_eeprom_params(struct wx *wx); +void wx_get_mac_addr(struct wx *wx, u8 *mac_addr); +void wx_init_rx_addrs(struct wx *wx); +void wx_mac_set_default_filter(struct wx *wx, u8 *addr); +void wx_flush_sw_mac_table(struct wx *wx); +int wx_set_mac(struct net_device *netdev, void *p); +void wx_disable_rx(struct wx *wx); +void wx_set_rx_mode(struct net_device *netdev); +void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring); +void wx_configure(struct wx *wx); +int wx_disable_pcie_master(struct wx *wx); +int wx_stop_adapter(struct wx *wx); +void wx_reset_misc(struct wx *wx); +int wx_get_pcie_msix_counts(struct wx *wx, u16 *msix_count, u16 max_msix_count); +int wx_sw_init(struct wx *wx); #endif /* _WX_HW_H_ */ diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c new file mode 100644 index 000000000000..eb89a274083e --- /dev/null +++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c @@ -0,0 +1,2004 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */ + +#include <linux/etherdevice.h> +#include <net/page_pool.h> +#include <linux/iopoll.h> +#include <linux/pci.h> + +#include "wx_type.h" +#include "wx_lib.h" +#include "wx_hw.h" + +/* wx_test_staterr - tests bits in Rx descriptor status and error fields */ +static __le32 wx_test_staterr(union wx_rx_desc *rx_desc, + const u32 stat_err_bits) +{ + return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits); +} + +static bool wx_can_reuse_rx_page(struct wx_rx_buffer *rx_buffer, + int rx_buffer_pgcnt) +{ + unsigned int pagecnt_bias = rx_buffer->pagecnt_bias; + struct page *page = rx_buffer->page; + + /* avoid re-using remote and pfmemalloc pages */ + if (!dev_page_is_reusable(page)) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1)) + return false; +#endif + + /* If we have drained the page fragment pool we need to update + * the pagecnt_bias and page count so that we fully restock the + * number of references the driver holds. + */ + if (unlikely(pagecnt_bias == 1)) { + page_ref_add(page, USHRT_MAX - 1); + rx_buffer->pagecnt_bias = USHRT_MAX; + } + + return true; +} + +/** + * wx_reuse_rx_page - page flip buffer and store it back on the ring + * @rx_ring: rx descriptor ring to store buffers on + * @old_buff: donor buffer to have page reused + * + * Synchronizes page for reuse by the adapter + **/ +static void wx_reuse_rx_page(struct wx_ring *rx_ring, + struct wx_rx_buffer *old_buff) +{ + u16 nta = rx_ring->next_to_alloc; + struct wx_rx_buffer *new_buff; + + new_buff = &rx_ring->rx_buffer_info[nta]; + + /* update, and store next to alloc */ + nta++; + rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0; + + /* transfer page from old buffer to new buffer */ + new_buff->page = old_buff->page; + new_buff->page_dma = old_buff->page_dma; + new_buff->page_offset = old_buff->page_offset; + new_buff->pagecnt_bias = old_buff->pagecnt_bias; +} + +static void wx_dma_sync_frag(struct wx_ring *rx_ring, + struct wx_rx_buffer *rx_buffer) +{ + struct sk_buff *skb = rx_buffer->skb; + skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + + dma_sync_single_range_for_cpu(rx_ring->dev, + WX_CB(skb)->dma, + skb_frag_off(frag), + skb_frag_size(frag), + DMA_FROM_DEVICE); + + /* If the page was released, just unmap it. 
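+ * wx_put_rx_buffer() sets page_released once the ring has given up
+ * its reference, so the page is handed back to the page pool here.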
*/ + if (unlikely(WX_CB(skb)->page_released)) + page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); +} + +static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring, + union wx_rx_desc *rx_desc, + struct sk_buff **skb, + int *rx_buffer_pgcnt) +{ + struct wx_rx_buffer *rx_buffer; + unsigned int size; + + rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; + size = le16_to_cpu(rx_desc->wb.upper.length); + +#if (PAGE_SIZE < 8192) + *rx_buffer_pgcnt = page_count(rx_buffer->page); +#else + *rx_buffer_pgcnt = 0; +#endif + + prefetchw(rx_buffer->page); + *skb = rx_buffer->skb; + + /* Delay unmapping of the first packet. It carries the header + * information, HW may still access the header after the writeback. + * Only unmap it when EOP is reached + */ + if (!wx_test_staterr(rx_desc, WX_RXD_STAT_EOP)) { + if (!*skb) + goto skip_sync; + } else { + if (*skb) + wx_dma_sync_frag(rx_ring, rx_buffer); + } + + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->dma, + rx_buffer->page_offset, + size, + DMA_FROM_DEVICE); +skip_sync: + rx_buffer->pagecnt_bias--; + + return rx_buffer; +} + +static void wx_put_rx_buffer(struct wx_ring *rx_ring, + struct wx_rx_buffer *rx_buffer, + struct sk_buff *skb, + int rx_buffer_pgcnt) +{ + if (wx_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) { + /* hand second half of page back to the ring */ + wx_reuse_rx_page(rx_ring, rx_buffer); + } else { + if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma) + /* the page has been released from the ring */ + WX_CB(skb)->page_released = true; + else + page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + + __page_frag_cache_drain(rx_buffer->page, + rx_buffer->pagecnt_bias); + } + + /* clear contents of rx_buffer */ + rx_buffer->page = NULL; + rx_buffer->skb = NULL; +} + +static struct sk_buff *wx_build_skb(struct wx_ring *rx_ring, + struct wx_rx_buffer *rx_buffer, + union wx_rx_desc *rx_desc) +{ + unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); +#if (PAGE_SIZE < 8192) + unsigned int truesize = WX_RX_BUFSZ; +#else + unsigned int truesize = ALIGN(size, L1_CACHE_BYTES); +#endif + struct sk_buff *skb = rx_buffer->skb; + + if (!skb) { + void *page_addr = page_address(rx_buffer->page) + + rx_buffer->page_offset; + + /* prefetch first cache line of first page */ + prefetch(page_addr); +#if L1_CACHE_BYTES < 128 + prefetch(page_addr + L1_CACHE_BYTES); +#endif + + /* allocate a skb to store the frags */ + skb = napi_alloc_skb(&rx_ring->q_vector->napi, WX_RXBUFFER_256); + if (unlikely(!skb)) + return NULL; + + /* we will be copying header into skb->data in + * pskb_may_pull so it is in our interest to prefetch + * it now to avoid a possible cache miss + */ + prefetchw(skb->data); + + if (size <= WX_RXBUFFER_256) { + memcpy(__skb_put(skb, size), page_addr, + ALIGN(size, sizeof(long))); + rx_buffer->pagecnt_bias++; + + return skb; + } + + if (!wx_test_staterr(rx_desc, WX_RXD_STAT_EOP)) + WX_CB(skb)->dma = rx_buffer->dma; + + skb_add_rx_frag(skb, 0, rx_buffer->page, + rx_buffer->page_offset, + size, truesize); + goto out; + + } else { + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page, + rx_buffer->page_offset, size, truesize); + } + +out: +#if (PAGE_SIZE < 8192) + /* flip page offset to other buffer */ + rx_buffer->page_offset ^= truesize; +#else + /* move offset up to the next cache line */ + rx_buffer->page_offset += truesize; +#endif + + return skb; +} + +static bool wx_alloc_mapped_page(struct wx_ring 
*rx_ring, + struct wx_rx_buffer *bi) +{ + struct page *page = bi->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + page = page_pool_dev_alloc_pages(rx_ring->page_pool); + WARN_ON(!page); + dma = page_pool_get_dma_addr(page); + + bi->page_dma = dma; + bi->page = page; + bi->page_offset = 0; + page_ref_add(page, USHRT_MAX - 1); + bi->pagecnt_bias = USHRT_MAX; + + return true; +} + +/** + * wx_alloc_rx_buffers - Replace used receive buffers + * @rx_ring: ring to place buffers on + * @cleaned_count: number of buffers to replace + **/ +void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count) +{ + u16 i = rx_ring->next_to_use; + union wx_rx_desc *rx_desc; + struct wx_rx_buffer *bi; + + /* nothing to do */ + if (!cleaned_count) + return; + + rx_desc = WX_RX_DESC(rx_ring, i); + bi = &rx_ring->rx_buffer_info[i]; + i -= rx_ring->count; + + do { + if (!wx_alloc_mapped_page(rx_ring, bi)) + break; + + /* sync the buffer for use by the device */ + dma_sync_single_range_for_device(rx_ring->dev, bi->dma, + bi->page_offset, + WX_RX_BUFSZ, + DMA_FROM_DEVICE); + + rx_desc->read.pkt_addr = + cpu_to_le64(bi->page_dma + bi->page_offset); + + rx_desc++; + bi++; + i++; + if (unlikely(!i)) { + rx_desc = WX_RX_DESC(rx_ring, 0); + bi = rx_ring->rx_buffer_info; + i -= rx_ring->count; + } + + /* clear the status bits for the next_to_use descriptor */ + rx_desc->wb.upper.status_error = 0; + + cleaned_count--; + } while (cleaned_count); + + i += rx_ring->count; + + if (rx_ring->next_to_use != i) { + rx_ring->next_to_use = i; + /* update next to alloc since we have filled the ring */ + rx_ring->next_to_alloc = i; + + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). + */ + wmb(); + writel(i, rx_ring->tail); + } +} + +u16 wx_desc_unused(struct wx_ring *ring) +{ + u16 ntc = ring->next_to_clean; + u16 ntu = ring->next_to_use; + + return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1; +} + +/** + * wx_is_non_eop - process handling of non-EOP buffers + * @rx_ring: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * @skb: Current socket buffer containing buffer in progress + * + * This function updates next to clean. If the buffer is an EOP buffer + * this function exits returning false, otherwise it will place the + * sk_buff in the next buffer to be chained and return true indicating + * that this is in fact a non-EOP buffer. + **/ +static bool wx_is_non_eop(struct wx_ring *rx_ring, + union wx_rx_desc *rx_desc, + struct sk_buff *skb) +{ + u32 ntc = rx_ring->next_to_clean + 1; + + /* fetch, update, and store next to clean */ + ntc = (ntc < rx_ring->count) ? 
ntc : 0; + rx_ring->next_to_clean = ntc; + + prefetch(WX_RX_DESC(rx_ring, ntc)); + + /* if we are the last buffer then there is nothing else to do */ + if (likely(wx_test_staterr(rx_desc, WX_RXD_STAT_EOP))) + return false; + + rx_ring->rx_buffer_info[ntc].skb = skb; + + return true; +} + +static void wx_pull_tail(struct sk_buff *skb) +{ + skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + unsigned int pull_len; + unsigned char *va; + + /* it is valid to use page_address instead of kmap since we are + * working with pages allocated out of the lomem pool per + * alloc_page(GFP_ATOMIC) + */ + va = skb_frag_address(frag); + + /* we need the header to contain the greater of either ETH_HLEN or + * 60 bytes if the skb->len is less than 60 for skb_pad. + */ + pull_len = eth_get_headlen(skb->dev, va, WX_RXBUFFER_256); + + /* align pull length to size of long to optimize memcpy performance */ + skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long))); + + /* update all of the pointers */ + skb_frag_size_sub(frag, pull_len); + skb_frag_off_add(frag, pull_len); + skb->data_len -= pull_len; + skb->tail += pull_len; +} + +/** + * wx_cleanup_headers - Correct corrupted or empty headers + * @rx_ring: rx descriptor ring packet is being transacted on + * @rx_desc: pointer to the EOP Rx descriptor + * @skb: pointer to current skb being fixed + * + * Check for corrupted packet headers caused by senders on the local L2 + * embedded NIC switch not setting up their Tx Descriptors right. These + * should be very rare. + * + * Also address the case where we are pulling data in on pages only + * and as such no data is present in the skb header. + * + * In addition if skb is not at least 60 bytes we need to pad it so that + * it is large enough to qualify as a valid Ethernet frame. + * + * Returns true if an error was encountered and skb was freed. + **/ +static bool wx_cleanup_headers(struct wx_ring *rx_ring, + union wx_rx_desc *rx_desc, + struct sk_buff *skb) +{ + struct net_device *netdev = rx_ring->netdev; + + /* verify that the packet does not have any known errors */ + if (!netdev || + unlikely(wx_test_staterr(rx_desc, WX_RXD_ERR_RXE) && + !(netdev->features & NETIF_F_RXALL))) { + dev_kfree_skb_any(skb); + return true; + } + + /* place header in linear portion of buffer */ + if (!skb_headlen(skb)) + wx_pull_tail(skb); + + /* if eth_skb_pad returns an error the skb was freed */ + if (eth_skb_pad(skb)) + return true; + + return false; +} + +/** + * wx_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf + * @q_vector: structure containing interrupt and ring information + * @rx_ring: rx descriptor ring to transact packets on + * @budget: Total limit on number of packets to process + * + * This function provides a "bounce buffer" approach to Rx interrupt + * processing. The advantage to this is that on systems that have + * expensive overhead for IOMMU access this provides a means of avoiding + * it by maintaining the mapping of the page to the system. + * + * Returns amount of work completed. 
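 * (Editor's note: the loop below batches refills through
 * wx_alloc_rx_buffers() once WX_RX_BUFFER_WRITE descriptors are free;
 * wx_desc_unused() seeds the count, e.g. with count=512,
 * next_to_clean=500 and next_to_use=10 there are 500 - 10 - 1 = 489
 * slots to refill. Each iteration checks the DD status bit, issues
 * dma_rmb() before reading any other descriptor field, chains non-EOP
 * buffers back onto the ring, and hands completed frames to
 * napi_gro_receive().)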
+ **/ +static int wx_clean_rx_irq(struct wx_q_vector *q_vector, + struct wx_ring *rx_ring, + int budget) +{ + unsigned int total_rx_bytes = 0, total_rx_packets = 0; + u16 cleaned_count = wx_desc_unused(rx_ring); + + do { + struct wx_rx_buffer *rx_buffer; + union wx_rx_desc *rx_desc; + struct sk_buff *skb; + int rx_buffer_pgcnt; + + /* return some buffers to hardware, one at a time is too slow */ + if (cleaned_count >= WX_RX_BUFFER_WRITE) { + wx_alloc_rx_buffers(rx_ring, cleaned_count); + cleaned_count = 0; + } + + rx_desc = WX_RX_DESC(rx_ring, rx_ring->next_to_clean); + if (!wx_test_staterr(rx_desc, WX_RXD_STAT_DD)) + break; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc until we know the + * descriptor has been written back + */ + dma_rmb(); + + rx_buffer = wx_get_rx_buffer(rx_ring, rx_desc, &skb, &rx_buffer_pgcnt); + + /* retrieve a buffer from the ring */ + skb = wx_build_skb(rx_ring, rx_buffer, rx_desc); + + /* exit if we failed to retrieve a buffer */ + if (!skb) { + rx_buffer->pagecnt_bias++; + break; + } + + wx_put_rx_buffer(rx_ring, rx_buffer, skb, rx_buffer_pgcnt); + cleaned_count++; + + /* place incomplete frames back on ring for completion */ + if (wx_is_non_eop(rx_ring, rx_desc, skb)) + continue; + + /* verify the packet layout is correct */ + if (wx_cleanup_headers(rx_ring, rx_desc, skb)) + continue; + + /* probably a little skewed due to removing CRC */ + total_rx_bytes += skb->len; + + skb_record_rx_queue(skb, rx_ring->queue_index); + skb->protocol = eth_type_trans(skb, rx_ring->netdev); + napi_gro_receive(&q_vector->napi, skb); + + /* update budget accounting */ + total_rx_packets++; + } while (likely(total_rx_packets < budget)); + + u64_stats_update_begin(&rx_ring->syncp); + rx_ring->stats.packets += total_rx_packets; + rx_ring->stats.bytes += total_rx_bytes; + u64_stats_update_end(&rx_ring->syncp); + q_vector->rx.total_packets += total_rx_packets; + q_vector->rx.total_bytes += total_rx_bytes; + + return total_rx_packets; +} + +static struct netdev_queue *wx_txring_txq(const struct wx_ring *ring) +{ + return netdev_get_tx_queue(ring->netdev, ring->queue_index); +} + +/** + * wx_clean_tx_irq - Reclaim resources after transmit completes + * @q_vector: structure containing interrupt and ring information + * @tx_ring: tx ring to clean + * @napi_budget: Used to determine if we are in netpoll + **/ +static bool wx_clean_tx_irq(struct wx_q_vector *q_vector, + struct wx_ring *tx_ring, int napi_budget) +{ + unsigned int budget = q_vector->wx->tx_work_limit; + unsigned int total_bytes = 0, total_packets = 0; + unsigned int i = tx_ring->next_to_clean; + struct wx_tx_buffer *tx_buffer; + union wx_tx_desc *tx_desc; + + if (!netif_carrier_ok(tx_ring->netdev)) + return true; + + tx_buffer = &tx_ring->tx_buffer_info[i]; + tx_desc = WX_TX_DESC(tx_ring, i); + i -= tx_ring->count; + + do { + union wx_tx_desc *eop_desc = tx_buffer->next_to_watch; + + /* if next_to_watch is not set then there is no work pending */ + if (!eop_desc) + break; + + /* prevent any other reads prior to eop_desc */ + smp_rmb(); + + /* if DD is not set pending work has not been completed */ + if (!(eop_desc->wb.status & cpu_to_le32(WX_TXD_STAT_DD))) + break; + + /* clear next_to_watch to prevent false hangs */ + tx_buffer->next_to_watch = NULL; + + /* update the statistics for this packet */ + total_bytes += tx_buffer->bytecount; + total_packets += tx_buffer->gso_segs; + + /* free the skb */ + napi_consume_skb(tx_buffer->skb, napi_budget); + + /* unmap skb header 
data */ + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + + /* clear tx_buffer data */ + dma_unmap_len_set(tx_buffer, len, 0); + + /* unmap remaining buffers */ + while (tx_desc != eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -= tx_ring->count; + tx_buffer = tx_ring->tx_buffer_info; + tx_desc = WX_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) { + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + } + } + + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -= tx_ring->count; + tx_buffer = tx_ring->tx_buffer_info; + tx_desc = WX_TX_DESC(tx_ring, 0); + } + + /* issue prefetch for next Tx descriptor */ + prefetch(tx_desc); + + /* update budget accounting */ + budget--; + } while (likely(budget)); + + i += tx_ring->count; + tx_ring->next_to_clean = i; + u64_stats_update_begin(&tx_ring->syncp); + tx_ring->stats.bytes += total_bytes; + tx_ring->stats.packets += total_packets; + u64_stats_update_end(&tx_ring->syncp); + q_vector->tx.total_bytes += total_bytes; + q_vector->tx.total_packets += total_packets; + + netdev_tx_completed_queue(wx_txring_txq(tx_ring), + total_packets, total_bytes); + +#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) + if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) && + (wx_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) { + /* Make sure that anybody stopping the queue after this + * sees the new next_to_clean. + */ + smp_mb(); + + if (__netif_subqueue_stopped(tx_ring->netdev, + tx_ring->queue_index) && + netif_running(tx_ring->netdev)) + netif_wake_subqueue(tx_ring->netdev, + tx_ring->queue_index); + } + + return !!budget; +} + +/** + * wx_poll - NAPI polling RX/TX cleanup routine + * @napi: napi struct with our devices info in it + * @budget: amount of work driver is allowed to do this pass, in packets + * + * This function will clean all queues associated with a q_vector. 
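 * (Editor's note: the return value follows the usual NAPI contract:
 * returning the full budget keeps this vector in polling mode, while
 * napi_complete_done() plus wx_intr_enable() re-arm the queue interrupt
 * once all rings are clean; min(work_done, budget - 1) ensures a
 * completed poll never reports the full budget back to the core.)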
+ **/ +static int wx_poll(struct napi_struct *napi, int budget) +{ + struct wx_q_vector *q_vector = container_of(napi, struct wx_q_vector, napi); + int per_ring_budget, work_done = 0; + struct wx *wx = q_vector->wx; + bool clean_complete = true; + struct wx_ring *ring; + + wx_for_each_ring(ring, q_vector->tx) { + if (!wx_clean_tx_irq(q_vector, ring, budget)) + clean_complete = false; + } + + /* Exit if we are called by netpoll */ + if (budget <= 0) + return budget; + + /* attempt to distribute budget to each queue fairly, but don't allow + * the budget to go below 1 because we'll exit polling + */ + if (q_vector->rx.count > 1) + per_ring_budget = max(budget / q_vector->rx.count, 1); + else + per_ring_budget = budget; + + wx_for_each_ring(ring, q_vector->rx) { + int cleaned = wx_clean_rx_irq(q_vector, ring, per_ring_budget); + + work_done += cleaned; + if (cleaned >= per_ring_budget) + clean_complete = false; + } + + /* If all work not completed, return budget and keep polling */ + if (!clean_complete) + return budget; + + /* all work done, exit the polling mode */ + if (likely(napi_complete_done(napi, work_done))) { + if (netif_running(wx->netdev)) + wx_intr_enable(wx, WX_INTR_Q(q_vector->v_idx)); + } + + return min(work_done, budget - 1); +} + +static int wx_maybe_stop_tx(struct wx_ring *tx_ring, u16 size) +{ + if (likely(wx_desc_unused(tx_ring) >= size)) + return 0; + + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); + + /* For the next check */ + smp_mb(); + + /* We need to check again in a case another CPU has just + * made room available. + */ + if (likely(wx_desc_unused(tx_ring) < size)) + return -EBUSY; + + /* A reprieve! - use start_queue because it doesn't call schedule */ + netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index); + + return 0; +} + +static void wx_tx_map(struct wx_ring *tx_ring, + struct wx_tx_buffer *first) +{ + struct sk_buff *skb = first->skb; + struct wx_tx_buffer *tx_buffer; + u16 i = tx_ring->next_to_use; + unsigned int data_len, size; + union wx_tx_desc *tx_desc; + skb_frag_t *frag; + dma_addr_t dma; + u32 cmd_type; + + cmd_type = WX_TXD_DTYP_DATA | WX_TXD_IFCS; + tx_desc = WX_TX_DESC(tx_ring, i); + + tx_desc->read.olinfo_status = cpu_to_le32(skb->len << WX_TXD_PAYLEN_SHIFT); + + size = skb_headlen(skb); + data_len = skb->data_len; + dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE); + + tx_buffer = first; + + for (frag = &skb_shinfo(skb)->frags[0];; frag++) { + if (dma_mapping_error(tx_ring->dev, dma)) + goto dma_error; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buffer, len, size); + dma_unmap_addr_set(tx_buffer, dma, dma); + + tx_desc->read.buffer_addr = cpu_to_le64(dma); + + while (unlikely(size > WX_MAX_DATA_PER_TXD)) { + tx_desc->read.cmd_type_len = + cpu_to_le32(cmd_type ^ WX_MAX_DATA_PER_TXD); + + i++; + tx_desc++; + if (i == tx_ring->count) { + tx_desc = WX_TX_DESC(tx_ring, 0); + i = 0; + } + tx_desc->read.olinfo_status = 0; + + dma += WX_MAX_DATA_PER_TXD; + size -= WX_MAX_DATA_PER_TXD; + + tx_desc->read.buffer_addr = cpu_to_le64(dma); + } + + if (likely(!data_len)) + break; + + tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type ^ size); + + i++; + tx_desc++; + if (i == tx_ring->count) { + tx_desc = WX_TX_DESC(tx_ring, 0); + i = 0; + } + tx_desc->read.olinfo_status = 0; + + size = skb_frag_size(frag); + + data_len -= size; + + dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size, + DMA_TO_DEVICE); + + tx_buffer = &tx_ring->tx_buffer_info[i]; + } + + /* write last descriptor with RS and EOP bits */ 
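/* (Editor's note: WX_TXD_EOP marks the final descriptor of the frame and
 * WX_TXD_RS requests a status writeback, i.e. WX_TXD_STAT_DD on the
 * descriptor that wx_clean_tx_irq() later polls through next_to_watch.)
 */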
+ cmd_type |= size | WX_TXD_EOP | WX_TXD_RS; + tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type); + + netdev_tx_sent_queue(wx_txring_txq(tx_ring), first->bytecount); + + skb_tx_timestamp(skb); + + /* Force memory writes to complete before letting h/w know there + * are new descriptors to fetch. (Only applicable for weak-ordered + * memory model archs, such as IA-64). + * + * We also need this memory barrier to make certain all of the + * status bits have been updated before next_to_watch is written. + */ + wmb(); + + /* set next_to_watch value indicating a packet is present */ + first->next_to_watch = tx_desc; + + i++; + if (i == tx_ring->count) + i = 0; + + tx_ring->next_to_use = i; + + wx_maybe_stop_tx(tx_ring, DESC_NEEDED); + + if (netif_xmit_stopped(wx_txring_txq(tx_ring)) || !netdev_xmit_more()) + writel(i, tx_ring->tail); + + return; +dma_error: + dev_err(tx_ring->dev, "TX DMA map failed\n"); + + /* clear dma mappings for failed tx_buffer_info map */ + for (;;) { + tx_buffer = &tx_ring->tx_buffer_info[i]; + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + if (tx_buffer == first) + break; + if (i == 0) + i += tx_ring->count; + i--; + } + + dev_kfree_skb_any(first->skb); + first->skb = NULL; + + tx_ring->next_to_use = i; +} + +static netdev_tx_t wx_xmit_frame_ring(struct sk_buff *skb, + struct wx_ring *tx_ring) +{ + u16 count = TXD_USE_COUNT(skb_headlen(skb)); + struct wx_tx_buffer *first; + unsigned short f; + + /* need: 1 descriptor per page * PAGE_SIZE/WX_MAX_DATA_PER_TXD, + * + 1 desc for skb_headlen/WX_MAX_DATA_PER_TXD, + * + 2 desc gap to keep tail from touching head, + * + 1 desc for context descriptor, + * otherwise try next time + */ + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) + count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)-> + frags[f])); + + if (wx_maybe_stop_tx(tx_ring, count + 3)) + return NETDEV_TX_BUSY; + + /* record the location of the first descriptor for this packet */ + first = &tx_ring->tx_buffer_info[tx_ring->next_to_use]; + first->skb = skb; + first->bytecount = skb->len; + first->gso_segs = 1; + + wx_tx_map(tx_ring, first); + + return NETDEV_TX_OK; +} + +netdev_tx_t wx_xmit_frame(struct sk_buff *skb, + struct net_device *netdev) +{ + unsigned int r_idx = skb->queue_mapping; + struct wx *wx = netdev_priv(netdev); + struct wx_ring *tx_ring; + + if (!netif_carrier_ok(netdev)) { + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + + /* The minimum packet size for olinfo paylen is 17 so pad the skb + * in order to meet this minimum size requirement. 
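 * (Editor's note: skb_put_padto() zero-fills the frame up to the target
 * length and frees the skb itself on failure, which is why the error
 * path can return NETDEV_TX_OK without a separate kfree.)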
+ */ + if (skb_put_padto(skb, 17)) + return NETDEV_TX_OK; + + if (r_idx >= wx->num_tx_queues) + r_idx = r_idx % wx->num_tx_queues; + tx_ring = wx->tx_ring[r_idx]; + + return wx_xmit_frame_ring(skb, tx_ring); +} +EXPORT_SYMBOL(wx_xmit_frame); + +void wx_napi_enable_all(struct wx *wx) +{ + struct wx_q_vector *q_vector; + int q_idx; + + for (q_idx = 0; q_idx < wx->num_q_vectors; q_idx++) { + q_vector = wx->q_vector[q_idx]; + napi_enable(&q_vector->napi); + } +} +EXPORT_SYMBOL(wx_napi_enable_all); + +void wx_napi_disable_all(struct wx *wx) +{ + struct wx_q_vector *q_vector; + int q_idx; + + for (q_idx = 0; q_idx < wx->num_q_vectors; q_idx++) { + q_vector = wx->q_vector[q_idx]; + napi_disable(&q_vector->napi); + } +} +EXPORT_SYMBOL(wx_napi_disable_all); + +/** + * wx_set_rss_queues: Allocate queues for RSS + * @wx: board private structure to initialize + * + * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try + * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU. + * + **/ +static void wx_set_rss_queues(struct wx *wx) +{ + wx->num_rx_queues = wx->mac.max_rx_queues; + wx->num_tx_queues = wx->mac.max_tx_queues; +} + +static void wx_set_num_queues(struct wx *wx) +{ + /* Start with base case */ + wx->num_rx_queues = 1; + wx->num_tx_queues = 1; + wx->queues_per_pool = 1; + + wx_set_rss_queues(wx); +} + +/** + * wx_acquire_msix_vectors - acquire MSI-X vectors + * @wx: board private structure + * + * Attempts to acquire a suitable range of MSI-X vector interrupts. Will + * return a negative error code if unable to acquire MSI-X vectors for any + * reason. + */ +static int wx_acquire_msix_vectors(struct wx *wx) +{ + struct irq_affinity affd = {0, }; + int nvecs, i; + + nvecs = min_t(int, num_online_cpus(), wx->mac.max_msix_vectors); + + wx->msix_entries = kcalloc(nvecs, + sizeof(struct msix_entry), + GFP_KERNEL); + if (!wx->msix_entries) + return -ENOMEM; + + nvecs = pci_alloc_irq_vectors_affinity(wx->pdev, nvecs, + nvecs, + PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, + &affd); + if (nvecs < 0) { + wx_err(wx, "Failed to allocate MSI-X interrupts. Err: %d\n", nvecs); + kfree(wx->msix_entries); + wx->msix_entries = NULL; + return nvecs; + } + + for (i = 0; i < nvecs; i++) { + wx->msix_entries[i].entry = i; + wx->msix_entries[i].vector = pci_irq_vector(wx->pdev, i); + } + + /* one for msix_other */ + nvecs -= 1; + wx->num_q_vectors = nvecs; + wx->num_rx_queues = nvecs; + wx->num_tx_queues = nvecs; + + return 0; +} + +/** + * wx_set_interrupt_capability - set MSI-X or MSI if supported + * @wx: board private structure to initialize + * + * Attempt to configure the interrupts using the best available + * capabilities of the hardware and the kernel. + **/ +static int wx_set_interrupt_capability(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + int nvecs, ret; + + /* We will try to get MSI-X interrupts first */ + ret = wx_acquire_msix_vectors(wx); + if (ret == 0 || (ret == -ENOMEM)) + return ret; + + wx->num_rx_queues = 1; + wx->num_tx_queues = 1; + wx->num_q_vectors = 1; + + /* minmum one for queue, one for misc*/ + nvecs = 1; + nvecs = pci_alloc_irq_vectors(pdev, nvecs, + nvecs, PCI_IRQ_MSI | PCI_IRQ_LEGACY); + if (nvecs == 1) { + if (pdev->msi_enabled) + wx_err(wx, "Fallback to MSI.\n"); + else + wx_err(wx, "Fallback to LEGACY.\n"); + } else { + wx_err(wx, "Failed to allocate MSI/LEGACY interrupts. 
Error: %d\n", nvecs); + return nvecs; + } + + pdev->irq = pci_irq_vector(pdev, 0); + + return 0; +} + +/** + * wx_cache_ring_rss - Descriptor ring to register mapping for RSS + * @wx: board private structure to initialize + * + * Cache the descriptor ring offsets for RSS, ATR, FCoE, and SR-IOV. + * + **/ +static void wx_cache_ring_rss(struct wx *wx) +{ + u16 i; + + for (i = 0; i < wx->num_rx_queues; i++) + wx->rx_ring[i]->reg_idx = i; + + for (i = 0; i < wx->num_tx_queues; i++) + wx->tx_ring[i]->reg_idx = i; +} + +static void wx_add_ring(struct wx_ring *ring, struct wx_ring_container *head) +{ + ring->next = head->ring; + head->ring = ring; + head->count++; +} + +/** + * wx_alloc_q_vector - Allocate memory for a single interrupt vector + * @wx: board private structure to initialize + * @v_count: q_vectors allocated on wx, used for ring interleaving + * @v_idx: index of vector in wx struct + * @txr_count: total number of Tx rings to allocate + * @txr_idx: index of first Tx ring to allocate + * @rxr_count: total number of Rx rings to allocate + * @rxr_idx: index of first Rx ring to allocate + * + * We allocate one q_vector. If allocation fails we return -ENOMEM. + **/ +static int wx_alloc_q_vector(struct wx *wx, + unsigned int v_count, unsigned int v_idx, + unsigned int txr_count, unsigned int txr_idx, + unsigned int rxr_count, unsigned int rxr_idx) +{ + struct wx_q_vector *q_vector; + int ring_count, default_itr; + struct wx_ring *ring; + + /* note this will allocate space for the ring structure as well! */ + ring_count = txr_count + rxr_count; + + q_vector = kzalloc(struct_size(q_vector, ring, ring_count), + GFP_KERNEL); + if (!q_vector) + return -ENOMEM; + + /* initialize NAPI */ + netif_napi_add(wx->netdev, &q_vector->napi, + wx_poll); + + /* tie q_vector and wx together */ + wx->q_vector[v_idx] = q_vector; + q_vector->wx = wx; + q_vector->v_idx = v_idx; + if (cpu_online(v_idx)) + q_vector->numa_node = cpu_to_node(v_idx); + + /* initialize pointer to rings */ + ring = q_vector->ring; + + if (wx->mac.type == wx_mac_sp) + default_itr = WX_12K_ITR; + else + default_itr = WX_7K_ITR; + /* initialize ITR */ + if (txr_count && !rxr_count) + /* tx only vector */ + q_vector->itr = wx->tx_itr_setting ? + default_itr : wx->tx_itr_setting; + else + /* rx or rx/tx vector */ + q_vector->itr = wx->rx_itr_setting ? 
+ default_itr : wx->rx_itr_setting; + + while (txr_count) { + /* assign generic ring traits */ + ring->dev = &wx->pdev->dev; + ring->netdev = wx->netdev; + + /* configure backlink on ring */ + ring->q_vector = q_vector; + + /* update q_vector Tx values */ + wx_add_ring(ring, &q_vector->tx); + + /* apply Tx specific ring traits */ + ring->count = wx->tx_ring_count; + + ring->queue_index = txr_idx; + + /* assign ring to wx */ + wx->tx_ring[txr_idx] = ring; + + /* update count and index */ + txr_count--; + txr_idx += v_count; + + /* push pointer to next ring */ + ring++; + } + + while (rxr_count) { + /* assign generic ring traits */ + ring->dev = &wx->pdev->dev; + ring->netdev = wx->netdev; + + /* configure backlink on ring */ + ring->q_vector = q_vector; + + /* update q_vector Rx values */ + wx_add_ring(ring, &q_vector->rx); + + /* apply Rx specific ring traits */ + ring->count = wx->rx_ring_count; + ring->queue_index = rxr_idx; + + /* assign ring to wx */ + wx->rx_ring[rxr_idx] = ring; + + /* update count and index */ + rxr_count--; + rxr_idx += v_count; + + /* push pointer to next ring */ + ring++; + } + + return 0; +} + +/** + * wx_free_q_vector - Free memory allocated for specific interrupt vector + * @wx: board private structure to initialize + * @v_idx: Index of vector to be freed + * + * This function frees the memory allocated to the q_vector. In addition if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. + **/ +static void wx_free_q_vector(struct wx *wx, int v_idx) +{ + struct wx_q_vector *q_vector = wx->q_vector[v_idx]; + struct wx_ring *ring; + + wx_for_each_ring(ring, q_vector->tx) + wx->tx_ring[ring->queue_index] = NULL; + + wx_for_each_ring(ring, q_vector->rx) + wx->rx_ring[ring->queue_index] = NULL; + + wx->q_vector[v_idx] = NULL; + netif_napi_del(&q_vector->napi); + kfree_rcu(q_vector, rcu); +} + +/** + * wx_alloc_q_vectors - Allocate memory for interrupt vectors + * @wx: board private structure to initialize + * + * We allocate one q_vector per queue interrupt. If allocation fails we + * return -ENOMEM. + **/ +static int wx_alloc_q_vectors(struct wx *wx) +{ + unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0; + unsigned int rxr_remaining = wx->num_rx_queues; + unsigned int txr_remaining = wx->num_tx_queues; + unsigned int q_vectors = wx->num_q_vectors; + int rqpv, tqpv; + int err; + + for (; v_idx < q_vectors; v_idx++) { + rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx); + tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx); + err = wx_alloc_q_vector(wx, q_vectors, v_idx, + tqpv, txr_idx, + rqpv, rxr_idx); + + if (err) + goto err_out; + + /* update counts and index */ + rxr_remaining -= rqpv; + txr_remaining -= tqpv; + rxr_idx++; + txr_idx++; + } + + return 0; + +err_out: + wx->num_tx_queues = 0; + wx->num_rx_queues = 0; + wx->num_q_vectors = 0; + + while (v_idx--) + wx_free_q_vector(wx, v_idx); + + return -ENOMEM; +} + +/** + * wx_free_q_vectors - Free memory allocated for interrupt vectors + * @wx: board private structure to initialize + * + * This function frees the memory allocated to the q_vectors. In addition if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. 
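 * (Editor's note: each q_vector is released with kfree_rcu(), so RCU
 * readers that may still be walking the ring pointers -- e.g. the
 * wx_get_stats64() path further down -- are guaranteed to finish before
 * the memory disappears.)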
+ **/ +static void wx_free_q_vectors(struct wx *wx) +{ + int v_idx = wx->num_q_vectors; + + wx->num_tx_queues = 0; + wx->num_rx_queues = 0; + wx->num_q_vectors = 0; + + while (v_idx--) + wx_free_q_vector(wx, v_idx); +} + +void wx_reset_interrupt_capability(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + + if (!pdev->msi_enabled && !pdev->msix_enabled) + return; + + pci_free_irq_vectors(wx->pdev); + if (pdev->msix_enabled) { + kfree(wx->msix_entries); + wx->msix_entries = NULL; + } +} +EXPORT_SYMBOL(wx_reset_interrupt_capability); + +/** + * wx_clear_interrupt_scheme - Clear the current interrupt scheme settings + * @wx: board private structure to clear interrupt scheme on + * + * We go through and clear interrupt specific resources and reset the structure + * to pre-load conditions + **/ +void wx_clear_interrupt_scheme(struct wx *wx) +{ + wx_free_q_vectors(wx); + wx_reset_interrupt_capability(wx); +} +EXPORT_SYMBOL(wx_clear_interrupt_scheme); + +int wx_init_interrupt_scheme(struct wx *wx) +{ + int ret; + + /* Number of supported queues */ + wx_set_num_queues(wx); + + /* Set interrupt mode */ + ret = wx_set_interrupt_capability(wx); + if (ret) { + wx_err(wx, "Allocate irq vectors for failed.\n"); + return ret; + } + + /* Allocate memory for queues */ + ret = wx_alloc_q_vectors(wx); + if (ret) { + wx_err(wx, "Unable to allocate memory for queue vectors.\n"); + wx_reset_interrupt_capability(wx); + return ret; + } + + wx_cache_ring_rss(wx); + + return 0; +} +EXPORT_SYMBOL(wx_init_interrupt_scheme); + +irqreturn_t wx_msix_clean_rings(int __always_unused irq, void *data) +{ + struct wx_q_vector *q_vector = data; + + /* EIAM disabled interrupts (on this vector) for us */ + if (q_vector->rx.ring || q_vector->tx.ring) + napi_schedule_irqoff(&q_vector->napi); + + return IRQ_HANDLED; +} +EXPORT_SYMBOL(wx_msix_clean_rings); + +void wx_free_irq(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + int vector; + + if (!(pdev->msix_enabled)) { + free_irq(pdev->irq, wx); + return; + } + + for (vector = 0; vector < wx->num_q_vectors; vector++) { + struct wx_q_vector *q_vector = wx->q_vector[vector]; + struct msix_entry *entry = &wx->msix_entries[vector]; + + /* free only the irqs that were actually requested */ + if (!q_vector->rx.ring && !q_vector->tx.ring) + continue; + + free_irq(entry->vector, q_vector); + } + + free_irq(wx->msix_entries[vector].vector, wx); +} +EXPORT_SYMBOL(wx_free_irq); + +/** + * wx_setup_isb_resources - allocate interrupt status resources + * @wx: board private structure + * + * Return 0 on success, negative on failure + **/ +int wx_setup_isb_resources(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + + wx->isb_mem = dma_alloc_coherent(&pdev->dev, + sizeof(u32) * 4, + &wx->isb_dma, + GFP_KERNEL); + if (!wx->isb_mem) { + wx_err(wx, "Alloc isb_mem failed\n"); + return -ENOMEM; + } + + return 0; +} +EXPORT_SYMBOL(wx_setup_isb_resources); + +/** + * wx_free_isb_resources - allocate all queues Rx resources + * @wx: board private structure + * + * Return 0 on success, negative on failure + **/ +void wx_free_isb_resources(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + + dma_free_coherent(&pdev->dev, sizeof(u32) * 4, + wx->isb_mem, wx->isb_dma); + wx->isb_mem = NULL; +} +EXPORT_SYMBOL(wx_free_isb_resources); + +u32 wx_misc_isb(struct wx *wx, enum wx_isb_idx idx) +{ + u32 cur_tag = 0; + + cur_tag = wx->isb_mem[WX_ISB_HEADER]; + wx->isb_tag[idx] = cur_tag; + + return (__force u32)cpu_to_le32(wx->isb_mem[idx]); +} +EXPORT_SYMBOL(wx_misc_isb); + +/** + * wx_set_ivar - set 
the IVAR registers, mapping interrupt causes to vectors + * @wx: pointer to wx struct + * @direction: 0 for Rx, 1 for Tx, -1 for other causes + * @queue: queue to map the corresponding interrupt to + * @msix_vector: the vector to map to the corresponding queue + * + **/ +static void wx_set_ivar(struct wx *wx, s8 direction, + u16 queue, u16 msix_vector) +{ + u32 ivar, index; + + if (direction == -1) { + /* other causes */ + msix_vector |= WX_PX_IVAR_ALLOC_VAL; + index = 0; + ivar = rd32(wx, WX_PX_MISC_IVAR); + ivar &= ~(0xFF << index); + ivar |= (msix_vector << index); + wr32(wx, WX_PX_MISC_IVAR, ivar); + } else { + /* tx or rx causes */ + msix_vector |= WX_PX_IVAR_ALLOC_VAL; + index = ((16 * (queue & 1)) + (8 * direction)); + ivar = rd32(wx, WX_PX_IVAR(queue >> 1)); + ivar &= ~(0xFF << index); + ivar |= (msix_vector << index); + wr32(wx, WX_PX_IVAR(queue >> 1), ivar); + } +} + +/** + * wx_write_eitr - write EITR register in hardware specific way + * @q_vector: structure containing interrupt and ring information + * + * This function is made to be called by ethtool and by the driver + * when it needs to update EITR registers at runtime. Hardware + * specific quirks/differences are taken care of here. + */ +static void wx_write_eitr(struct wx_q_vector *q_vector) +{ + struct wx *wx = q_vector->wx; + int v_idx = q_vector->v_idx; + u32 itr_reg; + + if (wx->mac.type == wx_mac_sp) + itr_reg = q_vector->itr & WX_SP_MAX_EITR; + else + itr_reg = q_vector->itr & WX_EM_MAX_EITR; + + itr_reg |= WX_PX_ITR_CNT_WDIS; + + wr32(wx, WX_PX_ITR(v_idx), itr_reg); +} + +/** + * wx_configure_vectors - Configure vectors for hardware + * @wx: board private structure + * + * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/LEGACY + * interrupts. + **/ +void wx_configure_vectors(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + u32 eitrsel = 0; + u16 v_idx; + + if (pdev->msix_enabled) { + /* Populate MSIX to EITR Select */ + wr32(wx, WX_PX_ITRSEL, eitrsel); + /* use EIAM to auto-mask when MSI-X interrupt is asserted + * this saves a register write for every interrupt + */ + wr32(wx, WX_PX_GPIE, WX_PX_GPIE_MODEL); + } else { + /* legacy interrupts, use EIAM to auto-mask when reading EICR, + * specifically only auto mask tx and rx interrupts. + */ + wr32(wx, WX_PX_GPIE, 0); + } + + /* Populate the IVAR table and set the ITR values to the + * corresponding register. + */ + for (v_idx = 0; v_idx < wx->num_q_vectors; v_idx++) { + struct wx_q_vector *q_vector = wx->q_vector[v_idx]; + struct wx_ring *ring; + + wx_for_each_ring(ring, q_vector->rx) + wx_set_ivar(wx, 0, ring->reg_idx, v_idx); + + wx_for_each_ring(ring, q_vector->tx) + wx_set_ivar(wx, 1, ring->reg_idx, v_idx); + + wx_write_eitr(q_vector); + } + + wx_set_ivar(wx, -1, 0, v_idx); + if (pdev->msix_enabled) + wr32(wx, WX_PX_ITR(v_idx), 1950); +} +EXPORT_SYMBOL(wx_configure_vectors); + +/** + * wx_clean_rx_ring - Free Rx Buffers per Queue + * @rx_ring: ring to free buffers from + **/ +static void wx_clean_rx_ring(struct wx_ring *rx_ring) +{ + struct wx_rx_buffer *rx_buffer; + u16 i = rx_ring->next_to_clean; + + rx_buffer = &rx_ring->rx_buffer_info[i]; + + /* Free all the Rx ring sk_buffs */ + while (i != rx_ring->next_to_alloc) { + if (rx_buffer->skb) { + struct sk_buff *skb = rx_buffer->skb; + + if (WX_CB(skb)->page_released) + page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + + dev_kfree_skb(skb); + } + + /* Invalidate cache lines that may have been written to by + * device so that we avoid corrupting memory. 
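 * (Editor's note: the sync below only needs to cover the WX_RX_BUFSZ
 * window the hardware may have written; afterwards the page goes back
 * to the page_pool and the remaining pagecnt_bias references are
 * drained.)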
+ */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->dma, + rx_buffer->page_offset, + WX_RX_BUFSZ, + DMA_FROM_DEVICE); + + /* free resources associated with mapping */ + page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + __page_frag_cache_drain(rx_buffer->page, + rx_buffer->pagecnt_bias); + + i++; + rx_buffer++; + if (i == rx_ring->count) { + i = 0; + rx_buffer = rx_ring->rx_buffer_info; + } + } + + rx_ring->next_to_alloc = 0; + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; +} + +/** + * wx_clean_all_rx_rings - Free Rx Buffers for all queues + * @wx: board private structure + **/ +void wx_clean_all_rx_rings(struct wx *wx) +{ + int i; + + for (i = 0; i < wx->num_rx_queues; i++) + wx_clean_rx_ring(wx->rx_ring[i]); +} +EXPORT_SYMBOL(wx_clean_all_rx_rings); + +/** + * wx_free_rx_resources - Free Rx Resources + * @rx_ring: ring to clean the resources from + * + * Free all receive software resources + **/ +static void wx_free_rx_resources(struct wx_ring *rx_ring) +{ + wx_clean_rx_ring(rx_ring); + kvfree(rx_ring->rx_buffer_info); + rx_ring->rx_buffer_info = NULL; + + /* if not set, then don't free */ + if (!rx_ring->desc) + return; + + dma_free_coherent(rx_ring->dev, rx_ring->size, + rx_ring->desc, rx_ring->dma); + + rx_ring->desc = NULL; + + if (rx_ring->page_pool) { + page_pool_destroy(rx_ring->page_pool); + rx_ring->page_pool = NULL; + } +} + +/** + * wx_free_all_rx_resources - Free Rx Resources for All Queues + * @wx: pointer to hardware structure + * + * Free all receive software resources + **/ +static void wx_free_all_rx_resources(struct wx *wx) +{ + int i; + + for (i = 0; i < wx->num_rx_queues; i++) + wx_free_rx_resources(wx->rx_ring[i]); +} + +/** + * wx_clean_tx_ring - Free Tx Buffers + * @tx_ring: ring to be cleaned + **/ +static void wx_clean_tx_ring(struct wx_ring *tx_ring) +{ + struct wx_tx_buffer *tx_buffer; + u16 i = tx_ring->next_to_clean; + + tx_buffer = &tx_ring->tx_buffer_info[i]; + + while (i != tx_ring->next_to_use) { + union wx_tx_desc *eop_desc, *tx_desc; + + /* Free all the Tx ring sk_buffs */ + dev_kfree_skb_any(tx_buffer->skb); + + /* unmap skb header data */ + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + + /* check for eop_desc to determine the end of the packet */ + eop_desc = tx_buffer->next_to_watch; + tx_desc = WX_TX_DESC(tx_ring, i); + + /* unmap remaining buffers */ + while (tx_desc != eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(i == tx_ring->count)) { + i = 0; + tx_buffer = tx_ring->tx_buffer_info; + tx_desc = WX_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + i++; + if (unlikely(i == tx_ring->count)) { + i = 0; + tx_buffer = tx_ring->tx_buffer_info; + } + } + + netdev_tx_reset_queue(wx_txring_txq(tx_ring)); + + /* reset next_to_use and next_to_clean */ + tx_ring->next_to_use = 0; + tx_ring->next_to_clean = 0; +} + +/** + * wx_clean_all_tx_rings - Free Tx Buffers for all queues + * @wx: board private structure + **/ +void wx_clean_all_tx_rings(struct wx *wx) +{ + int i; + + for (i = 0; i < wx->num_tx_queues; i++) + wx_clean_tx_ring(wx->tx_ring[i]); +} +EXPORT_SYMBOL(wx_clean_all_tx_rings); + +/** + * wx_free_tx_resources - Free Tx Resources per Queue + * @tx_ring: Tx descriptor 
ring for a specific queue + * + * Free all transmit software resources + **/ +static void wx_free_tx_resources(struct wx_ring *tx_ring) +{ + wx_clean_tx_ring(tx_ring); + kvfree(tx_ring->tx_buffer_info); + tx_ring->tx_buffer_info = NULL; + + /* if not set, then don't free */ + if (!tx_ring->desc) + return; + + dma_free_coherent(tx_ring->dev, tx_ring->size, + tx_ring->desc, tx_ring->dma); + tx_ring->desc = NULL; +} + +/** + * wx_free_all_tx_resources - Free Tx Resources for All Queues + * @wx: pointer to hardware structure + * + * Free all transmit software resources + **/ +static void wx_free_all_tx_resources(struct wx *wx) +{ + int i; + + for (i = 0; i < wx->num_tx_queues; i++) + wx_free_tx_resources(wx->tx_ring[i]); +} + +void wx_free_resources(struct wx *wx) +{ + wx_free_isb_resources(wx); + wx_free_all_rx_resources(wx); + wx_free_all_tx_resources(wx); +} +EXPORT_SYMBOL(wx_free_resources); + +static int wx_alloc_page_pool(struct wx_ring *rx_ring) +{ + int ret = 0; + + struct page_pool_params pp_params = { + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, + .order = 0, + .pool_size = rx_ring->size, + .nid = dev_to_node(rx_ring->dev), + .dev = rx_ring->dev, + .dma_dir = DMA_FROM_DEVICE, + .offset = 0, + .max_len = PAGE_SIZE, + }; + + rx_ring->page_pool = page_pool_create(&pp_params); + if (IS_ERR(rx_ring->page_pool)) { + ret = PTR_ERR(rx_ring->page_pool); + rx_ring->page_pool = NULL; + } + + return ret; +} + +/** + * wx_setup_rx_resources - allocate Rx resources (Descriptors) + * @rx_ring: rx descriptor ring (for a specific queue) to setup + * + * Returns 0 on success, negative on failure + **/ +static int wx_setup_rx_resources(struct wx_ring *rx_ring) +{ + struct device *dev = rx_ring->dev; + int orig_node = dev_to_node(dev); + int numa_node = NUMA_NO_NODE; + int size, ret; + + size = sizeof(struct wx_rx_buffer) * rx_ring->count; + + if (rx_ring->q_vector) + numa_node = rx_ring->q_vector->numa_node; + + rx_ring->rx_buffer_info = kvmalloc_node(size, GFP_KERNEL, numa_node); + if (!rx_ring->rx_buffer_info) + rx_ring->rx_buffer_info = kvmalloc(size, GFP_KERNEL); + if (!rx_ring->rx_buffer_info) + goto err; + + /* Round up to nearest 4K */ + rx_ring->size = rx_ring->count * sizeof(union wx_rx_desc); + rx_ring->size = ALIGN(rx_ring->size, 4096); + + set_dev_node(dev, numa_node); + rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, + &rx_ring->dma, GFP_KERNEL); + if (!rx_ring->desc) { + set_dev_node(dev, orig_node); + rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, + &rx_ring->dma, GFP_KERNEL); + } + + if (!rx_ring->desc) + goto err; + + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; + + ret = wx_alloc_page_pool(rx_ring); + if (ret < 0) { + dev_err(rx_ring->dev, "Page pool creation failed: %d\n", ret); + goto err; + } + + return 0; +err: + kvfree(rx_ring->rx_buffer_info); + rx_ring->rx_buffer_info = NULL; + dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n"); + return -ENOMEM; +} + +/** + * wx_setup_all_rx_resources - allocate all queues Rx resources + * @wx: pointer to hardware structure + * + * If this function returns with an error, then it's possible one or + * more of the rings is populated (while the rest are not). It is the + * callers duty to clean those orphaned rings. 
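 * (Editor's note: the wording above appears to be inherited from older
 * Intel drivers; the error path below already rewinds and frees every
 * ring allocated before the failure, and the Tx variant further down
 * behaves the same way, so callers effectively see an all-or-nothing
 * result.)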
+ * + * Return 0 on success, negative on failure + **/ +static int wx_setup_all_rx_resources(struct wx *wx) +{ + int i, err = 0; + + for (i = 0; i < wx->num_rx_queues; i++) { + err = wx_setup_rx_resources(wx->rx_ring[i]); + if (!err) + continue; + + wx_err(wx, "Allocation for Rx Queue %u failed\n", i); + goto err_setup_rx; + } + + return 0; +err_setup_rx: + /* rewind the index freeing the rings as we go */ + while (i--) + wx_free_rx_resources(wx->rx_ring[i]); + return err; +} + +/** + * wx_setup_tx_resources - allocate Tx resources (Descriptors) + * @tx_ring: tx descriptor ring (for a specific queue) to setup + * + * Return 0 on success, negative on failure + **/ +static int wx_setup_tx_resources(struct wx_ring *tx_ring) +{ + struct device *dev = tx_ring->dev; + int orig_node = dev_to_node(dev); + int numa_node = NUMA_NO_NODE; + int size; + + size = sizeof(struct wx_tx_buffer) * tx_ring->count; + + if (tx_ring->q_vector) + numa_node = tx_ring->q_vector->numa_node; + + tx_ring->tx_buffer_info = kvmalloc_node(size, GFP_KERNEL, numa_node); + if (!tx_ring->tx_buffer_info) + tx_ring->tx_buffer_info = kvmalloc(size, GFP_KERNEL); + if (!tx_ring->tx_buffer_info) + goto err; + + /* round up to nearest 4K */ + tx_ring->size = tx_ring->count * sizeof(union wx_tx_desc); + tx_ring->size = ALIGN(tx_ring->size, 4096); + + set_dev_node(dev, numa_node); + tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size, + &tx_ring->dma, GFP_KERNEL); + if (!tx_ring->desc) { + set_dev_node(dev, orig_node); + tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size, + &tx_ring->dma, GFP_KERNEL); + } + + if (!tx_ring->desc) + goto err; + + tx_ring->next_to_use = 0; + tx_ring->next_to_clean = 0; + + return 0; + +err: + kvfree(tx_ring->tx_buffer_info); + tx_ring->tx_buffer_info = NULL; + dev_err(dev, "Unable to allocate memory for the Tx descriptor ring\n"); + return -ENOMEM; +} + +/** + * wx_setup_all_tx_resources - allocate all queues Tx resources + * @wx: pointer to private structure + * + * If this function returns with an error, then it's possible one or + * more of the rings is populated (while the rest are not). It is the + * callers duty to clean those orphaned rings. 
+ * + * Return 0 on success, negative on failure + **/ +static int wx_setup_all_tx_resources(struct wx *wx) +{ + int i, err = 0; + + for (i = 0; i < wx->num_tx_queues; i++) { + err = wx_setup_tx_resources(wx->tx_ring[i]); + if (!err) + continue; + + wx_err(wx, "Allocation for Tx Queue %u failed\n", i); + goto err_setup_tx; + } + + return 0; +err_setup_tx: + /* rewind the index freeing the rings as we go */ + while (i--) + wx_free_tx_resources(wx->tx_ring[i]); + return err; +} + +int wx_setup_resources(struct wx *wx) +{ + int err; + + /* allocate transmit descriptors */ + err = wx_setup_all_tx_resources(wx); + if (err) + return err; + + /* allocate receive descriptors */ + err = wx_setup_all_rx_resources(wx); + if (err) + goto err_free_tx; + + err = wx_setup_isb_resources(wx); + if (err) + goto err_free_rx; + + return 0; + +err_free_rx: + wx_free_all_rx_resources(wx); +err_free_tx: + wx_free_all_tx_resources(wx); + + return err; +} +EXPORT_SYMBOL(wx_setup_resources); + +/** + * wx_get_stats64 - Get System Network Statistics + * @netdev: network interface device structure + * @stats: storage space for 64bit statistics + */ +void wx_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct wx *wx = netdev_priv(netdev); + int i; + + rcu_read_lock(); + for (i = 0; i < wx->num_rx_queues; i++) { + struct wx_ring *ring = READ_ONCE(wx->rx_ring[i]); + u64 bytes, packets; + unsigned int start; + + if (ring) { + do { + start = u64_stats_fetch_begin(&ring->syncp); + packets = ring->stats.packets; + bytes = ring->stats.bytes; + } while (u64_stats_fetch_retry(&ring->syncp, start)); + stats->rx_packets += packets; + stats->rx_bytes += bytes; + } + } + + for (i = 0; i < wx->num_tx_queues; i++) { + struct wx_ring *ring = READ_ONCE(wx->tx_ring[i]); + u64 bytes, packets; + unsigned int start; + + if (ring) { + do { + start = u64_stats_fetch_begin(&ring->syncp); + packets = ring->stats.packets; + bytes = ring->stats.bytes; + } while (u64_stats_fetch_retry(&ring->syncp, + start)); + stats->tx_packets += packets; + stats->tx_bytes += bytes; + } + } + + rcu_read_unlock(); +} +EXPORT_SYMBOL(wx_get_stats64); + +MODULE_LICENSE("GPL"); diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.h b/drivers/net/ethernet/wangxun/libwx/wx_lib.h new file mode 100644 index 000000000000..50ee41f1fa10 --- /dev/null +++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * WangXun Gigabit PCI Express Linux driver + * Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. 
+ */ + +#ifndef _WX_LIB_H_ +#define _WX_LIB_H_ + +void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count); +u16 wx_desc_unused(struct wx_ring *ring); +netdev_tx_t wx_xmit_frame(struct sk_buff *skb, + struct net_device *netdev); +void wx_napi_enable_all(struct wx *wx); +void wx_napi_disable_all(struct wx *wx); +void wx_reset_interrupt_capability(struct wx *wx); +void wx_clear_interrupt_scheme(struct wx *wx); +int wx_init_interrupt_scheme(struct wx *wx); +irqreturn_t wx_msix_clean_rings(int __always_unused irq, void *data); +void wx_free_irq(struct wx *wx); +int wx_setup_isb_resources(struct wx *wx); +void wx_free_isb_resources(struct wx *wx); +u32 wx_misc_isb(struct wx *wx, enum wx_isb_idx idx); +void wx_configure_vectors(struct wx *wx); +void wx_clean_all_rx_rings(struct wx *wx); +void wx_clean_all_tx_rings(struct wx *wx); +void wx_free_resources(struct wx *wx); +int wx_setup_resources(struct wx *wx); +void wx_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats); + +#endif /* _NGBE_LIB_H_ */ diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h index 1cbeef8230bf..77d8d7f1707e 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_type.h +++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h @@ -4,6 +4,9 @@ #ifndef _WX_TYPE_H_ #define _WX_TYPE_H_ +#include <linux/bitfield.h> +#include <linux/netdevice.h> + /* Vendor ID */ #ifndef PCI_VENDOR_ID_WANGXUN #define PCI_VENDOR_ID_WANGXUN 0x8088 @@ -36,12 +39,11 @@ #define WX_SPI_CMD 0x10104 #define WX_SPI_CMD_READ_DWORD 0x1 #define WX_SPI_CLK_DIV 0x3 -#define WX_SPI_CMD_CMD(_v) (((_v) & 0x7) << 28) -#define WX_SPI_CMD_CLK(_v) (((_v) & 0x7) << 25) -#define WX_SPI_CMD_ADDR(_v) (((_v) & 0xFFFFFF)) +#define WX_SPI_CMD_CMD(_v) FIELD_PREP(GENMASK(30, 28), _v) +#define WX_SPI_CMD_CLK(_v) FIELD_PREP(GENMASK(27, 25), _v) +#define WX_SPI_CMD_ADDR(_v) FIELD_PREP(GENMASK(23, 0), _v) #define WX_SPI_DATA 0x10108 #define WX_SPI_DATA_BYPASS BIT(31) -#define WX_SPI_DATA_STATUS(_v) (((_v) & 0xFF) << 16) #define WX_SPI_DATA_OP_DONE BIT(0) #define WX_SPI_STATUS 0x1010C #define WX_SPI_STATUS_OPDONE BIT(0) @@ -64,21 +66,50 @@ /* port cfg Registers */ #define WX_CFG_PORT_CTL 0x14400 #define WX_CFG_PORT_CTL_DRV_LOAD BIT(3) +#define WX_CFG_PORT_CTL_QINQ BIT(2) +#define WX_CFG_PORT_CTL_D_VLAN BIT(0) /* double vlan*/ +#define WX_CFG_TAG_TPID(_i) (0x14430 + ((_i) * 4)) + +/* GPIO Registers */ +#define WX_GPIO_DR 0x14800 +#define WX_GPIO_DR_0 BIT(0) /* SDP0 Data Value */ +#define WX_GPIO_DR_1 BIT(1) /* SDP1 Data Value */ +#define WX_GPIO_DDR 0x14804 +#define WX_GPIO_DDR_0 BIT(0) /* SDP0 IO direction */ +#define WX_GPIO_DDR_1 BIT(1) /* SDP1 IO direction */ +#define WX_GPIO_CTL 0x14808 +#define WX_GPIO_INTEN 0x14830 +#define WX_GPIO_INTEN_0 BIT(0) +#define WX_GPIO_INTEN_1 BIT(1) +#define WX_GPIO_INTMASK 0x14834 +#define WX_GPIO_INTTYPE_LEVEL 0x14838 +#define WX_GPIO_POLARITY 0x1483C +#define WX_GPIO_EOI 0x1484C /*********************** Transmit DMA registers **************************/ /* transmit global control */ #define WX_TDM_CTL 0x18000 /* TDM CTL BIT */ #define WX_TDM_CTL_TE BIT(0) /* Transmit Enable */ +#define WX_TDM_PB_THRE(_i) (0x18020 + ((_i) * 4)) /***************************** RDB registers *********************************/ /* receive packet buffer */ #define WX_RDB_PB_CTL 0x19000 #define WX_RDB_PB_CTL_RXEN BIT(31) /* Enable Receiver */ #define WX_RDB_PB_CTL_DISABLED BIT(0) +#define WX_RDB_PB_SZ(_i) (0x19020 + ((_i) * 4)) +#define WX_RDB_PB_SZ_SHIFT 10 /* statistic */ #define 
WX_RDB_PFCMACDAL 0x19210 #define WX_RDB_PFCMACDAH 0x19214 +/* ring assignment */ +#define WX_RDB_PL_CFG(_i) (0x19300 + ((_i) * 4)) +#define WX_RDB_PL_CFG_L4HDR BIT(1) +#define WX_RDB_PL_CFG_L3HDR BIT(2) +#define WX_RDB_PL_CFG_L2HDR BIT(3) +#define WX_RDB_PL_CFG_TUN_TUNHDR BIT(4) +#define WX_RDB_PL_CFG_TUN_OUTL2HDR BIT(5) /******************************* PSR Registers *******************************/ /* psr control */ @@ -96,10 +127,24 @@ #define WX_PSR_CTL_MO_SHIFT 5 #define WX_PSR_CTL_MO (0x3 << WX_PSR_CTL_MO_SHIFT) #define WX_PSR_CTL_TPE BIT(4) +#define WX_PSR_MAX_SZ 0x15020 +#define WX_PSR_VLAN_CTL 0x15088 +#define WX_PSR_VLAN_CTL_CFIEN BIT(29) /* bit 29 */ +#define WX_PSR_VLAN_CTL_VFE BIT(30) /* bit 30 */ /* mcasst/ucast overflow tbl */ #define WX_PSR_MC_TBL(_i) (0x15200 + ((_i) * 4)) #define WX_PSR_UC_TBL(_i) (0x15400 + ((_i) * 4)) +/* VM L2 contorl */ +#define WX_PSR_VM_L2CTL(_i) (0x15600 + ((_i) * 4)) +#define WX_PSR_VM_L2CTL_UPE BIT(4) /* unicast promiscuous */ +#define WX_PSR_VM_L2CTL_VACC BIT(6) /* accept nomatched vlan */ +#define WX_PSR_VM_L2CTL_AUPE BIT(8) /* accept untagged packets */ +#define WX_PSR_VM_L2CTL_ROMPE BIT(9) /* accept packets in MTA tbl */ +#define WX_PSR_VM_L2CTL_ROPE BIT(10) /* accept packets in UC tbl */ +#define WX_PSR_VM_L2CTL_BAM BIT(11) /* accept broadcast packets */ +#define WX_PSR_VM_L2CTL_MPE BIT(12) /* multicast promiscuous */ + /* Management */ #define WX_PSR_MNG_FLEX_SEL 0x1582C #define WX_PSR_MNG_FLEX_DW_L(_i) (0x15A00 + ((_i) * 16)) @@ -113,14 +158,35 @@ /* mac switcher */ #define WX_PSR_MAC_SWC_AD_L 0x16200 #define WX_PSR_MAC_SWC_AD_H 0x16204 -#define WX_PSR_MAC_SWC_AD_H_AD(v) (((v) & 0xFFFF)) -#define WX_PSR_MAC_SWC_AD_H_ADTYPE(v) (((v) & 0x1) << 30) +#define WX_PSR_MAC_SWC_AD_H_AD(v) FIELD_PREP(U16_MAX, v) +#define WX_PSR_MAC_SWC_AD_H_ADTYPE(v) FIELD_PREP(BIT(30), v) #define WX_PSR_MAC_SWC_AD_H_AV BIT(31) #define WX_PSR_MAC_SWC_VM_L 0x16208 #define WX_PSR_MAC_SWC_VM_H 0x1620C #define WX_PSR_MAC_SWC_IDX 0x16210 #define WX_CLEAR_VMDQ_ALL 0xFFFFFFFFU +/********************************* RSEC **************************************/ +/* general rsec */ +#define WX_RSC_CTL 0x17000 +#define WX_RSC_CTL_SAVE_MAC_ERR BIT(6) +#define WX_RSC_CTL_CRC_STRIP BIT(2) +#define WX_RSC_CTL_RX_DIS BIT(1) +#define WX_RSC_ST 0x17004 +#define WX_RSC_ST_RSEC_RDY BIT(0) + +/****************************** TDB ******************************************/ +#define WX_TDB_PB_SZ(_i) (0x1CC00 + ((_i) * 4)) +#define WX_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */ + +/****************************** TSEC *****************************************/ +/* Security Control Registers */ +#define WX_TSC_CTL 0x1D000 +#define WX_TSC_CTL_TX_DIS BIT(1) +#define WX_TSC_CTL_TSEC_DIS BIT(0) +#define WX_TSC_BUF_AE 0x1D00C +#define WX_TSC_BUF_AE_THR GENMASK(9, 0) + /************************************** MNG ********************************/ #define WX_MNG_SWFW_SYNC 0x1E008 #define WX_MNG_SWFW_SYNC_SW_MB BIT(2) @@ -133,11 +199,15 @@ /************************************* ETH MAC *****************************/ #define WX_MAC_TX_CFG 0x11000 #define WX_MAC_TX_CFG_TE BIT(0) +#define WX_MAC_TX_CFG_SPEED_MASK GENMASK(30, 29) +#define WX_MAC_TX_CFG_SPEED_10G FIELD_PREP(WX_MAC_TX_CFG_SPEED_MASK, 0) +#define WX_MAC_TX_CFG_SPEED_1G FIELD_PREP(WX_MAC_TX_CFG_SPEED_MASK, 3) #define WX_MAC_RX_CFG 0x11004 #define WX_MAC_RX_CFG_RE BIT(0) #define WX_MAC_RX_CFG_JE BIT(8) #define WX_MAC_PKT_FLT 0x11008 #define WX_MAC_PKT_FLT_PR BIT(0) /* promiscuous mode */ +#define WX_MAC_WDG_TIMEOUT 0x1100C #define 
WX_MAC_RX_FLOW_CTRL 0x11090 #define WX_MAC_RX_FLOW_CTRL_RFE BIT(0) /* receive fc enable */ #define WX_MMC_CONTROL 0x11800 @@ -147,10 +217,34 @@ /* Interrupt Registers */ #define WX_BME_CTL 0x12020 #define WX_PX_MISC_IC 0x100 +#define WX_PX_MISC_ICS 0x104 +#define WX_PX_MISC_IEN 0x108 +#define WX_PX_INTA 0x110 +#define WX_PX_GPIE 0x118 +#define WX_PX_GPIE_MODEL BIT(0) +#define WX_PX_IC 0x120 #define WX_PX_IMS(_i) (0x140 + (_i) * 4) +#define WX_PX_IMC(_i) (0x150 + (_i) * 4) +#define WX_PX_ISB_ADDR_L 0x160 +#define WX_PX_ISB_ADDR_H 0x164 #define WX_PX_TRANSACTION_PENDING 0x168 +#define WX_PX_ITRSEL 0x180 +#define WX_PX_ITR(_i) (0x200 + (_i) * 4) +#define WX_PX_ITR_CNT_WDIS BIT(31) +#define WX_PX_MISC_IVAR 0x4FC +#define WX_PX_IVAR(_i) (0x500 + (_i) * 4) + +#define WX_PX_IVAR_ALLOC_VAL 0x80 /* Interrupt Allocation valid */ +#define WX_7K_ITR 595 +#define WX_12K_ITR 336 +#define WX_SP_MAX_EITR 0x00000FF8U +#define WX_EM_MAX_EITR 0x00007FFCU /* transmit DMA Registers */ +#define WX_PX_TR_BAL(_i) (0x03000 + ((_i) * 0x40)) +#define WX_PX_TR_BAH(_i) (0x03004 + ((_i) * 0x40)) +#define WX_PX_TR_WP(_i) (0x03008 + ((_i) * 0x40)) +#define WX_PX_TR_RP(_i) (0x0300C + ((_i) * 0x40)) #define WX_PX_TR_CFG(_i) (0x03010 + ((_i) * 0x40)) /* Transmit Config masks */ #define WX_PX_TR_CFG_ENABLE BIT(0) /* Ena specific Tx Queue */ @@ -160,8 +254,22 @@ #define WX_PX_TR_CFG_THRE_SHIFT 8 /* Receive DMA Registers */ +#define WX_PX_RR_BAL(_i) (0x01000 + ((_i) * 0x40)) +#define WX_PX_RR_BAH(_i) (0x01004 + ((_i) * 0x40)) +#define WX_PX_RR_WP(_i) (0x01008 + ((_i) * 0x40)) +#define WX_PX_RR_RP(_i) (0x0100C + ((_i) * 0x40)) #define WX_PX_RR_CFG(_i) (0x01010 + ((_i) * 0x40)) /* PX_RR_CFG bit definitions */ +#define WX_PX_RR_CFG_SPLIT_MODE BIT(26) +#define WX_PX_RR_CFG_RR_THER_SHIFT 16 +#define WX_PX_RR_CFG_RR_HDR_SZ GENMASK(15, 12) +#define WX_PX_RR_CFG_RR_BUF_SZ GENMASK(11, 8) +#define WX_PX_RR_CFG_BHDRSIZE_SHIFT 6 /* 64byte resolution (>> 6) + * + at bit 8 offset (<< 12) + * = (<< 6) + */ +#define WX_PX_RR_CFG_BSIZEPKT_SHIFT 2 /* so many KBs */ +#define WX_PX_RR_CFG_RR_SIZE_SHIFT 1 #define WX_PX_RR_CFG_RR_EN BIT(0) /* Number of 80 microseconds we wait for PCI Express master disable */ @@ -185,6 +293,50 @@ #define WX_SW_REGION_PTR 0x1C +#define WX_MAC_STATE_DEFAULT 0x1 +#define WX_MAC_STATE_MODIFIED 0x2 +#define WX_MAC_STATE_IN_USE 0x4 + +#define WX_MAX_RXD 8192 +#define WX_MAX_TXD 8192 + +/* Supported Rx Buffer Sizes */ +#define WX_RXBUFFER_256 256 /* Used for skb receive header */ +#define WX_RXBUFFER_2K 2048 +#define WX_MAX_RXBUFFER 16384 /* largest size for single descriptor */ + +#if MAX_SKB_FRAGS < 8 +#define WX_RX_BUFSZ ALIGN(WX_MAX_RXBUFFER / MAX_SKB_FRAGS, 1024) +#else +#define WX_RX_BUFSZ WX_RXBUFFER_2K +#endif + +#define WX_RX_BUFFER_WRITE 16 /* Must be power of 2 */ + +#define WX_MAX_DATA_PER_TXD BIT(14) +/* Tx Descriptors needed, worst case */ +#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), WX_MAX_DATA_PER_TXD) +#define DESC_NEEDED (MAX_SKB_FRAGS + 4) + +/* Ether Types */ +#define WX_ETH_P_CNM 0x22E7 + +#define WX_CFG_PORT_ST 0x14404 + +/******************* Receive Descriptor bit definitions **********************/ +#define WX_RXD_STAT_DD BIT(0) /* Done */ +#define WX_RXD_STAT_EOP BIT(1) /* End of Packet */ + +#define WX_RXD_ERR_RXE BIT(29) /* Any MAC Error */ + +/*********************** Transmit Descriptor Config Masks ****************/ +#define WX_TXD_STAT_DD BIT(0) /* Descriptor Done */ +#define WX_TXD_DTYP_DATA 0 /* Adv Data Descriptor */ +#define WX_TXD_PAYLEN_SHIFT 13 /* Desc PAYLEN shift */ +#define 
WX_TXD_EOP BIT(24) /* End of Packet */ +#define WX_TXD_IFCS BIT(25) /* Insert FCS */ +#define WX_TXD_RS BIT(27) /* Report Status */ + /* Host Interface Command Structures */ struct wx_hic_hdr { u8 cmd; @@ -249,14 +401,23 @@ enum wx_mac_type { wx_mac_em }; +enum em_mac_type { + em_mac_type_unknown = 0, + em_mac_type_mdi, + em_mac_type_rgmii +}; + struct wx_mac_info { enum wx_mac_type type; bool set_lben; u8 addr[ETH_ALEN]; u8 perm_addr[ETH_ALEN]; + u32 mta_shadow[128]; s32 mc_filter_type; u32 mcft_size; u32 num_rar_entries; + u32 rx_pb_size; + u32 tx_pb_size; u32 max_tx_queues; u32 max_rx_queues; @@ -284,19 +445,183 @@ struct wx_addr_filter_info { bool user_set_promisc; }; +struct wx_mac_addr { + u8 addr[ETH_ALEN]; + u16 state; /* bitmask */ + u64 pools; +}; + enum wx_reset_type { WX_LAN_RESET = 0, WX_SW_RESET, WX_GLOBAL_RESET }; -struct wx_hw { +struct wx_cb { + dma_addr_t dma; + u16 append_cnt; /* number of skb's appended */ + bool page_released; + bool dma_released; +}; + +#define WX_CB(skb) ((struct wx_cb *)(skb)->cb) + +/* Transmit Descriptor */ +union wx_tx_desc { + struct { + __le64 buffer_addr; /* Address of descriptor's data buf */ + __le32 cmd_type_len; + __le32 olinfo_status; + } read; + struct { + __le64 rsvd; /* Reserved */ + __le32 nxtseq_seed; + __le32 status; + } wb; +}; + +/* Receive Descriptor */ +union wx_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + } read; + struct { + struct { + union { + __le32 data; + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen */ + } hs_rss; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + struct { + __le16 ip_id; /* IP id */ + __le16 csum; /* Packet Checksum */ + } csum_ip; + } hi_dword; + } lower; + struct { + __le32 status_error; /* ext status/error */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + } upper; + } wb; /* writeback */ +}; + +#define WX_RX_DESC(R, i) \ + (&(((union wx_rx_desc *)((R)->desc))[i])) +#define WX_TX_DESC(R, i) \ + (&(((union wx_tx_desc *)((R)->desc))[i])) + +/* wrapper around a pointer to a socket buffer, + * so a DMA handle can be stored along with the buffer + */ +struct wx_tx_buffer { + union wx_tx_desc *next_to_watch; + struct sk_buff *skb; + unsigned int bytecount; + unsigned short gso_segs; + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); +}; + +struct wx_rx_buffer { + struct sk_buff *skb; + dma_addr_t dma; + dma_addr_t page_dma; + struct page *page; + unsigned int page_offset; + u16 pagecnt_bias; +}; + +struct wx_queue_stats { + u64 packets; + u64 bytes; +}; + +/* iterator for handling rings in ring container */ +#define wx_for_each_ring(posm, headm) \ + for (posm = (headm).ring; posm; posm = posm->next) + +struct wx_ring_container { + struct wx_ring *ring; /* pointer to linked list of rings */ + unsigned int total_bytes; /* total bytes processed this int */ + unsigned int total_packets; /* total packets processed this int */ + u8 count; /* total number of rings in vector */ + u8 itr; /* current ITR setting for ring */ +}; + +struct wx_ring { + struct wx_ring *next; /* pointer to next ring in q_vector */ + struct wx_q_vector *q_vector; /* backpointer to host q_vector */ + struct net_device *netdev; /* netdev ring belongs to */ + struct device *dev; /* device for DMA mapping */ + struct page_pool *page_pool; + void *desc; /* descriptor ring memory */ + union { + struct wx_tx_buffer *tx_buffer_info; + struct wx_rx_buffer *rx_buffer_info; + }; + u8 __iomem *tail; + 
dma_addr_t dma; /* phys. address of descriptor ring */ + unsigned int size; /* length in bytes */ + + u16 count; /* amount of descriptors */ + + u8 queue_index; /* needed for multiqueue queue management */ + u8 reg_idx; /* holds the special value that gets + * the hardware register offset + * associated with this ring, which is + * different for DCB and RSS modes + */ + u16 next_to_use; + u16 next_to_clean; + u16 next_to_alloc; + + struct wx_queue_stats stats; + struct u64_stats_sync syncp; +} ____cacheline_internodealigned_in_smp; + +struct wx_q_vector { + struct wx *wx; + int cpu; /* CPU for DCA */ + int numa_node; + u16 v_idx; /* index of q_vector within array, also used for + * finding the bit in EICR and friends that + * represents the vector for this ring + */ + u16 itr; /* Interrupt throttle rate written to EITR */ + struct wx_ring_container rx, tx; + struct napi_struct napi; + struct rcu_head rcu; /* to avoid race with update stats on free */ + + char name[IFNAMSIZ + 17]; + + /* for dynamic allocation of rings associated with this q_vector */ + struct wx_ring ring[0] ____cacheline_internodealigned_in_smp; +}; + +enum wx_isb_idx { + WX_ISB_HEADER, + WX_ISB_MISC, + WX_ISB_VEC0, + WX_ISB_VEC1, + WX_ISB_MAX +}; + +struct wx { u8 __iomem *hw_addr; struct pci_dev *pdev; + struct net_device *netdev; struct wx_bus_info bus; struct wx_mac_info mac; + enum em_mac_type mac_type; struct wx_eeprom_info eeprom; struct wx_addr_filter_info addr_ctrl; + struct wx_mac_addr *mac_table; u16 device_id; u16 vendor_id; u16 subsystem_device_id; @@ -304,11 +629,63 @@ struct wx_hw { u8 revision_id; u16 oem_ssid; u16 oem_svid; + u16 msg_enable; bool adapter_stopped; + u16 tpid[8]; + char eeprom_id[32]; + char *driver_name; enum wx_reset_type reset_type; + + /* PHY stuff */ + unsigned int link; + int speed; + int duplex; + struct phy_device *phydev; + + bool wol_enabled; + bool ncsi_enabled; + bool gpio_ctrl; + + /* Tx fast path data */ + int num_tx_queues; + u16 tx_itr_setting; + u16 tx_work_limit; + + /* Rx fast path data */ + int num_rx_queues; + u16 rx_itr_setting; + u16 rx_work_limit; + + int num_q_vectors; /* current number of q_vectors for device */ + int max_q_vectors; /* upper limit of q_vectors for device */ + + u32 tx_ring_count; + u32 rx_ring_count; + + struct wx_ring *tx_ring[64] ____cacheline_aligned_in_smp; + struct wx_ring *rx_ring[64]; + struct wx_q_vector *q_vector[64]; + + unsigned int queues_per_pool; + struct msix_entry *msix_entries; + + /* misc interrupt status block */ + dma_addr_t isb_dma; + u32 *isb_mem; + u32 isb_tag[WX_ISB_MAX]; + +#define WX_MAX_RETA_ENTRIES 128 + u8 rss_indir_tbl[WX_MAX_RETA_ENTRIES]; + +#define WX_RSS_KEY_SIZE 40 /* size of RSS Hash Key in bytes */ + u32 *rss_key; + u32 wol; + + u16 bd_number; }; #define WX_INTR_ALL (~0ULL) +#define WX_INTR_Q(i) BIT(i) /* register operations */ #define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) @@ -319,23 +696,23 @@ struct wx_hw { wr32((a), (reg) + ((off) << 2), (val)) static inline u32 -rd32m(struct wx_hw *wxhw, u32 reg, u32 mask) +rd32m(struct wx *wx, u32 reg, u32 mask) { u32 val; - val = rd32(wxhw, reg); + val = rd32(wx, reg); return val & mask; } static inline void -wr32m(struct wx_hw *wxhw, u32 reg, u32 mask, u32 field) +wr32m(struct wx *wx, u32 reg, u32 mask, u32 field) { u32 val; - val = rd32(wxhw, reg); + val = rd32(wx, reg); val = ((val & ~mask) | (field & mask)); - wr32(wxhw, reg, val); + wr32(wx, reg, val); } /* On some domestic CPU platforms, sometimes IO is not synchronized with @@ -343,10 +720,10 
@@ wr32m(struct wx_hw *wxhw, u32 reg, u32 mask, u32 field) */ #define WX_WRITE_FLUSH(H) rd32(H, WX_MIS_PWR) -#define wx_err(wxhw, fmt, arg...) \ - dev_err(&(wxhw)->pdev->dev, fmt, ##arg) +#define wx_err(wx, fmt, arg...) \ + dev_err(&(wx)->pdev->dev, fmt, ##arg) -#define wx_dbg(wxhw, fmt, arg...) \ - dev_dbg(&(wxhw)->pdev->dev, fmt, ##arg) +#define wx_dbg(wx, fmt, arg...) \ + dev_dbg(&(wx)->pdev->dev, fmt, ##arg) #endif /* _WX_TYPE_H_ */ diff --git a/drivers/net/ethernet/wangxun/ngbe/Makefile b/drivers/net/ethernet/wangxun/ngbe/Makefile index 391c2cbc1bb4..61a13d98abe7 100644 --- a/drivers/net/ethernet/wangxun/ngbe/Makefile +++ b/drivers/net/ethernet/wangxun/ngbe/Makefile @@ -6,4 +6,4 @@ obj-$(CONFIG_NGBE) += ngbe.o -ngbe-objs := ngbe_main.o ngbe_hw.o +ngbe-objs := ngbe_main.o ngbe_hw.o ngbe_mdio.o ngbe_ethtool.o diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe.h b/drivers/net/ethernet/wangxun/ngbe/ngbe.h deleted file mode 100644 index af147ca8605c..000000000000 --- a/drivers/net/ethernet/wangxun/ngbe/ngbe.h +++ /dev/null @@ -1,79 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */ - -#ifndef _NGBE_H_ -#define _NGBE_H_ - -#include "ngbe_type.h" - -#define NGBE_MAX_FDIR_INDICES 7 - -#define NGBE_MAX_RX_QUEUES (NGBE_MAX_FDIR_INDICES + 1) -#define NGBE_MAX_TX_QUEUES (NGBE_MAX_FDIR_INDICES + 1) - -#define NGBE_ETH_LENGTH_OF_ADDRESS 6 -#define NGBE_MAX_MSIX_VECTORS 0x09 -#define NGBE_RAR_ENTRIES 32 - -/* TX/RX descriptor defines */ -#define NGBE_DEFAULT_TXD 512 /* default ring size */ -#define NGBE_DEFAULT_TX_WORK 256 -#define NGBE_MAX_TXD 8192 -#define NGBE_MIN_TXD 128 - -#define NGBE_DEFAULT_RXD 512 /* default ring size */ -#define NGBE_DEFAULT_RX_WORK 256 -#define NGBE_MAX_RXD 8192 -#define NGBE_MIN_RXD 128 - -#define NGBE_MAC_STATE_DEFAULT 0x1 -#define NGBE_MAC_STATE_MODIFIED 0x2 -#define NGBE_MAC_STATE_IN_USE 0x4 - -struct ngbe_mac_addr { - u8 addr[ETH_ALEN]; - u16 state; /* bitmask */ - u64 pools; -}; - -/* board specific private data structure */ -struct ngbe_adapter { - u8 __iomem *io_addr; /* Mainly for iounmap use */ - /* OS defined structs */ - struct net_device *netdev; - struct pci_dev *pdev; - - /* structs defined in ngbe_hw.h */ - struct ngbe_hw hw; - struct ngbe_mac_addr *mac_table; - u16 msg_enable; - - /* Tx fast path data */ - int num_tx_queues; - u16 tx_itr_setting; - u16 tx_work_limit; - - /* Rx fast path data */ - int num_rx_queues; - u16 rx_itr_setting; - u16 rx_work_limit; - - int num_q_vectors; /* current number of q_vectors for device */ - int max_q_vectors; /* upper limit of q_vectors for device */ - - u32 tx_ring_count; - u32 rx_ring_count; - -#define NGBE_MAX_RETA_ENTRIES 128 - u8 rss_indir_tbl[NGBE_MAX_RETA_ENTRIES]; - -#define NGBE_RSS_KEY_SIZE 40 /* size of RSS Hash Key in bytes */ - u32 *rss_key; - u32 wol; - - u16 bd_number; -}; - -extern char ngbe_driver_name[]; - -#endif /* _NGBE_H_ */ diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c new file mode 100644 index 000000000000..5b25834baf38 --- /dev/null +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c @@ -0,0 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. 
*/ + +#include <linux/pci.h> +#include <linux/phy.h> +#include <linux/netdevice.h> + +#include "../libwx/wx_ethtool.h" +#include "ngbe_ethtool.h" + +static const struct ethtool_ops ngbe_ethtool_ops = { + .get_drvinfo = wx_get_drvinfo, + .get_link = ethtool_op_get_link, + .get_link_ksettings = phy_ethtool_get_link_ksettings, + .set_link_ksettings = phy_ethtool_set_link_ksettings, + .nway_reset = phy_ethtool_nway_reset, +}; + +void ngbe_set_ethtool_ops(struct net_device *netdev) +{ + netdev->ethtool_ops = &ngbe_ethtool_ops; +} diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h new file mode 100644 index 000000000000..487074e0eeec --- /dev/null +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */ + +#ifndef _NGBE_ETHTOOL_H_ +#define _NGBE_ETHTOOL_H_ + +void ngbe_set_ethtool_ops(struct net_device *netdev); + +#endif /* _NGBE_ETHTOOL_H_ */ diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c index 0e3923b3737e..6562a2de9527 100644 --- a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c @@ -9,12 +9,10 @@ #include "../libwx/wx_hw.h" #include "ngbe_type.h" #include "ngbe_hw.h" -#include "ngbe.h" -int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw) +int ngbe_eeprom_chksum_hostif(struct wx *wx) { struct wx_hic_read_shadow_ram buffer; - struct wx_hw *wxhw = &hw->wxhw; int status; int tmp; @@ -27,61 +25,73 @@ int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw) /* one word */ buffer.length = 0; - status = wx_host_interface_command(wxhw, (u32 *)&buffer, sizeof(buffer), + status = wx_host_interface_command(wx, (u32 *)&buffer, sizeof(buffer), WX_HI_COMMAND_TIMEOUT, false); if (status < 0) return status; - tmp = rd32a(wxhw, WX_MNG_MBOX, 1); + tmp = rd32a(wx, WX_MNG_MBOX, 1); if (tmp == NGBE_FW_CMD_ST_PASS) return 0; return -EIO; } -static int ngbe_reset_misc(struct ngbe_hw *hw) +static int ngbe_reset_misc(struct wx *wx) { - struct wx_hw *wxhw = &hw->wxhw; - - wx_reset_misc(wxhw); - if (hw->mac_type == ngbe_mac_type_rgmii) - wr32(wxhw, NGBE_MDIO_CLAUSE_SELECT, 0xF); - if (hw->gpio_ctrl) { + wx_reset_misc(wx); + if (wx->gpio_ctrl) { /* gpio0 is used for power on/off control */ - wr32(wxhw, NGBE_GPIO_DDR, 0x1); - wr32(wxhw, NGBE_GPIO_DR, NGBE_GPIO_DR_0); + wr32(wx, NGBE_GPIO_DDR, 0x1); + ngbe_sfp_modules_txrx_powerctl(wx, false); } return 0; } +void ngbe_sfp_modules_txrx_powerctl(struct wx *wx, bool swi) +{ + /* gpio0 is used for power-on control; 0 means on */ + wr32(wx, NGBE_GPIO_DR, swi ? 0 : NGBE_GPIO_DR_0); +} + /** * ngbe_reset_hw - Perform hardware reset - * @hw: pointer to hardware structure + * @wx: pointer to hardware structure * * Resets the hardware by resetting the transmit and receive units, masks * and clears all interrupts, performs a PHY reset, and performs a link (MAC) * reset. 
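 *
 * Typical probe-path usage (a sketch mirroring the ngbe_probe() call site
 * further below):
 *
 *   err = ngbe_reset_hw(wx);
 *   if (err) {
 *           dev_err(&pdev->dev, "HW Init failed: %d\n", err);
 *           goto err_free_mac_table;
 *   }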
**/ -int ngbe_reset_hw(struct ngbe_hw *hw) +int ngbe_reset_hw(struct wx *wx) { - struct wx_hw *wxhw = &hw->wxhw; - int status = 0; - u32 reset = 0; + u32 val = 0; + int ret = 0; - /* Call adapter stop to disable tx/rx and clear interrupts */ - status = wx_stop_adapter(wxhw); - if (status != 0) - return status; - reset = WX_MIS_RST_LAN_RST(wxhw->bus.func); - wr32(wxhw, WX_MIS_RST, reset | rd32(wxhw, WX_MIS_RST)); - ngbe_reset_misc(hw); + /* Call wx stop to disable tx/rx and clear interrupts */ + ret = wx_stop_adapter(wx); + if (ret != 0) + return ret; + + if (wx->mac_type != em_mac_type_mdi) { + val = WX_MIS_RST_LAN_RST(wx->bus.func); + wr32(wx, WX_MIS_RST, val | rd32(wx, WX_MIS_RST)); + + ret = read_poll_timeout(rd32, val, + !(val & (BIT(9) << wx->bus.func)), 1000, + 100000, false, wx, 0x10028); + if (ret) { + wx_err(wx, "LAN reset timed out.\n"); + return ret; + } + } + ngbe_reset_misc(wx); /* Store the permanent mac address */ - wx_get_mac_addr(wxhw, wxhw->mac.perm_addr); + wx_get_mac_addr(wx, wx->mac.perm_addr); /* reset num_rar_entries to 128 */ - wxhw->mac.num_rar_entries = NGBE_RAR_ENTRIES; - wx_init_rx_addrs(wxhw); - pci_set_master(wxhw->pdev); + wx->mac.num_rar_entries = NGBE_RAR_ENTRIES; + wx_init_rx_addrs(wx); + pci_set_master(wx->pdev); return 0; } diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h index 42476a3fe57c..a4693e006816 100644 --- a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.h @@ -7,6 +7,7 @@ #ifndef _NGBE_HW_H_ #define _NGBE_HW_H_ -int ngbe_eeprom_chksum_hostif(struct ngbe_hw *hw); -int ngbe_reset_hw(struct ngbe_hw *hw); +int ngbe_eeprom_chksum_hostif(struct wx *wx); +void ngbe_sfp_modules_txrx_powerctl(struct wx *wx, bool swi); +int ngbe_reset_hw(struct wx *wx); #endif /* _NGBE_HW_H_ */ diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c index f0b24366da18..5b564d348c09 100644 --- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c @@ -9,12 +9,16 @@ #include <linux/aer.h> #include <linux/etherdevice.h> #include <net/ip.h> +#include <linux/phy.h> #include "../libwx/wx_type.h" #include "../libwx/wx_hw.h" +#include "../libwx/wx_lib.h" #include "ngbe_type.h" +#include "ngbe_mdio.h" #include "ngbe_hw.h" -#include "ngbe.h" +#include "ngbe_ethtool.h" + char ngbe_driver_name[] = "ngbe"; /* ngbe_pci_tbl - PCI Device ID Table @@ -39,70 +43,27 @@ static const struct pci_device_id ngbe_pci_tbl[] = { { .device = 0 } }; -static void ngbe_mac_set_default_filter(struct ngbe_adapter *adapter, u8 *addr) -{ - struct ngbe_hw *hw = &adapter->hw; - - memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN); - adapter->mac_table[0].pools = 1ULL; - adapter->mac_table[0].state = (NGBE_MAC_STATE_DEFAULT | - NGBE_MAC_STATE_IN_USE); - wx_set_rar(&hw->wxhw, 0, adapter->mac_table[0].addr, - adapter->mac_table[0].pools, - WX_PSR_MAC_SWC_AD_H_AV); -} - /** * ngbe_init_type_code - Initialize the shared code - * @hw: pointer to hardware structure + * @wx: pointer to hardware structure **/ -static void ngbe_init_type_code(struct ngbe_hw *hw) +static void ngbe_init_type_code(struct wx *wx) { int wol_mask = 0, ncsi_mask = 0; - struct wx_hw *wxhw = &hw->wxhw; - u16 type_mask = 0; + u16 type_mask = 0, val; - wxhw->mac.type = wx_mac_em; - type_mask = (u16)(wxhw->subsystem_device_id & NGBE_OEM_MASK); - ncsi_mask = wxhw->subsystem_device_id & NGBE_NCSI_MASK; - wol_mask = wxhw->subsystem_device_id & 
NGBE_WOL_MASK; - - switch (type_mask) { - case NGBE_SUBID_M88E1512_SFP: - case NGBE_SUBID_LY_M88E1512_SFP: - hw->phy.type = ngbe_phy_m88e1512_sfi; - break; - case NGBE_SUBID_M88E1512_RJ45: - hw->phy.type = ngbe_phy_m88e1512; - break; - case NGBE_SUBID_M88E1512_MIX: - hw->phy.type = ngbe_phy_m88e1512_unknown; - break; - case NGBE_SUBID_YT8521S_SFP: - case NGBE_SUBID_YT8521S_SFP_GPIO: - case NGBE_SUBID_LY_YT8521S_SFP: - hw->phy.type = ngbe_phy_yt8521s_sfi; - break; - case NGBE_SUBID_INTERNAL_YT8521S_SFP: - case NGBE_SUBID_INTERNAL_YT8521S_SFP_GPIO: - hw->phy.type = ngbe_phy_internal_yt8521s_sfi; - break; - case NGBE_SUBID_RGMII_FPGA: - case NGBE_SUBID_OCP_CARD: - fallthrough; - default: - hw->phy.type = ngbe_phy_internal; - break; - } + wx->mac.type = wx_mac_em; + type_mask = (u16)(wx->subsystem_device_id & NGBE_OEM_MASK); + ncsi_mask = wx->subsystem_device_id & NGBE_NCSI_MASK; + wol_mask = wx->subsystem_device_id & NGBE_WOL_MASK; - if (hw->phy.type == ngbe_phy_internal || - hw->phy.type == ngbe_phy_internal_yt8521s_sfi) - hw->mac_type = ngbe_mac_type_mdi; - else - hw->mac_type = ngbe_mac_type_rgmii; + val = rd32(wx, WX_CFG_PORT_ST); + wx->mac_type = (val & BIT(7)) >> 7 ? + em_mac_type_rgmii : + em_mac_type_mdi; - hw->wol_enabled = (wol_mask == NGBE_WOL_SUP) ? 1 : 0; - hw->ncsi_enabled = (ncsi_mask == NGBE_NCSI_MASK || + wx->wol_enabled = (wol_mask == NGBE_WOL_SUP) ? 1 : 0; + wx->ncsi_enabled = (ncsi_mask == NGBE_NCSI_MASK || type_mask == NGBE_SUBID_OCP_CARD) ? 1 : 0; switch (type_mask) { @@ -110,31 +71,31 @@ static void ngbe_init_type_code(struct ngbe_hw *hw) case NGBE_SUBID_LY_M88E1512_SFP: case NGBE_SUBID_YT8521S_SFP_GPIO: case NGBE_SUBID_INTERNAL_YT8521S_SFP_GPIO: - hw->gpio_ctrl = 1; + wx->gpio_ctrl = 1; break; default: - hw->gpio_ctrl = 0; + wx->gpio_ctrl = 0; break; } } /** - * ngbe_init_rss_key - Initialize adapter RSS key - * @adapter: device handle + * ngbe_init_rss_key - Initialize wx RSS key + * @wx: device handle * * Allocates and initializes the RSS key if it is not allocated. 
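 *
 * The key is allocated once (WX_RSS_KEY_SIZE, 40 bytes) and kept for the
 * lifetime of the device. Usage sketch, as called from ngbe_sw_init():
 *
 *   if (ngbe_init_rss_key(wx))
 *           return -ENOMEM;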
**/ -static inline int ngbe_init_rss_key(struct ngbe_adapter *adapter) +static inline int ngbe_init_rss_key(struct wx *wx) { u32 *rss_key; - if (!adapter->rss_key) { - rss_key = kzalloc(NGBE_RSS_KEY_SIZE, GFP_KERNEL); + if (!wx->rss_key) { + rss_key = kzalloc(WX_RSS_KEY_SIZE, GFP_KERNEL); if (unlikely(!rss_key)) return -ENOMEM; - netdev_rss_key_fill(rss_key, NGBE_RSS_KEY_SIZE); - adapter->rss_key = rss_key; + netdev_rss_key_fill(rss_key, WX_RSS_KEY_SIZE); + wx->rss_key = rss_key; } return 0; @@ -142,72 +103,263 @@ static inline int ngbe_init_rss_key(struct ngbe_adapter *adapter) /** * ngbe_sw_init - Initialize general software structures - * @adapter: board private structure to initialize + * @wx: board private structure to initialize **/ -static int ngbe_sw_init(struct ngbe_adapter *adapter) +static int ngbe_sw_init(struct wx *wx) { - struct pci_dev *pdev = adapter->pdev; - struct ngbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; + struct pci_dev *pdev = wx->pdev; u16 msix_count = 0; int err = 0; - wxhw->hw_addr = adapter->io_addr; - wxhw->pdev = pdev; + wx->mac.num_rar_entries = NGBE_RAR_ENTRIES; + wx->mac.max_rx_queues = NGBE_MAX_RX_QUEUES; + wx->mac.max_tx_queues = NGBE_MAX_TX_QUEUES; + wx->mac.mcft_size = NGBE_MC_TBL_SIZE; + wx->mac.rx_pb_size = NGBE_RX_PB_SIZE; + wx->mac.tx_pb_size = NGBE_TDB_PB_SZ; /* PCI config space info */ - err = wx_sw_init(wxhw); + err = wx_sw_init(wx); if (err < 0) { - netif_err(adapter, probe, adapter->netdev, - "Read of internal subsystem device id failed\n"); + wx_err(wx, "read of internal subsystem device id failed\n"); return err; } /* mac type, phy type , oem type */ - ngbe_init_type_code(hw); + ngbe_init_type_code(wx); - wxhw->mac.max_rx_queues = NGBE_MAX_RX_QUEUES; - wxhw->mac.max_tx_queues = NGBE_MAX_TX_QUEUES; - wxhw->mac.num_rar_entries = NGBE_RAR_ENTRIES; /* Set common capability flags and settings */ - adapter->max_q_vectors = NGBE_MAX_MSIX_VECTORS; - - err = wx_get_pcie_msix_counts(wxhw, &msix_count, NGBE_MAX_MSIX_VECTORS); + wx->max_q_vectors = NGBE_MAX_MSIX_VECTORS; + err = wx_get_pcie_msix_counts(wx, &msix_count, NGBE_MAX_MSIX_VECTORS); if (err) dev_err(&pdev->dev, "Do not support MSI-X\n"); - wxhw->mac.max_msix_vectors = msix_count; + wx->mac.max_msix_vectors = msix_count; - adapter->mac_table = kcalloc(wxhw->mac.num_rar_entries, - sizeof(struct ngbe_mac_addr), - GFP_KERNEL); - if (!adapter->mac_table) { - dev_err(&pdev->dev, "mac_table allocation failed: %d\n", err); - return -ENOMEM; - } - - if (ngbe_init_rss_key(adapter)) + if (ngbe_init_rss_key(wx)) return -ENOMEM; /* enable itr by default in dynamic mode */ - adapter->rx_itr_setting = 1; - adapter->tx_itr_setting = 1; + wx->rx_itr_setting = 1; + wx->tx_itr_setting = 1; /* set default ring sizes */ - adapter->tx_ring_count = NGBE_DEFAULT_TXD; - adapter->rx_ring_count = NGBE_DEFAULT_RXD; + wx->tx_ring_count = NGBE_DEFAULT_TXD; + wx->rx_ring_count = NGBE_DEFAULT_RXD; /* set default work limits */ - adapter->tx_work_limit = NGBE_DEFAULT_TX_WORK; - adapter->rx_work_limit = NGBE_DEFAULT_RX_WORK; + wx->tx_work_limit = NGBE_DEFAULT_TX_WORK; + wx->rx_work_limit = NGBE_DEFAULT_RX_WORK; return 0; } -static void ngbe_down(struct ngbe_adapter *adapter) +/** + * ngbe_irq_enable - Enable default interrupt generation settings + * @wx: board private structure + * @queues: enable all queues interrupts + **/ +static void ngbe_irq_enable(struct wx *wx, bool queues) { - netif_carrier_off(adapter->netdev); - netif_tx_disable(adapter->netdev); -}; + u32 mask; + + /* enable misc interrupt */ + mask 
= NGBE_PX_MISC_IEN_MASK; + + wr32(wx, WX_GPIO_DDR, WX_GPIO_DDR_0); + wr32(wx, WX_GPIO_INTEN, WX_GPIO_INTEN_0 | WX_GPIO_INTEN_1); + wr32(wx, WX_GPIO_INTTYPE_LEVEL, 0x0); + wr32(wx, WX_GPIO_POLARITY, wx->gpio_ctrl ? 0 : 0x3); + + wr32(wx, WX_PX_MISC_IEN, mask); + + /* unmask interrupts */ + if (queues) + wx_intr_enable(wx, NGBE_INTR_ALL); + else + wx_intr_enable(wx, NGBE_INTR_MISC(wx)); +} + +/** + * ngbe_intr - msi/legacy mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a network interface device structure + **/ +static irqreturn_t ngbe_intr(int __always_unused irq, void *data) +{ + struct wx_q_vector *q_vector; + struct wx *wx = data; + struct pci_dev *pdev; + u32 eicr; + + q_vector = wx->q_vector[0]; + pdev = wx->pdev; + + eicr = wx_misc_isb(wx, WX_ISB_VEC0); + if (!eicr) { + /* shared interrupt alert! + * re-enable the interrupt we masked before the EICR read. + */ + if (netif_running(wx->netdev)) + ngbe_irq_enable(wx, true); + return IRQ_NONE; /* Not our interrupt */ + } + wx->isb_mem[WX_ISB_VEC0] = 0; + if (!(pdev->msi_enabled)) + wr32(wx, WX_PX_INTA, 1); + + wx->isb_mem[WX_ISB_MISC] = 0; + /* would disable interrupts here but it is auto disabled */ + napi_schedule_irqoff(&q_vector->napi); + + if (netif_running(wx->netdev)) + ngbe_irq_enable(wx, false); + + return IRQ_HANDLED; +} + +static irqreturn_t ngbe_msix_other(int __always_unused irq, void *data) +{ + struct wx *wx = data; + + /* re-enable the original interrupt state, no lsc, no queues */ + if (netif_running(wx->netdev)) + ngbe_irq_enable(wx, false); + + return IRQ_HANDLED; +} + +/** + * ngbe_request_msix_irqs - Initialize MSI-X interrupts + * @wx: board private structure + * + * ngbe_request_msix_irqs allocates MSI-X vectors and requests + * interrupts from the kernel. + **/ +static int ngbe_request_msix_irqs(struct wx *wx) +{ + struct net_device *netdev = wx->netdev; + int vector, err; + + for (vector = 0; vector < wx->num_q_vectors; vector++) { + struct wx_q_vector *q_vector = wx->q_vector[vector]; + struct msix_entry *entry = &wx->msix_entries[vector]; + + if (q_vector->tx.ring && q_vector->rx.ring) + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-TxRx-%d", netdev->name, entry->entry); + else + /* skip this unused q_vector */ + continue; + + err = request_irq(entry->vector, wx_msix_clean_rings, 0, + q_vector->name, q_vector); + if (err) { + wx_err(wx, "request_irq failed for MSIX interrupt %s Error: %d\n", + q_vector->name, err); + goto free_queue_irqs; + } + } + + err = request_irq(wx->msix_entries[vector].vector, + ngbe_msix_other, 0, netdev->name, wx); + + if (err) { + wx_err(wx, "request_irq for msix_other failed: %d\n", err); + goto free_queue_irqs; + } + + return 0; + +free_queue_irqs: + while (vector) { + vector--; + free_irq(wx->msix_entries[vector].vector, + wx->q_vector[vector]); + } + wx_reset_interrupt_capability(wx); + return err; +} + +/** + * ngbe_request_irq - initialize interrupts + * @wx: board private structure + * + * Attempts to configure interrupts using the best available + * capabilities of the hardware and kernel. 
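+ *
+ * Order of preference (summarizing the body below): MSI-X, with one vector
+ * per queue pair plus the ngbe_msix_other() misc vector; then plain MSI;
+ * then a shared legacy IRQ handled entirely by ngbe_intr().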
+ **/ +static int ngbe_request_irq(struct wx *wx) +{ + struct net_device *netdev = wx->netdev; + struct pci_dev *pdev = wx->pdev; + int err; + + if (pdev->msix_enabled) + err = ngbe_request_msix_irqs(wx); + else if (pdev->msi_enabled) + err = request_irq(pdev->irq, ngbe_intr, 0, + netdev->name, wx); + else + err = request_irq(pdev->irq, ngbe_intr, IRQF_SHARED, + netdev->name, wx); + + if (err) + wx_err(wx, "request_irq failed, Error %d\n", err); + + return err; +} + +static void ngbe_disable_device(struct wx *wx) +{ + struct net_device *netdev = wx->netdev; + u32 i; + + /* disable all enabled rx queues */ + for (i = 0; i < wx->num_rx_queues; i++) + /* this call also flushes the previous write */ + wx_disable_rx_queue(wx, wx->rx_ring[i]); + /* disable receives */ + wx_disable_rx(wx); + wx_napi_disable_all(wx); + netif_tx_stop_all_queues(netdev); + netif_tx_disable(netdev); + if (wx->gpio_ctrl) + ngbe_sfp_modules_txrx_powerctl(wx, false); + wx_irq_disable(wx); + /* disable transmits in the hardware now that interrupts are off */ + for (i = 0; i < wx->num_tx_queues; i++) { + u8 reg_idx = wx->tx_ring[i]->reg_idx; + + wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH); + } +} + +static void ngbe_down(struct wx *wx) +{ + phy_stop(wx->phydev); + ngbe_disable_device(wx); + wx_clean_all_tx_rings(wx); + wx_clean_all_rx_rings(wx); +} + +static void ngbe_up(struct wx *wx) +{ + wx_configure_vectors(wx); + + /* make sure to complete pre-operations */ + smp_mb__before_atomic(); + wx_napi_enable_all(wx); + /* enable transmits */ + netif_tx_start_all_queues(wx->netdev); + + /* clear any pending interrupts, may auto mask */ + rd32(wx, WX_PX_IC); + rd32(wx, WX_PX_MISC_IC); + ngbe_irq_enable(wx, true); + if (wx->gpio_ctrl) + ngbe_sfp_modules_txrx_powerctl(wx, true); + + phy_start(wx->phydev); +} /** * ngbe_open - Called when a network interface is made active @@ -220,13 +372,43 @@ static void ngbe_down(struct ngbe_adapter *adapter) **/ static int ngbe_open(struct net_device *netdev) { - struct ngbe_adapter *adapter = netdev_priv(netdev); - struct ngbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; + struct wx *wx = netdev_priv(netdev); + int err; + + wx_control_hw(wx, true); + + err = wx_setup_resources(wx); + if (err) + return err; + + wx_configure(wx); + + err = ngbe_request_irq(wx); + if (err) + goto err_free_resources; + + err = ngbe_phy_connect(wx); + if (err) + goto err_free_irq; + + err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues); + if (err) + goto err_dis_phy; + + err = netif_set_real_num_rx_queues(netdev, wx->num_rx_queues); + if (err) + goto err_dis_phy; - wx_control_hw(wxhw, true); + ngbe_up(wx); return 0; +err_dis_phy: + phy_disconnect(wx->phydev); +err_free_irq: + wx_free_irq(wx); +err_free_resources: + wx_free_resources(wx); + return err; } /** @@ -242,66 +424,40 @@ static int ngbe_open(struct net_device *netdev) **/ static int ngbe_close(struct net_device *netdev) { - struct ngbe_adapter *adapter = netdev_priv(netdev); - - ngbe_down(adapter); - wx_control_hw(&adapter->hw.wxhw, false); - - return 0; -} - -static netdev_tx_t ngbe_xmit_frame(struct sk_buff *skb, - struct net_device *netdev) -{ - return NETDEV_TX_OK; -} - -/** - * ngbe_set_mac - Change the Ethernet Address of the NIC - * @netdev: network interface device structure - * @p: pointer to an address structure - * - * Returns 0 on success, negative on failure - **/ -static int ngbe_set_mac(struct net_device *netdev, void *p) -{ - struct ngbe_adapter *adapter = netdev_priv(netdev); - struct wx_hw *wxhw = 
&adapter->hw.wxhw; - struct sockaddr *addr = p; + struct wx *wx = netdev_priv(netdev); - if (!is_valid_ether_addr(addr->sa_data)) - return -EADDRNOTAVAIL; - - eth_hw_addr_set(netdev, addr->sa_data); - memcpy(wxhw->mac.addr, addr->sa_data, netdev->addr_len); - - ngbe_mac_set_default_filter(adapter, wxhw->mac.addr); + ngbe_down(wx); + wx_free_irq(wx); + wx_free_resources(wx); + phy_disconnect(wx->phydev); + wx_control_hw(wx, false); return 0; } static void ngbe_dev_shutdown(struct pci_dev *pdev, bool *enable_wake) { - struct ngbe_adapter *adapter = pci_get_drvdata(pdev); - struct net_device *netdev = adapter->netdev; + struct wx *wx = pci_get_drvdata(pdev); + struct net_device *netdev; + netdev = wx->netdev; netif_device_detach(netdev); rtnl_lock(); if (netif_running(netdev)) - ngbe_down(adapter); + ngbe_down(wx); rtnl_unlock(); - wx_control_hw(&adapter->hw.wxhw, false); + wx_control_hw(wx, false); pci_disable_device(pdev); } static void ngbe_shutdown(struct pci_dev *pdev) { - struct ngbe_adapter *adapter = pci_get_drvdata(pdev); + struct wx *wx = pci_get_drvdata(pdev); bool wake; - wake = !!adapter->wol; + wake = !!wx->wol; ngbe_dev_shutdown(pdev, &wake); @@ -314,9 +470,11 @@ static void ngbe_shutdown(struct pci_dev *pdev) static const struct net_device_ops ngbe_netdev_ops = { .ndo_open = ngbe_open, .ndo_stop = ngbe_close, - .ndo_start_xmit = ngbe_xmit_frame, + .ndo_start_xmit = wx_xmit_frame, + .ndo_set_rx_mode = wx_set_rx_mode, .ndo_validate_addr = eth_validate_addr, - .ndo_set_mac_address = ngbe_set_mac, + .ndo_set_mac_address = wx_set_mac, + .ndo_get_stats64 = wx_get_stats64, }; /** @@ -326,18 +484,16 @@ static const struct net_device_ops ngbe_netdev_ops = { * * Returns 0 on success, negative on failure * - * ngbe_probe initializes an adapter identified by a pci_dev structure. - * The OS initialization, configuring of the adapter private structure, + * ngbe_probe initializes a wx identified by a pci_dev structure. + * The OS initialization, configuring of the wx private structure, * and a hardware reset occur. 
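 *
 * Simplified flow (a sketch of the body below): map BAR0, ngbe_sw_init(),
 * check flash load and management presence, ngbe_reset_hw(), record the
 * EEPROM/option-ROM id, wx_init_interrupt_scheme(), ngbe_mdio_init(), and
 * finally register_netdev().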
**/ static int ngbe_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) { - struct ngbe_adapter *adapter = NULL; - struct ngbe_hw *hw = NULL; - struct wx_hw *wxhw = NULL; struct net_device *netdev; u32 e2rom_cksum_cap = 0; + struct wx *wx = NULL; static int func_nums; u16 e2rom_ver = 0; u32 etrack_id = 0; @@ -368,7 +524,7 @@ static int ngbe_probe(struct pci_dev *pdev, pci_set_master(pdev); netdev = devm_alloc_etherdev_mqs(&pdev->dev, - sizeof(struct ngbe_adapter), + sizeof(struct wx), NGBE_MAX_TX_QUEUES, NGBE_MAX_RX_QUEUES); if (!netdev) { @@ -378,63 +534,74 @@ static int ngbe_probe(struct pci_dev *pdev, SET_NETDEV_DEV(netdev, &pdev->dev); - adapter = netdev_priv(netdev); - adapter->netdev = netdev; - adapter->pdev = pdev; - hw = &adapter->hw; - wxhw = &hw->wxhw; - adapter->msg_enable = BIT(3) - 1; - - adapter->io_addr = devm_ioremap(&pdev->dev, - pci_resource_start(pdev, 0), - pci_resource_len(pdev, 0)); - if (!adapter->io_addr) { + wx = netdev_priv(netdev); + wx->netdev = netdev; + wx->pdev = pdev; + wx->msg_enable = BIT(3) - 1; + + wx->hw_addr = devm_ioremap(&pdev->dev, + pci_resource_start(pdev, 0), + pci_resource_len(pdev, 0)); + if (!wx->hw_addr) { err = -EIO; goto err_pci_release_regions; } + wx->driver_name = ngbe_driver_name; + ngbe_set_ethtool_ops(netdev); netdev->netdev_ops = &ngbe_netdev_ops; netdev->features |= NETIF_F_HIGHDMA; + netdev->features = NETIF_F_SG; - adapter->bd_number = func_nums; + /* copy netdev features into list of user selectable features */ + netdev->hw_features |= netdev->features | + NETIF_F_RXALL; + + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->priv_flags |= IFF_SUPP_NOFCS; + + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = NGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN); + + wx->bd_number = func_nums; /* setup the private structure */ - err = ngbe_sw_init(adapter); + err = ngbe_sw_init(wx); if (err) goto err_free_mac_table; /* check if flash load is done after hw power up */ - err = wx_check_flash_load(wxhw, NGBE_SPI_ILDR_STATUS_PERST); + err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PERST); if (err) goto err_free_mac_table; - err = wx_check_flash_load(wxhw, NGBE_SPI_ILDR_STATUS_PWRRST); + err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PWRRST); if (err) goto err_free_mac_table; - err = wx_mng_present(wxhw); + err = wx_mng_present(wx); if (err) { dev_err(&pdev->dev, "Management capability is not present\n"); goto err_free_mac_table; } - err = ngbe_reset_hw(hw); + err = ngbe_reset_hw(wx); if (err) { dev_err(&pdev->dev, "HW Init failed: %d\n", err); goto err_free_mac_table; } - if (wxhw->bus.func == 0) { - wr32(wxhw, NGBE_CALSUM_CAP_STATUS, 0x0); - wr32(wxhw, NGBE_EEPROM_VERSION_STORE_REG, 0x0); + if (wx->bus.func == 0) { + wr32(wx, NGBE_CALSUM_CAP_STATUS, 0x0); + wr32(wx, NGBE_EEPROM_VERSION_STORE_REG, 0x0); } else { - e2rom_cksum_cap = rd32(wxhw, NGBE_CALSUM_CAP_STATUS); - saved_ver = rd32(wxhw, NGBE_EEPROM_VERSION_STORE_REG); + e2rom_cksum_cap = rd32(wx, NGBE_CALSUM_CAP_STATUS); + saved_ver = rd32(wx, NGBE_EEPROM_VERSION_STORE_REG); } - wx_init_eeprom_params(wxhw); - if (wxhw->bus.func == 0 || e2rom_cksum_cap == 0) { + wx_init_eeprom_params(wx); + if (wx->bus.func == 0 || e2rom_cksum_cap == 0) { /* make sure the EEPROM is ready */ - err = ngbe_eeprom_chksum_hostif(hw); + err = ngbe_eeprom_chksum_hostif(wx); if (err) { dev_err(&pdev->dev, "The EEPROM Checksum Is Not Valid\n"); err = -EIO; @@ -442,14 +609,14 @@ static int ngbe_probe(struct pci_dev *pdev, } } - adapter->wol = 0; - if (hw->wol_enabled) - 
adapter->wol = NGBE_PSR_WKUP_CTL_MAG; + wx->wol = 0; + if (wx->wol_enabled) + wx->wol = NGBE_PSR_WKUP_CTL_MAG; - hw->wol_enabled = !!(adapter->wol); - wr32(wxhw, NGBE_PSR_WKUP_CTL, adapter->wol); + wx->wol_enabled = !!(wx->wol); + wr32(wx, NGBE_PSR_WKUP_CTL, wx->wol); - device_set_wakeup_enable(&pdev->dev, adapter->wol); + device_set_wakeup_enable(&pdev->dev, wx->wol); /* Save off EEPROM version number and Option Rom version which * together make a unique identifier for the eeprom @@ -457,37 +624,50 @@ static int ngbe_probe(struct pci_dev *pdev, if (saved_ver) { etrack_id = saved_ver; } else { - wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_H, + wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_H, &e2rom_ver); etrack_id = e2rom_ver << 16; - wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_L, + wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + NGBE_EEPROM_VERSION_L, &e2rom_ver); etrack_id |= e2rom_ver; - wr32(wxhw, NGBE_EEPROM_VERSION_STORE_REG, etrack_id); + wr32(wx, NGBE_EEPROM_VERSION_STORE_REG, etrack_id); } + snprintf(wx->eeprom_id, sizeof(wx->eeprom_id), + "0x%08x", etrack_id); + + eth_hw_addr_set(netdev, wx->mac.perm_addr); + wx_mac_set_default_filter(wx, wx->mac.perm_addr); - eth_hw_addr_set(netdev, wxhw->mac.perm_addr); - ngbe_mac_set_default_filter(adapter, wxhw->mac.perm_addr); + err = wx_init_interrupt_scheme(wx); + if (err) + goto err_free_mac_table; + + /* PHY interface configuration */ + err = ngbe_mdio_init(wx); + if (err) + goto err_clear_interrupt_scheme; err = register_netdev(netdev); if (err) goto err_register; - pci_set_drvdata(pdev, adapter); + pci_set_drvdata(pdev, wx); - netif_info(adapter, probe, netdev, + netif_info(wx, probe, netdev, "PHY: %s, PBA No: Wang Xun GbE Family Controller\n", - hw->phy.type == ngbe_phy_internal ? "Internal" : "External"); - netif_info(adapter, probe, netdev, "%pM\n", netdev->dev_addr); + wx->mac_type == em_mac_type_mdi ? "Internal" : "External"); + netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr); return 0; err_register: - wx_control_hw(wxhw, false); + wx_control_hw(wx, false); +err_clear_interrupt_scheme: + wx_clear_interrupt_scheme(wx); err_free_mac_table: - kfree(adapter->mac_table); + kfree(wx->mac_table); err_pci_release_regions: pci_disable_pcie_error_reporting(pdev); pci_release_selected_regions(pdev, @@ -508,15 +688,16 @@ err_pci_disable_dev: **/ static void ngbe_remove(struct pci_dev *pdev) { - struct ngbe_adapter *adapter = pci_get_drvdata(pdev); + struct wx *wx = pci_get_drvdata(pdev); struct net_device *netdev; - netdev = adapter->netdev; + netdev = wx->netdev; unregister_netdev(netdev); pci_release_selected_regions(pdev, pci_select_bars(pdev, IORESOURCE_MEM)); - kfree(adapter->mac_table); + kfree(wx->mac_table); + wx_clear_interrupt_scheme(wx); pci_disable_pcie_error_reporting(pdev); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c new file mode 100644 index 000000000000..c9ddbbc3fa4f --- /dev/null +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c @@ -0,0 +1,286 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. 
*/ + +#include <linux/ethtool.h> +#include <linux/iopoll.h> +#include <linux/pci.h> +#include <linux/phy.h> + +#include "../libwx/wx_type.h" +#include "../libwx/wx_hw.h" +#include "ngbe_type.h" +#include "ngbe_mdio.h" + +static int ngbe_phy_read_reg_internal(struct mii_bus *bus, int phy_addr, int regnum) +{ + struct wx *wx = bus->priv; + + if (phy_addr != 0) + return 0xffff; + return (u16)rd32(wx, NGBE_PHY_CONFIG(regnum)); +} + +static int ngbe_phy_write_reg_internal(struct mii_bus *bus, int phy_addr, int regnum, u16 value) +{ + struct wx *wx = bus->priv; + + if (phy_addr == 0) + wr32(wx, NGBE_PHY_CONFIG(regnum), value); + return 0; +} + +static int ngbe_phy_read_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum) +{ + u32 command, val, device_type = 0; + struct wx *wx = bus->priv; + int ret; + + wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF); + /* setup and write the address cycle command */ + command = NGBE_MSCA_RA(regnum) | + NGBE_MSCA_PA(phy_addr) | + NGBE_MSCA_DA(device_type); + wr32(wx, NGBE_MSCA, command); + command = NGBE_MSCC_CMD(NGBE_MSCA_CMD_READ) | + NGBE_MSCC_BUSY | + NGBE_MDIO_CLK(6); + wr32(wx, NGBE_MSCC, command); + + /* wait to complete */ + ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000, + 100000, false, wx, NGBE_MSCC); + if (ret) { + wx_err(wx, "Mdio read c22 command did not complete.\n"); + return ret; + } + + return (u16)rd32(wx, NGBE_MSCC); +} + +static int ngbe_phy_write_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum, u16 value) +{ + u32 command, val, device_type = 0; + struct wx *wx = bus->priv; + int ret; + + wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF); + /* setup and write the address cycle command */ + command = NGBE_MSCA_RA(regnum) | + NGBE_MSCA_PA(phy_addr) | + NGBE_MSCA_DA(device_type); + wr32(wx, NGBE_MSCA, command); + command = value | + NGBE_MSCC_CMD(NGBE_MSCA_CMD_WRITE) | + NGBE_MSCC_BUSY | + NGBE_MDIO_CLK(6); + wr32(wx, NGBE_MSCC, command); + + /* wait to complete */ + ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000, + 100000, false, wx, NGBE_MSCC); + if (ret) + wx_err(wx, "Mdio write c22 command did not complete.\n"); + + return ret; +} + +static int ngbe_phy_read_reg_mdi_c45(struct mii_bus *bus, int phy_addr, int devnum, int regnum) +{ + struct wx *wx = bus->priv; + u32 val, command; + int ret; + + wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0); + /* setup and write the address cycle command */ + command = NGBE_MSCA_RA(regnum) | + NGBE_MSCA_PA(phy_addr) | + NGBE_MSCA_DA(devnum); + wr32(wx, NGBE_MSCA, command); + command = NGBE_MSCC_CMD(NGBE_MSCA_CMD_READ) | + NGBE_MSCC_BUSY | + NGBE_MDIO_CLK(6); + wr32(wx, NGBE_MSCC, command); + + /* wait to complete */ + ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000, + 100000, false, wx, NGBE_MSCC); + if (ret) { + wx_err(wx, "Mdio read c45 command did not complete.\n"); + return ret; + } + + return (u16)rd32(wx, NGBE_MSCC); +} + +static int ngbe_phy_write_reg_mdi_c45(struct mii_bus *bus, int phy_addr, + int devnum, int regnum, u16 value) +{ + struct wx *wx = bus->priv; + int ret, command; + u16 val; + + wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0); + /* setup and write the address cycle command */ + command = NGBE_MSCA_RA(regnum) | + NGBE_MSCA_PA(phy_addr) | + NGBE_MSCA_DA(devnum); + wr32(wx, NGBE_MSCA, command); + command = value | + NGBE_MSCC_CMD(NGBE_MSCA_CMD_WRITE) | + NGBE_MSCC_BUSY | + NGBE_MDIO_CLK(6); + wr32(wx, NGBE_MSCC, command); + + /* wait to complete */ + ret = read_poll_timeout(rd32, val, !(val & NGBE_MSCC_BUSY), 1000, + 100000, false, wx, NGBE_MSCC); + if 
(ret) + wx_err(wx, "Mdio write c45 command did not complete.\n"); + + return ret; +} + +static int ngbe_phy_read_reg_c22(struct mii_bus *bus, int phy_addr, int regnum) +{ + struct wx *wx = bus->priv; + u16 phy_data; + + if (wx->mac_type == em_mac_type_mdi) + phy_data = ngbe_phy_read_reg_internal(bus, phy_addr, regnum); + else + phy_data = ngbe_phy_read_reg_mdi_c22(bus, phy_addr, regnum); + + return phy_data; +} + +static int ngbe_phy_write_reg_c22(struct mii_bus *bus, int phy_addr, + int regnum, u16 value) +{ + struct wx *wx = bus->priv; + int ret; + + if (wx->mac_type == em_mac_type_mdi) + ret = ngbe_phy_write_reg_internal(bus, phy_addr, regnum, value); + else + ret = ngbe_phy_write_reg_mdi_c22(bus, phy_addr, regnum, value); + + return ret; +} + +static void ngbe_handle_link_change(struct net_device *dev) +{ + struct wx *wx = netdev_priv(dev); + struct phy_device *phydev; + u32 lan_speed, reg; + + phydev = wx->phydev; + if (!(wx->link != phydev->link || + wx->speed != phydev->speed || + wx->duplex != phydev->duplex)) + return; + + wx->link = phydev->link; + wx->speed = phydev->speed; + wx->duplex = phydev->duplex; + switch (phydev->speed) { + case SPEED_10: + lan_speed = 0; + break; + case SPEED_100: + lan_speed = 1; + break; + case SPEED_1000: + default: + lan_speed = 2; + break; + } + wr32m(wx, NGBE_CFG_LAN_SPEED, 0x3, lan_speed); + + if (phydev->link) { + reg = rd32(wx, WX_MAC_TX_CFG); + reg &= ~WX_MAC_TX_CFG_SPEED_MASK; + reg |= WX_MAC_TX_CFG_SPEED_1G | WX_MAC_TX_CFG_TE; + wr32(wx, WX_MAC_TX_CFG, reg); + /* Reconfigure MAC Rx */ + reg = rd32(wx, WX_MAC_RX_CFG); + wr32(wx, WX_MAC_RX_CFG, reg); + wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR); + reg = rd32(wx, WX_MAC_WDG_TIMEOUT); + wr32(wx, WX_MAC_WDG_TIMEOUT, reg); + } + phy_print_status(phydev); +} + +int ngbe_phy_connect(struct wx *wx) +{ + int ret; + + ret = phy_connect_direct(wx->netdev, + wx->phydev, + ngbe_handle_link_change, + PHY_INTERFACE_MODE_RGMII_ID); + if (ret) { + wx_err(wx, "PHY connect failed.\n"); + return ret; + } + + return 0; +} + +static void ngbe_phy_fixup(struct wx *wx) +{ + struct phy_device *phydev = wx->phydev; + struct ethtool_eee eee; + + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); + + if (wx->mac_type != em_mac_type_mdi) + return; + /* disable EEE; the internal PHY does not support it */ + memset(&eee, 0, sizeof(eee)); + phy_ethtool_set_eee(phydev, &eee); +} + +int ngbe_mdio_init(struct wx *wx) +{ + struct pci_dev *pdev = wx->pdev; + struct mii_bus *mii_bus; + int ret; + + mii_bus = devm_mdiobus_alloc(&pdev->dev); + if (!mii_bus) + return -ENOMEM; + + mii_bus->name = "ngbe_mii_bus"; + mii_bus->read = ngbe_phy_read_reg_c22; + mii_bus->write = ngbe_phy_write_reg_c22; + mii_bus->phy_mask = GENMASK(31, 4); + mii_bus->parent = &pdev->dev; + mii_bus->priv = wx; + + if (wx->mac_type == em_mac_type_rgmii) { + mii_bus->read_c45 = ngbe_phy_read_reg_mdi_c45; + mii_bus->write_c45 = ngbe_phy_write_reg_mdi_c45; + } + + snprintf(mii_bus->id, MII_BUS_ID_SIZE, "ngbe-%x", + (pdev->bus->number << 8) | pdev->devfn); + ret = devm_mdiobus_register(&pdev->dev, mii_bus); + if (ret) + return ret; + + wx->phydev = phy_find_first(mii_bus); + if (!wx->phydev) + return -ENODEV; + + phy_attached_info(wx->phydev); + ngbe_phy_fixup(wx); + + wx->link = 0; + wx->speed = 0; + wx->duplex = 0; + + return 0; +} diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h 
b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h new file mode 100644 index 000000000000..0a6400dd89c4 --- /dev/null +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * WangXun Gigabit PCI Express Linux driver + * Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. + */ + +#ifndef _NGBE_MDIO_H_ +#define _NGBE_MDIO_H_ + +int ngbe_phy_connect(struct wx *wx); +int ngbe_mdio_init(struct wx *wx); +#endif /* _NGBE_MDIO_H_ */ diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h index 39f6c03f1a54..a2351349785e 100644 --- a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h +++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h @@ -49,7 +49,6 @@ #define NGBE_SPI_ILDR_STATUS 0x10120 #define NGBE_SPI_ILDR_STATUS_PERST BIT(0) /* PCIE_PERST is done */ #define NGBE_SPI_ILDR_STATUS_PWRRST BIT(1) /* Power on reset is done */ -#define NGBE_SPI_ILDR_STATUS_LAN_SW_RST(_i) BIT((_i) + 9) /* lan soft reset done */ /* Checksum and EEPROM pointers */ #define NGBE_CALSUM_COMMAND 0xE9 @@ -60,6 +59,25 @@ #define NGBE_EEPROM_VERSION_L 0x1D #define NGBE_EEPROM_VERSION_H 0x1E +/* mdio access */ +#define NGBE_MSCA 0x11200 +#define NGBE_MSCA_RA(v) FIELD_PREP(U16_MAX, v) +#define NGBE_MSCA_PA(v) FIELD_PREP(GENMASK(20, 16), v) +#define NGBE_MSCA_DA(v) FIELD_PREP(GENMASK(25, 21), v) +#define NGBE_MSCC 0x11204 +#define NGBE_MSCC_CMD(v) FIELD_PREP(GENMASK(17, 16), v) + +enum NGBE_MSCA_CMD_value { + NGBE_MSCA_CMD_RSV = 0, + NGBE_MSCA_CMD_WRITE, + NGBE_MSCA_CMD_POST_READ, + NGBE_MSCA_CMD_READ, +}; + +#define NGBE_MSCC_SADDR BIT(18) +#define NGBE_MSCC_BUSY BIT(22) +#define NGBE_MDIO_CLK(v) FIELD_PREP(GENMASK(21, 19), v) + /* Media-dependent registers. */ #define NGBE_MDIO_CLAUSE_SELECT 0x11220 @@ -72,6 +90,24 @@ #define NGBE_GPIO_DDR_0 BIT(0) /* SDP0 IO direction */ #define NGBE_GPIO_DDR_1 BIT(1) /* SDP1 IO direction */ +/* Extended Interrupt Enable Set */ +#define NGBE_PX_MISC_IEN_DEV_RST BIT(10) +#define NGBE_PX_MISC_IEN_ETH_LK BIT(18) +#define NGBE_PX_MISC_IEN_INT_ERR BIT(20) +#define NGBE_PX_MISC_IEN_GPIO BIT(26) +#define NGBE_PX_MISC_IEN_MASK ( \ + NGBE_PX_MISC_IEN_DEV_RST | \ + NGBE_PX_MISC_IEN_ETH_LK | \ + NGBE_PX_MISC_IEN_INT_ERR | \ + NGBE_PX_MISC_IEN_GPIO) + +#define NGBE_INTR_ALL 0x1FF +#define NGBE_INTR_MISC(A) BIT((A)->num_q_vectors) + +#define NGBE_PHY_CONFIG(reg_offset) (0x14000 + ((reg_offset) * 4)) +#define NGBE_CFG_LAN_SPEED 0x14440 +#define NGBE_CFG_PORT_ST 0x14404 + /* Wake up registers */ #define NGBE_PSR_WKUP_CTL 0x15B80 /* Wake Up Filter Control Bit */ @@ -90,50 +126,30 @@ #define NGBE_FW_CMD_ST_PASS 0x80658383 #define NGBE_FW_CMD_ST_FAIL 0x70657376 -enum ngbe_phy_type { - ngbe_phy_unknown = 0, - ngbe_phy_none, - ngbe_phy_internal, - ngbe_phy_m88e1512, - ngbe_phy_m88e1512_sfi, - ngbe_phy_m88e1512_unknown, - ngbe_phy_yt8521s, - ngbe_phy_yt8521s_sfi, - ngbe_phy_internal_yt8521s_sfi, - ngbe_phy_generic -}; +#define NGBE_MAX_FDIR_INDICES 7 -enum ngbe_media_type { - ngbe_media_type_unknown = 0, - ngbe_media_type_fiber, - ngbe_media_type_copper, - ngbe_media_type_backplane, -}; - -enum ngbe_mac_type { - ngbe_mac_type_unknown = 0, - ngbe_mac_type_mdi, - ngbe_mac_type_rgmii -}; +#define NGBE_MAX_RX_QUEUES (NGBE_MAX_FDIR_INDICES + 1) +#define NGBE_MAX_TX_QUEUES (NGBE_MAX_FDIR_INDICES + 1) -struct ngbe_phy_info { - enum ngbe_phy_type type; - enum ngbe_media_type media_type; +#define NGBE_ETH_LENGTH_OF_ADDRESS 6 +#define NGBE_MAX_MSIX_VECTORS 0x09 +#define NGBE_RAR_ENTRIES 32 +#define 
NGBE_RX_PB_SIZE 42 +#define NGBE_MC_TBL_SIZE 128 +#define NGBE_TDB_PB_SZ (20 * 1024) /* 160KB Packet Buffer */ +#define NGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */ - u32 addr; - u32 id; +/* TX/RX descriptor defines */ +#define NGBE_DEFAULT_TXD 512 /* default ring size */ +#define NGBE_DEFAULT_TX_WORK 256 +#define NGBE_MAX_TXD 8192 +#define NGBE_MIN_TXD 128 - bool reset_if_overtemp; +#define NGBE_DEFAULT_RXD 512 /* default ring size */ +#define NGBE_DEFAULT_RX_WORK 256 +#define NGBE_MAX_RXD 8192 +#define NGBE_MIN_RXD 128 -}; - -struct ngbe_hw { - struct wx_hw wxhw; - struct ngbe_phy_info phy; - enum ngbe_mac_type mac_type; +extern char ngbe_driver_name[]; - bool wol_enabled; - bool ncsi_enabled; - bool gpio_ctrl; -}; #endif /* _NGBE_TYPE_H_ */ diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile index 78484c58b78b..6db14a2cb2d0 100644 --- a/drivers/net/ethernet/wangxun/txgbe/Makefile +++ b/drivers/net/ethernet/wangxun/txgbe/Makefile @@ -7,4 +7,5 @@ obj-$(CONFIG_TXGBE) += txgbe.o txgbe-objs := txgbe_main.o \ - txgbe_hw.o + txgbe_hw.o \ + txgbe_ethtool.o diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h deleted file mode 100644 index 19e61377bd00..000000000000 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h +++ /dev/null @@ -1,43 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd. */ - -#ifndef _TXGBE_H_ -#define _TXGBE_H_ - -#define TXGBE_MAX_FDIR_INDICES 63 - -#define TXGBE_MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) -#define TXGBE_MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) - -#define TXGBE_SP_MAX_TX_QUEUES 128 -#define TXGBE_SP_MAX_RX_QUEUES 128 -#define TXGBE_SP_RAR_ENTRIES 128 -#define TXGBE_SP_MC_TBL_SIZE 128 - -struct txgbe_mac_addr { - u8 addr[ETH_ALEN]; - u16 state; /* bitmask */ - u64 pools; -}; - -#define TXGBE_MAC_STATE_DEFAULT 0x1 -#define TXGBE_MAC_STATE_MODIFIED 0x2 -#define TXGBE_MAC_STATE_IN_USE 0x4 - -/* board specific private data structure */ -struct txgbe_adapter { - u8 __iomem *io_addr; /* Mainly for iounmap use */ - /* OS defined structs */ - struct net_device *netdev; - struct pci_dev *pdev; - - /* structs defined in txgbe_type.h */ - struct txgbe_hw hw; - u16 msg_enable; - struct txgbe_mac_addr *mac_table; - char eeprom_id[32]; -}; - -extern char txgbe_driver_name[]; - -#endif /* _TXGBE_H_ */ diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c new file mode 100644 index 000000000000..d914e9a05404 --- /dev/null +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. */ + +#include <linux/pci.h> +#include <linux/phylink.h> +#include <linux/netdevice.h> + +#include "../libwx/wx_ethtool.h" +#include "txgbe_ethtool.h" + +static const struct ethtool_ops txgbe_ethtool_ops = { + .get_drvinfo = wx_get_drvinfo, + .get_link = ethtool_op_get_link, +}; + +void txgbe_set_ethtool_ops(struct net_device *netdev) +{ + netdev->ethtool_ops = &txgbe_ethtool_ops; +} diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h new file mode 100644 index 000000000000..ace1b3571012 --- /dev/null +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2015 - 2023 Beijing WangXun Technology Co., Ltd. 
*/ + +#ifndef _TXGBE_ETHTOOL_H_ +#define _TXGBE_ETHTOOL_H_ + +void txgbe_set_ethtool_ops(struct net_device *netdev); + +#endif /* _TXGBE_ETHTOOL_H_ */ diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c index 167f7ff73192..ebc46f3be056 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c @@ -12,70 +12,67 @@ #include "../libwx/wx_hw.h" #include "txgbe_type.h" #include "txgbe_hw.h" -#include "txgbe.h" /** * txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds - * @hw: pointer to hardware structure + * @wx: pointer to hardware structure * * Inits the thermal sensor thresholds according to the NVM map * and save off the threshold and location values into mac.thermal_sensor_data **/ -static void txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw) +static void txgbe_init_thermal_sensor_thresh(struct wx *wx) { - struct wx_hw *wxhw = &hw->wxhw; - struct wx_thermal_sensor_data *data = &wxhw->mac.sensor; + struct wx_thermal_sensor_data *data = &wx->mac.sensor; memset(data, 0, sizeof(struct wx_thermal_sensor_data)); /* Only support thermal sensors attached to SP physical port 0 */ - if (wxhw->bus.func) + if (wx->bus.func) return; - wr32(wxhw, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD); + wr32(wx, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD); - wr32(wxhw, WX_TS_INT_EN, + wr32(wx, WX_TS_INT_EN, WX_TS_INT_EN_ALARM_INT_EN | WX_TS_INT_EN_DALARM_INT_EN); - wr32(wxhw, WX_TS_EN, WX_TS_EN_ENA); + wr32(wx, WX_TS_EN, WX_TS_EN_ENA); data->alarm_thresh = 100; - wr32(wxhw, WX_TS_ALARM_THRE, 677); + wr32(wx, WX_TS_ALARM_THRE, 677); data->dalarm_thresh = 90; - wr32(wxhw, WX_TS_DALARM_THRE, 614); + wr32(wx, WX_TS_DALARM_THRE, 614); } /** * txgbe_read_pba_string - Reads part number string from EEPROM - * @hw: pointer to hardware structure + * @wx: pointer to hardware structure * @pba_num: stores the part number string from the EEPROM * @pba_num_size: part number string buffer length * * Reads the part number string from the EEPROM. 
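 *
 * Usage sketch (the buffer size is the caller's choice; at least 11 bytes
 * are needed for the legacy non-string format):
 *
 *   u8 pba[24];
 *
 *   if (txgbe_read_pba_string(wx, pba, sizeof(pba)))
 *           wx_err(wx, "failed to read PBA string\n");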
**/ -int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size) +int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size) { u16 pba_ptr, offset, length, data; - struct wx_hw *wxhw = &hw->wxhw; int ret_val; if (!pba_num) { - wx_err(wxhw, "PBA string buffer was null\n"); + wx_err(wx, "PBA string buffer was null\n"); return -EINVAL; } - ret_val = wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR, + ret_val = wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR, &data); if (ret_val != 0) { - wx_err(wxhw, "NVM Read Error\n"); + wx_err(wx, "NVM Read Error\n"); return ret_val; } - ret_val = wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR, + ret_val = wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR, &pba_ptr); if (ret_val != 0) { - wx_err(wxhw, "NVM Read Error\n"); + wx_err(wx, "NVM Read Error\n"); return ret_val; } @@ -84,11 +81,11 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size) * and we can decode it into an ascii string */ if (data != TXGBE_PBANUM_PTR_GUARD) { - wx_err(wxhw, "NVM PBA number is not stored as string\n"); + wx_err(wx, "NVM PBA number is not stored as string\n"); /* we will need 11 characters to store the PBA */ if (pba_num_size < 11) { - wx_err(wxhw, "PBA string buffer too small\n"); + wx_err(wx, "PBA string buffer too small\n"); return -ENOMEM; } @@ -118,20 +115,20 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size) return 0; } - ret_val = wx_read_ee_hostif(wxhw, pba_ptr, &length); + ret_val = wx_read_ee_hostif(wx, pba_ptr, &length); if (ret_val != 0) { - wx_err(wxhw, "NVM Read Error\n"); + wx_err(wx, "NVM Read Error\n"); return ret_val; } if (length == 0xFFFF || length == 0) { - wx_err(wxhw, "NVM PBA number section invalid length\n"); + wx_err(wx, "NVM PBA number section invalid length\n"); return -EINVAL; } /* check if pba_num buffer is big enough */ if (pba_num_size < (((u32)length * 2) - 1)) { - wx_err(wxhw, "PBA string buffer too small\n"); + wx_err(wx, "PBA string buffer too small\n"); return -ENOMEM; } @@ -140,9 +137,9 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size) length--; for (offset = 0; offset < length; offset++) { - ret_val = wx_read_ee_hostif(wxhw, pba_ptr + offset, &data); + ret_val = wx_read_ee_hostif(wx, pba_ptr + offset, &data); if (ret_val != 0) { - wx_err(wxhw, "NVM Read Error\n"); + wx_err(wx, "NVM Read Error\n"); return ret_val; } pba_num[offset * 2] = (u8)(data >> 8); @@ -155,14 +152,13 @@ int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size) /** * txgbe_calc_eeprom_checksum - Calculates and returns the checksum - @hw: pointer to hardware structure + @wx: pointer to hardware structure * @checksum: pointer to checksum * * Returns a negative error code on error **/ -static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum) +static int txgbe_calc_eeprom_checksum(struct wx *wx, u16 *checksum) { - struct wx_hw *wxhw = &hw->wxhw; u16 *eeprom_ptrs = NULL; u32 buffer_size = 0; u16 *buffer = NULL; @@ -170,7 +166,7 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum) int status; u16 i; - wx_init_eeprom_params(wxhw); + wx_init_eeprom_params(wx); if (!buffer) { eeprom_ptrs = kvmalloc_array(TXGBE_EEPROM_LAST_WORD, sizeof(u16), @@ -178,11 +174,11 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum) if (!eeprom_ptrs) return -ENOMEM; /* Read pointer area */ - 
status = wx_read_ee_hostif_buffer(wxhw, 0, + status = wx_read_ee_hostif_buffer(wx, 0, TXGBE_EEPROM_LAST_WORD, eeprom_ptrs); if (status != 0) { - wx_err(wxhw, "Failed to read EEPROM image\n"); + wx_err(wx, "Failed to read EEPROM image\n"); kvfree(eeprom_ptrs); return status; } @@ -194,7 +190,7 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum) } for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++) - if (i != wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM) + if (i != wx->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM) *checksum += local_buffer[i]; if (eeprom_ptrs) @@ -210,15 +206,14 @@ static int txgbe_calc_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum) /** * txgbe_validate_eeprom_checksum - Validate EEPROM checksum - * @hw: pointer to hardware structure + * @wx: pointer to hardware structure * @checksum_val: calculated checksum * * Performs checksum calculation and validates the EEPROM checksum. If the * caller does not need checksum_val, the value can be NULL. **/ -int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val) +int txgbe_validate_eeprom_checksum(struct wx *wx, u16 *checksum_val) { - struct wx_hw *wxhw = &hw->wxhw; u16 read_checksum = 0; u16 checksum; int status; @@ -227,18 +222,18 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val) * not continue or we could be in for a very long wait while every * EEPROM read fails */ - status = wx_read_ee_hostif(wxhw, 0, &checksum); + status = wx_read_ee_hostif(wx, 0, &checksum); if (status) { - wx_err(wxhw, "EEPROM read failed\n"); + wx_err(wx, "EEPROM read failed\n"); return status; } checksum = 0; - status = txgbe_calc_eeprom_checksum(hw, &checksum); + status = txgbe_calc_eeprom_checksum(wx, &checksum); if (status != 0) return status; - status = wx_read_ee_hostif(wxhw, wxhw->eeprom.sw_region_offset + + status = wx_read_ee_hostif(wx, wx->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM, &read_checksum); if (status != 0) return status; @@ -248,7 +243,7 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val) */ if (read_checksum != checksum) { status = -EIO; - wx_err(wxhw, "Invalid EEPROM checksum\n"); + wx_err(wx, "Invalid EEPROM checksum\n"); } /* If the user cares, return the calculated checksum */ @@ -258,55 +253,52 @@ int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val) return status; } -static void txgbe_reset_misc(struct txgbe_hw *hw) +static void txgbe_reset_misc(struct wx *wx) { - struct wx_hw *wxhw = &hw->wxhw; - - wx_reset_misc(wxhw); - txgbe_init_thermal_sensor_thresh(hw); + wx_reset_misc(wx); + txgbe_init_thermal_sensor_thresh(wx); } /** * txgbe_reset_hw - Perform hardware reset - * @hw: pointer to hardware structure + * @wx: pointer to wx structure * * Resets the hardware by resetting the transmit and receive units, masks * and clears all interrupts, perform a PHY reset, and perform a link (MAC) * reset. 
**/ -int txgbe_reset_hw(struct txgbe_hw *hw) +int txgbe_reset_hw(struct wx *wx) { - struct wx_hw *wxhw = &hw->wxhw; int status; /* Call adapter stop to disable tx/rx and clear interrupts */ - status = wx_stop_adapter(wxhw); + status = wx_stop_adapter(wx); if (status != 0) return status; - if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || - ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) - wx_reset_hostif(wxhw); + if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || + ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) + wx_reset_hostif(wx); usleep_range(10, 100); - status = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_LAN_SW_RST(wxhw->bus.func)); + status = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_LAN_SW_RST(wx->bus.func)); if (status != 0) return status; - txgbe_reset_misc(hw); + txgbe_reset_misc(wx); /* Store the permanent mac address */ - wx_get_mac_addr(wxhw, wxhw->mac.perm_addr); + wx_get_mac_addr(wx, wx->mac.perm_addr); /* Store MAC address from RAR0, clear receive address registers, and * clear the multicast table. Also reset num_rar_entries to 128, * since we modify this value when programming the SAN MAC address. */ - wxhw->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES; - wx_init_rx_addrs(wxhw); + wx->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES; + wx_init_rx_addrs(wx); - pci_set_master(wxhw->pdev); + pci_set_master(wx->pdev); return 0; } diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h index 6a751a69177b..e82f65dff8a6 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h @@ -4,8 +4,8 @@ #ifndef _TXGBE_HW_H_ #define _TXGBE_HW_H_ -int txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, u32 pba_num_size); -int txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val); -int txgbe_reset_hw(struct txgbe_hw *hw); +int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size); +int txgbe_validate_eeprom_checksum(struct wx *wx, u16 *checksum_val); +int txgbe_reset_hw(struct wx *wx); #endif /* _TXGBE_HW_H_ */ diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c index 36780e7f05b7..6c0a98230557 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c @@ -11,10 +11,11 @@ #include <net/ip.h> #include "../libwx/wx_type.h" +#include "../libwx/wx_lib.h" #include "../libwx/wx_hw.h" #include "txgbe_type.h" #include "txgbe_hw.h" -#include "txgbe.h" +#include "txgbe_ethtool.h" char txgbe_driver_name[] = "txgbe"; @@ -35,26 +36,26 @@ static const struct pci_device_id txgbe_pci_tbl[] = { #define DEFAULT_DEBUG_LEVEL_SHIFT 3 -static void txgbe_check_minimum_link(struct txgbe_adapter *adapter) +static void txgbe_check_minimum_link(struct wx *wx) { struct pci_dev *pdev; - pdev = adapter->pdev; + pdev = wx->pdev; pcie_print_link_status(pdev); } /** * txgbe_enumerate_functions - Get the number of ports this device has - * @adapter: adapter structure + * @wx: wx structure * * This function enumerates the phsyical functions co-located on a single slot, * in order to determine how many ports a device has. This is most useful in * determining the required GT/s of PCIe bandwidth necessary for optimal * performance. 
**/ -static int txgbe_enumerate_functions(struct txgbe_adapter *adapter) +static int txgbe_enumerate_functions(struct wx *wx) { - struct pci_dev *entry, *pdev = adapter->pdev; + struct pci_dev *entry, *pdev = wx->pdev; int physfns = 0; list_for_each_entry(entry, &pdev->bus->devices, bus_list) { @@ -73,196 +74,299 @@ static int txgbe_enumerate_functions(struct txgbe_adapter *adapter) return physfns; } -static void txgbe_sync_mac_table(struct txgbe_adapter *adapter) +/** + * txgbe_irq_enable - Enable default interrupt generation settings + * @wx: pointer to private structure + * @queues: enable irqs for queues + **/ +static void txgbe_irq_enable(struct wx *wx, bool queues) { - struct txgbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; - int i; - - for (i = 0; i < wxhw->mac.num_rar_entries; i++) { - if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) { - if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { - wx_set_rar(wxhw, i, - adapter->mac_table[i].addr, - adapter->mac_table[i].pools, - WX_PSR_MAC_SWC_AD_H_AV); - } else { - wx_clear_rar(wxhw, i); - } - adapter->mac_table[i].state &= ~(TXGBE_MAC_STATE_MODIFIED); - } - } + /* unmask interrupt */ + wx_intr_enable(wx, TXGBE_INTR_MISC(wx)); + if (queues) + wx_intr_enable(wx, TXGBE_INTR_QALL(wx)); } -/* this function destroys the first RAR entry */ -static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter, - u8 *addr) +/** + * txgbe_intr - msi/legacy mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a network interface device structure + **/ +static irqreturn_t txgbe_intr(int __always_unused irq, void *data) { - struct wx_hw *wxhw = &adapter->hw.wxhw; - - memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN); - adapter->mac_table[0].pools = 1ULL; - adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT | - TXGBE_MAC_STATE_IN_USE); - wx_set_rar(wxhw, 0, adapter->mac_table[0].addr, - adapter->mac_table[0].pools, - WX_PSR_MAC_SWC_AD_H_AV); + struct wx_q_vector *q_vector; + struct wx *wx = data; + struct pci_dev *pdev; + u32 eicr; + + q_vector = wx->q_vector[0]; + pdev = wx->pdev; + + eicr = wx_misc_isb(wx, WX_ISB_VEC0); + if (!eicr) { + /* shared interrupt alert! + * the interrupt that we masked before the ICR read. + */ + if (netif_running(wx->netdev)) + txgbe_irq_enable(wx, true); + return IRQ_NONE; /* Not our interrupt */ + } + wx->isb_mem[WX_ISB_VEC0] = 0; + if (!(pdev->msi_enabled)) + wr32(wx, WX_PX_INTA, 1); + + wx->isb_mem[WX_ISB_MISC] = 0; + /* would disable interrupts here but it is auto disabled */ + napi_schedule_irqoff(&q_vector->napi); + + /* re-enable link(maybe) and non-queue interrupts, no flush. 
+ * txgbe_poll will re-enable the queue interrupts + */ + if (netif_running(wx->netdev)) + txgbe_irq_enable(wx, false); + + return IRQ_HANDLED; } -static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter) +static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data) { - struct wx_hw *wxhw = &adapter->hw.wxhw; - u32 i; + struct wx *wx = data; - for (i = 0; i < wxhw->mac.num_rar_entries; i++) { - adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; - adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; - memset(adapter->mac_table[i].addr, 0, ETH_ALEN); - adapter->mac_table[i].pools = 0; - } - txgbe_sync_mac_table(adapter); + /* re-enable the original interrupt state */ + if (netif_running(wx->netdev)) + txgbe_irq_enable(wx, false); + + return IRQ_HANDLED; } -static int txgbe_del_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool) +/** + * txgbe_request_msix_irqs - Initialize MSI-X interrupts + * @wx: board private structure + * + * Allocate MSI-X vectors and request interrupts from the kernel. + **/ +static int txgbe_request_msix_irqs(struct wx *wx) { - struct wx_hw *wxhw = &adapter->hw.wxhw; - u32 i; + struct net_device *netdev = wx->netdev; + int vector, err; + + for (vector = 0; vector < wx->num_q_vectors; vector++) { + struct wx_q_vector *q_vector = wx->q_vector[vector]; + struct msix_entry *entry = &wx->msix_entries[vector]; + + if (q_vector->tx.ring && q_vector->rx.ring) + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-TxRx-%d", netdev->name, entry->entry); + else + /* skip this unused q_vector */ + continue; - if (is_zero_ether_addr(addr)) - return -EINVAL; - - /* search table for addr, if found, set to 0 and sync */ - for (i = 0; i < wxhw->mac.num_rar_entries; i++) { - if (ether_addr_equal(addr, adapter->mac_table[i].addr)) { - if (adapter->mac_table[i].pools & (1ULL << pool)) { - adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; - adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; - adapter->mac_table[i].pools &= ~(1ULL << pool); - txgbe_sync_mac_table(adapter); - } - return 0; + err = request_irq(entry->vector, wx_msix_clean_rings, 0, + q_vector->name, q_vector); + if (err) { + wx_err(wx, "request_irq failed for MSIX interrupt %s Error: %d\n", + q_vector->name, err); + goto free_queue_irqs; } + } - if (adapter->mac_table[i].pools != (1 << pool)) - continue; - if (!ether_addr_equal(addr, adapter->mac_table[i].addr)) - continue; + err = request_irq(wx->msix_entries[vector].vector, + txgbe_msix_other, 0, netdev->name, wx); + if (err) { + wx_err(wx, "request_irq for msix_other failed: %d\n", err); + goto free_queue_irqs; + } + + return 0; - adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; - adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; - memset(adapter->mac_table[i].addr, 0, ETH_ALEN); - adapter->mac_table[i].pools = 0; - txgbe_sync_mac_table(adapter); - return 0; +free_queue_irqs: + while (vector) { + vector--; + free_irq(wx->msix_entries[vector].vector, + wx->q_vector[vector]); } - return -ENOMEM; + wx_reset_interrupt_capability(wx); + return err; } -static void txgbe_up_complete(struct txgbe_adapter *adapter) +/** + * txgbe_request_irq - initialize interrupts + * @wx: board private structure + * + * Attempt to configure interrupts using the best available + * capabilities of the hardware and kernel. 
+ **/ +static int txgbe_request_irq(struct wx *wx) { - struct txgbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; + struct net_device *netdev = wx->netdev; + struct pci_dev *pdev = wx->pdev; + int err; - wx_control_hw(wxhw, true); + if (pdev->msix_enabled) + err = txgbe_request_msix_irqs(wx); + else if (pdev->msi_enabled) + err = request_irq(wx->pdev->irq, &txgbe_intr, 0, + netdev->name, wx); + else + err = request_irq(wx->pdev->irq, &txgbe_intr, IRQF_SHARED, + netdev->name, wx); + + if (err) + wx_err(wx, "request_irq failed, Error %d\n", err); + + return err; } -static void txgbe_reset(struct txgbe_adapter *adapter) +static void txgbe_up_complete(struct wx *wx) { - struct net_device *netdev = adapter->netdev; - struct txgbe_hw *hw = &adapter->hw; + u32 reg; + + wx_control_hw(wx, true); + wx_configure_vectors(wx); + + /* make sure to complete pre-operations */ + smp_mb__before_atomic(); + wx_napi_enable_all(wx); + + /* clear any pending interrupts, may auto mask */ + rd32(wx, WX_PX_IC); + rd32(wx, WX_PX_MISC_IC); + txgbe_irq_enable(wx, true); + + /* Configure MAC Rx and Tx when link is up */ + reg = rd32(wx, WX_MAC_RX_CFG); + wr32(wx, WX_MAC_RX_CFG, reg); + wr32(wx, WX_MAC_PKT_FLT, WX_MAC_PKT_FLT_PR); + reg = rd32(wx, WX_MAC_WDG_TIMEOUT); + wr32(wx, WX_MAC_WDG_TIMEOUT, reg); + reg = rd32(wx, WX_MAC_TX_CFG); + wr32(wx, WX_MAC_TX_CFG, (reg & ~WX_MAC_TX_CFG_SPEED_MASK) | WX_MAC_TX_CFG_SPEED_10G); + + /* enable transmits */ + netif_tx_start_all_queues(wx->netdev); + netif_carrier_on(wx->netdev); +} + +static void txgbe_reset(struct wx *wx) +{ + struct net_device *netdev = wx->netdev; u8 old_addr[ETH_ALEN]; int err; - err = txgbe_reset_hw(hw); + err = txgbe_reset_hw(wx); if (err != 0) - dev_err(&adapter->pdev->dev, "Hardware Error: %d\n", err); + wx_err(wx, "Hardware Error: %d\n", err); /* do not flush user set addresses */ - memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len); - txgbe_flush_sw_mac_table(adapter); - txgbe_mac_set_default_filter(adapter, old_addr); + memcpy(old_addr, &wx->mac_table[0].addr, netdev->addr_len); + wx_flush_sw_mac_table(wx); + wx_mac_set_default_filter(wx, old_addr); } -static void txgbe_disable_device(struct txgbe_adapter *adapter) +static void txgbe_disable_device(struct wx *wx) { - struct net_device *netdev = adapter->netdev; - struct wx_hw *wxhw = &adapter->hw.wxhw; + struct net_device *netdev = wx->netdev; + u32 i; - wx_disable_pcie_master(wxhw); + wx_disable_pcie_master(wx); /* disable receives */ - wx_disable_rx(wxhw); + wx_disable_rx(wx); + + /* disable all enabled rx queues */ + for (i = 0; i < wx->num_rx_queues; i++) + /* this call also flushes the previous write */ + wx_disable_rx_queue(wx, wx->rx_ring[i]); + netif_tx_stop_all_queues(netdev); netif_carrier_off(netdev); netif_tx_disable(netdev); - if (wxhw->bus.func < 2) - wr32m(wxhw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN_UP(wxhw->bus.func), 0); + wx_irq_disable(wx); + wx_napi_disable_all(wx); + + if (wx->bus.func < 2) + wr32m(wx, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN_UP(wx->bus.func), 0); else - dev_err(&adapter->pdev->dev, - "%s: invalid bus lan id %d\n", - __func__, wxhw->bus.func); + wx_err(wx, "%s: invalid bus lan id %d\n", + __func__, wx->bus.func); - if (!(((wxhw->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || - ((wxhw->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) { + if (!(((wx->subsystem_device_id & WX_NCSI_MASK) == WX_NCSI_SUP) || + ((wx->subsystem_device_id & WX_WOL_MASK) == WX_WOL_SUP))) { /* disable mac transmiter */ - wr32m(wxhw, WX_MAC_TX_CFG, 
WX_MAC_TX_CFG_TE, 0); + wr32m(wx, WX_MAC_TX_CFG, WX_MAC_TX_CFG_TE, 0); + } + + /* disable transmits in the hardware now that interrupts are off */ + for (i = 0; i < wx->num_tx_queues; i++) { + u8 reg_idx = wx->tx_ring[i]->reg_idx; + + wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH); } /* Disable the Tx DMA engine */ - wr32m(wxhw, WX_TDM_CTL, WX_TDM_CTL_TE, 0); + wr32m(wx, WX_TDM_CTL, WX_TDM_CTL_TE, 0); } -static void txgbe_down(struct txgbe_adapter *adapter) +static void txgbe_down(struct wx *wx) { - txgbe_disable_device(adapter); - txgbe_reset(adapter); + txgbe_disable_device(wx); + txgbe_reset(wx); + + wx_clean_all_tx_rings(wx); + wx_clean_all_rx_rings(wx); } /** - * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter) - * @adapter: board private structure to initialize + * txgbe_sw_init - Initialize general software structures (struct wx) + * @wx: board private structure to initialize **/ -static int txgbe_sw_init(struct txgbe_adapter *adapter) +static int txgbe_sw_init(struct wx *wx) { - struct pci_dev *pdev = adapter->pdev; - struct txgbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; + u16 msix_count = 0; int err; - wxhw->hw_addr = adapter->io_addr; - wxhw->pdev = pdev; + wx->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES; + wx->mac.max_tx_queues = TXGBE_SP_MAX_TX_QUEUES; + wx->mac.max_rx_queues = TXGBE_SP_MAX_RX_QUEUES; + wx->mac.mcft_size = TXGBE_SP_MC_TBL_SIZE; + wx->mac.rx_pb_size = TXGBE_SP_RX_PB_SIZE; + wx->mac.tx_pb_size = TXGBE_SP_TDB_PB_SZ; /* PCI config space info */ - err = wx_sw_init(wxhw); + err = wx_sw_init(wx); if (err < 0) { - netif_err(adapter, probe, adapter->netdev, - "read of internal subsystem device id failed\n"); + wx_err(wx, "read of internal subsystem device id failed\n"); return err; } - switch (wxhw->device_id) { + switch (wx->device_id) { case TXGBE_DEV_ID_SP1000: case TXGBE_DEV_ID_WX1820: - wxhw->mac.type = wx_mac_sp; + wx->mac.type = wx_mac_sp; break; default: - wxhw->mac.type = wx_mac_unknown; + wx->mac.type = wx_mac_unknown; break; } - wxhw->mac.num_rar_entries = TXGBE_SP_RAR_ENTRIES; - wxhw->mac.max_tx_queues = TXGBE_SP_MAX_TX_QUEUES; - wxhw->mac.max_rx_queues = TXGBE_SP_MAX_RX_QUEUES; - wxhw->mac.mcft_size = TXGBE_SP_MC_TBL_SIZE; - - adapter->mac_table = kcalloc(wxhw->mac.num_rar_entries, - sizeof(struct txgbe_mac_addr), - GFP_KERNEL); - if (!adapter->mac_table) { - netif_err(adapter, probe, adapter->netdev, - "mac_table allocation failed\n"); - return -ENOMEM; - } + /* Set common capability flags and settings */ + wx->max_q_vectors = TXGBE_MAX_MSIX_VECTORS; + err = wx_get_pcie_msix_counts(wx, &msix_count, TXGBE_MAX_MSIX_VECTORS); + if (err) + wx_err(wx, "Do not support MSI-X\n"); + wx->mac.max_msix_vectors = msix_count; + + /* enable itr by default in dynamic mode */ + wx->rx_itr_setting = 1; + wx->tx_itr_setting = 1; + + /* set default ring sizes */ + wx->tx_ring_count = TXGBE_DEFAULT_TXD; + wx->rx_ring_count = TXGBE_DEFAULT_RXD; + + /* set default work limits */ + wx->tx_work_limit = TXGBE_DEFAULT_TX_WORK; + wx->rx_work_limit = TXGBE_DEFAULT_RX_WORK; return 0; } @@ -278,23 +382,53 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter) **/ static int txgbe_open(struct net_device *netdev) { - struct txgbe_adapter *adapter = netdev_priv(netdev); + struct wx *wx = netdev_priv(netdev); + int err; + + err = wx_setup_resources(wx); + if (err) + goto err_reset; + + wx_configure(wx); - txgbe_up_complete(adapter); + err = txgbe_request_irq(wx); + if (err) + goto err_free_isb; + + /* Notify the stack of the actual 
queue counts. */ + err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues); + if (err) + goto err_free_irq; + + err = netif_set_real_num_rx_queues(netdev, wx->num_rx_queues); + if (err) + goto err_free_irq; + + txgbe_up_complete(wx); return 0; + +err_free_irq: + wx_free_irq(wx); +err_free_isb: + wx_free_isb_resources(wx); +err_reset: + txgbe_reset(wx); + + return err; } /** * txgbe_close_suspend - actions necessary to both suspend and close flows - * @adapter: the private adapter struct + * @wx: the private wx struct * * This function should contain the necessary work common to both suspending * and closing of the device. */ -static void txgbe_close_suspend(struct txgbe_adapter *adapter) +static void txgbe_close_suspend(struct wx *wx) { - txgbe_disable_device(adapter); + txgbe_disable_device(wx); + wx_free_resources(wx); } /** @@ -310,29 +444,30 @@ static void txgbe_close_suspend(struct txgbe_adapter *adapter) **/ static int txgbe_close(struct net_device *netdev) { - struct txgbe_adapter *adapter = netdev_priv(netdev); + struct wx *wx = netdev_priv(netdev); - txgbe_down(adapter); - wx_control_hw(&adapter->hw.wxhw, false); + txgbe_down(wx); + wx_free_irq(wx); + wx_free_resources(wx); + wx_control_hw(wx, false); return 0; } static void txgbe_dev_shutdown(struct pci_dev *pdev, bool *enable_wake) { - struct txgbe_adapter *adapter = pci_get_drvdata(pdev); - struct net_device *netdev = adapter->netdev; - struct txgbe_hw *hw = &adapter->hw; - struct wx_hw *wxhw = &hw->wxhw; + struct wx *wx = pci_get_drvdata(pdev); + struct net_device *netdev; + netdev = wx->netdev; netif_device_detach(netdev); rtnl_lock(); if (netif_running(netdev)) - txgbe_close_suspend(adapter); + txgbe_close_suspend(wx); rtnl_unlock(); - wx_control_hw(wxhw, false); + wx_control_hw(wx, false); pci_disable_device(pdev); } @@ -349,45 +484,14 @@ static void txgbe_shutdown(struct pci_dev *pdev) } } -static netdev_tx_t txgbe_xmit_frame(struct sk_buff *skb, - struct net_device *netdev) -{ - return NETDEV_TX_OK; -} - -/** - * txgbe_set_mac - Change the Ethernet Address of the NIC - * @netdev: network interface device structure - * @p: pointer to an address structure - * - * Returns 0 on success, negative on failure - **/ -static int txgbe_set_mac(struct net_device *netdev, void *p) -{ - struct txgbe_adapter *adapter = netdev_priv(netdev); - struct wx_hw *wxhw = &adapter->hw.wxhw; - struct sockaddr *addr = p; - int retval; - - retval = eth_prepare_mac_addr_change(netdev, addr); - if (retval) - return retval; - - txgbe_del_mac_filter(adapter, wxhw->mac.addr, 0); - eth_hw_addr_set(netdev, addr->sa_data); - memcpy(wxhw->mac.addr, addr->sa_data, netdev->addr_len); - - txgbe_mac_set_default_filter(adapter, wxhw->mac.addr); - - return 0; -} - static const struct net_device_ops txgbe_netdev_ops = { .ndo_open = txgbe_open, .ndo_stop = txgbe_close, - .ndo_start_xmit = txgbe_xmit_frame, + .ndo_start_xmit = wx_xmit_frame, + .ndo_set_rx_mode = wx_set_rx_mode, .ndo_validate_addr = eth_validate_addr, - .ndo_set_mac_address = txgbe_set_mac, + .ndo_set_mac_address = wx_set_mac, + .ndo_get_stats64 = wx_get_stats64, }; /** @@ -398,17 +502,15 @@ static const struct net_device_ops txgbe_netdev_ops = { * Returns 0 on success, negative on failure * * txgbe_probe initializes an adapter identified by a pci_dev structure. - * The OS initialization, configuring of the adapter private structure, + * The OS initialization, configuring of the wx private structure, * and a hardware reset occur. 
**/ static int txgbe_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) { - struct txgbe_adapter *adapter = NULL; - struct txgbe_hw *hw = NULL; - struct wx_hw *wxhw = NULL; struct net_device *netdev; int err, expected_gts; + struct wx *wx = NULL; u16 eeprom_verh = 0, eeprom_verl = 0, offset = 0; u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0; @@ -440,7 +542,7 @@ static int txgbe_probe(struct pci_dev *pdev, pci_set_master(pdev); netdev = devm_alloc_etherdev_mqs(&pdev->dev, - sizeof(struct txgbe_adapter), + sizeof(struct wx), TXGBE_MAX_TX_QUEUES, TXGBE_MAX_RX_QUEUES); if (!netdev) { @@ -450,81 +552,96 @@ static int txgbe_probe(struct pci_dev *pdev, SET_NETDEV_DEV(netdev, &pdev->dev); - adapter = netdev_priv(netdev); - adapter->netdev = netdev; - adapter->pdev = pdev; - hw = &adapter->hw; - wxhw = &hw->wxhw; - adapter->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1; - - adapter->io_addr = devm_ioremap(&pdev->dev, - pci_resource_start(pdev, 0), - pci_resource_len(pdev, 0)); - if (!adapter->io_addr) { + wx = netdev_priv(netdev); + wx->netdev = netdev; + wx->pdev = pdev; + + wx->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1; + + wx->hw_addr = devm_ioremap(&pdev->dev, + pci_resource_start(pdev, 0), + pci_resource_len(pdev, 0)); + if (!wx->hw_addr) { err = -EIO; goto err_pci_release_regions; } + wx->driver_name = txgbe_driver_name; + txgbe_set_ethtool_ops(netdev); netdev->netdev_ops = &txgbe_netdev_ops; /* setup the private structure */ - err = txgbe_sw_init(adapter); + err = txgbe_sw_init(wx); if (err) goto err_free_mac_table; /* check if flash load is done after hw power up */ - err = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_PERST); + err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PERST); if (err) goto err_free_mac_table; - err = wx_check_flash_load(wxhw, TXGBE_SPI_ILDR_STATUS_PWRRST); + err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PWRRST); if (err) goto err_free_mac_table; - err = wx_mng_present(wxhw); + err = wx_mng_present(wx); if (err) { dev_err(&pdev->dev, "Management capability is not present\n"); goto err_free_mac_table; } - err = txgbe_reset_hw(hw); + err = txgbe_reset_hw(wx); if (err) { dev_err(&pdev->dev, "HW Init failed: %d\n", err); goto err_free_mac_table; } netdev->features |= NETIF_F_HIGHDMA; + netdev->features = NETIF_F_SG; + + /* copy netdev features into list of user selectable features */ + netdev->hw_features |= netdev->features | NETIF_F_RXALL; + + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->priv_flags |= IFF_SUPP_NOFCS; + + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN); /* make sure the EEPROM is good */ - err = txgbe_validate_eeprom_checksum(hw, NULL); + err = txgbe_validate_eeprom_checksum(wx, NULL); if (err != 0) { dev_err(&pdev->dev, "The EEPROM Checksum Is Not Valid\n"); - wr32(wxhw, WX_MIS_RST, WX_MIS_RST_SW_RST); + wr32(wx, WX_MIS_RST, WX_MIS_RST_SW_RST); err = -EIO; goto err_free_mac_table; } - eth_hw_addr_set(netdev, wxhw->mac.perm_addr); - txgbe_mac_set_default_filter(adapter, wxhw->mac.perm_addr); + eth_hw_addr_set(netdev, wx->mac.perm_addr); + wx_mac_set_default_filter(wx, wx->mac.perm_addr); + + err = wx_init_interrupt_scheme(wx); + if (err) + goto err_free_mac_table; /* Save off EEPROM version number and Option Rom version which * together make a unique identify for the eeprom */ - wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H, + wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H, 
&eeprom_verh); - wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L, + wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L, &eeprom_verl); etrack_id = (eeprom_verh << 16) | eeprom_verl; - wx_read_ee_hostif(wxhw, - wxhw->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG, + wx_read_ee_hostif(wx, + wx->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG, &offset); /* Make sure offset to SCSI block is valid */ if (!(offset == 0x0) && !(offset == 0xffff)) { - wx_read_ee_hostif(wxhw, offset + 0x84, &eeprom_cfg_blkh); - wx_read_ee_hostif(wxhw, offset + 0x83, &eeprom_cfg_blkl); + wx_read_ee_hostif(wx, offset + 0x84, &eeprom_cfg_blkh); + wx_read_ee_hostif(wx, offset + 0x83, &eeprom_cfg_blkl); /* Only display Option Rom if exist */ if (eeprom_cfg_blkl && eeprom_cfg_blkh) { @@ -532,15 +649,15 @@ static int txgbe_probe(struct pci_dev *pdev, build = (eeprom_cfg_blkl << 8) | (eeprom_cfg_blkh >> 8); patch = eeprom_cfg_blkh & 0x00ff; - snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + snprintf(wx->eeprom_id, sizeof(wx->eeprom_id), "0x%08x, %d.%d.%d", etrack_id, major, build, patch); } else { - snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + snprintf(wx->eeprom_id, sizeof(wx->eeprom_id), "0x%08x", etrack_id); } } else { - snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + snprintf(wx->eeprom_id, sizeof(wx->eeprom_id), "0x%08x", etrack_id); } @@ -548,7 +665,9 @@ static int txgbe_probe(struct pci_dev *pdev, if (err) goto err_release_hw; - pci_set_drvdata(pdev, adapter); + pci_set_drvdata(pdev, wx); + + netif_tx_stop_all_queues(netdev); /* calculate the expected PCIe bandwidth required for optimal * performance. Note that some older parts will never have enough @@ -556,27 +675,28 @@ static int txgbe_probe(struct pci_dev *pdev, * parts to ensure that no warning is displayed, as this could confuse * users otherwise. 
*/ - expected_gts = txgbe_enumerate_functions(adapter) * 10; + expected_gts = txgbe_enumerate_functions(wx) * 10; /* don't check link if we failed to enumerate functions */ if (expected_gts > 0) - txgbe_check_minimum_link(adapter); + txgbe_check_minimum_link(wx); else dev_warn(&pdev->dev, "Failed to enumerate PF devices.\n"); /* First try to read PBA as a string */ - err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH); + err = txgbe_read_pba_string(wx, part_str, TXGBE_PBANUM_LENGTH); if (err) strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH); - netif_info(adapter, probe, netdev, "%pM\n", netdev->dev_addr); + netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr); return 0; err_release_hw: - wx_control_hw(wxhw, false); + wx_clear_interrupt_scheme(wx); + wx_control_hw(wx, false); err_free_mac_table: - kfree(adapter->mac_table); + kfree(wx->mac_table); err_pci_release_regions: pci_disable_pcie_error_reporting(pdev); pci_release_selected_regions(pdev, @@ -597,16 +717,17 @@ err_pci_disable_dev: **/ static void txgbe_remove(struct pci_dev *pdev) { - struct txgbe_adapter *adapter = pci_get_drvdata(pdev); + struct wx *wx = pci_get_drvdata(pdev); struct net_device *netdev; - netdev = adapter->netdev; + netdev = wx->netdev; unregister_netdev(netdev); pci_release_selected_regions(pdev, pci_select_bars(pdev, IORESOURCE_MEM)); - kfree(adapter->mac_table); + kfree(wx->mac_table); + wx_clear_interrupt_scheme(wx); pci_disable_pcie_error_reporting(pdev); diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h index 740a1c447e20..563ea51deca6 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h @@ -67,8 +67,37 @@ #define TXGBE_PBANUM1_PTR 0x06 #define TXGBE_PBANUM_PTR_GUARD 0xFAFA -struct txgbe_hw { - struct wx_hw wxhw; -}; +#define TXGBE_MAX_MSIX_VECTORS 64 +#define TXGBE_MAX_FDIR_INDICES 63 + +#define TXGBE_MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) +#define TXGBE_MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) + +#define TXGBE_SP_MAX_TX_QUEUES 128 +#define TXGBE_SP_MAX_RX_QUEUES 128 +#define TXGBE_SP_RAR_ENTRIES 128 +#define TXGBE_SP_MC_TBL_SIZE 128 +#define TXGBE_SP_RX_PB_SIZE 512 +#define TXGBE_SP_TDB_PB_SZ (160 * 1024) /* 160KB Packet Buffer */ +#define TXGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */ + +/* TX/RX descriptor defines */ +#define TXGBE_DEFAULT_TXD 512 +#define TXGBE_DEFAULT_TX_WORK 256 + +#if (PAGE_SIZE < 8192) +#define TXGBE_DEFAULT_RXD 512 +#define TXGBE_DEFAULT_RX_WORK 256 +#else +#define TXGBE_DEFAULT_RXD 256 +#define TXGBE_DEFAULT_RX_WORK 128 +#endif + +#define TXGBE_INTR_MISC(A) BIT((A)->num_q_vectors) +#define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1) + +#define TXGBE_MAX_EITR GENMASK(11, 3) + +extern char txgbe_driver_name[]; #endif /* _TXGBE_TYPE_H_ */ |