path: root/drivers/net

2024-12-23  eth: fbnic: support querying RSS config  [Alexander Duyck, 1 file, -0/+103]

The initial driver submission already added all the RSS state, as part of multi-queue support. Expose the configuration via the ethtool APIs.

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://patch.msgid.link/20241220025241.1522781-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

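For readers unfamiliar with the ethtool side, the sketch below shows the general shape of an RSS query callback filling the generic ethtool_rxfh_param from driver-private state. It is a hedged illustration only: my_priv, rss_indir, rss_key and the size constants are made-up names, not fbnic's.

    /* Hedged sketch: expose RSS config through the generic .get_rxfh ethtool op. */
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    #define MY_RSS_INDIR_SIZE 128
    #define MY_RSS_KEY_SIZE   40

    struct my_priv {
        u32 rss_indir[MY_RSS_INDIR_SIZE];
        u8  rss_key[MY_RSS_KEY_SIZE];
    };

    static int example_get_rxfh(struct net_device *netdev,
                                struct ethtool_rxfh_param *rxfh)
    {
        struct my_priv *priv = netdev_priv(netdev);
        unsigned int i;

        rxfh->hfunc = ETH_RSS_HASH_TOP;    /* Toeplitz */

        if (rxfh->indir)
            for (i = 0; i < MY_RSS_INDIR_SIZE; i++)
                rxfh->indir[i] = priv->rss_indir[i];

        if (rxfh->key)
            memcpy(rxfh->key, priv->rss_key, MY_RSS_KEY_SIZE);

        return 0;
    }
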
2024-12-23  eth: fbnic: reorder ethtool code  [Jakub Kicinski, 1 file, -79/+79]

Define the ethtool callback handlers in the order in which they are defined in the ops struct. It doesn't really matter what the order is, but it's good to have an order.

Reviewed-by: Larysa Zaremba <larysa.zaremba@intel.com>
Link: https://patch.msgid.link/20241220025241.1522781-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: fs, Add support for RDMA RX steering over IB link layer  [Patrisious Haddad, 2 files, -3/+3]

Relax the capability check for creating the RDMA RX steering domain by considering only the capabilities reported by the firmware as necessary for its creation, which in turn allows RDMA RX creation over devices with IB link layer as well.

The table_miss_action_domain capability is required only for a specific priority, which is handled in mlx5_rdma_enable_roce_steering(). The additional capability check for this case is already in place.

Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-12-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: Remove PTM support log message  [Carolina Jubran, 1 file, -3/+1]

The absence of Precision Time Measurement support should not emit a message, as it can be misleading in contexts where PTM is not required. Remove the log message indicating the lack of PCIe PTM support.

Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-11-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: DR, add support for ConnectX-8 steering  [Itamar Gozlan, 7 files, -2/+267]

Add support for a new steering format version that is implemented by ConnectX-8. Except for several differences, STEv3 is identical to STEv2, so for most callbacks the STEv3 context struct will call the STEv2 functions.

Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-10-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: DR, expand SWS STE callbacks and consolidate common structs  [Itamar Gozlan, 7 files, -342/+377]

Expand SWS STE callbacks to support ConnectX-8 hardware. Move common enums and structures to a shared header file.

Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-9-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: HWS, do not initialize native API queues  [Yevgeny Kliteynik, 5 files, -7/+25]

HWS has two types of APIs:

- Native: the fastest and slimmest, async API. The user of this API is required to manage rule handle memory and to poll for completion for each rule.
- BWC: backward-compatible API, with similar semantics to the SWS API. This layer is implemented above the native API and does all the work for the user, so that it is easy to switch between SWS and HWS.

Right now the existing users of HWS require only the BWC API. Therefore, in order to not waste resources, this patch disables send queue allocation for the native API. If support for faster HWS rule insertion is required in the future (such as for Connection Tracking), native queues can be enabled.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Itamar Gozlan <igozlan@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-8-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: HWS, no need to expose mlx5hws_send_queues_open/close  [Yevgeny Kliteynik, 2 files, -10/+4]

There is no need to have mlx5hws_send_queues_open/close in the header. Make them static and remove them from the header.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Itamar Gozlan <igozlan@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-7-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: fs, retry insertion to hash table on EBUSY  [Mark Bloch, 1 file, -1/+7]

When inserting into an rhashtable faster than it can grow, an -EBUSY error may be encountered. Modify the insertion logic to retry on -EBUSY until either a successful insertion or a genuine error is returned.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://patch.msgid.link/20241219175841.1094544-6-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

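A minimal sketch of the retry pattern described above, assuming a driver-owned rhashtable; the function and variable names are illustrative, not the mlx5 code.

    /* Hedged sketch: rhashtable_insert_fast() can return -EBUSY while the
     * table is being resized; retry until insertion succeeds or a real
     * error is returned. */
    #include <linux/rhashtable.h>

    static int insert_with_retry(struct rhashtable *ht, struct rhash_head *node,
                                 const struct rhashtable_params params)
    {
        int err;

        do {
            err = rhashtable_insert_fast(ht, node, params);
        } while (err == -EBUSY);

        return err;    /* 0 on success, or an error other than -EBUSY */
    }
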
2024-12-23  net/mlx5: fs, add mlx5_fs_pool API  [Moshe Shemesh, 4 files, -209/+327]

Refactor the fc_pool API to create a generic fs_pool API, as HW steering has more flow steering elements which can take advantage of the same pool-of-bulks API. Change the fs_counters code to use the fs_pool API.

Note: __counted_by was removed from struct mlx5_fc_bulk as bulk_len is now an inner struct member. It will be added back once __counted_by supports inner struct members.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-5-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: fs, add counter object to flow destination  [Moshe Shemesh, 12 files, -30/+86]

Currently mlx5_flow_destination includes counter_id, which is assigned when a flow counter is used on the flow steering rule. However, counter_id does not carry enough information when HW Steering is used. Thus, make the mlx5_fc object part of mlx5_flow_destination instead of counter_id and assign it where needed. In case a counter_id is received from user space, create a local counter object to represent it.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-4-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net/mlx5: LAG, Support LAG over Multi-Host NICs  [Rongwei Liu, 6 files, -78/+229]

New multi-host NICs provide each host with partial ports, allowing each host to maintain its unique LAG configuration.

On these multi-host NICs, the 'native_port_num' capability is no longer continuous on each host and can exceed the 'num_lag_ports' capability. Therefore, it is necessary to skip the PFs with ldev->pf[i].dev == NULL when querying/modifying the lag devices' information. There is no need to check dev.native_port_num against ldev->ports.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-3-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

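The pattern boils down to skipping unpopulated PF slots while walking the lag device array. The loop below is a hedged illustration built around the ldev->pf[i].dev check named in the commit message; MLX5_MAX_PORTS as the upper bound is an assumption.

    /* Illustration only: iterate lag PF slots, skipping holes left by
     * multi-host NICs that expose only partial ports to this host. */
    int i;

    for (i = 0; i < MLX5_MAX_PORTS; i++) {
        struct mlx5_core_dev *dev = ldev->pf[i].dev;

        if (!dev)
            continue;    /* PF not owned by this host */

        /* query or modify this lag port's information here */
    }
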
2024-12-23  net/mlx5: LAG, Refactor lag logic  [Rongwei Liu, 6 files, -121/+137]

Wrap the lag PF access into two new macros:

1. ldev_for_each()
2. ldev_for_each_reverse()

The maximum number of lag ports and the index-to-`native_port_num` mapping will be handled by the two new macros. Users shouldn't use the for loop anymore.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20241219175841.1094544-2-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: phy: microchip_t1: Add initialization of ptp for lan887x  [Divya Koppera, 1 file, -3/+38]

Add initialization of ptp for lan887x.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Divya Koppera <divya.koppera@microchip.com>
Link: https://patch.msgid.link/20241219123311.30213-6-divya.koppera@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: phy: Makefile: Add makefile support for rds ptp in Microchip phys  [Divya Koppera, 1 file, -0/+1]

Add makefile support for the rds ptp library.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Divya Koppera <divya.koppera@microchip.com>
Link: https://patch.msgid.link/20241219123311.30213-5-divya.koppera@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: phy: Kconfig: Add rds ptp library support and 1588 optional flag in Microchip phys  [Divya Koppera, 1 file, -1/+8]

Add ptp library support in Kconfig. As some of the Microchip T1 phys support ptp, add a dependency on the 1588 optional flag in Kconfig.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Divya Koppera <divya.koppera@microchip.com>
Link: https://patch.msgid.link/20241219123311.30213-4-divya.koppera@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: phy: microchip_rds_ptp: Add rds ptp library for Microchip phys  [Divya Koppera, 1 file, -0/+1039]

Add the rds ptp library for Microchip phys. 1-step and 2-step modes are supported, over Ethernet and UDP (IPv4, IPv6).

Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Divya Koppera <divya.koppera@microchip.com>
Link: https://patch.msgid.link/20241219123311.30213-3-divya.koppera@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: phy: microchip_rds_ptp: Add header file for Microchip rds ptp library  [Divya Koppera, 1 file, -0/+223]

This rds ptp header file will cover ptp macros for future Microchip phys where the addresses will be the same but the base offset and MMD address may change.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Divya Koppera <divya.koppera@microchip.com>
Link: https://patch.msgid.link/20241219123311.30213-2-divya.koppera@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  sfc: Use netdev refcount tracking in struct efx_async_filter_insertion  [YiFei Zhu, 4 files, -4/+10]

I was debugging some netdev refcount issues in OpenOnload, and one of the places I was looking at was in the sfc driver. Only struct efx_async_filter_insertion was not using the netdev refcount tracker, so add it here. GFP_ATOMIC is used because this code path is called by ndo_rx_flow_steer, which holds RCU.

This patch should be a no-op if !CONFIG_NET_DEV_REFCNT_TRACKER.

Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20241219173004.2615655-1-zhuyifei@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

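For reference, a hedged sketch of tracked netdev refcounting as the tracker API is generally used; the struct and field names below are placeholders rather than sfc's actual efx_async_filter_insertion layout.

    /* Hedged sketch: pair the netdev pointer with a netdevice_tracker so
     * leaks show up under CONFIG_NET_DEV_REFCNT_TRACKER; otherwise the
     * tracker is a no-op. */
    #include <linux/netdevice.h>

    struct my_async_insertion {
        struct net_device *net_dev;
        netdevice_tracker net_dev_tracker;
    };

    static void my_hold_netdev(struct my_async_insertion *req,
                               struct net_device *dev)
    {
        req->net_dev = dev;
        /* GFP_ATOMIC: callers such as ndo_rx_flow_steer run under RCU */
        netdev_hold(req->net_dev, &req->net_dev_tracker, GFP_ATOMIC);
    }

    static void my_put_netdev(struct my_async_insertion *req)
    {
        netdev_put(req->net_dev, &req->net_dev_tracker);
    }
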
2024-12-23  net: vxlan: rename SKB_DROP_REASON_VXLAN_NO_REMOTE  [Radu Rendec, 2 files, -3/+3]

The SKB_DROP_REASON_VXLAN_NO_REMOTE skb drop reason was introduced in the specific context of vxlan. As it turns out, there are similar cases when a packet needs to be dropped in other parts of the network stack, such as the bridge module.

Rename SKB_DROP_REASON_VXLAN_NO_REMOTE and give it a more generic name, so that it can be used in other parts of the network stack. This is not a functional change, and the numeric value of the drop reason even remains unchanged.

Signed-off-by: Radu Rendec <rrendec@redhat.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
Link: https://patch.msgid.link/20241219163606.717758-2-rrendec@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: Fix netns for ip_tunnel_init_flow()  [Xiao Liang, 1 file, -2/+1]

The device denoted by tunnel->parms.link resides in the underlay net namespace. Therefore pass tunnel->net to ip_tunnel_init_flow().

Fixes: db53cd3d88dc ("net: Handle l3mdev in ip_tunnel_init_flow")
Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20241219130336.103839-1-shaw.leon@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: enetc: add UDP segmentation offload support  [Wei Fang, 2 files, -4/+8]

Set the NETIF_F_GSO_UDP_L4 bit of hw_features and features because the i.MX95 ENETC and LS1028A drivers implement UDP segmentation.

- i.MX95 ENETC supports UDP segmentation via LSO.
- LS1028A ENETC supports UDP segmentation since commit 3d5b459ba0e3 ("net: tso: add UDP segmentation support").

Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Link: https://patch.msgid.link/20241219054755.1615626-5-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

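Advertising the offload is typically a matter of setting the feature bit during netdev setup; a hedged two-line sketch, with ndev standing in for the driver's net_device:

    /* Illustration: let the stack hand the driver large UDP transmit units. */
    ndev->hw_features |= NETIF_F_GSO_UDP_L4;
    ndev->features    |= NETIF_F_GSO_UDP_L4;
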
2024-12-23  net: enetc: add LSO support for i.MX95 ENETC PF  [Wei Fang, 5 files, -10/+310]

ENETC rev 4.1 supports large send offload (LSO), segmenting large TCP and UDP transmit units into multiple Ethernet frames. To support LSO, software needs to fill in some auxiliary information in the Tx BD, such as LSO header length, frame length, LSO maximum segment size, etc.

At a 1 Gbps link rate, TCP segmentation was tested using iperf3, and the CPU load before and after applying the patch was compared with the top command. It can be seen that LSO saves a significant amount of CPU cycles compared to software TSO.

Before applying the patch:
%Cpu(s): 0.1 us, 4.1 sy, 0.0 ni, 85.7 id, 0.0 wa, 0.5 hi, 9.7 si

After applying the patch:
%Cpu(s): 0.1 us, 2.3 sy, 0.0 ni, 94.5 id, 0.0 wa, 0.4 hi, 2.6 si

Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Link: https://patch.msgid.link/20241219054755.1615626-4-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: enetc: update max chained Tx BD number for i.MX95 ENETC  [Wei Fang, 4 files, -6/+22]

The maximum number of chained Tx BDs supported by the latest ENETC (i.MX95 ENETC, rev 4.1) has been increased to 63, but since MAX_SKB_FRAGS ranges from 17 to 45, it is better to set ENETC4_MAX_SKB_FRAGS to MAX_SKB_FRAGS for i.MX95 ENETC and later revisions.

In addition, add max_frags to struct enetc_drvdata to indicate the maximum number of chained BDs supported by the device, because this number differs between LS1028A and i.MX95 ENETC.

Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20241219054755.1615626-3-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: enetc: add Tx checksum offload for i.MX95 ENETC  [Wei Fang, 4 files, -10/+63]

In addition to supporting Rx checksum offload, i.MX95 ENETC also supports Tx checksum offload. The transmit checksum offload is implemented through the Tx BD. To support it, software needs to fill in some auxiliary information in the Tx BD, such as IP version, IP header offset and size, whether L4 is UDP or TCP, etc.

As with Rx checksum offload, the Tx checksum offload capability isn't defined in a register, so a tx_csum bit is added to struct enetc_drvdata to indicate whether the device supports Tx checksum offload.

Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Link: https://patch.msgid.link/20241219054755.1615626-2-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-23  net: stmmac: restructure the error path of stmmac_probe_config_dt()  [Joe Hattori, 1 file, -26/+17]

The current implementation of stmmac_probe_config_dt() does not release the OF node reference obtained by of_parse_phandle() in some error paths. The problem is that some error paths call stmmac_remove_config_dt() to clean up while others use an unwind ladder. These two types of error handling have not been kept in sync and have been a recurring source of bugs.

Re-write the error handling in stmmac_probe_config_dt() to use an unwind ladder. Consequently, stmmac_remove_config_dt() is not needed anymore, so remove it.

This bug was found by an experimental verification tool that I am developing.

Fixes: 4838a5405028 ("net: stmmac: Fix wrapper drivers not detecting PHY")
Signed-off-by: Joe Hattori <joe@pf.is.s.u-tokyo.ac.jp>
Link: https://patch.msgid.link/20241219024119.2017012-1-joe@pf.is.s.u-tokyo.ac.jp
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

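For readers unfamiliar with the idiom, an unwind ladder gives each acquired resource a matching release label, taken in reverse order on failure. Below is a hedged sketch with made-up helper names, not the actual stmmac code.

    /* Hedged sketch of an unwind ladder releasing an OF node reference. */
    #include <linux/of.h>

    static int do_more_setup(void);    /* hypothetical follow-up step */

    static int probe_config_example(struct device_node *np)
    {
        struct device_node *phy_node;
        int ret;

        phy_node = of_parse_phandle(np, "phy-handle", 0);    /* takes a reference */
        if (!phy_node)
            return -ENODEV;

        ret = do_more_setup();
        if (ret)
            goto err_put_phy;

        return 0;

    err_put_phy:
        of_node_put(phy_node);    /* release in reverse order of acquisition */
        return ret;
    }
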
2024-12-23  eth: fbnic: fix csr boundary for RPC RAM section  [Mohsin Bashir, 1 file, -1/+1]

The CSR dump support leverages the FBNIC_BOUNDS macro, which pads the end condition for each section by adding an offset of 1. However, the RPC RAM section, which is dumped differently from the other sections, does not rely on this macro and instead uses the end boundary address directly. Hence, subtracting 1 from the end address results in skipping a register.

Fixes: 3d12862b216d ("eth: fbnic: Add support to dump registers")
Signed-off-by: Mohsin Bashir <mohsin.bashr@gmail.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://patch.msgid.link/20241218232614.439329-1-mohsin.bashr@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-21  net: dsa: microchip: Do not execute PTP driver code for unsupported switches  [Tristram Ha, 2 files, -11/+30]

The PTP driver code only works for certain KSZ switches, like the KSZ9477, KSZ9567, LAN937X and their varieties. This code is enabled by the kernel configuration CONFIG_NET_DSA_MICROCHIP_KSZ_PTP. As the DSA driver is common to all KSZ switches, this PTP code is not appropriate for the other, unsupported switches. The ptp_capable indication is added to the chip data structure to signal whether to execute that code.

Signed-off-by: Tristram Ha <tristram.ha@microchip.com>
Link: https://patch.msgid.link/20241218020240.70601-1-Tristram.Ha@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

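Conceptually the guard is a single early return keyed off the per-chip data. The snippet below is only a sketch of that idea; the dev->info->ptp_capable spelling is an assumption based on the commit description, not the driver's verified layout.

    /* Hedged sketch: skip PTP setup on KSZ switches whose chip data does
     * not mark PTP support. Field spelling is assumed, not verified. */
    if (!dev->info->ptp_capable)
        return 0;

    /* ... proceed with PTP initialization for KSZ9477/KSZ9567/LAN937X ... */
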
2024-12-21  qlcnic: use const 'struct bin_attribute' callbacks  [Thomas Weißschuh, 1 file, -35/+34]

The sysfs core now provides callback variants that explicitly take a const pointer. Use them so the non-const variants can be removed.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://patch.msgid.link/20241219-sysfs-const-bin_attr-net-v2-1-93bdaece3c90@weissschuh.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  net: hisilicon: hns: Remove unused enums  [Dr. David Alan Gilbert, 1 file, -23/+0]

The enums dsaf_roce_port_mode, dsaf_roce_port_num and dsaf_roce_qos_sl are unused after the removal of the reset code. Remove them.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20241218163341.40297-5-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  net: hisilicon: hns: Remove reset helpers  [Dr. David Alan Gilbert, 2 files, -70/+0]

With hns_dsaf_roce_reset() removed in a previous patch, the two helper member pointers, 'hns_dsaf_roce_srst' and 'hns_dsaf_srst_chns', are now unread. Remove them, and the helper functions that they were initialised to, that is hns_dsaf_srst_chns(), hns_dsaf_srst_chns_acpi(), hns_dsaf_roce_srst() and hns_dsaf_roce_srst_acpi().

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20241218163341.40297-4-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  net: hisilicon: hns: Remove unused hns_rcb_start  [Dr. David Alan Gilbert, 2 files, -6/+0]

hns_rcb_start() has been unused since 2016's commit 454784d85de3 ("net: hns: delete redundancy ring enable operations"). Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20241218163341.40297-3-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  net: hisilicon: hns: Remove unused hns_dsaf_roce_reset  [Dr. David Alan Gilbert, 2 files, -111/+0]

hns_dsaf_roce_reset() has been unused since 2021's commit 38d220882426 ("RDMA/hns: Remove support for HIP06"). Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20241218163341.40297-2-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  ixgbevf: Add support for Intel(R) E610 device  [Piotr Kwapulinski, 5 files, -6/+33]

Add support for the Intel(R) E610 series of network devices. The E610 is based on the X550 but adds firmware-managed link, enhanced security capabilities and support for updated server manageability.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Enable link management in E610 device  [Piotr Kwapulinski, 14 files, -30/+659]

Add high-level link management support for the E610 device. Enable the following features:

- driver load
- bring up network interface
- IP address assignment
- pass traffic
- show statistics (e.g. via ethtool)
- disable network interface
- driver unload

Co-developed-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Co-developed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Reviewed-by: Jan Glaza <jan.glaza@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Clean up the E610 link management related code  [Piotr Kwapulinski, 2 files, -12/+17]

Required for enabling the link management in the E610 device.

Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add ixgbe_x540 multiple header inclusion protection  [Piotr Kwapulinski, 1 file, -1/+6]

Required so that x540-specific functions can be adopted by the E610 device.

Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add support for EEPROM dump in E610 device  [Piotr Kwapulinski, 3 files, -0/+107]

Add low-level support for EEPROM dump for the specified network device.

Co-developed-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Signed-off-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add support for NVM handling in E610 device  [Piotr Kwapulinski, 2 files, -0/+303]

Add low-level support for accessing NVM in the E610 device. NVM operations are handled via the Admin Command Interface. Add the following NVM-specific operations:

- acquire, release, read
- validate checksum
- read shadow RAM

Co-developed-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Signed-off-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Co-developed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add link management support for E610 device  [Piotr Kwapulinski, 3 files, -0/+1114]

Add low-level link management support for the E610 device. Link management operations are handled via the Admin Command Interface. Add the following link management operations:

- get link capabilities
- set up link
- get media type
- get link status, link status events
- link power management

Co-developed-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Signed-off-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Co-developed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Reviewed-by: Jan Glaza <jan.glaza@intel.com>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add support for E610 device capabilities detection  [Piotr Kwapulinski, 3 files, -0/+548]

Add low-level support for E610 device capabilities detection. The capabilities are discovered via the Admin Command Interface. Discover the following capabilities:

- function caps: vmdq, dcb, rss, rx/tx qs, msix, nvm, orom, reset
- device caps: vsi, fdir, 1588
- phy caps

Co-developed-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Signed-off-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Co-developed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Reviewed-by: Jan Sokolowski <jan.sokolowski@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  ixgbe: Add support for E610 FW Admin Command Interface  [Piotr Kwapulinski, 6 files, -7/+1657]

Add low-level support for the Admin Command Interface (ACI). The ACI is the firmware interface used by a driver to communicate with the E610 adapter. Add the following ACI features:

- data structures, macros, register definitions
- commands handling
- events handling

Co-developed-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Signed-off-by: Stefan Wegrzyn <stefan.wegrzyn@intel.com>
Co-developed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Bharath R <bharath.r@intel.com>
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2024-12-20  gve: fix XDP allocation path in edge cases  [Joshua Washington, 1 file, -1/+8]

This patch fixes a number of consistency issues in the queue allocation path related to XDP.

As it stands, the number of allocated XDP queues changes in three different scenarios:
1) Adding an XDP program while the interface is up via gve_add_xdp_queues
2) Removing an XDP program while the interface is up via gve_remove_xdp_queues
3) After queues have been allocated and the old queue memory has been removed in gve_queues_start

However, the requirement for the interface to be up for gve_(add|remove)_xdp_queues to be called, in conjunction with the fact that the number of queues stored in priv isn't updated until _after_ XDP queues have been allocated in the normal queue allocation path, means that if an XDP program is added while the interface is down, XDP queues won't be added until the _second_ if_up, not the first.

Given the expectation that the number of XDP queues is equal to the number of RX queues, scenario (3) has another problematic implication. When changing the number of queues while an XDP program is loaded, the number of XDP queues must be updated as well, as there is logic in the driver (gve_xdp_tx_queue_id()) which relies on every RX queue having a corresponding XDP TX queue. However, the number of XDP queues stored in priv would not be updated until _after_ a close/open, leading to a mismatch in the number of XDP queues reported vs the number of XDP queues which actually exist after the queue count update completes.

This patch remedies these issues by doing the following:
1) The allocation config getter function is set up to retrieve the _expected_ number of XDP queues to allocate instead of relying on the value stored in `priv`, which is only updated once the queues have been allocated.
2) When adjusting queues, XDP queues are adjusted to match the number of RX queues when XDP is enabled. This only works in the case when queues are live, so part (1) of the fix must still be available in the case that queues are adjusted when there is an XDP program and the interface is down.

Fixes: 5f08cd3d6423 ("gve: Alloc before freeing when adjusting queues")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Shailend Chand <shailend@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

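The core of fix (1) can be read as deriving the expected XDP queue count from the requested configuration instead of the stale value in priv; a hedged one-liner capturing that idea (names are illustrative, not the driver's):

    /* Illustration: XDP TX queues mirror RX queues only while a program is
     * attached; derive the target count from the new config, not from priv. */
    new_num_xdp_queues = xdp_prog ? new_num_rx_queues : 0;
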
2024-12-20  gve: process XSK TX descriptors as part of RX NAPI  [Joshua Washington, 3 files, -14/+31]

When busy polling is enabled, xsk_sendmsg for AF_XDP zero copy marks the NAPI ID corresponding to the memory pool allocated for the socket. In GVE, this NAPI ID will never correspond to a NAPI ID of one of the dedicated XDP TX queues registered with the umem, because XDP TX is not set up to share a NAPI with a corresponding RX queue.

This patch moves XSK TX descriptor processing from the TX NAPI to the RX NAPI, and the gve_xsk_wakeup callback is updated to use the RX NAPI instead of the TX NAPI accordingly. The branch on whether the wakeup is for TX is removed, as the NAPI poll should be invoked whether the wakeup is for TX or for RX.

Fixes: fd8e40321a12 ("gve: Add AF_XDP zero-copy support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-12-20  gve: guard XSK operations on the existence of queues  [Joshua Washington, 1 file, -12/+10]

This patch predicates the enabling and disabling of XSK pools on the existence of queues. As it stands, if the interface is down, disabling or enabling XSK pools would result in a crash, as the RX queue pointer would be NULL. XSK pool registration will occur as part of the next interface up.

Similarly, xsk_wakeup needs to be guarded against queues disappearing while the function is executing, so a check against the GVE_PRIV_FLAGS_NAPI_ENABLED flag is added to synchronize with the disabling of the bit and the synchronize_net() in gve_turndown.

Fixes: fd8e40321a12 ("gve: Add AF_XDP zero-copy support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Shailend Chand <shailend@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Larysa Zaremba <larysa.zaremba@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-12-20  gve: guard XDP xmit NDO on existence of xdp queues  [Joshua Washington, 2 files, -1/+7]

In GVE, dedicated XDP queues only exist when an XDP program is installed and the interface is up. As such, the NDO XDP XMIT callback should return early if either of these conditions is false. In the case of no loaded XDP program, priv->num_xdp_queues=0, which can cause a divide-by-zero error, and in the case of interface down, num_xdp_queues remains untouched to persist the XDP queue count for the next interface up, but the TX pointer itself would be NULL.

The XDP xmit callback also needs to synchronize with a device transitioning from open to close. This synchronization will happen via the GVE_PRIV_FLAGS_NAPI_ENABLED bit along with a synchronize_net() call, which waits for any RCU critical sections at call-time to complete.

Fixes: 39a7f4aa3e4a ("gve: Add XDP REDIRECT support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Shailend Chand <shailend@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

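In rough form, the early-exit guard looks like the sketch below; priv->num_xdp_queues comes from the commit text, while the surrounding function body and the error code chosen are assumptions.

    /* Hedged sketch: bail out of ndo_xdp_xmit when no XDP TX queues exist
     * (no program loaded, or the interface is down/closing). */
    struct gve_priv *priv = netdev_priv(dev);

    if (!priv->num_xdp_queues)
        return -ENETDOWN;    /* also avoids a divide-by-zero on queue selection */

    /* ... additionally synchronize with close via the NAPI-enabled flag ... */
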
2024-12-20  gve: clean XDP queues in gve_tx_stop_ring_gqi  [Joshua Washington, 1 file, -1/+4]

When stopping XDP TX rings, the XDP clean function needs to be called to clean out the entire queue, similar to what happens in the normal TX queue case. Otherwise, the FIFO won't be cleared correctly, and xsk_tx_completed won't be reported.

Fixes: 75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-12-20  xsk: make xsk_buff_add_frag() really add the frag via __xdp_buff_add_frag()  [Alexander Lobakin, 2 files, -57/+5]

Currently, xsk_buff_add_frag() only adds the frag to the pool's linked list, without doing anything with the &xdp_buff. The drivers do that manually, and the logic is the same. Make it really add an skb frag, just like xdp_buff_add_frag() does, freeing frags on error if needed. This allows the repeated code to be removed from i40e and ice rather than adding the same code again and again.

Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://patch.msgid.link/20241218174435.1445282-5-aleksander.lobakin@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  sfc: remove efx_writed_page_locked  [Andy Moreton, 1 file, -24/+0]

From: Andy Moreton <andy.moreton@amd.com>

efx_writed_page_locked is a workaround for Siena hardware that is not needed on later adapters, and has no callers. Remove it.

Signed-off-by: Andy Moreton <andy.moreton@amd.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://patch.msgid.link/20241218135930.2350358-1-edward.cree@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-12-20  net: stmmac: Drop useless code related to ethtool rx-copybreak  [Furong Xu, 3 files, -46/+0]

After commit 2af6106ae949 ("net: stmmac: Introducing support for Page Pool"), the driver always copies frames to get better performance and zero-copy for RX frames is gone, so this code became useless and users of ethtool may get confused by the unhandled rx-copybreak parameter.

This patch mostly reverts commit 22ad38381547 ("stmmac: do not perform zero-copy for rx frames").

Signed-off-by: Furong Xu <0x1207@gmail.com>
Link: https://patch.msgid.link/20241218083407.390509-1-0x1207@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>