path: root/drivers/net/ethernet
Age        Commit message (Author, files changed, lines -/+)
2025-03-10  eth: bnxt: return fail if interface is down in bnxt_queue_mem_alloc()  (Taehee Yoo, 1 file, -0/+3)
bnxt_queue_mem_alloc() is called to allocate new queue memory when a queue is restarted. It internally accesses the rx buffer descriptor corresponding to the index. The rx buffer descriptor is allocated and set when the interface is up and freed when the interface is down. So, if a queue is restarted while the interface is down, a kernel panic occurs. Splat looks like:

  BUG: unable to handle page fault for address: 000000000000b240
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 0 P4D 0
  Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 3 UID: 0 PID: 1563 Comm: ncdevmem2 Not tainted 6.14.0-rc2+ #9 844ddba6e7c459cafd0bf4db9a3198e
  Hardware name: ASUS System Product Name/PRIME Z690-P D4, BIOS 0603 11/01/2021
  RIP: 0010:bnxt_queue_mem_alloc+0x3f/0x4e0 [bnxt_en]
  Code: 41 54 4d 89 c4 4d 69 c0 c0 05 00 00 55 48 89 f5 53 48 89 fb 4c 8d b5 40 05 00 00 48 83 ec 15
  RSP: 0018:ffff9dcc83fef9e8 EFLAGS: 00010202
  RAX: ffffffffc0457720 RBX: ffff934ed8d40000 RCX: 0000000000000000
  RDX: 000000000000001f RSI: ffff934ea508f800 RDI: ffff934ea508f808
  RBP: ffff934ea508f800 R08: 000000000000b240 R09: ffff934e84f4b000
  R10: ffff9dcc83fefa30 R11: ffff934e84f4b000 R12: 000000000000001f
  R13: ffff934ed8d40ac0 R14: ffff934ea508fd40 R15: ffff934e84f4b000
  FS:  00007fa73888c740(0000) GS:ffff93559f780000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 000000000000b240 CR3: 0000000145a2e000 CR4: 00000000007506f0
  PKRU: 55555554
  Call Trace:
   <TASK>
   ? __die+0x20/0x70
   ? page_fault_oops+0x15a/0x460
   ? exc_page_fault+0x6e/0x180
   ? asm_exc_page_fault+0x22/0x30
   ? __pfx_bnxt_queue_mem_alloc+0x10/0x10 [bnxt_en 7f85e76f4d724ba07471d7e39d9e773aea6597b7]
   ? bnxt_queue_mem_alloc+0x3f/0x4e0 [bnxt_en 7f85e76f4d724ba07471d7e39d9e773aea6597b7]
   netdev_rx_queue_restart+0xc5/0x240
   net_devmem_bind_dmabuf_to_queue+0xf8/0x200
   netdev_nl_bind_rx_doit+0x3a7/0x450
   genl_family_rcv_msg_doit+0xd9/0x130
   genl_rcv_msg+0x184/0x2b0
   ? __pfx_netdev_nl_bind_rx_doit+0x10/0x10
   ? __pfx_genl_rcv_msg+0x10/0x10
   netlink_rcv_skb+0x54/0x100
   genl_rcv+0x24/0x40
   ...

Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Fixes: 2d694c27d32e ("bnxt_en: implement netdev_queue_mgmt_ops")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://patch.msgid.link/20250309134219.91670-3-ap420073@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
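A minimal sketch of the kind of guard described above, assuming the netdev queue-management callback signature; the exact condition used by the driver may differ:

    /* Hypothetical guard: the rx rings only exist while the interface is
     * up, so fail the queue restart instead of touching freed descriptors. */
    static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
    {
            struct bnxt *bp = netdev_priv(dev);

            if (!netif_running(dev) || !bp->rx_ring)
                    return -ENETDOWN;

            /* ... allocate and fill the per-queue memory for ring 'idx' ... */
            return 0;
    }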
2025-03-10  eth: bnxt: fix truesize for mb-xdp-pass case  (Taehee Yoo, 2 files, -1/+9)
When mb-xdp is set and the return is XDP_PASS, the packet is converted from an xdp_buff to an sk_buff with xdp_update_skb_shared_info() in bnxt_xdp_build_skb(). bnxt_xdp_build_skb() passes an incorrect truesize argument to xdp_update_skb_shared_info(). The truesize is calculated as BNXT_RX_PAGE_SIZE * sinfo->nr_frags, but the skb_shared_info was already wiped by napi_build_skb() at that point. So, store sinfo->nr_frags before calling bnxt_xdp_build_skb() and use it instead of reading skb_shared_info via xdp_get_shared_info_from_buff(). Splat looks like:

  ------------[ cut here ]------------
  WARNING: CPU: 2 PID: 0 at net/core/skbuff.c:6072 skb_try_coalesce+0x504/0x590
  Modules linked in: xt_nat xt_tcpudp veth af_packet xt_conntrack nft_chain_nat xt_MASQUERADE nf_conntrack_netlink xfrm_user xt_addrtype nft_coms
  CPU: 2 UID: 0 PID: 0 Comm: swapper/2 Not tainted 6.14.0-rc2+ #3
  RIP: 0010:skb_try_coalesce+0x504/0x590
  Code: 4b fd ff ff 49 8b 34 24 40 80 e6 40 0f 84 3d fd ff ff 49 8b 74 24 48 40 f6 c6 01 0f 84 2e fd ff ff 48 8d 4e ff e9 25 fd ff ff <0f> 0b e99
  RSP: 0018:ffffb62c4120caa8 EFLAGS: 00010287
  RAX: 0000000000000003 RBX: ffffb62c4120cb14 RCX: 0000000000000ec0
  RDX: 0000000000001000 RSI: ffffa06e5d7dc000 RDI: 0000000000000003
  RBP: ffffa06e5d7ddec0 R08: ffffa06e6120a800 R09: ffffa06e7a119900
  R10: 0000000000002310 R11: ffffa06e5d7dcec0 R12: ffffe4360575f740
  R13: ffffe43600000000 R14: 0000000000000002 R15: 0000000000000002
  FS:  0000000000000000(0000) GS:ffffa0755f700000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007f147b76b0f8 CR3: 00000001615d4000 CR4: 00000000007506f0
  PKRU: 55555554
  Call Trace:
   <IRQ>
   ? __warn+0x84/0x130
   ? skb_try_coalesce+0x504/0x590
   ? report_bug+0x18a/0x1a0
   ? handle_bug+0x53/0x90
   ? exc_invalid_op+0x14/0x70
   ? asm_exc_invalid_op+0x16/0x20
   ? skb_try_coalesce+0x504/0x590
   inet_frag_reasm_finish+0x11f/0x2e0
   ip_defrag+0x37a/0x900
   ip_local_deliver+0x51/0x120
   ip_sublist_rcv_finish+0x64/0x70
   ip_sublist_rcv+0x179/0x210
   ip_list_rcv+0xf9/0x130

How to reproduce:
  <Node A>
  ip link set $interface1 xdp obj xdp_pass.o
  ip link set $interface1 mtu 9000 up
  ip a a 10.0.0.1/24 dev $interface1
  <Node B>
  ip link set $interface2 mtu 9000 up
  ip a a 10.0.0.2/24 dev $interface2
  ping 10.0.0.1 -s 65000

The following ping.py patch adds an xdp-mb-pass case, so ping.py will be able to reproduce this issue.

Fixes: 1dc4c557bfed ("bnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Link: https://patch.msgid.link/20250309134219.91670-2-ap420073@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
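A sketch of the ordering the fix depends on; the sizes passed here are illustrative and the real rx path supplies more context:

    struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
    u8 nr_frags = sinfo->nr_frags;                   /* save before the memory is reused */
    unsigned int frags_size = sinfo->xdp_frags_size;
    struct sk_buff *skb;

    skb = napi_build_skb(xdp->data_hard_start, BNXT_RX_PAGE_SIZE);
    if (!skb)
            return NULL;
    /* napi_build_skb() reuses the buffer tail for the new skb_shared_info,
     * so the old sinfo contents are no longer valid here. */

    if (xdp_buff_has_frags(xdp))
            xdp_update_skb_shared_info(skb, nr_frags, frags_size,
                                       BNXT_RX_PAGE_SIZE * nr_frags, /* truesize from saved count */
                                       xdp_buff_is_frag_pfmemalloc(xdp));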
2025-03-10  eth: fbnic: fix memory corruption in fbnic_tlv_attr_get_string()  (Dan Carpenter, 1 file, -1/+1)
This code is trying to ensure that the last byte of the buffer is a NUL terminator. However, the problem is that attr->value[] is an array of __le32, not char, so it zeroes out 4 bytes way beyond the end of the buffer. Cast the buffer to char to address this. Fixes: e5cf5107c9e4 ("eth: fbnic: Update fbnic_tlv_attr_get_string() to work like nla_strscpy()") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Lee Trager <lee@trager.us> Link: https://patch.msgid.link/2791d4be-ade4-4e50-9b12-33307d8410f6@stanley.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org>
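A one-line illustration of the fix pattern (the buffer and length names are illustrative, not the exact driver code):

    /* attr->value is an array of __le32, so indexing it scales the offset by 4
     * and stores a 4-byte zero; cast to char so exactly one byte is written. */
    char *value = (char *)attr->value;

    value[dstsize - 1] = '\0';      /* instead of attr->value[dstsize - 1] = 0 */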
2025-03-10  r8169: increase max jumbo packet size on RTL8125/RTL8126  (Heiner Kallweit, 1 file, -0/+4)
Realtek confirmed that all RTL8125/RTL8126 chip versions support up to 16K jumbo packets. Reflect this in the driver. Tested by Rui on RTL8125B with 12K jumbo packets. Suggested-by: Rui Salvaterra <rsalvaterra@gmail.com> Tested-by: Rui Salvaterra <rsalvaterra@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/396762ad-cc65-4e60-b01e-8847db89e98b@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-10  net/mlx5: handle errors in mlx5_chains_create_table()  (Wentao Liang, 1 file, -0/+5)
In mlx5_chains_create_table(), the return values of mlx5_get_fdb_sub_ns() and mlx5_get_flow_namespace() must be checked to prevent NULL pointer dereferences. If either function fails, the function should log an error message with mlx5_core_warn() and return an error pointer. Fixes: 39ac237ce009 ("net/mlx5: E-Switch, Refactor chains and priorities") Signed-off-by: Wentao Liang <vulab@iscas.ac.cn> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20250307021820.2646-1-vulab@iscas.ac.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
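A short sketch of the check described, with placeholder variable names and error code:

    struct mlx5_flow_namespace *ns;

    ns = mlx5_get_flow_namespace(chains->dev, ns_type); /* ns_type: requested namespace */
    if (!ns) {
            mlx5_core_warn(chains->dev, "Failed to get flow namespace\n");
            return ERR_PTR(-EOPNOTSUPP);
    }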
2025-03-08  Add support and infrastructure for RDMA TRANSPORT  (Leon Romanovsky, 8 files, -34/+294)
Hi,
This is a preparation series targeted for mlx5-next, which will be used later in RDMA. This series adds RDMA transport steering logic which allows the vport group manager to catch control packets from VFs and forward them to control SW to help with congestion control. In addition, RDMA will provide a new set of APIs to better control exposed FW capabilities, and this series is needed to make sure the mlx5 command interface ensures that privileged commands can always proceed.
Thanks

Link: https://lore.kernel.org/all/cover.1740574103.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>

* mlx5-next:
  net/mlx5: fs, add RDMA TRANSPORT steering domain support
  net/mlx5: Query ADV_RDMA capabilities
  net/mlx5: Limit non-privileged commands
  net/mlx5: Allow the throttle mechanism to be more dynamic
  net/mlx5: Add RDMA_CTRL HW capabilities
2025-03-08  net/mlx5: fs, add RDMA TRANSPORT steering domain support  (Patrisious Haddad, 5 files, -18/+182)
Add RX and TX RDMA_TRANSPORT flow table namespace, and the ability to create flow tables in those namespaces. The RDMA_TRANSPORT RX and TX are per vport. Packets will traverse through RDMA_TRANSPORT_RX after RDMA_RX and through RDMA_TRANSPORT_TX before RDMA_TX, ensuring proper control and management. RDMA_TRANSPORT domains are managed by the vport group manager. Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Link: https://patch.msgid.link/a6b550d9859a197eafa804b9a8d76916ca481da9.1740574103.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
2025-03-08  net/mlx5: Query ADV_RDMA capabilities  (Patrisious Haddad, 2 files, -0/+8)
Query ADV_RDMA capabilities which provide information for advanced RDMA related features. Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Link: https://patch.msgid.link/e3e6ede03ea31cd201078dcdd4e407608e4a5a87.1740574103.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
2025-03-08  net/mlx5: Limit non-privileged commands  (Chiara Meiohas, 1 file, -8/+77)
Limit non-privileged UID commands to half of the available command slots when privileged UIDs are present. Privileged throttle commands will not be limited. Use an xarray to store privileged UIDs. Add insert and remove functions for privileged UIDs management. Non-user commands (with uid 0) are not limited. Signed-off-by: Chiara Meiohas <cmeiohas@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/d2f3dd9a0dbad3c9f2b4bb0723837995e4e06de2.1740574103.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
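Illustrative only: tracking privileged UIDs in an xarray as the commit describes; the function and variable names here are hypothetical:

    #include <linux/xarray.h>

    static DEFINE_XARRAY(privileged_uids);

    static int priv_uid_add(u16 uid)
    {
            /* xa_insert() fails with -EBUSY if the UID is already tracked */
            return xa_insert(&privileged_uids, uid, xa_mk_value(1), GFP_KERNEL);
    }

    static void priv_uid_del(u16 uid)
    {
            xa_erase(&privileged_uids, uid);
    }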
2025-03-08  net/mlx5: Allow the throttle mechanism to be more dynamic  (Chiara Meiohas, 1 file, -12/+31)
Previously, throttle commands were identified and limited based on opcode. These commands were limited to half the command slots using a semaphore, and callback commands checked the opcode to determine semaphore release. To allow exceptions, we introduce a variable to indicate when the throttle lock is held. This allows scenarios where throttle commands are not limited. Callback functions use this variable to determine if the throttle semaphore needs to be released. This patch contains no functional changes. It's a preparation for the next patch. Signed-off-by: Chiara Meiohas <cmeiohas@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Link: https://patch.msgid.link/055d975edeb816ac4c0fd1e665c6157d11947d26.1740574103.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
2025-03-08  net: move misc netdev_lock flavors to a separate header  (Jakub Kicinski, 4 files, -0/+4)
Move the more esoteric helpers for netdev instance lock to a dedicated header. This avoids growing netdevice.h to infinity and makes rebuilding the kernel much faster (after touching the header with the helpers). The main netdev_lock() / netdev_unlock() functions are used in static inlines in netdevice.h and will probably be used most commonly, so keep them in netdevice.h. Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250307183006.2312761-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: ethernet: Remove accidental duplication in Kconfig file  (Lukas Bulwahn, 1 file, -1/+0)
Commit fb3dda82fd38 ("net: airoha: Move airoha_eth driver in a dedicated folder") accidentally added the line: source "drivers/net/ethernet/mellanox/Kconfig" in drivers/net/ethernet/Kconfig, so that this line is duplicated in that file. Remove this accidental duplication. Fixes: fb3dda82fd38 ("net: airoha: Move airoha_eth driver in a dedicated folder") Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com> Acked-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250306094753.63806-1-lukas.bulwahn@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: airoha: Fix dev->dsa_ptr check in airoha_get_dsa_tag()  (Lorenzo Bianconi, 1 file, -6/+1)
Fix the following warning reported by the Smatch static checker in the airoha_get_dsa_tag routine:

  drivers/net/ethernet/airoha/airoha_eth.c:1722 airoha_get_dsa_tag() warn: 'dp' isn't an ERR_PTR

dev->dsa_ptr can't be set to an error pointer, it can just be NULL. Remove this check since it is already performed in netdev_uses_dsa(). Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://lore.kernel.org/netdev/Z8l3E0lGOcrel07C@lore-desk/T/#m54adc113fcdd8c5e6c5f65ffd60d8e8b1d483d90 Fixes: af3cf757d5c9 ("net: airoha: Move DSA tag in DMA descriptor") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250306-airoha-flowtable-fixes-v1-1-68d3c1296cdd@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
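A small sketch of the simplified check (surrounding tag-extraction logic elided, return value illustrative):

    struct dsa_port *dp;

    if (!netdev_uses_dsa(dev))
            return 0;

    /* dev->dsa_ptr is either a valid dsa_port pointer or NULL, never an
     * ERR_PTR, so no IS_ERR() check is needed here. */
    dp = dev->dsa_ptr;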
2025-03-08  eth: fbnic: support ring size configuration  (Jakub Kicinski, 2 files, -0/+122)
Support ethtool -g / -G. Leverage the code added for -l / -L to alloc / stop / start / free. Check parameters against HW min/max but also our own min/max. The minimum HW queue is 16 entries; we can't deal with TWQs that small because of the queue waking logic. Add a similar constraint on the RCQ for symmetry. We need three sizes on Rx, as the NIC does header-data split into two separate buffer pools:
 (1) head page ring - how many empty pages we post for headers
 (2) payload page ring - how many empty pages we post for payloads
 (3) completion ring - where the NIC produces the Rx descriptors
Acked-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250306145150.1757263-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
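A hedged sketch of the ethtool -g side of such support; the limit macro and the fields on the driver-private struct are hypothetical:

    static void fbnic_get_ringparam(struct net_device *netdev,
                                    struct ethtool_ringparam *ring,
                                    struct kernel_ethtool_ringparam *kring,
                                    struct netlink_ext_ack *extack)
    {
            struct fbnic_net *fbn = netdev_priv(netdev);

            ring->rx_max_pending = FBNIC_QUEUE_SIZE_MAX;    /* hypothetical HW/SW cap */
            ring->tx_max_pending = FBNIC_QUEUE_SIZE_MAX;
            ring->rx_pending = fbn->rcq_size;               /* hypothetical field names */
            ring->tx_pending = fbn->txq_size;
    }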
2025-03-08  eth: fbnic: fix typo in compile assert  (Jakub Kicinski, 1 file, -1/+1)
We should be validating the Rx count on the Rx struct, not the Tx struct. There is no real change here, rx_stats and tx_stats are instances of the same struct. Acked-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250306145150.1757263-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  eth: fbnic: link NAPIs to page pools  (Jakub Kicinski, 1 file, -1/+3)
The lifetime of page pools is tied to NAPI instances, and they are destroyed before NAPI is deleted. It's safe to link them up. Acked-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250306145150.1757263-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
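A sketch of the linkage, assuming the standard page pool API; the pool size and device pointers are illustrative:

    #include <net/page_pool/types.h>

    struct page_pool_params pp = {
            .order          = 0,
            .pool_size      = ring_size,            /* illustrative */
            .nid            = NUMA_NO_NODE,
            .dev            = dma_dev,              /* device used for DMA mapping */
            .napi           = napi,                 /* ties the pool to its NAPI instance */
            .dma_dir        = DMA_FROM_DEVICE,
    };
    struct page_pool *pool = page_pool_create(&pp);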
2025-03-08  net: bcmgenet: revise suspend/resume  (Doug Berger, 3 files, -52/+119)
If the network interface is configured for Wake-on-LAN we should avoid bringing the interface down and up since it slows the time to reestablish network traffic on resume. Redundant calls to phy_suspend() and phy_resume() are removed since they are already invoked from within phy_stop() and phy_start() called from bcmgenet_netif_stop() and bcmgenet_netif_start(). Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-15-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: allow return of power up status  (Doug Berger, 3 files, -12/+17)
It is possible for a WoL power up to fail due to the GENET being reset while in the suspend state. Allow these failures to be returned as error codes to allow different recovery behavior when necessary. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-14-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: move bcmgenet_power_up into resume_noirq  (Doug Berger, 3 files, -18/+15)
The bcmgenet_power_up() function is moved from the resume method to the resume_noirq method for symmetry with the suspend_noirq method. This allows the wol_active flag to be removed. The UMAC_IRQ_WAKE_EVENT interrupts that can be unmasked by the bcmgenet_wol_power_down_cfg() function are now re-masked by the bcmgenet_wol_power_up_cfg() function at the resume_noirq level as well. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-13-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: support reclaiming unsent Tx packets  (Doug Berger, 1 file, -7/+30)
When disabling the transmitter any outstanding packets can now be reclaimed by bcmgenet_tx_reclaim_all() rather than by the bcmgenet_fini_dma() function. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-12-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: introduce bcmgenet_[r|t]dma_disable  (Doug Berger, 1 file, -61/+62)
The bcmgenet_rdma_disable and bcmgenet_tdma_disable functions are introduced to provide a common method for disabling each dma and the code is simplified. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-11-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: consolidate dma initialization  (Doug Berger, 1 file, -96/+54)
The functions bcmgenet_dma_disable and bcmgenet_enable_dma are only used as part of dma initialization. Their functionality is moved inside bcmgenet_init_dma and the functions are removed. Since the dma is always disabled inside of bcmgenet_init_dma, the initialization functions bcmgenet_init_rx_queues and bcmgenet_init_tx_queues no longer need to attempt to manage its state. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-10-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: remove dma_ctrl argument  (Doug Berger, 1 file, -13/+8)
Since the individual queues manage their own DMA enables there is no need to return dma_ctrl from bcmgenet_dma_disable() and pass it back to bcmgenet_enable_dma(). Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-9-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: add support for RX_CLS_FLOW_DISC  (Doug Berger, 1 file, -5/+8)
Now that the DESC_INDEX ring descriptor is no longer used we can enable hardware discarding of flows by routing them to a queue that is not enabled. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-8-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: move DESC_INDEX flow to ring 0  (Doug Berger, 3 files, -275/+110)
The default transmit and receive packet handling is moved from the DESC_INDEX (i.e. 16) descriptor rings to the Ring 0 queues. This saves a fair amount of special case code by unifying the handling. A default dummy filter is enabled in the Hardware Filter Block to route default receive packets to Ring 0. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-7-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: extend bcmgenet_hfb_* API  (Doug Berger, 1 file, -37/+57)
Extend the bcmgenet_hfb_* API to allow initialization and programming of the Hardware Filter Block on GENET v1 and GENET v2 hardware. Programming of ethtool flows is still not supported on this older hardware. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-6-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: BCM7712 is GENETv5 compatible  (Doug Berger, 1 file, -1/+1)
The major revision of the GENET core in the BCM7712 SoC was bumped to 7 but it is compatible with the GENETv5 implementation. This commit maps the version accordingly to avoid a warning. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-5-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: move feature flags to bcmgenet_priv  (Doug Berger, 2 files, -15/+21)
The feature flags are moved and consolidated to the primary private driver structure and are now initialized from the platform device data rather than the hardware parameters to allow finer control over which platforms use which features. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-4-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: add bcmgenet_has_* helpers  (Doug Berger, 3 files, -14/+39)
Introduce helper functions to indicate whether the driver should make use of a particular feature that it supports. These helpers abstract the implementation of how the feature availability is encoded. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-3-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: bcmgenet: bcmgenet_hw_params clean up  (Doug Berger, 2 files, -101/+84)
The entries of the bcmgenet_hw_params array are broken out to remove unused and duplicate entries and are made read only since they should not change for a specific version of the GENET hardware. Signed-off-by: Doug Berger <opendmb@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250306192643.2383632-2-opendmb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net/mlx5: Fill out devlink dev info only for PFs  (Jiri Pirko, 1 file, -0/+3)
Firmware version query is supported only on the PFs. Due to this, the following kernel warning log is observed when it is issued elsewhere:

  [  188.590344] mlx5_core 0000:08:00.2: mlx5_fw_version_query:816:(pid 1453): fw query isn't supported by the FW

Fix it by restricting the query and devlink info to the PF. Fixes: 8338d9378895 ("net/mlx5: Added devlink info callback") Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Link: https://patch.msgid.link/20250306212529.429329-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
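A minimal sketch of the kind of guard described, assumed to sit near the top of the devlink info callback:

    /* Only PFs support the firmware version query; report nothing for
     * other functions instead of triggering the FW warning. */
    if (!mlx5_core_is_pf(dev))
            return 0;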
2025-03-08  net: stmmac: remove write-only priv->speed  (Russell King (Oracle), 2 files, -4/+0)
priv->speed is only ever written to in two locations, but never read. Therefore, it serves no useful purpose. Remove this unnecessary struct member. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1tqLJJ-005aQm-Mv@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  gve: convert to use netmem for DQO RDA mode  (Harshitha Ramamurthy, 3 files, -22/+47)
To add netmem support to the gve driver, add a union to struct gve_rx_slot_page_info. netmem_ref is used for the DQO queue format's raw DMA addressing (RDA) mode. The struct page is retained for other use cases. Then, switch to using the relevant netmem helper functions for page pool and skb frag management. Reviewed-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com> Link: https://patch.msgid.link/20250307003905.601175-1-hramamurthy@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
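A sketch of the union described for gve_rx_slot_page_info; the layout is illustrative, not the exact driver definition:

    #include <net/netmem.h>

    struct gve_rx_slot_page_info {
            union {
                    struct page *page;      /* non-RDA queue formats */
                    netmem_ref netmem;      /* DQO RDA mode, backed by a page pool */
            };
            /* ... page offset, pagecnt bias, etc. elided ... */
    };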
2025-03-08  net: stmmac: Add glue layer for Sophgo SG2044 SoC  (Inochi Amaoto, 3 files, -0/+87)
Add Sophgo dwmac driver support for the Sophgo SG2044 SoC. Signed-off-by: Inochi Amaoto <inochiama@gmail.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/20250307011623.440792-5-inochiama@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: stmmac: platform: Add snps,dwmac-5.30a IP compatible string  (Inochi Amaoto, 1 file, -0/+1)
Add the "snps,dwmac-5.30a" compatible string for the 5.30a version, which avoids having to define some platform data in the glue layer. Signed-off-by: Inochi Amaoto <inochiama@gmail.com> Reviewed-by: Romain Gantois <romain.gantois@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/20250307011623.440792-4-inochiama@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  net: stmmac: platform: Group GMAC4 compatible check  (Inochi Amaoto, 1 file, -5/+11)
Use of_device_compatible_match() to group the existing compatible checks for GMAC4 devices. Signed-off-by: Inochi Amaoto <inochiama@gmail.com> Reviewed-by: Romain Gantois <romain.gantois@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/20250307011623.440792-3-inochiama@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
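A sketch of the grouping pattern; the compatible list and the platform-data field are illustrative:

    static const char * const stmmac_gmac4_compats[] = {
            "snps,dwmac-4.00",
            "snps,dwmac-4.10a",
            "snps,dwmac-4.20a",
            "snps,dwmac-5.10a",
            NULL
    };

    /* one call replaces a chain of of_device_is_compatible() checks */
    if (of_device_compatible_match(np, stmmac_gmac4_compats))
            plat->has_gmac4 = 1;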
2025-03-07  net/mlx5e: Properly match IPsec subnet addresses  (Leon Romanovsky, 3 files, -9/+69)
The existing match criteria didn't allow matching on a whole subnet, only on specific addresses. This caused tunnel mode not to forward such traffic through the relevant SA. In tunnel mode, policies look like this:

  src 192.169.0.0/16 dst 192.169.0.0/16
          dir out priority 383615 ptype main
          tmpl src 192.169.101.2 dst 192.169.101.1
                  proto esp spi 0xc5141c18 reqid 1 mode tunnel
          crypto offload parameters: dev eth2 mode packet

In this case, the XFRM core code handled all subnet calculations and forwarded the network address to the drivers, e.g. 192.169.0.0. For mlx5 devices, there is a need to set the relevant prefix, e.g. 0xFFFF00, to perform the flow steering match operation.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://patch.msgid.link/20250304160620.417580-7-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net/mlx5e: Separate address related variables to be in struct  (Leon Romanovsky, 3 files, -65/+68)
Prepare the code for the addition of prefix handling logic, which is needed to support matching based on source and/or destination network prefixes. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/20250304160620.417580-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net/mlx5: Lag, Enable Multiport E-Switch offloads on 8 ports LAG  (Amir Tzin, 1 file, -4/+0)
Patch [1] added mlx5 driver support for 8-port HCAs, which are available since ConnectX-8. Now that Multiport E-Switch is tested, we can enable it by removing the flag MLX5_LAG_MPESW_OFFLOADS_SUPPORTED_PORTS. [1] commit e0e6adfe8c20 ("net/mlx5: Enable 8 ports LAG") Signed-off-by: Amir Tzin <amirtz@nvidia.com> Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/20250304160620.417580-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net/mlx5e: Enable lanes configuration when auto-negotiation is off  (Shahar Shitrit, 3 files, -55/+67)
Currently, when auto-negotiation is disabled, the driver retrieves the speed and converts it into all link modes that correspond to that speed. With this patch, we add the ability to set the number of lanes, so that the combination of speed and lanes corresponds to exactly one specific link mode for the extended bit map. For the legacy bit map, the driver sets all link modes that correspond to the speed and lanes. This change provides users with the option to set a specific link mode, rather than enabling all link modes associated with a given speed when auto-negotiation is off. Signed-off-by: Shahar Shitrit <shshitrit@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20250304160620.417580-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net/mlx5: Refactor link speed handling with mlx5_link_info struct  (Shahar Shitrit, 4 files, -88/+103)
Introduce struct mlx5_link_info with a speed field and change the types of mlx5e_link_speed and mlx5e_ext_link_speed from arrays of u32 to arrays of struct mlx5_link_info. These arrays are renamed to mlx5e_link_info and mlx5e_ext_link_info, respectively. This change prepares for a future patch that will introduce a lanes field in struct mlx5_link_info and add lanes mapping alongside the speed for each link mode in the two arrays. Additionally, rename function mlx5_port_speed2linkmodes() to mlx5_port_info2linkmodes() and function mlx5_port_ptys2speed() to mlx5_port_ptys2info() and update the speed parameter/return type to struct mlx5_link_info, in preparation for the upcoming patch where these functions will also utilize the lanes field. Signed-off-by: Shahar Shitrit <shshitrit@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20250304160620.417580-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net/mlx5: Relocate function declarations from port.h to mlx5_core.h  (Shahar Shitrit, 2 files, -20/+85)
The port header is a general file under include, yet it contains declarations for functions that are either not exported or exported but not used outside the mlx5_core driver. To enhance code organization, we move these declarations to mlx5_core.h, where they are more appropriately scoped. This refactor removes unnecessary exported symbols and prevents unexported functions from being inadvertently referenced outside of the mlx5_core driver. Signed-off-by: Shahar Shitrit <shshitrit@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20250304160620.417580-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: ti: icss-iep: Add phase offset configuration for perout signal  (Meghana Malladi, 1 file, -2/+16)
icss_iep_perout_enable_hw() is a common function for generating both pps and perout signals. When enabling pps, the application only needs to pass an enable/disable argument, whereas perout supports different flags to configure the signal. If the app passes a valid phase offset value, the signal should start toggling after that phase offset, else start immediately or as soon as possible. The ICSS_IEP_SYNC_START_REG register takes the number of clock cycles to wait before starting the signal after the activation time. Set an appropriate value in this register to support the phase offset. Signed-off-by: Meghana Malladi <m-malladi@ti.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Kory Maincent <kory.maincent@bootlin.com> Link: https://patch.msgid.link/20250304105753.1552159-3-m-malladi@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: ti: icss-iep: Add pwidth configuration for perout signal  (Meghana Malladi, 1 file, -3/+44)
icss_iep_perout_enable_hw() is a common function for generating both pps and perout signals. When enabling pps, the application only needs to pass an enable/disable argument, whereas perout supports different flags to configure the signal. But icss_iep_perout_enable_hw() does not hook up the configuration params passed by the app, causing perout to behave the same as pps (except for being able to configure the period). As duty cycle is also a feature that can be configured for perout, incorporate it in the function to get the expected signal. Signed-off-by: Meghana Malladi <m-malladi@ti.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Kory Maincent <kory.maincent@bootlin.com> Link: https://patch.msgid.link/20250304105753.1552159-2-m-malladi@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: airoha: Enable TSO/Scatter Gather for LAN port  (Lorenzo Bianconi, 1 file, -0/+1)
Set net_device vlan_features in order to enable TSO and Scatter Gather for DSA user ports. Reviewed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-lan-enable-tso-v1-1-b398eb9976ba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
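For illustration, the shape of such a change (the exact flag set used by the driver may differ):

    /* Offloads advertised in vlan_features are usable by VLAN/DSA upper
     * devices stacked on this port. */
    dev->vlan_features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6;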
2025-03-07  net: airoha: Fix lan4 support in airoha_qdma_get_gdm_port()  (Lorenzo Bianconi, 1 file, -1/+1)
EN7581 SoC supports lan{1,4} ports on MT7530 DSA switch. Fix lan4 reported value in airoha_qdma_get_gdm_port routine. Fixes: 23020f0493270 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-fix-lan4-v1-1-832417da4bb5@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: airoha: Increase max mtu to 9k  (Lorenzo Bianconi, 1 file, -1/+1)
EN7581 SoC supports 9k maximum MTU. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-4-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: airoha: Introduce airoha_dev_change_mtu callback  (Lorenzo Bianconi, 1 file, -0/+15)
Add airoha_dev_change_mtu callback to update the MTU of a running device. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-3-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
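A hedged sketch of the callback shape; the hardware update is a placeholder for the driver's frame-length programming:

    static int airoha_dev_change_mtu(struct net_device *dev, int new_mtu)
    {
            /* reprogram the port's maximum frame length for new_mtu here,
             * e.g. the GDM length configuration register */
            dev->mtu = new_mtu;
            return 0;
    }

    /* hooked up via .ndo_change_mtu in the device's net_device_ops */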
2025-03-07  net: airoha: Enable Rx Scatter-Gather  (Lorenzo Bianconi, 3 files, -26/+48)
EN7581 SoC can receive 9k frames. Enable the reception of Scatter-Gather (SG) frames. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-2-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-07  net: airoha: Move min/max packet len configuration in airoha_dev_open()  (Lorenzo Bianconi, 1 file, -7/+7)
In order to align max allowed packet size to the configured mtu, move REG_GDM_LEN_CFG configuration in airoha_dev_open routine. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-1-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>