path: root/drivers/net/ethernet
Age | Commit message | Author | Files | Lines
2023-07-24 | ice: Disable vlan pruning for uplink VSI | Wojciech Drewek | 1 | -0/+10
In switchdev mode, the uplink VSI is configured to be the default VSI, which means it will receive all unmatched packets. In order to receive VLAN packets we need to disable VLAN pruning as well. This is done by the dis_rx_filtering VLAN op. Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-07-24 | ice: Don't tx before switchdev is fully configured | Wojciech Drewek | 1 | -0/+3
There is a possibility that ice_eswitch_port_start_xmit might be called while some resources are still not allocated, which could cause a NULL pointer dereference. Fix this by checking whether the switchdev configuration has finished. Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
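The fix described above amounts to an early bail-out in the representor's ndo_start_xmit. A minimal sketch of that pattern, assuming a hypothetical switchdev_ready flag rather than the driver's actual state field:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct my_port_priv {
	bool switchdev_ready;	/* hypothetical: set only once setup has completed */
};

static netdev_tx_t my_port_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_port_priv *priv = netdev_priv(dev);

	/* Resources may not be allocated yet; drop instead of dereferencing them. */
	if (!priv->switchdev_ready) {
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	/* ... normal representor transmit path ... */
	return NETDEV_TX_OK;
}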
2023-07-24 | ice: Prohibit rx mode change in switchdev mode | Wojciech Drewek | 1 | -1/+1
Don't allow changing the promiscuous mode while in switchdev mode. When switchdev is configured, the PF netdev is set to be the default VSI. This is needed for the slow path to work correctly: all unmatched packets are directed to the PF netdev. It is possible that this setting could be overwritten by ndo_set_rx_mode. Prevent this by checking whether switchdev is enabled in ice_set_rx_mode. Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
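The check is simply an early return from the ndo_set_rx_mode handler; a sketch under the assumption of a hypothetical my_in_switchdev_mode() helper (not the ice driver's API):

#include <linux/netdevice.h>

static bool my_in_switchdev_mode(struct net_device *netdev)
{
	/* stand-in for however the driver records its eswitch mode */
	return false;
}

static void my_set_rx_mode(struct net_device *netdev)
{
	/* Keep the PF as the default VSI; don't let promisc updates override it. */
	if (my_in_switchdev_mode(netdev))
		return;

	/* ... usual unicast/multicast/promiscuous filter programming ... */
}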
2023-07-24 | ice: Skip adv rules removal upon switchdev release | Wojciech Drewek | 3 | -55/+0
Advanced rules for the ctrl VSI will be removed anyway when the VSI is cleaned up, so there is no need to do it explicitly. Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-07-24 | ionic: add FLR recovery support | Shannon Nelson | 3 | -4/+62
Add support for the PCI reset handlers in order to manage an FLR event. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | ionic: pull out common bits from fw_up | Shannon Nelson | 1 | -22/+42
Pull out some code from ionic_lif_handle_fw_up() that can be used in the coming FLR recovery patch. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | ionic: extract common bits from ionic_probe | Shannon Nelson | 1 | -35/+49
Pull out some chunks of code from ionic_probe() that will be common in rebuild paths. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | ionic: extract common bits from ionic_remove | Shannon Nelson | 1 | -12/+13
Pull out a chunk of code from ionic_remove() that will be common in teardown paths. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | ethernet: atheros: fix return value check in atl1c_tso_csum() | Yuanjun Gong | 1 | -2/+5
In atl1c_tso_csum(), the driver should check the return value of pskb_trim() and return an error code if pskb_trim() fails. Signed-off-by: Yuanjun Gong <ruc_gongyuanjun@163.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
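pskb_trim() returns 0 on success and a negative errno on failure, so the general shape of such a fix looks roughly like this (a sketch of the pattern, not the exact atl1c code; my_trim_to_real_len is an illustrative name):

#include <linux/skbuff.h>

static int my_trim_to_real_len(struct sk_buff *skb, unsigned int real_len)
{
	int err;

	if (real_len < skb->len) {
		err = pskb_trim(skb, real_len);
		if (err)
			return err;	/* previously the error was silently ignored */
	}
	return 0;
}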
2023-07-24 | vxlan: calculate correct header length for GPE | Jiri Benc | 1 | -1/+1
VXLAN-GPE does not add an extra inner Ethernet header. Take that into account when calculating header length. This causes problems in skb_tunnel_check_pmtu, where incorrect PMTU is cached. In the collect_md mode (which is the only mode that VXLAN-GPE supports), there's no magic auto-setting of the tunnel interface MTU. It can't be, since the destination and thus the underlying interface may be different for each packet. So, the administrator is responsible for setting the correct tunnel interface MTU. Apparently, the administrators are capable enough to calculate that the maximum MTU for VXLAN-GPE is (their_lower_MTU - 36). They set the tunnel interface MTU to 1464. If you run a TCP stream over such interface, it's then segmented according to the MTU 1464, i.e. producing 1514 bytes frames. Which is okay, this still fits the lower MTU. However, skb_tunnel_check_pmtu (called from vxlan_xmit_one) uses 50 as the header size and thus incorrectly calculates the frame size to be 1528. This leads to ICMP too big message being generated (locally), PMTU of 1450 to be cached and the TCP stream to be resegmented. The fix is to use the correct actual header size, especially for skb_tunnel_check_pmtu calculation. Fixes: e1e5314de08ba ("vxlan: implement GPE") Signed-off-by: Jiri Benc <jbenc@redhat.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
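For reference, the numbers above follow from adding up standard header sizes; a back-of-the-envelope sketch in plain C (simple arithmetic, not driver code):

#include <stdio.h>

int main(void)
{
	int outer_ip = 20, udp = 8, vxlan = 8, inner_eth = 14, outer_eth = 14;

	/* Classic VXLAN carries an inner Ethernet header: 20 + 8 + 8 + 14 = 50. */
	printf("VXLAN overhead: %d\n", outer_ip + udp + vxlan + inner_eth);
	/* VXLAN-GPE has no inner Ethernet header: 20 + 8 + 8 = 36. */
	printf("VXLAN-GPE overhead: %d\n", outer_ip + udp + vxlan);
	/* Tunnel MTU 1464 over a 1500-byte underlay:
	 * correct GPE math: 1464 + 36 + 14 (outer Ethernet) = 1514 -> fits;
	 * with the stale 50-byte overhead: 1464 + 50 + 14 = 1528 -> spurious "too big". */
	printf("frame (36): %d, frame (50): %d\n",
	       1464 + 36 + outer_eth, 1464 + 50 + outer_eth);
	return 0;
}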
2023-07-24 | net: hns3: fix wrong bw weight of disabled tc issue | Jijie Shao | 2 | -4/+16
In dwrr mode, the default bandwidth weight of a disabled tc is set to 0. If the bandwidth weight is 0, the mode will change to sp. Therefore, the default bandwidth weight of a disabled tc needs to be changed to 1, and 0 should be returned when querying the bandwidth weight of a disabled tc. In addition, the driver needs to stop configuring the bandwidth weight if the tc is disabled. Fixes: 848440544b41 ("net: hns3: Add support of TX Scheduler & Shaper to HNS3 driver") Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | net: hns3: fix wrong tc bandwidth weight data issue | Jijie Shao | 1 | -2/+1
Currently, the weight saved by the driver is used as the query result, which may differ from the actual weight in the register. Therefore, use the register value read from the firmware as the query result. Fixes: 0e32038dc856 ("net: hns3: refactor dump tc of debugfs") Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | net: hns3: add tm flush when setting tm | Hao Lan | 7 | -6/+73
If the tm module is configured while traffic is flowing, the traffic may become abnormal. This patch fixes that problem: traffic processing is stopped before the tm module is configured and re-enabled once the configuration is done. Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-24 | net: hns3: fix the imp capability bit cannot exceed 32 bits issue | Hao Lan | 2 | -4/+20
Currently only the first 32 bits of the capability flags are considered. When the matching capability flag bit is greater than 31, an incorrect bit is returned. This patch uses a bitmap to solve the issue, so each capability bit can be handled without a bit-width limit. Fixes: da77aef9cc58 ("net: hns3: create common cmdq resource allocate/free/query APIs") Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
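A bitmap sidesteps the 32-bit limit of a single flag word; a minimal sketch of the idea with illustrative names (not the hns3 structures):

#include <linux/bitmap.h>
#include <linux/bitops.h>

#define MY_CAPS_BITS	96	/* room for capability indexes well beyond 31 */

struct my_caps {
	DECLARE_BITMAP(bits, MY_CAPS_BITS);
};

static void my_caps_set(struct my_caps *caps, unsigned int cap)
{
	set_bit(cap, caps->bits);	/* works for cap >= 32, unlike a u32 flag word */
}

static bool my_caps_test(const struct my_caps *caps, unsigned int cap)
{
	return test_bit(cap, caps->bits);
}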
2023-07-22 | eth: stmmac: let page recycling happen with skbs | Jakub Kicinski | 1 | -2/+2
stmmac removes pages from the page pool after attaching them to skbs. Use page recycling instead. skb heads are always copied, and pages are always from page pool in this driver. We could as well mark all allocated skbs for recycling. Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Link: https://lore.kernel.org/r/20230720010409.1967072-3-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-22 | eth: tsnep: let page recycling happen with skbs | Jakub Kicinski | 1 | -1/+1
tsnep builds an skb with napi_build_skb() and then calls page_pool_release_page() for the page in which that skb's head sits. Use recycling instead, recycling of heads works just fine. Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Link: https://lore.kernel.org/r/20230720010409.1967072-2-kuba@kernel.org Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
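Both this and the stmmac change rely on the skb recycling hook: instead of releasing the page from the pool before handing the buffer to the stack, the skb is marked so the page returns to the pool when the skb is freed. A rough sketch of that pattern (not the tsnep code itself; my_build_rx_skb is an illustrative name):

#include <linux/skbuff.h>

static struct sk_buff *my_build_rx_skb(void *buf_addr, unsigned int buf_size)
{
	struct sk_buff *skb;

	skb = napi_build_skb(buf_addr, buf_size);
	if (!skb)
		return NULL;

	/* Let the skb free path hand the page back to the page pool,
	 * rather than calling page_pool_release_page() up front. */
	skb_mark_for_recycle(skb);
	return skb;
}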
2023-07-21 | iavf: check for removal state before IAVF_FLAG_PF_COMMS_FAILED | Jacob Keller | 1 | -3/+3
In iavf_adminq_task(), if the function can't acquire the adapter->crit_lock, it checks if the driver is removing. If so, it simply exits without re-enabling the interrupt. This is done to ensure that the task stops processing as soon as possible once the driver is being removed. However, if the IAVF_FLAG_PF_COMMS_FAILED is set, the function checks this before attempting to acquire the lock. In this case, the function exits early and re-enables the interrupt. This will happen even if the driver is already removing. Avoid this, by moving the check to after the adapter->crit_lock is acquired. This way, if the driver is removing, we will not re-enable the interrupt. Fixes: fc2e6b3b132a ("iavf: Rework mutexes for better synchronisation") Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-07-21 | iavf: fix potential deadlock on allocation failure | Jacob Keller | 1 | -2/+3
In iavf_adminq_task(), if kzalloc() fails to allocate the event.msg_buf, the function will exit without releasing the adapter->crit_lock. This is unlikely, but if it happens, the next access to that mutex will deadlock. Fix this by moving the unlock to the end of the function, and adding a new label to allow jumping to the unlock portion of the function exit flow. Fixes: fc2e6b3b132a ("iavf: Rework mutexes for better synchronisation") Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
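The fix follows the usual unlock-at-a-single-exit pattern; a hedged sketch with hypothetical names (my_adapter and my_adminq_task are not the iavf symbols):

#include <linux/mutex.h>
#include <linux/slab.h>

struct my_adapter {
	struct mutex crit_lock;
};

static void my_adminq_task(struct my_adapter *adapter)
{
	u8 *buf;

	mutex_lock(&adapter->crit_lock);

	buf = kzalloc(1024, GFP_KERNEL);
	if (!buf)
		goto unlock;		/* previously returned here, leaking the mutex */

	/* ... process admin queue events using buf ... */

	kfree(buf);
unlock:
	mutex_unlock(&adapter->crit_lock);
}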
2023-07-21 | i40e: Fix an NULL vs IS_ERR() bug for debugfs_create_dir() | Wang Ming | 1 | -1/+1
The debugfs_create_dir() function returns error pointers. It never returns NULL. Most incorrect error checks were fixed, but the one in i40e_dbg_init() was forgotten. Fix the remaining error check. Fixes: 02e9c290814c ("i40e: debugfs interface") Signed-off-by: Wang Ming <machel@vivo.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
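debugfs_create_dir() returns an ERR_PTR-encoded pointer on failure and never NULL, so the check has to use IS_ERR(); a small sketch of the correct form (names are illustrative, not i40e's):

#include <linux/debugfs.h>
#include <linux/err.h>
#include <linux/printk.h>

static struct dentry *my_dbg_root;

static void my_dbg_init(void)
{
	my_dbg_root = debugfs_create_dir("my_driver", NULL);
	if (IS_ERR(my_dbg_root))	/* wrong: if (!my_dbg_root) */
		pr_warn("could not create debugfs directory\n");
}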
2023-07-21 | octeontx2-pf: htb offload support for Round Robin scheduling | Naveen Mamindlapalli | 4 | -40/+247
When multiple traffic flows reach the transmit level with the same priority, Round Robin scheduling picks the traffic flow with the highest quantum value. With this support, the user can add multiple classes with the same priority and different quantum values. This patch makes the necessary changes to support this. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | sch_htb: Allow HTB quantum parameter in offload mode | Naveen Mamindlapalli | 1 | -2/+2
The current implementation of HTB offload returns the EINVAL error for the quantum parameter. This patch removes that error check for 'quantum' and populates its value into the tc_htb_qopt_offload structure so that drivers can use it. Add a quantum parameter check in the mlx5 driver, as mlx5 devices are not capable of supporting the quantum parameter when HTB offload is used, and report an error if the quantum parameter is set to a non-default value. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | octeontx2-pf: implement transmit schedular allocation algorithm | Naveen Mamindlapalli | 2 | -6/+136
Unlike strict priority, where the number of classes is limited to a maximum of 8, there is no restriction on the number of DWRR child nodes unless the count exceeds the maximum number of child nodes supported. Hardware expects strict-priority transmit scheduler indexes to be mapped to their priority. This patch adds a transmit scheduler allocation algorithm so that the above requirement is honored. Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum: Permit enslavement to netdevices with uppers | Petr Machata | 1 | -4/+62
Enslaving of front panel ports (and their uppers) to netdevices that already have uppers is currently forbidden. In the previous patches, a number of replays have been added. Those ensure that various bits of state, such as next hops or switchdev objects, are offloaded when they become relevant due to a mlxsw lower being introduced into the topology. However the act of actually, for example, enslaving a front-panel port to a bridge with uppers, has been vetoed so far. In this patch, remove the vetoes and permit the operation. mlxsw currently validates creation of "interesting" uppers. Thus creating VLAN netdevices on top of 802.1ad bridges is forbidden if the bridge has an mlxsw lower, but permitted in general. This validation code never gets run when a port is introduced as a lower of an existing netdevice structure. Thus when enslaving an mlxsw netdevice to netdevices with uppers, invoke the PRECHANGEUPPER event handler for each netdevice above the one that the front panel port is being enslaved to. This way the tower of netdevices above the attachment point is validated. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Replay IP NETDEV_UP on device deslavement | Petr Machata | 3 | -8/+65
When a netdevice is removed from a bridge or a LAG, and it has an IP address, it should join the router and gain a RIF. Do that by replaying address addition event on the netdevice. When handling deslavement of LAG or its upper from a bridge device, the replay should be done after all the lowers of the LAG have left the bridge. Thus these scenarios are handled by passing replay_deslavement of false, and by invoking, after the lowers have been processed, a new helper, mlxsw_sp_netdevice_post_lag_event(), which does the per-LAG / -upper handling, and in particular invokes the replay. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Replay IP NETDEV_UP on device enslavement | Petr Machata | 4 | -0/+137
Enslaving of front panel ports (and their uppers) to netdevices that already have uppers is currently forbidden. When this is permitted, any uppers with IP addresses need to have the NETDEV_UP inetaddr event replayed, so that any RIFs are created. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Replay neighbours when RIF is made | Petr Machata | 1 | -1/+61
As neighbours are created, mlxsw is involved through the netevent notifications. When at the time there is no RIF for a given neighbour, the notification is not acted upon. When the RIF is later created, these outstanding neighbours are left unoffloaded and cause traffic to go through the SW datapath. In order to fix this issue, as a RIF is created, walk the ARP and ND tables and find neighbours for the netdevice that represents the RIF. Then schedule neighbour work for them, allowing them to be offloaded. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
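The kernel already exposes an iterator over the neighbour tables, so the replay can be sketched as a walk that filters on the RIF's netdevice and re-queues the driver's neighbour work; the helper names below are illustrative stand-ins, not mlxsw's:

#include <net/neighbour.h>
#include <net/arp.h>

static void my_schedule_neigh_work(struct neighbour *n)
{
	/* stand-in for queueing the driver's neighbour offload work item */
}

static void my_neigh_rif_visit(struct neighbour *n, void *cookie)
{
	struct net_device *dev = cookie;

	if (n->dev == dev)
		my_schedule_neigh_work(n);
}

static void my_rif_neigh_replay(struct net_device *dev)
{
	/* Walk the ARP table; the IPv6 ND table (nd_tbl) can be walked the same way. */
	neigh_for_each(&arp_tbl, my_neigh_rif_visit, dev);
}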
2023-07-21 | mlxsw: spectrum_router: Replay MACVLANs when RIF is made | Petr Machata | 2 | -27/+54
If IP address is added to a MACVLAN netdevice, the effect is of configuring VRRP on the RIF for the netdevice linked to the MACVLAN. Because the MACVLAN offload is tied to existence of a RIF at the linked netdevice, adding a MACVLAN is currently not allowed until a RIF is present. If this requirement stays, it will never be possible to attach a first port into a topology that involves a MACVLAN. Thus topologies would need to be built in a certain order, which is impractical. Additionally, IP address removal, which leads to disappearance of the RIF that the MACVLAN depends on, cannot be vetoed. Thus even as things stand now it is possible to get to a state where a MACVLAN netdevice exists without a RIF, despite having mlxsw lowers. And once the MACVLAN is un-offloaded due to RIF getting destroyed, recreating the RIF does not bring it back. In this patch, accept that MACVLAN can be created out of order and support that use case. One option would seem to be to simply recognize MACVLAN netdevices as "interesting", and let the existing replay mechanisms take care of the offload. However, that does not address the necessity to reoffload MACVLAN once a RIF is created. Thus add a new replay hook, symmetrical to mlxsw_sp_rif_macvlan_flush(), called mlxsw_sp_rif_macvlan_replay(), which instead of unwinding the existing offloads, applies the configuration as if the netdevice were created just now. Additionally, remove all vetoes and warning messages that checked for presence of a RIF at the linked device. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Offload ethernet nexthops when RIF is made | Petr Machata | 1 | -0/+54
As a RIF is created, refresh each nexthop group tracked at the CRIF for which the RIF was created. Note that nothing needs to be done for IPIP nexthops. The RIF for these is either available from the get-go, or will never be available, so no after-the-fact offloading needs to be done. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Join RIFs of LAG upper VLANs | Petr Machata | 1 | -3/+52
In the following patches, the requirement that ports be only enslaved to masters without uppers, is going to be relaxed. It will therefore be necessary to join not only RIF for the immediate LAG, as is currently the case, but also RIFs for VLAN netdevices upper to the LAG. In this patch, extend mlxsw_sp_netdevice_router_join_lag() to walk the uppers of a LAG being joined, and also join any VLAN ones. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_switchdev: Replay switchdev objects on port join | Petr Machata | 3 | -5/+132
Currently it never happens that a netdevice that is already a bridge slave would suddenly become an mlxsw upper. The only case where this might be possible, as far as mlxsw is concerned, is with LAG netdevices. But if a LAG has any upper (e.g. is enslaved), enslaving an mlxsw port to that LAG is forbidden. Thus the only way to install a LAG between a bridge and an mlxsw port is by first enslaving the port to the LAG, and then enslaving that LAG to a bridge. At that point there are no bridge objects (such as port VLANs) to replay. Those are added afterwards, and notified as they are created. This holds even for the PVID. However in the following patches, the requirement that ports be only enslaved to masters without uppers is going to be relaxed. It will therefore be necessary to replay the existing bridge objects. Without this replay, e.g. the mlxsw bridge_port_vlan objects are not instantiated, which causes issues later, as a lot of code relies on their presence. To that end, add a new notifier block whose sole role is to filter out events related to the one relevant upper, and forward those to the existing switchdev notifier block. Pass the new notifier block to switchdev_bridge_port_offload() when the bridge port is created. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum: On port enslavement to a LAG, join upper's bridges | Petr Machata | 1 | -0/+90
Currently it never happens that a netdevice that is already a bridge slave would suddenly become mlxsw upper. The only case where this might be possible as far as mlxsw is concerned, is with LAG netdevices. But if a LAG already has an upper, enslaving mlxsw port to that LAG is forbidden. Thus the only way to install a LAG between a bridge and a mlxsw port is by first enslaving the port to the LAG, and then enslaving that LAG to a bridge. However in the following patches, the requirement that ports be only enslaved to masters without uppers, is going to be relaxed. It will therefore be necessary to join bridges of LAG uppers. Without this replay, the mlxsw bridge_port objects are not instantiated, which causes issues later, as a lot of code relies on their presence. Therefore in this patch, when the first mlxsw physical netdevice is enslaved to a LAG, consider bridges upper to the LAG (both the direct master, if any, and any bridge masters of VLAN uppers), and have the relevant netdevices join their bridges. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum: Add a replay_deslavement argument to event handlers | Petr Machata | 1 | -8/+12
When handling deslavement of a LAG or its upper from a bridge device, when the deslaved netdevice has an IP address, it should join the router. This should be done after all the lowers of the LAG have left the bridge. The replay intended to cause the device to join the router therefore cannot be invoked unconditionally in the event handlers themselves. It can be done right away if the handler is invoked for a sole device, but when it is invoked repeatedly for each LAG lower, the replay needs to be postponed until after this processing is done. To that end, add a boolean parameter, replay_deslavement, to mlxsw_sp_netdevice_port_upper_event(), mlxsw_sp_netdevice_port_vlan_event() and one helper on the call path. Have the invocations that are done for sole netdevices pass true, and those done for LAG lowers pass false. Nothing depends on this flag at this point, but it removes some noise from the patch that introduces the replay itself. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum: Allow event handlers to check unowned bridges | Petr Machata | 1 | -16/+24
Currently the bridge-related handlers bail out when the event is related to a netdevice that is not an upper of one of the front-panel ports. In order to allow enslavement of front-panel ports to bridges that already have uppers, it will be necessary to replay CHANGEUPPER events to validate that the configuration is offloadable. In order for the replay to be effective, it must be possible to ignore unsupported configuration in the context of an actual notifier event, but to still "veto" these configurations when the validation is performed. To that end, introduce two parameters to a number of handlers: mlxsw_sp, because it will not be possible to deduce that from the netdevice lowers; and process_foreign to indicate whether netdevices that are not front panel uppers should be validated. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum: Split a helper out of mlxsw_sp_netdevice_event() | Petr Machata | 1 | -5/+15
Move the meat of mlxsw_sp_netdevice_event() to a separate function that does just the validation. This separate helper will be possible to call later for recursive ascent when validating attachment of a front panel port to a bridge with uppers. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Extract a helper to schedule neighbour work | Petr Machata | 1 | -9/+16
This will come in handy for neighbour replay. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | mlxsw: spectrum_router: Allow address handlers to run on bridge ports | Petr Machata | 1 | -17/+24
Currently the IP address event handlers bail out when the event is related to a netdevice that is a bridge port or a member of a LAG. In order to create a RIF when a bridged or LAG'd port is unenslaved, these event handlers will be replayed. However, at the point in time when the NETDEV_CHANGEUPPER event is delivered, informing of the loss of enslavement, the port is still formally enslaved. In order for the operation to have any effect, these handlers need an extra parameter to indicate that the check for bridge or LAG membership should not be done. In this patch, add an argument "nomaster" to several event handlers. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | net: ethernet: mtk_ppe: add MTK_FOE_ENTRY_V{1,2}_SIZE macros | Lorenzo Bianconi | 2 | -5/+8
Introduce MTK_FOE_ENTRY_V{1,2}_SIZE macros in order to make the foe_entry size for the different chipset revisions more explicit. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-07-21 | eth: bnxt: handle invalid Tx completions more gracefully | Jakub Kicinski | 3 | -1/+31
Invalid Tx completions should never happen (tm), but when they do they crash the host, because the driver blindly trusts that there is a valid skb pointer on the ring. The completions I've seen appear to be some form of FW / HW miscalculation or staleness; they have typical (small) values (<100), but they are most often higher than the number of queued descriptors. They usually happen after boot. Instead of crashing, print a warning and schedule a reset. Link: https://lore.kernel.org/r/20230720010440.1967136-4-kuba@kernel.org Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-21 | eth: bnxt: take the bit to set as argument of bnxt_queue_sp_work() | Jakub Kicinski | 1 | -35/+26
Most callers of bnxt_queue_sp_work() set a bit to indicate what work to perform right before calling it. Pass it to the function instead. Link: https://lore.kernel.org/r/20230720010440.1967136-3-kuba@kernel.org Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
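As a sketch of the refactor (with stand-in names, not bnxt's actual symbols), the event bit becomes an argument and the helper sets it itself:

#include <linux/bitops.h>
#include <linux/workqueue.h>

struct my_dev {
	unsigned long sp_event;
	struct work_struct sp_task;	/* assumed initialized with INIT_WORK elsewhere */
};

enum { MY_RESET_TASK_SP_EVENT, MY_LINK_CHNG_SP_EVENT };

static void my_queue_sp_work(struct my_dev *bp, unsigned int event)
{
	set_bit(event, &bp->sp_event);
	schedule_work(&bp->sp_task);
}

/* Callers change from:
 *	set_bit(MY_RESET_TASK_SP_EVENT, &bp->sp_event);
 *	my_queue_sp_work(bp);
 * to simply:
 *	my_queue_sp_work(bp, MY_RESET_TASK_SP_EVENT);
 */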
2023-07-21 | eth: bnxt: move and rename reset helpers | Jakub Kicinski | 1 | -36/+36
Move the reset helpers, subsequent patches will need some of them on the Tx path. While at it rename bnxt_sched_reset(), on more recent chips it schedules a queue reset, instead of a fuller reset. Link: https://lore.kernel.org/r/20230720010440.1967136-2-kuba@kernel.org Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-21 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 19 | -227/+374
Cross-merge networking fixes after downstream PR. No conflicts or adjacent changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | net: ethernet: mtk_eth_soc: always mtk_get_ib1_pkt_type | Daniel Golle | 1 | -1/+1
The 'entries' and 'bind' debugfs files would display wrong data on NETSYS_V2 and later because, instead of using mtk_get_ib1_pkt_type, the driver would use MTK_FOE_IB1_PACKET_TYPE, which corresponds to NETSYS_V1(.x) SoCs. Use mtk_get_ib1_pkt_type so that entries and bind records display correctly. Fixes: 03a3180e5c09e ("net: ethernet: mtk_eth_soc: introduce flow offloading support for mt7986") Signed-off-by: Daniel Golle <daniel@makrotopia.org> Acked-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://lore.kernel.org/r/c0ae03d0182f4d27b874cbdf0059bc972c317f3c.1689727134.git.daniel@makrotopia.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20Revert "r8169: disable ASPM during NAPI poll"Heiner Kallweit1-10/+1
This reverts commit e1ed3e4d91112027b90c7ee61479141b3f948e6a. Turned out the change causes a performance regression. Link: https://lore.kernel.org/netdev/20230713124914.GA12924@green245/T/ Cc: stable@vger.kernel.org Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/055c6bc2-74fa-8c67-9897-3f658abb5ae7@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | r8169: revert 2ab19de62d67 ("r8169: remove ASPM restrictions now that ASPM is disabled during NAPI poll") | Heiner Kallweit | 1 | -1/+26
There have been reports that on a number of systems this change breaks network connectivity. Therefore effectively revert it. Mainly affected seem to be systems where the BIOS denies ASPM access to the OS. Due to later changes we can't do a direct revert. Fixes: 2ab19de62d67 ("r8169: remove ASPM restrictions now that ASPM is disabled during NAPI poll") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/netdev/e47bac0d-e802-65e1-b311-6acb26d5cf10@freenet.de/T/ Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217596 Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/57f13ec0-b216-d5d8-363d-5b05528ec5fb@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | net: fec: remove unused members from struct fec_enet_private | Wei Fang | 1 | -3/+0
Three members of struct fec_enet_private have not been used since they were first introduced into the FEC driver (commit 6605b730c061 ("FEC: Add time stamping code and a PTP hardware clock")). Namely, last_overflow_check, rx_hwtstamp_filter and base_incval. These unused members make the struct fec_enet_private a bit messy and might confuse the readers. There is no reason to keep them in the FEC driver any longer. Signed-off-by: Wei Fang <wei.fang@nxp.com> Link: https://lore.kernel.org/r/20230718090928.2654347-4-wei.fang@nxp.com Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | net: fec: remove fec_set_mac_address() from fec_enet_init() | Wei Fang | 1 | -3/+0
The fec_enet_init() function is only invoked when the FEC driver probes, and the FEC network device has not been brought up at that point. So fec_set_mac_address() does nothing and just returns zero when it is invoked in fec_enet_init(). Actually, the MAC address is set into the hardware through fec_restart(), which is also called in fec_enet_init(). Signed-off-by: Wei Fang <wei.fang@nxp.com> Link: https://lore.kernel.org/r/20230718090928.2654347-3-wei.fang@nxp.com Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | net: fec: remove the remaining code of rx copybreak | Wei Fang | 2 | -45/+0
Since commit 95698ff6177b ("net: fec: using page pool to manage RX buffers") was applied, all rx packets, whether small or large, are put directly into the kernel networking buffers. That is to say, the rx copybreak function has been removed since then, but the related code has not been completely cleaned up. The purpose of this patch is to clean up the remaining rx copybreak code. Signed-off-by: Wei Fang <wei.fang@nxp.com> Link: https://lore.kernel.org/r/20230718090928.2654347-2-wei.fang@nxp.com Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | net: stmmac: use per-queue 64 bit statistics where necessary | Jisheng Zhang | 14 | -158/+335
Currently, there are two major issues with stmmac driver statistics. First of all, statistics in stmmac_extra_stats, stmmac_rxq_stats and stmmac_txq_stats are 32-bit variables on 32-bit platforms. This can cause some stats to overflow after several minutes of high traffic, for example rx_pkt_n, tx_pkt_n and so on. Secondly, if the HW supports multiple queues, there are frequent cacheline ping pongs on some driver statistic vars, for example normal_irq_n, tx_pkt_n and so on. What's more, the frequent cacheline ping pongs on normal_irq_n happen in the ISR, which makes the situation worse. To improve the driver, we convert those statistics to 64 bit, implement ndo_get_stats64 and update the .get_ethtool_stats implementation accordingly. We also use per-queue statistics where necessary to remove the cacheline ping pongs as much as possible and make multiqueue operations faster. Statistics which cannot overflow and are not frequently updated are kept as is. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20230717160630.1892-3-jszhang@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
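The usual kernel idiom for 64-bit per-queue counters that stay consistent on 32-bit platforms is u64_stats_sync; a hedged sketch of that pattern (field names are illustrative, not stmmac's actual layout):

#include <linux/u64_stats_sync.h>

struct my_rxq_stats {
	u64 rx_pkt_n;
	struct u64_stats_sync syncp;	/* seqcount on 32-bit, no-op on 64-bit */
};

static void my_rxq_stats_add(struct my_rxq_stats *s, unsigned int pkts)
{
	u64_stats_update_begin(&s->syncp);
	s->rx_pkt_n += pkts;
	u64_stats_update_end(&s->syncp);
}

static u64 my_rxq_stats_read(struct my_rxq_stats *s)
{
	unsigned int start;
	u64 val;

	do {
		start = u64_stats_fetch_begin(&s->syncp);
		val = s->rx_pkt_n;
	} while (u64_stats_fetch_retry(&s->syncp, start));

	return val;
}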
2023-07-20 | net: stmmac: don't clear network statistics in .ndo_open() | Jisheng Zhang | 1 | -4/+2
FWICT, the common style in other network drivers is that network statistics are not cleared after initialization; follow that common style for stmmac. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20230717160630.1892-2-jszhang@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-20 | Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | Jakub Kicinski | 7 | -90/+254
Alexei Starovoitov says: ==================== pull-request: bpf-next 2023-07-19 We've added 45 non-merge commits during the last 3 day(s) which contain a total of 71 files changed, 7808 insertions(+), 592 deletions(-). The main changes are:
1) multi-buffer support in AF_XDP, from Maciej Fijalkowski, Magnus Karlsson, Tirthendu Sarkar.
2) BPF link support for tc BPF programs, from Daniel Borkmann.
3) Enable bpf_map_sum_elem_count kfunc for all program types, from Anton Protopopov.
4) Add 'owner' field to bpf_rb_node to fix races in shared ownership, Dave Marchevsky.
5) Prevent potential skb_header_pointer() misuse, from Alexei Starovoitov.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (45 commits)
  bpf, net: Introduce skb_pointer_if_linear().
  bpf: sync tools/ uapi header with
  selftests/bpf: Add mprog API tests for BPF tcx links
  selftests/bpf: Add mprog API tests for BPF tcx opts
  bpftool: Extend net dump with tcx progs
  libbpf: Add helper macro to clear opts structs
  libbpf: Add link-based API for tcx
  libbpf: Add opts-based attach/detach/query API for tcx
  bpf: Add fd-based tcx multi-prog infra with link support
  bpf: Add generic attach/detach/query API for multi-progs
  selftests/xsk: reset NIC settings to default after running test suite
  selftests/xsk: add test for too many frags
  selftests/xsk: add metadata copy test for multi-buff
  selftests/xsk: add invalid descriptor test for multi-buffer
  selftests/xsk: add unaligned mode test for multi-buffer
  selftests/xsk: add basic multi-buffer test
  selftests/xsk: transmit and receive multi-buffer packets
  xsk: add multi-buffer documentation
  i40e: xsk: add TX multi-buffer support
  ice: xsk: Tx multi-buffer support
  ...
==================== Link: https://lore.kernel.org/r/20230719175424.75717-1-alexei.starovoitov@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>