2024-10-10  vxlan: Handle error of rtnl_register_module().  [Kuniyuki Iwashima, 3 files, -12/+15]
Since it was introduced, vxlan_vnifilter_init() has been ignoring the return value of rtnl_register_module(), which could fail silently. Handling the error allows users to view a module as an all-or-nothing thing in terms of rtnetlink functionality. This prevents syzkaller from reporting spurious errors from its tests, where OOM often occurs and the module is automatically loaded. Let's handle the errors with rtnl_register_many(). Fixes: f9c4bb0b245c ("vxlan: vni filtering support on collect metadata device") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
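For illustration, a minimal sketch of the resulting driver-side pattern: a handler table plus a single bulk call. The handler struct layout and the vxlan callback names are assumptions based on the description here and in the next entry, not the verbatim patch:
    static const struct rtnl_msg_handler vxlan_vnifilter_rtnl_msg_handlers[] = {
        {THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL, vxlan_vnifilter_dump, 0},
        {THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL, vxlan_vnifilter_process, NULL, 0},
        {THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL, vxlan_vnifilter_process, NULL, 0},
    };

    int vxlan_vnifilter_init(void)
    {
        /* All-or-nothing: returns 0 only if every handler registered. */
        return rtnl_register_many(vxlan_vnifilter_rtnl_msg_handlers);
    }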
2024-10-10  rtnetlink: Add bulk registration helpers for rtnetlink message handlers.  [Kuniyuki Iwashima, 2 files, -0/+46]
Before commit addf9b90de22 ("net: rtnetlink: use rcu to free rtnl message handlers"), once rtnl_msg_handlers[protocol] was allocated, the following rtnl_register_module() for the same protocol never failed. However, after the commit, rtnl_msg_handler[protocol][msgtype] needs to be allocated in each rtnl_register_module(), so each call could fail. Many callers of rtnl_register_module() do not handle the returned error, and we need to add many error handlings. To handle that easily, let's add wrapper functions for bulk registration of rtnetlink message handlers. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
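A sketch of what such a bulk helper boils down to; the exact struct fields and the low-level rtnl_register_internal() primitive used here are assumptions, not the verbatim implementation:
    struct rtnl_msg_handler {
        struct module *owner;
        int protocol;
        int msgtype;
        rtnl_doit_func doit;
        rtnl_dumpit_func dumpit;
        int flags;
    };

    int __rtnl_register_many(const struct rtnl_msg_handler *handlers, int n)
    {
        const struct rtnl_msg_handler *h;
        int i, err;

        for (i = 0, h = handlers; i < n; i++, h++) {
            err = rtnl_register_internal(h->owner, h->protocol, h->msgtype,
                                         h->doit, h->dumpit, h->flags);
            if (err) {
                /* Unwind what was registered so far, so callers can treat
                 * registration as all-or-nothing. */
                __rtnl_unregister_many(handlers, i);
                return err;
            }
        }
        return 0;
    }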
2024-10-10  net: phy: Validate PHY LED OPs presence before registering  [Christian Marangi, 1 file, -0/+11]
Validate PHY LED OPs presence before registering and parsing them. Defining LED nodes for a PHY driver that doesn't actually support them is redundant and useless. The same happens when the generic PHY driver is used and the DT has an LEDs node for the specific PHY. Skip registration in that case and report the error when debug prints are enabled. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241008194718.9682-1-ansuelsmth@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
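A sketch of the kind of check this adds before parsing the LED nodes; the struct phy_driver callback names are assumptions based on phylib's LED API:
    static bool phy_driver_has_led_ops(struct phy_device *phydev)
    {
        /* Only register and parse LED nodes if the driver implements at
         * least one LED operation; otherwise skip with a debug print. */
        return phydev->drv->led_brightness_set ||
               phydev->drv->led_blink_set ||
               phydev->drv->led_hw_is_supported;
    }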
2024-10-10  Merge tag 'nf-24-10-09' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf  [Paolo Abeni, 19 files, -170/+459]
Pablo Neira Ayuso says: ==================== Netfilter fixes for net The following patchset contains Netfilter fixes for net:
1) Restrict xtables extensions to families that are safe, syzbot found a way to combine ebtables with extensions that are never used by userspace tools. From Florian Westphal.
2) Set l3mdev unconditionally whenever possible in nft_fib to fix lookup mismatch, also from Florian.
netfilter pull request 24-10-09
* tag 'nf-24-10-09' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
  selftests: netfilter: conntrack_vrf.sh: add fib test case
  netfilter: fib: check correct rtable in vrf setups
  netfilter: xtables: avoid NFPROTO_UNSPEC where needed
==================== Link: https://patch.msgid.link/20241009213858.3565808-1-pablo@netfilter.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  Merge branch 'net-mlx5-qos-refactor-esw-qos-to-support-new-features'  [Paolo Abeni, 12 files, -376/+547]
Tariq Toukan says: ==================== net/mlx5: qos: Refactor esw qos to support new features This patch series by Cosmin and Carolina prepares the mlx5 qos infra for the upcoming feature of cross E-Switch scheduling.
Noop cleanups:
  net/mlx5: qos: Flesh out element_attributes in mlx5_ifc.h
  net/mlx5: qos: Rename vport 'tsar' into 'sched_elem'.
  net/mlx5: qos: Consistently name vport vars as 'vport'
  net/mlx5: qos: Refactor and document bw_share calculation
  net/mlx5: qos: Rename rate group 'list' as 'parent_entry'
Refactor the code with the goal of moving groups out of E-Switches:
  net/mlx5: qos: Maintain rate group vport members in a list
  net/mlx5: qos: Always create group0
  net/mlx5: qos: Drop 'esw' param from vport qos functions
  net/mlx5: qos: Store the eswitch in a mlx5_esw_rate_group
Move groups from an E-Switch into an mlx5_qos_domain:
  net/mlx5: qos: Store rate groups in a qos domain
Refactor locking to use a new mutex in the qos domain:
  net/mlx5: qos: Refactor locking to a qos domain mutex
In follow-up patchsets, we'll allow qos domains to be shared between E-Switches of the same NIC. The two top patches are simple enhancements. ==================== Link: https://patch.msgid.link/20241008183222.137702-1-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: Add support check for TSAR types in QoS scheduling  [Carolina Jubran, 4 files, -2/+34]
Introduce a new function, mlx5_qos_tsar_type_supported(), to handle the validation of TSAR types within QoS scheduling contexts. Refactor the existing code to use this new function, replacing direct checks for TSAR type support in the NIC scheduling hierarchy. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: Unify QoS element type checks across NIC and E-Switch  [Carolina Jubran, 4 files, -23/+44]
Refactor the QoS element type support check by introducing a new function, mlx5_qos_element_type_supported(), which handles element type validation for both NIC and E-Switch schedulers. This change removes the redundant esw_qos_element_type_supported() function and unifies the element type checks into a single implementation. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Refactor locking to a qos domain mutex  [Cosmin Ratiu, 5 files, -39/+74]
E-Switch qos changes used the esw state_lock to serialize qos changes. With the introduction of cross-esw scheduling, multiple E-Switches might be involved in a qos operation, so prepare for that by switching locking to use a qos domain mutex. Add three helper functions:
  - esw_qos_lock
  - esw_qos_unlock
  - esw_assert_qos_lock_held
Convert existing direct lock/unlock/lockdep calls to them. Also call esw_assert_qos_lock_held in a couple more places. mlx5_esw_qos_set_vport_rate was expected to be called with the esw state_lock already held. Change it to instead acquire the qos lock directly. mlx5_eswitch_get_vport_config also accessed qos properties with the esw state lock. Introduce a new function mlx5_esw_qos_get_vport_rate to access those with the correct lock and change get_vport_config to use it. Finally, mlx5_vport_disable is called from the cleanup path with the esw state_lock held, so have it additionally acquire the qos lock to make sure there are no races. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
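A sketch of the three helpers, assuming the qos domain embeds the mutex (the field path is illustrative):
    static void esw_qos_lock(struct mlx5_eswitch *esw)
    {
        mutex_lock(&esw->qos.domain->lock);
    }

    static void esw_qos_unlock(struct mlx5_eswitch *esw)
    {
        mutex_unlock(&esw->qos.domain->lock);
    }

    static void esw_assert_qos_lock_held(struct mlx5_eswitch *esw)
    {
        lockdep_assert_held(&esw->qos.domain->lock);
    }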
2024-10-10  net/mlx5: qos: Store rate groups in a qos domain  [Cosmin Ratiu, 4 files, -11/+65]
Groups are currently maintained as a list in their corresponding eswitch, protected by the esw state_lock. The upcoming cross-eswitch scheduling feature cannot work with this approach, as it would require acquiring multiple eswitch locks (in the correct order) in order to maintain group membership. This commit moves the rate groups into a new 'qos domain' struct and adds explicit qos init/cleanup steps to the eswitch init/cleanup. Upcoming patches will expand the qos domain struct and allow it to be shared between eswitches. For now, qos domains are private to each esw so there's only an extra indirection. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Rename rate group 'list' as 'parent_entry'  [Cosmin Ratiu, 1 file, -5/+5]
'list' is not very descriptive; a list membership field should clearly specify which list the entry belongs to. This commit renames the group's entry in the esw groups list to 'parent_entry' to make the code more readable. This is a no-op change. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Add an explicit 'dev' to vport trace calls  [Cosmin Ratiu, 2 files, -13/+16]
vport qos trace calls used vport->dev implicitly as the device to which the command was sent (and thus the device logged in traces). But that will no longer be the case for cross-esw scheduling, where the commands have to be sent to the group esw device instead. This commit corrects that. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Store the eswitch in a mlx5_esw_rate_group  [Cosmin Ratiu, 1 file, -63/+52]
The rate groups are about to be moved out of eswitches, so store a reference to the eswitch they belong to so things can still work later. This allows dropping the esw parameter from a couple of functions and simplifying some of the code. Use this opportunity to make sure that vport scheduling element commands are always sent to the group eswitch, because that will be relevant for cross-esw scheduling. For now though, the eswitches are not different. There is no functionality change here. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Drop 'esw' param from vport qos functions  [Cosmin Ratiu, 7 files, -60/+57]
The vport has a pointer to its own eswitch in vport->dev->priv.eswitch, so passing the same eswitch as a parameter to the various functions manipulating vport qos is superfluous at best and prone to errors at worst. More importantly, with the upcoming cross-esw scheduling changes, the eswitch that should receive the various scheduling element commands is NOT the same as the vport's eswitch, so the current code's assumptions will break. To avoid confusion and bugs, this commit drops the 'esw' parameter from all vport qos functions and uses the vport's own eswitch pointer instead. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Always create group0  [Cosmin Ratiu, 2 files, -18/+30]
All vports not explicitly members of a group with QoS enabled are part of the internal esw group0, except when the hw reports that groups aren't supported (log_esw_max_sched_depth == 0). This creates corner cases in the code, which has to make sure that this case is supported. Additionally, the groups are about to be moved out of eswitches, and group0 being NULL creates additional complications there. This patch makes sure to always create group0, even if max sched depth is 0. In that case, a software-only group0 is created referencing the root TSAR. Vports can point to this group when their QoS is enabled and they'll be attached to the root TSAR directly. This eliminates corner cases in the code by offering the guarantee that if qos is enabled, vport->qos.group is non-NULL. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Maintain rate group vport members in a list  [Cosmin Ratiu, 2 files, -37/+58]
Previously, finding group members was done by iterating over all vports of an eswitch and comparing their group with the required one, but that approach will break down when a group can contain vports from multiple eswitches. Solve that by maintaining a list of vport members. Instead of iterating over esw vports, loop over the members list. Use this opportunity to provide two new functions to allocate and free a group, so that the number of state transitions is smaller. This will also be used in a future patch. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Refactor and document bw_share calculation  [Cosmin Ratiu, 2 files, -66/+71]
The previous function (esw_qos_calculate_group_min_rate_divider) had two completely different modes of execution, depending on the 'group_level' parameter. Split it into two separate functions: - esw_qos_calculate_min_rate_divider - computes min across groups. - esw_qos_calculate_group_min_rate_divider - computes min in a group. Fold the divider calculation into the corresponding normalize functions to avoid having the caller compute the corresponding divider. Also rename the normalize functions to better indicate what level they're operating on. Finally, document everything so that this topic can more easily be understood by future maintainers. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Consistently name vport vars as 'vport'  [Cosmin Ratiu, 1 file, -24/+24]
The current mixture of 'vport' and 'evport' can be improved. There is no functional change. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Rename vport 'tsar' into 'sched_elem'.  [Cosmin Ratiu, 3 files, -30/+27]
Vports do not use TSARs (Transmit Scheduling ARbiters), which are used for grouping multiple entities together. Use the correct name in variables and functions for clarity. Also move the scheduling context to a local variable in the esw_qos_sched_elem_config function instead of an empty parameter that needs to be provided by all callers. There is no functional change here. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net/mlx5: qos: Flesh out element_attributes in mlx5_ifc.h  [Cosmin Ratiu, 2 files, -40/+45]
This is used for multiple purposes, depending on the scheduling element created. There are a few helper structs defined a long time ago, but they are not easy to find in the file and they are about to get new members. This commit cleans up this area a bit by:
  - moving the helper structs closer to where they are relevant.
  - defining a helper union to include all of them to help discoverability.
  - making use of it everywhere element_attributes is used.
  - using a consistent 'attr' name.
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  Merge branch 'eth-fbnic-add-timestamping-support'  [Paolo Abeni, 11 files, -8/+722]
Vadim Fedorenko says: ==================== eth: fbnic: add timestamping support The series adds timestamping support for Meta's NIC driver.
Changelog:
v3 -> v4:
  - use adjust_by_scaled_ppm() instead of open coding it
  - adjust the cached value of the high bits of the timestamp to be sure it is older than incoming timestamps
v2 -> v3:
  - rebase on top of net-next
  - add doc to describe the return value of fbnic_ts40_to_ns()
v1 -> v2:
  - adjust comment about using the u64 stats locking primitive
  - fix typo in the first patch
  - Cc Richard
==================== Link: https://patch.msgid.link/20241008181436.4120604-1-vadfed@meta.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  eth: fbnic: add ethtool timestamping statistics  [Vadim Fedorenko, 3 files, -1/+34]
Add counters of packets with HW timestamps requests and lost timestamps with no associated skbs. Use ethtool interface to report these counters. Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  eth: fbnic: add TX packets timestamping support  [Vadim Fedorenko, 2 files, -3/+95]
Add TX configuration to ethtool interface. Add processing of TX timestamp completions as well as configuration to request HW to create TX timestamp completion. Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  eth: fbnic: add RX packets timestamping support  [Vadim Fedorenko, 7 files, -1/+194]
Add callbacks to support timestamping configuration via ethtool. Add processing of RX timestamps. Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  eth: fbnic: add initial PHC support  [Vadim Fedorenko, 7 files, -4/+386]
Create PHC device and provide callbacks needed for ptp_clock device. Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
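A sketch of the .adjfine callback shape implied by the cover letter's use of adjust_by_scaled_ppm(); the fbnic structure and field names, the nominal frequency constant, and the register-write helper are all assumptions for illustration:
    static int fbnic_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
    {
        struct fbnic_dev *fbd = container_of(ptp, struct fbnic_dev, ptp_info);
        u64 rate;

        /* adjust_by_scaled_ppm() replaces the open-coded scaled-ppm math. */
        rate = adjust_by_scaled_ppm(FBNIC_PTP_NOMINAL_FREQ, scaled_ppm);
        fbnic_ptp_write_rate(fbd, rate);    /* hypothetical register write helper */
        return 0;
    }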
2024-10-10  eth: fbnic: add software TX timestamping support  [Vadim Fedorenko, 2 files, -0/+14]
Add software TX timestamping support. RX software timestamping is implemented in the core and there is no need to provide special flag in the driver anymore. Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-10-10  net: Remove likely from l3mdev_master_ifindex_by_index  [Breno Leitao, 1 file, -1/+1]
The likely() annotation in l3mdev_master_ifindex_by_index() has been found to be incorrect 100% of the time in real-world workloads (e.g., web servers). Annotated branch profiling shows the following on these servers:
  correct  incorrect    %  Function                        File      Line
        0  169053813  100  l3mdev_master_ifindex_by_index  l3mdev.h  81
This is happening because __inet_check_established() calls l3mdev_master_ifindex_by_index() passing the socket's bound interface: l3mdev_master_ifindex_by_index(net, sk->sk_bound_dev_if); Since most sockets are not going to be bound to a network device, the likely() is giving the wrong assumption. Remove the likely() annotation to ensure more accurate branch prediction. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20241008163205.3939629-1-leitao@debian.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
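The shape of the change, as a hedged reconstruction (not the verbatim header):
    static inline int l3mdev_master_ifindex_by_index(struct net *net, int ifindex)
    {
        int rc = 0;

        /* Most sockets are unbound, so sk->sk_bound_dev_if is usually 0 and
         * the old likely() hint was wrong essentially all the time here. */
        if (ifindex) {              /* was: if (likely(ifindex)) */
            struct net_device *dev;

            rcu_read_lock();
            dev = dev_get_by_index_rcu(net, ifindex);
            if (dev)
                rc = l3mdev_master_ifindex_rcu(dev);
            rcu_read_unlock();
        }
        return rc;
    }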
2024-10-10  net: do not delay dst_entries_add() in dst_release()  [Eric Dumazet, 1 file, -5/+12]
dst_entries_add() uses per-cpu data that might be freed at netns dismantle from ip6_route_net_exit() calling dst_entries_destroy() Before ip6_route_net_exit() can be called, we release all the dsts associated with this netns, via calls to dst_release(), which waits an rcu grace period before calling dst_destroy() dst_entries_add() use in dst_destroy() is racy, because dst_entries_destroy() could have been called already. Decrementing the number of dsts must happen sooner. Notes: 1) in CONFIG_XFRM case, dst_destroy() can call dst_release_immediate(child), this might also cause UAF if the child does not have DST_NOCOUNT set. IPSEC maintainers might take a look and see how to address this. 2) There is also discussion about removing this count of dst, which might happen in future kernels. Fixes: f88649721268 ("ipv4: fix dst race in sk_dst_get()") Closes: https://lore.kernel.org/lkml/CANn89iLCCGsP7SFn9HKpvnKu96Td4KD08xf7aGtiYgZnkjaL=w@mail.gmail.com/T/ Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Tested-by: Linux Kernel Functional Testing <lkft@linaro.org> Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Xin Long <lucien.xin@gmail.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Reviewed-by: Xin Long <lucien.xin@gmail.com> Link: https://patch.msgid.link/20241008143110.1064899-1-edumazet@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
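A rough sketch of the idea (the helper name and its call sites are assumptions): decrement the counter when the last reference goes away, before the RCU grace period, so it can never race with dst_entries_destroy() at netns dismantle:
    static void dst_count_dec(struct dst_entry *dst)
    {
        if (!(dst->flags & DST_NOCOUNT))
            dst_entries_add(dst->ops, -1);
    }

    /* Called from dst_release()/dst_release_immediate() when the refcount
     * drops to zero, instead of from dst_destroy() after the RCU callback. */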
2024-10-10  Merge branch 'ipv4-namespacify-ipv4-address-hash-table'  [Jakub Kicinski, 3 files, -31/+42]
Kuniyuki Iwashima says: ==================== ipv4: Namespacify IPv4 address hash table. This is a prep for the per-net RTNL conversion for RTM_(NEW|DEL|SET)ADDR. Currently, each IPv4 address is linked to the global hash table, and this needs to be protected by another global lock or namespacified to support per-net RTNL. Adding a global lock will cause deadlock in the rtnetlink path and GC,
  rtnetlink                      check_lifetime
  |- rtnl_net_lock(net)          |- acquire the global lock
  |- acquire the global lock     |- check ifa's netns
  `- put ifa into hash table     `- rtnl_net_lock(net)
so we need to namespacify the hash table. The IPv6 one is already namespacified, let's follow that. v2: https://lore.kernel.org/netdev/20241004195958.64396-1-kuniyu@amazon.com/ v1: https://lore.kernel.org/netdev/20241001024837.96425-1-kuniyu@amazon.com/ ==================== Link: https://patch.msgid.link/20241008172906.1326-1-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  ipv4: Retire global IPv4 hash table inet_addr_lst.  [Kuniyuki Iwashima, 2 files, -11/+0]
No one uses inet_addr_lst anymore, so let's remove it. Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008172906.1326-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  ipv4: Namespacify IPv4 address GC.  [Kuniyuki Iwashima, 2 files, -14/+19]
Each IPv4 address could have a lifetime, which is useful for DHCP, and GC is periodically executed as check_lifetime_work. check_lifetime() does the actual GC under RTNL:
  1. Acquire RTNL
  2. Iterate inet_addr_lst
  3. Remove IPv4 address if expired
  4. Release RTNL
Namespacifying the GC is required for per-netns RTNL, but using the per-netns hash table will shorten the time on the hash bucket iteration under RTNL. Let's add per-netns GC work and use the per-netns hash table. Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008172906.1326-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  ipv4: Use per-netns hash table in inet_lookup_ifaddr_rcu().  [Kuniyuki Iwashima, 1 file, -3/+2]
Now, all IPv4 addresses are put in the per-netns hash table. Let's use it in inet_lookup_ifaddr_rcu(). Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008172906.1326-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
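Roughly, the lookup becomes a walk of the per-netns bucket only (sketch; the per-netns field name is an assumption, and inet_addr_hash() is the internal hash helper sketched in the next entry):
    struct in_ifaddr *inet_lookup_ifaddr_rcu(struct net *net, __be32 addr)
    {
        u32 hash = inet_addr_hash(net, addr);
        struct in_ifaddr *ifa;

        hlist_for_each_entry_rcu(ifa, &net->ipv4.inet_addr_lst[hash], hash)
            if (ifa->ifa_local == addr)
                return ifa;

        return NULL;
    }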
2024-10-10  ipv4: Link IPv4 address to per-netns hash table.  [Kuniyuki Iwashima, 3 files, -3/+21]
As a prep for per-netns RTNL conversion, we want to namespacify the IPv4 address hash table and the GC work. Let's allocate the per-netns IPv4 address hash table to net->ipv4.inet_addr_lst and link IPv4 addresses into it. The actual users will be converted later. Note that the IPv6 address hash table is already namespacified. Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008172906.1326-2-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
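A sketch of the insertion path after the change; the helper names follow the existing IPv4/IPv6 address hashing conventions but should be treated as an approximation:
    static u32 inet_addr_hash(const struct net *net, __be32 addr)
    {
        u32 val = (__force u32)addr ^ net_hash_mix(net);

        return hash_32(val, IN4_ADDR_HSIZE_SHIFT);
    }

    static void inet_hash_insert(struct net *net, struct in_ifaddr *ifa)
    {
        u32 hash = inet_addr_hash(net, ifa->ifa_local);

        ASSERT_RTNL();
        hlist_add_head_rcu(&ifa->hash, &net->ipv4.inet_addr_lst[hash]);
    }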
2024-10-10  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  [Jakub Kicinski, 25 files, -100/+269]
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2024-10-08 (ice, iavf, igb, e1000e, e1000) This series contains updates to ice, iavf, igb, e1000e, and e1000 drivers.
For ice: Wojciech adds support for ethtool reset. Paul adds support for hardware-based VF mailbox limits for E830 devices. Jake adjusts to a common iterator in ice_vc_cfg_qs_msg() and moves storing of max_frame and rx_buf_len from the VSI struct to the ring structure. Hongbo Li uses assign_bit() to replace an open-coded instance. Markus Elfring adjusts a couple of PTP error paths to use a common, shared exit point. Yue Haibing removes unused declarations.
For iavf: Yue Haibing removes unused declarations.
For igb: Yue Haibing removes unused declarations.
For e1000e: Takamitsu Iwai removes unnecessary writel() calls. Joe Damato adds netdev-genl support to query IRQ, NAPI, and queue information.
For e1000: Joe Damato adds netdev-genl support to query IRQ, NAPI, and queue information.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  e1000: Link NAPI instances to queues and IRQs
  e1000e: Link NAPI instances to queues and IRQs
  e1000e: Remove duplicated writel() in e1000_configure_tx/rx()
  igb: Cleanup unused declarations
  iavf: Remove unused declarations
  ice: Cleanup unused declarations
  ice: Use common error handling code in two functions
  ice: Make use of assign_bit() API
  ice: store max_frame and rx_buf_len only in ice_rx_ring
  ice: consistently use q_idx in ice_vc_cfg_qs_msg()
  ice: add E830 HW VF mailbox message limit support
  ice: Implement ethtool reset support
==================== Link: https://patch.msgid.link/20241008233441.928802-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  [Jakub Kicinski, 11 files, -35/+31]
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2024-10-08 (ice, i40e, igb, e1000e) This series contains updates to ice, i40e, igb, and e1000e drivers.
For ice: Marcin allows the driver to load into safe mode when the DDP package is missing or corrupted, and adjusts the netif_is_ice() check to account for when the device is in safe mode. He also fixes an out-of-bounds issue when MSI-X are increased for VFs. Wojciech clears FDB entries on reset to match the hardware state.
For i40e: Aleksandr adds locking around MACVLAN filters to prevent memory leaks due to concurrency issues.
For igb: Mohamed Khalfella adds a check to not attempt to bring up an already running interface on non-fatal PCIe errors.
For e1000e: Vitaly changes the board type for I219 to more closely match the hardware and stop PHY issues.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  e1000e: change I219 (19) devices to ADP
  igb: Do not bring the device up after non-fatal error
  i40e: Fix macvlan leak by synchronizing access to mac_filter_hash
  ice: Fix increasing MSI-X on VF
  ice: Flush FDB entries before reset
  ice: Fix netif_is_ice() in Safe Mode
  ice: Fix entering Safe Mode
==================== Link: https://patch.msgid.link/20241008230050.928245-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  Fix misspelling of "accept*" in net  [Alexander Zubkov, 3 files, -4/+4]
Several files have "accept*" misspelled as "accpet*" in the comments. Fix all such occurrences. Signed-off-by: Alexander Zubkov <green@qrator.net> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241008162756.22618-2-green@qrator.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  net_sched: sch_sfq: handle bigger packets  [Eric Dumazet, 1 file, -26/+13]
SFQ has an assumption on dealing with packets smaller than 64KB. Even before BIG TCP, TCA_STAB can provide arbitrary big values in qdisc_pkt_len(skb) It is time to switch (struct sfq_slot)->allot to a 32bit field. sizeof(struct sfq_slot) is now 64 bytes, giving better cache locality. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://patch.msgid.link/20241008111603.653140-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
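A sketch of the slot layout after the change; the field names are a best-effort reconstruction, the point being that 'allot' is now a full 32-bit value so qdisc_pkt_len() results above 64KB can no longer wrap it:
    struct sfq_slot {
        struct sk_buff  *skblist_next;
        struct sk_buff  *skblist_prev;
        sfq_index       qlen;           /* number of skbs in skblist */
        sfq_index       next;           /* next slot in round-robin chain */
        struct sfq_head dep;            /* anchor in dep[] chains */
        unsigned short  hash;           /* hash value (index in ht[]) */
        int             allot;          /* credit for this slot, now 32 bits */
    };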
2024-10-10  net: stmmac: Add DW QoS Eth v4/v5 ip payload error statistics  [Minda Chen, 2 files, -1/+3]
Add DW QoS Eth v4/v5 ip payload error statistics, and rename the descriptor bit macro because the v4/v5 descriptor IPCE bit indicates either an IP checksum error or a TCP/UDP/ICMP segment length error. Here is the bit description from the DW QoS Eth data book (Part 19.6.2.2):
bit7 IPCE: IP Payload Error. When this bit is programmed, it indicates either of the following:
  1) The 16-bit IP payload checksum (that is, the TCP, UDP, or ICMP checksum) calculated by the MAC does not match the corresponding checksum field in the received segment.
  2) The TCP, UDP, or ICMP segment length does not match the payload length value in the IP Header field.
  3) The TCP, UDP, or ICMP segment length is less than minimum allowed segment length for TCP, UDP, or ICMP.
Signed-off-by: Minda Chen <minda.chen@starfivetech.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Link: https://patch.msgid.link/20241008111443.81467-1-minda.chen@starfivetech.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  Merge branch 'mptcp-misc-fixes-involving-fallback-to-tcp'  [Jakub Kicinski, 6 files, -10/+32]
Matthieu Baerts says: ==================== mptcp: misc. fixes involving fallback to TCP
- Patch 1: better handle DSS corruptions from a bugged peer: reducing warnings, doing a fallback or a reset depending on the subflow state. For >= v5.7.
- Patch 2: fix DSS corruption due to large pmtu xmit, where MPTCP was not taken into account. For >= v5.6.
- Patch 3: fallback when MPTCP opts are dropped after the first data packet, instead of resetting the connection. For >= v5.6.
- Patch 4: restrict the removal of a subflow to other closing states, a better fix, for a recent one. For >= v5.10.
==================== Link: https://patch.msgid.link/20241008-net-mptcp-fallback-fixes-v1-0-c6fb8e93e551@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  mptcp: pm: do not remove closing subflows  [Matthieu Baerts (NGI0), 1 file, -1/+2]
In a previous fix, the in-kernel path-manager has been modified not to retrigger the removal of a subflow if it was already closed, e.g. when the initial subflow is removed, but kept in the subflows list. To be complete, this fix should also skip the subflows that are in any closing state: mptcp_close_ssk() will initiate the closure, but the switch to the TCP_CLOSE state depends on the other peer. Fixes: 58e1b66b4e4b ("mptcp: pm: do not remove already closed subflows") Cc: stable@vger.kernel.org Suggested-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20241008-net-mptcp-fallback-fixes-v1-4-c6fb8e93e551@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  mptcp: fallback when MPTCP opts are dropped after 1st data  [Matthieu Baerts (NGI0), 1 file, -1/+1]
As reported by Christoph [1], before this patch, an MPTCP connection was wrongly reset when a host received a first data packet with MPTCP options after the 3wHS, but got the next ones without. According to the MPTCP v1 specs [2], a fallback should happen in this case, because the host didn't receive a DATA_ACK from the other peer, nor receive data for more than the initial window which implies a DATA_ACK being received by the other peer. The patch here re-uses the same logic as the one used in other places: by looking at allow_infinite_fallback, which is disabled at the creation of an additional subflow. It's not looking at the first DATA_ACK (or implying one received from the other side) as suggested by the RFC, but it is in continuation with what was already done, which is safer, and it fixes the reported issue. The next step, looking at this first DATA_ACK, is tracked in [4]. This patch has been validated using the following Packetdrill script: 0 socket(..., SOCK_STREAM, IPPROTO_MPTCP) = 3 +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 +0 bind(3, ..., ...) = 0 +0 listen(3, 1) = 0 // 3WHS is OK +0.0 < S 0:0(0) win 65535 <mss 1460, sackOK, nop, nop, nop, wscale 6, mpcapable v1 flags[flag_h] nokey> +0.0 > S. 0:0(0) ack 1 <mss 1460, nop, nop, sackOK, nop, wscale 8, mpcapable v1 flags[flag_h] key[skey]> +0.1 < . 1:1(0) ack 1 win 2048 <mpcapable v1 flags[flag_h] key[ckey=2, skey]> +0 accept(3, ..., ...) = 4 // Data from the client with valid MPTCP options (no DATA_ACK: normal) +0.1 < P. 1:501(500) ack 1 win 2048 <mpcapable v1 flags[flag_h] key[skey, ckey] mpcdatalen 500, nop, nop> // From here, the MPTCP options will be dropped by a middlebox +0.0 > . 1:1(0) ack 501 <dss dack8=501 dll=0 nocs> +0.1 read(4, ..., 500) = 500 +0 write(4, ..., 100) = 100 // The server replies with data, still thinking MPTCP is being used +0.0 > P. 1:101(100) ack 501 <dss dack8=501 dsn8=1 ssn=1 dll=100 nocs, nop, nop> // But the client already did a fallback to TCP, because the two previous packets have been received without MPTCP options +0.1 < . 501:501(0) ack 101 win 2048 +0.0 < P. 501:601(100) ack 101 win 2048 // The server should fallback to TCP, not reset: it didn't get a DATA_ACK, nor data for more than the initial window +0.0 > . 101:101(0) ack 601 Note that this script requires Packetdrill with MPTCP support, see [3]. Fixes: dea2b1ea9c70 ("mptcp: do not reset MP_CAPABLE subflow on mapping errors") Cc: stable@vger.kernel.org Reported-by: Christoph Paasch <cpaasch@apple.com> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/518 [1] Link: https://datatracker.ietf.org/doc/html/rfc8684#name-fallback [2] Link: https://github.com/multipath-tcp/packetdrill [3] Link: https://github.com/multipath-tcp/mptcp_net-next/issues/519 [4] Reviewed-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20241008-net-mptcp-fallback-fixes-v1-3-c6fb8e93e551@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  tcp: fix mptcp DSS corruption due to large pmtu xmit  [Paolo Abeni, 1 file, -4/+1]
Syzkaller was able to trigger a DSS corruption: TCP: request_sock_subflow_v4: Possible SYN flooding on port [::]:20002. Sending cookies. ------------[ cut here ]------------ WARNING: CPU: 0 PID: 5227 at net/mptcp/protocol.c:695 __mptcp_move_skbs_from_subflow+0x20a9/0x21f0 net/mptcp/protocol.c:695 Modules linked in: CPU: 0 UID: 0 PID: 5227 Comm: syz-executor350 Not tainted 6.11.0-syzkaller-08829-gaf9c191ac2a0 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024 RIP: 0010:__mptcp_move_skbs_from_subflow+0x20a9/0x21f0 net/mptcp/protocol.c:695 Code: 0f b6 dc 31 ff 89 de e8 b5 dd ea f5 89 d8 48 81 c4 50 01 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc e8 98 da ea f5 90 <0f> 0b 90 e9 47 ff ff ff e8 8a da ea f5 90 0f 0b 90 e9 99 e0 ff ff RSP: 0018:ffffc90000006db8 EFLAGS: 00010246 RAX: ffffffff8ba9df18 RBX: 00000000000055f0 RCX: ffff888030023c00 RDX: 0000000000000100 RSI: 00000000000081e5 RDI: 00000000000055f0 RBP: 1ffff110062bf1ae R08: ffffffff8ba9cf12 R09: 1ffff110062bf1b8 R10: dffffc0000000000 R11: ffffed10062bf1b9 R12: 0000000000000000 R13: dffffc0000000000 R14: 00000000700cec61 R15: 00000000000081e5 FS: 000055556679c380(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020287000 CR3: 0000000077892000 CR4: 00000000003506f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <IRQ> move_skbs_to_msk net/mptcp/protocol.c:811 [inline] mptcp_data_ready+0x29c/0xa90 net/mptcp/protocol.c:854 subflow_data_ready+0x34a/0x920 net/mptcp/subflow.c:1490 tcp_data_queue+0x20fd/0x76c0 net/ipv4/tcp_input.c:5283 tcp_rcv_established+0xfba/0x2020 net/ipv4/tcp_input.c:6237 tcp_v4_do_rcv+0x96d/0xc70 net/ipv4/tcp_ipv4.c:1915 tcp_v4_rcv+0x2dc0/0x37f0 net/ipv4/tcp_ipv4.c:2350 ip_protocol_deliver_rcu+0x22e/0x440 net/ipv4/ip_input.c:205 ip_local_deliver_finish+0x341/0x5f0 net/ipv4/ip_input.c:233 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314 __netif_receive_skb_one_core net/core/dev.c:5662 [inline] __netif_receive_skb+0x2bf/0x650 net/core/dev.c:5775 process_backlog+0x662/0x15b0 net/core/dev.c:6107 __napi_poll+0xcb/0x490 net/core/dev.c:6771 napi_poll net/core/dev.c:6840 [inline] net_rx_action+0x89b/0x1240 net/core/dev.c:6962 handle_softirqs+0x2c5/0x980 kernel/softirq.c:554 do_softirq+0x11b/0x1e0 kernel/softirq.c:455 </IRQ> <TASK> __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:382 local_bh_enable include/linux/bottom_half.h:33 [inline] rcu_read_unlock_bh include/linux/rcupdate.h:919 [inline] __dev_queue_xmit+0x1764/0x3e80 net/core/dev.c:4451 dev_queue_xmit include/linux/netdevice.h:3094 [inline] neigh_hh_output include/net/neighbour.h:526 [inline] neigh_output include/net/neighbour.h:540 [inline] ip_finish_output2+0xd41/0x1390 net/ipv4/ip_output.c:236 ip_local_out net/ipv4/ip_output.c:130 [inline] __ip_queue_xmit+0x118c/0x1b80 net/ipv4/ip_output.c:536 __tcp_transmit_skb+0x2544/0x3b30 net/ipv4/tcp_output.c:1466 tcp_transmit_skb net/ipv4/tcp_output.c:1484 [inline] tcp_mtu_probe net/ipv4/tcp_output.c:2547 [inline] tcp_write_xmit+0x641d/0x6bf0 net/ipv4/tcp_output.c:2752 __tcp_push_pending_frames+0x9b/0x360 net/ipv4/tcp_output.c:3015 tcp_push_pending_frames include/net/tcp.h:2107 [inline] tcp_data_snd_check net/ipv4/tcp_input.c:5714 [inline] tcp_rcv_established+0x1026/0x2020 net/ipv4/tcp_input.c:6239 tcp_v4_do_rcv+0x96d/0xc70 net/ipv4/tcp_ipv4.c:1915 sk_backlog_rcv 
include/net/sock.h:1113 [inline] __release_sock+0x214/0x350 net/core/sock.c:3072 release_sock+0x61/0x1f0 net/core/sock.c:3626 mptcp_push_release net/mptcp/protocol.c:1486 [inline] __mptcp_push_pending+0x6b5/0x9f0 net/mptcp/protocol.c:1625 mptcp_sendmsg+0x10bb/0x1b10 net/mptcp/protocol.c:1903 sock_sendmsg_nosec net/socket.c:730 [inline] __sock_sendmsg+0x1a6/0x270 net/socket.c:745 ____sys_sendmsg+0x52a/0x7e0 net/socket.c:2603 ___sys_sendmsg net/socket.c:2657 [inline] __sys_sendmsg+0x2aa/0x390 net/socket.c:2686 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7fb06e9317f9 Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe2cfd4f98 EFLAGS: 00000246 ORIG_RAX: 000000000000002e RAX: ffffffffffffffda RBX: 00007fb06e97f468 RCX: 00007fb06e9317f9 RDX: 0000000000000000 RSI: 0000000020000080 RDI: 0000000000000005 RBP: 00007fb06e97f446 R08: 0000555500000000 R09: 0000555500000000 R10: 0000555500000000 R11: 0000000000000246 R12: 00007fb06e97f406 R13: 0000000000000001 R14: 00007ffe2cfd4fe0 R15: 0000000000000003 </TASK> Additionally syzkaller provided a nice reproducer. The repro enables pmtu on the loopback device, leading to tcp_mtu_probe() generating very large probe packets. tcp_can_coalesce_send_queue_head() currently does not check for mptcp-level invariants, and allowed the creation of cross-DSS probes, leading to the mentioned corruption. Address the issue teaching tcp_can_coalesce_send_queue_head() about mptcp using the tcp_skb_can_collapse(), also reducing the code duplication. Fixes: 85712484110d ("tcp: coalesce/collapse must respect MPTCP extensions") Cc: stable@vger.kernel.org Reported-by: syzbot+d1bff73460e33101f0e7@syzkaller.appspotmail.com Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/513 Signed-off-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20241008-net-mptcp-fallback-fixes-v1-2-c6fb8e93e551@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  mptcp: handle consistently DSS corruption  [Paolo Abeni, 4 files, -4/+28]
A bugged peer implementation can send corrupted DSS options, consistently hitting a few warnings in the data path. Use DEBUG_NET assertions to avoid the splat on some builds, and handle the error consistently, dumping the related MIBs and performing a fallback and/or a reset according to the subflow type. Fixes: 6771bfd9ee24 ("mptcp: update mptcp ack sequence from work queue") Cc: stable@vger.kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20241008-net-mptcp-fallback-fixes-v1-1-c6fb8e93e551@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  net: netconsole: fix wrong warning  [Breno Leitao, 1 file, -1/+7]
A warning is triggered when there is insufficient space in the buffer for userdata. However, this is not an issue since userdata will be sent in the next iteration. Current warning message: ------------[ cut here ]------------ WARNING: CPU: 13 PID: 3013042 at drivers/net/netconsole.c:1122 write_ext_msg+0x3b6/0x3d0 ? write_ext_msg+0x3b6/0x3d0 console_flush_all+0x1e9/0x330 The code incorrectly issues a warning when this_chunk is zero, which is a valid scenario. The warning should only be triggered when this_chunk is negative. Fixes: 1ec9daf95093 ("net: netconsole: append userdata to fragmented netconsole messages") Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241008094325.896208-1-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
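A sketch of the corrected check; the variable names follow the description above and are not the verbatim driver code:
    this_chunk = min(userdata_len - sent_userdata,
                     MAX_PRINT_CHUNK - preceding_bytes);
    /* this_chunk == 0 is valid: the userdata simply goes out with the next
     * fragment. Only a negative value indicates a real bug. */
    if (WARN_ON_ONCE(this_chunk < 0))
        return;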
2024-10-10  net: dsa: refuse cross-chip mirroring operations  [Vladimir Oltean, 1 file, -3/+8]
In case of a tc mirred action from one switch to another, the behavior is not correct. We simply tell the source switch driver to program a mirroring entry towards mirror->to_local_port = to_dp->index, but it is not even guaranteed that the to_dp belongs to the same switch as dp. For proper cross-chip support, we would need to go through the cross-chip notifier layer in switch.c, program the entry on cascade ports, and introduce new, explicit API for cross-chip mirroring, given that intermediary switches should have introspection into the DSA tags passed through the cascade port (and not just program a port mirror on the entire cascade port). None of that exists today. Reject what is not implemented so that user space is not misled into thinking it works. Fixes: f50f212749e8 ("net: dsa: Add plumbing for port mirroring") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241008094320.3340980-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
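The guard amounts to something like the following sketch (the extack message wording is illustrative): reject the request when the mirror target port lives on a different switch than the source port.
    if (to_dp->ds != dp->ds) {
        NL_SET_ERR_MSG_MOD(extack,
                           "Cross-chip mirroring not implemented");
        return -EOPNOTSUPP;
    }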
2024-10-10  ipv6: Remove redundant unlikely()  [Tobias Klauser, 1 file, -1/+1]
IS_ERR_OR_NULL() already implies unlikely(). Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241008085454.8087-1-tklauser@distanz.ch Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  net: fec: don't save PTP state if PTP is unsupported  [Wei Fang, 1 file, -2/+4]
Some platforms (such as i.MX25 and i.MX27) do not support PTP, so on these platforms fec_ptp_init() is not called and the related members in fep are not initialized. However, fec_ptp_save_state() is called unconditionally, which causes the kernel to panic. Therefore, add a condition so that fec_ptp_save_state() is not called if PTP is not supported. Fixes: a1477dc87dc4 ("net: fec: Restart PPS after link state change") Reported-by: Guenter Roeck <linux@roeck-us.net> Closes: https://lore.kernel.org/lkml/353e41fe-6bb4-4ee9-9980-2da2a9c1c508@roeck-us.net/ Signed-off-by: Wei Fang <wei.fang@nxp.com> Reviewed-by: Csókás, Bence <csokas.bence@prolan.hu> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Guenter Roeck <linux@roeck-us.net> Link: https://patch.msgid.link/20241008061153.1977930-1-wei.fang@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
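In practice the fix amounts to gating the call, e.g. the following sketch; the exact capability check the driver uses (fep->bufdesc_ex here) is an assumption:
    /* Only save/restore PTP state when the platform supports PTP,
     * i.e. when fec_ptp_init() actually ran. */
    if (fep->bufdesc_ex)
        fec_ptp_save_state(fep);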
2024-10-10  ipv6: switch inet6_acaddr_hash() to less predictable hash  [Eric Dumazet, 1 file, -2/+3]
commit 2384d02520ff ("net/ipv6: Add anycast addresses to a global hashtable") added inet6_acaddr_hash(), using ipv6_addr_hash() and net_hash_mix() to get hash spreading for typical users. However ipv6_addr_hash() is highly predictable and a malicious user could abuse a specific hash bucket. Switch to __ipv6_addr_jhash(). We could use a dedicated secret, or reuse net_hash_mix() as I did in this patch. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008121307.800040-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  ipv6: switch inet6_addr_hash() to less predictable hash  [Eric Dumazet, 1 file, -1/+1]
In commit 3f27fb23219e ("ipv6: addrconf: add per netns perturbation in inet6_addr_hash()"), I added net_hash_mix() in inet6_addr_hash() to get better hash dispersion, at a time all netns were sharing the hash table. Since then, commit 21a216a8fc63 ("ipv6/addrconf: allocate a per netns hash table") made the hash table per netns. We could remove the net_hash_mix() from inet6_addr_hash(), but there is still an issue with ipv6_addr_hash(). It is highly predictable and a malicious user can easily create thousands of IPv6 addresses all stored in the same hash bucket. Switch to __ipv6_addr_jhash(). We could use a dedicated secret, or reuse net_hash_mix() as I did in this patch. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20241008120101.734521-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
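A sketch of the resulting hash (the same pattern applies to inet6_acaddr_hash() in the previous entry); treat it as an approximation of the change, not the verbatim source:
    static u32 inet6_addr_hash(const struct net *net, const struct in6_addr *addr)
    {
        /* __ipv6_addr_jhash() keys the hash with net_hash_mix(), making the
         * bucket choice much harder for a malicious user to predict than
         * the xor-folding done by ipv6_addr_hash(). */
        u32 val = __ipv6_addr_jhash(addr, net_hash_mix(net));

        return hash_32(val, IN6_ADDR_HSIZE_SHIFT);
    }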
2024-10-10  net: airoha: Fix EGRESS_RATE_METER_EN_MASK definition  [Lorenzo Bianconi, 1 file, -1/+1]
Fix a typo in the EGRESS_RATE_METER_EN_MASK mask definition. This bug does not introduce any user-visible problem since, even if we are setting the EGRESS_RATE_METER_EN_MASK bit in the REG_EGRESS_RATE_METER_CFG register, egress QoS metering is not supported yet because we are missing some other hw configuration (e.g. token bucket rate, token bucket size). Introduced by commit 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241009-airoha-fixes-v2-1-18af63ec19bf@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-10  net: ibm: emac: mal: add dcr_unmap to _remove  [Rosen Penev, 1 file, -0/+2]
It's done in probe so it should be undone here. Fixes: 1d3bb996481e ("Device tree aware EMAC driver") Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20241008233050.9422-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>