Age | Commit message | Author | Files | Lines
2022-08-24 | r8169: remove support for chip version 49 | Heiner Kallweit | 3 | -47/+3
Detection of this chip version has been disabled for a few kernel versions now. Nobody complained, so remove support for this chip version. v2: - fix a typo: RTL_GIGA_MAC_VER_40 -> RTL_GIGA_MAC_VER_50 Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | r8169: remove support for chip versions 45 and 47 | Heiner Kallweit | 3 | -82/+5
Detection of these chip versions has been disabled for a few kernel versions now. Nobody complained, so remove support for these chip versions. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | r8169: remove support for chip version 41 | Heiner Kallweit | 3 | -5/+1
Detection of this chip version has been disabled for a few kernel versions now. Nobody complained, so remove support for this chip version. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | Merge tag 'mlx5-updates-2022-08-22' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux | David S. Miller | 29 | -765/+1246
mlx5-updates-2022-08-22 Roi Dayan Says: =============== Add support for SF tunnel offload. The mlx5 driver only supports VF tunnel offload. To add support for SF tunnel offload the driver needs to: 1. Add send-to-vport metadata matching rules like done for VFs. 2. Set an indirect table for the SF vport, same as for a VF vport. The series also splits the relevant eswitch offloads code into smaller sub functions for better maintainability, and moves the creation of the send-to-vport meta rules from the esw init phase to the representor load phase. SFs could be created after esw is initialized, and thus the send-to-vport meta rules would not be created for those SFs. By moving the creation of the rules to the representor load phase we ensure the rules are also created for SFs created later. =============== Lama Kayal Says: ================ Make the flow steering API loosely coupled from mlx5e_priv, in a manner that introduces more readable and maintainable modules. Make TC's private, let the mlx5e_flow_steering struct be dynamically allocated, and introduce its API to maintain the code via setters and getters instead of exposing it publicly. Introduce flow steering debug macros to provide an elegant finish to the decoupled flow steering API, where errors related to flow steering shall be reported via them. All flow steering related files will drop any coupling to mlx5e_priv; instead they will get the relevant members as input. Among these, fs_tt_redirect, fs_tc, and arfs. ================
2022-08-24 | micrel: ksz8851: fixes struct pointer issue | Jerry Ray | 1 | -3/+2
Issue found during code review. This bug has no impact as long as the ks8851_net structure is the first element of the ks8851_net_spi structure: as long as the offset of the ks8851_net struct is zero, the container_of() macro subtracts 0 and no damage is done. But if the ks8851_net_spi struct is ever modified such that the ks8851_net struct within it is no longer its first element, the bug would manifest itself and cause problems. struct ks8851_net is contained within ks8851_net_spi; ks is contained within kss, and kss is the priv_data of the netdev structure. Signed-off-by: Jerry Ray <jerry.ray@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
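A minimal sketch of the embedding relationship described above, assuming illustrative member and helper names rather than the driver's exact layout:

    #include <linux/container_of.h>

    struct ks8851_net {
            void *hw_priv;                  /* placeholder for common MAC state */
    };

    struct ks8851_net_spi {
            struct ks8851_net ks8851;       /* need not stay the first member */
            /* SPI-specific state follows */
    };

    static inline struct ks8851_net_spi *to_ks8851_spi(struct ks8851_net *ks)
    {
            /* container_of() subtracts the real offset of 'ks8851', so the
             * lookup stays correct even if the member is moved later. */
            return container_of(ks, struct ks8851_net_spi, ks8851);
    }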
2022-08-24 | tcp: annotate data-race around tcp_md5sig_pool_populated | Eric Dumazet | 1 | -4/+10
tcp_md5sig_pool_populated can be read while another thread changes its value. The race has no consequence because allocations are protected with tcp_md5sig_mutex. This patch adds READ_ONCE() and WRITE_ONCE() to document the race and silence KCSAN. Reported-by: Abhishek Shah <abhishek.shah@columbia.edu> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
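A hedged sketch of the annotation pattern (not the exact tcp.c hunk, and the helper names are hypothetical): the writer still runs under tcp_md5sig_mutex; the markings only document the lockless read for KCSAN.

    #include <linux/compiler.h>

    static bool tcp_md5sig_pool_populated;

    static bool md5sig_pool_ready(void)             /* lockless reader */
    {
            return READ_ONCE(tcp_md5sig_pool_populated);
    }

    static void md5sig_pool_set_populated(void)     /* caller holds tcp_md5sig_mutex */
    {
            WRITE_ONCE(tcp_md5sig_pool_populated, true);
    }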
2022-08-24 | net: marvell: prestera: implement br_port_locked flag offloading | Oleksandr Mazur | 5 | -1/+34
Offloading of the br_port_locked flag is supported for both <port> and <lag> interfaces. No new ABI is added; rather, the existing (port_param_set) API call is extended. Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu> V2: add missing recipients (linux-kernel, netdev) Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | Merge branch 'j7200-support' | David S. Miller | 3 | -9/+55
Siddharth Vadapalli says: ==================== J7200: CPSW5G: Add support for QSGMII mode to am65-cpsw driver Add support for QSGMII mode to am65-cpsw driver. Change log: v4-> v5: 1. Move ti,j7200-cpswxg-nuss compatible to the line above the ti,j721e-cpsw-nuss compatible. 2. Add allOf and move if-then statements within it to allow future if-then statements to be added easily. v3 -> v4: 1. Update bindings to disallow ports based on compatible, instead of adding a new if/then statement for the new compatible. 2. Add Else-If condition for RMII mode in the set of supported interfaces. Support for RMII mode is already present in the driver and I had missed out adding a condition for RMII mode in the previous patches. v2 -> v3: 1. In ti,k3-am654-cpsw-nuss.yaml, restrict if/then statement to port nodes. v1 -> v2: 1. Add new compatible for CPSW5G in ti,k3-am654-cpsw-nuss.yaml and extend properties for new compatible. 2. Add extra_modes member to struct am65_cpsw_pdata to be used for QSGMII mode by new compatible. 3. Add check for phylink supported modes to ensure that only one phy mode is advertised as supported. 4. Check if extra_modes supports QSGMII mode in am65_cpsw_nuss_mac_config() for register write. 5. Add check for assigning port->sgmii_base only when extra_modes is valid. v4: https://lore.kernel.org/r/20220816060139.111934-1-s-vadapalli@ti.com/ v3: https://lore.kernel.org/r/20220606110443.30362-1-s-vadapalli@ti.com/ v2: https://lore.kernel.org/r/20220602114558.6204-1-s-vadapalli@ti.com/ v1: https://lore.kernel.org/r/20220531113058.23708-1-s-vadapalli@ti.com/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | net: ethernet: ti: am65-cpsw: Move phy_set_mode_ext() to correct location | Siddharth Vadapalli | 1 | -5/+4
In TI's J7200 SoC CPSW5G ports, each of the 4 ports can be configured as a QSGMII main or QSGMII-SUB port. This configuration is performed by phy-gmii-sel driver on invoking the phy_set_mode_ext() function. It is necessary for the QSGMII main port to be configured before any of the QSGMII-SUB interfaces are brought up. Currently, the QSGMII-SUB interfaces come up before the QSGMII main port is configured. Fix this by moving the call to phy_set_mode_ext() from am65_cpsw_nuss_ndo_slave_open() to am65_cpsw_nuss_init_slave_ports(), thereby ensuring that the QSGMII main port is configured before any of the QSGMII-SUB ports are brought up. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | net: ethernet: ti: am65-cpsw: Add support for J7200 CPSW5G | Siddharth Vadapalli | 2 | -2/+35
CPSW5G in J7200 supports additional modes like QSGMII and SGMII. Add new compatible for J7200 and enable QSGMII mode in am65-cpsw driver. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | dt-bindings: net: ti: k3-am654-cpsw-nuss: Update bindings for J7200 CPSW5G | Siddharth Vadapalli | 1 | -2/+16
Update bindings for the TI K3 J7200 SoC, which contains a 5-port (4 external ports) CPSW5G module, and add a compatible for it. Changes made: - Add new compatible ti,j7200-cpswxg-nuss for CPSW5G. - Extend pattern properties for the new compatible. - Change the maximum number of CPSW ports to 4 for the new compatible. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-24 | net: skb: prevent the split of kfree_skb_reason() by gcc | Menglong Dong | 3 | -3/+12
Sometimes, gcc will optimize a function by splitting it into two or more functions. In this case, kfree_skb_reason() is split into kfree_skb_reason and kfree_skb_reason.part.0. However, the function/tracepoint trace_kfree_skb() in it needs the return address of kfree_skb_reason(). The split makes the call chain become: kfree_skb_reason() -> kfree_skb_reason.part.0 -> trace_kfree_skb() which makes the return address that is passed to trace_kfree_skb() be kfree_skb(). Therefore, introduce '__fix_address', which is the combination of '__noclone' and 'noinline', and apply it to kfree_skb_reason() to prevent it from being split or inlined. (Is it better to simply apply '__noclone noinline' to kfree_skb_reason? I'm thinking maybe other functions have the same problem.) Meanwhile, wrap 'skb_unref()' with 'unlikely()', as the compiler thinks it is likely to return true and splits kfree_skb_reason(). Signed-off-by: Menglong Dong <imagedong@tencent.com> Signed-off-by: David S. Miller <davem@davemloft.net>
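A hedged sketch of the attribute combination being introduced; the exact definition in the compiler headers may differ, and the function below is only a stand-in:

    /* noinline and __noclone are the kernel's wrappers around the GCC
     * attributes of the same names. */
    #define __fix_address   noinline __noclone

    void __fix_address kfree_skb_reason_stub(void *skb, int reason)
    {
            /* A function marked this way may be neither cloned nor split
             * into a .part.0 helper, so a tracepoint inside it can rely
             * on the function's own return address. */
    }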
2022-08-24 | Merge branch 'add-interface-mode-select-and-rmii' | Jakub Kicinski | 2 | -5/+95
Wei Fang says: ==================== add interface mode select and RMII From: Wei Fang <wei.fang@nxp.com> The patches add support for the features below for both TJA1100 and TJA1101 PHY cards: - Add MII and RMII mode support. - Add REF_CLK input/output support for RMII mode. ==================== Link: https://lore.kernel.org/r/20220822015949.1569969-1-wei.fang@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | net: phy: tja11xx: add interface mode and RMII REF_CLK support | Wei Fang | 1 | -5/+78
Add support for the features below for both TJA1100 and TJA1101 cards: - Add MII and RMII mode support. - Add REF_CLK input/output support for RMII mode. Signed-off-by: Wei Fang <wei.fang@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | dt-bindings: net: tja11xx: add nxp,refclk_in property | Wei Fang | 1 | -0/+17
The TJA110x REF_CLK can be configured as an interface reference clock input or output when RMII mode is enabled. This patch adds the property to make the REF_CLK configurable. Signed-off-by: Wei Fang <wei.fang@nxp.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | Merge branch 'mlxsw-introduce-modular-system-support-by-minimal-driver' | Jakub Kicinski | 5 | -86/+546
Petr Machata says: ==================== mlxsw: Introduce modular system support by minimal driver Vadim Pasternak writes: This patchset adds line card support in mlxsw_minimal, which is used for monitoring purposes on BMC systems. The BMC is connected to the ASIC over an I2C bus, unlike the host CPU that is connected to the ASIC via a PCI bus. The BMC system needs to be notified whenever line cards become active or inactive, so that, for example, netdevs will be registered / unregistered by mlxsw_minimal. However, traps cannot be generated towards the BMC over the I2C bus. To overcome that, the I2C bus driver (i.e., mlxsw_i2c) registers a handler for an IRQ that is fired upon specific system wide changes, like line card activation and deactivation. The generated event is handled by mlxsw_core, which checks whether anything changed in the state of available line cards. If a line card becomes active or inactive, interested parties such as mlxsw_minimal are notified via their registered line card event callback. Patch set overview: Patch #1 is a preparation. Patches #2-#3 extend mlxsw_core with an infrastructure to handle the previously mentioned system events. Patch #4 extends the I2C bus driver to register a handler for the IRQ fired upon specific system wide changes. Patches #5-#8 gradually add line card support in mlxsw_minimal by dynamically registering / unregistering netdevs for ports found on line cards, whenever a line card becomes active / inactive. ==================== Link: https://lore.kernel.org/r/cover.1661093502.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: minimal: Extend to support line card dynamic operations | Vadim Pasternak | 1 | -1/+99
Implement the line card operation callbacks got_active() / got_inactive(). The purpose of these callbacks is to create / remove line card ports after a line card becomes active / inactive. Implement the ports_remove_selected() callback to support the line card un-provisioning flow through 'devlink'. Add line card operation registration and de-registration APIs. Add a module offset for line cards. The offset for the main board is zero. For a line card in slot #n the offset is calculated as (#n - 1) multiplied by the maximum number of modules. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
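A literal reading of the offset rule above, as a hedged sketch (the helper name and the driver's exact formula are assumptions):

    static unsigned int lc_module_offset(unsigned int slot_index,
                                         unsigned int max_modules)
    {
            /* Main board is slot 0 and starts at offset 0; a line card in
             * slot #n starts at (#n - 1) * max_modules, per the rule above. */
            return slot_index ? (slot_index - 1) * max_modules : 0;
    }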
2022-08-24 | mlxsw: minimal: Extend module to port mapping with slot index | Vadim Pasternak | 1 | -52/+163
The interfaces for ports found on a line card are created and removed dynamically after the line card becomes active or inactive. Introduce a per line card array with module to port mapping. For each port, get the 'slot_index' through the PMLP register and set the port mapping for the relevant [slot_index][module] entry. Split module and port allocation into separate routines. Split per line card port creation and removal into separate routines. The motivation is to re-use these routines for line card operations. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: minimal: Move ports allocation to separate routine | Vadim Pasternak | 1 | -8/+34
Perform port allocation in a separate routine. The motivation is to re-use this routine for ports found on line cards. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: minimal: Extend APIs with slot index for modular system support | Vadim Pasternak | 1 | -14/+24
Add a 'slot_index' field to the port structure. Replace the zero slot_index argument with 'slot_index' in the 'ethtool' related APIs. Add a 'slot_index' argument to the port initialization and de-initialization related APIs. The motivation is to prepare the minimal driver for modular system support. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: i2c: Add support for system interrupt handling | Vadim Pasternak | 1 | -1/+86
Extend the I2C bus driver with an interrupt handler to support system specific hotplug events related to line card state changes. Provide a system IRQ line for the interrupt handler. The IRQ line Id can be provided through the platform data if available, or set to the default value. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: core_linecards: Register a system event handler | Vadim Pasternak | 1 | -0/+25
Add a line card system event handler and register it with core. It is triggered by system interrupts raised from chassis programmable logic devices to the CPU. The purpose is to handle line card state changes over the I2C bus. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | mlxsw: core: Add registration APIs for system event handler | Vadim Pasternak | 2 | -0/+76
The purpose of the system event handler is to handle system interrupts. Such interrupts are raised to the CPU from system programmable logic devices upon specific system wide changes, like line card activation and deactivation. The purpose is to create an alternative to the trap mechanism, which delivers these events to the driver over the PCI bus but is not available for a driver working over the I2C bus. The mechanism is system dependent and applicable only to systems equipped with programmable devices with custom logic. Add APIs for event handler registration and un-registration, and an API which should be invoked from the registered callbacks when a system interrupt is raised to the CPU. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
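A generic, hedged sketch of such a registration API (names and layout are illustrative, not the actual mlxsw_core symbols):

    #include <linux/list.h>
    #include <linux/mutex.h>

    struct sys_event_handler {
            struct list_head list;
            void (*cb)(void *priv);
            void *priv;
    };

    static LIST_HEAD(sys_event_handlers);
    static DEFINE_MUTEX(sys_event_lock);

    void sys_event_handler_register(struct sys_event_handler *h)
    {
            mutex_lock(&sys_event_lock);
            list_add_tail(&h->list, &sys_event_handlers);
            mutex_unlock(&sys_event_lock);
    }

    void sys_event_handler_unregister(struct sys_event_handler *h)
    {
            mutex_lock(&sys_event_lock);
            list_del(&h->list);
            mutex_unlock(&sys_event_lock);
    }

    /* Invoked by the bus driver (e.g. from its IRQ thread) when the
     * system interrupt is raised to the CPU. */
    void sys_event_handlers_call(void)
    {
            struct sys_event_handler *h;

            mutex_lock(&sys_event_lock);
            list_for_each_entry(h, &sys_event_handlers, list)
                    h->cb(h->priv);
            mutex_unlock(&sys_event_lock);
    }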
2022-08-24 | mlxsw: core_linecards: Separate line card init and fini flow | Vadim Pasternak | 1 | -21/+50
Currently, each line card is initialized using the following steps: 1. Initializing its various fields (e.g., slot index). 2. Creating the corresponding devlink object. 3. Enabling events (i.e., traps) for changes in line card status. 4. Querying and processing line card status. Unlike traps, the IRQ that notifies the CPU about line card status changes cannot be enabled / disabled on a per line card basis. If a handler is registered before the line cards are initialized, the handler risks accessing uninitialized memory. On the other hand, if the handler is registered after initialization, we risk missing events. For example, in step 4, the driver might see that a line card is in ready state and will tell the device to enable it. When enablement is done, the line card will be activated and the IRQ will be triggered. Since a handler was not registered, the event will be missed. Solve this by splitting the initialization sequence into two steps (1-2 and 3-4). In a subsequent patch, the handler will be registered between both steps. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | docs: netlink: basic introduction to Netlink | Jakub Kicinski | 3 | -0/+656
Provide a bit of a brain dump of netlink related information as documentation. Hopefully this will be useful to people trying to navigate implementing YAML based parsing in languages we won't be able to help with. I started writing this doc while trying to figure out what it'd take to widen the applicability of YAML to good old rtnl, but the doc grew beyond that as it usually happens. In all honesty a lot of this information is new to me as I usually follow the "copy an existing example, drink to forget" process of writing netlink user space, so reviews will be much appreciated. Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Acked-by: Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/20220819200221.422801-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | net: improve and fix netlink kdoc | Jakub Kicinski | 1 | -5/+16
Subsequent patch will render the kdoc from include/uapi/linux/netlink.h into Documentation. We need to fix the warnings. While at it move the comments on struct nlmsghdr to a proper kdoc comment. Link: https://lore.kernel.org/r/20220819200221.422801-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-24 | net: ftmac100: set max_mtu to allow DSA overhead setting | Sergei Antonov | 1 | -0/+1
In case ftmac100 is used with a DSA switch, Linux wants to set the MTU to 1504 to accommodate the DSA overhead. With the default max_mtu this leads to the error message: ftmac100 92000000.mac eth0: error -22 setting MTU to 1504 to include DSA overhead ftmac100 supports a packet length of 1518 (the MAX_PKT_SIZE constant), so it is safe to report it in max_mtu. Signed-off-by: Sergei Antonov <saproj@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20220821160844.474277-1-saproj@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-23 | Merge branch 'dsa-changes-for-multiple-cpu-ports-part-3' | Paolo Abeni | 9 | -96/+197
Vladimir Oltean says: ==================== DSA changes for multiple CPU ports (part 3) Those who have been following part 1: https://patchwork.kernel.org/project/netdevbpf/cover/20220511095020.562461-1-vladimir.oltean@nxp.com/ and part 2: https://patchwork.kernel.org/project/netdevbpf/cover/20220521213743.2735445-1-vladimir.oltean@nxp.com/ will know that I am trying to enable the second internal port pair from the NXP LS1028A Felix switch for DSA-tagged traffic via "ocelot-8021q". This series represents part 3 of that effort. Covered here are some preparations in DSA for handling multiple DSA masters: - when changing the tagging protocol via sysfs - when the masters go down as well as preparation for monitoring the upper devices of a DSA master (to support DSA masters under a LAG). There are also 2 small preparations for the ocelot driver, for the case where multiple tag_8021q CPU ports are used in a LAG. Both those changes have to do with PGID forwarding domains. Compared to v1, the patches were trimmed down to just another preparation stage, and the UAPI changes were pushed further out to part 4. https://patchwork.kernel.org/project/netdevbpf/cover/20220523104256.3556016-1-olteanv@gmail.com/ Compared to v2, I had to export a symbol I forgot to (ocelot_port_teardown_dsa_8021q_cpu), to avoid a build breakage when the felix and seville drivers are built as modules. ==================== Link: https://lore.kernel.org/r/20220819174820.3585002-1-vladimir.oltean@nxp.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: mscc: ocelot: adjust forwarding domain for CPU ports in a LAG | Vladimir Oltean | 1 | -0/+19
Currently when we have 2 CPU ports configured for DSA tag_8021q mode and we put them in a LAG, a PGID dump looks like this: PGID_SRC[0] = ports 4, PGID_SRC[1] = ports 4, PGID_SRC[2] = ports 4, PGID_SRC[3] = ports 4, PGID_SRC[4] = ports 0, 1, 2, 3, 4, 5, PGID_SRC[5] = no ports (ports 0-3 are user ports, ports 4 and 5 are CPU ports) There are 2 problems with the configuration above: - user ports should enable forwarding towards both CPU ports, not just 4, and the aggregation PGIDs should prune one CPU port or the other from the destination port mask, based on a hash computed from packet headers. - CPU ports should not be allowed to forward towards themselves and also not towards other ports in the same LAG as themselves The first problem requires fixing up the PGID_SRC of user ports, when ocelot_port_assigned_dsa_8021q_cpu_mask() is called. We need to say that when a user port is assigned to a tag_8021q CPU port and that port is in a LAG, it should forward towards all ports in that LAG. The second problem requires fixing up the PGID_SRC of port 4, to remove ports 4 and 5 (in a LAG) from the allowed destinations. After this change, the PGID source masks look as follows: PGID_SRC[0] = ports 4, 5, PGID_SRC[1] = ports 4, 5, PGID_SRC[2] = ports 4, 5, PGID_SRC[3] = ports 4, 5, PGID_SRC[4] = ports 0, 1, 2, 3, PGID_SRC[5] = no ports Note that PGID_SRC[5] still looks weird (it should say "0, 1, 2, 3" just like PGID_SRC[4] does), but I've tested forwarding through this CPU port and it doesn't seem like anything is affected (it appears that PGID_SRC[4] is being looked up on forwarding from the CPU, since both ports 4 and 5 have logical port ID 4). The reason why it looks weird is because we've never called ocelot_port_assign_dsa_8021q_cpu() for any user port towards port 5 (all user ports are assigned to port 4 which is in a LAG with 5). Since things aren't broken, I'm willing to leave it like that for now and just document the oddity. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: mscc: ocelot: set up tag_8021q CPU ports independent of user port affinity | Vladimir Oltean | 3 | -32/+40
This is a partial revert of commit c295f9831f1d ("net: mscc: ocelot: switch from {,un}set to {,un}assign for tag_8021q CPU ports"), because as it turns out, this isn't how tag_8021q CPU ports under a LAG are supposed to work. Under that scenario, all user ports are "assigned" to the single tag_8021q CPU port represented by the logical port corresponding to the bonding interface. So one CPU port in a LAG would have is_dsa_8021q_cpu set to true (the one whose physical port ID is equal to the logical port ID), and the other one to false. In turn, this makes 2 undesirable things happen: (1) PGID_CPU contains only the first physical CPU port, rather than both (2) only the first CPU port will be added to the private VLANs used by ocelot for VLAN-unaware bridging To make the driver behave in the same way for both bonded CPU ports, we need to bring back the old concept of setting up a port as a tag_8021q CPU port, and this is what deals with VLAN membership and PGID_CPU updating. But we also need the CPU port "assignment" (the user to CPU port affinity), and this is what updates the PGID_SRC forwarding rules. All DSA CPU ports are statically configured for tag_8021q mode when the tagging protocol is changed to ocelot-8021q. User ports are "assigned" to one CPU port or the other dynamically (this will be handled by a future change). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: dsa: use dsa_tree_for_each_cpu_port in dsa_tree_{setup,teardown}_master | Vladimir Oltean | 2 | -25/+25
More logic will be added to dsa_tree_setup_master() and dsa_tree_teardown_master() in upcoming changes. Reduce the indentation by one level in these functions by introducing and using a dedicated iterator for CPU ports of a tree. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
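A hedged sketch of what a dedicated CPU-port iterator for a DSA tree typically looks like (the macro in the patch may differ in detail):

    #define dsa_tree_for_each_cpu_port(dp, dst)                 \
            list_for_each_entry((dp), &(dst)->ports, list)      \
                    if (dsa_port_is_cpu((dp)))

Callers would then write dsa_tree_for_each_cpu_port(cpu_dp, dst) { ... } instead of open-coding the list walk plus the CPU-port check at each use site.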
2022-08-23 | net: dsa: all DSA masters must be down when changing the tagging protocol | Vladimir Oltean | 3 | -9/+4
The fact that the tagging protocol is set and queried from the /sys/class/net/<dsa-master>/dsa/tagging file is a bit of a quirk from the single CPU port days which isn't aging very well now that DSA can have more than a single CPU port. This is because the tagging protocol is a switch property, yet in the presence of multiple CPU ports it can be queried and set from multiple sysfs files, all of which are handled by the same implementation. The current logic ensures that the net device whose sysfs file we're changing the tagging protocol through must be down. That net device is the DSA master, and this is fine for single DSA master / CPU port setups. But exactly because the tagging protocol is per switch [ tree, in fact ] and not per DSA master, this isn't fine any longer with multiple CPU ports, and we must iterate through the tree and find all DSA masters, and make sure that all of them are down. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: dsa: only bring down user ports assigned to a given DSA master | Vladimir Oltean | 1 | -0/+3
This is an adaptation of commit c0a8a9c27493 ("net: dsa: automatically bring user ports down when master goes down") for multiple DSA masters. When a DSA master goes down, only the user ports under its control should go down too, the others can still send/receive traffic. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: dsa: existing DSA masters cannot join upper interfaces | Vladimir Oltean | 1 | -0/+33
All the traffic to/from a DSA master is supposed to be distributed among its DSA switch upper interfaces, so we should not allow other upper device kinds. An exception to this is DSA_TAG_PROTO_NONE (switches with no DSA tags), and in that case it is actually expected to create e.g. VLAN interfaces on the master. But for those, netdev_uses_dsa(master) returns false, so the restriction doesn't apply. The motivation for this change is to allow LAG interfaces of DSA masters to be DSA masters themselves. We want to restrict the user's degrees of freedom by 1: the LAG should already have all DSA masters as lowers, and while lower ports of the LAG can be removed, none can be added after the fact. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: bridge: move DSA master bridging restriction to DSA | Vladimir Oltean | 2 | -20/+44
When DSA gains support for multiple CPU ports in a LAG, it will become mandatory to monitor the changeupper events for the DSA master. In fact, there are already some restrictions to be imposed in that area, namely that a DSA master cannot be a bridge port except in some special circumstances. Centralize the restrictions at the level of the DSA layer as a preliminary step. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net: dsa: don't stop at NOTIFY_OK when calling ds->ops->port_prechangeupper | Vladimir Oltean | 1 | -1/+1
dsa_slave_prechangeupper_sanity_check() is supposed to enforce some adjacency restrictions, and calls ds->ops->port_prechangeupper if the driver implements it. We convert the error code from the port_prechangeupper() call to a notifier code, and 0 is converted to NOTIFY_OK, but the caller of dsa_slave_prechangeupper_sanity_check() stops at any notifier code different from NOTIFY_DONE. Avoid this by converting back the notifier code to an error code, so that both NOTIFY_OK and NOTIFY_DONE will be seen as 0. This allows more parallel sanity check functions to be added. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
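A hedged sketch of the conversion described above (the helper name is illustrative): the driver error code is turned into a notifier code and then back, so both NOTIFY_OK and NOTIFY_DONE read as 0 to the caller.

    #include <linux/notifier.h>

    static int prechangeupper_check_result(int err)
    {
            int ret = notifier_from_errno(err);     /* 0 -> NOTIFY_OK, -Exx -> stop code */

            /* Both NOTIFY_OK and NOTIFY_DONE map back to 0 here, so the
             * caller no longer bails out on a successful sub-check. */
            return notifier_to_errno(ret);
    }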
2022-08-23 | net: dsa: walk through all changeupper notifier functions | Vladimir Oltean | 1 | -9/+28
Traditionally, DSA has had a single netdev notifier handling function for each device type. For the sake of code cleanliness, we would like to introduce more handling functions which do one thing, but the conditions for entering these functions start to overlap. Example: a handling function which tracks whether any bridges contain both DSA and non-DSA interfaces. Either this is placed before dsa_slave_changeupper(), in which case it will prevent that function from executing, or we place it after dsa_slave_changeupper(), in which case dsa_slave_changeupper() will prevent it from executing. The other alternative is to ignore errors from the new handling function (not ideal). To support this usage, we need to change the pattern. In the new model, we enter all notifier handling sub-functions, and exit with NOTIFY_DONE if there is nothing to do. This allows the sub-functions to be relatively free-form and independent of each other. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | Merge branch 'vsock-updates-for-so_rcvlowat-handling' | Paolo Abeni | 7 | -17/+162
Arseniy Krasnov says: ==================== vsock: updates for SO_RCVLOWAT handling This patchset includes some updates for SO_RCVLOWAT: 1) af_vsock: During my experiments with zerocopy receive, I found that in some cases the poll() implementation violates POSIX: when a socket has a non-default SO_RCVLOWAT (e.g. not 1), poll() will always set the POLLIN and POLLRDNORM bits in 'revents' even if the number of bytes available to read on the socket is smaller than the SO_RCVLOWAT value. In this case, the user sees the POLLIN flag and then tries to read data (for example using a 'read()' call), but the read call blocks, because the SO_RCVLOWAT logic is only handled in the dequeue loop in af_vsock.c. At the same time, POSIX requires that: "POLLIN Data other than high-priority data may be read without blocking. POLLRDNORM Normal data may be read without blocking." See https://www.open-std.org/jtc1/sc22/open/n4217.pdf, page 293. So we have the poll() syscall returning POLLIN, but the read call blocking. Also, in the socket(7) man page I found that: "Since Linux 2.6.28, select(2), poll(2), and epoll(7) indicate a socket as readable only if at least SO_RCVLOWAT bytes are available." I checked the TCP callback for poll() (net/ipv4/tcp.c, tcp_poll()); it uses the SO_RCVLOWAT value to set the POLLIN bit, and I've also tested this case for a TCP socket: it works as POSIX requires. I've added some fixes to af_vsock.c and virtio_transport_common.c; a test is also implemented. 2) virtio/vsock: This adds an optimization to wake ups when new data arrives. Now, SO_RCVLOWAT is considered before waking up sleepers who wait for new data. There is no sense in kicking the waiter when the number of available bytes in the socket's queue is < SO_RCVLOWAT, because if we wake up the reader in this case, it will wait for SO_RCVLOWAT bytes anyway during dequeue, or in the poll() case, the POLLIN/POLLRDNORM bits won't be set, so such an exit from poll() would be "spurious". This logic is also used in TCP sockets. 3) vmci/vsock: Same as 2), but I'm not sure about these changes. It would be very good to get comments from someone who knows this code. 4) Hyper-V: As Dexuan Cui mentioned, for the Hyper-V transport it is difficult to support SO_RCVLOWAT, so he suggested disabling this feature for Hyper-V. ==================== Link: https://lore.kernel.org/r/de41de4c-0345-34d7-7c36-4345258b7ba8@sberdevices.ru Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | vsock_test: POLLIN + SO_RCVLOWAT test | Arseniy Krasnov | 1 | -0/+108
This adds a test to check that when poll() returns the POLLIN and POLLRDNORM bits, the next read call won't block. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | vmci/vsock: check SO_RCVLOWAT before wake up reader | Arseniy Krasnov | 2 | -3/+3
This adds an extra condition to wake up the data reader: do it only when the number of readable bytes is >= SO_RCVLOWAT. Otherwise, there is no sense in kicking the user, because it will wait until SO_RCVLOWAT bytes are dequeued. This check is performed in vsock_data_ready(). Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Vishnu Dasa <vdasa@vmware.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | virtio/vsock: check SO_RCVLOWAT before wake up reader | Arseniy Krasnov | 1 | -1/+1
This adds an extra condition to wake up the data reader: do it only when the number of readable bytes is >= SO_RCVLOWAT. Otherwise, there is no sense in kicking the user, because it will wait until SO_RCVLOWAT bytes are dequeued. This check is performed in vsock_data_ready(). Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | vsock: add API call for data ready | Arseniy Krasnov | 2 | -0/+11
This adds 'vsock_data_ready()', which must be called by the transport to kick sleeping data readers. It checks the SO_RCVLOWAT value before waking the user, thus preventing spurious wake ups. Based on the 'tcp_data_ready()' logic. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
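A hedged sketch of the helper, following the tcp_data_ready() pattern; the exact helper and field names in the vsock patch may differ slightly:

    #include <net/af_vsock.h>

    void vsock_data_ready(struct sock *sk)
    {
            struct vsock_sock *vsk = vsock_sk(sk);

            /* Wake the reader only once at least SO_RCVLOWAT bytes are
             * queued (or the socket is done), avoiding spurious wake ups. */
            if (vsock_stream_has_data(vsk) >= sk->sk_rcvlowat ||
                sock_flag(sk, SOCK_DONE))
                    sk->sk_data_ready(sk);
    }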
2022-08-23 | vsock: pass sock_rcvlowat to notify_poll_in as target | Arseniy Krasnov | 1 | -1/+2
By passing 1 as the target to notify_poll_in(), we don't honor what the user has set via SO_RCVLOWAT: POLLIN and POLLRDNORM get set even if we don't have the amount of bytes expected by the user. Let's use sock_rcvlowat() to get the right target to pass to notify_poll_in(). Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | vmci/vsock: use 'target' in notify_poll_in callback | Arseniy Krasnov | 2 | -8/+8
This callback controls the setting of the POLLIN and POLLRDNORM output bits of the poll() syscall, but in some cases it is incorrect to set them when the socket has at least 1 byte of available data. Use 'target', which already exists. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Vishnu Dasa <vdasa@vmware.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | virtio/vsock: use 'target' in notify_poll_in callback | Arseniy Krasnov | 1 | -4/+1
This callback controls the setting of the POLLIN and POLLRDNORM output bits of the poll() syscall, but in some cases it is incorrect to set them when the socket has at least 1 byte of available data. Use 'target', which already exists. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | hv_sock: disable SO_RCVLOWAT support | Arseniy Krasnov | 1 | -0/+7
For Hyper-V it is quite difficult to support this socket option, due to transport internals, so disable it. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Reviewed-by: Dexuan Cui <decui@microsoft.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | vsock: SO_RCVLOWAT transport set callback | Arseniy Krasnov | 2 | -0/+21
This adds a transport specific callback for SO_RCVLOWAT, because in some transports it may be difficult to know the number of bytes currently available to read. Thus, when SO_RCVLOWAT is set, the transport may reject it. Signed-off-by: Arseniy Krasnov <AVKrasnov@sberdevices.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
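A hedged sketch of how the new callback can plug into the SO_RCVLOWAT setter; field and helper names only approximate the vsock code, and the transport is free to veto the value:

    #include <net/af_vsock.h>

    static int vsock_set_rcvlowat(struct sock *sk, int val)
    {
            struct vsock_sock *vsk = vsock_sk(sk);
            const struct vsock_transport *transport = vsk->transport;

            /* A transport that cannot report readable bytes may reject it. */
            if (transport && transport->set_rcvlowat)
                    return transport->set_rcvlowat(vsk, val);

            WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
            return 0;
    }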
2022-08-23 | net: sched: remove duplicate check of user rights in qdisc | Zhengchao Shao | 2 | -21/+0
In the rtnetlink_rcv_msg() function, permissions are checked for all user operations except the GET operation, which is the same check as the one in qdisc. Therefore, remove the duplicate user rights check in qdisc. Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Message-Id: <20220819041854.83372-1-shaozhengchao@huawei.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-08-23 | net/mlx5: TC, Add support for SF tunnel offload | Roi Dayan | 2 | -3/+10
A VF tunnel flow already exists, and the SF tunnel is the same flow. Support offloading of tunneling over an SF device by allowing an encap route to be attached over an SF and setting the use of an indirect flow table on the SF. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-08-23 | net/mlx5: E-Switch, Move send to vport meta rule creation | Roi Dayan | 6 | -94/+90
Move the creation of the rules from the offloads fdb table init to per rep vport init. This way the driver will create the send-to-vport meta rule on any representor, e.g. SF representors. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>