path: root/drivers/net
Age | Commit message | Author | Files | Lines
2021-08-06 | netdevsim: Protect both reload_down and reload_up paths | Leon Romanovsky | 3 files, -1/+22
Don't progress with adding and deleting ports as long as devlink reload is running. Fixes: 23809a726c0d ("netdevsim: Forbid devlink reload when adding or deleting ports") Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06 | net: ethernet: ti: davinci_cpdma: drop frame padding | Grygorii Strashko | 4 files, -8/+0
Since all users of davinci_cpdma have switched to skb_put_padto(), the frame padding can be removed from it. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06 | net: ethernet: ti: davinci_emac: switch to use skb_put_padto() | Grygorii Strashko | 1 file, -1/+1
Use skb_put_padto() instead of skb_padto() so that skb->len also gets updated, as preparation for removing frame padding from cpdma. It also makes the xmit path clearer and more linear. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
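For context, the difference between the two helpers is that skb_padto() pads the frame data but leaves skb->len at its old value, while skb_put_padto() also advances skb->len to the padded size. A minimal, illustrative sketch of the xmit-path change being described (not the literal driver diff):

    /* Before: frame is padded, but skb->len still reports the short
     * length, so cpdma had to add its own padding later.
     */
    if (skb_padto(skb, ETH_ZLEN))
            return NETDEV_TX_OK;    /* skb is freed on failure */

    /* After: pad and update skb->len in one step; lower layers see the
     * final frame length and need no padding logic of their own.
     */
    if (skb_put_padto(skb, ETH_ZLEN))
            return NETDEV_TX_OK;    /* skb is freed on failure */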
2021-08-06 | net: ethernet: ti: cpsw: switch to use skb_put_padto() | Grygorii Strashko | 1 file, -1/+1
Use skb_put_padto() instead of skb_padto() so that skb->len also gets updated, as preparation for removing frame padding from cpdma. It also makes the xmit path clearer and more linear. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06 | intersil: remove obsolete prism54 wireless driver | Lukas Bulwahn | 18 files, -7470/+0
Commit 1d89cae1b47d ("MAINTAINERS: mark prism54 obsolete") marked the prism54 driver as obsolete in July 2010. Now, after being exposed for ten years to refactoring, general tree-wide changes and various janitorial clean-ups, it is really time to delete the driver for good. This was discovered as part of a checkpatch evaluation, investigating all reports of checkpatch's WARNING:OBSOLETE check. p54 replaces prism54, so users should be unaffected. There was one chipset reported long ago that p54 did not work with, but the reporter never followed up on that. Additionally, distributions have been blacklisting prism54 for years now. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Link: https://lore.kernel.org/r/20210713054025.32006-1-lukas.bulwahn@gmail.com
2021-08-06 | mt76: fix enum type mismatch | Arnd Bergmann | 4 files, -4/+6
There is no 'NONE' version of 'enum mcu_cipher_type', and returning 'MT_CIPHER_NONE' causes a warning:

drivers/net/wireless/mediatek/mt76/mt7921/mcu.c: In function 'mt7921_mcu_get_cipher':
drivers/net/wireless/mediatek/mt76/mt7921/mcu.c:114:24: error: implicit conversion from 'enum mt76_cipher_type' to 'enum mcu_cipher_type' [-Werror=enum-conversion]
  114 |                 return MT_CIPHER_NONE;
      |                        ^~~~~~~~~~~~~~

Add the missing MCU_CIPHER_NONE definition that fits in here with the same value. Fixes: c368362c36d3 ("mt76: fix iv and CCMP header insertion") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Link: https://lore.kernel.org/r/20210721150745.1914829-1-arnd@kernel.org
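A sketch of the shape of such a fix, with the enum contents abbreviated (the real entries and values live in the mt76 MCU headers; this is illustrative only):

    /* Give the MCU enum its own NONE entry with the same numeric value,
     * so the helper no longer returns a value from the unrelated
     * mt76_cipher_type enum.
     */
    enum mcu_cipher_type {
            MCU_CIPHER_NONE = 0,
            MCU_CIPHER_WEP40,
            /* ... remaining ciphers ... */
    };

    static enum mcu_cipher_type mt7921_mcu_get_cipher(int cipher)
    {
            switch (cipher) {
            /* ... map known ciphers to their MCU_CIPHER_* values ... */
            default:
                    return MCU_CIPHER_NONE; /* was MT_CIPHER_NONE (wrong enum) */
            }
    }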
2021-08-06 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 24 files, -143/+251
Build failure in drivers/net/wwan/mhi_wwan_mbim.c: add missing parameter (0, assuming we don't want buffer pre-alloc). Conflict in drivers/net/dsa/sja1105/sja1105_main.c between: 589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too") 0fac6aa098ed ("net: dsa: sja1105: delete the best_effort_vlan_filtering mode") Follow the instructions from the commit message of the former commit - removed the if conditions. When looking at commit 589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too") note that the mask_iotag fields get removed by the following patch. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-05 | net/mlx5: Lag, Create shared FDB when in switchdev mode | Mark Bloch | 3 files, -18/+105
If both eswitches are in switchdev mode and the uplink representors are enslaved to the same bond device, create a shared FDB configuration. When moving to shared FDB mode, not only does the hardware need to be configured, but the RDMA driver also needs to reconfigure itself. When such a change is done, unload the RDMA devices, configure the hardware and load the RDMA representors. When destroying the lag (which can happen if a PCI function is unbound, the driver is unloaded, or a netdev is simply removed from the bond), make sure to restore the system to the previous state, if possible. For example, if a PCI function is unbound there is no need to load the representors, as the device is going away. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: E-Switch, add logic to enable shared FDB | Mark Bloch | 6 files, -2/+394
Shared FDB allows directing traffic from all the vports in the HCA to a single eswitch. In order to do that, three things are needed:

1) Point the ingress ACL of the slave uplink to that of the master. With this, wire traffic from both uplinks will reach the same eswitch with the same metadata, where a single steering rule can catch traffic from both ports.

2) Set the FDB root flow table of the slave's eswitch to that of the master. As this flow table can change dynamically, make sure to sync it on any set root flow table FDB command. This will make sure traffic from SFs, VFs, ECPFs and PFs reaches the master eswitch.

3) Split wire traffic at the eswitch manager egress ACL so that it's directed to the native eswitch manager. We only treat wire traffic from both ports the same at the eswitch level. If such traffic wasn't handled in the eswitch, it needs to reach the right representor to be processed by software. For example, LACP packets should *always* reach the right uplink representor for correct operation.

Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: Lag, move lag destruction to a workqueue | Mark Bloch | 1 file, -8/+9
If a netdev is removed from the lag, the lag should be destroyed. With downstream patches this might trigger a reconfiguration of representors on a different eswitch, and as such we don't have the proper locking to do so from this path. Move the destruction to be done by the workqueue. As the destruction won't affect the netdev side, it is okay to do so. The RDMA side will be reconfigured, and it is already coded to handle such reconfiguration. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: Lag, properly lock eswitch if needed | Mark Bloch | 7 files, -18/+107
Currently, when doing hardware lag, we check the eswitch mode, but as this isn't done under a lock the check isn't valid. As the code needs to sync between two different devices, extra care is needed.

- When going to change eswitch mode, if hardware lag is active, destroy it.
- While changing eswitch modes, block any hardware bond creation.
- Delay handling bonding events until there are no mode changes in progress.
- When attaching a new mdev to lag, block until there is no mode change in progress. In order for the mode change to finish, the interface lock will have to be taken. Release the lock and sleep for 100ms to allow forward progress. As this is a very rare condition (it can happen if the user unbinds and binds a PCI function while also changing the eswitch mode of the other PCI function) it has no real world impact.

As taking multiple eswitch mode locks is now required, lockdep will complain about a possible deadlock. Register a key per eswitch to make lockdep happy. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: Add send to vport rules on paired device | Mark Bloch | 3 files, -3/+101
When two mlx5 devices are paired in switchdev mode, always offload the send-to-vport rule to the peer E-Switch. This allows abstracting the logic of when this is really necessary (single FDB) and combining the logic of both cases into one. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: E-Switch, Add event callback for representors | Mark Bloch | 1 file, -3/+47
This callback will allow notifying representors about relevant events when in OFFLOADS mode. In downstream patches, this will be used to notify about PAIR/UNPAIR devcom events. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5e: Use shared mappings for restoring from metadata | Roi Dayan | 4 files, -10/+39
FTEs are added with mapped metadata which is saved per eswitch. When uplink reps are bonded and we are in a single FDB mode, we could fail to find metadata which was stored on one eswitch mapping but not the other or with a different id. To resolve this issue use shared mapping between eswitch ports. We do not have any conflict using a single mapping, for a type, between the ports. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5e: Add an option to create a shared mapping | Roi Dayan | 2 files, -0/+50
The shared mapping is identified by an id and type. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: E-Switch, set flow source for send to uplink rule | Ariel Levkovich | 1 file, -0/+3
Set the flow source param to local vport for the uplink rep send-to-vport rule. This will comply with the recent changes in SW steering that use the flow source as an indication for the rule type - rx or tx. Since the uplink send-to-vport rule is forwarding traffic to the wire it has to indicate that it is an sx rule and can't use the any port value in the flow source. Signed-off-by: Ariel Levkovich <lariel@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | {net, RDMA}/mlx5: Extend send to vport rules | Mark Bloch | 2 files, -3/+4
In shared FDB there is only one eswitch which is active and it receives traffic from all representors and all vports in the HCA. While the Ethernet representor will always reside on its native PF, the IB representor will not. Extend send to vport rule creation to support such flows. We need to account for the source vport that sends the traffic (on which the representor resides) and the target eswitch that the traffic should reach. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: Lag, add initial logic for shared FDB | Mark Bloch | 2 files, -0/+50
As shared FDB requires changes in two subsystems, first expose the needed core functions so the RDMA side can be changed.

mlx5_lag_is_master(): Returns true if a given mlx5 device is the lag master.
mlx5_lag_is_shared_fdb(): Returns true if the lag mode is shared FDB.
mlx5_lag_get_peer_mdev(): Returns the peer mdev in the lag.

The mentioned functions will be used by downstream patches in order to add support for shared FDB on the RDMA side. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net/mlx5: Return mdev from eswitch | Mark Bloch | 1 file, -0/+12
Export a function so users can retrieve the Mellanox device that manages the eswitch from the eswitch device. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05 | net: vxge: fix use-after-free in vxge_device_unregister | Pavel Skripkin | 1 file, -3/+3
Smatch says:

drivers/net/ethernet/neterion/vxge/vxge-main.c:3518 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3518 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3520 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3520 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);

Since the vdev pointer is netdev private data, accessing it after the free_netdev() call can cause a use-after-free bug. Fix it by moving the free_netdev() call to the end of the function. Fixes: 6cca200362b4 ("vxge: cleanup probe error paths") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-05 | net: fec: fix use-after-free in fec_drv_remove | Pavel Skripkin | 1 file, -1/+1
Smatch says:

drivers/net/ethernet/freescale/fec_main.c:3994 fec_drv_remove() error: Using fep after free_{netdev,candev}(ndev);
drivers/net/ethernet/freescale/fec_main.c:3995 fec_drv_remove() error: Using fep after free_{netdev,candev}(ndev);

Since the fep pointer is netdev private data, accessing it after the free_netdev() call can cause a use-after-free bug. Fix it by moving the free_netdev() call to the end of the function. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: a31eda65ba21 ("net: fec: fix clock count mis-match") Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Reviewed-by: Joakim Zhang <qiangqing.zhang@nxp.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
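Both this fix and the vxge fix above follow the same rule: the pointer returned by netdev_priv() lives inside the net_device allocation, so any use of it must happen before free_netdev(). A minimal sketch of the reordering with hypothetical driver names (example_priv and priv->clk are placeholders, not taken from either driver):

    static int example_drv_remove(struct platform_device *pdev)
    {
            struct net_device *ndev = platform_get_drvdata(pdev);
            struct example_priv *priv = netdev_priv(ndev);

            unregister_netdev(ndev);

            /* Everything that touches priv must come before free_netdev(),
             * because priv is embedded in the ndev allocation.
             */
            clk_disable_unprepare(priv->clk);

            free_netdev(ndev);      /* moved to the very end */
            return 0;
    }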
2021-08-05 | net: pegasus: fix uninit-value in get_interrupt_interval | Pavel Skripkin | 1 file, -3/+11
Syzbot reported an uninit-value in pegasus_probe(). The problem was in missing error handling. get_interrupt_interval() internally calls read_eprom_word(), which can fail in some cases, for example when a USB control message is not received. These cases should be handled to prevent an uninit-value bug, since read_eprom_word() will not initialize the passed stack variable in case of internal failure.

Fail log:

BUG: KMSAN: uninit-value in get_interrupt_interval drivers/net/usb/pegasus.c:746 [inline]
BUG: KMSAN: uninit-value in pegasus_probe+0x10e7/0x4080 drivers/net/usb/pegasus.c:1152
CPU: 1 PID: 825 Comm: kworker/1:1 Not tainted 5.12.0-rc6-syzkaller #0
...
Workqueue: usb_hub_wq hub_event
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x24c/0x2e0 lib/dump_stack.c:120
 kmsan_report+0xfb/0x1e0 mm/kmsan/kmsan_report.c:118
 __msan_warning+0x5c/0xa0 mm/kmsan/kmsan_instr.c:197
 get_interrupt_interval drivers/net/usb/pegasus.c:746 [inline]
 pegasus_probe+0x10e7/0x4080 drivers/net/usb/pegasus.c:1152
....
Local variable ----data.i@pegasus_probe created at:
 get_interrupt_interval drivers/net/usb/pegasus.c:1151 [inline]
 pegasus_probe+0xe57/0x4080 drivers/net/usb/pegasus.c:1152
 get_interrupt_interval drivers/net/usb/pegasus.c:1151 [inline]
 pegasus_probe+0xe57/0x4080 drivers/net/usb/pegasus.c:1152

Reported-and-tested-by: syzbot+02c9f70f3afae308464a@syzkaller.appspotmail.com Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Link: https://lore.kernel.org/r/20210804143005.439-1-paskripkin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
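The fix amounts to checking the return value of the EEPROM read before using the on-stack variable it was supposed to fill. A hedged sketch of that shape (read_eprom_word() is the driver's internal helper; the surrounding field names are assumptions, not the exact patch):

    u16 data;
    int ret;

    /* Sketch only: bail out instead of consuming 'data', which stays
     * uninitialized when the USB control message fails.
     */
    ret = read_eprom_word(pegasus, 4, &data);
    if (ret < 0)
            return ret;

    pegasus->intr_interval = data >> 8;     /* field name assumed */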
2021-08-05 | net: ethernet: ti: am65-cpsw: fix crash in am65_cpsw_port_offload_fwd_mark_update() | Grygorii Strashko | 1 file, -1/+5
am65_cpsw_port_offload_fwd_mark_update() causes a NULL pointer exception when there is at least one disabled port and any other port is added to a bridge for the first time.

Unable to handle kernel NULL pointer dereference at virtual address 0000000000000858
 pc : am65_cpsw_port_offload_fwd_mark_update+0x54/0x68
 lr : am65_cpsw_netdevice_event+0x8c/0xf0
Call trace:
 am65_cpsw_port_offload_fwd_mark_update+0x54/0x68
 notifier_call_chain+0x54/0x98
 raw_notifier_call_chain+0x14/0x20
 call_netdevice_notifiers_info+0x34/0x78
 __netdev_upper_dev_link+0x1c8/0x290
 netdev_master_upper_dev_link+0x1c/0x28
 br_add_if+0x3f0/0x6d0 [bridge]

Fix it by adding a proper check for port->ndev != NULL. Fixes: 2934db9bcb30 ("net: ti: am65-cpsw-nuss: Add netdevice notifiers") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
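The fix boils down to skipping ports that never got a net_device attached. A hedged sketch of the loop (structure and helper names follow the usual am65-cpsw layout, but are not copied verbatim from the patch):

    for (i = 0; i < common->port_num; i++) {
            struct am65_cpsw_port *port = &common->ports[i];
            struct am65_cpsw_ndev_priv *priv;

            /* Disabled ports have no ndev; skip them instead of
             * dereferencing a NULL pointer below.
             */
            if (!port->ndev)
                    continue;

            priv = am65_ndev_to_priv(port->ndev);
            priv->offload_fwd_mark = set_val;
    }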
2021-08-05 | bnx2x: fix an error code in bnx2x_nic_load() | Dan Carpenter | 1 file, -1/+2
Set the error code if bnx2x_alloc_fw_stats_mem() fails. The current code returns success. Fixes: ad5afc89365e ("bnx2x: Separate VF and PF logic") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
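The shape of the fix is the usual "set the error code before taking the error path" pattern; sketched here with a plain goto label rather than the driver's actual error-handling macro:

    /* Before (sketch): rc is never set, so the function reports success
     * even though the allocation failed.
     */
    if (bnx2x_alloc_fw_stats_mem(bp))
            goto load_error0;

    /* After (sketch): record the failure before jumping out. */
    if (bnx2x_alloc_fw_stats_mem(bp)) {
            rc = -ENOMEM;
            goto load_error0;
    }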
2021-08-05 | netdevsim: Forbid devlink reload when adding or deleting ports | Leon Romanovsky | 2 files, -12/+11
In order to remove complexity in the devlink core related to devlink_reload_enable/disable, let's rewrite the new_port/del_port logic to rely on a lock internal to netdevsim. We should protect only the reload_down flow, because it destroys nsim_dev, which is needed by nsim_dev_port_add/nsim_dev_port_del to hold port_list_lock. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
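Conceptually the rework replaces the devlink_reload_enable/disable dance with one driver-internal lock shared by the port add/del and reload_down paths. A rough sketch of the idea only; the lock and helper names below are placeholders, not the actual netdevsim symbols:

    static int nsim_drv_port_add_sketch(struct nsim_dev *nsim_dev,
                                        unsigned int port_index)
    {
            int err;

            /* The same mutex is taken by the reload_down path, so ports
             * cannot be added or deleted while a devlink reload is
             * tearing the device down.
             */
            mutex_lock(&nsim_dev->reload_lock);     /* placeholder name */
            err = nsim_dev_port_add_sketch(nsim_dev, port_index);
            mutex_unlock(&nsim_dev->reload_lock);
            return err;
    }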
2021-08-05 | ath11k: set register access length for MHI driver | Bhaumik Bhatt | 1 file, -0/+1
MHI driver requires register space length to add range checks and prevent memory region accesses outside of that for MMIO space. Set it before registering the MHI controller. Link: https://lore.kernel.org/r/1620330705-40192-5-git-send-email-bbhatt@codeaurora.org Reviewed-by: Hemant Kumar <hemantk@codeaurora.org> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Acked-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Link: https://lore.kernel.org/r/20210802051255.5771-7-manivannan.sadhasivam@linaro.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
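In practice this is a single assignment on the MHI controller before it is registered; a hedged sketch (the ath11k-side member names and the config variable are assumptions):

    /* Tell the MHI core how large the register window is so it can
     * range-check MMIO accesses.
     */
    mhi_ctrl->regs = ab->mem;           /* MMIO base, assumed field name */
    mhi_ctrl->reg_len = ab->mem_len;    /* new: MMIO region length */

    ret = mhi_register_controller(mhi_ctrl, mhi_config);
    if (ret)
            return ret;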
2021-08-05 | bus: mhi: Add inbound buffers allocation flag | Loic Poulain | 2 files, -2/+2
Currently, the MHI controller driver defines which channels should have their inbound buffers allocated and queued. But ideally, this is something that should be decided by the MHI device driver instead, which actually deals with those buffers. Add a flag parameter to mhi_prepare_for_transfer allowing the caller to specify whether buffers have to be allocated and queued by the MHI stack. Keep the auto_queue flag for now, but it should be removed at some point. Link: https://lore.kernel.org/r/1624566520-20406-1-git-send-email-loic.poulain@linaro.org Tested-by: Bhaumik Bhatt <bbhatt@codeaurora.org> Reviewed-by: Bhaumik Bhatt <bbhatt@codeaurora.org> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Link: https://lore.kernel.org/r/20210802051255.5771-2-manivannan.sadhasivam@linaro.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
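This is the API change behind the build fix noted in the merge commit above: callers gain a flags argument, with 0 meaning the MHI core should not pre-allocate and queue inbound buffers for the channel. A hedged sketch of a caller (the exact flag constant names are not shown in this log):

    /* 0: the client driver manages inbound buffers itself; a non-zero
     * flag would ask the MHI stack to allocate and queue them.
     */
    ret = mhi_prepare_for_transfer(mhi_dev, 0);
    if (ret)
            return ret;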
2021-08-05Revert "wwan: mhi: Fix build."David S. Miller1-1/+1
This reverts commit ab996c420508761f3313c15c5f72d06ca7dc1a5b. Only aplicable when net is merged into net-next. Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | wwan: mhi: Fix build. | David S. Miller | 1 file, -1/+1
Reported-by: Mark Brown <broonie@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: wwan: iosm: fix recursive lock acquire in unregister | M Chetan Kumar | 1 file, -1/+1
Calling unregister_netdevice() inside the wwan del link path tries to acquire a lock that is already held in ndo_stop_cb(). Instead, queue the net device to be unregistered later. Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com> Reviewed-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: wwan: iosm: correct data protocol mask bit | M Chetan Kumar | 1 file, -2/+2
Correct the UL/DL data protocol mask bit so it is known which protocol capability the device implements. Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com> Reviewed-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: wwan: iosm: endianness type correction | M Chetan Kumar | 2 files, -3/+3
Endianness type correction for nr_of_bytes. This field is exchanged as part of host-device protocol communication. Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com> Reviewed-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: wwan: iosm: fix lkp buildbot warning | M Chetan Kumar | 1 file, -2/+2
Correct td buffer type casting & format specifier to fix lkp buildbot warning. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: move IPA flags field | Alex Elder | 2 files, -14/+14
The ipa->flags field is only ever used in "ipa_clock.c", related to suspend/resume activity. Move the definition of the ipa_flag enumerated type to "ipa_clock.c", and move the flags field from the ipa structure to the ipa_clock structure. Rename the type and its values to include "power" or "POWER" in the name. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: move ipa_suspend_handler() | Alex Elder | 3 files, -31/+53
Move ipa_suspend_handler() into "ipa_clock.c" from "ipa_main.c", to group it with the rest of the suspend/resume code. This IPA interrupt is triggered if an IPA RX endpoint is suspended but has a packet to be delivered. Introduce ipa_power_setup() and ipa_power_teardown() to add and remove the handler for the IPA SUSPEND interrupt at the same place as before, while allowing the handler to remain private. The "power" naming convention will be adopted elsewhere in this file as well (soon). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: move IPA power operations to ipa_clock.c | Alex Elder | 3 files, -59/+65
Move ipa_suspend() and ipa_resume(), as well as the definition of the ipa_pm_ops structure into "ipa_clock.c". Make ipa_pm_ops public and declare it as extern in "ipa_clock.h". This is part of centralizing IPA power management functionality into "ipa_clock.c" (the file will eventually get a name change). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: improve IPA clock error messages | Alex Elder | 1 file, -17/+22
Rearrange messages reported when errors occur in the IPA clock code, so that the specific interconnect is identified when an error occurs enabling or disabling it, or the core clock is indicated when an error occurs enabling it. Have ipa_interconnect_disable() return zero or the negative error value returned by the first interconnect that produced an error when disabled. For now, the callers ignore the returned value. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: reorder netdev pointer assignments | Alex Elder | 1 file, -7/+9
Assign the ipa->modem_netdev and endpoint->netdev pointers *before* registering the network device. As soon as the device is registered it can be opened, and by that time we'll want those pointers valid. Similarly, don't make those pointers NULL until *after* the modem network device is unregistered in ipa_modem_stop(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
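The ordering rule is general: anything the ndo_open path may touch has to be valid before register_netdev() returns, because userspace can open the interface immediately after that. A minimal sketch, with error-path details simplified:

    /* Publish the pointers first; once register_netdev() succeeds the
     * device may be opened at any moment.
     */
    ipa->modem_netdev = netdev;
    endpoint->netdev = netdev;

    ret = register_netdev(netdev);
    if (ret) {
            ipa->modem_netdev = NULL;
            endpoint->netdev = NULL;
            free_netdev(netdev);
    }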
2021-08-05 | net: ipa: don't suspend/resume modem if not up | Alex Elder | 1 file, -2/+10
The modem network device is set up by ipa_modem_start(). But its TX queue is not actually started and endpoints enabled until it is opened. So avoid stopping the modem network device TX queue and disabling endpoints on suspend or stop unless the netdev is marked UP. And skip attempting to resume unless it is UP. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
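A hedged sketch of the suspend-side check (the real driver may test the flag in a slightly different place; the function name is illustrative):

    static int ipa_modem_suspend_sketch(struct net_device *netdev)
    {
            /* Nothing to quiesce if the netdev was never opened: the TX
             * queue was not started and the endpoints are not enabled.
             */
            if (!(netdev->flags & IFF_UP))
                    return 0;

            netif_stop_queue(netdev);
            /* ... suspend the modem RX/TX endpoints here ... */
            return 0;
    }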
2021-08-05 | net: dsa: sja1105: enable address learning on cascade ports | Vladimir Oltean | 1 file, -2/+7
Right now, address learning is disabled on DSA ports, which means that a packet received over a DSA port from a cross-chip switch will be flooded to unrelated ports. It is desirable to eliminate that, but for that we need a breakdown of the possibilities for the sja1105 driver. A DSA port can be:

- a downstream-facing cascade port. This is simple because it will always receive packets from a downstream switch, and there should be no other route to reach that downstream switch in the first place, which means it should be safe to learn that MAC address towards that switch.

- an upstream-facing cascade port. This receives packets either:
  * autonomously forwarded by an upstream switch (and therefore these packets belong to the data plane of a bridge, so address learning should be ok), or
  * injected from the CPU. This deserves further discussion, as normally, an upstream-facing cascade port is no different than the CPU port itself. But with "H" topologies (a DSA link towards a switch that has its own CPU port), these are more "laterally-facing" cascade ports than they are "upstream-facing". Here, there is a risk that the port might learn the host addresses on the wrong port (on the DSA port instead of on its own CPU port), but this is solved by DSA's RX filtering infrastructure, which installs the host addresses as static FDB entries on the CPU port of all switches in a "H" tree. So even if there will be an attempt from the switch to migrate the FDB entry from the CPU port to the laterally-facing cascade port, it will fail to do that, because the FDB entry that already exists is static and cannot migrate. So address learning should be safe for this configuration too.

Ok, so what about other MAC addresses coming from the host, not necessarily the bridge local FDB entries? What about MAC addresses dynamically learned on foreign interfaces, isn't there a risk that cascade ports will learn these entries dynamically when they are supposed to be delivered towards the CPU port? Well, that is correct, and this is why we also need to enable the assisted learning feature, to snoop for these addresses and write them to hardware as static FDB entries towards the CPU, to make the switch's learning process on the cascade ports ineffective for them. With assisted learning enabled, the hardware learning on the CPU port must be disabled. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
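On the driver side, the assisted-learning part described above corresponds to a single DSA switch flag; roughly (a sketch, not the literal sja1105 diff):

    /* The DSA core snoops addresses learned on foreign interfaces and
     * installs them as static FDB entries towards the CPU port, so the
     * switch's own learning on cascade ports cannot claim them.
     */
    ds->assisted_learning_on_cpu_port = true;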
2021-08-05 | net: dsa: sja1105: suppress TX packets from looping back in "H" topologies | Vladimir Oltean | 1 file, -0/+29
H topologies like this one have a problem:

   eth0                                                      eth1
    |                                                          |
 CPU port                                                  CPU port
    |                         DSA link                         |
 sw0p0 sw0p1 sw0p2 sw0p3 sw0p4 -------- sw1p4 sw1p3 sw1p2 sw1p1 sw1p0
         |     |     |                           |     |     |
        user  user  user                        user  user  user
        port  port  port                        port  port  port

Basically any packet sent by the eth0 DSA master can be flooded on the interconnecting DSA link sw0p4 <-> sw1p4 and it will be received by the eth1 DSA master too. Basically we are talking to ourselves.

In VLAN-unaware mode, these packets are encoded using a tag_8021q TX VLAN, which dsa_8021q_rcv() rightfully cannot decode and complains. Whereas in VLAN-aware mode, the packets are encoded with a bridge VLAN which _can_ be decoded by the tagger running on eth1, so it will attempt to reinject that packet into the network stack (the bridge, if there is any port under eth1 that is under a bridge). In the case where the ports under eth1 are under the same cross-chip bridge as the ports under eth0, the TX packets will even be learned as RX packets. The only thing that will prevent loops with the software bridging path, and therefore disaster, is that the source port and the destination port are in the same hardware domain, and the bridge will receive packets from the driver with skb->offload_fwd_mark = true and will not forward between the two.

The proper solution to this problem is to detect H topologies and enforce that all packets are received through the local switch and we do not attempt to receive packets on our CPU port from switches that have their own. This is a viable solution which works thanks to the fact that MAC addresses which should be filtered towards the host are installed by DSA as static MAC addresses towards the CPU port of each switch.

TX from a CPU port towards the DSA port continues to be allowed, this is because sja1105 supports bridge TX forwarding offload, and the skb->dev used initially for xmit does not have any direct correlation with where the station that will respond to that packet is connected. It may very well happen that when we send a ping through a br0 interface that spans all switch ports, the xmit packet will exit the system through a DSA switch interface under eth1 (say sw1p2), but the destination station is connected to a switch port under eth0, like sw0p0. So the switch under eth1 needs to communicate on TX with the switch under eth0. The response, however, will not follow the same path, but instead, this patch enforces that the response is sent by the first switch directly to its DSA master which is eth0. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: dsa: sja1105: increase MTU to account for VLAN header on DSA ports | Vladimir Oltean | 1 file, -2/+2
Since all packets are transmitted as VLAN-tagged over a DSA link (this VLAN tag represents the tag_8021q header), we need to increase the MTU of these interfaces to account for the possibility that we are already transporting a user-visible VLAN header. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
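The adjustment is essentially one VLAN header's worth of bytes added to the length programmed for DSA links; as a sketch (where exactly the driver applies it is not shown in this log):

    /* DSA links always carry the tag_8021q VLAN, so leave room for one
     * extra, possibly user-visible, VLAN header on top of the frame.
     */
    if (dsa_is_dsa_port(ds, port))
            mtu += VLAN_HLEN;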
2021-08-05 | net: dsa: sja1105: manage VLANs on cascade ports | Vladimir Oltean | 1 file, -3/+3
Since commit ed040abca4c1 ("net: dsa: sja1105: use 4095 as the private VLAN for untagged traffic"), this driver uses a reserved value as pvid for the host port (DSA CPU port). Control packets which are sent as untagged get classified to this VLAN, and all ports are members of it (this is to be expected for control packets). Manage all cascade ports in the same way and allow control packets to egress everywhere. Also, all VLANs need to be sent as egress-tagged on all cascade ports. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: dsa: sja1105: manage the forwarding domain towards DSA ports | Vladimir Oltean | 1 file, -24/+60
Manage DSA links towards other switches, be they host ports or cascade ports, the same as the CPU port, i.e. allow forwarding and flooding unconditionally from all user ports. We send packets as always VLAN-tagged on a DSA port, and we rely on the cross-chip notifiers from tag_8021q to install the RX VLAN of a switch port only on the proper remote ports of another switch (the ports that are in the same bridging domain). So if there is no cross-chip bridging in the system, the flooded packets will be sent on the DSA ports too, but they will be dropped by the remote switches due to either (a) a lack of the RX VLAN in the VLAN table of the ingress DSA port, or (b) a lack of valid destinations for those packets, due to a lack of the RX VLAN on the user ports of the switch.

Note that switches which only transport packets in a cross-chip bridge, but have no user ports of their own as part of that bridge, such as switch 1 in this case:

               DSA link                   DSA link
 sw0p0 sw0p1 sw0p2 -------- sw1p0 sw1p2 sw1p3 -------- sw2p0 sw2p2 sw2p3

 ip link set sw0p0 master br0
 ip link set sw2p3 master br0

will still work, because the tag_8021q cross-chip notifiers keep the RX VLANs installed on all DSA ports. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: dsa: sja1105: configure the cascade ports based on topology | Vladimir Oltean | 1 file, -27/+70
The sja1105 switch family has a feature called "cascade ports" which can be used in topologies where multiple SJA1105/SJA1110 switches are daisy chained. Upstream switches set this bit for the DSA link towards the downstream switches. This is used when the upstream switch receives a control packet (PTP, STP) from a downstream switch, because if the source port for a control packet is marked as a cascade port, then the source port, switch ID and RX timestamp will not be taken again on the upstream switch, it is assumed that this has already been done by the downstream switch (the leaf port in the tree) and that the CPU has everything it needs to decode the information from this packet. We need to distinguish between an upstream-facing DSA link and a downstream-facing DSA link, because the upstream-facing DSA links are "host ports" for the SJA1105/SJA1110 switches, and the downstream-facing DSA links are "cascade ports". Note that SJA1105 supports a single cascade port, so only daisy chain topologies work. With SJA1110, there can be more complex topologies such as: eth0 | host port | sw0p0 sw0p1 sw0p2 sw0p3 sw0p4 | | | | cascade cascade user user port port port port | | | | | | | host | port | | | sw1p0 sw1p1 sw1p2 sw1p3 sw1p4 | | | | | | user user user user host port port port port port | sw2p0 sw2p1 sw2p2 sw2p3 sw2p4 | | | | user user user user port port port port Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-05 | net: ipa: fix IPA v4.9 interconnects | Alex Elder | 1 file, -7/+2
Three interconnects are defined for IPA version 4.9, but there should only be two. They should also use names that match what's used for other platforms (and specified in the Device Tree binding). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-04 | virtio_net: Replace deprecated CPU-hotplug functions. | Sebastian Andrzej Siewior | 1 file, -5/+5
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock(). Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged. Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: virtualization@lists.linux-foundation.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
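The mapping is one-to-one, so the conversion is mechanical; for example (virtnet_set_affinity() is used here only as an illustrative call site):

    /* Before */
    get_online_cpus();
    virtnet_set_affinity(vi);
    put_online_cpus();

    /* After: same semantics, current API names */
    cpus_read_lock();
    virtnet_set_affinity(vi);
    cpus_read_unlock();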
2021-08-04 | net: usb: lan78xx: don't modify phy_device state concurrently | Ivan T. Ivanov | 1 file, -4/+12
Currently, phy_device state can be left in an inconsistent state, as shown by the following alert message [1]. This is because phy_read_status() could be called concurrently from lan78xx_delayedwork, phy_state_machine and __ethtool_get_link. Fix this by making sure that the phy_device state is updated atomically.

[1] lan78xx 1-1.1.1:1.0 eth0: No phy led trigger registered for speed(-1)

Signed-off-by: Ivan T. Ivanov <iivanov@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-04Revert "net: build all switchdev drivers as modules when the bridge is a module"Vladimir Oltean2-3/+0
This reverts commit b0e81817629a496854ff1799f6cbd89597db65fd. Explicit driver dependency on the bridge is no longer needed since switchdev_bridge_port_{,un}offload() is no longer implemented by the bridge driver but by switchdev. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-04 | net: make switchdev_bridge_port_{,un}offload() loosely coupled with the bridge | Vladimir Oltean | 2 files, -2/+2
With the introduction of explicit offloading API in switchdev in commit 2f5dc00f7a3e ("net: bridge: switchdev: let drivers inform which bridge ports are offloaded"), we started having Ethernet switch drivers calling directly into a function exported by net/bridge/br_switchdev.c, which is a function exported by the bridge driver. This means that drivers that did not have an explicit dependency on the bridge before, like cpsw and am65-cpsw, now do - otherwise it is not possible to call a symbol exported by a driver that can be built as module unless you are a module too.

There was an attempt to solve the dependency issue in the form of commit b0e81817629a ("net: build all switchdev drivers as modules when the bridge is a module"). Grygorii Strashko, however, says about it:

| In my opinion, the problem is a bit bigger here than just fixing the
| build :(
|
| In case, of ^cpsw the switchdev mode is kinda optional and in many
| cases (especially for testing purposes, NFS) the multi-mac mode is
| still preferable mode.
|
| There were no such tight dependency between switchdev drivers and
| bridge core before and switchdev serviced as independent, notification
| based layer between them, so ^cpsw still can be "Y" and bridge can be
| "M". Now for mostly every kernel build configuration the CONFIG_BRIDGE
| will need to be set as "Y", or we will have to update drivers to
| support build with BRIDGE=n and maintain separate builds for
| networking vs non-networking testing. But is this enough? Wouldn't
| it cause 'chain reaction' required to add more and more "Y" options
| (like CONFIG_VLAN_8021Q)?
|
| PS. Just to be sure we on the same page - ARM builds will be forced
| (with this patch) to have CONFIG_TI_CPSW_SWITCHDEV=m and so all our
| automation testing will just fail with omap2plus_defconfig.

In the light of this, it would be desirable for some configurations to avoid dependencies between switchdev drivers and the bridge, and have the switchdev mode as completely optional within the driver.

Arnd Bergmann also tried to write a patch which better expressed the build time dependency for Ethernet switch drivers where the switchdev support is optional, like cpsw/am65-cpsw, and this made the drivers follow the bridge (compile as module if the bridge is a module) only if the optional switchdev support in the driver was enabled in the first place:

https://patchwork.kernel.org/project/netdevbpf/patch/20210802144813.1152762-1-arnd@kernel.org/

but this still did not solve the fact that cpsw and am65-cpsw now must be built as modules when the bridge is a module - it just expressed correctly that optional dependency. But the new behavior is an apparent regression from Grygorii's perspective.

So to support the use case where the Ethernet driver is built-in, NET_SWITCHDEV (a bool option) is enabled, and the bridge is a module, we need a framework that can handle the possible absence of the bridge from the running system, i.e. runtime bloatware as opposed to build-time bloatware.

Luckily we already have this framework, since switchdev has been using it extensively. Events from the bridge side are transmitted to the driver side using notifier chains - this was originally done so that unrelated drivers could snoop for events emitted by the bridge towards ports that are implemented by other drivers (think of a switch driver with LAG offload that listens for switchdev events on a bonding/team interface that it offloads).

There are also events which are transmitted from the driver side to the bridge side, which again are modeled using notifiers. SWITCHDEV_FDB_ADD_TO_BRIDGE is an example of this, and deals with notifying the bridge that a MAC address has been dynamically learned. So there is a precedent we can use for modeling the new framework.

The difference compared to SWITCHDEV_FDB_ADD_TO_BRIDGE is that the work that the bridge needs to do when a port becomes offloaded is blocking in its nature: replay VLANs, MDBs etc. The calling context is indeed blocking (we are under rtnl_mutex), but the existing switchdev notification chain that the bridge is subscribed to is only the atomic one. So we need to subscribe the bridge to the blocking switchdev notification chain too.

This patch:
- keeps the driver-side perception of the switchdev_bridge_port_{,un}offload unchanged
- moves the implementation of switchdev_bridge_port_{,un}offload from the bridge module into the switchdev module.
- makes everybody that is subscribed to the switchdev blocking notifier chain "hear" offload & unoffload events
- makes the bridge driver subscribe and handle those events
- moves the bridge driver's handling of those events into 2 new functions called br_switchdev_port_{,un}offload. These functions contain in fact the core of the logic that was previously in switchdev_bridge_port_{,un}offload, just that now we go through an extra indirection layer to reach them.

Unlike all the other switchdev notification structures, the structure used to carry the bridge port information, struct switchdev_notifier_brport_info, does not contain a "bool handled". This is because in the current usage pattern, we always know that a switchdev bridge port offloading event will be handled by the bridge, because the switchdev_bridge_port_offload() call was initiated by a NETDEV_CHANGEUPPER event in the first place, where info->upper_dev is a bridge. So if the bridge wasn't loaded, then the CHANGEUPPER event couldn't have happened.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>