path: root/drivers
Age | Commit message | Author | Files | Lines
2019-07-10 | net: hisilicon: Add a tx_desc to adapt HI13X1_GMAC | Jiangfeng Xiao | 1 | -3/+31
HI13X1 changed the offsets and bitmaps of the tx_desc registers in what is otherwise the same peripheral device across different models of the hip04_eth. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: Add an rx_desc to adapt HI13X1_GMAC | Jiangfeng Xiao | 1 | -0/+9
HI13X1 changed the offsets and bitmaps of the rx_desc registers in what is otherwise the same peripheral device across different models of the hip04_eth. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: Offset buf address to adapt HI13X1_GMAC | Jiangfeng Xiao | 1 | -3/+17
The buf unit size of HI13X1_GMAC is the cache line size, which is 64, so the address we write to the buf register needs to be shifted right by 6 bits. Bit 31 of the PPE_CFG_CPU_ADD_ADDR register of HI13X1_GMAC indicates whether to release the buffer of the message; a low value indicates that the buffer is valid. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
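A minimal sketch of the addressing scheme described above; only PPE_CFG_CPU_ADD_ADDR is taken from the commit, the macro and function names are hypothetical:

    #include <linux/bits.h>
    #include <linux/types.h>

    /* Hypothetical illustration of the HI13X1_GMAC buffer address layout:
     * the hardware addresses buffers in cache-line (64-byte) units, so the
     * CPU address is shifted right by 6 bits before being written to the
     * buf register, and bit 31 selects release vs. valid (low = valid).
     */
    #define PPE_BUF_UNIT_SHIFT   6          /* 64-byte units */
    #define PPE_RELEASE_BIT      BIT(31)    /* low = buffer valid */

    static u32 ppe_encode_buf_addr(dma_addr_t buf, bool release)
    {
            u32 val = buf >> PPE_BUF_UNIT_SHIFT;

            if (release)
                    val |= PPE_RELEASE_BIT;
            return val;
    }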
2019-07-10 | net: hisilicon: Add group field to adapt HI13X1_GMAC | Jiangfeng Xiao | 1 | -3/+5
In general, the group is the same as the port, but some boards specify a special group for better load balancing across processing units. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: HI13X1_GMAC needs dreq reset first | Jiangfeng Xiao | 1 | -0/+24
HI13X1_GMAC must deassert its soft-reset request (dreq) first; otherwise, the subsequent initialization will not take effect. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: HI13X1_GMAC: skip writing LOCAL_PAGE_REG | Jiangfeng Xiao | 1 | -0/+2
HI13X1_GMAC changed the offsets and bitmaps of the GE_TX_LOCAL_PAGE_REG register in what is otherwise the same peripheral device across different models of the hip04_eth. With the default configuration, HI13X1_GMAC also works without any writes to the GE_TX_LOCAL_PAGE_REG register. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: Cleanup for cast to restricted __be32 | Jiangfeng Xiao | 1 | -2/+2
This patch fixes the following warnings from sparse:

    hip04_eth.c:533:23: warning: cast to restricted __be16   (reported 4 times)
    hip04_eth.c:534:23: warning: cast to restricted __be32   (reported 6 times)

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: Cleanup for got restricted __be32 | Jiangfeng Xiao | 1 | -4/+4
This patch fixes the following warnings from sparse:

    hip04_eth.c:468:25: warning: incorrect type in assignment
    hip04_eth.c:468:25:     expected unsigned int [usertype] send_addr
    hip04_eth.c:468:25:     got restricted __be32 [usertype]
    hip04_eth.c:469:25: warning: incorrect type in assignment
    hip04_eth.c:469:25:     expected unsigned int [usertype] send_size
    hip04_eth.c:469:25:     got restricted __be32 [usertype]
    hip04_eth.c:470:19: warning: incorrect type in assignment
    hip04_eth.c:470:19:     expected unsigned int [usertype] cfg
    hip04_eth.c:470:19:     got restricted __be32 [usertype]
    hip04_eth.c:472:23: warning: incorrect type in assignment
    hip04_eth.c:472:23:     expected unsigned int [usertype] wb_addr
    hip04_eth.c:472:23:     got restricted __be32 [usertype]

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: hisilicon: Add support for HI13X1 to hip04_eth | Jiangfeng Xiao | 2 | -7/+40
Extend the hip04_eth driver to support HI13X1_GMAC. Enable it with CONFIG_HI13X1_GMAC option. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-10 | net: stmmac: add support for hash table size 128/256 in dwmac4 | Biao Huang | 5 | -25/+42
1. Get the hash table size from the hw feature register, and add support for larger hash tables (128/256) in dwmac4.
2. Only clear the GMAC_PACKET_FILTER bits used in this function, to avoid side effects on the functions of other bits.

stmmac selftests output log with flow control on (ethtool -t eth0):

    The test result is PASS
    The test extra info:
     1. MAC Loopback          0
     2. PHY Loopback        -95
     3. MMC Counters          0
     4. EEE                 -95
     5. Hash Filter MC        0
     6. Perfect Filter UC     0
     7. MC Filter             0
     8. UC Filter             0
     9. Flow Control          0

Signed-off-by: Biao Huang <biao.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
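A hedged sketch of how a multicast hash filter with a configurable table size is commonly computed; the helper name is hypothetical and dwmac4's exact register write path may differ:

    #include <linux/bitrev.h>
    #include <linux/bits.h>
    #include <linux/crc32.h>
    #include <linux/etherdevice.h>
    #include <linux/log2.h>

    /* Index a 64/128/256-entry multicast hash table: the top log2(size)
     * bits of the reversed CRC select the filter bit, which is then split
     * into a 32-bit hash-table register index and a bit position.
     */
    static void set_mc_hash_bit(u32 *hash_regs, int table_size,
                                const u8 *mc_addr)
    {
            int bits = ilog2(table_size);   /* 64 -> 6, 128 -> 7, 256 -> 8 */
            u32 crc = bitrev32(~crc32_le(~0, mc_addr, ETH_ALEN));
            u32 idx = crc >> (32 - bits);

            hash_regs[idx / 32] |= BIT(idx % 32);
    }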
2019-07-09 | net: stmmac: dwmac4: mac address array boundary violation issue | Biao Huang | 1 | -1/+1
The mac address array size is GMAC_MAX_PERFECT_ADDRESSES, so 'reg' should stay below it; otherwise writes will spill into other registers. Signed-off-by: Biao Huang <biao.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
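A sketch of the off-by-one this fixes, assuming the usual dwmac4 GMAC_ADDR_HIGH()/GMAC_ADDR_LOW() perfect-filter register macros; the surrounding loop shape is illustrative:

    #include <linux/io.h>

    /* Clear the unused perfect-filter slots after the last used entry,
     * but never index at or past the array size: with '<=' the final
     * iteration would write one slot past GMAC_MAX_PERFECT_ADDRESSES,
     * clobbering whatever register follows the MAC address array.
     */
    static void clear_unused_mac_slots(void __iomem *ioaddr, int reg)
    {
            while (reg < GMAC_MAX_PERFECT_ADDRESSES) {      /* was: <= */
                    writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
                    writel(0, ioaddr + GMAC_ADDR_LOW(reg));
                    reg++;
            }
    }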
2019-07-09 | net: dsa: vsc73xx: fix NET_DSA and OF dependencies | Arnd Bergmann | 1 | -0/+4
The restructuring of the driver got the dependencies wrong: without CONFIG_NET_DSA we get this build failure:

    WARNING: unmet direct dependencies detected for NET_DSA_VITESSE_VSC73XX
      Depends on [n]: NETDEVICES [=y] && HAVE_NET_DSA [=y] && OF [=y] && NET_DSA [=n]
      Selected by [m]:
      - NET_DSA_VITESSE_VSC73XX_PLATFORM [=m] && NETDEVICES [=y] && HAVE_NET_DSA [=y] && HAS_IOMEM [=y]
    ERROR: "dsa_unregister_switch" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined!
    ERROR: "dsa_switch_alloc" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined!
    ERROR: "dsa_register_switch" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined!

Add the appropriate dependencies. Fixes: 95711cd5f0b4 ("net: dsa: vsc73xx: Split vsc73xx driver") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: mvmdio: defer probe of orion-mdio if a clock is not ready | Josua Mayer | 1 | -0/+5
Defer probing of the orion-mdio interface when getting a clock returns EPROBE_DEFER. This avoids locking up the Armada 8k SoC when mdio is used before all clocks have been enabled. Signed-off-by: Josua Mayer <josua@solid-run.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
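A hedged sketch of the deferral pattern described here; the helper shape and fixed clock count are illustrative, not the driver's exact code:

    #include <linux/clk.h>
    #include <linux/errno.h>
    #include <linux/of.h>

    static int orion_mdio_get_clocks(struct device *dev, struct clk *clks[4])
    {
            int i;

            for (i = 0; i < 4; i++) {
                    clks[i] = of_clk_get(dev->of_node, i);
                    /* A required clock is not probed yet: ask the driver
                     * core to retry this probe later instead of running
                     * (and hanging) without it.
                     */
                    if (PTR_ERR(clks[i]) == -EPROBE_DEFER)
                            return -EPROBE_DEFER;
                    if (IS_ERR(clks[i]))
                            break;  /* fewer than four clocks: fine */
                    clk_prepare_enable(clks[i]);
            }
            return 0;
    }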
2019-07-09 | net: mvmdio: print warning when orion-mdio has too many clocks | Josua Mayer | 1 | -0/+4
Print a warning when device tree specifies more than the maximum of four clocks supported by orion-mdio. Because reading from mdio can lock up the Armada 8k when a required clock is not initialized, it is important to notify the user when a specified clock is ignored. Signed-off-by: Josua Mayer <josua@solid-run.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: mvmdio: allow up to four clocks to be specified for orion-mdio | Josua Mayer | 1 | -1/+1
Allow up to four clocks to be specified and enabled for the orion-mdio interface, which are required by the Armada 8k and defined in armada-cp110.dtsi. Fixes a hang in probing the mvmdio driver that was encountered on the Clearfog GT 8K with all drivers built as modules, but also affects other boards such as the MacchiatoBIN. Cc: stable@vger.kernel.org Fixes: 96cb43423822 ("net: mvmdio: allow up to three clocks to be specified for orion-mdio") Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Josua Mayer <josua@solid-run.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: netsec: start using buffers if page_pool registration succeeded | Ilias Apalodimas | 1 | -8/+9
The current driver starts using page_pool buffers before calling xdp_rxq_info_reg_mem_model(). Start using the buffers only after the registration has succeeded, so we won't have to call page_pool_request_shutdown() in case of failure. Fixes: 5c67bf0ec4d0 ("net: netsec: Use page_pool API") Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
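A hedged sketch of the corrected ordering, using the standard xdp_rxq_info registration sequence; fill_rx_ring_from_pool() is a hypothetical stand-in for the driver's refill path:

    /* Register the page_pool as the rxq's memory model *before* handing
     * any of its pages to the RX ring; on failure nothing needs to be
     * shut down because no buffer is in flight yet.
     */
    err = xdp_rxq_info_reg(&rxq, ndev, 0);
    if (err)
            goto err_free_pool;

    err = xdp_rxq_info_reg_mem_model(&rxq, MEM_TYPE_PAGE_POOL, pool);
    if (err)
            goto err_unreg_rxq;

    fill_rx_ring_from_pool(pool);   /* only now start consuming buffers */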
2019-07-09 | net: stmmac: Introducing support for Page Pool | Jose Abreu | 3 | -144/+70
Mapping and unmapping the DMA region is a significant bottleneck in the stmmac driver, especially in the RX path. This commit introduces support for the Page Pool API and uses it in all RX queues. With this change, we get more stable throughput and some increase of bandwidth with iperf:
- MAC1000: 950 Mbps
- XGMAC: 9.22 Gbps

Changes from v3:
- Use page_pool_destroy() (Ilias)
Changes from v2:
- Unconditionally call page_pool_free() (Jesper)
Changes from v1:
- Use page_pool_get_dma_addr() (Jesper)
- Add a comment (Jesper)
- Add page_pool_free() call (Jesper)
- Reintroduce sync_single_for_device (Arnd / Ilias)

Signed-off-by: Jose Abreu <joabreu@synopsys.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
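A minimal sketch of the page_pool RX pattern this commit adopts; the ring size is illustrative and the helper names are hypothetical:

    #include <net/page_pool.h>

    /* One pool per RX queue; with PP_FLAG_DMA_MAP the pool maps pages
     * once and recycles them, avoiding per-packet map/unmap cost.
     */
    static struct page_pool *rx_pool_create(struct device *dev, int ring_size)
    {
            struct page_pool_params pp = {
                    .flags     = PP_FLAG_DMA_MAP,
                    .order     = 0,                 /* one frame per page */
                    .pool_size = ring_size,         /* illustrative */
                    .nid       = NUMA_NO_NODE,
                    .dev       = dev,
                    .dma_dir   = DMA_FROM_DEVICE,
            };

            return page_pool_create(&pp);           /* ERR_PTR on failure */
    }

    /* Refill one slot: take a pre-mapped page and fetch the DMA address
     * to program into the RX descriptor.
     */
    static dma_addr_t rx_refill_one(struct page_pool *pool, struct page **pagep)
    {
            *pagep = page_pool_dev_alloc_pages(pool);
            return *pagep ? page_pool_get_dma_addr(*pagep) : 0;
    }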
2019-07-09 | net: stmmac: Fix descriptors address being in > 32 bits address space | Jose Abreu | 7 | -22/+26
Commit a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC") introduced support for > 32 bit addressing in XGMAC, but the conversion of descriptors to dma_addr_t was left out. As some devices assign coherent memory in regions above 32 bits, we need to set both the lower and upper halves of the descriptor address when initializing the DMA channels. Luckily, this was working for me because I was assigning CMA to the < 4GB address space for performance reasons. Fixes: a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC") Signed-off-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
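A sketch of programming a 64-bit descriptor base address into split 32-bit registers with the kernel's upper_32_bits()/lower_32_bits() helpers; the register offsets here are hypothetical:

    #include <linux/io.h>
    #include <linux/kernel.h>   /* upper_32_bits() / lower_32_bits() */

    #define DMA_CH_DESC_HADDR  0x10     /* hypothetical offsets */
    #define DMA_CH_DESC_LADDR  0x14

    static void set_desc_base(void __iomem *ioaddr, dma_addr_t desc)
    {
            /* Descriptor rings may live above 4 GB, so both halves of
             * the address must be written, not just the low 32 bits.
             */
            writel(upper_32_bits(desc), ioaddr + DMA_CH_DESC_HADDR);
            writel(lower_32_bits(desc), ioaddr + DMA_CH_DESC_LADDR);
    }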
2019-07-09 | net: stmmac: Implement RX Coalesce Frames setting | Jose Abreu | 4 | -8/+20
Add support for coalescing the RX path by specifying a number of frames that do not need the interrupt-on-completion bit set. This is only available when the RX Watchdog is enabled. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
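A sketch of the rx-frames coalescing idea with a hypothetical private struct: only every Nth descriptor requests an interrupt, and the RX watchdog catches whatever completes silently in between:

    #include <linux/types.h>

    struct my_priv {            /* hypothetical */
            u32 rx_count;
            u32 rx_coal_frames; /* from ethtool -C ... rx-frames N */
    };

    /* Decide whether the descriptor being refilled should carry the
     * IOC (interrupt-on-completion) bit.
     */
    static bool rx_desc_wants_irq(struct my_priv *priv)
    {
            if (++priv->rx_count < priv->rx_coal_frames)
                    return false;   /* leave IOC clear on this descriptor */
            priv->rx_count = 0;
            return true;            /* set IOC on this descriptor */
    }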
2019-07-09 | bnxt_en: Add page_pool_destroy() during RX ring cleanup. | Michael Chan | 1 | -6/+2
Add page_pool_destroy() in bnxt_free_rx_rings() during normal RX ring cleanup, as Ilias has informed us that the following commit has been merged: 1da4bbeffe41 ("net: core: page_pool: add user refcnt and reintroduce page_pool_destroy") The special error handling code to call page_pool_free() can now be removed. bnxt_free_rx_rings() will always be called during normal shutdown or any error paths. Fixes: 322b87ca55f2 ("bnxt_en: add page_pool support") Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org> Cc: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Acked-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net/mlx5e: Register devlink ports for physical link, PCI PF, VFs | Parav Pandit | 2 | -31/+78
Register a devlink port of physical port, PCI PF and PCI VF flavour for each PF and VF when a given devlink instance is in switchdev mode. Implement the ndo_get_devlink_port callback API to make use of the registered devlink ports. This eliminates the ndo_get_phys_port_name() and ndo_get_port_parent_id() callbacks; hence, remove them. An example output with 2 VFs, without a PF and a single uplink port is below.

    $ devlink port show
    pci/0000:06:00.0/65535: type eth netdev ens2f0 flavour physical
    pci/0000:05:00.0/1: type eth netdev eth1 flavour pcivf pfnum 0 vfnum 0
    pci/0000:05:00.0/2: type eth netdev eth2 flavour pcivf pfnum 0 vfnum 1

Reviewed-by: Roi Dayan <roid@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: undo TLS sequence tracking when dropping the frame | Jakub Kicinski | 1 | -0/+23
If the driver has to drop a TLS frame, it needs to undo the TCP sequence tracking changes; otherwise the device will receive segments out of order and drop them. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: avoid one of the ifdefs for TLS | Jakub Kicinski | 1 | -4/+2
Move the #ifdef CONFIG_TLS_DEVICE a little so we can eliminate the other one. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: don't leave key material in freed FW cmsg skbs | Jakub Kicinski | 1 | -1/+15
Make sure the contents of the skb that carried key material to the FW are cleared before the skb is freed. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net/tls: don't clear TX resync flag on error | Dirk van der Merwe | 2 | -7/+14
Introduce a return code for the tls_dev_resync callback. When the driver's TX resync fails, the kernel can retry the resync until it succeeds. This prevents drivers from attempting to offload TLS packets while the connection is known to be out of sync. We don't worry about the RX resync, since it will be retried naturally as more encrypted records are received. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
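A heavily hedged sketch of the driver side of this change: the hook name matches the commit, but the exact signature is an assumption for this kernel era, and ctrl_queue_full() is hypothetical:

    /* Refuse the resync when it cannot be delivered; a non-zero return
     * tells the core to keep the resync pending and retry later, rather
     * than clearing the flag and offloading out-of-sync packets.
     */
    static int nfp_net_tls_resync(struct net_device *netdev, struct sock *sk,
                                  u32 seq, u8 *rcd_sn,
                                  enum tls_offload_ctx_dir direction)
    {
            if (ctrl_queue_full(netdev))            /* hypothetical check */
                    return -EAGAIN;                 /* core retries resync */
            /* ... send the resync control message to the FW ... */
            return 0;
    }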
2019-07-09 | nfp: tls: count TSO segments separately for the TLS offload | Jakub Kicinski | 1 | -1/+4
Count the number of successfully submitted TLS segments, not skbs. This will make it easier to compare the TLS encryption count against other counters. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: ccm: increase message limits | Dirk van der Merwe | 1 | -3/+3
Increase the batch limit to consume small message bursts more effectively. Practically, the effect on the 'add' messages is not significant since the mailbox is sized such that the 'add' messages are still limited to the same order of magnitude that it was originally set for. Furthermore, increase the queue size limit to 1024 entries. This further improves the handling of bursts of small control messages. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: use unique connection ids instead of 4-tuple for TX | Jakub Kicinski | 3 | -14/+34
Connection 4-tuple reuse is slightly problematic - the TLS socket and context do not get destroyed until all the associated skbs have left the system and all references are released. This leads to a stale connection entry in the device, preventing the addition of a new one if the 4-tuple is reused quickly enough. Instead of using the real 4-tuple as the key, use a unique ID. Set the protocol to TCP and the port to 0 to ensure no collisions with real connections. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
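A sketch of the key-allocation trick described above; the structure layout and helper are hypothetical, only the TCP-with-port-0 convention comes from the commit:

    #include <linux/atomic.h>
    #include <linux/in.h>
    #include <linux/kernel.h>

    struct tls_conn_key {               /* hypothetical layout */
            __be32 src_ip, dst_ip;
            __be16 src_port, dst_port;
            u8     proto;
    };

    static atomic64_t conn_id_gen = ATOMIC64_INIT(0);

    /* Pack a monotonically increasing ID into the address bits. TCP with
     * port 0 can never match a real offloaded connection, so synthetic
     * keys cannot collide with genuine ones.
     */
    static void fill_unique_key(struct tls_conn_key *key)
    {
            u64 id = atomic64_inc_return(&conn_id_gen);

            key->src_ip   = (__force __be32)lower_32_bits(id);
            key->dst_ip   = (__force __be32)upper_32_bits(id);
            key->src_port = 0;
            key->dst_port = 0;
            key->proto    = IPPROTO_TCP;
    }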
2019-07-09 | nfp: tls: move setting ipver_vlan to a helper | Jakub Kicinski | 1 | -6/+10
Long lines are ugly. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: ignore queue limits for delete commands | Jakub Kicinski | 3 | -10/+24
We need to do our best not to drop delete commands, otherwise we will have stale entries in the connection table. Ignore the control message queue limits for delete commands. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: phy: Make use of linkmode_mod_bit helper | Fuqian Huang | 1 | -12/+4
linkmode_mod_bit is introduced as a helper function to set/clear bits in a linkmode. Replace the if-else code structure with a call to the helper linkmode_mod_bit. Signed-off-by: Fuqian Huang <huangfq.daxian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
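The before/after shape of this cleanup, sketched; the Pause bit is just an example, any link-mode bit works the same:

    #include <linux/linkmode.h>
    #include <linux/phy.h>

    static void set_pause(struct phy_device *phydev, bool enable)
    {
            /* Before the cleanup this was an if/else calling
             * linkmode_set_bit() or linkmode_clear_bit().
             * After: one call; the last argument selects set vs. clear.
             */
            linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT,
                             phydev->supported, enable);
    }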
2019-07-09 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 53 | -320/+623
Two cases of overlapping changes, nothing fancy. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | bonding: fix value exported by Netlink for peer_notif_delay | Vincent Bernat | 1 | -1/+1
IFLA_BOND_PEER_NOTIF_DELAY was set to the value of downdelay instead of peer_notif_delay. After this change, the correct value is exported. Fixes: 07a4ddec3ce9 ("bonding: add an option to specify a delay between peer notifications") Signed-off-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | co-allocate socket_wq with socket itself | Al Viro | 2 | -8/+5
socket->wq is assign-once, set when we are initializing both the struct socket it's in and the struct socket_wq it points to. As a matter of fact, the only reason for the separate allocation was the ability to RCU-delay the freeing of socket_wq. RCU-delaying the freeing of the socket itself gets rid of that need, so we can just fold struct socket_wq into the end of struct socket and simplify life both for sock_alloc_inode() (one allocation instead of two) and for the tun/tap oddballs, where we used to embed struct socket and struct socket_wq into the same structure (now - embedding just the struct socket). Note that the reference to struct socket_wq in struct sock does remain a reference - that's unchanged. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: pasemi: fix a use-after-free in pasemi_mac_phy_init() | Wen Yang | 1 | -1/+1
The phy_dn variable is still being used in of_phy_connect() after the of_node_put() call, which may result in a use-after-free. Fixes: 1dd2d06c0459 ("net: Rework pasemi_mac driver to use of_mdio infrastructure") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: axienet: fix a potential double free in axienet_probe() | Wen Yang | 1 | -1/+0
There is a possible use-after-free issue in axienet_probe():

    1701: np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
    1702: if (np) {
              ...
    1787:     of_node_put(np);            ---> released here
    1788:     lp->eth_irq = platform_get_irq(pdev, 0);
    1789: } else {
              ...
    1801: }
    1802: if (IS_ERR(lp->dma_regs)) {
              ...
    1805:     of_node_put(np);            ---> double released here
    1806:     goto free_netdev;
    1807: }

We solve this problem by removing the unnecessary of_node_put(). Fixes: 28ef9ebdb64c ("net: axienet: make use of axistream-connected attribute optional") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: Anirudha Sarangi <anirudh@xilinx.com> Cc: John Linn <John.Linn@xilinx.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Robert Hancock <hancock@sedsystems.ca> Cc: netdev@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Robert Hancock <hancock@sedsystems.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: stmmac: enable clause 45 mdio support | Kweh Hock Leong | 1 | -8/+35
DWMAC4 is capable of supporting clause 45 mdio communication. This patch enables the feature in stmmac_mdio_write() and stmmac_mdio_read(), following the phy_write_mmd() and phy_read_mmd() mdiobus read/write implementation format. Reviewed-by: Li, Yifan <yifan2.li@intel.com> Signed-off-by: Kweh Hock Leong <hock.leong.kweh@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
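A sketch of how a clause-45 register address is typically encoded for an mdiobus accessor in this kernel era, using the MII_ADDR_C45 flag from linux/mdio.h; the surrounding call is illustrative:

    #include <linux/mdio.h>
    #include <linux/phy.h>

    /* A clause-45 access is flagged by MII_ADDR_C45 (bit 30); the MMD
     * device address sits in bits 20:16 and the 16-bit register number
     * in bits 15:0.
     */
    static u32 c45_regnum(int devad, int regnum)
    {
            return MII_ADDR_C45 | (devad << 16) | (regnum & 0xffff);
    }

    /* e.g. read MMD 1 (PMA/PMD), register 0, on PHY address 4:
     *   val = mdiobus_read(bus, 4,
     *                      c45_regnum(MDIO_MMD_PMAPMD, MDIO_CTRL1));
     */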
2019-07-09 | net: mvpp2: cls: Add support for ETHER_FLOW | Maxime Chevallier | 1 | -0/+2
Users can specify classification actions based on the 'ether' flow type. In that case, this will apply to all ethernet traffic, superseding flows such as 'udp4' or 'tcp6'. Add support for this flow type in the PPv2 classifier, by mapping the ETHER_FLOW value to the corresponding entries in the classifier. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: mvpp2: cls: Report an error for unsupported flow types | Maxime Chevallier | 1 | -0/+4
Add a missing check to detect flow types that we don't support, so that the user can be informed of this. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | nfp: tls: fix error return code in nfp_net_tls_add() | Wei Yongjun | 1 | -0/+1
Fix to return negative error code -EINVAL from the error handling case instead of 0, as done elsewhere in this function. Fixes: 1f35a56cf586 ("nfp: tls: add/delete TLS TX connections") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | bnxt_en: add page_pool support | Andy Gospodarek | 4 | -7/+47
This removes contention over page allocation for XDP_REDIRECT actions by adding page_pool support per queue for the driver. The performance for XDP_REDIRECT actions scales linearly with the number of cores performing redirect actions when using the page pools instead of the standard page allocator. v2: Fix up the error path from XDP registration, noted by Ilias Apalodimas. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | bnxt_en: optimized XDP_REDIRECT support | Andy Gospodarek | 4 | -10/+140
This adds basic support for XDP_REDIRECT in the bnxt_en driver. Next patch adds the more optimized page pool support. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | bnxt_en: Refactor __bnxt_xmit_xdp(). | Michael Chan | 4 | -13/+28
__bnxt_xmit_xdp() is used by XDP_TX and ethtool loopback packet transmit. Refactor it so that it can be re-used by the XDP_REDIRECT logic. Restructure the TX interrupt handler logic to cleanly separate XDP_TX logic in preparation for XDP_REDIRECT. Acked-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | bnxt_en: rename some xdp functions | Andy Gospodarek | 3 | -7/+7
Renaming bnxt_xmit_xdp to __bnxt_xmit_xdp to get ready for XDP_REDIRECT support and reduce confusion/namespace collision. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: ethernet: ti: cpsw: add XDP support | Ivan Khoronzhuk | 4 | -59/+488
Add XDP support based on an rx page_pool allocator, one frame per page. The page pool allocator is used with the assumption that only one rx_handler is running simultaneously. DMA map/unmap is reused from the page pool even though there is no need to map the whole page. Due to the specifics of cpsw, the same TX/RX handler can be used by 2 network devices, so special fields are added to the buffer to identify the interface a frame is destined to. Thus XDP works for both interfaces, which allows testing xdp redirect between the two interfaces easily. Also, each rx queue has its own page pool, common to both netdevs. The XDP prog is common for all channels until appropriate changes are added to the XDP infrastructure. Also, once page_pool recycling becomes part of the skb netstack, some simplifications can be added, like removing page_pool_release_page() before skb receive. In order to keep rx_dev during redirect (it may somehow be used in future), do a flush in the rx_handler, which keeps the rx dev the same throughout the redirect. This conforms with tracing rx_dev, as pointed out by Jesper. Also, there is a possibility that the XDP generic code can be extended to support multi-ndev drivers like this one, using the same rx queue for several ndevs, based on switchdev for instance. In this case, the driver can be modified as exposed here: https://lkml.org/lkml/2019/7/3/243 Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: ethernet: ti: cpsw_ethtool: allow res split while down | Ivan Khoronzhuk | 1 | -2/+1
It is possible to set the number of channels while the interfaces are down. When an interface gets up, it should resplit the budget. This resplit can happen after the phy is up, but only if the speed has changed, so it should be set before that; to this end, allow the resplit to happen while changing the number of channels, when the interfaces are down. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: ethernet: ti: davinci_cpdma: allow desc split while down | Ivan Khoronzhuk | 3 | -9/+28
It is possible to set ring params while the interfaces are down. When an interface gets up, it uses the number of descs to fill the rx queue, and later on the changes to create the rx pools. Usually this resplit can happen after the phy is up, but it can be needed before that, so allow it to happen while setting the number of rx descs, when the interfaces are down. Also, since it has no dependency on interface state, move it to the cpdma layer, where it belongs. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09 | net: ethernet: ti: davinci_cpdma: add dma mapped submit | Ivan Khoronzhuk | 2 | -10/+83
In case a DMA-mapped packet needs to be sent, as with the XDP page pool, the "mapped" submit can be used. This patch adds a DMA-mapped submit based on the regular one. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
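A hedged sketch of the two submit variants side by side; the mapped helper's name follows the patch subject, but its exact signature here is an assumption:

    /* Regular submit: cpdma maps 'data' itself and unmaps on completion. */
    ret = cpdma_chan_submit(txch, token, data, len, directed);

    /* Mapped submit (sketch): the caller -- e.g. the XDP page_pool
     * path -- already holds a DMA address, so cpdma performs no
     * dma_map_single() of its own and undoes none on completion.
     */
    ret = cpdma_chan_submit_mapped(txch, token, dma_addr, len, directed);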
2019-07-09 | net: core: page_pool: add user refcnt and reintroduce page_pool_destroy | Ivan Khoronzhuk | 2 | -8/+4
Jesper recently removed page_pool_destroy() (from driver invocation) and moved shutdown and free of page_pool into xdp_rxq_info_unreg(), in order to handle in-flight packets/pages. This created an asymmetry in drivers' create/destroy pairs. This patch reintroduces page_pool_destroy and adds a page_pool user refcnt. This serves to simplify drivers' error handling, as drivers now always call page_pool_destroy() and don't need to track whether xdp_rxq_info_reg_mem_model() was unsuccessful. It could also be used for special cases where a single RX queue (with a single page_pool) provides packets for two net_devices, and thus needs to register the same page_pool twice with two xdp_rxq_info structures. This patch is primarily to ease API usage for drivers. The recently merged netsec driver actually has a bug in this area, which is solved by this API change. This patch is a modified version of Ivan Khoronzhuk's original patch. Link: https://lore.kernel.org/netdev/20190625175948.24771-2-ivan.khoronzhuk@linaro.org/ Fixes: 5c67bf0ec4d0 ("net: netsec: Use page_pool API") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Reviewed-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
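The resulting driver pattern, sketched under the assumption of the usual page_pool/xdp_rxq_info setup; create and destroy are now symmetric regardless of whether the mem-model registration happened or succeeded:

    pool = page_pool_create(&pp_params);
    if (IS_ERR(pool))
            return PTR_ERR(pool);

    /* Takes a user reference on success; thanks to the refcnt the
     * driver no longer needs to remember whether this call succeeded.
     */
    err = xdp_rxq_info_reg_mem_model(&rxq, MEM_TYPE_PAGE_POOL, pool);

    /* Teardown is always the same single call, on every path. */
    page_pool_destroy(pool);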
2019-07-08 | macb: fix build warning for !CONFIG_OF | Arnd Bergmann | 1 | -2/+2
When CONFIG_OF is disabled, we get a harmless warning about the newly added variable:

    drivers/net/ethernet/cadence/macb_main.c:48:39: error: 'mgmt' defined but not used [-Werror=unused-variable]
     static struct sifive_fu540_macb_mgmt *mgmt;

Move the variable closer to its use inside of the #ifdef. Fixes: c218ad559020 ("macb: Add support for SiFive FU540-C000") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>