path: root/drivers/net/ethernet/google
Age | Commit message | Author | Files | Lines
2023-07-10  gve: unify driver name usage  (Junfeng Guo; 3 files, -6/+8)
The codebase used two different names for this driver (`gvnic` and `gve`), which was unfriendly for users, especially when trying to bind or unbind the driver manually. The kernel module is registered under the name `gve`, so it is more reasonable to align the driver name with the module name.
Fixes: 893ce44df565 ("gve: Add basic driver framework for Compute Engine Virtual NIC")
Cc: csully@google.com
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
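A minimal sketch of what aligning the names looks like at the PCI driver registration (the surrounding field values are illustrative, not the exact patch):

    /* Sketch: the module is registered as "gve", so the PCI driver
     * name should match it rather than the legacy "gvnic". */
    #define GVE_DRIVER_NAME "gve"

    static struct pci_driver gve_driver = {
            .name     = GVE_DRIVER_NAME,    /* was "gvnic" */
            .id_table = gve_id_table,
            .probe    = gve_probe,
            .remove   = gve_remove,
    };
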
2023-07-07  gve: Set default duplex configuration to full  (Junfeng Guo; 1 file, -0/+3)
The duplex mode was previously left unset in the driver, so the reported value defaulted to 0, which corresponds to half duplex. This could give users incorrect expectations about the driver's transmission capabilities. Set the default duplex configuration to full, as the driver only runs in full duplex mode at this point.
Fixes: 7e074d5a76ca ("gve: Enable Link Speed Reporting in the driver.")
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Message-ID: <20230706044128.2726747-1-junfeng.guo@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
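A hedged sketch of the likely shape of the fix in the ethtool link-settings callback (callback name follows the driver's convention; only the duplex line is the point):

    /* Sketch: report DUPLEX_FULL instead of leaving cmd->base.duplex
     * at 0 (0 == DUPLEX_HALF in the ethtool ABI). */
    static int gve_get_link_ksettings(struct net_device *netdev,
                                      struct ethtool_link_ksettings *cmd)
    {
            cmd->base.speed = SPEED_UNKNOWN; /* filled in elsewhere */
            cmd->base.duplex = DUPLEX_FULL;  /* driver is always full duplex */
            return 0;
    }
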
2023-06-27  gve: use vmalloc_array and vcalloc  (Julia Lawall; 1 file, -1/+1)
Use vmalloc_array and vcalloc to protect against multiplication overflows. The changes were done using the following Coccinelle semantic patch:

    // <smpl>
    @initialize:ocaml@
    @@
    let rename alloc =
      match alloc with
        "vmalloc" -> "vmalloc_array"
      | "vzalloc" -> "vcalloc"
      | _ -> failwith "unknown"

    @@
    size_t e1,e2;
    constant C1, C2;
    expression E1, E2, COUNT, x1, x2, x3;
    typedef u8;
    typedef __u8;
    type t = {u8,__u8,char,unsigned char};
    identifier alloc = {vmalloc,vzalloc};
    fresh identifier realloc = script:ocaml(alloc) { rename alloc };
    @@

    (
          alloc(x1*x2*x3)
    |
          alloc(C1 * C2)
    |
          alloc((sizeof(t)) * (COUNT), ...)
    |
    -     alloc((e1) * (e2))
    +     realloc(e1, e2)
    |
    -     alloc((e1) * (COUNT))
    +     realloc(COUNT, e1)
    |
    -     alloc((E1) * (E2))
    +     realloc(E1, E2)
    )
    // </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Link: https://lore.kernel.org/r/20230627144339.144478-5-Julia.Lawall@inria.fr
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
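An illustrative C before/after of what the semantic patch rewrites (hypothetical call site; `vmalloc_array()` rejects an overflowing product instead of letting `n * size` silently wrap and under-allocate):

    /* before: the multiplication can overflow on large counts */
    buses = vmalloc(num_pages * sizeof(*buses));

    /* after: the helper checks the product for overflow */
    buses = vmalloc_array(num_pages, sizeof(*buses));
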
2023-05-24  gve: Support IPv6 Big TCP on DQ  (Coco Li; 2 files, -0/+9)
Add support for using IPv6 Big TCP on DQ, which can handle large TSO/GRO packets. See https://lwn.net/Articles/895398/. This can improve throughput and CPU usage.

Perf test result:
ip -d link show $DEV
    gso_max_size 185000 gso_max_segs 65535 tso_max_size 262143 tso_max_segs 65535 gro_max_size 185000

For performance, tested with neper using a 9k MTU on hardware that supports a 200Gb/s line rate. In single streams where the line rate is not saturated, we expect throughput improvements; when the network is performing at line rate, we expect CPU usage improvements.

Tcp_stream (unidirectional stream test, T=thread, F=flow):
    skb=180kb, T=1,  F=1,   no zerocopy:  throughput average=64576.88 Mb/s,  sender stime=8.3,   receiver stime=10.68
    skb=64kb,  T=1,  F=1,   no zerocopy:  throughput average=64862.54 Mb/s,  sender stime=9.96,  receiver stime=12.67
    skb=180kb, T=1,  F=1,   yes zerocopy: throughput average=146604.97 Mb/s, sender stime=10.61, receiver stime=5.52
    skb=64kb,  T=1,  F=1,   yes zerocopy: throughput average=131357.78 Mb/s, sender stime=12.11, receiver stime=12.25
    skb=180kb, T=20, F=100, no zerocopy:  throughput average=182411.37 Mb/s, sender stime=41.62, receiver stime=79.4
    skb=64kb,  T=20, F=100, no zerocopy:  throughput average=182892.02 Mb/s, sender stime=57.39, receiver stime=72.69
    skb=180kb, T=20, F=100, yes zerocopy: throughput average=182337.65 Mb/s, sender stime=27.94, receiver stime=39.7
    skb=64kb,  T=20, F=100, yes zerocopy: throughput average=182144.20 Mb/s, sender stime=47.06, receiver stime=39.01

Signed-off-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Coco Li <lixiaoyan@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230522201552.3585421-1-ziweixiao@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
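The driver side of enabling Big TCP is essentially raising the advertised TSO ceiling; a hedged sketch (the constant name is an assumption, though 262143 matches the tso_max_size shown above):

    /* Sketch: advertise a TSO limit beyond 64K for the DQ format.
     * GVE_DQO_TX_MAX is an assumed name for the 262143-byte limit. */
    if (priv->queue_format == GVE_DQO_RDA_FORMAT)
            netif_set_tso_max_size(priv->dev, GVE_DQO_TX_MAX);

Userspace then opts in per device, e.g. with `ip link set $DEV gso_max_size 185000 gro_max_size 185000`.
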
2023-05-10  gve: Remove the code of clearing PBA bit  (Ziwei Xiao; 1 file, -13/+0)
Clearing the PBA bit from the driver is race prone and may lead to dropped interrupt events, which could potentially halt traffic completely.
Fixes: 5e8c5adf95f8 ("gve: DQO: Add core netdev features")
Signed-off-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Bailey Forrest <bcf@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-11  gve: Unify duplicate GQ min pkt desc size constants  (Shailend Chand; 2 files, -4/+2)
The two constants accomplish the same thing.
Signed-off-by: Shailend Chand <shailend@google.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/r/20230407184830.309398-1-shailend@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 2 files, -7/+7)
Conflicts:

drivers/net/ethernet/google/gve/gve.h
    3ce934558097 ("gve: Secure enough bytes in the first TX desc for all TCP pkts")
    75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
    https://lore.kernel.org/all/20230406104927.45d176f5@canb.auug.org.au/
    https://lore.kernel.org/all/c5872985-1a95-0bc8-9dcc-b6f23b439e9d@tessares.net/

Adjacent changes:

net/can/isotp.c
    051737439eae ("can: isotp: fix race between isotp_sendsmg() and isotp_release()")
    96d1c81e6a04 ("can: isotp: add module parameter for maximum pdu size")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-05  gve: Secure enough bytes in the first TX desc for all TCP pkts  (Shailend Chand; 2 files, -7/+7)
Non-GSO TCP packets whose SKBs' linear portion did not include the entire TCP header were not populating the first Tx descriptor with as many bytes as the vNIC expected. This change ensures that all TCP packets populate the first descriptor with the correct number of bytes.
Fixes: 893ce44df565 ("gve: Add basic driver framework for Compute Engine Virtual NIC")
Signed-off-by: Shailend Chand <shailend@google.com>
Link: https://lore.kernel.org/r/20230403172809.2939306-1-shailend@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
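A hedged sketch of the byte computation the fix implies (the helper and the GVE_GQ_TX_MIN_PKT_DESC_BYTES constant name are assumptions, not the exact patch):

    /* Sketch: the first descriptor must cover at least the full TCP
     * header, even when the skb linear area is shorter than that. */
    static int gve_first_desc_bytes(const struct sk_buff *skb)
    {
            if (skb_is_gso(skb))
                    return skb_checksum_start_offset(skb) + tcp_hdrlen(skb);
            /* non-GSO: copy the minimum the vNIC parses, capped at skb->len */
            return min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
    }
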
2023-03-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file, -1/+4)
Conflicts:

drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
    6e9d51b1a5cb ("net/mlx5e: Initialize link speed to zero")
    1bffcea42926 ("net/mlx5e: Add devlink hairpin queues parameters")
    https://lore.kernel.org/all/20230324120623.4ebbc66f@canb.auug.org.au/
    https://lore.kernel.org/all/20230321211135.47711-1-saeed@kernel.org/

Adjacent changes:

drivers/net/phy/phy.c
    323fe43cf9ae ("net: phy: Improved PHY error reporting in state machine")
    4203d84032e2 ("net: phy: Ensure state transitions are processed from phy_stop()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-03-23  gve: Cache link_speed value from device  (Joshua Washington; 1 file, -1/+4)
The link speed is never changed for the uptime of a VM, yet the current implementation sends an admin queue command for each call. Admin queue command invocations have nontrivial overhead (e.g., VM exits), which can be disruptive to users if triggered frequently. Our telemetry data shows that there are VMs that make frequent calls to this admin queue command. Caching the result of the original admin queue command eliminates the need to send multiple admin queue commands on subsequent calls to retrieve the link speed.
Fixes: 7e074d5a76ca ("gve: Enable Link Speed Reporting in the driver.")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230321172332.91678-1-joshwash@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
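A hedged sketch of the caching (the field and admin-queue helper names follow the driver's conventions but are assumptions here):

    /* Sketch: only issue the admin queue command when there is no
     * cached value; the speed never changes for the VM's uptime. */
    static int gve_report_link_speed_cached(struct gve_priv *priv)
    {
            if (priv->link_speed)   /* cached from a prior call */
                    return 0;
            return gve_adminq_report_link_speed(priv); /* fills priv->link_speed */
    }
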
2023-03-17  gve: Add AF_XDP zero-copy support for GQI-QPL format  (Praveen Kaligineedi; 5 files, -9/+274)
Adding AF_XDP zero-copy support.

Note: Although these changes support AF_XDP sockets in zero-copy mode, there is still a copy happening within the driver between the XSK buffer pool and the QPL bounce buffers in GQI-QPL format.

In the GQI-QPL queue format, the driver needs to allocate a fixed-size memory region, the size specified by the vNIC device, for RX/TX and register this memory as a bounce buffer with the vNIC device when a queue is created. The number of pages in the bounce buffer is limited, and the pages need to be made available to the vNIC by copying the RX data out to prevent head-of-line blocking. Therefore, we cannot pass the XSK buffer pool to the vNIC.

The number of copies on the RX path from the bounce buffer to the XSK buffer is 2 for AF_XDP copy mode (bounce buffer -> allocated page frag -> XSK buffer) and 1 for AF_XDP zero-copy mode (bounce buffer -> XSK buffer); a sketch of the latter follows this entry.

This patch contains the following changes:
1) Enable and disable XSK buffer pool
2) Copy XDP packets from QPL bounce buffers to XSK buffer on rx
3) Copy XDP packets from XSK buffer to QPL bounce buffers and ring the doorbell as part of XDP TX napi poll
4) ndo_xsk_wakeup callback support

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
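A hedged sketch of the single RX copy in zero-copy mode, i.e. change (2) above (the helper name and the xsk_pool field are assumptions; `xsk_buff_alloc()` is the standard XSK pool API):

    /* Sketch: zero-copy RX still needs one copy, because QPL pages
     * must stay registered with the vNIC. */
    static struct xdp_buff *gve_rx_qpl_to_xsk(struct gve_rx_ring *rx,
                                              const void *qpl_va,
                                              unsigned int len)
    {
            struct xdp_buff *xdp = xsk_buff_alloc(rx->xsk_pool);

            if (!xdp)
                    return NULL;
            memcpy(xdp->data, qpl_va, len); /* bounce buffer -> XSK buffer */
            xdp->data_end = xdp->data + len;
            return xdp;
    }
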
2023-03-17  gve: Add XDP REDIRECT support for GQI-QPL format  (Praveen Kaligineedi; 5 files, -17/+138)
This patch contains the following changes:
1) Support for the XDP REDIRECT action on rx
2) ndo_xdp_xmit callback support

In the GQI-QPL queue format, the driver needs to allocate a fixed-size memory region, the size specified by the vNIC device, for RX/TX and register this memory as a bounce buffer with the vNIC device when a queue is created. The number of pages in the bounce buffer is limited, and the pages need to be made available to the vNIC by copying the RX data out to prevent head-of-line blocking. XDP_REDIRECT packets are therefore immediately copied to a newly allocated page.

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
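A hedged sketch of the REDIRECT handling (the copy-out helper is an assumption; `xdp_do_redirect()` is the standard kernel XDP API):

    /* Sketch: the QPL page cannot leave the driver, so redirect a
     * freshly allocated private copy instead. */
    static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
                                struct xdp_buff *orig, struct bpf_prog *prog)
    {
            struct xdp_buff new_xdp;
            int err;

            err = gve_rx_copy_to_new_page(rx, orig, &new_xdp); /* assumed helper */
            if (err)
                    return err;
            return xdp_do_redirect(dev, &new_xdp, prog);
    }

The matching `xdp_do_flush()` would then be issued once at the end of the napi poll, per the usual XDP contract.
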
2023-03-17  gve: Add XDP DROP and TX support for GQI-QPL format  (Praveen Kaligineedi; 5 files, -39/+687)
Add support for the XDP PASS, DROP and TX actions. This patch contains the following changes:
1) Support installing/uninstalling an XDP program
2) Add dedicated XDP TX queues
3) Add support for the XDP DROP action
4) Add support for the XDP TX action

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
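A hedged sketch of the verdict handling these actions imply (the GVE_XDP_* return codes and the TX helper are assumptions; `bpf_prog_run_xdp()` is the standard entry point):

    /* Sketch: run the attached program and act on its verdict. */
    static int gve_xdp_verdict(struct gve_rx_ring *rx, struct bpf_prog *prog,
                               struct xdp_buff *xdp)
    {
            u32 act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_PASS:
                    return GVE_XDP_PASS;        /* build an skb as usual */
            case XDP_TX:
                    return gve_xdp_tx(rx, xdp); /* queue on a dedicated XDP TX ring */
            case XDP_DROP:
            default:
                    return GVE_XDP_DROP;        /* recycle the buffer */
            }
    }
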
2023-03-17  gve: Changes to add new TX queues  (Praveen Kaligineedi; 6 files, -50/+104)
Changes to enable adding and removing TX queues without calling gve_close() and gve_open().

Made the following changes:
1) The priv->tx, priv->rx and priv->qpls arrays are allocated based on the max tx queues and max rx queues
2) Changed gve_adminq_create_tx_queues(), gve_adminq_destroy_tx_queues(), gve_tx_alloc_rings() and gve_tx_free_rings() to add/remove a subset of TX queues rather than all the TX queues

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-17  gve: XDP support GQI-QPL: helper function changes  (Praveen Kaligineedi; 8 files, -44/+70)
This patch adds/modifies helper functions needed to add XDP support.
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-06  gve: Fix gve interrupt names  (Praveen Kaligineedi; 1 file, -5/+4)
IRQs are currently requested before the netdevice is registered and a proper name is assigned to the device. Change the interrupt name so it no longer embeds the unresolved netdev format string.

Interrupt name before change: eth%d-ntfy-block.<blk_id>
Interrupt name after change:  gve-ntfy-blk<blk_id>@pci:<pci_name>

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
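A sketch of building the new name (the buffer and index variables are assumptions; `pci_name()` and `request_irq()` are the standard helpers):

    /* Sketch: name the vector after the driver and PCI device, both of
     * which are known before register_netdev(), unlike "eth%d". */
    snprintf(block->name, sizeof(block->name), "gve-ntfy-blk%d@pci:%s",
             i, pci_name(priv->pdev));
    err = request_irq(priv->msix_vectors[msix_idx].vector, gve_intr,
                      0, block->name, block);
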
2022-11-21  gve: Handle alternate miss completions  (Jeroen de Borst; 3 files, -8/+21)
The virtual NIC has two ways of indicating a miss-path completion; this change handles the alternate one.
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-11-21  gve: Adding a new AdminQ command to verify driver  (Jeroen de Borst; 4 files, -1/+122)
Check whether the driver is compatible with the device presented.
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-11-07  gve: Fix error return code in gve_prefill_rx_pages()  (Yang Yingliang; 1 file, -1/+3)
If alloc_page() fails in gve_prefill_rx_pages(), it should return an error code in the error path.
Fixes: 82fd151d38d9 ("gve: Reduce alloc and copy costs in the GQ rx path")
Cc: Jeroen de Borst <jeroendb@google.com>
Cc: Catherine Sullivan <csully@google.com>
Cc: Shailend Chand <shailend@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
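The shape of this class of fix, sketched (the unwind label and surrounding variables are assumptions):

    /* Without the assignment, the function would return the stale
     * (often 0) err value despite the failed allocation. */
    page = alloc_page(GFP_KERNEL);
    if (!page) {
            err = -ENOMEM;          /* the missing error code */
            goto free_prefilled;    /* assumed unwind label */
    }
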
2022-11-02  gve: Reduce alloc and copy costs in the GQ rx path  (Shailend Chand; 6 files, -266/+338)
Previously, even if just one of the many fragments of a 9k packet required a copy, we'd copy the whole packet into a freshly-allocated 9k-sized linear SKB, and this led to performance issues. By having a pool of pages to copy into, each fragment can be independently handled, leading to a reduced incidence of allocation and copy.
Signed-off-by: Shailend Chand <shailend@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-29  net: Remove the obsolete u64_stats_fetch_*_irq() users (drivers).  (Thomas Gleixner; 2 files, -14/+14)
Now that the 32bit UP oddity is gone and 32bit always uses a sequence count, there is no need for the fetch_irq() variants anymore. Convert to the regular interface.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
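The regular reader pattern after the conversion (standard u64_stats usage; the gve ring field names here are assumptions based on the driver's conventions):

    /* Sketch: retry-loop read of per-ring 64-bit counters. */
    static void gve_read_rx_stats(struct gve_rx_ring *rx,
                                  u64 *packets, u64 *bytes)
    {
            unsigned int start;

            do {
                    start = u64_stats_fetch_begin(&rx->statss);
                    *packets = rx->rpackets;
                    *bytes = rx->rbytes;
            } while (u64_stats_fetch_retry(&rx->statss, start));
    }
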
2022-09-29  net: drop the weight argument from netif_napi_add  (Jakub Kicinski; 1 file, -2/+1)
We tell driver developers to always pass NAPI_POLL_WEIGHT as the weight to netif_napi_add(). This may be confusing to newcomers, so drop the weight argument; those who really need to tweak the weight can use netif_napi_add_weight().
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for CAN
Link: https://lore.kernel.org/r/20220927132753.750069-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
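Before and after, as it would look at a gve call site (the poll function and napi struct names are assumptions):

    /* old: explicit default weight */
    netif_napi_add(dev, &block->napi, gve_napi_poll, NAPI_POLL_WEIGHT);

    /* new: the default weight is implied */
    netif_napi_add(dev, &block->napi, gve_napi_poll);
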
2022-09-20  gve: Fix GFP flags when allocing pages  (Shailend Chand; 1 file, -1/+1)
Use GFP_ATOMIC when allocating pages out of the hotpath, and continue to use GFP_KERNEL when allocating pages during setup. GFP_KERNEL allows blocking, which lets the allocation succeed more often in a low-memory environment, but in the hotpath we do not want to allow the allocation to block.
Fixes: 9b8dd5e5ea48b ("gve: DQO: Add RX path")
Signed-off-by: Shailend Chand <shailend@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://lore.kernel.org/r/20220913000901.959546-1-jeroendb@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
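A sketch of the context-dependent flag choice (the function shape is an assumption; the flag semantics are standard):

    /* Sketch: napi poll runs in atomic context, so the hotpath refill
     * must not sleep; setup-time allocation may block and retry. */
    static struct page *gve_alloc_rx_page(bool in_hotpath)
    {
            return alloc_page(in_hotpath ? GFP_ATOMIC : GFP_KERNEL);
    }
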
2022-08-29  net: Use u64_stats_fetch_begin_irq() for stats fetch.  (Sebastian Andrzej Siewior; 2 files, -14/+14)
On 32bit-UP, u64_stats_fetch_begin() disables only preemption. If the reader is in preemptible context and the writer side (u64_stats_update_begin*()) runs in an interrupt context (IRQ or softirq), then the writer can update the stats during the read operation, and this update remains undetected. Use u64_stats_fetch_begin_irq() to ensure the stats fetch on 32bit-UP is not interrupted by a writer. 32bit-SMP remains unaffected by this change.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Catherine Sullivan <csully@google.com>
Cc: David Awogbemila <awogbemila@google.com>
Cc: Dimitris Michailidis <dmichail@fungible.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jeroen de Borst <jeroendb@google.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Simon Horman <simon.horman@corigine.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-wireless@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: oss-drivers@corigine.com
Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-07-02  net: add skb_[inner_]tcp_all_headers helpers  (Eric Dumazet; 1 file, -2/+2)
Most drivers use "skb_transport_offset(skb) + tcp_hdrlen(skb)" to compute the headers length for a TCP packet, but others use more convoluted (but equivalent) ways. Add skb_tcp_all_headers() and skb_inner_tcp_all_headers() helpers to harmonize this a bit.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
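The helpers are thin wrappers over the expression quoted above; their likely definitions, sketched from that description:

    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }

    static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
    }
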
2022-07-01  google/gve: fix repeated words in comments  (Jilin Yuan; 1 file, -1/+1)
Delete the redundant word 'a'.
Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
Link: https://lore.kernel.org/r/20220629130013.3273-1-yuanjilin@cdjrlc.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-17  gve: Fix spelling mistake "droping" -> "dropping"  (Colin Ian King; 1 file, -1/+1)
There is a spelling mistake in a netdev_warn warning. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Link: https://lore.kernel.org/r/20220315222615.2960504-1-colin.i.king@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-16  gve: enhance no queue page list detection  (Haiyue Wang; 1 file, -4/+2)
Commit a5886ef4f4bf ("gve: Introduce per netdev `enum gve_queue_format`") introduced three queue format types, of which only GVE_GQI_QPL_FORMAT uses a queue page list. The driver should therefore use the queue page list count to detect an empty queue page list, correcting the design logic. Keying off 'queue_format == GVE_GQI_RDA_FORMAT' may lead to a zero-sized memory allocation request, for example when the queue format is GVE_DQO_RDA_FORMAT. The kernel memory subsystem returns ZERO_SIZE_PTR for such a request, which is not a NULL address, so the driver can still run successfully. The code also checks the queue page list count before accessing the allocated memory, so a zero-count allocation will not lead to an access fault.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Bailey Forrest <bcf@google.com>
Link: https://lore.kernel.org/r/20220215051751.260866-1-haiyue.wang@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-09  gve: Recording rx queue before sending to napi  (Tao Liu; 1 file, -0/+1)
Record the rx queue on the skb before handing it to napi. Not doing so caused a significant performance degradation when using generic XDP with multiple queues.
Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support")
Signed-off-by: Tao Liu <xliutaox@google.com>
Link: https://lore.kernel.org/r/20220207175901.2486596-1-jeroendb@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
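The fix is essentially the standard one-liner below (the ring field name is an assumption); generic XDP and receive steering rely on the recorded queue mapping:

    /* Tag the skb with its receive queue before napi_gro_receive(). */
    skb_record_rx_queue(skb, rx->q_num);
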
2022-01-28  gve: fix the wrong AdminQ buffer queue index check  (Haiyue Wang; 1 file, -1/+1)
The 'tail' and 'head' are 'unsigned int' free-running counters. When 'head' has wrapped around, the comparison 'int i (= tail) < u32 head' is false because the signed operand is converted to unsigned, so the loop body never runs; only the '- loop 0: idx = 63' result is shown in the demo below. The check needs to compare 'int' values, which handles the wraparound correctly:

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t u32;

    int main(void)
    {
            u32 tail, head;
            int stail, shead;
            int i, loop;

            tail = 0xffffffff;
            head = 0x00000000;
            /* mixed signed/unsigned comparison: i is converted to u32,
             * so 0xffffffff < 0 is false and nothing is printed */
            for (i = tail, loop = 0; i < head; i++) {
                    unsigned int idx = i & 63;
                    printf("+ loop %d: idx = %u\n", loop++, idx);
            }

            stail = tail;
            shead = head;
            /* signed comparison: -1 < 0 holds, the wrapped entry is processed */
            for (i = stail, loop = 0; i < shead; i++) {
                    unsigned int idx = i & 63;
                    printf("- loop %d: idx = %u\n", loop++, idx);
            }
            return 0;
    }

Fixes: 5cdad90de62c ("gve: Batch AQ commands for creating and destroying queues.")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-27  gve: Fix GFP flags when allocing pages  (Catherine Sullivan; 4 files, -6/+7)
Use GFP_ATOMIC when allocating pages out of the hotpath, and continue to use GFP_KERNEL when allocating pages during setup. GFP_KERNEL allows blocking, which lets the allocation succeed more often in a low-memory environment, but in the hotpath we do not want to allow the allocation to block.
Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support")
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Link: https://lore.kernel.org/r/20220126003843.3584521-1-awogbemila@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-16  gve: Add tx|rx-coalesce-usec for DQO  (Tao Liu; 4 files, -11/+91)
Add ethtool support for changing rx-coalesce-usec and tx-coalesce-usec when using the DQO queue format.
Signed-off-by: Tao Liu <xliutaox@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Add consumed counts to ethtool stats  (Jordan Kim; 1 file, -9/+12)
Being able to see how many descriptors are in use is helpful when diagnosing certain issues.
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: Jordan Kim <jrkim@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Implement suspend/resume/shutdown  (Catherine Sullivan; 2 files, -0/+60)
Add support for suspend, resume and shutdown.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Add optional metadata descriptor type GVE_TXD_MTD  (Willem de Bruijn; 3 files, -20/+74)
Allow drivers to pass metadata along with packet data to the device. Introduce a new metadata descriptor type:

* GVE_TXD_MTD

This descriptor is optional. If present, it immediately follows the packet descriptor and precedes the segment descriptor. This descriptor may be repeated: multiple metadata descriptors may follow. There are no immediate uses for this; it is for future proofing. At present devices allow only 1 MTD descriptor.

The lower four bits of the type_flags field encode GVE_TXD_MTD. The upper four bits of the type_flags field encode a *sub*type. Introduce one such metadata descriptor subtype:

* GVE_MTD_SUBTYPE_PATH

This shares path information with the device for network failure discovery and robust response: Linux derives the ipv6 flowlabel and ECMP multipath from sk->sk_txhash, and updates this field on error with sk_rethink_txhash. Allow the host stack to do the same. Pass the tx_hash value if set. Also communicate whether the path hash is set, or more exactly, what its type is. Define two common types:

GVE_MTD_PATH_HASH_NONE
GVE_MTD_PATH_HASH_L4

Concrete examples of error conditions that are resolved are mentioned in the commits that add sk_rethink_txhash calls, such as commit 7788174e8726 ("tcp: change IPv6 flow-label upon receiving spurious retransmission"). Experimental results mirror what the theory suggests: where the IPv6 FlowLabel is included in path selection (e.g., LAG/ECMP), flowlabel rotation on TCP timeout avoids the vast majority of TCP disconnects that would otherwise have occurred during link failures in long-haul backbones, when an alternative path is available.

Rotation can be applied to various bad-connection signals, such as timeouts and spurious retransmissions. In aggregate, such flow-level signals can help locate network issues. Define initial common states:

GVE_MTD_PATH_STATE_DEFAULT
GVE_MTD_PATH_STATE_TIMEOUT
GVE_MTD_PATH_STATE_CONGESTION
GVE_MTD_PATH_STATE_RETRANSMIT

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
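A hedged sketch of the descriptor layout implied by the description above (field widths beyond type_flags, path_state and path_hash are assumptions):

    /* Sketch: a TX descriptor carrying path metadata.
     * type_flags: low 4 bits = GVE_TXD_MTD, high 4 bits = subtype. */
    struct gve_tx_mtd_desc {
            u8      type_flags;   /* GVE_TXD_MTD | (GVE_MTD_SUBTYPE_PATH << 4) */
            u8      path_state;   /* hash type and GVE_MTD_PATH_STATE_* */
            __be16  reserved0;
            __be32  path_hash;    /* sk->sk_txhash when the hash type is L4 */
            __be64  reserved1;
    };
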
2021-12-16  gve: remove memory barrier around seqno  (Catherine Sullivan; 1 file, -2/+0)
No longer needed after we introduced the barrier in gve_napi_poll.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Update gve_free_queue_page_list signature  (Catherine Sullivan; 1 file, -2/+1)
The id field should be a u32, not a signed int.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Move the irq db indexes out of the ntfy block struct  (Catherine Sullivan; 4 files, -17/+36)
Giving the device access to other kernel structs is not ideal. Move the indexes into their own array and just keep pointers to them in the ntfy block struct.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-16  gve: Correct order of processing device options  (Jeroen de Borst; 1 file, -4/+4)
The legacy raw addressing device option was processed before the new RDA queue format option. This caused the supported-features mask, which is provided only on the RDA queue format option, not to be set, disabling jumbo-frame support when using raw addressing.
Fixes: 255489f5b33c ("gve: Add a jumbo-frame device option")
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file, -0/+3)
No conflicts.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-08  gve: fix for null pointer dereference.  (Ameer Hamza; 1 file, -0/+3)
Avoid passing a NULL skb to the __skb_put() call if napi_alloc_skb() returns NULL.
Fixes: 37149e9374bf ("gve: Implement packet continuation for RX.")
Signed-off-by: Ameer Hamza <amhamza.mgc@gmail.com>
Link: https://lore.kernel.org/r/20211205183810.8299-1-amhamza.mgc@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-22  ethtool: extend ringparam setting/getting API with rx_buf_len  (Hao Chen; 1 file, -1/+3)
Add two new parameters, kernel_ringparam and extack, to .get_ringparam and .set_ringparam to extend more ring params through netlink.
Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-10  gve: fix unmatched u64_stats_update_end()  (Dan Carpenter; 1 file, -1/+2)
The u64_stats_update_end() call is supposed to be inside the curly braces so it pairs with the u64_stats_update_begin().
Fixes: 37149e9374bf ("gve: Implement packet continuation for RX.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
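The corrected pattern, sketched (counter names assumed): begin and end must bracket the update on the same path, or the writer-side sequence count is left unbalanced:

    if (skb) {
            u64_stats_update_begin(&rx->statss);
            rx->rpackets++;
            rx->rbytes += skb->len;
            u64_stats_update_end(&rx->statss);  /* now inside the branch */
    }
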
2021-11-09  gve: Fix off by one in gve_tx_timeout()  (Dan Carpenter; 1 file, -1/+1)
The priv->ntfy_blocks[] array, allocated in gve_alloc_notify_blocks(), has "priv->num_ntfy_blks" elements, so the bounds check needs to use >= rather than > to prevent an off-by-one access.
Fixes: 87a7f321bb6a ("gve: Recover from queue stall due to missed IRQ")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
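The shape of the fix, sketched (the index variable name is an assumption):

    /* An index equal to num_ntfy_blks is already one past the end. */
    if (ntfy_idx >= priv->num_ntfy_blks)  /* was: > */
            return;
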
2021-10-25  gve: Add a jumbo-frame device option.  (Shailend Chand; 2 files, -4/+68)
A widely deployed driver has a bug that will cause it not to load when a max_mtu > 2048 is present in the device descriptor. To avoid this bug while still enabling jumbo frames, we present a lower max_mtu in the device descriptor and pass the actual max_mtu in a separate device option. The driver supports 2 different queue formats. To enable features on one queue format but not the other, a supported_features mask was added to the device options in the device descriptor.
Signed-off-by: Shailend Chand <shailend@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-25  gve: Implement packet continuation for RX.  (David Awogbemila; 9 files, -126/+292)
This enables the driver to receive RX packets spread across multiple buffers: for a given multi-fragment packet, the "packet continuation" bit is set on all descriptors except the last one. These descriptors' payloads are combined into a single SKB before the SKB is handed to the networking stack.

This change adds a "packet buffer size" notion for RX queues. The CreateRxQueue AdminQueue command sent to the device now includes the packet_buffer_size. We opt for a packet_buffer_size of PAGE_SIZE / 2 to give the driver the opportunity to flip pages where we can instead of copying.

Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-25  gve: Add RX context.  (David Awogbemila; 2 files, -37/+44)
This refactor moves the skb_head and skb_tail fields into a new gve_rx_ctx struct, which will contain information about the current packet being processed. This is in preparation for multi-descriptor RX packets.
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-12  gve: Track RX buffer allocation failures  (Catherine Sullivan; 1 file, -1/+6)
The rx_buf_alloc_fail counter wasn't getting updated.
Fixes: 433e274b8f7b0 ("gve: Add stats for gve.")
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-12  gve: Allow pageflips on larger pages  (Jordan Kim; 1 file, -1/+1)
Half pages are only used for packets that are small enough to fit in them. This change allows that optimization to also apply on systems with pages larger than 4 KB.
Fixes: 02b0e0c18ba75 ("gve: Rx Buffer Recycling")
Signed-off-by: Jordan Kim <jrkim@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-12  gve: Add netif_set_xps_queue call  (Catherine Sullivan; 1 file, -0/+4)
Configure XPS when adding tx queues to the notification blocks.
Fixes: dbdaa67540512 ("gve: Move some static functions to a common file")
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>