2024-05-07  gtp: move debugging to skbuff build helper function  (Pablo Neira Ayuso; 1 file, -4/+6)
Move debugging to the routine to build GTP packets in preparation for supporting IPv4-in-IPv6-GTP and IPv6-in-IPv4-GTP. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-05-07  gtp: pass up link local traffic to userspace socket  (Pablo Neira Ayuso; 1 file, -0/+4)
According to TS 29.061, it is possible to see IPv6 link-local traffic in the GTP tunnel, see 11.2.1.3.2 IPv6 Stateless Address Autoconfiguration (IPv6 SLAAC). Pass up these packets to the userspace daemon to handle them as control GTP traffic. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-05-07  gtp: use IPv6 address /64 prefix for UE/MS  (Pablo Neira Ayuso; 1 file, -10/+42)
Harald Welte reports that according to 3GPP TS 29.060:

  PDN Connection: the association between a MS represented by one IPv4 address and/or one IPv6 prefix and a PDN represented by an APN.

This clearly states that IPv4 is a single address while IPv6 is a single prefix. Then, 3GPP TS 29.061, Section 11.2.1.3:

  For APNs that are configured for IPv6 address allocation, the GGSN/P-GW shall only use the Prefix part of the IPv6 address for forwarding of mobile terminated IP packets. The size of the prefix shall be according to the maximum prefix length for a global IPv6 address as specified in the IPv6 Addressing Architecture, see RFC 4291 [82].

RFC 4291 section 2.5.4 states:

  All Global Unicast addresses other than those that start with binary 000 have a 64-bit interface ID field (i.e., n + m = 64) ...

3GPP TS 29.061 Section 11.2.1.3.2a:

  In the procedure in the cases of using GTP-based S5/S8, P-GW acts as an access router, and allocates to a UE a globally unique /64 IPv6 prefix if the PLMN allocates the prefix.

Therefore, compare only the /64 prefix of the IPv6 address, since the MS/UE is not represented by a single address as in the IPv4 case. Reject the IPv6 address with EADDRNOTAVAIL if the lower 64 bits of the IPv6 address coming from the control plane are set. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
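To illustrate the prefix-only comparison described above, here is a minimal standalone C sketch (not the kernel code; ipv6_prefix64_equal and check_ms_addr6 are hypothetical helper names): compare only the upper 64 bits of two IPv6 addresses, and reject an address whose interface-identifier bits are set.

```c
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <string.h>

/* Compare only the /64 prefix (upper 64 bits) of two IPv6 addresses. */
static bool ipv6_prefix64_equal(const struct in6_addr *a,
				const struct in6_addr *b)
{
	return memcmp(a->s6_addr, b->s6_addr, 8) == 0;
}

/* Reject an MS/UE address whose lower 64 bits are set, as described above. */
static int check_ms_addr6(const struct in6_addr *ms)
{
	static const unsigned char zero[8];

	if (memcmp(&ms->s6_addr[8], zero, 8) != 0)
		return -EADDRNOTAVAIL;
	return 0;
}
```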
2024-05-07  gtp: add IPv6 support  (Pablo Neira Ayuso; 3 files, -27/+374)
Add new iflink attributes to configure the in-kernel UDP listener socket address: IFLA_GTP_LOCAL and IFLA_GTP_LOCAL6. If neither attribute is specified, the default is still IPv4 INADDR_ANY for backward compatibility. Add new attributes to set up the family and IPv6 addresses of GTP tunnels: GTPA_FAMILY, GTPA_PEER_ADDR6 and GTPA_MS_ADDR6. If no GTPA_FAMILY is specified, AF_INET is assumed for backward compatibility. The IPV6_ADDRFORM setsockopt allows a socket to be downgraded from IPv6 to IPv4 after it is bound. The assumption is that the socket listener attached to the gtp device must be either IPv4 or IPv6; therefore, the GTP socket listener does not accept an IPv4-mapped-IPv6 listener. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-05-07  gtp: prepare for IPv6 support  (Pablo Neira Ayuso; 1 file, -55/+96)
Use a union to prepare for IPv6 support. Add and use GTP_{IPV4,TH}_MAXLEN. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-05-07  gtp: properly parse extension headers  (Pablo Neira Ayuso; 2 files, -0/+46)
Currently GTP packets are dropped if the next extension header field is set to a non-zero value, but these are valid GTP packets.

TS 29.281 provides a longer header format, which is defined as struct gtp1_header_long. Such a long header format is used if any of the S, PN or E flags is set. This long header is 4 bytes longer than struct gtp1_header, plus variable-length (optional) extension headers. The next extension header field is zero if no extension header is provided.

Each extension header is composed of a length field (1 byte) giving the total number of 4-byte words in the extension header (including the length and next-type bytes themselves), a variable-length payload, and a next extension header type field (1 byte). The extension header size, including its payload, is aligned to 4 bytes.

A GTP packet might come with a chain of extension headers, which makes it slightly cumbersome to parse: the next extension header field comes at the end of each extension header, and the parser has to check whether this field becomes zero in order to stop. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
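A hedged userspace sketch of the chain walk described above (illustrative only, not the kernel parser): each extension header carries its length in 4-byte words, and the next-type byte sits in its last position, so the walk stops when that byte is zero.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Walk a chain of GTP-U extension headers starting at buf; return the
 * number of bytes the chain occupies, or -1 if the input is malformed. */
static ssize_t gtp_ext_hdrs_len(const uint8_t *buf, size_t len)
{
	size_t off = 0;

	for (;;) {
		size_t ext_len;

		if (off + 1 > len)
			return -1;
		ext_len = (size_t)buf[off] * 4;	/* length is in 4-byte words */
		if (ext_len == 0 || off + ext_len > len)
			return -1;
		/* The last byte of each extension header is the next type. */
		if (buf[off + ext_len - 1] == 0)
			return (ssize_t)(off + ext_len);
		off += ext_len;
	}
}
```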
2024-05-07  gtp: remove useless initialization  (Pablo Neira Ayuso; 1 file, -2/+2)
Update b20dc3c68458 ("gtp: Allow to create GTP device without FDs") to remove a useless initialization to NULL; the sockets are initialized to non-NULL values just a few lines of code later. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-05-06  Merge branch 'add-tcp-fraglist-gro-support'  (Paolo Abeni; 6 files, -69/+325)
Felix Fietkau says:

====================
Add TCP fraglist GRO support

When forwarding TCP after GRO, software segmentation is very expensive, especially when the checksum needs to be recalculated. One case where that's currently unavoidable is when routing packets over PPPoE. Performance improves significantly when using fraglist GRO implemented in the same way as for UDP.

When NETIF_F_GRO_FRAGLIST is enabled, perform a lookup for an established socket in the same netns as the receiving device. While this may not cover all relevant use cases in multi-netns configurations, it should be good enough for most configurations that need this.

Here's a measurement of running 2 TCP streams through a MediaTek MT7622 device (2-core Cortex-A53), which runs NAT with flow offload enabled from one ethernet port to PPPoE on another ethernet port + cake qdisc set to 1Gbps.

rx-gro-list off: 630 Mbit/s, CPU 35% idle
rx-gro-list on: 770 Mbit/s, CPU 40% idle

Changes since v4:
- add likely() to prefer the non-fraglist path in check

Changes since v3:
- optimize __tcpv4_gso_segment_csum
- add unlikely()
- reorder dev_net/skb_gro_network_header calls after NETIF_F_GRO_FRAGLIST check
- add support for ipv6 nat
- drop redundant pskb_may_pull check

Changes since v2:
- create tcp_gro_header_pull helper function to pull tcp header only once
- optimize __tcpv4_gso_segment_list_csum, drop obsolete flags check

Changes since v1:
- revert bogus tcp flags overwrite on segmentation
- fix kbuild issue with !CONFIG_IPV6
- only perform socket lookup for the first skb in the GRO train

Changes since RFC:
- split up patches
- handle TCP flags mutations
====================

Link: https://lore.kernel.org/r/20240502084450.44009-1-nbd@nbd.name Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: add heuristic for enabling TCP fraglist GRO  (Felix Fietkau; 2 files, -0/+67)
When forwarding TCP after GRO, software segmentation is very expensive, especially when the checksum needs to be recalculated. One case where that's currently unavoidable is when routing packets over PPPoE. Performance improves significantly when using fraglist GRO implemented in the same way as for UDP. When NETIF_F_GRO_FRAGLIST is enabled, perform a lookup for an established socket in the same netns as the receiving device. While this may not cover all relevant use cases in multi-netns configurations, it should be good enough for most configurations that need this. Here's a measurement of running 2 TCP streams through a MediaTek MT7622 device (2-core Cortex-A53), which runs NAT with flow offload enabled from one ethernet port to PPPoE on another ethernet port + cake qdisc set to 1Gbps. rx-gro-list off: 630 Mbit/s, CPU 35% idle rx-gro-list on: 770 Mbit/s, CPU 40% idle Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
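The decision logic can be summarised with a small conceptual sketch (plain C, with illustrative placeholder fields rather than the real kernel structures): fraglist GRO is only considered when the device enables NETIF_F_GRO_FRAGLIST, and it is chosen when no local established socket is found, i.e. when the flow is most likely being forwarded.

```c
#include <stdbool.h>

struct gro_flow_ctx {
	bool dev_has_gro_fraglist;	/* NETIF_F_GRO_FRAGLIST set on rx dev */
	bool local_socket_found;	/* established socket in rx dev netns */
};

static bool use_fraglist_gro(const struct gro_flow_ctx *ctx)
{
	if (!ctx->dev_has_gro_fraglist)
		return false;
	/* No local socket: the flow is likely forwarded, so keep the
	 * original frames and chain them via fraglist. */
	return !ctx->local_socket_found;
}
```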
2024-05-06  net: create tcp_gro_header_pull helper function  (Felix Fietkau; 3 files, -27/+50)
Pull the code out of tcp_gro_receive in order to access the tcp header from tcp4/6_gro_receive. Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: create tcp_gro_lookup helper function  (Felix Fietkau; 2 files, -16/+26)
This pulls the flow port matching out of tcp_gro_receive, so that it can be reused for the next change, which adds the TCP fraglist GRO heuristic. Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: add code for TCP fraglist GRO  (Felix Fietkau; 2 files, -0/+30)
This implements fraglist GRO similar to how it's handled in UDP, however no functional changes are added yet. The next change adds a heuristic for using fraglist GRO instead of regular GRO. Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: add support for segmenting TCP fraglist GSO packets  (Felix Fietkau; 2 files, -0/+125)
Preparation for adding TCP fraglist GRO support. It expects packets to be combined in a similar way as UDP fraglist GSO packets. For IPv4 packets, NAT is handled in the same way as UDP fraglist GSO. Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: move skb_gro_receive_list from udp to core  (Felix Fietkau; 3 files, -27/+28)
This helper function will be used for TCP fraglist GRO support Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Felix Fietkau <nbd@nbd.name> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-05-06  net: microchip: lan743x: Reduce PTP timeout on HW failure  (Rengarajan S; 2 files, -1/+2)
The PTP_CMD_CTL is a self-clearing register which controls the PTP clock values. In the current implementation the driver waits for a duration of 20 sec in case of HW failure to clear the PTP_CMD_CTL register bit. This 20 sec timeout is far too long for recognizing a HW failure, as the bit is typically cleared within one clock (<16 ns). Hence reducing the timeout to 1 sec is sufficient to conclude whether a HW failure has occurred. The usleep_range() call sleeps somewhere between 1 msec and 20 msec for each iteration. By setting PTP_CMD_CTL_TIMEOUT_CNT to 50, the max timeout is extended to 1 sec. Signed-off-by: Rengarajan S <rengarajan.s@microchip.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240502050300.38689-1-rengarajan.s@microchip.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
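A hedged sketch of the resulting polling pattern (register name, accessor and bit are used illustratively, following the driver's style rather than quoting it): up to 50 iterations of a 1-20 ms sleep bounds the wait at roughly 1 second.

```c
#define PTP_CMD_CTL_TIMEOUT_CNT	50	/* 50 * up to 20 ms ~= 1 s max wait */

static int lan743x_wait_cmd_clear(struct lan743x_adapter *adapter, u32 bit)
{
	int i;

	for (i = 0; i < PTP_CMD_CTL_TIMEOUT_CNT; i++) {
		if (!(lan743x_csr_read(adapter, PTP_CMD_CTL) & bit))
			return 0;		/* HW cleared the bit */
		usleep_range(1000, 20000);	/* 1 ms .. 20 ms per iteration */
	}
	return -EIO;				/* treat as HW failure */
}
```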
2024-05-05  Merge branch 'gve-queue-api'  (David S. Miller; 11 files, -406/+489)
Shailend Chand says:

====================
gve: Implement queue api

Following the discussion on https://patchwork.kernel.org/project/linux-media/patch/20240305020153.2787423-2-almasrymina@google.com/, the queue api defined by Mina is implemented for gve.

The first patch is just Mina's introduction of the api. The rest of the patches make surgical changes in gve to enable it to work correctly with only a subset of queues present (thus far it had assumed that either all queues are up or all are down). The final patch has the api implementation.

Changes since v1: clang warning fixes, kdoc warning fix, and addressed review comments.
====================

Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Alloc and free QPLs with the rings  (Shailend Chand; 7 files, -331/+171)
Every tx and rx ring has its own queue-page-list (QPL) that serves as the bounce buffer. Previously we were allocating QPLs for all queues before the queues themselves were allocated and later associating a QPL with a queue. This is avoidable complexity: it is much more natural for each queue to allocate and free its own QPL. Moreover, the advent of new queue-manipulating ndo hooks makes it hard to keep things as is: we would need to transfer a QPL from an old queue to a new queue, and that is unpleasant. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Account for stopped queues when reading NIC stats  (Shailend Chand; 1 file, -6/+35)
We now account for the fact that the NIC might send us stats for a subset of queues. Without this change, gve_get_ethtool_stats might make an invalid access on the priv->stats_report->stats array. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Reset Rx ring state in the ring-stop funcs  (Shailend Chand; 2 files, -30/+120)
This does not fix any existing bug. In anticipation of the ndo queue api hooks that alloc/free/start/stop a single Rx queue, the already existing per-queue stop functions are being made more robust. Specifically for this use case: rx_queue_n.stop() + rx_queue_n.start() Note that this is not the use case being used in devmem tcp (the first place these new ndo hooks would be used). There the usecase is: new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free() Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Avoid rescheduling napi if on wrong cpu  (Shailend Chand; 2 files, -2/+32)
To make the implementation of per-queue ndo hooks possible, gve_turnup was changed in a previous patch to account for queues already having some unprocessed descriptors: it does a one-off napi_schedule to handle them. If conditions of consistent high traffic persist in the immediate aftermath of this, the poll routine for a queue can be "stuck" on the cpu on which the ndo hooks ran, instead of the cpu its irq has affinity with. This situation is exacerbated by the fact that the ndo hooks for all the queues are invoked on the same cpu, potentially causing all the napi poll routines to reside on the same cpu. A self-correcting mechanism in the poll method itself solves this problem. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Make gve_turnup work for nonempty queues  (Shailend Chand; 1 file, -0/+14)
gVNIC has a requirement that all queues have to be quiesced before any queue is operated on (created or destroyed). To enable the implementation of future ndo hooks that work on a single queue, we need to evolve gve_turnup to account for queues already having some unprocessed descriptors in the ring. Say rxq 4 is being stopped and started via the queue api. Due to gve's requirement of quiescence, queues 0 through 3 are not processing their rings while queue 4 is being toggled. Once they are made live, these queues need to be poked to cause them to check their rings for descriptors that were written during their brief period of quiescence. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Make gve_turn(up|down) ignore stopped queues  (Shailend Chand; 1 file, -0/+10)
Currently the queues are either all live or all dead, toggling from one state to the other via the ndo open and stop hooks. The future addition of single-queue ndo hooks changes this, and thus gve_turnup and gve_turndown should evolve to account for a state where some queues are live and some aren't. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Add adminq funcs to add/remove a single Rx queue  (Shailend Chand; 2 files, -27/+54)
This allows for implementing future ndo hooks that act on a single queue. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-05  gve: Make the GQ RX free queue funcs idempotent  (Shailend Chand; 1 file, -10/+19)
Although this is not fixing any existing double free bug, making these functions idempotent allows for a simpler implementation of future ndo hooks that act on a single queue. Tested-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
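The idempotency pattern itself is simple; here is a minimal standalone sketch (not the gve code): the free routine checks for NULL before releasing a resource and clears the pointer afterwards, so a second call becomes a no-op.

```c
#include <stdlib.h>

struct example_ring {
	void *desc;	/* descriptor memory, NULL once freed */
};

static void example_ring_free(struct example_ring *ring)
{
	if (!ring->desc)
		return;		/* already freed: nothing to do */
	free(ring->desc);
	ring->desc = NULL;
}
```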
2024-05-05  queue_api: define queue api  (Mina Almasry; 2 files, -0/+34)
This API enables the net stack to reset the queues used for devmem TCP. Signed-off-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Shailend Chand <shailend@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-04Revert "net: mirror skb frag ref/unref helpers"Mina Almasry6-44/+51
This reverts commit a580ea994fd37f4105028f5a85c38ff6508a2b25. This revert is to resolve Dragos's report of a page_pool leak here: https://lore.kernel.org/lkml/20240424165646.1625690-2-dtatulea@nvidia.com/ The reverted patch interacts very badly with commit 2cc3aeb5eccc ("skbuff: Fix a potential race while recycling page_pool packets"). The reverted commit hopes that the pp_recycle + is_pp_page variables do not change between the skb_frag_ref and skb_frag_unref operations. If such a change occurs, skb_frag_ref/unref will not operate on the same reference type. In the case of Dragos's report, the grabbed ref was a pp ref, but the unref was a page ref, because the pp_recycle setting on the skb was changed. Attempting to fix this issue on the fly is risky. Let's revert, and I hope to reland this with better understanding and testing to ensure we don't regress some edge case while streamlining skb reffing. Fixes: a580ea994fd3 ("net: mirror skb frag ref/unref helpers") Reported-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Mina Almasry <almasrymina@google.com> Link: https://lore.kernel.org/r/20240502175423.2456544-1-almasrymina@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-04  bnxt: fix bnxt_get_avail_msix() returning negative values  (David Wei; 2 files, -13/+5)
Current net-next/main does not boot for older chipsets e.g. Stratus. Sample dmesg:

[ 11.368315] bnxt_en 0000:02:00.0 (unnamed net_device) (uninitialized): Able to reserve only 0 out of 9 requested RX rings
[ 11.390181] bnxt_en 0000:02:00.0 (unnamed net_device) (uninitialized): Unable to reserve tx rings
[ 11.438780] bnxt_en 0000:02:00.0 (unnamed net_device) (uninitialized): 2nd rings reservation failed.
[ 11.487559] bnxt_en 0000:02:00.0 (unnamed net_device) (uninitialized): Not enough rings available.
[ 11.506012] bnxt_en 0000:02:00.0: probe with driver bnxt_en failed with error -12

This is caused by bnxt_get_avail_msix() returning a negative value for these chipsets not using the new resource manager i.e. !BNXT_NEW_RM. This in turn causes hwr.cp in __bnxt_reserve_rings() to be set to 0. In the current call stack, __bnxt_reserve_rings() is called from bnxt_set_dflt_rings() before bnxt_init_int_mode(). Therefore, bp->total_irqs is always 0 and for !BNXT_NEW_RM bnxt_get_avail_msix() always returns a negative number.

Historically, MSIX vectors were requested by the RoCE driver during run-time and bnxt_get_avail_msix() was used for this purpose. Today, RoCE MSIX vectors are statically allocated. bnxt_get_avail_msix() should only be called for the BNXT_NEW_RM() case to reserve the MSIX ahead of time for RoCE use. bnxt_get_avail_msix() is also simplified to handle the BNXT_NEW_RM() case only.

Fixes: d630624ebd70 ("bnxt_en: Utilize ulp client resources if RoCE is not registered") Signed-off-by: David Wei <dw@davidwei.uk> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20240502203757.3761827-1-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-04  net: no longer acquire RTNL in threaded_show()  (Eric Dumazet; 3 files, -7/+7)
dev->threaded can be read locklessly, if we add corresponding READ_ONCE()/WRITE_ONCE() annotations. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20240502173926.2010646-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
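A hedged kernel-style sketch of the pattern (the function shape is illustrative, not the exact net-sysfs code): the writer uses WRITE_ONCE() and the sysfs show routine reads with READ_ONCE(), so no RTNL is needed on the read side.

```c
static ssize_t threaded_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct net_device *netdev = to_net_dev(dev);

	/* Lockless read; updates use WRITE_ONCE(netdev->threaded, val). */
	return sysfs_emit(buf, "%d\n", READ_ONCE(netdev->threaded));
}
```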
2024-05-04  tools: ynl: add --list-ops and --list-msgs to CLI  (Jakub Kicinski; 2 files, -0/+11)
I often forget the exact naming of ops and have to look at the spec to find it. Add support for listing the operations:

  $ ./cli.py --spec .../netdev.yaml --list-ops
  dev-get  [ do, dump ]
  page-pool-get  [ do, dump ]
  page-pool-stats-get  [ do, dump ]
  queue-get  [ do, dump ]
  napi-get  [ do, dump ]
  qstats-get  [ dump ]

For completeness also support listing all ops (including notifications):

  # ./cli.py --spec .../netdev.yaml --list-msgs
  dev-get  [ dump, do ]
  dev-add-ntf  [ notify ]
  dev-del-ntf  [ notify ]
  dev-change-ntf  [ notify ]
  page-pool-get  [ dump, do ]
  page-pool-add-ntf  [ notify ]
  page-pool-del-ntf  [ notify ]
  page-pool-change-ntf  [ notify ]
  page-pool-stats-get  [ dump, do ]
  queue-get  [ dump, do ]
  napi-get  [ dump, do ]
  qstats-get  [ dump ]

Use double space after the name for slightly easier to read output. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://lore.kernel.org/r/20240502164043.2130184-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-04  Merge branch 'rtnetlink-rtnl_stats_dump-changes'  (Jakub Kicinski; 1 file, -38/+23)
Eric Dumazet says:

====================
rtnetlink: rtnl_stats_dump() changes

Getting rid of RTNL in rtnl_stats_dump() looks challenging. In the meantime, we can:

1) Avoid RTNL acquisition for the final NLMSG_DONE marker.
2) Use for_each_netdev_dump() instead of the net->dev_index_head[] hash table.
====================

Link: https://lore.kernel.org/r/20240502113748.1622637-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-04  rtnetlink: use for_each_netdev_dump() in rtnl_stats_dump()  (Eric Dumazet; 1 file, -37/+22)
Switch rtnl_stats_dump() to use for_each_netdev_dump() instead of net->dev_index_head[] hash table. This makes the code much easier to read, and fixes scalability issues. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20240502113748.1622637-3-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-04  rtnetlink: change rtnl_stats_dump() return value  (Eric Dumazet; 1 file, -1/+1)
By returning 0 (or an error) instead of skb->len, we allow NLMSG_DONE to be appended to the current skb at the end of a dump, saving a couple of recvmsg() system calls. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20240502113748.1622637-2-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
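A hedged sketch combining the two rtnetlink changes above (the fill helper and cursor layout are illustrative, not the exact rtnl_stats_dump() code): iterate with for_each_netdev_dump() and return 0 on completion so the core can append NLMSG_DONE to the same skb.

```c
/* Hypothetical per-device fill helper. */
static int example_fill_stats(struct sk_buff *skb, struct net_device *dev);

/* Caller is assumed to hold the lock required by the iterator (e.g. RTNL). */
static int example_stats_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
	unsigned long ifindex = cb->args[0];	/* resume point between calls */
	struct net *net = sock_net(skb->sk);
	struct net_device *dev;
	int err = 0;

	for_each_netdev_dump(net, dev, ifindex) {
		err = example_fill_stats(skb, dev);
		if (err == -EMSGSIZE)
			break;			/* skb full, resume here later */
		if (err < 0)
			return err;
	}
	cb->args[0] = ifindex;
	return err == -EMSGSIZE ? skb->len : 0;	/* 0 lets NLMSG_DONE piggyback */
}
```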
2024-05-03  Merge branch 'net-sysctl-sentinel'  (David S. Miller; 47 files, -119/+48)
Joel Granados says:

====================
sysctl: Remove sentinel elements from networking

What?
These commits remove the sentinel element (last empty element) from the sysctl arrays of all the files under the "net/" directory that register a sysctl array. The merging of the preparation patches [4] to mainline allows us to just remove sentinel elements without changing behavior. This is safe because the sysctl registration code (register_sysctl() and friends) uses the array size in addition to checking for a sentinel [1].

Why?
By removing the sysctl sentinel elements we avoid kernel bloat as ctl_table arrays get moved out of kernel/sysctl.c into their own respective subsystems. This move was started long ago to avoid merge conflicts; the sentinel removal bit came after Mathew Wilcox suggested it to avoid bloating the kernel by one element as arrays moved out. This patchset will reduce the overall build time size of the kernel and run time memory bloat by about ~64 bytes per declared ctl_table array (more info here [5]).

When are we done?
There are 4 patchsets (25 commits [2]) that are still outstanding to completely remove the sentinels: files under "net/" (this patchset), files under the "kernel/" dir, misc dirs (files under mm/, security/ and others) and the final set that removes the unneeded check for ->procname == NULL.

Testing:
* Ran sysctl selftests (./tools/testing/selftests/sysctl/sysctl.sh)
* Ran this through 0-day with no errors or warnings

Savings in vmlinux:
A total of 64 bytes per sentinel is saved after removal; I measured in x86_64 to give an idea of the aggregated savings. The actual savings will depend on individual kernel configuration.
* bloat-o-meter
  - The "yesall" config saves 3976 bytes (bloat-o-meter output [6])
  - A reduced config [3] saves 1263 bytes (bloat-o-meter output [7])

Savings in allocated memory:
None in this set but will occur when the superfluous allocations are removed from proc_sysctl.c. I include it here for context. The estimated savings during boot for config [3] are 6272 bytes. See [8] for how to measure it.

Comments/feedback greatly appreciated

Changes in v6:
- Rebased onto net-next/main.
- Besides re-running my cocci scripts, I ran a new find script [9]. Found 0 hits in net/
- Moved "i" variable declaration out of for() in sysctl_core_net_init
- Removed forgotten sentinel in mpls_table
- Removed CONFIG_AX25_DAMA_SLAVE guard from net/ax25/ax25_ds_timer.c. It is not needed because that file is compiled only when CONFIG_AX25_DAMA_SLAVE is set.
- When traversing smc_table, stop on ARRAY_SIZE instead of ARRAY_SIZE-1.
- Link to v5: https://lore.kernel.org/r/20240426-jag-sysctl_remset_net-v5-0-e3b12f6111a6@samsung.com

Changes in v5:
- Added net files with additional variable to my test .config so the typo can be caught next time.
- Fixed typo tabel_size -> table_size
- Link to v4: https://lore.kernel.org/r/20240425-jag-sysctl_remset_net-v4-0-9e82f985777d@samsung.com

Changes in v4:
- Keep reverse xmas tree order when introducing new variables
- Use a table_size variable to keep the value of ARRAY_SIZE
- Separated the original "networking: Remove the now superfluous sentinel elements from ctl_table array" into smaller commits to ease review
- Merged x.25 and ax.25 commits together.
- Removed any SOB from the commits that were changed
- Link to v3: https://lore.kernel.org/r/20240412-jag-sysctl_remset_net-v3-0-11187d13c211@samsung.com

Changes in v3:
- Reworked ax.25
- Added a BUILD_BUG_ON for the ax.25 commit
- Added a CONFIG_AX25_DAMA_SLAVE guard where needed
- Link to v2: https://lore.kernel.org/r/20240328-jag-sysctl_remset_net-v2-0-52c9fad9a1af@samsung.com

Changes in v2:
- Rebased to v6.9-rc1
- Removed unneeded comment from sysctl_net_ax25.c
- Link to v1: https://lore.kernel.org/r/20240314-jag-sysctl_remset_net-v1-0-aa26b44d29d9@samsung.com
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  ax.25: x.25: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 5 files, -3/+7)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels) which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link : https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/) Avoid a buffer overflow when traversing the ctl_table by ensuring that AX25_MAX_VALUES is the same as the size of ax25_param_table. This is done with a BUILD_BUG_ON where ax25_param_table is defined and a CONFIG_AX25_DAMA_SLAVE guard in the unnamed enum definition as well as in the ax25_dev_device_up and ax25_ds_set_timer functions. The overflow happened when the sentinel was removed from ax25_param_table. The sentinel's data element was changed when CONFIG_AX25_DAMA_SLAVE was undefined. This had no adverse effects as it still stopped on the sentinel's null procname but needed to be addressed once the sentinel was removed. Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  appletalk: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 1 file, -1/+0)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels) which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link : https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/) Remove sentinel from atalk_table ctl_table array. Acked-by: Kees Cook <keescook@chromium.org> # loadpin & yama Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  netfilter: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 7 files, -21/+5)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)

* Remove sentinel elements from ctl_table structs
* Remove instances where an array element is zeroed out to make it look like a sentinel. This is no longer needed and is safe after commit c899710fe7f9 ("networking: Update to register_net_sysctl_sz") added the array size to the ctl_table registration
* Remove the need for having __NF_SYSCTL_CT_LAST_SYSCTL as the sysctl array size is now in NF_SYSCTL_CT_LAST_SYSCTL
* Remove extra element in ctl_table arrays declarations

Acked-by: Kees Cook <keescook@chromium.org> # loadpin & yama Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  net: Remove ctl_table sentinel elements from several networking subsystems  (Joel Granados; 10 files, -26/+9)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)

To avoid lots of small commits, this commit brings together network changes from (as they appear in MAINTAINERS) LLC, MPTCP, NETROM NETWORK LAYER, PHONET PROTOCOL, ROSE NETWORK LAYER, RXRPC SOCKETS, SCTP PROTOCOL, SHARED MEMORY COMMUNICATIONS (SMC), TIPC NETWORK LAYER and NETWORKING [IPSEC]

* Remove sentinel element from ctl_table structs.
* Replace empty array registration with the register_net_sysctl_sz call in llc_sysctl_init
* Replace the for loop stop condition that tests for procname == NULL with one that depends on array size in sctp_sysctl_net_register
* Remove instances where an array element is zeroed out to make it look like a sentinel in xfrm_sysctl_init. This is no longer needed and is safe after commit c899710fe7f9 ("networking: Update to register_net_sysctl_sz") added the array size to the ctl_table registration
* Use a table_size variable to keep the value of ARRAY_SIZE

Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  net: sunrpc: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 4 files, -4/+0)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels) which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link : https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/) * Remove sentinel element from ctl_table structs. Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  net: rds: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 3 files, -3/+0)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels) which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link : https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/) * Remove sentinel element from ctl_table structs. Signed-off-by: Joel Granados <j.granados@samsung.com> Acked-by: Allison Henderson <allison.henderson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  net: ipv{6,4}: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 11 files, -35/+13)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)

* Remove sentinel element from ctl_table structs.
* Remove the zeroing out of an array element (to make it look like a sentinel) in sysctl_route_net_init and ipv6_route_sysctl_init. This is no longer needed and is safe after commit c899710fe7f9 ("networking: Update to register_net_sysctl_sz") added the array size to the ctl_table registration.
* Remove extra sentinel element in the declaration of devinet_vars.
* Removed the "-1" in __devinet_sysctl_register, sysctl_route_net_init, ipv6_sysctl_net_init and ipv4_sysctl_init_net that adjusted for having an extra empty element when looping over ctl_table arrays
* Replace the for loop stop condition in __addrconf_sysctl_register that tests for procname == NULL with one that depends on array size
* Removing the unprivileged user check in ipv6_route_sysctl_init is safe as it is replaced by calling ipv6_route_sysctl_table_size; introduced in commit c899710fe7f9 ("networking: Update to register_net_sysctl_sz")
* Use a table_size variable to keep the value of ARRAY_SIZE

Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-05-03  net: Remove the now superfluous sentinel elements from ctl_table array  (Joel Granados; 6 files, -26/+14)
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)

* Remove sentinel element from ctl_table structs.
* Remove the zeroing out of an array element (to make it look like a sentinel) in neigh_sysctl_register and lowpan_frags_ns_sysctl_register. This is no longer needed and is safe after commit c899710fe7f9 ("networking: Update to register_net_sysctl_sz") added the array size to the ctl_table registration.
* Replace the for loop stop condition in sysctl_core_net_init that tests for procname == NULL with one that depends on array size
* Removed the "-1" in mpls_net_init that adjusted for having an extra empty element when looping over ctl_table arrays
* Use a table_size variable to keep the value of ARRAY_SIZE

Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
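For reference, a hedged example of the post-sentinel style these commits converge on (table contents and names are made up for illustration): the ctl_table array has no empty terminator and its size is passed explicitly through register_net_sysctl_sz().

```c
static int example_value;

static struct ctl_table example_table[] = {
	{
		.procname	= "example_value",
		.data		= &example_value,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	/* no empty { } sentinel entry here */
};

static int __net_init example_sysctl_register(struct net *net)
{
	struct ctl_table_header *hdr;

	hdr = register_net_sysctl_sz(net, "net/example", example_table,
				     ARRAY_SIZE(example_table));
	return hdr ? 0 : -ENOMEM;
}
```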
2024-05-03  net_sched: sch_sfq: annotate data-races around q->perturb_period  (Eric Dumazet; 1 file, -4/+9)
sfq_perturbation() reads q->perturb_period locklessly. Add annotations to fix potential issues. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240430180015.3111398-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-03  net: dsa: mv88e6xxx: Correct check for empty list  (Simon Horman; 1 file, -2/+2)
Since commit a3c53be55c95 ("net: dsa: mv88e6xxx: Support multiple MDIO busses") mv88e6xxx_default_mdio_bus() has checked that the return value of list_first_entry() is non-NULL. This appears to be intended to guard against the list chip->mdios being empty. However, it is not the correct check as the implementation of list_first_entry is not designed to return NULL for empty lists. Instead, use list_first_entry_or_null() which does return NULL if the list is empty. Flagged by Smatch. Compile tested only. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240430-mv88e6xx-list_empty-v3-1-c35c69d88d2e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
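A hedged sketch of the corrected pattern (struct and function names only loosely mirror the driver): list_first_entry() is not designed to return NULL, so list_first_entry_or_null() is the right helper when the list may be empty.

```c
struct example_mdio_bus {
	struct list_head list;
	struct mii_bus *bus;
};

static struct mii_bus *example_default_mdio_bus(struct list_head *mdios)
{
	struct example_mdio_bus *mdio_bus;

	mdio_bus = list_first_entry_or_null(mdios, struct example_mdio_bus,
					    list);
	if (!mdio_bus)
		return NULL;	/* list was empty */

	return mdio_bus->bus;
}
```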
2024-05-03  selftests/net: skip partial checksum packets in csum test  (Willem de Bruijn; 1 file, -3/+15)
Detect packets with ip_summed CHECKSUM_PARTIAL and skip these. These should not exist, as the test sends individual packets between two hosts. But if (HW) GRO is on, with randomized content sometimes subsequent packets can be coalesced. In this case the GSO packet checksum is converted to a pseudo checksum in anticipation of sending out as TSO/USO. So the field will not match the expected value. Do not count these as test errors. Signed-off-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20240501193156.3627344-1-willemdebruijn.kernel@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-03  selftests: net: py: check process exit code in bkg() and background cmd()  (Jakub Kicinski; 1 file, -2/+6)
We're a bit too loose with error checking for background processes. cmd() completely ignores the fail argument passed to the constructor if background is True. Default to checking for errors if process is not terminated explicitly. Caller can override with True / False. For bkg() the processing step is called magically by __exit__ so record the value passed in the constructor. Reported-by: Willem de Bruijn <willemb@google.com> Tested-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20240502025325.1924923-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-03  IB/hfi1: allocate dummy net_device dynamically  (Breno Leitao; 2 files, -3/+8)
Embedding net_device into structures prohibits the usage of flexible arrays in the net_device structure. For more details, see the discussion at [1]. Un-embed the net_device from struct hfi1_netdev_rx by converting it into a pointer. Then leverage alloc_netdev() to allocate the net_device object at hfi1_alloc_rx(). [1] https://lore.kernel.org/all/20240229225910.79e224cf@kernel.org/ Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Leon Romanovsky <leon@kernel.org> Link: https://lore.kernel.org/r/20240430162213.746492-1-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-03  net/mlx5e: flower: check for unsupported control flags  (Asbjørn Sloth Tønnesen; 1 file, -6/+4)
Use flow_rule_is_supp_control_flags() to reject filters with unsupported control flags. In case any unsupported control flags are masked, flow_rule_is_supp_control_flags() sets a NL extended error message, and we return -EOPNOTSUPP. Remove FLOW_DIS_FIRST_FRAG specific error message, and treat it as any other unsupported control flag. Only compile-tested. Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Acked-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240422152728.175677-1-ast@fiberby.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
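A hedged sketch of how such a check typically looks (the helper's exact signature is my assumption here, and the supported-flags set is illustrative): any masked control flag outside the supported set makes the helper set an extack message, and the driver returns -EOPNOTSUPP.

```c
static int example_parse_control(struct flow_rule *rule,
				 struct netlink_ext_ack *extack)
{
	struct flow_match_control match;

	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL))
		return 0;

	flow_rule_match_control(rule, &match);
	/* Assume only fragment matching is supported in this example. */
	if (!flow_rule_is_supp_control_flags(FLOW_DIS_IS_FRAGMENT,
					     match.mask->flags, extack))
		return -EOPNOTSUPP;

	return 0;
}
```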
2024-05-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 306 files, -1532/+2454)
Cross-merge networking fixes after downstream PR.

Conflicts:

include/linux/filter.h
kernel/bpf/core.c
  66e13b615a0c ("bpf: verifier: prevent userspace memory access")
  d503a04f8bc0 ("bpf: Add support for certain atomics in bpf_arena to x86 JIT")
https://lore.kernel.org/all/20240429114939.210328b0@canb.auug.org.au/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-05-02  Merge tag 'net-6.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds; 49 files, -193/+378)
Pull networking fixes from Paolo Abeni:
"Including fixes from bpf. Relatively calm week, likely due to public holiday in most places. No known outstanding regressions.

Current release - regressions:
- rxrpc: fix wrong alignmask in __page_frag_alloc_align()
- eth: e1000e: change usleep_range to udelay in PHY mdic access

Previous releases - regressions:
- gro: fix udp bad offset in socket lookup
- bpf: fix incorrect runtime stat for arm64
- tipc: fix UAF in error path
- netfs: fix a potential infinite loop in extract_user_to_sg()
- eth: ice: ensure the copied buf is NUL terminated
- eth: qeth: fix kernel panic after setting hsuid

Previous releases - always broken:
- bpf:
  - verifier: prevent userspace memory access
  - xdp: use flags field to disambiguate broadcast redirect
- bridge: fix multicast-to-unicast with fraglist GSO
- mptcp: ensure snd_nxt is properly initialized on connect
- nsh: fix outer header access in nsh_gso_segment().
- eth: bcmgenet: fix racing registers access
- eth: vxlan: fix stats counters.

Misc:
- a bunch of MAINTAINERS file updates"

* tag 'net-6.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (45 commits)
  MAINTAINERS: mark MYRICOM MYRI-10G as Orphan
  MAINTAINERS: remove Ariel Elior
  net: gro: add flush check in udp_gro_receive_segment
  net: gro: fix udp bad offset in socket lookup by adding {inner_}network_offset to napi_gro_cb
  ipv4: Fix uninit-value access in __ip_make_skb()
  s390/qeth: Fix kernel panic after setting hsuid
  vxlan: Pull inner IP header in vxlan_rcv().
  tipc: fix a possible memleak in tipc_buf_append
  tipc: fix UAF in error path
  rxrpc: Clients must accept conn from any address
  net: core: reject skb_copy(_expand) for fraglist GSO skbs
  net: bridge: fix multicast-to-unicast with fraglist GSO
  mptcp: ensure snd_nxt is properly initialized on connect
  e1000e: change usleep_range to udelay in PHY mdic access
  net: dsa: mv88e6xxx: Fix number of databases for 88E6141 / 88E6341
  cxgb4: Properly lock TX queue for the selftest.
  rxrpc: Fix using alignmask being zero for __page_frag_alloc_align()
  vxlan: Add missing VNI filter counter update in arp_reduce().
  vxlan: Fix racy device stats updates.
  net: qede: use return from qede_parse_actions()
  ...
2024-05-02  Merge branch 'bnxt_en-updates-for-net-next'  (Jakub Kicinski; 6 files, -61/+107)
Michael Chan says:

====================
bnxt_en: Updates for net-next

The first patch converts the sw_stats field in the completion ring structure to a pointer. This allows the group of completion rings using the same MSIX to share the same sw_stats structure. Prior to this, the correct completion ring must be used to count packets.

The next four patches remove the RTNL lock when calling the RoCE driver for asynchronous stop and start during error recovery and firmware reset. The RTNL lock is replaced with a private mutex used to synchronize RoCE register, unregister, stop, and start.

The last patch adds VF PCI IDs for the 5760X chips.

v2: Dropped patch #1 from v1. Will work with David to get that patch in separately.
====================

Link: https://lore.kernel.org/r/20240501003056.100607-1-michael.chan@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>