path: root/drivers/net/ethernet/mellanox
2022-10-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski, 1 file, -10/+1]
include/linux/net.h
  a5ef058dc4d9 ("net: introduce and use custom sockopt socket flag")
  e993ffe3da4b ("net: flag sockets supporting msghdr originated zerocopy")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-24  mlxsw: Add support for 800Gbps link modes  [Amit Cohen, 2 files, -0/+22]
Add support for 800Gbps speed, link modes of 100Gbps per lane.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2022-10-24  net/mlx5e: Cleanup MACsec uninitialization routine  [Leon Romanovsky, 1 file, -10/+1]
The mlx5e_macsec_cleanup() routine dereferences a NULL pointer if the
mlx5 device doesn't support MACsec (priv->macsec will be NULL). While
at it, delete a comment line, an assignment and extra blank lines, so
everything is fixed in one patch.

Fixes: 1f53da676439 ("net/mlx5e: Create advanced steering operation (ASO) object for MACsec")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

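Such fixes usually follow the early-return pattern; a minimal sketch,
assuming priv->macsec is the only state to check (the real cleanup body
in the driver may differ):

    /* Hedged sketch: bail out early when the device has no MACsec
     * support, before any cleanup step dereferences priv->macsec.
     */
    void mlx5e_macsec_cleanup(struct mlx5e_priv *priv)
    {
            struct mlx5e_macsec *macsec = priv->macsec;

            if (!macsec)
                    return; /* no MACsec support; nothing to clean up */

            /* ... tear down the ASO object and other MACsec state ... */
            kfree(macsec);
            priv->macsec = NULL;
    }
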
2022-10-13  Merge tag 'net-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Linus Torvalds, 4 files, -13/+11]

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter, and wifi.

  Current release - regressions:

   - Revert "net/sched: taprio: make qdisc_leaf() see the
     per-netdev-queue pfifo child qdiscs", it may cause crashes when the
     qdisc is reconfigured

   - inet: ping: fix splat due to packet allocation refactoring in inet

   - tcp: clean up kernel listener's reqsk in inet_twsk_purge(), fix UAF
     due to races when per-netns hash table is used

  Current release - new code bugs:

   - eth: adin1110: check in netdev_event that netdev belongs to driver

   - fixes for PTR_ERR() vs NULL bugs in driver code, from Dan and co.

  Previous releases - regressions:

   - ipv4: handle attempt to delete multipath route when fib_info
     contains an nh reference, avoid oob access

   - wifi: fix handful of bugs in the new Multi-BSSID code

   - wifi: mt76: fix rate reporting / throughput regression on mt7915
     and newer, fix checksum offload

   - wifi: iwlwifi: mvm: fix double list_add at
     iwl_mvm_mac_wake_tx_queue (other cases)

   - wifi: mac80211: do not drop packets smaller than the LLC-SNAP
     header on fast-rx

  Previous releases - always broken:

   - ieee802154: don't warn zero-sized raw_sendmsg()

   - ipv6: ping: fix wrong checksum for large frames

   - mctp: prevent double key removal and unref

   - tcp/udp: fix memory leaks and races around IPV6_ADDRFORM

   - hv_netvsc: fix race between VF offering and VF association message

  Misc:

   - remove -Warray-bounds silencing in the drivers, compilers fixed"

* tag 'net-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (73 commits)
  sunhme: fix an IS_ERR() vs NULL check in probe
  net: marvell: prestera: fix a couple NULL vs IS_ERR() checks
  kcm: avoid potential race in kcm_tx_work
  tcp: Clean up kernel listener's reqsk in inet_twsk_purge()
  net: phy: micrel: Fixes FIELD_GET assertion
  openvswitch: add nf_ct_is_confirmed check before assigning the helper
  tcp: Fix data races around icsk->icsk_af_ops.
  ipv6: Fix data races around sk->sk_prot.
  tcp/udp: Call inet6_destroy_sock() in IPv6 sk->sk_destruct().
  udp: Call inet6_destroy_sock() in setsockopt(IPV6_ADDRFORM).
  tcp/udp: Fix memory leak in ipv6_renew_options().
  mctp: prevent double key removal and unref
  selftests: netfilter: Fix nft_fib.sh for all.rp_filter=1
  netfilter: rpfilter/fib: Populate flowic_l3mdev field
  selftests: netfilter: Test reverse path filtering
  net/mlx5: Make ASO poll CQ usable in atomic context
  tcp: cdg: allow tcp_cdg_release() to be called multiple times
  inet: ping: fix recent breakage
  ipv6: ping: fix wrong checksum for large frames
  net: ethernet: ti: am65-cpsw: set correct devlink flavour for unused ports
  ...

2022-10-12  net/mlx5: Make ASO poll CQ usable in atomic context  [Leon Romanovsky, 4 files, -13/+11]
Poll CQ functions shouldn't sleep, as they are called in atomic
context. The following splat appears once mlx5_aso_poll_cq() is used in
such a flow.

 BUG: scheduling while atomic: swapper/17/0/0x00000100
 Modules linked in: sch_ingress openvswitch nsh mlx5_vdpa vringh vhost_iotlb vdpa mlx5_ib mlx5_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core fuse [last unloaded: mlx5_core]
 CPU: 17 PID: 0 Comm: swapper/17 Tainted: G W 6.0.0-rc2+ #13
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
 Call Trace:
  <IRQ>
  dump_stack_lvl+0x34/0x44
  __schedule_bug.cold+0x47/0x53
  __schedule+0x4b6/0x670
  ? hrtimer_start_range_ns+0x28d/0x360
  schedule+0x50/0x90
  schedule_hrtimeout_range_clock+0x98/0x120
  ? __hrtimer_init+0xb0/0xb0
  usleep_range_state+0x60/0x90
  mlx5_aso_poll_cq+0xad/0x190 [mlx5_core]
  mlx5e_ipsec_aso_update_curlft+0x81/0xb0 [mlx5_core]
  xfrm_timer_handler+0x6b/0x360
  ? xfrm_find_acq_byseq+0x50/0x50
  __hrtimer_run_queues+0x139/0x290
  hrtimer_run_softirq+0x7d/0xe0
  __do_softirq+0xc7/0x272
  irq_exit_rcu+0x87/0xb0
  sysvec_apic_timer_interrupt+0x72/0x90
  </IRQ>
  <TASK>
  asm_sysvec_apic_timer_interrupt+0x16/0x20
 RIP: 0010:default_idle+0x18/0x20
 Code: ae 7d ff ff cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 8b 05 b5 30 0d 01 85 c0 7e 07 0f 00 2d 0a e3 53 00 fb f4 <c3> 0f 1f 80 00 00 00 00 0f 1f 44 00 00 65 48 8b 04 25 80 ad 01 00
 RSP: 0018:ffff888100883ee0 EFLAGS: 00000242
 RAX: 0000000000000001 RBX: ffff888100849580 RCX: 4000000000000000
 RDX: 0000000000000001 RSI: 0000000000000083 RDI: 000000000008863c
 RBP: 0000000000000011 R08: 00000064e6977fa9 R09: 0000000000000001
 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
  default_idle_call+0x37/0xb0
  do_idle+0x1cd/0x1e0
  cpu_startup_entry+0x19/0x20
  start_secondary+0xfe/0x120
  secondary_startup_64_no_verify+0xcd/0xdb
  </TASK>
 softirq: huh, entered softirq 8 HRTIMER 00000000a97c08cb with preempt_count 00000100, exited with 00000000?

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2022-10-05  Merge tag 'i2c-for-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux  [Linus Torvalds, 1 file, -3/+1]

Pull i2c updates from Wolfram Sang:

 - 'remove' callback converted to return void. Big change with trivial
   fixes all over the tree. Other subsystems depending on this change
   have been asked to pull an immutable topic branch for this.

 - new driver for Microchip PCI1xxxx switch

 - heavy refactoring of the Mellanox BlueField driver

 - we prefer async probe in the i801 driver now

 - the rest is usual driver updates (support for more SoCs, some
   refactoring, some feature additions)

* tag 'i2c-for-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux: (37 commits)
  i2c: pci1xxxx: prevent signed integer overflow
  i2c: acpi: Replace zero-length array with DECLARE_FLEX_ARRAY() helper
  i2c: i801: Prefer async probe
  i2c: designware-pci: Use standard pattern for memory allocation
  i2c: designware-pci: Group AMD NAVI quirk parts together
  i2c: microchip: pci1xxxx: Add driver for I2C host controller in multifunction endpoint of pci1xxxx switch
  docs: i2c: slave-interface: return errno when handle I2C_SLAVE_WRITE_REQUESTED
  i2c: mlxbf: remove device tree support
  i2c: mlxbf: support BlueField-3 SoC
  i2c: cadence: Add standard bus recovery support
  i2c: mlxbf: add multi slave functionality
  i2c: mlxbf: support lock mechanism
  macintosh/ams: Adapt declaration of ams_i2c_remove() to earlier change
  i2c: riic: Use devm_platform_ioremap_resource()
  i2c: mlxbf: remove IRQF_ONESHOT
  dt-bindings: i2c: rockchip: add rockchip,rk3128-i2c
  dt-bindings: i2c: renesas,rcar-i2c: Add r8a779g0 support
  i2c: tegra: Add GPCDMA support
  i2c: scmi: Convert to be a platform driver
  i2c: rk3x: Add rv1126 support
  ...

2022-10-04  Merge tag 'net-next-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next  [Linus Torvalds, 100 files, -2061/+7303]

Pull networking updates from Jakub Kicinski:
 "Core:

   - Introduce and use a single page frag cache for allocating small
     skb heads, clawing back the 10-20% performance regression in UDP
     flood test from previous fixes.

   - Run packets which already went thru HW coalescing thru SW GRO.
     This significantly improves TCP segment coalescing and simplifies
     deployments as different workloads benefit from HW or SW GRO.

   - Shrink the size of the base zero-copy send structure.

   - Move TCP init under a new slow / sleepable version of DO_ONCE().

  BPF:

   - Add BPF-specific, any-context-safe memory allocator.

   - Add helpers/kfuncs for PKCS#7 signature verification from BPF
     programs.

   - Define a new map type and related helpers for user space -> kernel
     communication over a ring buffer (BPF_MAP_TYPE_USER_RINGBUF).

   - Allow targeting BPF iterators to loop through resources of one
     task/thread.

   - Add ability to call selected destructive functions. Expose
     crash_kexec() to allow BPF to trigger a kernel dump. Use
     CAP_SYS_BOOT check on the loading process to judge permissions.

   - Enable BPF to collect custom hierarchical cgroup stats efficiently
     by integrating with the rstat framework.

   - Support struct arguments for trampoline based programs. Only
     structs with size <= 16B and x86 are supported.

   - Invoke cgroup/connect{4,6} programs for unprivileged ICMP ping
     sockets (instead of just TCP and UDP sockets).

   - Add a helper for accessing CLOCK_TAI for time sensitive network
     related programs.

   - Support accessing network tunnel metadata's flags.

   - Make TCP SYN ACK RTO tunable by BPF programs with TCP Fast Open.

   - Add support for writing to Netfilter's nf_conn:mark.

  Protocols:

   - WiFi: more Extremely High Throughput (EHT) and Multi-Link
     Operation (MLO) work (802.11be, WiFi 7).

   - vsock: improve support for SO_RCVLOWAT.

   - SMC: support SO_REUSEPORT.

   - Netlink: define and document how to use netlink in a "modern" way.
     Support reporting missing attributes via extended ACK.

   - IPSec: support collect metadata mode for xfrm interfaces.

   - TCPv6: send consistent autoflowlabel in SYN_RECV state and RST
     packets.

   - TCP: introduce optional per-netns connection hash table to allow
     better isolation between namespaces (opt-in, at the cost of memory
     and cache pressure).

   - MPTCP: support TCP_FASTOPEN_CONNECT.

   - Add NEXT-C-SID support in Segment Routing (SRv6) End behavior.

   - Adjust IP_UNICAST_IF sockopt behavior for connected UDP sockets.

   - Open vSwitch:
      - Allow specifying ifindex of new interfaces.
      - Allow conntrack and metering in non-initial user namespace.

   - TLS: support the Korean ARIA-GCM crypto algorithm.

   - Remove DECnet support.

  Driver API:

   - Allow selecting the conduit interface used by each port in DSA
     switches, at runtime.

   - Ethernet Power Sourcing Equipment and Power Device support.

   - Add tc-taprio support for queueMaxSDU parameter, i.e. setting per
     traffic class max frame size for time-based packet schedules.

   - Support PHY rate matching - adapting between differing host-side
     and link-side speeds.

   - Introduce QUSGMII PHY mode and 1000BASE-KX interface mode.

   - Validate OF (device tree) nodes for DSA shared ports; make
     phylink-related properties mandatory on DSA and CPU ports.
     Enforcing more uniformity should allow transitioning to phylink.

   - Require that flash component name used during update matches one
     of the components for which version is reported by info_get().

   - Remove "weight" argument from driver-facing NAPI API as much as
     possible. It's one of those magic knobs which seemed like a good
     idea at the time but is too indirect to use in practice.

   - Support offload of TLS connections with 256 bit keys.

  New hardware / drivers:

   - Ethernet:
      - Microchip KSZ9896 6-port Gigabit Ethernet Switch
      - Renesas Ethernet AVB (EtherAVB-IF) Gen4 SoCs
      - Analog Devices ADIN1110 and ADIN2111 industrial single pair
        Ethernet (10BASE-T1L) MAC+PHY.
      - Rockchip RV1126 Gigabit Ethernet (a version of stmmac IP).

   - Ethernet SFPs / modules:
      - RollBall / Hilink / Turris 10G copper SFPs
      - HALNy GPON module

   - WiFi:
      - CYW43439 SDIO chipset (brcmfmac)
      - CYW89459 PCIe chipset (brcmfmac)
      - BCM4378 on Apple platforms (brcmfmac)

  Drivers:

   - CAN:
      - gs_usb: HW timestamp support

   - Ethernet PHYs:
      - lan8814: cable diagnostics

   - Ethernet NICs:
      - Intel (100G):
         - implement control of FCS/CRC stripping
         - port splitting via devlink
         - L2TPv3 filtering offload
      - nVidia/Mellanox:
         - tunnel offload for sub-functions
         - MACSec offload, w/ Extended packet number and replay window
           offload
         - significantly restructure, and optimize the AF_XDP support,
           align the behavior with other vendors
      - Huawei:
         - configuring DSCP map for traffic class selection
         - querying standard FEC statistics
         - querying SerDes lane number via ethtool
      - Marvell/Cavium:
         - egress priority flow control
         - MACSec offload
      - AMD/SolarFlare:
         - PTP over IPv6 and raw Ethernet
      - small / embedded:
         - ax88772: convert to phylink (to support SFP cages)
         - altera: tse: convert to phylink
         - ftgmac100: support fixed link
         - enetc: standard Ethtool counters
         - macb: ZynqMP SGMII dynamic configuration support
         - tsnep: support multi-queue and use page pool
         - lan743x: Rx IP & TCP checksum offload
         - igc: add xdp frags support to ndo_xdp_xmit

   - Ethernet high-speed switches:
      - Marvell (prestera):
         - support SPAN port features (traffic mirroring)
         - nexthop object offloading
      - Microchip (sparx5):
         - multicast forwarding offload
         - QoS queuing offload (tc-mqprio, tc-tbf, tc-ets)

   - Ethernet embedded switches:
      - Marvell (mv88e6xxx):
         - support RGMII cmode
      - NXP (felix):
         - standardized ethtool counters
      - Microchip (lan966x):
         - QoS queuing offload (tc-mqprio, tc-tbf, tc-cbs, tc-ets)
         - traffic policing and mirroring
         - link aggregation / bonding offload
         - QUSGMII PHY mode support

   - Qualcomm 802.11ax WiFi (ath11k):
      - cold boot calibration support on WCN6750
      - support to connect to a non-transmit MBSSID AP profile
      - enable remain-on-channel support on WCN6750
      - Wake-on-WLAN support for WCN6750
      - support to provide transmit power from firmware via nl80211
      - support to get power save duration for each client
      - spectral scan support for 160 MHz

   - MediaTek WiFi (mt76):
      - WiFi-to-Ethernet bridging offload for MT7986 chips

   - RealTek WiFi (rtw89):
      - P2P support"

* tag 'net-next-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1864 commits)
  eth: pse: add missing static inlines
  once: rename _SLOW to _SLEEPABLE
  net: pse-pd: add regulator based PSE driver
  dt-bindings: net: pse-dt: add bindings for regulator based PoDL PSE controller
  ethtool: add interface to interact with Ethernet Power Equipment
  net: mdiobus: search for PSE nodes by parsing PHY nodes.
  net: mdiobus: fwnode_mdiobus_register_phy() rework error handling
  net: add framework to support Ethernet PSE and PDs devices
  dt-bindings: net: phy: add PoDL PSE property
  net: marvell: prestera: Propagate nh state from hw to kernel
  net: marvell: prestera: Add neighbour cache accounting
  net: marvell: prestera: add stub handler neighbour events
  net: marvell: prestera: Add heplers to interact with fib_notifier_info
  net: marvell: prestera: Add length macros for prestera_ip_addr
  net: marvell: prestera: add delayed wq and flush wq on deinit
  net: marvell: prestera: Add strict cleanup of fib arbiter
  net: marvell: prestera: Add cleanup of allocated fib_nodes
  net: marvell: prestera: Add router nexthops ABI
  eth: octeon: fix build after netif_napi_add() changes
  net/mlx5: E-Switch, Return EBUSY if can't get mode lock
  ...

2022-10-04  net/mlx5: E-Switch, Return EBUSY if can't get mode lock  [Jianbo Liu, 1 file, -1/+1]
Return -EBUSY when the mode lock can't be taken, to avoid tc retrying
during a device mode change.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5: E-switch, Don't update group if qos is not enabled  [Chris Mi, 1 file, -1/+5]
Currently, the qos group is updated and qos is enabled when
unregistering a devlink port, but there is no need to update the group
if qos is not enabled. Add a check to prevent unnecessarily enabling
and disabling qos for every port.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Dmytro Linkin <dlinkin@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5: E-Switch, Allow offloading fwd dest flow table with vport  [Roi Dayan, 1 file, -7/+9]
Before this commit, a fwd dest flow table caused vport dests to be
ignored, which is incorrect: mixing them is supported. With this
commit, the dests can be a mix of flow table and vport dests. One
limitation remains: there cannot be more than one flow table dest.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5: Set default grace period based on function type  [Maher Sanalla, 1 file, -2/+16]
Currently, the driver sets the same grace period for the fw fatal
health reporter regardless of function type. Since lower level
functions are more vulnerable to fw fatal errors resulting from parent
function closure/reload, set a smaller grace period for them, as
follows:

1. For ECPF: 180 seconds.
2. For PF: 60 seconds.
3. For VF/SF: 30 seconds.

Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

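A minimal sketch of this kind of type-dependent selection, using the
mlx5_core_is_ecpf()/mlx5_core_is_pf() predicates that exist in the
driver headers; the function name and placement are assumptions, not
the actual patch:

    /* Hedged sketch: choose the fw fatal reporter grace period by
     * function type, per the values listed above.
     */
    static u64 mlx5_health_grace_period_msec(struct mlx5_core_dev *dev)
    {
            if (mlx5_core_is_ecpf(dev))
                    return 180 * 1000;      /* ECPF: 180 seconds */
            if (mlx5_core_is_pf(dev))
                    return 60 * 1000;       /* PF: 60 seconds */
            return 30 * 1000;               /* VF/SF: 30 seconds */
    }
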
2022-10-04  net/mlx5: Start health poll at earlier stage of driver load  [Moshe Shemesh, 2 files, -10/+18]
Start the health poll at an earlier stage, so that if a fw fatal issue
occurs before or during initialization commands such as init_hca or
set_hca_cap, the health poll can detect it and indicate that the driver
is already in an error state.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: Expose rx_oversize_pkts_buffer counter  [Gal Pressman, 3 files, -2/+26]
Add the rx_oversize_pkts_buffer counter to ethtool statistics. It
exposes the number of received packets dropped due to length: they
arrived at the RQ but exceeded the software buffer size allocated by
the device for incoming traffic. This might imply that the device MTU
is larger than the software buffer size.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: xsk: Optimize for unaligned mode with 3072-byte frames  [Maxim Mikityanskiy, 4 files, -3/+61]
When the XSK frame size is 3072 (or another power of two multiplied by
3), the KLM mechanism for NIC virtual memory page mapping can be
optimized by replacing it with KSM.

Before this change, two KLM entries were needed to map an XSK frame
that is not a power of two: one entry maps the UMEM memory up to the
frame length, the other maps the rest of the stride to the garbage
page. When the frame length divided by 3 is a power of two, the frame
can instead be mapped using 3 KSM entries, and a fourth maps the rest
of the stride to the garbage page. All 4 KSM entries are of the same
size, which allows for a much faster lookup.

Frame size 3072 is useful in certain use cases, because it allows
packing 4 frames into 3 pages. Generally speaking, other frame sizes
equal to PAGE_SIZE minus a power of two can be optimized in a similar
way, but that would require many more KSMs per frame, which slows down
UMRs a little bit, but more importantly may hit the limit for the
maximum number of KSM entries.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

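To make the arithmetic concrete: 4 x 3072 = 12288 = 3 x 4096, so four
3072-byte frames pack exactly into three 4096-byte pages, and
3072 / 3 = 1024 is a power of two, so each frame maps as three
1024-byte KSM entries plus one for the garbage page. A hedged sketch of
the qualifying-size check (the helper name is hypothetical):

    #include <linux/log2.h>

    /* Hypothetical helper: true when an XSK frame size fits the
     * equal-size KSM layout described above, i.e. it is three times a
     * power of two (3072 = 3 * 1024, 6144 = 3 * 2048, ...).
     */
    static bool mlx5e_frame_size_is_3pow2(u32 frame_sz)
    {
            return frame_sz % 3 == 0 && is_power_of_2(frame_sz / 3);
    }
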
2022-10-04  net/mlx5e: xsk: Print a warning in slow configurations  [Maxim Mikityanskiy, 1 file, -0/+9]
On striding RQ, when the XSK frame size doesn't match the MKey page
size, KLM is used for memory mappings, which is a slower mechanism than
MTT or KSM. This may happen in two cases:

1. The frame size is not a power of two (only possible in the unaligned
   mode of XSK).
2. The frame size is 2048 bytes, and the firmware doesn't support MKey
   pages smaller than 4096 bytes.

Depending on the case, print a warning and recommend disabling striding
RQ or upgrading the firmware.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: xsk: Use KLM to protect frame overrun in unaligned mode  [Maxim Mikityanskiy, 4 files, -10/+90]
XSK RQs support striding RQ linear mode, but the stride size may be
bigger than the XSK frame size, because:

1. The stride size must be a power of two.
2. The stride size must be equal to the UMR page size. Each XSK frame
   is treated as a separate page, because frames aren't necessarily
   adjacent in physical memory, so the driver can't put more than one
   stride per page.
3. The minimal MTT page size is 4096 on older firmware.

That means that if the XSK frame size is 2048 or not a power of two,
the strides may be bigger than the XSK frames. Normally, this is not a
problem if the hardware enforces the MTU. However, traffic between
vports skips the hardware MTU check, and oversized packets may be
received. If an oversized packet is bigger than the XSK frame but not
bigger than the stride, it will cause overwriting of the adjacent UMEM
region. If the packet takes more than one stride, the extra strides can
be recycled for reuse, so it's not a problem when the XSK frame size
matches the stride size.

Work around the above issue by leveraging KLM to make a more
fine-grained mapping. The beginning of each stride is mapped to the
frame memory, and the padding up to the closest power of two is mapped
to the overflow page that doesn't belong to UMEM. This way, application
data corruption won't happen upon receiving packets bigger than the
MTU.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: Improve MTT/KSM alignment  [Maxim Mikityanskiy, 5 files, -16/+18]
Make mlx5e_mpwrq_mtts_per_wqe take into account that KSM requires
smaller alignment than MTT.

Ensure that there is always an even number of MTTs in a UMR WQE, so
that complete octwords are formed, and no garbage is mapped.

Drop the extra alignment in MLX5_MTT_OCTW that may cause setting a too
big ucseg->xlt_octowords, also leading to mapping garbage.

Generalize some calculations by introducing the MLX5_OCTWORD constant.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: xsk: Use umr_mode to calculate striding RQ parameters  [Maxim Mikityanskiy, 6 files, -72/+167]
Instead of passing the unaligned flag, pass an enum that indicates the
UMR mode. The next commit will add a third mode (KLM for certain
configurations of XSK), which will be added to this enum instead of
adding another bool flag everywhere.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: xsk: Improve need_wakeup logic  [Maxim Mikityanskiy, 4 files, -37/+23]
The XSK need_wakeup mechanism allows the driver to stop busy waiting
for buffers when the fill ring is empty, yield to the application, and
signal that the driver needs to be woken up after the application
refills the fill ring.

Add protection against a race condition on the RX (refill) side: if the
application refills buffers after xskrq->post_wqes is called, but
before mlx5e_xsk_update_rx_wakeup, NAPI will exit, skipping taking
these buffers to the hardware WQ, and the application won't wake it up
again.

Optimize the whole need_wakeup logic, removing unneeded flows, to
compensate for this new check.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  net/mlx5e: xsk: Include XSK skb_from_cqe callbacks in INDIRECT_CALL  [Maxim Mikityanskiy, 1 file, -2/+4]
XSK is a performance-critical data path. To avoid an indirect function
call with a retpoline, include the XSK callbacks in the INDIRECT_CALL
macro, so that they are called directly in XSK flows.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

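For context, the INDIRECT_CALL_*() wrappers from
include/linux/indirect_call_wrapper.h compare the function pointer
against known candidates and call a match directly, falling back to an
indirect call otherwise. A hedged call-site sketch; the candidate names
mirror real mlx5e functions, but the argument list is abbreviated and
assumed:

    #include <linux/indirect_call_wrapper.h>

    /* If rq->wqe.skb_from_cqe points at one of the listed candidates,
     * it is called directly, avoiding a retpoline on CONFIG_RETPOLINE
     * kernels.
     */
    skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
                          mlx5e_xsk_skb_from_cqe_linear, /* XSK flow */
                          mlx5e_skb_from_cqe_linear,     /* regular flow */
                          rq, wi, cqe_bcnt);
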
2022-10-04  net/mlx5e: xsk: Set napi_id to support busy polling  [Maxim Mikityanskiy, 1 file, -1/+1]
xdp_rxq_info_reg should get the actual napi_id, not 0, in order to
support socket busy polling properly.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

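xdp_rxq_info_reg() takes the napi_id as its fourth argument; passing
the real NAPI instance id is what lets busy polling map a socket to the
right queue. A minimal sketch (the rq/c field names are assumptions):

    /* Register the XDP RQ info with the channel's actual napi_id
     * instead of 0, so SO_BUSY_POLL / epoll busy polling can find
     * this queue.
     */
    err = xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq->ix,
                           c->napi.napi_id);
    if (err)
            return err;
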
2022-10-04  net/mlx5e: xsk: Flush RQ on XSK activation to save memory  [Maxim Mikityanskiy, 3 files, -5/+19]
The regular RQ remains open after opening an XSK socket, in order to
guarantee that closing the XSK socket never fails due to an error when
reopening the regular RQ.

To save memory, the regular RQ can be deactivated and flushed,
releasing all pages, when an XSK socket is open.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-04  Merge tag 'thermal-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm  [Linus Torvalds, 1 file, -75/+2]

Pull thermal control updates from Rafael Wysocki:
 "The most significant part of this update is the thermal control DT
  initialization rework from Daniel Lezcano and the following conversion
  of drivers to use the new API introduced by it.

  Apart from that, the maximum number of trip points in a thermal zone
  is increased and there are some fixes and code cleanups.

  Specifics:

   - Rework the device tree initialization, convert the drivers to the
     new API and remove the old OF code (Daniel Lezcano)

   - Fix return value to -ENODEV when searching for a specific thermal
     zone which does not exist (Daniel Lezcano)

   - Fix the return value inspection in of_thermal_zone_find() (Dan
     Carpenter)

   - Fix kernel panic when KASAN is enabled as it detects use after
     free when unregistering a thermal zone (Daniel Lezcano)

   - Move the set_trip ops inside the thermal sysfs code (Daniel
     Lezcano)

   - Remove unnecessary error message as it is already shown in the
     underlying function (Jiapeng Chong)

   - Rework the monitoring path and move the locks upper in the call
     stack to fix some potential race windows (Daniel Lezcano)

   - Fix lockdep_assert() warning introduced by the lock rework (Daniel
     Lezcano)

   - Do not lock thermal zone mutex in the user space governor (Rafael
     Wysocki)

   - Revert the Mellanox 'hotter thermal zone' feature because it is
     already handled in the thermal framework core code (Daniel
     Lezcano)

   - Increase maximum number of trip points in the thermal core (Sumeet
     Pawnikar)

   - Replace strlcpy() with unused retval with strscpy() in the core
     thermal control code (Wolfram Sang)

   - Use module_pci_driver() macro in the int340x processor_thermal
     driver (Shang XiaoJing)

   - Use get_cpu() instead of smp_processor_id() in the
     intel_powerclamp thermal driver to prevent it from crashing and
     remove unused accounting for IRQ wakes from it (Srinivas
     Pandruvada)

   - Consolidate priv->data_vault checks in int340x_thermal (Rafael
     Wysocki)

   - Check the policy first in cpufreq_cooling_register() (Xuewen Yan)

   - Drop redundant error message from da9062-thermal (zhaoxiao)

   - Drop of_match_ptr() from thermal_mmio (Jean Delvare)"

* tag 'thermal-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (55 commits)
  thermal: core: Increase maximum number of trip points
  thermal: int340x: processor_thermal: Use module_pci_driver() macro
  thermal: intel_powerclamp: Remove accounting for IRQ wakes
  thermal: intel_powerclamp: Use get_cpu() instead of smp_processor_id() to avoid crash
  thermal: int340x_thermal: Consolidate priv->data_vault checks
  thermal: cpufreq_cooling: Check the policy first in cpufreq_cooling_register()
  thermal: Drop duplicate words from comments
  thermal: move from strlcpy() with unused retval to strscpy()
  thermal: da9062-thermal: Drop redundant error message
  thermal/drivers/thermal_mmio: Drop of_match_ptr()
  thermal: gov_user_space: Do not lock thermal zone mutex
  Revert "mlxsw: core: Add the hottest thermal zone detection"
  thermal/core: Fix lockdep_assert() warning
  thermal/core: Move the mutex inside the thermal_zone_device_update() function
  thermal/core: Move the thermal zone lock out of the governors
  thermal/governors: Group the thermal zone lock inside the throttle function
  thermal/core: Rework the monitoring a bit
  thermal/core: Rearm the monitoring only one time
  thermal/drivers/qcom/spmi-adc-tm5: Remove unnecessary print function dev_err()
  thermal/of: Remove old OF code
  ...

2022-10-03  Merge branch 'thermal-core'  [Rafael J. Wysocki, 1 file, -75/+2]
Merge core thermal control changes for 6.1-rc1:

 - Increase maximum number of trip points in the thermal core (Sumeet
   Pawnikar).

 - Replace strlcpy() with unused retval with strscpy() in the core
   thermal control code (Wolfram Sang).

 - Do not lock thermal zone mutex in the user space governor (Rafael
   Wysocki).

 - Rework the device tree initialization, convert the drivers to the
   new API and remove the old OF code (Daniel Lezcano).

 - Fix return value to -ENODEV when searching for a specific thermal
   zone which does not exist (Daniel Lezcano).

 - Fix the return value inspection in of_thermal_zone_find() (Dan
   Carpenter).

 - Fix kernel panic when KASAN is enabled as it detects use after free
   when unregistering a thermal zone (Daniel Lezcano).

 - Move the set_trip ops inside the thermal sysfs code (Daniel
   Lezcano).

 - Remove unnecessary error message as it is already shown in the
   underlying function (Jiapeng Chong).

 - Rework the monitoring path and move the locks upper in the call
   stack to fix some potential race windows (Daniel Lezcano).

 - Fix lockdep_assert() warning introduced by the lock rework (Daniel
   Lezcano).

 - Revert the Mellanox 'hotter thermal zone' feature because it is
   already handled in the thermal framework core code (Daniel Lezcano).

* thermal-core: (47 commits)
  thermal: core: Increase maximum number of trip points
  thermal: move from strlcpy() with unused retval to strscpy()
  thermal: gov_user_space: Do not lock thermal zone mutex
  Revert "mlxsw: core: Add the hottest thermal zone detection"
  thermal/core: Fix lockdep_assert() warning
  thermal/core: Move the mutex inside the thermal_zone_device_update() function
  thermal/core: Move the thermal zone lock out of the governors
  thermal/governors: Group the thermal zone lock inside the throttle function
  thermal/core: Rework the monitoring a bit
  thermal/core: Rearm the monitoring only one time
  thermal/drivers/qcom/spmi-adc-tm5: Remove unnecessary print function dev_err()
  thermal/of: Remove old OF code
  thermal/core: Move set_trip_temp ops to the sysfs code
  thermal/drivers/samsung: Switch to new of thermal API
  regulator/drivers/max8976: Switch to new of thermal API
  Input: sun4i-ts - switch to new of thermal API
  iio/drivers/sun4i_gpadc: Switch to new of thermal API
  hwmon/drivers/core: Switch to new of thermal API
  hwmon: pm_bus: core: Switch to new of thermal API
  ata/drivers/ahci_imx: Switch to new of thermal API
  ...

2022-10-01  net/mlx5e: xsk: Use queue indices starting from 0 for XSK queues  [Maxim Mikityanskiy, 14 files, -205/+52]
In the initial implementation of XSK in mlx5e, XSK RQs coexisted with
regular RQs in the same channel. The main idea was to allow RSS to work
the same for regular traffic, without the need to reconfigure RSS to
exclude XSK queues.

However, this scheme didn't prove to be beneficial, mainly because of
incompatibility with other vendors. Some tools don't properly support
using higher indices for XSK queues, some tools get confused by the
double amount of RQs exposed in sysfs. Some use cases are purely XSK,
and allocating the same amount of unused regular RQs is a waste of
resources.

This commit changes the queuing scheme to the standard one, where XSK
RQs replace regular RQs on the channels where XSK sockets are open. Two
RQs still exist in the channel to allow failsafe disable of XSK, but
only one is exposed at a time. The next commit will achieve the desired
memory save by flushing the buffers when the regular RQ is unused.

As a result of this transition:

1. It's possible to use RSS contexts over XSK RQs.
2. It's possible to dedicate all queues to XSK.
3. When XSK RQs coexist with regular RQs, the admin should make sure no
   unwanted traffic goes into XSK RQs by either excluding them from RSS
   or setting up the XDP program to return XDP_PASS for non-XSK traffic
   (see the sketch after this entry).
4. When using a mixed fleet of mlx5e devices and other netdevs, the
   same configuration can be applied. If the application supports the
   fallback to copy mode on unsupported drivers, it will work too.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

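For point 3 above, a minimal XDP program of the kind an application
would load looks like the following sketch, assuming an XSKMAP named
xsks_map keyed by RX queue index, and a kernel that supports the
fallback-action flags of bpf_redirect_map():

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Assumed XSKMAP: one entry per RX queue with an XSK socket bound. */
    struct {
            __uint(type, BPF_MAP_TYPE_XSKMAP);
            __uint(max_entries, 64);
            __type(key, __u32);
            __type(value, __u32);
    } xsks_map SEC(".maps");

    SEC("xdp")
    int steer_to_xsk(struct xdp_md *ctx)
    {
            /* Queues with an XSK socket get redirected; the XDP_PASS
             * flag makes every other queue fall through to the regular
             * stack, keeping unwanted traffic out of XSK RQs.
             */
            return bpf_redirect_map(&xsks_map, ctx->rx_queue_index,
                                    XDP_PASS);
    }

    char _license[] SEC("license") = "GPL";
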
2022-10-01  net/mlx5e: Introduce the mlx5e_flush_rq function  [Maxim Mikityanskiy, 3 files, -24/+29]
Add a function to flush an RQ: clean up descriptors, release pages and
reset the RQ. This procedure is used by the recovery flow, and it will
also be used in a following commit to free some memory when switching a
channel to the XSK mode.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: xsk: Support XDP metadata on XSK RQs  [Maxim Mikityanskiy, 1 file, -8/+12]
Add support for XDP metadata on XSK RQs for cross-program
communication. The driver no longer calls xdp_set_data_meta_invalid and
copies the metadata to a newly allocated SKB on XDP_PASS.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Optimize RQ page deallocation  [Maxim Mikityanskiy, 2 files, -19/+24]
mlx5e_free_rx_mpwqe loops over all pages of a MPWQE, calling
mlx5e_page_release for the ones that are not scheduled for XDP_TX or
XDP_REDIRECT; and mlx5e_page_release checks whether it's an XSK RQ or a
regular one for each page/XSK frame. This check can be moved outside
the loop to reduce the number of branches.

mlx5e_free_rx_wqe loops over all fragments, calling mlx5e_page_release
for the ones that are last in a page; and mlx5e_page_release checks
whether it's an XSK RQ or a regular one for each fragment. Using the
fact that XSK doesn't support multiple fragments, it can be optimized
for both XSK and regular usages:

1. Make an early check for XSK and call its deallocator directly,
   saving 3 branches (loop condition, frag->last_in_page and selection
   of deallocator).
2. Call the regular deallocator directly in the non-XSK case, saving a
   branch per fragment, except the first one.

After the changes, mlx5e_page_release is removed, as there are no
callers left.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Call mlx5e_page_release_dynamic directly where possible  [Maxim Mikityanskiy, 1 file, -16/+4]
mlx5e_page_release calls the appropriate deallocator depending on
whether it's an XSK RQ or a regular one. Some flows that call this
function are not compatible with XSK, so they can call the non-XSK
deallocator directly to save a branch.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Use non-XSK page allocator in SHAMPO  [Maxim Mikityanskiy, 1 file, -11/+1]
The SHAMPO flow is not compatible with XSK, so it can call the page
pool allocator directly, saving a branch.

mlx5e_page_alloc is removed, as it's no longer used in any flow.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: xsk: Use xsk_buff_alloc_batch on striding RQ  [Maxim Mikityanskiy, 4 files, -48/+106]
XSK provides a function to allocate frames in batches for more
efficient processing. This commit starts using this function on
striding RQ and creates an optimized flow for XSK. A side effect is an
opportunity to optimize the regular RX flow by dropping branching for
XSK cases.

Performance improvement is up to 6.4% in the aligned mode and up to
7.5% in the unaligned mode.

Aligned mode, 2048-byte frames: 12.9 Mpps -> 13.8 Mpps
Aligned mode, 4096-byte frames: 11.8 Mpps -> 12.5 Mpps
Unaligned mode, 2048-byte frames: 11.9 Mpps -> 12.8 Mpps
Unaligned mode, 3072-byte frames: 11.4 Mpps -> 12.1 Mpps
Unaligned mode, 4096-byte frames: 11.0 Mpps -> 11.2 Mpps

CPU: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: xsk: Use xsk_buff_alloc_batch on legacy RQ  [Maxim Mikityanskiy, 4 files, -0/+55]
XSK provides a function to allocate frames in batches for more
efficient processing. This commit starts using this function on legacy
RQ, adding a special case for XSK. The new branch introduced basically
replaces the branch that was removed from the same place a few commits
before.

A check is made that DMA sync is not needed, because the batching
allocator falls back to returning one frame when DMA sync is needed,
and that case is best handled by the loop in the standard path.

Performance improvement is up to 8% in the aligned mode and up to 9% in
the unaligned mode.

Aligned mode, 2048-byte frames: 12.8 Mpps -> 13.5 Mpps
Aligned mode, 4096-byte frames: 11.5 Mpps -> 12.4 Mpps
Unaligned mode, 2048-byte frames: 12.2 Mpps -> 13.4 Mpps
Unaligned mode, 3072-byte frames: 11.6 Mpps -> 12.5 Mpps
Unaligned mode, 4096-byte frames: 11.2 Mpps -> 12.2 Mpps

CPU: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

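A hedged sketch of what a batched XSK refill on legacy RQ can look
like: xsk_buff_alloc_batch() and xsk_buff_xdp_get_frame_dma() are the
real helpers, while the function itself and the rq field names are
illustrative assumptions:

    #include <net/xdp_sock_drv.h>

    /* Hypothetical batched refill: ask the pool for up to wqe_bulk
     * frames at once, then post each frame's DMA address to its WQE.
     * The pool may return fewer frames; partial batches are fine.
     */
    static u32 mlx5e_xsk_alloc_rx_wqes_sketch(struct mlx5e_rq *rq,
                                              u32 wqe_bulk)
    {
            struct xdp_buff **buffs = rq->wqe.xsk_buffs; /* assumed */
            u32 alloc, i;

            alloc = xsk_buff_alloc_batch(rq->xsk_pool, buffs, wqe_bulk);

            for (i = 0; i < alloc; i++) {
                    dma_addr_t addr =
                            xsk_buff_xdp_get_frame_dma(buffs[i]);

                    /* post addr into the i-th WQE's data segment */
            }

            return alloc;
    }
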
2022-10-01  net/mlx5e: xsk: Split out WQE allocation for legacy XSK RQ  [Maxim Mikityanskiy, 3 files, -4/+34]
Allocation of XSK frames on legacy RQ can be made more efficient with a
specialized routine that relies on certain assumptions: there is only
one fragment, and allocation units (XSK frames) are not shared among
multiple packets. It reduces the number of branches both in the XSK
code and in the regular RQ, because with this approach there is only a
single check whether it's an XSK or regular RQ.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Remove the outer loop when allocating legacy RQ WQEs  [Maxim Mikityanskiy, 2 files, -22/+17]
Legacy RQ WQEs are allocated in a loop in small batches (8 WQEs). As
partial batches are allowed, there is no point in having a loop within
a loop, so the outer loop is removed, and the batch size is increased
up to the total number of WQEs to allocate, still no smaller than 8.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: xsk: Use partial batches in legacy RQ with XSK  [Maxim Mikityanskiy, 1 file, -13/+1]
The previous commit allowed allocating WQE batches in legacy RQ
partially; however, XSK still checks whether there are enough frames in
the fill ring. Remove this check to allow allocating partial batches
with XSK as well.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Use partial batches in legacy RQ  [Maxim Mikityanskiy, 1 file, -18/+21]
Legacy RQ allocates WQEs in batches. If the batch allocation fails, the
pages of the allocated part are released. This commit changes this
behavior to allow using the pages that have already been allocated.

After this change, we need to be careful about indexing
rq->wqe.frags[]. The WQ size is a power of two that divides by wqe_bulk
(8), and the old code used whole bulks, which allowed using indices
[8*K; 8*K+7] without overflowing. Now that the bulks may be partial,
the range can start at any location (not only at 8*K), so we need to
wrap the indices around to avoid out-of-bounds array access.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

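A hedged sketch of the wrap-around indexing this requires;
mlx5_wq_cyc_ctr2ix() is the driver's real helper that masks a counter
with the power-of-two WQ size, while the fragment lookup is simplified
(real code accounts for multiple fragments per WQE):

    /* With partial bulks, a range [ix, ix + bulk) may cross the end of
     * the WQ, so each counter must be wrapped back into [0, wq_sz)
     * before it is used to index rq->wqe.frags[].
     */
    for (i = 0; i < wqe_bulk; i++) {
            u16 j = mlx5_wq_cyc_ctr2ix(wq, ix + i); /* (ix+i) & (sz-1) */
            struct mlx5e_wqe_frag_info *frag = &rq->wqe.frags[j];

            /* ... allocate this WQE's page(s), fill data segments ... */
    }
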
2022-10-01  net/mlx5e: Make the wqe_index_mask calculation more exact  [Maxim Mikityanskiy, 1 file, -1/+20]
The old calculation of wqe_index_mask may give false positives, i.e.
request bulking of pairs of WQEs when it's not strictly needed. For
example, when the first fragment size is equal to PAGE_SIZE, bulking is
not needed, even if the number of fragments is odd. Make the
calculation more exact to cut false positives.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: Introduce wqe_index_mask for legacy RQ  [Maxim Mikityanskiy, 2 files, -4/+22]
When fragments of different WQEs share the same page,
mlx5e_post_rx_wqes must wait until the old WQE stops using the page;
only then can the new WQE allocate the new page. Essentially, it means
that if WQE index i is still in use, the allocation must stop before
`i % bulk`, where bulk is the number of WQEs that may share the same
page.

As bulk is always a power of two, `i % bulk = i & (bulk - 1)`, and the
new wqe_index_mask field will be equal to `bulk - 1`.

At the same time, wqe_bulk remains for optimization purposes and stores
`max(bulk, 8)`, which allows skipping the allocation until there are at
least 8 WQEs free.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

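A small sketch of the two derived fields, under the commit's stated
assumptions (bulk is a power of two; the struct holding the fields is
assumed):

    /* bulk = number of WQEs that may share one page. Since
     * i % bulk == (i & (bulk - 1)) for power-of-two bulk, the mask
     * lets the hot path test "same page group" without a division.
     */
    info->wqe_index_mask = bulk - 1;
    info->wqe_bulk = max_t(u8, 8, bulk); /* skip refills below 8 free */
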
2022-10-01  net/mlx5e: xsk: Drop the check for XSK state in mlx5e_xsk_wakeup  [Maxim Mikityanskiy, 2 files, -4/+1]
The MLX5E_CHANNEL_STATE_XSK flag checked in mlx5e_xsk_wakeup indicates
that XSK queues are open, but not necessarily activated. This check is
not very useful, because:

0. Both XSK setup and netdev state transitions take the same state_lock
   mutex, so they can't happen at the same time.
1. If the netdev is up, xsk_is_bound can return true only when
   MLX5E_CHANNEL_STATE_XSK is set on the corresponding channel.
   mlx5e_xsk_wakeup is only called when xsk_is_bound is true.
2. If the XSK socket is bound, and the netdev is going up or down,
   mlx5e_xsk_wakeup can take one of two branches, depending on the
   return value of napi_if_scheduled_mark_missed:
   2.1. True means one of two things: either NAPI was enabled at this
        point, which means MLX5E_CHANNEL_STATE_XSK was also set; or
        NAPI was disabled, and nothing really happened.
   2.2. False means that NAPI was enabled by this point, which also
        implies MLX5E_CHANNEL_STATE_XSK was set.

Additionally, mlx5e_xsk_wakeup contains a following check for
MLX5E_SQ_STATE_ENABLED on async_icosq, and this flag implies
MLX5E_CHANNEL_STATE_XSK too on XSK channels. As checking this flag
doesn't cut any flows, remove the check.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-10-01  net/mlx5e: xsk: Use mlx5e_trigger_napi_icosq for XSK wakeup  [Maxim Mikityanskiy, 1 file, -3/+1]
mlx5e_xsk_wakeup triggers an IRQ by posting a NOP to async_icosq,
taking a spinlock to protect from concurrent access. There is already a
function that does the same: mlx5e_trigger_napi_icosq. Use this
function in mlx5e_xsk_wakeup.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Clean up and fix error flows in mlx5e_alloc_rq  [Maxim Mikityanskiy, 1 file, -5/+7]
Although mlx5e_rq_free_shampo can be called unconditionally, it belongs
to the MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ case. Move it there to
allow adding more init/cleanup actions to the striding RQ case.

If xdp_rxq_info_reg_mem_model fails, don't forget to destroy the page
pool.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Move repeating clear_bit in mlx5e_rx_reporter_err_rq_cqe_recover  [Maxim Mikityanskiy, 1 file, -5/+2]
The same clear_bit is called in both the error and success flows. Move
the call so it's done only once, and remove the out label.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Split out channel (de)activation in rx_res  [Maxim Mikityanskiy, 1 file, -53/+53]
To decrease the nesting level and reduce duplication of code, create
functions to redirect direct RQTs to the actual RQs or drop_rq, which
are used in the activation and deactivation flows of channels.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: xsk: Remove mlx5e_xsk_page_alloc_pool  [Maxim Mikityanskiy, 2 files, -13/+5]
mlx5e_xsk_page_alloc_pool became a thin wrapper around xsk_buff_alloc.
Drop it and call xsk_buff_alloc directly.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Convert struct mlx5e_alloc_unit to a union  [Maxim Mikityanskiy, 4 files, -32/+30]
struct mlx5e_alloc_unit consists of a single union. Convert it to a
union itself to simplify casting it to struct xdp_buff *, which will be
used to implement XSK batching on striding RQ.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Remove DMA address from mlx5e_alloc_unit  [Maxim Mikityanskiy, 3 files, -25/+40]
mlx5e_alloc_unit stores the DMA address and a pointer to either struct
page (regular RQ) or struct xdp_buff (XSK RQ). This DMA address is
redundant, because when a page or an XSK frame is allocated, the same
address is also stored there. Some flows take the address from struct
mlx5e_alloc_unit, and some take it from struct page or xdp_buff.

This commit removes the address from struct mlx5e_alloc_unit, which
makes it twice as small and improves locality (this struct is used in
an array), also saving on unnecessary stores to the addr field. Almost
all flows know unambiguously whether the DMA address should be taken
from page or from xdp_buff. The exception is the allocation flows,
where a new branch appeared, which will be optimized out in the next
commits.

struct mlx5e_alloc_unit used to be called mlx5e_dma_info.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Rename mlx5e_dma_info to prepare for removal of DMA address  [Maxim Mikityanskiy, 5 files, -111/+123]
The next commit will remove the DMA address from the struct currently
called mlx5e_dma_info, because the same value can be retrieved with
page_pool_get_dma_addr(page) in almost all cases, with the notable
exception of SHAMPO (the HW GRO implementation) that modifies this
address on the fly, after the initial allocation.

To keep the SHAMPO logic intact, struct mlx5e_dma_info remains in the
SHAMPO code, consisting of addr and page (XSK is not compatible with
SHAMPO). The struct used in all other places is renamed to
mlx5e_alloc_unit, allowing the next commit to remove the addr field
without affecting SHAMPO.

The new name means "allocation unit", and it's more appropriate after
the field with the DMA address gets removed.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-09-30  net/mlx5e: Optimize the page cache reducing its size 2x  [Maxim Mikityanskiy, 3 files, -8/+6]
The RX page cache stores dma_info structs, which consist of a pointer
to struct page and a DMA address. In fact, the DMA address is extracted
from struct page using page_pool_get_dma_addr when a page is pushed to
the cache. By moving this call to the point when a page is popped from
the cache, we can avoid storing the DMA address in the cache,
effectively halving its size without losing any functionality.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

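A sketch of the pop path described, using the real
page_pool_get_dma_addr() helper; the cache layout and field names are
assumptions:

    #include <net/page_pool.h>

    /* The cache now stores only struct page pointers; the DMA address
     * is recovered on pop instead of being stored with each entry.
     */
    static struct page *
    mlx5e_page_cache_pop_sketch(struct mlx5e_page_cache *cache,
                                dma_addr_t *addr)
    {
            struct page *page = cache->page_cache[--cache->tail]; /* assumed */

            *addr = page_pool_get_dma_addr(page);
            return page;
    }
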
2022-09-30  net/mlx5e: Fix calculations for ICOSQ size  [Maxim Mikityanskiy, 1 file, -1/+13]
WQEs must not cross page boundaries; they are padded with NOPs if they
don't fit the page. mlx5e_mpwrq_total_umr_wqebbs doesn't take this
padding into account, risking not reserving enough space.

The padding is not straightforward to add to this calculation, because
WQEs of different sizes may be mixed together in the queue. If each
page ends with a big WQE that doesn't fit and requires at most its size
minus 1 WQEBB of padding, the total space can be much bigger than in
the case when smaller WQEs take advantage of this padding.

Replace the wrong exact calculation with the following estimation. Each
padding can be at most the size of the maximum WQE used in the queue
minus one WQEBB. Let's call the rest of the page "useful space". If we
divide the total size of all needed WQEs by this useful space, rounding
up, we'll get the number of pages, which is enough to contain all these
WQEs. It's correct, because every WQE that would land on the boundary
between two blocks of useful space starts in the useful space of one
page and ends in the padding of the same page, while our estimation
reserves space for its tail in the next block, making the estimation no
smaller than the real space occupied in the queue.

The code actually uses a looser estimation: instead of taking the
maximum size of all used WQE types minus 1 WQEBB, it takes the maximum
hardware size of a WQE. It's done for simplicity and extensibility.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

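The estimation reduces to one DIV_ROUND_UP; a sketch with assumed
variable names (max_wqebbs_per_wqe standing for the maximum hardware
WQE size in WQEBBs, as the commit chooses for simplicity):

    /* Useful WQEBBs per page: the worst case loses at most one maximal
     * WQE minus one WQEBB to end-of-page padding (NOPs).
     */
    u32 wqebbs_per_page = PAGE_SIZE / MLX5_SEND_WQE_BB;
    u32 useful_per_page = wqebbs_per_page - (max_wqebbs_per_wqe - 1);
    u32 pages = DIV_ROUND_UP(total_wqebbs, useful_per_page);
    u32 icosq_wqebbs = pages * wqebbs_per_page; /* space to reserve */
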
2022-09-30  net/mlx5e: xsk: Use KSM for unaligned XSK  [Maxim Mikityanskiy, 8 files, -100/+185]
UMR MTTs used in striding RQ have certain alignment requirements. While
it's guaranteed to work when UMR pages are aligned to the UMR page
size, in practice it works when UMR pages are aligned to 8 bytes.
However, that's still not enough flexibility for the unaligned mode of
XSK.

This patch leverages KSM to map UMR pages without alignment
requirements when unaligned XSK is active. The downside is that KSM
entries are twice as big as MTTs, which limits the maximum WQE size, so
regular RQs and aligned XSK continue using MTTs.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>