Age | Commit message | Author | Files | Lines
2024-10-09net: pcs: xpcs: correctly place DW_VR_MII_DIG_CTRL1_2G5_ENRussell King (Oracle)1-2/+1
Place DW_VR_MII_DIG_CTRL1_2G5_EN with the other DW_VR_MII_DIG_CTRL1 definitions rather than in the middle of a register list. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: use dev_*() to print messagesRussell King (Oracle)1-22/+22
Use the dev_*() family of functions to print all messages from the XPCS driver so we know which instance issues the messages. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
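As a rough illustration of the change described above, a message printed through the dev_*() helpers carries the device name, so messages from different XPCS instances can be told apart. The function and message below are a minimal sketch, not the driver's actual code; it assumes the xpcs->mdiodev pointer that the XPCS instance holds for its MDIO device.

static void xpcs_report_unsupported(struct dw_xpcs *xpcs, phy_interface_t interface)
{
	/* dev_warn() prefixes the message with the mdiodev's name */
	dev_warn(&xpcs->mdiodev->dev, "unsupported interface %s\n",
		 phy_modes(interface));
}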
2024-10-09net: pcs: xpcs: convert to use read_poll_timeout()Russell King (Oracle)1-10/+7
Convert the xpcs driver to use read_poll_timeout() when waiting for reset to complete, rather than open-coding this. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
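For reference, a minimal sketch of what such a conversion typically looks like; the register, bit and xpcs_read() signature here are assumptions for illustration, not the driver's exact code. read_poll_timeout() re-reads the value until the condition holds or the timeout expires, replacing an open-coded loop.

#include <linux/iopoll.h>

static int xpcs_poll_reset_example(struct dw_xpcs *xpcs, int dev)
{
	int val, ret;

	/* Poll every 50ms, give up after 600ms; BMCR_RESET self-clears */
	ret = read_poll_timeout(xpcs_read, val,
				val < 0 || !(val & BMCR_RESET),
				50000, 600000, true, xpcs, dev, MII_BMCR);
	if (ret)
		return ret;

	return val < 0 ? val : 0;
}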
2024-10-09net: pcs: xpcs: add _modify() accessorsRussell King (Oracle)4-142/+104
The xpcs driver does a lot of read-modify-write operations on registers, which leads to long-winded code to read the register, check whether the read was successful, modify the value in some way, and then write it back. We have a mdiodev _modify() accessor that encapsulates this, and does the register modification under the MDIO bus lock ensuring that the modification is atomic with respect to other bus operations. Convert the xpcs driver to use this accessor. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
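To illustrate the shape of the change, here is a hedged sketch of the open-coded read-modify-write pattern and its _modify() equivalent; the mdiodev_c45_*() helper names and the register arguments are assumptions made for the example.

static int xpcs_rmw_open_coded(struct mdio_device *mdiodev, int devad,
			       u32 reg, u16 mask, u16 set)
{
	int ret = mdiodev_c45_read(mdiodev, devad, reg);

	if (ret < 0)
		return ret;

	return mdiodev_c45_write(mdiodev, devad, reg, (ret & ~mask) | set);
}

static int xpcs_rmw_with_modify(struct mdio_device *mdiodev, int devad,
				u32 reg, u16 mask, u16 set)
{
	/* read-modify-write done in one call, under the MDIO bus lock */
	return mdiodev_c45_modify(mdiodev, devad, reg, mask, set);
}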
2024-10-09net: pcs: xpcs: use FIELD_PREP() and FIELD_GET()Russell King (Oracle)2-12/+6
Convert xpcs to use the bitfield macros rather than defining the bitfield shifts and open-coding the insertion and extraction of these bitfields. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
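As a small, self-contained sketch of the pattern (the mask and field below are made up, not the XPCS register layout):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define EXAMPLE_SPEED_MASK	GENMASK(5, 2)	/* assumed field position */

static u16 example_set_speed(u16 reg, u16 speed)
{
	/* FIELD_PREP() shifts 'speed' into the bits covered by the mask */
	reg &= ~EXAMPLE_SPEED_MASK;
	return reg | FIELD_PREP(EXAMPLE_SPEED_MASK, speed);
}

static u16 example_get_speed(u16 reg)
{
	/* FIELD_GET() extracts and right-shifts the same bits */
	return FIELD_GET(EXAMPLE_SPEED_MASK, reg);
}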
2024-10-09net: pcs: xpcs: move searching ID list out of lineRussell King (Oracle)1-20/+21
Move the searching of the physical ID out of xpcs_create() and into its own xpcs_identify() function, which makes it self-contained. This reduces the complexity in xpcs_create(), making it easier to follow, rather than having a lot of once-run code in the big for() loop. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: rename xpcs_get_id()Russell King (Oracle)1-2/+2
Rename xpcs_get_id() to xpcs_read_id() which more closely reflects the purpose of this function. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: move definition of struct dw_xpcs to private headerRussell King (Oracle)2-17/+19
There should be no reason for anything outside the XPCS code to know the contents of struct dw_xpcs - this is a private structure to XPCS. Move the definition to the private pcs-xpcs.h header, leaving a declaration in the global pcs/pcs-xpcs.h Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: provide a helper to get the phylink pcs given xpcsRussell King (Oracle)3-1/+8
Provide a helper to return the pointer to the phylink_pcs struct given a valid xpcs pointer. This will be necessary when we make struct dw_xpcs private to pcs-xpcs.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: pass xpcs instead of xpcs->id to xpcs_find_compat()Russell King (Oracle)1-7/+7
xpcs_find_compat() is now always passed xpcs->id. Rather than always dereferencing this in the caller, move it into xpcs_find_compat(), thus making this function consistent with most of the other xpcs functions in taking an xpcs pointer. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: don't use array for interfaceRussell King (Oracle)1-57/+14
Currently, xpcs uses an array of interfaces that each "compat" entry supports. When looking up the compat entry for an interface, we iterate over the compat entries and then over each interface. Since each compat entry only has a single interface in its interfaces array, replace the array with a single member in the compat structure. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-09net: pcs: xpcs: remove dw_xpcs_compat enumRussell King (Oracle)1-44/+25
There is no reason for the struct dw_xpcs_compat arrays to be a fixed size other than the way we iterate over them. The index into the array isn't used for anything, and having them fixed size needlessly wastes space. Remove the enum that defines their size, and instead use an empty array entry (with NULL ->supported) to mark the end of the array. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
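A hedged sketch of what iterating a sentinel-terminated array looks like; the struct fields reflect the description above (a single ->interface member after the previous patch, NULL ->supported marking the end) but are otherwise illustrative.

static const struct dw_xpcs_compat *
xpcs_find_compat_sketch(const struct dw_xpcs_compat *compat,
			phy_interface_t interface)
{
	/* walk until the terminating entry with a NULL ->supported pointer */
	for (; compat->supported; compat++)
		if (compat->interface == interface)
			return compat;

	return NULL;
}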
2024-10-09xfs: fix a typoAndrew Kreimer1-1/+1
Fix a typo in comments. Signed-off-by: Andrew Kreimer <algonell@gmail.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
2024-10-09xfs: don't free cowblocks from under dirty pagecache on unshareBrian Foster3-7/+23
fallocate unshare mode explicitly breaks extent sharing. When an unshare command completes, it checks the data fork for any remaining shared extents to determine whether the reflink inode flag and COW fork preallocation can be removed. This logic doesn't consider in-core pagecache and I/O state, however, which means we can unsafely remove COW fork blocks that are still needed under certain conditions. For example, consider the following command sequence: xfs_io -fc "pwrite 0 1k" -c "reflink <file> 0 256k 1k" \ -c "pwrite 0 32k" -c "funshare 0 1k" <file> This allocates a data block at offset 0, shares it, and then overwrites it with a larger buffered write. The overwrite triggers COW fork preallocation, 32 blocks by default, which maps the entire 32k write to delalloc in the COW fork. All but the shared block at offset 0 remain hole mapped in the data fork. The unshare command redirties and flushes the folio at offset 0, removing the only shared extent from the inode. Since the inode no longer maps shared extents, unshare purges the COW fork before the remaining 28k has been written back. This leaves dirty pagecache backed by holes, which writeback quietly skips, thus leaving clean, non-zeroed pagecache over holes in the file. To verify, fiemap shows holes in the first 32k of the file and reads return different data across a remount: $ xfs_io -c "fiemap -v" <file> <file>: EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS ... 1: [8..511]: hole 504 ... $ xfs_io -c "pread -v 4k 8" <file> 00001000: cd cd cd cd cd cd cd cd ........ $ umount <mnt>; mount <dev> <mnt> $ xfs_io -c "pread -v 4k 8" <file> 00001000: 00 00 00 00 00 00 00 00 ........ To avoid this problem, make unshare follow the same rules used for background cowblock scanning and never purge the COW fork for inodes with dirty pagecache or in-flight I/O. Fixes: 46afb0628b86347 ("xfs: only flush the unshared range in xfs_reflink_unshare") Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
2024-10-09net: ibm: emac: mal: fix wrong gotoRosen Penev1-1/+1
dcr_map is called in the previous if and therefore needs to be unmapped. Fixes: 1ff0fcfcb1a6 ("ibm_newemac: Fix new MAL feature handling") Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20241007235711.5714-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: microchip_t1: SQI support for LAN887xTarun Alle1-0/+171
Add support for measuring Signal Quality Index for the LAN887x T1 PHY. Signal Quality Index (SQI) is a measure of Link Channel Quality from 0 to 7, with 7 as the best. By default, a link loss event shall indicate an SQI of 0. Signed-off-by: Tarun Alle <Tarun.Alle@microchip.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241007063943.3233-1-tarun.alle@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
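For context, phylib exposes SQI through the .get_sqi and .get_sqi_max driver hooks; the register address and bit field in the sketch below are placeholders, not the LAN887x register layout.

static int lan887x_get_sqi_sketch(struct phy_device *phydev)
{
	/* 0x1234 is a placeholder register; SQI occupies 3 bits (0..7) */
	int ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, 0x1234);

	if (ret < 0)
		return ret;

	return FIELD_GET(GENMASK(2, 0), ret);
}

static int lan887x_get_sqi_max_sketch(struct phy_device *phydev)
{
	return 7;	/* best possible link channel quality */
}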
2024-10-09Merge branch 'net-phy-marvell-88q2xxx-enable-auto-negotiation-for-mv88q2110'Jakub Kicinski1-53/+71
Niklas Söderlund says: ==================== net: phy: marvell-88q2xxx: Enable auto negotiation for mv88q2110 This series enables auto negotiation for the mv88q2110 device. Previously this feature has been disabled for mv88q2110, while enabled for other devices supported by this driver. The initial driver implementation states this is because the configuration sequence provided by the vendor did not work. By comparing the initialization sequence of other devices this driver supports and the out-of-tree PHY driver for mv88q2110 found in the Renesas BSP [1] I was able to figure out a working configuration. As I have no access to the datasheets of either of these devices it would be super if someone who has could sanity check the initialization sequence. With this series I'm able to auto negotiate both 1000Mbps and 100Mbps links without issue. # ethtool eth0 Settings for eth0: Supported ports: [ ] Supported link modes: 100baseT1/Full 1000baseT1/Full Supported pause frame use: Symmetric Receive-only Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 100baseT1/Full 1000baseT1/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 100baseT1/Full 1000baseT1/Full Link partner advertised pause frame use: No Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s Duplex: Full Auto-negotiation: on master-slave cfg: preferred master master-slave status: slave Port: Twisted Pair PHYAD: 0 Transceiver: external MDI-X: Unknown Link detected: yes SQI: 15/15 And the performance is good too. Without this change I was not able to manually configure a 1000Mbps link, only 100Mbps ones. So this gives a huge performance boost for my use-case. [ 5] local 10.1.0.2 port 5201 connected to 10.1.0.1 port 38346 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-1.00 sec 96.8 MBytes 812 Mbits/sec 0 469 KBytes [ 5] 1.00-2.00 sec 94.3 MBytes 791 Mbits/sec 0 469 KBytes [ 5] 2.00-3.00 sec 96.1 MBytes 806 Mbits/sec 0 469 KBytes [ 5] 3.00-4.00 sec 98.3 MBytes 825 Mbits/sec 0 469 KBytes [ 5] 4.00-5.00 sec 98.4 MBytes 825 Mbits/sec 0 469 KBytes [ 5] 5.00-6.00 sec 98.4 MBytes 826 Mbits/sec 0 469 KBytes [ 5] 6.00-7.00 sec 98.9 MBytes 830 Mbits/sec 0 469 KBytes [ 5] 7.00-8.00 sec 91.7 MBytes 769 Mbits/sec 0 469 KBytes [ 5] 8.00-9.00 sec 99.4 MBytes 834 Mbits/sec 0 747 KBytes [ 5] 9.00-10.00 sec 101 MBytes 851 Mbits/sec 0 747 KBytes Patches 1/3 and 2/3 are preparation patches that align and move functions around as the mv88q2110 code paths can now reuse much of what is done for mv88q2220. Patch 3/3 adds the new initialization sequence and removes the auto negotiation limit for mv88q2110. 1. https://github.com/renesas-rcar/linux-bsp/commit/2a1f07d0e722a18188cfe62842b61f2fbc0ba812 ==================== Link: https://patch.msgid.link/20241005112412.544360-1-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: marvell-88q2xxx: Enable auto negotiation for mv88q2110Niklas Söderlund1-10/+36
The initial marvell-88q2xxx driver only supported the Marvell 88Q2110 PHY without auto negotiation support. The documented reason is that the provided initialization sequence did not work. Now a method to enable auto negotiation has been found by comparing the initialization of other supported devices and an out-of-tree PHY driver. Perform the minimal needed initialization of the PHY to get auto negotiation working and remove the limitation that disables the auto negotiation feature for the mv88q2110 device. With this change a 1000Mbps full duplex link is able to be negotiated between two mv88q2110 devices and the link works perfectly. The other side also reflects the manually configured settings of the master device. # ethtool eth0 Settings for eth0: Supported ports: [ ] Supported link modes: 100baseT1/Full 1000baseT1/Full Supported pause frame use: Symmetric Receive-only Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 100baseT1/Full 1000baseT1/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 100baseT1/Full 1000baseT1/Full Link partner advertised pause frame use: No Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s Duplex: Full Auto-negotiation: on master-slave cfg: preferred master master-slave status: slave Port: Twisted Pair PHYAD: 0 Transceiver: external MDI-X: Unknown Link detected: yes SQI: 15/15 Before this change I was not able to manually configure a 1000Mbps link, only a 100Mbps link, so this change provides an improvement in performance for this device. [ 5] local 10.1.0.2 port 5201 connected to 10.1.0.1 port 38346 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-1.00 sec 96.8 MBytes 812 Mbits/sec 0 469 KBytes [ 5] 1.00-2.00 sec 94.3 MBytes 791 Mbits/sec 0 469 KBytes [ 5] 2.00-3.00 sec 96.1 MBytes 806 Mbits/sec 0 469 KBytes [ 5] 3.00-4.00 sec 98.3 MBytes 825 Mbits/sec 0 469 KBytes [ 5] 4.00-5.00 sec 98.4 MBytes 825 Mbits/sec 0 469 KBytes [ 5] 5.00-6.00 sec 98.4 MBytes 826 Mbits/sec 0 469 KBytes [ 5] 6.00-7.00 sec 98.9 MBytes 830 Mbits/sec 0 469 KBytes [ 5] 7.00-8.00 sec 91.7 MBytes 769 Mbits/sec 0 469 KBytes [ 5] 8.00-9.00 sec 99.4 MBytes 834 Mbits/sec 0 747 KBytes [ 5] 9.00-10.00 sec 101 MBytes 851 Mbits/sec 0 747 KBytes Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Tested-by: Stefan Eichenberger <eichest@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241005112412.544360-4-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: marvell-88q2xxx: Make register writer function genericNiklas Söderlund1-20/+20
In preparation for adding auto negotiation support to mv88q2110, move and rename the helper function used to write an array of register values to the PHY. Just as for mv88q2220 devices, this helper will be needed for the initial configuration of the mv88q2110 to support auto negotiation. The function is moved verbatim; there is no change in behavior. Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Tested-by: Dimitri Fedrau <dima.fedrau@gmail.com> Tested-by: Stefan Eichenberger <eichest@gmail.com> Link: https://patch.msgid.link/20241005112412.544360-3-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: marvell-88q2xxx: Align soft reset for mv88q2110 and mv88q2220Niklas Söderlund1-34/+26
The soft reset implementations for mv88q2110 and mv88q2220 differ as the latter needs to consider that auto negotiation is supported on mv88q2220 devices. In preparation for enabling auto negotiation on mv88q2110, merge the two reset functions into a device-generic one. The mv88q2220 behavior is kept as is but extended to wait for the reset bit to be cleared before continuing, as was done previously on mv88q2110. Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Tested-by: Dimitri Fedrau <dima.fedrau@gmail.com> Tested-by: Stefan Eichenberger <eichest@gmail.com> Link: https://patch.msgid.link/20241005112412.544360-2-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09fsl/fman: Fix a typoAndrew Kreimer1-1/+1
Fix a typo in comments: bellow -> below. Reported-by: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Kreimer <algonell@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241006130829.13967-1-algonell@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: aquantia: allow forcing order of MDI pairsDaniel Golle1-0/+33
Despite supporting Auto MDI-X, it looks like Aquantia only supports swapping pair (1,2) with pair (3,6) like it used to be for MDI-X on 100MBit/s networks. When all 4 pairs are in use (for 1000MBit/s or faster) the link does not come up if the pair order is not configured correctly, either using the MDI_CFG pin or using the "PMA Receive Reserved Vendor Provisioning 1" register. Normally, the order of MDI pairs being either ABCD or DCBA is configured by pulling the MDI_CFG pin. However, some hardware designs require overriding the value configured by that bootstrap pin. The PHY allows doing that by setting a bit in the "PMA Receive Reserved Vendor Provisioning 1" register which allows ignoring the state of the MDI_CFG pin and another bit configuring whether the order of MDI pairs should be normal (ABCD) or reverse (DCBA). Pair polarity is not affected and remains identical in both settings. Introduce property "marvell,mdi-cfg-order" which allows forcing either normal or reverse order of the MDI pairs from DT. If the property isn't present, the behavior is unchanged and MDI pair order configuration is untouched (ie. either the result of MDI_CFG pin pull-up/pull-down, or pair order override already configured by the bootloader before Linux is started). Forcing normal pair order is required on the Adtran SDG-8733A Wi-Fi 7 residential gateway. Signed-off-by: Daniel Golle <daniel@makrotopia.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/9ed760ff87d5fc456f31e407ead548bbb754497d.1728058550.git.daniel@makrotopia.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09dt-bindings: net: marvell,aquantia: add property to override MDI_CFGDaniel Golle1-0/+6
Usually the MDI pair order reversal configuration is defined by bootstrap pin MDI_CFG. Some designs, however, require overriding the MDI pair order and force either normal or reverse order. Add property 'marvell,mdi-cfg-order' to allow forcing either normal or reverse order of the MDI pairs. Signed-off-by: Daniel Golle <daniel@makrotopia.org> Reviewed-by: Rob Herring (Arm) <robh@kernel.org> Link: https://patch.msgid.link/7ccf25d6d7859f1ce9983c81a2051cfdfb0e0a99.1728058550.git.daniel@makrotopia.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net/sched: accept TCA_STAB only for root qdiscEric Dumazet2-2/+6
Most qdiscs maintain their backlog using qdisc_pkt_len(skb) on the assumption it is invariant between the enqueue() and dequeue() handlers. Unfortunately syzbot can crash a host rather easily using a TBF + SFQ combination, with an STAB on SFQ [1] We can't support TCA_STAB on arbitrary level, this would require to maintain per-qdisc storage. [1] [ 88.796496] BUG: kernel NULL pointer dereference, address: 0000000000000000 [ 88.798611] #PF: supervisor read access in kernel mode [ 88.799014] #PF: error_code(0x0000) - not-present page [ 88.799506] PGD 0 P4D 0 [ 88.799829] Oops: Oops: 0000 [#1] SMP NOPTI [ 88.800569] CPU: 14 UID: 0 PID: 2053 Comm: b371744477 Not tainted 6.12.0-rc1-virtme #1117 [ 88.801107] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014 [ 88.801779] RIP: 0010:sfq_dequeue (net/sched/sch_sfq.c:272 net/sched/sch_sfq.c:499) sch_sfq [ 88.802544] Code: 0f b7 50 12 48 8d 04 d5 00 00 00 00 48 89 d6 48 29 d0 48 8b 91 c0 01 00 00 48 c1 e0 03 48 01 c2 66 83 7a 1a 00 7e c0 48 8b 3a <4c> 8b 07 4c 89 02 49 89 50 08 48 c7 47 08 00 00 00 00 48 c7 07 00 All code ======== 0: 0f b7 50 12 movzwl 0x12(%rax),%edx 4: 48 8d 04 d5 00 00 00 lea 0x0(,%rdx,8),%rax b: 00 c: 48 89 d6 mov %rdx,%rsi f: 48 29 d0 sub %rdx,%rax 12: 48 8b 91 c0 01 00 00 mov 0x1c0(%rcx),%rdx 19: 48 c1 e0 03 shl $0x3,%rax 1d: 48 01 c2 add %rax,%rdx 20: 66 83 7a 1a 00 cmpw $0x0,0x1a(%rdx) 25: 7e c0 jle 0xffffffffffffffe7 27: 48 8b 3a mov (%rdx),%rdi 2a:* 4c 8b 07 mov (%rdi),%r8 <-- trapping instruction 2d: 4c 89 02 mov %r8,(%rdx) 30: 49 89 50 08 mov %rdx,0x8(%r8) 34: 48 c7 47 08 00 00 00 movq $0x0,0x8(%rdi) 3b: 00 3c: 48 rex.W 3d: c7 .byte 0xc7 3e: 07 (bad) ... Code starting with the faulting instruction =========================================== 0: 4c 8b 07 mov (%rdi),%r8 3: 4c 89 02 mov %r8,(%rdx) 6: 49 89 50 08 mov %rdx,0x8(%r8) a: 48 c7 47 08 00 00 00 movq $0x0,0x8(%rdi) 11: 00 12: 48 rex.W 13: c7 .byte 0xc7 14: 07 (bad) ... [ 88.803721] RSP: 0018:ffff9a1f892b7d58 EFLAGS: 00000206 [ 88.804032] RAX: 0000000000000000 RBX: ffff9a1f8420c800 RCX: ffff9a1f8420c800 [ 88.804560] RDX: ffff9a1f81bc1440 RSI: 0000000000000000 RDI: 0000000000000000 [ 88.805056] RBP: ffffffffc04bb0e0 R08: 0000000000000001 R09: 00000000ff7f9a1f [ 88.805473] R10: 000000000001001b R11: 0000000000009a1f R12: 0000000000000140 [ 88.806194] R13: 0000000000000001 R14: ffff9a1f886df400 R15: ffff9a1f886df4ac [ 88.806734] FS: 00007f445601a740(0000) GS:ffff9a2e7fd80000(0000) knlGS:0000000000000000 [ 88.807225] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 88.807672] CR2: 0000000000000000 CR3: 000000050cc46000 CR4: 00000000000006f0 [ 88.808165] Call Trace: [ 88.808459] <TASK> [ 88.808710] ? __die (arch/x86/kernel/dumpstack.c:421 arch/x86/kernel/dumpstack.c:434) [ 88.809261] ? page_fault_oops (arch/x86/mm/fault.c:715) [ 88.809561] ? exc_page_fault (./arch/x86/include/asm/irqflags.h:26 ./arch/x86/include/asm/irqflags.h:87 ./arch/x86/include/asm/irqflags.h:147 arch/x86/mm/fault.c:1489 arch/x86/mm/fault.c:1539) [ 88.809806] ? asm_exc_page_fault (./arch/x86/include/asm/idtentry.h:623) [ 88.810074] ? 
sfq_dequeue (net/sched/sch_sfq.c:272 net/sched/sch_sfq.c:499) sch_sfq [ 88.810411] sfq_reset (net/sched/sch_sfq.c:525) sch_sfq [ 88.810671] qdisc_reset (./include/linux/skbuff.h:2135 ./include/linux/skbuff.h:2441 ./include/linux/skbuff.h:3304 ./include/linux/skbuff.h:3310 net/sched/sch_generic.c:1036) [ 88.810950] tbf_reset (./include/linux/timekeeping.h:169 net/sched/sch_tbf.c:334) sch_tbf [ 88.811208] qdisc_reset (./include/linux/skbuff.h:2135 ./include/linux/skbuff.h:2441 ./include/linux/skbuff.h:3304 ./include/linux/skbuff.h:3310 net/sched/sch_generic.c:1036) [ 88.811484] netif_set_real_num_tx_queues (./include/linux/spinlock.h:396 ./include/net/sch_generic.h:768 net/core/dev.c:2958) [ 88.811870] __tun_detach (drivers/net/tun.c:590 drivers/net/tun.c:673) [ 88.812271] tun_chr_close (drivers/net/tun.c:702 drivers/net/tun.c:3517) [ 88.812505] __fput (fs/file_table.c:432 (discriminator 1)) [ 88.812735] task_work_run (kernel/task_work.c:230) [ 88.813016] do_exit (kernel/exit.c:940) [ 88.813372] ? trace_hardirqs_on (kernel/trace/trace_preemptirq.c:58 (discriminator 4)) [ 88.813639] ? handle_mm_fault (./arch/x86/include/asm/irqflags.h:42 ./arch/x86/include/asm/irqflags.h:97 ./arch/x86/include/asm/irqflags.h:155 ./include/linux/memcontrol.h:1022 ./include/linux/memcontrol.h:1045 ./include/linux/memcontrol.h:1052 mm/memory.c:5928 mm/memory.c:6088) [ 88.813867] do_group_exit (kernel/exit.c:1070) [ 88.814138] __x64_sys_exit_group (kernel/exit.c:1099) [ 88.814490] x64_sys_call (??:?) [ 88.814791] do_syscall_64 (arch/x86/entry/common.c:52 (discriminator 1) arch/x86/entry/common.c:83 (discriminator 1)) [ 88.815012] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130) [ 88.815495] RIP: 0033:0x7f44560f1975 Fixes: 175f9c1bba9b ("net_sched: Add size table for qdiscs") Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Link: https://patch.msgid.link/20241007184130.3960565-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09Merge branch 'selftests-mlxsw-stabilize-red-tests'Jakub Kicinski2-16/+20
Petr Machata says: ==================== selftests: mlxsw: Stabilize RED tests Tweak the mlxsw-specific RED selftests to increase stability on Spectrum-3 and Spectrum-4 machines. ==================== Link: https://patch.msgid.link/cover.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09selftests: mlxsw: sch_red_core: Lower TBF ratePetr Machata1-3/+3
The RED test uses a pair of TBF shapers. The first to get a predictably-sized stream of traffic, and the second to get a 100% saturated chokepoint. To this chokepoint it injects individual packets. Because the chokepoint is saturated, these additional packets go straight to the backlog. This allows the test to check RED behavior across various queue sizes. The shapers are rated at 1Gbps, for historical reasons (before mlxsw supported TBF offload, the test used port speed to create the chokepoints). Machines with a low-power CPU may have trouble consistently generating 1Gbps of traffic, and the test then spuriously fails. Instead, drop the rate to 200Mbps (Spectrum has a guaranteed shaper rate granularity of 200Mbps, so anything lower is not guaranteed to work well). Because that means fewer packets will be mirrored in the ECN-mark test, adjust the passing condition accordingly. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/c6712f9c5de75ae0bc2ab3d8ea7d92aaaf93af95.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09selftests: mlxsw: sch_red_core: Send more packets for drop testsPetr Machata1-7/+9
This test works by injecting a couple of packets into a port with a maxed-out queue and checking whether a corresponding number of packets were dropped. This has worked well on Spectrum<4, but on Spectrum-4 it has been noisy. This is in line with the observation that on Spectrum-4, queue size tends to fluctuate more. A handful of packets could then still be accepted to the queue even though it was nominally full just recently. In order to accommodate this behavior, send many more packets. The buffer can fit N extra packets, but not N% packets. This therefore allows us to set wider absolute margins, while actually narrowing them relatively. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://patch.msgid.link/abc869b9f6003d400d6293ddd5edb2f4517f44d5.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09selftests: mlxsw: sch_red_core: Sleep before querying queue depthPetr Machata1-0/+1
The qdisc stats are taken from the port's periodic HW stats, which are updated once a second. We try to accommodate the latency by using busywait in build_backlog(). The issue seems to be that when do_mark_test() builds the backlog, it makes the decision whether to send more packets based on the first instance of the queue depth stat exceeding the current value, when in fact more traffic is on the way and the queue depth would increase further. This leads to failures in TC 1 of the mark-mirror test, where we see the following failure: TEST: TC 0: marked packets mirror'd [ OK ] TEST: TC 1: marked packets mirror'd [FAIL] Spurious packets (1680 -> 2290) observed without buffer pressure Fix by waiting for the full second before reading the queue depth for the first time, to make sure it reflects all in-flight traffic. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Link: https://patch.msgid.link/321dcf8b3e9a1f0766429c8cf3e3f1746f1bc375.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09selftests: mlxsw: sch_red_core: Increase backlog size tolerancePetr Machata1-2/+3
Backlog fluctuates on Spectrum-4 much more than on <4. In practice we can sample queue depth values going from about -12% to about +7% of the configured RED limit. The test which checks the queue size has a limit of +-10%, and as a result often fails. We attempted to fix the issue by busywaiting for several seconds hoping to get within the bounds, but that still proved to be too noisy (or the wait time would be impractically long). Unfortunately we have to bump the value tolerance from 10% to 15%, which we do in this patch. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Link: https://patch.msgid.link/f54950df2a8fcba46c3ddc1053376352fa2e592b.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09selftests: mlxsw: sch_red_ets: Increase required backlogPetr Machata1-4/+4
Backlog fluctuates on Spectrum-4 much more than on <4. Increasing the desired backlog seems to help, as the constant fluctuations do not overlap into the territory where packets are marked. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Amit Cohen <amcohen@nvidia.com> Link: https://patch.msgid.link/0821fb3aa8bb6a6c0d3000baab04995517c9a0cc.1728316370.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net: phy: smsc: use devm_clk_get_optional_enabled_with_rate()Bartosz Golaszewski1-2/+3
Fold the separate call to clk_set_rate() into the clock getter. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241007134100.107921-1-brgl@bgdev.pl Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09chelsio/chtls: Remove unused chtls_set_tcb_tflagDr. David Alan Gilbert2-10/+0
chtls_set_tcb_tflag() has been unused since 2021's commit 827d329105bf ("chtls: Remove invalid set_tcb call") Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241007004652.150065-1-linux@treblig.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09caif: Remove unused cfsrvl_getphyidDr. David Alan Gilbert2-7/+0
cfsrvl_getphyid() has been unused since 2011's commit f36214408470 ("caif: Use RCU and lists in cfcnfg.c for managing caif link layers") Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241007004456.149899-1-linux@treblig.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-09net-timestamp: namespacify the sysctl_tstamp_allow_dataJason Xing6-14/+12
Let it be tuned per netns by admins. Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20241005222609.94980-1-kerneljasonxing@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
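In rough terms, "namespacify" means the knob moves from a global variable into struct net and is read through the socket's namespace; the field placement in the sketch below is an assumption for illustration, not the exact layout of the patch.

static inline bool tstamp_allow_data_sketch(const struct sock *sk)
{
	/* per-netns sysctl, read locklessly; field location is assumed */
	return READ_ONCE(sock_net(sk)->core.sysctl_tstamp_allow_data);
}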
2024-10-09net: dsa: mv88e6xxx: Add FID map cacheAryan Srivastava4-38/+14
Add a cached FID bitmap. This mitigates the need to walk all VTU entries to find the next free FID. When flushing the VTU (during init), zero the FID bitmap. Use and manipulate this bitmap from now on, instead of reading HW for the FID map. The repeated VTU walks are costly and can take ~40 mins if ~4000 vlans are added. Caching the FID map reduces this time to <2 mins. Signed-off-by: Aryan Srivastava <aryan.srivastava@alliedtelesis.co.nz> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241006212905.3142976-1-aryan.srivastava@alliedtelesis.co.nz Signed-off-by: Jakub Kicinski <kuba@kernel.org>
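The caching idea can be sketched as follows: keep a bitmap of in-use FIDs and pick the next free one with find_first_zero_bit() instead of walking the VTU. The helper below is illustrative; MV88E6XXX_N_FID stands in for the driver's FID count (assumed here).

static int mv88e6xxx_fid_alloc_sketch(unsigned long *fid_bitmap, u16 *fid)
{
	unsigned long bit = find_first_zero_bit(fid_bitmap, MV88E6XXX_N_FID);

	if (bit >= MV88E6XXX_N_FID)
		return -ENOSPC;

	set_bit(bit, fid_bitmap);	/* mark the FID as in use */
	*fid = bit;
	return 0;
}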
2024-10-09e1000: Link NAPI instances to queues and IRQsJoe Damato1-0/+5
Add support for netdev-genl, allowing users to query IRQ, NAPI, and queue information. After this patch is applied, note the IRQ assigned to my NIC: $ cat /proc/interrupts | grep enp0s8 | cut -f1 --delimiter=':' 18 Note the output from the cli: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 2}' [{'id': 513, 'ifindex': 2, 'irq': 18}] This device supports only 1 rx and 1 tx queue, so querying that: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump queue-get --json='{"ifindex": 2}' [{'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'rx'}, {'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'tx'}] Signed-off-by: Joe Damato <jdamato@fastly.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
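A minimal sketch of the kind of linkage this adds, using the netdev core helpers netif_napi_set_irq() and netif_queue_set_napi(); the adapter fields and the place it would be called from are assumptions for illustration.

static void e1000_link_napi_sketch(struct e1000_adapter *adapter)
{
	struct net_device *netdev = adapter->netdev;

	/* tell the core which IRQ drives this NAPI instance ... */
	netif_napi_set_irq(&adapter->napi, adapter->pdev->irq);

	/* ... and which queues it services (single rx/tx queue pair) */
	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_RX, &adapter->napi);
	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_TX, &adapter->napi);
}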
2024-10-09e1000e: Link NAPI instances to queues and IRQsJoe Damato1-0/+11
Add support for netdev-genl, allowing users to query IRQ, NAPI, and queue information. After this patch is applied, note the IRQs assigned to my NIC: $ cat /proc/interrupts | grep ens | cut -f1 --delimiter=':' 50 51 52 While e1000e allocates 3 IRQs (RX, TX, and other), it looks like e1000e only has a single NAPI, so I've associated the NAPI with the RX IRQ (50 on my system, seen above). Note the output from the cli: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 2}' [{'id': 145, 'ifindex': 2, 'irq': 50}] This device supports only 1 rx and 1 tx queue, so querying that: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump queue-get --json='{"ifindex": 2}' [{'id': 0, 'ifindex': 2, 'napi-id': 145, 'type': 'rx'}, {'id': 0, 'ifindex': 2, 'napi-id': 145, 'type': 'tx'}] Signed-off-by: Joe Damato <jdamato@fastly.com> Reviewed-by: Simon Horman <horms@kernel.org> Acked-by: Vitaly Lifshits <vitaly.lifshits@intel.com> Tested-by: Avigail Dahan <avigailx.dahan@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09e1000e: Remove duplicated writel() in e1000_configure_tx/rx()Takamitsu Iwai1-6/+0
Duplicated register initialization code exists in e1000_configure_tx() and e1000_configure_rx(). For example, writel(0, tx_ring->head) writes 0 to tx_ring->head, which is adapter->hw.hw_addr + E1000_TDH(0). This initialization is already done in ew32(TDH(0), 0). ew32(TDH(0), 0) is equivalent to __ew32(hw, E1000_TDH(0), 0). It executes writel(0, hw->hw_addr + E1000_TDH(0)). Since variable hw is set to &adapter->hw, it is equal to writel(0, tx_ring->head). We can remove four similar writel() calls in e1000_configure_tx() and e1000_configure_rx(). commit 0845d45e900c ("e1000e: Modify Tx/Rx configurations to avoid null pointer dereferences in e1000_open") introduced these writel() calls. This commit moved register writing to e1000_configure_tx/rx(), and as a result, it caused duplication in e1000_configure_tx/rx(). This patch modifies the sequence of register writing, but removing these writes is safe because the same writes were already there before the commit. I have also checked the datasheets [0] [1] and have not found any description that we need to write RDH, RDT, TDH and TDT registers twice at initialization. Furthermore, we have tested this patch on an I219-V device physically. Link: https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82577-gbe-phy-datasheet.pdf [0] Link: https://www.intel.com/content/www/us/en/content-details/613460/intel-82583v-gbe-controller-datasheet.html [1] Tested-by: Kohei Enju <enjuk@amazon.com> Signed-off-by: Takamitsu Iwai <takamitz@amazon.co.jp> Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09igb: Cleanup unused declarationsYue Haibing2-2/+0
e1000_init_function_pointers_82575() is never implemented and used since commit 9d5c824399de ("igb: PCI-Express 82575 Gigabit Ethernet driver"). And commit 9835fd7321a6 ("igb: Add new function to read part number from EEPROM in string format") removed igb_read_part_num() implementation. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09iavf: Remove unused declarationsYue Haibing2-13/+0
There is no caller and implementation in tree. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09ice: Cleanup unused declarationsYue Haibing5-14/+0
Since commit fff292b47ac1 ("ice: add VF representors one by one") ice_eswitch_configure() is not used anymore. Commit 1b8f15b64a00 ("ice: refactor filter functions") removed ice_vsi_cfg_mac_fltr() but left its declaration. Commit a24b4c6e9aab ("ice: xsk: Do not convert to buff to frame for XDP_TX") left the ice_xmit_xdp_buff() declaration. Commit 7cab44f1c35f ("ice: Introduce ETH56G PHY model for E825C products") declared ice_phy_cfg_{rx,tx}_offset_eth56g(), commit a1ffafb0b4a4 ("ice: Support configuring the device to Double VLAN Mode") declared ice_pkg_buf_get_free_space(), and commit 8a3a565ff210 ("ice: add admin commands to access cgu configuration") declared ice_is_pca9575_present(), but none of these were ever implemented. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09e1000e: change I219 (19) devices to ADPVitaly Lifshits2-4/+4
Sporadic issues, such as PHY access loss, have been observed on I219 (19) devices. It was found that these devices have hardware more closely related to ADP than MTP and the issues were caused by taking MTP-specific flows. Change the MAC and board types of these devices from MTP to ADP to correctly reflect the LAN hardware, and flows, of these devices. Fixes: db2d737d63c5 ("e1000e: Separate MTP board type from ADP") Signed-off-by: Vitaly Lifshits <vitaly.lifshits@intel.com> Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09igb: Do not bring the device up after non-fatal errorMohamed Khalfella1-0/+4
Commit 004d25060c78 ("igb: Fix igb_down hung on surprise removal") changed igb_io_error_detected() to ignore non-fatal pcie errors in order to avoid a hung task that can happen when igb_down() is called multiple times. This caused an issue when processing transient non-fatal errors. igb_io_resume(), which is called after igb_io_error_detected(), assumes that the device was brought down by igb_io_error_detected() if the interface is up. This resulted in a panic with the stacktrace below. [ T3256] igb 0000:09:00.0 haeth0: igb: haeth0 NIC Link is Down [ T292] pcieport 0000:00:1c.5: AER: Uncorrected (Non-Fatal) error received: 0000:09:00.0 [ T292] igb 0000:09:00.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID) [ T292] igb 0000:09:00.0: device [8086:1537] error status/mask=00004000/00000000 [ T292] igb 0000:09:00.0: [14] CmpltTO [ 200.105524,009][ T292] igb 0000:09:00.0: AER: TLP Header: 00000000 00000000 00000000 00000000 [ T292] pcieport 0000:00:1c.5: AER: broadcast error_detected message [ T292] igb 0000:09:00.0: Non-correctable non-fatal error reported. [ T292] pcieport 0000:00:1c.5: AER: broadcast mmio_enabled message [ T292] pcieport 0000:00:1c.5: AER: broadcast resume message [ T292] ------------[ cut here ]------------ [ T292] kernel BUG at net/core/dev.c:6539! [ T292] invalid opcode: 0000 [#1] PREEMPT SMP [ T292] RIP: 0010:napi_enable+0x37/0x40 [ T292] Call Trace: [ T292] <TASK> [ T292] ? __die+0x33/0x90 [ T292] ? do_trap+0xdc/0x110 [ T292] ? napi_enable+0x37/0x40 [ T292] ? do_error_trap+0x70/0xb0 [ T292] ? napi_enable+0x37/0x40 [ T292] ? napi_enable+0x37/0x40 [ T292] ? exc_invalid_op+0x4e/0x70 [ T292] ? napi_enable+0x37/0x40 [ T292] ? asm_exc_invalid_op+0x16/0x20 [ T292] ? napi_enable+0x37/0x40 [ T292] igb_up+0x41/0x150 [ T292] igb_io_resume+0x25/0x70 [ T292] report_resume+0x54/0x70 [ T292] ? report_frozen_detected+0x20/0x20 [ T292] pci_walk_bus+0x6c/0x90 [ T292] ? aer_print_port_info+0xa0/0xa0 [ T292] pcie_do_recovery+0x22f/0x380 [ T292] aer_process_err_devices+0x110/0x160 [ T292] aer_isr+0x1c1/0x1e0 [ T292] ? disable_irq_nosync+0x10/0x10 [ T292] irq_thread_fn+0x1a/0x60 [ T292] irq_thread+0xe3/0x1a0 [ T292] ? irq_set_affinity_notifier+0x120/0x120 [ T292] ? irq_affinity_notify+0x100/0x100 [ T292] kthread+0xe2/0x110 [ T292] ? kthread_complete_and_exit+0x20/0x20 [ T292] ret_from_fork+0x2d/0x50 [ T292] ? kthread_complete_and_exit+0x20/0x20 [ T292] ret_from_fork_asm+0x11/0x20 [ T292] </TASK> To fix this issue, igb_io_resume() checks if the interface is running and the device is not down; this means igb_io_error_detected() did not bring the device down and there is no need to bring it up. Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com> Reviewed-by: Yuanyuan Zhong <yzhong@purestorage.com> Fixes: 004d25060c78 ("igb: Fix igb_down hung on surprise removal") Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
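The guard described above can be sketched as below; this is an illustration of the logic in igb_io_resume(), not the exact hunk.

	if (netif_running(netdev)) {
		/* error_detected() ignored a non-fatal error and left the
		 * device up, so there is nothing to bring back up here.
		 */
		if (!test_bit(__IGB_DOWN, &adapter->state))
			return;
	}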
2024-10-09i40e: Fix macvlan leak by synchronizing access to mac_filter_hashAleksandr Loktionov2-0/+3
This patch addresses a macvlan leak issue in the i40e driver caused by concurrent access to vsi->mac_filter_hash. The leak occurs when multiple threads attempt to modify the mac_filter_hash simultaneously, leading to inconsistent state and potential memory leaks. To fix this, we now wrap the calls to i40e_del_mac_filter() and zeroing vf->default_lan_addr.addr with spin_lock/unlock_bh(&vsi->mac_filter_hash_lock), ensuring atomic operations and preventing concurrent access. Additionally, we add lockdep_assert_held(&vsi->mac_filter_hash_lock) in i40e_add_mac_filter() to help catch similar issues in the future. Reproduction steps: 1. Spawn VFs and configure port vlan on them. 2. Trigger concurrent macvlan operations (e.g., adding and deleting portvlan and/or mac filters). 3. Observe the potential memory leak and inconsistent state in the mac_filter_hash. This synchronization ensures the integrity of the mac_filter_hash and prevents the described leak. Fixes: fed0d9f13266 ("i40e: Fix VF's MAC Address change on VM") Reviewed-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
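The locking pattern the description refers to looks roughly like this (the surrounding VF-reset context is illustrative):

	spin_lock_bh(&vsi->mac_filter_hash_lock);
	i40e_del_mac_filter(vsi, vf->default_lan_addr.addr);
	eth_zero_addr(vf->default_lan_addr.addr);
	spin_unlock_bh(&vsi->mac_filter_hash_lock);

and in i40e_add_mac_filter(), to catch future unlocked callers:

	lockdep_assert_held(&vsi->mac_filter_hash_lock);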
2024-10-09ice: Use common error handling code in two functionsMarkus Elfring1-16/+16
Add jump targets so that a bit of exception handling can be better reused at the end of two function implementations. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
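Generically, the "jump target" pattern looks like the sketch below; the function and helpers are made up for illustration and are not the ice code itself.

static int example_setup(struct device *dev)
{
	void *buf;
	int err;

	buf = kzalloc(SZ_4K, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	err = example_step_one(dev, buf);	/* assumed helper */
	if (err)
		goto free_buf;

	err = example_step_two(dev, buf);	/* assumed helper */
	if (err)
		goto free_buf;

	return 0;

free_buf:
	/* single exception-handling path shared by both failure cases */
	kfree(buf);
	return err;
}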
2024-10-09ice: Make use of assign_bit() APIHongbo Li1-2/+1
We have for some time the assign_bit() API to replace open coded if (foo) set_bit(n, bar); else clear_bit(n, bar); Use this API to clean the code. No functional change intended. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com> Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
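For reference, the transformation is mechanical; the flag and bitmap names below are placeholders:

	/* before */
	if (ena)
		set_bit(EXAMPLE_FLAG, pf->flags);
	else
		clear_bit(EXAMPLE_FLAG, pf->flags);

	/* after */
	assign_bit(EXAMPLE_FLAG, pf->flags, ena);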
2024-10-09ice: store max_frame and rx_buf_len only in ice_rx_ringJacob Keller4-24/+23
The max_frame and rx_buf_len fields of the VSI set the maximum frame size for packets on the wire, and configure the size of the Rx buffer. In the hardware, these are per-queue configuration. Most VSI types use a simple method to determine the size of the buffers for all queues. However, VFs may potentially configure different values for each queue. While the Linux iAVF driver does not do this, it is allowed by the virtchnl interface. The current virtchnl code simply sets the per-VSI fields in between calls to ice_vsi_cfg_single_rxq(). This technically works, as these fields are only ever used when programming the Rx ring, and otherwise not checked again. However, it is confusing to maintain. The Rx ring also already has an rx_buf_len field in order to access the buffer length in the hotpath. It also has extra unused bytes in the ring structure which we can make use of to store the maximum frame size. Drop the VSI max_frame and rx_buf_len fields. Add max_frame to the Rx ring, and slightly re-order rx_buf_len to better fit into the gaps in the structure layout. Change the ice_vsi_cfg_frame_size function so that it writes to the ring fields. Call this function once per ring in ice_vsi_cfg_rxqs(). This is done instead of calling it inside ice_vsi_cfg_rxq(), because ice_vsi_cfg_rxq() is called in the virtchnl flow where the max_frame and rx_buf_len have already been configured. Change the accesses for rx_buf_len and max_frame to all point to the ring structure. This has the added benefit that ice_vsi_cfg_rxq() no longer has the surprise side effect of updating ring->rx_buf_len based on the VSI field. Update the virtchnl ice_vc_cfg_qs_msg() function to set the ring values directly, and drop references to the removed VSI fields. This now makes the VF logic clear, as the ring fields are obviously per-queue. This reduces the required cognitive load when reasoning about this logic. Note that removing the VSI fields does leave a 4 byte gap, but the ice_vsi structure has many gaps, and its layout is not as critical in the hot path. The structure may benefit from a more thorough repacking, but no attempt was made in this change. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09ice: consistently use q_idx in ice_vc_cfg_qs_msg()Jacob Keller1-11/+10
The ice_vc_cfg_qs_msg() function is used to configure VF queues in response to a VIRTCHNL_OP_CONFIG_VSI_QUEUES command. The virtchnl command contains an array of queue pair data for configuring Tx and Rx queues. This data includes a queue ID. When configuring the queues, the driver generally uses this queue ID to determine which Tx and Rx ring to program. However, a handful of places use the index into the queue pair data from the VF. While most VF implementations appear to send this data in order, it is not mandated by the virtchnl and it is not verified that the queue pair data comes in order. Fix the driver to consistently use the q_idx field instead of the 'i' iterator value when accessing the rings. For the Rx case, introduce a local ring variable to keep lines short. Fixes: 7ad15440acf8 ("ice: Refactor VIRTCHNL_OP_CONFIG_VSI_QUEUES handling") Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
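In spirit, the fix indexes the rings by the queue ID carried in the virtchnl message instead of by the loop counter; a hedged fragment (field names follow the description above, the snippet is illustrative):

	u16 q_idx = qpi->txq.queue_id;
	struct ice_tx_ring *tx_ring = vsi->tx_rings[q_idx];	/* was [i] */
	struct ice_rx_ring *rx_ring = vsi->rx_rings[q_idx];	/* was [i] */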
2024-10-09ice: add E830 HW VF mailbox message limit supportPaul Greenwalt9-13/+96
E830 adds hardware support to prevent the VF from overflowing the PF mailbox with VIRTCHNL messages. E830 will use the hardware feature (ICE_F_MBX_LIMIT) instead of the software solution ice_is_malicious_vf(). To prevent a VF from overflowing the PF, the PF sets the number of messages per VF that can be in the PF's mailbox queue (ICE_MBX_OVERFLOW_WATERMARK). When the PF processes a message from a VF, the PF decrements the per VF message count using the E830_MBX_VF_DEC_TRIG register. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-10-09ice: Implement ethtool reset supportWojciech Drewek2-0/+108
Enable ethtool reset support. Ethtool reset flags are mapped to the E810 reset type: PF reset: $ ethtool --reset <ethX> irq dma filter offload CORE reset: $ ethtool --reset <ethX> irq-shared dma-shared filter-shared \ offload-shared ram-shared GLOBAL reset: $ ethtool --reset <ethX> irq-shared dma-shared filter-shared \ offload-shared mac-shared phy-shared ram-shared Calling the same set of flags as in PF reset case on port representor triggers VF reset. Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>