path: root/drivers/net
Age  Commit message  (Author, files changed, lines -/+)
2021-03-30  can: bittiming: add CAN_KBPS, CAN_MBPS and CAN_MHZ macros  (Vincent Mailhol, 1 file, -2/+2)
Add three macros to simplify the readability of big bit timing numbers:
 - CAN_KBPS: kilobits per second (one thousand)
 - CAN_MBPS: megabits per second (one million)
 - CAN_MHZ: megahertz (one million hertz)

Example:
 u32 bitrate_max = 8 * CAN_MBPS;
 struct can_clock clock = {.freq = 80 * CAN_MHZ};
instead of:
 u32 bitrate_max = 8000000;
 struct can_clock clock = {.freq = 80000000};

Apply the new macros to drivers/net/can/dev/bittiming.c.

Link: https://lore.kernel.org/r/20210306054040.76483-1-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
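For reference, such macros are plain scaling constants; a minimal sketch of likely definitions and usage (the exact header location and spelling are assumptions, not quoted from the patch):

    #define CAN_KBPS 1000UL        /* x 1,000 bit/s */
    #define CAN_MBPS 1000000UL     /* x 1,000,000 bit/s */
    #define CAN_MHZ  1000000UL     /* x 1,000,000 Hz */

    /* usage, matching the example above */
    u32 bitrate_max = 8 * CAN_MBPS;                     /* 8 Mbit/s */
    struct can_clock clock = { .freq = 80 * CAN_MHZ };  /* 80 MHz */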
2021-03-30  can: bittiming: add calculation for CAN FD Transmitter Delay Compensation (TDC)  (Vincent Mailhol, 2 files, -0/+26)
The logic for the tdco calculation is to just reuse the normal sample point: tdco = sp. Because the sample point is expressed in tenths of a percent and the tdco is expressed in time quanta, a conversion is needed. At the end, ssp = tdcv + tdco = tdcv + sp.

Another popular method is to set tdco to the middle of the bit:
 tdc->tdco = can_bit_time(dbt) / 2
During benchmark tests, we could not find a clear advantage for either of the two methods.

The tdco calculation is triggered each time the data_bittiming is changed, so that users relying on automated calculation can use the netlink interface the exact same way without needing new parameters. For example, a command such as:
 ip link set canX type can bitrate 500000 dbitrate 4000000 fd on
would trigger the calculation.

A user of CONFIG_CAN_CALC_BITTIMING who does not want the automated calculation needs to manually set tdco to zero, for example with:
 ip link set canX type can tdco 0 bitrate 500000 dbitrate 4000000 fd on
(if the tdco parameter is provided in a previous command, it will be overwritten).

If tdcv is set to zero (default), it is automatically calculated by the transceiver for each frame. As such, there is no code in the kernel to calculate it. tdcf has no automated calculation function because we could not figure out a formula for this parameter.

Link: https://lore.kernel.org/r/20210224002008.4158-6-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
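A hedged sketch of the tdco derivation described above, assuming can_bit_time() returns the data bit time in time quanta and sample_point is stored in tenths of a percent (the real helper may additionally clamp the result to the controller's tdco_max):

    /* reuse the data-phase sample point, converted from 0.1% units to time quanta */
    tdc->tdco = can_bit_time(dbt) * dbt->sample_point / 1000;

    /* the secondary sample point seen by the controller is then ssp = tdcv + tdco */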
2021-03-30  can: netlink: move '=' operators back to previous line (checkpatch fix)  (Vincent Mailhol, 1 file, -14/+11)
Fix the warning triggered by having an '=' at the beginning of the line by moving it back to the previous line. Also replace all indentations with a single space so that future entries can be more easily added.

Extract of ./scripts/checkpatch.pl -f drivers/net/can/dev/netlink.c:

 CHECK: Assignment operator '=' should be on the previous line
 +	[IFLA_CAN_BITTIMING_CONST]
 +	= { .len = sizeof(struct can_bittiming_const) },

 CHECK: Assignment operator '=' should be on the previous line
 +	[IFLA_CAN_DATA_BITTIMING]
 +	= { .len = sizeof(struct can_bittiming) },

 CHECK: Assignment operator '=' should be on the previous line
 +	[IFLA_CAN_DATA_BITTIMING_CONST]
 +	= { .len = sizeof(struct can_bittiming_const) },

Link: https://lore.kernel.org/r/20210224002008.4158-4-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2021-03-30  can: dev: can_free_echo_skb(): extend to return can frame length  (Marc Kleine-Budde, 16 files, -20/+27)
In order to implement byte queue limits (bql) in CAN drivers, the length of the CAN frame needs to be passed into the networking stack even if the transmission failed for some reason. To avoid calculating this length twice, extend can_free_echo_skb() to return that value. Convert all users of this function, too.

This patch is the natural extension of commit:
| 9420e1d495e2 ("can: dev: can_get_echo_skb(): extend to return can
| frame length")

Link: https://lore.kernel.org/r/20210319142700.305648-3-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
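The extended helper presumably mirrors the earlier can_get_echo_skb() change and reports the frame length through a pointer. A hedged sketch of a hypothetical TX-error path using it (echo_idx and the BQL call are illustrative, not taken from a specific driver):

    unsigned int frame_len;

    /* drop the echoed skb for a failed transmission, but keep its on-wire length */
    can_free_echo_skb(dev, echo_idx, &frame_len);

    /* the length can still be fed into byte queue limits accounting */
    netdev_completed_queue(dev, 1, frame_len);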
2021-03-30  can: dev: can_free_echo_skb(): don't crash the kernel if can_priv::echo_skb is accessed out of bounds  (Marc Kleine-Budde, 1 file, -1/+5)
An out-of-bounds access to "struct can_priv::echo_skb" leads to a kernel crash. Better print a sensible warning message instead and try to recover.

This patch is similar to:
| e7a6994d043a ("can: dev: __can_get_echo_skb(): Don't crash the kernel
| if can_priv::echo_skb is accessed out of bounds")

Link: https://lore.kernel.org/r/20210319142700.305648-2-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
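A hedged sketch of the kind of guard described above, modelled on the earlier __can_get_echo_skb() fix (the exact message and return path may differ):

    struct can_priv *priv = netdev_priv(dev);

    if (idx >= priv->echo_skb_max) {
        netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u / max %u)\n",
                   __func__, idx, priv->echo_skb_max);
        return;
    }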
2021-03-30  can: dev: always create TX echo skb  (Marc Kleine-Budde, 1 file, -2/+8)
So far the creation of the TX echo skb was optional and could be controlled by the local sender of a CAN frame. It turns out that the TX echo CAN skb can be piggybacked to carry information in the driver from the TX- to the TX-complete handler.

Several drivers already use the return value of can_get_echo_skb() (which is the length of the data field in the CAN frame) for their transferred-bytes statistics. The statistics are not working if CAN echo skbs are disabled. Another use case is to calculate and set the CAN frame length on the wire, which is needed for BQL support in both the TX and TX-completion handlers.

For now in can_put_echo_skb(), which is called from the TX handler, the skb carrying the CAN frame is discarded if no TX echo is requested, leading to the problems illustrated above.

This patch changes the can_put_echo_skb() function, so that the echo skb is always generated. If the sender requests no echo, the echo skb is consumed in __can_get_echo_skb() without being passed into the RX handler of the networking stack, but the CAN data length and CAN frame length information is properly returned.

Link: https://lore.kernel.org/r/20210309211904.3348700-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2021-03-30  net/mlx5e: Update ethtool setting of CQE compression  (Aya Levin, 3 files, -7/+10)
Remove restriction blocking configuration of CQE compression when PTP rx filter is set. Instead turn on indication for RX PTP, and try to reopen the channels. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Allow coexistence of CQE compression and HW TS PTP  (Aya Levin, 3 files, -12/+35)
Update setting HW time-stamp to allow coexistence with CQE compression. Turn on RX PTP indication and try to reopen the channels. On success, coexistence with CQE compression is enabled. Otherwise, fall-back to turning off CQE compression. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add PTP Flow Steering support  (Aya Levin, 4 files, -1/+146)
When opening PTP channel with MLX5E_PTP_STATE_RX set, add the corresponding flow steering rules. Capture UDP packets with destination port 319 and L2 packets with ethertype 0x88F7 and steer them into the RQ of the PTP channel. Add API that manages the flow steering rules to be used in the following patches via safe_reopen_channels mechanism. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Introduce Flow Steering ANY API  (Aya Levin, 4 files, -1/+273)
Add a new FS API which captures the ANY traffic from the traffic classifier into a dedicated FS table. The table consists of a group matching the ethertype and a must-be-last group which contains a default rule redirecting the unmatched packets back to the RSS logic. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Introduce Flow Steering UDP API  (Aya Levin, 5 files, -2/+368)
Add a new FS API which captures the UDP traffic from the traffic classifier into a dedicated FS table. This API handles both UDP over IPv4 and IPv6 in the same manner. The tables (one for UDPv4 and another for UDPv6) consist of a group matching the UDP destination port and a must-be-last group which contains a default rule redirecting the unmatched packets back to the RSS logic. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Cleanup Flow Steering level  (Aya Levin, 2 files, -3/+3)
Flow Steering levels are used to determine the order between the tables. As of today, each of these tables follows the TTC table and hijacks its traffic; they cannot be combined together for now. Putting them in the same layer better reflects the situation. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add PTP RQ to RX reporter  (Aya Levin, 1 file, -2/+66)
When present, add the PTP RQ to the RX reporter. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Refactor RX reporter diagnostics  (Aya Levin, 1 file, -38/+66)
Break RX diagnostics function into smaller helpers. This enables easier enhancement in the next patch in the set. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add PTP-TIR and PTP-RQT  (Aya Levin, 7 files, -4/+52)
Add PTP-TIR and initiate its RQT to allow PTP-RQ to integrate into the safe-reopen flow on configuration change. Add rx_ptp_support flag on a profile and turn it on for ETH driver. With this flag set, create a redirect-RQT for PTP-RQ. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add PTP-RX statistics  (Aya Levin, 5 files, -26/+100)
Like PTP-TX, once the PTP-RX is opened, corresponding statistics appear. Add indication that PTP-RX was ever opened: rx_ptp_opened. If any of the PTP RX or TX were opened, display the PTP channel's statistics. Signed-off-by: Aya Levin <ayal@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add RQ to PTP channel  (Aya Levin, 3 files, -8/+118)
Enhance PTP channel to allow PTP without disabling CQE compression. Add RQ, TIR and PTP_RX_STATE to PTP channel. When this bit is set, PTP channel manages its RQ, and PTP traffic is directed to the PTP-RQ which is not affected by compression. Signed-off-by: Aya Levin <ayal@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  net/mlx5e: Add states to PTP channel  (Aya Levin, 5 files, -34/+71)
Add PTP TX state to PTP channel, which indicates the corresponding SQ is available. Further patches in the set extend PTP channel to include RQ. The PTP channel state will be used for separation and coexistence of RX and TX PTP. Enhance conditions to verify the TX PTP state is set. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-30  ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx  (Lv Yunlong, 1 file, -0/+1)
In nfp_bpf_ctrl_msg_rx, if nfp_ccm_get_type(skb) == NFP_CCM_TYPE_BPF_BPF_EVENT is true, the skb will be freed. But the skb is still used afterwards by nfp_ccm_rx(&bpf->ccm, skb). My patch adds a return after the skb has been freed. Fixes: bcf0cafab44fd ("nfp: split out common control message handling code") Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
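A hedged sketch of the resulting control flow (simplified; the surrounding length checks and the exact helper used to emit the BPF event may differ from the driver):

    if (nfp_ccm_get_type(skb) == NFP_CCM_TYPE_BPF_BPF_EVENT) {
        if (!nfp_bpf_event_output(bpf, skb->data, skb->len))
            dev_consume_skb_any(skb);
        else
            dev_kfree_skb_any(skb);
        return;    /* the skb was freed above, do not hand it to nfp_ccm_rx() */
    }

    nfp_ccm_rx(&bpf->ccm, skb);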
2021-03-30  hv_netvsc: Add error handling while switching data path  (Haiyang Zhang, 3 files, -11/+48)
Add error handling in case of failure to send switching data path message to the host. Reported-by: Shachar Raindel <shacharr@microsoft.com> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-30  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (David S. Miller, 10 files, -41/+86)
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2021-03-29

This series contains updates to the ice driver only.

Ani does not fail on link/PHY errors during probe, as this is not a fatal error and failing would prevent the user from remedying the problem. He also corrects the Wake on LAN support check to use the port number, not the PF ID.
Fabio increases the AdminQ timeout as some commands can take longer than the current value.
Chinh fixes iSCSI to be able to use port 860 by using information from DCBx and other QoS configuration info.
Krzysztof fixes a possible race between ice_open() and ice_stop().
Bruce corrects the ordering of arguments in a memory allocation call.
Dave removes the DCBNL device reset bit which is blocking changes coming from the DCBNL interface.
Jacek adds error handling for filter allocation failure.
Robert ensures memory is freed if VSI filter list issues are encountered.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-30  Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  (David S. Miller, 6 files, -80/+496)
Tony Nguyen says:
====================
1GbE Intel Wired LAN Driver Updates 2021-03-29

This series contains updates to the igc driver only.

Andre Guedes says:
Add XDP support for the igc driver. The approach implemented by this series follows the same approach implemented in other Intel drivers as much as possible for the sake of consistency.

The series is organized in two parts. In the first part, i.e. patches from 1 to 4, igc_main.c and igc_ptp.c code is refactored in preparation for landing the XDP support, which is introduced in the second part (patches from 5 to 8). As far as code organization is concerned, XDP-related helpers are defined in a new file, igc_xdp.c, and are called by igc_main.c.

The features added by this series have been tested with the samples provided in samples/bpf/: xdp1, xdp2, xdp_redirect_cpu, and xdp_redirect_map. Upcoming series will add support of UMEM and zero-copy features from AF_XDP.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-30  net: mhi: Allow decoupled MTU/MRU  (Loic Poulain, 3 files, -1/+15)
The MBIM protocol makes the mhi network interface asymmetric: ingress data received from MHI is MBIM protocol, possibly containing multiple aggregated IP packets, while egress data received from the network stack is IP protocol.

This change allows a 'protocol' to specify its own MRU, which, when specified, is used to allocate MHI RX buffers (skb). For MBIM, set the default MTU to 1500, which is the usual network MTU for WWAN IP packets, and the MRU to 3.5K (for allocation efficiency), allowing the skb to fit in a usual 4K page (including padding, skb_shared_info, ...).

Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
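A minimal sketch of how a protocol-specific MRU might be consulted when allocating RX buffers; the structure and field names (mhi_netdev, mru, ndev) are assumptions for illustration, not quoted from the driver:

    /* prefer the protocol's MRU for RX allocation, fall back to the MTU */
    unsigned int size = mhi_netdev->mru ? mhi_netdev->mru :
                                          READ_ONCE(mhi_netdev->ndev->mtu);
    struct sk_buff *skb = netdev_alloc_skb(mhi_netdev->ndev, size);

    if (!skb)
        return -ENOMEM;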
2021-03-30  net: mhi: Add support for non-linear MBIM skb processing  (Loic Poulain, 1 file, -26/+25)
Currently, if the skb is non-linear, due to MHI skb chaining, it is linearized in the MBIM RX handler prior to MBIM decoding, causing an extra allocation and copy that can be as large as the maximum MBIM frame size (32K). This change introduces MBIM decoding for non-linear skbs, allowing 'large' non-linear MBIM packets to be processed without skb linearization. The IP packets are simply extracted from the MBIM frame using the skb_copy_bits helper. Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
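The key point is that skb_copy_bits() works on non-linear (paged or chained) skbs. A hedged sketch of extracting one IP datagram, where dgram_offset/dgram_len are assumed to come from the parsed MBIM NDP and ndev is the mhi net device:

    struct sk_buff *skb = netdev_alloc_skb(ndev, dgram_len);

    if (!skb)
        return;
    skb_put(skb, dgram_len);
    /* copies from linear and paged/chained parts alike, no linearization needed */
    skb_copy_bits(mbim_skb, dgram_offset, skb->data, dgram_len);
    skb->protocol = (skb->data[0] >> 4) == 4 ? htons(ETH_P_IP) : htons(ETH_P_IPV6);
    netif_rx(skb);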
2021-03-30  ieee802154: hwsim: remove redundant initialization of variable res  (Colin Ian King, 1 file, -1/+1)
The variable res is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-30  cxgb4: avoid collecting SGE_QBASE regs during traffic  (Rahul Lakkireddy, 2 files, -5/+21)
Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX registers can lead to SGE missing doorbells under heavy traffic. So, only collect them when the adapter is idle. Also update the regdump range to skip collecting these registers. Fixes: 80a95a80d358 ("cxgb4: collect SGE PF/VF queue map") Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  gianfar: Handle error code at MAC address change  (Claudiu Manoil, 1 file, -1/+5)
Handle the error code returned by eth_mac_addr(). Fixes: 3d23a05c75c7 ("gianfar: Enable changing mac addr when if up") Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
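A hedged sketch of what such a fix typically looks like, assuming the gianfar set-mac callback keeps the driver's existing helper names (the exact wrapper may differ):

    static int gfar_set_mac_addr(struct net_device *dev, void *p)
    {
        int ret;

        /* eth_mac_addr() validates the new address and can fail, e.g. with -EADDRNOTAVAIL */
        ret = eth_mac_addr(dev, p);
        if (ret)
            return ret;

        gfar_set_mac_for_addr(dev, 0, dev->dev_addr);

        return 0;
    }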
2021-03-29  net: mdio: Correct function name mdio45_links_ok() in comment  (Yang Yingliang, 1 file, -1/+1)
Fix the following make W=1 kernel build warning: drivers/net/mdio.c:95: warning: expecting prototype for mdio_link_ok(). Prototype was for mdio45_links_ok() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: bonding: Correct function name bond_change_active_slave() in comment  (Yang Yingliang, 1 file, -1/+1)
Fix the following make W=1 kernel build warning: drivers/net/bonding/bond_main.c:982: warning: expecting prototype for change_active_interface(). Prototype was for bond_change_active_slave() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: phy: Correct function name mdiobus_register_board_info() in comment  (Yang Yingliang, 1 file, -1/+1)
Fix the following make W=1 kernel build warning: drivers/net/phy/mdio-boardinfo.c:63: warning: expecting prototype for mdio_register_board_info(). Prototype was for mdiobus_register_board_info() instead Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  ethernet: myri10ge: Fix a use after free in myri10ge_sw_tso  (Lv Yunlong, 1 file, -1/+1)
In myri10ge_sw_tso, the skb_list_walk_safe macro sets (curr) = (segs) and (next) = (curr)->next. If status != 0 is true, the memory pointed to by curr and segs will be freed by dev_kfree_skb_any(curr). But later, segs is still used via segs = segs->next, which causes a use-after-free. As (next) = (curr)->next, my patch replaces segs->next with next. Fixes: 536577f36ff7a ("net: myri10ge: use skb_list_walk_safe helper for gso segments") Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  mlxsw: spectrum: Veto sampling if already enabled on port  (Ido Schimmel, 1 file, -0/+5)
The per-port sampling triggers (i.e., ingress / egress) cannot be enabled twice. Meaning, the below configuration will not result in packets being sampled twice:

 # tc filter add dev swp1 ingress matchall skip_sw action sample rate 100 group 1
 # tc filter add dev swp1 ingress matchall skip_sw action sample rate 100 group 1

Therefore, reject such configurations.

Fixes: 90f53c53ec4a ("mlxsw: spectrum: Start using sampling triggers hash table")
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  mlxsw: spectrum_matchall: Perform priority checks earlier  (Ido Schimmel, 1 file, -18/+13)
Perform the priority check earlier in the function instead of repeating it for every action. This fixes a bug that allowed matchall rules with sample action to be added in front of flower rules on egress. Fixes: 54d0e963f683 ("mlxsw: spectrum_matchall: Add support for egress sampling") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  mlxsw: spectrum_matchall: Convert if statements to a switch statement  (Ido Schimmel, 1 file, -3/+6)
The previous patch moved the protocol check out of the action check, so these if statements can now be converted to a switch statement. Perform the conversion. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  mlxsw: spectrum_matchall: Perform protocol check earlier  (Ido Schimmel, 1 file, -3/+7)
Perform the protocol check earlier in the function instead of repeating it for every action. Example:

 # tc filter add dev swp1 ingress proto ip matchall skip_sw action sample group 1 rate 100
 Error: matchall rules only supported with 'all' protocol.
 We have an error talking to the kernel

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  mlxsw: spectrum: Fix ECN marking in tunnel decapsulation  (Ido Schimmel, 3 files, -8/+21)
Cited commit changed the behavior of the software data path with regards to the ECN marking of decapsulated packets. However, the commit did not change other callers of __INET_ECN_decapsulate(), namely mlxsw. The driver is using the function in order to ensure that the hardware and software data paths act the same with regards to the ECN marking of decapsulated packets. The discrepancy was uncovered by commit 5aa3c334a449 ("selftests: forwarding: vxlan_bridge_1d: Fix vxlan ecn decapsulate value") that aligned the selftest to the new behavior. Without this patch the selftest passes when used with veth pairs, but fails when used with mlxsw netdevs. Fix this by instructing the device to propagate the ECT(1) mark from the outer header to the inner header when the inner header is ECT(0), for both NVE and IP-in-IP tunnels. A helper is added in order not to duplicate the code between both tunnel types. Fixes: b723748750ec ("tunnel: Propagate ECT(1) when decapsulating as recommended by RFC6040") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: marvell: Fix an alignment problem  (Yangyang Li, 1 file, -1/+1)
Use tab instead of space to align the code. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: marvell: Delete extra spaces  (Yangyang Li, 2 files, -3/+4)
Just delete three extra spaces. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: marvell: Fix the trailing format of some block comments  (Yangyang Li, 3 files, -7/+14)
Use a trailing */ on a separate line for block comments. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: marvell: Delete duplicate word in comments  (Yangyang Li, 2 files, -4/+2)
Delete duplicate word in two comments. Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: add stats logging when skb padding fails  (Yunsheng Lin, 1 file, -0/+5)
skb_put_padto() may fail because of a memory allocation failure. sw_err_cnt is already used to log memory failures in hns3_skb_linearize(), so use it to log the memory failure for skb_put_padto() too. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
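A hedged sketch of the described accounting, assuming the existing hns3 ring stats layout (ring->syncp, stats.sw_err_cnt) and the driver's minimum TX length constant:

    /* skb_put_padto() frees the skb and returns an error if padding fails */
    if (unlikely(skb_put_padto(skb, HNS3_MIN_TX_LEN))) {
        u64_stats_update_begin(&ring->syncp);
        ring->stats.sw_err_cnt++;
        u64_stats_update_end(&ring->syncp);
        return -ENOMEM;
    }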
2021-03-29  net: hns3: expand the tc config command  (Guojia Liao, 2 files, -2/+8)
The device HNAE3_DEVICE_VERSION_V3 supports up to 1280 queues and qsets for one function, so the bitwidth of tc_offset, meaning the tqps index, needs to expand from 10 bits to 11 bits. The device HNAE3_DEVICE_VERSION_V3 supports up to 512 queues on one TC, so tc_size, meaning the exponent (base 2) of the number of queues supported on a TC, needs to expand from 3 bits to 4 bits. Signed-off-by: Guojia Liao <liaoguojia@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: add tx send size handling for tso skb  (Yunsheng Lin, 2 files, -8/+24)
The actual size on wire for a tso skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len, which can be seen by the user via the 'ethtool -S ethX' command, and 'Byte Queue Limits' also uses the send size stat to do the queue limiting. So add send_bytes in the desc_cb to record the actual send size for a skb. As send_bytes is only used for tx desc_cb and page_offset is only used for rx desc, reuse the same space for both of them. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
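A hedged sketch of the formula described above; desc_cb->send_bytes follows the commit's naming, while hdr_len (the header bytes repeated per segment) is assumed to be computed elsewhere in the TX path:

    unsigned int send_bytes;

    /* every segment beyond the first repeats hdr_len bytes of headers on the wire */
    if (skb_is_gso(skb))
        send_bytes = (skb_shinfo(skb)->gso_segs - 1) * hdr_len + skb->len;
    else
        send_bytes = skb->len;

    desc_cb->send_bytes = send_bytes;  /* later reported via stats and usable for BQL */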
2021-03-29  net: hns3: add handling for xmit skb with recursive fraglist  (Yunsheng Lin, 3 files, -41/+78)
Currently the hns3 driver only handles an xmit skb with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the bd num and calling hns3_fill_skb_to_desc() recursively when filling the tx desc.

When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is added to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle different error stats and add the 'max_recursion_level' and 'hw_limitation' stats.

Note that the max recursive level of 24 is chosen according to commit 48a1df65334b ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). We were not able to find a testcase to verify the recursive fraglist case, so no Fixes tag is provided.

Reported-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
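A hedged sketch of the recursive BD counting described above; the function name, the per-skb BD accounting and the error convention are illustrative, only the recursion over skb_walk_frags() and the 24-level cap follow the commit message:

    static unsigned int tx_bd_num_recursive(struct sk_buff *skb, unsigned int *level)
    {
        unsigned int bd_num = 1 + skb_shinfo(skb)->nr_frags;
        struct sk_buff *frag_skb;

        if (++(*level) > 24)
            return UINT_MAX;  /* too deep: caller drops the skb and bumps max_recursion_level */

        skb_walk_frags(skb, frag_skb) {
            unsigned int frag_bd_num = tx_bd_num_recursive(frag_skb, level);

            if (frag_bd_num == UINT_MAX)
                return UINT_MAX;
            bd_num += frag_bd_num;
        }

        return bd_num;
    }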
2021-03-29  net: hns3: optimize the process of queue reset  (Yufeng Mo, 7 files, -109/+182)
Currently, queue resets need to be performed one by one, which is inefficient. However, the queue resets of the same function are always performed at the same time. Therefore, according to the UM, the command HCLGE_OPC_CFG_RST_TRIGGER can be used to reset all queues of the same function at a time, in order to optimize the queue reset process. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: remove the rss_size limitation by vector num  (Jian Shen, 1 file, -9/+0)
Currently, if the user hasn't changed the channel number, the rss_size is limited to be no more than the vector number, in order to keep one vector only being mapped to one queue. But the queue number of each tc can be different, and one vector can also be mapped by multiple queues. So remove this limitation. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: remediate a potential overflow risk of bd_num_list  (Guangbin Huang, 1 file, -7/+20)
The array size of bd_num_list is a fixed value; there is a potential overflow risk when the array size of hclge_dfx_bd_offset_list is greater than that fixed value. So change bd_num_list to a pointer and allocate memory for it according to the array size of hclge_dfx_bd_offset_list. Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
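A hedged sketch of the allocation pattern described above; the element type and error handling are assumptions, and hclge_dfx_bd_offset_list is the table named in the commit message:

    u32 *bd_num_list;

    /* size the list from the offset table it must mirror, not from a fixed constant */
    bd_num_list = kcalloc(ARRAY_SIZE(hclge_dfx_bd_offset_list),
                          sizeof(*bd_num_list), GFP_KERNEL);
    if (!bd_num_list)
        return -ENOMEM;

    /* ... fill and use bd_num_list ... */

    kfree(bd_num_list);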
2021-03-29  net: hns3: fix use-after-free issue for hclge_add_fd_entry_common()  (Jian Shen, 1 file, -1/+1)
When the new rule state is TO_ADD or ACTIVE, and there is already a rule with the same location in the fd_rule_list, the new rule will be freed after modifying the old rule. This may cause a use-after-free issue when the rule is accessed again in hclge_add_fd_entry_common(). Fixes: fc4243b8de8b ("net: hns3: refactor flow director configuration") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: hns3: fix missing rule state assignment  (Jian Shen, 1 file, -3/+4)
Currently, when adding a flow director rule, the driver misses setting the rule state, which may cause the rule state in software to be inconsistent with hardware. Fixes: fc4243b8de8b ("net: hns3: refactor flow director configuration") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-29  net: mscc: ocelot: remove redundant dev_err call in vsc9959_mdio_bus_alloc()  (Guobin Huang, 1 file, -3/+1)
There is an error message within devm_ioremap_resource already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Guobin Huang <huangguobin4@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>