Age | Commit message | Author | Files | Lines (-/+)
2020-11-16treewide: rename nla_strlcpy to nla_strscpy.Francis Laniel29-49/+49
Calls to nla_strlcpy are now replaced by calls to nla_strscpy which is the new name of this function. Signed-off-by: Francis Laniel <laniel_francis@privacyrequired.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-16Modify return value of nla_strlcpy to match that of strscpy.Francis Laniel6-19/+30
nla_strlcpy now returns -E2BIG if src was truncated when written to dst. It also returns this error value if dstsize is 0 or higher than INT_MAX. For example, if src is "foo\0" and dst is 3 bytes long, the result will be: 1. "foG" after memcpy (G means garbage). 2. "fo\0" after memset. 3. -E2BIG is returned because src was not completely written into dst. The callers of nla_strlcpy were modified to take into account this modification. Signed-off-by: Francis Laniel <laniel_francis@privacyrequired.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
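A minimal sketch of the strscpy-style contract described above, using the function's later name nla_strscpy (see the rename entry above); the helper and attribute here are hypothetical, only the -E2BIG semantics come from the text:

    #include <net/netlink.h>

    /* attr is a string-typed netlink attribute received earlier. */
    static int get_name_sketch(const struct nlattr *attr)
    {
            char name[3];
            ssize_t ret;

            ret = nla_strscpy(name, attr, sizeof(name));
            if (ret == -E2BIG)
                    return ret;     /* e.g. "foo\0" is truncated to "fo\0" */
            return 0;               /* ret = chars copied, '\0' not counted */
    }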
2020-11-16Fix inefficient call to memset before memcpy in nla_strlcpy.Francis Laniel1-1/+2
Before this commit, nla_strlcpy first memset dst to 0 and then wrote src into it. This is inefficient because the bytes covered by src are written twice. This patch solves the issue by first writing src into dst and then filling the remainder of dst with 0's. Note that when src is longer than dst, only the terminating 0 is written; otherwise, as many 0's as needed are written to fill the rest of dst. For example, if src is "foo\0" and dst is 5 bytes long, the result will be: 1. "fooGG" after memcpy (G means garbage). 2. "foo\0\0" after memset. Signed-off-by: Francis Laniel <laniel_francis@privacyrequired.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
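A minimal sketch of the copy-then-pad ordering described above (not the kernel's exact implementation; the helper name is illustrative, and dstsize is assumed to be non-zero):

    #include <string.h>

    static void copy_then_pad(char *dst, const char *src, size_t dstsize)
    {
            size_t len = strlen(src);

            if (len >= dstsize)
                    len = dstsize - 1;              /* leave room for '\0' */
            memcpy(dst, src, len);                  /* write src first ... */
            memset(dst + len, 0, dstsize - len);    /* ... then zero only the tail */
    }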
2020-11-16r8169: improve rtl8169_start_xmitHeiner Kallweit1-9/+6
Improve the following in rtl8169_start_xmit:
- tp->cur_tx can be accessed in parallel by rtl_tx(), therefore annotate the race by using WRITE_ONCE
- avoid checking stop_queue a second time by moving the doorbell check
- netif_stop_queue() uses atomic operation set_bit() that includes a full memory barrier on some platforms, therefore use smp_mb__after_atomic to avoid overhead
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/80085451-3eaf-507a-c7c0-08d607c46fbc@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
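The pattern being described, as a hedged sketch (field and helper names are simplified from the driver, not verbatim):

    /* Publish the new producer index; rtl_tx() may read it concurrently. */
    WRITE_ONCE(tp->cur_tx, tp->cur_tx + frags + 1);

    if (!rtl_tx_slots_avail(tp, MAX_SKB_FRAGS)) {
            netif_stop_queue(dev);
            /* set_bit() inside netif_stop_queue() already implies a full
             * barrier on some architectures; smp_mb__after_atomic() is a
             * no-op there, avoiding a redundant smp_mb().
             */
            smp_mb__after_atomic();
    }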
2020-11-15net: xfrm: use core API for updating/providing statsLev Stipakov1-17/+2
Commit d3fd65484c781 ("net: core: add dev_sw_netstats_tx_add") has added function "dev_sw_netstats_tx_add()" to update net device per-cpu TX stats. Use this function instead of own code. While at it, remove xfrmi_get_stats64() and replace it with dev_get_tstats64(). Signed-off-by: Lev Stipakov <lev@openvpn.net> Reviewed-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/20201113215939.147007-1-lev@openvpn.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
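A hedged sketch of the core-API usage the patch refers to; the ops struct name is illustrative, only dev_sw_netstats_tx_add() and dev_get_tstats64() are the helpers named above:

    /* In the xmit path: account one packet of skb->len bytes in the
     * per-cpu tstats instead of open-coding u64_stats updates.
     */
    dev_sw_netstats_tx_add(dev, 1, skb->len);

    /* In the netdev ops: let the core aggregate the per-cpu counters. */
    static const struct net_device_ops xfrmi_ops_sketch = {
            .ndo_get_stats64 = dev_get_tstats64,
            /* ... */
    };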
2020-11-15net: openvswitch: use core API to update/provide statsLev Stipakov1-22/+7
Commit d3fd65484c781 ("net: core: add dev_sw_netstats_tx_add") has added function "dev_sw_netstats_tx_add()" to update net device per-cpu TX stats. Use this function instead of own code. While at it, remove internal_get_stats() and replace it with dev_get_tstats64(). Signed-off-by: Lev Stipakov <lev@openvpn.net> Reviewed-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/20201113215336.145998-1-lev@openvpn.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15Merge branch 'mlxsw-preparations-for-nexthop-objects-support-part-1-2'Jakub Kicinski3-286/+300
Ido Schimmel says:
====================
mlxsw: Preparations for nexthop objects support - part 1/2

This patch set contains small and non-functional changes aimed at making it easier to support nexthop objects in mlxsw. Follow up patches can be found here [1].

Patches #1-#4 add a type field to the nexthop group struct instead of the existing protocol field. This will be used later on to add a nexthop object type, which can contain both IPv4 and IPv6 nexthops.

Patches #5-#7 move the IPv4 FIB info pointer (i.e., 'struct fib_info') from the nexthop group struct to the route. The pointer will not be available when the nexthop group is a nexthop object, but it needs to be accessible to routes regardless.

Patch #8 is the biggest change, but it is an entirely cosmetic change and should therefore be easy to review. The motivation and the change itself are explained in detail in the commit message.

Patches #9-#12 perform small changes so that two functions that are currently split between IPv4 and IPv6 can be consolidated in subsequent patches.

Patch #15 removes an outdated comment.

[1] https://github.com/idosch/linux/tree/submit/nexthop_objects
====================
Link: https://lore.kernel.org/r/20201113160559.22148-1-idosch@idosch.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Remove outdated commentIdo Schimmel1-6/+0
Since commit 21151f64a458 ("mlxsw: Add new FIB entry type for reject routes") this comment is no longer correct. Remove it. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Consolidate mlxsw_sp_nexthop{4, 6}_type_fini()Ido Schimmel1-15/+3
The two functions are identical, so consolidate them to mlxsw_sp_nexthop_type_fini(). Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Consolidate mlxsw_sp_nexthop{4, 6}_type_init()Ido Schimmel1-57/+21
The two functions are now identical, so consolidate them to mlxsw_sp_nexthop_type_init(). Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Remove unused argument from mlxsw_sp_nexthop6_type_init()Ido Schimmel1-2/+1
Remove it as it is unused. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Pass nexthop netdev to mlxsw_sp_nexthop4_type_init()Ido Schimmel1-4/+3
Instead of passing the nexthop and resolving the nexthop netdev from it, pass the nexthop netdev directly. This will later allow us to consolidate code paths between IPv4 and IPv6 code. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Pass nexthop netdev to mlxsw_sp_nexthop6_type_init()Ido Schimmel1-3/+2
Instead of passing the route and resolving the nexthop netdev from it, pass the nexthop netdev directly. This will later allow us to consolidate code paths between IPv4 and IPv6 code. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_ipip: Remove overlay protocol from can_offload() callbackIdo Schimmel3-13/+5
The overlay protocol (i.e., IPv4/IPv6) that is being encapsulated has no impact on whether a certain IP tunnel can be offloaded or not. Only the underlay protocol matters. Therefore, remove the unused overlay protocol parameter from the callback. This will later allow us to consolidate code paths between IPv4 and IPv6 code. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Split nexthop group configuration to a different structIdo Schimmel1-149/+228
Currently, the individual nexthops member in the group and attributes of the group (e.g., its type) are stored in the same struct (i.e., 'struct mlxsw_sp_nexthop_group'). This is fine since the individual nexthops cannot change during the lifetime of the group. With nexthop objects this is no longer the case. An existing nexthop group can be replaced to use a new set of nexthops. Creating a new struct whenever a group is replaced entails replacing the group pointer of all the routes (i.e., 'struct mlxsw_sp_fib_entry') using the group. Avoid this inefficient step by splitting the nexthop group configuration to a different struct (i.e., 'struct mlxsw_sp_nexthop_group_info'). When a nexthop group is replaced a new group info struct is created and the individual routes do not need to be touched. Illustration after the change:

    mlxsw_sp_fib_entry      mlxsw_sp_nexthop_group    mlxsw_sp_nexthop_group_info
    +-------------------+   +----------------------+  +---------------------------+
    | nh_group;        +--->| nhgi;               +-->|                           |
    |                   |   |                      |  |                           |
    +-------------------+   +----------------------+  +---------------------------+

No functional changes intended. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
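A rough sketch of the resulting split (an illustrative subset of fields only, not the full driver definitions):

    struct mlxsw_sp_nexthop_group_info {
            struct mlxsw_sp_nexthop_group *nh_grp;
            unsigned int count;                     /* number of nexthops   */
            struct mlxsw_sp_nexthop nexthops[];     /* the replaceable part */
    };

    struct mlxsw_sp_nexthop_group {
            struct mlxsw_sp_nexthop_group_info *nhgi;  /* swapped on replace */
            /* group-wide attributes (e.g. its type) stay here */
    };

Replacing a group then means allocating a new group info and repointing nhgi, while every fib_entry keeps its existing nh_group pointer.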
2020-11-15mlxsw: spectrum_router: Move IPv4 FIB info into a union in nexthop group structIdo Schimmel1-11/+9
Instead of storing the FIB info as 'priv' when the nexthop group represents an IPv4 nexthop group, simply store it as a FIB info with a proper comment. When nexthop objects are supported, this field will become a union with the nexthop object's identifier. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Remove unused field 'prio' from IPv4 FIB entry structIdo Schimmel1-2/+0
Not used anywhere. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Store FIB info in routeIdo Schimmel1-6/+7
When needed, IPv4 routes fetch the FIB info (i.e., 'struct fib_info') from their associated nexthop group. This will not work when the nexthop group represents a nexthop object (i.e., 'struct nexthop'), as it will only have access to the nexthop's identifier. Instead, store the FIB info in the route itself. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Associate neighbour table with nexthop instead of groupIdo Schimmel1-11/+9
As explained in the previous patch, nexthop objects can have both IPv4 and IPv6 nexthops in the same group. Therefore, move the neighbour table to be a property of the nexthop instead of the nexthop group. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Use nexthop group type in hash table keyIdo Schimmel1-13/+12
Both IPv4 and IPv6 nexthop groups are hashed in the same table. The protocol field is used to indicate how the hash should be computed for each group. When nexthop group objects are supported, the hash will be computed for them based on the nexthop identifier. To differentiate between all the nexthop group types, encode the type of the group in the key instead of the protocol. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15mlxsw: spectrum_router: Add nexthop group type fieldIdo Schimmel1-16/+18
Currently, the type (i.e., IPv4/IPv6) of the nexthop group is derived from the neighbour table associated with the group. This is problematic when nexthop objects are taken into account, as a nexthop group object can contain both IPv4 and IPv6 nexthops. Instead, add a new field that indicates the type of the group and initialize it during the group's creation. Currently, the types are IPv4 ('struct fib_info') and IPv6 ('struct fib6_info'). In the future another type will be added for nexthop objects ('struct nexthop'). Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
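A possible shape of that type field, sketched with illustrative names:

    enum mlxsw_sp_nexthop_group_type {
            MLXSW_SP_NEXTHOP_GROUP_TYPE_IPV4,   /* backed by struct fib_info  */
            MLXSW_SP_NEXTHOP_GROUP_TYPE_IPV6,   /* backed by struct fib6_info */
            /* a nexthop-object type (struct nexthop) is planned to follow */
    };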
2020-11-15mlxsw: spectrum_router: Compare key with correct object typeIdo Schimmel1-6/+10
When comparing a key with a nexthop group in rhastable's obj_cmpfn() callback, make sure that the key and nexthop group are of the same type (i.e., IPv4 / IPv6). The bug is not currently visible because IPv6 nexthop groups do not populate the FIB info pointer and IPv4 nexthop groups do not set the ifindex for the individual nexthops. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15nfc: refine function nci_hci_resp_receivedAlex Shi1-6/+3
The parameter 'result' is not actually used, so remove it and avoid a gcc warning for an unused variable. Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com> Link: https://lore.kernel.org/r/1605239517-49707-1-git-send-email-alex.shi@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15inet: unexport udp{4|6}_lib_lookup_skb()Eric Dumazet2-2/+0
These functions do not need to be exported. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20201113113553.3411756-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15Merge branch 'tcp-avoid-indirect-call-in-__sk_stream_memory_free'Jakub Kicinski3-14/+21
Eric Dumazet says: ==================== tcp: avoid indirect call in __sk_stream_memory_free() Small improvement for CONFIG_RETPOLINE=y, when dealing with TCP sockets. ==================== Link: https://lore.kernel.org/r/20201113150809.3443527-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15tcp: avoid indirect call to tcp_stream_memory_free()Eric Dumazet1-2/+6
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15tcp: uninline tcp_stream_memory_free()Eric Dumazet2-12/+15
Both IPv4 and IPv6 needs it via a function pointer. Following patch will avoid the indirect call. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
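For context, indirect calls through such a function pointer are commonly avoided under CONFIG_RETPOLINE with the indirect-call wrapper helpers; a hedged sketch, not necessarily the exact code of the patch above:

    #include <linux/indirect_call_wrapper.h>

    static inline bool stream_memory_free_sketch(const struct sock *sk, int wake)
    {
            if (!sk->sk_prot->stream_memory_free)
                    return true;
            /* Call tcp_stream_memory_free() directly when the pointer
             * matches it, otherwise fall back to the indirect call.
             */
            return INDIRECT_CALL_1(sk->sk_prot->stream_memory_free,
                                   tcp_stream_memory_free, sk, wake);
    }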
2020-11-15IPv4: RTM_GETROUTE: Add RTA_ENCAP to resultOliver Herms1-0/+3
This patch adds an IPv4 routes encapsulation attribute to the result of netlink RTM_GETROUTE requests (e.g. ip route get 192.0.2.1). Signed-off-by: Oliver Herms <oliver.peter.herms@gmail.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20201113085517.GA1307262@tws Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15Merge branch 'ionic-updates'Jakub Kicinski5-58/+81
Shannon Nelson says:
====================
ionic updates

These updates are a bit of code cleaning and a minor bit of performance tweaking.

v3: convert ionic_lif_quiesce() to void
v2: added void cast on call to ionic_lif_quiesce()
    lowered batching threshold
    added patch to flatten calls to ionic_lif_rx_mode
    added patch to change from_ndo to can_sleep
====================
Link: https://lore.kernel.org/r/20201112182208.46770-1-snelson@pensando.io Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: useful names for booleansShannon Nelson3-9/+15
With a few more uses of true and false in function calls, we need to give them some useful names so we can tell from the calling point what we're doing. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: change set_rx_mode from_ndo to can_sleepShannon Nelson1-10/+10
Instead of having two different ways of expressing the same sleepability concept, using opposite logic, we can rework the from_ndo to can_sleep for a more consistent usage. Fixes: 1800eee16676 ("net: ionic: Replace in_interrupt() usage.") Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: flatten calls to ionic_lif_rx_modeShannon Nelson1-22/+16
The _ionic_lif_rx_mode() is only used once and really doesn't need to be broken out. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: use mc sync for multicast filtersShannon Nelson1-11/+8
We should be using the multicast sync routines for the multicast filters. Also, let's just flatten the logic a bit and pull the small unicast routine back into ionic_set_rx_mode(). Fixes: 1800eee16676 ("net: ionic: Replace in_interrupt() usage.") Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
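A sketch of the mc sync pattern the patch refers to; the callback names and bodies here are placeholders, only __dev_mc_sync() is the core API:

    static int rx_filter_add_sketch(struct net_device *netdev, const u8 *addr)
    {
            /* program a filter for addr in the device */
            return 0;
    }

    static int rx_filter_del_sketch(struct net_device *netdev, const u8 *addr)
    {
            /* remove the filter for addr */
            return 0;
    }

    /* in .ndo_set_rx_mode: sync the device filters with netdev->mc */
    __dev_mc_sync(netdev, rx_filter_add_sketch, rx_filter_del_sketch);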
2020-11-15ionic: batch rx buffer refillingShannon Nelson2-9/+13
We don't need to refill the rx descriptors on every napi if only a few were handled. Waiting until we can batch up a few together will save us a few Rx cycles. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
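The batching idea, as a minimal sketch; the threshold value and all names here are assumptions, not the driver's actual code:

    #define RX_REFILL_BATCH_SKETCH  16

    /* At the end of the rx napi poll: only refill once enough
     * descriptors have been consumed, instead of on every pass.
     */
    if (q->n_used_descs >= RX_REFILL_BATCH_SKETCH)
            rx_ring_refill_sketch(q);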
2020-11-15ionic: add lif quiesceShannon Nelson1-0/+20
After the queues are stopped, expressly quiesce the lif. This assures that even if the queues were in an odd state, the firmware will close up everything cleanly. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: check for link after netdev registrationShannon Nelson1-0/+2
Request a link check as soon as the netdev is registered rather than waiting for the watchdog to go off in order to get the interface operational a little more quickly. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-15ionic: start queues before announcing link upShannon Nelson1-6/+6
Change the order of operations in the link_up handling to be sure that the queues are up and ready before we announce that the link is up. Signed-off-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-14net: macb: Fix passing zero to 'PTR_ERR'YueHaibing1-8/+2
Check the pointer with IS_ERR() before passing it to PTR_ERR() to fix this, so PTR_ERR() is never handed a valid (non-error) pointer. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Link: https://lore.kernel.org/r/20201112144936.54776-1-yuehaibing@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
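The general pattern the fix refers to (the getter call here is illustrative, not the macb code):

    struct clk *clk = devm_clk_get(dev, "pclk");    /* illustrative getter */

    if (IS_ERR(clk))
            return PTR_ERR(clk);    /* only error pointers reach PTR_ERR() */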
2020-11-14ipv6: remove unused function ipv6_skb_idev()Lukas Bulwahn1-5/+0
Commit bdb7cc643fc9 ("ipv6: Count interface receive statistics on the ingress netdev") removed all callers of ipv6_skb_idev(). Hence, since then, ipv6_skb_idev() is unused and make CC=clang W=1 warns: net/ipv6/exthdrs.c:909:33: warning: unused function 'ipv6_skb_idev' [-Wunused-function] So, remove this unused function and the -Wunused-function warning. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Link: https://lore.kernel.org/r/20201113135012.32499-1-lukas.bulwahn@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-14Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextJakub Kicinski83-1271/+3907
Daniel Borkmann says:
====================
pull-request: bpf-next 2020-11-14

1) Add BTF generation for kernel modules and extend BTF infra in kernel e.g. support for split BTF loading and validation, from Andrii Nakryiko.
2) Support for pointers beyond pkt_end to recognize LLVM generated patterns on inlined branch conditions, from Alexei Starovoitov.
3) Implements bpf_local_storage for task_struct for BPF LSM, from KP Singh.
4) Enable FENTRY/FEXIT/RAW_TP tracing program to use the bpf_sk_storage infra, from Martin KaFai Lau.
5) Add XDP bulk APIs that introduce a defer/flush mechanism to optimize the XDP_REDIRECT path, from Lorenzo Bianconi.
6) Fix a potential (although rather theoretical) deadlock of hashtab in NMI context, from Song Liu.
7) Fixes for cross and out-of-tree build of bpftool and runqslower allowing build for different target archs on same source tree, from Jean-Philippe Brucker.
8) Fix error path in htab_map_alloc() triggered from syzbot, from Eric Dumazet.
9) Move functionality from test_tcpbpf_user into the test_progs framework so it can run in BPF CI, from Alexander Duyck.
10) Lift hashtab key_size limit to be larger than MAX_BPF_STACK, from Florian Lehner.

Note that for the fix from Song we have seen a sparse report on context imbalance which requires changes in sparse itself for proper annotation detection where this is currently being discussed on linux-sparse among developers [0]. Once we have more clarification/guidance after their fix, Song will follow-up.

[0] https://lore.kernel.org/linux-sparse/CAHk-=wh4bx8A8dHnX612MsDO13st6uzAz1mJ1PaHHVevJx_ZCw@mail.gmail.com/T/
    https://lore.kernel.org/linux-sparse/20201109221345.uklbp3lzgq6g42zb@ltop.local/T/

* git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (66 commits)
  net: mlx5: Add xdp tx return bulking support
  net: mvpp2: Add xdp tx return bulking support
  net: mvneta: Add xdp tx return bulking support
  net: page_pool: Add bulk support for ptr_ring
  net: xdp: Introduce bulking for xdp tx return path
  bpf: Expose bpf_d_path helper to sleepable LSM hooks
  bpf: Augment the set of sleepable LSM hooks
  bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  bpf: Rename some functions in bpf_sk_storage
  bpf: Folding omem_charge() into sk_storage_charge()
  selftests/bpf: Add asm tests for pkt vs pkt_end comparison.
  selftests/bpf: Add skb_pkt_end test
  bpf: Support for pointers beyond pkt_end.
  tools/bpf: Always run the *-clean recipes
  tools/bpf: Add bootstrap/ to .gitignore
  bpf: Fix NULL dereference in bpf_task_storage
  tools/bpftool: Fix build slowdown
  tools/runqslower: Build bpftool using HOSTCC
  tools/runqslower: Enable out-of-tree build
  ...
====================
Link: https://lore.kernel.org/r/20201114020819.29584-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-14net: phy: mscc: Add PTP support for 2 more VSC PHYsSteen Hegelund1-0/+2
Add VSC8572 and VSC8574 in the PTP configuration as they also support PTP. The relevant datasheets can be found here: - VSC8572: https://www.microchip.com/wwwproducts/en/VSC8572 - VSC8574: https://www.microchip.com/wwwproducts/en/VSC8574 Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com> Link: https://lore.kernel.org/r/20201112092250.914079-1-steen.hegelund@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-14Merge branch 'xdp-redirect-bulk'Daniel Borkmann7-17/+192
Lorenzo Bianconi says:
====================
XDP bulk APIs introduce a defer/flush mechanism to return pages belonging to the same xdp_mem_allocator object (identified via the mem.id field) in bulk to optimize I-cache and D-cache since xdp_return_frame is usually run inside the driver NAPI tx completion loop. Convert mvneta, mvpp2 and mlx5 drivers to xdp_return_frame_bulk APIs.

More details on benchmarks run on mlx5 can be found here: https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org

Changes since v5:
- do not keep looping over ptr_ring if the cache is full but release leftover pages running page_pool_return_page

Changes since v4:
- fix comments
- introduce xdp_frame_bulk_init utility routine
- compiler annotations for I-cache code layout
- move rcu_read_lock outside fast-path
- mlx5 xdp bulking code optimization

Changes since v3:
- align DEV_MAP_BULK_SIZE to XDP_BULK_QUEUE_SIZE
- refactor page_pool_put_page_bulk to avoid code duplication

Changes since v2:
- move mvneta changes in a dedicated patch

Changes since v1:
- improve comments
- rework xdp_return_frame_bulk routine logic
- move count and xa fields at the beginning of xdp_frame_bulk struct
- invert logic in page_pool_put_page_bulk for loop
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
2020-11-14net: mlx5: Add xdp tx return bulking supportLorenzo Bianconi1-4/+18
Convert mlx5 driver to xdp_return_frame_bulk APIs. XDP_REDIRECT (upstream codepath): 8.9Mpps XDP_REDIRECT (upstream codepath + bulking APIs): 10.2Mpps Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/250460319fd868b7b5668fc1deca74dd42813a90.1605267335.git.lorenzo@kernel.org
2020-11-14net: mvpp2: Add xdp tx return bulking supportLorenzo Bianconi1-1/+9
Convert mvpp2 driver to xdp_return_frame_bulk APIs. XDP_REDIRECT (upstream codepath): 1.79Mpps XDP_REDIRECT (upstream codepath + bulking APIs): 1.93Mpps Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Matteo Croce <mcroce@microsoft.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/0b38c295e58e8ce251ef6b4e2187a2f457f9f7a3.1605267335.git.lorenzo@kernel.org
2020-11-14net: mvneta: Add xdp tx return bulking supportLorenzo Bianconi1-1/+9
Convert mvneta driver to xdp_return_frame_bulk APIs. XDP_REDIRECT (upstream codepath): 275Kpps XDP_REDIRECT (upstream codepath + bulking APIs): 284Kpps Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/9af8014006d022fc0fec78cdaa71beb56999750d.1605267335.git.lorenzo@kernel.org
2020-11-14net: page_pool: Add bulk support for ptr_ringLorenzo Bianconi3-17/+88
Introduce the capability to batch page_pool ptr_ring refill since it is usually run inside the driver NAPI tx completion loop. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Link: https://lore.kernel.org/bpf/08dd249c9522c001313f520796faa777c4089e1c.1605267335.git.lorenzo@kernel.org
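A hedged sketch of how the new bulk entry point might be called from the return path; the array size and the collection step are illustrative:

    void *pages[16];
    int count = 0;

    /* ... gather pages that belong to the same page_pool ... */
    page_pool_put_page_bulk(pool, pages, count);    /* refill the ptr_ring in one go */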
2020-11-14net: xdp: Introduce bulking for xdp tx return pathLorenzo Bianconi2-1/+75
XDP bulk APIs introduce a defer/flush mechanism to return pages belonging to the same xdp_mem_allocator object (identified via the mem.id field) in bulk to optimize I-cache and D-cache since xdp_return_frame is usually run inside the driver NAPI tx completion loop. The bulk queue size is set to 16 to be aligned to how XDP_REDIRECT bulking works. The bulk is flushed when it is full or when mem.id changes. xdp_frame_bulk is usually stored/allocated on the function call-stack to avoid locking penalties. Current implementation considers only page_pool memory model. Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Link: https://lore.kernel.org/bpf/e190c03eac71b20c8407ae0fc2c399eda7835f49.1605267335.git.lorenzo@kernel.org
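A hedged sketch of the driver-side usage described above; the completion-loop details are elided, and xdp_frame_bulk_init()/xdp_return_frame_bulk()/xdp_flush_frame_bulk() are the APIs this series adds:

    struct xdp_frame_bulk bq;
    struct xdp_frame *xdpf;     /* taken from each completed tx descriptor */

    xdp_frame_bulk_init(&bq);
    rcu_read_lock();            /* covers the mem.id -> allocator lookup */

    /* for each completed tx descriptor: */
    xdp_return_frame_bulk(xdpf, &bq);   /* deferred; auto-flushes when full
                                         * or when mem.id changes */

    xdp_flush_frame_bulk(&bq);          /* release any leftover frames */
    rcu_read_unlock();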
2020-11-14net: stmmac: platform: use optional clk/reset get APIsJisheng Zhang1-13/+9
Use the devm_reset_control_get_optional() and devm_clk_get_optional() rather than open coding them. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Link: https://lore.kernel.org/r/20201112092606.5173aa6f@xhacker.debian Signed-off-by: Jakub Kicinski <kuba@kernel.org>
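A sketch of the optional-getter pattern; the plat fields and consumer names are illustrative, only the two devm helpers are the ones named above:

    plat->clk = devm_clk_get_optional(&pdev->dev, "stmmaceth");
    if (IS_ERR(plat->clk))
            return PTR_ERR(plat->clk);
    /* NULL (not an error) is returned when the clock simply isn't there */

    plat->rst = devm_reset_control_get_optional(&pdev->dev, "stmmaceth");
    if (IS_ERR(plat->rst))
            return PTR_ERR(plat->rst);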
2020-11-14r8169: improve rtl_txHeiner Kallweit1-4/+3
We can simplify the for() condition and eliminate variable tx_left. The change also considers that tp->cur_tx may be incremented by a racing rtl8169_start_xmit(). In addition replace the write to tp->dirty_tx and the following smp_mb() with an equivalent call to smp_store_mb(). This implicitly adds a WRITE_ONCE() to the write. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/c2e19e5e-3d3f-d663-af32-13c3374f5def@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
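The equivalent transform named above, sketched:

    /* Before:
     *     tp->dirty_tx = dirty_tx;
     *     smp_mb();
     * After:
     */
    smp_store_mb(tp->dirty_tx, dirty_tx);   /* WRITE_ONCE() + full barrier */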
2020-11-14r8169: use READ_ONCE in rtl_tx_slots_availHeiner Kallweit1-1/+2
tp->dirty_tx and tp->cur_tx may be changed by a racing rtl_tx() or rtl8169_start_xmit(). Use READ_ONCE() to annotate the races and ensure that the compiler doesn't use cached values. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/5676fee3-f6b4-84f2-eba5-c64949a371ad@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>