path: root/tools
Age  Commit message  (Author, files changed, lines -/+)
2021-06-12  tools/bpftool: Fix error return code in do_batch()  (Zhihao Cheng, 1 file, -1/+3)
Fix to return a negative error code from the error handling case instead of 0, as done elsewhere in this function. Fixes: 668da745af3c2 ("tools: bpftool: add support for quotations ...") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20210609115916.2186872-1-chengzhihao1@huawei.com
2021-06-12  libbpf: Simplify the return expression of bpf_object__init_maps function  (Wang Hai, 1 file, -3/+1)
There is no need for special treatment of the 'ret == 0' case. This patch simplifies the return expression. Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210609115651.3392580-1-wanghai38@huawei.com
2021-06-08  selftests, bpf: Make docs tests fail more reliably  (Joe Stringer, 3 files, -1/+4)
Previously, if rst2man caught errors, then these would be ignored and the output file would be written anyway. This would allow developers to introduce regressions in the docs comments in the BPF headers. Additionally, even if you instruct rst2man to fail out, it will still write out to the destination target file, so if you ran the tests twice in a row it would always pass. Use a temporary file for the initial run to ensure that if rst2man fails out under "--strict" mode, subsequent runs will not automatically pass. Tested via ./tools/testing/selftests/bpf/test_doc_build.sh Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20210608015756.340385-1-joe@cilium.io
2021-06-08  libbpf: Fix pr_warn type warnings on 32bit  (Michal Suchanek, 1 file, -2/+2)
The printed value is ptrdiff_t and is formatted with %ld. This works on 64bit but produces a warning on 32bit. Fix the format specifier to %td.

Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.")
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210604112448.32297-1-msuchanek@suse.de
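For reference, a minimal standalone illustration of the format-specifier issue (not taken from the patch itself): %td is the standard conversion for ptrdiff_t and stays correct on both 32-bit and 64-bit targets, whereas %ld only matches on LP64.

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          char buf[16];
          ptrdiff_t off = &buf[8] - &buf[0];

          /* portable: %td matches ptrdiff_t on 32-bit and 64-bit alike */
          printf("offset = %td\n", off);
          return 0;
  }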
2021-06-08  tools/bpftool: Fix cross-build  (Jean-Philippe Brucker, 1 file, -1/+4)
When the bootstrap and final bpftool have different architectures, we need to build two distinct disasm.o objects. Add a recipe for the bootstrap disasm.o.

After commit d510296d331a ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.") cross-building bpftool didn't work anymore, because the bootstrap bpftool was linked using objects from different architectures:

  $ make O=/tmp/bpftool ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C tools/bpf/bpftool/ V=1
  [...]
  aarch64-linux-gnu-gcc ... -c -MMD -o /tmp/bpftool/disasm.o /home/z/src/linux/kernel/bpf/disasm.c
  gcc ... -c -MMD -o /tmp/bpftool//bootstrap/main.o main.c
  gcc ... -o /tmp/bpftool//bootstrap/bpftool /tmp/bpftool//bootstrap/main.o ... /tmp/bpftool/disasm.o
  /usr/bin/ld: /tmp/bpftool/disasm.o: Relocations in generic ELF (EM: 183)
  /usr/bin/ld: /tmp/bpftool/disasm.o: Relocations in generic ELF (EM: 183)
  /usr/bin/ld: /tmp/bpftool/disasm.o: Relocations in generic ELF (EM: 183)
  /usr/bin/ld: /tmp/bpftool/disasm.o: error adding symbols: file in wrong format
  collect2: error: ld returned 1 exit status
  [...]

The final bpftool was built for e.g. arm64, while the bootstrap bpftool, executed on the host, was built for x86. The problem here was that disasm.o linked into the bootstrap bpftool was arm64 rather than x86. With the fix we build two disasm.o, one for the target bpftool in arm64, and one for the bootstrap bpftool in x86.

Fixes: d510296d331a ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210603170515.1854642-1-jean-philippe@linaro.org
2021-06-03  selftests/bpf: Add xdp_redirect_multi into .gitignore  (Andrii Nakryiko, 1 file, -0/+1)
When xdp_redirect_multi test binary was added recently, it wasn't added to .gitignore. Fix that. Fixes: d23292476297 ("selftests/bpf: Add xdp_redirect_multi test") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210603004026.2698513-5-andrii@kernel.org
2021-06-03  libbpf: Install skel_internal.h header used from light skeletons  (Andrii Nakryiko, 1 file, -1/+1)
Light skeleton code assumes skel_internal.h header to be installed system-wide by libbpf package. Make sure it is actually installed. Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210603004026.2698513-4-andrii@kernel.org
2021-06-03  libbpf: Refactor header installation portions of Makefile  (Andrii Nakryiko, 1 file, -12/+7)
As we gradually get more headers that have to be installed, it's quite annoying to copy/paste long $(call) commands. So extract that logic and do a simple $(foreach) over the list of headers. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210603004026.2698513-3-andrii@kernel.org
2021-06-03  libbpf: Move few APIs from 0.4 to 0.5 version  (Andrii Nakryiko, 1 file, -3/+3)
Official libbpf 0.4 release doesn't include three APIs that were tentatively put into 0.4 section. Fix libbpf.map and move these three APIs:

  - bpf_map__initial_value;
  - bpf_map_lookup_and_delete_elem_flags;
  - bpf_object__gen_loader.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210603004026.2698513-2-andrii@kernel.org
2021-05-28  bpf, docs: Add llvm_reloc.rst to explain llvm bpf relocations  (Yonghong Song, 1 file, -0/+19)
LLVM upstream commit https://reviews.llvm.org/D102712 made some changes to bpf relocations to make them llvm linker lld friendly. The scope of existing relocations R_BPF_64_{64,32} is narrowed and new relocations R_BPF_64_{ABS32,ABS64,NODYLD32} are introduced. Let us add some documentation about llvm bpf relocations so people can understand how to resolve them properly in their respective tools. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210526152457.335210-1-yhs@fb.com
2021-05-26  libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h  (Florent Revest, 16 files, -68/+74)
These macros are convenient wrappers around the bpf_seq_printf and bpf_snprintf helpers. They are currently provided by bpf_tracing.h which targets low level tracing primitives. bpf_helpers.h is a better fit. The __bpf_narg and __bpf_apply are needed in both files and provided twice. __bpf_empty isn't used anywhere and is removed from bpf_tracing.h Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Florent Revest <revest@chromium.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210526164643.2881368-1-revest@chromium.org
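A hedged sketch of typical BPF_SEQ_PRINTF usage after the move, now pulled in via bpf_helpers.h alone; it assumes vmlinux.h provides the iterator context types, and the program/section names are illustrative.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("iter/task")
  int dump_task(struct bpf_iter__task *ctx)
  {
          struct seq_file *seq = ctx->meta->seq;
          struct task_struct *task = ctx->task;

          if (!task)
                  return 0;
          /* BPF_SEQ_PRINTF packs its arguments and calls bpf_seq_printf() */
          BPF_SEQ_PRINTF(seq, "pid=%d tgid=%d\n", task->pid, task->tgid);
          return 0;
  }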
2021-05-26  selftests/bpf: Add xdp_redirect_multi test  (Hangbin Liu, 4 files, -1/+526)
Add a bpf selftest for the new helper xdp_redirect_map_multi(). In this test there are 3 forward groups and 1 exclude group. The test will redirect each interface's packets to all the interfaces in the forward group, and exclude the interface in the exclude map.

Two maps (DEVMAP, DEVMAP_HASH) and two xdp modes (generic, driver) will be tested. The XDP egress program will also be tested by setting the pkt src MAC to the egress interface's MAC address.

For more test details, you can find it in the test script. Here is the test result.

  ]# time ./test_xdp_redirect_multi.sh
  Pass: xdpgeneric arp(F_BROADCAST) ns1-1
  Pass: xdpgeneric arp(F_BROADCAST) ns1-2
  Pass: xdpgeneric arp(F_BROADCAST) ns1-3
  Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
  Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
  Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
  Pass: xdpgeneric IPv6 (no flags) ns1-1
  Pass: xdpgeneric IPv6 (no flags) ns1-2
  Pass: xdpdrv arp(F_BROADCAST) ns1-1
  Pass: xdpdrv arp(F_BROADCAST) ns1-2
  Pass: xdpdrv arp(F_BROADCAST) ns1-3
  Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
  Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
  Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
  Pass: xdpdrv IPv6 (no flags) ns1-1
  Pass: xdpdrv IPv6 (no flags) ns1-2
  Pass: xdpegress mac ns1-2
  Pass: xdpegress mac ns1-3
  Summary: PASS 18, FAIL 0

  real    1m18.321s
  user    0m0.123s
  sys     0m0.350s

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-5-liuhangbin@gmail.com
2021-05-26  xdp: Extend xdp_redirect_map with broadcast support  (Hangbin Liu, 1 file, -2/+12)
This patch adds two flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS, to extend xdp_redirect_map for broadcast support. With BPF_F_BROADCAST the packet will be broadcasted to all the interfaces in the map. With BPF_F_EXCLUDE_INGRESS the ingress interface will be excluded when broadcasting.

When getting the devices in the dev hash map via dev_map_hash_get_next_key(), there is a possibility that we fall back to the first key when a device was removed. This will duplicate packets on some interfaces. So just walk the whole buckets to avoid this issue. For the dev array map, we also walk the whole map to find valid interfaces.

Function bpf_clear_redirect_map() was removed in commit ee75aef23afe ("bpf, xdp: Restructure redirect actions"). Add it back as we need to use ri->map again.

With test topology:

  +-------------------+             +-------------------+
  | Host A (i40e 10G) |  ---------- | eno1(i40e 10G)    |
  +-------------------+             |                   |
                                    |   Host B          |
  +-------------------+             |                   |
  | Host C (i40e 10G) |  ---------- | eno2(i40e 10G)    |
  +-------------------+             |                   |
                                    | +------+          |
                                    | | veth0 -- | Peer |
                                    | | veth1 -- |      |
                                    | | veth2 -- |  NS  |
                                    | +------+          |
                                    +-------------------+

On Host A:
  # pktgen/pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -s 64

On Host B (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, 128G Memory):
Use xdp_redirect_map and xdp_redirect_map_multi in samples/bpf for testing. All the veth peers in the NS have a XDP_DROP program loaded. The forward_map max_entries in xdp_redirect_map_multi is modified to 4.

Testing the performance impact on the regular xdp_redirect path with and without patch (to check impact of additional check for broadcast mode):

  5.12 rc4         | redirect_map        i40e->i40e      | 2.0M |  9.7M
  5.12 rc4         | redirect_map        i40e->veth      | 1.7M | 11.8M
  5.12 rc4 + patch | redirect_map        i40e->i40e      | 2.0M |  9.6M
  5.12 rc4 + patch | redirect_map        i40e->veth      | 1.7M | 11.7M

Testing the performance when cloning packets with the redirect_map_multi test, using a redirect map size of 4, filled with 1-3 devices:

  5.12 rc4 + patch | redirect_map multi  i40e->veth (x1) | 1.7M | 11.4M
  5.12 rc4 + patch | redirect_map multi  i40e->veth (x2) | 1.1M |  4.3M
  5.12 rc4 + patch | redirect_map multi  i40e->veth (x3) | 0.8M |  2.6M

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210519090747.1655268-3-liuhangbin@gmail.com
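To make the new flags concrete, here is a hedged sketch of an XDP program using them; the map name, sizes and section names are illustrative, and the flags are assumed to come from the updated uapi linux/bpf.h.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(int));
          __uint(max_entries, 32);
  } forward_map SEC(".maps");

  SEC("xdp")
  int xdp_broadcast(struct xdp_md *ctx)
  {
          /* Clone the packet to every interface in forward_map except the
           * one it arrived on. */
          return bpf_redirect_map(&forward_map, 0,
                                  BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
  }

  char _license[] SEC("license") = "GPL";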
2021-05-26  bpftool: Set errno on skeleton failures and propagate errors  (Andrii Nakryiko, 1 file, -9/+18)
Follow libbpf's error handling conventions and pass through errors and errno properly. Skeleton code always returned NULL on errors (not ERR_PTR(err)), so there are no backwards compatibility concerns. But now we also set errno properly, so it's possible to distinguish different reasons for failure, if necessary. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20210525035935.1461796-6-andrii@kernel.org
2021-05-26  libbpf: Streamline error reporting for high-level APIs  (Andrii Nakryiko, 9 files, -468/+531)
Implement changes to error reporting for high-level libbpf APIs to make them less surprising and less error-prone to users:

  - in all the cases when error happens, errno is set to an appropriate error value;
  - in libbpf 1.0 mode, all pointer-returning APIs return NULL on error and the error code is communicated through errno; this applies both to APIs that already returned NULL before (so now they communicate more detailed error codes), as well as to many APIs that used the ERR_PTR() macro and encoded error numbers as fake pointers;
  - in legacy (default) mode, those APIs that were returning ERR_PTR(err) continue doing so, but still set errno.

With these changes, errno can always be used to extract the actual error, regardless of legacy or libbpf 1.0 modes. This is utilized internally in libbpf in places where libbpf uses its own high-level APIs. libbpf_get_error() is adapted to handle both cases completely transparently to end-users (and is used by libbpf consistently as well).

More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]).

[0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-5-andrii@kernel.org
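As a brief, hedged illustration of the calling convention this enables (the object file name is hypothetical): libbpf_get_error() resolves the error the same way whether the API returned NULL plus errno (1.0 mode) or an ERR_PTR()-encoded pointer (legacy mode).

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  static struct bpf_object *open_obj(const char *path)
  {
          struct bpf_object *obj = bpf_object__open_file(path, NULL);
          long err = libbpf_get_error(obj);  /* works in legacy and 1.0 modes alike */

          if (err) {
                  fprintf(stderr, "failed to open %s: %ld (errno: %d)\n", path, err, errno);
                  return NULL;
          }
          return obj;
  }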
2021-05-26  libbpf: Streamline error reporting for low-level APIs  (Andrii Nakryiko, 3 files, -50/+156)
Ensure that low-level APIs behave uniformly across the libbpf as follows:

  - in case of an error, errno is always set to the correct error code;
  - when libbpf 1.0 mode is enabled with LIBBPF_STRICT_DIRECT_ERRS option to libbpf_set_strict_mode(), return -Exxx error value directly, instead of -1;
  - by default, until libbpf 1.0 is released, keep returning -1 directly.

More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]).

[0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-4-andrii@kernel.org
2021-05-26  selftests/bpf: Turn on libbpf 1.0 mode and fix all IS_ERR checks  (Andrii Nakryiko, 56 files, -425/+347)
Turn on libbpf 1.0 mode. Fix all the explicit IS_ERR checks that now will be broken because libbpf returns NULL on error (and sets errno). Fix ASSERT_OK_PTR and ASSERT_ERR_PTR to work for both the old and new modes and use them throughout selftests. This is trivial to do by using the libbpf_get_error() API that all libbpf users are supposed to use, instead of IS_ERR checks.

A bunch of checks also did explicit -1 comparison for various fd-returning APIs. Such checks are replaced with >= 0 or < 0 cases.

There were also a few misuses of bpf_object__find_map_by_name() in test_maps. Those are fixed in this patch as well.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-3-andrii@kernel.org
2021-05-26  libbpf: Add libbpf_set_strict_mode() API to turn on libbpf 1.0 behaviors  (Andrii Nakryiko, 5 files, -0/+71)
Add libbpf_set_strict_mode() API that allows application to simulate libbpf 1.0 breaking changes before libbpf 1.0 is released. This will help users migrate gradually and with confidence. For now only ALL or NONE options are available, subsequent patches will add more flags. This patch is preliminary for selftests/bpf changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20210525035935.1461796-2-andrii@kernel.org
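A minimal, hedged usage sketch: the call is made once, early, before any other libbpf API. The header location is an assumption (the declaration is expected to live in the new bpf/libbpf_legacy.h shipped with libbpf).

  #include <bpf/libbpf.h>
  #include <bpf/libbpf_legacy.h>   /* assumed location of libbpf_set_strict_mode() */

  int main(void)
  {
          /* Opt in to all libbpf 1.0 behaviors up front. */
          if (libbpf_set_strict_mode(LIBBPF_STRICT_ALL))
                  return 1;

          /* ... open/load BPF objects as usual ... */
          return 0;
  }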
2021-05-25  libbpf: Add support for new llvm bpf relocations  (Yonghong Song, 2 files, -1/+8)
LLVM patch https://reviews.llvm.org/D102712 narrowed the scope of existing R_BPF_64_64 and R_BPF_64_32 relocations, and added three new relocations, R_BPF_64_ABS64, R_BPF_64_ABS32 and R_BPF_64_NODYLD32. The main motivation is to make relocations linker friendly.

This change, unfortunately, breaks libbpf build, and we will see errors like below:

  libbpf: ELF relo #0 in section #6 has unexpected type 2 in /home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o
  Error: failed to link '/home/yhs/work/bpf-next/tools/testing/selftests/bpf/bpf_tcp_nogpl.o': Unknown error -22 (-22)

The new relocation R_BPF_64_ABS64 is generated and the libbpf linker sanity check doesn't understand it.

  Relocation section '.rel.struct_ops' at offset 0x1410 contains 1 entries:
      Offset             Info             Type               Symbol's Value  Symbol's Name
  0000000000000018  0000000700000002 R_BPF_64_ABS64         0000000000000000 nogpltcp_init

Look at the selftests/bpf/bpf_tcp_nogpl.c,

  void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk)
  {
  }

  SEC(".struct_ops")
  struct tcp_congestion_ops bpf_nogpltcp = {
          .init           = (void *)nogpltcp_init,
          .name           = "bpf_nogpltcp",
  };

The new llvm relocation scheme categorizes the 'nogpltcp_init' reference as R_BPF_64_ABS64 instead of R_BPF_64_64, which is used to specify ld_imm64 relocation in the new scheme. Let us fix the linker sanity checking by including R_BPF_64_ABS64 and R_BPF_64_ABS32. There is no need to check R_BPF_64_NODYLD32, which is used for .BTF and .BTF.ext.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210522162341.3687617-1-yhs@fb.com
2021-05-24  selftests/bpf: Add bpf_lookup_and_delete_elem tests  (Denis Salopek, 4 files, -0/+339)
Add bpf selftests and extend existing ones for a new function bpf_lookup_and_delete_elem() for (percpu) hash and (percpu) LRU hash map types. In test_lru_map and test_maps we add an element, lookup_and_delete it, then check whether it's deleted. The newly added lookup_and_delete prog tests practically do the same thing but additionally use a BPF program to change the value of the element for LRU maps. Signed-off-by: Denis Salopek <denis.salopek@sartura.hr> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/d30d3e0060c1f750e133579623cf1c60ff58f3d9.1620763117.git.denis.salopek@sartura.hr
2021-05-24  bpf: Extend libbpf with bpf_map_lookup_and_delete_elem_flags  (Denis Salopek, 3 files, -0/+16)
Add bpf_map_lookup_and_delete_elem_flags() libbpf API in order to use the BPF_F_LOCK flag with the map_lookup_and_delete_elem() function. Signed-off-by: Denis Salopek <denis.salopek@sartura.hr> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/15b05dafe46c7e0750d110f233977372029d1f62.1620763117.git.denis.salopek@sartura.hr
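A hedged sketch of how a caller might use the new API; map_fd, key and value are hypothetical and must match the map's key/value layout (and the value must embed a struct bpf_spin_lock for BPF_F_LOCK to apply).

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  /* Atomically read and remove a spin-lock protected element. */
  static int pop_elem_locked(int map_fd, const void *key, void *value)
  {
          return bpf_map_lookup_and_delete_elem_flags(map_fd, key, value, BPF_F_LOCK);
  }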
2021-05-24  bpf: Add lookup_and_delete_elem support to hashtab  (Denis Salopek, 1 file, -0/+13)
Extend the existing bpf_map_lookup_and_delete_elem() functionality to hashtab map types, in addition to stacks and queues. Create a new hashtab bpf_map_ops function that does lookup and deletion of the element under the same bucket lock and add the created map_ops to bpf.h. Signed-off-by: Denis Salopek <denis.salopek@sartura.hr> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/4d18480a3e990ffbf14751ddef0325eed3be2966.1620763117.git.denis.salopek@sartura.hr
2021-05-24  libbpf: Skip bpf_object__probe_loading for light skeleton  (Stanislav Fomichev, 1 file, -0/+3)
I'm getting the following error when running 'gen skeleton -L' as regular user:

  libbpf: Error in bpf_object__probe_loading():Operation not permitted(1). Couldn't load trivial BPF program. Make sure your kernel supports BPF (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big enough value.

Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210521030653.2626513-1-sdf@google.com
2021-05-21  selftests: net: devlink_port_split.py: skip the test if no devlink device  (Po-Hsu Lin, 1 file, -1/+7)
When there is no devlink device, the following command will return:

  $ devlink -j dev show
  {dev:{}}

This will cause IndexError when trying to access the first element in dev of this json dataset. Use the kselftest framework skip code to skip this test in this case.

Example output with this change:

  # selftests: net: devlink_port_split.py
  # no devlink device was found, test skipped
  ok 7 selftests: net: devlink_port_split.py # SKIP

Link: https://bugs.launchpad.net/bugs/1928889
Signed-off-by: Po-Hsu Lin <po-hsu.lin@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller, 56 files, -314/+3102)
Alexei Starovoitov says:

====================
pull-request: bpf-next 2021-05-19

The following pull-request contains BPF updates for your *net-next* tree.

We've added 43 non-merge commits during the last 11 day(s) which contain a total of 74 files changed, 3717 insertions(+), 578 deletions(-).

The main changes are:

1) syscall program type, fd array, and light skeleton, from Alexei.

2) Stop emitting static variables in skeleton, from Andrii.

3) Low level tc-bpf api, from Kumar.

4) Reduce verifier kmalloc/kfree churn, from Lorenz.
====================
2021-05-19  bpf: Add cmd alias BPF_PROG_RUN  (Alexei Starovoitov, 2 files, -1/+2)
Add BPF_PROG_RUN command as an alias to BPF_PROG_TEST_RUN to better indicate the full range of use cases done by the command.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210519014032.20908-1-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Convert test trace_printk to lskel.  (Alexei Starovoitov, 2 files, -2/+2)
Convert test trace_printk to light skeleton to check rodata support in lskel. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-22-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Convert test printk to use rodata.  (Alexei Starovoitov, 2 files, -3/+6)
Convert test trace_printk to more aggressively validate and use rodata. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-21-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Convert atomics test to light skeleton.  (Alexei Starovoitov, 2 files, -37/+37)
Convert prog_tests/atomics.c to lskel.h Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-20-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Convert few tests to light skeleton.  (Alexei Starovoitov, 10 files, -28/+41)
Convert few tests that don't use CO-RE to light skeleton. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-19-alexei.starovoitov@gmail.com
2021-05-19  bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.  (Alexei Starovoitov, 6 files, -24/+482)
Add -L flag to bpftool to use the libbpf gen_trace facility and syscall/loader program for skeleton generation and program loading.

"bpftool gen skeleton -L" command will generate a "light skeleton" or "loader skeleton" that is similar to the existing skeleton, but has one major difference:

  $ bpftool gen skeleton lsm.o > lsm.skel.h
  $ bpftool gen skeleton -L lsm.o > lsm.lskel.h
  $ diff lsm.skel.h lsm.lskel.h
  @@ -5,34 +4,34 @@
   #define __LSM_SKEL_H__
   #include <stdlib.h>
  -#include <bpf/libbpf.h>
  +#include <bpf/bpf.h>

The light skeleton does not use the majority of libbpf infrastructure. It doesn't need libelf. It doesn't parse the .o file. It only needs a few sys_bpf wrappers. All of them are in the bpf/bpf.h file. In the future libbpf/bpf.c can be inlined into bpf.h, so not even libbpf.a would be needed to work with the light skeleton.

"bpftool prog load -L file.o" command is introduced for debugging of syscall/loader program generation. Just like the same command without -L, it will try to load the programs from file.o into the kernel. It won't even try to pin them.

"bpftool prog load -L -d file.o" command will provide additional debug messages on how the syscall/loader program was generated. Also the execution of the syscall/loader program will use bpf_trace_printk() for each step of loading BTF, creating maps, and loading programs. The user can do "cat /.../trace_pipe" for further debug.

An example of fexit_sleep.lskel.h generated from progs/fexit_sleep.c:

  struct fexit_sleep {
          struct bpf_loader_ctx ctx;
          struct {
                  struct bpf_map_desc bss;
          } maps;
          struct {
                  struct bpf_prog_desc nanosleep_fentry;
                  struct bpf_prog_desc nanosleep_fexit;
          } progs;
          struct {
                  int nanosleep_fentry_fd;
                  int nanosleep_fexit_fd;
          } links;
          struct fexit_sleep__bss {
                  int pid;
                  int fentry_cnt;
                  int fexit_cnt;
          } *bss;
  };

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-18-alexei.starovoitov@gmail.com
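For orientation, a hedged sketch of how user space might consume such a light skeleton; the <obj>__open_and_load()/__attach()/__destroy() names follow the usual skeleton naming convention and are assumed to be emitted by "gen skeleton -L" as well.

  #include <stdio.h>
  #include "fexit_sleep.lskel.h"

  int main(void)
  {
          struct fexit_sleep *skel = fexit_sleep__open_and_load();

          if (!skel)
                  return 1;
          if (fexit_sleep__attach(skel))
                  goto out;

          /* .bss is mmap-ed after load, so globals are directly visible */
          printf("fentry_cnt = %d\n", skel->bss->fentry_cnt);
  out:
          fexit_sleep__destroy(skel);
          return 0;
  }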
2021-05-19  libbpf: Introduce bpf_map__initial_value().  (Alexei Starovoitov, 3 files, -0/+10)
Introduce bpf_map__initial_value() to read initial contents of mmaped data/rodata/bss maps. Note that bpf_map__set_initial_value() doesn't allow modifying kconfig map while bpf_map__initial_value() allows reading its values. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-17-alexei.starovoitov@gmail.com
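A hedged sketch of reading back such initial contents; the map is assumed to be a .rodata/.data/.bss map of an opened bpf_object.

  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Print how many bytes of compile-time data an mmap-able map carries. */
  static void dump_initial_size(struct bpf_map *map)
  {
          size_t sz;
          void *data = bpf_map__initial_value(map, &sz);

          if (data)
                  printf("%s: %zu bytes of initial data\n", bpf_map__name(map), sz);
  }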
2021-05-19  libbpf: Cleanup temp FDs when intermediate sys_bpf fails.  (Alexei Starovoitov, 2 files, -4/+45)
Fix loader program to close temporary FDs when intermediate sys_bpf command fails. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-16-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Generate loader program out of BPF ELF file.  (Alexei Starovoitov, 8 files, -35/+1060)
The BPF program loading process performed by libbpf is quite complex and consists of the following steps:

"open" phase:
  - parse elf file and remember relocations, sections
  - collect externs and ksyms including their btf_ids in prog's BTF
  - patch BTF datasec (since llvm couldn't do it)
  - init maps (old style map_def, BTF based, global data map, kconfig map)
  - collect relocations against progs and maps

"load" phase:
  - probe kernel features
  - load vmlinux BTF
  - resolve externs (kconfig and ksym)
  - load program BTF
  - init struct_ops
  - create maps
  - apply CO-RE relocations
  - patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID
  - reposition subprograms and adjust call insns
  - sanitize and load progs

During this process libbpf does sys_bpf() calls to load BTF, create maps, populate maps and finally load programs. Instead of actually doing the syscalls, generate a trace of what libbpf would have done and represent it as the "loader program".

The "loader program" consists of a single map with:
  - union bpf_attr(s)
  - BTF bytes
  - map value bytes
  - insns bytes
and a single bpf program that passes bpf_attr(s) and data into the bpf_sys_bpf() helper. Executing such a "loader program" via the bpf_prog_test_run() command will replay the sequence of syscalls that libbpf would have done, which will result in the same maps created and programs loaded as specified in the elf file. The "loader program" removes libelf and the majority of the libbpf dependency from the program loading process.

kconfig, typeless ksym, struct_ops and CO-RE are not supported yet.

The order of relocate_data and relocate_calls had to change, so that bpf_gen__prog_load() can see all relocations for a given program with correct insn_idx-es.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-15-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Preliminary support for fd_idx  (Alexei Starovoitov, 1 file, -6/+25)
Prep libbpf to use FD_IDX kernel feature when generating loader program. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-14-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Add bpf_object pointer to kernel_supports().  (Alexei Starovoitov, 1 file, -22/+22)
Add a pointer to 'struct bpf_object' to kernel_supports() helper. It will be used in the next patch. No functional changes. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-13-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Change the order of data and text relocations.  (Alexei Starovoitov, 2 files, -14/+85)
In order to be able to generate loader program in the later patches change the order of data and text relocations. Also improve the test to include data relos. If the kernel supports "FD array" the map_fd relocations can be processed before text relos since generated loader program won't need to manually patch ld_imm64 insns with map_fd. But ksym and kfunc relocations can only be processed after all calls are relocated, since loader program will consist of a sequence of calls to bpf_btf_find_by_name_kind() followed by patching of btf_id and btf_obj_fd into corresponding ld_imm64 insns. The locations of those ld_imm64 insns are specified in relocations. Hence process all data relocations (maps, ksym, kfunc) together after call relos. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-12-alexei.starovoitov@gmail.com
2021-05-19  bpf: Add bpf_sys_close() helper.  (Alexei Starovoitov, 1 file, -0/+7)
Add bpf_sys_close() helper to be used by the syscall/loader program to close intermediate FDs and other cleanup. Note this helper must never be allowed inside fdget/fdput bracketing. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-11-alexei.starovoitov@gmail.com
2021-05-19  bpf: Add bpf_btf_find_by_name_kind() helper.  (Alexei Starovoitov, 1 file, -0/+7)
Add new helper:

  long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags)
        Description
                Find BTF type with given name and kind in vmlinux BTF or in module's BTFs.
        Return
                Returns btf_id and btf_obj_fd in lower and upper 32 bits.

It will be used by the loader program to find the btf_id to attach the program to and to find btf_ids of ksyms.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-10-alexei.starovoitov@gmail.com
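A hedged sketch of calling the helper from a syscall-type BPF program (where it is expected to be available); the target function name is purely illustrative.

  #include <linux/bpf.h>
  #include <linux/btf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("syscall")
  int find_btf_id(void *ctx)
  {
          char name[] = "bpf_fentry_test1";   /* hypothetical target */
          long ret = bpf_btf_find_by_name_kind(name, sizeof(name), BTF_KIND_FUNC, 0);

          if (ret <= 0)
                  return ret;
          /* lower 32 bits: btf_id; upper 32 bits: btf_obj_fd (0 for vmlinux BTF) */
          return (__u32)ret;
  }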
2021-05-19  bpf: Introduce fd_idx  (Alexei Starovoitov, 1 file, -5/+11)
Typical program loading sequence involves creating bpf maps and applying map FDs into bpf instructions in various places in the bpf program. This job is done by libbpf, which uses compiler generated ELF relocations to patch certain instructions after maps are created and BTFs are loaded.

The goal of fd_idx is to allow bpf instructions to stay immutable after compilation. At load time libbpf would still create maps as usual, but it wouldn't need to patch instructions. It would store map_fds into __u32 fd_array[] and would pass that pointer to sys_bpf(BPF_PROG_LOAD).

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-9-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Test for btf_load command.  (Alexei Starovoitov, 2 files, -0/+53)
Improve selftest to check that btf_load is working from bpf program. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-8-alexei.starovoitov@gmail.com
2021-05-19  selftests/bpf: Test for syscall program type  (Alexei Starovoitov, 2 files, -0/+123)
bpf_prog_type_syscall is a program that creates a bpf map, updates it, and loads another bpf program using bpf_sys_bpf() helper. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-6-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Support for syscall program type  (Alexei Starovoitov, 1 file, -0/+2)
Trivial support for syscall program type. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-5-alexei.starovoitov@gmail.com
2021-05-19  bpf: Introduce bpf_sys_bpf() helper and program type.  (Alexei Starovoitov, 1 file, -0/+8)
Add placeholders for bpf_sys_bpf() helper and new program type. Make sure to check that expected_attach_type is zero for future extensibility. Allow tracing helper functions to be used in this program type, since they will only execute from user context via bpf_prog_test_run. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-2-alexei.starovoitov@gmail.com
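A hedged sketch of what such a syscall-type program can look like; section and map parameters are illustrative, and the program is assumed to be executed from user context via BPF_PROG_TEST_RUN.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("syscall")
  int create_array_map(void *ctx)
  {
          union bpf_attr attr = {
                  .map_type    = BPF_MAP_TYPE_ARRAY,
                  .key_size    = 4,
                  .value_size  = 8,
                  .max_entries = 1,
          };

          /* Issue the bpf() command from inside BPF; returns the new map FD
           * or a negative error. */
          return bpf_sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
  }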
2021-05-18  selftests: forwarding: Add test for custom multipath hash with IPv6 GRE  (Ido Schimmel, 1 file, -0/+458)
Test that when the hash policy is set to custom, traffic is distributed only according to the inner fields set in the fib_multipath_hash_fields sysctl.

Each time set a different field and make sure traffic is only distributed when the field is changed in the packet stream.

The test only verifies the behavior of IPv4/IPv6 overlays on top of an IPv6 underlay network. The previous patch verified the same with an IPv4 underlay network.

Example output:

  # ./ip6gre_custom_multipath_hash.sh
  TEST: ping  [ OK ]
  TEST: ping6  [ OK ]
  INFO: Running IPv4 overlay custom multipath hash tests
  TEST: Multipath hash field: Inner source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6602 / 6002
  TEST: Multipath hash field: Inner source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 1 / 12601
  TEST: Multipath hash field: Inner destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6802 / 5801
  TEST: Multipath hash field: Inner destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12602 / 3
  TEST: Multipath hash field: Inner source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16431 / 16344
  TEST: Multipath hash field: Inner source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 32773
  TEST: Multipath hash field: Inner destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16431 / 16344
  TEST: Multipath hash field: Inner destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 2 / 32772
  INFO: Running IPv6 overlay custom multipath hash tests
  TEST: Multipath hash field: Inner source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6704 / 5902
  TEST: Multipath hash field: Inner source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 1 / 12600
  TEST: Multipath hash field: Inner destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 5751 / 6852
  TEST: Multipath hash field: Inner destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12602 / 0
  TEST: Multipath hash field: Inner flowlabel (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 8272 / 8181
  TEST: Multipath hash field: Inner flowlabel (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 3 / 12602
  TEST: Multipath hash field: Inner source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16424 / 16351
  TEST: Multipath hash field: Inner source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 3 / 32774
  TEST: Multipath hash field: Inner destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16425 / 16350
  TEST: Multipath hash field: Inner destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 2 / 32773

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  selftests: forwarding: Add test for custom multipath hash with IPv4 GRE  (Ido Schimmel, 1 file, -0/+456)
Test that when the hash policy is set to custom, traffic is distributed only according to the inner fields set in the fib_multipath_hash_fields sysctl.

Each time set a different field and make sure traffic is only distributed when the field is changed in the packet stream.

The test only verifies the behavior of IPv4/IPv6 overlays on top of an IPv4 underlay network. A subsequent patch will do the same with an IPv6 underlay network.

Example output:

  # ./gre_custom_multipath_hash.sh
  TEST: ping  [ OK ]
  TEST: ping6  [ OK ]
  INFO: Running IPv4 overlay custom multipath hash tests
  TEST: Multipath hash field: Inner source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6601 / 6001
  TEST: Multipath hash field: Inner source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 12600
  TEST: Multipath hash field: Inner destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6802 / 5802
  TEST: Multipath hash field: Inner destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12601 / 1
  TEST: Multipath hash field: Inner source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16430 / 16344
  TEST: Multipath hash field: Inner source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 32772
  TEST: Multipath hash field: Inner destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16430 / 16343
  TEST: Multipath hash field: Inner destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 32772
  INFO: Running IPv6 overlay custom multipath hash tests
  TEST: Multipath hash field: Inner source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6702 / 5900
  TEST: Multipath hash field: Inner source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 12601
  TEST: Multipath hash field: Inner destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 5751 / 6851
  TEST: Multipath hash field: Inner destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12602 / 1
  TEST: Multipath hash field: Inner flowlabel (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 8364 / 8065
  TEST: Multipath hash field: Inner flowlabel (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12601 / 0
  TEST: Multipath hash field: Inner source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16425 / 16349
  TEST: Multipath hash field: Inner source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 1 / 32770
  TEST: Multipath hash field: Inner destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16425 / 16349
  TEST: Multipath hash field: Inner destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 2 / 32770

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  selftests: forwarding: Add test for custom multipath hash  (Ido Schimmel, 1 file, -0/+364)
Test that when the hash policy is set to custom, traffic is distributed only according to the outer fields set in the fib_multipath_hash_fields sysctl.

Each time set a different field and make sure traffic is only distributed when the field is changed in the packet stream.

The test only verifies the behavior with non-encapsulated IPv4 and IPv6 packets. Subsequent patches will add tests for IPv4/IPv6 overlays on top of IPv4/IPv6 underlay networks.

Example output:

  # ./custom_multipath_hash.sh
  TEST: ping  [ OK ]
  TEST: ping6  [ OK ]
  INFO: Running IPv4 custom multipath hash tests
  TEST: Multipath hash field: Source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6353 / 6254
  TEST: Multipath hash field: Source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 12600
  TEST: Multipath hash field: Destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6102 / 6502
  TEST: Multipath hash field: Destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 1 / 12601
  TEST: Multipath hash field: Source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16428 / 16345
  TEST: Multipath hash field: Source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 32770 / 2
  TEST: Multipath hash field: Destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16428 / 16345
  TEST: Multipath hash field: Destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 32770 / 2
  INFO: Running IPv6 custom multipath hash tests
  TEST: Multipath hash field: Source IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 6704 / 5903
  TEST: Multipath hash field: Source IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12600 / 0
  TEST: Multipath hash field: Destination IP (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 5551 / 7052
  TEST: Multipath hash field: Destination IP (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 12603 / 0
  TEST: Multipath hash field: Flowlabel (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 8378 / 8080
  TEST: Multipath hash field: Flowlabel (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 2 / 12603
  TEST: Multipath hash field: Source port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16385 / 16388
  TEST: Multipath hash field: Source port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 0 / 32774
  TEST: Multipath hash field: Destination port (balanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 16386 / 16390
  TEST: Multipath hash field: Destination port (unbalanced)  [ OK ]
  INFO: Packets sent on path1 / path2: 32771 / 2

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  selftests: mlxsw: qos_lib: Drop __mlnx_qos  (Petr Machata, 1 file, -14/+0)
Now that the two users of this helper have been converted to iproute2 dcb, it is not necessary anymore. Drop it. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  selftests: mlxsw: qos_pfc: Convert to iproute2 dcb  (Petr Machata, 1 file, -12/+12)
There is a dedicated tool for configuration of DCB in iproute2 now. Use it in the selftest instead of mlnx_qos. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-18  selftests: mlxsw: qos_headroom: Convert to iproute2 dcb  (Petr Machata, 1 file, -34/+35)
There is a dedicated tool for configuration of DCB in iproute2 now. Use it in the selftest instead of mlnx_qos. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>