2020-02-26  net/mlx5e: Add missing LRO cap check  (Tariq Toukan)  1 file, -3/+2
The LRO boolean state in params->lro_en must not be set in case the NIC is not capable. Enforce this check and remove the TODO comment. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
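A minimal sketch of the capability guard this describes; MLX5_CAP_ETH() is the driver's real capability accessor, while the wrapper function and its parameters below are assumptions for illustration:

  /* Sketch only: honor an LRO request only when the NIC reports the capability. */
  static void mlx5e_params_set_lro(struct mlx5_core_dev *mdev,
                                   struct mlx5e_params *params,
                                   bool lro_requested)
  {
      params->lro_en = lro_requested && MLX5_CAP_ETH(mdev, lro_cap);
  }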
2020-02-26  net/mlx5e: Define one flow for TXQ selection when TCs are configured  (Eran Ben Elisha)  1 file, -7/+6
We shall always extract the channel index out of the txq, regardless of the relation between txq_ix and the number of channels. The extraction is always valid: if txq_ix is smaller than the number of channels, txq_ix == priv->txq2sq[txq_ix]->ch_ix. By doing so, we can remove an if clause from the select queue method and have one flow for all packets. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-02-26  bootconfig: Fix CONFIG_BOOTTIME_TRACING dependency issue  (Masami Hiramatsu)  2 files, -2/+1
Since commit d8a953ddde5e ("bootconfig: Set CONFIG_BOOT_CONFIG=n by default") also changed CONFIG_BOOTTIME_TRACING to select CONFIG_BOOT_CONFIG to show the boot-time tracing on the menu, it introduced wrong dependencies with BLK_DEV_INITRD as below.
  WARNING: unmet direct dependencies detected for BOOT_CONFIG
    Depends on [n]: BLK_DEV_INITRD [=n]
    Selected by [y]:
    - BOOTTIME_TRACING [=y] && TRACING_SUPPORT [=y] && FTRACE [=y] && TRACING [=y]
Make CONFIG_BOOT_CONFIG select CONFIG_BLK_DEV_INITRD to fix this error, and make CONFIG_BOOTTIME_TRACING=n by default, so that both boot-time tracing and boot configuration are off by default but still appear on the menu list. Link: http://lkml.kernel.org/r/158264140162.23842.11237423518607465535.stgit@devnote2 Fixes: d8a953ddde5e ("bootconfig: Set CONFIG_BOOT_CONFIG=n by default") Reported-by: Randy Dunlap <rdunlap@infradead.org> Compiled-tested-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-02-26  virtio_net: Add XDP meta data support  (Yuya Kusakabe)  1 file, -20/+32
Implement support for transferring XDP meta data into skb for virtio_net driver; before calling into the program, xdp.data_meta points to xdp.data, where on program return with pass verdict, we call into skb_metadata_set(). Tested with the script at https://github.com/higebu/virtio_net-xdp-metadata-test. Signed-off-by: Yuya Kusakabe <yuya.kusakabe@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://lore.kernel.org/bpf/20200225033212.437563-2-yuya.kusakabe@gmail.com
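A condensed sketch of the receive-path change described above; xdp.data_meta, bpf_prog_run_xdp() and skb_metadata_set() are real kernel identifiers, but the surrounding variables and control flow are abbreviated assumptions:

  /* Let the XDP program carve out meta data in front of xdp.data and
   * propagate it into the skb when the packet is passed up the stack. */
  xdp.data_meta = xdp.data;                 /* meta area starts empty */
  act = bpf_prog_run_xdp(xdp_prog, &xdp);
  if (act == XDP_PASS) {
      unsigned int metasize = xdp.data - xdp.data_meta;

      /* skb construction from xdp.data is omitted here for brevity. */
      if (metasize)
          skb_metadata_set(skb, metasize);
  }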
2020-02-26  virtio_net: Keep vnet header zeroed if XDP is loaded for small buffer  (Yuya Kusakabe)  1 file, -2/+2
We do not want to care about the vnet header in receive_small() if XDP is loaded, since we can not know whether or not the packet is modified by XDP. Fixes: f6b10209b90d ("virtio-net: switch to use build_skb() for small buffer") Signed-off-by: Yuya Kusakabe <yuya.kusakabe@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://lore.kernel.org/bpf/20200225033212.437563-1-yuya.kusakabe@gmail.com
2020-02-26  selftests/bpf: Print backtrace on SIGSEGV in test_progs  (Andrii Nakryiko)  2 files, -1/+26
Due to various bugs (usually in test clean-up code), if the host system is misconfigured, test_progs can crash in the middle of running a test with little to no indication of where and why the crash happened. For cases where a coredump is not readily available (e.g., inside a CI), it's very helpful to have the stack trace that led to the crash printed out. This change adds a signal handler that will capture and print out a symbolized backtrace:
  $ sudo ./test_progs -t mmap
  test_mmap:PASS:skel_open_and_load 0 nsec
  test_mmap:PASS:bss_mmap 0 nsec
  test_mmap:PASS:data_mmap 0 nsec
  Caught signal #11!
  Stack trace:
  ./test_progs(crash_handler+0x18)[0x42a888]
  /lib64/libpthread.so.0(+0xf5d0)[0x7f2aab5175d0]
  ./test_progs(test_mmap+0x3c0)[0x41f0a0]
  ./test_progs(main+0x160)[0x407d10]
  /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f2aab15d3d5]
  ./test_progs[0x407ebc]
  [1] 1988412 segmentation fault (core dumped) sudo ./test_progs -t mmap
Unfortunately, glibc's symbolization support is unable to symbolize static functions; only global ones will be present in the stack trace. But it's still a step forward without adding extra libraries to get a better symbolization. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200225000847.3965188-1-andriin@fb.com
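The handler pattern described here can be reproduced with plain glibc facilities; the following is a self-contained approximation rather than the exact test_progs code:

  /* Minimal SIGSEGV backtrace handler using glibc's backtrace facilities. */
  #include <execinfo.h>
  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  static void crash_handler(int signum)
  {
      void *bt[64];
      int sz = backtrace(bt, 64);

      fprintf(stderr, "Caught signal #%d!\nStack trace:\n", signum);
      /* Writes symbolized frames straight to stderr; static functions
       * show up as bare addresses, as noted above. */
      backtrace_symbols_fd(bt, sz, STDERR_FILENO);
      _exit(128 + signum);
  }

  int main(void)
  {
      struct sigaction sa = { .sa_handler = crash_handler };

      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);

      *(volatile int *)0 = 0;   /* deliberately crash to show the handler */
      return 0;
  }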
2020-02-25  Merge branch 'mlxsw-Implement-ACL-dropped-packets-identification'  (David S. Miller)  21 files, -25/+605
Jiri Pirko says: ==================== mlxsw: Implement ACL-dropped packets identification
mlxsw hardware allows inserting an ACL drop action with a user-defined value that is later passed along with a dropped packet. To implement this, use the existing TC action cookie and pass it to the driver. As the cookie format coming down from TC and the mlxsw HW cookie format differ, map between the two using an idr and an rhashtable. The cookie is passed up from the HW through devlink_trap_report() to the drop_monitor code. A new metadata type is used for that. Example:
  $ tc qdisc add dev enp0s16np1 clsact
  $ tc filter add dev enp0s16np1 ingress protocol ip pref 10 flower skip_sw dst_ip 192.168.1.2 action drop cookie 3b45fa38c8
  $ devlink trap set pci/0000:00:10.0 trap acl action trap
  $ dropwatch
  Initializing null lookup method
  dropwatch> set hw true
  setting hardware drops monitoring to 1
  dropwatch> set alertmode packet
  Setting alert mode
  Alert mode successfully set
  dropwatch> start
  Enabling monitoring...
  Kernel monitoring activated.
  Issue Ctrl-C to stop monitoring
  drop at: ingress_flow_action_drop (acl_drops)
  origin: hardware
  input port ifindex: 30
  input port name: enp0s16np1
  cookie: 3b45fa38c8 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
  timestamp: Fri Jan 24 17:10:53 2020 715387671 nsec
  protocol: 0x800
  length: 98
  original length: 98
This way the user may insert multiple drop rules and monitor the dropped packets with the information of which action caused the drop. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  selftests: netdevsim: Extend devlink trap test to include flow action cookie  (Jiri Pirko)  1 file, -0/+5
Extend existing devlink trap test to include metadata type for flow action cookie. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  netdevsim: add ACL trap reporting cookie as a metadata  (Jiri Pirko)  2 files, -3/+116
Add a new trap ACL which reports the flow action cookie in metadata. Allow the user to set up the cookie using a debugfs file. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  mlxsw: spectrum_trap: Lookup and pass cookie down to devlink_trap_report()  (Jiri Pirko)  4 files, -3/+63
Use the cookie index received along with the packet to lookup original flow_offload cookie binary and pass it down to devlink_trap_report(). Add "fa_cookie" metadata to the ACL trap. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  mlxsw: pci: Extract cookie index for ACL discard trap packets  (Jiri Pirko)  3 files, -1/+18
In case the received packet comes in due to one of ACL discard traps, take the user_def_val_orig_pkt_len field from CQE and store it in skb->cb as ACL cookie index. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  mlxsw: core_acl_flex_actions: Implement flow_offload action cookie offload  (Jiri Pirko)  5 files, -6/+257
Track cookies coming down to the driver via flow_offload. Assign a cookie_index to each unique cookie binary. Use the previously defined "Trap with userdef" flex action to ask the HW to pass the cookie_index alongside the dropped packets. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
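The cover letter above mentions an idr/rhashtable pairing for this mapping; the sketch below only illustrates that general shape with invented names, not the actual mlxsw structures:

  /* Illustrative only: map a variable-length flow_offload cookie to a small
   * HW cookie_index. Names are invented; see core_acl_flex_actions.c. */
  struct cookie_entry {
      struct rhash_head ht_node;    /* dedup lookup keyed by cookie bytes */
      u32 cookie_index;             /* small index programmed into the HW */
      refcount_t ref;
      u32 cookie_len;
      u8 cookie[];                  /* the raw cookie binary */
  };

  /* insert: rhashtable lookup -> found ? take ref : idr_alloc() a new index
   * report: idr lookup by cookie_index -> pass bytes to devlink_trap_report() */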
2020-02-25  mlxsw: core_acl_flex_actions: Add trap with userdef action  (Jiri Pirko)  1 file, -2/+28
Expose "Trap action with userdef". It is the same as the already defined "Trap action", with the difference that it asks the policy engine to pass an arbitrary value (userdef) alongside received packets. This will later be used to carry the cookie index. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  devlink: extend devlink_trap_report() to accept cookie and pass  (Jiri Pirko)  4 files, -9/+15
Add cookie argument to devlink_trap_report() allowing driver to pass in the user cookie. Pass on the cookie down to drop monitor code. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  drop_monitor: extend by passing cookie from driver  (Jiri Pirko)  3 files, -1/+36
If driver passed along the cookie, push it through Netlink. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  devlink: add trap metadata type for cookie  (Jiri Pirko)  3 files, -0/+6
Allow driver to indicate cookie metadata for registered traps. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-25  flow_offload: pass action cookie through offload structures  (Jiri Pirko)  3 files, -1/+62
Extend struct flow_action_entry to hold the TC action cookie specified by the user inserting the action. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
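A sketch of the cookie container this plausibly adds to the flow_offload structures; treat the exact names (struct flow_action_cookie, flow_action_cookie_create()) as assumptions:

  /* The user-supplied TC action cookie carried through flow_offload:
   * a length plus the raw cookie bytes. */
  struct flow_action_cookie {
      u32 cookie_len;
      u8 cookie[];
  };

  static struct flow_action_cookie *
  flow_action_cookie_create(void *data, unsigned int len, gfp_t gfp)
  {
      struct flow_action_cookie *cookie;

      cookie = kmalloc(struct_size(cookie, cookie, len), gfp);
      if (!cookie)
          return NULL;
      cookie->cookie_len = len;
      memcpy(cookie->cookie, data, len);
      return cookie;
  }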
2020-02-25  icmp: allow icmpv6_ndo_send to work with CONFIG_IPV6=n  (Jason A. Donenfeld)  1 file, -6/+10
The icmpv6_send function has long had a static inline implementation with an empty body for CONFIG_IPV6=n, so that code calling it doesn't need to be ifdef'd. The new icmpv6_ndo_send function, which is intended for drivers as a drop-in replacement with an identical function signature, should follow the same pattern. Without this patch, drivers that used to work with CONFIG_IPV6=n now result in a linker error. Cc: Chen Zhou <chenzhou10@huawei.com> Reported-by: Hulk Robot <hulkci@huawei.com> Fixes: 0b41713b6066 ("icmp: introduce helper for nat'd source address in network device context") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
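A sketch of the usual CONFIG_IPV6=n stub pattern the commit describes; the exact header placement and comment wording are assumptions:

  /* Callers need no #ifdef CONFIG_IPV6 guards. */
  #if IS_ENABLED(CONFIG_IPV6)
  void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info);
  #else
  static inline void icmpv6_ndo_send(struct sk_buff *skb_in,
                                     u8 type, u8 code, __u32 info)
  {
      /* No IPv6: silently do nothing, mirroring the icmpv6_send() stub. */
  }
  #endif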
2020-02-25  Merge tag 'riscv-for-linux-5.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)  5 files, -24/+53
Pull RISC-V fixes from Palmer Dabbelt: "This contains a handful of RISC-V related fixes that I've collected and would like to target for 5.6-rc4:
  - A fix to set up the PMPs on boot, which allows the kernel to access memory on systems that don't set up permissive PMPs before getting to Linux. This only affects machine-mode kernels, which currently means only NOMMU kernels.
  - A fix to avoid enabling supervisor-mode interrupts when running in machine-mode, also only for NOMMU kernels.
  - A pair of fixes to our KASAN support to avoid corrupting memory.
  - A gitignore fix.
This boots on QEMU's virt board for me"
* tag 'riscv-for-linux-5.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: adjust the indent
  riscv: allocate a complete page size for each page table
  riscv: Fix gitignore
  RISC-V: Don't enable all interrupts in trap_init()
  riscv: set pmp configuration if kernel is running in M-mode
2020-02-25  Merge branch 'mips-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds)  8 files, -29/+56
Pull MIPS fixes from Paul Burton: "Here are a few MIPS fixes, and a MAINTAINERS update to hand over MIPS maintenance to Thomas Bogendoerfer - this will be my final pull request as MIPS maintainer. Thanks for your helpful comments, useful corrections & responsiveness during the time I've fulfilled the role, and I'm sure I'll pop up elsewhere in the tree somewhere down the line"
* 'mips-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MAINTAINERS: Hand MIPS over to Thomas
  MIPS: ingenic: DTS: Fix watchdog nodes
  MIPS: X1000: Fix clock of watchdog node.
  MIPS: vdso: Wrap -mexplicit-relocs in cc-option
  MIPS: VPE: Fix a double free and a memory leak in 'release_vpe()'
  MIPS: cavium_octeon: Fix syncw generation.
  mips: vdso: add build time check that no 'jalr t9' calls left
  MIPS: Disable VDSO time functionality on microMIPS
  mips: vdso: fix 'jalr t9' crash in vdso code
2020-02-25  selftests: nft_concat_range: Move option for 'list ruleset' before command  (Stefano Brivio)  1 file, -6/+6
Before nftables commit fb9cea50e8b3 ("main: enforce options before commands"), 'nft list ruleset -a' happened to work, but it's wrong and won't work anymore. Replace it by 'nft -a list ruleset'. Reported-by: Chen Yi <yiche@redhat.com> Fixes: 611973c1e06f ("selftests: netfilter: Introduce tests for sets with range concatenation") Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-02-25  docs: Fix empty parallelism argument  (Kees Cook)  1 file, -1/+1
When there was no parallelism (no top-level -j arg and a pre-1.7 sphinx-build), the argument passed would be empty ("") instead of just being missing, which would (understandably) badly confuse sphinx-build. Fix this by removing the quotes. Reported-by: Rafael J. Wysocki <rafael@kernel.org> Fixes: 51e46c7a4007 ("docs, parallelism: Rearrange how jobserver reservations are made") Cc: stable@vger.kernel.org # v5.5 only Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2020-02-25  docs: remove MPX from the x86 toc  (Stephen Kitt)  1 file, -1/+0
MPX was removed in commit 45fc24e89b7c ("x86/mpx: remove MPX from arch/x86"); this removes the corresponding entry in the x86 toc. This was suggested by a Sphinx warning. Signed-off-by: Stephen Kitt <steve@sk2.org> Fixes: 45fc24e89b7cc ("x86/mpx: remove MPX from arch/x86") Acked-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2020-02-25  MAINTAINERS: Hand MIPS over to Thomas  (Paul Burton)  2 files, -4/+7
My time with MIPS the company has reached its end, and so at best I'll have little time to spend on maintaining arch/mips/. Ralf last authored a patch over 2 years ago, the last time he committed one is even further back & activity was sporadic for a while before that. The reality is that he isn't active. Having a new maintainer with time to do things properly will be beneficial all round. Thomas Bogendoerfer has been involved in MIPS development for a long time & has offered to step up as maintainer, so add Thomas and remove myself & Ralf from the MIPS entry. Ralf already has an entry in CREDITS to honor his contributions, so this just adds one for me. Signed-off-by: Paul Burton <paulburton@kernel.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@vger.kernel.org
2020-02-25  selftests/bpf: Run SYN cookies with reuseport BPF test only for TCP  (Jakub Sitnicki)  1 file, -8/+9
Currently we run SYN cookies test for all socket types and mark the test as skipped if socket type is not compatible. This causes confusion because skipped test might indicate a problem with the testing environment. Instead, run the test only for the socket type which supports SYN cookies. Also, switch to using designated initializers when setting up tests, so that we can tweak only some test parameters, leaving the rest initialized to default values. Fixes: eecd618b4516 ("selftests/bpf: Mark SYN cookie test skipped for UDP sockets") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200224135327.121542-2-jakub@cloudflare.com
2020-02-25  selftests/bpf: Run reuseport tests only with supported socket types  (Jakub Sitnicki)  1 file, -7/+6
SOCKMAP and SOCKHASH map types can be used with reuseport BPF programs but don't yet support storing UDP sockets. Instead of marking UDP tests with SOCK{MAP,HASH} as skipped, don't run them at all. A skipped test might signal that the test environment is not suitable for running the test, while in reality the functionality is not implemented in the kernel yet.
Before:
  sh# ./test_progs -t select_reuseport
  …
  #40 select_reuseport:OK
  Summary: 1/126 PASSED, 30 SKIPPED, 0 FAILED
After:
  sh# ./test_progs -t select_reuseport
  …
  #40 select_reuseport:OK
  Summary: 1/98 PASSED, 2 SKIPPED, 0 FAILED
The remaining two skipped tests are SYN cookies tests, which will be addressed in the subsequent patch. Fixes: 11318ba8cafd ("selftests/bpf: Extend SK_REUSEPORT tests to cover SOCKMAP/SOCKHASH") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200224135327.121542-1-jakub@cloudflare.com
2020-02-25  Merge branch 'BPF_and_RT'  (Alexei Starovoitov)  18 files, -142/+283
Thomas Gleixner says: ==================== This is the third version of the BPF/RT patch set which makes both coexist nicely. The long explanation can be found in the cover letter of the V1 submission: https://lore.kernel.org/r/20200214133917.304937432@linutronix.de V2 is here: https://lore.kernel.org/r/20200220204517.863202864@linutronix.de The following changes vs. V2 have been made:
  - Rebased to bpf-next, adjusted to the lock changes in the hashmap code.
  - Split the preallocation enforcement patch for instrumentation type BPF programs into two pieces:
    1) Emit a one-time warning on !RT kernels when any instrumentation type BPF program uses run-time allocation. Emit also a corresponding warning in the verifier log. But allow the program to run for backward compatibility sake. After a grace period this should be enforced.
    2) On RT reject such programs because on RT the memory allocator cannot be called from truly atomic contexts.
  - Fixed the fallout from V2 as reported by Alexei and 0-day
  - Removed the redundant preempt_disable() from trace_call_bpf()
  - Removed the unused export of trace_call_bpf()
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-02-25  bpf/stackmap: Don't trylock mmap_sem with PREEMPT_RT and interrupts disabled  (David Miller)  1 file, -3/+15
On an RT kernel, down_read_trylock() cannot be used from NMI context, and up_read_non_owner() is another problematic issue. So in such a configuration, simply elide the annotated stackmap and just report the raw IPs. In the longer term, it might be possible to provide atomic-friendly versions of the page cache traversal, which would at least provide the info whether the pages are resident and don't need to be paged in. [ tglx: Use IS_ENABLED() to avoid the #ifdeffery, fixup the irq work callback and add a comment ] Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.708960317@linutronix.de
2020-02-25  bpf, lpm: Make locking RT friendly  (Thomas Gleixner)  1 file, -6/+6
The LPM trie map cannot be used in contexts like perf, kprobes and tracing as this map type dynamically allocates memory. The memory allocation happens with a raw spinlock held which is a truly spinning lock on a PREEMPT RT enabled kernel which disables preemption and interrupts. As RT does not allow memory allocation from such a section for various reasons, convert the raw spinlock to a regular spinlock. On a RT enabled kernel these locks are substituted by 'sleeping' spinlocks which provide the proper protection but keep the code preemptible. On a non-RT kernel regular spinlocks map to raw spinlocks, i.e. this does not cause any functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.602129531@linutronix.de
2020-02-25  bpf: Prepare hashtab locking for PREEMPT_RT  (Thomas Gleixner)  1 file, -9/+56
PREEMPT_RT forbids certain operations like memory allocations (even with GFP_ATOMIC) from atomic contexts. This is required because even with GFP_ATOMIC the memory allocator calls into code paths which acquire locks with long held lock sections. To ensure deterministic behaviour these locks are regular spinlocks, which are converted to 'sleepable' spinlocks on RT. The only true atomic contexts on an RT kernel are the low level hardware handling, scheduling, low level interrupt handling, NMIs etc. None of these contexts should ever do memory allocations. As regular device interrupt handlers and soft interrupts are forced into thread context, the existing code which does spin_lock*(); alloc(GFP_ATOMIC); spin_unlock*(); just works.
In theory the BPF locks could be converted to regular spinlocks as well, but the bucket locks and percpu_freelist locks can be taken from arbitrary contexts (perf, kprobes, tracepoints) which are required to be atomic contexts even on RT. These mechanisms require preallocated maps, so there is no need to invoke memory allocations within the lock held sections. BPF maps which need dynamic allocation are only used from (forced) thread context on RT and can therefore use regular spinlocks, which in turn allows memory allocations from the lock held section.
To achieve this, make the hash bucket lock a union of a raw and a regular spinlock, and initialize and lock/unlock either the raw spinlock for preallocated maps or the regular variant for maps which require memory allocations. On a non-RT kernel this distinction is neither possible nor required: spinlock maps to raw_spinlock and the extra code and conditional is optimized out by the compiler. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.509685912@linutronix.de
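A condensed sketch of the bucket-lock arrangement described above; helper names approximate kernel/bpf/hashtab.c rather than quoting it:

  struct bucket {
      struct hlist_nulls_head head;
      union {
          raw_spinlock_t raw_lock;  /* preallocated maps: truly atomic */
          spinlock_t     lock;      /* run-time alloc maps: sleeps on RT */
      };
  };

  static inline bool htab_use_raw_lock(const struct bpf_htab *htab)
  {
      /* On !RT everything stays raw; on RT only preallocated maps may use
       * the raw lock, since they never allocate under it. */
      return !IS_ENABLED(CONFIG_PREEMPT_RT) || htab_is_prealloc(htab);
  }

  static inline unsigned long htab_lock_bucket(const struct bpf_htab *htab,
                                               struct bucket *b)
  {
      unsigned long flags;

      if (htab_use_raw_lock(htab))
          raw_spin_lock_irqsave(&b->raw_lock, flags);
      else
          spin_lock_irqsave(&b->lock, flags);
      return flags;
  }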
2020-02-25  bpf: Factor out hashtab bucket lock operations  (Thomas Gleixner)  1 file, -23/+46
As a preparation for making the BPF locking RT friendly, factor out the hash bucket lock operations into inline functions. This allows to do the necessary RT modification in one place instead of sprinkling it all over the place. No functional change. The now unused htab argument of the lock/unlock functions will be used in the next step which adds PREEMPT_RT support. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.420416916@linutronix.de
2020-02-25  bpf: Replace open coded recursion prevention in sys_bpf()  (Thomas Gleixner)  1 file, -19/+8
The required protection is that the caller cannot be migrated to a different CPU as these functions end up in places which take either a hash bucket lock or might trigger a kprobe inside the memory allocator. Both scenarios can lead to deadlocks. The deadlock prevention is per CPU by incrementing a per CPU variable which temporarily blocks the invocation of BPF programs from perf and kprobes. Replace the open coded preempt_[dis|en]able and __this_cpu_[inc|dec] pairs with the new helper functions. These functions are already prepared to make BPF work on PREEMPT_RT enabled kernels. No functional change for !RT kernels. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.317843926@linutronix.de
2020-02-25  bpf: Use recursion prevention helpers in hashtab code  (Thomas Gleixner)  1 file, -8/+4
The required protection is that the caller cannot be migrated to a different CPU as these places take either a hash bucket lock or might trigger a kprobe inside the memory allocator. Both scenarios can lead to deadlocks. The deadlock prevention is per CPU by incrementing a per CPU variable which temporarily blocks the invocation of BPF programs from perf and kprobes. Replace the open coded preempt_disable/enable() and this_cpu_inc/dec() pairs with the new recursion prevention helpers to prepare BPF to work on PREEMPT_RT enabled kernels. On a non-RT kernel the migrate disable/enable in the helpers map to preempt_disable/enable(), i.e. no functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.211208533@linutronix.de
2020-02-25  bpf: Provide recursion prevention helpers  (Thomas Gleixner)  1 file, -0/+30
The places which need to prevent the execution of trace type BPF programs to prevent deadlocks on the hash bucket lock do this open coded. Provide two inline functions, bpf_disable/enable_instrumentation() to replace these open coded protection constructs. Use migrate_disable/enable() instead of preempt_disable/enable() right away so this works on RT enabled kernels. On a !RT kernel migrate_disable / enable() are mapped to preempt_disable/enable(). These helpers use this_cpu_inc/dec() instead of __this_cpu_inc/dec() on an RT enabled kernel because migrate disabled regions are preemptible and preemption might hit in the middle of a RMW operation which can lead to inconsistent state. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145644.103910133@linutronix.de
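A sketch of the helpers as described; bpf_prog_active is the existing per-CPU counter, and the detail may differ from the real header:

  static inline void bpf_disable_instrumentation(void)
  {
      migrate_disable();
      if (IS_ENABLED(CONFIG_PREEMPT_RT))
          this_cpu_inc(bpf_prog_active);    /* migrate-disabled regions stay preemptible on RT */
      else
          __this_cpu_inc(bpf_prog_active);
  }

  static inline void bpf_enable_instrumentation(void)
  {
      if (IS_ENABLED(CONFIG_PREEMPT_RT))
          this_cpu_dec(bpf_prog_active);
      else
          __this_cpu_dec(bpf_prog_active);
      migrate_enable();
  }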
2020-02-25  bpf: Use migrate_disable/enable in array macros and cgroup/lirc code.  (David Miller)  2 files, -6/+7
Replace the preemption disable/enable with migrate_disable/enable() to reflect the actual requirement and to allow PREEMPT_RT to substitute it with an actual migration disable mechanism which does not disable preemption. Including the code paths that go via __bpf_prog_run_save_cb(). Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.998293311@linutronix.de
2020-02-25  bpf: Use migrate_disable/enable() in trampoline code.  (David Miller)  1 file, -4/+5
Use migrate_disable/enable() instead of preemption disable/enable to reflect the purpose. This allows PREEMPT_RT to substitute it with an actual migration disable implementation. On non RT kernels this is still mapped to preempt_disable/enable(). Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.891428873@linutronix.de
2020-02-25  bpf/tests: Use migrate disable instead of preempt disable  (David Miller)  2 files, -6/+6
Replace the preemption disable/enable with migrate_disable/enable() to reflect the actual requirement and to allow PREEMPT_RT to substitute it with an actual migration disable mechanism which does not disable preemption. [ tglx: Switched it over to migrate disable ] Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.785306549@linutronix.de
2020-02-25  bpf: Use bpf_prog_run_pin_on_cpu() at simple call sites.  (David Miller)  5 files, -18/+6
All of these cases are strictly of the form:
  preempt_disable();
  BPF_PROG_RUN(...);
  preempt_enable();
Replace this with bpf_prog_run_pin_on_cpu() which wraps BPF_PROG_RUN() with:
  migrate_disable();
  BPF_PROG_RUN(...);
  migrate_enable();
On non RT enabled kernels this maps to preempt_disable/enable() and on RT enabled kernels this solely prevents migration, which is sufficient as there is no requirement to prevent reentrancy to any BPF program from a preempting task. The only requirement is that the program stays on the same CPU. Therefore, this is a trivially correct transformation. The seccomp loop does not need protection over the loop; it only needs protection per BPF filter program. [ tglx: Converted to bpf_prog_run_pin_on_cpu() ] Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.691493094@linutronix.de
2020-02-25  bpf: Replace cant_sleep() with cant_migrate()  (Thomas Gleixner)  1 file, -1/+1
As already discussed in the previous change which introduced BPF_RUN_PROG_PIN_ON_CPU() BPF only requires to disable migration to guarantee per CPUness. If RT substitutes the preempt disable based migration protection then the cant_sleep() check will obviously trigger as preemption is not disabled. Replace it by cant_migrate() which maps to cant_sleep() on a non RT kernel and will verify that migration is disabled on a full RT kernel. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.583038889@linutronix.de
2020-02-25  bpf: Provide bpf_prog_run_pin_on_cpu() helper  (Thomas Gleixner)  1 file, -2/+24
BPF programs must run on one CPU to completion, as they use per-CPU storage, but according to Alexei they don't need reentrancy protection, as obviously BPF programs running in thread context can always be 'preempted' by hard and soft interrupts and instrumentation, and the same program can run concurrently on a different CPU. The currently used mechanism to ensure CPUness is to wrap the invocation into a preempt_disable/enable() pair. Disabling preemption also disables migration for a task. preempt_disable/enable() is used because there is no explicit way to reliably disable only migration. Provide a separate macro to invoke a BPF program which can be used in migrateable task context. It wraps BPF_PROG_RUN() in a migrate_disable/enable() pair which maps on non RT enabled kernels to preempt_disable/enable(). On RT enabled kernels this merely disables migration. Both methods ensure that the invoked BPF program runs on one CPU to completion. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.474592620@linutronix.de
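Sketched from the description above (the real helper lives in the BPF headers and may differ in form):

  /* Run a BPF program to completion on one CPU without disabling
   * preemption on RT kernels. */
  static __always_inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
                                                     const void *ctx)
  {
      u32 ret;

      migrate_disable();            /* preempt_disable() on !RT kernels */
      ret = BPF_PROG_RUN(prog, ctx);
      migrate_enable();
      return ret;
  }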
2020-02-25  bpf: Don't iterate over possible CPUs with interrupts disabled  (Thomas Gleixner)  1 file, -10/+10
pcpu_freelist_populate() disables interrupts and then iterates over the possible CPUs. The reason why this disables interrupts is to silence lockdep, because the invoked ___pcpu_freelist_push() takes spin locks. Neither the interrupt disabling nor the locking are required in this function because it's called during initialization and the resulting map is not yet visible to anything. Split out the actual push assignment into an inline, call it from the loop and remove the interrupt disable. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.365930116@linutronix.de
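A sketch of the split described here, with approximate names from the percpu_freelist code:

  /* Lockless push used only while populating a map that is not yet
   * visible to anything, so no lock and no interrupt disabling needed. */
  static void pcpu_freelist_push_node(struct pcpu_freelist_head *head,
                                      struct pcpu_freelist_node *node)
  {
      node->next = head->first;
      head->first = node;
  }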
2020-02-25  bpf: Remove recursion prevention from rcu free callback  (Thomas Gleixner)  1 file, -8/+0
If an element is freed via RCU then recursion into BPF instrumentation functions is not a concern. The element is already detached from the map and the RCU callback does not hold any locks on which a kprobe, perf event or tracepoint attached BPF program could deadlock. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.259118710@linutronix.de
2020-02-25  perf/bpf: Remove preempt disable around BPF invocation  (Thomas Gleixner)  1 file, -2/+0
The BPF invocation from the perf event overflow handler does not require to disable preemption because this is called from NMI or at least hard interrupt context which is already non-preemptible. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.151953573@linutronix.de
2020-02-25  bpf/trace: Remove redundant preempt_disable from trace_call_bpf()  (Thomas Gleixner)  1 file, -2/+1
Similar to __bpf_trace_run this is redundant because __bpf_trace_run() is invoked from a trace point via __DO_TRACE() which already disables preemption _before_ invoking any of the functions which are attached to a trace point. Remove it and add a cant_sleep() check. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145643.059995527@linutronix.de
2020-02-25  bpf: disable preemption for bpf progs attached to uprobe  (Alexei Starovoitov)  1 file, -2/+9
trace_call_bpf() no longer disables preemption on its own. All callers of this function have to do it explicitly. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2020-02-25  bpf/trace: Remove EXPORT from trace_call_bpf()  (Thomas Gleixner)  1 file, -1/+0
All callers are built in. No point to export this. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-02-25  bpf/tracing: Remove redundant preempt_disable() in __bpf_trace_run()  (Thomas Gleixner)  1 file, -2/+1
__bpf_trace_run() disables preemption around the BPF_PROG_RUN() invocation. This is redundant because __bpf_trace_run() is invoked from a trace point via __DO_TRACE() which already disables preemption _before_ invoking any of the functions which are attached to a trace point. Remove it and add a cant_sleep() check. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145642.847220186@linutronix.de
2020-02-25  bpf: Update locking comment in hashtab code  (Thomas Gleixner)  1 file, -4/+20
The comment where the bucket lock is acquired says: /* bpf_map_update_elem() can be called in_irq() */ which is not really helpful and, aside from that, it does not explain the subtle details of the hash bucket locks, especially in the context of BPF and perf, kprobes and tracing. Add a comment at the top of the file which explains the protection scopes and the details of how potential deadlocks are prevented. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145642.755793061@linutronix.de
2020-02-25  bpf: Enforce preallocation for instrumentation programs on RT  (Thomas Gleixner)  1 file, -4/+9
Aside from the general unsafety of run-time map allocation for instrumentation type programs, RT enabled kernels have another constraint: the instrumentation programs are invoked with preemption disabled, but the memory allocator spinlocks cannot be acquired in atomic context because they are converted to 'sleeping' spinlocks on RT. Therefore enforce map preallocation for these program types when RT is enabled. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145642.648784007@linutronix.de
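Sketched from the description: the verifier-side check, with helper names quoted from memory rather than the source (the !RT warning path belongs to the next entry):

  /* In the verifier's map/program compatibility check: */
  if (is_tracing_prog_type(prog_type) && !is_preallocated_map(map)) {
      if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
          /* RT: the allocator cannot be called from truly atomic context. */
          verbose(env, "RT: instrumentation programs need preallocated hash maps\n");
          return -EINVAL;
      }
      /* !RT: warn once but keep running for backward compatibility. */
      WARN_ONCE(1, "trace type BPF program uses run-time allocation\n");
      verbose(env, "trace type programs with run-time allocated hash maps are unsafe\n");
  }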
2020-02-25  bpf: Tighten the requirements for preallocated hash maps  (Thomas Gleixner)  1 file, -11/+28
The assumption that only programs attached to perf NMI events can deadlock on memory allocators is wrong. Assume the following simplified callchain:
  kmalloc() from regular non BPF context
    cache empty
      freelist empty
        lock(zone->lock);
          tracepoint or kprobe
            BPF()
              update_elem()
                lock(bucket)
                  kmalloc()
                    cache empty
                      freelist empty
                        lock(zone->lock);  <- DEADLOCK
There are other ways which do not involve locking to create wreckage:
  kmalloc() from regular non BPF context
    local_irq_save();
    ...
    obj = slab_first();
  kprobe()
    BPF()
      update_elem()
        lock(bucket)
          kmalloc()
            local_irq_save();
            ...
            obj = slab_first();  <- Same object as above
  ...
So preallocation _must_ be enforced for all variants of intrusive instrumentation. Unfortunately immediate enforcement would break backwards compatibility, so for now such programs are still allowed to run, but a one-time warning is emitted in dmesg and the verifier emits a warning in the verifier log as well, so developers are made aware of this and can fix their programs before the enforcement becomes mandatory. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200224145642.540542802@linutronix.de