| field | value |
|---|---|
| author | Alexei Starovoitov <ast@kernel.org>, 2018-10-19 23:24:31 +0300 |
| committer | Alexei Starovoitov <ast@kernel.org>, 2018-10-19 23:24:32 +0300 |
| commit | 43ed375ff249e9f2fc986f77ed9746561895aeb3 |
| tree | ba7dca5349f151688141b56642ea209a069281de /tools/include |
| parent | 3ddeac6705aba31b7528c7d7a528eabb74475622 |
| parent | 43b987d23d6bd08db41a9c4a85aacfb3f0b2a94c |
| download | linux-43ed375ff249e9f2fc986f77ed9746561895aeb3.tar.xz |
Merge branch 'queue_stack_maps'
Mauricio Vasquez says:
====================
Some applications need a pool of free elements, for example the list of
free L4 ports in a SNAT. None of the current map types allows this, since it
is not possible to get an element without knowing the key it is associated
with; and even if it were possible, the lack of locking mechanisms in eBPF
would make it almost impossible to implement without data races.
This patchset implements two new kinds of eBPF maps: queue and stack.
These maps give eBPF programs peek, push and pop operations, and for
userspace applications a new bpf_map_lookup_and_delete_elem() syscall is added.
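As a rough usage sketch (illustrative, not taken from this series): the C program below builds a free-port pool as a BPF_MAP_TYPE_QUEUE and pops from it with the new helper. The map name, section names, and the assumption that bpf_helpers.h declares the new helpers (as the selftests updated by this series do) are hypothetical.

```c
/* Hedged sketch, not part of this patchset: an XDP program drawing free
 * L4 ports from a BPF_MAP_TYPE_QUEUE pool via the new pop helper.
 * Assumes a bpf_helpers.h that declares bpf_map_pop_elem()/bpf_map_push_elem().
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") free_ports = {
	.type        = BPF_MAP_TYPE_QUEUE,
	.key_size    = 0,               /* queue/stack maps are keyless */
	.value_size  = sizeof(__u16),
	.max_entries = 1024,
};

SEC("xdp")
int snat_pick_port(struct xdp_md *ctx)
{
	__u16 port;

	/* Pop a free port from the pool; bail out if the pool is empty. */
	if (bpf_map_pop_elem(&free_ports, &port))
		return XDP_DROP;

	/* ... rewrite the packet to source from 'port' ... */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

Ports would be returned to the pool with bpf_map_push_elem(&free_ports, &port, 0); passing BPF_EXIST instead makes a push to a full queue/stack evict the oldest element rather than fail.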
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
v2 -> v3:
- Remove "almost dead code" in syscall.c
- Remove unnecessary copy_from_user in bpf_map_lookup_and_delete_elem
- Rebase
v1 -> v2:
- Put ARG_PTR_TO_UNINIT_MAP_VALUE logic into a separate patch
- Fix missing __this_cpu_dec & preempt_enable calls in kernel/bpf/syscall.c
RFC v4 -> v1:
- Remove roundup to power of 2 in memory allocation
- Remove count and use a free slot to check if queue/stack is empty
- Use if + assignment for wrapping indexes
- Fix some minor style issues
- Squash two patches together
RFC v3 -> RFC v4:
- Revert renaming of kernel/bpf/stackmap.c
- Remove restriction on value size
- Remove len arguments from peek/pop helpers
- Add new ARG_PTR_TO_UNINIT_MAP_VALUE
RFC v2 -> RFC v3:
- Return elements by value instead of by reference
- Implement queue/stack based on an array and head + tail indexes
- Rename stack trace related files to avoid confusion and conflicts
RFC v1 -> RFC v2:
- Create two separate maps instead of single one + flags
- Implement bpf_map_lookup_and_delete syscall
- Support peek operation
- Define replacement policy through flags in the update() method
- Add eBPF side tests
====================
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'tools/include')
| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | tools/include/uapi/linux/bpf.h | 30 |

1 file changed, 29 insertions, 1 deletion
```diff
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5e46f6732781..a2fb333290dc 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -103,6 +103,7 @@ enum bpf_cmd {
 	BPF_BTF_LOAD,
 	BPF_BTF_GET_FD_BY_ID,
 	BPF_TASK_FD_QUERY,
+	BPF_MAP_LOOKUP_AND_DELETE_ELEM,
 };
 
 enum bpf_map_type {
@@ -128,6 +129,8 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_CGROUP_STORAGE,
 	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
 	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
+	BPF_MAP_TYPE_QUEUE,
+	BPF_MAP_TYPE_STACK,
 };
 
 enum bpf_prog_type {
@@ -462,6 +465,28 @@ union bpf_attr {
  *	Return
  *		0 on success, or a negative error in case of failure.
  *
+ * int bpf_map_push_elem(struct bpf_map *map, const void *value, u64 flags)
+ *	Description
+ *		Push an element *value* in *map*. *flags* is one of:
+ *
+ *		**BPF_EXIST**
+ *		If the queue/stack is full, the oldest element is removed to
+ *		make room for this.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_map_pop_elem(struct bpf_map *map, void *value)
+ *	Description
+ *		Pop an element from *map*.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_map_peek_elem(struct bpf_map *map, void *value)
+ *	Description
+ *		Get an element from *map* without removing it.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
  * int bpf_probe_read(void *dst, u32 size, const void *src)
  *	Description
  *		For tracing programs, safely attempt to read *size* bytes from
@@ -2303,7 +2328,10 @@ union bpf_attr {
 	FN(skb_ancestor_cgroup_id),	\
 	FN(sk_lookup_tcp),		\
 	FN(sk_lookup_udp),		\
-	FN(sk_release),
+	FN(sk_release),			\
+	FN(map_push_elem),		\
+	FN(map_pop_elem),		\
+	FN(map_peek_elem),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
```
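To show how the new BPF_MAP_LOOKUP_AND_DELETE_ELEM command in the header above is driven from userspace, here is a minimal hedged sketch using the raw bpf(2) syscall (glibc provides no wrapper). The sys_bpf() and pop_elem() names and the already-opened map fd are assumptions, not part of this diff.

```c
/* Hedged userspace sketch (not part of this series): pop one element from a
 * queue/stack map through BPF_MAP_LOOKUP_AND_DELETE_ELEM in a single syscall.
 */
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Thin wrapper over the bpf() syscall. */
static long sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, unsigned int size)
{
	return syscall(__NR_bpf, cmd, attr, size);
}

/* Look up the next element and delete it in the same operation. */
static int pop_elem(int map_fd, void *value)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_fd = map_fd;
	attr.value  = (__u64)(unsigned long)value; /* key stays 0: queue/stack maps are keyless */

	return sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
}
```

For a BPF_MAP_TYPE_QUEUE this returns and removes the oldest element; for a BPF_MAP_TYPE_STACK, the most recently pushed one.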
