author     Martin KaFai Lau <martin.lau@kernel.org>    2026-02-06 23:01:25 +0300
committer  Martin KaFai Lau <martin.lau@kernel.org>    2026-02-07 01:48:05 +0300
commit     db975debcb8c4cd367a78811bc1ba84c83f854bd (patch)
tree       64a48da23cba00eae16c121f9a0e73bba714ead5 /tools
parent     b8c89f5cc2037b1902d680d169332fda71a0d38e (diff)
parent     97b859b5ed04dbbe99be19895d8498009a19553f (diff)
download   linux-db975debcb8c4cd367a78811bc1ba84c83f854bd.tar.xz
Merge branch 'remove-task-and-cgroup-local-storage-percpu-counters'
Amery Hung says:
====================
Remove task and cgroup local storage percpu counters
* Motivation *
The goal of this patchset is to make the bpf syscalls and helpers that
update task and cgroup local storage more robust by removing the percpu
counters in them. Task local storage and cgroup storage each employ a
percpu counter to prevent deadlock caused by recursion. Since the
underlying bpf local storage takes spinlocks in various operations, bpf
programs running recursively may try to take a spinlock that is already
taken. For example, when a tracing bpf program called recursively during
bpf_task_storage_get(..., F_CREATE) tries to call
bpf_task_storage_get(..., F_CREATE) again, it would cause an AA deadlock
if the percpu counter were not in place.
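For illustration, a minimal sketch of the recursive pattern (the attach
point and map below are only hypothetical examples, not taken from the
patches):

	struct {
		__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
		__uint(map_flags, BPF_F_NO_PREALLOC);
		__type(key, int);
		__type(value, long);
	} map_b SEC(".maps");

	SEC("fentry/bpf_local_storage_update")
	int BPF_PROG(on_storage_update)
	{
		struct task_struct *task = bpf_get_current_task_btf();

		/* Runs while the outer bpf_task_storage_get(...,
		 * F_CREATE) already holds local storage locks on this
		 * CPU; creating storage here takes the same locks
		 * again, an AA deadlock if nothing stops the recursion.
		 */
		bpf_task_storage_get(&map_b, task, 0,
				     BPF_LOCAL_STORAGE_GET_F_CREATE);
		return 0;
	}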
However, the percpu counter can sometimes cause bpf syscalls or helpers
to return errors spuriously, simply because another thread is also
updating the local storage or the local storage map. Ideally, the two
threads could have taken turns taking the locks and performing their
jobs respectively. However, due to the percpu counter, the syscalls and
helpers can return -EBUSY even when neither of them runs recursively
inside the other. All it takes for this to happen is for the two
threads to run on the same CPU. This happened when BPF-CI ran the task
local data selftest. Since CI runs the test on a VM with 2 CPUs,
bpf_task_storage_get(..., F_CREATE) can easily fail.
This failure mode is not good for users, as they need to add retry
logic in user space or bpf programs to avoid it. Even with retries,
there is no guaranteed upper bound on the number of attempts before a
call succeeds. Therefore, this patchset removes the percpu counters and
makes the related bpf syscalls and helpers more reliable, while still
making sure recursion deadlock cannot happen, with the help of the
resilient queued spinlock (rqspinlock).
* Implementation *
To remove the percpu counter without introducing deadlock,
bpf_local_storage is refactored by changing the locks from
raw_spin_lock to rqspinlock, which prevents deadlock through deadlock
detection and a timeout mechanism.
The refactor basically replaces the locks with rqspinlock and
propagates errors returned by the locking function to BPF helpers and
syscalls.
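As a rough sketch of the resulting pattern (simplified, not the exact
diff; the rqspinlock acquire can fail instead of spinning forever):

	unsigned long flags;
	int err;

	/* Returns 0, or -EDEADLK/-ETIMEDOUT when a deadlock is
	 * detected or the lock acquisition times out.
	 */
	err = raw_res_spin_lock_irqsave(&local_storage->lock, flags);
	if (err)
		return err;	/* surfaced to the BPF helper or syscall */
	/* ... link or unlink the selem ... */
	raw_res_spin_unlock_irqrestore(&local_storage->lock, flags);
	return 0;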
bpf_selem_unlink_nofail() is introduced to handle rqspinlock errors in
two lock-acquiring functions that cannot fail,
bpf_local_storage_destroy() and bpf_local_storage_map_free()
(i.e., when the local storage is being freed by the subsystem or the
map is being freed). The high-level idea is to use a bitfield and
atomic operations to track who is referencing an selem when a lock
cannot be acquired. Additional care is needed to make sure special
fields are freed and owner memory is uncharged safely and correctly.
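As a loose illustration of that idea (the names and bit layout here are
hypothetical, not the exact code in the patches):

	/* Each link holds one bit of selem->link_state; the caller
	 * that clears the last bit is the one allowed to free the
	 * selem.
	 */
	#define SELEM_LINKED_MAP	0x1
	#define SELEM_LINKED_STORAGE	0x2

	old = atomic_fetch_and(~SELEM_LINKED_MAP, &selem->link_state);
	if (old == SELEM_LINKED_MAP)
		/* snode side is already gone; free the selem */
		bpf_selem_free(selem, smap, true);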
For readers not familiar with local storage, the last section briefly
describes the locks and structure of local storage. It also lists the
abbreviations used in the rest of this letter.
* Test *
Task and cgroup local storage selftests already cover deadlock caused
by recursion. Patch 14 updates the expected results of the task local
storage selftests, as task local storage bpf helpers invoked on the
same CPU can now succeed since they no longer cause deadlock.
* Benchmark *
./bench -p 1 local-storage-create --storage-type <socket,task> \
--batch-size <16,32,64>
The benchmark is a microbenchmark stress-testing how fast local storage
can be created. After switching to rqspinlock and
bpf_selem_unlink_nofail(), socket local storage creation speed shows a
~5% gain. For task local storage, the numbers remain the same.
Socket local storage

         batch   creation speed       kmallocs/create   diff
-------  -----   ------------------   ---------------   -----
Before   16      134.371 ± 0.884k/s   3.12
         32      133.032 ± 3.405k/s   3.12
         64      133.494 ± 0.862k/s   3.12
After    16      140.778 ± 1.306k/s   3.12              +4.8%
         32      140.550 ± 2.058k/s   3.11              +5.7%
         64      139.311 ± 0.911k/s   3.13              +4.4%

Task local storage

         batch   creation speed       kmallocs/create   diff
-------  -----   ------------------   ---------------   -----
Before   16      25.301 ± 0.089k/s    2.43
         32      23.797 ± 0.106k/s    2.51
         64      23.251 ± 0.187k/s    2.51
After    16      25.307 ± 0.080k/s    2.45              +0.0%
         32      23.889 ± 0.089k/s    2.46              +0.0%
         64      23.230 ± 0.113k/s    2.63              -0.1%
* Patchset organization *
Patches 1-4 convert local storage internal helpers to fallible versions.
Patch 5 changes the locks to rqspinlock and propagates the errors
returned from raw_res_spin_lock_irqsave() to BPF helpers and syscalls.
Patches 6-8 remove the percpu counters in task and cgroup local storage.
Patches 9-11 address the unlikely rqspinlock errors by switching to
bpf_selem_unlink_nofail() in map_free() and destroy().
Patches 12-17 update selftests.
* Appendix: local storage internal *
There are two locks in bpf_local_storage due to the ownership model as
illustrated in the figure below. A map value, which consists of a
pointer to the map and the data, is a bpf_local_storage_map_data (sdata)
stored in a bpf_local_storage_elem (selem). A selem belongs to a
bpf_local_storage and bpf_local_storage_map at the same time.
bpf_local_storage::lock (lock_storage->lock in short) protects the list
in a bpf_local_storage and bpf_local_storage_map_bucket::lock (b->lock)
protects the hash bucket in a bpf_local_storage_map.
task_struct
┌ task1 ───────┐ bpf_local_storage
│ *bpf_storage │---->┌─────────┐
└──────────────┘<----│ *owner │ bpf_local_storage_elem
│ *cache[16] │          (selem)                selem
│ *smap │ ┌──────────┐ ┌──────────┐
│ list │------->│ snode │<------->│ snode │
│ lock │ ┌---->│ map_node │<--┐ ┌-->│ map_node │
└─────────┘ │ │ sdata = │ │ │ │ sdata = │
task_struct │ │ {&mapA,} │ │ │ │ {&mapB,} │
┌ task2 ───────┐ bpf_local_storage └──────────┘ │ │ └──────────┘
│ *bpf_storage │---->┌─────────┐ │ │ │
└──────────────┘<----│ *owner │ │ │ │
│ *cache[16] │ selem │ │ selem
│ *smap │ │ ┌──────────┐ │ │ ┌──────────┐
│ list │--│---->│ snode │<--│-│-->│ snode │
│ lock │ │ ┌-->│ map_node │ └-│-->│ map_node │
└─────────┘ │ │ │ sdata = │ │ │ sdata = │
bpf_local_storage_map │ │ │ {&mapB,} │ │ │ {&mapA,} │
(smap) │ │ └──────────┘ │ └──────────┘
┌ mapA ───────┐ │ │ │
│ bpf_map map │ bpf_local_storage_map_bucket │
│ *buckets │---->┌ b[0] ┐ │ │ │
└─────────────┘ │ list │------┘ │ │
│ lock │ │ │
└──────┘ │ │
smap ... │ │
┌ mapB ───────┐ │ │
│ bpf_map map │ bpf_local_storage_map_bucket │
│ *buckets │---->┌ b[0] ┐ │ │
└─────────────┘ │ list │--------┘ │
│ lock │ │
└──────┘ │
┌ b[1] ┐ │
│ list │-----------------------------┘
│ lock │
└──────┘
...
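For reference, a simplified sketch of the structures in the figure
(fields trimmed for brevity; see include/linux/bpf_local_storage.h for
the real definitions, where this series also turns the two locks into
rqspinlock_t):

	struct bpf_local_storage_map_bucket {
		struct hlist_head list;		/* selems via map_node */
		rqspinlock_t lock;		/* b->lock */
	};

	struct bpf_local_storage {
		struct bpf_local_storage_data __rcu *cache[16];
		struct hlist_head list;		/* selems via snode */
		void *owner;			/* task_struct, sock, cgroup, ... */
		rqspinlock_t lock;		/* local_storage->lock */
	};

	struct bpf_local_storage_elem {
		struct hlist_node map_node;	/* linked in b->list */
		struct hlist_node snode;	/* linked in local_storage->list */
		struct bpf_local_storage_data sdata;	/* {&smap, data} */
	};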
* Changelog *
v6 -> v7
- Minor comment and commit msg tweaks
- Patch 9: Remove unused "owner" (kernel test robot)
- Patch 13: Update comments in task_ls_recursion.c (AI)
Link: https://lore.kernel.org/bpf/20260205070208.186382-1-ameryhung@gmail.com/
v5 -> v6
- Redo benchmark
- Patch 9: Remove storage->smap as it is not used any more
- Patch 17: Remove storage->smap check in selftests
- Patch 10, 11: Pass reuse_now = true to bpf_selem_free() and
bpf_local_storage_free() to allow faster memory reclaim (Martin)
- Patch 10: Use a bitfield instead of a refcount to track selem state
  more precisely, which removes the possibility of map_free() missing
  an selem (Martin)
- Patch 10: Allow map_free() to free local_storage and drop
the change in bpf_local_storage_map_update() (Martin)
- Patch 11: Simplify destroy() by not deferring work, as an owner is
  unlikely to have so many maps that RCU would be stalled (Martin)
Link: https://lore.kernel.org/bpf/20260201175050.468601-1-ameryhung@gmail.com/
v4 -> v5
- Patch 1: Fix incorrect bucket calculation (AI)
- Patch 3: Fix memory leak in bpf_sk_storage_clone() (AI)
- Patch 5: Fix memory leak in bpf_local_storage_update() (AI)
- Fix typo/comment/commit msg (AI)
- Patch 10: Replace smp_rmb() with smp_mb(), since smp_rmb() does not
  imply acquire semantics
Link: https://lore.kernel.org/bpf/20260131050920.2574084-1-ameryhung@gmail.com/
v3 -> v4
- Add performance numbers
- Avoid stale elements when calling bpf_local_storage_map_free()
  by allowing it to unlink selems from local_storage->list and uncharge
  memory. Block destroy() from returning while pending map_free()
  calls are uncharging
- Fix an -EAGAIN bug in bpf_local_storage_update(), as map_free() now
  does not free the local storage
- Fix possible double-free of selem by ensuring an selem is only
processed once for each caller (Kumar)
- Fix possible infinite loop in bpf_selem_unlink_nofail() when
  iterating b->list by replacing the while loop with
  hlist_for_each_entry_rcu
- Fix unsafe iteration in destroy() by iterating local_storage->list
using hlist_for_each_entry_rcu
- Fix UAF due to clearing storage_owner after destroy(). Flip the order
to fix it
- Misc clean-up suggested by Martin
Link: https://lore.kernel.org/bpf/20251218175628.1460321-1-ameryhung@gmail.com/
v2 -> v3
- Rebase to bpf-next where BPF memory allocator is replaced with
kmalloc_nolock()
- Revert to selecting bucket based on selem
- Introduce bpf_selem_unlink_lockless() to allow unlinking and
freeing selem without taking locks
Link: https://lore.kernel.org/bpf/20251002225356.1505480-1-ameryhung@gmail.com/
v1 -> v2
- Rebase to bpf-next
- Select bucket based on local_storage instead of selem (Martin)
- Simplify bpf_selem_unlink (Martin)
- Change handling of rqspinlock errors in bpf_local_storage_destroy()
and bpf_local_storage_map_free(). Retry instead of WARN_ON.
Link: https://lore.kernel.org/bpf/20250729182550.185356-1-ameryhung@gmail.com/
====================
Link: https://patch.msgid.link/20260205222916.1788211-1-ameryhung@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Diffstat (limited to 'tools')
8 files changed, 16 insertions, 216 deletions
diff --git a/tools/testing/selftests/bpf/map_tests/task_storage_map.c b/tools/testing/selftests/bpf/map_tests/task_storage_map.c
deleted file mode 100644
index a4121d2248ac..000000000000
--- a/tools/testing/selftests/bpf/map_tests/task_storage_map.c
+++ /dev/null
@@ -1,128 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2022. Huawei Technologies Co., Ltd */
-#define _GNU_SOURCE
-#include <sched.h>
-#include <unistd.h>
-#include <stdlib.h>
-#include <stdbool.h>
-#include <errno.h>
-#include <string.h>
-#include <pthread.h>
-
-#include <bpf/bpf.h>
-#include <bpf/libbpf.h>
-
-#include "bpf_util.h"
-#include "test_maps.h"
-#include "task_local_storage_helpers.h"
-#include "read_bpf_task_storage_busy.skel.h"
-
-struct lookup_ctx {
-	bool start;
-	bool stop;
-	int pid_fd;
-	int map_fd;
-	int loop;
-};
-
-static void *lookup_fn(void *arg)
-{
-	struct lookup_ctx *ctx = arg;
-	long value;
-	int i = 0;
-
-	while (!ctx->start)
-		usleep(1);
-
-	while (!ctx->stop && i++ < ctx->loop)
-		bpf_map_lookup_elem(ctx->map_fd, &ctx->pid_fd, &value);
-	return NULL;
-}
-
-static void abort_lookup(struct lookup_ctx *ctx, pthread_t *tids, unsigned int nr)
-{
-	unsigned int i;
-
-	ctx->stop = true;
-	ctx->start = true;
-	for (i = 0; i < nr; i++)
-		pthread_join(tids[i], NULL);
-}
-
-void test_task_storage_map_stress_lookup(void)
-{
-#define MAX_NR_THREAD 4096
-	unsigned int i, nr = 256, loop = 8192, cpu = 0;
-	struct read_bpf_task_storage_busy *skel;
-	pthread_t tids[MAX_NR_THREAD];
-	struct lookup_ctx ctx;
-	cpu_set_t old, new;
-	const char *cfg;
-	int err;
-
-	cfg = getenv("TASK_STORAGE_MAP_NR_THREAD");
-	if (cfg) {
-		nr = atoi(cfg);
-		if (nr > MAX_NR_THREAD)
-			nr = MAX_NR_THREAD;
-	}
-	cfg = getenv("TASK_STORAGE_MAP_NR_LOOP");
-	if (cfg)
-		loop = atoi(cfg);
-	cfg = getenv("TASK_STORAGE_MAP_PIN_CPU");
-	if (cfg)
-		cpu = atoi(cfg);
-
-	skel = read_bpf_task_storage_busy__open_and_load();
-	err = libbpf_get_error(skel);
-	CHECK(err, "open_and_load", "error %d\n", err);
-
-	/* Only for a fully preemptible kernel */
-	if (!skel->kconfig->CONFIG_PREEMPTION) {
-		printf("%s SKIP (no CONFIG_PREEMPTION)\n", __func__);
-		read_bpf_task_storage_busy__destroy(skel);
-		skips++;
-		return;
-	}
-
-	/* Save the old affinity setting */
-	sched_getaffinity(getpid(), sizeof(old), &old);
-
-	/* Pinned on a specific CPU */
-	CPU_ZERO(&new);
-	CPU_SET(cpu, &new);
-	sched_setaffinity(getpid(), sizeof(new), &new);
-
-	ctx.start = false;
-	ctx.stop = false;
-	ctx.pid_fd = sys_pidfd_open(getpid(), 0);
-	ctx.map_fd = bpf_map__fd(skel->maps.task);
-	ctx.loop = loop;
-	for (i = 0; i < nr; i++) {
-		err = pthread_create(&tids[i], NULL, lookup_fn, &ctx);
-		if (err) {
-			abort_lookup(&ctx, tids, i);
-			CHECK(err, "pthread_create", "error %d\n", err);
-			goto out;
-		}
-	}
-
-	ctx.start = true;
-	for (i = 0; i < nr; i++)
-		pthread_join(tids[i], NULL);
-
-	skel->bss->pid = getpid();
-	err = read_bpf_task_storage_busy__attach(skel);
-	CHECK(err, "attach", "error %d\n", err);
-
-	/* Trigger program */
-	sys_gettid();
-	skel->bss->pid = 0;
-
-	CHECK(skel->bss->busy != 0, "bad bpf_task_storage_busy", "got %d\n", skel->bss->busy);
-out:
-	read_bpf_task_storage_busy__destroy(skel);
-	/* Restore affinity setting */
-	sched_setaffinity(getpid(), sizeof(old), &old);
-	printf("%s:PASS\n", __func__);
-}
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_dump.c b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
index 10cba526d3e6..f1642794f70e 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf_dump.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
@@ -875,8 +875,8 @@ static void test_btf_dump_var_data(struct btf *btf, struct btf_dump *d,
 	TEST_BTF_DUMP_VAR(btf, d, NULL, str, "cpu_number", int, BTF_F_COMPACT,
 			  "int cpu_number = (int)100", 100);
 #endif
-	TEST_BTF_DUMP_VAR(btf, d, NULL, str, "bpf_cgrp_storage_busy", int, BTF_F_COMPACT,
-			  "static int bpf_cgrp_storage_busy = (int)2", 2);
+	TEST_BTF_DUMP_VAR(btf, d, NULL, str, "bpf_bprintf_nest_level", int, BTF_F_COMPACT,
+			  "static int bpf_bprintf_nest_level = (int)2", 2);
 }
 
 struct btf_dump_string_ctx {
diff --git a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
index 42e822ea352f..7bee33797c71 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
@@ -112,24 +112,24 @@ static void test_recursion(void)
 	task_ls_recursion__detach(skel);
 
 	/* Refer to the comment in BPF_PROG(on_update) for
-	 * the explanation on the value 201 and 100.
+	 * the explanation on the value 200 and 1.
 	 */
 	map_fd = bpf_map__fd(skel->maps.map_a);
 	err = bpf_map_lookup_elem(map_fd, &task_fd, &value);
 	ASSERT_OK(err, "lookup map_a");
-	ASSERT_EQ(value, 201, "map_a value");
-	ASSERT_EQ(skel->bss->nr_del_errs, 1, "bpf_task_storage_delete busy");
+	ASSERT_EQ(value, 200, "map_a value");
+	ASSERT_EQ(skel->bss->nr_del_errs, 0, "bpf_task_storage_delete busy");
 
 	map_fd = bpf_map__fd(skel->maps.map_b);
 	err = bpf_map_lookup_elem(map_fd, &task_fd, &value);
 	ASSERT_OK(err, "lookup map_b");
-	ASSERT_EQ(value, 100, "map_b value");
+	ASSERT_EQ(value, 1, "map_b value");
 
 	prog_fd = bpf_program__fd(skel->progs.on_update);
 	memset(&info, 0, sizeof(info));
 	err = bpf_prog_get_info_by_fd(prog_fd, &info, &info_len);
 	ASSERT_OK(err, "get prog info");
-	ASSERT_EQ(info.recursion_misses, 0, "on_update prog recursion");
+	ASSERT_EQ(info.recursion_misses, 2, "on_update prog recursion");
 
 	prog_fd = bpf_program__fd(skel->progs.on_enter);
 	memset(&info, 0, sizeof(info));
diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c
index 637e75df2e14..d0be77011a84 100644
--- a/tools/testing/selftests/bpf/progs/local_storage.c
+++ b/tools/testing/selftests/bpf/progs/local_storage.c
@@ -62,7 +62,6 @@ SEC("lsm/inode_unlink")
 int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
 {
 	__u32 pid = bpf_get_current_pid_tgid() >> 32;
-	struct bpf_local_storage *local_storage;
 	struct local_storage *storage;
 	struct task_struct *task;
 	bool is_self_unlink;
@@ -88,15 +87,10 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
 	if (!storage || storage->value)
 		return 0;
 
-	if (bpf_task_storage_delete(&task_storage_map, task))
+	if (bpf_task_storage_delete(&task_storage_map2, task))
 		return 0;
 
-	/* Ensure that the task_storage_map is disconnected from the storage.
-	 * The storage memory should not be freed back to the
-	 * bpf_mem_alloc.
-	 */
-	local_storage = task->bpf_storage;
-	if (!local_storage || local_storage->smap)
+	if (bpf_task_storage_delete(&task_storage_map, task))
 		return 0;
 
 	task_storage_result = 0;
@@ -164,18 +158,9 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
 	if (bpf_sk_storage_delete(&sk_storage_map2, sk))
 		return 0;
 
-	storage = bpf_sk_storage_get(&sk_storage_map2, sk, 0,
-				     BPF_LOCAL_STORAGE_GET_F_CREATE);
-	if (!storage)
-		return 0;
-
 	if (bpf_sk_storage_delete(&sk_storage_map, sk))
 		return 0;
 
-	/* Ensure that the sk_storage_map is disconnected from the storage. */
-	if (!sk->sk_bpf_storage || sk->sk_bpf_storage->smap)
-		return 0;
-
 	sk_storage_result = 0;
 
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c b/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c
deleted file mode 100644
index 69da05bb6c63..000000000000
--- a/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c
+++ /dev/null
@@ -1,38 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2022. Huawei Technologies Co., Ltd */
-#include "vmlinux.h"
-#include <bpf/bpf_helpers.h>
-#include <bpf/bpf_tracing.h>
-
-extern bool CONFIG_PREEMPTION __kconfig __weak;
-extern const int bpf_task_storage_busy __ksym;
-
-char _license[] SEC("license") = "GPL";
-
-int pid = 0;
-int busy = 0;
-
-struct {
-	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
-	__uint(map_flags, BPF_F_NO_PREALLOC);
-	__type(key, int);
-	__type(value, long);
-} task SEC(".maps");
-
-SEC("raw_tp/sys_enter")
-int BPF_PROG(read_bpf_task_storage_busy)
-{
-	int *value;
-
-	if (!CONFIG_PREEMPTION)
-		return 0;
-
-	if (bpf_get_current_pid_tgid() >> 32 != pid)
-		return 0;
-
-	value = bpf_this_cpu_ptr(&bpf_task_storage_busy);
-	if (value)
-		busy = *value;
-
-	return 0;
-}
diff --git a/tools/testing/selftests/bpf/progs/sk_storage_omem_uncharge.c b/tools/testing/selftests/bpf/progs/sk_storage_omem_uncharge.c
index 46d6eb2a3b17..c8f4815c8dfb 100644
--- a/tools/testing/selftests/bpf/progs/sk_storage_omem_uncharge.c
+++ b/tools/testing/selftests/bpf/progs/sk_storage_omem_uncharge.c
@@ -6,7 +6,6 @@
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_core_read.h>
 
-void *local_storage_ptr = NULL;
 void *sk_ptr = NULL;
 int cookie_found = 0;
 __u64 cookie = 0;
@@ -19,21 +18,17 @@ struct {
 	__type(value, int);
 } sk_storage SEC(".maps");
 
-SEC("fexit/bpf_local_storage_destroy")
-int BPF_PROG(bpf_local_storage_destroy, struct bpf_local_storage *local_storage)
+SEC("fexit/bpf_sk_storage_free")
+int BPF_PROG(bpf_sk_storage_free, struct sock *sk)
 {
-	struct sock *sk;
-
-	if (local_storage_ptr != local_storage)
+	if (sk_ptr != sk)
 		return 0;
 
-	sk = bpf_core_cast(sk_ptr, struct sock);
 	if (sk->sk_cookie.counter != cookie)
 		return 0;
 
 	cookie_found++;
 	omem = sk->sk_omem_alloc.counter;
-	local_storage_ptr = NULL;
 
 	return 0;
 }
@@ -50,7 +45,6 @@ int BPF_PROG(inet6_sock_destruct, struct sock *sk)
 	if (value && *value == 0xdeadbeef) {
 		cookie_found++;
 		sk_ptr = sk;
-		local_storage_ptr = sk->sk_bpf_storage;
 	}
 
 	return 0;
diff --git a/tools/testing/selftests/bpf/progs/task_ls_recursion.c b/tools/testing/selftests/bpf/progs/task_ls_recursion.c
index f1853c38aada..b37359432692 100644
--- a/tools/testing/selftests/bpf/progs/task_ls_recursion.c
+++ b/tools/testing/selftests/bpf/progs/task_ls_recursion.c
@@ -36,14 +36,9 @@ int BPF_PROG(on_update)
 	if (!test_pid || task->pid != test_pid)
 		return 0;
 
+	/* This will succeed as there is no real deadlock */
 	ptr = bpf_task_storage_get(&map_a, task, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
-	/* ptr will not be NULL when it is called from
-	 * the bpf_task_storage_get(&map_b,...F_CREATE) in
-	 * the BPF_PROG(on_enter) below. It is because
-	 * the value can be found in map_a and the kernel
-	 * does not need to acquire any spin_lock.
-	 */
 	if (ptr) {
 		int err;
 
@@ -53,12 +48,7 @@ int BPF_PROG(on_update)
 		nr_del_errs++;
 	}
 
-	/* This will still fail because map_b is empty and
-	 * this BPF_PROG(on_update) has failed to acquire
-	 * the percpu busy lock => meaning potential
-	 * deadlock is detected and it will fail to create
-	 * new storage.
-	 */
+	/* This will succeed as there is no real deadlock */
 	ptr = bpf_task_storage_get(&map_b, task, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
diff --git a/tools/testing/selftests/bpf/progs/task_storage_nodeadlock.c b/tools/testing/selftests/bpf/progs/task_storage_nodeadlock.c
index 986829aaf73a..6ce98fe9f387 100644
--- a/tools/testing/selftests/bpf/progs/task_storage_nodeadlock.c
+++ b/tools/testing/selftests/bpf/progs/task_storage_nodeadlock.c
@@ -1,15 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0
 #include "vmlinux.h"
+#include <errno.h>
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 
 char _license[] SEC("license") = "GPL";
 
-#ifndef EBUSY
-#define EBUSY 16
-#endif
-
 extern bool CONFIG_PREEMPTION __kconfig __weak;
 int nr_get_errs = 0;
 int nr_del_errs = 0;
 
@@ -40,7 +37,7 @@ int BPF_PROG(socket_post_create, struct socket *sock, int family, int type,
 	ret = bpf_task_storage_delete(&task_storage,
 				      bpf_get_current_task_btf());
-	if (ret == -EBUSY)
+	if (ret == -EDEADLK || ret == -ETIMEDOUT)
 		__sync_fetch_and_add(&nr_del_errs, 1);
 
 	return 0;
 }