author | Breno Leitao <leitao@debian.org> | 2024-11-28 15:16:25 +0300
---|---|---
committer | Herbert Xu <herbert@gondor.apana.org.au> | 2024-12-21 12:05:29 +0300
commit | e1d3422c95f003eba241c176adfe593c33e8a8f6 (patch) |
tree | df395caabbf4a58b29545a635c1b8ef1806ec797 |
parent | f916e44487f56df4827069ff3a2070c0746dc511 (diff) |
download | linux-e1d3422c95f003eba241c176adfe593c33e8a8f6.tar.xz |
rhashtable: Fix potential deadlock by moving schedule_work outside lock
Move the hash table growth check and work scheduling outside the
rht lock to prevent a possible circular locking dependency.
The original implementation could trigger a lockdep warning due to
a potential deadlock scenario involving nested locks between the
rhashtable bucket lock, the rq lock, and the dsq lock. By moving the
growth check and work scheduling to after the rht lock is released,
we break this potential deadlock chain.
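A condensed sketch of the before/after pattern, with both variants
inlined for contrast (in the real code the insert happens in
rhashtable_insert_one() and the deferred check in
rhashtable_try_insert(), as the diff below shows):

```c
/* Before: the growth side effect ran with the bucket lock held, so the
 * locks taken inside schedule_work() nested under the bucket lock.
 */
flags = rht_lock(tbl, bkt);
rht_assign_locked(bkt, obj);
atomic_inc(&ht->nelems);
if (rht_grow_above_75(ht, tbl))
	schedule_work(&ht->run_work);
rht_unlock(tbl, bkt, flags);

/* After: the bucket lock is dropped first; the counter update and the
 * growth check run without it, so the nesting disappears.
 */
flags = rht_lock(tbl, bkt);
rht_assign_locked(bkt, obj);
rht_unlock(tbl, bkt, flags);
atomic_inc(&ht->nelems);
if (rht_grow_above_75(ht, tbl))
	schedule_work(&ht->run_work);
```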
This change expands the flexibility of rhashtable by removing
restrictive locking that previously limited its use in scheduler
and workqueue contexts.
It is important to note that this change calls rht_grow_above_75(),
which reads from struct rhashtable without holding the lock. If this
proves to be a problem, the check can be moved back under the lock and
the work scheduled after the lock is released.
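For context, rht_grow_above_75() only reads an atomic counter and
fields that are stable for the lifetime of tbl; the definition below is
quoted from include/linux/rhashtable.h as of this commit (verify
against your tree):

```c
static inline bool rht_grow_above_75(const struct rhashtable *ht,
				     const struct bucket_table *tbl)
{
	/* Expand table when exceeding 75% load */
	return atomic_read(&ht->nelems) > (tbl->size / 4 * 3) &&
	       (!ht->p.max_size || tbl->size < ht->p.max_size);
}
```

At worst, the unlocked read makes the grow decision on a slightly stale
nelems, scheduling the rehash work a little early or late; the deferred
worker re-checks the table load before actually resizing.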
Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Breno Leitao <leitao@debian.org>
Modified so that the atomic_inc() is also moved outside of the bucket
lock, along with the grow-above-75% check.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-rw-r--r-- | lib/rhashtable.c | 10 |
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 6c902639728b..bf956b85455a 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -584,10 +584,6 @@ static struct bucket_table *rhashtable_insert_one(
 	 */
 	rht_assign_locked(bkt, obj);
 
-	atomic_inc(&ht->nelems);
-	if (rht_grow_above_75(ht, tbl))
-		schedule_work(&ht->run_work);
-
 	return NULL;
 }
 
@@ -624,6 +620,12 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
 				data = ERR_CAST(new_tbl);
 
 			rht_unlock(tbl, bkt, flags);
+
+			if (PTR_ERR(data) == -ENOENT && !new_tbl) {
+				atomic_inc(&ht->nelems);
+				if (rht_grow_above_75(ht, tbl))
+					schedule_work(&ht->run_work);
+			}
 		}
 	} while (!IS_ERR_OR_NULL(new_tbl));
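In the new hunk, PTR_ERR(data) == -ENOENT together with !new_tbl
identifies the successful-insert case (the lookup found no existing
entry and rhashtable_insert_one() returned NULL), so the element count
and growth check still run exactly once per inserted element. With the
side effects out from under the bucket lock, an insert can sit inside a
caller's own critical section without recreating the reported nesting.
A minimal, hypothetical caller sketch (my_lock, struct my_entry, and
my_params are illustrative, not actual sched_ext code):

```c
#include <linux/rhashtable.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);	/* stands in for a caller-side lock */

struct my_entry {
	int key;
	struct rhash_head node;
};

static const struct rhashtable_params my_params = {
	.key_len     = sizeof(int),
	.key_offset  = offsetof(struct my_entry, key),
	.head_offset = offsetof(struct my_entry, node),
};

/* Insert while holding an unrelated spinlock: rhashtable no longer
 * calls schedule_work() with its bucket lock held, which removes the
 * bucket-lock link from the deadlock chain described above.
 */
static int my_insert(struct rhashtable *ht, struct my_entry *e)
{
	int err;

	spin_lock(&my_lock);
	err = rhashtable_insert_fast(ht, &e->node, my_params);
	spin_unlock(&my_lock);
	return err;
}
```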