author     Jakub Kicinski <kuba@kernel.org>   2024-12-18 06:37:02 +0300
committer  Jakub Kicinski <kuba@kernel.org>   2024-12-18 06:37:57 +0300
commit     3a4130550998f23762184b0de4cc9163a3f2c49d (patch)
tree       b6d9cd4e2333cf6afc87beb0bc7675bc73777006 /tools/testing/selftests/net/lib/py/utils.py
parent     bf8469fc4d1ef2696bbe10b049cc8f9ef501face (diff)
parent     a853c609504e2d1d83e71285e3622fda1f1451d8 (diff)
download   linux-3a4130550998f23762184b0de4cc9163a3f2c49d.tar.xz
Merge branch 'inetpeer-reduce-false-sharing-and-atomic-operations'
Eric Dumazet says:
====================
inetpeer: reduce false sharing and atomic operations
After commit 8c2bd38b95f7 ("icmp: change the order of rate limits"),
there is a risk that a host receiving packets from a single
source targeting closed ports ends up using a common inet_peer
structure from many cpus.
All these cpus have to acquire/release a refcount and update
the inet_peer timestamp (p->dtime).
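
To make the cost concrete, here is a hypothetical userspace model
of the old pattern (C11 atomics standing in for the kernel's
refcount_t; the struct and function names are invented and this is
not the kernel source): every packet performs two atomic RMWs plus
a store on one cache line shared by all cpus.

#include <stdatomic.h>
#include <stdint.h>

struct peer {
	atomic_uint refcnt;	/* shared: every cpu does atomic RMWs here */
	atomic_uint dtime;	/* shared: rewritten for every packet */
};

/* Old pattern: take and drop a reference around every packet. */
static void handle_packet_refcounted(struct peer *p, uint32_t now_jiffies)
{
	atomic_fetch_add(&p->refcnt, 1);	/* locked RMW; line bounces */
	/* ... rate-limit decision using *p ... */
	atomic_store_explicit(&p->dtime, now_jiffies,
			      memory_order_relaxed);	/* dirties the line */
	atomic_fetch_sub(&p->refcnt, 1);	/* second locked RMW */
}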
Switch to pure RCU to avoid changing the refcount, and update
p->dtime only once per jiffy.
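
For contrast, a minimal sketch of the new fast path under the same
hypothetical model (reusing struct peer from the sketch above; again
this is an illustration, not the actual patches in this series): the
lookup runs under RCU so no refcount is touched, and the timestamp
is written only when the jiffy actually changes.

/* New pattern: no refcount on the fast path; in the kernel the object
 * is freed only after an RCU grace period, which is elided here. */
static void handle_packet_rcu(struct peer *p, uint32_t now_jiffies)
{
	/* Check before store (mirrors a READ_ONCE()/WRITE_ONCE() pair):
	 * all but the first packet in a given jiffy skip the write, so
	 * the cache line usually stays in shared state across cpus. */
	if (atomic_load_explicit(&p->dtime, memory_order_relaxed) !=
	    now_jiffies)
		atomic_store_explicit(&p->dtime, now_jiffies,
				      memory_order_relaxed);
}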
Tested:
DUT: 128 cores, 32 hw rx queues,
receiving 8,400,000 UDP packets per second, targeting closed ports.
Before the series:
- NAPI poll cannot keep up; the NIC drops 1,200,000 packets
  per second.
- We use 20% of CPU cycles.
After this series:
- All packets are received (no more hw drops).
- We use 12% of CPU cycles.
v1: https://lore.kernel.org/20241213130212.1783302-1-edumazet@google.com
====================
Link: https://patch.msgid.link/20241215175629.1248773-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'tools/testing/selftests/net/lib/py/utils.py')
0 files changed, 0 insertions, 0 deletions