| author | Jakub Kicinski <kuba@kernel.org> | 2022-07-06 02:59:26 +0300 |
| --- | --- | --- |
| committer | David S. Miller <davem@davemloft.net> | 2022-07-06 14:56:35 +0300 |
| commit | c46b01839f7aad5889e23505bbfbeb5f4d7fde8e | |
| tree | c3759309c9c88b17a1f87d0bcbb4b939491e2ef7 /net/core/sock.c | |
| parent | f36068a20256bad993d60e49602f02e3af336506 | |
tls: rx: periodically flush socket backlog
We continuously hold the socket lock during large reads and writes.
While the lock is held, incoming packets accumulate on the socket
backlog instead of being processed (and ACKed) right away, which may
inflate RTT and negatively impact TCP performance.
Flush the backlog periodically. I tried to pick a flush period (128kB)
at which the benefit is already significant but the max Bps rate is
not yet visibly impacted.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
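
[Editor's note] The consumer of this export lands in the TLS rx path later in
the series; this commit only exports the core helper. As a rough sketch of the
pattern described above (the helper name and bookkeeping here are illustrative,
not the literal net/tls change), the receive loop tracks how many bytes it has
copied since the last flush and releases the backlog once that crosses 128kB:

```c
/* Illustrative sketch only: rx_maybe_flush_backlog() and the flushed_at
 * bookkeeping are made up for this note; the real logic lives in the
 * net/tls rx code. sk_flush_backlog() is the real in-tree helper.
 */
static void rx_maybe_flush_backlog(struct sock *sk, size_t copied,
				   size_t *flushed_at)
{
	/* Amortize the cost: only drop/retake the lock every ~128kB. */
	if (copied - *flushed_at < 128 * 1024)
		return;

	*flushed_at = copied;
	/* Process packets that queued up on sk->sk_backlog while the
	 * socket lock was held, so TCP can ACK them promptly and RTT
	 * estimates stay accurate.
	 */
	sk_flush_backlog(sk);
}
```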
Diffstat (limited to 'net/core/sock.c')
| -rw-r--r-- | net/core/sock.c | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/net/core/sock.c b/net/core/sock.c
index 92a0296ccb18..4cb957d934a2 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2870,6 +2870,7 @@ void __sk_flush_backlog(struct sock *sk)
 	__release_sock(sk);
 	spin_unlock_bh(&sk->sk_lock.slock);
 }
+EXPORT_SYMBOL_GPL(__sk_flush_backlog);
 
 /**
  * sk_wait_data - wait for data to arrive at sk_receive_queue
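
[Editor's note] The change itself is a one-line export: __sk_flush_backlog()
was already reachable inside the core via the sk_flush_backlog() inline
wrapper, and EXPORT_SYMBOL_GPL makes it callable from modular code such as
net/tls. For reference, the caller-side inline in include/net/sock.h looks
approximately like this (quoted from memory of the v5.19-era tree; consult the
tree itself for the authoritative version):

```c
/* include/net/sock.h (approximate): cheap wrapper that only takes the
 * slow path when packets actually accumulated on the backlog while the
 * socket lock was held.
 */
static inline void sk_flush_backlog(struct sock *sk)
{
	if (unlikely(READ_ONCE(sk->sk_backlog.tail)))
		__sk_flush_backlog(sk);
}
```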