author	Eric Dumazet <edumazet@google.com>	2016-08-27 17:37:54 +0300
committer	David S. Miller <davem@davemloft.net>	2016-08-29 07:20:24 +0300
commit	c9c3321257e1b95be9b375f811fb250162af8d39 (patch)
tree	b3c1a7773a51c0076cc8213a86a47ea6e06d1024 /include
parent	1a8ff8f52faebe28d145324884357fcdf33322c0 (diff)
download	linux-c9c3321257e1b95be9b375f811fb250162af8d39.tar.xz
tcp: add tcp_add_backlog()
When TCP operates in lossy environments (between 1 and 10 % packet
losses), many SACK blocks can be exchanged, and I noticed we could drop
them on busy senders, if these SACK blocks have to be queued into the
socket backlog.

While the main cause is the poor performance of RACK/SACK processing,
we can try to avoid these drops of valuable information that can lead
to spurious timeouts and retransmits.

Cause of the drops is the skb->truesize overestimation caused by :

- drivers allocating ~2048 (or more) bytes as a fragment to hold an
  Ethernet frame.

- various pskb_may_pull() calls bringing the headers into skb->head
  might have pulled all the frame content, but skb->truesize could not
  be lowered, as the stack has no idea of each fragment truesize.

The backlog drops are also more visible on bidirectional flows, since
their sk_rmem_alloc can be quite big.

Let's add some room for the backlog, as only the socket owner can
selectively take action to lower memory needs, like collapsing receive
queues or partial ofo pruning.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r--	include/net/tcp.h	1
1 file changed, 1 insertion, 0 deletions
diff --git a/include/net/tcp.h b/include/net/tcp.h
index a5af6be3a572..dd99679e2e51 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1161,6 +1161,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
}
bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
+bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb);
#undef STATE_TRACE
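
The diff shown here is limited to include/, so it only contains the new
declaration; the function body lands in net/ipv4/tcp_ipv4.c in the same
commit. Below is a minimal sketch of what such a helper could look
like, following the commit message: take a backlog limit derived from
the socket buffers, add some headroom (the 64 KB constant here is an
illustrative assumption, not quoted from the patch), and fix up the
overestimated skb->truesize when pskb_may_pull() has already moved all
payload into skb->head.

/* Sketch only, assuming it lives in net/ipv4/tcp_ipv4.c, which already
 * includes <net/tcp.h> and <net/sock.h>.
 */
bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
{
	/* Base limit: receive + send buffers. Only the socket owner can
	 * later reduce memory usage by collapsing receive queues or
	 * pruning the out-of-order queue, so leave it some room.
	 */
	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;

	/* Headroom so bursts of SACK-carrying segments are not dropped
	 * while the socket is owned by user context (assumed value).
	 */
	limit += 64 * 1024;

	/* If all payload was pulled into skb->head, the fragment that
	 * originally held the frame no longer carries data; lower the
	 * overestimated truesize before charging the backlog.
	 */
	if (!skb->data_len)
		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));

	if (unlikely(sk_add_backlog(sk, skb, limit))) {
		/* Caller holds the bh lock on this socket; release it
		 * before accounting the drop. Returning true tells the
		 * caller the skb was not queued and must be discarded.
		 */
		bh_unlock_sock(sk);
		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
		return true;
	}
	return false;
}

A caller such as tcp_v4_rcv() would then typically replace its
open-coded sk_add_backlog() handling with something like
"if (tcp_add_backlog(sk, skb)) goto discard_and_relse;".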