author     David S. Miller <davem@davemloft.net>  2017-10-06 07:24:48 +0300
committer  David S. Miller <davem@davemloft.net>  2017-10-06 07:24:48 +0300
commit     cec451ce60e50dba6d4136b7d1e62a5900cd264f (patch)
tree       cb70c1552a2c58cc5b8f63bd8e85b7fdc3b33497 /net/ipv4/tcp.c
parent     b1fb67fa501c4787035317f84db6caf013385581 (diff)
parent     bef06223083b81d2064824afe2bc85be416ab73a (diff)
download   linux-cec451ce60e50dba6d4136b7d1e62a5900cd264f.tar.xz
Merge branch 'tcp-improving-RACK-cpu-performance'
Yuchung Cheng says:
====================
tcp: improving RACK cpu performance
This patch set improves the CPU consumption of the RACK TCP loss
recovery algorithm, in particular for high-speed networks. Currently,
for every ACK in recovery, RACK can potentially iterate over all sent
packets in the write queue. On large BDP networks with non-trivial
losses the RACK write queue walk CPU usage becomes unreasonably high.
This patch introduces a new queue in TCP that keeps only skbs sent and
not yet (s)acked or marked lost, in time order instead of sequence
order. With that, RACK can examine this time-sorted list and only
check packets that were sent recently, within the reordering window,
per ACK. This avoids write queue walks entirely, and the number of
skbs examined per ACK is reduced by orders of magnitude.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4/tcp.c')
-rw-r--r--  net/ipv4/tcp.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c115e37ca608..8cf742fd4f99 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -415,6 +415,7 @@ void tcp_init_sock(struct sock *sk)
 	tp->out_of_order_queue = RB_ROOT;
 	tcp_init_xmit_timers(sk);
 	INIT_LIST_HEAD(&tp->tsq_node);
+	INIT_LIST_HEAD(&tp->tsorted_sent_queue);
 
 	icsk->icsk_rto = TCP_TIMEOUT_INIT;
 	tp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT);
@@ -881,6 +882,7 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
 			 * available to the caller, no more, no less.
 			 */
 			skb->reserved_tailroom = skb->end - skb->tail - size;
+			INIT_LIST_HEAD(&skb->tcp_tsorted_anchor);
 			return skb;
 		}
 		__kfree_skb(skb);