| field | value | detail |
|---|---|---|
| author | David S. Miller <davem@davemloft.net> | 2020-09-14 23:28:03 +0300 |
| committer | David S. Miller <davem@davemloft.net> | 2020-09-14 23:28:03 +0300 |
| commit | b91c06c5df511dfed3995c4e1a44729756ef0025 (patch) | |
| tree | f8ae1d2f23762b225220d65a63fa2d4bf248d582 /include | |
| parent | 26cdb8f72a952ef247dfb1d507eadfe0ec8277ae (diff) | |
| parent | 1a418cb8e888ccee29d5aca305cfdbae6cff2139 (diff) | |
Merge branch 'mptcp-introduce-support-for-real-multipath-xmit'
Paolo Abeni says:
====================
mptcp: introduce support for real multipath xmit
This series enables an MPTCP socket to transmit data on multiple subflows
concurrently in a load-balancing scenario.
First, the receive code path is refactored to better deal with out-of-order
data (patches 1-7). An RB-tree is introduced to queue MPTCP-level out-of-order
data, closely resembling the TCP-level OoO handling.
When data is sent on multiple subflows, the peer can easily observe
out-of-order - "future" - data at the MPTCP level, especially if the subflows'
speeds, delays, or jitter are not symmetric.
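To make the queueing concrete, here is a minimal sketch of the classic kernel
rbtree insertion pattern this kind of OoO queue relies on. The ooo_entry type
and function name are hypothetical stand-ins for this illustration (the real
queue holds skbs); it shows the technique, not the mptcp code itself:

```c
#include <linux/rbtree.h>
#include <linux/types.h>

struct ooo_entry {
	struct rb_node node;
	u64 map_seq;		/* MPTCP-level data sequence number */
};

static void ooo_queue_insert(struct rb_root *root, struct ooo_entry *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;

	while (*p) {
		struct ooo_entry *e;

		parent = *p;
		e = rb_entry(parent, struct ooo_entry, node);
		if (new->map_seq < e->map_seq)	/* earlier data goes left */
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}
	rb_link_node(&new->node, parent, p);	/* attach as leaf... */
	rb_insert_color(&new->node, root);	/* ...then rebalance */
}
```

Keyed by data sequence number, an in-order walk of the tree yields the queued
data in stream order, just as in TCP's own OoO queue.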
The other major change concerns the netlink path manager, which is extended
in patches 9-11 to allow creating non-backup subflows.
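For illustration, the MPTCP netlink uAPI (include/uapi/linux/mptcp.h) already
defines per-endpoint flags; the helper below is a made-up name for this sketch,
showing that a non-backup endpoint differs from a backup one only in its flag
mask:

```c
#include <linux/types.h>
#include <linux/mptcp.h>	/* uAPI: MPTCP_PM_ADDR_FLAG_* */

/* Hypothetical helper: build the per-endpoint flag mask for a netlink
 * ADD_ADDR request. Leaving MPTCP_PM_ADDR_FLAG_BACKUP out of the mask
 * is what lets the PM create a regular, non-backup subflow for this
 * endpoint.
 */
static inline __u32 endpoint_flags(int backup)
{
	__u32 flags = MPTCP_PM_ADDR_FLAG_SUBFLOW;

	if (backup)
		flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
	return flags;
}
```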
There are a few smaller additions, like the introduction of OoO-related MIBs,
send-buffer autotuning, and better ack handling.
Finally, a bunch of new self-tests are introduced. The new feature is tested
by ensuring that the B/W used by an MPTCP socket with multiple subflows
matches the aggregated link B/W - we use low-B/W virtual links to ensure the
tests are not CPU-bound.
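Conceptually, the acceptance check reduces to the comparison below; the names
and the tolerance parameter are assumptions for this sketch, not taken from
the self-tests:

```c
#include <stdbool.h>

/* The measured MPTCP goodput should approach the sum of the per-link
 * rates, with some slack for protocol overhead and measurement noise.
 */
static bool bw_matches_aggregate(double measured_mbps, double link1_mbps,
				 double link2_mbps, double tolerance)
{
	return measured_mbps >= (link1_mbps + link2_mbps) * tolerance;
}
```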
v1 -> v2:
- fix 32 bit build breakage
- fix a bunch of checkpatch issues
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
| -rw-r--r-- | include/net/tcp.h | 2 |
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index e85d564446c6..852f0d71dd40 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1414,6 +1414,8 @@ static inline int tcp_full_space(const struct sock *sk)
 	return tcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf));
 }
 
+void tcp_cleanup_rbuf(struct sock *sk, int copied);
+
 /* We provision sk_rcvbuf around 200% of sk_rcvlowat.
  * If 87.5 % (7/8) of the space has been consumed, we want to override
  * SO_RCVLOWAT constraint, since we are receiving skbs with too small
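The two added lines expose tcp_cleanup_rbuf() to code outside tcp.c. A minimal
sketch of the intended kind of caller, with a hypothetical function name (this
is not the actual mptcp code): after the MPTCP-level receive path has consumed
data from a subflow socket, plain TCP gets to decide whether an ACK or window
update should go out on that subflow.

```c
#include <net/tcp.h>

/* Hypothetical caller: 'copied' bytes were just consumed from the
 * TCP subflow socket 'ssk' owned by an MPTCP socket; let TCP's
 * receive-buffer logic send any pending ACK or window update.
 */
static void example_subflow_data_consumed(struct sock *ssk, int copied)
{
	lock_sock(ssk);
	tcp_cleanup_rbuf(ssk, copied);
	release_sock(ssk);
}
```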
