| field | value | date |
|---|---|---|
| author | Soheil Hassas Yeganeh <soheil@google.com> | 2018-01-04 05:47:11 +0300 |
| committer | David S. Miller <davem@davemloft.net> | 2018-01-05 19:14:57 +0300 |
| commit | 0a38806f31729c8931383d2ce944115312855931 (patch) | |
| tree | 708cc9eada262ba5ae590aecfab1c8db01ea8814 | |
| parent | e3f2c4a3db1413bebfd502f7ac94fb55e3ba8c84 (diff) | |
| download | linux-0a38806f31729c8931383d2ce944115312855931.tar.xz | |
net: revert "Update RFS target at poll for tcp/udp"
In multi-threaded processes, one common architecture is to have
one (or a small number of) threads polling sockets, and a
considerably larger pool of threads reading from and writing to the
sockets. When we set the RFS target CPU in tcp_poll() or udp_poll(),
we essentially steer all packets of all the polled FDs to one (or a
small number of) cores, creating a bottleneck and/or RFS misprediction.
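To make that first pattern concrete, here is a minimal userspace sketch (illustrative only, not part of this patch or the kernel tree): one thread epoll-waits on all sockets while a pool of workers does the actual recv()/send(). The hand-off pipe, NR_WORKERS, and the empty epoll registration are assumptions of the sketch; real code would also use EPOLLONESHOT or similar to avoid handing the same FD to several workers.

```c
#include <pthread.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define NR_WORKERS 8

static int epfd;        /* epoll instance watched by the single poller thread */
static int handoff[2];  /* pipe used as a trivial ready-FD hand-off queue     */

/* One (or a few) of these: it only polls, it never reads or writes. */
static void *poller(void *arg)
{
	struct epoll_event ev[64];

	(void)arg;
	for (;;) {
		int n = epoll_wait(epfd, ev, 64, -1);

		/*
		 * Before this revert, tcp_poll()/udp_poll() called
		 * sock_rps_record_flow() on this code path, steering every
		 * polled flow toward this one thread's CPU.
		 */
		for (int i = 0; i < n; i++)
			write(handoff[1], &ev[i].data.fd, sizeof(int));
	}
	return NULL;
}

/* Many of these: recvmsg()/sendmsg() record the RFS flow on their own CPU. */
static void *worker(void *arg)
{
	char buf[4096];
	int fd;

	(void)arg;
	while (read(handoff[0], &fd, sizeof(int)) == sizeof(int)) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);

		if (len > 0)
			send(fd, buf, len, 0);	/* e.g. echo the payload back */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	epfd = epoll_create1(0);
	pipe(handoff);
	/* Registering the application's sockets with epfd is elided here. */

	for (int i = 0; i < NR_WORKERS; i++)
		pthread_create(&tid, NULL, worker, NULL);
	poller(NULL);
	return 0;
}
```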
Another common architecture is to shard FDs among threads pinned
to cores. In such a setting, setting the RFS target in tcp_poll() and
udp_poll() is redundant because it is already set correctly in
recvmsg and sendmsg.
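For contrast, a sketch of that sharded architecture (again illustrative only, driver code omitted): each thread pins itself to one CPU and both polls and reads its own shard of FDs, so recvmsg() records the flow on the right CPU without any help from tcp_poll()/udp_poll(). shard_epfd[], NR_SHARDS, and shard_thread() are hypothetical names assumed by the sketch, not names from the kernel or this patch.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define NR_SHARDS 8

/* Hypothetical: one epoll instance per shard, populated elsewhere. */
static int shard_epfd[NR_SHARDS];

static void *shard_thread(void *arg)
{
	long cpu = (long)arg;
	struct epoll_event ev[64];
	char buf[4096];
	cpu_set_t set;

	/* Pin this thread to one CPU; its shard of FDs is serviced only here. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	for (;;) {
		int n = epoll_wait(shard_epfd[cpu], ev, 64, -1);

		for (int i = 0; i < n; i++) {
			/*
			 * recvmsg() already records the RFS flow for this
			 * CPU, so recording it again in tcp_poll()/udp_poll()
			 * was redundant in this setup.
			 */
			ssize_t len = recv(ev[i].data.fd, buf, sizeof(buf), 0);

			if (len > 0)
				send(ev[i].data.fd, buf, len, 0);
		}
	}
	return NULL;
}
```

Spawning one shard_thread() per core, e.g. pthread_create(&tid, NULL, shard_thread, (void *)(long)cpu), completes the picture; with this revert, only these recv()/send() calls influence the RFS target.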
Thus, revert the following commit:
c3f1dbaf6e28 ("net: Update RFS target at poll for tcp/udp").
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-rw-r--r-- | net/ipv4/tcp.c | 2 --
-rw-r--r-- | net/ipv4/udp.c | 2 --
2 files changed, 0 insertions, 4 deletions
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 7ac583a2b9fe..f68cb33d50d1 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -498,8 +498,6 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 	const struct tcp_sock *tp = tcp_sk(sk);
 	int state;
 
-	sock_rps_record_flow(sk);
-
 	sock_poll_wait(file, sk_sleep(sk), wait);
 
 	state = inet_sk_state_load(sk);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index e9c0d1e1772e..db72619e07e4 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -2490,8 +2490,6 @@ unsigned int udp_poll(struct file *file, struct socket *sock, poll_table *wait)
 	if (!skb_queue_empty(&udp_sk(sk)->reader_queue))
 		mask |= POLLIN | POLLRDNORM;
 
-	sock_rps_record_flow(sk);
-
 	/* Check for false positives due to checksum errors */
 	if ((mask & POLLRDNORM) && !(file->f_flags & O_NONBLOCK) &&
 	    !(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1)