author     Eric Dumazet <edumazet@google.com>               2018-10-31 18:39:13 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2019-12-01 11:17:15 +0300
commit     f9fca78e6cf2a44504dac45761c5195973f49c29 (patch)
tree       299d0c2abfa132ec81210c55e93842e16cc2d2b1 /net/core
parent     0d3b9ac2844fd0fe3eb2b19592aaced0abb0d6bb (diff)
net: do not abort bulk send on BQL status
[ Upstream commit fe60faa5063822f2d555f4f326c7dd72a60929bf ]

Before calling dev_hard_start_xmit(), upper layers tried to cook an optimal skb list based on the BQL budget. The problem is that GSO packets can end up consuming more than the BQL budget.

Breaking the loop is not useful, since requeued packets are ahead of any packets still in the qdisc. It is also more expensive, since the next TX completion will push these packets later, by which time the skbs are no longer in cpu caches. It is also a behavior difference with TSO packets, which can break the BQL limit by a large amount.

Note that drivers should use __netdev_tx_sent_queue() in order to have optimal xmit_more support, and to avoid useless atomic operations as shown in the following patch.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
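For context on the __netdev_tx_sent_queue() note above, here is a minimal sketch of how a driver's ndo_start_xmit() tail might use it. The BQL helper and its return-value contract (true means the doorbell must be rung now) are the real kernel API; the driver itself and my_ring_doorbell() are hypothetical, and kernels of this era pass skb->xmit_more where newer ones use netdev_xmit_more().

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical driver TX path illustrating __netdev_tx_sent_queue(). */
static void my_ring_doorbell(struct net_device *dev)
{
	/* hypothetical: write the producer index to the NIC doorbell register */
}

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... post skb descriptors to the hardware TX ring here ... */

	/* Account bytes to BQL.  Unlike netdev_tx_sent_queue(), this variant
	 * only flips the BQL stop bit on the last skb of a batch, and returns
	 * true when the doorbell must be rung now (end of batch, or the
	 * driver already stopped the queue).  Newer kernels pass
	 * netdev_xmit_more() instead of skb->xmit_more.
	 */
	if (__netdev_tx_sent_queue(txq, skb->len, skb->xmit_more))
		my_ring_doorbell(dev);

	return NETDEV_TX_OK;
}

Deferring both the doorbell write and the BQL stop check to the last skb of a batch is what makes xmit_more effective: one MMIO write and one set of atomic BQL updates per batch rather than per packet.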
Diffstat (limited to 'net/core')
-rw-r--r--  net/core/dev.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index e96c88b1465d..91179febdeee 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3277,7 +3277,7 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *first, struct net_device *de
 		}
 
 		skb = next;
-		if (netif_xmit_stopped(txq) && skb) {
+		if (netif_tx_queue_stopped(txq) && skb) {
 			rc = NETDEV_TX_BUSY;
 			break;
 		}
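The one-line change matters because the two helpers test different stop bits. netif_xmit_stopped() also sees the stop bit that BQL sets when its byte budget is exceeded, so a merely BQL-throttled queue used to abort the bulk-send loop and requeue the rest of the list; netif_tx_queue_stopped() only sees the driver's own stop bit, so the loop now breaks only when the hardware ring is actually full. A paraphrased sketch of the helpers as defined in include/linux/netdevice.h (exact form varies slightly across kernel versions):

/* Paraphrased from include/linux/netdevice.h; not a verbatim copy.
 *
 * __QUEUE_STATE_DRV_XOFF   - set by the driver (netif_tx_stop_queue())
 *                            when its TX ring is full.
 * __QUEUE_STATE_STACK_XOFF - set by BQL (netdev_tx_sent_queue()) when
 *                            the queued byte limit is reached.
 */
static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue)
{
	return test_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}

static inline bool netif_xmit_stopped(const struct netdev_queue *dev_queue)
{
	/* QUEUE_STATE_ANY_XOFF covers both DRV_XOFF and STACK_XOFF */
	return dev_queue->state & QUEUE_STATE_ANY_XOFF;
}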