author    David S. Miller <davem@davemloft.net>  2018-04-01 21:08:21 +0300
committer David S. Miller <davem@davemloft.net>  2018-04-01 21:08:21 +0300
commit    16c3c91346961fdd733ad232311772795599be7f (patch)
tree      253468761ee8e0f7a7cd97f21c24ef3c841529e7 /drivers/infiniband/hw/mlx5
parent    c07255020551cadae8bf903f2c5e1fcbad731bac (diff)
parent    1f4c6eb24029689a40dceae561e31ff6926d7f0d (diff)
download  linux-16c3c91346961fdd733ad232311772795599be7f.tar.xz
Merge branch 'inet-factorize-sk_wmem_alloc-updates'
Eric Dumazet says:

====================
inet: factorize sk_wmem_alloc updates

While testing my inet defrag changes, I found that senders could spend
~20% of cpu cycles in skb_set_owner_w() updating sk->sk_wmem_alloc for
every fragment they cook, competing with TX completion of prior skbs
possibly happening on other cpus.

One solution to this problem is to use alloc_skb() instead of
sock_wmalloc() and manually perform a single sk_wmem_alloc change.

This greatly increases speed for applications sending big UDP datagrams.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
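The cover letter's idea can be sketched roughly as below. This is a minimal
illustration only, not the actual patch: cook_fragments() is a hypothetical
helper, and it shows only the accounting side (skb ownership and destructor
setup are omitted). sock_wmalloc() / skb_set_owner_w() charge
sk->sk_wmem_alloc atomically for every skb, whereas allocating with
alloc_skb() and adding the accumulated truesize once avoids the per-fragment
atomic update.

/*
 * Minimal sketch, NOT the actual patch: allocate fragments with
 * alloc_skb() and charge sk->sk_wmem_alloc once for the whole batch,
 * instead of one atomic update per fragment.
 */
#include <linux/skbuff.h>
#include <net/sock.h>

static struct sk_buff *cook_fragments(struct sock *sk, int nfrags, int len)
{
	struct sk_buff *head = NULL, *skb;
	int wmem_alloc_delta = 0;
	int i;

	for (i = 0; i < nfrags; i++) {
		/* alloc_skb() does not touch sk->sk_wmem_alloc */
		skb = alloc_skb(len, sk->sk_allocation);
		if (!skb)
			break;
		wmem_alloc_delta += skb->truesize;
		skb->next = head;
		head = skb;
	}

	/* one atomic update for the whole batch of fragments */
	if (wmem_alloc_delta)
		refcount_add(wmem_alloc_delta, &sk->sk_wmem_alloc);
	return head;
}

In the merged series the delta is accumulated while the datagram is built and
applied in a single sk_wmem_alloc change when it is finalized, as the cover
letter describes; the sketch only shows why batching the refcount_add()
removes the per-fragment contention observed in skb_set_owner_w().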
Diffstat (limited to 'drivers/infiniband/hw/mlx5')
0 files changed, 0 insertions, 0 deletions