author     Alexander Duyck <alexander.h.duyck@redhat.com>   2015-07-31 01:19:28 +0300
committer  Jeff Kirsher <jeffrey.t.kirsher@intel.com>       2015-09-16 03:05:12 +0300
commit     8ac34f10a5ea4c7b6f57dfd52b0693a2b67d9ac4 (patch)
tree       cf713902aefd1f99d141b6ad113e25651056f2e6 /drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
parent     6b010e9b1f0a406d1d35202a694fa724a559bf77 (diff)
download   linux-8ac34f10a5ea4c7b6f57dfd52b0693a2b67d9ac4.tar.xz
ixgbe: Limit lowest interrupt rate for adaptive interrupt moderation to 12K
This patch updates the lowest limit for adaptive interrupt moderation to
roughly 12K interrupts per second.

The way I came about reaching 12K as the desired interrupt rate is by
testing with UDP flows. Specifically I had a simple test that ran a
netperf UDP_STREAM test at varying sizes. What I found was that as the
packet sizes increased the performance fell steadily behind until we were
only able to receive at ~4Gb/s with a message size of 65507. A bit of
digging found that we were dropping packets for the socket in the network
stack, and looking at things further I found I could solve it either by
increasing the interrupt rate or by increasing rmem_default/rmem_max. What
I found was that when the interrupt coalescing resulted in more data being
processed per interrupt than could be stored in the socket buffer we
started losing packets and the performance dropped.

So I reached 12K based on the following math:

  rmem_default = 212992
  skb->truesize = 2994
  212992 / 2994 = 71.14 packets to fill the buffer
  packet rate at 1514 packet size is 812744pps
  71.14 / 812744 = 87.9us to fill socket buffer

From there it was just a matter of choosing the interrupt rate and
providing a bit of wiggle room, which is why I decided to go with 12K
interrupts per second as that uses a value of 84us.

The data below is based on VM to VM over a direct assigned ixgbe
interface. The test run was:
  netperf -H <ip> -t UDP_STREAM

Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay    Errors Throughput  Util    Demand
bytes   bytes    secs         #       #      10^6bits/sec % SS   us/KB

Before:
212992  65507    60.00        1100662 0      9613.4      10.89   0.557
212992           60.00        473474         4135.4      11.27   0.576

After:
212992  65507    60.00        1100413 0      9611.2      10.73   0.549
212992           60.00        974132         8508.3      11.69   0.598

Using bare metal the data is similar but not as dramatic as the
throughput increases from about 8.5Gb/s to 9.5Gb/s.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
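As a quick cross-check of the arithmetic in the commit message, the small
userspace program below (purely illustrative, not part of the driver)
recomputes the buffer-fill estimate from the values quoted above; the file
name is made up and the constants are taken from the message, nothing comes
from the ixgbe source itself.

/* buffer_fill_estimate.c - standalone illustration of the commit's math.
 * Not driver code; it only recomputes how long line-rate 1514-byte traffic
 * takes to fill the default socket receive buffer.
 */
#include <stdio.h>

int main(void)
{
	double rmem_default = 212992.0;  /* default socket receive buffer, bytes */
	double skb_truesize = 2994.0;    /* per-packet memory charge, bytes */
	double pps_at_1514  = 812744.0;  /* 10GbE packet rate for 1514-byte frames */

	double pkts_to_fill  = rmem_default / skb_truesize;       /* ~71 packets */
	double usecs_to_fill = pkts_to_fill / pps_at_1514 * 1e6;  /* ~88 us */

	printf("packets to fill socket buffer: %.2f\n", pkts_to_fill);
	printf("time to fill socket buffer:    %.1f us\n", usecs_to_fill);
	return 0;
}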
Diffstat (limited to 'drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c')
-rw-r--r--  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 68e1e757ecef..f3168bcc7d87 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -866,7 +866,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
if (txr_count && !rxr_count) {
/* tx only vector */
if (adapter->tx_itr_setting == 1)
- q_vector->itr = IXGBE_10K_ITR;
+ q_vector->itr = IXGBE_12K_ITR;
else
q_vector->itr = adapter->tx_itr_setting;
} else {
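For context on the one-line change above: IXGBE_10K_ITR and IXGBE_12K_ITR
select the default throttle value used when adaptive moderation is enabled
(tx_itr_setting == 1); their exact register encoding is defined elsewhere in
the driver and is not shown in this diff. The sketch below, with the made-up
helper itr_rate_to_usecs(), only illustrates the rate-to-interval relationship
the commit message relies on.

/* itr_interval.c - illustrative only, not driver code.
 * Relates an interrupt-rate cap to the interval between interrupts, which is
 * what the ITR setting ultimately controls.  At 10K interrupts/s the 100us
 * gap exceeds the ~88us it takes to fill the default socket buffer; at 12K
 * the ~83us gap stays under that budget, which is the wiggle room the commit
 * message describes.
 */
#include <stdio.h>

static double itr_rate_to_usecs(double irqs_per_sec)
{
	return 1e6 / irqs_per_sec;  /* time between interrupts at this rate */
}

int main(void)
{
	printf("10K interrupts/s -> %.1f us between interrupts\n",
	       itr_rate_to_usecs(10000.0));
	printf("12K interrupts/s -> %.1f us between interrupts\n",
	       itr_rate_to_usecs(12000.0));
	return 0;
}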