path: root/drivers/s390/net/qeth_ethtool.c
2020-03-25  s390/qeth: add TX IRQ coalescing support for IQD devices  (Julian Wiedmann; 1 file changed, +75/-0)

Since IQD devices complete (most of) their transmissions synchronously, they don't offer TX completion IRQs and have no HW coalescing controls. But we can fake the easy parts in SW, and give the user some control over how often the TX NAPI code should be triggered to process the TX completions.

Having per-queue controls can in particular help the dedicated mcast queue, as it likely benefits from different fine-tuning than what the ucast queues need.

CC: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

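As an illustrative sketch of such software coalescing (not the actual qeth code; the struct, field and helper names below are made up), the xmit path could defer TX completion processing until either a frame threshold or a time threshold is reached:

    #include <linux/jiffies.h>
    #include <linux/netdevice.h>
    #include <linux/timer.h>

    /* Hypothetical per-queue coalescing state, roughly mirroring the
     * ethtool tx-frames / tx-usecs knobs described above.
     */
    struct my_txq {
        struct napi_struct napi;
        struct timer_list timer;    /* fallback completion trigger */
        unsigned int bufs_pending;  /* TX buffers not yet completed */
        unsigned int coal_frames;   /* ethtool tx-frames */
        unsigned int coal_usecs;    /* ethtool tx-usecs */
    };

    /* Called after a TX buffer has been handed to HW. */
    static void my_txq_coalesce(struct my_txq *queue)
    {
        if (++queue->bufs_pending >= queue->coal_frames)
            napi_schedule(&queue->napi);    /* reap completions now */
        else
            mod_timer(&queue->timer,
                      jiffies + usecs_to_jiffies(queue->coal_usecs));
    }
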
2020-03-25  s390/qeth: collect more TX statistics  (Julian Wiedmann; 1 file changed, +1/-0)

Count the number of TX doorbells we issue to the qdio layer.

Also count the number of actual frames in a TX buffer, and then use this data along with the byte count during TX completion. We'll make additional use of the frame count in a subsequent patch.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

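For illustration only (hypothetical struct and function names, not qeth's counters), the two counting points described above could look like:

    #include <linux/types.h>

    struct my_txq_stats {
        u64 doorbells;   /* how often we kicked the qdio layer */
        u64 tx_frames;   /* frames counted during TX completion */
        u64 tx_bytes;
    };

    /* One doorbell may flush several bulked buffers at once. */
    static void my_count_doorbell(struct my_txq_stats *stats)
    {
        stats->doorbells++;
    }

    /* The frame count was recorded when the buffer was filled. */
    static void my_count_completion(struct my_txq_stats *stats,
                                    unsigned int frames, unsigned int bytes)
    {
        stats->tx_frames += frames;
        stats->tx_bytes += bytes;
    }
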
2020-03-19  s390/qeth: don't report hard-coded driver version  (Julian Wiedmann; 1 file changed, +0/-1)

Versions are meaningless for an in-kernel driver. Instead use the UTS_RELEASE that is set by ethtool_get_drvinfo().

Cc: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

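A minimal sketch of what such a callback reduces to (the driver name is a placeholder and this is not the actual qeth code): leave the version field alone and let the ethtool core report UTS_RELEASE.

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    static void my_get_drvinfo(struct net_device *dev,
                               struct ethtool_drvinfo *info)
    {
        /* "my_drv" is a placeholder driver name. */
        strscpy(info->driver, "my_drv", sizeof(info->driver));
        /* info->version is left untouched: ethtool_get_drvinfo()
         * pre-fills it with UTS_RELEASE.
         */
    }
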
2020-03-19  s390/qeth: add SW timestamping support for IQD devices  (Julian Wiedmann; 1 file changed, +12/-0)

This adds support for SOF_TIMESTAMPING_TX_SOFTWARE. No support for non-IQD devices, since they orphan the skb in their xmit path.

To play nice with TX bulking, set the timestamp when the buffer that contains the skb(s) is actually flushed out to HW.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

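A sketch of the deferred-timestamp idea, assuming a hypothetical buffer structure that carries the bulked skbs (not qeth's actual layout):

    #include <linux/skbuff.h>

    struct my_buf {
        struct sk_buff_head skb_list;   /* skbs mapped into this buffer */
    };

    /* Take the SW TX timestamp only when the buffer is flushed to HW,
     * so bulked skbs are stamped at the time they actually leave.
     */
    static void my_flush_buffer(struct my_buf *buf)
    {
        struct sk_buff *skb;

        skb_queue_walk(&buf->skb_list, skb)
            skb_tx_timestamp(skb);      /* no-op unless requested */

        /* ... issue the TX doorbell for this buffer here ... */
    }
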
2020-03-19  s390/qeth: balance the TX queue selection for IQD devices  (Julian Wiedmann; 1 file changed, +1/-1)

For ucast traffic, qeth_iqd_select_queue() falls back to netdev_pick_tx(). This will potentially use skb_tx_hash() to distribute the flow over all active TX queues - so txq 0 is a valid selection, and qeth_iqd_select_queue() needs to check for this and put it on some other queue. As a result, the distribution for ucast flows is unbalanced and hits QETH_IQD_MIN_UCAST_TXQ heavier than the other queues.

Open-coding a custom variant of skb_tx_hash() isn't an option, since netdev_pick_tx() also gives us e.g. access to XPS. But we can pull a little trick: add a single TC class that excludes the mcast txq, and thus encourage skb_tx_hash() to not pick the mcast txq.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

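The TC trick can be sketched with the generic netdev helpers. This is illustrative only, not the actual qeth change; it assumes (as the entries further down in this log describe) that netdev_queue 0 is the mcast txq.

    #include <linux/netdevice.h>

    /* One traffic class that spans only the ucast txqs, so that
     * skb_tx_hash() inside netdev_pick_tx() never lands on the mcast txq.
     */
    static int my_exclude_mcast_txq(struct net_device *dev)
    {
        int rc;

        rc = netdev_set_num_tc(dev, 1);
        if (rc)
            return rc;

        /* TC 0 covers txqs [1, real_num_tx_queues), leaving out txq 0. */
        return netdev_set_tc_queue(dev, 0, dev->real_num_tx_queues - 1, 1);
    }
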
2020-03-19  s390/qeth: allow configuration of TX queues for IQD devices  (Julian Wiedmann; 1 file changed, +16/-3)

Similar to the support for z/VM NICs, but we need to take extra care about the dedicated mcast queue:

1. netdev_pick_tx() is unaware of this limitation and might select the mcast txq. Catch this.
2. Require at least _two_ TX queues - one for ucast, one for mcast.
3. When reducing the number of TX queues, there's a potential race where netdev_cap_txqueue() overrules the selected txq index and falls back to index 0. This would place ucast traffic on the mcast queue, and result in TX errors. So for IQD, reject a reduction while the interface is running.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

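Checks (2) and (3) from the list above belong in the set_channels() handler (check (1) lives in the queue-selection path). A sketch, with illustrative error codes and a made-up minimum-queue define, not the actual qeth code:

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    #define MY_IQD_MIN_TXQ  2   /* one ucast + one dedicated mcast queue */

    static int my_iqd_set_channels(struct net_device *dev,
                                   struct ethtool_channels *ch)
    {
        /* (2) need at least a ucast and a mcast queue */
        if (ch->tx_count < MY_IQD_MIN_TXQ ||
            ch->tx_count > dev->num_tx_queues)
            return -EINVAL;

        /* (3) don't shrink while xmit could still pick a stale txq index */
        if (netif_running(dev) && ch->tx_count < dev->real_num_tx_queues)
            return -EPERM;

        return netif_set_real_num_tx_queues(dev, ch->tx_count);
    }
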
2020-03-19  s390/qeth: allow configuration of TX queues for z/VM NICs  (Julian Wiedmann; 1 file changed, +17/-0)

Add support for ETHTOOL_SCHANNELS to change the count of active TX queues. Since all TX queue structs are pre-allocated and -registered, we just need to trivially adjust dev->real_num_tx_queues.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

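Since the txq structs already exist, the channels hooks reduce to bookkeeping. A minimal sketch (not the actual qeth callbacks):

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static void my_get_channels(struct net_device *dev,
                                struct ethtool_channels *ch)
    {
        ch->max_tx = dev->num_tx_queues;          /* pre-registered queues */
        ch->tx_count = dev->real_num_tx_queues;   /* currently active */
    }

    static int my_set_channels(struct net_device *dev,
                               struct ethtool_channels *ch)
    {
        if (!ch->tx_count || ch->tx_count > dev->num_tx_queues)
            return -EINVAL;

        return netif_set_real_num_tx_queues(dev, ch->tx_count);
    }
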
2020-02-27  s390/qeth: support configurable RX copybreak  (Julian Wiedmann; 1 file changed, +31/-0)

Implement the ethtool hooks for the ETHTOOL_RX_COPYBREAK tunable. The copybreak is stored in netdev_priv, so that we automatically go back to the default value if the netdev is re-allocated.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

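A sketch of such tunable hooks, where struct my_priv stands in for whatever the driver keeps in netdev_priv() (not the actual qeth code):

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    struct my_priv {
        u32 rx_copybreak;   /* reset to the default on re-allocation */
    };

    static int my_get_tunable(struct net_device *dev,
                              const struct ethtool_tunable *tuna, void *data)
    {
        struct my_priv *priv = netdev_priv(dev);

        if (tuna->id != ETHTOOL_RX_COPYBREAK)
            return -EOPNOTSUPP;

        *(u32 *)data = priv->rx_copybreak;
        return 0;
    }

    static int my_set_tunable(struct net_device *dev,
                              const struct ethtool_tunable *tuna,
                              const void *data)
    {
        struct my_priv *priv = netdev_priv(dev);

        if (tuna->id != ETHTOOL_RX_COPYBREAK)
            return -EOPNOTSUPP;

        priv->rx_copybreak = *(const u32 *)data;
        return 0;
    }
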
2019-12-05  s390/qeth: guard against runt packets  (Julian Wiedmann; 1 file changed, +1/-0)

Depending on a packet's type, the RX path needs to access fields in the packet headers and thus requires a minimum packet length. Enforce this length when building the skb.

On the other hand a single runt packet is no reason to drop the whole RX buffer. So just skip it, and continue processing on the next packet.

Fixes: 4a71df50047f ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

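The skip-don't-drop idea in a nutshell (the minimum length and the counter are illustrative, not qeth's):

    #include <linux/if_ether.h>
    #include <linux/types.h>

    #define MY_RX_MIN_LEN   ETH_HLEN    /* smallest header RX must parse */

    /* Returns false for a runt packet; the caller skips just this packet
     * and carries on with the rest of the RX buffer.
     */
    static bool my_rx_length_ok(unsigned int len, u64 *length_errors)
    {
        if (len < MY_RX_MIN_LEN) {
            (*length_errors)++;
            return false;
        }
        return true;
    }
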
2019-11-15  s390/qeth: gather more detailed RX dropped/error statistics  (Julian Wiedmann; 1 file changed, +2/-0)

Where available, use the fine-grained counters in rtnl_link_stats64 to indicate different RX error causes. For drop reasons, use driver-private ethtool counters.

In particular this patch allows us to keep track of driver-side drops due to unknown/unsupported HW descriptor format.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

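A sketch of how such counters can be split between the standard stats64 fields and driver-private ethtool stats (field names and layout are hypothetical):

    #include <linux/netdevice.h>

    struct my_rx_stats {
        u64 rx_length_errors;   /* maps to rtnl_link_stats64 */
        u64 rx_frame_errors;    /* maps to rtnl_link_stats64 */
        u64 rx_dropped_notsupp; /* unknown HW descriptor format,
                                 * reported via ethtool -S only */
    };

    static void my_get_stats64(struct net_device *dev,
                               struct rtnl_link_stats64 *stats)
    {
        struct my_rx_stats *rx = netdev_priv(dev);

        stats->rx_length_errors = rx->rx_length_errors;
        stats->rx_frame_errors = rx->rx_frame_errors;
        /* rx_dropped_notsupp only shows up in get_ethtool_stats() */
    }
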
2019-08-25  s390/qeth: add TX NAPI support for IQD devices  (Julian Wiedmann; 1 file changed, +2/-0)

Due to their large MTU and potentially low utilization of TX buffers, IQD devices in particular require fast TX recycling. This makes them a prime candidate for a TX NAPI path in qeth.

qeth_tx_poll() uses the recently introduced qdio_inspect_queue() helper to poll the TX queue for completed buffers. To avoid hogging the CPU for too long, we yield to the stack after completing an entire queue's worth of buffers.

While IQD is expected to transfer its buffers synchronously (and thus doesn't support TX interrupts), a timer covers for the odd case where a TX buffer doesn't complete synchronously. Currently this timer should only ever fire for
(1) the mcast queue,
(2) the occasional race, where the NAPI poll code observes an update to queue->used_buffers while the TX doorbell hasn't been issued yet.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

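A rough sketch of such a TX-completion poll routine; the queue struct and the completion helper below are hypothetical stand-ins for the qdio_inspect_queue()-based code, not the real qeth_tx_poll():

    #include <linux/netdevice.h>

    #define MY_BUFS_PER_QUEUE 128   /* illustrative queue depth */

    struct my_txq {
        struct napi_struct napi;
        /* ... buffer bookkeeping ... */
    };

    /* Hypothetical: reaps one completed TX buffer, false if none is ready. */
    static bool my_txq_reap_one_buffer(struct my_txq *queue);

    static int my_tx_poll(struct napi_struct *napi, int budget)
    {
        struct my_txq *queue = container_of(napi, struct my_txq, napi);
        unsigned int work_done = 0;

        /* Reap completions, but yield after one queue's worth of buffers. */
        while (work_done < MY_BUFS_PER_QUEUE &&
               my_txq_reap_one_buffer(queue))
            work_done++;

        if (work_done < MY_BUFS_PER_QUEUE && napi_complete_done(napi, 0))
            return 0;       /* queue drained, stop polling */

        return budget;      /* stay scheduled and poll again */
    }
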
2019-04-17  s390/qeth: stop/wake TX queues based on their fill level  (Julian Wiedmann; 1 file changed, +1/-0)

Current xmit code only stops the txq after attempting to fill an IO buffer that hasn't been TX-completed yet. In many-connection scenarios, this can result in frequent rejected TX attempts, requeuing of skbs with NETDEV_TX_BUSY and extra overhead.

Now that we have a proper 1-to-1 relation between stack-side txqs and our HW Queues, overhaul the stop/wake logic so that the xmit code stops the txq as needed. Given that we might map multiple skbs into a single buffer, it's crucial to ensure that the queue always provides an _entirely_ empty IO buffer. Otherwise large skbs (e.g. TSO) might not fit into the last available buffer. So whenever qeth_do_send_packet() first utilizes an _empty_ buffer, it updates & checks the used_buffers count.

This now ensures that an skb passed to qeth_xmit() can always be mapped into an IO buffer, so remove all of the -EBUSY roll-back handling in the TX path. We preserve the minimal safety-checks ("Is this IO buffer really available?"), just in case some nasty future bug ever attempts to corrupt an in-use buffer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

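The fill-level accounting can be sketched like this (the fields and helpers are hypothetical; the real used_buffers handling in qeth_do_send_packet() is more involved):

    #include <linux/atomic.h>
    #include <linux/netdevice.h>

    struct my_txq {
        struct netdev_queue *txq;
        unsigned int max_buffers;   /* IO buffers in this HW queue */
        atomic_t used_buffers;
    };

    /* Called when xmit starts filling a previously empty IO buffer. */
    static void my_txq_on_first_use(struct my_txq *q)
    {
        if (atomic_inc_return(&q->used_buffers) >= q->max_buffers)
            netif_tx_stop_queue(q->txq);    /* no empty buffer left */
    }

    /* Called from TX completion once 'count' buffers became empty again. */
    static void my_txq_on_completion(struct my_txq *q, unsigned int count)
    {
        if (atomic_sub_return(count, &q->used_buffers) < q->max_buffers &&
            netif_tx_queue_stopped(q->txq))
            netif_tx_wake_queue(q->txq);
    }
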
2019-04-17  s390/qeth: add TX multiqueue support for IQD devices  (Julian Wiedmann; 1 file changed, +16/-0)

qeth has been supporting multiple HW Output Queues for a long time. But rather than exposing those queues to the stack, it uses its own queue selection logic in .ndo_start_xmit... with all the drawbacks that entails.

Start off by switching IQD devices over to a proper mqs net_device, and converting all the netdev_queue management code.

One oddity with IQD devices is the requirement to place all mcast traffic on the _highest_ established HW queue. Doing so via .ndo_select_queue seems straight-forward - but that won't work if only some of the HW queues are active (i.e. when dev->real_num_tx_queues < dev->num_tx_queues), since netdev_cap_txqueue() will not allow us to put skbs on the higher queues. To make this work, we
1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0, and
2. later re-map the netdev_queue and HW queue indices in .ndo_start_xmit and the TX completion handler.

With this patch we default to a fixed set of 1 ucast and 1 mcast queue. Support for dynamic reconfiguration is added at a later time.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

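A sketch of the two-step mapping described above. The queue numbering follows the commit message (mcast on netdev_queue 0), but the helper names and the exact HW-queue remapping are made up, not qeth's:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    #define MY_MCAST_TXQ    0   /* stack-side txq reserved for mcast */

    static bool my_skb_is_mcast(struct sk_buff *skb);   /* hypothetical */

    /* Step 1: keep mcast on txq 0 so netdev_cap_txqueue() can't reject it. */
    static u16 my_iqd_select_queue(struct net_device *dev, struct sk_buff *skb,
                                   struct net_device *sb_dev)
    {
        if (my_skb_is_mcast(skb))
            return MY_MCAST_TXQ;

        return netdev_pick_tx(dev, skb, sb_dev);
    }

    /* Step 2: in xmit/completion, swap txq 0 with the highest HW queue. */
    static unsigned int my_txq_to_hw_queue(struct net_device *dev, u16 txq)
    {
        if (txq == MY_MCAST_TXQ)
            return dev->real_num_tx_queues - 1;
        if (txq == dev->real_num_tx_queues - 1)
            return MY_MCAST_TXQ;
        return txq;
    }
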
2019-02-16  s390/qeth: add support for ETHTOOL_GRINGPARAM  (Julian Wiedmann; 1 file changed, +17/-0)

Implement a trivial callback that exposes the queue sizes.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

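A sketch of such a trivial callback, using the two-argument .get_ringparam form of this era (newer kernels pass two additional parameters); the queue-depth constants are illustrative, not qeth's:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    #define MY_RX_BUFFERS   128     /* illustrative, fixed queue depths */
    #define MY_TX_BUFFERS   128

    static void my_get_ringparam(struct net_device *dev,
                                 struct ethtool_ringparam *rp)
    {
        rp->rx_max_pending = MY_RX_BUFFERS;
        rp->rx_pending = MY_RX_BUFFERS;
        rp->tx_max_pending = MY_TX_BUFFERS;
        rp->tx_pending = MY_TX_BUFFERS;
    }
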
2019-02-16  s390/qeth: overhaul ethtool statistics  (Julian Wiedmann; 1 file changed, +86/-66)

Accumulate per-TX queue statistics, and increase their size to 64 bit. Don't bother with enabling/disabling the statistics, the overhead is negligible.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

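Per-queue 64-bit counters reported through ethtool -S can be sketched as follows (the struct layout and the my_get_txq_stats() accessor are made up, not qeth's):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    struct my_txq_stats {
        u64 tx_packets;
        u64 tx_bytes;
    };

    /* Hypothetical accessor for the per-queue counters. */
    static struct my_txq_stats *my_get_txq_stats(struct net_device *dev,
                                                 unsigned int queue);

    static void my_get_ethtool_stats(struct net_device *dev,
                                     struct ethtool_stats *stats, u64 *data)
    {
        unsigned int i;

        /* one fixed-size block of counters per active TX queue */
        for (i = 0; i < dev->real_num_tx_queues; i++) {
            struct my_txq_stats *s = my_get_txq_stats(dev, i);

            *data++ = s->tx_packets;
            *data++ = s->tx_bytes;
        }
    }
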
2019-02-16  s390/qeth: move ethtool code into its own file  (Julian Wiedmann; 1 file changed, +333/-0)

Most of this is self-contained code.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>