author | Will Deacon <will.deacon@arm.com> | 2019-02-22 20:14:59 +0300
---|---|---
committer | Will Deacon <will.deacon@arm.com> | 2019-04-08 14:01:02 +0300
commit | fb24ea52f78e0d595852e09e3a55697c8f442189 (patch) | |
tree | 00ca29c7b0b8df6258a1ad1faf34f6e838ada26c /drivers/net/ethernet/intel/iavf | |
parent | 949b8c72768e3a7c69d270962b8a142ee8deec1b (diff) | |
download | linux-fb24ea52f78e0d595852e09e3a55697c8f442189.tar.xz | |
drivers: Remove explicit invocations of mmiowb()
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:
@mmiowb@
@@
- mmiowb();
and invoked as:
$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
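To illustrate the kind of change the semantic patch produces, here is a hypothetical driver doorbell path before the conversion; the names (foo_ring_doorbell, foo_dev) are invented for this sketch and do not appear in the patch. The bare mmiowb() after the MMIO write is simply deleted, because the spin_unlock() that follows now provides the required ordering on architectures that need it:

/* Hypothetical example, before the conversion */
static void foo_ring_doorbell(struct foo_dev *dev, u32 val)
{
	spin_lock(&dev->lock);
	writel(val, dev->doorbell);	/* posted MMIO write to the device */
	mmiowb();			/* removed by the semantic patch */
	spin_unlock(&dev->lock);	/* now implies mmiowb() where required */
}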
NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.
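For a driver that really did depend on mmiowb() to order MMIO writes between CPUs without a surrounding spin_unlock(), the note above suggests substituting wmb(). A minimal sketch of such a lock-free path, with invented names (foo_notify_hw, foo_ring, tail_reg):

/* Hypothetical lock-free doorbell update, after this commit */
static void foo_notify_hw(struct foo_ring *ring, u32 tail)
{
	writel(tail, ring->tail_reg);	/* MMIO doorbell write */
	wmb();				/* stands in for the removed mmiowb();
					 * restores the old behaviour everywhere
					 * except some esoteric ia64 systems */
}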
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'drivers/net/ethernet/intel/iavf')
-rw-r--r-- | drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 |
1 file changed, 0 insertions, 5 deletions
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 9b4d7cec2e18..6bfef82e7607 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -2360,11 +2360,6 @@ static inline void iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb,
 	/* notify HW of packet */
 	if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
 		writel(i, tx_ring->tail);
-
-		/* we need this if more than one processor can write to our tail
-		 * at a time, it synchronizes IO on IA64/Altix systems
-		 */
-		mmiowb();
 	}
 
 	return;
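For reference, after this change the notify block in iavf_tx_map() reduces to the following (reconstructed from the hunk above):

	/* notify HW of packet */
	if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
		writel(i, tx_ring->tail);
	}

	return;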