path: root/drivers/net/bonding
Age  Commit message  [Author, files changed, lines -/+]
2013-09-27  bonding: remove bond_prev_slave()  [Veaceslav Falico, 2 files, -10/+3]
We don't really need it, and it's really hard to RCUify the list->prev. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: convert first/last slave logic to use neighbours  [Veaceslav Falico, 1 file, -6/+11]
For that, use netdev_adjacent_get_private(list_head) on the bond's lower neighbour list members. Also, add a small macro, bond_slave_list(bond), which returns the bond's slave list via the neighbour list. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
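A rough sketch of the idea (not the literal patch; the exact macro bodies may differ): the first/last slave lookups go through the lower neighbour list and netdev_adjacent_get_private():

    #define bond_slave_list(bond) (&(bond)->dev->adj_list.lower)

    #define bond_first_slave(bond) \
            (bond_has_slaves(bond) ? \
                    netdev_adjacent_get_private(bond_slave_list(bond)->next) : \
                    NULL)

    #define bond_last_slave(bond) \
            (bond_has_slaves(bond) ? \
                    netdev_adjacent_get_private(bond_slave_list(bond)->prev) : \
                    NULL)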
2013-09-27  bonding: convert bond_has_slaves() to use the neighbour list  [Veaceslav Falico, 1 file, -1/+1]
The same way as it was used for its own slave_list. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: add bond_has_slaves() and use it  [Veaceslav Falico, 5 files, -23/+25]
Currently we verify whether we have slaves by checking if bond->slave_list is empty. Create a define, bond_has_slaves(), and use it; it's a bit more readable and easier to change in the future. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
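A minimal sketch of the helper (at this point still based on the bond's own slave_list; the neighbour-list conversion above changes the body later):

    #define bond_has_slaves(bond) (!list_empty(&(bond)->slave_list))

    /* callers then read naturally: */
    if (!bond_has_slaves(bond))
            return;         /* nothing to do without slaves */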
2013-09-27  bonding: remove unused bond_for_each_slave_from()  [Veaceslav Falico, 1 file, -13/+0]
It has no users, so we can remove it. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: rework bond_ab_arp_probe() to use bond_for_each_slave()  [Veaceslav Falico, 1 file, -14/+24]
Currently it uses the hard-to-RCUify bond_for_each_slave_from(), and it also doesn't check every slave for discrepancies between the actual IS_UP(slave) and slave->link == BOND_LINK_UP, but only until we find the next suitable slave. Fix this by using bond_for_each_slave(): store the first good slave in *before until we find the current_arp_slave, and after that store the first good slave in new_slave. If new_slave is empty, use the slave stored in before; if that is also empty, then we didn't find any suitable slave. Also, check each slave's status along the way. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: rework bond_find_best_slave() to use bond_for_each_slave()  [Veaceslav Falico, 1 file, -31/+12]
bond_find_best_slave() does not have to be balanced - i.e. return the slave that is *after* some other slave - but rather return the best suitable slave, except for bond->primary_slave, which we just return if it's suitable. After that we simply look through all the slaves and return either the first up slave or the slave whose link came back earliest. We also don't care about the curr_active_slave lock because we use it only in bond_should_change_active(), and there we take it right away - i.e. it won't go away. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: rework rlb_next_rx_slave() to use bond_for_each_slave()  [Veaceslav Falico, 2 files, -23/+22]
Currently, we're using bond_for_each_slave_from(), which is really hard to implement under RCU and/or neighbour list. Remove it and use bond_for_each_slave() instead, taking care of the last used slave. Also, rename next_rx_slave to rx_slave and store the current (last) rx_slave. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: rework bond_3ad_xmit_xor() to use bond_for_each_slave() only  [Veaceslav Falico, 1 file, -24/+22]
Currently there are two loops: first we find the first slave in an aggregator after the number returned by xmit_hash_policy(), and then we loop from that slave, over the bonding head, and back to that slave to find any suitable slave to send the packet through. Replace this with just one bond_for_each_slave() loop, which first loops through the requested number of slaves, saving the first suitable one, and once we've hit the requested number of slaves to skip, searches for any up slave to send the packet through. If we don't find such a slave, just send the packet through the first suitable slave found. The logic remains unchanged, and we no longer need two loops. Also, refactor it a bit for readability. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: use bond_for_each_slave() in bond_uninit()  [Veaceslav Falico, 1 file, -2/+3]
We're safe against removal there because we use the neighbour primitives. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: make bond_for_each_slave() use lower neighbour's private  [Veaceslav Falico, 6 files, -54/+96]
It needs a list_head *iter, so add one wherever needed. Use both the non-RCU and RCU variants. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
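Caller-side, the conversion looks roughly like this (sketch):

    struct list_head *iter;
    struct slave *slave;

    /* non-RCU variant, under RTNL or bond->lock */
    bond_for_each_slave(bond, slave, iter) {
            /* slave is the lower neighbour's private, i.e. struct slave * */
    }

    /* RCU variant, under rcu_read_lock() */
    bond_for_each_slave_rcu(bond, slave, iter) {
            /* same, but safe against concurrent removal */
    }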
2013-09-27  bonding: remove bond_for_each_slave_continue_reverse()  [Veaceslav Falico, 3 files, -28/+30]
We only use it in rollback scenarios and can easily use the standard bond_for_each_dev() instead. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: modify bond_get_slave_by_dev() to use neighbours  [Veaceslav Falico, 1 file, -7/+1]
It should be used under rtnl/bonding lock, so use the non-RCU version. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  bonding: populate neighbour's private on enslave  [Veaceslav Falico, 1 file, -14/+16]
Use the newly provided function when attaching the lower slave to populate its ->private with struct slave *new_slave. Also, move it to the end so that it can be 'found' only after it has been completely initialized, and deinitialize it first on release. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-27  net: add adj_list to save only neighbours  [Veaceslav Falico, 2 files, -5/+7]
Currently, we distinguish neighbours (first-level linked devices) from non-neighbours by the neighbour bool in netdev_adjacent. This could be quite time-consuming in case we would like to traverse *only* through neighbours, because we'd have to traverse through all devices and check for this flag; in a (quite common) scenario where we have lots of vlans on top of a bridge, which is on top of a bond, the bonding would have to go through all those vlans to get its upper neighbour linked devices. This situation is really unpleasant, because there are already a lot of cases where a device with slaves needs to go through them in the hot path. To fix this, introduce a new upper/lower device lists structure, adj_list, which contains only the neighbours. It always works in pair with the all_adj_list structure (renamed from upper/lower_dev_list), i.e. both of them contain the same links, except that all_adj_list also contains non-neighbour device links. It's really a small change, currently visible only in __netdev_adjacent_dev_insert/remove(), and doesn't change the main linking logic at all. Also, add some comments, fix a name collision in netdev_for_each_upper_dev_rcu(), and rework the naming by the following rules: netdev_(all_)(upper|lower)_*. If "all_" is present, then we work with the whole list of upper/lower devices, otherwise only with direct neighbours. Uninline the functions to get better stack traces. CC: "David S. Miller" <davem@davemloft.net> CC: Eric Dumazet <edumazet@google.com> CC: Jiri Pirko <jiri@resnulli.us> CC: Alexander Duyck <alexander.h.duyck@intel.com> CC: Cong Wang <amwang@redhat.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
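In sketch form, struct net_device now carries two pairs of lists (field layout approximate):

    struct {
            struct list_head upper;     /* direct upper neighbours only */
            struct list_head lower;     /* direct lower neighbours only */
    } adj_list;

    struct {
            struct list_head upper;     /* all upper devices, neighbours or not */
            struct list_head lower;     /* all lower devices, neighbours or not */
    } all_adj_list;                     /* renamed from upper/lower_dev_list */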
2013-09-16  bonding: Make alb learning packet interval configurable  [Neil Horman, 5 files, -5/+47]
Running bonding in ALB mode requires that learning packets be sent periodically, so that the switch knows where to send responding traffic. However, depending on switch configuration, there may not be any need to send traffic at the default rate of 3 packets per second, which represents little more than wasted data. Allow the ALB learning packet interval to be configured via sysfs. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-11  bonding: fix bond_arp_rcv setting and arp validate desync state  [nikolay@redhat.com, 3 files, -5/+19]
Make bond_arp_rcv global so it can be used in bond_sysfs: it is set as recv_probe if the bond interface is up and arp_interval is being changed to a positive value, and cleared otherwise, as per Jay's suggestion. This also fixes a problem where bond_arp_rcv stayed set even though arp_validate was disabled while the bond was up, by unsetting recv_probe in bond_store_arp_validate and, respectively, setting it when enabled. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-11  bonding: fix store_arp_validate race with mode change  [nikolay@redhat.com, 1 file, -4/+10]
We need to protect store_arp_validate via rtnl because it can race with mode changing and we can end up having arp_validate set in a mode different from active-backup. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: drop read_lock in bond_compute_features  [nikolay@redhat.com, 1 file, -7/+3]
bond_compute_features is always called with RTNL held, so we can safely drop the read lock on bond->lock. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: drop read_lock in bond_fix_features  [nikolay@redhat.com, 1 file, -7/+3]
We're protected by RTNL, so nothing can happen and we can safely drop the read lock on bond->lock. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: simplify bond_3ad_update_lacp_rate and use RTNL for sync  [nikolay@redhat.com, 2 files, -8/+7]
We can drop the use of bond->lock for mutual exclusion in bond_3ad_update_lacp_rate and use RTNL in the sysfs store function instead. This way we'll prevent races with mode change and interface up/down as well as simplify update_lacp_rate by removing the check for port->slave because it'll always be initialized (done while enslaving with RTNL). This change will also help in the future removal of reader bond->lock from bond_enslave. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: trivial: remove outdated comment and braces  [nikolay@redhat.com, 1 file, -5/+1]
We don't have to release all slaves when closing the bond dev, so remove the outdated comment and the braces around the left single statement. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: simplify and fix peer notification  [nikolay@redhat.com, 1 file, -15/+7]
This patch aims to remove a use of the bond->lock for mutual exclusion which will later allow easier migration to RCU of the users of this functionality. We use RTNL as a synchronizing mechanism since it's always held when send_peer_notif is set, and when it is decremented from the notifier function. We can also drop some locking, and fix the leakage of the send_peer_notif counter. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: use rlb_client_info->vlan_id instead of ->tag  [Veaceslav Falico, 2 files, -5/+4]
Store VID in ->vlan_id (if any), and remove the useless ->tag. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-04  bonding: remove bond_vlan_used()  [Veaceslav Falico, 2 files, -22/+2]
We currently use it to verify whether we have vlans before getting the tag from the skb we're about to send. It's useless because vlan_get_tag() verifies whether the skb has the tag (and returns an error if not), and we can receive tagged skbs only if we *already* have vlans. Plus, the current RCUed implementation is kind of useless anyway - the last vlan can be removed the moment we return from the function. So remove the only usage of it and the whole function. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: pr_debug instead of pr_warn in bond_arp_send_all  [Veaceslav Falico, 1 file, -8/+6]
They're simply annoying and will spam dmesg constantly if we hit them, so convert them to pr_debug so that we can still access them when debugging. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: remove vlan_list/current_alb_vlan  [Veaceslav Falico, 4 files, -143/+2]
Currently there are no real users of vlan_list/current_alb_vlan, only the helpers which maintain them, so remove them. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: make alb_send_learning_packets() use upper dev list  [Veaceslav Falico, 2 files, -19/+11]
Currently, if there are vlans on top of the bond, alb_send_learning_packets() will never send LPs from the bond itself (i.e. untagged), which might leave untagged clients not updated. Also, the 'circular vlan' logic (i.e. update only MAX_LP_BURST vlans at a time and save the last vlan for the next update) is really suboptimal - with lots of vlans it will take a long time to update every vlan. The function is also never called in any hot path and sends only a few small packets - thus the optimization by itself is useless. So remove the whole current_alb_vlan/MAX_LP_BURST logic from alb_send_learning_packets(). Instead, we'll first send a packet untagged and then traverse the upper dev list, sending a tagged packet for each vlan found. Also, remove the MAX_LP_BURST define - we no longer need it. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
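The resulting flow is roughly the following (sketch; alb_send_lp_vid() is the helper split out in the patch below, and its exact signature may differ):

    struct net_device *upper;
    struct list_head *iter;

    /* send one untagged learning packet from the bond itself */
    alb_send_lp_vid(slave, mac_addr, 0);

    /* then one tagged packet per vlan found among the upper devices */
    rcu_read_lock();
    netdev_for_each_upper_dev_rcu(bond->dev, upper, iter) {
            if (is_vlan_dev(upper))
                    alb_send_lp_vid(slave, mac_addr, vlan_dev_vlan_id(upper));
    }
    rcu_read_unlock();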
2013-08-30  bonding: split alb_send_learning_packets()  [Veaceslav Falico, 1 file, -24/+35]
Create alb_send_lp_vid(), which will handle the skb/lp creation, vlan tagging and sending, and use it in alb_send_learning_packets(). This way all the logic remains in alb_send_learning_packets(), which becomes a lot cleaner and easier to understand. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: use vlan_uses_dev() in __bond_release_one()  [Veaceslav Falico, 1 file, -1/+1]
We always hold the rtnl_lock() in __bond_release_one(), so use vlan_uses_dev() instead of bond_vlan_used(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: convert bond_has_this_ip() to use upper devices  [Veaceslav Falico, 1 file, -12/+13]
Currently, bond_has_this_ip() is aware only of vlan upper devices, and thus will return false if the address is associated with an upper bridge or any other device, which breaks the ARP logic. Fix this by using the upper device list: for every upper device we verify whether the address associated with it is our address, and if so, return true. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: make bond_arp_send_all use upper device list  [Veaceslav Falico, 1 file, -51/+51]
Currently, bond_arp_send_all() is aware only of vlans, which breaks configurations like bond <- bridge (or any other 'upper' device) with IP (quite a common scenario for virt setups). To fix this, convert bond_arp_send_all() to first verify whether the rt device is the bond itself; if not, go through its list of upper vlans and their respective upper devices (if the vlan's upper device matches, tag the packet), and if still not found, go through all of our upper list devices to see if any of them match the route device for the target. If the match is a vlan device, we also save its vlan_id and tag it in bond_arp_send(). Also, clean the function up a bit to be more readable. CC: Vlad Yasevich <vyasevic@redhat.com> CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-30  bonding: use netdev_upper list in bond_vlan_used  [Veaceslav Falico, 1 file, -1/+14]
Convert bond_vlan_used() to traverse the upper device list to see if we have any vlans above us. It's protected by rcu, and in case we are holding rtnl_lock we should call vlan_uses_dev() instead - it's faster. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
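Roughly, the described check looks like this (sketch, not the literal patch):

    static bool bond_vlan_used(struct bonding *bond)
    {
            struct net_device *upper;
            struct list_head *iter;

            rcu_read_lock();
            netdev_for_each_upper_dev_rcu(bond->dev, upper, iter) {
                    if (is_vlan_dev(upper)) {
                            rcu_read_unlock();
                            return true;
                    }
            }
            rcu_read_unlock();

            return false;
    }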
2013-08-26  bonding: fix error return code in bond_enslave()  [Wei Yongjun, 1 file, -1/+2]
Fix to return a negative error code in the add bond vlan ids error handling case instead of 0, as done elsewhere in this function. Introduced by commit 1ff412ad7714f6952f76ffd77f0a7f2f563288a1. (bonding: change the bond's vlan syncing functions with the standard ones) Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Acked-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-09  bonding: unwind on bond_add_vlan failure  [nikolay@redhat.com, 1 file, -2/+2]
Currently, in case of bond_add_vlan() failure we'll have the vlan's refcnt bumped up in all slaves, but it will never go down because the vlan failed to get added to the bond; so properly unwind the added vlan if bond_add_vlan() fails. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-09  bonding: change the bond's vlan syncing functions with the standard ones  [nikolay@redhat.com, 1 file, -30/+7]
Now we have vlan_vids_add/del_by_dev() which serve the same purpose as bond's bond_add/del_vlans_on_slave() with the good side effect of reverting the changes if one of the additions fails. There's only 1 change in the behaviour of enslave: if adding of the vlans to the slave fails, we'll fail the enslaving because otherwise we might delete some vlan that wasn't added by the bonding. The only way this may happen is with ENOMEM currently, so we're in trouble anyway. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-05  bonding: remove locking from bond_set_rx_mode()  [Veaceslav Falico, 1 file, -6/+4]
We're already protected by RTNL lock, so nothing can happen to bond/its slaves, and thus the locking is useless here (both bond->lock and bond->curr_active_slave). Also, add ASSERT_RTNL() both to bond_set_rx_mode() and bond_hw_addr_swap() to catch possible uses of it without RTNL locking. This patch also saves us from a lockdep false-positive in bond_set_rx_mode() vs bond_hw_addr_swap(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-05  bonding: add bond_time_in_interval() and use it for time comparison  [Veaceslav Falico, 1 file, -53/+32]
Currently we use a lot of time-comparison math for arp_interval comparisons, which is sometimes quite hard to read and understand. All the time comparisons follow one pattern: (time - arp_interval_jiffies) <= jiffies <= (time + mod * arp_interval_jiffies + arp_interval_jiffies/2). Introduce a new helper, bond_time_in_interval(), which does the math in one place and thus cleans up the logic. The helper introduces a bit of overhead (by always recalculating the jiffies from arp_interval), however it's really not visible, considering that the functions using it usually run once every arp_interval milliseconds. There are several lines slightly over 80 chars, however breaking them would result in harder-to-read code than a few characters past the 80 mark. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
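A minimal sketch of such a helper, assuming time_in_range() from <linux/jiffies.h> (the in-tree version may differ in detail):

    static bool bond_time_in_interval(struct bonding *bond,
                                      unsigned long last_act, int mod)
    {
            int delta = msecs_to_jiffies(bond->params.arp_interval);

            /* last_act - delta <= jiffies <= last_act + mod * delta + delta/2 */
            return time_in_range(jiffies,
                                 last_act - delta,
                                 last_act + mod * delta + delta / 2);
    }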
2013-08-05  bonding: call slave_last_rx() only once per slave  [Veaceslav Falico, 1 file, -7/+8]
Simple cleanup to avoid calling slave_last_rx() for every time comparison. It won't give any measurable boost, but it looks cleaner and is easier to understand. There are no time-consuming functions in between these calls, so it's safe to call it only once at the beginning. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-03  bonding: modify only neigh_parms owned by us  [Veaceslav Falico, 1 file, -1/+7]
Otherwise, on neighbour creation, bond_neigh_init() will be called with a foreign netdev. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-02  bonding: initial RCU conversion  [nikolay@redhat.com, 5 files, -29/+32]
This patch does the initial bonding conversion to RCU. After it the following modes are protected by RCU alone: roundrobin, active-backup, broadcast and xor. Modes ALB/TLB and 3ad still acquire bond->lock for reading, and will be dealt with later. curr_active_slave needs to be dereferenced via rcu in the converted modes because the only thing protecting the slave after this patch is rcu_read_lock, so we need the proper barrier for weakly ordered archs and to make sure we don't have stale pointer. It's not tagged with __rcu yet because there's still work to be done to remove the curr_slave_lock, so sparse will complain when rcu_assign_pointer and rcu_dereference are used, but the alternative to use rcu_dereference_protected would've created much bigger code churn which is more difficult to test and review. That will be converted in time. 1. Active-backup mode 1.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU in bonding - new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU in bonding 1.2. Bandwidth measurements - old bonding: 16.1 gbps consistently - new bonding: 17.5 gbps consistently 2. Round-robin mode 2.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU in bonding - new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU in bonding 2.2 Bandwidth measurements - old bonding: 8 gbps (variable due to packet reorderings) - new bonding: 10 gbps (variable due to packet reorderings) Of course the latency has improved in all converted modes, and moreover while doing enslave/release (since it doesn't affect tx anymore). Also I've stress tested all modes doing enslave/release in a loop while transmitting traffic. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
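For the converted tx paths the access pattern is roughly the following (sketch; as noted above, curr_active_slave is not yet annotated __rcu):

    /* e.g. active-backup xmit */
    rcu_read_lock();
    slave = rcu_dereference(bond->curr_active_slave);
    if (slave)
            bond_dev_queue_xmit(bond, skb, slave->dev);
    else
            kfree_skb(skb);
    rcu_read_unlock();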
2013-08-02  bonding: factor out slave id tx code and simplify xmit paths  [Nikolay Aleksandrov, 2 files, -67/+61]
I factored out the tx xmit code that relies on the slave id into bond_xmit_slave_id. It is global because it can later also be used in the 3ad mode xmit. Unnecessary, obvious comments are removed. Active-backup mode is simplified because bond_dev_queue_xmit always consumes the skb. bond_xmit_xor becomes one line thanks to bond_xmit_slave_id. bond_for_each_slave_from is not used in bond_xmit_slave_id because later, when RCU is used, we can avoid an important race condition by using the standard rculist routines. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
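The helper boils down to roughly the following (sketch only; the suitability test is spelled out with IS_UP()/BOND_LINK_UP as used elsewhere in this log, and the two-argument bond_for_each_slave() here predates the iter conversion above):

    /* Transmit on the first usable slave at or after position slave_id,
     * wrapping around to the start of the slave list if needed. */
    static void bond_xmit_slave_id(struct bonding *bond, struct sk_buff *skb,
                                   int slave_id)
    {
            struct slave *slave;
            int i = slave_id;

            /* skip slave_id slaves, then take the first one that can tx */
            bond_for_each_slave(bond, slave) {
                    if (--i < 0) {
                            if (IS_UP(slave->dev) && slave->link == BOND_LINK_UP) {
                                    bond_dev_queue_xmit(bond, skb, slave->dev);
                                    return;
                            }
                    }
            }

            /* wrap around: try the slaves before slave_id */
            i = slave_id;
            bond_for_each_slave(bond, slave) {
                    if (--i < 0)
                            break;
                    if (IS_UP(slave->dev) && slave->link == BOND_LINK_UP) {
                            bond_dev_queue_xmit(bond, skb, slave->dev);
                            return;
                    }
            }

            /* no usable slave found */
            kfree_skb(skb);
    }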
2013-08-02  bonding: simplify broadcast_xmit function  [Nikolay Aleksandrov, 1 file, -36/+16]
We don't need to start from the curr_active_slave as the frame will be sent to all eligible slaves anyway, so we remove the unnecessary local variables, checks and comments, and make it use the standard list API. This has the nice side-effect that later when it's converted to RCU a race condition will be avoided which could lead to double packet tx. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-02  bonding: remove unnecessary read_locks of curr_slave_lock  [nikolay@redhat.com, 1 file, -25/+8]
In all these cases we already hold bond->lock for reading, so the slave can't go away and the != NULL check is sufficient. curr_active_slave can still change after the read_lock is unlocked and before the dereferenced value is used, so the lock gains us nothing: it either contains a valid slave which we use (and which can't go away), or it is NULL, which is checked. In some places the read_lock of curr_slave_lock was left because we need the active slave not to change while performing some action (e.g. syncing the current active slave's addresses, sending ARP requests through the active slave); such cases will be dealt with individually while converting to RCU. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-02  bonding: convert to list API and replace bond's custom list  [nikolay@redhat.com, 6 files, -236/+203]
This patch aims to remove struct bonding's first_slave and struct slave's next and prev pointers, and replace them with the standard Linux list API. The old macros are converted to the list API as well, and some new primitives are now available. The checks for the presence of slaves, which used slave_cnt, have been replaced by the list_empty macro. Also a few small style fixes: changing longest -> shortest line ordering in local variable declarations, leaving an empty line before return, and removing unnecessary brackets. This is the first step towards a gradual RCU conversion. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
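In sketch form, the custom prev/next pointers give way to standard list primitives (the exact macro names and bodies may differ slightly):

    /* struct slave gains a list_head; struct bonding keeps the list head */
    #define bond_first_slave(bond) \
            list_first_entry_or_null(&(bond)->slave_list, struct slave, list)

    #define bond_for_each_slave(bond, pos) \
            list_for_each_entry(pos, &(bond)->slave_list, list)

    /* "do we have slaves?" checks via slave_cnt become: */
    if (list_empty(&bond->slave_list))
            return;         /* no slaves */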
2013-08-02  bonding: fix system hang due to fast igmp timer rescheduling  [Nikolay Aleksandrov, 1 file, -2/+2]
After commit 4aa5dee4d9 ("net: convert resend IGMP to notifier event") we try to acquire rtnl in bond_resend_igmp_join_requests, but it can be scheduled with rtnl already held (e.g. when bond_change_active_slave is called with rtnl), causing a loop of immediate reschedules and calls because rtnl_trylock fails each time since rtnl is already held. For me this issue leads to system hangs very easily: modprobe bonding; ifconfig bond0 up; ifenslave bond0 eth0; rmmod bonding; The fix is to introduce a small (1 jiffy) delay, which is enough for the sections holding rtnl to finish without putting any strain on the system. Also adjust the timer in bond_change_active_slave to be 1 jiffy, since most of the time it's called with rtnl already held. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
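The fix amounts to something like this in the rejoin path (sketch, assuming the bond's mcast_work delayed work queued on bond->wq):

    if (!rtnl_trylock()) {
            /* rtnl is held elsewhere; retry a jiffy later instead of
             * immediately requeueing and spinning */
            queue_delayed_work(bond->wq, &bond->mcast_work, 1);
            return;
    }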
2013-07-28  bonding: remove bond_resend_igmp_join_requests read_unlock leftover  [nikolay@redhat.com, 1 file, -1/+0]
After commit 4aa5dee4d9 ("net: convert resend IGMP to notifier event") we have 1 read_unlock in bond_resend_igmp_join_requests which isn't paired with a read_lock because it's removed by that commit. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Reviewed-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-07-27  bond: cleanup netpoll code  [stephen hemminger, 1 file, -7/+1]
This started out with fixing a sparse warning, then I realized that the wrapper function bond_netpoll_info could just be removed by rolling it into the enable code. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-07-27  bonding: use pre-defined macro in bond_mode_name instead of magic number 0  [Wang Sheng-Hui, 1 file, -1/+1]
We have BOND_MODE_ROUNDROBIN pre-defined as 0, and it's the lowest mode number. Use it to check the arg lower bound instead of the magic number 0 in bond_mode_name. Signed-off-by: Wang Sheng-Hui <shhuiw@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-07-25  bonding: Fixed up a error "do not initialise statics to 0 or NULL" in bond_main.c  [dingtianhong, 1 file, -1/+1]
The error was found by the checkpatch.pl tool. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>