Age | Commit message | Author | Files, Lines
2016-05-10 | iwlwifi: Edit the 8265 SDIO ID | Mordechai Goodstein | 2 files, -0/+15
Add new 8265 series SDIO ID. Signed-off-by: Mordechai Goodstein <mordechay.goodstein@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2016-05-10 | iwlwifi: mvm: support p2p device frames tx on dqa queue #2 | Liad Kaufman | 4 files, -10/+27
Support sending P2P Device frames from queue #2, as required in DQA mode. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2016-05-10 | iwlwifi: mvm: allocate queue for probe response in dqa mode | Liad Kaufman | 4 files, -13/+66
In DQA mode, allocate a dedicated queue (#9) for P2P GO/soft AP probe responses. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2016-05-10 | Merge tag 'iwlwifi-for-kalle-2016-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-fixes | Luca Coelho | 9 files, -55/+69
* fix P2P rates (and possibly other issues) Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2016-05-10 | Merge tag 'mac80211-next-for-davem-2016-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next into master | Luca Coelho | 4931 files, -99311/+236842
To synchronize with Kalle, here's just a big change that affects all drivers - removing the duplicated enum ieee80211_band and replacing it with enum nl80211_band. On top of that, just a small documentation update.
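For reference, a sketch of the replacement enum as it stood around this merge; the values map one-to-one onto the old ieee80211_band ones, so the conversion in drivers is mechanical:

    enum nl80211_band {
        NL80211_BAND_2GHZ,
        NL80211_BAND_5GHZ,
        NL80211_BAND_60GHZ,

        NUM_NL80211_BANDS,
    };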
2016-05-04 | iwlwifi: mvm: don't override the rate with the AMSDU len | Emmanuel Grumbach | 1 file, -35/+48
The TSO code creates A-MSDUs from a single large send. Each A-MSDU is an skb, and skb->len doesn't include the number of bytes that still need to be added for the headers (subframe header, TCP header, IP header, SNAP, padding). To be able to set the right value in the Tx command, we put the number of bytes added by those headers in driver_data in iwl_mvm_tx_tso and use this value in iwl_mvm_set_tx_cmd. The problem with storing this value in driver_data is that it overwrites the ieee80211_tx_info. The bug manifested itself when we sent P2P-related frames in CCK, since the rate in ieee80211_tx_info was zeroed; this of course is a violation of the P2P specification. To fix this, copy the original ieee80211_tx_info to the stack and pass it to the functions which need it. Assign the number of bytes added by the headers to the driver_data inside the skb itself. Fixes: a6d5e32f247c ("iwlwifi: mvm: send large SKBs to the transport") Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
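A minimal sketch of the pattern the fix describes; the exact parameter list of iwl_mvm_set_tx_cmd() is an assumption here, only the "copy the tx info to the stack before reusing skb->cb" idea comes from the commit message:

    struct ieee80211_tx_info info;

    /* take a stack copy before driver_data (which aliases skb->cb)
     * is reused for the number of header bytes added by TSO */
    memcpy(&info, skb->cb, sizeof(info));
    iwl_mvm_set_tx_cmd(mvm, skb, tx_cmd, &info, sta_id);  /* assumed signature */
    /* skb->cb can now safely carry the A-MSDU header length */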
2016-05-03 | rtl8xxxu: Remove the now obsolete mbox_ext_reg info from rtl8xxxu_fileops | Jes Sorensen | 5 files, -10/+0
With two different h2c_cmd() functions, mbox_ext_reg and mbox_ext_width are no longer needed. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: rtl8xxxu_prepare_calibrate() is never used on gen1 | Jes Sorensen | 3 files, -11/+4
Rename it to rtl8xxxu_gen2_prepare_calibrate() and remove the calls to it from rtl8xxxu_gen1_phy_iq_calibrate(). Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: Split rtl8723a_h2c_cmd() into a gen1 and a gen2 version | Jes Sorensen | 3 files, -29/+72
The H2C API is completely different between gen1 and gen2 parts, so there is little point trying to treat this as a generic function. All calls to *_h2c_cmd() will always come from a gen1 or a gen2 specific function. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: Rename rtl8723a_disabled_to_emu() to rtl8xxxu_disabled_to_emu() | Jes Sorensen | 4 files, -4/+4
This function is generic to most of the chips, so change the name to reflect this. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: rename rtl8723a_channel_group() to rtl8xxxu_gen1_channel_to_group() | Jes Sorensen | 1 file, -2/+2
This function is generic for most (if not all) gen1 parts, so rename it to reflect this. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: Rename rtl8723a_stop_tx_beacon() to rtl8xxxu_stop_tx_beacon() | Jes Sorensen | 1 file, -4/+3
There is nothing 8723au specific about this function, so rename it to reflect this. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: move rtl8188[cr] and rtl8192c related code into rtl8xxxu_8192c.c | Jes Sorensen | 4 files, -568/+590
This moves the code for rtl8188c, rtl8188r, and rtl8192c into its own file. This is purely a code-moving exercise; there is no change to the code itself. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: move rtl8723a related code into rtl8xxxu_8723a.c | Jes Sorensen | 4 files, -375/+432
This moves the rtl8723a code into its own file. This is purely a code-moving exercise, no code changes. This device-specific file is a lot smaller, since the gen1 chips (8723a, 8188c, 8188r, 8192c) share a lot more common code than the gen2 chips. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: move rtl8723b related code into rtl8xxxu_8723b.c | Jes Sorensen | 4 files, -1651/+1698
This moves the rtl8723b code into its own file. This is purely a code-moving exercise, no functional changes. This did expose rtl8723a_h2c_cmd() as a function that should be refactored into a gen1 and a gen2 version. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: move rtl8192e related code into rtl8xxxu_8192e.c | Jes Sorensen | 4 files, -1538/+1629
This moves the rtl8192e code into its own file. This is purely a code-moving exercise, no code changes. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtl8xxxu: Rename rtl8xxxu.c to rtl8xxxu_core.c | Jes Sorensen | 2 files, -0/+2
This renames the core file to rtl8xxxu_core.c in order to allow us to keep the module name rtl8xxxu.ko when refactoring the code into multiple files. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | rtlwifi: rtl818x: Deinline indexed IO functions, save 21568 bytes | Denys Vlasenko | 2 files, -87/+105
rtl818x_ioread8_idx: 151 bytes, 29 calls
rtl818x_ioread16_idx: 151 bytes, 11 calls
rtl818x_ioread32_idx: 151 bytes, 5 calls
rtl818x_iowrite8_idx: 157 bytes, 117 calls
rtl818x_iowrite16_idx: 158 bytes, 74 calls
rtl818x_iowrite32_idx: 157 bytes, 22 calls

Each of these functions has a pair of mutex lock/unlock ops, and both of those ops perform atomic updates of memory (on x86, it boils down to a "lock cmpxchg %reg,mem" insn), which are 4-8 times more expensive than call+return.

        text     data      bss       dec      hex filename
    95894242 20860288 35991552 152746082  91ab862 vmlinux_before
    95872674 20860320 35991552 152724546  91a6442 vmlinux

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> CC: Larry Finger <Larry.Finger@lwfinger.net> CC: Chaoming Li <chaoming_li@realsil.com.cn> CC: linux-wireless@vger.kernel.org CC: linux-kernel@vger.kernel.org Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2016-05-03 | Merge tag 'wireless-drivers-next-for-davem-2016-05-02' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next | David S. Miller | 81 files, -1767/+3670
Kalle Valo says:

====================
wireless-drivers patches for 4.7

Major changes:

brcmfmac
* add support for nl80211 BSS_SELECT feature

mwifiex
* add platform specific wakeup interrupt support

ath10k
* implement set_tsf() for 10.2.4 branch
* remove rare MSI range support
* remove deprecated firmware API 1 support

ath9k
* add module parameter to invert LED polarity

wcn36xx
* fixes to get the driver properly working on Dragonboard 410c
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | net: relax expensive skb_unclone() in iptunnel_handle_offloads() | Eric Dumazet | 2 files, -1/+11
Locally generated TCP GSO packets that have to go through a GRE/SIT/IPIP tunnel currently go through an expensive skb_unclone(), and reallocating skb->head is a lot of work. The test should really check whether a 'real clone' of the packet was done; TCP does not care if the original gso_type is changed while the packet travels in the stack. This adds skb_header_unclone(), a variant of skb_unclone() that uses the skb_header_cloned() check instead of skb_cloned(). This variant can probably be used from other points. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
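A sketch of what such a helper looks like, modeled directly on skb_unclone() as the message describes but keyed on skb_header_cloned(); treat the exact body as illustrative:

    static inline int skb_header_unclone(struct sk_buff *skb, gfp_t pri)
    {
        might_sleep_if(gfpflags_allow_blocking(pri));

        /* only reallocate skb->head if the header area is truly shared */
        if (skb_header_cloned(skb))
            return pskb_expand_head(skb, 0, 0, pri);

        return 0;
    }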
2016-05-03 | netdevice: shrink size of struct netdev_queue | Florian Westphal | 1 file, -7/+6
- trans_timeout is incremented when a tx queue times out (tx watchdog).
- tx_maxrate is set via sysfs

Moving tx_maxrate to the read-mostly part shrinks the struct by 64 bytes. While at it, also move trans_timeout (it is out of place in the 'write-mostly' part). Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | Merge branch 'bridge-per-vlan-stats' | David S. Miller | 8 files, -28/+307
Nikolay Aleksandrov says:

====================
bridge: per-vlan stats

This set adds support for bridge per-vlan statistics. In order to be able to dump statistics for many vlans we need a way to continue dumping after reaching the maximum message size, thus patches 01 and 02 extend the new stats API with a per-device extended link stats attribute and callback which can save its local state and continue where it left off afterwards. I considered using the already existing "fill_xstats" callback, but it gets confusing since we need to separate the linkinfo dump from the new stats API dump, and adding a flag/argument to do that just looks messy. I don't think the rtnl_link_ops size is an issue, so adding these seemed like the cleaner approach.

Patches 03 and 04 add the stats support and netlink dump support respectively. The stats accounting is controlled via a bridge option which is off by default, thus the performance impact is kept minimal. I've tested this set with both old and modified iproute2, kmemleak on and some traffic stress tests while adding/removing vlans and ports.

v3:
- drop the RCU pvid patch and remove one pointer fetch as requested
- make stats accounting optional with default off; the option is in the same cache line as vlan_proto and vlan_enabled, so it is already fetched before the fast path check, thus the performance impact is minimal. This also allows us to avoid one vlan lookup and return early when using pvid
- rebased and retested

v2:
- improve the error checking, rename lidx to prividx and save the current idx user instead of restricting it to one in patch 01
- squash patch 02 into 01 and remove the restriction
- add callback descriptions, improve the size calculation and change the xstats message structure to have an embedding level per rtnl link type, so we can avoid one call to get the link type (and thus filter on it) and also each link type can now have any number of private attributes inside
- fix a problem where the vlan stats are not dumped if the bridge has 0 vlans on it but has vlans on the ports; add bridge link type private attributes and also add paddings for future extensions to avoid at least a few netlink attributes and improve struct alignment
- drop the is_skb_forwardable argument constifying patch as it's not needed anymore, but it's a nice cleanup which I'll send separately
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | bridge: netlink: export per-vlan stats | Nikolay Aleksandrov | 5 files, -0/+118
Add a new LINK_XSTATS_TYPE_BRIDGE attribute and implement the RTM_GETSTATS callbacks for IFLA_STATS_LINK_XSTATS (fill_linkxstats and get_linkxstats_size) in order to export the per-vlan stats. The paddings were added because soon these fields will be needed for per-port per-vlan stats (or something else, if someone beats me to it), so we avoid at least a few more netlink attributes. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | bridge: vlan: learn to count | Nikolay Aleksandrov | 5 files, -16/+110
Add support for per-VLAN Tx/Rx statistics. Every global vlan context gets a per-cpu stats structure allocated, which is then set in each per-port vlan context for quick access. The br_allowed_ingress() common function is used to account for Rx packets and the br_handle_vlan() common function is used to account for Tx packets. Stats accounting is performed only if the bridge-wide vlan_stats_enabled option is set either via sysfs or netlink. A struct hole between vlan_enabled and vlan_proto is used for the new option so it is in the same cache line. Currently it is binary (on/off), but it is intentionally restricted to exactly 0 and 1 since other values will be used in the future for different purposes (e.g. per-port stats). Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
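A sketch of the per-cpu counter structure and allocation this implies; the struct and field names here are illustrative, not necessarily the bridge's exact identifiers:

    struct br_vlan_stats {
        u64 rx_bytes;
        u64 rx_packets;
        u64 tx_bytes;
        u64 tx_packets;
        struct u64_stats_sync syncp;   /* guards 64-bit reads on 32-bit hosts */
    };

    struct br_vlan_stats __percpu *stats;

    /* one instance per global vlan context, shared by the per-port contexts */
    stats = alloc_percpu(struct br_vlan_stats);
    if (!stats)
        return -ENOMEM;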
2016-05-03 | net: rtnetlink: add linkxstats callbacks and attribute | Nikolay Aleksandrov | 3 files, -0/+49
Add callbacks to calculate the size and fill link extended statistics, which can be split into multiple messages and are dumped via the new rtnl stats API (RTM_GETSTATS) with the IFLA_STATS_LINK_XSTATS attribute. Also add that attribute to the idx mask check, since it is expected to be able to save state and resume dumping (e.g. future bridge per-vlan stats will be dumped via this attribute and callbacks). Each link type should nest its private attributes under the per-link-type attribute. This allows any number of separate private attributes and avoids one call to get the dev link type. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
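The callback shape this describes is roughly as follows; a sketch only, where the function name is a placeholder and LINK_XSTATS_TYPE_BRIDGE comes from the follow-up bridge patch:

    static int example_fill_linkxstats(struct sk_buff *skb,
                                       const struct net_device *dev,
                                       int *prividx)
    {
        struct nlattr *nest;

        /* nest everything under the per-link-type attribute */
        nest = nla_nest_start(skb, LINK_XSTATS_TYPE_BRIDGE);
        if (!nest)
            return -EMSGSIZE;

        /* put private attributes here, resuming from *prividx and
         * updating it if the skb runs out of room */

        nla_nest_end(skb, nest);
        return 0;
    }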
2016-05-03 | net: rtnetlink: allow rtnl_fill_statsinfo to save private state counter | Nikolay Aleksandrov | 1 file, -13/+31
The new prividx argument allows the current dumping device to save a private state counter, which enables it to continue dumping from where it left off. The idxattr is used to save the current idx user, so that multiple prividx-using attributes can be requested at the same time, as suggested by Roopa Prabhu. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | Merge branch 'ipv6-tunnel-cleanups' | David S. Miller | 6 files, -584/+452
Tom Herbert says:

====================
net: Cleanup IPv6 ip tunnels

The IPv6 tunnel code is very different from the IPv4 code. There is a lot of redundancy with the IPv4 code, particularly in the GRE tunneling. This patch set cleans up the tunnel code to make the IPv6 code look more like the IPv4 code and use common functions between the two stacks where possible. This work should make it easier to maintain and extend the IPv6 ip tunnels.

Items in this patch set:
- Cleanup IPv6 tunnel receive path (ip6_tnl_rcv). Includes using gro_cells and exporting ip6_tnl_rcv so that ip6_gre can call it
- Move GRE functions to a common header file (tx functions) or gre_demux.c (rx functions like gre_parse_header)
- Call common GRE functions from IPv6 GRE
- Create ip6_tnl_xmit (to be like ip_tunnel_xmit)

Tested: Ran super_netperf tests for TCP_RR and TCP_STREAM for:
- IPv4 over gre, gretap, gre6, gre6tap
- IPv6 over gre, gretap, gre6, gre6tap
- ipip
- ip6ip6
- ipip/gue
- IPv6 over gre/gue
- IPv4 over gre/gue
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | gre6: Cleanup GREv6 transmit path, call common GRE functions | Tom Herbert | 1 file, -202/+50
Changes in GREv6 transmit path:
- Call gre_checksum, remove gre6_checksum
- Rename ip6gre_xmit2 to __gre6_xmit
- Call gre_build_header utility function
- Call ip6_tnl_xmit common function
- Call ip6_tnl_change_mtu, eliminate ip6gre_tunnel_change_mtu
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | ipv6: Generic tunnel cleanup | Tom Herbert | 2 files, -3/+9
A few generic changes to generalize tunnels in IPv6:
- Export ip6_tnl_change_mtu so that it can be called by ip6_gre
- Add tun_hlen to ip6_tnl structure.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | gre: Create common functions for transmit | Tom Herbert | 2 files, -47/+49
Create common functions for both IPv4 and IPv6 GRE in transmit. These are put into gre.h. Common functions are for:
- GRE checksum calculation. Move gre_checksum to gre.h.
- Building a GRE header. Move GRE build_header and rename gre_build_header.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | ipv6: Create ip6_tnl_xmit | Tom Herbert | 2 files, -17/+32
This patch renames ip6_tnl_xmit2 to ip6_tnl_xmit and exports it. Other users like GRE will be able to call this. The original ip6_tnl_xmit function is renamed to ip6_tnl_start_xmit (this is an ndo_start_xmit function). Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | gre6: Cleanup GREv6 receive path, call common GRE functions | Tom Herbert | 1 file, -117/+23
- Create gre_rcv function. This calls gre_parse_header and ip6gre_rcv.
- Call ip6_tnl_rcv. Doing this and using gre_parse_header eliminates most of the code in ip6gre_rcv.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | gre: Move utility functions to common headers | Tom Herbert | 3 files, -129/+144
Several of the GRE functions defined in net/ipv4/ip_gre.c are usable for the IPv6 GRE implementation (that is, they are protocol agnostic). These include:
- GRE flag handling functions are moved to gre.h
- GRE build_header is moved to gre.h and renamed gre_build_header
- parse_gre_header is moved to gre_demux.c and renamed gre_parse_header
- iptunnel_pull_header is taken out of gre_parse_header. This is now done by the caller. The header length is returned from gre_parse_header in an int* argument.
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
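For orientation, the fixed part of the GRE header that gre_build_header/gre_parse_header deal with is just four bytes; the optional checksum, key, and sequence fields follow depending on the flag bits. A sketch of the layout (the kernel defines an equivalent struct):

    struct gre_base_hdr {
        __be16 flags;      /* GRE_CSUM | GRE_KEY | GRE_SEQ select optional fields */
        __be16 protocol;   /* EtherType of the encapsulated packet */
    };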
2016-05-03 | ipv6: Cleanup IPv6 tunnel receive path | Tom Herbert | 2 files, -70/+146
Some basic changes to make IPv6 tunnel receive path look more like IPv4 path:
- Make ip6_tnl_rcv non-static so that GREv6 and others can call it
- Make ip6_tnl_rcv look like ip_tunnel_rcv
- Switch to gro_cells_receive
- Make ip6_tnl_rcv non-static and export it
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
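A sketch of the gro_cells pattern the receive path switches to; the gro_cells field inside the tunnel private struct is an assumption here, only the switch to gro_cells_receive comes from the commit message:

    /* at tunnel device setup (ndo_init) time */
    int err = gro_cells_init(&tunnel->gro_cells, dev);

    if (err)
        return err;

    /* in the receive path, instead of netif_rx()/netif_receive_skb() */
    gro_cells_receive(&tunnel->gro_cells, skb);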
2016-05-03 | Merge branch 'tcp-preempt' | David S. Miller | 20 files, -157/+150
Eric Dumazet says:

====================
net: make TCP preemptible

Most of TCP stack assumed it was running from BH handler. This is great for most things, as TCP behavior is very sensitive to scheduling artifacts. However, the prequeue and backlog processing are problematic, as they need to be flushed with BH being blocked.

To cope with modern needs, TCP sockets have big sk_rcvbuf values, in the order of 16 MB, and soon 32 MB. This means that backlog can hold thousands of packets, and things like TCP coalescing or collapsing on this amount of packets can lead to insane latency spikes, since BH are blocked for too long. It is time to make UDP/TCP stacks preemptible. Note that fast path still runs from BH handler.

v2: Added "tcp: make tcp_sendmsg() aware of socket backlog" to reduce latency problems of large sends.

v3: Fixed a typo in tcp_cdg.c
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | tcp: make tcp_sendmsg() aware of socket backlog | Eric Dumazet | 3 files, -2/+24
Large sendmsg()/write() calls hold the socket lock for the duration of the call, unless the sk->sk_sndbuf limit is hit. This is bad because incoming packets are parked in the socket backlog for a long time. Critical decisions like fast retransmit might be delayed. Receivers have to maintain a big out-of-order queue with additional cpu overhead, and also possible stalls in TX once windows are full. Bidirectional flows are particularly hurt, since the backlog can become quite big if the copy from user space triggers IO (page faults). Some applications learnt to use sendmsg() (or sendmmsg()) with small chunks to avoid this issue. The kernel should know better, right? Add a generic sk_flush_backlog() helper and use it right before a new skb is allocated. Typically we put 64KB of payload per skb (unless MSG_EOR is requested), and checking the socket backlog every 64KB gives good results. As a matter of fact, tests with TSO/GSO disabled give very nice results, as we manage to keep a small write queue and smaller perceived rtt. Note that sk_flush_backlog() maintains socket ownership, so it is not equivalent to a {release_sock(sk); lock_sock(sk);} pair, to ensure the implicit atomicity rules that sendmsg() was giving to (possibly buggy) applications. In this simple implementation, I chose not to call tcp_release_cb(), but we might consider this later. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
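The helper is essentially a cheap check plus a call into a slow path; a sketch, assuming a __sk_flush_backlog() that processes the backlog while the socket stays owned:

    static inline void sk_flush_backlog(struct sock *sk)
    {
        /* only take the slow path if packets actually piled up */
        if (unlikely(READ_ONCE(sk->sk_backlog.tail)))
            __sk_flush_backlog(sk);
    }

As described above, tcp_sendmsg() can then call it roughly once per new 64KB skb allocation.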
2016-05-03 | net: do not block BH while processing socket backlog | Eric Dumazet | 1 file, -14/+8
Socket backlog processing is a major latency source. With current TCP socket sk_rcvbuf limits, I have sampled __release_sock() holding cpu for more than 5 ms, and packets being dropped by the NIC once ring buffer is filled. All users are now ready to be called from process context, we can unblock BH and let interrupts be serviced faster. cond_resched_softirq() could be removed, as it has no more user. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | sctp: prepare for socket backlog behavior change | Eric Dumazet | 1 file, -0/+2
sctp_inq_push() will soon be called without BH being blocked when generic socket code flushes the socket backlog. It is very possible SCTP can be converted to not rely on BH, but this needs to be done by SCTP experts. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | udp: prepare for non BH masking at backlog processing | Eric Dumazet | 2 files, -4/+4
UDP uses the generic socket backlog code, and this will soon be changed to not disable BH when protocol is called back. We need to use appropriate SNMP accessors. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | dccp: do not assume DCCP code is non preemptible | Eric Dumazet | 4 files, -6/+6
DCCP uses the generic backlog code, and this will soon be changed to not disable BH when protocol is called back. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | tcp: do not block bh during prequeue processing | Eric Dumazet | 2 files, -32/+2
AFAIK, nothing in current TCP stack absolutely wants BH being disabled once socket is owned by a thread running in process context. As mentioned in my prior patch ("tcp: give prequeue mode some care"), processing a batch of packets might take time, better not block BH at all. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-03 | tcp: do not assume TCP code is non preemptible | Eric Dumazet | 11 files, -99/+104
We want to make the TCP stack preemptible, as draining the prequeue and backlog queues can take a lot of time. Many SNMP updates were assuming that BH (and preemption) was disabled, so we need to convert some __NET_INC_STATS() calls to NET_INC_STATS() and some __TCP_INC_STATS() calls to TCP_INC_STATS(). Before using this_cpu_ptr(net->ipv4.tcp_sk) in tcp_v4_send_reset() and tcp_v4_send_ack(), we add an explicit preempt-disabled section. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
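Illustrative only (a toy per-cpu variable, not the actual net->ipv4.tcp_sk code): once the caller can be preempted, this_cpu_ptr() users need an explicit preempt-disabled section so the task cannot migrate to another CPU while the pointer is in use.

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    static DEFINE_PER_CPU(int, demo_counter);

    static void demo_use_percpu(void)
    {
        int *p;

        preempt_disable();
        p = this_cpu_ptr(&demo_counter);
        *p += 1;                /* work on this CPU's instance */
        preempt_enable();
    }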
2016-05-02 | Merge branch 'xgene-channel-number' | David S. Miller | 4 files, -1/+18
Iyappan Subramanian says:

====================
drivers: net: xgene: fix: Get channel number from device binding

This patch set adds a 'channel' property to get the ethernet-to-CPU channel number, thus decoupling the Linux driver from static resource selection.

v2: Address review comments from v1
- removed irq reference from the Linux driver
- added 'channel' property to get the ethernet-to-CPU channel number

v1:
- Initial version
====================

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 | dtb: xgene: Add channel property | Iyappan Subramanian | 2 files, -0/+2
Added 'channel' property, describing ethernet to CPU channel number. Signed-off-by: Iyappan Subramanian <isubramanian@apm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 | Documentation: dtb: xgene: Add channel property | Iyappan Subramanian | 1 file, -0/+2
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 | drivers: net: xgene: Get channel number from device binding | Iyappan Subramanian | 1 file, -1/+14
This patch gets the ethernet-to-CPU channel (prefetch buffer number) from the newly added 'channel' property, thus decoupling the Linux driver from resource management. Signed-off-by: Iyappan Subramanian <isubramanian@apm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
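A minimal sketch of reading the new property; only the 'channel' property name comes from the patch, while the helper and destination names below are assumptions (device_property_read_u32() works for both DT and ACPI, which matters on X-Gene platforms):

    #include <linux/property.h>

    /* hypothetical helper, not the driver's actual code */
    static int xgene_enet_get_channel(struct device *dev, u32 *ring_num)
    {
        return device_property_read_u32(dev, "channel", ring_num);
    }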
2016-05-02 | Merge branch 'qed-selftests' | David S. Miller | 13 files, -6/+598
Sudarsana Reddy Kalluru says:

====================
qed/qede: ethtool selftests support.

This series adds the driver support for the following selftests:
1. Register test
2. Memory test
3. Clock test
4. Interrupt test
5. Internal loopback test

Patch (1) adds the qed driver infrastructure for selftests. Patches (2) and (3) add qede driver support for ethtool selftests.

Please consider applying this series to "net-next".
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 | qede: add implementation for internal loopback test. | Sudarsana Reddy Kalluru | 3 files, -4/+242
This patch adds the qede implementation for internal loopback test. Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: Manish Chopra <manish.chopra@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 | qede: add support for selftests. | Sudarsana Reddy Kalluru | 1 file, -1/+55
This patch adds the qede ethtool support for the following tests:
- interrupt test
- memory test
- register test
- clock test
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
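Such tests are normally exposed through the standard ethtool self-test hook; a sketch of the wiring, with the qede_* names here being assumptions rather than the driver's actual identifiers:

    static void qede_self_test(struct net_device *dev,
                               struct ethtool_test *etest, u64 *buf)
    {
        /* run the register/memory/clock/interrupt (and, if requested,
         * loopback) tests; write 0 (pass) or 1 (fail) per entry in buf[]
         * and set etest->flags |= ETH_TEST_FL_FAILED on any failure */
    }

    static const struct ethtool_ops qede_ethtool_ops = {
        .self_test = qede_self_test,
        /* ... other callbacks ... */
    };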
2016-05-02 | qed: add infrastructure for device self tests. | Sudarsana Reddy Kalluru | 10 files, -1/+301
This patch adds the functionality and APIs needed for selftests. It adds the ability to configure the link-mode which is required for the implementation of loopback tests. It adds the APIs for clock test, register test, interrupt test and memory test. Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>