2019-06-12tcp: add optional per socket transmit delayEric Dumazet8-8/+76
Adding delays to TCP flows is crucial for studying behavior of TCP stacks, including congestion control modules. Linux offers the netem module, but it has impractical constraints:
- Need root access to change the qdisc
- Hard to set up on egress if combined with a non-trivial qdisc like FQ
- Single delay for all flows
EDT (Earliest Departure Time) adoption in the TCP stack allows us to enable a per-socket delay at a very small cost. Networking tools can now establish thousands of flows, each of them with a different delay, simulating real-world conditions. This requires the FQ packet scheduler or an EDT-enabled NIC. This patch adds the TCP_TX_DELAY socket option, to set a delay in usec units:
    unsigned int tx_delay = 10000; /* 10 msec */
    setsockopt(fd, SOL_TCP, TCP_TX_DELAY, &tx_delay, sizeof(tx_delay));
Note that FQ packet scheduler limits might need some tweaking (man tc-fq):
    limit       Hard limit on the real queue size. When this limit is reached, new packets are dropped. If the value is lowered, packets are dropped so that the new limit is met. Default is 10000 packets.
    flow_limit  Hard limit on the maximum number of packets queued per flow. Default value is 100.
Use of the TCP_TX_DELAY option will increase the number of skbs in the FQ qdisc, so packets would be dropped if either of these limits is hit. Use of a jump label makes this support runtime-free for hosts never using the option. Also note that TSQ (TCP Small Queues) limits are slightly changed with this patch: we need to account for the fact that artificially delayed skbs won't stop us from providing more skbs to feed the pipe (netem uses skb_orphan_partial() for this purpose, but FQ cannot use this trick). Because of that, using big delays might very well trigger old bugs in the TSO auto-defer logic and/or sndbuf-limited detection. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
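For illustration only (not part of the patch): a minimal user-space sketch that opens a TCP socket with a 10 ms per-socket transmit delay. It assumes a kernel carrying this patch and an FQ qdisc already installed on the egress interface (e.g. tc qdisc replace dev eth0 root fq); the TCP_TX_DELAY fallback definition uses the value from the patch.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef TCP_TX_DELAY
    #define TCP_TX_DELAY 37         /* value introduced by this patch */
    #endif

    int open_delayed_socket(void)
    {
        unsigned int tx_delay = 10000;  /* 10 msec, expressed in usec */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        /* Kernels without this patch reject the unknown option. */
        if (setsockopt(fd, SOL_TCP, TCP_TX_DELAY, &tx_delay, sizeof(tx_delay)) < 0)
            perror("setsockopt(TCP_TX_DELAY)");
        return fd;
    }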
2019-06-12Merge branch 'ena-dynamic-queue-sizes'David S. Miller7-143/+403
Sameeh Jubran says:
====================
Support for dynamic queue size changes

This patchset introduces the following:
* add a new admin command for supporting different queue sizes for Tx/Rx
* add support for Tx/Rx queue size modification through ethtool
* allow queue allocation backoff when low on memory
* update driver version

Difference from v2:
* Dropped superfluous range checks which are already done in ethtool. [patch 5/7]
* Dropped inline keyword from function. [patch 4/7]
* Added a new patch which drops the inline keyword from all *.c files. [patch 6/7]

Difference from v1:
* Changed ena_update_queue_sizes() signature to use u32 instead of int type for the size arguments. [patch 5/7]
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: update driver version from 2.0.3 to 2.1.0Sameeh Jubran1-2/+2
Update driver version to match device specification. Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: remove inline keyword from functions in *.cSameeh Jubran3-24/+24
Let the compiler decide if the function should be inline in *.c files Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: add ethtool function for changing io queue sizesSameeh Jubran3-1/+40
Implement the set_ringparam() function of the ethtool interface to enable the changing of io queue sizes. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: allow queue allocation backoff when low on memorySameeh Jubran3-42/+127
If there is not enough memory to allocate io queues, the driver will try to allocate smaller queues. The backoff algorithm is as follows:
1. Try to allocate TX and RX queues.
   1.1. If successful, return success.
2. Divide by 2 the size of the larger of the RX and TX queues (or both if their sizes are equal).
3. If TX or RX is smaller than 256:
   3.1. Return failure.
4. Else:
   4.1. Go back to 1.
Also change the tx_queue_size, rx_queue_size field names in struct adapter to requested_tx_queue_size and requested_rx_queue_size, and use RX and TX queue 0 for the actual queue sizes.
Explanation: the original fields were useless, as they were simply used to assign values once from them to each of the queues in the adapter in ena_probe(). They could simply be deleted. However, now that we have a backoff feature, we have a use for them. In case of backoff there is a difference between the requested queue sizes and the actual sizes. Therefore there is a need to save the requested queue size for future retries of queue allocation (for example, if allocation failed and then ifdown + ifup was called, we want to start the allocation from the original requested size of the queues).
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
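A minimal sketch of the backoff loop described above, for illustration only (the 256 floor and the requested/actual size split follow the commit text; ena_try_alloc_queues() is a hypothetical helper, not the driver's API):

    #define ENA_MIN_QUEUE_SIZE 256  /* floor from step 3 above */

    static int alloc_io_queues_with_backoff(u32 tx_size, u32 rx_size)
    {
        while (1) {
            /* step 1: try the current (requested or halved) sizes */
            if (ena_try_alloc_queues(tx_size, rx_size) == 0)
                return 0;

            /* step 2: halve the larger queue, or both if equal */
            if (tx_size > rx_size) {
                tx_size /= 2;
            } else if (rx_size > tx_size) {
                rx_size /= 2;
            } else {
                tx_size /= 2;
                rx_size /= 2;
            }

            /* step 3: give up once either queue drops below the floor */
            if (tx_size < ENA_MIN_QUEUE_SIZE || rx_size < ENA_MIN_QUEUE_SIZE)
                return -ENOMEM;
        }
    }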
2019-06-12net: ena: make ethtool show correct current and max queue sizesSameeh Jubran2-6/+6
Currently ethtool -g shows the same size for current and max queue sizes. Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: enable negotiating larger Rx ring sizeSameeh Jubran2-49/+110
Use MAX_QUEUES_EXT get feature capability to query the device. Signed-off-by: Netanel Belgazal <netanel@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ena: add MAX_QUEUES_EXT get feature admin commandArthur Kiyanovski3-30/+105
Add a new admin command to support different queue sizes for Tx/Rx queues (the change also supports different SQ/CQ sizes). Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12Merge branch 'dpaa2-eth-Add-support-for-MQPRIO-offloading'David S. Miller2-26/+112
Ioana Radulescu says: ==================== dpaa2-eth: Add support for MQPRIO offloading Add support for adding multiple TX traffic classes with mqprio. We can have up to one netdev queue and hardware frame queue per TC per core. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12dpaa2-eth: Add mqprio supportIoana Radulescu2-5/+64
Implement mqprio qdisc support by mapping traffic classes to different hardware enqueue priorities. The maximum number of supported traffic classes is an attribute of each DPNI object. The traffic classes map to hardware priorities from highest (0) to lowest (highest prio number). The skb priority information received from the stack is used to select the hardware Tx queue on which to enqueue the frame. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: Bogdan Purcareata <bogdan.purcareata@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
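As a rough illustration of the priority-to-queue mapping described above (a hedged sketch only, not the dpaa2-eth code: netdev_get_prio_tc_map() and skb_get_hash() are existing kernel helpers, the layout of num_queues hardware queues per traffic class follows the commit text, and pick_tx_queue() is hypothetical):

    /* Illustrative only: derive a hardware Tx queue index from the skb
     * priority, assuming the queues are laid out as num_tcs blocks of
     * num_queues queues each.
     */
    static u16 pick_tx_queue(struct net_device *dev, struct sk_buff *skb,
                             u16 num_queues)
    {
        u16 tc = netdev_get_prio_tc_map(dev, skb->priority);

        return tc * num_queues + (skb_get_hash(skb) % num_queues);
    }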
2019-06-12dpaa2-eth: Support multiple traffic classes on TxIoana Radulescu2-11/+19
DPNI objects can have multiple traffic classes, as reflected by the num_tc attribute. Until now we ignored its value and only used traffic class 0. This patch adds support for multiple Tx traffic classes; we have num_queues x num_tcs hardware queues available for each interface. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12dpaa2-eth: Refactor xps codeIoana Radulescu1-13/+32
Move the code configuring xps on the netdev TX queues to a separate function. A subsequent patch will need to call this in another context as well. Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ethernet: ti: cpts: fix build failure for powerpcGrygorii Strashko1-0/+1
Add dependency to TI CPTS from Common CLK framework COMMON_CLK to fix allyesconfig build for Powerpc: drivers/net/ethernet/ti/cpts.c: In function 'cpts_of_mux_clk_setup': drivers/net/ethernet/ti/cpts.c:567:2: error: implicit declaration of function 'of_clk_parent_fill'; did you mean 'of_clk_get_parent_name'? [-Werror=implicit-function-declaration] of_clk_parent_fill(refclk_np, parent_names, num_parents); ^~~~~~~~~~~~~~~~~~ of_clk_get_parent_name Fixes: a3047a81ba13 ("net: ethernet: ti: cpts: add support for ext rftclk selection") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12Merge branch 'net-mvpp2-prs-Fixes-for-VID-filtering'David S. Miller1-11/+12
Maxime Chevallier says: ==================== net: mvpp2: prs: Fixes for VID filtering This series fixes some issues with VID filtering offload, mainly due to the wrong ranges being used in the TCAM header parser. The first patch fixes a bug where removing a VLAN from a port's whitelist would also remove it from other ports', if they are on the same PPv2 instance. The second patch makes sure that we don't invalidate the wrong TCAM entries when clearing the whole whitelist. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: mvpp2: prs: Use the correct helpers when removing all VID filtersMaxime Chevallier1-2/+4
When removing all VID filters, mvpp2_prs_vid_entry_remove() would be called with the TCAM id incorrectly used as a VID, causing the wrong TCAM entries to be invalidated. Fix this by directly invalidating entries in the VID range. Fixes: 56beda3db602 ("net: mvpp2: Add hardware offloading for VLAN filtering") Suggested-by: Yuri Chipchev <yuric@marvell.com> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: mvpp2: prs: Fix parser range for VID filteringMaxime Chevallier1-9/+8
VID filtering is implemented in the Header Parser, with one range of 11 vids being assigned to each non-loopback port. Make sure we use the per-port range when looking for existing entries in the Parser. Since we used a global range instead of a per-port one, VIDs were removed from the whitelist of all ports of the same PPv2 instance. Fixes: 56beda3db602 ("net: mvpp2: Add hardware offloading for VLAN filtering") Suggested-by: Yuri Chipchev <yuric@marvell.com> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12Merge branch 'mlxsw-Various-fixes'David S. Miller7-10/+161
Ido Schimmel says: ==================== mlxsw: Various fixes This patchset contains various fixes for mlxsw. Patch #1 fixes a hash polarization problem when a nexthop device is a LAG device. This is caused by the fact that the same seed is used for the LAG and ECMP hash functions. Patch #2 fixes an issue in which the driver fails to refresh a nexthop neighbour after it becomes dead. This prevents the nexthop from ever being written to the adjacency table and used to forward traffic. Patch #4 fixes a wrong extraction of the TOS value in the flower offload code. Patch #5 is a test case. Patch #6 works around a buffer issue in Spectrum-2 by reducing the default sizes of the shared buffer pools. Patch #7 prevents prio-tagged packets from entering the switch when PVID is removed from the bridge port. Please consider patches #2, #4 and #6 for 5.1.y ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mlxsw: spectrum: Disallow prio-tagged packets when PVID is removedIdo Schimmel1-1/+1
When PVID is removed from a bridge port, the Linux bridge drops both untagged and prio-tagged packets. Align mlxsw with this behavior. Fixes: 148f472da5db ("mlxsw: reg: Add the Switch Port Acceptable Frame Types register") Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mlxsw: spectrum_buffers: Reduce pool size on Spectrum-2Petr Machata1-2/+2
Due to an issue on Spectrum-2, in front-panel ports split four ways, 2 out of 32 port buffers cannot be used. To work around this, the next FW release will mark them as unused, and will report correspondingly lower total shared buffer size. mlxsw will pick up the new value through a query to cap_total_buffer_size resource. However the initial size for shared buffer pool 0 is hard-coded and therefore needs to be updated. Thus reduce the pool size by 2.7 MiB (which corresponds to 2/32 of the total size of 42 MiB), and round down to the whole number of cells. Fixes: fe099bf682ab ("mlxsw: spectrum_buffers: Add Spectrum-2 shared buffer configuration") Signed-off-by: Petr Machata <petrm@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12selftests: tc_flower: Add TOS matching testJiri Pirko1-1/+35
Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mlxsw: spectrum_flower: Fix TOS matchingJiri Pirko1-2/+2
The TOS value was not extracted correctly. Fix it. Fixes: 87996f91f739 ("mlxsw: spectrum_flower: Add support for ip tos") Reported-by: Alexander Petrovskiy <alexpe@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12selftests: mlxsw: Test nexthop offload indicationIdo Schimmel1-0/+47
Test that IPv4 and IPv6 nexthops are correctly marked with offload indication in response to neighbour events. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mlxsw: spectrum_router: Refresh nexthop neighbour when it becomes deadIdo Schimmel1-3/+70
The driver tries to periodically refresh neighbours that are used to reach nexthops. This is done by periodically calling neigh_event_send(). However, if the neighbour becomes dead, there is nothing we can do to return it to a connected state and the above function call is basically a NOP. This results in the nexthop never being written to the device's adjacency table and therefore never used to forward packets. Fix this by dropping our reference from the dead neighbour and associating the nexthop with a new neighbour which we will try to refresh. Fixes: a7ff87acd995 ("mlxsw: spectrum_router: Implement next-hop routing") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reported-by: Alex Veber <alexve@mellanox.com> Tested-by: Alex Veber <alexve@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mlxsw: spectrum: Use different seeds for ECMP and LAG hashIdo Schimmel1-1/+4
The same hash function and seed are used for both ECMP and LAG hash. Therefore, when a LAG device is used as a nexthop device as part of an ECMP group, hash polarization can occur and all the traffic will be hashed to a single LAG slave. Fix this by using a different seed for the LAG hash. Fixes: fa73989f2697 ("mlxsw: spectrum: Use a stable ECMP/LAG seed") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reported-by: Alex Veber <alexve@mellanox.com> Tested-by: Alex Veber <alexve@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: tls, correctly account for copied bytes with multiple sk_msgsJohn Fastabend1-1/+0
tls_sw_do_sendpage needs to return the total number of bytes sent regardless of how many sk_msgs are allocated. Unfortunately, copied (the value we return up the stack) is zero'd before each new sk_msg is allocated so we only return the copied size of the last sk_msg used. The caller (splice, etc.) of sendpage will then believe only part of its data was sent and send the missing chunks again. However, because the data actually was sent the receiver will get multiple copies of the same data. To reproduce this do multiple sendfile calls with a length close to the max record size. This will in turn call splice/sendpage, sendpage may use multiple sk_msg in this case and then returns the incorrect number of bytes. This will cause splice to resend creating duplicate data on the receiver. Andre created a C program that can easily generate this case so we will push a similar selftest for this to bpf-next shortly. The fix is to _not_ zero the copied field so that the total sent bytes is returned. Reported-by: Steinar H. Gunderson <steinar+kernel@gunderson.no> Reported-by: Andre Tomt <andre@tomt.net> Tested-by: Andre Tomt <andre@tomt.net> Fixes: d829e9c4112b ("tls: convert to generic sk_msg interface") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
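A minimal sketch of the accumulation pattern the fix restores, for illustration only (generic pseudo-C, not the actual tls_sw_do_sendpage() body; MAX_MSG_PAYLOAD is a hypothetical per-sk_msg cap):

    #define MAX_MSG_PAYLOAD 16384   /* hypothetical per-sk_msg capacity */

    static ssize_t do_sendpage_sketch(size_t size)
    {
        size_t copied = 0;  /* total bytes consumed, returned to the caller */

        while (size) {
            size_t chunk = size < MAX_MSG_PAYLOAD ? size : MAX_MSG_PAYLOAD;

            /* Bug before the fix: a "copied = 0;" here, run whenever a new
             * sk_msg was allocated, dropped the bytes already accounted for
             * previous sk_msgs, so only the last chunk was reported.
             */
            copied += chunk;
            size -= chunk;
        }
        return copied;
    }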
2019-06-12net: dsa: Deal with non-existing PHY/fixed-linkFlorian Fainelli1-1/+1
We need to specifically deal with phylink_of_phy_connect() returning -ENODEV, because this can happen when a CPU/DSA port neither connects to a PHY nor has a fixed-link property. This is a valid use case that is permitted by the binding and indicates to the switch: auto-configure the port with maximum capabilities. Fixes: 0e27921816ad ("net: dsa: Use PHYLINK for the CPU/DSA ports") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12vrf: Increment Icmp6InMsgs on the original netdevStephen Suryaputra3-8/+29
Get the ingress interface and increment ICMP counters based on that instead of skb->dev when the dev is a VRF device. This is a follow-up on the following message: https://www.spinics.net/lists/netdev/msg560268.html v2: Avoid changing skb->dev since it has unintended effects for local delivery (David Ahern). Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12cpuset: restore sanity to cpuset_cpus_allowed_fallback()Joel Savitz1-1/+14
In the case that a process is constrained by taskset(1) (i.e. sched_setaffinity(2)) to a subset of available cpus, and all of those are subsequently offlined, the scheduler will set tsk->cpus_allowed to the current value of task_cs(tsk)->effective_cpus. This is done via a call to do_set_cpus_allowed() in the context of cpuset_cpus_allowed_fallback() made by the scheduler when this case is detected. This is the only call made to cpuset_cpus_allowed_fallback() in the latest mainline kernel. However, this is not sane behavior. I will demonstrate this on a system running the latest upstream kernel with the following initial configuration: # grep -i cpu /proc/$$/status Cpus_allowed: ffffffff,fffffff Cpus_allowed_list: 0-63 (Where cpus 32-63 are provided via smt.) If we limit our current shell process to cpu2 only and then offline it and reonline it: # taskset -p 4 $$ pid 2272's current affinity mask: ffffffffffffffff pid 2272's new affinity mask: 4 # echo off > /sys/devices/system/cpu/cpu2/online # dmesg | tail -3 [ 2195.866089] process 2272 (bash) no longer affine to cpu2 [ 2195.872700] IRQ 114: no longer affine to CPU2 [ 2195.879128] smpboot: CPU 2 is now offline # echo on > /sys/devices/system/cpu/cpu2/online # dmesg | tail -1 [ 2617.043572] smpboot: Booting Node 0 Processor 2 APIC 0x4 We see that our current process now has an affinity mask containing every cpu available on the system _except_ the one we originally constrained it to: # grep -i cpu /proc/$$/status Cpus_allowed: ffffffff,fffffffb Cpus_allowed_list: 0-1,3-63 This is not sane behavior, as the scheduler can now not only place the process on previously forbidden cpus, it can't even schedule it on the cpu it was originally constrained to! Other cases result in even more exotic affinity masks. Take for instance a process with an affinity mask containing only cpus provided by smt at the moment that smt is toggled, in a configuration such as the following: # taskset -p f000000000 $$ # grep -i cpu /proc/$$/status Cpus_allowed: 000000f0,00000000 Cpus_allowed_list: 36-39 A double toggle of smt results in the following behavior: # echo off > /sys/devices/system/cpu/smt/control # echo on > /sys/devices/system/cpu/smt/control # grep -i cpus /proc/$$/status Cpus_allowed: ffffff00,ffffffff Cpus_allowed_list: 0-31,40-63 This is even less sane than the previous case, as the new affinity mask excludes all smt-provided cpus with ids less than those that were previously in the affinity mask, as well as those that were actually in the mask. With this patch applied, both of these cases end in the following state: # grep -i cpu /proc/$$/status Cpus_allowed: ffffffff,ffffffff Cpus_allowed_list: 0-63 The original policy is discarded. Though not ideal, it is the simplest way to restore sanity to this fallback case without reinventing the cpuset wheel that rolls down the kernel just fine in cgroup v2. A user who wishes for the previous affinity mask to be restored in this fallback case can use that mechanism instead. This patch modifies scheduler behavior by instead resetting the mask to task_cs(tsk)->cpus_allowed by default, and cpu_possible mask in legacy mode. I tested the cases above on both modes. Note that the scheduler uses this fallback mechanism if and only if _every_ other valid avenue has been traveled, and it is the last resort before calling BUG(). 
Suggested-by: Waiman Long <longman@redhat.com> Suggested-by: Phil Auld <pauld@redhat.com> Signed-off-by: Joel Savitz <jsavitz@redhat.com> Acked-by: Phil Auld <pauld@redhat.com> Acked-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Tejun Heo <tj@kernel.org>
2019-06-12net: dsa: mv88e6xxx: lock mutex in port_fdb_dumpVivien Didelot1-8/+6
During a port FDB dump operation, the mutex protecting the concurrent access to the switch registers is currently held by the internal mv88e6xxx_port_db_dump and mv88e6xxx_port_db_dump_fid helpers. It must be held at the higher level in mv88e6xxx_port_fdb_dump which is called directly by DSA through ds->ops->port_fdb_dump. Fix this. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ethtool: Allow matching on vlan DEI bitMaxime Chevallier2-0/+6
Using ethtool, users can specify a classification action matching on the full vlan tag, which includes the DEI bit (also previously called CFI). However, when converting the ethtool_flow_spec to a flow_rule, we use dissector keys to represent the matching patterns. Since the vlan dissector key doesn't include the DEI bit, this information was silently discarded when translating the ethtool flow spec into a flow_rule. This commit adds the DEI bit into the vlan dissector key, and allows propagating the information to the driver when parsing the ethtool flow spec. Fixes: eca4205f9ec3 ("ethtool: add ethtool_rx_flow_spec to flow_rule structure translator") Reported-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
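For reference, the DEI bit is bit 12 of the 16-bit VLAN TCI. A hedged sketch of splitting a TCI into its fields using the standard uapi masks from <linux/if_vlan.h> (the struct below is purely illustrative, not the flow dissector key layout):

    #include <linux/if_vlan.h>

    struct vlan_match_sketch {
        u16 vid;    /* VLAN id, bits 0-11 */
        u8  pcp;    /* priority code point, bits 13-15 */
        u8  dei;    /* drop eligible indicator (former CFI), bit 12 */
    };

    static void tci_to_match(u16 tci, struct vlan_match_sketch *m)
    {
        m->vid = tci & VLAN_VID_MASK;
        m->dei = !!(tci & VLAN_CFI_MASK);
        m->pcp = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
    }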
2019-06-12linux-next: DOC: RDS: Fix a typo in rds.txtMasanari Iida1-1/+1
This patch fixes a spelling typo in rds.txt Signed-off-by: Masanari Iida <standby24x7@gmail.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12x86/kgdb: Return 0 from kgdb_arch_set_breakpoint()Matt Mullins1-1/+1
err must be nonzero in order to reach text_poke(), which caused kgdb to fail to set breakpoints: (gdb) break __x64_sys_sync Breakpoint 1 at 0xffffffff81288910: file ../fs/sync.c, line 124. (gdb) c Continuing. Warning: Cannot insert breakpoint 1. Cannot access memory at address 0xffffffff81288910 Command aborted. Fixes: 86a22057127d ("x86/kgdb: Avoid redundant comparison of patched code") Signed-off-by: Matt Mullins <mmullins@fb.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Nadav Amit <namit@vmware.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: Douglas Anderson <dianders@chromium.org> Cc: "Gustavo A. R. Silva" <gustavo@embeddedor.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190531194755.6320-1-mmullins@fb.com
2019-06-12dt-bindings: net: wiznet: add w5x00 supportNicolas Saenz Julienne1-0/+50
Add bindings for Wiznet's w5x00 series of SPI interfaced Ethernet chips. Based on the bindings for microchip,enc28j60. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: ethernet: wiznet: w5X00 add device tree supportNicolas Saenz Julienne1-2/+22
The w5X00 chip provides an SPI to Ethernet interface. This patch allows platform devices to be defined through the device tree. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12mpls: fix af_mpls dependencies for realMatteo Croce1-1/+1
Randy reported that selecting MPLS_ROUTING without PROC_FS breaks the build, because since commit c1a9d65954c6 ("mpls: fix af_mpls dependencies"), MPLS_ROUTING selects PROC_SYSCTL, but Kconfig's select doesn't recursively handle dependencies. Change the select into a dependency. Fixes: c1a9d65954c6 ("mpls: fix af_mpls dependencies") Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-12net: sched: ingress: set 'unlocked' flag for Qdisc opsVlad Buslov1-0/+1
To remove rtnl lock dependency in tc filter update API when using ingress Qdisc, set QDISC_CLASS_OPS_DOIT_UNLOCKED flag in ingress Qdisc_class_ops. Ingress Qdisc ops don't require any modifications to be used without rtnl lock on tc filter update path. Ingress implementation never changes its q->block and only releases it when Qdisc is being destroyed. This means it is enough for RTM_{NEWTFILTER|DELTFILTER|GETTFILTER} message handlers to hold ingress Qdisc reference while using it without relying on rtnl lock protection. Unlocked Qdisc ops support is already implemented in filter update path by unlocked cls API patch set. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
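Conceptually the change boils down to setting the new flag in the ingress class ops; a rough, abridged sketch (only QDISC_CLASS_OPS_DOIT_UNLOCKED comes from the commit text, the callback names are assumed to match the existing sch_ingress.c code, and the member list is not complete):

    static const struct Qdisc_class_ops ingress_class_ops = {
        .flags      = QDISC_CLASS_OPS_DOIT_UNLOCKED, /* filter updates may skip rtnl */
        .leaf       = ingress_leaf,
        .find       = ingress_find,
        .walk       = ingress_walk,
        .tcf_block  = ingress_tcf_block,
    };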
2019-06-12selinux: fix a missing-check bug in selinux_sb_eat_lsm_opts()Gen Zhang1-6/+14
In selinux_sb_eat_lsm_opts(), 'arg' is allocated by kmemdup_nul(). It returns NULL when fails. So 'arg' should be checked. And 'mnt_opts' should be freed when error. Signed-off-by: Gen Zhang <blackgod016574@gmail.com> Fixes: 99dbbb593fe6 ("selinux: rewrite selinux_sb_eat_lsm_opts()") Cc: <stable@vger.kernel.org> Signed-off-by: Paul Moore <paul@paul-moore.com>
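The general shape of the fix, as a hedged sketch (generic code, not the selinux functions; kmemdup_nul() is the real kernel helper, everything else is illustrative):

    /* Check the kmemdup_nul() result and release anything built so far
     * before bailing out, so the error path does not leak.
     */
    static int eat_opt_sketch(const char *from, size_t len, char **mnt_opts)
    {
        char *arg = kmemdup_nul(from, len, GFP_KERNEL);

        if (!arg) {
            kfree(*mnt_opts);
            *mnt_opts = NULL;
            return -ENOMEM;
        }
        /* ... consume arg ... */
        kfree(arg);
        return 0;
    }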
2019-06-12Merge branch 'mediatek-drm-fixes-5.2' of ↵Daniel Vetter4-31/+26
https://github.com/ckhu-mediatek/linux.git-tags into drm-fixes CK writes: This includes an unbind error fix, a clock control flow refinement, and PRIME mmap with page offset. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> From: CK Hu <ck.hu@mediatek.com> Link: https://patchwork.freedesktop.org/patch/msgid/1560325868.3259.6.camel@mtksdaap41
2019-06-12Merge tag 'media/v5.2-2' of ↵Linus Torvalds2-3/+3
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media Pull media fixes from Mauro Carvalho Chehab: - a debug warning for satellite tuning at dvb core was producing too much noise - a regression at hfi_parser on Venus driver * tag 'media/v5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: media: venus: hfi_parser: fix a regression in parser media: dvb: warning about dvb frequency limits produces too much noise
2019-06-12selinux: fix a missing-check bug in selinux_add_mnt_opt( )Gen Zhang1-5/+14
In selinux_add_mnt_opt(), 'val' is allocated by kmemdup_nul(). It returns NULL when fails. So 'val' should be checked. And 'mnt_opts' should be freed when error. Signed-off-by: Gen Zhang <blackgod016574@gmail.com> Fixes: 757cbe597fe8 ("LSM: new method: ->sb_add_mnt_opt()") Cc: <stable@vger.kernel.org> [PM: fixed some indenting problems] Signed-off-by: Paul Moore <paul@paul-moore.com>
2019-06-12arm64: tlbflush: Ensure start/end of address range are aligned to strideWill Deacon1-0/+3
Since commit 3d65b6bbc01e ("arm64: tlbi: Set MAX_TLBI_OPS to PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to perform more than PTRS_PER_PTE invalidation instructions in a single call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather code does not ensure that the end address of the range is rounded-up to the stride when freeing intermediate page tables in pXX_free_tlb(), which defeats our range checking. Align the bounds passed into __flush_tlb_range(). Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Reported-by: Hanjun Guo <guohanjun@huawei.com> Tested-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
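The alignment itself is small; a hedged sketch of the idea using the kernel's round_down()/round_up() helpers (the wrapper function is illustrative, and the stride is assumed to be a power of two):

    /* Align the invalidation bounds to the stride so the range check in
     * __flush_tlb_range() sees the count the page-table walk implied.
     */
    static inline void align_flush_range(unsigned long *start, unsigned long *end,
                                         unsigned long stride)
    {
        *start = round_down(*start, stride);
        *end = round_up(*end, stride);
    }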
2019-06-12usb: typec: Make sure an alt mode exist before getting its partnerHeikki Krogerus1-1/+1
Adding check to typec_altmode_get_partner() to prevent potential NULL pointer dereference. Reported-by: Vladimir Yerilov <openmindead@gmail.com> Fixes: ad74b8649bea ("usb: typec: ucsi: Preliminary support for alternate modes") Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-12KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroyDave Martin1-0/+1
kvm_device->destroy() seems to be supposed to free its kvm_device struct, but vgic_its_destroy() is not currently doing this, resulting in a memory leak that shows up in kmemleak reports such as the following: unreferenced object 0xffff800aeddfe280 (size 128): comm "qemu-system-aar", pid 13799, jiffies 4299827317 (age 1569.844s) [...] backtrace: [<00000000a08b80e2>] kmem_cache_alloc+0x178/0x208 [<00000000dcad2bd3>] kvm_vm_ioctl+0x350/0xbc0 Fix it. Cc: Andre Przywara <andre.przywara@arm.com> Fixes: 1085fdc68c60 ("KVM: arm64: vgic-its: Introduce new KVM ITS device") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-06-12KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LISTDave Martin1-13/+40
Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs that do not correspond to a single underlying architectural register. KVM_GET_REG_LIST was not changed to match however: instead, it simply yields a list of 32-bit register IDs that together cover the whole kvm_regs struct. This means that if userspace tries to use the resulting list of IDs directly to drive calls to KVM_*_ONE_REG, some of those calls will now fail. This was not the intention. Instead, iterating KVM_*_ONE_REG over the list of IDs returned by KVM_GET_REG_LIST should be guaranteed to work. This patch fixes the problem by splitting validate_core_offset() into a backend core_reg_size_from_offset() which does all of the work except for checking that the size field in the register ID matches, and kvm_arm_copy_reg_indices() and num_core_regs() are converted to use this to enumerate the valid offsets. kvm_arm_copy_reg_indices() now also sets the register ID size field appropriately based on the value returned, so the register ID supplied to userspace is fully qualified for use with the register access ioctls. Cc: stable@vger.kernel.org Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Tested-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-06-12KVM: arm64: Implement vq_present() as a macroViresh Kumar1-9/+3
This routine is a one-liner and doesn't really need to be a function; it can be implemented as a macro. Suggested-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-06-12bpf: silence warning messages in coreValdis Klētnieks1-0/+1
Compiling kernel/bpf/core.c with W=1 causes a flood of warnings: kernel/bpf/core.c:1198:65: warning: initialized field overwritten [-Woverride-init] 1198 | #define BPF_INSN_3_TBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = true | ^~~~ kernel/bpf/core.c:1087:2: note: in expansion of macro 'BPF_INSN_3_TBL' 1087 | INSN_3(ALU, ADD, X), \ | ^~~~~~ kernel/bpf/core.c:1202:3: note: in expansion of macro 'BPF_INSN_MAP' 1202 | BPF_INSN_MAP(BPF_INSN_2_TBL, BPF_INSN_3_TBL), | ^~~~~~~~~~~~ kernel/bpf/core.c:1198:65: note: (near initialization for 'public_insntable[12]') 1198 | #define BPF_INSN_3_TBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = true | ^~~~ kernel/bpf/core.c:1087:2: note: in expansion of macro 'BPF_INSN_3_TBL' 1087 | INSN_3(ALU, ADD, X), \ | ^~~~~~ kernel/bpf/core.c:1202:3: note: in expansion of macro 'BPF_INSN_MAP' 1202 | BPF_INSN_MAP(BPF_INSN_2_TBL, BPF_INSN_3_TBL), | ^~~~~~~~~~~~ 98 copies of the above. The attached patch silences the warnings, because we *know* we're overwriting the default initializer. That leaves bpf/core.c with only 6 other warnings, which become more visible in comparison. Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-12xdp: check device pointer before clearingIlya Maximets1-5/+6
We should not call 'ndo_bpf()' or 'dev_put()' with a NULL argument. Fixes: c9b47cc1fabc ("xsk: fix bug when trying to use both copy and zero-copy on one queue id") Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-12bpf: net: Set sk_bpf_storage back to NULL for cloned skMartin KaFai Lau1-0/+3
The cloned sk should not carry its parent-listener's sk_bpf_storage. This patch fixes it by setting it back to NULL. Fixes: 6ac99e8f23d4 ("bpf: Introduce bpf sk local storage") Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
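The fix itself is tiny; a hedged sketch of where the clearing belongs in the clone path (wrapped in a helper here purely for illustration — the field name sk_bpf_storage comes from the commit title, and the CONFIG_BPF_SYSCALL guard is assumed):

    /* Make sure the freshly cloned socket does not inherit the listener's
     * bpf sk local storage pointer.
     */
    static inline void sk_clone_clear_bpf_storage(struct sock *newsk)
    {
    #ifdef CONFIG_BPF_SYSCALL
        RCU_INIT_POINTER(newsk->sk_bpf_storage, NULL);
    #endif
    }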
2019-06-12Btrfs: fix race between block group removal and block group allocationFilipe Manana1-11/+13
If a task is removing the block group that currently has the highest start offset amongst all existing block groups, there is a short time window where it races with a concurrent block group allocation, resulting in a transaction abort with an error code of EEXIST. The following diagram explains the race in detail: Task A Task B btrfs_remove_block_group(bg offset X) remove_extent_mapping(em offset X) -> removes extent map X from the tree of extent maps (fs_info->mapping_tree), so the next call to find_next_chunk() will return offset X btrfs_alloc_chunk() find_next_chunk() --> returns offset X __btrfs_alloc_chunk(offset X) btrfs_make_block_group() btrfs_create_block_group_cache() --> creates btrfs_block_group_cache object with a key corresponding to the block group item in the extent, the key is: (offset X, BTRFS_BLOCK_GROUP_ITEM_KEY, 1G) --> adds the btrfs_block_group_cache object to the list new_bgs of the transaction handle btrfs_end_transaction(trans handle) __btrfs_end_transaction() btrfs_create_pending_block_groups() --> sees the new btrfs_block_group_cache in the new_bgs list of the transaction handle --> its call to btrfs_insert_item() fails with -EEXIST when attempting to insert the block group item key (offset X, BTRFS_BLOCK_GROUP_ITEM_KEY, 1G) because task A has not removed that key yet --> aborts the running transaction with error -EEXIST btrfs_del_item() -> removes the block group's key from the extent tree, key is (offset X, BTRFS_BLOCK_GROUP_ITEM_KEY, 1G) A sample transaction abort trace: [78912.403537] ------------[ cut here ]------------ [78912.403811] BTRFS: Transaction aborted (error -17) [78912.404082] WARNING: CPU: 2 PID: 20465 at fs/btrfs/extent-tree.c:10551 btrfs_create_pending_block_groups+0x196/0x250 [btrfs] (...) [78912.405642] CPU: 2 PID: 20465 Comm: btrfs Tainted: G W 5.0.0-btrfs-next-46 #1 [78912.405941] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.2-0-gf9626ccb91-prebuilt.qemu-project.org 04/01/2014 [78912.406586] RIP: 0010:btrfs_create_pending_block_groups+0x196/0x250 [btrfs] (...) [78912.407636] RSP: 0018:ffff9d3d4b7e3b08 EFLAGS: 00010282 [78912.407997] RAX: 0000000000000000 RBX: ffff90959a3796f0 RCX: 0000000000000006 [78912.408369] RDX: 0000000000000007 RSI: 0000000000000001 RDI: ffff909636b16860 [78912.408746] RBP: ffff909626758a58 R08: 0000000000000000 R09: 0000000000000000 [78912.409144] R10: ffff9095ff462400 R11: 0000000000000000 R12: ffff90959a379588 [78912.409521] R13: ffff909626758ab0 R14: ffff9095036c0000 R15: ffff9095299e1158 [78912.409899] FS: 00007f387f16f700(0000) GS:ffff909636b00000(0000) knlGS:0000000000000000 [78912.410285] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [78912.410673] CR2: 00007f429fc87cbc CR3: 000000014440a004 CR4: 00000000003606e0 [78912.411095] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [78912.411496] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [78912.411898] Call Trace: [78912.412318] __btrfs_end_transaction+0x5b/0x1c0 [btrfs] [78912.412746] btrfs_inc_block_group_ro+0xcf/0x160 [btrfs] [78912.413179] scrub_enumerate_chunks+0x188/0x5b0 [btrfs] [78912.413622] ? __mutex_unlock_slowpath+0x100/0x2a0 [78912.414078] btrfs_scrub_dev+0x2ef/0x720 [btrfs] [78912.414535] ? __sb_start_write+0xd4/0x1c0 [78912.414963] ? mnt_want_write_file+0x24/0x50 [78912.415403] btrfs_ioctl+0x17fb/0x3120 [btrfs] [78912.415832] ? lock_acquire+0xa6/0x190 [78912.416256] ? do_vfs_ioctl+0xa2/0x6f0 [78912.416685] ? 
btrfs_ioctl_get_supported_features+0x30/0x30 [btrfs] [78912.417116] do_vfs_ioctl+0xa2/0x6f0 [78912.417534] ? __fget+0x113/0x200 [78912.417954] ksys_ioctl+0x70/0x80 [78912.418369] __x64_sys_ioctl+0x16/0x20 [78912.418812] do_syscall_64+0x60/0x1b0 [78912.419231] entry_SYSCALL_64_after_hwframe+0x49/0xbe [78912.419644] RIP: 0033:0x7f3880252dd7 (...) [78912.420957] RSP: 002b:00007f387f16ed68 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [78912.421426] RAX: ffffffffffffffda RBX: 000055f5becc1df0 RCX: 00007f3880252dd7 [78912.421889] RDX: 000055f5becc1df0 RSI: 00000000c400941b RDI: 0000000000000003 [78912.422354] RBP: 0000000000000000 R08: 00007f387f16f700 R09: 0000000000000000 [78912.422790] R10: 00007f387f16f700 R11: 0000000000000246 R12: 0000000000000000 [78912.423202] R13: 00007ffda49c266f R14: 0000000000000000 R15: 00007f388145e040 [78912.425505] ---[ end trace eb9bfe7c426fc4d3 ]--- Fix this by calling remove_extent_mapping(), at btrfs_remove_block_group(), only at the very end, after removing the block group item key from the extent tree (and removing the free space tree entry if we are using the free space tree feature). Fixes: 04216820fe83d5 ("Btrfs: fix race between fs trimming and block group remove/allocation") CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>