Age | Commit message | Author | Files | Lines
2019-02-15 | batman-adv: Avoid WARN on net_device without parent in netns | Sven Eckelmann | 1 | -2/+3
commit 955d3411a17f590364238bd0d3329b61f20c1cd2 upstream. It is not allowed to use WARN* helpers on potential incorrect input from the user or transient problems because systems configured as panic_on_warn will reboot due to such a problem. A NULL return value of __dev_get_by_index can be caused by various problems which can either be related to the system configuration or problems (incorrectly returned network namespaces) in other (virtual) net_device drivers. batman-adv should not cause a (harmful) WARN in this situation and instead only report it via a simple message. Fixes: b7eddd0b3950 ("batman-adv: prevent using any virtual device created on batman-adv as hard-interface") Reported-by: syzbot+c764de0fcfadca9a8595@syzkaller.appspotmail.com Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | xfrm: refine validation of template and selector families | Florian Westphal | 1 | -4/+9
commit 35e6103861a3a970de6c84688c6e7a1f65b164ca upstream. The check assumes that in transport mode, the first template's family must match the address family of the policy selector. Syzkaller managed to build a template using MODE_ROUTEOPTIMIZATION, with an ipv4-in-ipv6 chain, leading to the following splat:

  BUG: KASAN: stack-out-of-bounds in xfrm_state_find+0x1db/0x1854
  Read of size 4 at addr ffff888063e57aa0 by task a.out/2050
   xfrm_state_find+0x1db/0x1854
   xfrm_tmpl_resolve+0x100/0x1d0
   xfrm_resolve_and_create_bundle+0x108/0x1000
   [..]

The problem is that the addresses point into a flowi4 struct, but xfrm_state_find treats them as ipv6 addresses because templ->encap_family (AF_INET6 in the case of the reproducer) is used rather than family (AF_INET). This patch inverts the logic: enforce 'template family must match selector' EXCEPT for tunnel and BEET mode. In BEET and tunnel mode, xfrm_tmpl_resolve_one will have the remote/local address pointers changed to point at the addresses found in the template, rather than the flowi ones, so no out-of-bounds read will occur. Reported-by: 3ntr0py1337@gmail.com Reported-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | libceph: avoid KEEPALIVE_PENDING races in ceph_con_keepalive() | Ilya Dryomov | 1 | -2/+3
commit 4aac9228d16458cedcfd90c7fb37211cf3653ac3 upstream. con_fault() can transition the connection into STANDBY right after ceph_con_keepalive() clears STANDBY in clear_standby():

  libceph user thread                     ceph-msgr worker

  ceph_con_keepalive()
    mutex_lock(&con->mutex)
    clear_standby(con)
    mutex_unlock(&con->mutex)
                                          mutex_lock(&con->mutex)
                                          con_fault()
                                            ...
                                            if KEEPALIVE_PENDING isn't set
                                              set state to STANDBY
                                            ...
                                          mutex_unlock(&con->mutex)
    set KEEPALIVE_PENDING
    set WRITE_PENDING

This triggers warnings in clear_standby() when either ceph_con_send() or ceph_con_keepalive() get to clearing STANDBY next time. I don't see a reason to condition queue_con() call on the previous value of KEEPALIVE_PENDING, so move the setting of KEEPALIVE_PENDING into the critical section -- unlike WRITE_PENDING, KEEPALIVE_PENDING could have been a non-atomic flag. Reported-by: syzbot+acdeb633f6211ccdf886@syzkaller.appspotmail.com Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Tested-by: Myungho Jung <mhjungk@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
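The fix pattern here, setting a state flag inside the same critical section that the worker inspects it under, can be illustrated with a small userspace analogue. Names and the pthread-based locking below are illustrative assumptions, not the libceph code:

    #include <pthread.h>
    #include <stdbool.h>

    struct conn {
        pthread_mutex_t mutex;
        bool standby;
        bool keepalive_pending;
    };

    /* Racy version: the flag is set after the lock is dropped, so a worker
     * thread can observe "not standby, no keepalive pending" in between and
     * flip the connection back to standby. */
    void keepalive_racy(struct conn *c)
    {
        pthread_mutex_lock(&c->mutex);
        c->standby = false;
        pthread_mutex_unlock(&c->mutex);
        c->keepalive_pending = true;     /* too late: worker may have run */
    }

    /* Fixed version: clear STANDBY and set KEEPALIVE_PENDING under the same
     * lock, so the worker always sees a consistent pair of flags. */
    void keepalive_fixed(struct conn *c)
    {
        pthread_mutex_lock(&c->mutex);
        c->standby = false;
        c->keepalive_pending = true;
        pthread_mutex_unlock(&c->mutex);
    }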
2019-02-15Revert "ext4: use ext4_write_inode() when fsyncing w/o a journal"Theodore Ts'o1-9/+4
commit 8fdd60f2ae3682caf2a7258626abc21eb4711892 upstream. This reverts commit ad211f3e94b314a910d4af03178a0b52a7d1ee0a. As Jan Kara pointed out, this change was unsafe since it means we lose the call to sync_mapping_buffers() in the nojournal case. The original point of the commit was to avoid taking the inode mutex (since it causes a lockdep warning in generic/113); but we need the mutex in order to call sync_mapping_buffers(). The real fix to this problem was discussed here: https://lore.kernel.org/lkml/20181025150540.259281-4-bvanassche@acm.org The proposed patch was to fix a syzbot complaint, but the problem can also be demonstrated via "kvm-xfstests -c nojournal generic/113". Multiple solutions were discussed in the e-mail thread, but none have landed in the kernel as of this writing. Anyway, commit ad211f3e94b314 is absolutely the wrong way to suppress the lockdep warning, so revert it. Fixes: ad211f3e94b314a910d4af03178a0b52a7d1ee0a ("ext4: use ext4_write_inode() when fsyncing w/o a journal") Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reported: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | HID: debug: fix the ring buffer implementation | Vladis Dronov | 2 | -79/+52
commit 13054abbaa4f1fd4e6f3b4b63439ec033b4c8035 upstream. Ring buffer implementation in hid_debug_event() and hid_debug_events_read() is strange allowing lost or corrupted data. After commit 717adfdaf147 ("HID: debug: check length before copy_to_user()") it is possible to enter an infinite loop in hid_debug_events_read() by providing 0 as count, this locks up a system. Fix this by rewriting the ring buffer implementation with kfifo and simplify the code. This fixes CVE-2019-3819. v2: fix an execution logic and add a comment v3: use __set_current_state() instead of set_current_state() Backport to v4.14: 2 tree-wide patches 6396bb22151 ("treewide: kzalloc() -> kcalloc()") and a9a08845e9ac ("vfs: do bulk POLL* -> EPOLL* replacement") are missing in v4.14 so cherry-pick relevant pieces. Link: https://bugzilla.redhat.com/show_bug.cgi?id=1669187 Cc: stable@vger.kernel.org # v4.18+ Fixes: cd667ce24796 ("HID: use debugfs for events/reports dumping") Fixes: 717adfdaf147 ("HID: debug: check length before copy_to_user()") Signed-off-by: Vladis Dronov <vdronov@redhat.com> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
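The count == 0 hang described above is a generic pitfall in read-style loops that only exit once at least one byte has been copied. A minimal userspace sketch of the pattern and the up-front guard (hypothetical names, not the HID debugfs code):

    #include <stdio.h>
    #include <string.h>

    /* Reads up to `count` bytes out of a ring of `avail` bytes. Without the
     * early return, a caller passing count == 0 would spin forever in the
     * loop below, because it only exits once something was copied. */
    static size_t ring_read(const char *src, size_t avail, char *dst, size_t count)
    {
        size_t copied = 0;

        if (count == 0)          /* the guard: nothing to do, return at once */
            return 0;

        while (copied == 0) {    /* wait until at least one byte was copied */
            size_t n = avail < count ? avail : count;
            memcpy(dst, src, n);
            copied = n;
        }
        return copied;
    }

    int main(void)
    {
        char out[8];
        printf("%zu\n", ring_read("hello", 5, out, 0));            /* 0, returns immediately */
        printf("%zu\n", ring_read("hello", 5, out, sizeof(out)));  /* 5 */
        return 0;
    }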
2019-02-15 | drm/vmwgfx: Return error code from vmw_execbuf_copy_fence_user | Thomas Hellstrom | 1 | -1/+1
commit 728354c005c36eaf44b6e5552372b67e60d17f56 upstream. The function was unconditionally returning 0, and a caller would have to rely on the returned fence pointer being NULL to detect errors. However, the function vmw_execbuf_copy_fence_user() would expect a non-zero error code in that case and would BUG otherwise. So make sure we return a proper non-zero error code if the fence pointer returned is NULL. Cc: <stable@vger.kernel.org> Fixes: ae2a104058e2: ("vmwgfx: Implement fence objects") Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Deepak Rawat <drawat@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | drm/vmwgfx: Fix setting of dma masks | Thomas Hellstrom | 1 | -3/+6
commit 4cbfa1e6c09e98450aab3240e5119b0ab2c9795b upstream. Previously we set only the dma mask and not the coherent mask. Fix that. Also, for clarity, make sure both are initially set to 64 bits. Cc: <stable@vger.kernel.org> Fixes: 0d00c488f3de: ("drm/vmwgfx: Fix the driver for large dma addresses") Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Deepak Rawat <drawat@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | drm/modes: Prevent division by zero htotal | Tina Zhang | 1 | -1/+1
commit a2fcd5c84f7a7825e028381b10182439067aa90d upstream. This patch prevents division by zero htotal. In a follow-up mail Tina writes: > > How did you manage to get here with htotal == 0? This needs backtraces (or if > > this is just about static checkers, a mention of that). > > -Daniel > > In GVT-g, we are trying to enable a virtual display w/o setting timings for a pipe > (a.k.a htotal=0), then we met the following kernel panic: > > [ 32.832048] divide error: 0000 [#1] SMP PTI > [ 32.833614] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.0-rc4-sriov+ #33 > [ 32.834438] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.10.1-0-g8891697-dirty-20180511_165818-tinazhang-linux-1 04/01/2014 > [ 32.835901] RIP: 0010:drm_mode_hsync+0x1e/0x40 > [ 32.836004] Code: 31 c0 c3 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 8b 87 d8 00 00 00 85 c0 75 22 8b 4f 68 85 c9 78 1b 69 47 58 e8 03 00 00 99 <f7> f9 b9 d3 4d 62 10 05 f4 01 00 00 f7 e1 89 d0 c1 e8 06 f3 c3 66 > [ 32.836004] RSP: 0000:ffffc900000ebb90 EFLAGS: 00010206 > [ 32.836004] RAX: 0000000000000000 RBX: ffff88001c67c8a0 RCX: 0000000000000000 > [ 32.836004] RDX: 0000000000000000 RSI: ffff88001c67c000 RDI: ffff88001c67c8a0 > [ 32.836004] RBP: ffff88001c7d03a0 R08: ffff88001c67c8a0 R09: ffff88001c7d0330 > [ 32.836004] R10: ffffffff822c3a98 R11: 0000000000000001 R12: ffff88001c67c000 > [ 32.836004] R13: ffff88001c7d0370 R14: ffffffff8207eb78 R15: ffff88001c67c800 > [ 32.836004] FS: 0000000000000000(0000) GS:ffff88001da00000(0000) knlGS:0000000000000000 > [ 32.836004] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 32.836004] CR2: 0000000000000000 CR3: 000000000220a000 CR4: 00000000000006f0 > [ 32.836004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [ 32.836004] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [ 32.836004] Call Trace: > [ 32.836004] intel_mode_from_pipe_config+0x72/0x90 > [ 32.836004] intel_modeset_setup_hw_state+0x569/0xf90 > [ 32.836004] intel_modeset_init+0x905/0x1db0 > [ 32.836004] i915_driver_load+0xb8c/0x1120 > [ 32.836004] i915_pci_probe+0x4d/0xb0 > [ 32.836004] local_pci_probe+0x44/0xa0 > [ 32.836004] ? pci_assign_irq+0x27/0x130 > [ 32.836004] pci_device_probe+0x102/0x1c0 > [ 32.836004] driver_probe_device+0x2b8/0x480 > [ 32.836004] __driver_attach+0x109/0x110 > [ 32.836004] ? driver_probe_device+0x480/0x480 > [ 32.836004] bus_for_each_dev+0x67/0xc0 > [ 32.836004] ? klist_add_tail+0x3b/0x70 > [ 32.836004] bus_add_driver+0x1e8/0x260 > [ 32.836004] driver_register+0x5b/0xe0 > [ 32.836004] ? mipi_dsi_bus_init+0x11/0x11 > [ 32.836004] do_one_initcall+0x4d/0x1eb > [ 32.836004] kernel_init_freeable+0x197/0x237 > [ 32.836004] ? 
rest_init+0xd0/0xd0 > [ 32.836004] kernel_init+0xa/0x110 > [ 32.836004] ret_from_fork+0x35/0x40 > [ 32.836004] Modules linked in: > [ 32.859183] ---[ end trace 525608b0ed0e8665 ]--- > [ 32.859722] RIP: 0010:drm_mode_hsync+0x1e/0x40 > [ 32.860287] Code: 31 c0 c3 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 8b 87 d8 00 00 00 85 c0 75 22 8b 4f 68 85 c9 78 1b 69 47 58 e8 03 00 00 99 <f7> f9 b9 d3 4d 62 10 05 f4 01 00 00 f7 e1 89 d0 c1 e8 06 f3 c3 66 > [ 32.862680] RSP: 0000:ffffc900000ebb90 EFLAGS: 00010206 > [ 32.863309] RAX: 0000000000000000 RBX: ffff88001c67c8a0 RCX: 0000000000000000 > [ 32.864182] RDX: 0000000000000000 RSI: ffff88001c67c000 RDI: ffff88001c67c8a0 > [ 32.865206] RBP: ffff88001c7d03a0 R08: ffff88001c67c8a0 R09: ffff88001c7d0330 > [ 32.866359] R10: ffffffff822c3a98 R11: 0000000000000001 R12: ffff88001c67c000 > [ 32.867213] R13: ffff88001c7d0370 R14: ffffffff8207eb78 R15: ffff88001c67c800 > [ 32.868075] FS: 0000000000000000(0000) GS:ffff88001da00000(0000) knlGS:0000000000000000 > [ 32.868983] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 32.869659] CR2: 0000000000000000 CR3: 000000000220a000 CR4: 00000000000006f0 > [ 32.870599] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [ 32.871598] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [ 32.872549] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b > > Since drm_mode_hsync() has the logic to check mode->htotal, I just extend it to cover the case htotal==0. Signed-off-by: Tina Zhang <tina.zhang@intel.com> Cc: Adam Jackson <ajax@redhat.com> Cc: Dave Airlie <airlied@redhat.com> Cc: Daniel Vetter <daniel@ffwll.ch> [danvet: Add additional explanations + cc: stable.] Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/1548228539-3061-1-git-send-email-tina.zhang@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
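A minimal sketch of the guard being extended here: refuse to derive a sync rate when htotal is zero instead of dividing by it. This is a simplified stand-in, not the actual drm_mode_hsync() implementation:

    #include <stdio.h>

    struct display_mode {
        int clock;   /* pixel clock in kHz */
        int htotal;  /* total horizontal pixels per line */
    };

    /* Returns the horizontal sync rate in kHz, or 0 if it cannot be computed. */
    static int mode_hsync(const struct display_mode *m)
    {
        if (m->htotal <= 0)   /* covers the htotal == 0 virtual-display case */
            return 0;
        return (m->clock + m->htotal / 2) / m->htotal;  /* rounded division */
    }

    int main(void)
    {
        struct display_mode good = { .clock = 148500, .htotal = 2200 };
        struct display_mode bad  = { .clock = 148500, .htotal = 0 };
        printf("%d %d\n", mode_hsync(&good), mode_hsync(&bad));  /* 68 0 */
        return 0;
    }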
2019-02-15 | mac80211: ensure that mgmt tx skbs have tailroom for encryption | Felix Fietkau | 1 | -3/+9
commit 9d0f50b80222dc273e67e4e14410fcfa4130a90c upstream. Some drivers use IEEE80211_KEY_FLAG_SW_MGMT_TX to indicate that management frames need to be software encrypted. Since normal data packets are still encrypted by the hardware, crypto_tx_tailroom_needed_cnt gets decremented after key upload to hw. This can lead to passing skbs to ccmp_encrypt_skb, which don't have the necessary tailroom for software encryption. Change the code to add tailroom for encrypted management packets, even if crypto_tx_tailroom_needed_cnt is 0. Cc: stable@vger.kernel.org Signed-off-by: Felix Fietkau <nbd@nbd.name> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | ARM: tango: Improve ARCH_MULTIPLATFORM compatibility | Marc Gonzalez | 3 | -4/+11
commit d0f9f16788e15d9eb40f68b047732d49658c5a3a upstream. Calling platform-specific code unconditionally blows up when running an ARCH_MULTIPLATFORM kernel on a different platform. Don't do it. Reported-by: Paolo Pisati <p.pisati@gmail.com> Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr> Acked-by: Pavel Machek <pavel@ucw.cz> Cc: stable@vger.kernel.org # v4.8+ Fixes: a30eceb7a59d ("ARM: tango: add Suspend-to-RAM support") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | ARM: iop32x/n2100: fix PCI IRQ mapping | Russell King | 1 | -2/+1
commit db4090920ba2d61a5827a23e441447926a02ffee upstream. Booting 4.20 on a TheCUS N2100 results in a kernel oops while probing PCI, due to n2100_pci_map_irq() having been discarded during boot. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: stable@vger.kernel.org # 2.6.18+ Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | MIPS: VDSO: Include $(ccflags-vdso) in o32,n32 .lds builds | Paul Burton | 1 | -2/+2
commit 67fc5dc8a541e8f458d7f08bf88ff55933bf9f9d upstream. When generating vdso-o32.lds & vdso-n32.lds for use with programs running as compat ABIs under 64b kernels, we previously haven't included the compiler flags that are supposedly common to all ABIs - ie. those in the ccflags-vdso variable. This is problematic in cases where we need to provide the -m%-float flag in order to ensure that we don't attempt to use a floating point ABI that's incompatible with the target CPU & ABI. For example a toolchain using current gcc trunk configured --with-fp-32=xx fails to build a 64r6el_defconfig kernel with the following error: cc1: error: '-march=mips1' requires '-mfp32' make[2]: *** [arch/mips/vdso/Makefile:135: arch/mips/vdso/vdso-o32.lds] Error 1 Include $(ccflags-vdso) for the compat VDSO .lds builds, just as it is included for the native VDSO .lds & when compiling objects for the compat VDSOs. This ensures we consistently provide the -msoft-float flag amongst others, avoiding the problem by ensuring we're agnostic to the toolchain defaults. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO") Cc: linux-mips@vger.kernel.org Cc: Kevin Hilman <khilman@baylibre.com> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Maciej W . Rozycki <macro@linux-mips.org> Cc: stable@vger.kernel.org # v4.4+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | MIPS: OCTEON: don't set octeon_dma_bar_type if PCI is disabled | Aaro Koskinen | 1 | -5/+5
commit dcf300a69ac307053dfb35c2e33972e754a98bce upstream. Don't set octeon_dma_bar_type if PCI is disabled. This avoids creation of the MSI irqchip later on, and saves a bit of memory. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: a214720cbf50 ("Disable MSI also when pcie-octeon.pcie_disable on") Cc: stable@vger.kernel.org # v3.3+ Cc: linux-mips@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | mips: cm: reprime error cause | Vladimir Kondratiev | 1 | -1/+1
commit 05dc6001af0630e200ad5ea08707187fe5537e6d upstream. Accordingly to the documentation ---cut--- The GCR_ERROR_CAUSE.ERR_TYPE field and the GCR_ERROR_MULT.ERR_TYPE fields can be cleared by either a reset or by writing the current value of GCR_ERROR_CAUSE.ERR_TYPE to the GCR_ERROR_CAUSE.ERR_TYPE register. ---cut--- Do exactly this. Original value of cm_error may be safely written back; it clears error cause and keeps other bits untouched. Fixes: 3885c2b463f6 ("MIPS: CM: Add support for reporting CM cache errors") Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@linux.intel.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: stable@vger.kernel.org # v4.3+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | tracing: uprobes: Fix typo in pr_fmt string | Andreas Ziegler | 1 | -1/+1
commit ea6eb5e7d15e1838de335609994b4546e2abcaaf upstream. The subsystem-specific message prefix for uprobes was also "trace_kprobe: " instead of "trace_uprobe: " as described in the original commit message. Link: http://lkml.kernel.org/r/20190117133023.19292-1-andreas.ziegler@fau.de Cc: Ingo Molnar <mingo@redhat.com> Cc: stable@vger.kernel.org Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Fixes: 7257634135c24 ("tracing/probe: Show subsystem name in messages") Signed-off-by: Andreas Ziegler <andreas.ziegler@fau.de> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | debugfs: fix debugfs_rename parameter checking | Greg Kroah-Hartman | 1 | -0/+7
commit d88c93f090f708c18195553b352b9f205e65418f upstream. debugfs_rename() needs to check that the dentries passed into it really are valid, as sometimes they are not (i.e. if the return value of another debugfs call is passed into this one.) So fix this up by properly checking if the two parent directories are errors (they are allowed to be NULL), and if the dentry to rename is not NULL or an error. Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
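The hardening described above boils down to validating pointers that may legitimately be NULL, may be an error-encoded pointer returned by another failed call, or may be a real object, before dereferencing them. A userspace sketch of that ERR_PTR-style checking; the helpers and the dentry struct are illustrative, not the debugfs code:

    #include <stdio.h>
    #include <errno.h>
    #include <stdint.h>

    #define MAX_ERRNO 4095

    /* Kernel-style convention: small negative errno values encoded in a pointer. */
    static inline void *ERR_PTR(long err) { return (void *)err; }
    static inline int is_err(const void *p)
    {
        return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
    }
    static inline int is_err_or_null(const void *p) { return !p || is_err(p); }

    struct dentry { const char *name; };

    static int rename_entry(struct dentry *old_dir, struct dentry *entry,
                            struct dentry *new_dir, const char *new_name)
    {
        /* Parents may be NULL (meaning the root), but must not be error
         * pointers; the entry itself must be a real object. */
        if (is_err(old_dir) || is_err(new_dir) || is_err_or_null(entry))
            return -EINVAL;
        printf("rename %s -> %s\n", entry->name, new_name);
        return 0;
    }

    int main(void)
    {
        struct dentry d = { "old" };
        printf("%d\n", rename_entry(NULL, &d, NULL, "new"));               /* 0 */
        printf("%d\n", rename_entry(NULL, ERR_PTR(-ENOENT), NULL, "new")); /* -22 */
        return 0;
    }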
2019-02-15 | samples: mei: use /dev/mei0 instead of /dev/mei | Tomas Winkler | 1 | -1/+1
commit c4a46acf1db3ce547d290c29e55b3476c78dd76c upstream. The device was moved from misc device to character devices to support multiple mei devices. Cc: <stable@vger.kernel.org> #v4.9+ Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | misc: vexpress: Off by one in vexpress_syscfg_exec() | Dan Carpenter | 1 | -1/+1
commit f8a70d8b889f180e6860cb1f85fed43d37844c5a upstream. The > comparison should be >= to prevent reading beyond the end of the func->template[] array. (The func->template array is allocated in vexpress_syscfg_regmap_init() and it has func->num_templates elements.) Fixes: 974cc7b93441 ("mfd: vexpress: Define the device as MFD cells") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
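The off-by-one is the classic "index may equal the array length" mistake; a tiny sketch of the failing check and the corrected one (generic example, not the vexpress code):

    #include <stdio.h>

    #define NUM_TEMPLATES 4

    static const int template[NUM_TEMPLATES] = { 10, 20, 30, 40 };

    static int lookup(unsigned int index)
    {
        /* Buggy form: "index > NUM_TEMPLATES" still lets index == NUM_TEMPLATES
         * through, reading one element past the end of the array. */
        if (index >= NUM_TEMPLATES)   /* fixed: reject index == NUM_TEMPLATES too */
            return -1;
        return template[index];
    }

    int main(void)
    {
        printf("%d %d\n", lookup(3), lookup(4));  /* 40 -1 */
        return 0;
    }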
2019-02-15 | signal: Better detection of synchronous signals | Eric W. Biederman | 1 | -1/+51
commit 7146db3317c67b517258cb5e1b08af387da0618b upstream. Recently syzkaller was able to create unkillable processes by creating a timer that is delivered as a thread-local signal on SIGHUP, and receiving SIGHUP with SA_NODEFER, ultimately causing a loop that keeps failing to deliver SIGHUP but always retries. When the stack overflows, delivery of SIGHUP fails and force_sigsegv is called. Unfortunately, because SIGSEGV is numerically higher than SIGHUP, next_signal tries again to deliver a SIGHUP. From a quality of implementation standpoint, attempting to deliver the timer SIGHUP signal is wrong. We should attempt to deliver the synchronous SIGSEGV signal we just forced. We can make that happen in a fairly straightforward manner: instead of just looking at the signal number, we also look at the si_code. In particular, for exceptions (aka synchronous signals) the si_code is always greater than 0. That still has the potential to pick up a number of asynchronous signals, as in a few cases the same si_codes that are used for synchronous signals are also used for asynchronous signals, and SI_KERNEL is also included in the list of possible si_codes. Still, the heuristic is much better and timer signals are definitely excluded. Which is enough to prevent all known ways for someone sending a process signals fast enough to cause unexpected and arguably incorrect behavior. Cc: stable@vger.kernel.org Fixes: a27341cd5fcb ("Prioritize synchronous signals over 'normal' signals") Tested-by: Dmitry Vyukov <dvyukov@google.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | signal: Always notice exiting tasks | Eric W. Biederman | 1 | -0/+6
commit 35634ffa1751b6efd8cf75010b509dcb0263e29b upstream. Recently syzkaller was able to create unkillable processes by creating a timer that is delivered as a thread-local signal on SIGHUP, and receiving SIGHUP with SA_NODEFER, ultimately causing a loop that keeps failing to deliver SIGHUP but always retries. Upon examination it turns out part of the problem is actually most of the solution. Since 2.5, signal delivery has found all fatal signals, marked the signal group for death, and queued SIGKILL in every thread's thread queue, relying on signal->group_exit_code to preserve the information of which was the actual fatal signal. The conversion of all fatal signals to SIGKILL results in the synchronous signal heuristic in next_signal kicking in and preferring SIGHUP to SIGKILL. Which is especially problematic as all fatal signals have already been transformed into SIGKILL. Instead of dequeueing signals and depending upon SIGKILL to be the first signal dequeued, first test if the signal group has already been marked for death. This guarantees that nothing in the signal queue can prevent a process that needs to exit from exiting. Cc: stable@vger.kernel.org Tested-by: Dmitry Vyukov <dvyukov@google.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Ref: ebf5ebe31d2c ("[PATCH] signal-fixes-2.5.59-A4") History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | iio: chemical: atlas-ph-sensor: correct IIO_TEMP values to millicelsius | Matt Ranostay | 1 | -4/+3
commit 0808831dc62e90023ad14ff8da4804c7846e904b upstream. IIO_TEMP scale value for temperature was incorrect and not in millicelsius as required by the ABI documentation. Signed-off-by: Matt Ranostay <matt.ranostay@konsulko.com> Fixes: 27dec00ecf2d (iio: chemical: add Atlas pH-SM sensor support) Cc: <stable@vger.kernel.org> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-15 | iio: adc: axp288: Fix TS-pin handling | Hans de Goede | 1 | -16/+60
commit 9bcf15f75cac3c6a00d8f8083a635de9c8537799 upstream. Prior to this commit there were 3 issues with our handling of the TS-pin: 1) There are 2 ways how the firmware can disable monitoring of the TS-pin for designs which do not have a temperature-sensor for the battery: a) Clearing bit 0 of the AXP20X_ADC_EN1 register b) Setting bit 2 of the AXP288_ADC_TS_PIN_CTRL monitoring Prior to this commit we were unconditionally setting both bits to the value used on devices with a TS. This causes the temperature protection to kick in on devices without a TS, such as the Jumper ezbook v2, causing them to not charge under Linux. This commit fixes this by using regmap_update_bits when updating these 2 registers, leaving the 2 mentioned bits alone. The next 2 problems are related to our handling of the current-source for the TS-pin. The current-source used for the battery temp-sensor (TS) is shared with the GPADC. For proper fuel-gauge and charger operation the TS current-source needs to be permanently on. But to read the GPADC we need to temporary switch the TS current-source to ondemand, so that the GPADC can use it, otherwise we will always read an all 0 value. 2) Problem 2 is we were writing hardcoded values to the ADC TS pin-ctrl register, overwriting various other unrelated bits. Specifically we were overwriting the current-source setting for the TS and GPIO0 pins, forcing it to 80ųA independent of its original setting. On a Chuwi Vi10 tablet this was causing us to get a too high adc value (due to a too high current-source) resulting in the following errors being logged: ACPI Error: AE_ERROR, Returned by Handler for [UserDefinedRegion] ACPI Error: Method parse/execution failed \_SB.SXP1._TMP, AE_ERROR This commit fixes this by using regmap_update_bits to change only the relevant bits. 3) After reading the GPADC channel we were unconditionally enabling the TS current-source even on devices where the TS-pin is not used and the current-source thus was off before axp288_adc_read_raw call. This commit fixes this by making axp288_adc_set_ts a nop on devices where the ADC is not enabled for the TS-pin. BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1610545 Fixes: 3091141d7803 ("iio: adc: axp288: Fix the GPADC pin ...") Signed-off-by: Hans de Goede <hdegoede@redhat.com> Cc: <Stable@vger.kernel.org> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
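The recurring fix in this patch is to use a read-modify-write that only touches the relevant bits instead of writing a hard-coded register value. A minimal sketch of that update-bits pattern over a fake register; this illustrates the idea only and is not the regmap API itself:

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t fake_reg = 0xA5;   /* pretend firmware already configured this */

    /* Blunt write: clobbers every bit in the register, including settings
     * (e.g. a current-source selection) that the firmware chose for the board. */
    static void reg_write(uint8_t val) { fake_reg = val; }

    /* update_bits: change only the bits covered by `mask`, keep the rest. */
    static void reg_update_bits(uint8_t mask, uint8_t val)
    {
        fake_reg = (fake_reg & ~mask) | (val & mask);
    }

    int main(void)
    {
        reg_update_bits(0x03, 0x02);                   /* touch bits 1:0 only */
        printf("update_bits: 0x%02X\n", fake_reg);     /* 0xA6: other bits preserved */
        reg_write(0x02);
        printf("plain write: 0x%02X\n", fake_reg);     /* 0x02: firmware settings lost */
        return 0;
    }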
2019-02-15 | mtd: rawnand: gpmi: fix MX28 bus master lockup problem | Martin Kepplinger | 1 | -8/+7
commit d5d27fd9826b59979b184ec288e4812abac0e988 upstream. Disable BCH soft reset according to MX23 erratum #2847 ("BCH soft reset may cause bus master lock up") for MX28 too. It has the same problem. Observed problem: once per 100,000+ MX28 reboots NAND read failed on DMA timeout errors: [ 1.770823] UBI: attaching mtd3 to ubi0 [ 2.768088] gpmi_nand: DMA timeout, last DMA :1 [ 3.958087] gpmi_nand: BCH timeout, last DMA :1 [ 4.156033] gpmi_nand: Error in ECC-based read: -110 [ 4.161136] UBI warning: ubi_io_read: error -110 while reading 64 bytes from PEB 0:0, read only 0 bytes, retry [ 4.171283] step 1 error [ 4.173846] gpmi_nand: Chip: 0, Error -1 Without BCH soft reset we successfully executed 1,000,000 MX28 reboots. I have a quote from NXP regarding this problem, from July 18th 2016: "As the i.MX23 and i.MX28 are of the same generation, they share many characteristics. Unfortunately, also the erratas may be shared. In case of the documented erratas and the workarounds, you can also apply the workaround solution of one device on the other one. This have been reported, but I’m afraid that there are not an estimated date for updating the Errata documents. Please accept our apologies for any inconveniences this may cause." Fixes: 6f2a6a52560a ("mtd: nand: gpmi: reset BCH earlier, too, to avoid NAND startup problems") Cc: stable@vger.kernel.org Signed-off-by: Manfred Schlaegl <manfred.schlaegl@ginzinger.com> Signed-off-by: Martin Kepplinger <martin.kepplinger@ginzinger.com> Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Fabio Estevam <festevam@gmail.com> Acked-by: Han Xu <han.xu@nxp.com> Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | Linux 4.14.99 (tag: v4.14.99) | Greg Kroah-Hartman | 1 | -1/+1
2019-02-12 | ath9k: dynack: check da->enabled first in sampling routines | Lorenzo Bianconi | 1 | -2/+2
commit 9d3d65a91f027b8a9af5e63752d9b78cb10eb92d upstream. Check da->enabled flag first in ath_dynack_sample_tx_ts and ath_dynack_sample_ack_ts routines in order to avoid useless processing Tested-by: Koen Vandeputte <koen.vandeputte@ncentric.com> Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | ath9k: dynack: make ewma estimation faster | Lorenzo Bianconi | 4 | -12/+29
commit 0c60c490830a1a756c80f8de8d33d9c6359d4a36 upstream. In order to make propagation time estimation faster, use current sample as ewma output value during 'late ack' tracking Tested-by: Koen Vandeputte <koen.vandeputte@ncentric.com> Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | perf/x86/intel: Delay memory deallocation until x86_pmu_dead_cpu() | Peter Zijlstra | 1 | -2/+8
commit 602cae04c4864bb3487dfe4c2126c8d9e7e1614a upstream. intel_pmu_cpu_prepare() allocated memory for ->shared_regs among other members of struct cpu_hw_events. This memory is released in intel_pmu_cpu_dying() which is wrong. The counterpart of the intel_pmu_cpu_prepare() callback is x86_pmu_dead_cpu(). Otherwise if the CPU fails on the UP path between CPUHP_PERF_X86_PREPARE and CPUHP_AP_PERF_X86_STARTING then it won't release the memory but allocate new memory on the next attempt to online the CPU (leaking the old memory). Also, if the CPU down path fails between CPUHP_AP_PERF_X86_STARTING and CPUHP_PERF_X86_PREPARE then the CPU will go back online but never allocate the memory that was released in x86_pmu_dying_cpu(). Make the memory allocation/free symmetrical in regard to the CPU hotplug notifier by moving the deallocation to intel_pmu_cpu_dead(). This started in commit: a7e3ed1e47011 ("perf: Add support for supplementary event registers"). In principle the bug was introduced in v2.6.39 (!), but it will almost certainly not backport cleanly across the big CPU hotplug rewrite between v4.7-v4.15... [ bigeasy: Added patch description. ] [ mingo: Added backporting guidance. ] Reported-by: He Zhe <zhe.he@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # With developer hat on Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # With maintainer hat on Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: bp@alien8.de Cc: hpa@zytor.com Cc: jolsa@kernel.org Cc: kan.liang@linux.intel.com Cc: namhyung@kernel.org Cc: <stable@vger.kernel.org> Fixes: a7e3ed1e47011 ("perf: Add support for supplementary event registers"). Link: https://lkml.kernel.org/r/20181219165350.6s3jvyxbibpvlhtq@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org> [ He Zhe: Fixes conflict caused by missing disable_counter_freeze which is introduced since v4.20 af3bdb991a5cb. ] Signed-off-by: He Zhe <zhe.he@windriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | IB/hfi1: Add limit test for RC/UC send via loopback | Mike Marciniszyn | 2 | -2/+12
commit 09ce351dff8e7636af0beb72cd4a86c3904a0500 upstream. Fix potential memory corruption and panic in loopback for IB_WR_SEND variants. The code blindly assumes the posted length will fit in the fetched rwqe, which is not a valid assumption. Fix by adding a limit test, and triggering the appropriate send completion and putting the QP in an error state. This mimics the handling for non-loopback QPs. Fixes: 15703461533a ("IB/{hfi1, qib, rdmavt}: Move ruc_loopback to rdmavt") Cc: <stable@vger.kernel.org> #v4.20+ Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
2019-02-12 | nfsd4: catch some false session retries | J. Bruce Fields | 2 | -1/+37
commit 53da6a53e1d414e05759fa59b7032ee08f4e22d7 upstream. The spec allows us to return NFS4ERR_SEQ_FALSE_RETRY if we notice that the client is making a call that matches a previous (slot, seqid) pair but that *isn't* actually a replay, because some detail of the call doesn't actually match the previous one. Catching every such case is difficult, but we may as well catch a few easy ones. This also handles the case described in the previous patch, in a different way. The spec does however require us to catch the case where the difference is in the rpc credentials. This prevents somebody from snooping another user's replies by fabricating retries. (But the practical value of the attack is limited by the fact that the replies with the most sensitive data are READ replies, which are not normally cached.) Tested-by: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Donald Buczek <buczek@molgen.mpg.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | nfsd4: fix cached replies to solo SEQUENCE compounds | J. Bruce Fields | 3 | -7/+27
commit 085def3ade52f2ffe3e31f42e98c27dcc222dd37 upstream. Currently our handling of 4.1+ requests without "cachethis" set is confusing and not quite correct. Suppose a client sends a compound consisting of only a single SEQUENCE op, and it matches the seqid in a session slot (so it's a retry), but the previous request with that seqid did not have "cachethis" set. The obvious thing to do might be to return NFS4ERR_RETRY_UNCACHED_REP, but the protocol only allows that to be returned on the op following the SEQUENCE, and there is no such op in this case. The protocol permits us to cache replies even if the client didn't ask us to. And it's easy to do so in the case of solo SEQUENCE compounds. So, when we get a solo SEQUENCE, we can either return the previously cached reply or NFSERR_SEQ_FALSE_RETRY if we notice it differs in some way from the original call. Currently, we're returning a corrupt reply in the case a solo SEQUENCE matches a previous compound with more ops. This actually matters because the Linux client recently started doing this as a way to recover from lost replies to idempotent operations in the case the process doing the original reply was killed: in that case it's difficult to keep the original arguments around to do a real retry, and the client no longer cares what the result is anyway, but it would like to make sure that the slot's sequence id has been incremented, and the solo SEQUENCE assures that: if the server never got the original reply, it will increment the sequence id. If it did get the original reply, it won't increment, and nothing else that about the reply really matters much. But we can at least attempt to return valid xdr! Tested-by: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Donald Buczek <buczek@molgen.mpg.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | serial: 8250_pci: Make PCI class test non fatal | Andy Shevchenko | 1 | -4/+5
commit 824d17c57b0abbcb9128fb3f7327fae14761914b upstream. As has been reported the National Instruments serial cards have broken PCI class. The commit 7d8905d06405 ("serial: 8250_pci: Enable device after we check black list") made the PCI class check mandatory for the case when device is listed in a quirk list. Make PCI class test non fatal to allow broken card be enumerated. Fixes: 7d8905d06405 ("serial: 8250_pci: Enable device after we check black list") Cc: stable <stable@vger.kernel.org> Reported-by: Guan Yung Tseng <guan.yung.tseng@ni.com> Tested-by: Guan Yung Tseng <guan.yung.tseng@ni.com> Tested-by: KHUENY.Gerhard <Gerhard.KHUENY@bachmann.info> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | serial: fix race between flush_to_ldisc and tty_open | Greg Kroah-Hartman | 1 | -0/+6
commit fedb5760648a291e949f2380d383b5b2d2749b5e upstream. There is still a race window after commit b027e2298bd588 ("tty: fix data race between tty_init_dev and flush of buf"): we encountered a crash when the receive_buf call comes before tty initialization completes in tty_open, while tty->driver_data may still be NULL:

  CPU0                                  CPU1
  ----                                  ----
  tty_open
    tty_init_dev
      tty_ldisc_unlock
        schedule
                                        flush_to_ldisc
                                          receive_buf
                                            tty_port_default_receive_buf
                                              tty_ldisc_receive_buf
                                                n_tty_receive_buf_common
                                                  __receive_buf
                                                    uart_flush_chars
                                                      uart_start
                                                      /*tty->driver_data is NULL*/
    tty->ops->open
    /*init tty->driver_data*/

It could be fixed by extending the ldisc semaphore lock in tty_init_dev until driver_data is initialized completely after tty->ops->open(), but that would mean taking the lock in one function and releasing it in another, which is hard to maintain. So fix this race only by checking tty->driver_data when receiving, and return if tty->driver_data is NULL; n_tty_receive_buf_common may also call uart_unthrottle, so add the same check there. Because the tty layer knows nothing about the driver associated with the device, the tty layer can not do anything here; it is up to the tty driver itself to check for this type of race. Fix up the serial driver to correctly check to see if it is finished binding with the device when being called, and if not, abort the tty calls. [Description and problem report and testing from Li RongQing, I rewrote the patch to be in the serial layer, not in the tty core - gregkh] Reported-by: Li RongQing <lirongqing@baidu.com> Tested-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Wang Li <wangli39@baidu.com> Signed-off-by: Zhang Yu <zhangyu31@baidu.com> Signed-off-by: Li RongQing <lirongqing@baidu.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
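A compact sketch of the defensive check the serial driver gains: a callback that can run before open() has finished binding simply bails out while the per-device state is still NULL. The structures and names are illustrative, not the actual serial core:

    #include <stdio.h>
    #include <stddef.h>

    struct uart_state { int throttled; };

    struct tty {
        struct uart_state *driver_data;  /* set only at the end of open() */
    };

    static void uart_start(struct tty *tty)
    {
        struct uart_state *state = tty->driver_data;

        if (!state)          /* open() not finished yet: nothing to do */
            return;
        state->throttled = 0;
        printf("started\n");
    }

    int main(void)
    {
        struct tty t = { .driver_data = NULL };
        uart_start(&t);                      /* early call: safely ignored */

        struct uart_state s = { 1 };
        t.driver_data = &s;                  /* open() completed */
        uart_start(&t);                      /* prints "started" */
        return 0;
    }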
2019-02-12 | perf tests evsel-tp-sched: Fix bitwise operator | Gustavo A. R. Silva | 1 | -1/+1
commit 489338a717a0dfbbd5a3fabccf172b78f0ac9015 upstream. Notice that the use of the bitwise OR operator '|' always leads to true in this particular case, which seems a bit suspicious due to the context in which this expression is being used. Fix this by using bitwise AND operator '&' instead. This bug was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Fixes: 6a6cd11d4e57 ("perf test: Add test for the sched tracepoint format fields") Link: http://lkml.kernel.org/r/20190122233439.GA5868@embeddedor Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
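The Coccinelle finding is the common "flags | FLAG is always non-zero" mistake. A short demonstration (generic flag names, not the perf test itself):

    #include <stdio.h>

    #define FIELD_IS_SIGNED 0x01

    int main(void)
    {
        int flags = 0x04;   /* FIELD_IS_SIGNED is NOT set */

        if (flags | FIELD_IS_SIGNED)   /* buggy: 0x04 | 0x01 = 0x05, always true */
            printf("looks signed (wrong)\n");

        if (flags & FIELD_IS_SIGNED)   /* fixed: tests the actual bit */
            printf("is signed\n");
        else
            printf("is unsigned (correct)\n");
        return 0;
    }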
2019-02-12 | perf/core: Don't WARN() for impossible ring-buffer sizes | Mark Rutland | 1 | -0/+3
commit 9dff0aa95a324e262ffb03f425d00e4751f3294e upstream. The perf tool uses /proc/sys/kernel/perf_event_mlock_kb to determine how large its ringbuffer mmap should be. This can be configured to arbitrary values, which can be larger than the maximum possible allocation from kmalloc. When this is configured to a suitably large value (e.g. thanks to the perf fuzzer), attempting to use perf record triggers a WARN_ON_ONCE() in __alloc_pages_nodemask(): WARNING: CPU: 2 PID: 5666 at mm/page_alloc.c:4511 __alloc_pages_nodemask+0x3f8/0xbc8 Let's avoid this by checking that the requested allocation is possible before calling kzalloc. Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20190110142745.25495-1-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
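The underlying pattern: validate a user-controlled size against what the allocator can actually provide before calling it, instead of relying on a warning inside the allocator. A small userspace sketch; the limit constant is an assumption made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define MAX_ORDER_BYTES (4u << 20)   /* assumed allocator limit, e.g. 4 MiB */

    static void *alloc_ring_buffer(size_t nr_pages, size_t page_size)
    {
        size_t bytes;

        if (nr_pages && page_size > SIZE_MAX / nr_pages)
            return NULL;                         /* multiplication would overflow */
        bytes = nr_pages * page_size;
        if (bytes == 0 || bytes > MAX_ORDER_BYTES)
            return NULL;                         /* impossible request: fail cleanly */
        return calloc(1, bytes);
    }

    int main(void)
    {
        void *ok  = alloc_ring_buffer(16, 4096);        /* fine */
        void *bad = alloc_ring_buffer(1u << 20, 4096);  /* too large: NULL, no warning */
        printf("%s %s\n", ok ? "ok" : "null", bad ? "ok" : "null");
        free(ok);
        return 0;
    }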
2019-02-12 | x86/MCE: Initialize mce.bank in the case of a fatal error in mce_no_way_out() | Tony Luck | 1 | -0/+1
commit d28af26faa0b1daf3c692603d46bc4687c16f19e upstream. Internal injection testing crashed with a console log that said: mce: [Hardware Error]: CPU 7: Machine Check Exception: f Bank 0: bd80000000100134 This caused a lot of head scratching because the MCACOD (bits 15:0) of that status is a signature from an L1 data cache error. But Linux says that it found it in "Bank 0", which on this model CPU only reports L1 instruction cache errors. The answer was that Linux doesn't initialize "m->bank" in the case that it finds a fatal error in the mce_no_way_out() pre-scan of banks. If this was a local machine check, then this partially initialized struct mce is being passed to mce_panic(). Fix is simple: just initialize m->bank in the case of a fatal error. Fixes: 40c36e2741d7 ("x86/mce: Fix incorrect "Machine check from unknown source" message") Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: x86-ml <x86@kernel.org> Cc: stable@vger.kernel.org # v4.18 Note pre-v5.0 arch/x86/kernel/cpu/mce/core.c was called arch/x86/kernel/cpu/mcheck/mce.c Link: https://lkml.kernel.org/r/20190201003341.10638-1-tony.luck@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | perf/x86/intel/uncore: Add Node ID mask | Kan Liang | 1 | -1/+3
commit 9e63a7894fd302082cf3627fe90844421a6cbe7f upstream. Some PCI uncore PMUs cannot be registered on an 8-socket system (HPE Superdome Flex). To understand which Socket the PCI uncore PMUs belongs to, perf retrieves the local Node ID of the uncore device from CPUNODEID(0xC0) of the PCI configuration space, and the mapping between Socket ID and Node ID from GIDNIDMAP(0xD4). The Socket ID can be calculated accordingly. The local Node ID is only available at bit 2:0, but current code doesn't mask it. If a BIOS doesn't clear the rest of the bits, an incorrect Node ID will be fetched. Filter the Node ID by adding a mask. Reported-by: Song Liu <songliubraving@fb.com> Tested-by: Song Liu <songliubraving@fb.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> # v3.7+ Fixes: 7c94ee2e0917 ("perf/x86: Add Intel Nehalem and Sandy Bridge-EP uncore support") Link: https://lkml.kernel.org/r/1548600794-33162-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | cpu/hotplug: Fix "SMT disabled by BIOS" detection for KVM | Josh Poimboeuf | 6 | -35/+8
commit b284909abad48b07d3071a9fc9b5692b3e64914b upstream. With the following commit: 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") ... the hotplug code attempted to detect when SMT was disabled by BIOS, in which case it reported SMT as permanently disabled. However, that code broke a virt hotplug scenario, where the guest is booted with only primary CPU threads, and a sibling is brought online later. The problem is that there doesn't seem to be a way to reliably distinguish between the HW "SMT disabled by BIOS" case and the virt "sibling not yet brought online" case. So the above-mentioned commit was a bit misguided, as it permanently disabled SMT for both cases, preventing future virt sibling hotplugs. Going back and reviewing the original problems which were attempted to be solved by that commit, when SMT was disabled in BIOS: 1) /sys/devices/system/cpu/smt/control showed "on" instead of "notsupported"; and 2) vmx_vm_init() was incorrectly showing the L1TF_MSG_SMT warning. I'd propose that we instead consider #1 above to not actually be a problem. Because, at least in the virt case, it's possible that SMT wasn't disabled by BIOS and a sibling thread could be brought online later. So it makes sense to just always default the smt control to "on" to allow for that possibility (assuming cpuid indicates that the CPU supports SMT). The real problem is #2, which has a simple fix: change vmx_vm_init() to query the actual current SMT state -- i.e., whether any siblings are currently online -- instead of looking at the SMT "control" sysfs value. So fix it by: a) reverting the original "fix" and its followup fix: 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") bc2d8d262cba ("cpu/hotplug: Fix SMT supported evaluation") and b) changing vmx_vm_init() to query the actual current SMT state -- instead of the sysfs control value -- to determine whether the L1TF warning is needed. This also requires the 'sched_smt_present' variable to exported, instead of 'cpu_smt_control'. Fixes: 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") Reported-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Joe Mario <jmario@redhat.com> Cc: Jiri Kosina <jikos@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: kvm@vger.kernel.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/e3a85d585da28cc333ecbc1e78ee9216e6da9396.1548794349.git.jpoimboe@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | KVM: nVMX: unconditionally cancel preemption timer in free_nested (CVE-2019-7221) | Peter Shier | 1 | -0/+1
commit ecec76885bcfe3294685dc363fd1273df0d5d65f upstream. Bugzilla: 1671904 There are multiple code paths where an hrtimer may have been started to emulate an L1 VMX preemption timer that can result in a call to free_nested without an intervening L2 exit where the hrtimer is normally cancelled. Unconditionally cancel in free_nested to cover all cases. Embargoed until Feb 7th 2019. Signed-off-by: Peter Shier <pshier@google.com> Reported-by: Jim Mattson <jmattson@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Reported-by: Felix Wilhelm <fwilhelm@google.com> Cc: stable@kernel.org Message-Id: <20181011184646.154065-1-pshier@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | kvm: fix kvm_ioctl_create_device() reference counting (CVE-2019-6974) | Jann Horn | 1 | -1/+2
commit cfa39381173d5f969daf43582c95ad679189cbc9 upstream. kvm_ioctl_create_device() does the following: 1. creates a device that holds a reference to the VM object (with a borrowed reference, the VM's refcount has not been bumped yet) 2. initializes the device 3. transfers the reference to the device to the caller's file descriptor table 4. calls kvm_get_kvm() to turn the borrowed reference to the VM into a real reference The ownership transfer in step 3 must not happen before the reference to the VM becomes a proper, non-borrowed reference, which only happens in step 4. After step 3, an attacker can close the file descriptor and drop the borrowed reference, which can cause the refcount of the kvm object to drop to zero. This means that we need to grab a reference for the device before anon_inode_getfd(), otherwise the VM can disappear from under us. Fixes: 852b6d57dc7f ("kvm: add device control API") Cc: stable@kernel.org Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
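The ordering bug generalizes to any reference-counted object: take your own reference before publishing a handle that lets someone else drop theirs. A userspace sketch of the safe ordering; this is a simplified analogue and does not model the kvm/anon-inode specifics:

    #include <stdio.h>
    #include <stdlib.h>

    struct vm { int refcount; };

    static void vm_get(struct vm *v) { v->refcount++; }
    static void vm_put(struct vm *v)
    {
        if (--v->refcount == 0) {
            printf("vm freed\n");
            free(v);
        }
    }

    struct device { struct vm *owner; };

    /* Safe ordering: bump the VM refcount for the device *before* the device
     * becomes reachable by others, who could otherwise drop the last
     * reference while we still rely on the VM. */
    static struct device *create_device(struct vm *v)
    {
        struct device *d = malloc(sizeof(*d));
        if (!d)
            return NULL;
        vm_get(v);          /* take our reference first ... */
        d->owner = v;       /* ... then publish the device   */
        return d;
    }

    int main(void)
    {
        struct vm *v = malloc(sizeof(*v));
        v->refcount = 1;
        struct device *d = create_device(v);
        vm_put(v);          /* caller drops its reference: VM survives via device */
        printf("vm still alive, refcount=%d\n", d->owner->refcount);
        vm_put(d->owner);   /* device teardown drops the last reference */
        free(d);
        return 0;
    }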
2019-02-12 | KVM: x86: work around leak of uninitialized stack contents (CVE-2019-7222) | Paolo Bonzini | 1 | -0/+7
commit 353c0956a618a07ba4bbe7ad00ff29fe70e8412a upstream. Bugzilla: 1671930 Emulation of certain instructions (VMXON, VMCLEAR, VMPTRLD, VMWRITE with memory operand, INVEPT, INVVPID) can incorrectly inject a page fault when passed an operand that points to an MMIO address. The page fault will use uninitialized kernel stack memory as the CR2 and error code. The right behavior would be to abort the VM with a KVM_EXIT_INTERNAL_ERROR exit to userspace; however, it is not an easy fix, so for now just ensure that the error code and CR2 are zero. Embargoed until Feb 7th 2019. Reported-by: Felix Wilhelm <fwilhelm@google.com> Cc: stable@kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | scsi: aic94xx: fix module loading | James Bottomley | 1 | -4/+4
commit 42caa0edabd6a0a392ec36a5f0943924e4954311 upstream. The aic94xx driver is currently failing to load with errors like sysfs: cannot create duplicate filename '/devices/pci0000:00/0000:00:03.0/0000:02:00.3/0000:07:02.0/revision' Because the PCI code had recently added a file named 'revision' to every PCI device. Fix this by renaming the aic94xx revision file to aic_revision. This is safe to do for us because as far as I can tell, there's nothing in userspace relying on the current aic94xx revision file so it can be renamed without breaking anything. Fixes: 702ed3be1b1b (PCI: Create revision file in sysfs) Cc: stable@vger.kernel.org Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | scsi: cxlflash: Prevent deadlock when adapter probe fails | Vaibhav Jain | 1 | -0/+2
commit bb61b843ffd46978d7ca5095453e572714934eeb upstream. Presently when an error is encountered during probe of the cxlflash adapter, a deadlock is seen with cpu thread stuck inside cxlflash_remove(). Below is the trace of the deadlock as logged by khungtaskd: cxlflash 0006:00:00.0: cxlflash_probe: init_afu failed rc=-16 INFO: task kworker/80:1:890 blocked for more than 120 seconds. Not tainted 5.0.0-rc4-capi2-kexec+ #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/80:1 D 0 890 2 0x00000808 Workqueue: events work_for_cpu_fn Call Trace: 0x4d72136320 (unreliable) __switch_to+0x2cc/0x460 __schedule+0x2bc/0xac0 schedule+0x40/0xb0 cxlflash_remove+0xec/0x640 [cxlflash] cxlflash_probe+0x370/0x8f0 [cxlflash] local_pci_probe+0x6c/0x140 work_for_cpu_fn+0x38/0x60 process_one_work+0x260/0x530 worker_thread+0x280/0x5d0 kthread+0x1a8/0x1b0 ret_from_kernel_thread+0x5c/0x80 INFO: task systemd-udevd:5160 blocked for more than 120 seconds. The deadlock occurs as cxlflash_remove() is called from cxlflash_probe() without setting 'cxlflash_cfg->state' to STATE_PROBED and the probe thread starts to wait on 'cxlflash_cfg->reset_waitq'. Since the device was never successfully probed the 'cxlflash_cfg->state' never changes from STATE_PROBING hence the deadlock occurs. We fix this deadlock by setting the variable 'cxlflash_cfg->state' to STATE_PROBED in case an error occurs during cxlflash_probe() and just before calling cxlflash_remove(). Cc: stable@vger.kernel.org Fixes: c21e0bbfc485("cxlflash: Base support for IBM CXL Flash Adapter") Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | staging: speakup: fix tty-operation NULL derefs | Johan Hovold | 1 | -2/+4
commit a1960e0f1639cb1f7a3d94521760fc73091f6640 upstream. The send_xchar() and tiocmset() tty operations are optional. Add the missing sanity checks to prevent user-space triggerable NULL-pointer dereferences. Fixes: 6b9ad1c742bf ("staging: speakup: add send_xchar, tiocmset and input functionality for tty") Cc: stable <stable@vger.kernel.org> # 4.13 Cc: Okash Khawaja <okash.khawaja@gmail.com> Cc: Samuel Thibault <samuel.thibault@ens-lyon.org> Signed-off-by: Johan Hovold <johan@kernel.org> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
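The fix is the standard "optional operation" check: an ops-table member may be NULL, so test it before calling through it. A minimal sketch with a hypothetical ops structure, not the speakup code:

    #include <stdio.h>
    #include <stddef.h>

    struct tty_ops {
        void (*send_xchar)(char ch);          /* optional: may be NULL */
    };

    static void speakup_send_xchar(const struct tty_ops *ops, char ch)
    {
        if (ops->send_xchar)      /* only call it if the driver provides it */
            ops->send_xchar(ch);
        else
            fprintf(stderr, "send_xchar not supported, ignoring\n");
    }

    static void real_send_xchar(char ch) { printf("sent %c\n", ch); }

    int main(void)
    {
        struct tty_ops with    = { .send_xchar = real_send_xchar };
        struct tty_ops without = { .send_xchar = NULL };

        speakup_send_xchar(&with, 'x');     /* prints "sent x" */
        speakup_send_xchar(&without, 'x');  /* no NULL dereference */
        return 0;
    }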
2019-02-12 | usb: gadget: musb: fix short isoc packets with inventra dma | Paul Elder | 2 | -22/+12
commit c418fd6c01fbc5516a2cd1eaf1df1ec86869028a upstream. Handling short packets (length < max packet size) in the Inventra DMA engine in the MUSB driver causes the MUSB DMA controller to hang. An example of a problem that is caused by this problem is when streaming video out of a UVC gadget, only the first video frame is transferred. For short packets (mode-0 or mode-1 DMA), MUSB_TXCSR_TXPKTRDY must be set manually by the driver. This was previously done in musb_g_tx (musb_gadget.c), but incorrectly (all csr flags were cleared, and only MUSB_TXCSR_MODE and MUSB_TXCSR_TXPKTRDY were set). Fixing that problem allows some requests to be transferred correctly, but multiple requests were often put together in one USB packet, and caused problems if the packet size was not a multiple of 4. Instead, set MUSB_TXCSR_TXPKTRDY in dma_controller_irq (musbhsdma.c), just like host mode transfers. This topic was originally tackled by Nicolas Boichat [0] [1] and is discussed further at [2] as part of his GSoC project [3]. [0] https://groups.google.com/forum/?hl=en#!topic/beagleboard-gsoc/k8Azwfp75CU [1] https://gitorious.org/beagleboard-usbsniffer/beagleboard-usbsniffer-kernel/commit/b0be3b6cc195ba732189b04f1d43ec843c3e54c9?p=beagleboard-usbsniffer:beagleboard-usbsniffer-kernel.git;a=patch;h=b0be3b6cc195ba732189b04f1d43ec843c3e54c9 [2] http://beagleboard-usbsniffer.blogspot.com/2010/07/musb-isochronous-transfers-fixed.html [3] http://elinux.org/BeagleBoard/GSoC/USBSniffer Fixes: 550a7375fe72 ("USB: Add MUSB and TUSB support") Signed-off-by: Paul Elder <paul.elder@ideasonboard.com> Signed-off-by: Bin Liu <b-liu@ti.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | usb: gadget: udc: net2272: Fix bitwise and boolean operations | Gustavo A. R. Silva | 1 | -1/+1
commit 07c69f1148da7de3978686d3af9263325d9d60bd upstream. (!x & y) strikes again. Fix bitwise and boolean operations by enclosing the expression: intcsr & (1 << NET2272_PCI_IRQ) in parentheses, before applying the boolean operator '!'. Notice that this code has been there since 2011. So, it would be helpful if someone can double-check this. This issue was detected with the help of Coccinelle. Fixes: ceb80363b2ec ("USB: net2272: driver for PLX NET2272 USB device controller") Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
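The "(!x & y)" class of bug comes down to operator precedence: logical NOT binds tighter than bitwise AND, so the intended "bit not set" test becomes "logical negation of the whole word, ANDed with a mask", which is 0 for any non-zero word. A two-case demonstration with a made-up bit position, not the net2272 register layout:

    #include <stdio.h>

    #define PCI_IRQ_BIT 2   /* hypothetical bit position */

    static const char *check_buggy(unsigned int intcsr)
    {
        /* '!' binds tighter than '&': for any non-zero intcsr this is
         * 0 & mask == 0, so the "not ours" path is never taken. */
        if (!intcsr & (1 << PCI_IRQ_BIT))
            return "not our interrupt";
        return "handle interrupt";
    }

    static const char *check_fixed(unsigned int intcsr)
    {
        /* Parenthesize first, then negate the masked result. */
        if (!(intcsr & (1 << PCI_IRQ_BIT)))
            return "not our interrupt";
        return "handle interrupt";
    }

    int main(void)
    {
        unsigned int bit_clear = 0x01;  /* some other bit set, PCI IRQ bit clear */
        printf("buggy: %s\n", check_buggy(bit_clear));  /* wrongly handles it */
        printf("fixed: %s\n", check_fixed(bit_clear));  /* correctly ignores it */
        return 0;
    }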
2019-02-12 | usb: dwc3: gadget: Handle 0 xfer length for OUT EP | Tejas Joglekar | 1 | -1/+1
commit 1e19cdc8060227b0802bda6bc0bd22b23679ba32 upstream. For OUT endpoints, zero-length transfers require MaxPacketSize buffer as per the DWC_usb3 programming guide 3.30a section 4.2.3.3. This patch fixes this by explicitly checking zero length transfer to correctly pad up to MaxPacketSize. Fixes: c6267a51639b ("usb: dwc3: gadget: align transfers to wMaxPacketSize") Cc: stable@vger.kernel.org Signed-off-by: Tejas Joglekar <joglekar@synopsys.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | usb: phy: am335x: fix race condition in _probe | Bin Liu | 1 | -4/+1
commit a53469a68eb886e84dd8b69a1458a623d3591793 upstream. power off the phy should be done before populate the phy. Otherwise, am335x_init() could be called by the phy owner to power on the phy first, then am335x_phy_probe() turns off the phy again without the caller knowing it. Fixes: 2fc711d76352 ("usb: phy: am335x: Enable USB remote wakeup using PHY wakeup") Cc: stable@vger.kernel.org # v3.18+ Signed-off-by: Bin Liu <b-liu@ti.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | irqchip/gic-v3-its: Plug allocation race for devices sharing a DevID | Marc Zyngier | 1 | -5/+27
commit 9791ec7df0e7b4d80706ccea8f24b6542f6059e9 upstream. On systems or VMs where multiple devices share a single DevID (because they sit behind a PCI bridge, or because the HW is broken in funky ways), we reuse the same its_device structure in order to reflect this. It turns out that there is a distinct lack of locking when looking up the its_device, and two devices being probed concurrently can result in double allocations. That's obviously not nice. A solution for this is to have a per-ITS mutex that serializes device allocation. A similar issue exists on the freeing side, which can run concurrently with the allocation. On top of now taking the appropriate lock, we also make sure that a shared device is never freed, as we currently have no way to track the life cycle of such an object. Reported-by: Zheng Xiang <zhengxiang9@huawei.com> Tested-by: Zheng Xiang <zhengxiang9@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | futex: Handle early deadlock return correctly | Thomas Gleixner | 2 | -15/+50
commit 1a1fb985f2e2b85ec0d3dc2e519ee48389ec2434 upstream. commit 56222b212e8e ("futex: Drop hb->lock before enqueueing on the rtmutex") changed the locking rules in the futex code so that the hash bucket lock is no longer held while the waiter is enqueued into the rtmutex wait list. This made the lock and the unlock path symmetric, but unfortunately the possible early exit from __rt_mutex_proxy_start() due to a detected deadlock was not updated accordingly. That allows a concurrent unlocker to observe inconsistent state which triggers the warning in the unlock path:

  futex_lock_pi()                       futex_unlock_pi()
    lock(hb->lock)
    queue(hb_waiter)                    lock(hb->lock)
    lock(rtmutex->wait_lock)
    unlock(hb->lock)
                                        // acquired hb->lock
                                        hb_waiter = futex_top_waiter()
                                        lock(rtmutex->wait_lock)
    __rt_mutex_proxy_start()
       ---> fail
            remove(rtmutex_waiter);
       ---> returns -EDEADLOCK
    unlock(rtmutex->wait_lock)
                                        // acquired wait_lock
                                        wake_futex_pi()
                                        rt_mutex_next_owner()
                                          --> returns NULL
                                          --> WARN
    lock(hb->lock)
    unqueue(hb_waiter)

The problem is caused by the remove(rtmutex_waiter) in the failure case of __rt_mutex_proxy_start() as this lets the unlocker observe a waiter in the hash bucket but no waiter on the rtmutex, i.e. inconsistent state. The original commit handles this correctly for the other early return cases (timeout, signal) by delaying the removal of the rtmutex waiter until the returning task reacquired the hash bucket lock. Treat the failure case of __rt_mutex_proxy_start() in the same way and let the existing cleanup code handle the eventual handover of the rtmutex gracefully. The regular rt_mutex_proxy_start() gains the rtmutex waiter removal for the failure case, so that the other callsites are still operating correctly. Add proper comments to the code so all these details are fully documented. Thanks to Peter for helping with the analysis and writing the really valuable code comments. Fixes: 56222b212e8e ("futex: Drop hb->lock before enqueueing on the rtmutex") Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com> Co-developed-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linux-s390@vger.kernel.org Cc: Stefan Liebler <stli@linux.ibm.com> Cc: Sebastian Sewior <bigeasy@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1901292311410.1950@nanos.tec.linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-12 | dmaengine: imx-dma: fix wrong callback invoke | Leonid Iziumtsev | 1 | -4/+4
commit 341198eda723c8c1cddbb006a89ad9e362502ea2 upstream. Once the "ld_queue" list is not empty, next descriptor will migrate into "ld_active" list. The "desc" variable will be overwritten during that transition. And later the dmaengine_desc_get_callback_invoke() will use it as an argument. As result we invoke wrong callback. That behaviour was in place since: commit fcaaba6c7136 ("dmaengine: imx-dma: fix callback path in tasklet"). But after commit 4cd13c21b207 ("softirq: Let ksoftirqd do its job") things got worse, since possible delay between tasklet_schedule() from DMA irq handler and actual tasklet function execution got bigger. And that gave more time for new DMA request to be submitted and to be put into "ld_queue" list. It has been noticed that DMA issue is causing problems for "mxc-mmc" driver. While stressing the system with heavy network traffic and writing/reading to/from sd card simultaneously the timeout may happen: 10013000.sdhci: mxcmci_watchdog: read time out (status = 0x30004900) That often lead to file system corruption. Signed-off-by: Leonid Iziumtsev <leonid.iziumtsev@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>