path: root/arch
2017-11-15  x86/oprofile/ppro: Do not use __this_cpu*() in preemptible context  (Borislav Petkov, 1 file changed, -2/+2)
commit a743bbeef27b9176987ec0cb7f906ab0ab52d1da upstream. The warning below says it all: BUG: using __this_cpu_read() in preemptible [00000000] code: swapper/0/1 caller is __this_cpu_preempt_check CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc8 #4 Call Trace: dump_stack check_preemption_disabled ? do_early_param __this_cpu_preempt_check arch_perfmon_init op_nmi_init ? alloc_pci_root_info oprofile_arch_init oprofile_init do_one_initcall ... These accessors should not have been used in the first place: it is PPro so no mixed silicon revisions and thus it can simply use boot_cpu_data. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Tested-by: Fengguang Wu <fengguang.wu@intel.com> Fix-creation-mandated-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Robert Richter <rric@kernel.org> Cc: x86@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
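For reference, a minimal C sketch of the pattern in question; the identifiers below mirror the description but are illustrative, not the literal upstream diff:

    /* Buggy: per-CPU access from a preemptible context trips the check above. */
    if (__this_cpu_read(cpu_info.x86_model) == 15) { /* ... */ }

    /* Fixed: PPro-era parts have no mixed silicon revisions, so the boot CPU's
     * copy of the data is sufficient and needs no preemption protection. */
    if (boot_cpu_data.x86_model == 15) { /* ... */ }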
2017-11-15  x86/smpboot: Make optimization of delay calibration work correctly  (Pavel Tatashin, 2 files changed, -10/+9)
commit 76ce7cfe35ef58f34e6ba85327afb5fbf6c3ff9b upstream. If the TSC has constant frequency then the delay calibration can be skipped when it has been calibrated for a package already. This is checked in calibrate_delay_is_known(), but that function is buggy in two aspects: It returns 'false' if (!tsc_disabled && !cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC) which is obviously the reverse of the intended check and the check for the sibling mask cannot work either because the topology links have not been set up yet. Correct the condition and move the call to set_cpu_sibling_map() before invoking calibrate_delay() so the sibling check works correctly. [ tglx: Rewrote changelog ] Fixes: c25323c07345 ("x86/tsc: Use topology functions") Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: peterz@infradead.org Cc: bob.picco@oracle.com Cc: steven.sistare@oracle.com Cc: daniel.m.jordan@oracle.com Link: https://lkml.kernel.org/r/20171028001100.26603-1-pasha.tatashin@oracle.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
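A simplified sketch of the corrected polarity, assuming the condition names quoted above (not the full upstream function):

    unsigned long calibrate_delay_is_known(void)
    {
    	int cpu = smp_processor_id();
    	int constant_tsc = !tsc_disabled &&
    			   cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);

    	/* Without a usable, constant-rate TSC there is nothing to reuse. */
    	if (!constant_tsc)
    		return 0;	/* must calibrate */

    	/* ...otherwise look for an already-calibrated sibling in the same
    	 * package, which only works after set_cpu_sibling_map() has run. */
    	return 0;
    }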
2017-11-15  x86/debug: Handle warnings before the notifier chain, to fix KGDB crash  (Alexander Shishkin, 1 file changed, -3/+7)
commit b8347c2196492f4e1cccde3d92fda1cc2cc7de7e upstream. Commit: 9a93848fe787 ("x86/debug: Implement __WARN() using UD0") turned warnings into UD0, but the fixup code only runs after the notify_die() chain. This is a problem, in particular, with kgdb, which kicks in as if it was a BUG(). Fix this by running the fixup code before the notifier chain in the invalid op handler path. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Tested-by: Ilya Dryomov <idryomov@gmail.com> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Wessel <jason.wessel@windriver.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Weinberger <richard.weinberger@gmail.com> Link: http://lkml.kernel.org/r/20170724100428.19173-1-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
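A rough sketch of the reordering described above; the helper name and exact placement are illustrative:

    static void do_error_trap(struct pt_regs *regs, long error_code, char *str,
    			  unsigned long trapnr, int signr)
    {
    	/* Resolve WARN()/UD0 fixups first, so kgdb and other notify_die()
    	 * users never mistake a warning for a real BUG(). */
    	if (!user_mode(regs) && fixup_bug(regs, trapnr))
    		return;

    	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) !=
    	    NOTIFY_STOP) {
    		/* deliver the signal / handle the trap as before */
    	}
    }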
2017-11-15Revert "x86: CPU: Fix up "cpu MHz" in /proc/cpuinfo"Linus Torvalds3-11/+6
commit ea0ee33988778fb73e4f45e7c73fb735787e2f32 upstream. This reverts commit 941f5f0f6ef5338814145cf2b813cf1f98873e2f. Sadly, it turns out that we really can't just do the cross-CPU IPI to all CPU's to get their proper frequencies, because it's much too expensive on systems with lots of cores. So we'll have to revert this for now, and revisit it using a smarter model (probably doing one system-wide IPI at open time, and doing all the frequency calculations in parallel). Reported-by: WANG Chao <chao.wang@ucloud.cn> Reported-by: Ingo Molnar <mingo@kernel.org> Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  KVM: PPC: Book3S HV: Fix exclusion between HPT resizing and other HPT updates  (Paul Mackerras, 2 files changed, -10/+29)
commit 38c53af853069adf87181684370d7b8866d6387b upstream.

Commit 5e9859699aba ("KVM: PPC: Book3S HV: Outline of KVM-HV HPT resizing implementation", 2016-12-20) added code that tries to exclude any use or update of the hashed page table (HPT) while the HPT resizing code is iterating through all the entries in the HPT. It does this by taking the kvm->lock mutex, clearing the kvm->arch.hpte_setup_done flag and then sending an IPI to all CPUs in the host. The idea is that any VCPU task that tries to enter the guest will see that the hpte_setup_done flag is clear and therefore call kvmppc_hv_setup_htab_rma, which also takes the kvm->lock mutex and will therefore block until we release kvm->lock.

However, any VCPU that is already in the guest, or is handling a hypervisor page fault or hypercall, can re-enter the guest without rechecking the hpte_setup_done flag. The IPI will cause a guest exit of any VCPUs that are currently in the guest, but does not prevent those VCPU tasks from immediately re-entering the guest.

The result is that after resize_hpt_rehash_hpte() has made a HPTE absent, a hypervisor page fault can occur and make that HPTE present again. This includes updating the rmap array for the guest real page, meaning that we now have a pointer in the rmap array which connects with pointers in the old rev array but not the new rev array. In fact, if the HPT is being reduced in size, the pointer in the rmap array could point outside the bounds of the new rev array. If that happens, we can get a host crash later on such as this one:

[91652.628516] Unable to handle kernel paging request for data at address 0xd0000000157fb10c
[91652.628668] Faulting instruction address: 0xc0000000000e2640
[91652.628736] Oops: Kernel access of bad area, sig: 11 [#1]
[91652.628789] LE SMP NR_CPUS=1024 NUMA PowerNV
[91652.628847] Modules linked in: binfmt_misc vhost_net vhost tap xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables ses enclosure scsi_transport_sas i2c_opal ipmi_powernv ipmi_devintf i2c_core ipmi_msghandler powernv_op_panel nfsd auth_rpcgss oid_registry nfs_acl lockd grace sunrpc kvm_hv kvm_pr kvm scsi_dh_alua dm_service_time dm_multipath tg3 ptp pps_core [last unloaded: stap_552b612747aec2da355051e464fa72a1_14259]
[91652.629566] CPU: 136 PID: 41315 Comm: CPU 21/KVM Tainted: G O 4.14.0-1.rc4.dev.gitb27fc5c.el7.centos.ppc64le #1
[91652.629684] task: c0000007a419e400 task.stack: c0000000028d8000
[91652.629750] NIP: c0000000000e2640 LR: d00000000c36e498 CTR: c0000000000e25f0
[91652.629829] REGS: c0000000028db5d0 TRAP: 0300 Tainted: G O (4.14.0-1.rc4.dev.gitb27fc5c.el7.centos.ppc64le)
[91652.629932] MSR: 900000010280b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 44022422 XER: 00000000
[91652.630034] CFAR: d00000000c373f84 DAR: d0000000157fb10c DSISR: 40000000 SOFTE: 1
[91652.630034] GPR00: d00000000c36e498 c0000000028db850 c000000001403900 c0000007b7960000
[91652.630034] GPR04: d0000000117fb100 d000000007ab00d8 000000000033bb10 0000000000000000
[91652.630034] GPR08: fffffffffffffe7f 801001810073bb10 d00000000e440000 d00000000c373f70
[91652.630034] GPR12: c0000000000e25f0 c00000000fdb9400 f000000003b24680 0000000000000000
[91652.630034] GPR16: 00000000000004fb 00007ff7081a0000 00000000000ec91a 000000000033bb10
[91652.630034] GPR20: 0000000000010000 00000000001b1190 0000000000000001 0000000000010000
[91652.630034] GPR24: c0000007b7ab8038 d0000000117fb100 0000000ec91a1190 c000001e6a000000
[91652.630034] GPR28: 00000000033bb100 000000000073bb10 c0000007b7960000 d0000000157fb100
[91652.630735] NIP [c0000000000e2640] kvmppc_add_revmap_chain+0x50/0x120
[91652.630806] LR [d00000000c36e498] kvmppc_book3s_hv_page_fault+0xbb8/0xc40 [kvm_hv]
[91652.630884] Call Trace:
[91652.630913] [c0000000028db850] [c0000000028db8b0] 0xc0000000028db8b0 (unreliable)
[91652.630996] [c0000000028db8b0] [d00000000c36e498] kvmppc_book3s_hv_page_fault+0xbb8/0xc40 [kvm_hv]
[91652.631091] [c0000000028db9e0] [d00000000c36a078] kvmppc_vcpu_run_hv+0xdf8/0x1300 [kvm_hv]
[91652.631179] [c0000000028dbb30] [d00000000c2248c4] kvmppc_vcpu_run+0x34/0x50 [kvm]
[91652.631266] [c0000000028dbb50] [d00000000c220d54] kvm_arch_vcpu_ioctl_run+0x114/0x2a0 [kvm]
[91652.631351] [c0000000028dbbd0] [d00000000c2139d8] kvm_vcpu_ioctl+0x598/0x7a0 [kvm]
[91652.631433] [c0000000028dbd40] [c0000000003832e0] do_vfs_ioctl+0xd0/0x8c0
[91652.631501] [c0000000028dbde0] [c000000000383ba4] SyS_ioctl+0xd4/0x130
[91652.631569] [c0000000028dbe30] [c00000000000b8e0] system_call+0x58/0x6c
[91652.631635] Instruction dump:
[91652.631676] fba1ffe8 fbc1fff0 fbe1fff8 f8010010 f821ffa1 2fa70000 793d0020 e9432110
[91652.631814] 7bbf26e4 7c7e1b78 7feafa14 409e0094 <807f000c> 786326e4 7c6a1a14 93a40008
[91652.631959] ---[ end trace ac85ba6db72e5b2e ]---

To fix this, we tighten up the way that the hpte_setup_done flag is checked to ensure that it does provide the guarantee that the resizing code needs. In kvmppc_run_core(), we check the hpte_setup_done flag after disabling interrupts and refuse to enter the guest if it is clear (for a HPT guest). The code that checks hpte_setup_done and calls kvmppc_hv_setup_htab_rma() is moved from kvmppc_vcpu_run_hv() to a point inside the main loop in kvmppc_run_vcpu(), ensuring that we don't just spin endlessly calling kvmppc_run_core() while hpte_setup_done is clear, but instead have a chance to block on the kvm->lock mutex.

Finally we also check hpte_setup_done inside the region in kvmppc_book3s_hv_page_fault() where the HPTE is locked and we are about to update the HPTE, and bail out if it is clear. If another CPU is inside kvm_vm_ioctl_resize_hpt_commit() and has cleared hpte_setup_done, then we know that either we are looking at a HPTE that resize_hpt_rehash_hpte() has not yet processed, which is OK, or else we will see hpte_setup_done clear and refuse to update it, because of the full barrier formed by the unlock of the HPTE in resize_hpt_rehash_hpte() combined with the locking of the HPTE in kvmppc_book3s_hv_page_fault().

Fixes: 5e9859699aba ("KVM: PPC: Book3S HV: Outline of KVM-HV HPT resizing implementation")
Reported-by: Satheesh Rajendran <satheera@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: AR7: Ensure that serial ports are properly set up  (Oswald Buddenhagen, 1 file changed, -0/+1)
commit b084116f8587b222a2c5ef6dcd846f40f24b9420 upstream. Without UPF_FIXED_TYPE, the data from the PORT_AR7 uart_config entry is never copied, resulting in a dead port. Fixes: 154615d55459 ("MIPS: AR7: Use correct UART port type") Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> [jonas.gorski: add Fixes tag] Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Cc: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> Cc: linux-mips@linux-mips.org Cc: linux-serial@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17543/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
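A minimal sketch of the idea, assuming a uart_port set up for PORT_AR7; all fields other than .flags are illustrative placeholders:

    struct uart_port uart_port = {
    	.type	  = PORT_AR7,
    	/* UPF_FIXED_TYPE tells the serial core to trust .type and copy the
    	 * PORT_AR7 entry from uart_config[] instead of probing, which is
    	 * what previously left the port dead. */
    	.flags	  = UPF_BOOT_AUTOCONF | UPF_FIXED_TYPE,
    	.iotype	  = UPIO_MEM32,
    	.regshift = 2,
    };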
2017-11-15  MIPS: AR7: Defer registration of GPIO  (Jonas Gorski, 2 files changed, -2/+4)
commit e6b03ab63b4d270e0249f96536fde632409dc1dc upstream. When called from prom init code, ar7_gpio_init() will fail as it will call gpiochip_add() which relies on a working kmalloc() to alloc the gpio_desc array and kmalloc is not useable yet at prom init time. Move ar7_gpio_init() to ar7_register_devices() (a device_initcall) where kmalloc works. Fixes: 14e85c0e69d5 ("gpio: remove gpio_descs global array") Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Cc: linux-mips@linux-mips.org Cc: linux-serial@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17542/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
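A sketch of the move; the surrounding registration logic is illustrative, only the placement of ar7_gpio_init() matters:

    static int __init ar7_register_devices(void)
    {
    	int res;

    	/* Safe here: by device_initcall() time the slab allocator is up, so
    	 * gpiochip_add()'s kmalloc() of the gpio_desc array can succeed. */
    	res = ar7_gpio_init();
    	if (res)
    		pr_warn("unable to register ar7 gpios: %d\n", res);

    	return 0;
    }
    device_initcall(ar7_register_devices);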
2017-11-15  MIPS: BMIPS: Fix missing cbr address  (Jaedon Shin, 1 file changed, -2/+2)
commit ea4b3afe1eac8f88bb453798a084fba47a1f155a upstream. Fix NULL pointer access in BMIPS3300 RAC flush. Fixes: 738a3f79027b ("MIPS: BMIPS: Add early CPU initialization code") Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16423/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: Fix CM region target definitions  (Paul Burton, 1 file changed, -2/+2)
commit 6a6cba1d945a7511cdfaf338526871195e420762 upstream. The default CM target field in the GCR_BASE register is encoded with 0 meaning memory & 1 being reserved. However the definitions we use for those bits effectively get these two values backwards - likely because they were copied from the definitions for the CM regions where the target is encoded differently. This results in us setting up GCR_BASE with the reserved target value by default, rather than targeting memory as intended. Although we currently seem to get away with this it's not a great idea to rely upon. Fix this by changing our macros to match the documented target values. The incorrect encoding became used as of commit 9f98f3dd0c51 ("MIPS: Add generic CM probe & access code") in the Linux v3.15 cycle, and was likely carried forwards from older but unused code introduced by commit 39b8d5254246 ("[MIPS] Add support for MIPS CMP platform.") in the v2.6.26 cycle. Fixes: 9f98f3dd0c51 ("MIPS: Add generic CM probe & access code") Signed-off-by: Paul Burton <paul.burton@mips.com> Reported-by: Matt Redfearn <matt.redfearn@mips.com> Reviewed-by: James Hogan <jhogan@kernel.org> Cc: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.15+ Patchwork: https://patchwork.linux-mips.org/patch/17562/ Signed-off-by: James Hogan <jhogan@kernel.org> [jhogan@kernel.org: Backported 3.15..4.13] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  ARM: 8720/1: ensure dump_instr() checks addr_limit  (Mark Rutland, 1 file changed, -10/+18)
commit b9dd05c7002ee0ca8b676428b2268c26399b5e31 upstream. When CONFIG_DEBUG_USER is enabled, it's possible for a user to deliberately trigger dump_instr() with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. So that we can use the same code to dump user instructions and kernel instructions, the common dumping code is factored out to __dump_instr(), with the fs manipulated appropriately in dump_instr() around calls to this. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
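A condensed sketch of the factoring described above; the real function dumps several instructions and handles Thumb, and the exact layout here is illustrative:

    static void __dump_instr(const char *lvl, struct pt_regs *regs)
    {
    	unsigned long addr = instruction_pointer(regs);
    	u16 val;

    	/* get_user() honours addr_limit, so a crafted kernel address simply
    	 * faults instead of being read. */
    	if (get_user(val, (u16 __user *)addr))
    		return;
    	printk("%sCode: %04x\n", lvl, val);
    }

    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
    	mm_segment_t fs;

    	if (!user_mode(regs)) {
    		/* kernel text: widen the limit so the same helper can read it */
    		fs = get_fs();
    		set_fs(KERNEL_DS);
    		__dump_instr(lvl, regs);
    		set_fs(fs);
    	} else {
    		__dump_instr(lvl, regs);
    	}
    }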
2017-11-15  crypto: x86/sha256-mb - fix panic due to unaligned access  (Andrey Ryabinin, 1 file changed, -6/+6)
commit 5dfeaac15f2b1abb5a53c9146041c7235eb9aa04 upstream. struct sha256_ctx_mgr allocated in sha256_mb_mod_init() via kzalloc() and later passed in sha256_mb_flusher_mgr_flush_avx2() function where instructions vmovdqa used to access the struct. vmovdqa requires 16-bytes aligned argument, but nothing guarantees that struct sha256_ctx_mgr will have that alignment. Unaligned vmovdqa will generate GP fault. Fix this by replacing vmovdqa with vmovdqu which doesn't have alignment requirements. Fixes: a377c6b1876e ("crypto: sha256-mb - submit/flush routines for AVX2") Reported-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Tim Chen Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  crypto: x86/sha1-mb - fix panic due to unaligned access  (Andrey Ryabinin, 1 file changed, -6/+6)
commit d041b557792c85677f17e08eee535eafbd6b9aa2 upstream. struct sha1_ctx_mgr allocated in sha1_mb_mod_init() via kzalloc() and later passed in sha1_mb_flusher_mgr_flush_avx2() function where instructions vmovdqa used to access the struct. vmovdqa requires 16-bytes aligned argument, but nothing guarantees that struct sha1_ctx_mgr will have that alignment. Unaligned vmovdqa will generate GP fault. Fix this by replacing vmovdqa with vmovdqu which doesn't have alignment requirements. Fixes: 2249cbb53ead ("crypto: sha-mb - SHA1 multibuffer submit and flush routines for AVX2") Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  x86/mcelog: Get rid of RCU remnants  (Borislav Petkov, 1 file changed, -94/+27)
commit 7298f08ea8870d44d36c7d6cd07dd0303faef6c2 upstream. Jeremy reported a suspicious RCU usage warning in mcelog. /dev/mcelog is called in process context now as part of the notifier chain and doesn't need any of the fancy RCU and lockless accesses which it did in atomic context. Axe it all in favor of a simple mutex synchronization which cures the problem reported. Fixes: 5de97c9f6d85 ("x86/mce: Factor out and deprecate the /dev/mcelog driver") Reported-by: Jeremy Cline <jcline@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-and-tested-by: Tony Luck <tony.luck@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: linux-edac@vger.kernel.org Cc: Laura Abbott <labbott@redhat.com> Link: https://lkml.kernel.org/r/20171101164754.xzzmskl4ngrqc5br@pd.tnic Link: https://bugzilla.redhat.com/show_bug.cgi?id=1498969 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
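The resulting locking shape, as a heavily abbreviated sketch (details and error handling elided; not the literal upstream code):

    static DEFINE_MUTEX(mce_chrdev_read_mutex);

    static ssize_t mce_chrdev_read(struct file *filp, char __user *ubuf,
    			       size_t usize, loff_t *off)
    {
    	ssize_t err = 0;

    	mutex_lock(&mce_chrdev_read_mutex);
    	/* plain array/index bookkeeping is enough: /dev/mcelog is now filled
    	 * from process context, so no RCU or lockless tricks are needed */
    	mutex_unlock(&mce_chrdev_read_mutex);
    	return err;
    }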
2017-11-08  powerpc/kprobes: Dereference function pointers only if the address does not belong to kernel text  (Naveen N. Rao, 1 file changed, -1/+6)
commit e6c4dcb308160115287afd87afb63b5684d75a5b upstream. This makes the changes introduced in commit 83e840c770f2c5 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") to be specific to the kprobe subsystem. We previously changed ppc_function_entry() to always check the provided address to confirm if it needed to be dereferenced. This is actually only an issue for kprobe blacklisted asm labels (through use of _ASM_NOKPROBE_SYMBOL) and can cause other issues with ftrace. Also, the additional checks are not really necessary for our other uses. As such, move this check to the kprobes subsystem. Fixes: 83e840c770f2 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  x86: CPU: Fix up "cpu MHz" in /proc/cpuinfo  (Rafael J. Wysocki, 3 files changed, -6/+11)
commit 941f5f0f6ef5338814145cf2b813cf1f98873e2f upstream. Commit 890da9cf0983 (Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz"") is not sufficient to restore the previous behavior of "cpu MHz" in /proc/cpuinfo on x86 due to some changes made after the commit it has reverted. To address this, make the code in question use arch_freq_get_on_cpu() which also is used by cpufreq for reporting the current frequency of CPUs and since that function doesn't really depend on cpufreq in any way, drop the CONFIG_CPU_FREQ dependency for the object file containing it. Also refactor arch_freq_get_on_cpu() somewhat to avoid IPIs and return cached values right away if it is called very often over a short time (to prevent user space from triggering IPI storms through it). Fixes: 890da9cf0983 (Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz"") Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz""Linus Torvalds1-2/+8
commit 890da9cf098364b11a7f7f5c22fa652531624d03 upstream. This reverts commit 51204e0639c49ada02fd823782ad673b6326d748. There wasn't really any good reason for it, and people are complaining (rightly) that it broke existing practice. Cc: Len Brown <len.brown@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  MIPS: SMP: Fix deadlock & online race  (Matt Redfearn, 1 file changed, -6/+16)
commit 9e8c399a88f0b87e41a894911475ed2a8f8dff9e upstream. Commit 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") effectively reverted commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online") and thus has reinstated the possibility of deadlock. The commit was based on testing of kernel v4.4, where the CPU hotplug core code issued a BUG() if the starting CPU is not marked online when the boot CPU returns from __cpu_up. The commit fixes this race (in v4.4), but re-introduces the deadlock situation. As noted in the commit message, upstream differs in this area. Commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up") adds a completion event in the CPU hotplug core code, making this race impossible. However, people were unhappy with relying on the core code to do the right thing. To address the issues both commits were trying to fix, add a second completion event in the MIPS smp hotplug path. It removes the possibility of a race, since the MIPS smp hotplug code now synchronises both the boot and secondary CPUs before they return to the hotplug core code. It also addresses the deadlock by ensuring that the secondary CPU is not marked online before its counters are synchronised. This fix should also be backported to fix the race condition introduced by the backport of commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online"), though really that race only existed before commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up"). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Fixes: 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") CC: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com> Patchwork: https://patchwork.linux-mips.org/patch/17376/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
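A rough sketch of the double handshake, with the ordering taken from the description above; the function names carry a _sketch suffix because this is not the literal upstream diff:

    static DECLARE_COMPLETION(cpu_starting);
    static DECLARE_COMPLETION(cpu_running);

    /* secondary CPU side, from start_secondary() */
    static void secondary_sync_sketch(int cpu)
    {
    	complete(&cpu_starting);	/* tell the boot CPU we are up */
    	synchronise_count_slave(cpu);
    	set_cpu_online(cpu, true);	/* only after the counters are in sync */
    	complete(&cpu_running);		/* boot CPU may now leave __cpu_up() */
    }

    /* boot CPU side, from __cpu_up() */
    static void boot_cpu_sync_sketch(int cpu)
    {
    	wait_for_completion(&cpu_starting);
    	synchronise_count_master(cpu);
    	wait_for_completion(&cpu_running);
    }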
2017-11-08  MIPS: microMIPS: Fix incorrect mask in insn_table_MM  (Gustavo A. R. Silva, 1 file changed, -1/+1)
commit 77238e76b9156d28d86c1e31c00ed2960df0e4de upstream. It seems that this is a typo error and the proper bit masking is "RT | RS" instead of "RS | RS". This issue was detected with the help of Coccinelle. Fixes: d6b3314b49e1 ("MIPS: uasm: Add lh uam instruction") Reported-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/17551/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  MIPS: smp-cmp: Use right include for task_struct  (Jason A. Donenfeld, 1 file changed, -1/+1)
commit f677b77050c144bd4c515b91ea48bd0efe82355e upstream.

When task_struct was moved, this MIPS code was neglected. Evidently nobody is using it anymore. This fixes this build error:

In file included from ./arch/mips/include/asm/thread_info.h:15:0,
                 from ./include/linux/thread_info.h:37,
                 from ./include/asm-generic/current.h:4,
                 from ./arch/mips/include/generated/asm/current.h:1,
                 from ./include/linux/sched.h:11,
                 from arch/mips/kernel/smp-cmp.c:22:
arch/mips/kernel/smp-cmp.c: In function ‘cmp_boot_secondary’:
./arch/mips/include/asm/processor.h:384:41: error: implicit declaration of function ‘task_stack_page’ [-Werror=implicit-function-declaration]
 #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
                                         ^
arch/mips/kernel/smp-cmp.c:84:21: note: in expansion of macro ‘__KSTK_TOS’
   unsigned long sp = __KSTK_TOS(idle);
                      ^~~~~~~~~~

Fixes: f3ac60671954 ("sched/headers: Move task-stack related APIs from <linux/sched.h> to <linux/sched/task_stack.h>")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Patchwork: https://patchwork.linux-mips.org/patch/17522/
Signed-off-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
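The probable shape of the fix in arch/mips/kernel/smp-cmp.c is simply pulling in the header that now provides task_stack_page():

    #include <linux/sched.h>
    #include <linux/sched/task_stack.h>	/* task_stack_page(), used by __KSTK_TOS() */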
2017-11-08  MIPS: bpf: Fix a typo in build_one_insn()  (Wei Yongjun, 1 file changed, -1/+1)
commit 6a2932a463d526e362a6b4e112be226f1d18d088 upstream. Fix a typo in build_one_insn(). Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Patchwork: https://patchwork.linux-mips.org/patch/17491/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08Revert "powerpc64/elfv1: Only dereference function descriptor for non-text ↵Naveen N. Rao1-9/+1
symbols" commit 63be1a81e40733ecd175713b6a7558dc43f00851 upstream. This reverts commit 83e840c770f2c5 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols"). Chandan reported that on newer kernels, trying to enable function_graph tracer on ppc64 (BE) locks up the system with the following trace: Unable to handle kernel paging request for data at address 0x600000002fa30010 Faulting instruction address: 0xc0000000001f1300 Thread overran stack, or stack corrupted Oops: Kernel access of bad area, sig: 11 [#1] BE SMP NR_CPUS=2048 DEBUG_PAGEALLOC NUMA pSeries Modules linked in: CPU: 1 PID: 6586 Comm: bash Not tainted 4.14.0-rc3-00162-g6e51f1f-dirty #20 task: c000000625c07200 task.stack: c000000625c07310 NIP: c0000000001f1300 LR: c000000000121cac CTR: c000000000061af8 REGS: c000000625c088c0 TRAP: 0380 Not tainted (4.14.0-rc3-00162-g6e51f1f-dirty) MSR: 8000000000001032 <SF,ME,IR,DR,RI> CR: 28002848 XER: 00000000 CFAR: c0000000001f1320 SOFTE: 0 ... NIP [c0000000001f1300] .__is_insn_slot_addr+0x30/0x90 LR [c000000000121cac] .kernel_text_address+0x18c/0x1c0 Call Trace: [c000000625c08b40] [c0000000001bd040] .is_module_text_address+0x20/0x40 (unreliable) [c000000625c08bc0] [c000000000121cac] .kernel_text_address+0x18c/0x1c0 [c000000625c08c50] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c08cf0] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c08d60] [c000000000121b40] .kernel_text_address+0x20/0x1c0 [c000000625c08df0] [c000000000061960] .prepare_ftrace_return+0x50/0x130 ... [c000000625c0ab30] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c0abd0] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c0ac40] [c000000000121b40] .kernel_text_address+0x20/0x1c0 [c000000625c0acd0] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c0ad70] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c0ade0] [c000000000121b40] .kernel_text_address+0x20/0x1c0 This is because ftrace is using ppc_function_entry() for obtaining the address of return_to_handler() in prepare_ftrace_return(). The call to kernel_text_address() itself gets traced and we end up in a recursive loop. Fixes: 83e840c770f2 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") Reported-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  ARM: 8715/1: add a private asm/unaligned.h  (Arnd Bergmann, 2 files changed, -1/+27)
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream. The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags. On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction including stm/ldm and ldrd/strd that will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler. This adds an ARM specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the trapping instructions multi-register variants. The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues. There are two follow-ups that I think we should also work on, but not backport to stable kernels, first to change the asm-generic version of the header to remove the ARM special case, and second to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
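The le_struct.h idea in miniature, as an illustrative sketch rather than the kernel header itself: a packed wrapper struct tells GCC the data may be unaligned, so it emits byte-safe accesses instead of trapping ldm/ldrd/strd sequences.

    struct __una_u32 { __le32 x; } __attribute__((packed));

    static inline u32 get_unaligned_le32_sketch(const void *p)
    {
    	const struct __una_u32 *ptr = p;

    	/* the packed struct forces an alignment of 1, so the compiler emits
    	 * loads that are safe on any address */
    	return le32_to_cpu(ptr->x);
    }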
2017-11-08  ARM: dts: mvebu: pl310-cache disable double-linefill  (Yan Markman, 3 files changed, -6/+6)
commit cda80a82ac3e89309706c027ada6ab232be1d640 upstream. Under heavy system stress mvebu SoC using Cortex A9 sporadically encountered instability issues. The "double linefill" feature of L2 cache was identified as causing dependency between read and write which lead to the deadlock. Especially, it was the cause of deadlock seen under heavy PCIe traffic, as this dependency violates PCIE overtaking rule. Fixes: c8f5a878e554 ("ARM: mvebu: use DT properties to fine-tune the L2 configuration") Signed-off-by: Yan Markman <ymarkman@marvell.com> Signed-off-by: Igal Liberman <igall@marvell.com> Signed-off-by: Nadav Haklai <nadavh@marvell.com> [gregory.clement@free-electrons.com: reformulate commit log, add Armada 375 and add Fixes tag] Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  arm/arm64: kvm: Disable branch profiling in HYP code  (Julien Thierry, 2 files changed, -2/+2)
commit f9b269f3098121b5d54aaf822e0898c8ed1d3fec upstream. When HYP code runs into branch profiling code, it attempts to jump to unmapped memory, causing a HYP Panic. Disable the branch profiling for code designed to run at HYP mode. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  arm/arm64: KVM: set right LR register value for 32 bit guest when inject abort  (Dongjiu Geng, 2 files changed, -5/+17)
commit fd6c8c206fc5d0717b0433b191de0715122f33bb upstream.

When an exception is trapped to EL2, hardware uses ELR_ELx to hold the current fault instruction address. If KVM wants to inject an abort into a 32 bit guest, it needs to set the LR register for the guest to emulate this abort having happened in the guest. Because the ARM32 architecture uses pipelined execution, the LR value has an offset relative to the fault instruction address. The offsets applied to the link value for each exception type are shown below and should be added to the ARM32 link register (LR). Table taken from ARMv8 ARM DDI0487B-B, table G1-10:

Exception              Offset, for PE state of:
                       A32    T32
Undefined Instruction  +4     +2
Prefetch Abort         +4     +4
Data Abort             +8     +8
IRQ or FIQ             +4     +4

[ Removed unused variables in inject_abt to avoid compile warnings. -- Christoffer ]

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Tested-by: Haibin Zhang <zhanghaibin7@huawei.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  arm64: ensure __dump_instr() checks addr_limit  (Mark Rutland, 1 file changed, -1/+1)
commit 7a7003b1da010d2b0d1dc8bf21c10f5c73b389f1 upstream. It's possible for a user to deliberately trigger __dump_instr with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. Where we use __dump_instr() on kernel text, we already switch to KERNEL_DS, so this shouldn't adversely affect those cases. Fixes: 60ffc30d5652810d ("arm64: Exception handling") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  powerpc/xive: Fix the size of the cpumask used in xive_find_target_in_mask()  (Cédric Le Goater, 1 file changed, -1/+1)
commit a9dadc1c512807f955f0799e85830b420da47932 upstream.

When called from xive_irq_startup(), the size of the cpumask can be larger than nr_cpu_ids. This can result in a WARN_ON such as:

WARNING: CPU: 10 PID: 1 at ../arch/powerpc/sysdev/xive/common.c:476 xive_find_target_in_mask+0x110/0x2f0
...
NIP [c00000000008a310] xive_find_target_in_mask+0x110/0x2f0
LR [c00000000008a2e4] xive_find_target_in_mask+0xe4/0x2f0
Call Trace:
  xive_find_target_in_mask+0x74/0x2f0 (unreliable)
  xive_pick_irq_target.isra.1+0x200/0x230
  xive_irq_startup+0x60/0x180
  irq_startup+0x70/0xd0
  __setup_irq+0x7bc/0x880
  request_threaded_irq+0x14c/0x2c0
  request_event_sources_irqs+0x100/0x180
  __machine_initcall_pseries_init_ras_IRQ+0x104/0x134
  do_one_initcall+0x68/0x1d0
  kernel_init_freeable+0x290/0x374
  kernel_init+0x24/0x170
  ret_from_kernel_thread+0x5c/0x74

This happens because we're being called with our affinity mask set to irq_default_affinity. That in turn was populated using cpumask_setall(), which sets NR_CPUs worth of bits, not nr_cpu_ids worth. Finally cpumask_weight() will return > nr_cpu_ids when passed a mask which has > nr_cpu_ids bits set.

Fix it by limiting the value returned by cpumask_weight().

Signed-off-by: Cédric Le Goater <clg@kaod.org>
[mpe: Add change log details on actual cause]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
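A sketch of the clamp; the exact bound used upstream may differ, and the wrapper function is purely illustrative:

    static int xive_target_count_sketch(const struct cpumask *mask)
    {
    	/* never trust a caller-supplied mask to carry <= nr_cpu_ids bits:
    	 * irq_default_affinity is filled with cpumask_setall() (NR_CPUS bits) */
    	int num = min_t(int, cpumask_weight(mask), nr_cpu_ids);

    	return num;	/* safe to use as an iteration count */
    }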
2017-11-02  x86/cpu/AMD: Apply the Erratum 688 fix when the BIOS doesn't  (Borislav Petkov, 1 file changed, -0/+41)
commit bfc1168de949cd3e9ca18c3480b5085deff1ea7c upstream. Some F14h machines have an erratum which, "under a highly specific and detailed set of internal timing conditions" can lead to skipping instructions and RIP corruption. Add the fix for those machines when their BIOS doesn't apply it or there simply isn't BIOS update for them. Tested-by: <mirh@protonmail.ch> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sherry Hurwitz <sherry.hurwitz@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yazen Ghannam <Yazen.Ghannam@amd.com> Link: http://lkml.kernel.org/r/20171022104731.28249-1-bp@alien8.de Link: https://bugzilla.kernel.org/show_bug.cgi?id=197285 [ Added pr_info() that we activated the workaround. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  s390/kvm: fix detection of guest machine checks  (Martin Schwidefsky, 1 file changed, -2/+5)
commit 0a5e2ec2647737907d267c09dc9a25fab1468865 upstream. The new detection code for guest machine checks added a check based on %r11 to .Lcleanup_sie to distinguish between normal asynchronous interrupts and machine checks. But the function is called from the program check handler as well with an undefined value in %r11. The effect is that all program exceptions pointing to the SIE instruction will set the CIF_MCCK_GUEST bit. The bit stays set for the CPU until the next machine check comes in which will incorrectly be interpreted as a guest machine check. The simplest fix is to stop using .Lcleanup_sie in the program check handler and duplicate a few instructions. Fixes: c929500d7a5a ("s390/nmi: s390: New low level handling for machine check happening in guest") Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  KVM: PPC: Book3S: Protect kvmppc_gpa_to_ua() with SRCU  (Alexey Kardashevskiy, 1 file changed, -9/+14)
commit 8f6a9f0d0604817f7c8d4376fd51718f1bf192ee upstream.

kvmppc_gpa_to_ua() accesses KVM memory slot array via srcu_dereference_check() and this produces warnings from RCU like below. This extends the existing srcu_read_lock/unlock to cover that kvmppc_gpa_to_ua() as well. We did not hit this before as this lock is not needed for the realmode handlers and hash guests would use the realmode path all the time; however the radix guests are always redirected to the virtual mode handlers and hence the warning.

[ 68.253798] ./include/linux/kvm_host.h:575 suspicious rcu_dereference_check() usage!
[ 68.253799] other info that might help us debug this:
[ 68.253802] rcu_scheduler_active = 2, debug_locks = 1
[ 68.253804] 1 lock held by qemu-system-ppc/6413:
[ 68.253806] #0: (&vcpu->mutex){+.+.}, at: [<c00800000e3c22f4>] vcpu_load+0x3c/0xc0 [kvm]
[ 68.253826] stack backtrace:
[ 68.253830] CPU: 92 PID: 6413 Comm: qemu-system-ppc Tainted: G W 4.14.0-rc3-00553-g432dcba58e9c-dirty #72
[ 68.253833] Call Trace:
[ 68.253839] [c000000fd3d9f790] [c000000000b7fcc8] dump_stack+0xe8/0x160 (unreliable)
[ 68.253845] [c000000fd3d9f7d0] [c0000000001924c0] lockdep_rcu_suspicious+0x110/0x180
[ 68.253851] [c000000fd3d9f850] [c0000000000e825c] kvmppc_gpa_to_ua+0x26c/0x2b0
[ 68.253858] [c000000fd3d9f8b0] [c00800000e3e1984] kvmppc_h_put_tce+0x12c/0x2a0 [kvm]

Fixes: 121f80ba68f1 ("KVM: PPC: VFIO: Add in-kernel acceleration for VFIO")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
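A sketch of the widened read-side section; the wrapper function and surrounding context are simplified and illustrative:

    long kvmppc_h_put_tce_sketch(struct kvm_vcpu *vcpu, unsigned long gpa)
    {
    	unsigned long ua = 0;
    	long ret;
    	int idx;

    	idx = srcu_read_lock(&vcpu->kvm->srcu);
    	/* kvmppc_gpa_to_ua() dereferences the memslot array, so it must sit
    	 * inside the SRCU read-side critical section */
    	ret = kvmppc_gpa_to_ua(vcpu->kvm, gpa, &ua, NULL);
    	srcu_read_unlock(&vcpu->kvm->srcu, idx);

    	return ret;
    }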
2017-11-02  KVM: PPC: Book3S HV: POWER9 more doorbell fixes  (Nicholas Piggin, 1 file changed, -0/+5)
commit 2cde3716321ec64a1faeaf567bd94100c7b4160f upstream. - Add another case where msgsync is required. - Required barrier sequence for global doorbells is msgsync ; lwsync When msgsnd is used for IPIs to other cores, msgsync must be executed by the target to order stores performed on the source before its msgsnd (provided the source executes the appropriate sync). Fixes: 1704a81ccebc ("KVM: PPC: Book3S HV: Use msgsnd for IPIs to other cores on POWER9") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  KVM: PPC: Fix oops when checking KVM_CAP_PPC_HTM  (Greg Kurz, 1 file changed, -2/+1)
commit ac64115a66c18c01745bbd3c47a36b124e5fd8c0 upstream.

The following program causes a kernel oops:

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

main()
{
    int fd = open("/dev/kvm", O_RDWR);
    ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HTM);
}

This happens because when using the global KVM fd with KVM_CHECK_EXTENSION, kvm_vm_ioctl_check_extension() gets called with a NULL kvm argument, which gets dereferenced in is_kvmppc_hv_enabled(). Spotted while reading the code.

Let's use the hv_enabled fallback variable, like everywhere else in this function.

Fixes: 23528bb21ee2 ("KVM: PPC: Introduce KVM_CAP_PPC_HTM")
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  arm64: dts: rockchip: correct vqmmc voltage for rk3399 platforms  (Shawn Lin, 1 file changed, -2/+2)
commit b31ce3041787b61f2dad39d2dcda5c4a81d10e2b upstream. The vcc_sd or vcc_sdio used for IO voltage for sdmmc and sdio interface on rk3399 platform have a limitation that it can't be larger than 3.0v, otherwise it has a potential risk for the chip. Correct all of them. Fixes: 171582e00db1 ("arm64: dts: rockchip: add support for firefly-rk3399 board") Fixes: 2c66fc34e945 ("arm64: dts: rockchip: add RK3399-Q7 (Puma) SoM") Fixes: 8164a84cca12 ("arm64: dts: rockchip: Add support for rk3399 sapphire SOM") Cc: stable@vger.kernel.org Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com> Tested-by: Klaus Goger <klaus.goger@theobroma-systems.com> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  x86/microcode/intel: Disable late loading on model 79  (Borislav Petkov, 1 file changed, -0/+19)
commit 723f2828a98c8ca19842042f418fb30dd8cfc0f7 upstream. Blacklist Broadwell X model 79 for late loading due to an erratum. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Tony Luck <tony.luck@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20171018111225.25635-1-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  ARM: dts: sun6i: Fix endpoint IDs in second display pipeline  (Chen-Yu Tsai, 1 file changed, -8/+8)
commit a231d2783c332ef3e3ba238e82dbe599ff41ba14 upstream. When the second display pipeline device nodes for the A31/A31s were added, it was not known that the TCONs could (through either DRCs) select either backend as their input. Thus in the endpoints connecting these components together, the endpoint IDs were set to 0, while in fact they should have been set to 1. Fixes: 9a26882a7378 ("ARM: dts: sun6i: Add second display pipeline device nodes") Signed-off-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  parisc: Fix detection of nonsynchronous cr16 cycle counters  (Helge Deller, 1 file changed, -1/+4)
commit 8642b31ba9eef8a01845146a26682d4869e62513 upstream. For CPUs which have an unknown or invalid CPU location (physical location) assume that their cycle counters aren't synchronized across CPUs. Signed-off-by: Helge Deller <deller@gmx.de> Fixes: c8c3735997a3 ("parisc: Enhance detection of synchronous cr16 clocksources") Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  parisc: Fix double-word compare and exchange in LWS code on 32-bit kernels  (John David Anglin, 1 file changed, -3/+3)
commit 374b3bf8e8b519f61eb9775888074c6e46b3bf0c upstream. As discussed on the debian-hppa list, double-word compare and exchange operations fail on 32-bit kernels. Looking at the code, I realized that the ",ma" completer does the wrong thing in the "ldw,ma 4(%r26), %r29" instruction. This increments %r26 and causes the following store to write to the wrong location. Note by Helge Deller: The patch applies cleanly to stable kernel series if this upstream commit is merged in advance: f4125cfdb300 ("parisc: Avoid trashing sr2 and sr3 in LWS code"). Signed-off-by: John David Anglin <dave.anglin@bell.net> Tested-by: Christoph Biedl <debian.axhn@manchmal.in-ulm.de> Fixes: 89206491201c ("parisc: Implement new LWS CAS supporting 64 bit operations.") Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  s390/cputime: fix guest/irq/softirq times after CPU hotplug  (Christian Borntraeger, 1 file changed, -0/+3)
commit b7662eef14caf4f582d453d45395825b5a8f594c upstream.

On CPU hotplug some cpu stats contain bogus values:

$ cat /proc/stat
cpu  0 0 49 1280 0 0 0 3 0 0
cpu0 0 0 49 618 0 0 0 3 0 0
cpu1 0 0 0 662 0 0 0 0 0 0
[...]
$ echo 0 > /sys/devices/system/cpu/cpu1/online
$ echo 1 > /sys/devices/system/cpu/cpu1/online
$ cat /proc/stat
cpu  0 0 49 3200 0 450359962737 450359962737 3 0 0
cpu0 0 0 49 1956 0 0 0 3 0 0
cpu1 0 0 0 1244 0 450359962737 450359962737 0 0 0
[...]

pcpu_attach_task() needs the same assignments as vtime_task_switch.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: b7394a5f4ce9 ("sched/cputime, s390: Implement delayed accounting of system time")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  x86/apic: Silence "FW_BUG TSC_DEADLINE disabled due to Errata" on hypervisors  (Paolo Bonzini, 1 file changed, -1/+2)
commit cc6afe2240298049585e86b1ade85efc8a7f225d upstream. Commit 594a30fb1242 ("x86/apic: Silence "FW_BUG TSC_DEADLINE disabled due to Errata" on CPUs without the feature", 2017-08-30) was also about silencing the warning on VirtualBox; however, KVM does expose the TSC deadline timer, and it's virtualized so that it is immune from CPU errata. Therefore, booting 4.13 with "-cpu Haswell" shows this in the logs: [ 0.000000] [Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0xb2 (or later) Even if you had a hypervisor that does _not_ virtualize the TSC deadline and rather exposes the hardware one, it should be the hypervisors task to update microcode and possibly hide the flag from CPUID. So just hide the message when running on _any_ hypervisor, not just those that do not support the TSC deadline timer. The older check still makes sense, so keep it. Fixes: bd9240a18e ("x86/apic: Add TSC_DEADLINE quirk due to errata") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Hans de Goede <hdegoede@redhat.com> Cc: kvm@vger.kernel.org Link: https://lkml.kernel.org/r/1507630377-54471-1-git-send-email-pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  x86/apic: Silence "FW_BUG TSC_DEADLINE disabled due to Errata" on CPUs without the feature  (Hans de Goede, 1 file changed, -1/+5)
commit 594a30fb12424717a41c62323d2a8bf167dbccad upstream. When booting 4.13 on a VirtualBox VM on a Skylake host the following error shows up in the logs: [ 0.000000] [Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0xb2 (or later) This is caused by apic_check_deadline_errata() only checking CPU model and not the X86_FEATURE_TSC_DEADLINE_TIMER flag (which VirtualBox does NOT export to the guest), combined with VirtualBox not exporting the micro-code version to the guest. This commit adds a check for X86_FEATURE_TSC_DEADLINE_TIMER to apic_check_deadline_errata(), silencing this error on VirtualBox VMs. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Frank Mehnert <frank.mehnert@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Thayer <michael.thayer@oracle.com> Cc: Michal Necasek <michal.necasek@oracle.com> Cc: Peter Zijlstra <peterz@infradead.org> Fixes: bd9240a18e ("x86/apic: Add TSC_DEADLINE quirk due to errata") Link: http://lkml.kernel.org/r/20170830105811.27539-1-hdegoede@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
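Taken together, the two entries above leave apic_check_deadline_errata() bailing out early in both cases. A sketch of the combined effect (the model/microcode errata table it falls through to is elided):

    static void apic_check_deadline_errata(void)
    {
    	/* no TSC-deadline timer at all: nothing to warn about */
    	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
    		return;

    	/* under any hypervisor, microcode updates (and hiding the flag)
    	 * are the host's business, so stay silent as well */
    	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
    		return;

    	/* ... otherwise compare the microcode revision against the errata
    	 * table and print the FW_BUG message if it is too old */
    }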
2017-10-18  KVM: nVMX: update last_nonleaf_level when initializing nested EPT  (Ladi Prosek, 1 file changed, -0/+1)
commit fd19d3b45164466a4adce7cbff448ba9189e1427 upstream. The function updates context->root_level but didn't call update_last_nonleaf_level so the previous and potentially wrong value was used for page walks. For example, a zero value of last_nonleaf_level would allow a potential out-of-bounds access in arch/x86/mmu/paging_tmpl.h's walk_addr_generic function (CVE-2017-12188). Fixes: 155a97a3d7c78b46cef6f1a973c831bc5a4f82bb Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  x86/alternatives: Fix alt_max_short macro to really be a max()  (Mathias Krause, 2 files changed, -4/+6)
commit 6b32c126d33d5cb379bca280ab8acedc1ca978ff upstream. The alt_max_short() macro in asm/alternative.h does not work as intended, leading to nasty bugs. E.g. alt_max_short("1", "3") evaluates to 3, but alt_max_short("3", "1") evaluates to 1 -- not exactly the maximum of 1 and 3. In fact, I had to learn it the hard way by crashing my kernel in not so funny ways by attempting to make use of the ALTERNATIVE_2 macro with alternatives where the first one was larger than the second one. According to [1] and commit dbe4058a6a44 ("x86/alternatives: Fix ALTERNATIVE_2 padding generation properly") the right-hand side should read "-(-(a < b))" not "-(-(a - b))". Fix that, to make the macro work as intended. While at it, fix up the comments regarding the additional "-", too. It's not about gas' usage of s32 but brain dead logic of having a "true" value of -1 for the < operator ... *sigh* Btw., the one in asm/alternative-asm.h is correct. And, apparently, all current users of ALTERNATIVE_2() pass same sized alternatives, avoiding to hit the bug. [1] http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax Reviewed-and-tested-by: Borislav Petkov <bp@suse.de> Fixes: dbe4058a6a44 ("x86/alternatives: Fix ALTERNATIVE_2 padding generation properly") Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/1507228213-13095-1-git-send-email-minipli@googlemail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
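The underlying bit trick from [1], demonstrated in C where "<" yields 0 or 1; in gas "<" yields 0 or -1, which is why the kernel macro needs the extra leading "-". This is an illustration of the technique, not the kernel macro verbatim:

    static unsigned int max_sketch(unsigned int a, unsigned int b)
    {
    	/* mask is all-ones when a < b, zero otherwise */
    	unsigned int mask = -(unsigned int)(a < b);

    	return a ^ ((a ^ b) & mask);
    }

    /* max_sketch(1, 3): mask = ~0u, so 1 ^ ((1 ^ 3) & ~0u) = 1 ^ 2 = 3
     * max_sketch(3, 1): mask = 0,   so 3 ^ 0              = 3        */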
2017-10-18  x86/microcode: Do the family check first  (Borislav Petkov, 1 file changed, -9/+18)
commit 1f161f67a272cc4f29f27934dd3f74cb657eb5c4 upstream. On CPUs like AMD's Geode, for example, we shouldn't even try to load microcode because they do not support the modern microcode loading interface. However, we do the family check *after* the other checks whether the loader has been disabled on the command line or whether we're running in a guest. So move the family checks first in order to exit early if we're being loaded on an unsupported family. Reported-and-tested-by: Sven Glodowski <glodi1@arcor.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://bugzilla.suse.com/show_bug.cgi?id=1061396 Link: http://lkml.kernel.org/r/20171012112316.977-1-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit  (Haozhong Zhang, 1 file changed, -1/+1)
commit 8eb3f87d903168bdbd1222776a6b1e281f50513e upstream. When KVM emulates an exit from L2 to L1, it loads L1 CR4 into the guest CR4. Before this CR4 loading, the guest CR4 refers to L2 CR4. Because these two CR4's are in different levels of guest, we should vmx_set_cr4() rather than kvm_set_cr4() here. The latter, which is used to handle guest writes to its CR4, checks the guest change to CR4 and may fail if the change is invalid. The failure may cause trouble. Consider we start a L1 guest with non-zero L1 PCID in use, (i.e. L1 CR4.PCIDE == 1 && L1 CR3.PCID != 0) and a L2 guest with L2 PCID disabled, (i.e. L2 CR4.PCIDE == 0) and following events may happen: 1. If kvm_set_cr4() is used in load_vmcs12_host_state() to load L1 CR4 into guest CR4 (in VMCS01) for L2 to L1 exit, it will fail because of PCID check. As a result, the guest CR4 recorded in L0 KVM (i.e. vcpu->arch.cr4) is left to the value of L2 CR4. 2. Later, if L1 attempts to change its CR4, e.g., clearing VMXE bit, kvm_set_cr4() in L0 KVM will think L1 also wants to enable PCID, because the wrong L2 CR4 is used by L0 KVM as L1 CR4. As L1 CR3.PCID != 0, L0 KVM will inject GP to L1 guest. Fixes: 4704d0befb072 ("KVM: nVMX: Exiting from L2 to L1") Cc: qemu-stable@nongnu.org Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
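A heavily abbreviated sketch of the relevant part of load_vmcs12_host_state(); only the CR4 load is shown and the wrapper name is illustrative:

    static void load_vmcs12_host_cr4_sketch(struct kvm_vcpu *vcpu,
    					    struct vmcs12 *vmcs12)
    {
    	/* kvm_set_cr4() would treat this as a guest write and validate it
    	 * against the stale L2 CR4 (e.g. the PCIDE/PCID interaction above),
    	 * possibly failing; the exit-time load of L1's CR4 is architectural,
    	 * so use the VMX-level setter instead. */
    	vmx_set_cr4(vcpu, vmcs12->host_cr4);
    }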
2017-10-18  KVM: MMU: always terminate page walks at level 1  (Ladi Prosek, 2 files changed, -8/+9)
commit 829ee279aed43faa5cb1e4d65c0cad52f2426c53 upstream. is_last_gpte() is not equivalent to the pseudo-code given in commit 6bb69c9b69c31 ("KVM: MMU: simplify last_pte_bitmap") because an incorrect value of last_nonleaf_level may override the result even if level == 1. It is critical for is_last_gpte() to return true on level == 1 to terminate page walks. Otherwise memory corruption may occur as level is used as an index to various data structures throughout the page walking code. Even though the actual bug would be wherever the MMU is initialized (as in the previous patch), be defensive and ensure here that is_last_gpte() returns the correct value. This patch is also enough to fix CVE-2017-12188. Fixes: 6bb69c9b69c315200ddc2bc79aee14c0184cf5b2 Cc: Andy Honig <ahonig@google.com> Signed-off-by: Ladi Prosek <lprosek@redhat.com> [Panic if walk_addr_generic gets an incorrect level; this is a serious bug and it's not worth a WARN_ON where the recovery path might hide further exploitable issues; suggested by Andrew Honig. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  MIPS: bpf: Fix uninitialised target compiler error  (Matt Redfearn, 1 file changed, -1/+1)
commit 94c3390ab84a6b449accc7351ffda4a0c17bdb92 upstream. Compiling ebpf_jit.c with gcc 4.9 results in a (likely spurious) compiler warning, as gcc has detected that the variable "target" may be used uninitialised. Since -Werror is active, this is treated as an error and causes a kernel build failure whenever CONFIG_MIPS_EBPF_JIT is enabled. arch/mips/net/ebpf_jit.c: In function 'build_one_insn': arch/mips/net/ebpf_jit.c:1118:80: error: 'target' may be used uninitialized in this function [-Werror=maybe-uninitialized] emit_instr(ctx, j, target); ^ cc1: all warnings being treated as errors Fix this by initialising "target" to 0. If it really is used uninitialised this would result in a jump to 0 and a detectable run time failure. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.") Cc: James Hogan <james.hogan@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: David S. Miller <davem@davemloft.net> Cc: Colin Ian King <colin.king@canonical.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17375/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  MIPS: math-emu: Remove pr_err() calls from fpu_emu()  (Paul Burton, 1 file changed, -2/+0)
commit ca8eb05b5f332a9e1ab3e2ece498d49f4d683470 upstream. The FPU emulator includes 2 calls to pr_err() which are triggered by invalid instruction encodings for MIPSr6 cmp.cond.fmt instructions. These cases are not kernel errors, merely invalid instructions which are already handled by delivering a SIGILL which will provide notification that something failed in cases where that makes sense. In cases where that SIGILL is somewhat expected & being handled, for example when crashme happens to generate one of the affected bad encodings, the message is printed with no useful context about what triggered it & spams the kernel log for no good reason. Remove the pr_err() calls to make crashme run silently & treat the bad encodings the same way we do others, with a SIGILL & no further kernel log output. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: f8c3c6717a71 ("MIPS: math-emu: Add support for the CMP.condn.fmt R6 instruction") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17253/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  kvm/x86: Avoid async PF preempting the kernel incorrectly  (Boqun Feng, 3 files changed, -7/+13)
commit a2b7861bb33b2538420bb5d8554153484d3f961f upstream. Currently, in PREEMPT_COUNT=n kernel, kvm_async_pf_task_wait() could call schedule() to reschedule in some cases. This could result in accidentally ending the current RCU read-side critical section early, causing random memory corruption in the guest, or otherwise preempting the currently running task inside between preempt_disable and preempt_enable. The difficulty to handle this well is because we don't know whether an async PF delivered in a preemptible section or RCU read-side critical section for PREEMPT_COUNT=n, since preempt_disable()/enable() and rcu_read_lock/unlock() are both no-ops in that case. To cure this, we treat any async PF interrupting a kernel context as one that cannot be preempted, preventing kvm_async_pf_task_wait() from choosing the schedule() path in that case. To do so, a second parameter for kvm_async_pf_task_wait() is introduced, so that we know whether it's called from a context interrupting the kernel, and the parameter is set properly in all the callsites. Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
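A sketch of the updated calling convention at one of the call sites (the surrounding switch in do_async_page_fault() is omitted and the wrapper name is illustrative):

    static void handle_async_pf_not_present_sketch(struct pt_regs *regs)
    {
    	/* the second argument says whether the async PF interrupted kernel
    	 * code; if it did, kvm_async_pf_task_wait() must poll rather than
    	 * call schedule(), even on PREEMPT_COUNT=n kernels */
    	kvm_async_pf_task_wait((u32)read_cr2(), !user_mode(regs));
    }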
2017-10-12  KVM: PPC: Book3S: Fix server always zero from kvmppc_xive_get_xive()  (Sam Bobroff, 2 files changed, -4/+2)
commit 2fb1e946450a4fef74bb72f360555f7760d816f0 upstream. In KVM's XICS-on-XIVE emulation, kvmppc_xive_get_xive() returns the value of state->guest_server as "server". However, this value is not set by its counterpart kvmppc_xive_set_xive(). When the guest uses this interface to migrate interrupts away from a CPU that is going offline, it sees all interrupts as belonging to CPU 0, so they are left assigned to (now) offline CPUs. This patch removes the guest_server field from the state, and returns act_server in its place (that is, the CPU actually handling the interrupt, which may differ from the one requested). Fixes: 5af50993850a ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller") Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  arm64: Ensure the instruction emulation is ready for userspace  (Suzuki K Poulose, 2 files changed, -2/+2)
commit c0d8832e78cbfd4a64b7112e34920af4b0b0e60e upstream. We trap and emulate some instructions (e.g, mrs, deprecated instructions) for the userspace. However the handlers for these are registered as late_initcalls and the userspace could be up and running from the initramfs by that time (with populate_rootfs, which is a rootfs_initcall()). This could cause problems for the early applications ending up in failure like : [ 11.152061] modprobe[93]: undefined instruction: pc=0000ffff8ca48ff4 This patch promotes the specific calls to core_initcalls, which are guaranteed to be completed before we hit userspace. Cc: Dave Martin <dave.martin@arm.com> Cc: Matthias Brugger <mbrugger@suse.com> Cc: James Morse <james.morse@arm.com> Reported-by: Matwey V. Kornilov <matwey.kornilov@gmail.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>