path: root/arch
2019-08-16  KVM: Fix leak vCPU's VMCS value into other pCPU  (Wanpeng Li, 5 files, -0/+34)
commit 17e433b54393a6269acbcb792da97791fe1592d8 upstream.

After commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts), a five-year-old bug is exposed. Running the ebizzy benchmark in three 80-vCPU VMs on one 80-pCPU Skylake server produces a lot of rcu_sched stall warnings splatting in the VMs after stress testing:

 INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
 Call Trace:
   flush_tlb_mm_range+0x68/0x140
   tlb_flush_mmu.part.75+0x37/0xe0
   tlb_finish_mmu+0x55/0x60
   zap_page_range+0x142/0x190
   SyS_madvise+0x3cd/0x9c0
   system_call_fastpath+0x1c/0x21

swait_active() remains true before finish_swait() is called in kvm_vcpu_block(), and voluntarily preempted vCPUs are taken into account by the kvm_vcpu_on_spin() loop, which greatly increases the probability that the condition kvm_arch_vcpu_runnable(vcpu) is checked and found true. When APICv is enabled, the yield-candidate vCPU's VMCS RVI field then leaks (via vmx_sync_pir_to_irr()) into the spinning-on-a-taken-lock vCPU's current VMCS.

This patch fixes it by conservatively checking a subset of events.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Cc: stable@vger.kernel.org
Fixes: 98f4a1467 (KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop)
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-16  KVM/nSVM: properly map nested VMCB  (Vitaly Kuznetsov, 1 file, -2/+2)
commit 8f38302c0be2d2daf3b40f7d2142ec77e35d209e upstream. Commit 8c5fbf1a7231 ("KVM/nSVM: Use the new mapping API for mapping guest memory") broke nested SVM completely: kvm_vcpu_map()'s second parameter is GFN so vmcb_gpa needs to be converted with gpa_to_gfn(), not the other way around. Fixes: 8c5fbf1a7231 ("KVM/nSVM: Use the new mapping API for mapping guest memory") Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
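As a minimal sketch of what this fix describes (simplified, illustrative types; not the actual KVM patch), the point is that kvm_vcpu_map() expects a guest frame number, so the nested VMCB's guest-physical address has to be shifted down rather than converted in the opposite direction:

    #include <stdint.h>

    #define PAGE_SHIFT 12

    typedef uint64_t gpa_t;    /* guest-physical address */
    typedef uint64_t gfn_t;    /* guest frame number */

    /* Same arithmetic as KVM's gpa_to_gfn(): drop the page-offset bits. */
    static inline gfn_t gpa_to_gfn(gpa_t gpa)
    {
            return gpa >> PAGE_SHIFT;
    }

    /* i.e. the call becomes kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb_gpa), &map). */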
2019-08-16  s390/dma: provide proper ARCH_ZONE_DMA_BITS value  (Halil Pasic, 1 file, -0/+2)
[ Upstream commit 1a2dcff881059dedc14fafc8a442664c8dbd60f1 ] On s390 ZONE_DMA is up to 2G, i.e. ARCH_ZONE_DMA_BITS should be 31 bits. The current value is 24 and makes __dma_direct_alloc_pages() take a wrong turn first (but __dma_direct_alloc_pages() recovers then). Let's correct ARCH_ZONE_DMA_BITS value and avoid wrong turns. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reported-by: Petr Tesarik <ptesarik@suse.cz> Fixes: c61e9637340e ("dma-direct: add support for allocation from ZONE_DMA and ZONE_DMA32") Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
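For reference, a sketch of the arithmetic behind the new value (the constant itself is the whole change; the comment is editorial):

    /* s390's ZONE_DMA spans the low 2 GiB, and 2 GiB == 1UL << 31,
     * so DMA-zone addresses need 31 bits, not 24. */
    #define ARCH_ZONE_DMA_BITS      31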
2019-08-16  perf/x86: Apply more accurate check on hypervisor platform  (Zhenzhong Duan, 1 file, -2/+1)
[ Upstream commit 5ea3f6fb37b79da33ac9211df336fd2b9f47c39f ] check_msr is used to fix a bug report in guest where KVM doesn't support LBR MSR and cause #GP. The msr check is bypassed on real HW to workaround a false failure, see commit d0e1a507bdc7 ("perf/x86/intel: Disable check_msr for real HW") When running a guest with CONFIG_HYPERVISOR_GUEST not set or "nopv" enabled, current check isn't enough and #GP could trigger. Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Juergen Gross <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1564022366-18293-1-git-send-email-zhenzhong.duan@oracle.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  perf/x86/intel: Fix invalid Bit 13 for Icelake MSR_OFFCORE_RSP_x register  (Yunying Sun, 1 file, -2/+2)
[ Upstream commit 3b238a64c3009fed36eaea1af629d9377759d87d ] The Intel SDM states that bit 13 of Icelake's MSR_OFFCORE_RSP_x register is valid, and used for counting hardware generated prefetches of L3 cache. Update the bitmask to allow bit 13. Before: $ perf stat -e cpu/event=0xb7,umask=0x1,config1=0x1bfff/u sleep 3 Performance counter stats for 'sleep 3': <not supported> cpu/event=0xb7,umask=0x1,config1=0x1bfff/u After: $ perf stat -e cpu/event=0xb7,umask=0x1,config1=0x1bfff/u sleep 3 Performance counter stats for 'sleep 3': 9,293 cpu/event=0xb7,umask=0x1,config1=0x1bfff/u Signed-off-by: Yunying Sun <yunying.sun@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: alexander.shishkin@linux.intel.com Cc: bp@alien8.de Cc: hpa@zytor.com Cc: jolsa@redhat.com Cc: namhyung@kernel.org Link: https://lkml.kernel.org/r/20190724082932.12833-1-yunying.sun@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  perf/x86/intel: Fix SLOTS PEBS event constraint  (Kan Liang, 1 file, -1/+1)
[ Upstream commit 3d0c3953601d250175c7684ec0d9df612061dae5 ] Sampling SLOTS event and ref-cycles event in a group on Icelake gives EINVAL. SLOTS event is the event stands for the fixed counter 3, not fixed counter 2. Wrong mask was set to SLOTS event in intel_icl_pebs_event_constraints[]. Reported-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 6017608936c1 ("perf/x86/intel: Add Icelake support") Link: https://lkml.kernel.org/r/20190723200429.8180-1-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  ARM: dts: bcm: bcm47094: add missing #cells for mdio-bus-mux  (Arnd Bergmann, 1 file, -0/+3)
[ Upstream commit 3a9d2569e45cb02769cda26fee4a02126867c934 ] The mdio-bus-mux has no #address-cells/#size-cells property, which causes a few dtc warnings: arch/arm/boot/dts/bcm47094-linksys-panamera.dts:129.4-18: Warning (reg_format): /mdio-bus-mux/mdio@200:reg: property has invalid length (4 bytes) (#address-cells == 2, #size-cells == 1) arch/arm/boot/dts/bcm47094-linksys-panamera.dtb: Warning (pci_device_bus_num): Failed prerequisite 'reg_format' arch/arm/boot/dts/bcm47094-linksys-panamera.dtb: Warning (i2c_bus_reg): Failed prerequisite 'reg_format' arch/arm/boot/dts/bcm47094-linksys-panamera.dtb: Warning (spi_bus_reg): Failed prerequisite 'reg_format' arch/arm/boot/dts/bcm47094-linksys-panamera.dts:128.22-132.5: Warning (avoid_default_addr_size): /mdio-bus-mux/mdio@200: Relying on default #address-cells value arch/arm/boot/dts/bcm47094-linksys-panamera.dts:128.22-132.5: Warning (avoid_default_addr_size): /mdio-bus-mux/mdio@200: Relying on default #size-cells value Add the normal cell numbers. Link: https://lore.kernel.org/r/20190722145618.1155492-1-arnd@arndb.de Fixes: 2bebdfcdcd0f ("ARM: dts: BCM5301X: Add support for Linksys EA9500") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  ARM: davinci: fix sleep.S build error on ARMv4  (Arnd Bergmann, 1 file, -0/+1)
[ Upstream commit d64b212ea960db4276a1d8372bd98cb861dfcbb0 ] When building a multiplatform kernel that includes armv4 support, the default target CPU does not support the blx instruction, which leads to a build failure: arch/arm/mach-davinci/sleep.S: Assembler messages: arch/arm/mach-davinci/sleep.S:56: Error: selected processor does not support `blx ip' in ARM mode Add a .arch statement in the sources to make this file build. Link: https://lore.kernel.org/r/20190722145211.1154785-1-arnd@arndb.de Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  arm64: dts: imx8mq: fix SAI compatible  (Lucas Stach, 1 file, -2/+1)
[ Upstream commit 8d0148473dece51675d11dd59b8db5fe4b5d2e7e ] The i.MX8M SAI block is not compatible with the i.MX6SX one, as the register layout has changed due to two version registers being added at the beginning of the address map. Remove the bogus compatible. Fixes: 8c61538dc945 ("arm64: dts: imx8mq: Add SAI2 node") Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  arm64: dts: imx8mm: Correct SAI3 RXC/TXFS pin's mux option #1  (Anson Huang, 1 file, -2/+2)
[ Upstream commit 52d09014bb104a9157c0f5530700291052d2955c ] According to i.MX8MM reference manual Rev.1, 03/2019: SAI3_RXC pin's mux option #1 should be GPT1_CLK, NOT GPT1_CAPTURE2; SAI3_TXFS pin's mux option #1 should be GPT1_CAPTURE2, NOT GPT1_CLK. Fixes: c1c9d41319c3 ("dt-bindings: imx: Add pinctrl binding doc for imx8mm") Signed-off-by: Anson Huang <Anson.Huang@nxp.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  arm64: entry: SP Alignment Fault doesn't write to FAR_EL1  (James Morse, 1 file, -9/+13)
[ Upstream commit 40ca0ce56d4bb889dc43b455c55398468115569a ] Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP to FAR_EL1, but this is where we read from, and the address we provide to user-space with the BUS_ADRALN signal. For user-space this value will be UNKNOWN due to the previous ERET to user-space. If the last value is preserved, on systems with KASLR or KPTI this will be the user-space link-register left in FAR_EL1 by tramp_exit(). Fix this to retrieve the original sp_el0 value, and pass this to do_sp_pc_fault(). SP alignment faults from EL1 will cause us to take the fault again when trying to store the pt_regs. This eventually takes us to the overflow stack. Remove the ESR_ELx_EC_SP_ALIGN check as we will never make it this far. Fixes: 60ffc30d5652 ("arm64: Exception handling") Signed-off-by: James Morse <james.morse@arm.com> [will: change label name and fleshed out comment] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  arm64: Force SSBS on context switch  (Marc Zyngier, 2 files, -3/+40)
[ Upstream commit cbdf8a189a66001c36007bf0f5c975d0376c5c3a ] On a CPU that doesn't support SSBS, PSTATE[12] is RES0. In a system where only some of the CPUs implement SSBS, we end-up losing track of the SSBS bit across task migration. To address this issue, let's force the SSBS bit on context switch. Fixes: 8f04e8e6e29c ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3") Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> [will: inverted logic and added comments] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  powerpc/papr_scm: Force a scm-unbind if initial scm-bind fails  (Vaibhav Jain, 1 file, -1/+14)
[ Upstream commit 3a855b7ac7d5021674aa3e1cc9d3bfd6b604e9c0 ] In some cases initial bind of scm memory for an lpar can fail if previously it wasn't released using a scm-unbind hcall. This situation can arise due to panic of the previous kernel or forced lpar fadump. In such cases the H_SCM_BIND_MEM return a H_OVERLAP error. To mitigate such cases the patch updates papr_scm_probe() to force a call to drc_pmem_unbind() in case the initial bind of scm memory fails with EBUSY error. In case scm-bind operation again fails after the forced scm-unbind then we follow the existing error path. We also update drc_pmem_bind() to handle the H_OVERLAP error returned by phyp and indicate it as a EBUSY error back to the caller. Suggested-by: "Oliver O'Halloran" <oohall@gmail.com> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190629160610.23402-4-vaibhav@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
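A hedged sketch of the retry flow described above (the helper names are taken from the commit message; the wrapper function and error handling here are illustrative assumptions, not the actual hunk):

    #include <errno.h>

    struct papr_scm_priv;                           /* opaque here; sketch only */

    int drc_pmem_bind(struct papr_scm_priv *p);     /* helpers named in the message */
    void drc_pmem_unbind(struct papr_scm_priv *p);

    /* If the first bind fails with EBUSY (phyp returned H_OVERLAP because a
     * stale binding survived a panic or forced fadump), force an unbind and
     * retry the bind once. */
    int papr_scm_bind_with_retry(struct papr_scm_priv *p)
    {
            int rc = drc_pmem_bind(p);

            if (rc == -EBUSY) {
                    drc_pmem_unbind(p);
                    rc = drc_pmem_bind(p);
            }
            return rc;      /* still failing: take the existing error path */
    }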
2019-08-16  ARM: dts: imx6ul: fix clock frequency property name of I2C buses  (Sébastien Szymanski, 5 files, -6/+6)
[ Upstream commit 2ca99396333999b9b5c5b91b36cbccacfe571aaf ] A few boards set clock frequency of their I2C buses with "clock_frequency" property. The right property is "clock-frequency". Signed-off-by: Sébastien Szymanski <sebastien.szymanski@armadeus.com> Reviewed-by: Fabio Estevam <festevam@gmail.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-16  powerpc: fix off by one in max_zone_pfn initialization for ZONE_DMA  (Andrea Arcangeli, 1 file, -1/+1)
[ Upstream commit 03800e0526ee25ed7c843ca1e57b69ac2a5af642 ] 25078dc1f74be16b858e914f52cc8f4d03c2271a first introduced an off by one error in the ZONE_DMA initialization of PPC_BOOK3E_64=y and since 9739ab7eda459f0669ec9807e0d9be5020bab88c the off by one applies to PPC32=y too. This simply corrects the off by one and should resolve crashes like below: [ 65.179101] page 0x7fff outside node 0 zone DMA [ 0x0 - 0x7fff ] Unfortunately in various MM places "max" means a non inclusive end of range. free_area_init_nodes max_zone_pfn parameter is one case and MAX_ORDER is another one (unrelated) that comes by memory. Reported-by: Zorro Lang <zlang@redhat.com> Fixes: 25078dc1f74b ("powerpc: use mm zones more sensibly") Fixes: 9739ab7eda45 ("powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190625141727.2883-1-aarcange@redhat.com Signed-off-by: Sasha Levin <sashal@kernel.org>
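The off-by-one is easiest to see as standalone arithmetic; a small self-contained illustration (PAGE_SHIFT of 12 assumed; this is not the patch hunk itself):

    #include <stdio.h>

    #define PAGE_SHIFT              12
    #define ARCH_ZONE_DMA_BITS      31

    int main(void)
    {
            /* Old, off-by-one expression: the PFN of the *last* DMA page. */
            unsigned long last_pfn = ((1UL << ARCH_ZONE_DMA_BITS) - 1) >> PAGE_SHIFT;
            /* Correct expression: max_zone_pfn is an exclusive upper bound. */
            unsigned long end_pfn = 1UL << (ARCH_ZONE_DMA_BITS - PAGE_SHIFT);

            /* prints 0x7ffff vs 0x80000: the last page must land inside the zone */
            printf("last pfn: %#lx, exclusive end: %#lx\n", last_pfn, end_pfn);
            return 0;
    }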
2019-08-16  x86/purgatory: Use CFLAGS_REMOVE rather than reset KBUILD_CFLAGS  (Nick Desaulniers, 1 file, -5/+28)
commit b059f801a937d164e03b33c1848bb3dca67c0b04 upstream. KBUILD_CFLAGS is very carefully built up in the top level Makefile, particularly when cross compiling or using different build tools. Resetting KBUILD_CFLAGS via := assignment is an antipattern. The comment above the reset mentions that -pg is problematic. Other Makefiles use `CFLAGS_REMOVE_file.o = $(CC_FLAGS_FTRACE)` when CONFIG_FUNCTION_TRACER is set. Prefer that pattern to wiping out all of the important KBUILD_CFLAGS then manually having to re-add them. Seems also that __stack_chk_fail references are generated when using CONFIG_STACKPROTECTOR or CONFIG_STACKPROTECTOR_STRONG. Fixes: 8fc5b4d4121c ("purgatory: core purgatory functionality") Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com> Suggested-by: Peter Zijlstra <peterz@infradead.org> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190807221539.94583-2-ndesaulniers@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-16  x86/purgatory: Do not use __builtin_memcpy and __builtin_memset  (Nick Desaulniers, 4 files, -23/+17)
commit 4ce97317f41d38584fb93578e922fcd19e535f5b upstream. Implementing memcpy and memset in terms of __builtin_memcpy and __builtin_memset is problematic. GCC at -O2 will replace calls to the builtins with calls to memcpy and memset (but will generate an inline implementation at -Os). Clang will replace the builtins with these calls regardless of optimization level. $ llvm-objdump -dr arch/x86/purgatory/string.o | tail 0000000000000339 memcpy: 339: 48 b8 00 00 00 00 00 00 00 00 movabsq $0, %rax 000000000000033b: R_X86_64_64 memcpy 343: ff e0 jmpq *%rax 0000000000000345 memset: 345: 48 b8 00 00 00 00 00 00 00 00 movabsq $0, %rax 0000000000000347: R_X86_64_64 memset 34f: ff e0 Such code results in infinite recursion at runtime. This is observed when doing kexec. Instead, reuse an implementation from arch/x86/boot/compressed/string.c. This requires to implement a stub function for warn(). Also, Clang may lower memcmp's that compare against 0 to bcmp's, so add a small definition, too. See also: commit 5f074f3e192f ("lib/string.c: implement a basic bcmp") Fixes: 8fc5b4d4121c ("purgatory: core purgatory functionality") Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com> Debugged-by: Vaibhav Rustagi <vaibhavrustagi@google.com> Debugged-by: Manoj Gupta <manojgupta@google.com> Suggested-by: Alistair Delva <adelva@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com> Cc: stable@vger.kernel.org Link: https://bugs.chromium.org/p/chromium/issues/detail?id=984056 Link: https://lkml.kernel.org/r/20190807221539.94583-1-ndesaulniers@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
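In the same spirit as the fix, a minimal freestanding sketch of out-of-line string helpers plus the bcmp alias (simplified and illustrative; the real patch reuses the implementation from arch/x86/boot/compressed/string.c and adds a warn() stub):

    #include <stddef.h>

    /* Plain out-of-line definitions; in the purgatory these are built in a
     * freestanding environment so the builtin expansions resolve to real
     * symbols instead of recursing into themselves. */
    void *memcpy(void *dst, const void *src, size_t len)
    {
            char *d = dst;
            const char *s = src;

            while (len--)
                    *d++ = *s++;
            return dst;
    }

    void *memset(void *dst, int c, size_t len)
    {
            char *d = dst;

            while (len--)
                    *d++ = (char)c;
            return dst;
    }

    int memcmp(const void *a, const void *b, size_t len)
    {
            const unsigned char *pa = a, *pb = b;

            for (; len--; pa++, pb++)
                    if (*pa != *pb)
                            return *pa < *pb ? -1 : 1;
            return 0;
    }

    /* Clang may lower memcmp-against-zero into bcmp, so provide that too. */
    int bcmp(const void *a, const void *b, size_t len)
    {
            return memcmp(a, b, len);
    }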
2019-08-16  x86/mm: Sync also unmappings in vmalloc_sync_all()  (Joerg Roedel, 1 file, -8/+5)
commit 8e998fc24de47c55b47a887f6c95ab91acd4a720 upstream. With huge-page ioremap areas the unmappings also need to be synced between all page-tables. Otherwise it can cause data corruption when a region is unmapped and later re-used. Make the vmalloc_sync_one() function ready to sync unmappings and make sure vmalloc_sync_all() iterates over all page-tables even when an unmapped PMD is found. Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lkml.kernel.org/r/20190719184652.11391-3-joro@8bytes.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-16  x86/mm: Check for pfn instead of page in vmalloc_sync_one()  (Joerg Roedel, 1 file, -1/+1)
commit 51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0 upstream. Do not require a struct page for the mapped memory location because it might not exist. This can happen when an ioremapped region is mapped with 2MB pages. Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F') Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lkml.kernel.org/r/20190719184652.11391-2-joro@8bytes.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/speculation/swapgs: Exclude ATOMs from speculation through SWAPGS  (Thomas Gleixner, 3 files, -30/+33)
commit f36cf386e3fec258a341d446915862eded3e13d8 upstream Intel provided the following information: On all current Atom processors, instructions that use a segment register value (e.g. a load or store) will not speculatively execute before the last writer of that segment retires. Thus they will not use a speculatively written segment value. That means on ATOMs there is no speculation through SWAPGS, so the SWAPGS entry paths can be excluded from the extra LFENCE if PTI is disabled. Create a separate bug flag for the through SWAPGS speculation and mark all out-of-order ATOMs and AMD/HYGON CPUs as not affected. The in-order ATOMs are excluded from the whole mitigation mess anyway. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tyler Hicks <tyhicks@canonical.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/entry/64: Use JMP instead of JMPQ  (Josh Poimboeuf, 1 file, -1/+1)
commit 64dbc122b20f75183d8822618c24f85144a5a94d upstream Somehow the swapgs mitigation entry code patch ended up with a JMPQ instruction instead of JMP, where only the short jump is needed. Some assembler versions apparently fail to optimize JMPQ into a two-byte JMP when possible, instead always using a 7-byte JMP with relocation. For some reason that makes the entry code explode with a #GP during boot. Change it back to "JMP" as originally intended. Fixes: 18ec54fdd6d1 ("x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations") Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/speculation: Enable Spectre v1 swapgs mitigations  (Josh Poimboeuf, 1 file, -9/+106)
commit a2059825986a1c8143fd6698774fa9d83733bb11 upstream The previous commit added macro calls in the entry code which mitigate the Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are enabled. Enable those features where applicable. The mitigations may be disabled with "nospectre_v1" or "mitigations=off". There are different features which can affect the risk of attack: - When FSGSBASE is enabled, unprivileged users are able to place any value in GS, using the wrgsbase instruction. This means they can write a GS value which points to any value in kernel space, which can be useful with the following gadget in an interrupt/exception/NMI handler: if (coming from user space) swapgs mov %gs:<percpu_offset>, %reg1 // dependent load or store based on the value of %reg // for example: mov %(reg1), %reg2 If an interrupt is coming from user space, and the entry code speculatively skips the swapgs (due to user branch mistraining), it may speculatively execute the GS-based load and a subsequent dependent load or store, exposing the kernel data to an L1 side channel leak. Note that, on Intel, a similar attack exists in the above gadget when coming from kernel space, if the swapgs gets speculatively executed to switch back to the user GS. On AMD, this variant isn't possible because swapgs is serializing with respect to future GS-based accesses. NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case doesn't exist quite yet. - When FSGSBASE is disabled, the issue is mitigated somewhat because unprivileged users must use prctl(ARCH_SET_GS) to set GS, which restricts GS values to user space addresses only. That means the gadget would need an additional step, since the target kernel address needs to be read from user space first. Something like: if (coming from user space) swapgs mov %gs:<percpu_offset>, %reg1 mov (%reg1), %reg2 // dependent load or store based on the value of %reg2 // for example: mov %(reg2), %reg3 It's difficult to audit for this gadget in all the handlers, so while there are no known instances of it, it's entirely possible that it exists somewhere (or could be introduced in the future). Without tooling to analyze all such code paths, consider it vulnerable. Effects of SMAP on the !FSGSBASE case: - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not susceptible to Meltdown), the kernel is prevented from speculatively reading user space memory, even L1 cached values. This effectively disables the !FSGSBASE attack vector. - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP still prevents the kernel from speculatively reading user space memory. But it does *not* prevent the kernel from reading the user value from L1, if it has already been cached. This is probably only a small hurdle for an attacker to overcome. Thanks to Dave Hansen for contributing the speculative_smap() function. Thanks to Andrew Cooper for providing the inside scoop on whether swapgs is serializing on AMD. [ tglx: Fixed the USER fence decision and polished the comment as suggested by Dave Hansen ] Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations  (Josh Poimboeuf, 3 files, -3/+37)
commit 18ec54fdd6d18d92025af097cd042a75cf0ea24c upstream Spectre v1 isn't only about array bounds checks. It can affect any conditional checks. The kernel entry code interrupt, exception, and NMI handlers all have conditional swapgs checks. Those may be problematic in the context of Spectre v1, as kernel code can speculatively run with a user GS. For example: if (coming from user space) swapgs mov %gs:<percpu_offset>, %reg mov (%reg), %reg1 When coming from user space, the CPU can speculatively skip the swapgs, and then do a speculative percpu load using the user GS value. So the user can speculatively force a read of any kernel value. If a gadget exists which uses the percpu value as an address in another load/store, then the contents of the kernel value may become visible via an L1 side channel attack. A similar attack exists when coming from kernel space. The CPU can speculatively do the swapgs, causing the user GS to get used for the rest of the speculative window. The mitigation is similar to a traditional Spectre v1 mitigation, except: a) index masking isn't possible; because the index (percpu offset) isn't user-controlled; and b) an lfence is needed in both the "from user" swapgs path and the "from kernel" non-swapgs path (because of the two attacks described above). The user entry swapgs paths already have SWITCH_TO_KERNEL_CR3, which has a CR3 write when PTI is enabled. Since CR3 writes are serializing, the lfences can be skipped in those cases. On the other hand, the kernel entry swapgs paths don't depend on PTI. To avoid unnecessary lfences for the user entry case, create two separate features for alternative patching: X86_FEATURE_FENCE_SWAPGS_USER X86_FEATURE_FENCE_SWAPGS_KERNEL Use these features in entry code to patch in lfences where needed. The features aren't enabled yet, so there's no functional change. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/cpufeatures: Combine word 11 and 12 into a new scattered features word  (Fenghua Yu, 6 files, -34/+34)
commit acec0ce081de0c36459eea91647faf99296445a3 upstream It's a waste for the four X86_FEATURE_CQM_* feature bits to occupy two whole feature bits words. To better utilize feature words, re-define word 11 to host scattered features and move the four X86_FEATURE_CQM_* features into Linux defined word 11. More scattered features can be added in word 11 in the future. Rename leaf 11 in cpuid_leafs to CPUID_LNX_4 to reflect it's a Linux-defined leaf. Rename leaf 12 as CPUID_DUMMY which will be replaced by a meaningful name in the next patch when CPUID.7.1:EAX occupies world 12. Maximum number of RMID and cache occupancy scale are retrieved from CPUID.0xf.1 after scattered CQM features are enumerated. Carve out the code into a separate function. KVM doesn't support resctrl now. So it's safe to move the X86_FEATURE_CQM_* features to scattered features word 11 for KVM. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Aaron Lewis <aaronlewis@google.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Babu Moger <babu.moger@amd.com> Cc: "Chang S. Bae" <chang.seok.bae@intel.com> Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Juergen Gross <jgross@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: kvm ML <kvm@vger.kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Nadav Amit <namit@vmware.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Pavel Tatashin <pasha.tatashin@oracle.com> Cc: Peter Feiner <pfeiner@google.com> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Cc: Ravi V Shankar <ravi.v.shankar@intel.com> Cc: Sherry Hurwitz <sherry.hurwitz@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Lendacky <Thomas.Lendacky@amd.com> Cc: x86 <x86@kernel.org> Link: https://lkml.kernel.org/r/1560794416-217638-2-git-send-email-fenghua.yu@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86/cpufeatures: Carve out CQM features retrieval  (Borislav Petkov, 1 file, -27/+33)
commit 45fc56e629caa451467e7664fbd4c797c434a6c4 upstream ... into a separate function for better readability. Split out from a patch from Fenghua Yu <fenghua.yu@intel.com> to keep the mechanical, sole code movement separate for easy review. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: x86@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  arm64: cpufeature: Fix feature comparison for CTR_EL0.{CWG,ERG}  (Will Deacon, 2 files, -5/+10)
commit 147b9635e6347104b91f48ca9dca61eb0fbf2a54 upstream. If CTR_EL0.{CWG,ERG} are 0b0000 then they must be interpreted to have their architecturally maximum values, which defeats the use of FTR_HIGHER_SAFE when sanitising CPU ID registers on heterogeneous machines. Introduce FTR_HIGHER_OR_ZERO_SAFE so that these fields effectively saturate at zero. Fixes: 3c739b571084 ("arm64: Keep track of CPU feature registers") Cc: <stable@vger.kernel.org> # 4.4.x- Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  arm64: compat: Allow single-byte watchpoints on all addresses  (Will Deacon, 1 file, -3/+4)
commit 849adec41203ac5837c40c2d7e08490ffdef3c2c upstream. Commit d968d2b801d8 ("ARM: 7497/1: hw_breakpoint: allow single-byte watchpoints on all addresses") changed the validation requirements for hardware watchpoints on arch/arm/. Update our compat layer to implement the same relaxation. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  parisc: Fix build of compressed kernel even with debug enabled  (Helge Deller, 1 file, -2/+2)
commit 3fe6c873af2f2247544debdbe51ec29f690a2ccf upstream. With debug info enabled (CONFIG_DEBUG_INFO=y) the resulting vmlinux may get that huge that we need to increase the start addresss for the decompression text section otherwise one will face a linker error. Reported-by: Sven Schnelle <svens@stackframe.org> Tested-by: Sven Schnelle <svens@stackframe.org> Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  parisc: Strip debug info from kernel before creating compressed vmlinuz  (Helge Deller, 1 file, -1/+2)
commit e50beea8e7738377b4fa664078547be338038ff9 upstream. Same as on x86-64, strip the .comment, .note and debug sections from the Linux kernel before creating the compressed image for the boot loader. Reported-by: James Bottomley <James.Bottomley@HansenPartnership.com> Reported-by: Sven Schnelle <svens@stackframe.org> Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  parisc: Add archclean Makefile target  (James Bottomley, 2 files, -0/+4)
commit f2c5ed0dd5004c2cff5c0e3d430a107576fcc17f upstream. Apparently we don't have an archclean target in our arch/parisc/Makefile, so files in there never get cleaned out by make mrproper. This, in turn means that the sizes.h file in arch/parisc/boot/compressed never gets removed and worse, when you transition to an O=build/parisc[64] build model it overrides the generated file. The upshot being my bzImage was building with a SZ_end that was too small. I fixed it by making mrproper clean everything. Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  powerpc/kasan: fix early boot failure on PPC32  (Christophe Leroy, 1 file, -2/+5)
commit d7e23b887f67178c4f840781be7a6aa6aeb52ab1 upstream. Due to commit 4a6d8cf90017 ("powerpc/mm: don't use pte_alloc_kernel() until slab is available on PPC32"), pte_alloc_kernel() cannot be used during early KASAN init. Fix it by using memblock_alloc() instead. Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support") Cc: stable@vger.kernel.org # v5.2+ Reported-by: Erhard F. <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/da89670093651437f27d2975224712e0a130b055.1564552796.git.christophe.leroy@c-s.fr Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  x86, boot: Remove multiple copy of static function sanitize_boot_params()  (Zhenzhong Duan, 2 files, -1/+1)
[ Upstream commit 8c5477e8046ca139bac250386c08453da37ec1ae ]

The kernel build warns: 'sanitize_boot_params' defined but not used [-Wunused-function] in the following files:

 arch/x86/boot/compressed/cmdline.c
 arch/x86/boot/compressed/error.c
 arch/x86/boot/compressed/early_serial_console.c
 arch/x86/boot/compressed/acpi.c

That's because they each include misc.h, which pulls in a definition of sanitize_boot_params() via bootparam_utils.h. Remove the inclusion from misc.h and have the C files that need it include bootparam_utils.h directly.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1563283092-1189-1-git-send-email-zhenzhong.duan@oracle.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  x86/paravirt: Fix callee-saved function ELF sizes  (Josh Poimboeuf, 2 files, -0/+2)
[ Upstream commit 083db6764821996526970e42d09c1ab2f4155dd4 ] The __raw_callee_save_*() functions have an ELF symbol size of zero, which confuses objtool and other tools. Fixes a bunch of warnings like the following: arch/x86/xen/mmu_pv.o: warning: objtool: __raw_callee_save_xen_pte_val() is missing an ELF size annotation arch/x86/xen/mmu_pv.o: warning: objtool: __raw_callee_save_xen_pgd_val() is missing an ELF size annotation arch/x86/xen/mmu_pv.o: warning: objtool: __raw_callee_save_xen_make_pte() is missing an ELF size annotation arch/x86/xen/mmu_pv.o: warning: objtool: __raw_callee_save_xen_make_pgd() is missing an ELF size annotation Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/afa6d49bb07497ca62e4fc3b27a2d0cece545b4e.1563413318.git.jpoimboe@redhat.com Signed-off-by: Sasha Levin <sashal@kernel.org>
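A self-contained illustration of the underlying problem on x86-64: a function emitted from inline asm has an ELF size of zero unless it carries an explicit .size directive (the symbol name below is made up; the actual patch adds the directive to the asm that generates the __raw_callee_save_*() thunks):

    /* Build and inspect with: gcc demo.c && readelf -s a.out | grep demo_thunk */
    asm(".pushsection .text\n"
        ".globl demo_thunk\n"
        ".type demo_thunk, @function\n"
        "demo_thunk:\n"
        "       ret\n"
        ".size demo_thunk, . - demo_thunk\n"    /* without this, the size stays 0 */
        ".popsection\n");

    extern void demo_thunk(void);

    int main(void)
    {
            demo_thunk();
            return 0;
    }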
2019-08-06  x86/kvm: Don't call kvm_spurious_fault() from .fixup  (Josh Poimboeuf, 1 file, -15/+19)
[ Upstream commit 3901336ed9887b075531bffaeef7742ba614058b ] After making a change to improve objtool's sibling call detection, it started showing the following warning: arch/x86/kvm/vmx/nested.o: warning: objtool: .fixup+0x15: sibling call from callable instruction with modified stack frame The problem is the ____kvm_handle_fault_on_reboot() macro. It does a fake call by pushing a fake RIP and doing a jump. That tricks the unwinder into printing the function which triggered the exception, rather than the .fixup code. Instead of the hack to make it look like the original function made the call, just change the macro so that the original function actually does make the call. This allows removal of the hack, and also makes objtool happy. I triggered a vmx instruction exception and verified that the stack trace is still sane: kernel BUG at arch/x86/kvm/x86.c:358! invalid opcode: 0000 [#1] SMP PTI CPU: 28 PID: 4096 Comm: qemu-kvm Not tainted 5.2.0+ #16 Hardware name: Lenovo THINKSYSTEM SD530 -[7X2106Z000]-/-[7X2106Z000]-, BIOS -[TEE113Z-1.00]- 07/17/2017 RIP: 0010:kvm_spurious_fault+0x5/0x10 Code: 00 00 00 00 00 8b 44 24 10 89 d2 45 89 c9 48 89 44 24 10 8b 44 24 08 48 89 44 24 08 e9 d4 40 22 00 0f 1f 40 00 0f 1f 44 00 00 <0f> 0b 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 55 49 89 fd 41 RSP: 0018:ffffbf91c683bd00 EFLAGS: 00010246 RAX: 000061f040000000 RBX: ffff9e159c77bba0 RCX: ffff9e15a5c87000 RDX: 0000000665c87000 RSI: ffff9e15a5c87000 RDI: ffff9e159c77bba0 RBP: 0000000000000000 R08: 0000000000000000 R09: ffff9e15a5c87000 R10: 0000000000000000 R11: fffff8f2d99721c0 R12: ffff9e159c77bba0 R13: ffffbf91c671d960 R14: ffff9e159c778000 R15: 0000000000000000 FS: 00007fa341cbe700(0000) GS:ffff9e15b7400000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fdd38356804 CR3: 00000006759de003 CR4: 00000000007606e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: loaded_vmcs_init+0x4f/0xe0 alloc_loaded_vmcs+0x38/0xd0 vmx_create_vcpu+0xf7/0x600 kvm_vm_ioctl+0x5e9/0x980 ? __switch_to_asm+0x40/0x70 ? __switch_to_asm+0x34/0x70 ? __switch_to_asm+0x40/0x70 ? __switch_to_asm+0x34/0x70 ? free_one_page+0x13f/0x4e0 do_vfs_ioctl+0xa4/0x630 ksys_ioctl+0x60/0x90 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x55/0x1c0 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7fa349b1ee5b Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  xen/pv: Fix a boot up hang revealed by int3 self test  (Zhenzhong Duan, 4 files, -4/+2)
[ Upstream commit b23e5844dfe78a80ba672793187d3f52e4b528d7 ] Commit 7457c0da024b ("x86/alternatives: Add int3_emulate_call() selftest") is used to ensure there is a gap setup in int3 exception stack which could be used for inserting call return address. This gap is missed in XEN PV int3 exception entry path, then below panic triggered: [ 0.772876] general protection fault: 0000 [#1] SMP NOPTI [ 0.772886] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0+ #11 [ 0.772893] RIP: e030:int3_magic+0x0/0x7 [ 0.772905] RSP: 3507:ffffffff82203e98 EFLAGS: 00000246 [ 0.773334] Call Trace: [ 0.773334] alternative_instructions+0x3d/0x12e [ 0.773334] check_bugs+0x7c9/0x887 [ 0.773334] ? __get_locked_pte+0x178/0x1f0 [ 0.773334] start_kernel+0x4ff/0x535 [ 0.773334] ? set_init_arg+0x55/0x55 [ 0.773334] xen_start_kernel+0x571/0x57a For 64bit PV guests, Xen's ABI enters the kernel with using SYSRET, with %rcx/%r11 on the stack. To convert back to "normal" looking exceptions, the xen thunks do 'xen_*: pop %rcx; pop %r11; jmp *'. E.g. Extracting 'xen_pv_trap xenint3' we have: xen_xenint3: pop %rcx; pop %r11; jmp xenint3 As xenint3 and int3 entry code are same except xenint3 doesn't generate a gap, we can fix it by using int3 and drop useless xenint3. Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  nds32: fix asm/syscall.h  (Dmitry V. Levin, 1 file, -10/+17)
[ Upstream commit 33644b95eb342201511fc951d8fcd10362bd435b ] PTRACE_GET_SYSCALL_INFO is a generic ptrace API that lets ptracer obtain details of the syscall the tracee is blocked in. There are two reasons for a special syscall-related ptrace request. Firstly, with the current ptrace API there are cases when ptracer cannot retrieve necessary information about syscalls. Some examples include: * The notorious int-0x80-from-64-bit-task issue. See [1] for details. In short, if a 64-bit task performs a syscall through int 0x80, its tracer has no reliable means to find out that the syscall was, in fact, a compat syscall, and misidentifies it. * Syscall-enter-stop and syscall-exit-stop look the same for the tracer. Common practice is to keep track of the sequence of ptrace-stops in order not to mix the two syscall-stops up. But it is not as simple as it looks; for example, strace had a (just recently fixed) long-standing bug where attaching strace to a tracee that is performing the execve system call led to the tracer identifying the following syscall-exit-stop as syscall-enter-stop, which messed up all the state tracking. * Since the introduction of commit 84d77d3f06e7 ("ptrace: Don't allow accessing an undumpable mm"), both PTRACE_PEEKDATA and process_vm_readv become unavailable when the process dumpable flag is cleared. On such architectures as ia64 this results in all syscall arguments being unavailable for the tracer. Secondly, ptracers also have to support a lot of arch-specific code for obtaining information about the tracee. For some architectures, this requires a ptrace(PTRACE_PEEKUSER, ...) invocation for every syscall argument and return value. PTRACE_GET_SYSCALL_INFO returns the following structure: struct ptrace_syscall_info { __u8 op; /* PTRACE_SYSCALL_INFO_* */ __u32 arch __attribute__((__aligned__(sizeof(__u32)))); __u64 instruction_pointer; __u64 stack_pointer; union { struct { __u64 nr; __u64 args[6]; } entry; struct { __s64 rval; __u8 is_error; } exit; struct { __u64 nr; __u64 args[6]; __u32 ret_data; } seccomp; }; }; The structure was chosen according to [2], except for the following changes: * seccomp substructure was added as a superset of entry substructure * the type of nr field was changed from int to __u64 because syscall numbers are, as a practical matter, 64 bits * stack_pointer field was added along with instruction_pointer field since it is readily available and can save the tracer from extra PTRACE_GETREGS/PTRACE_GETREGSET calls * arch is always initialized to aid with tracing system calls such as execve() * instruction_pointer and stack_pointer are always initialized so they could be easily obtained for non-syscall stops * a boolean is_error field was added along with rval field, this way the tracer can more reliably distinguish a return value from an error value strace has been ported to PTRACE_GET_SYSCALL_INFO. Starting with release 4.26, strace uses PTRACE_GET_SYSCALL_INFO API as the preferred mechanism of obtaining syscall information. [1] https://lore.kernel.org/lkml/CA+55aFzcSVmdDj9Lh_gdbz1OzHyEm6ZrGPBDAJnywm2LF_eVyg@mail.gmail.com/ [2] https://lore.kernel.org/lkml/CAObL_7GM0n80N7J_DFw_eQyfLyzq+sf4y2AvsCCV88Tb3AwEHA@mail.gmail.com/ This patch (of 7): All syscall_get_*() and syscall_set_*() functions must be defined as static inline as on all other architectures, otherwise asm/syscall.h cannot be included in more than one compilation unit. This bug has to be fixed in order to extend the generic ptrace API with PTRACE_GET_SYSCALL_INFO request. 
Link: http://lkml.kernel.org/r/20190510152749.GA28558@altlinux.org Fixes: 1932fbe36e02 ("nds32: System calls handling") Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Reported-by: kbuild test robot <lkp@intel.com> Acked-by: Greentime Hu <greentime@andestech.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Elvira Khabirova <lineprinter@altlinux.org> Cc: Eugene Syromyatnikov <esyr@redhat.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Helge Deller <deller@gmx.de> [parisc] Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: James Hogan <jhogan@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Burton <paul.burton@mips.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  x86: math-emu: Hide clang warnings for 16-bit overflow  (Arnd Bergmann, 2 files, -2/+2)
[ Upstream commit 29e7e9664aec17b94a9c8c5a75f8d216a206aa3a ] clang warns about a few parts of the math-emu implementation where a 16-bit integer becomes negative during assignment: arch/x86/math-emu/poly_tan.c:88:35: error: implicit conversion from 'int' to 'short' changes value from 49216 to -16320 [-Werror,-Wconstant-conversion] (0x41 + EXTENDED_Ebias) | SIGN_Negative); ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~ arch/x86/math-emu/fpu_emu.h:180:58: note: expanded from macro 'setexponent16' #define setexponent16(x,y) { (*(short *)&((x)->exp)) = (y); } ~ ^ arch/x86/math-emu/reg_constant.c:37:32: error: implicit conversion from 'int' to 'short' changes value from 49085 to -16451 [-Werror,-Wconstant-conversion] FPU_REG const CONST_PI2extra = MAKE_REG(NEG, -66, ^~~~~~~~~~~~~~~~~~ arch/x86/math-emu/reg_constant.c:21:25: note: expanded from macro 'MAKE_REG' ((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) } ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ arch/x86/math-emu/reg_constant.c:48:28: error: implicit conversion from 'int' to 'short' changes value from 65535 to -1 [-Werror,-Wconstant-conversion] FPU_REG const CONST_QNaN = MAKE_REG(NEG, EXP_OVER, 0x00000000, 0xC0000000); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/x86/math-emu/reg_constant.c:21:25: note: expanded from macro 'MAKE_REG' ((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) } ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ The code is correct as is, so add a typecast to shut up the warnings. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190712090816.350668-1-arnd@arndb.de Signed-off-by: Sasha Levin <sashal@kernel.org>
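The first warning can be reproduced and silenced in isolation; a small self-contained example (the variable is a stand-in for the exp field, not the math-emu macro itself):

    #include <stdio.h>

    int main(void)
    {
            /* (0x41 + EXTENDED_Ebias) | SIGN_Negative is 0x4040 | 0x8000 = 49216,
             * which does not fit in a signed 16-bit value, so clang warns about
             * the implicit int -> short conversion. */
            int raw = 49216;
            short exp_field;

            /* An explicit cast keeps the intended truncation and mutes the warning. */
            exp_field = (unsigned short)raw;

            printf("%d\n", exp_field);      /* -16320, as quoted in the warning */
            return 0;
    }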
2019-08-06  x86/apic: Silence -Wtype-limits compiler warnings  (Qian Cai, 2 files, -2/+2)
[ Upstream commit ec6335586953b0df32f83ef696002063090c7aef ] There are many compiler warnings like this, In file included from ./arch/x86/include/asm/smp.h:13, from ./arch/x86/include/asm/mmzone_64.h:11, from ./arch/x86/include/asm/mmzone.h:5, from ./include/linux/mmzone.h:969, from ./include/linux/gfp.h:6, from ./include/linux/mm.h:10, from arch/x86/kernel/apic/io_apic.c:34: arch/x86/kernel/apic/io_apic.c: In function 'check_timer': ./arch/x86/include/asm/apic.h:37:11: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits] if ((v) <= apic_verbosity) \ ^~ arch/x86/kernel/apic/io_apic.c:2160:2: note: in expansion of macro 'apic_printk' apic_printk(APIC_QUIET, KERN_INFO "..TIMER: vector=0x%02X " ^~~~~~~~~~~ ./arch/x86/include/asm/apic.h:37:11: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits] if ((v) <= apic_verbosity) \ ^~ arch/x86/kernel/apic/io_apic.c:2207:4: note: in expansion of macro 'apic_printk' apic_printk(APIC_QUIET, KERN_ERR "..MP-BIOS bug: " ^~~~~~~~~~~ APIC_QUIET is 0, so silence them by making apic_verbosity type int. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1562621805-24789-1-git-send-email-cai@lca.pw Signed-off-by: Sasha Levin <sashal@kernel.org>
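A compact self-contained sketch of why the signedness matters (the macro is a simplified stand-in for the one in arch/x86/include/asm/apic.h):

    #include <stdio.h>

    #define APIC_QUIET      0
    #define APIC_VERBOSE    1

    /* If this were "unsigned int", "APIC_QUIET <= apic_verbosity" would be a
     * tautology and -Wtype-limits would warn at every APIC_QUIET call site. */
    static int apic_verbosity = APIC_QUIET;

    #define apic_printk(v, fmt, ...)                        \
            do {                                            \
                    if ((v) <= apic_verbosity)              \
                            printf(fmt, ##__VA_ARGS__);     \
            } while (0)

    int main(void)
    {
            apic_printk(APIC_QUIET, "always printed\n");
            apic_printk(APIC_VERBOSE, "printed only when verbose\n");
            return 0;
    }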
2019-08-06  KVM: nVMX: Ignore segment base for VMX memory operand when segment not FS or GS  (Liran Alon, 1 file, -1/+4)
[ Upstream commit 6694e48012826351036fd10fc506ca880023e25f ] As reported by Maxime at https://bugzilla.kernel.org/show_bug.cgi?id=204175: In vmx/nested.c::get_vmx_mem_address(), when the guest runs in long mode, the base address of the memory operand is computed with a simple: *ret = s.base + off; This is incorrect, the base applies only to FS and GS, not to the others. Because of that, if the guest uses a VMX instruction based on DS and has a DS.base that is non-zero, KVM wrongfully adds the base to the resulting address. Reported-by: Maxime Villard <max@m00nbsd.net> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
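A hedged, self-contained sketch of the 64-bit effective-address rule the report refers to (not the KVM hunk itself; the function and names are illustrative):

    #include <stdint.h>

    enum seg_reg { SEG_ES, SEG_CS, SEG_SS, SEG_DS, SEG_FS, SEG_GS };

    /* In long mode only FS and GS have an architecturally used base; for all
     * other segments the operand address is just the computed offset. */
    uint64_t vmx_operand_addr(enum seg_reg seg, uint64_t seg_base, uint64_t off)
    {
            uint64_t addr = off;

            if (seg == SEG_FS || seg == SEG_GS)
                    addr += seg_base;
            return addr;
    }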
2019-08-06  x86: kvm: avoid constant-conversion warning  (Arnd Bergmann, 1 file, -3/+3)
[ Upstream commit a6a6d3b1f867d34ba5bd61aa7bb056b48ca67cff ]

clang finds a construct suspicious that converts an unsigned character to a signed integer and back, causing an overflow:

 arch/x86/kvm/mmu.c:4605:39: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -205 to 51 [-Werror,-Wconstant-conversion]
   u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0;
 arch/x86/kvm/mmu.c:4607:38: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -241 to 15 [-Werror,-Wconstant-conversion]
   u8 uf = (pfec & PFERR_USER_MASK) ? ~u : 0;
 arch/x86/kvm/mmu.c:4609:39: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -171 to 85 [-Werror,-Wconstant-conversion]
   u8 ff = (pfec & PFERR_FETCH_MASK) ? ~x : 0;

Add an explicit cast to tell clang that everything works as intended here.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://github.com/ClangBuiltLinux/linux/issues/95
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
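The diagnostic and its fix can be shown in isolation (self-contained illustration; in mmu.c the operands fold to constants, which is why clang flags the conversion there):

    #include <stdio.h>

    typedef unsigned char u8;

    #define PFERR_WRITE_MASK 0x02

    int main(void)
    {
            u8 w = 0xcc, pfec = PFERR_WRITE_MASK;

            /* "~w" is promoted to int (~0xcc == -205), so assigning the result
             * to a u8 is what -Wconstant-conversion complains about; the
             * explicit (u8) cast states that the truncation is intended. */
            u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;

            printf("wf = %#x\n", wf);       /* 0x33 (i.e. 51), as in the warning */
            return 0;
    }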
2019-08-06  arm64: dts: rockchip: Fix USB3 Type-C on rk3399-sapphire  (Vicente Bergas, 1 file, -3/+2)
[ Upstream commit e1d9149e8389f1690cdd4e4056766dd26488a0fe ] Before this patch, the Type-C port on the Sapphire board is dead. If setting the 'regulator-always-on' property to 'vcc5v0_typec0' then the port works for about 4 seconds at start-up. This is a sample trace with a memory stick plugged in: 1.- The memory stick LED lights on and kernel reports: [ 4.782999] scsi 0:0:0:0: Direct-Access USB DISK PMAP PQ: 0 ANSI: 4 [ 5.904580] sd 0:0:0:0: [sdb] 3913344 512-byte logical blocks: (2.00 GB/1.87 GiB) [ 5.906860] sd 0:0:0:0: [sdb] Write Protect is off [ 5.908973] sd 0:0:0:0: [sdb] Mode Sense: 23 00 00 00 [ 5.909122] sd 0:0:0:0: [sdb] No Caching mode page found [ 5.911214] sd 0:0:0:0: [sdb] Assuming drive cache: write through [ 5.951585] sdb: sdb1 [ 5.954816] sd 0:0:0:0: [sdb] Attached SCSI removable disk 2.- 4 seconds later the memory stick LED lights off and kernel reports: [ 9.082822] phy phy-ff770000.syscon:usb2-phy@e450.2: charger = USB_DCP_CHARGER 3.- After a minute the kernel reports: [ 71.666761] usb 5-1: USB disconnect, device number 2 It has been checked that, although the LED is off, VBUS is present. If, instead, the dr_mode is changed to host and the phy-supply changed accordingly, then it works. It has only been tested in host mode. Signed-off-by: Vicente Bergas <vicencb@gmail.com> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  ARM: exynos: Only build MCPM support if used  (Arnd Bergmann, 3 files, -5/+9)
[ Upstream commit 24d2c73ff28bcda48607eacc4bc804002dbf78d9 ] We get a link error for configurations that enable an Exynos SoC that does not require MCPM, but then manually enable MCPM anyway without also turning on the arm-cci: arch/arm/mach-exynos/mcpm-exynos.o: In function `exynos_pm_power_up_setup': mcpm-exynos.c:(.text+0x8): undefined reference to `cci_enable_port_for_self' Change it back to only build the code we actually need, by introducing a CONFIG_EXYNOS_MCPM that serves the same purpose as the older CONFIG_EXYNOS5420_MCPM. Fixes: 2997520c2d4e ("ARM: exynos: Set MCPM as mandatory for Exynos542x/5800 SoCs") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  MIPS: lantiq: Fix bitfield masking  (Petr Cvek, 1 file, -2/+3)
[ Upstream commit ba1bc0fcdeaf3bf583c1517bd2e3e29cf223c969 ] The modification of EXIN register doesn't clean the bitfield before the writing of a new value. After a few modifications the bitfield would accumulate only '1's. Signed-off-by: Petr Cvek <petrcvekcz@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: hauke@hauke-m.de Cc: john@phrozen.org Cc: linux-mips@vger.kernel.org Cc: openwrt-devel@lists.openwrt.org Cc: pakahmar@hotmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  arm64: qcom: qcs404: Add reset-cells to GCC node  (Andy Gross, 1 file, -0/+1)
[ Upstream commit 0763d0c2273a3c72247d325c48fbac3d918d6b87 ] This patch adds a reset-cells property to the gcc controller on the QCS404. Without this in place, we get warnings like the following if nodes reference a gcc reset: arch/arm64/boot/dts/qcom/qcs404.dtsi:261.38-310.5: Warning (resets_property): /soc@0/remoteproc@b00000: Missing property '#reset-cells' in node /soc@0/clock-controller@1800000 or bad phandle (referred from resets[0]) also defined at arch/arm64/boot/dts/qcom/qcs404-evb.dtsi:82.18-84.3 DTC arch/arm64/boot/dts/qcom/qcs404-evb-4000.dtb arch/arm64/boot/dts/qcom/qcs404.dtsi:261.38-310.5: Warning (resets_property): /soc@0/remoteproc@b00000: Missing property '#reset-cells' in node /soc@0/clock-controller@1800000 or bad phandle (referred from resets[0]) also defined at arch/arm64/boot/dts/qcom/qcs404-evb.dtsi:82.18-84.3 Signed-off-by: Andy Gross <agross@kernel.org> Reviewed-by: Niklas Cassel <niklas.cassel@linaro.org> Reviewed-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  arm64: dts: rockchip: fix isp iommu clocks and power domain  (Helen Koike, 1 file, -4/+4)
[ Upstream commit c432a29d3fc9ee928caeca2f5cf68b3aebfa6817 ] isp iommu requires wrapper variants of the clocks. noc variants are always on and using the wrapper variants will activate {A,H}CLK_ISP{0,1} due to the hierarchy. Tested using the pending isp patch set (which is not upstream yet). Without this patch, streaming from the isp stalls. Also add the respective power domain and remove the "disabled" status. Refer: RK3399 TRM v1.4 Fig. 2-4 RK3399 Clock Architecture Diagram RK3399 TRM v1.4 Fig. 8-1 RK3399 Power Domain Partition Signed-off-by: Helen Koike <helen.koike@collabora.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  arm64: dts: marvell: mcbin: enlarge PCI memory window  (Heinrich Schuchardt, 1 file, -0/+2)
[ Upstream commit d3446b266a8c72a7bbc94b65f5fc6d206be77d24 ] Running a graphics adapter on the MACCHIATObin fails due to an insufficiently sized memory window. Enlarge the memory window for the PCIe slot to 512 MiB. With the patch I am able to use a GT710 graphics adapter with 1 GB onboard memory. These are the mapped memory areas that the graphics adapter is actually using: Region 0: Memory at cc000000 (32-bit, non-prefetchable) [size=16M] Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at c8000000 (64-bit, prefetchable) [size=32M] Region 5: I/O ports at 1000 [size=128] Expansion ROM at ca000000 [disabled] [size=512K] Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  arm64: dts: qcom: qcs404-evb: fix l3 min voltage  (Niklas Cassel, 1 file, -1/+1)
[ Upstream commit 887b528c958f40b064d53edd0bfa9fea3a69eccd ] The current l3 min voltage level is not supported by the regulator (the voltage is not a multiple of the regulator step size), so a driver requesting this exact voltage would fail, see discussion in: https://patchwork.kernel.org/comment/22461199/ It was agreed upon to set a min voltage level that is a multiple of the regulator step size. There was actually a patch sent that did this: https://patchwork.kernel.org/patch/10819313/ However, the commit 331ab98f8c4a ("arm64: dts: qcom: qcs404: Fix voltages l3") that was applied is not identical to that patch. Signed-off-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <agross@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  ARM: dts: rockchip: Mark that the rk3288 timer might stop in suspend  (Douglas Anderson, 1 file, -0/+1)
[ Upstream commit 8ef1ba39a9fa53d2205e633bc9b21840a275908e ] This is similar to commit e6186820a745 ("arm64: dts: rockchip: Arch counter doesn't tick in system suspend"). Specifically on the rk3288 it can be seen that the timer stops ticking in suspend if we end up running through the "osc_disable" path in rk3288_slp_mode_set(). In that path the 24 MHz clock will turn off and the timer stops. To test this, I ran this on a Chrome OS filesystem: before=$(date); \ suspend_stress_test -c1 --suspend_min=30 --suspend_max=31; \ echo ${before}; date ...and I found that unless I plug in a device that requests USB wakeup to be active that the two calls to "date" would show that fewer than 30 seconds passed. NOTE: deep suspend (where the 24 MHz clock gets disabled) isn't supported yet on upstream Linux so this was tested on a downstream kernel. Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  ARM: dts: rockchip: Make rk3288-veyron-mickey's emmc work again  (Douglas Anderson, 1 file, -4/+0)
[ Upstream commit 99fa066710f75f18f4d9a5bc5f6a711968a581d5 ] When I try to boot rk3288-veyron-mickey I totally fail to make the eMMC work. Specifically my logs (on Chrome OS 4.19): mmc_host mmc1: card is non-removable. mmc_host mmc1: Bus speed (slot 0) = 400000Hz (slot req 400000Hz, actual 400000HZ div = 0) mmc_host mmc1: Bus speed (slot 0) = 50000000Hz (slot req 52000000Hz, actual 50000000HZ div = 0) mmc1: switch to bus width 8 failed mmc1: switch to bus width 4 failed mmc1: new high speed MMC card at address 0001 mmcblk1: mmc1:0001 HAG2e 14.7 GiB mmcblk1boot0: mmc1:0001 HAG2e partition 1 4.00 MiB mmcblk1boot1: mmc1:0001 HAG2e partition 2 4.00 MiB mmcblk1rpmb: mmc1:0001 HAG2e partition 3 4.00 MiB, chardev (243:0) mmc_host mmc1: Bus speed (slot 0) = 400000Hz (slot req 400000Hz, actual 400000HZ div = 0) mmc_host mmc1: Bus speed (slot 0) = 50000000Hz (slot req 52000000Hz, actual 50000000HZ div = 0) mmc1: switch to bus width 8 failed mmc1: switch to bus width 4 failed mmc1: tried to HW reset card, got error -110 mmcblk1: error -110 requesting status mmcblk1: recovery failed! print_req_error: I/O error, dev mmcblk1, sector 0 ... When I remove the '/delete-property/mmc-hs200-1_8v' then everything is hunky dory. That line comes from the original submission of the mickey dts upstream, so presumably at the time the HS200 was failing and just enumerating things as a high speed device was fine. ...or maybe it's just that some mickey devices work when enumerating at "high speed", just not mine? In any case, hs200 seems good now. Let's turn it on. Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  ARM: dts: rockchip: Make rk3288-veyron-minnie run at hs200  (Douglas Anderson, 1 file, -4/+0)
[ Upstream commit 1c0479023412ab7834f2e98b796eb0d8c627cd62 ] As some point hs200 was failing on rk3288-veyron-minnie. See commit 984926781122 ("ARM: dts: rockchip: temporarily remove emmc hs200 speed from rk3288 minnie"). Although I didn't track down exactly when it started working, it seems to work OK now, so let's turn it back on. To test this, I booted from SD card and then used this script to stress the enumeration process after fixing a memory leak [1]: cd /sys/bus/platform/drivers/dwmmc_rockchip for i in $(seq 1 3000); do echo "========================" $i echo ff0f0000.dwmmc > unbind sleep .5 echo ff0f0000.dwmmc > bind while true; do if [ -e /dev/mmcblk2 ]; then break; fi sleep .1 done done It worked fine. [1] https://lkml.kernel.org/r/20190503233526.226272-1-dianders@chromium.org Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Sasha Levin <sashal@kernel.org>