path: root/arch/x86/include
2020-04-24  efi/libstub/x86: Avoid getter function for efi_is64  (Ard Biesheuvel, 1 file, -3/+8)
We no longer need to take special care when using global variables in the EFI stub, so switch to a simple symbol reference for efi_is64. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-04-24  efi/libstub: Drop __pure getter for efi_system_table  (Ard Biesheuvel, 1 file, -6/+8)
The practice of using __pure getter functions to access global variables in the EFI stub dates back to the time when we had to carefully prevent GOT entries from being emitted, because we could not rely on the toolchain to do this for us. Today, we use the hidden visibility pragma for all EFI stub source files, which now all live in the same subdirectory, and we apply a sanity check on the objects, so we can get rid of these getter functions and simply refer to global data objects directly. Start with efi_system_table(), and convert it into a global variable. While at it, make it a pointer-to-const, because we can. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
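To illustrate the shape of the change described above, here is a sketch based on the commit text; the exact declarations in the tree may differ:

	/* Before: a __pure getter kept references indirect so no GOT
	 * entry was emitted for the global. */
	const efi_system_table_t *efi_system_table(void);

	/* After: with hidden visibility applied to all EFI stub objects,
	 * the global can be referenced directly, as pointer-to-const. */
	extern const efi_system_table_t *efi_system_table;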
2020-04-24  Merge branch 'ib-mfd-x86-usb-watchdog-v5.7'  (Andy Shevchenko, 5 files, -116/+163)
Merge branch 'ib-mfd-x86-usb-watchdog-v5.7' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git to avoid conflicts in PDx86. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
2020-04-24  platform/x86: intel_pmc_ipc: Convert to MFD  (Mika Westerberg, 2 files, -47/+1)
This driver only creates a bunch of platform devices sharing resources belonging to the PMC device. This is pretty much what the MFD subsystem is for, so move the driver there, renaming it to intel_pmc_bxt.c to make clearer what it is. The MFD subsystem provides nice helper APIs for subdevice creation, so convert the driver to use those. Unfortunately, the ACPI device includes separate resources for most of the subdevices, so we cannot simply call mfd_add_devices() to create all of them; instead we need to call it separately for each device. The new MFD driver continues to expose two sysfs attributes that allow userspace to send IPC commands to the PMC/SCU, to avoid breaking any existing applications that may use these. Generally this is a bad idea, so document it in the ABI documentation. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  platform/x86: intel_telemetry: Add telemetry_get_pltdata()  (Mika Westerberg, 1 file, -1/+1)
Add a new function that allows telemetry modules to get a pointer to the platform-specific configuration. This is needed to allow the telemetry debugfs module to fetch the PMC IPC instance in the subsequent patch. It also allows us to replace telemetry_pltconfig_valid() with telemetry_get_pltdata(). Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  x86/platform/intel-mid: Add empty stubs for intel_scu_devices_[create|destroy]()  (Mika Westerberg, 1 file, -3/+6)
This allows the functions to be called even when CONFIG_X86_INTEL_MID is not enabled. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
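A minimal sketch of the stub pattern this entry describes, assuming the usual #ifdef arrangement in the header (the exact layout is an assumption):

	#ifdef CONFIG_X86_INTEL_MID
	void intel_scu_devices_create(void);
	void intel_scu_devices_destroy(void);
	#else
	static inline void intel_scu_devices_create(void) { }
	static inline void intel_scu_devices_destroy(void) { }
	#endif

With the empty inline stubs, callers compile away to nothing instead of failing to link when the option is disabled.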
2020-04-24  platform/x86: intel_pmc_ipc: Drop intel_pmc_ipc_command()  (Mika Westerberg, 1 file, -8/+0)
Now that all callers have been converted over to the SCU IPC API we can drop intel_pmc_ipc_command(). Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  platform/x86: intel_telemetry: Convert to use new SCU IPC API  (Mika Westerberg, 2 files, -1/+3)
Convert the Intel Apollo Lake telemetry driver to use the new SCU IPC API. This allows us to get rid of the duplicate PMC IPC implementation, which is now covered in the SCU IPC driver. Also move the telemetry-specific IPC message constant to the telemetry driver where it belongs. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  mfd: intel_soc_pmic_bxtwc: Convert to use new SCU IPC API  (Mika Westerberg, 1 file, -3/+0)
Convert the Intel Broxton Whiskey Cove PMIC driver to use the new SCU IPC API. This allows us to get rid of the PMC IPC implementation, which is now covered by the SCU IPC driver. We drop the error log when the IPC command fails because intel_scu_ipc_dev_command() does that already. Also move the PMIC-specific IPC message constants from the intel_pmc_ipc.h header to the PMIC driver. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  platform/x86: intel_scu_ipc: Add managed function to register SCU IPC  (Mika Westerberg, 1 file, -0/+10)
Drivers such as intel_pmc_ipc.c can be unloaded as well, so to support those, add a new function that can be called to unregister the SCU IPC when it is not needed anymore. We also add a managed version of intel_scu_ipc_register() that takes care of calling intel_scu_ipc_unregister() automatically when the driver is unbound. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
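An illustrative probe sequence using the managed variant, modeled on the usual devm_* convention; the exact signature and the shape of struct intel_scu_ipc_data are assumptions based on the commit text:

	/* Hypothetical descriptor for the IPC device's memory resource and IRQ. */
	static struct intel_scu_ipc_data example_scu_data;

	static int example_probe(struct platform_device *pdev)
	{
		struct intel_scu_ipc_dev *scu;

		/* The managed form arranges for intel_scu_ipc_unregister()
		 * to run automatically when the driver is unbound. */
		scu = devm_intel_scu_ipc_register(&pdev->dev, &example_scu_data);
		if (IS_ERR(scu))
			return PTR_ERR(scu);

		return 0;
	}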
2020-04-24  platform/x86: intel_scu_ipc: Introduce new SCU IPC API  (Mika Westerberg, 2 files, -10/+73)
The current SCU IPC API operates on a single instance, and there has been no way to pin the providing module in place while the SCU IPC is in use. This implements a new API that takes the SCU IPC instance as its first parameter (NULL means the single instance is being used). The SCU IPC instance can be retrieved by calling the new function intel_scu_ipc_dev_get(), which takes care of pinning the providing module in place until intel_scu_ipc_dev_put() is called. The old API is updated to call the new API and is left in the legacy API header to support the existing users that cannot be converted easily. Subsequent patches will convert most of the users over to the new API. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
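A hedged usage sketch of the instance-based API; the command constants are hypothetical, and the helper names beyond intel_scu_ipc_dev_get()/intel_scu_ipc_dev_put() are assumptions:

	static int example_send_command(void)
	{
		struct intel_scu_ipc_dev *scu;
		int ret;

		scu = intel_scu_ipc_dev_get();	/* pins the providing module */
		if (!scu)
			return -ENODEV;

		/* Passing NULL instead of @scu would mean "use the single
		 * legacy instance", per the text above. */
		ret = intel_scu_ipc_dev_simple_command(scu, EXAMPLE_CMD, EXAMPLE_SUB);

		intel_scu_ipc_dev_put(scu);	/* releases the module pin */
		return ret;
	}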
2020-04-24  platform/x86: intel_scu_ipc: Move legacy SCU IPC API to a separate header  (Mika Westerberg, 2 files, -55/+63)
In preparation for introducing a new API for SCU IPC, move the legacy API and constants to a separate header that is subject to eventual removal. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-24  platform/x86: intel_scu_ipc: Split out SCU IPC functionality from the SCU driver  (Mika Westerberg, 1 file, -0/+18)
The SCU IPC functionality is usable outside of Intel MID devices. For example, modern Intel CPUs include the same thing, but now it is called PMC (Power Management Controller) instead of SCU. To make the IPC available to those, split the driver into a core part (intel_scu_ipc.c) and the SCU PCI driver part (intel_scu_pcidrv.c), which calls the former before it goes and creates the rest of the SCU devices. The SCU IPC will also register a new class that gets assigned to the device created under the parent PCI device. We also split the Kconfig symbols so that INTEL_SCU_IPC enables the SCU IPC library and INTEL_SCU_PCI the SCU driver, and convert the users accordingly. While there, remove default y from the INTEL_SCU_PCI symbol as it is already selected by X86_INTEL_MID. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-04-23  efi/gop: Add prototypes for query_mode and set_mode  (Arvind Sankar, 1 file, -0/+4)
Add prototypes and argmap for the Graphics Output Protocol's QueryMode and SetMode functions. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200320020028.1936003-11-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-04-23  KVM: x86: move nested-related kvm_x86_ops to a separate struct  (Paolo Bonzini, 1 file, -13/+16)
Clean up some of the patching of kvm_x86_ops by moving the kvm_x86_ops related to nested virtualization into a separate struct. As a result, these ops will always be non-NULL on VMX. This is not a problem:

* check_nested_events is only called if is_guest_mode(vcpu) returns true
* get_nested_state treats VMXOFF state the same as nested being disabled
* set_nested_state fails if you attempt to set nested state while nesting is disabled
* nested_enable_evmcs could already be called on a CPU without VMX enabled in CPUID
* nested_get_evmcs_version was fixed in the previous patch

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
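A rough sketch of the resulting split; the member names drop the nested_ prefix once inside the struct, and the callback signatures are simplified - both are assumptions, only the op list follows the text above:

	struct kvm_vcpu;

	struct kvm_x86_nested_ops {
		int (*check_events)(struct kvm_vcpu *vcpu);
		int (*get_state)(struct kvm_vcpu *vcpu, void __user *state,
				 unsigned int size);
		int (*set_state)(struct kvm_vcpu *vcpu, void __user *state,
				 unsigned int size);
		int (*enable_evmcs)(struct kvm_vcpu *vcpu, u16 *vmcs_version);
		u16 (*get_evmcs_version)(struct kvm_vcpu *vcpu);
	};

Grouping these behind one struct lets VMX install the whole set unconditionally, relying on the call-site conditions listed above.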
2020-04-23  x86/mm: Use pgprotval_t in protval_4k_2_large() and protval_large_2_4k()  (Christoph Hellwig, 1 file, -2/+2)
Use the proper type for "raw" page table values. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200422170116.GA28345@lst.de
2020-04-23  x86/mm: Unexport __cachemode2pte_tbl  (Christoph Hellwig, 1 file, -12/+2)
Exporting the raw data for a table is generally a bad idea. Move cachemode2protval() out of line given that it isn't really used in the fast path, and then mark __cachemode2pte_tbl static. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200408152745.1565832-5-hch@lst.de
2020-04-23  x86/mm: Cleanup pgprot_4k_2_large() and pgprot_large_2_4k()  (Christoph Hellwig, 1 file, -13/+13)
Make use of lower level helpers that operate on the raw protection values to make the code a little easier to understand, and to also avoid extra conversions in a few callers. [ Qian: Fix a wrongly placed bracket in the original submission. Reported and fixed by Qian Cai <cai@lca.pw>. Details in second Link: below. ] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200408152745.1565832-4-hch@lst.de Link: https://lkml.kernel.org/r/1ED37D02-125F-4919-861A-371981581D9E@lca.pw
2020-04-23  arch: split MODULE_ARCH_VERMAGIC definitions out to <asm/vermagic.h>  (Masahiro Yamada, 2 files, -60/+68)
As the bug report [1] pointed out, <linux/vermagic.h> must be included after <linux/module.h>. I believe we should not impose any include-order restriction. We often sort include directives alphabetically, but that is just a coding style convention. Technically, we can include header files in any order by making every header self-contained. Currently, the arch-specific MODULE_ARCH_VERMAGIC is defined in <asm/module.h>, which is not included from <linux/vermagic.h>. Hence, the straightforward fix-up would be as follows:

|--- a/include/linux/vermagic.h
|+++ b/include/linux/vermagic.h
|@@ -1,5 +1,6 @@
| /* SPDX-License-Identifier: GPL-2.0 */
| #include <generated/utsrelease.h>
|+#include <linux/module.h>
|
| /* Simply sanity version stamp for modules. */
| #ifdef CONFIG_SMP

This works well enough, but for further cleanups, I split the MODULE_ARCH_VERMAGIC definitions into <asm/vermagic.h>. With this, <linux/module.h> and <linux/vermagic.h> will be orthogonal, and the location of the MODULE_ARCH_VERMAGIC definitions will be consistent. For arc and ia64, MODULE_PROC_FAMILY is only used for defining MODULE_ARCH_VERMAGIC, so I squashed it. For hexagon, nds32, and xtensa, I removed <asm/modules.h> entirely because they contained nothing but the MODULE_ARCH_VERMAGIC definition. Kbuild will automatically generate <asm/modules.h> at build time, wrapping <asm-generic/module.h>. [1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnic Reported-by: Borislav Petkov <bp@suse.de> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: Jessica Yu <jeyu@kernel.org>
2020-04-22  get rid of csum_partial_copy_to_user()  (Al Viro, 1 file, -3/+1)
For historical reasons, some architectures call their csum_and_copy_to_user() csum_partial_copy_to_user() instead (and supply a macro defining the former as the latter). Those are the last remnants of an old experiment that went nowhere; time to bury them. Rename those to csum_and_copy_to_user() and get rid of the macros. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-04-22  objtool: Remove SAVE/RESTORE hints  (Peter Zijlstra, 2 files, -30/+1)
The SAVE/RESTORE hints are now unused; remove them. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20200416115118.926738768@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-04-22  objtool: Introduce HINT_RET_OFFSET  (Peter Zijlstra, 2 files, -0/+11)
Normally objtool ensures a function keeps the stack layout invariant. But there is a useful exception: it is possible to stuff the return stack in order to 'inject' a 'call':

	push $fun
	ret

In this case the invariant mentioned above is violated. Add an objtool HINT to annotate this and allow a function exit with a modified stack frame. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20200416115118.690601403@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-04-22  objtool: Better handle IRET  (Peter Zijlstra, 1 file, -2/+0)
Teach objtool a little more about IRET so that we can avoid using the SAVE/RESTORE annotation. In particular, make the weird corner case in insn->restore go away. The purpose of that corner case is to deal with the fact that UNWIND_HINT_RESTORE lands on the instruction after IRET, but that instruction can end up being outside the basic block. Consider:

	if (cond)
		sync_core()
	foo();

Then the hint will land on foo(), and we'll encounter the restore hint without ever having seen the save hint. By teaching objtool about the arch-specific exception frame size, and assuming that any IRET in an STT_FUNC symbol is an exception-frame-sized POP, we can remove the use of save/restore hints for this code. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20200416115118.631224674@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-04-21  Merge tag 'kvm-s390-master-5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-master  (Paolo Bonzini, 12 files, -62/+151)
KVM: s390: Fix for 5.7 and maintainer update
- Silence false positive lockdep warning
- add Claudio as reviewer
2020-04-21  KVM: X86: Improve latency for single target IPI fastpath  (Wanpeng Li, 1 file, -3/+2)
Observation in our cloud environment shows that IPI and timer are the main causes of MSR-write vmexits; let's optimize virtual IPI latency more aggressively to inject the target IPI as soon as possible. Running kvm-unit-tests/vmexit.flat IPI testing on an SKX server, with the adaptive advance lapic timer and adaptive halt-polling disabled to avoid interference, this patch gives another 7% improvement:

	w/o fastpath -> x86.c fastpath      4238 -> 3543   16.4%
	x86.c fastpath -> vmx.c fastpath    3543 -> 3293    7%
	w/o fastpath -> vmx.c fastpath      4238 -> 3293   22.3%

Cc: Haiwei Li <lihaiwei@tencent.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200410174703.1138-3-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: VMX: Cache vmcs.EXIT_INTR_INFO using arch avail_reg flags  (Sean Christopherson, 1 file, -0/+1)
Introduce a new "extended register" type, EXIT_INFO_2 (to pair with the nomenclature in .get_exit_info()), and use it to cache VMX's vmcs.EXIT_INTR_INFO. Drop a comment in vmx_recover_nmi_blocking() that is obsoleted by the generic caching mechanism. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200415203454.8296-6-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: VMX: Cache vmcs.EXIT_QUALIFICATION using arch avail_reg flags  (Sean Christopherson, 1 file, -0/+1)
Introduce a new "extended register" type, EXIT_INFO_1 (to pair with the nomenclature in .get_exit_info()), and use it to cache VMX's vmcs.EXIT_QUALIFICATION. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200415203454.8296-5-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: Replace "cr3" with "pgd" in "new cr3/pgd" related code  (Sean Christopherson, 1 file, -4/+4)
Rename functions and variables in kvm_mmu_new_cr3() and related code to replace "cr3" with "pgd", i.e. continue the work started by commit 727a7e27cf88a ("KVM: x86: rename set_cr3 callback and related flags to load_mmu_pgd"). kvm_mmu_new_cr3() and company are not always loading a new CR3, e.g. when nested EPT is enabled "cr3" is actually an EPTP. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-37-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86/mmu: Add separate override for MMU sync during fast CR3 switch  (Sean Christopherson, 1 file, -1/+2)
Add a separate "skip" override for MMU sync; a future change to avoid TLB flushes on nested VMX transitions may need to sync the MMU even if the TLB flush is unnecessary. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-32-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: VMX: Retrieve APIC access page HPA only when necessary  (Sean Christopherson, 1 file, -1/+1)
Move the retrieval of the HPA associated with L1's APIC access page into VMX code to avoid unnecessarily calling gfn_to_page(), e.g. when the vCPU is in guest mode (L2). Alternatively, the optimization logic in VMX could be mirrored into the common x86 code, but that will get ugly fast when further optimizations are introduced. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-29-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: Introduce KVM_REQ_TLB_FLUSH_CURRENT to flush current ASID  (Sean Christopherson, 1 file, -1/+3)
Add KVM_REQ_TLB_FLUSH_CURRENT to allow optimized TLB flushing of VMX's EPTP/VPID contexts[*] from the KVM MMU and/or in a deferred manner, e.g. to flush L2's context during nested VM-Enter. Convert KVM_REQ_TLB_FLUSH to KVM_REQ_TLB_FLUSH_CURRENT in flows where the flush is directly associated with vCPU-scoped instruction emulation, i.e. MOV CR3 and INVPCID. Add a comment in vmx_vcpu_load_vmcs() above its KVM_REQ_TLB_FLUSH to make it clear that it deliberately requests a flush of all contexts. Service any pending flush request on nested VM-Exit as it's possible a nested VM-Exit could occur after requesting a flush for L2. Add the same logic for nested VM-Enter even though it's _extremely_ unlikely for flush to be pending on nested VM-Enter, but theoretically possible (in the future) due to RSM (SMM) emulation. [*] Intel also has an Address Space Identifier (ASID) concept, e.g. EPTP+VPID+PCID == ASID, it's just not documented in the SDM because the rules of invalidation are different based on which piece of the ASID is being changed, i.e. whether the EPTP, VPID, or PCID context must be invalidated. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-25-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: Rename ->tlb_flush() to ->tlb_flush_all()  (Sean Christopherson, 1 file, -1/+1)
Rename ->tlb_flush() to ->tlb_flush_all() in preparation for adding a new hook to flush only the current ASID/context. Opportunistically replace the comment in vmx_flush_tlb() that explains why it flushes all EPTP/VPID contexts with a comment explaining why it unconditionally uses INVEPT when EPT is enabled. I.e. rely on the "all" part of the name to clarify why it does global INVEPT/INVVPID. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-23-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()  (Sean Christopherson, 1 file, -1/+1)
Drop @invalidate_gpa from ->tlb_flush() and kvm_vcpu_flush_tlb() now that all callers pass %true for said param, or ignore the param (SVM has an internal call to svm_flush_tlb() in svm_flush_tlb_guest that somewhat arbitrarily passes %false). Remove __vmx_flush_tlb() as it is no longer used. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-17-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: make Hyper-V PV TLB flush use tlb_flush_guest()  (Vitaly Kuznetsov, 1 file, -0/+2)
The Hyper-V PV TLB flush mechanism does TLB flushes on behalf of the guest, so doing tlb_flush_all() is overkill; switch to using tlb_flush_guest() (just like the KVM PV TLB flush mechanism) instead. Introduce KVM_REQ_HV_TLB_FLUSH to support the change. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  Drivers: hv: Move AEOI determination to architecture dependent code  (Michael Kelley, 1 file, -0/+2)
Hyper-V on ARM64 doesn't provide a flag for the AEOI recommendation in ms_hyperv.hints, so having the test in architecture independent code doesn't work. Resolve this by moving the check of the flag to an architecture dependent helper function. No functionality is changed. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20200420164926.24471-1-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
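A sketch of what the arch-dependent helper might look like on x86; the helper name and flag follow Hyper-V conventions, but treat the exact form as an assumption:

	/* x86: honor the hypervisor's auto-EOI deprecation recommendation. */
	static inline bool hv_recommend_using_aeoi(void)
	{
		return !(ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED);
	}

An ARM64 version of the helper can simply return false, keeping the architecture-independent caller unchanged.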
2020-04-21  KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook  (Sean Christopherson, 1 file, -0/+6)
Add a dedicated hook to handle flushing TLB entries on behalf of the guest, i.e. for a paravirtualized TLB flush, and use it directly instead of bouncing through kvm_vcpu_flush_tlb(). For VMX, change the effective implementation to never do INVEPT and flush only the current context, i.e. to always flush via INVVPID(SINGLE_CONTEXT). The INVEPT performed by __vmx_flush_tlb() when @invalidate_gpa=false and enable_vpid=0 is unnecessary, as it will only flush guest-physical mappings; linear and combined mappings are flushed by VM-Enter when VPID is disabled, and changes in the guest page tables do not affect guest-physical mappings. When EPT and VPID are enabled, doing INVVPID is not required (by Intel's architecture) to invalidate guest-physical mappings, i.e. TLB entries that cache guest-physical mappings can live across INVVPID as the mappings are associated with an EPTP, not a VPID. The intent of @invalidate_gpa is to inform vmx_flush_tlb() that it must "invalidate gpa mappings", i.e. do INVEPT and not simply INVVPID. Other than nested VPID handling, which now calls vpid_sync_context() directly, the only scenario where KVM can safely do INVVPID instead of INVEPT (when EPT is enabled) is if KVM is flushing TLB entries from the guest's perspective, i.e. is only required to invalidate linear mappings. For SVM, flushing TLB entries from the guest's perspective can be done by flushing the current ASID, as changes to the guest's page tables are associated only with the current ASID. Adding a dedicated ->tlb_flush_guest() paves the way toward removing @invalidate_gpa, which is a potentially dangerous control flag as its meaning is not exactly crystal clear, even for those who are familiar with the subtleties of what mappings Intel CPUs are/aren't allowed to keep across various invalidation scenarios. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-15-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  KVM: x86: introduce kvm_mmu_invalidate_gva  (Paolo Bonzini, 1 file, -0/+2)
Wrap the combination of mmu->invlpg and kvm_x86_ops->tlb_flush_gva into a new function. This function also lets us specify the host PGD to invalidate and also the MMU, both of which will be useful in fixing and simplifying kvm_inject_emulated_page_fault. A nested guest's MMU however has g_context->invlpg == NULL. Instead of setting it to nonpaging_invlpg, make kvm_mmu_invalidate_gva the only entry point to mmu->invlpg and make a NULL invlpg pointer equivalent to nonpaging_invlpg, saving a retpoline. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-20  x86/elf: Disable automatic READ_IMPLIES_EXEC on 64-bit  (Kees Cook, 1 file, -2/+2)
With modern x86 64-bit environments, there should never be a need for automatic READ_IMPLIES_EXEC, as the architecture is intended to always be execute-bit aware (as in, the default memory protection should be NX unless a region explicitly requests to be executable). There were very old x86_64 systems that lacked the NX bit, but for those, the NX bit is, obviously, unenforceable, so these changes should have no impact on them. Suggested-by: Hector Marco-Gisbert <hecmargi@upv.es> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com> Link: https://lkml.kernel.org/r/20200327064820.12602-4-keescook@chromium.org
2020-04-20  x86/elf: Split READ_IMPLIES_EXEC from executable PT_GNU_STACK  (Kees Cook, 1 file, -2/+3)
The READ_IMPLIES_EXEC workaround was designed for old toolchains that lacked the ELF PT_GNU_STACK marking under the assumption that toolchains that couldn't specify executable permission flags for the stack may not know how to do it correctly for any memory region. This logic is sensible for having ancient binaries coexist in a system with possibly NX memory, but was implemented in a way that equated having a PT_GNU_STACK marked executable as being as "broken" as lacking the PT_GNU_STACK marking entirely. Things like unmarked assembly and stack trampolines may cause PT_GNU_STACK to need an executable bit, but they do not imply all mappings must be executable. This confusion has led to situations where modern programs with explicitly marked executable stacks are forced into the READ_IMPLIES_EXEC state when no such thing is needed. (And leads to unexpected failures when mmap()ing regions of device driver memory that wish to disallow VM_EXEC[1].) In looking for other reasons for the READ_IMPLIES_EXEC behavior, Jann Horn noted that glibc thread stacks have always been marked RWX (until 2003 when they started tracking the PT_GNU_STACK flag instead[2]). And musl doesn't support executable stacks at all[3]. As such, no breakage for multithreaded applications is expected from this change. [1] https://lkml.kernel.org/r/20190418055759.GA3155@mellanox.com [2] https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=54ee14b3882 [3] https://lkml.kernel.org/r/20190423192534.GN23599@brightrain.aerifal.cx Suggested-by: Hector Marco-Gisbert <hecmargi@upv.es> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com> Link: https://lkml.kernel.org/r/20200327064820.12602-3-keescook@chromium.org
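Taken together with the previous entry, the end state of the x86 heuristic is roughly as follows - a sketch, not the exact macro in the tree:

	/*
	 * Only 32-bit processes missing PT_GNU_STACK entirely (ancient
	 * toolchains) still get READ_IMPLIES_EXEC; an explicitly executable
	 * stack no longer does, and 64-bit never does.
	 */
	#define elf_read_implies_exec(ex, executable_stack)	\
		(mmap_is_ia32() && (executable_stack) == EXSTACK_DEFAULT)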
2020-04-20  x86/elf: Add table to document READ_IMPLIES_EXEC  (Kees Cook, 1 file, -0/+19)
Add a table to document the current behavior of READ_IMPLIES_EXEC in preparation for changing the behavior. Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com> Link: https://lkml.kernel.org/r/20200327064820.12602-2-keescook@chromium.org
2020-04-20  x86/mm: Move pgprot2cachemode out of line  (Christoph Hellwig, 2 files, -10/+1)
This helper is only used by x86 low-level MM code. Also remove the entirely pointless __pte2cachemode_tbl export as that symbol can be marked static now. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200408152745.1565832-3-hch@lst.de
2020-04-20  x86/mm: Add a x86_has_pat_wp() helper  (Christoph Hellwig, 1 file, -0/+2)
Abstract the ioremap code away from the caching mode internals. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200408152745.1565832-2-hch@lst.de
2020-04-20  x86/speculation: Add Special Register Buffer Data Sampling (SRBDS) mitigation  (Mark Gross, 2 files, -0/+6)
SRBDS is an MDS-like speculative side channel that can leak bits from the random number generator (RNG) across cores and threads. New microcode serializes processor access during the execution of RDRAND and RDSEED. This ensures that the shared buffer is overwritten before it is released for reuse. While it is present on all affected CPU models, the microcode mitigation is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the cases where TSX is not supported or has been disabled with TSX_CTRL. The mitigation is activated by default on affected processors, and it increases latency for RDRAND and RDSEED instructions. Among other effects this will reduce throughput from /dev/urandom.

* Enable administrators to configure the mitigation off when desired using either mitigations=off or srbds=off.
* Export vulnerability status via sysfs.
* Rename file-scoped macros to apply for non-whitelist table initializations.

[ bp: Massage,
  - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g,
  - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in,
  - flip check in cpu_set_bug_bits() to save an indentation level,
  - reflow comments.
  jpoimboe: s/Mitigated/Mitigation/ in user-visible strings
  tglx: Dropped the fused off magic for now ]

Signed-off-by: Mark Gross <mgross@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
2020-04-20  x86/cpu: Add a steppings field to struct x86_cpu_id  (Mark Gross, 1 file, -3/+24)
Intel uses the same family/model for several CPUs. Sometimes the stepping must be checked to tell them apart. On x86 there can be at most 16 steppings. Add a steppings bitmask to x86_cpu_id and a X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro and support for matching against family/model/stepping. [ bp: Massage. ] Signed-off-by: Mark Gross <mgross@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
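An illustrative match-table entry; the macro name follows the text above, while the argument order, the X86_STEPPINGS() range helper, and the family/model/stepping values are assumptions:

	static const struct x86_cpu_id example_ids[] = {
		/* Family 6, model 0x8e, steppings 0x9 through 0xc only. */
		X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE(INTEL, 6, 0x8e,
				X86_STEPPINGS(0x9, 0xc), X86_FEATURE_ANY, NULL),
		{ }
	};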
2020-04-19  Merge tag 'x86-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -1/+1)
Pull x86 and objtool fixes from Thomas Gleixner: "A set of fixes for x86 and objtool:

objtool:
- Ignore the double UD2 which is emitted in BUG() when CONFIG_UBSAN_TRAP is enabled.
- Support clang non-section symbols in objtool ORC dump
- Fix switch table detection in .text.unlikely
- Make the BP scratch register warning more robust.

x86:
- Increase microcode maximum patch size for AMD to cope with new CPUs which have a larger patch size.
- Fix a crash in the resource control filesystem when the removal of the default resource group is attempted.
- Preserve Code and Data Prioritization enabled state across CPU hotplug.
- Update split lock CPU matching to use the new X86_MATCH macros.
- Change the split lock enumeration as Intel finally decided that the IA32_CORE_CAPABILITIES bits are not architectural, contrary to what the SDM claims. !@#%$^!
- Add Tremont CPU models to the split lock detection CPU match.
- Add a missing static attribute to make sparse happy"

* tag 'x86-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/split_lock: Add Tremont family CPU models
  x86/split_lock: Bits in IA32_CORE_CAPABILITIES are not architectural
  x86/resctrl: Preserve CDP enable over CPU hotplug
  x86/resctrl: Fix invalid attempt at removing the default resource group
  x86/split_lock: Update to use X86_MATCH_INTEL_FAM6_MODEL()
  x86/umip: Make umip_insns static
  x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE
  objtool: Make BP scratch register warning more robust
  objtool: Fix switch table detection in .text.unlikely
  objtool: Support Clang non-section symbols in ORC generation
  objtool: Support Clang non-section symbols in ORC dump
  objtool: Fix CONFIG_UBSAN_TRAP unreachable warnings
2020-04-17  x86/mce: Drop bogus comment about mce.kflags  (Tony Luck, 1 file, -1/+1)
The bit definitions for kflags are for internal use only. A late edit moved them from uapi/asm/mce.h to the internal x86 <asm/mce.h>, but the comment saying "See below" was accidentally left here. Delete "See below". Just labelling this field as internal kernel use is sufficient. Fixes: 1de08dccd383 ("x86/mce: Add a struct mce.kflags field") Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200415195826.GA13681@agluck-desk2.amr.corp.intel.com
2020-04-15  KVM: x86: Export kvm_propagate_fault() (as kvm_inject_emulated_page_fault)  (Sean Christopherson, 1 file, -0/+2)
Export the page fault propagation helper so that VMX can use it to correctly emulate TLB invalidation on page faults in an upcoming patch. In the (hopefully) not-too-distant future, SGX virtualization will also want access to the helper for injecting page faults to the correct level (L1 vs. L2) when emulating ENCLS instructions. Rename the function to kvm_inject_emulated_page_fault() to clarify that it is (a) injecting a fault and (b) only for page faults. WARN if it's invoked with an exception other than PF_VECTOR. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200320212833.3507-6-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-14  x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE  (John Allen, 1 file, -1/+1)
Future AMD CPUs will have microcode patches that exceed the default 4K patch size. Raise our limit. Signed-off-by: John Allen <john.allen@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # v4.14.. Link: https://lkml.kernel.org/r/20200409152931.GA685273@mojo.amd.com
2020-04-14  x86/mce: Fixup exception only for the correct MCEs  (Borislav Petkov, 1 file, -0/+1)
The severity grading code returns the IN_KERNEL_RECOV error context for errors which have happened in kernel space but from which the kernel can recover. Whether the recovery can happen is determined by the exception table entry having ex_handler_fault() as its handler, declared at build time using _ASM_EXTABLE_FAULT(). IN_KERNEL_RECOV is used in mce_severity_intel() to look up the corresponding error severity in the severities table. However, the mapping back from error severity to whether the error is IN_KERNEL_RECOV is ambiguous, and in a very paranoid case - which might not even be possible right now, but better safe than sorry later - an exception fixup could be attempted for another MCE whose address is in the exception table and which has the proper severity. That would be unfortunate, to say the least. Therefore, mark such MCEs explicitly as MCE_IN_KERNEL_RECOV so that the recovery attempt is done only for them. Document the whole handling, while at it, as it is not trivial. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20200407163414.18058-10-bp@alien8.de
2020-04-14  x86/mce: Add a struct mce.kflags field  (Tony Luck, 2 files, -0/+9)
There can be many different subsystems registered on the MCE handler chain. Add a new bitmask field and define values so that handlers can indicate whether they took any action to log or otherwise handle an error. The default handler at the end of the chain can use this information to decide whether to print to the console log. Boris suggested a generic name and leaving plenty of spare bits for possible future use. [ bp: Move flag bits to the internal mce.h header and use BIT_ULL(). ] Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20200214222720.13168-4-tony.luck@intel.com
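A hedged sketch of how a handler on the chain might use the new field; MCE_HANDLED_EXAMPLE and example_log_error() are hypothetical names:

	static int example_mce_notifier(struct notifier_block *nb,
					unsigned long val, void *data)
	{
		struct mce *m = data;

		/* Record that this handler actually did something with the
		 * error, so the default handler can skip console logging. */
		if (example_log_error(m))
			m->kflags |= MCE_HANDLED_EXAMPLE;

		return NOTIFY_OK;
	}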