path: root/arch/x86/include
2020-03-28  Merge branch 'next.uaccess-2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs into x86/cleanups  (Thomas Gleixner)  7 files, -288/+3
Pull uaccess cleanups from Al Viro: Consolidate the user access areas and get rid of uaccess_try(), user_ex() and other warts.
2020-03-28  x86: get rid of user_atomic_cmpxchg_inatomic()  (Al Viro)  2 files, -94/+19
Only one user left; the thing had been made polymorphic back in 2013 for the sake of MPX. No point keeping it now that MPX is gone. Convert futex_atomic_cmpxchg_inatomic() to user_access_{begin,end}() while we are at it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-28  x86: don't reload after cmpxchg in unsafe_atomic_op2() loop  (Al Viro)  1 file, -8/+8
lock cmpxchg leaves the current value in eax; no need to reload it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
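A minimal C-level sketch of the pattern this relies on (illustrative only, not the patched inline asm; uaddr and oparg are assumed parameters): cmpxchg() hands back the value that was actually in memory, so a failed attempt already supplies the fresh value and the retry loop does not need to re-read it.

  static int example_or_user(unsigned int *uaddr, unsigned int oparg)
  {
          unsigned int old = *uaddr;                      /* load once, before the loop */

          for (;;) {
                  unsigned int new = old | oparg;         /* example operation */
                  unsigned int cur = cmpxchg(uaddr, old, new);

                  if (cur == old)
                          return 0;                       /* store succeeded */
                  old = cur;      /* failure: cmpxchg already returned the current
                                     value, so no extra load is needed */
          }
  }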
2020-03-28  x86: convert arch_futex_atomic_op_inuser() to user_access_begin/user_access_end()  (Al Viro)  1 file, -26/+36
Lift stac/clac pairs from __futex_atomic_op{1,2} into arch_futex_atomic_op_inuser(), fold them with access_ok() in there. The switch in arch_futex_atomic_op_inuser() is what has required the previous (objtool) commit... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-28  futex: arch_futex_atomic_op_inuser() calling conventions change  (Al Viro)  1 file, -3/+2
Move access_ok() in and pagefault_enable()/pagefault_disable() out. Mechanical conversion only - some instances don't really need a separate access_ok() at all (e.g. the ones only using get_user()/put_user(), or architectures where access_ok() is always true); we'll deal with that in followups. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
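A rough sketch of the resulting split (simplified; exact arch helper bodies differ per architecture): the arch hook now performs its own access_ok(), while the generic futex code keeps the pagefault_disable()/pagefault_enable() pair around the call.

  /* arch side, e.g. arch/x86/include/asm/futex.h, simplified: */
  static inline int arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval,
                                                u32 __user *uaddr)
  {
          if (!access_ok(uaddr, sizeof(u32)))
                  return -EFAULT;
          /* ... perform the atomic op on *uaddr, store old value in *oval ... */
          return 0;
  }

  /* generic caller in kernel/futex.c, roughly: */
  pagefault_disable();
  ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
  pagefault_enable();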
2020-03-27  x86/mm/set_memory: Fix -Wmissing-prototypes warnings  (Benjamin Thiel)  1 file, -0/+2
Add missing includes and move prototypes into the header set_memory.h in order to fix -Wmissing-prototypes warnings. [ bp: Add ifdeffery around arch_invalidate_pmem() ] Signed-off-by: Benjamin Thiel <b.thiel@posteo.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200320145028.6013-1-b.thiel@posteo.de
2020-03-27  x86/platform/uv: Add a missing prototype for uv_bau_message_interrupt()  (Benjamin Thiel)  1 file, -0/+2
... in order to fix a -Wmissing-prototypes warning:

  arch/x86/platform/uv/tlb_uv.c:1275:6: warning: no previous prototype for ‘uv_bau_message_interrupt’ [-Wmissing-prototypes]
   void uv_bau_message_interrupt(struct pt_regs *regs)

Signed-off-by: Benjamin Thiel <b.thiel@posteo.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200327072621.2255-1-b.thiel@posteo.de
2020-03-26  kill uaccess_try()  (Al Viro)  3 files, -72/+0
finally Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-25  Merge branch 'x86/cpu' into perf/core, to resolve conflict  (Ingo Molnar)  4 files, -15/+137
Conflicts: arch/x86/events/intel/uncore.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-25  x86/entry: Fix build error x86 with !CONFIG_POSIX_TIMERS  (Brian Gerst)  1 file, -1/+1
Add missing semicolon. Fixes: a74d187c2df3 ("x86/entry: Refactor SYS_NI macros") Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200324143520.898733-1-brgerst@gmail.com
2020-03-24  x86/cpu: Cleanup the now unused CPU match macros  (Thomas Gleixner)  2 files, -16/+0
No more users. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131510.900226233@linutronix.de
2020-03-24  x86/cpu: Add consistent CPU match macros  (Thomas Gleixner)  2 files, -9/+137
Finding all places which build x86_cpu_id match tables is tedious and the logic is hidden in lots of differently named macro wrappers. Most of these initializer macros use plain C89 initializers which rely on the ordering of the struct members. So new members could only be added at the end of the struct, but that's ugly as hell and C99 initializers are really the right thing to use.

Provide a set of macros which:

  - Have a proper naming scheme, starting with X86_MATCH_
  - Use C99 initializers

The set of provided macros are all subsets of the base macro X86_MATCH_VENDOR_FAM_MODEL_FEATURE(), which allows to supply all possible selection criteria: vendor, family, model, feature. The other macros shorten this to avoid typing all arguments when they are not needed and would require one of the _ANY constants. They have been created due to the requirements of the existing usage sites.

Also add a few model constants for Centaur CPUs and QUARK.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20200320131508.826011988@linutronix.de
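For illustration, the base macro plausibly expands to a C99 designated initializer along these lines (field names taken from struct x86_cpu_id; treat this as a sketch, not the exact header contents):

  #define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,   \
                                             _feature, _data) {          \
          .vendor      = X86_VENDOR_##_vendor,                           \
          .family      = _family,                                        \
          .model       = _model,                                         \
          .feature     = _feature,                                       \
          .driver_data = (unsigned long)(_data)                          \
  }

  /* usage in a driver's match table, e.g.: */
  static const struct x86_cpu_id example_ids[] = {
          X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, X86_MODEL_ANY,
                                             X86_FEATURE_APERFMPERF, NULL),
          {}
  };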
2020-03-24  x86/devicetable: Move x86 specific macro out of generic code  (Thomas Gleixner)  1 file, -1/+12
There is no reason that this gunk is in a generic header file. The wildcard defines need to stay as they are required by file2alias. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131508.736205164@linutronix.de
2020-03-23  um: Fix header inclusion  (Vincenzo Frascino)  1 file, -0/+6
User Mode Linux is a flavor of x86 that from the vDSO perspective always falls back on system calls. This implies that it does not require any of the unified vDSO definitions and their inclusion causes side effects like this:

  In file included from include/vdso/processor.h:10:0,
                   from include/vdso/datapage.h:17,
                   from arch/x86/include/asm/vgtod.h:7,
                   from arch/x86/um/../kernel/sys_ia32.c:49:
  >> arch/x86/include/asm/vdso/processor.h:11:29: error: redefinition of 'rep_nop'
     static __always_inline void rep_nop(void)
                                 ^~~~~~~
  In file included from include/linux/rcupdate.h:30:0,
                   from include/linux/rculist.h:11,
                   from include/linux/pid.h:5,
                   from include/linux/sched.h:14,
                   from arch/x86/um/../kernel/sys_ia32.c:25:
  arch/x86/um/asm/processor.h:24:20: note: previous definition of 'rep_nop' was here
   static inline void rep_nop(void)

Make sure that the unnecessary headers are not included when um is built to address the problem.

Fixes: abc22418db02 ("x86/vdso: Enable x86 to use common headers")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200323124109.7104-1-vincenzo.frascino@arm.com
2020-03-23  x86/mm: Drop pud_mknotpresent()  (Anshuman Khandual)  1 file, -6/+0
There is an inconsistency between PMD and PUD-based THP page table helpers like the following, as pud_present() does not test for _PAGE_PSE:

  pmd_present(pmd_mknotpresent(pmd)) : True
  pud_present(pud_mknotpresent(pud)) : False

Drop pud_mknotpresent() as there are no current users. If/when needed back later, pud_present() will also have to be fixed to accommodate _PAGE_PSE.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/1584925542-13034-1-git-send-email-anshuman.khandual@arm.com
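Roughly why the two helpers disagree (simplified from arch/x86/include/asm/pgtable.h; the exact flag sets in the tree may differ slightly):

  static inline int pmd_present(pmd_t pmd)
  {
          /* _PAGE_PSE keeps a huge page "present" even after the present
           * bit has been cleared, e.g. while a THP is being split */
          return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE);
  }

  static inline int pud_present(pud_t pud)
  {
          /* no _PAGE_PSE check, so pud_mknotpresent() really reads as not present */
          return pud_flags(pud) & (_PAGE_PRESENT | _PAGE_PROTNONE);
  }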
2020-03-22  x86/mce/amd: Add PPIN support for AMD MCE  (Wei Huang)  1 file, -0/+1
Newer AMD CPUs support a feature called protected processor identification number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23]. However, CPUID alone is not enough to read the processor identification number - MSR_AMD_PPIN_CTL also needs to be configured properly. If, for any reason, MSR_AMD_PPIN_CTL[PPIN_EN] can not be turned on, such as disabled in BIOS, the CPU capability bit X86_FEATURE_AMD_PPIN needs to be cleared.

When the X86_FEATURE_AMD_PPIN capability is available, the identification number is issued together with the MCE error info in order to keep track of the source of MCE errors.

[ bp: Massage. ]

Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200321193800.3666964-1-wei.huang2@amd.com
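A hedged sketch of the enumeration flow described above (the PPIN_EN bit position is assumed for illustration and is not taken from the patch):

  if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {
          u64 ppin_ctl;

          /* CPUID says PPIN exists, but the control MSR must also allow it */
          if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &ppin_ctl) ||
              !(ppin_ctl & BIT_ULL(1)))           /* assumed: bit 1 = PPIN_EN */
                  clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
  }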
2020-03-21  x86/entry: Rename ___preempt_schedule  (Peter Zijlstra)  1 file, -4/+4
Because moar '_' isn't always moar readable.

  git grep -l "___preempt_schedule\(_notrace\)*" | while read file; do
          sed -ie 's/___preempt_schedule\(_notrace\)*/preempt_schedule\1_thunk/g' $file;
  done

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320115858.995685950@infradead.org
2020-03-21  x86: Remove unneeded includes  (Brian Gerst)  1 file, -5/+0
Clean up includes of and in <asm/syscalls.h> Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200313195144.164260-19-brgerst@gmail.com
2020-03-21  x86/entry: Drop asmlinkage from syscalls  (Brian Gerst)  2 files, -18/+15
asmlinkage is no longer required since the syscall ABI is now fully under x86 architecture control. This makes the 32-bit native syscalls a bit more efficient by passing in regs via EAX instead of on the stack. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-18-brgerst@gmail.com
2020-03-21  x86/entry/32: Enable pt_regs based syscalls  (Brian Gerst)  3 files, -85/+61
Enable pt_regs based syscalls for 32-bit. This makes the 32-bit native kernel consistent with the 64-bit kernel, and improves the syscall interface by not needing to push all 6 potential arguments onto the stack. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Link: https://lkml.kernel.org/r/20200313195144.164260-17-brgerst@gmail.com
2020-03-21  x86/entry: Move max syscall number calculation to syscallhdr.sh  (Brian Gerst)  2 files, -3/+7
Instead of using an array in asm-offsets to calculate the max syscall number, calculate it when writing out the syscall headers. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200313195144.164260-9-brgerst@gmail.com
2020-03-21  x86/entry/64: Move sys_ni_syscall stub to common.c  (Brian Gerst)  1 file, -0/+3
so it can be available to multiple syscall tables. Also directly return -ENOSYS instead of bouncing to the generic sys_ni_syscall(). Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200313195144.164260-7-brgerst@gmail.com
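The shared stub plausibly looks something like this (a sketch based on the description above, not quoted from the patch):

  /* arch/x86/entry/common.c: one definition usable by all syscall tables,
   * returning -ENOSYS directly instead of bouncing to sys_ni_syscall() */
  SYSCALL_DEFINE0(ni_syscall)
  {
          return -ENOSYS;
  }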
2020-03-21  x86/entry/64: Use syscall wrappers for x32_rt_sigreturn  (Brian Gerst)  1 file, -5/+0
Add missing syscall wrapper for x32_rt_sigreturn(). Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-6-brgerst@gmail.com
2020-03-21  x86/entry: Refactor SYS_NI macros  (Brian Gerst)  1 file, -9/+23
Pull the common code out from the SYS_NI macros into a new __SYS_NI macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-5-brgerst@gmail.com
2020-03-21  x86/entry: Refactor COND_SYSCALL macros  (Brian Gerst)  1 file, -18/+26
Pull the common code out from the COND_SYSCALL macros into a new __COND_SYSCALL macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-4-brgerst@gmail.com
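As an illustration of the pattern used across this series (sketch only; the kernel's exact macro bodies may differ), the common helper emits a weak stub per ABI and the per-ABI wrappers become thin, conditionally compiled shims:

  #define __COND_SYSCALL(abi, name)                                       \
          __weak long __##abi##_##name(const struct pt_regs *__unused)    \
          {                                                               \
                  return sys_ni_syscall();                                \
          }

  #ifdef CONFIG_X86_64
  #define __X64_COND_SYSCALL(name)    __COND_SYSCALL(x64, name)
  #else
  #define __X64_COND_SYSCALL(name)    /* nothing on 32-bit native */
  #endif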
2020-03-21  x86/entry: Refactor SYSCALL_DEFINE0 macros  (Brian Gerst)  1 file, -41/+32
Pull the common code out from the SYSCALL_DEFINE0 macros into a new __SYS_STUB0 macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-3-brgerst@gmail.com
2020-03-21  x86/entry: Refactor SYSCALL_DEFINEx macros  (Brian Gerst)  1 file, -25/+24
Pull the common code out from the SYSCALL_DEFINEx macros into a new __SYS_STUBx macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Andy Lutomirski <luto@kernel.org> Link: https://lkml.kernel.org/r/20200313195144.164260-2-brgerst@gmail.com
2020-03-21  x86/vdso: Enable x86 to use common headers  (Vincenzo Frascino)  2 files, -11/+24
Enable x86 to use only the common headers in the implementation of the vDSO library. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200320145351.32292-24-vincenzo.frascino@arm.com
2020-03-21  x86: Introduce asm/vdso/clocksource.h  (Vincenzo Frascino)  2 files, -4/+11
The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library. Introduce asm/vdso/clocksource.h to contain all the x86 specific functions that are suitable for vDSO inclusion. This header will be required by a future patch that will generalize vdso/clocksource.h. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200320145351.32292-5-vincenzo.frascino@arm.com
2020-03-21  Merge branch 'linus' into locking/kcsan, to pick up fixes  (Ingo Molnar)  8 files, -5/+36
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-21  Merge branch 'x86/kdump' into locking/kcsan, to resolve conflicts  (Ingo Molnar)  61 files, -995/+853
Conflicts: arch/x86/purgatory/Makefile Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-20  x86/optprobe: Fix OPTPROBE vs UACCESS  (Peter Zijlstra)  1 file, -0/+1
While looking at an objtool UACCESS warning, it suddenly occurred to me that it is entirely possible to have an OPTPROBE right in the middle of an UACCESS region. In this case we must of course clear FLAGS.AC while running the KPROBE. Luckily the trampoline already saves/restores [ER]FLAGS, so all we need to do is inject a CLAC. Unfortunately we cannot use ALTERNATIVE() in the trampoline text, so we have to frob that manually. Fixes: ca0bbc70f147 ("sched/x86_64: Don't save flags on context switch") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200305092130.GU2596@hirez.programming.kicks-ass.net
2020-03-19  Merge branch 'perf/urgent' into perf/core, to pick up fixes  (Ingo Molnar)  1 file, -1/+0
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-19  x86/setup: Fix static memory detection  (Guenter Roeck)  1 file, -0/+20
When booting x86 images in qemu, the following warning is seen randomly if DEBUG_LOCKDEP is enabled:

  WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:1119 lockdep_register_key+0xc0/0x100

static_obj() returns true if an address is between _stext and _end. On x86, this includes the brk memory space. Problem is that this memory block is not static on x86; its unused portions are released after init and can be allocated. This results in the observed warning if a lockdep object is allocated from this memory.

Solve the problem by implementing arch_is_kernel_initmem_freed() for x86 and have it return true if an address is within the released memory range.

The same problem was solved for s390 with commit 7a5da02de8d6e ("locking/lockdep: check for freed initmem in static_obj()"), which introduced arch_is_kernel_initmem_freed().

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200131021159.9178-1-linux@roeck-us.net
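A hedged sketch of what such a hook could look like (the flag and range symbols below are assumptions for illustration, not names taken from the patch):

  bool arch_is_kernel_initmem_freed(unsigned long addr)
  {
          /* only meaningful once the unused brk area has been released */
          if (!brk_released)                                    /* assumed flag */
                  return false;

          return addr >= (unsigned long)released_brk_start &&   /* assumed symbols */
                 addr <  (unsigned long)released_brk_end;
  }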
2020-03-19  x86: switch setup_sigcontext() to unsafe_put_user()  (Al Viro)  1 file, -3/+0
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-19  x86: kill get_user_{try,catch,ex}  (Al Viro)  1 file, -54/+0
no users left Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-18  x86: get rid of small constant size cases in raw_copy_{to,from}_user()  (Al Viro)  3 files, -145/+2
Very few call sites where that would be triggered remain, and none of those is anywhere near hot enough to bother. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-18  x86: switch sigframe sigset handling to explicit __get_user()/__put_user()  (Al Viro)  1 file, -5/+1
... and consolidate the definition of sigframe_ia32->extramask - it's always a 1-element array of 32bit unsigned. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-18  x86: Fix bitops.h warning with a moved cast  (Jesse Brandeburg)  1 file, -2/+2
Fix many sparse warnings when building with C=1. These are useless noise from the bitops.h file and getting rid of them helps developers make more use of the tools and possibly find real bugs.

When the kernel is compiled with C=1, there are lots of messages like:

  arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

CONST_MASK() is using a signed integer "1" to create the mask which is later cast to (u8), in order to yield an 8-bit value for the assembly instructions to use. Simplify the expressions used to clearly indicate they are working on 8-bit values only, which still keeps sparse happy without an accidental promotion to a 32 bit integer.

The warning was occurring because certain bitmasks that end with a bit set next to a natural boundary like 7, 15, 23, 31, end up with a mask like 0x7f, which then results in sign extension due to the integer type promotion rules [1]. It was really only clear_bit() that was having problems, and it was only on some bit checks that resulted in a mask like 0xffffff7f being generated after the inversion.

Verify with a test module (see next patch) and assembly inspection that the fix doesn't introduce any change in generated code.

[ bp: Massage. ]

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Acked-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules [1]
Link: https://lkml.kernel.org/r/20200310221747.2848474-1-jesse.brandeburg@intel.com
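A small standalone illustration of the promotion problem (simplified; this is neither the kernel's exact macro nor the exact fix):

  #define CONST_MASK(nr)  (1 << ((nr) & 7))       /* plain int arithmetic */

  /* clear_bit(7, ...): ~CONST_MASK(7) is evaluated as a 32-bit int,
   * ~0x80 == 0xffffff7f, and sparse flags the cast that truncates it: */
  unsigned char clear_mask  = (unsigned char)~CONST_MASK(7);    /* warns */

  /* keeping the arithmetic within 8 bits yields the same 0x7f value
   * without a truncating cast: */
  unsigned char clear_mask2 = CONST_MASK(7) ^ 0xff;             /* quiet */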
2020-03-17  perf/amd/uncore: Add support for Family 19h L3 PMU  (Kim Phillips)  1 file, -2/+13
Family 19h introduces a change in slice, core and thread specification in its L3 Performance Event Select (ChL3PmcCfg) h/w register. The change is incompatible with Family 17h's version of the register.

Introduce a new path in l3_thread_slice_mask() to do things differently for Family 19h vs. Family 17h, otherwise the new hardware doesn't get programmed correctly. Instead of a linear core--thread bitmask, Family 19h takes an encoded core number, and a separate thread mask. There are new bits that are set for all cores and all slices, of which only the latter is used, since the driver counts events for all slices on behalf of the specified CPU.

Also update amd_uncore_init() to base its L2/NB vs. L3/Data Fabric mode decision on Family 17h or above, not just 17h and 18h: the Family 19h Data Fabric PMC is compatible with the Family 17h DF PMC.

[ bp: Touchups. ]

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-3-kim.phillips@amd.com
2020-03-17  x86: Don't let pgprot_modify() change the page encryption bit  (Thomas Hellstrom)  2 files, -3/+6
When SEV or SME is enabled and active, vm_get_page_prot() typically returns with the encryption bit set. This means that users of pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up with a value of vma->vm_page_prot that is not consistent with the intended protection of the PTEs. This is also important for fault handlers that rely on the VMA vm_page_prot to set the page protection.

Fix this by not allowing pgprot_modify() to change the encryption bit, similar to how it's done for PAT bits.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20200304114527.3636-2-thomas_os@shipmail.org
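Conceptually, pgprot_modify() carries a set of "preserved" bits over from the old protection; the fix amounts to including the encryption bit in that preserved set, just like the PAT bits. A simplified sketch (the actual patch may structure the masks differently):

  static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
  {
          /* bits kept from the existing protection; after the fix this
           * mask also covers the SME/SEV encryption bit (_PAGE_ENC) */
          pgprotval_t preserved = pgprot_val(oldprot) & _PAGE_CHG_MASK;
          /* everything else comes from the requested protection */
          pgprotval_t addbits   = pgprot_val(newprot) & ~_PAGE_CHG_MASK;

          return __pgprot(preserved | addbits);
  }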
2020-03-17  x86/amd_nb, char/amd64-agp: Use amd_nb_num() accessor  (Borislav Petkov)  1 file, -1/+0
... to find whether there are northbridges present on the system. Convert the last forgotten user and therefore, unexport amd_nb_misc_ids[] too. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Michal Kubecek <mkubecek@suse.cz> Cc: Yazen Ghannam <yazen.ghannam@amd.com> Link: https://lkml.kernel.org/r/20200316150725.925-1-bp@alien8.de
2020-03-16  KVM: x86: rename set_cr3 callback and related flags to load_mmu_pgd  (Paolo Bonzini)  1 file, -2/+3
The set_cr3 callback is not setting the guest CR3, it is setting the root of the guest page tables, either shadow or two-dimensional. To make this clearer as well as to indicate that the MMU calls it via kvm_mmu_load_cr3, rename it to load_mmu_pgd. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86: unify callbacks to load paging root  (Paolo Bonzini)  1 file, -3/+0
Similar to what kvm-intel.ko is doing, provide a single callback that merges svm_set_cr3, set_tdp_cr3 and nested_svm_set_tdp_cr3. This lets us unify the set_cr3 and set_tdp_cr3 entries in kvm_x86_ops. I'm doing that in this same patch because splitting it adds quite a bit of churn due to the need for forward declarations. For the same reason the assignment to vcpu->arch.mmu->set_cr3 is moved to kvm_init_shadow_mmu from init_kvm_softmmu and nested_svm_init_mmu_context. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86: Move nSVM CPUID 0x8000000A handling into common x86 code  (Sean Christopherson)  1 file, -2/+0
Handle CPUID 0x8000000A in the main switch in __do_cpuid_func() and drop ->set_supported_cpuid() now that both VMX and SVM implementations are empty. Like leaf 0x14 (Intel PT) and leaf 0x8000001F (SEV), leaf 0x8000000A is (obviously) vendor specific but can be queried in common code while respecting SVM's wishes by querying kvm_cpu_cap_has(). Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86: Move VMX's host_efer to common x86 code  (Sean Christopherson)  1 file, -0/+2
Move host_efer to common x86 code and use it for CPUID's is_efer_nx() to avoid constantly re-reading the MSR. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86/mmu: Configure max page level during hardware setup  (Sean Christopherson)  1 file, -2/+1
Configure the max page level during hardware setup to avoid a retpoline in the page fault handler. Drop ->get_lpage_level() as the page fault handler was the last user. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86/mmu: Merge kvm_{enable,disable}_tdp() into a common function  (Sean Christopherson)  1 file, -2/+1
Combine kvm_enable_tdp() and kvm_disable_tdp() into a single function, kvm_configure_mmu(), in preparation for doing additional configuration during hardware setup. And because having separate helpers is silly. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: VMX: Directly query Intel PT mode when refreshing PMUs  (Sean Christopherson)  1 file, -2/+0
Use vmx_pt_mode_is_host_guest() in intel_pmu_refresh() instead of bouncing through kvm_x86_ops->pt_supported, and remove ->pt_supported() as the PMU code was the last remaining user. Opportunistically clean up the wording of a comment that referenced kvm_x86_ops->pt_supported(). No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  KVM: x86: Use KVM cpu caps to detect MSR_TSC_AUX virt support  (Sean Christopherson)  1 file, -1/+0
Check for MSR_TSC_AUX virtualization via kvm_cpu_cap_has() and drop ->rdtscp_supported(). Note, vmx_rdtscp_supported() needs to hang around a tiny bit longer due other usage in VMX code. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>