path: root/arch/x86
Age  Commit message  (Author, Files, Lines)
2020-07-31  KVM: nSVM: Correctly set the shadow NPT root level in its MMU role  (Sean Christopherson, 2 files, -1/+3)
Move the initialization of shadow NPT MMU's shadow_root_level into kvm_init_shadow_npt_mmu() and explicitly set the level in the shadow NPT MMU's role to be the TDP level. This ensures the role and MMU levels are synchronized and also initialized before __kvm_mmu_new_pgd(), which consumes the level when attempting a fast PGD switch. Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Fixes: 9fa72119b24db ("kvm: x86: Introduce kvm_mmu_calc_root_page_role()") Fixes: a506fdd223426 ("KVM: nSVM: implement nested_svm_load_cr3() and use it for host->guest switch") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200716034122.5998-2-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
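For orientation, a rough sketch of the shape of the fix (names recalled from the surrounding KVM MMU code, not the verbatim diff):

  /* In kvm_init_shadow_npt_mmu(): the shadow NPT role is forced to the
   * TDP level, and shadow_root_level mirrors it, before
   * __kvm_mmu_new_pgd() runs its fast-PGD-switch check. */
  union kvm_mmu_role new_role =
          kvm_calc_shadow_mmu_root_page_role(vcpu, false);

  new_role.base.level = vcpu->arch.tdp_level;
  context->shadow_root_level = new_role.base.level;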
2020-07-30  x86/kvm: Use __xfer_to_guest_mode_work_pending() in kvm_run_vcpu()  (Thomas Gleixner, 1 file, -1/+1)
The comments explicitly explain that the work flags check and handling in kvm_run_vcpu() are done with preemption and interrupts enabled, as KVM invokes the check again right before entering guest mode with interrupts disabled, which guarantees that the work flags are observed and handled before VMENTER. Nevertheless, the pending-flags check in kvm_run_vcpu() uses the helper variant which requires interrupts to be disabled, triggering an instant lockdep splat. This was caught in testing before and then not fixed up in the patch before applying. :( Use the relaxed and intentionally racy __xfer_to_guest_mode_work_pending() instead. Fixes: 72c3c0fe54a3 ("x86/kvm: Use generic xfer to guest work function") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/87bljxa2sa.fsf@nanos.tec.linutronix.de
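The two helpers differ only in the lockdep assertion; roughly (mirroring include/linux/entry-kvm.h as merged, from memory):

  static inline bool __xfer_to_guest_mode_work_pending(void)
  {
          unsigned long ti_work = READ_ONCE(current_thread_info()->flags);

          return !!(ti_work & XFER_TO_GUEST_MODE_WORK);
  }

  static inline bool xfer_to_guest_mode_work_pending(void)
  {
          lockdep_assert_irqs_disabled();  /* splats when called preemptibly */
          return __xfer_to_guest_mode_work_pending();
  }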
2020-07-30  initrd: remove support for multiple floppies  (Christoph Hellwig, 1 file, -2/+0)
Remove the special handling for multiple floppies in the initrd code. No one should be using floppies for booting these days. (famous last words..) Includes a spelling fix from Colin Ian King <colin.king@canonical.com>. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-07-29  x86/i8259: Use printk_deferred() to prevent deadlock  (Thomas Gleixner, 1 file, -1/+1)
0day reported a possible circular locking dependency:

  Chain exists of:
    &irq_desc_lock_class --> console_owner --> &port_lock_key

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&port_lock_key);
                                lock(console_owner);
                                lock(&port_lock_key);
   lock(&irq_desc_lock_class);

The reason for this is a printk() in the i8259 interrupt chip driver which is invoked with the irq descriptor lock held, which reverses the lock operations vs. printk() from arbitrary contexts. Switch the printk() to printk_deferred() to avoid that. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/87365abt2v.fsf@nanos.tec.linutronix.de
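The change itself is a one-line substitution, roughly:

  /* arch/x86/kernel/i8259.c, called with the irq descriptor lock held;
   * a plain printk() here can deadlock against the console/port locks */
  printk_deferred(KERN_DEBUG "spurious 8259A interrupt: IRQ%d.\n", irq);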
2020-07-29  Merge branch 'locking/header'  (Peter Zijlstra, 1 file, -2/+0)
2020-07-29  locking/atomic: Move ATOMIC_INIT into linux/types.h  (Herbert Xu, 1 file, -2/+0)
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h. This allows users of atomic_t to use ATOMIC_INIT without having to include atomic.h, which may lead to header loops. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
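Concretely (initializer per the kernel's definition):

  /* now in linux/types.h, next to the atomic_t definition */
  #define ATOMIC_INIT(i)  { (i) }

  /* users get static initialization without pulling in asm/atomic.h */
  static atomic_t active_users = ATOMIC_INIT(0);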
2020-07-29  Merge branches 'arm/renesas', 'arm/qcom', 'arm/mediatek', 'arm/omap', 'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel, 1 file, -3/+0)
2020-07-28  perf/x86/rapl: Add Hygon Fam18h RAPL support  (Pu Wen, 1 file, -1/+2)
Hygon Family 18h (Dhyana) supports RAPL in bit 14 of CPUID 0x80000007 EDX, and has the MSRs RAPL_PWR_UNIT/CORE_ENERGY_STAT/PKG_ENERGY_STAT. So add Hygon Dhyana Family 18h support for RAPL. The output is available via the energy-pkg pseudo event:

  $ perf stat -a -I 1000 --per-socket -e power/energy-pkg/

[ mingo: Tidied up the initializers. ] Signed-off-by: Pu Wen <puwen@hygon.cn> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200720082205.1307-1-puwen@hygon.cn
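A sketch of the enumeration check the text describes (accessor names assumed from the usual kernel cpuid helpers, not the patch itself):

  /* CPUID 0x80000007 EDX bit 14 advertises RAPL on Hygon Fam18h */
  if (boot_cpu_data.x86_vendor == X86_VENDOR_HYGON &&
      boot_cpu_data.x86 == 0x18 &&
      (cpuid_edx(0x80000007) & BIT(14)))
          pr_info("RAPL MSRs usable (PWR_UNIT/CORE_ENERGY_STAT/PKG_ENERGY_STAT)\n");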
2020-07-28  Merge tag 'v5.8-rc7' into perf/core, to pick up fixes  (Ingo Molnar, 37 files, -161/+320)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-27  x86: switch to ->regset_get()  (Al Viro, 9 files, -259/+83)
All instances of ->get() in arch/x86 switched; that might or might not be worth splitting up.

Notes:

 * for xstateregs_get() the amount we want to store is determined at boot time; see init_xstate_size() and update_regset_xstate_info() for details. task->thread.fpu.state.xsave ends with a flexible array member and the amount of data in it depends upon the FPU features supported/enabled.

 * fpregs_get() writes slightly less than the full ->thread.fpu.state.fsave (the last word is not copied); we pass the full size of state.fsave and let membuf_write() trim to the amount declared by the regset - __regset_get() will make sure that the space in the buffer is no more than that.

 * copy_xstate_to_user() and its helpers are gone now.

 * fpregs_soft_get() was getting the user_regset_copyout() arguments wrong - and had been since "x86: x86 user_regset math_emu" back in 2008... I really doubt it's worth splitting out for -stable, though - you need a 486SX box for that to trigger...

[Kevin's braino fix for copy_xstate_to_kernel() essentially duplicated here]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
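For reference, the shape of the new callback (a condensed sketch, not the full fpregs_get()): a struct membuf replaces the old (pos, count, kbuf, ubuf) argument set, and membuf_write() clamps to the size the regset declared.

  static int fpregs_get(struct task_struct *target,
                        const struct user_regset *regset,
                        struct membuf to)
  {
          struct fpu *fpu = &target->thread.fpu;

          /* membuf_write() trims to the regset-declared size, so passing
           * the full fsave area is fine even though the regset is smaller */
          return membuf_write(&to, &fpu->state.fsave,
                              sizeof(struct fregs_state));
  }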
2020-07-27  kill elf_fpxregs_t  (Al Viro, 1 file, -2/+0)
All uses are conditional upon ELF_CORE_COPY_XFPREGS, which has not been defined on any architecture since 2010. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-07-27  KVM: Using macros instead of magic values  (Haiwei Li, 1 file, -1/+2)
Instead of using magic values, use macros. Signed-off-by: Haiwei Li <lihaiwei@tencent.com> Message-Id: <4c072161-80dd-b7ed-7adb-02acccaa0701@gmail.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-27  genirq/affinity: Make affinity setting if activated opt-in  (Thomas Gleixner, 1 file, -0/+4)
John reported that on a RK3288 system the perf per-CPU interrupts are all affine to CPU0 and provided the analysis:

 "It looks like what happens is that because the interrupts are not per-CPU in the hardware, armpmu_request_irq() calls irq_force_affinity() while the interrupt is deactivated and then request_irq() with IRQF_PERCPU | IRQF_NOBALANCING. Now when irq_startup() runs with IRQ_STARTUP_NORMAL, it calls irq_setup_affinity() which returns early because IRQF_PERCPU and IRQF_NOBALANCING are set, leaving the interrupt on its original CPU."

This was broken by the recent commit which blocked interrupt affinity setting in hardware before activation of the interrupt. While this works in general, it does not work for this particular case. Contrary to the initial analysis, not all interrupt chip drivers implement an activate callback, so the safe cure is to make the deferred interrupt affinity setting at activation time opt-in. Implement the necessary core logic and make the two irqchip implementations for which this is required opt in. In hindsight this would have been the right thing to do, but ... Fixes: baedb87d1b53 ("genirq/affinity: Handle affinity setting on inactive interrupts correctly") Reported-by: John Keeping <john@metanate.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/87blk4tzgm.fsf@nanos.tec.linutronix.de
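A condensed sketch of the core side (based on the surrounding genirq code; details from memory, not the verbatim patch):

  static bool irq_set_affinity_deactivated(struct irq_data *data,
                                           const struct cpumask *mask)
  {
          struct irq_desc *desc = irq_data_to_desc(data);

          /* Defer only if the chip opted in and activation is still pending */
          if (!irqd_affinity_on_activate(data) || irqd_is_activated(data))
                  return false;

          /* Cache the mask; activation will apply it in hardware later */
          cpumask_copy(desc->irq_common_data.affinity, mask);
          irqd_set(data, IRQD_AFFINITY_SET);
          return true;
  }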
2020-07-27  locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs  (peterz@infradead.org, 1 file, -0/+3)
Prior to commit 859d069ee1dd ("lockdep: Prepare for NMI IRQ state tracking"), IRQ state tracking was disabled in NMIs due to nmi_enter() doing lockdep_off() -- with the obvious requirement that NMI entry call nmi_enter() before trace_hardirqs_off(). [ AFAICT, PowerPC and SH violate this order in their NMI entry ] However, that commit explicitly changed lockdep_hardirqs_*() to ignore lockdep_off() and breaks every architecture that has irq-tracing in its NMI entry that hasn't been fixed up (x86 being the only fixed one at this point).

The reason for this change is that by ignoring lockdep_off() we can:

 - get rid of 'current->lockdep_recursion' in lockdep_assert_irqs*(), which was going to give header-recursion issues with the seqlock rework.

 - allow these lockdep_assert_*() macros to function in NMI context.

Restore the previous state of things and allow an architecture to opt in to the NMI IRQ tracking support; however, instead of relying on lockdep_off(), rely on in_nmi() - both are part of nmi_enter() and so the overall entry ordering doesn't need to change. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200727124852.GK119549@hirez.programming.kicks-ass.net
2020-07-27  KVM: nVMX: check for invalid hdr.vmx.flags  (Paolo Bonzini, 1 file, -0/+3)
hdr.vmx.flags is meant for future extensions to the ABI, rejecting invalid flags is necessary to avoid broken half-loads of the nVMX state. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
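The added check is essentially the following (flag name per the nVMX ABI of the time; treat as a sketch of vmx_set_nested_state()):

  /* reject any flag bit that this kernel does not know about */
  if (kvm_state->hdr.vmx.flags & ~KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE_VALID)
          return -EINVAL;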
2020-07-27  KVM: nVMX: check for required but missing VMCS12 in KVM_SET_NESTED_STATE  (Paolo Bonzini, 2 files, -3/+15)
A missing VMCS12 was not causing -EINVAL (it was just read with copy_from_user, so it is not a security issue, but it is still wrong). Test for VMCS12 validity and reject the nested state if a VMCS12 is required but not present. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
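The shape of the added validation (a sketch, not the verbatim diff): a state blob claiming L2 is active must actually carry a VMCS12 payload.

  if ((kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE) &&
      kvm_state->size < sizeof(*kvm_state) + sizeof(*vmcs12))
          return -EINVAL;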
2020-07-27  x86/cpu: Refactor sync_core() for readability  (Ricardo Neri, 2 files, -25/+32)
Instead of having #ifdef/#endif blocks inside sync_core() for X86_64 and X86_32, implement the new function iret_to_self() with two versions. In this manner, avoid having to use even more #ifdef/#endif blocks when adding support for SERIALIZE in sync_core(). Co-developed-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200727043132.15082-4-ricardo.neri-calderon@linux.intel.com
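The resulting structure, condensed (the asm is recalled from the kernel's sync_core code with unwind hints elided; treat details as a sketch):

  #ifdef CONFIG_X86_32
  static inline void iret_to_self(void)
  {
          asm volatile (
                  "pushfl\n\t"
                  "pushl %%cs\n\t"
                  "pushl $1f\n\t"
                  "iret\n\t"
                  "1:"
                  : ASM_CALL_CONSTRAINT : : "memory");
  }
  #else
  static inline void iret_to_self(void)
  {
          unsigned int tmp;

          asm volatile (
                  "mov %%ss, %0\n\t"
                  "pushq %q0\n\t"
                  "pushq %%rsp\n\t"
                  "addq $8, (%%rsp)\n\t"
                  "pushfq\n\t"
                  "mov %%cs, %0\n\t"
                  "pushq %q0\n\t"
                  "pushq $1f\n\t"
                  "iretq\n\t"
                  "1:"
                  : "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
  }
  #endif /* CONFIG_X86_32 */

  static inline void sync_core(void)
  {
          iret_to_self();  /* IRET-to-self is architecturally serializing */
  }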
2020-07-27  x86/cpu: Relocate sync_core() to sync_core.h  (Ricardo Neri, 4 files, -64/+66)
Having sync_core() in processor.h is problematic since it is not possible there to check for hardware capabilities via the *cpu_has() family of macros: the latter need the definitions in processor.h. It also looks more intuitive to relocate the function to sync_core.h. This changeset makes no functional changes. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20200727043132.15082-3-ricardo.neri-calderon@linux.intel.com
2020-07-27  x86/cpufeatures: Add enumeration for SERIALIZE instruction  (Ricardo Neri, 1 file, -0/+1)
The Intel architecture defines a set of Serializing Instructions (a detailed definition can be found in Vol.3 Section 8.3 of the Intel "main" manual, SDM). However, these instructions do more than what is required, have side effects and/or may be rather invasive. Furthermore, some of these instructions are only available in kernel mode or may cause VMExits. Thus, software using these instructions only to serialize execution (as defined in the manual) must handle the undesired side effects. As indicated in the name, SERIALIZE is a new Intel architecture Serializing Instruction. Crucially, it does not have any of the mentioned side effects. Also, it does not cause VMExit and can be used in user mode. This new instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual in the future. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20200727043132.15082-2-ricardo.neri-calderon@linux.intel.com
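The enumeration itself is a single cpufeatures bit; SERIALIZE lives in CPUID.(EAX=7,ECX=0):EDX[14], and leaf-7 EDX is feature word 18 in the kernel's tables:

  #define X86_FEATURE_SERIALIZE   (18*32+14) /* SERIALIZE instruction */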
2020-07-27  Merge tag 'v5.8-rc7' into x86/cpu, to pick up fixes  (Ingo Molnar, 6 files, -18/+26)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-27  Merge back cpufreq material for v5.9.  (Rafael J. Wysocki, 1 file, -2/+4)
2020-07-27  x86/mm/64: Make sync_global_pgds() static  (Joerg Roedel, 2 files, -3/+1)
The function is only called from within init_64.c and can be static. Also remove its declaration from pgtable_64.h. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-4-joro@8bytes.org
2020-07-27  x86/mm/64: Do not sync vmalloc/ioremap mappings  (Joerg Roedel, 2 files, -7/+0)
Remove the code to sync the vmalloc and ioremap ranges for x86-64. The page-table pages are all pre-allocated now so that synchronization is no longer necessary. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-3-joro@8bytes.org
2020-07-27  x86/mm: Pre-allocate P4D/PUD pages for vmalloc area  (Joerg Roedel, 1 file, -0/+52)
Pre-allocate the page-table pages for the vmalloc area at the level which needs synchronization on x86-64, which is P4D for 5-level and PUD for 4-level paging. Doing this at boot makes sure no synchronization of that area is necessary at runtime. The synchronization takes the pgd_lock and iterates over all page-tables in the system, so it can take quite long and is better avoided. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20200721095953.6218-2-joro@8bytes.org
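Condensed from the new preallocate_vmalloc_pages() (a sketch; error handling shortened):

  static void __init preallocate_vmalloc_pages(void)
  {
          unsigned long addr;

          for (addr = VMALLOC_START; addr <= VMALLOC_END;
               addr = ALIGN(addr + 1, PGDIR_SIZE)) {
                  pgd_t *pgd = pgd_offset_k(addr);
                  p4d_t *p4d;

                  p4d = p4d_alloc(&init_mm, pgd, addr);
                  if (!p4d)
                          panic("Failed to pre-allocate p4d pages for vmalloc area\n");

                  if (pgtable_l5_enabled())
                          continue;       /* 5-level: P4D is the shared level */

                  if (!pud_alloc(&init_mm, p4d, addr))
                          panic("Failed to pre-allocate pud pages for vmalloc area\n");
          }
  }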
2020-07-26  x86/ioperm: Initialize pointer bitmap with NULL rather than 0  (Colin Ian King, 1 file, -1/+1)
The pointer 'bitmap' is being initialized with a plain integer 0; fix this by initializing it with NULL instead. Cleans up this sparse warning:

  arch/x86/xen/enlighten_pv.c:876:27: warning: Using plain integer as NULL pointer

Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Juergen Gross <jgross@suse.com> Link: https://lore.kernel.org/r/20200721100217.407975-1-colin.king@canonical.com
2020-07-26  Merge branch 'x86/urgent' into x86/cleanups  (Ingo Molnar, 78 files, -361/+556)
Refresh the branch for a dependent commit. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-26  Merge branch 'locking/nmi' into x86/entry  (Ingo Molnar, 4 files, -16/+47)
Resolve conflicts with ongoing lockdep work that fixed the NMI entry code. Conflicts:

  arch/x86/entry/common.c
  arch/x86/include/asm/idtentry.h

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-26  x86: uv: uv_hub.h: Delete duplicated word  (Randy Dunlap, 1 file, -1/+1)
Delete the repeated word "the". [ mingo: While at it, also capitalize CPU properly. ] Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-4-rdunlap@infradead.org
2020-07-26  x86: cmpxchg_32.h: Delete duplicated word  (Randy Dunlap, 1 file, -1/+1)
Delete the repeated word "you". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-3-rdunlap@infradead.org
2020-07-26  x86: bootparam.h: Delete duplicated word  (Randy Dunlap, 1 file, -1/+1)
Delete the repeated word "for". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200726004124.20618-2-rdunlap@infradead.org
2020-07-26  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller, 22 files, -85/+116)
The UDP reuseport conflict was a little bit tricky. The net-next code, via bpf-next, extracted the reuseport handling into a helper so that the BPF sk lookup code could invoke it. At the same time, the logic for reuseport handling of unconnected sockets changed via commit efc6b6f6c3113e8b203b9debfb72d81e0f3dcace which changed the logic to carry on the reuseport result into the rest of the lookup loop if we do not return immediately. This requires moving the reuseport_has_conns() logic into the callers. While we are here, get rid of inline directives as they do not belong in foo.c files. The other changes were cases of more straightforward overlapping modifications. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-26  Merge tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master  (Linus Torvalds, 4 files, -17/+24)
Pull x86 fixes from Ingo Molnar:
 "Misc fixes:
   - Fix a section end page alignment assumption that was causing crashes
   - Fix ORC unwinding on freshly forked tasks which haven't executed yet and which have empty user task stacks
   - Fix the debug.exception-trace=1 sysctl dumping of user stacks, which was broken by recent maccess changes"

* tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/dumpstack: Dump user space code correctly again
  x86/stacktrace: Fix reliable check for empty user task stacks
  x86/unwind/orc: Fix ORC for newly forked tasks
  x86, vmlinux.lds: Page-align end of ..page_aligned sections
2020-07-25  Merge tag 'v5.8-rc6' into locking/core, to pick up fixes  (Ingo Molnar, 22 files, -81/+108)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-25  x86/xen/time: Set the X86_FEATURE_TSC_KNOWN_FREQ flag in xen_tsc_khz()  (Hayato Ohhashi, 1 file, -0/+1)
If the TSC frequency is known from the pvclock page, the TSC frequency does not need to be recalibrated. We can avoid recalibration by setting X86_FEATURE_TSC_KNOWN_FREQ. Signed-off-by: Hayato Ohhashi <o8@vmm.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lore.kernel.org/r/20200721161231.6019-1-o8@vmm.dev
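The resulting function, roughly (a sketch reconstructed from the surrounding Xen time code):

  static unsigned long xen_tsc_khz(void)
  {
          struct pvclock_vcpu_time_info *info =
                  &HYPERVISOR_shared_info->vcpu_info[0].time;

          /* pvclock supplied the frequency; tell tsc.c not to recalibrate */
          setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);

          return pvclock_tsc_khz(info);
  }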
2020-07-25  x86/split_lock: Enable the split lock feature on Sapphire Rapids and Alder Lake CPUs  (Fenghua Yu, 1 file, -0/+2)
Add Sapphire Rapids and Alder Lake processors to the CPU list to enumerate and enable the split lock feature. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/1595634320-79689-1-git-send-email-fenghua.yu@intel.com
2020-07-25  x86/cpu: Add Lakefield, Alder Lake and Rocket Lake models to the Intel CPU family  (Tony Luck, 1 file, -0/+7)
Add three new Intel CPU models. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200721043749.31567-1-tony.luck@intel.com
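The new model IDs (values as they appear in asm/intel-family.h):

  #define INTEL_FAM6_LAKEFIELD            0x8A
  #define INTEL_FAM6_ALDERLAKE            0x97
  #define INTEL_FAM6_ROCKETLAKE           0xA7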
2020-07-25  Merge tag 'v5.8-rc6' into x86/cpu, to refresh the branch before adding new commits  (Ingo Molnar, 73 files, -344/+531)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-25  x86/defconfigs: Refresh defconfig files  (Ingo Molnar, 2 files, -122/+62)
Perform a 'make savedefconfig' pass over our main defconfig files, which keeps the defconfig result the same, but compresses the file where defaults were changed, options removed or reordered. Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724130638.645844-2-mingo@kernel.org
2020-07-25  x86/mm: Remove the unused mk_kernel_pgd() #define  (Ingo Molnar, 1 file, -7/+4)
AFAICS the last uses of directly 'making' kernel PGDs were removed 7 years ago in 8b78c21d72d9: ("x86, 64bit, mm: hibernate use generic mapping_init"), where the explicit PGD walking loop was replaced with kernel_ident_mapping_init() calls. This was then (unnecessarily) carried over through the 5-level paging conversion. Also clean up the 'level' comments a bit, to convey the original (meanwhile somewhat bit-rotted) notion that these are empty comment blocks with no methods to handle any of the levels except the PTE level. Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-4-mingo@kernel.org
2020-07-25  x86/tsc: Remove unused "US_SCALE" and "NS_SCALE" leftover macros  (Ingo Molnar, 1 file, -3/+0)
Last use of them was removed 13 years ago, when the code was converted to use CYC2NS_SCALE_FACTOR: 53d517cdbaac: ("x86: scale cyc_2_nsec according to CPU frequency") The current TSC code uses the 'struct cyc2ns_data' scaling abstraction, the old fixed scaling approach is long gone. This cleanup also removes the 'arbitralrily' typo from the comment, so win-win. ;-) Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-3-mingo@kernel.org
2020-07-25  x86/ioapic: Remove unused "IOAPIC_AUTO" define  (Ingo Molnar, 1 file, -1/+0)
Last use was removed more than 5 years ago, in: 5ad274d41c1b: ("x86/irq: Remove unused old IOAPIC irqdomain interfaces") Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200724114418.629021-2-mingo@kernel.org
2020-07-24  xen: Remove redundant initialization of irq  (Colin Ian King, 1 file, -1/+1)
The variable irq is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Link: https://lore.kernel.org/r/20200611123134.922395-1-colin.king@canonical.com Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2020-07-24  x86/build: Move max-page-size option to LDFLAGS_vmlinux  (Arvind Sankar, 1 file, -15/+17)
This option is only required for vmlinux on 64-bit, to enforce 2MiB alignment, so set it in LDFLAGS_vmlinux instead of KBUILD_LDFLAGS. Also drop the ld-option check: this option was added in binutils-2.18, and the other places that use it don't have the check either. This reduces the size of the intermediate ELF files arch/x86/boot/setup.elf and arch/x86/realmode/rm/realmode.elf by about 2MiB each; the binary versions are unchanged. Move the LDFLAGS settings to all be together, just after the CFLAGS settings. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Link: https://lore.kernel.org/r/20200722184334.3785418-1-nivedita@alum.mit.edu
2020-07-24  x86/kvm: Use generic xfer to guest work function  (Thomas Gleixner, 3 files, -15/+12)
Use the generic infrastructure to check for and handle pending work before transitioning into guest mode. This now handles TIF_NOTIFY_RESUME as well which was ignored so far. Handling it is important as this covers task work and task work will be used to offload the heavy lifting of POSIX CPU timers to thread context. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.979724969@linutronix.de
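The converted inner loop in vcpu_run(), roughly (a sketch; note the 2020-07-30 entry above later relaxes the check to __xfer_to_guest_mode_work_pending()):

  if (xfer_to_guest_mode_work_pending()) {
          srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
          r = xfer_to_guest_mode_handle_work(vcpu);  /* signals, resched, TIF_NOTIFY_RESUME */
          if (r)
                  return r;
          vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
  }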
2020-07-24  x86/entry: Cleanup idtentry_enter/exit  (Thomas Gleixner, 5 files, -31/+26)
Remove the temporary defines and fixup all references. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.855839271@linutronix.de
2020-07-24  x86/entry: Use generic interrupt entry/exit code  (Thomas Gleixner, 2 files, -172/+5)
Replace the x86 code with the generic variant. Use temporary defines for idtentry_* which will be cleaned up in the next step. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.711492752@linutronix.de
2020-07-24  x86/entry: Cleanup idtentry_entry/exit_user  (Thomas Gleixner, 3 files, -15/+11)
Cleanup the temporary defines and use irqentry_ instead of idtentry_. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.602603691@linutronix.de
2020-07-24  x86/entry: Use generic syscall exit functionality  (Thomas Gleixner, 7 files, -222/+54)
Replace the x86 variant with the generic version. Provide the relevant architecture-specific helper functions and defines. Use a temporary define for idtentry_exit_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.494648601@linutronix.de
2020-07-24  x86/entry: Use generic syscall entry function  (Thomas Gleixner, 5 files, -179/+45)
Replace the syscall entry work handling with the generic version. Provide the necessary helper inlines to handle the real architecture-specific parts, e.g. ptrace. Use a temporary define for idtentry_enter_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.376213694@linutronix.de
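Combined with the exit-side conversion above, the 64-bit syscall path ends up with this shape (a sketch of the 5.9-era code):

  __visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
  {
          nr = syscall_enter_from_user_mode(regs, nr);    /* generic entry work */

          instrumentation_begin();
          if (likely(nr < NR_syscalls)) {
                  nr = array_index_nospec(nr, NR_syscalls);
                  regs->ax = sys_call_table[nr](regs);
          }
          instrumentation_end();

          syscall_exit_to_user_mode(regs);                /* generic exit work */
  }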
2020-07-24  x86/ptrace: Provide pt_regs helper for entry/exit  (Thomas Gleixner, 1 file, -0/+5)
As a preparatory step for moving the syscall and interrupt entry/exit handling into generic code, provide a pt_regs helper which retrieves the interrupt state from pt_regs. This is required to check whether interrupts are reenabled by return from interrupt/exception. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.258511584@linutronix.de
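The helper, as added to arch/x86/include/asm/ptrace.h:

  static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
  {
          return !(regs->flags & X86_EFLAGS_IF);
  }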