path: root/arch/x86/kernel
Age  Commit message  (Author; files changed, lines -removed/+added)
2025-02-22  x86/arch_prctl/64: Clean up ARCH_MAP_VDSO_32  (Brian Gerst; 1 file, -1/+1)
process_64.c is not built on native 32-bit, so CONFIG_X86_32 will never be set. No change in functionality intended. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20250202202323.422113-3-brgerst@gmail.com
2025-02-22  x86/arch_prctl: Simplify sys_arch_prctl()  (Brian Gerst; 3 files, -24/+4)
Use in_ia32_syscall() instead of a compat syscall entry. No change in functionality intended. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20250202202323.422113-2-brgerst@gmail.com
2025-02-21  x86/apic: Use str_disabled_enabled() helper in print_ipi_mode()  (Thorsten Blum; 1 file, -1/+2)
Remove hard-coded strings by using the str_disabled_enabled() helper. No change in functionality intended. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250209210333.5666-2-thorsten.blum@linux.dev
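For illustration, a minimal sketch of the pattern (the flag name here is hypothetical, not the actual variable in print_ipi_mode()):

    #include <linux/string_choices.h>

    /* before: open-coded ternary
     *   pr_info("IPI shorthand broadcast: %s\n",
     *           ipi_shorthand_off ? "disabled" : "enabled");
     * after: the shared helper picks the string */
    pr_info("IPI shorthand broadcast: %s\n",
            str_disabled_enabled(ipi_shorthand_off));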
2025-02-21  x86/mm: Remove pv_ops.mmu.tlb_remove_table call  (Rik van Riel; 2 files, -2/+0)
Every pv_ops.mmu.tlb_remove_table call ends up calling tlb_remove_table. Get rid of the indirection by simply calling tlb_remove_table directly, and not going through the paravirt function pointers. Suggested-by: Qi Zheng <zhengqi.arch@bytedance.com> Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Manali Shukla <Manali.Shukla@amd.com> Tested-by: Brendan Jackman <jackmanb@google.com> Tested-by: Michael Kelley <mhklinux@outlook.com> Link: https://lore.kernel.org/r/20250213161423.449435-3-riel@surriel.com
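A sketch of the effect (simplified; not the verbatim diff):

    /* before: indirect, through the paravirt function pointer */
    pv_ops.mmu.tlb_remove_table(tlb, table);

    /* after: the direct call that every backend resolved to anyway */
    tlb_remove_table(tlb, table);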
2025-02-21  x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional  (Rik van Riel; 1 file, -16/+1)
Currently x86 uses CONFIG_MMU_GATHER_RCU_TABLE_FREE when using paravirt, and not when running on bare metal. There is no good reason to do things differently for each setup. Make them all the same. Currently get_user_pages_fast() synchronizes against page table freeing in two different ways:
- on bare metal, by blocking IRQs, which blocks TLB flush IPIs
- on paravirt, with MMU_GATHER_RCU_TABLE_FREE
This is done because some paravirt TLB flush implementations handle the TLB flush in the hypervisor, and will do the flush even when the target CPU has interrupts disabled. Always handle page table freeing with MMU_GATHER_RCU_TABLE_FREE. Using RCU synchronization between page table freeing and get_user_pages_fast() allows bare metal to also do TLB flushing while interrupts are disabled. Various places in the mm still block IRQs or disable preemption as an implicit way to block RCU frees. That makes it safe to use INVLPGB on AMD CPUs. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Manali Shukla <Manali.Shukla@amd.com> Tested-by: Brendan Jackman <jackmanb@google.com> Tested-by: Michael Kelley <mhklinux@outlook.com> Link: https://lore.kernel.org/r/20250213161423.449435-2-riel@surriel.com
2025-02-21  x86/e820: Drop obsolete E820_TYPE_RESERVED_KERN and related code  (Mike Rapoport (Microsoft); 3 files, -65/+4)
E820_TYPE_RESERVED_KERN is a relic from ancient history that was used to early-reserve setup_data, see: 28bb22379513 ("x86: move reserve_setup_data to setup.c") Nowadays setup_data is reserved in memblock anyway, and there is no point in carrying E820_TYPE_RESERVED_KERN, which behaves exactly like E820_TYPE_RAM but only complicates the code. A bonus for removing E820_TYPE_RESERVED_KERN is a small but measurable speedup of 20 microseconds in init_mem_mapping() on a VM with 32GB of RAM. Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20250214090651.3331663-5-rppt@kernel.org
2025-02-21  x86/boot: Split parsing of boot_params into the parse_boot_params() helper function  (Mike Rapoport (Microsoft); 1 file, -31/+41)
Makes setup_arch() a bit easier to comprehend. No functional changes. Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250214090651.3331663-4-rppt@kernel.org
2025-02-21  x86/boot: Split kernel resources setup into the setup_kernel_resources() helper function  (Mike Rapoport (Microsoft); 1 file, -14/+22)
Makes setup_arch() a bit easier to comprehend. No functional changes. Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250214090651.3331663-3-rppt@kernel.org
2025-02-21  x86/boot: Move setting of memblock parameters to e820__memblock_setup()  (Mike Rapoport (Microsoft); 2 files, -25/+30)
Changing memblock parameters, namely bottom_up and the allocation upper limit, does not have any effect before memblock initialization in e820__memblock_setup(). Move the calls to memblock_set_bottom_up() and memblock_set_current_limit() to e820__memblock_setup() to group all the initial memblock setup together and make setup_arch() more readable. No functional changes. Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250214090651.3331663-2-rppt@kernel.org
2025-02-21  x86/tsc: Always save/restore TSC sched_clock() on suspend/resume  (Guilherme G. Piccoli; 1 file, -2/+2)
TSC could be reset in deep ACPI sleep states, even with invariant TSC. That's the reason we have the sched_clock() save/restore functions, to deal with this situation. But what happens is that those functions are guarded with a check for the stability of sched_clock: if it is not considered stable, the save/restore routines aren't executed. On top of that, there is a clear comment in native_sched_clock() saying that *even* with an unstable TSC, we continue using TSC for sched_clock due to its speed. In other words, when TSC gets detected as unstable, sched_clock is marked unstable as well, so subsequent S3 sleep cycles could bring bogus sched_clock values due to the lack of the save/restore mechanism, causing warnings like this:

  [22.954918] ------------[ cut here ]------------
  [22.954923] Delta way too big! 18446743750843854390 ts=18446744072977390405 before=322133536015 after=322133536015 write stamp=18446744072977390405
  [22.954923] If you just came from a suspend/resume,
  [22.954923] please switch to the trace global clock:
  [22.954923]   echo global > /sys/kernel/tracing/trace_clock
  [22.954923] or add trace_clock=global to the kernel command line
  [22.954937] WARNING: CPU: 2 PID: 5728 at kernel/trace/ring_buffer.c:2890 rb_add_timestamp+0x193/0x1c0

Notice that the above was reproduced even with "trace_clock=global". The fix for that is to _always_ save/restore the sched_clock on suspend cycles _if TSC is used_ as sched_clock; only if we fall back to jiffies does the sched_clock_stable() check become relevant for the save/restore decision. Debugged-by: Thadeu Lima de Souza Cascardo <cascardo@igalia.com> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250215210314.351480-1-gpiccoli@igalia.com
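A sketch of the fixed guard (simplified, and assuming the __use_tsc static key is what gates TSC-backed sched_clock()):

    void tsc_restore_sched_clock_state(void)
    {
            /* was: if (!sched_clock_stable()) return; */
            if (!static_branch_likely(&__use_tsc))
                    return;

            /* ... re-sync cyc2ns state so sched_clock() resumes sanely ... */
    }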
2025-02-21  ACPI/processor_idle: Export acpi_processor_ffh_play_dead()  (Artem Bityutskiy; 1 file, -0/+1)
The kernel test robot reported the following build error:

  >> ERROR: modpost: "acpi_processor_ffh_play_dead" [drivers/acpi/processor.ko] undefined!

Caused by this recently merged commit: 541ddf31e300 ("ACPI/processor_idle: Add FFH state handling") The build failure is due to an oversight in the 'CONFIG_ACPI_PROCESSOR=m' case: the function export is missing. Add it. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202502151207.FA9UO1iX-lkp@intel.com/ Fixes: 541ddf31e300 ("ACPI/processor_idle: Add FFH state handling") Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/de5bf4f116779efde315782a15146fdc77a4a044.camel@linux.intel.com
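The missing piece is a one-line export, roughly (assuming the GPL-only export variant is the appropriate one):

    EXPORT_SYMBOL_GPL(acpi_processor_ffh_play_dead);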
2025-02-21  Merge tag 'v6.14-rc3' into x86/mm, to pick up fixes before merging new changes  (Ingo Molnar; 1 file, -7/+14)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-02-21  x86/fpu: Fix guest FPU state buffer allocation size  (Stanislav Spassov; 1 file, -1/+1)
Ongoing work on an optimization to batch-preallocate vCPU state buffers for KVM revealed a mismatch between the allocation sizes used in fpu_alloc_guest_fpstate() and fpstate_realloc(). While the former allocates a buffer sized to fit the default set of XSAVE features in UABI form (as per fpu_user_cfg), the latter uses its ksize argument derived (for the requested set of features) in the same way as the sizes found in fpu_kernel_cfg, i.e. using the compacted in-kernel representation. The correct size to use for guest FPU state should indeed be the kernel one as seen in fpstate_realloc(). The original issue likely went unnoticed through a combination of UABI size typically being larger than or equal to kernel size, and/or both amounting to the same number of allocated 4K pages. Fixes: 69f6ed1d14c6 ("x86/fpu: Provide infrastructure for KVM FPU cleanup") Signed-off-by: Stanislav Spassov <stanspas@amazon.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250218141045.85201-1-stanspas@amazon.de
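A sketch of the fix in fpu_alloc_guest_fpstate() (simplified; field names follow the fpu_user_cfg/fpu_kernel_cfg structures mentioned above):

    unsigned int size;

    /* was: fpu_user_cfg.default_size + ... (UABI, non-compacted) */
    size = fpu_kernel_cfg.default_size +
           ALIGN(offsetof(struct fpstate, regs), 64);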
2025-02-21  x86/module: Remove unnecessary check in module_finalize()  (Dan Carpenter; 1 file, -4/+2)
The "calls" pointer can no longer be NULL after the following commit: ab9fea59487d ("x86/alternative: Simplify callthunk patching") Delete this unnecessary check. Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/fcbb2f57-0714-4139-b441-8817365c16a1@stanley.mountain
2025-02-21  x86/cpufeatures: Make AVX-VNNI depend on AVX  (Eric Biggers; 1 file, -0/+1)
The 'noxsave' boot option disables support for AVX, but support for the AVX-VNNI feature was still declared on CPUs that support it. Fix this. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250220060124.89622-1-ebiggers@kernel.org
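A sketch of the dependency entry (the cpuid_deps[] table lives in arch/x86/kernel/cpu/cpuid-deps.c; surrounding entries elided):

    static const struct cpuid_dep cpuid_deps[] = {
            /* ... */
            { X86_FEATURE_AVX_VNNI,         X86_FEATURE_AVX },
            /* ... */
    };

With this entry in place, clearing AVX (e.g. via 'noxsave') makes the dependency walk clear AVX-VNNI too.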
2025-02-20  genirq: Introduce common irq_force_complete_move() implementation  (Thomas Gleixner; 1 file, -123/+108)
CONFIG_GENERIC_PENDING_IRQ requires an architecture-specific implementation of irq_force_complete_move() for CPU hotplug. At the moment, only x86 implements this unconditionally, but for RISC-V irq_force_complete_move() is only needed when the RISC-V IMSIC driver is in use and not needed otherwise. To allow runtime configuration of this mechanism, introduce a common irq_force_complete_move() implementation in the interrupt core code, which only invokes the completion function when an interrupt chip in the hierarchy implements it. Switch x86 over to the new mechanism. No functional change intended. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Anup Patel <apatel@ventanamicro.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250217085657.789309-5-apatel@ventanamicro.com
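A sketch of what the common implementation looks like per the description above (simplified; the hierarchy walk assumes irqdomain hierarchy support):

    void irq_force_complete_move(struct irq_desc *desc)
    {
            struct irq_data *d;

            /* Walk the hierarchy; the first chip implementing the
             * callback completes the move, otherwise do nothing. */
            for (d = irq_desc_get_irq_data(desc); d; d = d->parent_data) {
                    if (d->chip && d->chip->irq_force_complete_move) {
                            d->chip->irq_force_complete_move(d);
                            return;
                    }
            }
    }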
2025-02-18  x86/ACPI: CPPC: Add missing include  (Mario Limonciello; 1 file, -0/+2)
Some minimal kernel configurations will fail with -Werror=implicit-function-declaration due to a missing header include. Add that header. Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://patch.msgid.link/20250211203314.762755-1-superm1@kernel.org [ rjw: Subject edit ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-02-18  x86: Move sysctls into arch/x86  (Joel Granados; 1 file, -0/+66)
Move the following sysctl tables into arch/x86/kernel/setup.c:
  panic_on_{unrecoverable_nmi,io_nmi}
  bootloader_{type,version}
  io_delay_type
  unknown_nmi_panic
  acpi_realmode_flags
Variables moved from include/linux/ to arch/x86/include/asm/ because there is no longer any need for them outside arch/x86/kernel:
  acpi_realmode_flags
  panic_on_{unrecoverable_nmi,io_nmi}
Include <asm/nmi.h> in arch/x86/kernel/setup.h in order to bring in panic_on_{io_nmi,unrecoverable_nmi}. This is part of a greater effort to move ctl tables into their respective subsystems, which will reduce the merge conflicts in kernel/sysctl.c. Signed-off-by: Joel Granados <joel.granados@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250218-jag-mv_ctltables-v1-8-cd3698ab8d29@kernel.org
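A sketch of the pattern, showing one of the moved entries (a minimal example, not the full table; the initcall level is an assumption):

    static const struct ctl_table x86_sysctl_table[] = {
            {
                    .procname       = "unknown_nmi_panic",
                    .data           = &unknown_nmi_panic,
                    .maxlen         = sizeof(int),
                    .mode           = 0644,
                    .proc_handler   = proc_dointvec,
            },
            /* ... bootloader_type, io_delay_type, acpi_realmode_flags ... */
    };

    static int __init init_x86_sysctl(void)
    {
            register_sysctl_init("kernel", x86_sysctl_table);
            return 0;
    }
    arch_initcall(init_x86_sysctl);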
2025-02-18  Merge tag 'v6.14-rc3' into x86/core, to pick up fixes  (Ingo Molnar; 1 file, -7/+14)
Pick up upstream x86 fixes before applying new patches. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-02-18  x86/percpu/64: Remove INIT_PER_CPU macros  (Brian Gerst; 3 files, -9/+1)
Now that the load and link addresses of percpu variables are the same, these macros are no longer necessary. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Uros Bizjak <ubizjak@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-12-brgerst@gmail.com
2025-02-18  x86/percpu/64: Remove fixed_percpu_data  (Brian Gerst; 2 files, -5/+0)
Now that the stack protector canary value is a normal percpu variable, fixed_percpu_data is unused and can be removed. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Uros Bizjak <ubizjak@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-10-brgerst@gmail.com
2025-02-18  x86/percpu/64: Use relative percpu offsets  (Brian Gerst; 3 files, -48/+12)
The percpu section is currently linked at absolute address 0, because older compilers hard-coded the stack protector canary value at a fixed offset from the start of the GS segment. Now that the canary is a normal percpu variable, the percpu section does not need to be linked at a specific address. x86-64 will now calculate the percpu offsets as the delta between the initial percpu address and the dynamically allocated memory, like other architectures. Note that GSBASE is limited to the canonical address width (48 or 57 bits, sign-extended). As long as the kernel text, modules, and the dynamically allocated percpu memory are all in the negative address space, the delta will not overflow this limit. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Uros Bizjak <ubizjak@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-9-brgerst@gmail.com
2025-02-18  x86/stackprotector/64: Convert to normal per-CPU variable  (Brian Gerst; 3 files, -12/+2)
Older versions of GCC fixed the location of the stack protector canary at %gs:40. This constraint forced the percpu section to be linked at absolute address 0 so that the canary could be the first data object in the percpu section. Supporting the zero-based percpu section requires additional code to handle relocations for RIP-relative references to percpu data, extra complexity to kallsyms, and workarounds for linker bugs due to the use of absolute symbols. GCC 8.1 supports redefining where the canary is located, allowing it to become a normal percpu variable instead of at a fixed location. This removes the constraint that the percpu section must be zero-based. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Uros Bizjak <ubizjak@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-8-brgerst@gmail.com
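A sketch of the end state (the build flags are the GCC 8.1+ options the text refers to; the exact flag plumbing is simplified):

    /* With -mstack-protector-guard-reg=gs and
     * -mstack-protector-guard-symbol=__stack_chk_guard, the canary
     * becomes a plain percpu variable, with no fixed %gs:40 slot: */
    DEFINE_PER_CPU(unsigned long, __stack_chk_guard);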
2025-02-18  x86/module: Deal with GOT based stack cookie load on Clang < 17  (Ard Biesheuvel; 1 file, -0/+15)
Clang versions before 17 will not honour -fdirect-access-external-data for the load of the stack cookie emitted into each function's prologue and epilogue. This is not an issue for the core kernel, as the linker will relax these loads into LEA instructions that take the address of __stack_chk_guard directly. For modules, however, we need to work around this, by dealing with R_X86_64_REX_GOTPCRELX relocations that refer to __stack_chk_guard. In this case, given that this is a GOT load, the reference should not refer to __stack_chk_guard directly, but to a memory location that holds its address. So take the address of __stack_chk_guard into a static variable, and fix up the relocations to refer to that. [ mingo: Fix broken R_X86_64_GOTPCRELX definition. ] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-7-brgerst@gmail.com
2025-02-18  x86/boot: Disable stack protector for early boot code  (Brian Gerst; 1 file, -0/+2)
On 64-bit, this will prevent crashes when the canary access is changed from %gs:40 to %gs:__stack_chk_guard(%rip). RIP-relative addresses from the identity-mapped early boot code will target the wrong address with zero-based percpu. KASLR could then shift that address to an unmapped page causing a crash on boot. This early boot code runs well before user-space is active and does not need stack protector enabled. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250123190747.745588-4-brgerst@gmail.com
2025-02-17  cpufreq: Allow arch_freq_get_on_cpu to return an error  (Beata Michalska; 2 files, -3/+6)
Allow arch_freq_get_on_cpu() to return an error for cases when retrieving the current CPU frequency is not possible, whether due to a lack of the required arch support or to other circumstances in which the current frequency cannot be determined at a given point in time. Signed-off-by: Beata Michalska <beata.michalska@arm.com> Reviewed-by: Prasanna Kumar T S M <ptsm@linux.microsoft.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Rafael J. Wysocki <rafael@kernel.org> Link: https://lore.kernel.org/r/20250131162439.3843071-2-beata.michalska@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
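A sketch of the updated contract (the helper names below are hypothetical, used only to show the error path):

    int arch_freq_get_on_cpu(int cpu)
    {
            /* hypothetical capability check */
            if (!freq_sampling_supported(cpu))
                    return -EOPNOTSUPP;

            return cur_freq_khz(cpu);       /* hypothetical helper */
    }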
2025-02-17  x86/microcode/AMD: Add get_patch_level()  (Borislav Petkov (AMD); 1 file, -22/+24)
Put the MSR_AMD64_PATCH_LEVEL read, which returns the current microcode revision the hardware has, into a separate function. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250211163648.30531-6-bp@kernel.org
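The separated-out read would look roughly like this (a sketch based on the description):

    static u32 get_patch_level(void)
    {
            u32 rev, dummy __always_unused;

            rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);

            return rev;
    }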
2025-02-17  x86/microcode/AMD: Get rid of the _load_microcode_amd() forward declaration  (Borislav Petkov (AMD); 1 file, -28/+26)
Simply move save_microcode_in_initrd() down. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250211163648.30531-5-bp@kernel.org
2025-02-17  x86/microcode/AMD: Merge early_apply_microcode() into its single callsite  (Borislav Petkov (AMD); 1 file, -34/+26)
No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250211163648.30531-4-bp@kernel.org
2025-02-17  x86/microcode/AMD: Remove unused save_microcode_in_initrd_amd() declarations  (Borislav Petkov (AMD); 2 files, -3/+1)
Commit a7939f016720 ("x86/microcode/amd: Cache builtin/initrd microcode early") renamed it to save_microcode_in_initrd() and made it static. Zap the forgotten declarations. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250211163648.30531-3-bp@kernel.org
2025-02-17  x86/microcode/AMD: Remove ugly linebreak in __verify_patch_section() signature  (Borislav Petkov (AMD); 1 file, -2/+1)
No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250211163648.30531-2-bp@kernel.org
2025-02-17  Merge 6.14-rc3 into driver-core-next  (Greg Kroah-Hartman; 1 file, -7/+14)
We need the faux_device changes in here for future work. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-15  kernfs: Use RCU to access kernfs_node::name.  (Sebastian Andrzej Siewior; 3 files, -9/+20)
Using RCU lifetime rules to access kernfs_node::name can avoid the trouble with kernfs_rename_lock in kernfs_name() and kernfs_path_from_node() if the fs was created with KERNFS_ROOT_INVARIANT_PARENT. This is useful as it allows implementing kernfs_path_from_node() with only RCU protection, avoiding kernfs_rename_lock. The lock is only required if the __parent node can be changed and the function requires an unchanged hierarchy while it iterates from the node to its parent. The change is needed to allow the lookup of the node's path (kernfs_path_from_node()) from a context which always runs with preemption and/or interrupts disabled, even on PREEMPT_RT. The problem is that kernfs_rename_lock becomes a sleeping lock on PREEMPT_RT. I went through all ::name users and added the required access for the lookup, with a few extensions:
- rdtgroup_pseudo_lock_create() drops all locks and then uses the name later on. resctrl supports rename with different parents. Here I made a temporary copy of the name while it is used outside of the lock.
- kernfs_rename_ns() accepts NULL as new_parent. This simplifies sysfs_move_dir_ns() where it can set NULL in order to reuse the current name.
- kernfs_rename_ns() is only using kernfs_rename_lock if the parents are different. All users use either kernfs_rwsem (for a stable path view) or just RCU for the lookup. The ::name always uses RCU free.
Use RCU lifetime guarantees to access kernfs_node::name. Suggested-by: Tejun Heo <tj@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reported-by: syzbot+6ea37e2e6ffccf41a7e6@syzkaller.appspotmail.com Closes: https://lore.kernel.org/lkml/67251dc6.050a0220.529b6.015e.GAE@google.com/ Reported-by: Hillf Danton <hdanton@sina.com> Closes: https://lore.kernel.org/20241102001224.2789-1-hdanton@sina.com Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lore.kernel.org/r/20250213145023.2820193-7-bigeasy@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-15  kernfs: Use RCU to access kernfs_node::parent.  (Sebastian Andrzej Siewior; 1 file, -21/+44)
kernfs_rename_lock is used to obtain a stable kernfs_node::{name|parent} pointer. This is a preparation for accessing kernfs_node::parent under RCU and ensuring that the pointer remains stable under the RCU lifetime guarantees. For a complete path, as it is done in kernfs_path_from_node(), the kernfs_rename_lock is still required in order to obtain a stable parent relationship while computing the relevant node depth. This must not change while the nodes are inspected in order to build the path. If the kernfs user never moves the nodes (changes the parent) then the kernfs_rename_lock is not required and the RCU guarantees are sufficient. This "restriction" can be set with KERNFS_ROOT_INVARIANT_PARENT. Otherwise the lock is required. Rename kernfs_node::parent to kernfs_node::__parent to denote the RCU access and use the RCU accessor while accessing the node. Make cgroup use KERNFS_ROOT_INVARIANT_PARENT since the parent here cannot change. Acked-by: Tejun Heo <tj@kernel.org> Cc: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lore.kernel.org/r/20250213145023.2820193-6-bigeasy@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-14  x86/ibt: Handle FineIBT in handle_cfi_failure()  (Peter Zijlstra; 2 files, -4/+48)
Sami reminded me that FineIBT failure does not hook into the regular CFI failure case, and as such CFI_PERMISSIVE does not work. Reported-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lkml.kernel.org/r/20250214092619.GB21726@noisy.programming.kicks-ass.net
2025-02-14  x86/early_printk: Harden early_serial  (Peter Zijlstra; 1 file, -25/+24)
Scott found that mem32_serial_in() is an ideal speculation gadget: an indirectly callable function that takes an address and offset and immediately does a load. Use static_call() to take away the need for indirect calls, and explicitly seal the functions to ensure they're not callable on IBT-enabled parts. Reported-by: Scott Constable <scott.d.constable@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.919773202@infradead.org
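A sketch of the pattern (simplified; io_serial_in/mem32_serial_in are the existing early_printk.c accessors):

    DEFINE_STATIC_CALL(serial_in, io_serial_in);

    static unsigned int serial_read(unsigned long addr, int offset)
    {
            /* direct call, patched at runtime: no indirect-call gadget */
            return static_call(serial_in)(addr, offset);
    }

    /* when switching to the MMIO variant during setup: */
    static_call_update(serial_in, mem32_serial_in);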
2025-02-14  x86/ibt: Clean up poison_endbr()  (Peter Zijlstra; 1 file, -7/+36)
Basically, get rid of the .warn argument and explicitly don't call the function when we know there isn't an endbr. This makes the calling code clearer. Note: perhaps don't add functions to .cfi_sites when the function doesn't have endbr -- OTOH why would the compiler emit the prefix if it has already determined there are no indirect callers and has omitted the ENDBR instruction. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.815505775@infradead.org
2025-02-14  x86/traps: Cleanup and robustify decode_bug()  (Peter Zijlstra; 1 file, -22/+60)
Notably, don't attempt to decode an immediate when MOD == 3. Additionally have it return the instruction length, such that WARN like bugs can more reliably skip to the correct instruction. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.721120726@infradead.org
2025-02-14  x86/alternative: Simplify callthunk patching  (Peter Zijlstra; 3 files, -28/+10)
Now that paravirt call patching is implemented using alternatives, it is possible to avoid having to patch the alternative sites by including the altinstr_replacement calls in the call_sites list. This means we're now stacking relative adjustments like so:

  callthunks_patch_builtin_calls():
    patches all function calls to target: func() -> func()-10
    since the CALL accounting lives in the CALL_PADDING.
    This explicitly includes .altinstr_replacement

  alt_replace_call():
    patches: x86_BUG() -> target()
    this patching is done in a relative manner, and will preserve the above
    adjustment, meaning that with calldepth patching it will do:
    x86_BUG()-10 -> target()-10

  apply_relocation():
    does code relocation, and adjusts all RIP-relative instructions to the
    new location, also in a relative manner.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.617187089@infradead.org
2025-02-14  x86/boot: Mark start_secondary() with __noendbr  (Peter Zijlstra; 1 file, -1/+2)
The handoff between the boot stubs and start_secondary() happens before IBT is enabled and is definitely not subject to kCFI. As such, suppress all that for this function. Notably, when the ENDBR poison becomes fatal (ud1 instead of nop) this will trigger a triple fault, because we haven't set up the IDT to handle #UD yet. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.509520369@infradead.org
2025-02-14  x86/cfi: Clean up linkage  (Peter Zijlstra; 6 files, -8/+22)
With the introduction of kCFI the addition of ENDBR to SYM_FUNC_START* no longer suffices to make the function indirectly callable. This now requires the use of SYM_TYPED_FUNC_START. As such, remove the implicit ENDBR from SYM_FUNC_START* and add some explicit annotations to fix things up again. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20250207122546.409116003@infradead.org
2025-02-14  x86/ibt: Clean up is_endbr()  (Peter Zijlstra; 2 files, -16/+15)
Pretty much every caller of is_endbr() actually wants to test something at an address and ends up doing get_kernel_nofault(). Fold the lot into a more convenient helper. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: "Masami Hiramatsu (Google)" <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20250207122546.181367417@infradead.org
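A sketch of the folded helper (simplified; __is_endbr() stands in for the raw opcode comparison on the fetched value):

    bool is_endbr(u32 *ptr)
    {
            u32 val;

            /* safe fetch of possibly-unmapped text ... */
            if (get_kernel_nofault(val, ptr))
                    return false;

            /* ... then the opcode test on the fetched value */
            return __is_endbr(val);
    }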
2025-02-11  x86/cpu/kvm: SRSO: Fix possible missing IBPB on VM-Exit  (Patrick Bellasi; 1 file, -7/+14)
In [1] the meaning of the synthetic IBPB flags has been redefined for a better separation of concerns:
- ENTRY_IBPB     -- issue IBPB on entry only
- IBPB_ON_VMEXIT -- issue IBPB on VM-Exit only
and the Retbleed mitigations have been updated to match this new semantics. Commit [2] was merged shortly before [1], and their interaction was not handled properly. This resulted in IBPB not being triggered on VM-Exit in all SRSO mitigation configs requesting an IBPB there. Specifically, an IBPB on VM-Exit is triggered only when X86_FEATURE_IBPB_ON_VMEXIT is set. However:
- X86_FEATURE_IBPB_ON_VMEXIT is not set for "spec_rstack_overflow=ibpb", because before [1] having X86_FEATURE_ENTRY_IBPB was enough. Hence, an IBPB is triggered on entry but the expected IBPB on VM-Exit is not.
- X86_FEATURE_IBPB_ON_VMEXIT is also not set when "spec_rstack_overflow=ibpb-vmexit" if X86_FEATURE_ENTRY_IBPB is already set. That's because before [1] this was effectively redundant. Hence, e.g. a "retbleed=ibpb spec_rstack_overflow=ibpb-vmexit" config mistakenly reports the machine still vulnerable to SRSO, despite an IBPB being triggered both on entry and VM-Exit, because of the Retbleed-selected mitigation config.
- UNTRAIN_RET_VM still won't actually do anything unless CONFIG_MITIGATION_IBPB_ENTRY is set.
For "spec_rstack_overflow=ibpb", enable IBPB on both entry and VM-Exit and clear X86_FEATURE_RSB_VMEXIT, which is made superfluous by X86_FEATURE_IBPB_ON_VMEXIT. This effectively makes this mitigation option similar to the one for 'retbleed=ibpb'; thus, re-order the code for the RETBLEED_MITIGATION_IBPB option to be less confusing, by having all feature enabling before the disabling of the ones not needed. For "spec_rstack_overflow=ibpb-vmexit", guard this mitigation setting with CONFIG_MITIGATION_IBPB_ENTRY to ensure the UNTRAIN_RET_VM sequence is effectively compiled in. Drop instead the CONFIG_MITIGATION_SRSO guard, since none of the SRSO compile cruft is required in this configuration. Also, check only that the required microcode is present to effectively enable the IBPB on VM-Exit. Finally, update the Kconfig description for CONFIG_MITIGATION_IBPB_ENTRY to also list all the SRSO config settings enabled by this guard. Fixes: 864bcaa38ee4 ("x86/cpu/kvm: Provide UNTRAIN_RET_VM") [1] Fixes: d893832d0e1e ("x86/srso: Add IBPB on VMEXIT") [2] Reported-by: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Patrick Bellasi <derkling@google.com> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-02-11  x86/fpu: Fully optimize out WARN_ON_FPU()  (Eric Biggers; 1 file, -1/+1)
Currently WARN_ON_FPU evaluates its argument even if CONFIG_X86_DEBUG_FPU is disabled, which adds unnecessary instructions to several functions, for example kernel_fpu_begin(). Fix this by using BUILD_BUG_ON_INVALID(x) in the no-debug case rather than (void)(x). Fixes: 83242c515881 ("x86/fpu: Make WARN_ON_FPU() more robust in the !CONFIG_X86_DEBUG_FPU case") Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20250127224523.94300-1-ebiggers%40kernel.org
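The resulting macro shape, roughly (a sketch; the debug arm abbreviated):

    #ifdef CONFIG_X86_DEBUG_FPU
    # define WARN_ON_FPU(x)         WARN_ON_ONCE(x)
    #else
    /* type-checks 'x' at build time but emits no code */
    # define WARN_ON_FPU(x)         ({ BUILD_BUG_ON_INVALID(x); 0; })
    #endif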
2025-02-10  x86: move ZMM exclusion list into CPU feature flag  (Eric Biggers; 1 file, -0/+22)
Lift zmm_exclusion_list in aesni-intel_glue.c into the x86 CPU setup code, and add a new x86 CPU feature flag X86_FEATURE_PREFER_YMM that is set when the CPU is on this list. This allows other code in arch/x86/, such as the CRC library code, to apply the same exclusion list when deciding whether to execute 256-bit or 512-bit optimized functions. Note that full AVX512 support including ZMM registers is still exposed to userspace and is still supported for in-kernel use. This flag just indicates whether in-kernel code should prefer to use YMM registers. Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Keith Busch <kbusch@kernel.org> Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com> Link: https://lore.kernel.org/r/20250210174540.161705-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
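A sketch of how a consumer would use the new flag (the consumer function itself is hypothetical):

    static bool prefer_zmm(void)
    {
            return boot_cpu_has(X86_FEATURE_AVX512F) &&
                   !boot_cpu_has(X86_FEATURE_PREFER_YMM);
    }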
2025-02-05  x86/smp: Eliminate mwait_play_dead_cpuid_hint()  (Patryk Wlazlyn; 1 file, -47/+7)
Currently, mwait_play_dead_cpuid_hint() looks up the MWAIT hint of the deepest idle state by inspecting CPUID leaf 0x5, with the assumption that, if the number of sub-states for a given major C-state is nonzero, those sub-states are always represented by consecutive numbers starting from 0. This assumption is not based on the documented platform behavior and in fact is not met on recent Intel platforms. For example, Intel's Sierra Forest reports two C-states with two sub-states each in CPUID leaf 0x5:

  Name*    target cstate    target subcstate (mwait hint)
  ===========================================================
  C1       0x00             0x00
  C1E      0x00             0x01

  --       0x10             ----

  C6S      0x20             0x22
  C6P      0x20             0x23

  --       0x30             ----

  /* No more (sub)states all the way down to the end. */
  ===========================================================

  * Names of the cstates are not included in the CPUID leaf 0x5,
    they are taken from the product specific documentation.

Notice that hints 0x20 and 0x21 are not defined for C-state 0x20 (C6), so the existing MWAIT hint lookup in mwait_play_dead_cpuid_hint() based on the CPUID leaf 0x5 contents does not work in this case. Instead of using a MWAIT hint lookup that is not guaranteed to work, make native_play_dead() rely on the idle driver for the given platform to put CPUs going offline into an appropriate idle state and, if that fails, fall back to hlt_play_dead(). Accordingly, drop mwait_play_dead_cpuid_hint() altogether and make native_play_dead() call cpuidle_play_dead() instead of it unconditionally, with the assumption that it will not return if it is successful. Still, in case cpuidle_play_dead() fails, call hlt_play_dead() at the end. Signed-off-by: Patryk Wlazlyn <patryk.wlazlyn@linux.intel.com> Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/all/20250205155211.329780-5-artem.bityutskiy%40linux.intel.com
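The resulting flow, sketched (simplified from the description; tboot_shutdown() retained from the existing code):

    void native_play_dead(void)
    {
            play_dead_common();
            tboot_shutdown(TB_SHUTDOWN_WFS);

            /* The idle driver knows the proper state; this does not
             * return on success ... */
            cpuidle_play_dead();

            /* ... and if it does return, fall back to HLT */
            hlt_play_dead();
    }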
2025-02-05  ACPI/processor_idle: Add FFH state handling  (Patryk Wlazlyn; 1 file, -0/+10)
Recent Intel platforms will depend on the idle driver to pass the correct hint for playing dead via mwait_play_dead_with_hint(). Expand the existing enter_dead interface with handling for FFH states and pass the MWAIT hint to the mwait_play_dead code. Suggested-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Signed-off-by: Patryk Wlazlyn <patryk.wlazlyn@linux.intel.com> Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/all/20250205155211.329780-3-artem.bityutskiy%40linux.intel.com
2025-02-05  x86/smp: Allow calling mwait_play_dead with an arbitrary hint  (Patryk Wlazlyn; 1 file, -41/+47)
Introduce a helper function to allow offlined CPUs to enter idle states with a specific MWAIT hint. The new helper will be used in subsequent patches by the acpi_idle and intel_idle drivers. No functional change intended. Signed-off-by: Patryk Wlazlyn <patryk.wlazlyn@linux.intel.com> Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/all/20250205155211.329780-2-artem.bityutskiy%40linux.intel.com
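The core of the new helper, sketched (the per-CPU monitor target below is hypothetical):

    static __noreturn void mwait_play_dead(unsigned int eax_hint)
    {
            /* hypothetical per-CPU cache line to arm MONITOR with */
            void *md = this_cpu_ptr(&mwait_monitor_line);

            while (1) {
                    __monitor(md, 0, 0);
                    mb();
                    __mwait(eax_hint, 0);
            }
    }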
2025-02-04  x86/cpu: Fix #define name for Intel CPU model 0x5A  (Tony Luck; 2 files, -2/+2)
This CPU was mistakenly given the name INTEL_ATOM_AIRMONT_MID. But it uses a Silvermont core, not Airmont. Change the #define name to INTEL_ATOM_SILVERMONT_MID2. Reported-by: Christian Ludloff <ludloff@gmail.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20241007165701.19693-1-tony.luck%40intel.com
2025-02-03  Revert "x86/module: prepare module loading for ROX allocations of text"  (Mike Rapoport (Microsoft); 3 files, -152/+104)
The module code does not create a writable copy of the executable memory anymore, so there is no need to handle it in module relocation and alternatives patching. This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3. Signed-off-by: "Mike Rapoport (Microsoft)" <rppt@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250126074733.1384926-8-rppt@kernel.org