path: root/arch/x86
Age	Commit message	Author	Files	Lines
2025-04-14x86/fpu: Convert task_struct::thread.fpu accesses to use x86_task_fpu()Ingo Molnar15-68/+68
This will make the removal of the task_struct::thread.fpu array easier.

No change in functionality - code generated before and after this commit is identical on x86-defconfig:

  kepler:~/tip> diff -up vmlinux.before.asm vmlinux.after.asm
  kepler:~/tip>

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Chang S. Bae <chang.seok.bae@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20250409211127.3544993-3-mingo@kernel.org
2025-04-14x86/fpu: Introduce the x86_task_fpu() helper methodIngo Molnar1-0/+2
The per-task FPU context/save area is allocated right next to task_struct, currently in a variable-size array via task_struct::thread.fpu[], but we plan to fully hide it from the C type scope. Introduce the x86_task_fpu() accessor that gets to the FPU context pointer explicitly from the task pointer. Right now this is a simple (task)->thread.fpu wrapper. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Chang S. Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250409211127.3544993-2-mingo@kernel.org
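A minimal sketch of the accessor described above, assuming the trivial wrapper form mentioned in the commit text (the exact in-tree definition may differ):

	/* Single place to change once the FPU area stops being a visible task_struct member: */
	#define x86_task_fpu(task)	(&(task)->thread.fpu)

	/* Typical call-site conversion in the follow-up patch (illustrative): */
	struct fpu *fpu = x86_task_fpu(current);	/* was: &current->thread.fpu */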
2025-04-14x86/fpu/xstate: Adjust xstate copying logic for user ABIChang S. Bae1-9/+9
== Background ==

As feature positions in the userspace XSAVE buffer do not always align with their feature numbers, the XSAVE format conversion needs to be reconsidered to align with the revised xstate size calculation logic.

 * For signal handling, XSAVE and XRSTOR are used directly to save and restore extended registers.

 * For ptrace, KVM, and signal returns (for 32-bit frame), the kernel copies data between its internal buffer and the userspace XSAVE buffer. If memcpy() were used for these cases, existing offset helpers — such as __raw_xsave_addr() or xstate_offsets[] — would be sufficient to handle the format conversion.

== Problem ==

When copying data from the compacted in-kernel buffer to the non-compacted userspace buffer, the function follows the user_regset_get2_fn() prototype. This means it utilizes struct membuf helpers for the destination buffer. As defined in regset.h, these helpers update the memory pointer during the copy process, enforcing sequential writes within the loop.

Since xstate components are processed sequentially, any component whose buffer position does not align with its feature number has an issue.

== Solution ==

Replace for_each_extended_xfeature() with the newly introduced for_each_extended_xfeature_in_order(). This macro ensures xstate components are handled in the correct order based on their actual positions in the destination buffer, rather than their feature numbers.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20250320234301.8342-5-chang.seok.bae@intel.com
2025-04-14x86/fpu/xstate: Adjust XSAVE buffer size calculationChang S. Bae1-2/+9
The current xstate size calculation assumes that the highest-numbered xstate feature has the highest offset in the buffer, determining the size based on the topmost bit in the feature mask. However, this assumption is not architecturally guaranteed -- higher-numbered features may have lower offsets. With the introduction of the xfeature order table and its helper macro, xstate components can now be traversed in their positional order. Update the non-compacted format handling to iterate through the table to determine the last-positioned feature. Then, set the offset accordingly. Since size calculation primarily occurs during initialization or in non-critical paths, looping to find the last feature is not expected to have a meaningful performance impact. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250320234301.8342-4-chang.seok.bae@intel.com
2025-04-14x86/fpu/xstate: Introduce xfeature order table and accessor macroChang S. Bae1-8/+50
The kernel has largely assumed that higher xstate component numbers correspond to later offsets in the buffer. However, this assumption no longer holds for the non-compacted format, where a newer state component may have a lower offset. When iterating over xstate components in offset order, using the feature number as an index may be misleading. At the same time, the CPU exposes each component’s size and offset based on its feature number, making it a key for state information. To provide flexibility in handling xstate ordering, introduce a mapping table: feature order -> feature number. The table is dynamically populated based on the CPU-exposed features and is sorted in offset order at boot time. Additionally, add an accessor macro to facilitate sequential traversal of xstate components based on their actual buffer positions, given a feature bitmask. This accessor macro will be particularly useful for computing custom non-compacted format sizes and iterating over xstate offsets in non-compacted buffers. Suggested-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250320234301.8342-3-chang.seok.bae@intel.com
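A rough sketch of the idea, with hypothetical details (the actual table population and macro definition in the patch may differ):

	/* Boot-time table mapping position order -> feature number, sorted by non-compacted buffer offset: */
	static unsigned int xfeature_uncompact_order[XFEATURE_MAX] __ro_after_init;

	/* Walk the features in 'mask' in buffer-position order; 'i' is the order index,
	 * xfeature_uncompact_order[i] the corresponding feature number: */
	#define for_each_extended_xfeature_in_order(i, mask)			\
		for ((i) = 0; (i) < XFEATURE_MAX; (i)++)			\
			if ((mask) & BIT_ULL(xfeature_uncompact_order[i]))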
2025-04-14x86/fpu/xstate: Remove xstate offset checkChang S. Bae1-13/+0
Traditionally, new xstate components have been assigned sequentially, aligning feature numbers with their offsets in the XSAVE buffer. However, this ordering is not architecturally mandated in the non-compacted format, where a component's offset may not correspond to its feature number.

The kernel caches CPUID-reported xstate component details, including size and offset in the non-compacted format. As part of this process, a sanity check is also conducted to ensure alignment between feature numbers and offsets. This check was likely intended as a general guideline rather than a strict requirement.

Upcoming changes will support out-of-order offsets. Remove the check, as it is about to become obsolete.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20250320234301.8342-2-chang.seok.bae@intel.com
2025-04-13x86/uaccess: Use asm_inline() instead of asm() in __untagged_addr()Uros Bizjak1-2/+2
Use asm_inline() to instruct the compiler that the size of asm() is the minimum size of one instruction, ignoring how many instructions the compiler thinks it is. ALTERNATIVE macro that expands to several pseudo directives causes instruction length estimate to count more than 20 instructions. bloat-o-meter reports minimal code size increase (x86_64 defconfig with CONFIG_ADDRESS_MASKING, gcc-14.2.1): add/remove: 2/2 grow/shrink: 5/1 up/down: 2365/-1995 (370) Function old new delta ----------------------------------------------------- do_get_mempolicy - 1449 +1449 copy_nodes_to_user - 226 +226 __x64_sys_get_mempolicy 35 213 +178 syscall_user_dispatch_set_config 157 332 +175 __ia32_sys_get_mempolicy 31 206 +175 set_syscall_user_dispatch 29 181 +152 __do_sys_mremap 2073 2083 +10 sp_insert 133 117 -16 task_set_syscall_user_dispatch 172 - -172 kernel_get_mempolicy 1807 - -1807 Total: Before=21423151, After=21423521, chg +0.00% The code size increase is due to the compiler inlining more functions that inline untagged_addr(), e.g: task_set_syscall_user_dispatch() is now fully inlined in set_syscall_user_dispatch(): 000000000010b7e0 <set_syscall_user_dispatch>: 10b7e0: f3 0f 1e fa endbr64 10b7e4: 49 89 c8 mov %rcx,%r8 10b7e7: 48 89 d1 mov %rdx,%rcx 10b7ea: 48 89 f2 mov %rsi,%rdx 10b7ed: 48 89 fe mov %rdi,%rsi 10b7f0: 65 48 8b 3d 00 00 00 mov %gs:0x0(%rip),%rdi 10b7f7: 00 10b7f8: e9 03 fe ff ff jmp 10b600 <task_set_syscall_user_dispatch> that after inlining becomes: 000000000010b730 <set_syscall_user_dispatch>: 10b730: f3 0f 1e fa endbr64 10b734: 65 48 8b 05 00 00 00 mov %gs:0x0(%rip),%rax 10b73b: 00 10b73c: 48 85 ff test %rdi,%rdi 10b73f: 74 54 je 10b795 <set_syscall_user_dispatch+0x65> 10b741: 48 83 ff 01 cmp $0x1,%rdi 10b745: 74 06 je 10b74d <set_syscall_user_dispatch+0x1d> 10b747: b8 ea ff ff ff mov $0xffffffea,%eax 10b74c: c3 ret 10b74d: 48 85 f6 test %rsi,%rsi 10b750: 75 7b jne 10b7cd <set_syscall_user_dispatch+0x9d> 10b752: 48 85 c9 test %rcx,%rcx 10b755: 74 1a je 10b771 <set_syscall_user_dispatch+0x41> 10b757: 48 89 cf mov %rcx,%rdi 10b75a: 49 b8 ef cd ab 89 67 movabs $0x123456789abcdef,%r8 10b761: 45 23 01 10b764: 90 nop 10b765: 90 nop 10b766: 90 nop 10b767: 90 nop 10b768: 90 nop 10b769: 90 nop 10b76a: 90 nop 10b76b: 90 nop 10b76c: 49 39 f8 cmp %rdi,%r8 10b76f: 72 6e jb 10b7df <set_syscall_user_dispatch+0xaf> 10b771: 48 89 88 48 08 00 00 mov %rcx,0x848(%rax) 10b778: 48 89 b0 50 08 00 00 mov %rsi,0x850(%rax) 10b77f: 48 89 90 58 08 00 00 mov %rdx,0x858(%rax) 10b786: c6 80 60 08 00 00 00 movb $0x0,0x860(%rax) 10b78d: f0 80 48 08 20 lock orb $0x20,0x8(%rax) 10b792: 31 c0 xor %eax,%eax 10b794: c3 ret 10b795: 48 09 d1 or %rdx,%rcx 10b798: 48 09 f1 or %rsi,%rcx 10b79b: 75 aa jne 10b747 <set_syscall_user_dispatch+0x17> 10b79d: 48 c7 80 48 08 00 00 movq $0x0,0x848(%rax) 10b7a4: 00 00 00 00 10b7a8: 48 c7 80 50 08 00 00 movq $0x0,0x850(%rax) 10b7af: 00 00 00 00 10b7b3: 48 c7 80 58 08 00 00 movq $0x0,0x858(%rax) 10b7ba: 00 00 00 00 10b7be: c6 80 60 08 00 00 00 movb $0x0,0x860(%rax) 10b7c5: f0 80 60 08 df lock andb $0xdf,0x8(%rax) 10b7ca: 31 c0 xor %eax,%eax 10b7cc: c3 ret 10b7cd: 48 8d 3c 16 lea (%rsi,%rdx,1),%rdi 10b7d1: 48 39 fe cmp %rdi,%rsi 10b7d4: 0f 82 78 ff ff ff jb 10b752 <set_syscall_user_dispatch+0x22> 10b7da: e9 68 ff ff ff jmp 10b747 <set_syscall_user_dispatch+0x17> 10b7df: b8 f2 ff ff ff mov $0xfffffff2,%eax 10b7e4: c3 ret Please note a series of NOPs that get replaced with an alternative: 11f0: 65 48 23 05 00 00 00 and %gs:0x0(%rip),%rax 11f7: 00 Signed-off-by: Uros Bizjak 
<ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250407072129.33440-1-ubizjak@gmail.com
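A hedged illustration of the nature of the change (hypothetical helper, not the actual __untagged_addr() body; only the asm() to asm_inline() keyword changes, the asm template stays identical):

	static __always_inline unsigned long untag_sketch(unsigned long addr)
	{
		/*
		 * With plain asm() the inliner sizes this statement from the textual
		 * length of the ALTERNATIVE expansion (20+ pseudo "instructions");
		 * asm_inline() makes it count as a single minimum-size instruction.
		 */
		asm_inline (ALTERNATIVE("", "and %%gs:tlbstate_untag_mask, %0", X86_FEATURE_LAM)
			    : "+r" (addr));
		return addr;
	}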
2025-04-13perf/x86/intel/bts: Replace offsetof() with struct_size()Thorsten Blum1-1/+1
Use struct_size() to calculate the number of bytes to allocate for a new bts_buffer. Compared to offsetof(), struct_size() provides additional compile-time checks (e.g., __must_be_array()). Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250413104108.49142-2-thorsten.blum@linux.dev
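A generic before/after sketch of this pattern (hypothetical struct, not the bts_buffer code itself):

	struct demo_buf { unsigned long addr; };

	struct demo_buffer {
		unsigned int	nr_bufs;
		struct demo_buf	buf[];		/* flexible array member */
	};

	/* Before: open-coded size of the header plus 'nr' trailing elements: */
	bb = kzalloc(offsetof(struct demo_buffer, buf[nr]), GFP_KERNEL);

	/* After: same size, with __must_be_array() and overflow-checked arithmetic: */
	bb = kzalloc(struct_size(bb, buf, nr), GFP_KERNEL);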
2025-04-13x86/msr: Add compatibility wrappers for rdmsrl()/wrmsrl()Ingo Molnar1-0/+5
To reduce the impact of the API renames in -next, add compatibility wrappers for the two most popular MSR access APIs: rdmsrl() and wrmsrl(). Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Juergen Gross <jgross@suse.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Xin Li <xin@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-kernel@vger.kernel.org
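A sketch of what such wrappers could look like, assuming the renamed primitives are rdmsrq()/wrmsrq() (the commit text above does not spell the new names out):

	/* Compatibility shims for not-yet-converted rdmsrl()/wrmsrl() callers (assumed names): */
	#define rdmsrl(msr, val)	rdmsrq(msr, val)
	#define wrmsrl(msr, val)	wrmsrq(msr, val)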
2025-04-13objtool, x86/hweight: Remove ANNOTATE_IGNORE_ALTERNATIVEJosh Poimboeuf1-4/+2
Since objtool's inception, frame pointer warnings have been manually silenced for __arch_hweight*() to allow those functions' inline asm to avoid using ASM_CALL_CONSTRAINT. The potentially dubious reasoning for that decision over nine years ago was that since !X86_FEATURE_POPCNT is exceedingly rare, it's not worth hurting the code layout for a function call that will never happen on the vast majority of systems. However, those functions actually started using ASM_CALL_CONSTRAINT with the following commit: 194a613088a8 ("x86/hweight: Use ASM_CALL_CONSTRAINT in inline asm()") And rightfully so, as it makes the code correct. ASM_CALL_CONSTRAINT will soon have no effect for non-FP configs anyway. With ASM_CALL_CONSTRAINT in place, ANNOTATE_IGNORE_ALTERNATIVE no longer has a purpose for the hweight functions. Remove it. Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juergen Gross <jgross@suse.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/e7070dba3278c90f1a836b16157dcd34ccd21e21.1744318586.git.jpoimboe@kernel.org
2025-04-13x86/percpu: Refer __percpu_prefix to __force_percpu_prefixUros Bizjak1-9/+11
Refer __percpu_prefix to __force_percpu_prefix to avoid duplicate definition. While there, slightly reorder definitions to a more logical sequence, remove unneeded double quotes and move misplaced comment to the right place. No functional changes intended. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Link: https://lore.kernel.org/r/20250411093130.81389-1-ubizjak@gmail.com
2025-04-12x86/microcode/AMD: Extend the SHA check to Zen5, block loading of any unreleased standalone Zen5 microcode patchesBorislav Petkov (AMD)1-2/+7
All Zen5 machines out there should get BIOS updates which update to the correct microcode patches addressing the microcode signature issue. However, silly people carve out random microcode blobs from BIOS packages and think they are doing other people a service this way...

Block loading of any unreleased standalone Zen5 microcode patches.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Cc: Nikolay Borisov <nik.borisov@suse.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20250410114222.32523-1-bp@kernel.org
2025-04-12x86/sev: Prepare for splitting off early SEV codeArd Biesheuvel5-152/+194
Prepare for splitting off parts of the SEV core.c source file into a file that carries code that must tolerate being called from the early 1:1 mapping. This will allow special build-time handling of this code, to ensure that it gets generated in a way that is compatible with the early execution context.

So create a de-facto internal SEV API and put the definitions into sev-internal.h. No attempt is made to allow this header file to be included in arbitrary other sources - this is explicitly not the intent.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-20-ardb+git@google.com
2025-04-12x86/boot: Drop RIP_REL_REF() uses from SME startup codeArd Biesheuvel3-8/+7
RIP_REL_REF() has no effect on code residing in arch/x86/boot/startup, as it is built with -fPIC. So remove any occurrences from the SME startup code. Note the SME is the only caller of cc_set_mask() that requires this, so drop it from there as well. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250410134117.3713574-19-ardb+git@google.com
2025-04-12x86/boot: Move early SME init code into startup/Ard Biesheuvel3-8/+1
Move the SME initialization code, which runs from the 1:1 mapping of memory as it operates on the kernel virtual mapping, into the new sub-directory arch/x86/boot/startup/ where all startup code will reside that needs to tolerate executing from the 1:1 mapping. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250410134117.3713574-18-ardb+git@google.com
2025-04-12x86/boot: Drop RIP_REL_REF() uses from early mapping codeArd Biesheuvel1-20/+21
Now that __startup_64() is built using -fPIC, RIP_REL_REF() has become a NOP and can be removed. Only some occurrences of rip_rel_ptr() will remain, to explicitly take the address of certain global structures in the 1:1 mapping of memory. While at it, update the code comment to describe why this is needed. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250410134117.3713574-17-ardb+git@google.com
2025-04-12x86/boot: Move early kernel mapping code into startup/Ard Biesheuvel3-211/+226
The startup code that constructs the kernel virtual mapping runs from the 1:1 mapping of memory itself, and therefore, cannot use absolute symbol references. Before making changes in subsequent patches, move this code into a separate source file under arch/x86/boot/startup/ where all such code will be kept from now on. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250410134117.3713574-16-ardb+git@google.com
2025-04-12x86/boot: Move the early GDT/IDT setup code into startup/Ard Biesheuvel4-75/+100
Move the early GDT/IDT setup code that runs long before the kernel virtual mapping is up into arch/x86/boot/startup/, and build it in a way that ensures that the code tolerates being called from the 1:1 mapping of memory. The code itself is left unchanged by this patch. Also tweak the sed symbol matching pattern in the decompressor to match on lower case 't' or 'b', as these will be emitted by Clang for symbols with hidden linkage. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250410134117.3713574-15-ardb+git@google.com
2025-04-12x86/asm: Make rip_rel_ptr() usable from fPIC codeArd Biesheuvel5-19/+19
RIP_REL_REF() is used in non-PIC C code that is called very early, before the kernel virtual mapping is up, which is the mapping that the linker expects. It is currently used in two different ways:

 - to refer to the value of a global variable, including as an lvalue in assignments;

 - to take the address of a global variable via the mapping that the code currently executes at.

The former case is only needed in non-PIC code, as PIC code will never use absolute symbol references when the address of the symbol is not being used. But taking the address of a variable in PIC code may still require extra care, as a stack allocated struct assignment may be emitted as a memcpy() from a statically allocated copy in .rodata. For instance, this

  void startup_64_setup_gdt_idt(void)
  {
	struct desc_ptr startup_gdt_descr = {
		.address = (__force unsigned long)gdt_page.gdt,
		.size = GDT_SIZE - 1,
	};

may result in an absolute symbol reference in PIC code, even though the struct is allocated on the stack and populated at runtime.

To address this case, make rip_rel_ptr() accessible in PIC code, and update any existing uses where the address of a global variable is taken using RIP_REL_REF.

Once all code of this nature has been moved into arch/x86/boot/startup and built with -fPIC, RIP_REL_REF() can be retired, and only rip_rel_ptr() will remain.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-14-ardb+git@google.com
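For reference, a simplified sketch of how the two helpers relate (the in-tree definitions in <asm/asm.h> may differ in detail):

	/* Force a RIP-relative address computation, independent of PIC vs non-PIC: */
	static __always_inline void *rip_rel_ptr(void *p)
	{
		asm("leaq %c1(%%rip), %0" : "=r" (p) : "i" (p));
		return p;
	}

	/* RIP_REL_REF() turns that pointer back into an lvalue of the variable's type: */
	#define RIP_REL_REF(var)	(*(typeof(&(var)))rip_rel_ptr(&(var)))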
2025-04-12x86/mm: Opt-in to IRQs-off activate_mm()Andy Lutomirski2-1/+2
We gain nothing by having the core code enable IRQs right before calling activate_mm() only for us to turn them right back off again in switch_mm(). This will save a few cycles, so execve() should be blazingly fast with this patch applied! Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-8-mingo@kernel.org
2025-04-12x86/efi: Make efi_enter/leave_mm() use the use_/unuse_temporary_mm() machineryAndy Lutomirski1-5/+2
This should be considerably more robust. It's also necessary for optimized for_each_possible_lazymm_cpu() on x86 -- without this patch, EFI calls in lazy context would remove the lazy mm from mm_cpumask(). [ mingo: Merged it on top of x86/alternatives ] Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-7-mingo@kernel.org
2025-04-12x86/mm: Allow temporary MMs when IRQs are onAndy Lutomirski1-7/+12
EFI runtime services should use temporary MMs, but EFI runtime services want IRQs on. Preemption must still be disabled in a temporary MM context. At some point, the entirely temporary MM mechanism should be moved out of arch code. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250402094540.3586683-6-mingo@kernel.org
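A hypothetical usage sketch of the rule described above (argument and return types are illustrative and may not match the final API):

	preempt_disable();			/* still required, even though IRQs may stay on */
	prev = use_temporary_mm(mm);		/* switch to the temporary MM */
	/* ... access mappings of 'mm', e.g. run an EFI runtime service ... */
	unuse_temporary_mm(prev);		/* restore the previous MM */
	preempt_enable();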
2025-04-12x86/mm: Remove 'mm' argument from unuse_temporary_mm() againPeter Zijlstra3-6/+6
Now that unuse_temporary_mm() lives in tlb.c it can access cpu_tlbstate.loaded_mm. [ mingo: Merged it on top of x86/alternatives ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-5-mingo@kernel.org
2025-04-12x86/mm: Make use_/unuse_temporary_mm() non-staticAndy Lutomirski3-64/+67
This prepares them for use outside of the alternative machinery. The code is unchanged. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-4-mingo@kernel.org
2025-04-12x86/events, x86/insn-eval: Remove incorrect current->active_mm referencesAndy Lutomirski2-4/+18
When decoding an instruction or handling a perf event that references an LDT segment, if we don't have a valid user context, trying to access the LDT by any means other than SLDT is racy. Certainly, using current->active_mm is wrong, as active_mm can point to a real user mm when CR3 and LDTR no longer reference that mm. Clean up the code. If nmi_uaccess_okay() says we don't have a valid context, just fail. Otherwise use current->mm. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-3-mingo@kernel.org
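A sketch of the resulting pattern (illustrative, not the exact diff):

	/* No valid user context (e.g. NMI hit during a CR3/LDTR switch): just fail. */
	if (!nmi_uaccess_okay())
		return false;

	/* Otherwise the LDT must come from current->mm, never current->active_mm: */
	ldt = READ_ONCE(current->mm->context.ldt);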
2025-04-12x86/mm: Add 'mm' argument to unuse_temporary_mm()Peter Zijlstra1-3/+3
In commit 209954cbc7d0 ("x86/mm/tlb: Update mm_cpumask lazily") unuse_temporary_mm() grew the assumption that it gets used on poking_mm exclusively. While this is currently true, lets not hard code this assumption. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20250402094540.3586683-2-mingo@kernel.org
2025-04-11x86/xen: Fix __xen_hypercall_setfunc()Jason Andryuk1-6/+1
Hypercall detection is failing with xen_hypercall_intel() chosen even on an AMD processor. Looking at the disassembly, the call to xen_get_vendor() was removed. The check for boot_cpu_has(X86_FEATURE_CPUID) was used as a proxy for the x86_vendor having been set. When CONFIG_X86_REQUIRED_FEATURE_CPUID=y (the default value), DCE eliminates the call to xen_get_vendor(). An uninitialized value 0 means X86_VENDOR_INTEL, so the Intel function is always returned. Remove the if and always call xen_get_vendor() to avoid this issue. Fixes: 3d37d9396eb3 ("x86/cpufeatures: Add {REQUIRED,DISABLED} feature configs") Suggested-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Juergen Gross <jgross@suse.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Xin Li (Intel)" <xin@zytor.com> Link: https://lore.kernel.org/r/20250410193106.16353-1-jason.andryuk@amd.com
2025-04-11x86/cacheinfo: Standardize header files and CPUID referencesAhmed S. Darwish1-10/+10
Reference header files using their canonical form <linux/cacheinfo.h>. Standardize on CPUID(0xN), instead of CPUID(N), for all standard leaves. This removes ambiguity and aligns them with their extended counterparts like CPUID(0x8000001d). References: 0dd09e215a39 ("x86/cacheinfo: Apply maintainer-tip coding style fixes") Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: John Ogness <john.ogness@linutronix.de> Link: https://lore.kernel.org/r/20250411070401.1358760-3-darwi@linutronix.de
2025-04-11x86/cpuid: Remove obsolete CPUID(0x2) iteration macroAhmed S. Darwish1-23/+0
The CPUID(0x2) cache descriptors iterator at <cpuid/leaf_0x2_api.h>: for_each_leaf_0x2_desc() has no more call sites. Remove it. Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250411070401.1358760-2-darwi@linutronix.de
2025-04-11Merge tag 'v6.15-rc1' into x86/cpu, to refresh the branch with upstream changesIngo Molnar170-4254/+4627
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-04-11x86/alternatives: Make smp_text_poke_batch_process() subsume smp_text_poke_batch_finish()Nikolay Borisov1-11/+8
Simplify the alternatives interface some more by moving the poke_batch_finish check into poke_batch_process and renaming the latter. The net effect is one less function name to consider when reading the code.

Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-54-mingo@kernel.org
2025-04-11x86/alternatives: Add comment about noinstr expectationsIngo Molnar1-0/+5
Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-53-mingo@kernel.org
2025-04-11x86/alternatives: Rename 'apply_relocation()' to 'text_poke_apply_relocation()'Ingo Molnar3-7/+7
Join the text_poke_*() API namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-52-mingo@kernel.org
2025-04-11x86/alternatives: Update the comments in smp_text_poke_batch_process()Ingo Molnar1-13/+15
 - Capitalize 'INT3' consistently,
 - make it clear that 'sync cores' means an SMP sync to all CPUs,
 - fix typos and spelling.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-51-mingo@kernel.org
2025-04-11x86/alternatives: Remove 'smp_text_poke_batch_flush()'Ingo Molnar1-9/+2
It only has a single user left, merge it into smp_text_poke_batch_add() and remove the helper function. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-50-mingo@kernel.org
2025-04-11x86/alternatives: Move declarations of vmlinux.lds.S defined section symbols to <asm/alternative.h>Ingo Molnar2-6/+6
Move it from the middle of a .c file next to the similar declarations of __alt_instructions[] et al.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-49-mingo@kernel.org
2025-04-11x86/alternatives: Simplify the #include sectionIngo Molnar1-25/+3
We accumulated lots of unnecessary header inclusions over the years, trim them. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-48-mingo@kernel.org
2025-04-11x86/alternatives: Rename 'POKE_MAX_OPCODE_SIZE' to 'TEXT_POKE_MAX_OPCODE_SIZE'Ingo Molnar2-5/+5
Join the TEXT_POKE_ namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-47-mingo@kernel.org
2025-04-11x86/alternatives: Rename 'TP_ARRAY_NR_ENTRIES_MAX' to 'TEXT_POKE_ARRAY_MAX'Ingo Molnar1-3/+3
Standardize on TEXT_POKE_ namespace for CPP constants too. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-46-mingo@kernel.org
2025-04-11x86/alternatives: Standardize on 'tpl' local variable names for 'struct smp_text_poke_loc *'Ingo Molnar1-27/+27
There's no toilet paper in this code.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-45-mingo@kernel.org
2025-04-11x86/alternatives: Simplify and clean up patch_cmp()Ingo Molnar1-5/+3
 - No need to cast over to 'struct smp_text_poke_loc *', void * is just fine for a binary search,

 - Use the canonical (a, b) input parameter nomenclature of cmp_func_t functions and rename the input parameters from (tp, elt) to (tpl_a, tpl_b).

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-44-mingo@kernel.org
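A sketch of the simplified comparator, following the cmp_func_t convention described above (illustrative, details may differ from the actual patch):

	static int patch_cmp(const void *tpl_a, const void *tpl_b)
	{
		const struct smp_text_poke_loc *tpl = tpl_b;	/* no cast needed from void * */

		if (tpl_a < text_poke_addr(tpl))
			return -1;
		if (tpl_a > text_poke_addr(tpl))
			return 1;
		return 0;
	}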
2025-04-11x86/alternatives: Constify text_poke_addr()Ingo Molnar1-1/+1
This will also allow the simplification of patch_cmp(). Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-43-mingo@kernel.org
2025-04-11x86/alternatives: Simplify text_poke_addr_ordered()Ingo Molnar1-4/+1
 - Use direct 'void *' pointer comparison, there's no need to force the type to 'unsigned long'.

 - Remove the 'tp' local variable indirection.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-42-mingo@kernel.org
2025-04-11x86/alternatives: Rename 'text_poke_sync()' to 'smp_text_poke_sync_each_cpu()'Ingo Molnar5-12/+12
Unlike sync_core(), text_poke_sync() is a very heavy operation, as it sends an IPI to every online CPU in the system and waits for completion. Reflect this in the name. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-41-mingo@kernel.org
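The operation being renamed boils down to roughly this (sketch):

	static void do_sync_core(void *info)
	{
		sync_core();
	}

	void smp_text_poke_sync_each_cpu(void)
	{
		/* IPI every online CPU and wait until each has executed a core-serializing instruction: */
		on_each_cpu(do_sync_core, NULL, 1);
	}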
2025-04-11x86/alternatives: Move text_poke_array completion from smp_text_poke_batch_finish() and smp_text_poke_batch_flush() to smp_text_poke_batch_process()Ingo Molnar1-6/+5
Simplifies the code and improves code generation a bit:

     text    data     bss     dec     hex  filename
    14769    1017    4112   19898    4dba  alternative.o.before
    14742    1017    4112   19871    4d9f  alternative.o.after

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-40-mingo@kernel.org
2025-04-11x86/alternatives: Add documentation for smp_text_poke_batch_add()Ingo Molnar1-0/+13
Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-39-mingo@kernel.org
2025-04-11x86/alternatives: Document 'smp_text_poke_single()'Ingo Molnar1-2/+3
Extend the documentation to better describe its purpose. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-38-mingo@kernel.org
2025-04-11x86/alternatives: Remove the mixed-patching restriction on smp_text_poke_single()Ingo Molnar1-3/+0
At this point smp_text_poke_single(addr, opcode, len, emulate) is equivalent to:

	smp_text_poke_batch_add(addr, opcode, len, emulate);
	smp_text_poke_batch_finish();

So remove the restriction on mixing single-instruction patching with multi-instruction patching.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-37-mingo@kernel.org
2025-04-11x86/alternatives: Move the text_poke_array manipulation into text_poke_int3_loc_init() and rename it to __smp_text_poke_batch_add()Ingo Molnar1-12/+6
This simplifies the code and code generation a bit:

     text    data     bss     dec     hex  filename
    14802    1029    4112   19943    4de7  alternative.o.before
    14784    1029    4112   19925    4dd5  alternative.o.after

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-36-mingo@kernel.org
2025-04-11x86/alternatives: Simplify smp_text_poke_batch_process()Ingo Molnar1-23/+20
This function is now using the text_poke_array state exclusively, make that explicit by removing the redundant input parameters. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250411054105.2341982-34-mingo@kernel.org