path: root/arch

2025-04-13  arm64: dts: apple: t8011: Add CPU caches  (Nick Chan, 1 file, -0/+16)

Add information about CPU caches in the P-cluster of Apple A10X SoC.
Due to the "Apple Fusion Architecture" big.LITTLE switcher, only
caches from one of the clusters can be used at any given moment.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-7-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: t8010: Add CPU caches  (Nick Chan, 1 file, -0/+13)

Add information about CPU caches in the P-cluster of Apple A10 SoC.
Due to the "Apple Fusion Architecture" big.LITTLE switcher, only
caches from one of the clusters can be used at any given moment.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-6-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: s8001: Add CPU caches  (Nick Chan, 1 file, -0/+13)

Add information about CPU caches in Apple A9X SoC.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-5-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: s800-0-3: Add CPU caches  (Nick Chan, 1 file, -0/+13)

Add information about CPU caches in both variants of Apple A9 SoC.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-4-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: t7001: Add CPU caches  (Nick Chan, 1 file, -0/+16)

Add information about CPU caches in Apple A8X SoC.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-3-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: t7000: Add CPU caches  (Nick Chan, 1 file, -0/+13)

Add information about CPU caches in Apple A8 SoC.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-2-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  arm64: dts: apple: s5l8960x: Add CPU caches  (Nick Chan, 1 file, -0/+13)

Add information about CPU caches in Apple A7 SoC.

Signed-off-by: Nick Chan <towinchenmi@gmail.com>
Link: https://lore.kernel.org/r/20250220-caches-v1-1-2c7011097768@gmail.com
Signed-off-by: Sven Peter <sven@svenpeter.dev>

2025-04-13  objtool, x86/hweight: Remove ANNOTATE_IGNORE_ALTERNATIVE  (Josh Poimboeuf, 1 file, -4/+2)

Since objtool's inception, frame pointer warnings have been manually
silenced for __arch_hweight*() to allow those functions' inline asm to
avoid using ASM_CALL_CONSTRAINT.

The potentially dubious reasoning for that decision over nine years
ago was that since !X86_FEATURE_POPCNT is exceedingly rare, it's not
worth hurting the code layout for a function call that will never
happen on the vast majority of systems.

However, those functions actually started using ASM_CALL_CONSTRAINT
with the following commit:

  194a613088a8 ("x86/hweight: Use ASM_CALL_CONSTRAINT in inline asm()")

And rightfully so, as it makes the code correct. ASM_CALL_CONSTRAINT
will soon have no effect for non-FP configs anyway.

With ASM_CALL_CONSTRAINT in place, ANNOTATE_IGNORE_ALTERNATIVE no
longer has a purpose for the hweight functions. Remove it.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/e7070dba3278c90f1a836b16157dcd34ccd21e21.1744318586.git.jpoimboe@kernel.org

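For context, ASM_CALL_CONSTRAINT makes the stack pointer an output of
an inline asm statement that may contain a call, so the compiler sets
up a proper stack frame. A rough sketch of the post-patch shape
(simplified, 64-bit assumed; not the verbatim kernel source):

  /*
   * Sketch only: the ALTERNATIVE patches in POPCNT where available,
   * and falls back to calling __sw_hweight32 otherwise. The
   * ASM_CALL_CONSTRAINT output keeps the frame valid for that call.
   */
  static __always_inline unsigned int __arch_hweight32(unsigned int w)
  {
          unsigned int res;

          asm_inline (ALTERNATIVE("call __sw_hweight32",    /* !POPCNT fallback */
                                  "popcntl %[val], %[cnt]", /* fast path */
                                  X86_FEATURE_POPCNT)
                      : [cnt] "=a" (res), ASM_CALL_CONSTRAINT
                      : [val] "D" (w));                     /* arg in %rdi */

          return res;
  }
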
2025-04-13  x86/percpu: Refer __percpu_prefix to __force_percpu_prefix  (Uros Bizjak, 1 file, -9/+11)

Refer __percpu_prefix to __force_percpu_prefix to avoid duplicate
definition.

While there, slightly reorder definitions to a more logical sequence,
remove unneeded double quotes and move misplaced comment to the right
place.

No functional changes intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Link: https://lore.kernel.org/r/20250411093130.81389-1-ubizjak@gmail.com

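A sketch of the arrangement being described, assuming the
!CONFIG_CC_HAS_NAMED_AS branch (the real header has additional
configuration cases):

  /* Sketch only: the segment-override prefix is defined once ... */
  #define __force_percpu_prefix  "%%"__stringify(__percpu_seg)":"

  /* ... and the second macro refers to it instead of duplicating
   * the definition: */
  #define __percpu_prefix        __force_percpu_prefix
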
2025-04-12  Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (Linus Torvalds, 1 file, -1/+1)

Pull bpf fixes from Alexei Starovoitov:

 - Followup fixes for resilient spinlock (Kumar Kartikeya Dwivedi):
     - Make res_spin_lock test less verbose, since it was spamming BPF
       CI on failure, and make the check for AA deadlock stronger
     - Fix rebasing mistake and use architecture provided
       res_smp_cond_load_acquire
     - Convert BPF maps (queue_stack and ringbuf) to resilient
       spinlock to address long-standing syzbot reports

 - Make sure that classic BPF load instructions from SKF_[NET|LL]_OFF
   offsets work when the skb is fragmented (Willem de Bruijn)

* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  bpf: Convert ringbuf map to rqspinlock
  bpf: Convert queue_stack map to rqspinlock
  bpf: Use architecture provided res_smp_cond_load_acquire
  selftests/bpf: Make res_spin_lock AA test condition stronger
  selftests/net: test sk_filter support for SKF_NET_OFF on frags
  bpf: support SKF_NET_OFF and SKF_LL_OFF on skb frags
  selftests/bpf: Make res_spin_lock test less verbose

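As background on the SKF_NET_OFF fix: classic BPF filters can read
packet bytes relative to the network header via the SKF_NET_OFF
virtual offset, and the referenced fix makes such loads work when the
data sits in skb frags. A minimal userspace sketch using the standard
sock_filter API (error handling omitted):

  #include <stddef.h>
  #include <netinet/in.h>
  #include <netinet/ip.h>
  #include <linux/filter.h>
  #include <sys/socket.h>

  /* Accept only UDP: load the IP protocol byte relative to the
   * network header using the SKF_NET_OFF virtual offset. */
  static struct sock_filter code[] = {
          BPF_STMT(BPF_LD | BPF_B | BPF_ABS,
                   SKF_NET_OFF + offsetof(struct iphdr, protocol)),
          BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_UDP, 0, 1),
          BPF_STMT(BPF_RET | BPF_K, 0xffff),  /* match: accept */
          BPF_STMT(BPF_RET | BPF_K, 0),       /* no match: drop */
  };

  static struct sock_fprog prog = {
          .len    = sizeof(code) / sizeof(code[0]),
          .filter = code,
  };

  /* Attach with:
   *   setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog));
   */
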
2025-04-12  x86/microcode/AMD: Extend the SHA check to Zen5, block loading of any unreleased standalone Zen5 microcode patches  (Borislav Petkov (AMD), 1 file, -2/+7)

All Zen5 machines out there should get BIOS updates which update to
the correct microcode patches addressing the microcode signature
issue. However, silly people carve out random microcode blobs from
BIOS packages and think they are doing other people a service this
way...

Block loading of any unreleased standalone Zen5 microcode patches.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Cc: Nikolay Borisov <nik.borisov@suse.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20250410114222.32523-1-bp@kernel.org

2025-04-12  x86/sev: Prepare for splitting off early SEV code  (Ard Biesheuvel, 5 files, -152/+194)

Prepare for splitting off parts of the SEV core.c source file into a
file that carries code that must tolerate being called from the early
1:1 mapping. This will allow special build-time handling of this code,
to ensure that it gets generated in a way that is compatible with the
early execution context.

So create a de-facto internal SEV API and put the definitions into
sev-internal.h. No attempt is made to allow this header file to be
included in arbitrary other sources - this is explicitly not the
intent.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-20-ardb+git@google.com

2025-04-12  x86/boot: Drop RIP_REL_REF() uses from SME startup code  (Ard Biesheuvel, 3 files, -8/+7)

RIP_REL_REF() has no effect on code residing in arch/x86/boot/startup,
as it is built with -fPIC. So remove any occurrences from the SME
startup code.

Note that SME is the only caller of cc_set_mask() that requires this,
so drop it from there as well.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-19-ardb+git@google.com

2025-04-12  x86/boot: Move early SME init code into startup/  (Ard Biesheuvel, 3 files, -8/+1)

Move the SME initialization code, which runs from the 1:1 mapping of
memory as it operates on the kernel virtual mapping, into the new
sub-directory arch/x86/boot/startup/ where all startup code will
reside that needs to tolerate executing from the 1:1 mapping.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-18-ardb+git@google.com

2025-04-12  x86/boot: Drop RIP_REL_REF() uses from early mapping code  (Ard Biesheuvel, 1 file, -20/+21)

Now that __startup_64() is built using -fPIC, RIP_REL_REF() has become
a NOP and can be removed. Only some occurrences of rip_rel_ptr() will
remain, to explicitly take the address of certain global structures in
the 1:1 mapping of memory.

While at it, update the code comment to describe why this is needed.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-17-ardb+git@google.com

2025-04-12  x86/boot: Move early kernel mapping code into startup/  (Ard Biesheuvel, 3 files, -211/+226)

The startup code that constructs the kernel virtual mapping runs from
the 1:1 mapping of memory itself, and therefore, cannot use absolute
symbol references. Before making changes in subsequent patches, move
this code into a separate source file under arch/x86/boot/startup/
where all such code will be kept from now on.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-16-ardb+git@google.com

2025-04-12  x86/boot: Move the early GDT/IDT setup code into startup/  (Ard Biesheuvel, 4 files, -75/+100)

Move the early GDT/IDT setup code that runs long before the kernel
virtual mapping is up into arch/x86/boot/startup/, and build it in a
way that ensures that the code tolerates being called from the 1:1
mapping of memory. The code itself is left unchanged by this patch.

Also tweak the sed symbol matching pattern in the decompressor to
match on lower case 't' or 'b', as these will be emitted by Clang for
symbols with hidden linkage.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-15-ardb+git@google.com

2025-04-12  x86/asm: Make rip_rel_ptr() usable from fPIC code  (Ard Biesheuvel, 5 files, -19/+19)

RIP_REL_REF() is used in non-PIC C code that is called very early,
before the kernel virtual mapping is up, which is the mapping that the
linker expects. It is currently used in two different ways:

- to refer to the value of a global variable, including as an lvalue
  in assignments;

- to take the address of a global variable via the mapping that the
  code currently executes at.

The former case is only needed in non-PIC code, as PIC code will never
use absolute symbol references when the address of the symbol is not
being used. But taking the address of a variable in PIC code may still
require extra care, as a stack allocated struct assignment may be
emitted as a memcpy() from a statically allocated copy in .rodata.

For instance, this

  void startup_64_setup_gdt_idt(void)
  {
          struct desc_ptr startup_gdt_descr = {
                  .address = (__force unsigned long)gdt_page.gdt,
                  .size    = GDT_SIZE - 1,
          };

may result in an absolute symbol reference in PIC code, even though
the struct is allocated on the stack and populated at runtime.

To address this case, make rip_rel_ptr() accessible in PIC code, and
update any existing uses where the address of a global variable is
taken using RIP_REL_REF.

Once all code of this nature has been moved into arch/x86/boot/startup
and built with -fPIC, RIP_REL_REF() can be retired, and only
rip_rel_ptr() will remain.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250410134117.3713574-14-ardb+git@google.com

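For reference, the helper amounts to a forced RIP-relative address
computation; a minimal sketch of the idea (not necessarily the exact
kernel definition):

  /* Sketch: emit a RIP-relative LEA so the address is taken relative
   * to wherever the code actually executes, rather than through an
   * absolute symbol reference resolved for the kernel virtual map. */
  static __always_inline void *rip_rel_ptr(void *p)
  {
          asm("leaq %c1(%%rip), %0" : "=r" (p) : "i" (p));

          return p;
  }
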
2025-04-12  x86/mm: Opt-in to IRQs-off activate_mm()  (Andy Lutomirski, 2 files, -1/+2)

We gain nothing by having the core code enable IRQs right before
calling activate_mm() only for us to turn them right back off again in
switch_mm().

This will save a few cycles, so execve() should be blazingly fast with
this patch applied!

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-8-mingo@kernel.org

2025-04-12  x86/efi: Make efi_enter/leave_mm() use the use_/unuse_temporary_mm() machinery  (Andy Lutomirski, 1 file, -5/+2)

This should be considerably more robust. It's also necessary for
optimized for_each_possible_lazymm_cpu() on x86 -- without this patch,
EFI calls in lazy context would remove the lazy mm from mm_cpumask().

[ mingo: Merged it on top of x86/alternatives ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-7-mingo@kernel.org

2025-04-12  x86/mm: Allow temporary MMs when IRQs are on  (Andy Lutomirski, 1 file, -7/+12)

EFI runtime services should use temporary MMs, but EFI runtime
services want IRQs on. Preemption must still be disabled in a
temporary MM context.

At some point, the entire temporary MM mechanism should be moved out
of arch code.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-6-mingo@kernel.org

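A usage sketch under the constraints described above (API shape per
this series; illustrative, not verbatim kernel code):

  /* Illustrative only: after this change IRQs may stay enabled, but
   * preemption must remain disabled while the temporary mm is live. */
  struct mm_struct *prev;

  preempt_disable();
  prev = use_temporary_mm(temp_mm);   /* switch to the temporary mm */

  /* ... access mappings that exist only in temp_mm ... */

  unuse_temporary_mm(prev);           /* restore the previous mm */
  preempt_enable();
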
2025-04-12  x86/mm: Remove 'mm' argument from unuse_temporary_mm() again  (Peter Zijlstra, 3 files, -6/+6)

Now that unuse_temporary_mm() lives in tlb.c it can access
cpu_tlbstate.loaded_mm.

[ mingo: Merged it on top of x86/alternatives ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-5-mingo@kernel.org

2025-04-12  x86/mm: Make use_/unuse_temporary_mm() non-static  (Andy Lutomirski, 3 files, -64/+67)

This prepares them for use outside of the alternative machinery. The
code is unchanged.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-4-mingo@kernel.org

2025-04-12  x86/events, x86/insn-eval: Remove incorrect current->active_mm references  (Andy Lutomirski, 2 files, -4/+18)

When decoding an instruction or handling a perf event that references
an LDT segment, if we don't have a valid user context, trying to
access the LDT by any means other than SLDT is racy. Certainly, using
current->active_mm is wrong, as active_mm can point to a real user mm
when CR3 and LDTR no longer reference that mm.

Clean up the code. If nmi_uaccess_okay() says we don't have a valid
context, just fail. Otherwise use current->mm.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-3-mingo@kernel.org

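The resulting pattern, sketched below (the helper name is hypothetical;
the real call sites in perf and insn-eval differ in detail):

  /* Hypothetical helper illustrating the pattern described above. */
  static unsigned long ldt_segment_base(unsigned int idx)
  {
          struct ldt_struct *ldt;

          /* No valid user context: accessing the LDT is racy, fail. */
          if (!nmi_uaccess_okay())
                  return 0;

          /* Consult current->mm, never current->active_mm. */
          ldt = READ_ONCE(current->mm->context.ldt);
          if (!ldt || idx >= ldt->nr_entries)
                  return 0;

          return get_desc_base(&ldt->entries[idx]);
  }
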
2025-04-12  x86/mm: Add 'mm' argument to unuse_temporary_mm()  (Peter Zijlstra, 1 file, -3/+3)

In commit 209954cbc7d0 ("x86/mm/tlb: Update mm_cpumask lazily")
unuse_temporary_mm() grew the assumption that it gets used on
poking_mm exclusively. While this is currently true, let's not
hard-code this assumption.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lore.kernel.org/r/20250402094540.3586683-2-mingo@kernel.org

2025-04-12  net: Retire DCCP socket.  (Kuniyuki Iwashima, 17 files, -29/+0)

DCCP was orphaned in 2021 by commit 054c4610bd05 ("MAINTAINERS: dccp:
move Gerrit Renker to CREDITS"), which noted that the last maintainer
had been inactive for five years.

In recent years, it has become a playground for syzbot, and most
changes to DCCP have been odd bug fixes triggered by syzbot. Apart
from that, the only changes have been driven by treewide or networking
API updates or adjustments related to TCP.

Thus, in 2023, we announced we would remove DCCP in 2025 via commit
b144fcaf46d4 ("dccp: Print deprecation notice."). Since then, only one
individual has contacted the netdev mailing list. [0]

There is ongoing research for Multipath DCCP. The repository is hosted
on GitHub [1], and development is not taking place through the
upstream community. While the repository is published under the GPLv2
license, the scheduling part remains proprietary, with a LICENSE
file [2] stating: "This is not Open Source software."

The researcher mentioned a plan to address the licensing issue,
upstream the patches, and step up as a maintainer, but there has been
no further communication since then.

Maintaining DCCP for a decade without any real users has become a
burden. Therefore, it's time to remove it.

Removing DCCP will also provide significant benefits to TCP. It allows
us to freely reorganize the layout of struct inet_connection_sock,
which is currently shared with DCCP, and optimize it to reduce the
number of cachelines accessed in the TCP fast path.

Note that we keep DCCP netfilter modules as requested. [3]

Link: https://lore.kernel.org/netdev/20230710182253.81446-1-kuniyu@amazon.com/T/#u [0]
Link: https://github.com/telekom/mp-dccp [1]
Link: https://github.com/telekom/mp-dccp/blob/mpdccp_v03_k5.10/net/dccp/non_gpl_scheduler/LICENSE [2]
Link: https://lore.kernel.org/netdev/Z_VQ0KlCRkqYWXa-@calendula/ [3]
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Paul Moore <paul@paul-moore.com> (LSM and SELinux)
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Link: https://patch.msgid.link/20250410023921.11307-3-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-04-12  arm64: dts: broadcom: bcm2712: Use "l2-cache" for L2 cache node names  (Rob Herring (Arm), 1 file, -4/+4)

There's no need to include the CPU number in the L2 cache node names
as the names are local to the CPU nodes. The documented node name is
also just "l2-cache".

Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/all/20250410-dt-cpu-schema-v2-2-63d7dc9ddd0a@kernel.org/
Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>

2025-04-11  Merge tag 's390-6.15-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds, 8 files, -15/+198)

Pull s390 updates from Heiko Carstens:
 "Note that besides two bug fixes this includes three commits for IBM
  z17, which was announced this week.

  - Add IBM z17 bits:
      - Setup elf_platform for new machine types
      - Allow to compile the kernel with z17 optimizations
      - Add new performance counters

  - Fix mismatch between indicator bits and queue indexes in virtio
    CCW code

  - Fix double free in pmu setup error path"

* tag 's390-6.15-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/cpumf: Fix double free on error in cpumf_pmu_event_init()
  s390/cpumf: Update CPU Measurement facility extended counter set support
  s390: Allow to compile with z17 optimizations
  s390: Add z17 elf platform
  s390/virtio_ccw: Don't allocate/assign airqs for non-existing queues

2025-04-11  KVM: arm64: Let kvm_vcpu_read_pmcr() return an EL-dependent value for PMCR_EL0.N  (Marc Zyngier, 1 file, -1/+5)

When EL2 is present, PMCR_EL0.N is the effective value of
MDCR_EL2.HPMN when accessed from EL1 or EL0. Make sure we honor this
requirement.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-04-11  KVM: arm64: Handle out-of-bound write to MDCR_EL2.HPMN  (Marc Zyngier, 1 file, -6/+23)

We don't really pay attention to what gets written to MDCR_EL2.HPMN,
and funky guests could play ugly games on us. Restrict what gets
written there, and limit the number of counters to what the PMU is
allowed to have.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

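A sketch of the sanitisation being described (field and helper names
are illustrative, not the exact kernel code):

  /* Illustrative sketch only: clamp a guest write to MDCR_EL2.HPMN
   * to the number of counters the vcpu's PMU actually provides. */
  u64 hpmn = FIELD_GET(MDCR_EL2_HPMN, val);

  if (hpmn > vcpu->kvm->arch.nr_pmu_counters)
          hpmn = vcpu->kvm->arch.nr_pmu_counters;

  val &= ~MDCR_EL2_HPMN;
  val |= FIELD_PREP(MDCR_EL2_HPMN, hpmn);
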
2025-04-11  KVM: arm64: Don't let userspace write to PMCR_EL0.N when the vcpu has EL2  (Marc Zyngier, 1 file, -0/+1)

Now that userspace can provide its limit for the maximum number of
counters, prevent it from writing to PMCR_EL0.N, as the value should
be derived from MDCR_EL2.HPMN in that case.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-04-11  KVM: arm64: Allow userspace to limit the number of PMU counters for EL2 VMs  (Marc Zyngier, 2 files, -4/+29)

As long as we had purely EL1 VMs, we could easily update the number of
guest-visible counters by letting userspace write to PMCR_EL0.N. With
VMs started at EL2, PMCR_EL1.N only reflects MDCR_EL2.HPMN, and we
don't have a good way to limit it.

For this purpose, introduce a new PMUv3 attribute that allows limiting
the maximum number of counters. This requires the explicit selection
of a PMU.

Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-04-11  KVM: arm64: Contextualise the handling of PMCR_EL0.P writes  (Marc Zyngier, 1 file, -5/+3)

Contrary to what the comment says in kvm_pmu_handle_pmcr(), writing
PMCR_EL0.P==1 has the following effects:

<quote>
The event counters affected by this field are:
* All event counters in the first range.
* If any of the following are true, all event counters in the second
  range:
  - EL2 is disabled or not implemented in the current Security state.
  - The PE is executing at EL2 or EL3.
</quote>

where the "first range" represents the counters in the [0..HPMN-1]
range, and the "second range" the counters in the [HPMN..MAX] range.

It so appears that writing P from EL2 should nuke all counters, and
not just the "guest" view. Just do that, and nuke the misleading
comment.

Reported-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

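A sketch of the resulting behaviour (the counter-count helper is
hypothetical; the real code iterates a counter mask):

  /* Hypothetical sketch: a PMCR_EL0.P write from EL2 zeroes every
   * event counter, covering both the [0..HPMN-1] and [HPMN..MAX]
   * ranges rather than just the guest-visible first range. */
  if (val & ARMV8_PMU_PMCR_P) {
          unsigned int i;

          for (i = 0; i < vcpu_pmu_num_counters(vcpu); i++)
                  kvm_pmu_set_counter_value(vcpu, i, 0);
  }
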
2025-04-11  KVM: arm64: Fix MDCR_EL2.HPMN reset value  (Marc Zyngier, 2 files, -2/+26)

The MDCR_EL2 documentation indicates that the HPMN field has the
following behaviour:

  "On a Warm reset, this field resets to the expression
   NUM_PMU_COUNTERS."

However, it appears we reset it to zero, which is not very useful.

Add a reset helper for MDCR_EL2, and handle the case where userspace
changes the target PMU, which may force us to change HPMN again.

Reported-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-04-11  KVM: arm64: Repaint pmcr_n into nr_pmu_counters  (Marc Zyngier, 3 files, -7/+7)

The pmcr_n field obviously refers to PMCR_EL0.N, but is generally used
as the number of counters seen by the guest. Rename it accordingly.

Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-04-11  x86/xen: Fix __xen_hypercall_setfunc()  (Jason Andryuk, 1 file, -6/+1)

Hypercall detection is failing with xen_hypercall_intel() chosen even
on an AMD processor. Looking at the disassembly, the call to
xen_get_vendor() was removed.

The check for boot_cpu_has(X86_FEATURE_CPUID) was used as a proxy for
the x86_vendor having been set. When
CONFIG_X86_REQUIRED_FEATURE_CPUID=y (the default value), DCE
eliminates the call to xen_get_vendor(). An uninitialized value 0
means X86_VENDOR_INTEL, so the Intel function is always returned.

Remove the if and always call xen_get_vendor() to avoid this issue.

Fixes: 3d37d9396eb3 ("x86/cpufeatures: Add {REQUIRED,DISABLED} feature configs")
Suggested-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Xin Li (Intel)" <xin@zytor.com>
Link: https://lore.kernel.org/r/20250410193106.16353-1-jason.andryuk@amd.com

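The shape of the fix, per the description above (simplified sketch,
not the verbatim diff):

  /* Before: with X86_FEATURE_CPUID compile-time required, the
   * compiler proves the condition false and dead-code-eliminates
   * the vendor probe, leaving x86_vendor at 0 (X86_VENDOR_INTEL). */
  if (!boot_cpu_has(X86_FEATURE_CPUID))
          xen_get_vendor();

  /* After: probe the vendor unconditionally, so x86_vendor is always
   * set before the AMD-vs-Intel hypercall function is chosen. */
  xen_get_vendor();
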
2025-04-11  x86/cacheinfo: Standardize header files and CPUID references  (Ahmed S. Darwish, 1 file, -10/+10)

Reference header files using their canonical form <linux/cacheinfo.h>.

Standardize on CPUID(0xN), instead of CPUID(N), for all standard
leaves. This removes ambiguity and aligns them with their extended
counterparts like CPUID(0x8000001d).

References: 0dd09e215a39 ("x86/cacheinfo: Apply maintainer-tip coding style fixes")
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: John Ogness <john.ogness@linutronix.de>
Link: https://lore.kernel.org/r/20250411070401.1358760-3-darwi@linutronix.de

2025-04-11  x86/cpuid: Remove obsolete CPUID(0x2) iteration macro  (Ahmed S. Darwish, 1 file, -23/+0)

The CPUID(0x2) cache descriptors iterator at <cpuid/leaf_0x2_api.h>:

    for_each_leaf_0x2_desc()

has no more call sites. Remove it.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250411070401.1358760-2-darwi@linutronix.de

2025-04-11  Merge tag 'v6.15-rc1' into x86/cpu, to refresh the branch with upstream changes  (Ingo Molnar, 1621 files, -23671/+49418)

Signed-off-by: Ingo Molnar <mingo@kernel.org>

2025-04-11  x86/alternatives: Make smp_text_poke_batch_process() subsume smp_text_poke_batch_finish()  (Nikolay Borisov, 1 file, -11/+8)

Simplify the alternatives interface some more by moving the
poke_batch_finish check into poke_batch_process and renaming the
latter. The net effect is one less function name to consider when
reading the code.

Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-54-mingo@kernel.org

2025-04-11  x86/alternatives: Add comment about noinstr expectations  (Ingo Molnar, 1 file, -0/+5)

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-53-mingo@kernel.org

2025-04-11  x86/alternatives: Rename 'apply_relocation()' to 'text_poke_apply_relocation()'  (Ingo Molnar, 3 files, -7/+7)

Join the text_poke_*() API namespace.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-52-mingo@kernel.org

2025-04-11  x86/alternatives: Update the comments in smp_text_poke_batch_process()  (Ingo Molnar, 1 file, -13/+15)

- Capitalize 'INT3' consistently,
- make it clear that 'sync cores' means an SMP sync to all CPUs,
- fix typos and spelling.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-51-mingo@kernel.org

2025-04-11  x86/alternatives: Remove 'smp_text_poke_batch_flush()'  (Ingo Molnar, 1 file, -9/+2)

It only has a single user left, merge it into smp_text_poke_batch_add()
and remove the helper function.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-50-mingo@kernel.org

2025-04-11  x86/alternatives: Move declarations of vmlinux.lds.S defined section symbols to <asm/alternative.h>  (Ingo Molnar, 2 files, -6/+6)

Move it from the middle of a .c file next to the similar declarations
of __alt_instructions[] et al.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-49-mingo@kernel.org

2025-04-11  x86/alternatives: Simplify the #include section  (Ingo Molnar, 1 file, -25/+3)

We accumulated lots of unnecessary header inclusions over the years;
trim them.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-48-mingo@kernel.org

2025-04-11  x86/alternatives: Rename 'POKE_MAX_OPCODE_SIZE' to 'TEXT_POKE_MAX_OPCODE_SIZE'  (Ingo Molnar, 2 files, -5/+5)

Join the TEXT_POKE_ namespace.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-47-mingo@kernel.org

2025-04-11  x86/alternatives: Rename 'TP_ARRAY_NR_ENTRIES_MAX' to 'TEXT_POKE_ARRAY_MAX'  (Ingo Molnar, 1 file, -3/+3)

Standardize on TEXT_POKE_ namespace for CPP constants too.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-46-mingo@kernel.org

2025-04-11  x86/alternatives: Standardize on 'tpl' local variable names for 'struct smp_text_poke_loc *'  (Ingo Molnar, 1 file, -27/+27)

There's no toilet paper in this code.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-45-mingo@kernel.org

2025-04-11  x86/alternatives: Simplify and clean up patch_cmp()  (Ingo Molnar, 1 file, -5/+3)

- No need to cast over to 'struct smp_text_poke_loc *', void * is
  just fine for a binary search,

- Use the canonical (a, b) input parameter nomenclature of cmp_func_t
  functions and rename the input parameters from (tp, elt) to
  (tpl_a, tpl_b).

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250411054105.2341982-44-mingo@kernel.org

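The resulting comparator, sketched from the description above
(approximate, not the verbatim kernel code):

  /* Approximate sketch: a bsearch() comparator over the text-poke
   * location array, taking plain void * arguments and using the
   * canonical (a, b) nomenclature. */
  static int patch_cmp(const void *tpl_a, const void *tpl_b)
  {
          const struct smp_text_poke_loc *tpl = tpl_b;

          if (tpl_a < text_poke_addr(tpl))
                  return -1;
          if (tpl_a > text_poke_addr(tpl))
                  return 1;
          return 0;
  }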