path: root/arch
2025-04-16  x86/bugs: Rename mmio_stale_data_clear to cpu_buf_vm_clear  (Pawan Gupta; 3 files, -8/+16)
The static key mmio_stale_data_clear controls the KVM-only mitigation for MMIO Stale Data vulnerability. Rename it to reflect its purpose. No functional change. Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/20250416-mmio-rename-v2-1-ad1f5488767c@linux.intel.com
2025-04-16  powerpc: enable dynamic preemption  (Shrikanth Hegde; 4 files, -2/+23)
Once lazy preemption is supported, it would be desirable to change the preemption model at runtime. So add support for dynamic preemption using DYNAMIC_KEY. Tested lightly on a Power10 LPAR. Performance numbers indicate that preempt=none (no dynamic) and preempt=none (dynamic) are close.

  cat /sys/kernel/debug/sched/preempt
  (none) voluntary full lazy

  perf stat -e probe:__cond_resched -a sleep 1
  Performance counter stats for 'system wide':
          1,253  probe:__cond_resched

  echo full > /sys/kernel/debug/sched/preempt
  cat /sys/kernel/debug/sched/preempt
  none voluntary (full) lazy

  perf stat -e probe:__cond_resched -a sleep 1
  Performance counter stats for 'system wide':
              0  probe:__cond_resched

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250210184334.567383-2-sshegde@linux.ibm.com
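For context, a minimal sketch of how a DYNAMIC_KEY-based preemption switch works, modeled on the generic dynamic_cond_resched() pattern in kernel/sched/core.c (simplified; not the powerpc patch itself):

  #include <linux/jump_label.h>

  DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);

  /* Writing to /sys/kernel/debug/sched/preempt flips the static key,
   * patching this branch at runtime instead of requiring a reboot. */
  int dynamic_cond_resched(void)
  {
          if (!static_branch_unlikely(&sk_dynamic_cond_resched))
                  return 0;               /* preempt=full: cond_resched() is a NOP */
          return __cond_resched();        /* preempt=none/voluntary: really resched */
  }

This is why the probe:__cond_resched count above drops to zero once 'full' is selected: the patched-out branch makes __cond_resched() unreachable.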
2025-04-16  fadump: Use str_yes_no() helper in fadump_show_config()  (Thorsten Blum; 1 file, -4/+2)
Remove hard-coded strings by using the str_yes_no() helper function. Reviewed-by: Sourabh Jain <sourabhjain@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250209081704.2758-2-thorsten.blum@linux.dev
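str_yes_no() comes from <linux/string_choices.h>; a minimal before/after sketch of the pattern (the fadump field name is illustrative, not the exact call site):

  #include <linux/string_choices.h>

  /* Before: open-coded ternary with hard-coded strings */
  pr_debug("Dump active : %s\n", fw_dump.dump_active ? "yes" : "no");

  /* After: the helper removes the duplicated string literals */
  pr_debug("Dump active : %s\n", str_yes_no(fw_dump.dump_active));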
2025-04-16  KVM: powerpc: Enable commented out BUILD_BUG_ON() assertion  (Thorsten Blum; 1 file, -4/+0)
The BUILD_BUG_ON() assertion was commented out in commit 38634e676992 ("powerpc/kvm: Remove problematic BUILD_BUG_ON statement") and fixed in commit c0a187e12d48 ("KVM: powerpc: Fix BUILD_BUG_ON condition"), but not enabled. Enable it now that this no longer breaks and remove the comment. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250411084222.6916-1-thorsten.blum@linux.dev
2025-04-16  powerpc: mpic: Use str_enabled_disabled() helper function  (Thorsten Blum; 1 file, -3/+4)
Remove hard-coded strings by using the str_enabled_disabled() helper function. Use pr_debug() instead of printk(KERN_DEBUG) to silence a checkpatch warning. Reviewed-by: Ricardo B. Marlière <ricardo@marliere.net> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250219112053.3352-2-thorsten.blum@linux.dev
2025-04-16  powerpc/ps3: Use str_write_read() in ps3_notification_read_write()  (Thorsten Blum; 1 file, -1/+2)
Remove hard-coded strings by using the str_write_read() helper function. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250219111445.2875-2-thorsten.blum@linux.dev
2025-04-16  powerpc/kvm-hv-pmu: Add perf-events for Hostwide counters  (Vaibhav Jain; 1 file, -1/+91)
Update 'kvm-hv-pmu.c' to add five new perf-events mapped to the five Hostwide counters. Since these newly introduced perf events are system-wide in scope and can be read from any L1-LPAR CPU, the 'kvmppc_pmu' scope and capabilities are updated appropriately. Also introduce two new helpers. The first, kvmppc_update_l0_stats(), uses the infrastructure introduced in previous patches to issue the H_GUEST_GET_STATE hcall to L0-PowerVM to fetch the guest-state-buffer holding the latest values of these counters, which is then parsed and the 'l0_stats' variable updated. The second helper, kvmppc_pmu_event_update(), is called from the 'kvmppc_pmu' callbacks and uses kvmppc_update_l0_stats() to update 'l0_stats' and then update the 'struct perf_event' event counter. Some minor updates to kvmppc_pmu_{add, del, read}() to remove some debug scaffolding code. Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-7-vaibhav@linux.ibm.com
2025-04-16  powerpc/kvm-hv-pmu: Implement GSB message-ops for hostwide counters  (Vaibhav Jain; 1 file, -0/+207)
Implement and set up the necessary structures to send a prepopulated Guest-State-Buffer (GSB) requesting hostwide counters to L0-PowerVM and have the returned GSB holding the values of these counters parsed. This is done via the existing GSB implementation and with the newly added support for Hostwide elements in GSB. The request to L0-PowerVM to return Hostwide counters is done using a pre-allocated GSB named 'gsb_l0_stats'. To be able to populate this GSB with the needed Guest-State-Elements (GSIDs), an instance of 'struct kvmppc_gs_msg' named 'gsm_l0_stats' is introduced. The 'gsm_l0_stats' is tied to an instance of 'struct kvmppc_gs_msg_ops' named 'gsb_ops_l0_stats', which holds various callbacks to compute the size (hostwide_get_size()), populate the GSB (hostwide_fill_info()) and refresh (hostwide_refresh_info()) the contents of 'l0_stats', which holds the Hostwide counters returned from L0-PowerVM. To protect these structures from simultaneous access, a spinlock 'lock_l0_stats' has been introduced. The allocation and initialization of the above structures is done in the newly introduced kvmppc_init_hostwide(), and similarly the cleanup is performed in the newly introduced kvmppc_cleanup_hostwide(). Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-6-vaibhav@linux.ibm.com
2025-04-16  kvm powerpc/book3s-apiv2: Introduce kvm-hv specific PMU  (Vaibhav Jain; 3 files, -0/+153)
Introduce a new PMU named 'kvm-hv' inside a new module named 'kvm-hv-pmu' to report Book3s kvm-hv specific performance counters. This will expose KVM-HV specific performance attributes to user-space via the kernel's PMU infrastructure and will enable users to monitor active kvm-hv based guests. The patch creates the necessary scaffolding for the new PMU callbacks and introduces the new kernel module named 'kvm-hv-pmu', which is built with CONFIG_KVM_BOOK3S_HV_PMU. The patch doesn't introduce any perf-events yet; those will be introduced in later patches. Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-5-vaibhav@linux.ibm.com
2025-04-16  kvm powerpc/book3s-apiv2: Add kunit tests for Hostwide GSB elements  (Vaibhav Jain; 1 file, -0/+214)
Update 'test-guest-state-buffer.c' to add two new KUnit test cases for validating the correctness of changes to the Guest-state-buffer management infrastructure for adding support for Hostwide GSB elements. The newly introduced test test_gs_hostwide_msg() checks if the Hostwide elements can be set and parsed from a Guest-state-buffer. The second KUnit test test_gs_hostwide_counters() checks if the Hostwide GSB elements can be sent to the L0-PowerVM hypervisor via the H_GUEST_SET_STATE hcall and ensures that the returned guest-state-buffer has all five Hostwide stat counters present. Below is the KTAP test report with the newly added KUnit tests:

  KTAP version 1
  # Subtest: guest_state_buffer_test
  # module: test_guest_state_buffer
  1..7
  ok 1 test_creating_buffer
  ok 2 test_adding_element
  ok 3 test_gs_bitmap
  ok 4 test_gs_parsing
  ok 5 test_gs_msg
  ok 6 test_gs_hostwide_msg
  # test_gs_hostwide_counters: Guest Heap Size=0 bytes
  # test_gs_hostwide_counters: Guest Heap Size Max=10995367936 bytes
  # test_gs_hostwide_counters: Guest Page-table Size=2178304 bytes
  # test_gs_hostwide_counters: Guest Page-table Size Max=2147483648 bytes
  # test_gs_hostwide_counters: Guest Page-table Reclaim Size=0 bytes
  ok 7 test_gs_hostwide_counters
  # guest_state_buffer_test: pass:7 fail:0 skip:0 total:7
  # Totals: pass:7 fail:0 skip:0 total:7
  ok 1 guest_state_buffer_test

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-4-vaibhav@linux.ibm.com
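For reference, a minimal KUnit skeleton of the shape these tests take (the suite name matches the report above, but the test body is an illustrative placeholder, not the actual test logic):

  #include <kunit/test.h>
  #include <linux/sizes.h>

  static void test_gs_hostwide_msg(struct kunit *test)
  {
          void *gsb = kunit_kzalloc(test, SZ_4K, GFP_KERNEL);

          KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gsb);
          /* ... set Hostwide GSB elements in gsb, then parse them back ... */
  }

  static struct kunit_case gsb_test_cases[] = {
          KUNIT_CASE(test_gs_hostwide_msg),
          {}
  };

  static struct kunit_suite gsb_test_suite = {
          .name = "guest_state_buffer_test",
          .test_cases = gsb_test_cases,
  };
  kunit_test_suite(gsb_test_suite);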
2025-04-16  kvm powerpc/book3s-apiv2: Add support for Hostwide GSB elements  (Vaibhav Jain; 4 files, -12/+81)
Add support for adding and parsing Hostwide elements to the Guest-state-buffer data structure used in apiv2. These elements are used to share meta-information pertaining to the entire L1-LPAR, and this meta-information is maintained by the L0-PowerVM hypervisor. An example of this is the amount of page-table memory currently used by L0-PowerVM for hosting the Shadow-Pagetable of all active L2-Guests. More of these are documented in the kernel documentation at [1]. The Hostwide GSB elements are currently only supported with the H_GUEST_SET_STATE hcall with a special flag, namely 'KVMPPC_GS_FLAGS_HOST_WIDE'. The patch introduces new defs for the 5 new Hostwide GSB elements, including their GSIDs, as well as a new class of GSB elements, namely 'KVMPPC_GS_CLASS_HOSTWIDE', to indicate this to the GSB construction/parsing infrastructure in 'kvm/guest-state-buffer.c'. Also gs_msg_ops_vcpu_get_size(), kvmppc_gsid_type() and kvmppc_gse_{flatten,unflatten}_iden() are updated to appropriately indicate the needed size for these Hostwide GSB elements as well as how to flatten/unflatten their GSIDs so that they can be marked as available in the GSB bitmap. [1] Documentation/arch/powerpc/kvm-nested.rst Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-3-vaibhav@linux.ibm.com
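A hedged sketch of the flatten/unflatten idea: sparse GSIDs from each element class are mapped onto dense bitmap positions so availability can be tracked in a single bitmap. The constants below are illustrative assumptions, not the kernel's actual values:

  #include <linux/types.h>

  /* Illustrative constants -- the real IDs live in the GSB headers. */
  #define KVMPPC_GSID_HOSTWIDE_FIRST     0x0800  /* assumed first hostwide GSID */
  #define KVMPPC_GSE_IDEN_HOSTWIDE_BASE  0x60    /* assumed bitmap offset */

  static unsigned long kvmppc_gse_flatten_iden(u16 iden)
  {
          /* Hostwide IDs get their own contiguous range in the bitmap. */
          if (iden >= KVMPPC_GSID_HOSTWIDE_FIRST)
                  return KVMPPC_GSE_IDEN_HOSTWIDE_BASE +
                         (iden - KVMPPC_GSID_HOSTWIDE_FIRST);
          return iden;    /* other classes keep their existing mapping */
  }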
2025-04-16  ARM: davinci: remove support for da830  (Bartosz Golaszewski; 12 files, -955/+0)
We no longer support any boards with the da830 SoC in mainline linux. Let's remove all bits and pieces related to it. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Reviewed-by: Kevin Hilman <khilman@baylibre.com> Reviewed-by: David Lechner <dlechner@baylibre.com> Link: https://lore.kernel.org/r/20250407-davinci-remove-da830-v1-1-39f803dd5a14@linaro.org Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
2025-04-16  ARM: 9447/1: arm/memremap: fix arch_memremap_can_ram_remap()  (Ross Stutterheim; 1 file, -3/+1)
commit 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map") added the definition of arch_memremap_can_ram_remap() for arm[64]-specific filtering of what pages can be used from the linear mapping. memblock_is_map_memory() was called with the pfn of the address given to arch_memremap_can_ram_remap(); however, memblock_is_map_memory() expects to be given an address for arm, not a pfn. This results in calls to memremap() returning a newly mapped area when it should return an address in the existing linear mapping. Fix this by removing the address-to-pfn translation and passing the address directly. Fixes: 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map") Signed-off-by: Ross Stutterheim <ross.stutterheim@garmin.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: stable@vger.kernel.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
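The shape of the fix as described (a sketch of the arm implementation, not the verbatim diff):

  /* arch/arm/mm/ioremap.c (sketch) */
  bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
                                   unsigned long flags)
  {
          /* memblock_is_map_memory() takes a physical address on arm,
           * so pass offset directly instead of PHYS_PFN(offset). */
          return memblock_is_map_memory(offset);
  }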
2025-04-16  riscv: defconfig: Remove EXPERT  (Joel Stanley; 1 file, -22/+2)
The setting of EXPERT is a leftover from when the riscv defconfig was first added. As mentioned in the EXPERT Kconfig help text it is not intended to be set in the usual case. Upon removal a bunch of intrusive debug-related kernel options are no longer set, which is good. A few may want to come back in the future but let those be advocated for on a case by case basis. NAMESPACES, SYSFS_SYSCALL and MEDIA_SUPPORT_FILTER default on and thus fall out of the defconfig. Set VIDEO_CADENCE_CSI2RX=y to ensure VIDEO_CADENCE_CSI2RX stays enabled. Set DEBUG_KERNEL=y in line with other arch defconfigs. This turns on tracing. Signed-off-by: Joel Stanley <joel@jms.id.au> Link: https://lore.kernel.org/r/20250415122832.982610-1-joel@jms.id.au Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  Merge patch series "riscv: Rework the arch_kgdb_breakpoint() implementation"  (Palmer Dabbelt; 2 files, -8/+7)
WangYuli <wangyuli@uniontech.com> says: 1. The arch_kgdb_breakpoint() function defines the kgdb_compiled_break symbol using inline assembly. There's a potential issue where the compiler might inline arch_kgdb_breakpoint(), which would then define the kgdb_compiled_break symbol multiple times, leading to a failure to link vmlinux.o. This isn't merely a potential compilation problem. The intent here is to determine the global symbol address of kgdb_compiled_break, and if this function were inlined multiple times, it would logically be a grave error. 2. Remove ".option norvc/.option rvc" to fix a bug where the C extension would be unconditionally enabled even if the kernel is being built with CONFIG_RISCV_ISA_C=n. * b4-shazam-merge: riscv: KGDB: Remove ".option norvc/.option rvc" for kgdb_compiled_break riscv: KGDB: Do not inline arch_kgdb_breakpoint() Link: https://lore.kernel.org/r/D5A83DF3A06E1DF9+20250411072905.55134-1-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  riscv: KGDB: Remove ".option norvc/.option rvc" for kgdb_compiled_break  (WangYuli; 1 file, -3/+1)
[ Quoting Samuel Holland: ] This is a separate issue, but using ".option rvc" here is a bug. It will unconditionally enable the C extension for the rest of the file, even if the kernel is being built with CONFIG_RISCV_ISA_C=n. [ Quoting Palmer Dabbelt: ] We're just looking at the address of kgdb_compiled_break, so it's fine if it ends up as a c.ebreak. [ Quoting Alexandre Ghiti: ] .option norvc is used to prevent the assembler from using compressed instructions, but it's generally used when we need to ensure the size of the instructions that are used, which is not the case here as noted by Palmer since we only care about the address. So yes it will work fine with C enabled :) So let's just remove them all. Link: https://lore.kernel.org/all/4b4187c1-77e5-44b7-885f-d6826723dd9a@sifive.com/ Link: https://lore.kernel.org/all/mhng-69513841-5068-441d-be8f-2aeebdc56a08@palmer-ri-x1c9a/ Link: https://lore.kernel.org/all/23693e7f-4fff-40f3-a437-e06d827278a5@ghiti.fr/ Fixes: fe89bd2be866 ("riscv: Add KGDB support") Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: WangYuli <wangyuli@uniontech.com> Link: https://lore.kernel.org/r/8B431C6A4626225C+20250411073222.56820-2-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  riscv: KGDB: Do not inline arch_kgdb_breakpoint()  (WangYuli; 2 files, -8/+9)
The arch_kgdb_breakpoint() function defines the kgdb_compiled_break symbol using inline assembly. There's a potential issue where the compiler might inline arch_kgdb_breakpoint(), which would then define the kgdb_compiled_break symbol multiple times, leading to a failure to link vmlinux.o. This isn't merely a potential compilation problem. The intent here is to determine the global symbol address of kgdb_compiled_break, and if this function were inlined multiple times, it would logically be a grave error. Link: https://lore.kernel.org/all/4b4187c1-77e5-44b7-885f-d6826723dd9a@sifive.com/ Link: https://lore.kernel.org/all/5b0adf9b-2b22-43fe-ab74-68df94115b9a@ghiti.fr/ Link: https://lore.kernel.org/all/23693e7f-4fff-40f3-a437-e06d827278a5@ghiti.fr/ Fixes: fe89bd2be866 ("riscv: Add KGDB support") Co-developed-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: WangYuli <wangyuli@uniontech.com> Link: https://lore.kernel.org/r/F22359AFB6FF9FD8+20250411073222.56820-1-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
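A sketch of the resulting code: one out-of-line definition guarantees the kgdb_compiled_break symbol exists exactly once (close to the described fix, though not the verbatim patch):

  /* arch/riscv/kernel/kgdb.c (sketch) */
  noinline void arch_kgdb_breakpoint(void)
  {
          asm(".global kgdb_compiled_break\n"
              "kgdb_compiled_break: ebreak\n");
  }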
2025-04-16  RISC-V: vDSO: Wire up getrandom() vDSO implementation  (Xi Ruoyao; 6 files, -0/+298)
Hook up the generic vDSO implementation to the generic vDSO getrandom implementation by providing the required __arch_chacha20_blocks_nostack and getrandom_syscall implementations. Also wire up the selftests. The benchmark result:

  vdso:    25000000 times in 2.466341333 seconds
  libc:    25000000 times in 41.447720005 seconds
  syscall: 25000000 times in 41.043926672 seconds

  vdso:    25000000 x 256 times in 162.286219353 seconds
  libc:    25000000 x 256 times in 2953.855018685 seconds
  syscall: 25000000 x 256 times in 2796.268546000 seconds

Signed-off-by: Xi Ruoyao <xry111@xry111.site> Link: https://lore.kernel.org/r/20250411024600.16045-1-xry111@xry111.site Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
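For orientation, callers need no changes: a libc that supports the vDSO routes plain getrandom() through it, with the syscall as fallback. A minimal userspace sketch:

  #include <stdio.h>
  #include <sys/random.h>

  int main(void)
  {
          unsigned char buf[256];

          /* Serviced from the vDSO when available, syscall otherwise. */
          if (getrandom(buf, sizeof(buf), 0) != sizeof(buf))
                  return 1;
          printf("got %zu random bytes\n", sizeof(buf));
          return 0;
  }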
2025-04-16  powerpc/boot: Check for ld-option support  (Madhavan Srinivasan; 1 file, -4/+2)
Commit 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") enabled support to add the linker option "--no-warn-rwx-segments" if the linker version is greater than 2.39. Similar build warnings were recently reported from linker version 2.35.2:

  ld: warning: arch/powerpc/boot/zImage.epapr has a LOAD segment with RWX permissions
  ld: warning: arch/powerpc/boot/zImage.pseries has a LOAD segment with RWX permissions

Fix the warning by checking whether the linker supports "--no-warn-rwx-segments" before enabling it, instead of checking for a version range. Fixes: 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Closes: https://lore.kernel.org/linuxppc-dev/61cf556c-4947-4bd6-af63-892fc0966dad@linux.ibm.com/ Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250401004218.24869-1-maddy@linux.ibm.com
2025-04-16  riscv: Avoid fortify warning in syscall_get_arguments()  (Nathan Chancellor; 1 file, -2/+5)
When building with CONFIG_FORTIFY_SOURCE=y and W=1, there is a warning because of the memcpy() in syscall_get_arguments():

  In file included from include/linux/string.h:392,
                   from include/linux/bitmap.h:13,
                   from include/linux/cpumask.h:12,
                   from arch/riscv/include/asm/processor.h:55,
                   from include/linux/sched.h:13,
                   from kernel/ptrace.c:13:
  In function 'fortify_memcpy_chk',
      inlined from 'syscall_get_arguments.isra' at arch/riscv/include/asm/syscall.h:66:2:
  include/linux/fortify-string.h:580:25: error: call to '__read_overflow2_field' declared with attribute warning: detected read beyond size of field (2nd parameter); maybe use struct_group()? [-Werror=attribute-warning]
    580 |   __read_overflow2_field(q_size_field, size);
        |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors

The fortified memcpy() routine enforces that the source is not overread and the destination is not overwritten if the size of either field and the size of the copy are known at compile time. The memcpy() in syscall_get_arguments() intentionally overreads from a1 to a5 in 'struct pt_regs', but this is bigger than the size of a1. Normally, this could be solved by wrapping a1 through a5 with struct_group(), but there was already a struct_group() applied to these members in commit bba547810c66 ("riscv: tracing: Fix __write_overflow_field in ftrace_partial_regs()"). Just avoid memcpy() altogether and write the copying of args from regs manually, which clears up the warning at the expense of three extra lines of code. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Dmitry V. Levin <ldv@strace.io> Link: https://lore.kernel.org/r/20250409-riscv-avoid-fortify-warning-syscall_get_arguments-v1-1-7853436d4755@kernel.org Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
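The resulting shape of syscall_get_arguments() as described (a sketch of the approach; the register names follow the riscv 'struct pt_regs'):

  /* arch/riscv/include/asm/syscall.h (sketch) */
  static inline void syscall_get_arguments(struct task_struct *task,
                                           struct pt_regs *regs,
                                           unsigned long *args)
  {
          /* Copy each register explicitly instead of memcpy()-ing
           * across struct members, which trips FORTIFY_SOURCE. */
          args[0] = regs->orig_a0;
          args[1] = regs->a1;
          args[2] = regs->a2;
          args[3] = regs->a3;
          args[4] = regs->a4;
          args[5] = regs->a5;
  }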
2025-04-16  arm64: dts: mediatek: mt8196: Add pinmux macro header file  (Cathy Xu; 1 file, -0/+1574)
Add the pinctrl header file for MediaTek mt8196. Signed-off-by: Guodong Liu <guodong.liu@mediatek.com> Signed-off-by: Cathy Xu <ot_cathy.xu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20250414090215.16091-3-ot_cathy.xu@mediatek.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-04-16  arm64: dts: mediatek: Add MT6893 pinmux macro header file  (AngeloGioacchino Del Regno; 1 file, -0/+1356)
Add the required macros for the pinmux nodes of the MT6893 Dimensity 1200 SoC. Link: https://lore.kernel.org/r/20250410144044.476060-4-angelogioacchino.delregno@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-04-16  x86/fpu: Rename fpu_reset_fpregs() to fpu_reset_fpstate_regs()  (Chang S. Bae; 1 file, -3/+3)
The original function name came from an overly compressed form of 'fpstate_regs' by commit: e61d6310a0f8 ("x86/fpu: Reset permission and fpstate on exec()") However, the term 'fpregs' typically refers to physical FPU registers. In contrast, this function copies the init values to fpu->fpstate->regs, not hardware registers. Rename the function to better reflect what it actually does. No functional change. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-11-chang.seok.bae@intel.com
2025-04-16  x86/fpu: Remove export of mxcsr_feature_mask  (Chang S. Bae; 1 file, -1/+0)
The variable was previously referenced in KVM code but the last usage was removed by: ea4d6938d4c0 ("x86/fpu: Replace KVMs home brewed FPU copy from user") Remove its export symbol. Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-10-chang.seok.bae@intel.com
2025-04-16  x86/pkeys: Simplify PKRU update in signal frame  (Chang S. Bae; 1 file, -6/+3)
The signal delivery logic was modified to always set the PKRU bit in xregs_state->header->xfeatures by this commit: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd") However, the change derives the bitmask value using XGETBV(1), rather than simply updating the buffer that already holds the value. Thus, this approach induces an unnecessary dependency on XGETBV1 for PKRU handling. Eliminate the dependency by using the established helper function. Subsequently, remove the now-unused 'mask' argument. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250416021720.12305-9-chang.seok.bae@intel.com
2025-04-16  x86/fpu: Refactor xfeature bitmask update code for sigframe XSAVE  (Chang S. Bae; 2 files, -10/+14)
Currently, when saving register states in the signal frame, the legacy feature bits are always set in xregs_state->header->xfeatures. This code sequence can be generalized for reuse in similar cases. Refactor the logic to ensure a consistent approach across similar usages. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-8-chang.seok.bae@intel.com
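A hedged sketch of what such a generalized helper could look like (the name and exact shape are assumptions for illustration, not the patch itself):

  /* Set extra xfeature bits in a sigframe's XSAVE header (name assumed). */
  static inline int set_xfeatures_in_sigframe(struct xregs_state __user *buf,
                                              u64 mask)
  {
          u64 xfeatures;
          int err;

          err = __get_user(xfeatures, &buf->header.xfeatures);
          if (!err) {
                  xfeatures |= mask;      /* e.g. legacy FP/SSE bits, or PKRU */
                  err = __put_user(xfeatures, &buf->header.xfeatures);
          }
          return err;
  }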
2025-04-16  x86/fpu: Log XSAVE disablement consistently  (Chang S. Bae; 1 file, -3/+5)
Not all paths that lead to fpu__init_disable_system_xstate() currently emit a message indicating that XSAVE has been disabled. Move the print statement into the function to ensure the message is printed in all cases. Suggested-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250416021720.12305-7-chang.seok.bae@intel.com
2025-04-16  x86/fpu/apx: Enable APX state support  (Chang S. Bae; 2 files, -2/+4)
With APX now secured against a conflicting MPX presence, it is ready to be enabled. Include APX in the enabled xfeature set. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-5-chang.seok.bae@intel.com
2025-04-16  x86/fpu/apx: Disallow conflicting MPX presence  (Chang S. Bae; 1 file, -0/+11)
XSTATE components are architecturally independent. There is no rule requiring their offsets in the non-compacted format to be strictly ascending or mutually non-overlapping. However, in practice, such overlaps have not occurred -- until now. APX is introduced as xstate component 19, following AMX. In the non-compacted XSAVE format, its offset overlaps with the space previously occupied by the now-deprecated MPX feature: 45fc24e89b7c ("x86/mpx: remove MPX from arch/x86") To prevent conflicts, the kernel must ensure the CPU never exposes both features at the same time. If it does, that indicates unreliable hardware. In such cases, XSAVE should be disabled entirely as a precautionary measure. Add a sanity check to detect this condition and disable XSAVE if an invalid hardware configuration is identified. Note: MPX state components remain enabled on legacy systems solely for KVM guest support. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-4-chang.seok.bae@intel.com
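A hedged sketch of the sanity check described (placement and exact symbols are assumed; XFEATURE_MASK_APX comes from this series):

  /* arch/x86/kernel/fpu/xstate.c (sketch) */
  if ((fpu_kernel_cfg.max_features & XFEATURE_MASK_APX) &&
      (fpu_kernel_cfg.max_features & (XFEATURE_MASK_BNDREGS |
                                      XFEATURE_MASK_BNDCSR))) {
          /* APX overlaps the deprecated MPX area in the non-compacted
           * XSAVE format; a CPU exposing both is untrustworthy. */
          pr_err("x86/fpu: APX/MPX conflict, disabling XSAVE\n");
          fpu__init_disable_system_xstate(legacy_size);
          return;
  }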
2025-04-16  x86/fpu/apx: Define APX state component  (Chang S. Bae; 2 files, -0/+12)
Advanced Performance Extensions (APX) is associated with a new state component number 19. To support saving and restoring of the corresponding registers via the XSAVE mechanism, introduce the component definition along with the necessary sanity checks. Define the new component number, state name, and the register data type. Then, extend the size checker to validate the register data type, and explicitly list the APX feature flag as a dependency for the new component in xsave_cpuid_features[]. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-3-chang.seok.bae@intel.com
2025-04-16  x86/cpufeatures: Add X86_FEATURE_APX  (Chang S. Bae; 3 files, -0/+3)
Intel Advanced Performance Extensions (APX) introduce a new set of general-purpose registers, managed as an extended state component via the xstate management facility. Before enabling this new xstate, define a feature flag to clarify the dependency in xsave_cpuid_features[]. APX is enumerated under CPUID leaf 7, subleaf 1 (the flag sits in EDX). Since this CPUID word is not yet allocated in the kernel, place the flag in a scattered feature word. While this feature is intended only for userspace, exposing it via /proc/cpuinfo is unnecessary. Instead, the existing arch_prctl(2) mechanism with the ARCH_GET_XCOMP_SUPP option can be used to query the feature availability. Finally, clarify that APX depends on XSAVE. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-2-chang.seok.bae@intel.com
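A userspace sketch of the described query path (ARCH_GET_XCOMP_SUPP is the existing arch_prctl(2) option; the APX state-component number 19 comes from this series):

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #define ARCH_GET_XCOMP_SUPP 0x1021      /* from asm/prctl.h */

  int main(void)
  {
          uint64_t features = 0;

          /* Ask the kernel which XSTATE components are supported. */
          if (syscall(SYS_arch_prctl, ARCH_GET_XCOMP_SUPP, &features))
                  return 1;
          printf("APX %ssupported\n",
                 (features & (1ULL << 19)) ? "" : "not ");
          return 0;
  }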
2025-04-16  arm64: dts: mediatek: mt7622: Align GPIO hog name with bindings  (Krzysztof Kozlowski; 1 file, -1/+1)
Bindings expect GPIO hog names to end with 'hog' suffix, so correct it to fix dtbs_check warning: mt7622-bananapi-bpi-r64.dtb: asm_sel: $nodename:0: 'asm_sel' does not match '^(hog-[0-9]+|.+-hog(-[0-9]+)?)$' Link: https://lore.kernel.org/r/20250115211801.194247-1-krzysztof.kozlowski@linaro.org Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2025-04-16  crypto: x86/poly1305 - don't select CRYPTO_LIB_POLY1305_GENERIC  (Eric Biggers; 1 file, -1/+0)
The x86 Poly1305 code never falls back to the generic code, so selecting CRYPTO_LIB_POLY1305_GENERIC is unnecessary. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: x86/poly1305 - remove redundant shash algorithm  (Eric Biggers; 2 files, -97/+7)
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: mips/poly1305 - remove redundant shash algorithm  (Eric Biggers; 2 files, -123/+2)
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: mips/poly1305 - drop redundant dependency on CONFIG_MIPS  (Eric Biggers; 1 file, -1/+0)
arch/mips/crypto/Kconfig is sourced only when CONFIG_MIPS is enabled, so there is no need for options defined in that file to depend on it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: arm64/poly1305 - remove redundant shash algorithm  (Eric Biggers; 2 files, -140/+6)
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: arm/poly1305 - remove redundant shash algorithm  (Eric Biggers; 2 files, -170/+3)
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: poly1305 - centralize the shash wrappers for arch code  (Eric Biggers; 5 files, -6/+38)
Following the example of the crc32, crc32c, and chacha code, make the crypto subsystem register both generic and architecture-optimized poly1305 shash algorithms, both implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same shash glue code. Note that the poly1305 shash requires that the key be prepended to the data, which differs from the library functions where the key is simply a parameter to poly1305_init(). Previously this was handled at a fairly low level, polluting the library code with shash-specific code. Reorganize things so that the shash code handles this quirk itself. Also, to register the architecture-optimized shashes only when architecture-optimized code is actually being used, add a function poly1305_is_arch_optimized() and make each arch implement it. Change each architecture's Poly1305 module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In cases where poly1305_is_arch_optimized() just returns true unconditionally, using arch_initcall is not strictly needed, but it's still good to be consistent across architectures.) Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
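For orientation, the Poly1305 library API that these shash wrappers now sit on top of; note the key is an init() parameter here, unlike the shash, which expects it prepended to the data:

  #include <crypto/poly1305.h>

  static void mac_example(const u8 key[POLY1305_KEY_SIZE],
                          const u8 *data, unsigned int len,
                          u8 digest[POLY1305_DIGEST_SIZE])
  {
          struct poly1305_desc_ctx ctx;

          poly1305_init(&ctx, key);       /* 32-byte one-time key */
          poly1305_update(&ctx, data, len);
          poly1305_final(&ctx, digest);   /* 16-byte authentication tag */
  }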
2025-04-16  crypto: powerpc/poly1305 - implement library instead of shash  (Eric Biggers; 2 files, -104/+39)
Currently the Power10 optimized Poly1305 is only wired up to the crypto_shash API, which makes it unavailable to users of the library API. The crypto_shash API for Poly1305 is going to change to be implemented on top of the library API, so the library API needs to be supported. And of course it's needed anyway to serve the library users. Therefore, change the Power10 optimized Poly1305 code to implement the library API instead of the crypto_shash API. Cc: Danny Tsen <dtsen@linux.ibm.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: cbcmac - Set block size properly  (Herbert Xu; 2 files, -2/+2)
The block size of a hash algorithm is meant to be the number of bytes its block function can handle. For cbcmac that should be the block size of the underlying block cipher instead of one. Set the block size of all cbcmac implementations accordingly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: lib/sm3 - Move sm3 library into lib/crypto  (Herbert Xu; 3 files, -4/+4)
Move the sm3 library code into lib/crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: arm64/sha512 - Fix header inclusions  (Herbert Xu; 1 file, -3/+2)
Instead of relying on linux/module.h being included through the header file sha512_base.h, include it directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  x86: Make simd.h more resilient  (Herbert Xu; 1 file, -0/+6)
Add missing header inclusions and protect against double inclusion. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  arm: Make simd.h more resilient  (Herbert Xu; 1 file, -1/+7)
Add missing header inclusions and protect against double inclusion. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: powerpc - Include uaccess.h and others  (Herbert Xu; 5 files, -11/+23)
The powerpc aes/ghash code was relying on pagefault_disable() being pulled in by random header files. Fix this by explicitly including uaccess.h. Also add other missing header files to prevent similar problems in the future. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  arm64: dts: exynos: update all samsung,mode constants  (Ivaylo Ivanov; 5 files, -35/+35)
Update all samsung,mode property values to account for renaming USI_V2 constants to USI_MODE in the bindings. Signed-off-by: Ivaylo Ivanov <ivo.ivanov.ivanov1@gmail.com> Link: https://lore.kernel.org/r/20250209132043.3906127-1-ivo.ivanov.ivanov1@gmail.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2025-04-16  Merge branch 'x86/cpu' into x86/fpu, to pick up dependent commits  (Ingo Molnar; 32 files, -887/+1017)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-04-16  watchdog: diag288_wdt: Implement module autoload  (Heiko Carstens; 5 files, -0/+65)
The s390 specific diag288_wdt watchdog driver makes use of the virtual watchdog timer, which is available in most machine configurations. If executing the diagnose instruction with subcode 0x288 results in an exception the watchdog timer is not available, otherwise it is available. In order to allow module autoload of the diag288_wdt module, move the detection of the virtual watchdog timer to early boot code, and provide its availability as a cpu feature. This allows to make use of module_cpu_feature_match() to automatically load the module iff the virtual watchdog timer is available. Suggested-by: Marc Hartmayer <mhartmay@linux.ibm.com> Tested-by: Mete Durlu <meted@linux.ibm.com> Acked-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20250410095036.1525057-1-hca@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
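A hedged sketch of the autoload hookup; module_cpu_feature_match() generates the module alias that udev matches against the new CPU feature (the s390 feature-constant name is an assumption):

  #include <linux/cpufeature.h>
  #include <linux/module.h>

  /* Load the module automatically iff early boot detected the
   * virtual watchdog timer; feature-constant name assumed. */
  module_cpu_feature_match(S390_CPU_FEATURE_D288, diag288_wdt_init);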
2025-04-16  x86/cpu: Add CPU model number for Bartlett Lake CPUs with Raptor Cove cores  (Pi Xiange; 1 file, -0/+2)
Bartlett Lake has a P-core only product with Raptor Cove. [ mingo: Switched around the define, as pointed out by Christian Ludloff: Raptor Cove is the core, Bartlett Lake is the product. ] Signed-off-by: Pi Xiange <xiange.pi@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Christian Ludloff <ludloff@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: John Ogness <john.ogness@linutronix.de> Cc: "Ahmed S. Darwish" <darwi@linutronix.de> Cc: x86-cpuid@lists.linux.dev Link: https://lore.kernel.org/r/20250414032839.5368-1-xiange.pi@intel.com