path: root/arch
Age         Commit message (Author, files changed, lines -removed/+added)

2025-07-01  ARM: dts: imx28: add pwm7 muxing options (Dario Binacchi, 1 file, -0/+10)

    Add alternative pinmuxing for pwm7.

    Signed-off-by: Dario Binacchi <dario.binacchi@amarulasolutions.com>
    Reviewed-by: Frank Li <Frank.Li@nxp.com>
    Signed-off-by: Shawn Guo <shawnguo@kernel.org>

2025-07-01  x86/sev: Use TSC_FACTOR for Secure TSC frequency calculation (Nikunj A Dadhania, 2 files, -4/+35)

    When using Secure TSC, the GUEST_TSC_FREQ MSR reports a frequency based
    on the nominal P0 frequency, which deviates slightly (typically ~0.2%)
    from the actual mean TSC frequency due to clocking parameters. Over
    extended VM uptime, this discrepancy accumulates, causing clock skew
    between the hypervisor and a SEV-SNP VM, leading to early timer
    interrupts as perceived by the guest.

    The guest kernel relies on the reported nominal frequency for TSC-based
    timekeeping, while the actual frequency set during SNP_LAUNCH_START may
    differ. This mismatch results in inaccurate time calculations, causing
    the guest to perceive hrtimers as firing earlier than expected.

    Utilize the TSC_FACTOR from the SEV firmware's secrets page (see
    "Secrets Page Format" in the SNP Firmware ABI Specification) to
    calculate the mean TSC frequency, ensuring accurate timekeeping and
    mitigating clock skew in SEV-SNP VMs.

    Use early_ioremap_encrypted() to map the secrets page as
    ioremap_encrypted() uses kmalloc() which is not available during early
    TSC initialization and causes a panic.

    [ bp: Drop the silly dummy var:
      https://lore.kernel.org/r/20250630192726.GBaGLlHl84xIopx4Pt@fat_crate.local ]

    Fixes: 73bbf3b0fbba ("x86/tsc: Init the TSC for Secure TSC guests")
    Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
    Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/20250630081858.485187-1-nikunj@amd.com

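    The frequency derivation this describes can be sketched in C as below.
    This is a minimal sketch, not the actual patch: the tsc_factor field
    name, the early_iounmap() pairing, and the encoding of TSC_FACTOR
    (assumed here to be the deviation in units of 0.001%) are assumptions;
    the authoritative definition is the "Secrets Page Format" section of
    the SNP Firmware ABI Specification.

        /*
         * Sketch only. Assumes TSC_FACTOR encodes the deviation from the
         * nominal frequency in units of 0.001% (so ~0.2% => ~200).
         * early_ioremap_encrypted() is used because ioremap_encrypted()
         * depends on kmalloc(), which is unavailable this early in boot.
         */
        static u64 __init snp_mean_tsc_freq_khz(u64 nominal_khz, u64 secrets_pa)
        {
                struct snp_secrets_page *secrets;
                u32 tsc_factor;

                secrets = early_ioremap_encrypted(secrets_pa, PAGE_SIZE);
                if (!secrets)
                        return nominal_khz;

                tsc_factor = secrets->tsc_factor;    /* assumed field name */
                early_iounmap(secrets, PAGE_SIZE);

                /* mean = nominal * (1 - factor / 100000) */
                return nominal_khz - mul_u64_u32_div(nominal_khz, tsc_factor,
                                                     100000);
        }
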
2025-06-30  riscv: dts: thead: Add PVT node (Michal Wilczynski, 1 file, -0/+11)

    Add PVT DT node for thermal sensor.

    Reviewed-by: Drew Fustini <drew@pdp7.com>
    Signed-off-by: Michal Wilczynski <m.wilczynski@samsung.com>
    Signed-off-by: Drew Fustini <drew@pdp7.com>

2025-06-30  riscv: dts: thead: th1520: Add GPU clkgen reset to AON node (Michal Wilczynski, 1 file, -0/+3)

    Add the "gpu-clkgen" reset property to the AON device tree node. This
    allows the AON power domain driver to detect the capability to power
    sequence the GPU and spawn the necessary pwrseq-thead-gpu auxiliary
    driver for managing the GPU's complex power sequence.

    This commit also adds the prerequisite
    dt-bindings/reset/thead,th1520-reset.h include to make the
    TH1520_RESET_ID_GPU_CLKGEN available. This include was previously
    dropped during a conflict resolution [1].

    Link: https://lore.kernel.org/all/aAvfn2mq0Ksi8DF2@x1/ [1]
    Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
    Reviewed-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
    Reviewed-by: Drew Fustini <drew@pdp7.com>
    Signed-off-by: Michal Wilczynski <m.wilczynski@samsung.com>
    Signed-off-by: Drew Fustini <drew@pdp7.com>

2025-06-30  m68k: mm: Convert pointer table macros to use ptdescs (Vishal Moola (Oracle), 1 file, -3/+2)

    Motorola uses its pointer tables for page tables, so its macros should
    be using struct ptdesc, not struct page. This removes a user of
    page->lru.

    Signed-off-by: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Link: https://lore.kernel.org/20250611001255.527952-5-vishal.moola@gmail.com
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>

2025-06-30  m68k: mm: Convert init_pointer_table() to use ptdescs (Vishal Moola (Oracle), 1 file, -6/+6)

    Motorola uses init_pointer_table() for page tables, so it should be
    using struct ptdesc, not struct page. This helps us prepare to allocate
    ptdescs as their own memory descriptor, and prepares to remove a user
    of page->lru.

    Signed-off-by: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Link: https://lore.kernel.org/20250611001255.527952-4-vishal.moola@gmail.com
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>

2025-06-30  m68k: mm: Convert free_pointer_table() to use ptdescs (Vishal Moola (Oracle), 1 file, -7/+6)

    Motorola uses free_pointer_table() for page tables, so it should be
    using struct ptdesc, not struct page. This helps us prepare to allocate
    ptdescs as their own memory descriptor, and prepares to remove a user
    of page->lru.

    Signed-off-by: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Link: https://lore.kernel.org/20250611001255.527952-3-vishal.moola@gmail.com
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>

2025-06-30  m68k: mm: Convert get_pointer_table() to use ptdescs (Vishal Moola (Oracle), 1 file, -11/+15)

    Motorola uses get_pointer_table() for page tables, so it should be
    using struct ptdesc, not struct page. This helps us prepare to allocate
    ptdescs as their own memory descriptor, and prepares to remove a user
    of page->lru.

    Signed-off-by: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Link: https://lore.kernel.org/20250611001255.527952-2-vishal.moola@gmail.com
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>

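    All four conversions in this series follow the same pattern: page
    tables are allocated and linked through struct ptdesc and its pt_list
    member rather than struct page and page->lru. A minimal sketch of that
    pattern (alloc_pointer_block() is a hypothetical name, not the m68k
    function):

        /* Hypothetical helper showing the page -> ptdesc conversion. */
        static void *alloc_pointer_block(struct list_head *tables)
        {
                struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);

                if (!ptdesc)
                        return NULL;

                /* ptdesc->pt_list replaces the old page->lru linkage */
                list_add_tail(&ptdesc->pt_list, tables);
                return ptdesc_address(ptdesc);
        }
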
2025-06-30  arm: dts: omap: Add support for BeagleBone Green Eco board (Kory Maincent, 2 files, -0/+170)

    SeeedStudio BeagleBone Green Eco (BBGE) is a clone of the BeagleBone
    Green (BBG). It has minor differences from the BBG, such as a different
    PMIC, a different Ethernet PHY, and a larger eMMC.

    Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
    Reviewed-by: Andreas Kemnade <andreas@kemnade.info>
    Tested-by: Judith Mendez <jm@ti.com>
    Link: https://lore.kernel.org/r/20250620-bbg-v5-3-84f9b9a2e3a8@bootlin.com
    Signed-off-by: Kevin Hilman <khilman@baylibre.com>

2025-06-30  arm: dts: omap: am335x-bone-common: Rename tps to generic pmic node (Kory Maincent, 1 file, -1/+1)

    Rename tps@24 to the generic pmic@24 node name.

    Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
    Reviewed-by: Andreas Kemnade <andreas@kemnade.info>
    Tested-by: Judith Mendez <jm@ti.com>
    Link: https://lore.kernel.org/r/20250620-bbg-v5-1-84f9b9a2e3a8@bootlin.com
    Signed-off-by: Kevin Hilman <khilman@baylibre.com>

2025-06-30  entry: Split generic entry into generic exception and syscall entry (Jinjie Ruan, 1 file, -0/+9)

    Currently CONFIG_GENERIC_ENTRY enables both the generic exception entry
    logic and the generic syscall entry logic, which are otherwise loosely
    coupled. Introduce separate config options for these so that
    architectures can select the two independently.

    This will make it easier for architectures to migrate to generic entry
    code.

    Suggested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Link: https://lore.kernel.org/20250213130007.1418890-2-ruanjinjie@huawei.com
    Link: https://lore.kernel.org/all/20250624-generic-entry-split-v1-1-53d5ef4f94df@linaro.org
    [Linus Walleij: rebase onto v6.16-rc1]

2025-06-30  arm64/mm: Elide tlbi in contpte_convert() under BBML2 (Mikołaj Lenczewski, 1 file, -1/+138)

    When converting a region via contpte_convert() to use mTHP, we have two
    different goals. We have to mark each entry as contiguous, and we would
    like to smear the dirty and young (access) bits across all entries in
    the contiguous block. Currently, we do this by first accumulating the
    dirty and young bits in the block, using an atomic
    __ptep_get_and_clear() and the relevant pte_{dirty,young}() calls,
    performing a tlbi, and finally smearing the correct bits across the
    block using __set_ptes().

    This approach works fine for BBM level 0, but with support for BBM
    level 2 we are allowed to reorder the tlbi to after setting the
    pagetable entries. We expect the time cost of a tlbi to be much greater
    than the cost of clearing and resetting the PTEs. As such, this
    reordering of the tlbi outside the window where our PTEs are invalid
    greatly reduces the duration the PTEs are visibly invalid for other
    threads. This reduces the likelihood of a concurrent page walk finding
    an invalid PTE, reducing the likelihood of a fault in other threads,
    and improving performance (more so when there are more threads).

    Because we support, via an allowlist, only BBML2 implementations that
    never raise conflict aborts and instead invalidate the tlb entries
    automatically in hardware, we can avoid the final flush altogether.
    However, avoiding the intermediate tlbi+dsb must be carefully
    considered to ensure that we remain both correct and performant. We
    document our reasoning and the expected interactions further in the
    contpte_convert() source. To do so we rely on the aarch64 spec (DDI
    0487L.a D8.7.1.1) requirements RNGLXZ and RJQQTC to provide guarantees
    that the elision is correct.

    Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Link: https://lore.kernel.org/r/20250625113435.26849-5-miko.lenczewski@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

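    The reordering can be sketched as follows. This is illustrative only:
    accumulate_dirty_young() and flush_range() are hypothetical stand-ins
    for the accumulation logic and the TLB range flush described above,
    and system_supports_bbml2_noabort() is assumed to be the capability
    check added by this series.

        static void contpte_convert_sketch(struct mm_struct *mm,
                                           unsigned long addr,
                                           pte_t *ptep, pte_t pte)
        {
                int i;

                /* Accumulate dirty/young from each entry while clearing it. */
                for (i = 0; i < CONT_PTES; i++)
                        pte = accumulate_dirty_young(pte,
                                __ptep_get_and_clear(mm, addr + i * PAGE_SIZE,
                                                     ptep + i));

                /* Old behaviour: flush while the PTEs are still invalid. */
                if (!system_supports_bbml2_noabort())
                        flush_range(mm, addr, CONT_PTES * PAGE_SIZE);

                /* Write back the smeared, contiguous entries. */
                __set_ptes(mm, addr, ptep, pte_mkcont(pte), CONT_PTES);

                /*
                 * Under BBML2-noabort the final flush is elided entirely:
                 * the allowlisted implementations invalidate stale TLB
                 * entries in hardware instead of raising conflict aborts.
                 */
        }
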
2025-06-30  arm64: Add BBM Level 2 cpu feature (Mikołaj Lenczewski, 3 files, -0/+44)

    The Break-Before-Make cpu feature supports multiple levels (levels
    0-2), and this commit adds a dedicated cpufeature for testing BBML2
    support. To support BBML2 in as wide a range of contexts as we can, we
    want not only the architectural guarantees that BBML2 makes, but
    additionally want BBML2 to not create TLB conflict aborts. Not causing
    aborts avoids us having to prove that no recursive faults can be
    induced in any path that uses BBML2, allowing its use for arbitrary
    kernel mappings.

    This feature builds on the previous
    ARM64_CPUCAP_EARLY_LOCAL_CPU_FEATURE, as all early cpus must support
    BBML2 for us to enable it (and any later cpus must also support it to
    be onlined). Not onlining late cpus that do not support BBML2 is
    unavoidable, as we might currently be using BBML2 semantics for kernel
    memory regions. This could cause faults in the late cpus, and would be
    difficult to unwind, so let us avoid the case altogether.

    Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
    Link: https://lore.kernel.org/r/20250625113435.26849-3-miko.lenczewski@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2025-06-30  arm64: cpufeature: Introduce MATCH_ALL_EARLY_CPUS capability type (Catalin Marinas, 2 files, -11/+72)

    For system-wide capabilities, the kernel has the SCOPE_SYSTEM type.
    Such capabilities are checked once the SMP boot has completed using the
    sanitised ID registers. However, there is a need for a new capability
    type similar in scope to the system one but with checking performed
    locally on each CPU during boot (e.g. based on MIDR_EL1, which is not a
    sanitised register).

    Introduce ARM64_CPUCAP_MATCH_ALL_EARLY_CPUS which, together with
    ARM64_CPUCAP_SCOPE_LOCAL_CPU, ensures that such a capability is enabled
    only if all early CPUs have it. For ease of use, define
    ARM64_CPUCAP_EARLY_LOCAL_CPU_FEATURE, which combines SCOPE_LOCAL_CPU,
    PERMITTED_FOR_LATE_CPUS and MATCH_ALL_EARLY_CPUS.

    Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
    Reviewed-by: Suzuki K Poulose <Suzuki.Poulose@arm.com>
    Link: https://lore.kernel.org/r/20250625113435.26849-2-miko.lenczewski@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

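    Going by the description alone, the combined capability type presumably
    reduces to a flag combination along these lines (a sketch; the exact
    spelling of the constituent flags in arch/arm64 may differ):

        /* Sketch of the combined type described in the commit message. */
        #define ARM64_CPUCAP_EARLY_LOCAL_CPU_FEATURE            \
                (ARM64_CPUCAP_SCOPE_LOCAL_CPU |                 \
                 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU |          \
                 ARM64_CPUCAP_MATCH_ALL_EARLY_CPUS)
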
2025-06-30  lib/crc: x86: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 12 files, -1438/+0)

    Move the x86-optimized CRC code from arch/x86/lib/crc* into its new
    location in lib/crc/x86/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-12-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: sparc: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 4 files, -116/+0)

    Move the sparc-optimized CRC code from arch/sparc/lib/crc* into its new
    location in lib/crc/sparc/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-11-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: s390: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 6 files, -507/+0)

    Move the s390-optimized CRC code from arch/s390/lib/crc* into its new
    location in lib/crc/s390/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-10-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: riscv: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 13 files, -620/+0)

    Move the riscv-optimized CRC code from arch/riscv/lib/crc* into its new
    location in lib/crc/riscv/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-9-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: powerpc: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 7 files, -2617/+0)

    Move the powerpc-optimized CRC code from arch/powerpc/lib/crc* into its
    new location in lib/crc/powerpc/, and wire it up in the new way. This
    new way of organizing the CRC code eliminates the need to artificially
    split the code for each CRC variant into separate arch and generic
    modules, enabling better inlining and dead code elimination. For more
    details, see "lib/crc: Prepare for arch-optimized code in subdirs of
    lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-8-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: mips: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 3 files, -186/+0)

    Move the mips-optimized CRC code from arch/mips/lib/crc* into its new
    location in lib/crc/mips/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-7-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: loongarch: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 3 files, -139/+0)

    Move the loongarch-optimized CRC code from arch/loongarch/lib/crc* into
    its new location in lib/crc/loongarch/, and wire it up in the new way.
    This new way of organizing the CRC code eliminates the need to
    artificially split the code for each CRC variant into separate arch and
    generic modules, enabling better inlining and dead code elimination.
    For more details, see "lib/crc: Prepare for arch-optimized code in
    subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-6-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: arm64: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 6 files, -1011/+0)

    Move the arm64-optimized CRC code from arch/arm64/lib/crc* into its new
    location in lib/crc/arm64/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-5-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crc: arm: Migrate optimized CRC code into lib/crc/ (Eric Biggers, 6 files, -977/+0)

    Move the arm-optimized CRC code from arch/arm/lib/crc* into its new
    location in lib/crc/arm/, and wire it up in the new way. This new way
    of organizing the CRC code eliminates the need to artificially split
    the code for each CRC variant into separate arch and generic modules,
    enabling better inlining and dead code elimination. For more details,
    see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

    Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
    Link: https://lore.kernel.org/r/20250607200454.73587-4-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  crypto: stm32 - remove crc32 and crc32c support (Eric Biggers, 1 file, -1/+0)

    Remove the crc32 and crc32c support from the stm32 driver. Since it's
    not wired up to the CRC library, almost no CRC user in the kernel can
    actually be taking advantage of it, so it's effectively dead code.

    Support for this hardware could be migrated to the CRC library, but
    there doesn't seem to be much point. This CRC engine is present only on
    a couple older SoCs that lacked CRC instructions. Even for those SoCs,
    it probably wouldn't be worthwhile. This driver has to deal with things
    like locking and runtime power management that do not exist in software
    CRC code and are a source of bugs (as is clear from the commit log) and
    add significant overhead to the processing of short messages, which are
    common. The patch that added this driver seemed to justify it based
    purely on a microbenchmark on Cortex-M7 on long messages, not a real
    use case. These days, if this driver were to be used at all it would
    likely be on Cortex-A7 instead. This CRC engine is also not supported
    by QEMU, making the driver not easily testable.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
    Cc: Fabien Dessenne <fabien.dessenne@foss.st.com>
    Cc: Lionel Debieve <lionel.debieve@foss.st.com>
    Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
    Cc: linux-stm32@st-md-mailman.stormreply.com
    Link: https://lore.kernel.org/r/20250601193441.6913-1-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  x86/crc: drop checks of CONFIG_AS_VPCLMULQDQ (Eric Biggers, 2 files, -9/+1)

    Now that the minimum binutils version supports VPCLMULQDQ (and the
    minimum clang version does too), there is no need to check for
    assembler support before compiling code that uses these instructions.

    Link: https://lore.kernel.org/r/20250531211318.83677-1-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: x86: Move arch/x86/lib/crypto/ into lib/crypto/ (Eric Biggers, 18 files, -9666/+4)

    Move the contents of arch/x86/lib/crypto/ into lib/crypto/x86/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Add a gitignore entry for the removed directory arch/x86/lib/crypto/ so
    that people don't accidentally commit leftover generated files.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-9-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: sparc: Move arch/sparc/lib/crypto/ into lib/crypto/ (Eric Biggers, 5 files, -155/+0)

    Move the contents of arch/sparc/lib/crypto/ into lib/crypto/sparc/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-8-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: s390: Move arch/s390/lib/crypto/ into lib/crypto/ (Eric Biggers, 7 files, -1046/+0)

    Move the contents of arch/s390/lib/crypto/ into lib/crypto/s390/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-7-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: riscv: Move arch/riscv/lib/crypto/ into lib/crypto/ (Eric Biggers, 7 files, -688/+0)

    Move the contents of arch/riscv/lib/crypto/ into lib/crypto/riscv/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Palmer Dabbelt <palmer@dabbelt.com>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-6-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: powerpc: Move arch/powerpc/lib/crypto/ into lib/crypto/ (Eric Biggers, 9 files, -2533/+0)

    Move the contents of arch/powerpc/lib/crypto/ into lib/crypto/powerpc/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-5-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: mips: Move arch/mips/lib/crypto/ into lib/crypto/ (Eric Biggers, 9 files, -1867/+4)

    Move the contents of arch/mips/lib/crypto/ into lib/crypto/mips/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Add a gitignore entry for the removed directory arch/mips/lib/crypto/
    so that people don't accidentally commit leftover generated files.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-4-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: arm64: Move arch/arm64/lib/crypto/ into lib/crypto/ (Eric Biggers, 12 files, -2961/+4)

    Move the contents of arch/arm64/lib/crypto/ into lib/crypto/arm64/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Add a gitignore entry for the removed directory arch/arm64/lib/crypto/
    so that people don't accidentally commit leftover generated files.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-3-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: arm: Move arch/arm/lib/crypto/ into lib/crypto/ (Eric Biggers, 15 files, -3833/+4)

    Move the contents of arch/arm/lib/crypto/ into lib/crypto/arm/.

    The new code organization makes a lot more sense for how this code
    actually works and is developed. In particular, it makes it possible to
    build each algorithm as a single module, with better inlining and dead
    code elimination. For a more detailed explanation, see the patchset
    which did this for the CRC library code:
    https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/.
    Also see the patchset which did this for SHA-512:
    https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/

    This is just a preparatory commit, which does the move to get the files
    into their new location but keeps them building the same way as before.
    Later commits will make the actual improvements to the way the
    arch-optimized code is integrated for each algorithm.

    Add a gitignore entry for the removed directory arch/arm/lib/crypto/ so
    that people don't accidentally commit leftover generated files.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
    Link: https://lore.kernel.org/r/20250619191908.134235-2-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: x86/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 6 files, -1936/+0)

    Instead of exposing the x86-optimized SHA-512 code via x86-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions x86-optimized, and it fixes the longstanding issue
    where the x86-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    To match sha512_blocks(), change the type of the nblocks parameter of
    the assembly functions from int to size_t. The assembly functions
    actually already treated it as size_t.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-15-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

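    The per-arch glue added by this and the following sha512 migrations
    follows roughly one shape, sketched below for x86. The sha512_blocks()
    and sha512_blocks_generic() names come from the series itself;
    have_sha512_insns and sha512_transform_asm() are illustrative
    placeholders for the arch's static key and assembly entry point.

        static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha512_insns);

        /* Rough sketch of an arch sha512_blocks() hook, not real code. */
        static void sha512_blocks(struct sha512_block_state *state,
                                  const u8 *data, size_t nblocks)
        {
                if (static_branch_likely(&have_sha512_insns) &&
                    irq_fpu_usable()) {
                        kernel_fpu_begin();
                        sha512_transform_asm(state, data, nblocks);
                        kernel_fpu_end();
                } else {
                        sha512_blocks_generic(state, data, nblocks);
                }
        }
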
2025-06-30  lib/crypto: sparc/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 4 files, -236/+0)

    Instead of exposing the sparc-optimized SHA-512 code via sparc-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions sparc-optimized, and it fixes the longstanding issue
    where the sparc-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    To match sha512_blocks(), change the type of the nblocks parameter of
    the assembly function from int to size_t. The assembly function
    actually already treated it as size_t.

    Note: to see the diff from arch/sparc/crypto/sha512_glue.c to
    lib/crypto/sparc/sha512.h, view this commit with 'git show -M10'.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-14-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: s390/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 5 files, -164/+0)

    Instead of exposing the s390-optimized SHA-512 code via s390-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions s390-optimized, and it fixes the longstanding issue
    where the s390-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-13-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: riscv/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 4 files, -348/+0)

    Instead of exposing the riscv-optimized SHA-512 code via riscv-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions riscv-optimized, and it fixes the longstanding issue
    where the riscv-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    To match sha512_blocks(), change the type of the nblocks parameter of
    the assembly function from int to size_t. The assembly function
    actually already treated it as size_t.

    Note: to see the diff from arch/riscv/crypto/sha512-riscv64-glue.c to
    lib/crypto/riscv/sha512.h, view this commit with 'git show -M10'.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-12-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: mips/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 4 files, -178/+0)

    Instead of exposing the mips-optimized SHA-512 code via mips-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions mips-optimized, and it fixes the longstanding issue
    where the mips-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    Note: to see the diff from
    arch/mips/cavium-octeon/crypto/octeon-sha512.c to
    lib/crypto/mips/sha512.h, view this commit with 'git show -M10'.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-11-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  mips: cavium-octeon: Move octeon-crypto.h into asm directory (Eric Biggers, 6 files, -10/+5)

    Since arch/mips/cavium-octeon/crypto/octeon-crypto.h is now needed
    outside of its directory, move it to
    arch/mips/include/asm/octeon/crypto.h so that it can be included as
    <asm/octeon/crypto.h>.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-10-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: arm64/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 6 files, -419/+0)

    Instead of exposing the arm64-optimized SHA-512 code via arm64-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions arm64-optimized, and it fixes the longstanding issue
    where the arm64-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    To match sha512_blocks(), change the type of the nblocks parameter of
    the assembly functions from int or 'unsigned int' to size_t. Update the
    ARMv8 CE assembly function accordingly. The scalar assembly function
    actually already treated it as size_t.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-9-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  lib/crypto: arm/sha512: Migrate optimized SHA-512 code to library (Eric Biggers, 11 files, -875/+0)

    Instead of exposing the arm-optimized SHA-512 code via arm-specific
    crypto_shash algorithms, just implement the sha512_blocks() library
    function. This is much simpler, it makes the SHA-512 (and SHA-384)
    library functions arm-optimized, and it fixes the longstanding issue
    where the arm-optimized SHA-512 code was disabled by default. SHA-512
    still remains available through crypto_shash, but individual
    architectures no longer need to handle it.

    To match sha512_blocks(), change the type of the nblocks parameter of
    the assembly functions from int to size_t. The assembly functions
    actually already treated it as size_t.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-8-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  crypto: riscv/sha512 - Stop depending on sha512_generic_block_fn (Eric Biggers, 2 files, -1/+8)

    sha512_generic_block_fn() will no longer be available when the SHA-512
    support in the old-school crypto API is changed to just wrap the
    SHA-512 library. Replace the use of sha512_generic_block_fn() in
    sha512-riscv64-glue.c with temporary code that uses the library's
    __sha512_update().

    This is just a temporary workaround to keep the kernel building and
    functional at each commit; this code gets superseded when the RISC-V
    optimized SHA-512 is migrated to lib/crypto/ anyway.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-5-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  crypto: sha512 - Rename conflicting symbols (Eric Biggers, 4 files, -20/+20)

    Rename existing functions and structs in architecture-optimized SHA-512
    code that had names conflicting with the upcoming library interface
    which will be added to <crypto/sha2.h>: sha384_init, sha512_init,
    sha512_update, sha384, and sha512.

    Note: all affected code will be superseded by later commits that
    migrate the arch-optimized SHA-512 code into the library. This commit
    simply keeps the kernel building for the initial introduction of the
    library.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20250630160320.2888-2-ebiggers@kernel.org
    Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  s390/smp: Remove conditional emergency signal order code usage (Heiko Carstens, 1 file, -4/+1)

    pcpu_ec_call() uses either the external call or emergency signal order
    code to signal (aka send an IPI) to a remote CPU. If the remote CPU is
    not running, the emergency signal order is used.

    Measurements show that always using the external call order code is at
    least as good, and sometimes even better, than the existing code.
    Therefore remove emergency signal order code usage from pcpu_ec_call().

    Suggested-by: Christian Borntraeger <borntraeger@linux.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
    Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
    Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
    Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>

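    The resulting function presumably reduces to something like this (a
    sketch based on the description above, not the verified s390 diff):

        static void pcpu_ec_call(struct pcpu *pcpu, int ec_bit)
        {
                if (test_and_set_bit(ec_bit, &pcpu->ec_mask))
                        return;
                pcpu->ec_clk = get_tod_clock_fast();
                /* previously SIGP_EMERGENCY_SIGNAL when !pcpu_running() */
                pcpu_sigp_retry(pcpu, SIGP_EXTERNAL_CALL, 0);
        }
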
2025-06-30  Move FCH header to a location accessible by all archs (Mario Limonciello, 2 files, -14/+1)

    A new header fch.h was created to store registers used by different AMD
    drivers. This header was included by i2c-piix4 in commit 624b0d5696a8
    ("i2c: piix4, x86/platform: Move the SB800 PIIX4 FCH definitions to
    <asm/amd/fch.h>"). To prevent compile failures on non-x86 archs,
    i2c-piix4 was set to only compile on x86 by commit 7e173eb82ae9717
    ("i2c: piix4: Make CONFIG_I2C_PIIX4 dependent on CONFIG_X86").

    This was not a good decision because loongarch and mips both actually
    support i2c-piix4 and enable it in their defconfigs. Move the header to
    a location accessible by all architectures.

    Fixes: 624b0d5696a89 ("i2c: piix4, x86/platform: Move the SB800 PIIX4 FCH definitions to <asm/amd/fch.h>")
    Suggested-by: Hans de Goede <hansg@kernel.org>
    Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
    Reviewed-by: Hans de Goede <hansg@kernel.org>
    Link: https://lore.kernel.org/r/20250610205817.3912944-1-superm1@kernel.org
    Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
    Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

2025-06-30  arm64: dts: rockchip: Enable eMMC HS200 mode on Radxa E20C (Jonas Karlman, 1 file, -0/+1)

    eMMC HS200 mode (1.8V I/O) is supported by the MMC host controller on
    RK3528 and works with the optional on-board eMMC module on Radxa E20C.
    Be explicit about HS200 support in the device tree for Radxa E20C.

    Fixes: 3a01b5f14a8a ("arm64: dts: rockchip: Enable onboard eMMC on Radxa E20C")
    Signed-off-by: Jonas Karlman <jonas@kwiboo.se>
    Link: https://lore.kernel.org/r/20250621165832.2226160-1-jonas@kwiboo.se
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>

2025-06-30  arm64: dts: rockchip: Add bluetooth support to ArmSoM Sige7 (Jianfeng Liu, 1 file, -0/+36)

    ArmSoM Sige7 has an onboard AP6275P Wi-Fi 6 (PCIe) and BT5 (UART)
    module, similar to the one on the Khadas Edge2. This commit enables
    Bluetooth on uart6.

    Signed-off-by: Jianfeng Liu <liujianfeng1994@gmail.com>
    Link: https://lore.kernel.org/r/20250621135319.61766-1-liujianfeng1994@gmail.com
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>

2025-06-30  arm64: dts: rockchip: enable PCIe on ROCK 4D (Nicolas Frattaroli, 1 file, -0/+15)

    The Radxa ROCK 4D board has a PCIe controller connected to a flat flex
    connector, compatible with the one the RPi5 uses. Enable the associated
    combphy and PCIe controller nodes, and add the remaining pinctrl
    definition for the reset signal.

    Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
    Link: https://lore.kernel.org/r/20250621-rk3576-rock4d-pcie-v1-1-2b33c9f12955@collabora.com
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>

2025-06-30  arm64: dts: rockchip: Enable HDMI receiver on CM3588 (Valentin Hăloiu, 1 file, -0/+17)

    Enable support for the HDMI input port found on FriendlyElec CM3588 and
    CM3588 Plus.

    Signed-off-by: Valentin Hăloiu <valentin.haloiu@gmail.com>
    Link: https://lore.kernel.org/r/20250622185814.35031-1-valentin.haloiu@gmail.com
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>

2025-06-30  arm64: dts: rockchip: Add HDMI PHY PLL clock source to VOP2 on rk3576 (Cristian Ciocaltea, 1 file, -2/+4)

    Since commit c871a311edf0 ("phy: rockchip: samsung-hdptx: Setup TMDS
    char rate via phy_configure_opts_hdmi"), the workaround of passing the
    rate from DW HDMI QP bridge driver via phy_set_bus_width() became
    partially broken, as it cannot reliably handle mode switches anymore.

    Attempting to fix this up at PHY level would not only introduce
    additional hacks, but it would also fail to adequately resolve the
    display issues that are a consequence of the system CRU limitations.

    Instead, proceed with the solution already implemented for RK3588: make
    use of the HDMI PHY PLL as a better suited DCLK source for VOP2. This
    will not only address the aforementioned problem, but it should also
    facilitate the proper operation of display modes up to 4K@60Hz.

    It's worth noting that anything above 4K@30Hz still requires high TMDS
    clock ratio and scrambling support, which hasn't been mainlined yet.

    Fixes: d74b842cab08 ("arm64: dts: rockchip: Add vop for rk3576")
    Cc: stable@vger.kernel.org
    Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
    Tested-by: Detlev Casanova <detlev.casanova@collabora.com>
    Tested-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
    Link: https://lore.kernel.org/r/20250612-rk3576-hdmitx-fix-v1-3-4b11007d8675@collabora.com
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>