path: root/lib
2025-07-14  lib/crypto: tests: Add KUnit tests for Poly1305  (Eric Biggers, 4 files, -0/+361)

Add a KUnit test suite for the Poly1305 functions. Most of its test cases are instantiated from hash-test-template.h, which is also used by the SHA-2 tests. A couple additional test cases are also included to test edge cases specific to Poly1305.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250709200112.258500-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: tests: Add KUnit tests for SHA-384 and SHA-512  (Eric Biggers, 6 files, -0/+723)

Add KUnit test suites for the SHA-384 and SHA-512 library functions, including the corresponding HMAC support. The core test logic is in the previously-added hash-test-template.h. This commit just adds the actual KUnit suites, and it adds the generated test vectors to the tree so that gen-hash-testvecs.py won't have to be run at build time.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250709200112.258500-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: tests: Add KUnit tests for SHA-224 and SHA-256  (Eric Biggers, 8 files, -0/+590)

Add KUnit test suites for the SHA-224 and SHA-256 library functions, including the corresponding HMAC support. The core test logic is in the previously-added hash-test-template.h. This commit just adds the actual KUnit suites, and it adds the generated test vectors to the tree so that gen-hash-testvecs.py won't have to be run at build time.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250709200112.258500-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: tests: Add hash-test-template.h and gen-hash-testvecs.py  (Eric Biggers, 1 file, -0/+683)

Add hash-test-template.h which generates the following KUnit test cases for hash functions:

    test_hash_test_vectors
    test_hash_all_lens_up_to_4096
    test_hash_incremental_updates
    test_hash_buffer_overruns
    test_hash_overlaps
    test_hash_alignment_consistency
    test_hash_ctx_zeroization
    test_hash_interrupt_context_1
    test_hash_interrupt_context_2
    test_hmac (when HMAC is supported)
    benchmark_hash (when CONFIG_CRYPTO_LIB_BENCHMARK=y)

The initial use cases for this will be sha224_kunit, sha256_kunit, sha384_kunit, sha512_kunit, and poly1305_kunit.

Add a Python script gen-hash-testvecs.py which generates the test vectors required by test_hash_test_vectors, test_hash_all_lens_up_to_4096, and test_hmac.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250709200112.258500-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: x86/sha1: Migrate optimized code into library  (Eric Biggers, 6 files, -0/+1625)

Instead of exposing the x86-optimized SHA-1 code via x86-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be x86-optimized, and it fixes the longstanding issue where the x86-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

To match sha1_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-14-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

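The same migration pattern repeats for the other architectures below: the arch code stops registering crypto_shash algorithms and instead supplies only the block-compression hook that the SHA-1 library calls. A minimal sketch of the shape of such an override; the state-struct name, static key, and assembly symbol are illustrative assumptions rather than identifiers taken from the tree:

    /*
     * Hedged sketch of an arch override in lib/crypto/<arch>/sha1.h.
     * have_sha1_insns and sha1_transform_arch() are hypothetical names;
     * the FPU helpers shown are the x86 ones.
     */
    static void sha1_blocks(struct sha1_block_state *state,
                            const u8 *data, size_t nblocks)
    {
            if (static_branch_likely(&have_sha1_insns) && irq_fpu_usable()) {
                    kernel_fpu_begin();
                    sha1_transform_arch(state, data, nblocks);  /* optimized asm */
                    kernel_fpu_end();
            } else {
                    sha1_blocks_generic(state, data, nblocks);  /* portable C */
            }
    }
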
2025-07-14  lib/crypto: sparc/sha1: Migrate optimized code into library  (Eric Biggers, 4 files, -0/+117)

Instead of exposing the sparc-optimized SHA-1 code via sparc-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be sparc-optimized, and it fixes the longstanding issue where the sparc-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Note: to see the diff from arch/sparc/crypto/sha1_glue.c to lib/crypto/sparc/sha1.h, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: s390/sha1: Migrate optimized code into library  (Eric Biggers, 2 files, -0/+29)

Instead of exposing the s390-optimized SHA-1 code via s390-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be s390-optimized, and it fixes the longstanding issue where the s390-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: powerpc/sha1: Migrate optimized code into library  (Eric Biggers, 5 files, -0/+554)

Instead of exposing the powerpc-optimized SHA-1 code via powerpc-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be powerpc-optimized, and it fixes the longstanding issue where the powerpc-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Note: to see the diff from arch/powerpc/crypto/sha1-spe-glue.c to lib/crypto/powerpc/sha1.h, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: mips/sha1: Migrate optimized code into library  (Eric Biggers, 2 files, -0/+82)

Instead of exposing the mips-optimized SHA-1 code via mips-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be mips-optimized, and it fixes the longstanding issue where the mips-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Note: to see the diff from arch/mips/cavium-octeon/crypto/octeon-sha1.c to lib/crypto/mips/sha1.h, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: arm64/sha1: Migrate optimized code into library  (Eric Biggers, 4 files, -0/+171)

Instead of exposing the arm64-optimized SHA-1 code via arm64-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be arm64-optimized, and it fixes the longstanding issue where the arm64-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Remove support for SHA-1 finalization from assembly code, since the library does not yet support architecture-specific overrides of the finalization. (Support for that has been omitted for now, for simplicity and because usually it isn't performance-critical.)

To match sha1_blocks(), change the type of the nblocks parameter and the return value of __sha1_ce_transform() from int to size_t. Update the assembly code accordingly.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: arm/sha1: Migrate optimized code into library  (Eric Biggers, 6 files, -0/+1315)

Instead of exposing the arm-optimized SHA-1 code via arm-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be arm-optimized, and it fixes the longstanding issue where the arm-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.

To match sha1_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: sha1: Add HMAC support  (Eric Biggers, 1 file, -3/+105)

Add HMAC support to the SHA-1 library, again following what was done for SHA-2. Besides providing the basis for a more streamlined "hmac(sha1)" shash, this will also be useful for multiple in-kernel users such as net/sctp/auth.c, net/ipv6/seg6_hmac.c, and security/keys/trusted-keys/trusted_tpm1.c. Those are currently using crypto_shash, but using the library functions would be much simpler.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: sha1: Add SHA-1 library functions  (Eric Biggers, 3 files, -7/+124)

Add a library interface for SHA-1, following the SHA-2 one. As was the case with SHA-2, this will be useful for various in-kernel users. The crypto_shash interface will be reimplemented on top of it as well.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

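A rough sketch of what calling such a library interface looks like for an in-kernel user; the context type and signatures are assumed by analogy with the SHA-2 library rather than copied from the header:

    #include <crypto/sha1.h>

    /* Hedged sketch; exact names/signatures may differ from <crypto/sha1.h>. */
    static void example_sha1_digest(const u8 *msg, size_t len,
                                    u8 out[SHA1_DIGEST_SIZE])
    {
            struct sha1_ctx ctx;

            sha1_init(&ctx);              /* the new library sha1_init() */
            sha1_update(&ctx, msg, len);
            sha1_final(&ctx, out);
            /* One-shot alternative: sha1(msg, len, out); */
    }
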
2025-07-14  lib/crypto: sha1: Rename sha1_init() to sha1_init_raw()  (Eric Biggers, 1 file, -3/+3)

Rename the existing sha1_init() to sha1_init_raw(), since it conflicts with the upcoming library function. This will later be removed, but this keeps the kernel building for the introduction of the library.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  lib/crypto: sha2: Add hmac_sha*_init_usingrawkey()  (Eric Biggers, 2 files, -34/+74)

While the HMAC library functions support both incremental and one-shot computation and both prepared and raw keys, the combination of raw key + incremental was missing. It turns out that several potential users of the HMAC library functions (tpm2-sessions.c, smb2transport.c, trusted_tpm1.c) want exactly that. Therefore, add the missing functions hmac_sha*_init_usingrawkey(). Implement them in an optimized way that directly initializes the HMAC context without a separate key preparation step.

Reimplement the one-shot raw key functions hmac_sha*_usingrawkey() on top of the new functions, which makes them a bit more efficient.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250711215844.41715-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

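A minimal sketch of the raw key + incremental combination this adds; the function names come from the commit text, while the exact signatures are assumed by analogy with the existing HMAC-SHA256 helpers:

    /* Hedged sketch; raw_key/hdr/payload are caller-supplied buffers. */
    struct hmac_sha256_ctx ctx;
    u8 mac[SHA256_DIGEST_SIZE];

    hmac_sha256_init_usingrawkey(&ctx, raw_key, raw_key_len);
    hmac_sha256_update(&ctx, hdr, hdr_len);
    hmac_sha256_update(&ctx, payload, payload_len);
    hmac_sha256_final(&ctx, mac);
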
2025-07-14  lib/crypto: arm/poly1305: Remove unneeded empty weak function  (Eric Biggers, 2 files, -6/+1)

Fix poly1305-armv4.pl to not do '.globl poly1305_blocks_neon' when poly1305_blocks_neon() is not defined. Then, remove the empty __weak definition of poly1305_blocks_neon(), which was still needed only because of that unnecessary globl statement. (It also used to be needed because the compiler could generate calls to it when CONFIG_KERNEL_MODE_NEON=n, but that has been fixed.)

Thanks to Arnd Bergmann for reporting that the globl statement in the asm file was still depending on the weak symbol.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250711212822.6372-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-14  Merge branch 'tip/sched/urgent'  (Peter Zijlstra, 10 files, -56/+90)

Avoid merge conflicts

Signed-off-by: Peter Zijlstra <peterz@infradead.org>

2025-07-14  lib/test_vmalloc.c: introduce xfail for failing tests  (Raghavendra K T, 1 file, -15/+21)

The test align_shift_alloc_test is expected to fail. Reporting it as a failure can be mistaken for a genuine failure. Introduce the widely used xfail semantics to address the issue.

Note: a warn_alloc dump similar to the below is still expected:

    Call Trace:
     <TASK>
     dump_stack_lvl+0x64/0x80
     warn_alloc+0x137/0x1b0
     ? __get_vm_area_node+0x134/0x140

Snippet of dmesg after the change:

    Summary: random_size_align_alloc_test passed: 1 failed: 0 xfailed: 0
    ..
    Summary: align_shift_alloc_test passed: 0 failed: 0 xfailed: 1
    ..
    Summary: pcpu_alloc_test passed: 1 failed: 0 xfailed: 0
    ..

Link: https://lkml.kernel.org/r/20250702064319.885-1-raghavendra.kt@amd.com
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
Reviewed-by: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Dev Jain <dev.jain@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-14  maple tree: add some comments  (Dev Jain, 1 file, -1/+7)

Add comments explaining the fields of maple_metadata, since "end" is ambiguous and "gap" can be confused with the largest gap, whereas it is actually the offset of the largest gap.

Add a comment for mas_ascend() to explain whose min and max we are trying to find. Explain that, for example, if we are already on offset zero, then the parent min is mas->min; otherwise we need to walk up to find the implied pivot min.

Link: https://lkml.kernel.org/r/20250703063338.51509-1-dev.jain@arm.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-13  Merge branch 'mm-hotfixes-stable' into mm-stable to pick up changes which are required for a merge of the series "mm: folio_pte_batch() improvements"  (Andrew Morton, 2 files, -0/+4)

2025-07-12  lib/crypto: x86/poly1305: Fix performance regression on short messages  (Eric Biggers, 1 file, -0/+8)

Restore the len >= 288 condition on using the AVX implementation, which was incidentally removed by commit 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface"). This check took into account the overhead in key power computation, kernel-mode "FPU", and tail handling associated with the AVX code. Indeed, restoring this check slightly improves performance for len < 256 as measured using poly1305_kunit on an "AMD Ryzen AI 9 365" (Zen 5) CPU:

    Length  Before       After
    ======  ==========   ==========
         1      30 MB/s      36 MB/s
        16     516 MB/s     598 MB/s
        64    1700 MB/s    1882 MB/s
       127    2265 MB/s    2651 MB/s
       128    2457 MB/s    2827 MB/s
       200    2702 MB/s    3238 MB/s
       256    3841 MB/s    3768 MB/s
       511    4580 MB/s    4585 MB/s
       512    5430 MB/s    5398 MB/s
      1024    7268 MB/s    7305 MB/s
      3173    8999 MB/s    8948 MB/s
      4096    9942 MB/s    9921 MB/s
     16384   10557 MB/s   10545 MB/s

While the optimal threshold for this CPU might be slightly lower than 288 (see the len == 256 case), other CPUs would need to be tested too, and these sorts of benchmarks can underestimate the true cost of kernel-mode "FPU". Therefore, for now just restore the 288 threshold.

Fixes: 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250706231100.176113-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-12  lib/crypto: x86/poly1305: Fix register corruption in no-SIMD contexts  (Eric Biggers, 1 file, -1/+39)

Restore the SIMD usability check and base conversion that were removed by commit 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface"). This safety check is cheap and is well worth eliminating a footgun. While the Poly1305 functions should not be called when SIMD registers are unusable, if they are anyway, they should just do the right thing instead of corrupting random tasks' registers and/or computing incorrect MACs. Fixing this is also needed for poly1305_kunit to pass.

Just use irq_fpu_usable() instead of the original crypto_simd_usable(), since poly1305_kunit won't rely on crypto_simd_disabled_for_test.

Fixes: 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250706231100.176113-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

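Taken together with the previous entry, the dispatch in front of the AVX code now looks roughly like the sketch below; the threshold placement and block-function names are illustrative, not the exact symbols in arch/x86:

    /* Hedged sketch of the combined length + SIMD-usability dispatch. */
    if (len < 288 || !irq_fpu_usable()) {
            /* Scalar path: always safe, and cheaper for short messages. */
            poly1305_blocks_scalar(state, inp, len, padbit);
    } else {
            kernel_fpu_begin();
            poly1305_blocks_avx(state, inp, len, padbit);
            kernel_fpu_end();
    }
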
2025-07-12  lib/crypto: arm64/poly1305: Fix register corruption in no-SIMD contexts  (Eric Biggers, 1 file, -1/+2)

Restore the SIMD usability check that was removed by commit a59e5468a921 ("crypto: arm64/poly1305 - Add block-only interface"). This safety check is cheap and is well worth eliminating a footgun. While the Poly1305 functions should not be called when SIMD registers are unusable, if they are anyway, they should just do the right thing instead of corrupting random tasks' registers and/or computing incorrect MACs. Fixing this is also needed for poly1305_kunit to pass.

Just use may_use_simd() instead of the original crypto_simd_usable(), since poly1305_kunit won't rely on crypto_simd_disabled_for_test.

Fixes: a59e5468a921 ("crypto: arm64/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250706231100.176113-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-12  lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts  (Eric Biggers, 1 file, -1/+2)

Restore the SIMD usability check that was removed by commit 773426f4771b ("crypto: arm/poly1305 - Add block-only interface"). This safety check is cheap and is well worth eliminating a footgun. While the Poly1305 functions should not be called when SIMD registers are unusable, if they are anyway, they should just do the right thing instead of corrupting random tasks' registers and/or computing incorrect MACs. Fixing this is also needed for poly1305_kunit to pass.

Just use may_use_simd() instead of the original crypto_simd_usable(), since poly1305_kunit won't rely on crypto_simd_disabled_for_test.

Fixes: 773426f4771b ("crypto: arm/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250706231100.176113-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-10  kunit: fix longest symbol length test  (Sergio González Collado, 2 files, -2/+2)

The KUnit test that checks the longest symbol length [1] has triggered warnings in some pipelines when symbol prefixes are used [2][3]. Make the test depend on !PREFIX_SYMBOLS and !CFI_CLANG, as suggested in [4], and on !GCOV_KERNEL.

[1] https://lore.kernel.org/rust-for-linux/CABVgOSm=5Q0fM6neBhxSbOUHBgNzmwf2V22vsYC10YRBT=kN1g@mail.gmail.com/T/#t
[2] https://lore.kernel.org/all/20250328112156.2614513-1-arnd@kernel.org/T/#u
[3] https://lore.kernel.org/rust-for-linux/bbd03b37-c4d9-4a92-9be2-75aaf8c19815@infradead.org/T/#t
[4] https://lore.kernel.org/linux-kselftest/20250427200916.GA1661412@ax162/T/#t

Link: https://lore.kernel.org/r/20250706201855.232451-1-sergio.collado@gmail.com
Reviewed-by: Rae Moar <rmoar@google.com>
Signed-off-by: Sergio González Collado <sergio.collado@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>

2025-07-10  lib/smp_processor_id: Make migration check unconditional of SMP  (Sebastian Andrzej Siewior, 1 file, -2/+0)

Commit cac5cefbade90 ("sched/smp: Make SMP unconditional") made migrate_disable() take effect even on UP builds. Commit 06ddd17521bf1 ("sched/smp: Always define is_percpu_thread() and scheduler_ipi()") made is_percpu_thread() check the affinity mask instead of always returning true on UP. As a consequence, smp_processor_id() now complains if invoked within a migrate_disable() section, because is_percpu_thread() checks its mask while the migration check is left out.

Make the migration check unconditional of SMP.

Fixes: cac5cefbade90 ("sched/smp: Make SMP unconditional")
Closes: https://lore.kernel.org/oe-lkp/202507100448.6b88d6f1-lkp@intel.com
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Link: https://lore.kernel.org/r/20250710082748.-DPO1rjO@linutronix.de

2025-07-10  lib: test_objagg: split test_hints_case() into two functions  (Arnd Bergmann, 1 file, -32/+45)

With sanitizers enabled, this function uses a lot of stack, causing a harmless warning:

    lib/test_objagg.c: In function 'test_hints_case.constprop':
    lib/test_objagg.c:994:1: error: the frame size of 1440 bytes is larger than 1408 bytes [-Werror=frame-larger-than=]

Most of this is from the two 'struct world' structures. Since most of the work in this function is duplicated for the two, split it up into separate functions that each use one of them. The combined stack usage is still the same here, but there is no warning any more, and the code is still safe because of the known call chain.

Link: https://lkml.kernel.org/r/20250620111907.3395296-1-arnd@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  lib/raid6: replace custom zero page with ZERO_PAGE  (Herbert Xu, 8 files, -27/+24)

Use the system-wide zero page instead of a custom zero page.

[herbert@gondor.apana.org.au: update lib/raid6/recov_rvv.c, per Klara]
Link: https://lkml.kernel.org/r/aFkUnXWtxcgOTVkw@gondor.apana.org.au
Link: https://lkml.kernel.org/r/Z9flJNkWQICx0PXk@gondor.apana.org.au
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Song Liu <song@kernel.org>
Cc: Yu Kuai <yukuai3@huawei.com>
Cc: Klara Modin <klarasmodin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  mul_u64_u64_div_u64: fix the division-by-zero behavior  (Nicolas Pitre, 1 file, -6/+7)

The current implementation forces a compile-time 1/0 division, which generates an undefined instruction (ud2 on x86) rather than a proper runtime division-by-zero exception. Change to trigger an actual div-by-0 exception at runtime, consistent with other division operations. Use a non-1 dividend to prevent the compiler from optimizing the division into a comparison.

Link: https://lkml.kernel.org/r/q246p466-1453-qon9-29so-37105116009q@onlyvoer.pbz
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Weißschuh <linux@weissschuh.net>
Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com>
Cc: David Laight <david.laight.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  maple_tree: add testing for restoring maple state to active  (Liam R. Howlett, 1 file, -0/+32)

Restoring maple status to ma_active on overflow/underflow when mas->node was NULL could have happened in the past, but was masked by a bug in mas_walk(). Add test cases that triggered the bug when the node was mas->node prior to fixing the maple state setup. Add a few extra tests around restoring the active maple status.

Link: https://lore.kernel.org/all/202506191556.6bfc7b93-lkp@intel.com/
Link: https://lkml.kernel.org/r/20250624154823.52221-2-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  maple_tree: fix status setup on restore to active  (Liam R. Howlett, 1 file, -7/+18)

During the initial call with a maple state, an error status may be set before a valid node is populated into the maple state node. Subsequent calls with the maple state may restore the state into an active state with no node set. This was masked by mas_walk() always resetting the status to ma_start, resulting in an extra walk in this rare scenario.

Don't restore the state to active unless there is a value in the struct's node. This also allows mas_walk() to be fixed to use the active state without exposing an issue.

User visible results are marginal performance improvements when an active state can be restored and used instead of rewalking the tree.

Stable is not Cc'ed because the existing code is stable and the performance gains are not worth the risk.

Link: https://lore.kernel.org/all/20250611011253.19515-1-richard.weiyang@gmail.com/
Link: https://lore.kernel.org/all/20250407231354.11771-1-richard.weiyang@gmail.com/
Link: https://lore.kernel.org/all/202506191556.6bfc7b93-lkp@intel.com/
Link: https://lkml.kernel.org/r/20250624154823.52221-1-Liam.Howlett@oracle.com
Fixes: a8091f039c1e ("maple_tree: add MAS_UNDERFLOW and MAS_OVERFLOW states")
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: Wei Yang <richard.weiyang@gmail.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202506191556.6bfc7b93-lkp@intel.com
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  lib/test_vmalloc.c: restrict default test mask to avoid test warnings  (Uladzislau Rezki (Sony), 1 file, -1/+1)

When the vmalloc test is built into the kernel, it runs automatically during the boot. The current default "run_test_mask" includes all test cases, including those which are designed to fail and which trigger kernel warnings. These kernel splats can be misinterpreted as actual kernel bugs, leading to false alarms and unnecessary reports.

To address this, limit the default test mask to only the first few tests which are expected to pass cleanly. These tests are safe and should not generate any warnings unless there is a real bug. Users who wish to explicitly run specific test cases have to pass the run_test_mask as a boot parameter or at module load time.

Link: https://lkml.kernel.org/r/20250623184035.581229-2-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: David Wang <00107082@163.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  lib/test_vmalloc.c: use late_initcall() if built-in for init ordering  (Uladzislau Rezki (Sony), 1 file, -0/+4)

When the vmalloc test code is compiled as a built-in, use late_initcall() instead of module_init() to defer a vmalloc test execution until most subsystems are up and running. It avoids interfering with components that may not yet be initialized at module_init() time. For example, there was a recent report of memory profiling infrastructure not being ready early enough, leading to a kernel crash.

By using late_initcall() in the built-in case, we ensure the tests are run at a safer point during a boot sequence.

Link: https://lkml.kernel.org/r/20250623184035.581229-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: David Wang <00107082@163.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

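A minimal sketch of the init-ordering change described here; the init function name is a placeholder rather than the exact symbol in lib/test_vmalloc.c:

    /* Hedged sketch: defer the built-in case, keep module_init() for modules. */
    #ifdef MODULE
    module_init(vmalloc_test_init);
    #else
    late_initcall(vmalloc_test_init);
    #endif
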
2025-07-10  maple tree: use goto label to simplify code  (Dev Jain, 1 file, -5/+2)

Use the underflow goto label to set the status to ma_underflow and return NULL, as is being done elsewhere.

[akpm@linux-foundation.org: add newline, per Liam (and remove one, per akpm)]
Link: https://lkml.kernel.org/r/20250624080748.4855-1-dev.jain@arm.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  mm/percpu: conditionally define _shared_alloc_tag via CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU  (Hao Ge, 1 file, -0/+2)

Recently discovered this entry while checking kallsyms on ARM64:

    ffff800083e509c0 D _shared_alloc_tag

If ARCH_NEEDS_WEAK_PER_CPU is not defined (it is only defined for the s390 and alpha architectures), there's no need to statically define the percpu variable _shared_alloc_tag. Therefore, we need to implement isolation for this purpose.

When building the core kernel code for s390 or alpha architectures, ARCH_NEEDS_WEAK_PER_CPU remains undefined (as it is gated by #if defined(MODULE)). However, when building modules for these architectures, the macro is explicitly defined. Therefore, remove all instances of ARCH_NEEDS_WEAK_PER_CPU from the code and introduce CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU to replace the relevant logic. We can now conditionally define the percpu variable _shared_alloc_tag based on CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU. This allows architectures (such as s390/alpha) that require weak definitions for percpu variables in modules to include the definition, while others can omit it via compile-time exclusion.

Link: https://lkml.kernel.org/r/20250618015809.1235761-1-hao.ge@linux.dev
Signed-off-by: Hao Ge <gehao@kylinos.cn>
Suggested-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> [s390]
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Chistoph Lameter <cl@linux.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  alloc_tag: keep codetag iterator active between read()  (David Wang, 1 file, -19/+10)

When reading /proc/allocinfo, for each read syscall seq_file invokes the start/stop callbacks. In the start callback, memory is allocated to store the iterator, and the iterator walks linearly from the beginning to the current read position. A seq_file read() takes at most 4096 bytes, even if called with a larger user space buffer, meaning that reading out all of /proc/allocinfo takes tens of read syscalls. For example, a 306036-byte allocinfo file needs 76 reads:

    $ sudo cat /proc/allocinfo | wc
     3964 16678 306036
    $ sudo strace -T -e read cat /proc/allocinfo
    ...
    read(3, " 4096 1 arch/x86/k"..., 131072) = 4063 <0.000062>
    ...
    read(3, " 0 0 sound/core"..., 131072) = 4021 <0.000150>
    ...

For those n=3964 lines, each read takes about m=3964/76=52 lines. Since the iterator restarts from the beginning for each read(), it moves forward

    m steps on the 1st read
    2*m steps on the 2nd read
    3*m steps on the 3rd read
    ...
    n steps on the last read

As read() goes along, those linear seek steps make read() calls slower and slower. Adding those up, the codetag iterator moves about O(n*n/m) steps, making data structure traversal a significant part of the whole read. Profiling when stress reading /proc/allocinfo confirms it:

    vfs_read(99.959% 1677299/1677995)
      proc_reg_read_iter(99.856% 1674881/1677299)
        seq_read_iter(99.959% 1674191/1674881)
          allocinfo_start(75.664% 1266755/1674191)
            codetag_next_ct(79.217% 1003487/1266755) <---
            srso_return_thunk(1.264% 16011/1266755)
            __kmalloc_cache_noprof(0.102% 1296/1266755)
            ...
          allocinfo_show(21.287% 356378/1674191)
          allocinfo_next(1.530% 25621/1674191)

codetag_next_ct() takes the major part. Private data allocated at open() time can be used to carry the iterator alive across read() calls, avoiding the memory allocation and iterator reset for each read(). This way, only O(1) memory allocation and O(n) iteration steps are needed, and `time` shows a performance improvement from ~7ms to ~4ms. Profiling with the change:

    vfs_read(99.865% 1581073/1583214)
      proc_reg_read_iter(99.485% 1572934/1581073)
        seq_read_iter(99.846% 1570519/1572934)
          allocinfo_show(87.428% 1373074/1570519)
            seq_buf_printf(83.695% 1149196/1373074)
            seq_buf_putc(1.917% 26321/1373074)
            _find_next_bit(1.531% 21023/1373074)
            ...
            codetag_to_text(0.490% 6727/1373074)
            ...
          allocinfo_next(6.275% 98543/1570519)
          ...
          allocinfo_start(0.369% 5790/1570519)
          ...

Now seq_buf_printf() takes the major part.

Link: https://lkml.kernel.org/r/20250609064408.112783-1-00107082@163.com
Signed-off-by: David Wang <00107082@163.com>
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

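A hedged sketch of the approach described above; the private-data struct and its fields are illustrative, and the helper names are borrowed from the profile output, so the details may not match the actual lib/alloc_tag.c code:

    struct allocinfo_private {
            struct codetag_iterator iter;   /* kept alive across read() calls */
            loff_t pos;                     /* entry the iterator points at   */
    };

    static void *allocinfo_start(struct seq_file *m, loff_t *pos)
    {
            struct allocinfo_private *priv = m->private;  /* set up at open() */

            /* Usually a no-op or a few steps instead of a walk from the head. */
            while (priv->pos < *pos) {
                    if (!codetag_next_ct(&priv->iter))
                            return NULL;
                    priv->pos++;
            }
            return priv;
    }
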
2025-07-10  alloc_tag: add sequence number for module and iterator  (David Wang, 1 file, -3/+14)

The codetag iterator uses an <id,address> pair to guarantee validity. But both the id and the address can be reused: there is a theoretical possibility that, when a module is inserted right after another module is removed, kmalloc returns the same address that was kfree'd by the previous module and the IDR reuses the recently removed key.

Add a sequence number to codetag_module and codetag_iterator; the sequence number is strictly incremented whenever a module is loaded. An iterator is valid if and only if its sequence number matches codetag_module's.

Link: https://lkml.kernel.org/r/20250609064200.112639-1-00107082@163.com
Signed-off-by: David Wang <00107082@163.com>
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

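A hedged sketch of the validity rule being described; the field names and the helper are illustrative, not the actual codetag definitions:

    /* Bumped (under the codetag lock) every time a module is loaded. */
    static unsigned long codetag_load_seq;

    static bool iter_is_valid(const struct codetag_iterator *iter,
                              const struct codetag_module *cmod)
    {
            /*
             * Both the IDR id and the kmalloc'ed address may be recycled by
             * a later module load; the strictly increasing sequence number
             * cannot, so a mismatch means the iterator is stale.
             */
            return iter->mod_seq == cmod->seq;
    }
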
2025-07-10  lib/test_hmm: reduce stack usage  (Arnd Bergmann, 1 file, -7/+7)

The various test ioctl handlers use arrays of 64 integers that add up to 1KiB of stack data, which in turn leads to exceeding the warning limit in some configurations:

    lib/test_hmm.c:935:12: error: stack frame size (1408) exceeds limit (1280) in 'dmirror_migrate_to_device' [-Werror,-Wframe-larger-than]

Use half the size for these arrays, in order to stay under the warning limits. The code can already deal with arbitrary lengths, but this may be a little less efficient.

Link: https://lkml.kernel.org/r/20250610092159.2639515-1-arnd@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jeff Johnson <jeff.johnson@oss.qualcomm.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  xarray: add a BUG_ON() to ensure caller is not sibling  (Dev Jain, 1 file, -0/+3)

Suppose xas is pointing somewhere near the end of the multi-entry batch. Then it may happen that the computed slot already falls beyond the batch, thus breaking the loop due to !xa_is_sibling(), and computing the wrong order. For example, suppose we have a shift-6 node having an order-9 entry => 8 - 1 = 7 siblings, so assume the slots are at offset 0 till 7 in this node. If xas->xa_offset is 6, then the code will compute order as 1 + xas->xa_node->shift = 7.

Therefore, the order computation must start from the beginning of the multi-slot entries, that is, the non-sibling entry. Thus ensure that the caller is aware of this by triggering a BUG when the entry is a sibling entry. Note that this BUG_ON() is only active while running selftests, so there is no overhead in a running kernel.

Link: https://lkml.kernel.org/r/20250604041533.91198-1-dev.jain@arm.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  maple_tree: fix mt_destroy_walk() on root leaf node  (Wei Yang, 1 file, -0/+1)

On destroy, we should set each node dead. But the current code misses this when the maple tree has only the root node. The reason is that mt_destroy_walk() leverages mte_destroy_descend() to set nodes dead, but this is skipped since the only root node is a leaf. Fix this by setting the node dead if it is a leaf.

Link: https://lore.kernel.org/all/20250407231354.11771-1-richard.weiyang@gmail.com/
Link: https://lkml.kernel.org/r/20250624191841.64682-1-Liam.Howlett@oracle.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-07-10  lib/alloc_tag: do not acquire non-existent lock in alloc_tag_top_users()  (Harry Yoo, 1 file, -0/+3)

alloc_tag_top_users() attempts to lock alloc_tag_cttype->mod_lock even when the alloc_tag_cttype is not allocated because:

 1) alloc tagging is disabled because mem profiling is disabled (!alloc_tag_cttype)
 2) alloc tagging is enabled, but not yet initialized (!alloc_tag_cttype)
 3) alloc tagging is enabled, but failed initialization (!alloc_tag_cttype or IS_ERR(alloc_tag_cttype))

In all cases, alloc_tag_cttype is not allocated, and therefore alloc_tag_top_users() should not attempt to acquire the semaphore. This leads to a crash on memory allocation failure by attempting to acquire a non-existent semaphore:

    Oops: general protection fault, probably for non-canonical address 0xdffffc000000001b: 0000 [#3] SMP KASAN NOPTI
    KASAN: null-ptr-deref in range [0x00000000000000d8-0x00000000000000df]
    CPU: 2 UID: 0 PID: 1 Comm: systemd Tainted: G D 6.16.0-rc2 #1 VOLUNTARY
    Tainted: [D]=DIE
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
    RIP: 0010:down_read_trylock+0xaa/0x3b0
    Code: d0 7c 08 84 d2 0f 85 a0 02 00 00 8b 0d df 31 dd 04 85 c9 75 29 48 b8 00 00 00 00 00 fc ff df 48 8d 6b 68 48 89 ea 48 c1 ea 03 <80> 3c 02 00 0f 85 88 02 00 00 48 3b 5b 68 0f 85 53 01 00 00 65 ff
    RSP: 0000:ffff8881002ce9b8 EFLAGS: 00010016
    RAX: dffffc0000000000 RBX: 0000000000000070 RCX: 0000000000000000
    RDX: 000000000000001b RSI: 000000000000000a RDI: 0000000000000070
    RBP: 00000000000000d8 R08: 0000000000000001 R09: ffffed107dde49d1
    R10: ffff8883eef24e8b R11: ffff8881002cec20 R12: 1ffff11020059d37
    R13: 00000000003fff7b R14: ffff8881002cec20 R15: dffffc0000000000
    FS:  00007f963f21d940(0000) GS:ffff888458ca6000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007f963f5edf71 CR3: 000000010672c000 CR4: 0000000000350ef0
    Call Trace:
     <TASK>
     codetag_trylock_module_list+0xd/0x20
     alloc_tag_top_users+0x369/0x4b0
     __show_mem+0x1cd/0x6e0
     warn_alloc+0x2b1/0x390
     __alloc_frozen_pages_noprof+0x12b9/0x21a0
     alloc_pages_mpol+0x135/0x3e0
     alloc_slab_page+0x82/0xe0
     new_slab+0x212/0x240
     ___slab_alloc+0x82a/0xe00
     </TASK>

As David Wang points out, this issue became easier to trigger after commit 780138b12381 ("alloc_tag: check mem_profiling_support in alloc_tag_init"). Before the commit, the issue occurred only when it failed to allocate and initialize alloc_tag_cttype or if a memory allocation fails before alloc_tag_init() is called. After the commit, it can be easily triggered when memory profiling is compiled in but disabled at boot.

To properly determine whether alloc_tag_init() has been called and its data structures initialized, verify that alloc_tag_cttype is a valid pointer before acquiring the semaphore. If the variable is NULL or an error value, it has not been properly initialized. In such a case, just skip and do not attempt to acquire the semaphore.

[harry.yoo@oracle.com: v3]
Link: https://lkml.kernel.org/r/20250624072513.84219-1-harry.yoo@oracle.com
Link: https://lkml.kernel.org/r/20250620195305.1115151-1-harry.yoo@oracle.com
Fixes: 780138b12381 ("alloc_tag: check mem_profiling_support in alloc_tag_init")
Fixes: 1438d349d16b ("lib: add memory allocations report in show_mem()")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202506181351.bba867dd-lkp@intel.com
Acked-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
Cc: Casey Chen <cachen@purestorage.com>
Cc: David Wang <00107082@163.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Yuanyuan Zhong <yzhong@purestorage.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

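A hedged sketch of the kind of guard described above; the exact placement and the surrounding locking call are assumptions, only the "bail out if alloc_tag_cttype is NULL or an error value" check is what the text describes:

    /* Bail out early if memory profiling never got (successfully) set up. */
    if (IS_ERR_OR_NULL(alloc_tag_cttype))
            return 0;

    /* Only now is it safe to take alloc_tag_cttype->mod_lock. */
    if (!codetag_trylock_module_list(alloc_tag_cttype))
            return 0;
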
2025-07-09  vdso/gettimeofday: Return bool from clock_getres() helpers  (Thomas Weißschuh, 1 file, -11/+13)

The internal helpers are effectively using boolean results, while pretending to use error numbers. Switch the return type to bool for more clarity.

Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250701-vdso-auxclock-v1-5-df7d9f87b9b8@linutronix.de

2025-07-09  bitmap: generalize node_random()  (Yury Norov [NVIDIA], 1 file, -0/+24)

Generalize node_random() and make it available to general bitmaps and cpumasks users. Notice, find_first_bit() is generally faster than find_nth_bit(), and we employ it when there's a single set bit in the bitmap. See commit 3e061d924fe9c7b4 ("lib/nodemask: optimize node_random for nodemask with single NUMA node").

CC: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: "Yury Norov [NVIDIA]" <yury.norov@gmail.com>

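A hedged sketch of the generalized pattern; the helper name is hypothetical, and only the find_first_bit()/find_nth_bit() split is what the commit text describes:

    /* Pick a uniformly random set bit, or >= bits if the bitmap is empty. */
    static inline unsigned int bitmap_random_bit(const unsigned long *map,
                                                 unsigned int bits)
    {
            unsigned int w = bitmap_weight(map, bits);

            switch (w) {
            case 0:
                    return bits;                       /* no set bits */
            case 1:
                    return find_first_bit(map, bits);  /* faster than find_nth_bit() */
            default:
                    return find_nth_bit(map, bits, get_random_u32_below(w));
            }
    }
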
2025-07-08  lib/crypto: hash_info: Move hash_info.c into lib/crypto/  (Eric Biggers, 3 files, -0/+68)

crypto/hash_info.c just contains a couple of arrays that map HASH_ALGO_* algorithm IDs to properties of those algorithms. It is compiled only when CRYPTO_HASH_INFO=y, but currently CRYPTO_HASH_INFO depends on CRYPTO. Since this can be useful without the old-school crypto API, move it into lib/crypto/ so that it no longer depends on CRYPTO. This eliminates the need for FS_VERITY to select CRYPTO after it's been converted to use lib/crypto/.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630172224.46909-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-04  lib/crypto: x86/sha256: Remove unnecessary checks for nblocks==0  (Eric Biggers, 4 files, -10/+0)

Since sha256_blocks() is called only with nblocks >= 1, remove unnecessary checks for nblocks == 0 from the x86 SHA-256 assembly code.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250704023958.73274-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-04  lib/crypto: x86/sha256: Move static_call above kernel-mode FPU section  (Eric Biggers, 5 files, -34/+25)

As I did for sha512_blocks(), reorganize x86's sha256_blocks() to be just a static_call. To achieve that, for each assembly function add a C function that handles the kernel-mode FPU section and fallback. While this increases total code size slightly, the amount of code actually executed on a given system does not increase, and it is slightly more efficient since it eliminates the extra static_key. It also makes the assembly functions be called with standard direct calls instead of static calls, eliminating the need for ANNOTATE_NOENDBR.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250704023958.73274-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

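A hedged sketch of the reorganization described: each assembly routine gets a small C wrapper that owns the kernel-mode FPU section and the fallback, and sha256_blocks() reduces to a static_call. The specific function, key, and struct names here are illustrative:

    /* Wrapper owning the FPU section and the no-SIMD fallback. */
    static void sha256_blocks_ni(struct sha256_block_state *state,
                                 const u8 *data, size_t nblocks)
    {
            if (likely(irq_fpu_usable())) {
                    kernel_fpu_begin();
                    sha256_ni_transform(state, data, nblocks);  /* SHA-NI asm */
                    kernel_fpu_end();
            } else {
                    sha256_blocks_generic(state, data, nblocks);
            }
    }

    DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_blocks_generic);

    static void sha256_blocks(struct sha256_block_state *state,
                              const u8 *data, size_t nblocks)
    {
            static_call(sha256_blocks_x86)(state, data, nblocks);
    }

    /*
     * At init time, e.g. in sha256_mod_init_arch():
     *   if (boot_cpu_has(X86_FEATURE_SHA_NI))
     *       static_call_update(sha256_blocks_x86, sha256_blocks_ni);
     */
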
2025-07-04  lib/crypto: sha256: Sync sha256_update() with sha512_update()  (Eric Biggers, 1 file, -3/+25)

The BLOCK_HASH_UPDATE_BLOCKS macro is difficult to read. For now, let's just write the update explicitly in the straightforward way, mirroring sha512_update(). It's possible that we'll bring back a macro for this later, but it needs to be properly justified and hopefully a bit more readable.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-14-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-04  lib/crypto: sha256: Consolidate into single module  (Eric Biggers, 26 files, -390/+281)

Consolidate the CPU-based SHA-256 code into a single module, following what I did with SHA-512:

- Each arch now provides a header file lib/crypto/$(SRCARCH)/sha256.h, replacing lib/crypto/$(SRCARCH)/sha256.c. The header defines sha256_blocks() and optionally sha256_mod_init_arch(). It is included by lib/crypto/sha256.c, and thus the code gets built into the single libsha256 module, with proper inlining and dead code elimination.

- sha256_blocks_generic() is moved from lib/crypto/sha256-generic.c into lib/crypto/sha256.c. It's now a static function marked with __maybe_unused, so the compiler automatically eliminates it in any cases where it's not used.

- Whether arch-optimized SHA-256 is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default.

- Any additional arch-specific translation units for the optimized SHA-256 code (such as assembly files) are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-04  lib/crypto: sha256: Remove sha256_is_arch_optimized()  (Eric Biggers, 7 files, -44/+0)

Remove sha256_is_arch_optimized(), since it is no longer used.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-07-04  lib/crypto: sha256: Add HMAC-SHA224 and HMAC-SHA256 support  (Eric Biggers, 1 file, -5/+142)

Since HMAC support is commonly needed and is fairly simple, include it as a first-class citizen of the SHA-256 library. The API supports both incremental and one-shot computation, and either preparing the key ahead of time or just using a raw key. The implementation is much more streamlined than crypto/hmac.c. I've kept it consistent with the HMAC-SHA384 and HMAC-SHA512 code as much as possible.

Testing of these functions will be via sha224_kunit and sha256_kunit, added by a later commit.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

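A short sketch of the two one-shot usage styles the text describes, raw key versus prepared key; the signatures are assumed by analogy with the HMAC-SHA384/HMAC-SHA512 helpers mentioned above, not verified against <crypto/sha2.h>:

    /* Hedged sketch; raw_key and msg are caller-supplied buffers. */
    u8 mac[SHA256_DIGEST_SIZE];

    /* One-shot with a raw key: */
    hmac_sha256_usingrawkey(raw_key, raw_key_len, msg, msg_len, mac);

    /* Or prepare the key once and reuse it for many messages: */
    struct hmac_sha256_key key;

    hmac_sha256_preparekey(&key, raw_key, raw_key_len);
    hmac_sha256(&key, msg, msg_len, mac);
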