path: root/crypto
2019-12-27  crypto: api - Retain alg refcount in crypto_grab_spawn  (Herbert Xu, 1 file, -8/+40)

This patch changes crypto_grab_spawn to retain the reference count on the algorithm. This is because the caller needs to access the algorithm parameters, and without the reference count the algorithm can be freed at any time. The reference count will subsequently be dropped by the crypto API once the instance has been registered. The helper crypto_drop_spawn will also conditionally drop the reference count depending on whether it has been registered.

Note that the code is actually added to crypto_init_spawn. However, unless the caller activates this by setting spawn->dropref beforehand, nothing happens. The only caller that sets dropref is currently crypto_grab_spawn. Once all legacy users of crypto_init_spawn disappear, we can kill the dropref flag.

Internally each instance will maintain a list of its spawns prior to registration. The memory used by this list is shared with other fields that are only used after registration. In order for this to work, a new flag spawn->registered is added to indicate whether spawn->inst can be used.

Fixes: d6ef2f198d4c ("crypto: api - Add crypto_grab_spawn primitive")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

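For illustration, a minimal sketch of the conditional drop described above (locking and list handling simplified; this is not the exact kernel source):

    /* Sketch: drop the reference taken by crypto_grab_spawn() unless
     * registration has already handed ownership to the crypto API. */
    void crypto_drop_spawn(struct crypto_spawn *spawn)
    {
        down_write(&crypto_alg_sem);
        if (!spawn->dead)
            list_del(&spawn->list);
        up_write(&crypto_alg_sem);

        /* dropref is only set by crypto_grab_spawn(); legacy
         * crypto_init_spawn() callers leave it clear, so for
         * them nothing changes. */
        if (spawn->dropref && !spawn->registered)
            crypto_mod_put(spawn->alg);
    }
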
2019-12-20  crypto: algapi - make unregistration functions return void  (Eric Biggers, 6 files, -37/+22)

Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance(), which always returns 0.

Since unregistering a non-registered algorithm is always a kernel bug, but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, moving the error message into crypto_unregister_alg(), and upgrading it to a WARN().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

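The resulting shape, sketched (simplified; the real function also handles instance teardown):

    /* Sketch: unregistration cannot usefully fail, so report the
     * kernel bug via WARN() and return void. */
    void crypto_unregister_alg(struct crypto_alg *alg)
    {
        int ret;
        LIST_HEAD(list);

        down_write(&crypto_alg_sem);
        ret = crypto_remove_alg(alg, &list);
        up_write(&crypto_alg_sem);

        if (WARN(ret, "Algorithm %s is not registered",
                 alg->cra_driver_name))
            return;

        crypto_remove_final(&list);
    }
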
2019-12-20  crypto: api - fix unexpectedly getting generic implementation  (Herbert Xu, 2 files, -4/+24)

When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an algorithm that needs to be instantiated using a template will always get the generic implementation, even when an accelerated one is available. This happens because the extra self-tests for the accelerated implementation allocate the generic implementation for comparison purposes, and then crypto_alg_tested() for the generic implementation "fulfills" the original request (i.e. sets crypto_larval::adult).

This patch fixes this by only fulfilling the original request if we are currently the best outstanding larval as judged by priority. If we're not the best, then we ask all waiters on that larval request to retry the lookup.

Note that this patch introduces a behaviour change when the module providing the new algorithm is unregistered during the process. Previously we would have failed with ENOENT; after the patch we will instead redo the lookup.

Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

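Conceptually, the fulfilment rule looks like this (illustrative pseudocode; larval_is_best() stands in for the priority comparison and is not a real kernel function):

    /* Sketch: only the best outstanding larval fulfils the request;
     * everyone else's waiters go back and redo the lookup. */
    if (larval_is_best(larval, alg))
        larval->adult = crypto_mod_get(alg);    /* fulfil */
    else
        larval->adult = ERR_PTR(-EAGAIN);       /* ask waiters to retry */
    complete_all(&larval->completion);
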
2019-12-11  crypto: hmac - Use init_tfm/exit_tfm interface  (Herbert Xu, 1 file, -13/+7)

This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum.

This patch also adds the API helper shash_alg_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: shash - Add init_tfm/exit_tfm and verify descsize  (Herbert Xu, 1 file, -0/+26)

The shash interface supports a dynamic descsize field because of the presence of fallbacks (it's just padlock-sha actually; perhaps we can remove it one day). As it stands, the API does not verify the setting of descsize at all. It is up to the individual algorithms to ensure that descsize does not exceed the specified maximum value of HASH_MAX_DESCSIZE (going above it would cause stack corruption).

In order to allow the API to impose this limit directly, this patch adds init_tfm/exit_tfm hooks to the shash_alg structure. We can then verify the descsize setting in the API directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

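A sketch of the check this enables at the API level (simplified; the surrounding function name is illustrative):

    /* Sketch: run the instance's init_tfm hook, then reject any
     * descsize that would overflow on-stack shash descriptors. */
    static int shash_init_tfm_sketch(struct crypto_shash *hash)
    {
        struct shash_alg *alg = crypto_shash_alg(hash);

        if (alg->init_tfm) {
            int err = alg->init_tfm(hash);
            if (err)
                return err;
        }

        /* Beyond HASH_MAX_DESCSIZE lies stack corruption. */
        if (WARN_ON(hash->descsize > HASH_MAX_DESCSIZE))
            return -EINVAL;

        return 0;
    }
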
2019-12-11  crypto: api - Add more comments to crypto_remove_spawns  (Herbert Xu, 1 file, -0/+25)

This patch explains the logic behind crypto_remove_spawns and its underlying helper crypto_more_spawns.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: api - Do not zap spawn->alg  (Herbert Xu, 1 file, -10/+12)

Currently when a spawn is removed we will zap its alg field. This is racy because the spawn could belong to an unregistered instance which may dereference the spawn->alg field.

This patch fixes this by keeping spawn->alg constant and instead adding a new spawn->dead field to indicate that a spawn is going away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: api - Fix race condition in crypto_spawn_alg  (Herbert Xu, 3 files, -14/+6)

The function crypto_spawn_alg is racy because it drops the lock before shooting the dying algorithm. The algorithm could disappear altogether before we shoot it.

This patch fixes it by moving the shooting into the locked section.

Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: api - Check spawn->alg under lock in crypto_drop_spawn  (Herbert Xu, 1 file, -4/+2)

We need to check whether spawn->alg is NULL under lock, as otherwise the algorithm could be removed from under us after we have checked it and found it to be non-NULL. This could cause us to remove the spawn from a non-existent list.

Fixes: 7ede5a5ba55a ("crypto: api - Fix crypto_drop_spawn crash...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

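The fixed pattern, sketched (illustrative):

    /* Sketch: both the NULL check and the list removal must happen
     * under crypto_alg_sem, or the alg can vanish in between. */
    down_write(&crypto_alg_sem);
    if (spawn->alg)
        list_del(&spawn->list);
    up_write(&crypto_alg_sem);
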
2019-12-11  crypto: af_alg - Use bh_lock_sock in sk_destruct  (Herbert Xu, 1 file, -2/+4)

As af_alg_release_parent may be called from BH context (most notably due to an async request that only completes after socket closure, or as reported here because of an RCU-delayed sk_destruct call), we must use bh_lock_sock instead of lock_sock.

Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Fixes: c840ac6af3f8 ("crypto: af_alg - Disallow bind/setkey/...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

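The shape of the fix, sketched (the refcount bookkeeping inside af_alg_release_parent is elided):

    /* Sketch: sk_destruct may run in BH context, so the parent
     * socket must be locked with the BH-safe primitive rather
     * than the sleeping lock_sock(). */
    local_bh_disable();
    bh_lock_sock(sk);
    /* ... adjust the parent's reference counts here ... */
    bh_unlock_sock(sk);
    local_bh_enable();
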
2019-12-11  padata: remove cpumask change notifier  (Daniel Jordan, 1 file, -1/+0)

Since commit 63d3578892dc ("crypto: pcrypt - remove padata cpumask notifier") this feature is unused, so get rid of it.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: cipher - remove crt_u.cipher (struct cipher_tfm)  (Eric Biggers, 3 files, -72/+37)

Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey() is pointless because it always points to setkey() in crypto/cipher.c. ->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless, since if the algorithm doesn't have an alignmask, they are set directly to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization" isn't worthwhile because:

- The "cipher" algorithm type is the only algorithm still using crt_u, so it's bloating every struct crypto_tfm for every algorithm type.

- If the algorithm has an alignmask, this "optimization" actually makes things slower, as it causes 2 indirect calls per block rather than 1.

- It adds extra code complexity.

- Some templates already call ->cia_encrypt()/->cia_decrypt() directly instead of going through ->cit_encrypt_one()/->cit_decrypt_one().

- The "cipher" algorithm type never gives optimal performance anyway. For that, a higher-level type such as skcipher needs to be used.

Therefore, just remove the extra indirection, and make crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.

Also remove the unused function crypto_cipher_cast().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

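Roughly what the direct-call helper reduces to (sketch; cipher_crypt_one() names a shared routine in crypto/cipher.c that handles the alignmask case in one place):

    /* Sketch: no per-tfm function pointer any more; the exported
     * helper dispatches straight into crypto/cipher.c. */
    void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
                                   u8 *dst, const u8 *src)
    {
        cipher_crypt_one(tfm, dst, src, true);  /* true = encrypt */
    }
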
2019-12-11  crypto: compress - remove crt_u.compress (struct compress_tfm)  (Eric Biggers, 3 files, -21/+13)

crt_u.compress (struct compress_tfm) is pointless because its two fields, ->cot_compress() and ->cot_decompress(), always point to crypto_compress() and crypto_decompress().

Remove this pointless indirection, and just make crypto_comp_compress() and crypto_comp_decompress() be direct calls to what used to be crypto_compress() and crypto_decompress().

Also remove the unused function crypto_comp_cast().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: testmgr - generate inauthentic AEAD test vectors  (Eric Biggers, 2 files, -73/+261)

The whole point of using an AEAD over length-preserving encryption is that the data is authenticated. However, currently the fuzz tests don't test any inauthentic inputs to verify that the data is actually being authenticated. And only two algorithms ("rfc4543(gcm(aes))" and "ccm(aes)") even have any inauthentic test vectors at all.

Therefore, update the AEAD fuzz tests to sometimes generate inauthentic test vectors, either by generating a (ciphertext, AAD) pair without using the key, or by mutating an authentic pair that was generated. To avoid flakiness, only assume this works reliably if the auth tag is at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp algorithms intentionally ignoring the last 8 AAD bytes, and for some algorithms doing extra checks that result in EINVAL rather than EBADMSG.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

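The mutation half of this, sketched (field names follow testmgr's aead_testvec; the real generator is more elaborate):

    /* Sketch: flip one bit of an authentic (ciphertext, AAD) pair and
     * expect decryption to fail; only rely on this when the auth tag
     * is >= 8 bytes, so false accepts are negligible. */
    if (authsize >= 8) {
        size_t pos = prandom_u32() % (vec->alen + vec->clen);
        u8 *p = pos < vec->alen ?
            (u8 *)&vec->assoc[pos] :
            (u8 *)&vec->ctext[pos - vec->alen];

        *p ^= 1 << (prandom_u32() % 8);
        vec->novrfy = 1;    /* decryption must now fail */
    }
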
2019-12-11  crypto: testmgr - create struct aead_extra_tests_ctx  (Eric Biggers, 1 file, -71/+99)

In preparation for adding inauthentic input fuzz tests, which don't require that a generic implementation of the algorithm be available, refactor test_aead_vs_generic_impl() so that instead there's a higher-level function test_aead_extra() which initializes a struct aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a pointer to that struct. As a bonus, this reduces stack usage.

Also switch from crypto_aead_alg(tfm)->maxauthsize to crypto_aead_maxauthsize(), now that the latter is available in <crypto/aead.h>.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: testmgr - test setting misaligned keys  (Eric Biggers, 1 file, -4/+69)

The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa515 ("crypto: ghash - fix unaligned memory access in ghash_setkey()") wasn't reliably detected by the crypto self-tests on ARM because the tests only set the keys directly from the test vectors.

To improve test coverage, update the tests to sometimes pass misaligned keys to setkey(). This applies to shash, ahash, skcipher, and aead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

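The idea in miniature (sketch; testmgr's actual buffer management differs, and allocation-failure handling is elided):

    /* Sketch: copy the key to a deliberately odd offset inside a
     * larger buffer so setkey() sometimes sees a misaligned pointer. */
    u8 *buf = kmalloc(vec->klen + MAX_ALGAPI_ALIGNMASK, GFP_KERNEL);
    size_t offset = 1 + prandom_u32() % MAX_ALGAPI_ALIGNMASK;

    memcpy(buf + offset, vec->key, vec->klen);
    err = crypto_shash_setkey(tfm, buf + offset, vec->klen);
    kfree(buf);
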
2019-12-11  crypto: testmgr - check skcipher min_keysize  (Eric Biggers, 1 file, -0/+9)

When checking two implementations of the same skcipher algorithm for consistency, require that the minimum key size be the same, not just the maximum key size. There's no good reason to allow different minimum key sizes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: testmgr - don't try to decrypt uninitialized buffers  (Eric Biggers, 1 file, -4/+16)

Currently, if the comparison fuzz tests encounter an encryption error when generating an skcipher or AEAD test vector, they will still test the decryption side (passing it the uninitialized ciphertext buffer) and expect it to fail with the same error. This is sort of broken because it's not well-defined usage of the API to pass an uninitialized buffer, and furthermore in the AEAD case it's acceptable for the decryption error to be EBADMSG (meaning "inauthentic input") even if the encryption error was something else like EINVAL.

Fix this for skcipher by explicitly initializing the ciphertext buffer on error, and for AEAD by skipping the decryption test on error.

Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms  (Eric Biggers, 3 files, -5/+4)

The essiv and hmac templates refuse to use any hash algorithm that has a ->setkey() function, which includes not just algorithms that always need a key, but also algorithms that optionally take a key.

Previously the only optionally-keyed hash algorithms in the crypto API were non-cryptographic algorithms like crc32, so this didn't really matter. But that's changed with BLAKE2 support being added. BLAKE2 should work with essiv and hmac, just like any other cryptographic hash.

Fix this by allowing the use of both algorithms without a ->setkey() function and algorithms that have the OPTIONAL_KEY flag set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

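The acceptance test this implies, sketched (helper name illustrative):

    /* Sketch: a hash is usable by essiv/hmac if it is unkeyed, or if
     * its key is optional; only hashes that require a key (e.g.
     * another hmac) are rejected. */
    static bool shash_usable_unkeyed(struct shash_alg *alg)
    {
        return !alg->setkey ||
               (alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
    }
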
2019-12-11  crypto: skcipher - remove crypto_skcipher_extsize()  (Eric Biggers, 1 file, -6/+1)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher_extsize() now simply calls crypto_alg_extsize(). So remove it and just use crypto_alg_extsize().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::decrypt  (Eric Biggers, 1 file, -3/+1)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::decrypt is now redundant since it always equals crypto_skcipher_alg(tfm)->decrypt. Remove it and update crypto_skcipher_decrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::encrypt  (Eric Biggers, 1 file, -2/+1)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::encrypt is now redundant since it always equals crypto_skcipher_alg(tfm)->encrypt. Remove it and update crypto_skcipher_encrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::setkey  (Eric Biggers, 1 file, -2/+2)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::setkey now always points to skcipher_setkey(). Simplify by removing this function pointer and instead just making skcipher_setkey() be crypto_skcipher_setkey() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::keysize  (Eric Biggers, 2 files, -6/+7)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::keysize is now redundant since it always equals crypto_skcipher_alg(tfm)->max_keysize. Remove it and update crypto_skcipher_default_keysize() accordingly.

Also rename crypto_skcipher_default_keysize() to crypto_skcipher_max_keysize() to clarify that it specifically returns the maximum key size, not some unspecified "default".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::ivsize  (Eric Biggers, 1 file, -1/+0)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::ivsize is now redundant since it always equals crypto_skcipher_alg(tfm)->ivsize. Remove it and update crypto_skcipher_ivsize() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: api - remove another reference to blkcipher  (Eric Biggers, 1 file, -1/+1)

Update a comment to refer to crypto_alloc_skcipher() rather than crypto_alloc_blkcipher() (the latter having been removed).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: pcrypt - Do not clear MAY_SLEEP flag in original request  (Herbert Xu, 1 file, -1/+0)

We should not be modifying the original request's MAY_SLEEP flag upon completion. It makes no sense to do so anyway.

Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: x86 - Regularize glue function prototypes  (Kees Cook, 2 files, -10/+14)

The crypto glue performed function prototype casting via macros to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), switch each prototype to a common standard set of arguments, which allows the removal of the existing macros. In order to keep pointer math unchanged, internal casting between u128 pointers and u8 pointers is added.

Co-developed-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

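A sketch of the regularized shape (types per the description above; glue_one_block() is illustrative):

    /* Sketch: one common prototype for every glue routine, so the
     * indirect call needs no cast at the call site (CFI-friendly);
     * the u128 <-> u8 casts happen once, inside the helper. */
    typedef void (*common_glue_func_t)(const void *ctx,
                                       u8 *dst, const u8 *src);

    static inline void glue_one_block(common_glue_func_t fn,
                                      const void *ctx,
                                      u128 *dst, const u128 *src)
    {
        fn(ctx, (u8 *)dst, (const u8 *)src);
    }
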
2019-12-11  crypto: pcrypt - Avoid deadlock by using per-instance padata queues  (Herbert Xu, 1 file, -3/+33)

If the pcrypt template is used multiple times in an algorithm, then a deadlock occurs because all pcrypt instances share the same padata_instance, which completes requests in the order submitted. That is, the inner pcrypt request waits for the outer pcrypt request while the outer request is already waiting for the inner.

This patch fixes this by allocating a set of queues for each pcrypt instance instead of using two global queues. In order to maintain the existing user-space interface, the pinst structure remains global so any sysfs modifications will apply to every pcrypt instance.

Note that when an update occurs we have to allocate memory for every pcrypt instance. Should one of the allocations fail we will abort the update without rolling back changes already made.

The new per-instance data structure is called padata_shell and is essentially a wrapper around parallel_data.

Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        struct sockaddr_alg addr = {
            .salg_type = "aead",
            .salg_name = "pcrypt(pcrypt(rfc4106-gcm-aesni))"
        };
        int algfd, reqfd;
        char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, 20);
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 32);
        read(reqfd, buf, 16);
    }

Reported-by: syzbot+56c7151cad94eec37c521f0e47d2eee53f9361c4@syzkaller.appspotmail.com
Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: pcrypt - Fix use-after-free on module unload  (Herbert Xu, 1 file, -1/+2)

On module unload of pcrypt we must unregister the crypto algorithms first and then tear down the padata structure. Otherwise the crypto algorithms are still alive and can be used while the padata structure is being freed.

Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-26  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 34 files, -1855/+3302)

Pull crypto updates from Herbert Xu:

 "API:
   - Add library interfaces of certain crypto algorithms for WireGuard
   - Remove the obsolete ablkcipher and blkcipher interfaces
   - Move add_early_randomness() out of rng_mutex

  Algorithms:
   - Add blake2b shash algorithm
   - Add blake2s shash algorithm
   - Add curve25519 kpp algorithm
   - Implement 4 way interleave in arm64/gcm-ce
   - Implement ciphertext stealing in powerpc/spe-xts
   - Add Eric Biggers's scalar accelerated ChaCha code for ARM
   - Add accelerated 32r2 code from Zinc for MIPS
   - Add OpenSSL/CRYPTOGRAMS poly1305 implementation for ARM and MIPS

  Drivers:
   - Fix entropy reading failures in ks-sa
   - Add support for sam9x60 in atmel
   - Add crypto accelerator for amlogic GXL
   - Add sun8i-ce Crypto Engine
   - Add sun8i-ss cryptographic offloader
   - Add a host of algorithms to inside-secure
   - Add NPCM RNG driver
   - add HiSilicon HPRE accelerator
   - Add HiSilicon TRNG driver"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (285 commits)
  crypto: vmx - Avoid weird build failures
  crypto: lib/chacha20poly1305 - use chacha20_crypt()
  crypto: x86/chacha - only unregister algorithms if registered
  crypto: chacha_generic - remove unnecessary setkey() functions
  crypto: amlogic - enable working on big endian kernel
  crypto: sun8i-ce - enable working on big endian
  crypto: mips/chacha - select CRYPTO_SKCIPHER, not CRYPTO_BLKCIPHER
  hwrng: ks-sa - Enable COMPILE_TEST
  crypto: essiv - remove redundant null pointer check before kfree
  crypto: atmel-aes - Change data type for "lastc" buffer
  crypto: atmel-tdes - Set the IV after {en,de}crypt
  crypto: sun4i-ss - fix big endian issues
  crypto: sun4i-ss - hide the Invalid keylen message
  crypto: sun4i-ss - use crypto_ahash_digestsize
  crypto: sun4i-ss - remove dependency on not 64BIT
  crypto: sun4i-ss - Fix 64-bit size_t warnings on sun4i-ss-hash.c
  MAINTAINERS: Add maintainer for HiSilicon SEC V2 driver
  crypto: hisilicon - add DebugFS for HiSilicon SEC
  Documentation: add DebugFS doc for HiSilicon SEC
  crypto: hisilicon - add SRIOV for HiSilicon SEC
  ...

2019-11-22  crypto: chacha_generic - remove unnecessary setkey() functions  (Eric Biggers, 1 file, -15/+3)

Use chacha20_setkey() and chacha12_setkey() from <crypto/internal/chacha.h> instead of defining them again in chacha_generic.c.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: mips/chacha - select CRYPTO_SKCIPHER, not CRYPTO_BLKCIPHER  (Eric Biggers, 1 file, -1/+1)

Another instance of CRYPTO_BLKCIPHER made it in just after it was renamed to CRYPTO_SKCIPHER. Fix it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: essiv - remove redundant null pointer check before kfree  (Chen Wandun, 1 file, -2/+1)

kfree() already handles NULL pointers, so it is safe to remove the unnecessary check.

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - rename tfm context and _setkey callback  (David Sterba, 1 file, -18/+18)

The TFM context can be renamed to a more appropriate name, and the local variables as well, using 'tctx', which seems to be more common than 'mctx'.

The _setkey callback was the last one without the blake2b_ prefix; rename that too.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - merge _update to api callback  (David Sterba, 1 file, -36/+30)

Now that there's only one call to blake2b_update, we can merge it to the callback and simplify. The empty input check is split out and the rest of the code un-indented.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - open code set last block helper  (David Sterba, 1 file, -6/+2)

The helper is trivial and called once; inlining makes things simpler. There's a comment to tie it back to the idea behind the code.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - delete unused structs or members  (David Sterba, 1 file, -30/+0)

All the code for the param block has been inlined; last_node and outlen from the state are not used or have become redundant due to other code. Remove them.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - simplify key init  (David Sterba, 1 file, -8/+6)

The keyed init writes the key bytes to the input buffer and does an update. We can do that in two ways: fill the buffer and update immediately, which is what the current blake2b_init_key does (any following _update or _final will continue from the updated state); or write the key and set the number of bytes to process at the next _update or _final, i.e. lazy evaluation. The latter leads to the simplified code in this patch.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-22  crypto: blake2b - merge blake2 init to api callback  (David Sterba, 1 file, -84/+19)

The call chain from blake2b_init can be simplified because the param block is effectively zeros, besides the key:

- blake2b_init0 zeroes the state and sets the IV
- blake2b_init sets up the param block with defaults (key and some 1s)
- init with a key writes it to the input buffer and recalculates the state

So the compact way is to zero out the state and initialize index 0 of the state directly with the non-zero values and the key.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

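The compact initialization, as a standalone sketch (constants follow the BLAKE2b spec; the helper name is illustrative):

    /* Sketch: zero the state, load the IV, then fold the only
     * non-zero parameter-block words (digest length, key length,
     * fanout = depth = 1) into state word 0. */
    static void blake2b_init_sketch(struct blake2b_state *S,
                                    size_t outlen, size_t keylen)
    {
        memset(S, 0, sizeof(*S));
        memcpy(S->h, blake2b_IV, sizeof(S->h));
        S->h[0] ^= 0x01010000 ^ (keylen << 8) ^ outlen;
    }
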
2019-11-22  crypto: blake2b - merge _final implementation to callback  (David Sterba, 1 file, -25/+17)

blake2b_final is called only once, so merge it to the crypto API callback and simplify. This avoids the temporary buffer and swaps the bytes of the internal buffer.

Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: ablkcipher - remove deprecated and unused ablkcipher support  (Ard Biesheuvel, 5 files, -571/+1)

Now that all users of the deprecated ablkcipher interface have been moved to the skcipher interface, ablkcipher is no longer used and can be removed.

Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: tcrypt - constify check alg list  (Corentin Labbe, 1 file, -2/+2)

This patch constifies the alg list because this list is never modified.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: curve25519 - x86_64 library and KPP implementations  (Jason A. Donenfeld, 1 file, -0/+6)

This implementation is the fastest available x86_64 implementation, and unlike Sandy2x, it doesn't require use of the floating point registers at all. Instead it makes use of BMI2 and ADX, available on recent microarchitectures. The implementation was written by Armando Faz-Hernández with contributions (upstream) from Samuel Neves and me, in addition to further changes in the kernel implementation from us.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: - move to arch/x86/crypto
       - wire into lib/crypto framework
       - implement crypto API KPP hooks]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: curve25519 - implement generic KPP driver  (Ard Biesheuvel, 3 files, -0/+96)

Expose the generic Curve25519 library via the crypto API KPP interface.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: curve25519 - add kpp selftest  (Ard Biesheuvel, 2 files, -0/+1231)

In preparation for introducing KPP implementations of Curve25519, import the set of test cases proposed by the Zinc patch set, but converted to the KPP format.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: blake2s - x86_64 SIMD implementation  (Jason A. Donenfeld, 1 file, -0/+6)

These implementations from Samuel Neves support AVX and AVX-512VL. Originally this used AVX-512F, but Skylake thermal throttling made AVX-512VL more attractive and possible to do with negligible difference.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: move to arch/x86/crypto, wire into lib/crypto framework]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: blake2s - implement generic shash driver  (Ard Biesheuvel, 3 files, -0/+190)

Wire up our newly added Blake2s implementation via the shash API.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: testmgr - add test cases for Blake2s  (Ard Biesheuvel, 2 files, -39/+280)

As suggested by Eric for the Blake2b implementation contributed by David, introduce a set of test vectors for Blake2s covering different digest and key sizes.

              blake2s-128   blake2s-160   blake2s-224   blake2s-256
    ----------------------------------------------------------------
    len=0   | klen=0        klen=1        klen=16       klen=32
    len=1   | klen=16       klen=32       klen=0        klen=1
    len=7   | klen=32       klen=0        klen=1        klen=16
    len=15  | klen=1        klen=16       klen=32       klen=0
    len=64  | klen=0        klen=1        klen=16       klen=32
    len=247 | klen=16       klen=32       klen=0        klen=1
    len=256 | klen=32       klen=0        klen=1        klen=16

Cc: David Sterba <dsterba@suse.com>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  int128: move __uint128_t compiler test to Kconfig  (Ard Biesheuvel, 1 file, -1/+1)

In order to use 128-bit integer arithmetic in C code, the architecture needs to have declared support for it by setting ARCH_SUPPORTS_INT128, and it requires a version of the toolchain that supports this at build time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test whether __SIZEOF_INT128__ is defined, since this is only the case for compilers that can support 128-bit integers.

Let's fold this additional test into the Kconfig declaration of ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles, e.g., to decide whether a certain object needs to be included in the first place.

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

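For illustration, the kind of C code this symbol gates (hypothetical helper, not from the patch):

    #ifdef CONFIG_ARCH_SUPPORTS_INT128
    /* 64x64 -> 128-bit multiply, available only when the arch and
     * the toolchain both support __int128. */
    static inline void mul64_wide(u64 a, u64 b, u64 *hi, u64 *lo)
    {
        unsigned __int128 prod = (unsigned __int128)a * b;

        *hi = (u64)(prod >> 64);
        *lo = (u64)prod;
    }
    #endif
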