path: root/crypto
2018-10-12  crypto: testmgr - fix sizeof() on COMP_BUF_SIZE  (Michael Schupikov, 1 file, -3/+3)
After allocation, output and decomp_output both point to memory chunks of size COMP_BUF_SIZE. However, only the first few bytes are zeroed out, because sizeof(COMP_BUF_SIZE) is passed to memset(): sizeof() yields the size of the constant's type, not the size of the allocated memory. The whole allocated memory is meant to be zeroed out, so pass COMP_BUF_SIZE to memset() directly. Fixes: 336073840a872 ("crypto: testmgr - Allow different compression results") Signed-off-by: Michael Schupikov <michael@schupikov.de> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
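To illustrate the class of bug, a minimal sketch (the buffer size value here is illustrative, not necessarily the testmgr one): sizeof() applied to an integer constant returns the size of the constant's type, typically 4 bytes, rather than its value.

    #define COMP_BUF_SIZE 512    /* illustrative value */

    char *output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);

    memset(output, 0, sizeof(COMP_BUF_SIZE));  /* wrong: clears only sizeof(int) bytes */
    memset(output, 0, COMP_BUF_SIZE);          /* right: clears the whole allocation   */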
2018-10-08  crypto: aegis/generic - fix for big endian systems  (Ard Biesheuvel, 1 file, -11/+9)
Use the correct __le32 annotation and accessors to perform the single round of AES encryption performed inside the AEGIS transform. Otherwise, tcrypt reports:

    alg: aead: Test 1 failed on encryption for aegis128-generic
    00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
    alg: aead: Test 1 failed on encryption for aegis128l-generic
    00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
    alg: aead: Test 1 failed on encryption for aegis256-generic
    00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-08  crypto: morus/generic - fix for big endian systems  (Ard Biesheuvel, 2 files, -17/+6)
Omit the endian swabbing when folding the lengths of the assoc and crypt input buffers into the state to finalize the tag. This is not necessary given that the memory representation of the state is in machine native endianness already. This fixes an error reported by tcrypt running on a big endian system:

    alg: aead: Test 2 failed on encryption for morus640-generic
    00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
    00000010: 21
    alg: aead: Test 2 failed on encryption for morus1280-generic
    00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
    00000010: 5f

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-05  crypto: lrw - fix rebase error after out of bounds fix  (Ard Biesheuvel, 1 file, -4/+3)
Due to an unfortunate interaction between commit fbe1a850b3b1 ("crypto: lrw - Fix out-of bounds access on counter overflow") and commit c778f96bf347 ("crypto: lrw - Optimize tweak computation"), we ended up with a version of next_index() that always returns 127. Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-05  crypto: x86/aes-ni - remove special handling of AES in PCBC mode  (Ard Biesheuvel, 1 file, -1/+1)
For historical reasons, the AES-NI based implementation of the PCBC chaining mode uses a special FPU chaining mode wrapper template to amortize the FPU start/stop overhead over multiple blocks. When this FPU wrapper was introduced, it supported widely used chaining modes such as XTS and CTR (as well as LRW), but currently, PCBC is the only remaining user. Since there are no known users of pcbc(aes) in the kernel, let's remove this special driver, and rely on the generic pcbc driver to encapsulate the AES-NI core cipher. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: tcrypt - add OFB functional tests  (Gilad Ben-Yossef, 1 file, -0/+1)
We already have OFB test vectors and tcrypt OFB speed tests. Add OFB functional tests to tcrypt as well. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: ofb - add output feedback mode  (Gilad Ben-Yossef, 3 files, -0/+238)
Add a generic implementation of output feedback (OFB) mode. We already support several hardware-based implementations of this mode and have the needed test vectors, but we somehow missed adding a generic software one. Fix this now. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
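For reference, the OFB construction itself is simple: the block cipher only ever encrypts the rolling IV, and the resulting keystream is XORed into the data, so encryption and decryption are the same operation. A minimal sketch (not the kernel template; function and parameter names are illustrative):

    static void ofb_crypt_sketch(void (*encrypt_block)(unsigned char *dst, const unsigned char *src),
                                 unsigned char *iv, unsigned char *data,
                                 unsigned int nblocks, unsigned int bsize)
    {
            unsigned int i, j;

            for (i = 0; i < nblocks; i++) {
                    /* keystream block = E_K(previous keystream block / IV) */
                    encrypt_block(iv, iv);
                    for (j = 0; j < bsize; j++)
                            data[i * bsize + j] ^= iv[j];
            }
    }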
2018-09-28  crypto: testmgr - update sm4 test vectors  (Gilad Ben-Yossef, 4 files, -7/+144)
Add additional test vectors from "The SM4 Blockcipher Algorithm And Its Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed tests for sm4. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: tcrypt - remove remnants of pcomp-based zlib  (Horia Geantă, 2 files, -8/+1)
Commit 110492183c4b ("crypto: compress - remove unused pcomp interface") removed pcomp interface but missed cleaning up tcrypt. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: user - Implement a generic crypto statistics  (Corentin Labbe, 7 files, -7/+507)
This patch implements a generic way to gather statistics about all crypto usage. Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: cryptd - Remove VLA usage of skcipher  (Kees Cook, 1 file, -15/+17)
In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: null - Remove VLA usage of skcipher  (Kees Cook, 7 files, -28/+27)
In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: skcipher - Introduce crypto_sync_skcipher  (Kees Cook, 1 file, -0/+24)
In preparation for removing VLAs caused by on-stack skcipher requests via SKCIPHER_REQUEST_ON_STACK(), this introduces the infrastructure for the "sync skcipher" tfm, which handles the on-stack cases of skcipher; these are always non-ASYNC and have a known, limited request size.

The crypto API additions:

    struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
    crypto_alloc_sync_skcipher()
    crypto_free_sync_skcipher()
    crypto_sync_skcipher_setkey()
    crypto_sync_skcipher_get_flags()
    crypto_sync_skcipher_set_flags()
    crypto_sync_skcipher_clear_flags()
    crypto_sync_skcipher_blocksize()
    crypto_sync_skcipher_ivsize()
    crypto_sync_skcipher_reqtfm()
    skcipher_request_set_sync_tfm()
    SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)

Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
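A hedged sketch of the conversion pattern that the cryptd and crypto/null patches above follow (the algorithm name and variable names here are illustrative):

    /* before: generic tfm, on-stack request sized by a VLA */
    struct crypto_skcipher *tfm =
            crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
    SKCIPHER_REQUEST_ON_STACK(req, tfm);
    skcipher_request_set_tfm(req, tfm);

    /* after: sync-only tfm, fixed-size on-stack request */
    struct crypto_sync_skcipher *stfm =
            crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
    SYNC_SKCIPHER_REQUEST_ON_STACK(sreq, stfm);
    skcipher_request_set_sync_tfm(sreq, stfm);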
2018-09-28  crypto: fix a memory leak in rsa-pkcs1pad's encryption mode  (Dan Aloni, 1 file, -9/+0)
The encryption mode of pkcs1pad never uses out_sg and out_buf, so there's no need to allocate the buffer, which presently is not even being freed. CC: Herbert Xu <herbert@gondor.apana.org.au> CC: linux-crypto@vger.kernel.org CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Dan Aloni <dan@kernelim.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: lrw - Do not use auxiliary buffer  (Ondrej Mosnacek, 1 file, -229/+51)
This patch simplifies the LRW template to recompute the LRW tweaks from scratch in the second pass and thus also removes the need to allocate a dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the old code. For 512-byte messages it seems to be even slightly faster, but that might be just noise.

Before:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    lrw(aes)    256       64         200             203
    lrw(aes)    320       64         202             204
    lrw(aes)    384       64         204             205
    lrw(aes)    256       512        415             415
    lrw(aes)    320       512        432             440
    lrw(aes)    384       512        449             451
    lrw(aes)    256       4096       1838            1995
    lrw(aes)    320       4096       2123            1980
    lrw(aes)    384       4096       2100            2119
    lrw(aes)    256       16384      7183            6954
    lrw(aes)    320       16384      7844            7631
    lrw(aes)    384       16384      8256            8126
    lrw(aes)    256       32768      14772           14484
    lrw(aes)    320       32768      15281           15431
    lrw(aes)    384       32768      16469           16293

After:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    lrw(aes)    256       64         197             196
    lrw(aes)    320       64         200             197
    lrw(aes)    384       64         203             199
    lrw(aes)    256       512        385             380
    lrw(aes)    320       512        401             395
    lrw(aes)    384       512        415             415
    lrw(aes)    256       4096       1869            1846
    lrw(aes)    320       4096       2080            1981
    lrw(aes)    384       4096       2160            2109
    lrw(aes)    256       16384      7077            7127
    lrw(aes)    320       16384      7807            7766
    lrw(aes)    384       16384      8108            8357
    lrw(aes)    256       32768      14111           14454
    lrw(aes)    320       32768      15268           15082
    lrw(aes)    384       32768      16581           16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: lrw - Optimize tweak computation  (Ondrej Mosnacek, 1 file, -24/+37)
This patch rewrites the tweak computation to a slightly simpler method that performs fewer bswaps. Based on performance measurements the new code seems to provide slightly better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    lrw(aes)    256       64         204             286
    lrw(aes)    320       64         227             203
    lrw(aes)    384       64         208             204
    lrw(aes)    256       512        441             439
    lrw(aes)    320       512        456             455
    lrw(aes)    384       512        469             483
    lrw(aes)    256       4096       2136            2190
    lrw(aes)    320       4096       2161            2213
    lrw(aes)    384       4096       2295            2369
    lrw(aes)    256       16384      7692            7868
    lrw(aes)    320       16384      8230            8691
    lrw(aes)    384       16384      8971            8813
    lrw(aes)    256       32768      15336           15560
    lrw(aes)    320       32768      16410           16346
    lrw(aes)    384       32768      18023           17465

After:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    lrw(aes)    256       64         200             203
    lrw(aes)    320       64         202             204
    lrw(aes)    384       64         204             205
    lrw(aes)    256       512        415             415
    lrw(aes)    320       512        432             440
    lrw(aes)    384       512        449             451
    lrw(aes)    256       4096       1838            1995
    lrw(aes)    320       4096       2123            1980
    lrw(aes)    384       4096       2100            2119
    lrw(aes)    256       16384      7183            6954
    lrw(aes)    320       16384      7844            7631
    lrw(aes)    384       16384      8256            8126
    lrw(aes)    256       32768      14772           14484
    lrw(aes)    320       32768      15281           15431
    lrw(aes)    384       32768      16469           16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: testmgr - Add test for LRW counter wrap-around  (Ondrej Mosnacek, 1 file, -0/+21)
This patch adds a test vector for lrw(aes) that triggers wrap-around of the counter, which is a tricky corner case. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: lrw - Fix out-of bounds access on counter overflow  (Ondrej Mosnacek, 1 file, -1/+6)
When the LRW block counter overflows, the current implementation returns 128 as the index to the precomputed multiplication table, which has 128 entries. This patch fixes it to return the correct value (127). Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode") Cc: <stable@vger.kernel.org> # 2.6.20+ Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
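A sketch of the fixed counter-increment helper, reconstructed from the description above (the precomputed multiplication table has indices 0..127, so a full counter wrap must map to 127 rather than 128); the exact kernel code may differ in detail:

    static int next_index(u32 *counter)
    {
            int i, res = 0;

            for (i = 0; i < 4; i++) {
                    if (counter[i] + 1 != 0)
                            return res + ffz(counter[i]++);
                    counter[i] = 0;
                    res += 32;
            }

            /*
             * Counter wrapped from all-ones to all-zeros: return the last
             * valid table index (127) instead of running off the end.
             */
            return 127;
    }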
2018-09-21  crypto: tcrypt - fix ghash-generic speed test  (Horia Geantă, 1 file, -0/+3)
ghash is a keyed hash algorithm, thus setkey needs to be called. Otherwise the following error occurs:

    $ modprobe tcrypt mode=318 sec=1
    testing speed of async ghash-generic (ghash-generic)
    tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
    tcrypt: hashing failed ret=-126

Cc: <stable@vger.kernel.org> # 4.6+ Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash") Tested-by: Franck Lenormand <franck.lenormand@nxp.com> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
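A hedged sketch of the kind of change needed in the hash speed-test path (variable names are illustrative): keyed hashes such as ghash must have a key set before digesting, otherwise the operation fails (-126 is -ENOKEY).

    /* keyed hashes (e.g. ghash) need a key before the speed test can run */
    if (klen)
            crypto_ahash_setkey(tfm, key, klen);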
2018-09-21  crypto: chacha20 - Fix chacha20_block() keystream alignment (again)  (Eric Biggers, 1 file, -3/+4)
In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems. Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too. Reported-by: Stephan Müller <smueller@chronox.de> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
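A sketch of the output step described above: writing each keystream word with put_unaligned_le32() removes any alignment requirement on the output buffer (x[] holds the state words, stream is the possibly unaligned u8 output; names follow the description and may differ slightly from the actual code).

    for (i = 0; i < ARRAY_SIZE(x); i++)
            put_unaligned_le32(x[i], &stream[i * sizeof(u32)]);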
2018-09-21  crypto: xts - Drop use of auxiliary buffer  (Ondrej Mosnacek, 1 file, -223/+46)
Since commit acb9b159c784 ("crypto: gf128mul - define gf128mul_x_* in gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore caching the computed XTS tweaks has only negligible advantage over computing them twice. In fact, since the current caching implementation limits the size of the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B), it is often actually slower than the simple recomputing implementation.

This patch simplifies the XTS template to recompute the XTS tweaks from scratch in the second pass and thus also removes the need to allocate a dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE RESULTS
I measured time to encrypt/decrypt a memory buffer of varying sizes with xts(ecb-aes-aesni) using a tool I wrote ([2]) and the results suggest that after this patch the performance is either better or comparable for both small and large buffers. Note that there is a lot of noise in the measurements, but the overall difference is easy to see.

Old code:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    xts(aes)    256       64         331             328
    xts(aes)    384       64         332             333
    xts(aes)    512       64         338             348
    xts(aes)    256       512        889             920
    xts(aes)    384       512        1019            993
    xts(aes)    512       512        1032            990
    xts(aes)    256       4096       2152            2292
    xts(aes)    384       4096       2453            2597
    xts(aes)    512       4096       3041            2641
    xts(aes)    256       16384      9443            8027
    xts(aes)    384       16384      8536            8925
    xts(aes)    512       16384      9232            9417
    xts(aes)    256       32768      16383           14897
    xts(aes)    384       32768      17527           16102
    xts(aes)    512       32768      18483           17322

New code:

    ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
    xts(aes)    256       64         328             324
    xts(aes)    384       64         324             319
    xts(aes)    512       64         320             322
    xts(aes)    256       512        476             473
    xts(aes)    384       512        509             492
    xts(aes)    512       512        531             514
    xts(aes)    256       4096       2132            1829
    xts(aes)    384       4096       2357            2055
    xts(aes)    512       4096       2178            2027
    xts(aes)    256       16384      6920            6983
    xts(aes)    384       16384      8597            7505
    xts(aes)    512       16384      7841            8164
    xts(aes)    256       32768      13468           12307
    xts(aes)    384       32768      14808           13402
    xts(aes)    512       32768      15753           14636

[1] https://lkml.org/lkml/2018/8/23/1315
[2] https://gitlab.com/omos/linux-crypto-bench

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: api - Introduce notifier for new crypto algorithms  (Martin K. Petersen, 3 files, -8/+4)
Introduce a facility that can be used to receive a notification callback when a new algorithm becomes available. This can be used by existing crypto registrations to trigger a switch from a software-only algorithm to a hardware-accelerated version. A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto notification chain, and the register/unregister functions are exported so they can be called by subsystems outside of crypto. Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: x86 - remove SHA multibuffer routines and mcryptd  (Ard Biesheuvel, 3 files, -738/+0)
As it turns out, the AVX2 multibuffer SHA routines are currently broken [0], in a way that would have likely been noticed if this code were in wide use. Since the code is too complicated to be maintained by anyone except the original authors, and since the performance benefits for real-world use cases are debatable to begin with, it is better to drop it entirely for the moment. [0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2 Suggested-by: Eric Biggers <ebiggers@google.com> Cc: Megha Dey <megha.dey@linux.intel.com> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: shash - Remove VLA usage in unaligned hashing  (Kees Cook, 1 file, -11/+16)
In the quest to remove all stack VLA usage from the kernel[1], this uses the newly defined max alignment to perform unaligned hashing to avoid VLAs, and drops the helper function while adding sanity checks on the resulting buffer sizes. Additionally, the __aligned_largest macro is removed since this helper was the only user. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: api - Introduce generic max blocksize and alignmask  (Kees Cook, 1 file, -1/+6)
In the quest to remove all stack VLA usage from the kernel[1], this exposes a new general upper bound on crypto blocksize and alignmask (higher than for the existing cipher limits) for VLA removal, and introduces new checks. At present, the highest cra_alignmask in the kernel is 63. The highest cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the new blocksize limit, I went with 160 (20 8-byte words). [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
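A hedged sketch of the new registration-time checks; the limits (160 and 63) come from the commit message, while the constant names and exact placement are assumptions for illustration:

    #define MAX_ALGAPI_BLOCKSIZE   160  /* 20 8-byte words */
    #define MAX_ALGAPI_ALIGNMASK   63

    /* in the algorithm registration sanity checks: */
    if (alg->cra_blocksize > MAX_ALGAPI_BLOCKSIZE)
            return -EINVAL;

    if (alg->cra_alignmask > MAX_ALGAPI_ALIGNMASK)
            return -EINVAL;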
2018-09-04  crypto: hash - Remove VLA usage  (Kees Cook, 3 files, -6/+6)
In the quest to remove all stack VLA usage from the kernel[1], this removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by using the maximum allowable size (which is now more clearly captured in a macro), along with a few other cases. Similar limits are turned into macros as well. A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the largest digest size and that sizeof(struct sha3_state) (360) is the largest descriptor size. The corresponding maximums are reduced. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: ccm - Remove VLA usage  (Ard Biesheuvel, 1 file, -3/+6)
In the quest to remove all stack VLA usage from the kernel[1], this drops AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined with the skcipher area (which are not used at the same time). [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04  crypto: xcbc - Remove VLA usage  (Kees Cook, 1 file, -3/+5)
In the quest to remove all stack VLA usage from the kernel[1], this uses the maximum blocksize and adds a sanity check. For xcbc, the blocksize must always be 16, so use that, since it's already being enforced during instantiation. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
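A hedged sketch of the pattern: because xcbc's block size is fixed at 16 and already enforced at instantiation, a fixed-size buffer plus a sanity check replaces the VLA (the constant name and surrounding variables here are illustrative):

    #define XCBC_BLOCKSIZE  16

            u8 key1[XCBC_BLOCKSIZE];
            int bs = crypto_shash_blocksize(parent);

            if (WARN_ON(bs > sizeof(key1)))
                    return -EINVAL;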
2018-09-04  crypto: speck - remove Speck  (Jason A. Donenfeld, 5 files, -1084/+0)
These are unused, undesired, and have never actually been used by anybody. The original authors of this code have changed their mind about its inclusion. While originally proposed for disk encryption on low-end devices, the idea was discarded [1] in favor of something else before that could really get going. Therefore, this patch removes Speck. [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Eric Biggers <ebiggers@google.com> Cc: stable@vger.kernel.org Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-19  Merge tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma  (Linus Torvalds, 2 files, -5/+9)
Pull DMAengine updates from Vinod Koul:
 "This round brings couple of framework changes, a new driver and usual driver updates:
  - new managed helper for dmaengine framework registration
  - split dmaengine pause capability to pause and resume and allow drivers to report that individually
  - update dma_request_chan_by_mask() to handle deferred probing
  - move imx-sdma to use virt-dma
  - new driver for Actions Semi Owl family S900 controller
  - minor updates to intel, renesas, mv_xor, pl330 etc"

* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: Add Actions Semi Owl family S900 DMA driver
  dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
  dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
  dmaengine: mic_x100_dma: use the new helper to simplify the code
  dmaengine: add a new helper dmaenginem_async_device_register
  dmaengine: imx-sdma: add memcpy interface
  dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
  dmaengine: dma_request_chan_by_mask() to handle deferred probing
  dmaengine: pl330: fix irq race with terminate_all
  dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
  dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
  dmaengine: mv_xor_v2: enable COMPILE_TEST
  dmaengine: mv_xor_v2: move unmap to before callback
  dmaengine: mv_xor_v2: convert callback to helper function
  dmaengine: mv_xor_v2: kill the tasklets upon exit
  dmaengine: mv_xor_v2: explicitly freeup irq
  dmaengine: sh: rcar-dmac: Add dma_pause operation
  dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
  dmaengine: idma64: Support dmaengine_terminate_sync()
  dmaengine: hsu: Support dmaengine_terminate_sync()
  ...
2018-08-16  Replace magic for trusting the secondary keyring with #define  (Yannik Sembritzki, 1 file, -1/+1)
Replace the use of a magic number that indicates that verify_*_signature() should use the secondary keyring with a symbol. Signed-off-by: Yannik Sembritzki <yannik@sembritzki.me> Signed-off-by: David Howells <dhowells@redhat.com> Cc: keyrings@vger.kernel.org Cc: linux-security-module@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-16  Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security  (Linus Torvalds, 1 file, -1/+1)
Pull integrity updates from James Morris:
 "This adds support for EVM signatures based on larger digests, contains a new audit record AUDIT_INTEGRITY_POLICY_RULE to differentiate the IMA policy rules from the IMA-audit messages, addresses two deadlocks due to either loading or searching for crypto algorithms, and cleans up the audit messages"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  EVM: fix return value check in evm_write_xattrs()
  integrity: prevent deadlock during digsig verification.
  evm: Allow non-SHA1 digital signatures
  evm: Don't deadlock if a crypto algorithm is unavailable
  integrity: silence warning when CONFIG_SECURITYFS is not enabled
  ima: Differentiate auditing policy rules from "audit" actions
  ima: Do not audit if CONFIG_INTEGRITY_AUDIT is not set
  ima: Use audit_log_format() rather than audit_log_string()
  ima: Call audit_log_string() rather than logging it untrusted
2018-08-16  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 36 files, -526/+722)
Pull crypto updates from Herbert Xu:
 "API:
   - Fix dcache flushing crash in skcipher.
   - Add hash finup self-tests.
   - Reschedule during speed tests.

  Algorithms:
   - Remove insecure vmac and replace it with vmac64.
   - Add public key verification for DH/ECDH.

  Drivers:
   - Decrease priority of sha-mb on x86.
   - Improve NEON latency/throughput on ARM64.
   - Add md5/sha384/sha512/des/3des to inside-secure.
   - Support eip197d in inside-secure.
   - Only register algorithms supported by the host in virtio.
   - Add cts and remove incompatible cts1 from ccree.
   - Add hisilicon SEC security accelerator driver.
   - Replace msm hwrng driver with qcom pseudo rng driver.

  Misc:
   - Centralize CRC polynomials"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (121 commits)
  crypto: arm64/ghash-ce - implement 4-way aggregation
  crypto: arm64/ghash-ce - replace NEON yield check with block limit
  crypto: hisilicon - sec_send_request() can be static
  lib/mpi: remove redundant variable esign
  crypto: arm64/aes-ce-gcm - don't reload key schedule if avoidable
  crypto: arm64/aes-ce-gcm - implement 2-way aggregation
  crypto: arm64/aes-ce-gcm - operate on two input blocks at a time
  crypto: dh - make crypto_dh_encode_key() make robust
  crypto: dh - fix calculating encoded key size
  crypto: ccp - Check for NULL PSP pointer at module unload
  crypto: arm/chacha20 - always use vrev for 16-bit rotates
  crypto: ccree - allow bigger than sector XTS op
  crypto: ccree - zero all of request ctx before use
  crypto: ccree - remove cipher ivgen left overs
  crypto: ccree - drop useless type flag during reg
  crypto: ablkcipher - fix crash flushing dcache in error path
  crypto: blkcipher - fix crash flushing dcache in error path
  crypto: skcipher - fix crash flushing dcache in error path
  crypto: skcipher - remove unnecessary setting of walk->nbytes
  crypto: scatterwalk - remove scatterwalk_samebuf()
  ...
2018-08-03  crypto: dh - make crypto_dh_encode_key() make robust  (Eric Biggers, 1 file, -14/+16)
Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than overflowing the buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: dh - fix calculating encoded key size  (Eric Biggers, 2 files, -7/+7)
It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size', causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and fix the lengths of the test vectors to match this. Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com Fixes: e3fe0ae12962 ("crypto: dh - add public key verification test") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: ablkcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -31/+26)
Like the skcipher_walk and blkcipher_walk cases: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of ablkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing ablkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces") Cc: <stable@vger.kernel.org> # v2.6.35+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: blkcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -28/+26)
Like the skcipher_walk case: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of blkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0.

Fix it by reorganizing blkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "ecb(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type") Cc: <stable@vger.kernel.org> # v2.6.19+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -26/+27)
scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of skcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0.

Fix it by reorganizing skcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "cbc(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - remove unnecessary setting of walk->nbytes  (Eric Biggers, 1 file, -1/+0)
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't make sense because actually walk->nbytes needs to be set to the length of the first step in the walk, which may be less than walk->total. This is done by skcipher_walk_next() which is called immediately afterwards. Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(), which is a better default value in case it's forgotten to be set later. Therefore, remove the unnecessary assignment to walk->nbytes. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain()  (Eric Biggers, 3 files, -5/+5)
All callers pass chain=0 to scatterwalk_crypto_chain(). Remove this unneeded parameter. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - fix aligning block size in skcipher_copy_iv()  (Eric Biggers, 1 file, -1/+1)
The ALIGN() macro needs to be passed the alignment, not the alignmask (which is the alignment minus 1). Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
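A minimal illustration of the distinction (walk->alignmask as described above): ALIGN() rounds up to a multiple of its second argument, so it must be given the alignment itself, i.e. alignmask + 1.

    /* walk->alignmask is the alignment minus one (e.g. 15 for 16-byte alignment) */
    aligned_bs = ALIGN(bs, walk->alignmask + 1);   /* correct: round up to the alignment      */
    /* aligned_bs = ALIGN(bs, walk->alignmask);       wrong: passes the mask, not the alignment */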
2018-08-03  crypto: tcrypt - reschedule during speed tests  (Horia Geantă, 1 file, -12/+24)
Avoid RCU stalls in the case of non-preemptible kernel and lengthy speed tests by rescheduling when advancing from one block size to another. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
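A minimal sketch of the approach (loop structure simplified from the tcrypt speed-test loops): calling cond_resched() when advancing to the next block size lets the scheduler run on non-preemptible kernels and avoids the RCU stall.

    for (i = 0; speed[i].blen != 0; i++) {
            /* ... run the timed operation for this block size ... */

            /* yield between block sizes so a long test cannot stall RCU */
            cond_resched();
    }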
2018-08-03  crypto: drbg - in-place cipher operation for CTR  (Stephan Müller, 1 file, -20/+14)
The cipher implementations of the kernel crypto API favor in-place cipher operations. Thus, switch the CTR cipher operation in the DRBG to perform in-place operations. This is implemented by using the output buffer as input buffer and zeroizing it before the cipher operation to implement a CTR encryption of a NULL buffer.

The speed improvement is quite visible with the following comparison using the LRNG implementation.

Without the patch set:

      16 bytes |  12.267661 MB/s |   61338304 bytes | 5000000213 ns
      32 bytes |  23.603770 MB/s |  118018848 bytes | 5000000073 ns
      64 bytes |  46.732262 MB/s |  233661312 bytes | 5000000241 ns
     128 bytes |  90.038042 MB/s |  450190208 bytes | 5000000244 ns
     256 bytes | 160.399616 MB/s |  801998080 bytes | 5000000393 ns
     512 bytes | 259.878400 MB/s | 1299392000 bytes | 5000001675 ns
    1024 bytes | 386.050662 MB/s | 1930253312 bytes | 5000001661 ns
    2048 bytes | 493.641728 MB/s | 2468208640 bytes | 5000001598 ns
    4096 bytes | 581.835981 MB/s | 2909179904 bytes | 5000003426 ns

With the patch set:

      16 bytes |  17.051142 MB/s |   85255712 bytes | 5000000854 ns
      32 bytes |  32.695898 MB/s |  163479488 bytes | 5000000544 ns
      64 bytes |  64.490739 MB/s |  322453696 bytes | 5000000954 ns
     128 bytes | 123.285043 MB/s |  616425216 bytes | 5000000201 ns
     256 bytes | 233.434573 MB/s | 1167172864 bytes | 5000000573 ns
     512 bytes | 384.405197 MB/s | 1922025984 bytes | 5000000671 ns
    1024 bytes | 566.313370 MB/s | 2831566848 bytes | 5000001080 ns
    2048 bytes | 744.518042 MB/s | 3722590208 bytes | 5000000926 ns
    4096 bytes | 867.501670 MB/s | 4337508352 bytes | 5000002181 ns

Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
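A hedged sketch of the in-place operation described above (variable names and the surrounding request setup are illustrative): zeroizing the output buffer and passing it as both source and destination yields a CTR encryption of a NULL buffer, i.e. the raw keystream.

    struct scatterlist sg;

    memset(outbuf, 0, outlen);              /* CTR of an all-zero buffer == keystream */
    sg_init_one(&sg, outbuf, outlen);
    skcipher_request_set_crypt(req, &sg, &sg, outlen, iv);
    ret = crypto_skcipher_encrypt(req);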
2018-08-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux  (Herbert Xu, 6 files, -10/+29)
Merge mainline to pick up c7513c2a2714 ("crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair").
2018-07-30  net: simplify sock_poll_wait  (Christoph Hellwig, 1 file, -1/+1)
The wait_address argument is always directly derived from the filp argument, so remove it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-27  crypto: rmd320 - use swap macro in rmd320_transform  (Gustavo A. R. Silva, 1 file, -6/+6)
Make use of the swap macro and remove unnecessary variable *tmp*. This makes the code easier to read and maintain. This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
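The pattern, in a minimal sketch (swap() is the helper from include/linux/kernel.h; the variable names here are illustrative, not the rmd320 state words):

    /* before: manual three-assignment swap with a temporary */
    tmp = aa;
    aa = bb;
    bb = tmp;

    /* after: no temporary variable needed at the call site */
    swap(aa, bb);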
2018-07-27  crypto: rmd256 - use swap macro in rmd256_transform  (Gustavo A. R. Silva, 1 file, -5/+5)
Make use of the swap macro and remove unnecessary variable *tmp*. This makes the code easier to read and maintain. This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20  crypto: ecdh - fix typo of P-192 b value  (Stephan Mueller, 1 file, -1/+1)
Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is required to make sure the SP800-56A public key test passes for P-192. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20  crypto: dh - update test for public key verification  (Stephan Mueller, 1 file, -0/+4)
By adding a zero byte-length for the DH parameter Q value, the public key verification test is disabled for the given test. Reported-by: Eric Biggers <ebiggers3@gmail.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20  crypto: drbg - eliminate constant reinitialization of SGL  (Stephan Mueller, 1 file, -4/+7)
The CTR DRBG requires two SGLs pointing to input/output buffers for the CTR AES operation. The used SGLs always have only one entry. Thus, the SGL can be initialized during allocation time, preventing a re-initialization of the SGLs during each call. The performance is increased by about 1 to 3 percent depending on the size of the requested buffer size. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>