path: root/crypto
2018-02-22PKCS#7: fix direct verification of SignerInfo signatureEric Biggers1-0/+1
If none of the certificates in a SignerInfo's certificate chain match a trusted key, nor is the last certificate signed by a trusted key, then pkcs7_validate_trust_one() tries to check whether the SignerInfo's signature was made directly by a trusted key. But, it actually fails to set the 'sig' variable correctly, so it actually verifies the last signature seen. That will only be the SignerInfo's signature if the certificate chain is empty; otherwise it will actually be the last certificate's signature. This is not by itself a security problem, since verifying any of the certificates in the chain should be sufficient to verify the SignerInfo. Still, it's not working as intended so it should be fixed. Fix it by setting 'sig' correctly for the direct verification case. Fixes: 757932e6da6d ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
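The fix itself is a single assignment; a minimal sketch of the fallback path follows, with field and label names assumed from the description above rather than copied from the kernel diff:

        /* Fallback: no certificate in the chain matched a trusted key, so try
         * the SignerInfo's signature directly against the trusted keyring.
         * Point 'sig' back at the SignerInfo's own signature first; otherwise
         * the last certificate signature examined during the chain walk is
         * what ends up being verified.
         */
        sig = sinfo->sig;
        goto matched;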
2018-02-22PKCS#7: fix certificate blacklistingEric Biggers1-4/+6
If there is a blacklisted certificate in a SignerInfo's certificate chain, then pkcs7_verify_sig_chain() sets sinfo->blacklisted and returns 0. But, pkcs7_verify() fails to handle this case appropriately, as it actually continues on to the line 'actual_ret = 0;', indicating that the SignerInfo has passed verification. Consequently, PKCS#7 signature verification ignores the certificate blacklist. Fix this by not considering blacklisted SignerInfos to have passed verification. Also fix the function comment with regards to when 0 is returned. Fixes: 03bb79315ddc ("PKCS#7: Handle blacklisted certificates") Cc: <stable@vger.kernel.org> # v4.12+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
2018-02-22PKCS#7: fix certificate chain verificationEric Biggers1-1/+1
When pkcs7_verify_sig_chain() is building the certificate chain for a SignerInfo using the certificates in the PKCS#7 message, it is passing the wrong arguments to public_key_verify_signature(). Consequently, when the next certificate is supposed to be used to verify the previous certificate, the next certificate is actually used to verify itself. An attacker can use this bug to create a bogus certificate chain that has no cryptographic relationship between the beginning and end. Fortunately I couldn't quite find a way to use this to bypass the overall signature verification, though it comes very close. Here's the reasoning: due to the bug, every certificate in the chain beyond the first actually has to be self-signed (where "self-signed" here refers to the actual key and signature; an attacker might still manipulate the certificate fields such that the self_signed flag doesn't actually get set, and thus the chain doesn't end immediately). But to pass trust validation (pkcs7_validate_trust()), either the SignerInfo or one of the certificates has to actually be signed by a trusted key. Since only self-signed certificates can be added to the chain, the only way for an attacker to introduce a trusted signature is to include a self-signed trusted certificate. But, when pkcs7_validate_trust_one() reaches that certificate, instead of trying to verify the signature on that certificate, it will actually look up the corresponding trusted key, which will succeed, and then try to verify the *previous* certificate, which will fail. Thus, disaster is narrowly averted (as far as I could tell). Fixes: 6c2dc5ae4ab7 ("X.509: Extract signature digest and make self-signed cert checks earlier") Cc: <stable@vger.kernel.org> # v4.7+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
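A sketch of the distinction, with argument and field names assumed from the description above (not the verbatim kernel diff); 'x509' is the certificate being checked and 'p' is the next certificate in the chain, i.e. its issuer:

        /* buggy shape: the issuer's key is used against the issuer's own
         * signature, so 'p' merely verifies itself
         */
        ret = public_key_verify_signature(p->pub, p->sig);

        /* fixed shape: the issuer's key checks the signature on the previous
         * certificate in the chain
         */
        ret = public_key_verify_signature(p->pub, x509->sig);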
2018-02-22crypto: speck - add test vectors for Speck64-XTSEric Biggers2-0/+680
Add test vectors for Speck64-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors, with key lengths adjusted. xts-speck64-neon passes these tests. However, they aren't currently applicable for the generic XTS template, as that only supports a 128-bit block size. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22crypto: speck - add test vectors for Speck128-XTSEric Biggers2-0/+696
Add test vectors for Speck128-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors. Both xts(speck128-generic) and xts-speck128-neon pass these tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22crypto: speck - export common helpersEric Biggers1-41/+49
Export the Speck constants and transform context and the ->setkey(), ->encrypt(), and ->decrypt() functions so that they can be reused by the ARM NEON implementation of Speck-XTS. The generic key expansion code will be reused because it is not performance-critical and is not vectorizable, while the generic encryption and decryption functions are needed as fallbacks and for the XTS tweak encryption. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22crypto: speck - add support for the Speck block cipherEric Biggers5-0/+460
Add a generic implementation of Speck, including the Speck128 and Speck64 variants. Speck is a lightweight block cipher that can be much faster than AES on processors that don't have AES instructions. We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an option for dm-crypt and fscrypt on Android, for low-end mobile devices with older CPUs such as ARMv7 which don't have the Cryptography Extensions. Currently, such devices are unencrypted because AES is not fast enough, even when the NEON bit-sliced implementation of AES is used. Other AES alternatives such as Twofish, Threefish, Camellia, CAST6, and Serpent aren't fast enough either; it seems that only a modern ARX cipher can provide sufficient performance on these devices. This is a replacement for our original proposal (https://patchwork.kernel.org/patch/10101451/) which was to offer ChaCha20 for these devices. However, the use of a stream cipher for disk/file encryption with no space to store nonces would have been much more insecure than we thought initially, given that it would be used on top of flash storage as well as potentially on top of F2FS, neither of which is guaranteed to overwrite data in-place. Speck has been somewhat controversial due to its origin. Nevertheless, it has a straightforward design (it's an ARX cipher), and it appears to be the leading software-optimized lightweight block cipher currently, with the most cryptanalysis. It's also easy to implement without side channels, unlike AES. Moreover, we only intend Speck to be used when the status quo is no encryption, due to AES not being fast enough. We've also considered a novel length-preserving encryption mode based on ChaCha20 and Poly1305. While theoretically attractive, such a mode would be a brand new crypto construction and would be more complicated and difficult to implement efficiently in comparison to Speck-XTS. There is confusion about the byte and word orders of Speck, since the original paper doesn't specify them. But we have implemented it using the orders the authors recommended in a correspondence with them. The test vectors are taken from the original paper but were mapped to byte arrays using the recommended byte and word orders. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
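Since Speck's ARX round function is tiny, it can be shown in full. The following is an independent userspace sketch of Speck128/128, not the kernel code; the word ordering and the test vector follow the original paper, expressed as 64-bit words rather than the kernel's byte-array mapping:

#include <stdint.h>
#include <stdio.h>

static uint64_t ror64(uint64_t x, unsigned n) { return (x >> n) | (x << (64 - n)); }
static uint64_t rol64(uint64_t x, unsigned n) { return (x << n) | (x >> (64 - n)); }

/* Speck128/128: 32 rounds, two 64-bit key words (k0, l0). */
static void speck128_128_encrypt(uint64_t k0, uint64_t l0, uint64_t *x, uint64_t *y)
{
        uint64_t k = k0, l = l0;

        for (unsigned i = 0; i < 32; i++) {
                /* round function: x = ((x >>> 8) + y) ^ k;  y = (y <<< 3) ^ x */
                *x = (ror64(*x, 8) + *y) ^ k;
                *y = rol64(*y, 3) ^ *x;
                /* key schedule applies the same round function to (l, k),
                 * using the round counter i as the "round key" */
                l = (ror64(l, 8) + k) ^ i;
                k = rol64(k, 3) ^ l;
        }
}

int main(void)
{
        /* test vector from the original Speck paper, as 64-bit words */
        uint64_t x = 0x6c61766975716520ULL, y = 0x7469206564616d20ULL;

        speck128_128_encrypt(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, &x, &y);
        printf("%016llx %016llx\n", (unsigned long long)x, (unsigned long long)y);
        /* expected per the paper: a65d985179783265 7860fedf5c570d18 */
        return 0;
}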
2018-02-22crypto: testmgr - Fix incorrect values in PKCS#1 test vectorConor McLoughlin1-3/+3
An RSA private key in the first form should have version, prime1, prime2, exponent1, exponent2, and coefficient set to 0. With non-zero values for prime1/prime2, exponent1/exponent2, and coefficient, the Intel QAT driver will assume that the values describe a private key in the second form. This will result in signature verification failures, on systems where a QAT device is present, for modules signed with rsa,sha256. Cc: <stable@vger.kernel.org> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-19MIPS: crypto: Add crc32 and crc32c hw accelerated moduleMarcin Nowakowski1-0/+9
This module registers crc32 and crc32c algorithms that use the optional CRC32[bhwd] and CRC32C[bhwd] instructions in MIPSr6 cores. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@mips.com> Signed-off-by: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-mips@linux-mips.org Cc: linux-crypto@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Patchwork: https://patchwork.linux-mips.org/patch/18601/ [jhogan@kernel.org: Add CRYPTO_ALG_OPTIONAL_KEY flag on Eric Biggers' suggestion, due to commit a208fa8f3303 ("crypto: hash - annotate algorithms taking optional key") in v4.16-rc1]
2018-02-15crypto: engine - Permit to enqueue all async requestsCorentin LABBE1-137/+164
The crypto engine could previously only enqueue hash and ablkcipher requests. This patch permits it to enqueue any type of crypto_async_request. Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Tested-by: Fabien Dessenne <fabien.dessenne@st.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-15crypto: user - Replace GFP_ATOMIC with GFP_KERNEL in crypto_reportJia-Ju Bai1-1/+1
After checking all possible call chains to crypto_report, my tool finds that crypto_report is never called in atomic context. Moreover, crypto_report calls crypto_alg_match, which in turn calls down_read, further confirming that crypto_report may call functions that sleep. Thus GFP_ATOMIC is not necessary and can be replaced with GFP_KERNEL. This was found by DCNS, a static analysis tool I wrote. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-15crypto: rsa-pkcs1pad - Replace GFP_ATOMIC with GFP_KERNEL in pkcs1pad_encrypt_sign_completeJia-Ju Bai1-1/+1
After checking all possible call chains to this kzalloc, my tool finds that it is never called in atomic context. Thus GFP_ATOMIC is not necessary and can be replaced with GFP_KERNEL. This was found by DCNS, a static analysis tool I wrote. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
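Both this change and the crypto_report change above boil down to the same pattern; a hypothetical call site, not the actual kernel diff:

        /* Process context where sleeping is allowed: GFP_KERNEL lets the
         * allocator reclaim or sleep instead of dipping into the atomic
         * reserves and failing under memory pressure.
         */
        buf = kzalloc(len, GFP_KERNEL);         /* was: GFP_ATOMIC */
        if (!buf)
                return -ENOMEM;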
2018-02-15crypto: mcryptd - remove pointless wrapper functionsEric Biggers1-30/+4
There is no need for ahash_mcryptd_{update,final,finup,digest}(); we should just call crypto_ahash_*() directly. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-15crypto: hash - Require export/import in ahashKamil Konieczny1-16/+2
Export and import are mandatory in async hash. As the drivers have been rewritten, drop the empty wrappers and correct the initialization of the ahash transformation. Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-12Merge branch 'linus' of ↵Linus Torvalds1-100/+118
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 Pull crypto fixes from Herbert Xu: "This fixes the following issues: - oversize stack frames on mn10300 in sha3-generic - warning on old compilers in sha3-generic - API error in sun4i_ss_prng - potential dead-lock in sun4i_ss_prng - null-pointer dereference in sha512-mb - endless loop when DECO acquire fails in caam - kernel oops when hashing empty message in talitos" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: sun4i_ss_prng - convert lock to _bh in sun4i_ss_prng_generate crypto: sun4i_ss_prng - fix return value of sun4i_ss_prng_generate crypto: caam - fix endless loop when DECO acquire fails crypto: sha3-generic - Use __optimize to support old compilers compiler-gcc.h: __nostackprotector needs gcc-4.4 and up compiler-gcc.h: Introduce __optimize function attribute crypto: sha3-generic - deal with oversize stack frames crypto: talitos - fix Kernel Oops on hashing an empty file crypto: sha512-mb - initialize pending lengths correctly
2018-02-12vfs: do bulk POLL* -> EPOLL* replacementLinus Torvalds1-8/+8
This is the mindless scripted replacement of kernel use of POLL* variables as described by Al, done by this script: for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'` for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done done with de-mangling cleanups yet to come. NOTE! On almost all architectures, the EPOLL* constants have the same values as the POLL* constants do. But the keyword here is "almost". For various bad reasons they aren't the same, and epoll() doesn't actually work quite correctly in some cases due to this on Sparc et al. The next patch from Al will sort out the final differences, and we should be all done. Scripted-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-02-08crypto: sha3-generic - Use __optimize to support old compilersGeert Uytterhoeven1-1/+1
With gcc-4.1.2: crypto/sha3_generic.c:39: warning: ‘__optimize__’ attribute directive ignored Use the newly introduced __optimize macro to fix this. Fixes: 83dee2ce1ae791c3 ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize") Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-08crypto: sha3-generic - deal with oversize stack framesArd Biesheuvel1-100/+118
As reported by kbuild test robot, the optimized SHA3 C implementation compiles to mn10300 code that uses a disproportionate amount of stack space, i.e., crypto/sha3_generic.c: In function 'keccakf': crypto/sha3_generic.c:147:1: warning: the frame size of 1232 bytes is larger than 1024 bytes [-Wframe-larger-than=] As kindly diagnosed by Arnd, this does not only occur when building for the mn10300 architecture (which is what the report was about) but also for h8300, and builds for other 32-bit architectures show an increase in stack space utilization as well. Given that SHA3 operates on 64-bit quantities, and keeps a state matrix of 25 64-bit words, it is not surprising that 32-bit architectures with few general purpose registers are impacted the most by this, and it is therefore reasonable to implement a workaround that distinguishes between 32-bit and 64-bit architectures. Arnd figured out that taking the round calculation out of the loop, and inlining it explicitly but only on 64-bit architectures preserves most of the performance gain achieved by the rewrite, and also gets rid of the excessive use of stack space. Reported-by: kbuild test robot <fengguang.wu@intel.com> Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
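The shape of that workaround can be sketched as follows; the macro name and the exact condition are assumptions for illustration, not the literal kernel code:

/* Inline the round calculation only where 64-bit general purpose registers
 * are plentiful; on 32-bit targets keep it out of line so every call shares
 * one modest stack frame.
 */
#ifdef CONFIG_64BIT
#define SHA3_INLINE     inline
#else
#define SHA3_INLINE     noinline
#endif

static SHA3_INLINE void keccakf_round(u64 st[25])
{
        /* theta, rho, pi, chi, iota for one round */
}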
2018-02-01Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6Linus Torvalds44-653/+2032
Pull crypto updates from Herbert Xu: "API: - Enforce the setting of keys for keyed aead/hash/skcipher algorithms. - Add multibuf speed tests in tcrypt. Algorithms: - Improve performance of sha3-generic. - Add native sha512 support on arm64. - Add v8.2 Crypto Extensions version of sha3/sm3 on arm64. - Avoid hmac nesting by requiring underlying algorithm to be unkeyed. - Add cryptd_max_cpu_qlen module parameter to cryptd. Drivers: - Add support for EIP97 engine in inside-secure. - Add inline IPsec support to chelsio. - Add RevB core support to crypto4xx. - Fix AEAD ICV check in crypto4xx. - Add stm32 crypto driver. - Add support for BCM63xx platforms in bcm2835 and remove bcm63xx. - Add Derived Key Protocol (DKP) support in caam. - Add Samsung Exynos True RNG driver. - Add support for Exynos5250+ SoCs in exynos PRNG driver" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (166 commits) crypto: picoxcell - Fix error handling in spacc_probe() crypto: arm64/sha512 - fix/improve new v8.2 Crypto Extensions code crypto: arm64/sm3 - new v8.2 Crypto Extensions implementation crypto: arm64/sha3 - new v8.2 Crypto Extensions implementation crypto: testmgr - add new testcases for sha3 crypto: sha3-generic - export init/update/final routines crypto: sha3-generic - simplify code crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize crypto: sha3-generic - fixes for alignment and big endian operation crypto: aesni - handle zero length dst buffer crypto: artpec6 - remove select on non-existing CRYPTO_SHA384 hwrng: bcm2835 - Remove redundant dev_err call in bcm2835_rng_probe() crypto: stm32 - remove redundant dev_err call in stm32_cryp_probe() crypto: axis - remove unnecessary platform_get_resource() error check crypto: testmgr - test misuse of result in ahash crypto: inside-secure - make function safexcel_try_push_requests static crypto: aes-generic - fix aes-generic regression on powerpc crypto: chelsio - Fix indentation warning crypto: arm64/sha1-ce - get rid of literal pool crypto: arm64/sha2-ce - move the round constant table to .rodata section ...
2018-01-31Merge branch 'misc.poll' of ↵Linus Torvalds2-3/+2
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull poll annotations from Al Viro: "This introduces a __bitwise type for POLL### bitmap, and propagates the annotations through the tree. Most of that stuff is as simple as 'make ->poll() instances return __poll_t and do the same to local variables used to hold the future return value'. Some of the obvious brainos found in process are fixed (e.g. POLLIN misspelled as POLL_IN). At that point the amount of sparse warnings is low and most of them are for genuine bugs - e.g. ->poll() instance deciding to return -EINVAL instead of a bitmap. I hadn't touched those in this series - it's large enough as it is. Another problem it has caught was eventpoll() ABI mess; select.c and eventpoll.c assumed that corresponding POLL### and EPOLL### were equal. That's true for some, but not all of them - EPOLL### are arch-independent, but POLL### are not. The last commit in this series separates userland POLL### values from the (now arch-independent) kernel-side ones, converting between them in the few places where they are copied to/from userland. AFAICS, this is the least disruptive fix preserving poll(2) ABI and making epoll() work on all architectures. As it is, it's simply broken on sparc - try to give it EPOLLWRNORM and it will trigger only on what would've triggered EPOLLWRBAND on other architectures. EPOLLWRBAND and EPOLLRDHUP, OTOH, are never triggered at all on sparc. With this patch they should work consistently on all architectures" * 'misc.poll' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (37 commits) make kernel-side POLL... arch-independent eventpoll: no need to mask the result of epi_item_poll() again eventpoll: constify struct epoll_event pointers debugging printk in sg_poll() uses %x to print POLL... bitmap annotate poll(2) guts 9p: untangle ->poll() mess ->si_band gets POLL... bitmap stored into a user-visible long field ring_buffer_poll_wait() return value used as return value of ->poll() the rest of drivers/*: annotate ->poll() instances media: annotate ->poll() instances fs: annotate ->poll() instances ipc, kernel, mm: annotate ->poll() instances net: annotate ->poll() instances apparmor: annotate ->poll() instances tomoyo: annotate ->poll() instances sound: annotate ->poll() instances acpi: annotate ->poll() instances crypto: annotate ->poll() instances block: annotate ->poll() instances x86: annotate ->poll() instances ...
2018-01-29Merge branch 'for-4.16/block' of git://git.kernel.dk/linux-blockLinus Torvalds2-49/+3
Pull block updates from Jens Axboe: "This is the main pull request for block IO related changes for the 4.16 kernel. Nothing major in this pull request, but a good amount of improvements and fixes all over the map. This contains: - BFQ improvements, fixes, and cleanups from Angelo, Chiara, and Paolo. - Support for SMR zones for deadline and mq-deadline from Damien and Christoph. - Set of fixes for bcache by way of Michael Lyle, including fixes from himself, Kent, Rui, Tang, and Coly. - Series from Matias for lightnvm with fixes from Hans Holmberg, Javier, and Matias. Mostly centered around pblk, and removing rrpc 1.2 in preparation for supporting 2.0. - A couple of NVMe pull requests from Christoph. Nothing major in here, just fixes and cleanups, and support for command tracing from Johannes. - Support for blk-throttle for tracking reads and writes separately. From Joseph Qi. A few cleanups/fixes also for blk-throttle from Weiping. - Series from Mike Snitzer that enables dm to register its queue more logically, something that's always been problematic on dm since it's a stacked device. - Series from Ming cleaning up some of the bio accessor use, in preparation for supporting multipage bvecs. - Various fixes from Ming closing up holes around queue mapping and quiescing. - BSD partition fix from Richard Narron, fixing a problem where we can't mount newer (10/11) FreeBSD partitions. - Series from Tejun reworking blk-mq timeout handling. The previous scheme relied on atomic bits, but it had races where we would think a request had timed out if it got reused at the wrong time. - null_blk now supports faking timeouts, to enable us to better exercise and test that functionality separately. From me. - Kill the separate atomic poll bit in the request struct. After this, we don't use the atomic bits on blk-mq anymore at all. From me. - sgl_alloc/free helpers from Bart. - Heavily contended tag case scalability improvement from me. - Various little fixes and cleanups from Arnd, Bart, Corentin, Douglas, Eryu, Goldwyn, and myself" * 'for-4.16/block' of git://git.kernel.dk/linux-block: (186 commits) block: remove smart1,2.h nvme: add tracepoint for nvme_complete_rq nvme: add tracepoint for nvme_setup_cmd nvme-pci: introduce RECONNECTING state to mark initializing procedure nvme-rdma: remove redundant boolean for inline_data nvme: don't free uuid pointer before printing it nvme-pci: Suspend queues after deleting them bsg: use pr_debug instead of hand crafted macros blk-mq-debugfs: don't allow write on attributes with seq_operations set nvme-pci: Fix queue double allocations block: Set BIO_TRACE_COMPLETION on new bio during split blk-throttle: use queue_is_rq_based block: Remove kblockd_schedule_delayed_work{,_on}() blk-mq: Avoid that blk_mq_delay_run_hw_queue() introduces unintended delays blk-mq: Rename blk_mq_request_direct_issue() into blk_mq_request_issue_directly() lib/scatterlist: Fix chaining support in sgl_alloc_order() blk-throttle: track read and write request individually block: add bdev_read_only() checks to common helpers block: fail op_is_write() requests to read-only partitions blk-throttle: export io_serviced_recursive, io_service_bytes_recursive ...
2018-01-25crypto: testmgr - add new testcases for sha3Ard Biesheuvel1-0/+550
All current SHA3 test cases are smaller than the SHA3 block size, which means not all code paths are being exercised. So add a new test case to each variant, and make one of the existing test cases chunked. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-25crypto: sha3-generic - export init/update/final routinesArd Biesheuvel1-15/+18
To allow accelerated implementations to fall back to the generic routines, e.g., in contexts where a SIMD based implementation is not allowed to run, expose the generic SHA3 init/update/final routines to other modules. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-25crypto: sha3-generic - simplify codeArd Biesheuvel1-125/+59
In preparation for exposing the generic SHA3 implementation to other versions as a fallback, simplify the code and remove an inconsistency in the output handling (endian swabbing rsizw words of state before writing the output does not make sense) Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-25crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimizeArd Biesheuvel1-38/+96
The way the KECCAK transform is currently coded involves many references into the state array using indexes that are calculated at runtime using simple but non-trivial arithmetic. This forces the compiler to treat the state matrix as an array in memory rather than keep it in registers, which results in poor performance. So instead, let's rephrase the algorithm using fixed array indexes only. This helps the compiler keep the state matrix in registers, resulting in the following speedup (SHA3-256 performance in cycles per byte): before after speedup Intel Core i7 @ 2.0 GHz (2.9 turbo) 100.6 35.7 2.8x Cortex-A57 @ 2.0 GHz (64-bit mode) 101.6 12.7 8.0x Cortex-A53 @ 1.0 GHz 224.4 15.8 14.2x Cortex-A57 @ 2.0 GHz (32-bit mode) 201.8 63.0 3.2x Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
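To see why this matters, compare the two forms of the theta parity computation (illustrative only, not the kernel's exact code):

        u64 bc[5];
        int i;

        /* index-computed form: offsets depend on the loop counter, so the
         * compiler tends to keep the whole state array in memory
         */
        for (i = 0; i < 5; i++)
                bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15] ^ st[i + 20];

        /* fixed-index form: every offset is a literal constant, so the 25
         * state words can be promoted to registers
         */
        bc[0] = st[0] ^ st[5] ^ st[10] ^ st[15] ^ st[20];
        bc[1] = st[1] ^ st[6] ^ st[11] ^ st[16] ^ st[21];
        bc[2] = st[2] ^ st[7] ^ st[12] ^ st[17] ^ st[22];
        bc[3] = st[3] ^ st[8] ^ st[13] ^ st[18] ^ st[23];
        bc[4] = st[4] ^ st[9] ^ st[14] ^ st[19] ^ st[24];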
2018-01-25crypto: sha3-generic - fixes for alignment and big endian operationArd Biesheuvel1-2/+3
Ensure that the input is byte swabbed before injecting it into the SHA3 transform. Use the get_unaligned() accessor for this so that we don't perform unaligned access inadvertently on architectures that do not support that. Cc: <stable@vger.kernel.org> Fixes: 53964b9ee63b7075 ("crypto: sha3 - Add SHA-3 hash algorithm") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
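The absorb step then becomes (paraphrased; see the commit for the exact change):

        /* get_unaligned_le64() both byte-swaps on big endian hosts and
         * tolerates an unaligned 'data' pointer
         */
        for (i = 0; i < sctx->rsizw; i++)
                sctx->st[i] ^= get_unaligned_le64(data + 8 * i);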
2018-01-25crypto: testmgr - test misuse of result in ahashKamil Konieczny1-0/+39
Async hash operations can use result pointer in final/finup/digest, but not in init/update/export/import, so test it for misuse. Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-20crypto: aes-generic - fix aes-generic regression on powerpcArnd Bergmann1-1/+1
My last bugfix added -Os on the command line, which unfortunately caused a build regression on powerpc in some configurations. I've done some more analysis of the original problem and found slightly different workaround that avoids this regression and also results in better performance on gcc-7.0: -fcode-hoisting is an optimization step that got added in gcc-7 and that for all gcc-7 versions causes worse performance. This disables -fcode-hoisting on all compilers that understand the option. For gcc-7.1 and 7.2 I found the same performance as my previous patch (using -Os), in gcc-7.0 it was even better. On gcc-8 I could see no change in performance from this patch. In theory, code hoisting should not be able make things better for the AES cipher, so leaving it disabled for gcc-8 only serves to simplify the Makefile change. Reported-by: kbuild test robot <fengguang.wu@intel.com> Link: https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg30418.html Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356 Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651 Fixes: 148b974deea9 ("crypto: aes-generic - build with -Os on gcc-7+") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12Merge branch 'linus' of ↵Linus Torvalds1-0/+12
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 Pull crypto fix from Herbert Xu: "This fixes a NULL pointer dereference in crypto_remove_spawns that can be triggered through af_alg" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: algapi - fix NULL dereference in crypto_remove_spawns()
2018-01-12crypto: x86/salsa20 - cleanup and convert to skcipher APIEric Biggers1-0/+2
Convert salsa20-asm from the deprecated "blkcipher" API to the "skcipher" API, in the process fixing it up to use the generic helpers. This allows removing the salsa20_keysetup() and salsa20_ivsetup() assembly functions, which aren't performance critical; the C versions do just fine. This also fixes the same bug that salsa20-generic had, where the state array was being maintained directly in the transform context rather than on the stack or in the request context. Thus, if multiple threads used the same Salsa20 transform concurrently they produced the wrong results. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: salsa20 - export generic helpersEric Biggers1-13/+7
Export the Salsa20 constants, transform context, and initialization functions so that they can be reused by the x86 implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: salsa20-generic - cleanup and convert to skcipher APIEric Biggers1-136/+104
Convert salsa20-generic from the deprecated "blkcipher" API to the "skcipher" API, in the process fixing it up to be thread-safe (as the crypto API expects) by maintaining each request's state separately from the transform context. Also remove the unnecessary cra_alignmask and tighten validation of the key size by accepting only 16 or 32 bytes, not anything in between. These changes bring the code close to the way chacha20-generic does things, so hopefully it will be easier to maintain in the future. However, the way Salsa20 interprets the IV is still slightly different; that was not changed. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
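The thread-safety aspect boils down to where the keystream state lives; a simplified sketch, with the helper name assumed for illustration:

        u32 state[16];                          /* per-request, on the stack */
        const struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm);

        /* Derive the state from the key (shared, in ctx) and this request's
         * IV.  Keeping it out of 'ctx' means concurrent requests on the same
         * tfm no longer clobber each other.
         */
        salsa20_init(state, ctx, req->iv);      /* helper name assumed */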
2018-01-12crypto: aes-generic - build with -Os on gcc-7+Arnd Bergmann1-0/+1
While testing other changes, I discovered that gcc-7.2.1 produces badly optimized code for aes_encrypt/aes_decrypt. This is especially true when CONFIG_UBSAN_SANITIZE_ALL is enabled, where it leads to extremely large stack usage that in turn might cause kernel stack overflows: crypto/aes_generic.c: In function 'aes_encrypt': crypto/aes_generic.c:1371:1: warning: the frame size of 4880 bytes is larger than 2048 bytes [-Wframe-larger-than=] crypto/aes_generic.c: In function 'aes_decrypt': crypto/aes_generic.c:1441:1: warning: the frame size of 4864 bytes is larger than 2048 bytes [-Wframe-larger-than=] I verified that this problem exists on all architectures that are supported by gcc-7.2, though arm64 in particular is less affected than the others. I also found that gcc-7.1 and gcc-8 do not show the extreme stack usage but still produce worse code than earlier versions for this file, apparently because of optimization passes that generally provide a substantial improvement in object code quality but understandably fail to find any shortcuts in the AES algorithm. Possible workarounds include a) disabling -ftree-pre and -ftree-sra optimizations, this was an earlier patch I tried, which reliably fixed the stack usage, but caused a serious performance regression in some versions, as later testing found. b) disabling UBSAN on this file or all ciphers, as suggested by Ard Biesheuvel. This would lead to massively better crypto performance in UBSAN-enabled kernels and avoid the stack usage, but there is a concern over whether we should exclude arbitrary files from UBSAN at all. c) Forcing the optimization level in a different way. Similar to a), but rather than deselecting specific optimization stages, this now uses "gcc -Os" for this file, regardless of the CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE/SIZE option. This is a reliable workaround for the stack consumption on all architecture, and I've retested the performance results now on x86, cycles/byte (lower is better) for cbc(aes-generic) with 256 bit keys: -O2 -Os gcc-6.3.1 14.9 15.1 gcc-7.0.1 14.7 15.3 gcc-7.1.1 15.3 14.7 gcc-7.2.1 16.8 15.9 gcc-8.0.0 15.5 15.6 This implements the option c) by enabling forcing -Os on all compiler versions starting with gcc-7.1. As a workaround for PR83356, it would only be needed for gcc-7.2+ with UBSAN enabled, but since it also shows better performance on gcc-7.1 without UBSAN, it seems appropriate to use the faster version here as well. Side note: during testing, I also played with the AES code in libressl, which had a similar performance regression from gcc-6 to gcc-7.2, but was three times slower overall. It might be interesting to investigate that further and possibly port the Linux implementation into that. Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356 Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651 Cc: Richard Biener <rguenther@suse.de> Cc: Jakub Jelinek <jakub@gcc.gnu.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: aead - prevent using AEADs without setting keyEric Biggers2-10/+14
Similar to what was done for the hash API, update the AEAD API to track whether each transform has been keyed, and reject encryption/decryption if a key is needed but one hasn't been set. This isn't quite as important as the equivalent fix for the hash API because AEADs always require a key, so are unlikely to be used without one. Still, tracking the key will prevent accidental unkeyed use. algif_aead also had to track the key anyway, so the new flag replaces that and slightly simplifies the algif_aead implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: skcipher - prevent using skciphers without setting keyEric Biggers2-50/+39
Similar to what was done for the hash API, update the skcipher API to track whether each transform has been keyed, and reject encryption/decryption if a key is needed but one hasn't been set. This isn't as important as the equivalent fix for the hash API because symmetric ciphers almost always require a key (the "null cipher" is the only exception), so are unlikely to be used without one. Still, tracking the key will prevent accidental unkeyed use. algif_skcipher also had to track the key anyway, so the new flag replaces that and simplifies the algif_skcipher implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: ghash - remove checks for key being setEric Biggers1-6/+0
Now that the crypto API prevents a keyed hash from being used without setting the key, there's no need for GHASH to do this check itself. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: hash - prevent using keyed hashes without setting keyEric Biggers3-50/+49
Currently, almost none of the keyed hash algorithms check whether a key has been set before proceeding. Some algorithms are okay with this and will effectively just use a key of all 0's or some other bogus default. However, others will severely break, as demonstrated using "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash via a (potentially exploitable) stack buffer overflow. A while ago, this problem was solved for AF_ALG by pairing each hash transform with a 'has_key' bool. However, there are still other places in the kernel where userspace can specify an arbitrary hash algorithm by name, and the kernel uses it as unkeyed hash without checking whether it is really unkeyed. Examples of this include: - KEYCTL_DH_COMPUTE, via the KDF extension - dm-verity - dm-crypt, via the ESSIV support - dm-integrity, via the "internal hash" mode with no key given - drbd (Distributed Replicated Block Device) This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no privileges to call. Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the ->crt_flags of each hash transform that indicates whether the transform still needs to be keyed or not. Then, make the hash init, import, and digest functions return -ENOKEY if the key is still needed. The new flag also replaces the 'has_key' bool which algif_hash was previously using, thereby simplifying the algif_hash implementation. Reported-by: syzbot <syzkaller@googlegroups.com> Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
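The enforcement itself is a simple flag test at the API entry points; a sketch:

        /* ->setkey() clears CRYPTO_TFM_NEED_KEY on success; until then,
         * init/import/digest on a keyed algorithm is refused.
         */
        if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                return -ENOKEY;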
2018-01-12crypto: hash - annotate algorithms taking optional keyEric Biggers4-8/+8
We need to consistently enforce that keyed hashes cannot be used without setting the key. To do this we need a reliable way to determine whether a given hash algorithm is keyed or not. AF_ALG currently does this by checking for the presence of a ->setkey() method. However, this is actually slightly broken because the CRC-32 algorithms implement ->setkey() but can also be used without a key. (The CRC-32 "key" is not actually a cryptographic key but rather represents the initial state. If not overridden, then a default initial state is used.) Prepare to fix this by introducing a flag CRYPTO_ALG_OPTIONAL_KEY which indicates that the algorithm has a ->setkey() method, but it is not required to be called. Then set it on all the CRC-32 algorithms. The same also applies to the Adler-32 implementation in Lustre. Also, the cryptd and mcryptd templates have to pass through the flag from their underlying algorithm. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
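For a CRC-32 style algorithm the flag is simply set in the algorithm definition; an abbreviated, hypothetical example:

static struct shash_alg crc32c_alg = {
        .setkey         = crc32c_setkey,        /* optional: overrides the default initial state */
        /* ... other ops ... */
        .base           = {
                .cra_name       = "crc32c",
                .cra_flags      = CRYPTO_ALG_OPTIONAL_KEY,
                /* ... */
        },
};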
2018-01-12crypto: poly1305 - remove ->setkey() methodEric Biggers1-12/+5
Since Poly1305 requires a nonce per invocation, the Linux kernel implementations of Poly1305 don't use the crypto API's keying mechanism and instead expect the key and nonce as the first 32 bytes of the data. But ->setkey() is still defined as a stub returning an error code. This prevents Poly1305 from being used through AF_ALG and will also break it completely once we start enforcing that all crypto API users (not just AF_ALG) call ->setkey() if present. Fix it by removing crypto_poly1305_setkey(), leaving ->setkey as NULL. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: mcryptd - pass through absence of ->setkey()Eric Biggers1-1/+2
When the mcryptd template is used to wrap an unkeyed hash algorithm, don't install a ->setkey() method to the mcryptd instance. This change is necessary for mcryptd to keep working with unkeyed hash algorithms once we start enforcing that ->setkey() is called when present. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: cryptd - pass through absence of ->setkey()Eric Biggers1-1/+2
When the cryptd template is used to wrap an unkeyed hash algorithm, don't install a ->setkey() method to the cryptd instance. This change is necessary for cryptd to keep working with unkeyed hash algorithms once we start enforcing that ->setkey() is called when present. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: hash - introduce crypto_hash_alg_has_setkey()Eric Biggers1-0/+11
Templates that use an shash spawn can use crypto_shash_alg_has_setkey() to determine whether the underlying algorithm requires a key or not. But there was no corresponding function for ahash spawns. Add it. Note that the new function actually has to support both shash and ahash algorithms, since the ahash API can be used with either. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: tcrypt - free xoutbuf instead of axbufColin Ian King1-1/+1
There seems to be a cut-n-paste bug with the name of the buffer being free'd: xoutbuf should be used instead of axbuf. Detected by CoverityScan, CID#1463420 ("Copy-paste error") Fixes: 427988d981c4 ("crypto: tcrypt - add multibuf aead speed test") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: tcrypt - fix spelling mistake: "bufufer"-> "buffer"Colin Ian King1-2/+2
Trivial fix to spelling mistakes in pr_err error message text. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12crypto: af_alg - whitelist mask and typeStephan Mueller1-4/+6
The user space interface allows specifying the type and mask field used to allocate the cipher. Only a subset of the possible flags are intended for user space. Therefore, white-list the allowed flags. In case the user space caller uses at least one non-allowed flag, EINVAL is returned. Reported-by: syzbot <syzkaller@googlegroups.com> Cc: <stable@vger.kernel.org> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
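The whitelist itself is a mask check in the bind path; a sketch in which the exact set of allowed flags is an assumption:

        const u32 allowed = CRYPTO_ALG_KERN_DRIVER_ONLY;        /* assumed allowed set */

        /* reject any type/mask bits that are not meant for user space */
        if ((sa->salg_feat & ~allowed) || (sa->salg_mask & ~allowed))
                return -EINVAL;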
2018-01-12crypto: testmgr - change `guard` to unsigned charJoey Pabalinas1-1/+1
When char is signed, storing the values 0xba (186) and 0xad (173) in the `guard` array produces signed overflow. Change the type of `guard` to static unsigned char to correct undefined behavior and reduce function stack usage. Signed-off-by: Joey Pabalinas <joeypabalinas@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
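A standalone illustration of the underlying issue (not the kernel test code):

#include <stdio.h>

int main(void)
{
        char s = 0xba;          /* if plain char is signed, 186 does not fit */
        unsigned char u = 0xba; /* always holds 186 */

        /* on signed-char targets this typically prints "-70 186" */
        printf("%d %d\n", s, u);
        return 0;
}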
2018-01-06crypto: scompress - use sgl_alloc() and sgl_free()Bart Van Assche2-49/+3
Use the sgl_alloc() and sgl_free() functions instead of open coding these functions. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-01-05Merge branch 'linus' of ↵Linus Torvalds5-14/+19
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 Pull crypto fixes from Herbert Xu: "This fixes the following issues: - racy use of ctx->rcvused in af_alg - algif_aead crash in chacha20poly1305 - freeing bogus pointer in pcrypt - build error on MIPS in mpi - memory leak in inside-secure - memory overwrite in inside-secure - NULL pointer dereference in inside-secure - state corruption in inside-secure - build error without CRYPTO_GF128MUL in chelsio - use after free in n2" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: inside-secure - do not use areq->result for partial results crypto: inside-secure - fix request allocations in invalidation path crypto: inside-secure - free requests even if their handling failed crypto: inside-secure - per request invalidation lib/mpi: Fix umul_ppmm() for MIPS64r6 crypto: pcrypt - fix freeing pcrypt instances crypto: n2 - cure use after free crypto: af_alg - Fix race around ctx->rcvused by making it atomic_t crypto: chacha20poly1305 - validate the digest size crypto: chelsio - select CRYPTO_GF128MUL
2018-01-05crypto: poly1305 - remove cra_alignmaskEric Biggers1-1/+0
Now that nothing in poly1305-generic assumes any special alignment, remove the cra_alignmask so that the crypto API does not have to unnecessarily align the buffers. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-05crypto: poly1305 - use unaligned access macros to output digestEric Biggers1-5/+4
Currently the only part of poly1305-generic which is assuming special alignment is the part where the final digest is written. Switch this over to the unaligned access macros so that we'll be able to remove the cra_alignmask. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
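The digest write-out then becomes a handful of unaligned little-endian stores (paraphrased):

        /* no alignment requirement on 'dst' */
        put_unaligned_le32(digest[0], dst + 0);
        put_unaligned_le32(digest[1], dst + 4);
        put_unaligned_le32(digest[2], dst + 8);
        put_unaligned_le32(digest[3], dst + 12);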