path: root/crypto
Age | Commit message | Author | Files | Lines
2014-07-04 | crypto: drbg - Use Kconfig to ensure at least one RNG option is set | Herbert Xu | 3 | -16/+10
This patch removes the build-time test that ensures at least one RNG is set. Instead we will simply not build drbg if no options are set through Kconfig. This also fixes a typo in the name of the Kconfig option CRYTPO_DRBG (should be CRYPTO_DRBG). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-04 | crypto: drbg - use of kernel linked list | Stephan Mueller | 1 | -109/+124
The DRBG-style linked list used to manage input data that is fed into the cipher invocations is replaced with the kernel linked list implementation. The change is transparent to users of the interfaces offered by the DRBG. Therefore, no changes to the testmgr code are needed. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
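A minimal sketch of the data-structure pattern this change moves to, using the generic kernel list API; the struct and helper names below are illustrative stand-ins, not the DRBG's own:

    #include <linux/list.h>
    #include <linux/types.h>

    struct seed_string {                    /* one chunk of input/seed material */
            const unsigned char *buf;
            size_t len;
            struct list_head list;          /* hooks the chunk into the input list */
    };

    /* Sum the length of all queued input chunks before a cipher invocation. */
    static size_t total_input_len(struct list_head *inputs)
    {
            struct seed_string *s;
            size_t len = 0;

            list_for_each_entry(s, inputs, list)
                    len += s->len;
            return len;
    }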
2014-07-04 | crypto: drbg - fix memory corruption for AES192 | Stephan Mueller | 1 | -3/+11
For the CTR DRBG, the drbg_state->scratchpad temp buffer (i.e. the memory location immediately before the drbg_state->tfm variable) is the buffer that the BCC function operates on. BCC operates blockwise. Making the temp buffer drbg_statelen(drbg) in size is sufficient when the DRBG state length is a multiple of the block size. For AES192 this is not the case, and the length of temp is insufficient (which also means that for such ciphers the final output of all BCC rounds is truncated before it is used to update the state of the DRBG!). The patch enlarges the temp buffer from drbg_statelen to drbg_statelen + drbg_blocklen to provide sufficient space. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
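A hedged fragment showing the sizing rule described above; drbg_statelen() and drbg_blocklen() are the DRBG helpers named in the message, while the surrounding allocation path is illustrative:

    unsigned char *temp;

    /* Give the BCC scratch area one extra block so a state length that is
     * not block-aligned (e.g. AES-192: 40-byte state, 16-byte block) still
     * has room for the final, full BCC output block. */
    temp = kzalloc(drbg_statelen(drbg) + drbg_blocklen(drbg), GFP_KERNEL);
    if (!temp)
            return -ENOMEM;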
2014-07-03 | crypto: tcrypt - print cra driver name in tcrypt tests output | Luca Clementi | 1 | -11/+20
Print the driver name that is being tested. The driver name can be inferred by parsing /proc/crypto, but having it in the output is clearer. Signed-off-by: Luca Clementi <luca.clementi@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-03 | crypto: fips - only panic on bad/missing crypto mod signatures | Jarod Wilson | 2 | -0/+15
Per further discussion with NIST, the requirements for FIPS state that we only need to panic the system on failed kernel module signature checks for crypto subsystem modules. This moves the fips-mode-only module signature check out of the generic module loading code, into the crypto subsystem, at points where we can catch both algorithm module loads and mode module loads. At the same time, make CONFIG_CRYPTO_FIPS dependent on CONFIG_MODULE_SIG, as this is entirely necessary for FIPS mode. v2: remove extraneous blank line, perform checks in static inline function, drop no longer necessary fips.h include. CC: "David S. Miller" <davem@davemloft.net> CC: Rusty Russell <rusty@rustcorp.com.au> CC: Stephan Mueller <stephan.mueller@atsec.com> Signed-off-by: Jarod Wilson <jarod@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-03 | X.509: Export certificate parse and free functions | David Howells | 1 | -0/+3
Export certificate parse and free functions for use by modules. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Vivek Goyal <vgoyal@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Josh Boyer <jwboyer@redhat.com>
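A hedged illustration of what such an export typically looks like; the exact symbols exported by this patch are not shown in this log, so the names below are assumptions:

    /* Make the X.509 parser usable from other modules (e.g. a PKCS#7 parser). */
    EXPORT_SYMBOL_GPL(x509_cert_parse);
    EXPORT_SYMBOL_GPL(x509_free_certificate);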
2014-07-01 | X.509: Add bits needed for PKCS#7 | David Howells | 3 | -2/+30
PKCS#7 validation requires access to the serial number and the raw names in an X.509 certificate. Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Josh Boyer <jwboyer@redhat.com>
2014-06-26 | crypto: drbg - simplify ordering of linked list in drbg_ctr_df | Stephan Mueller | 1 | -5/+5
As reported by a static code analyzer, the code for the ordering of the linked list can be simplified. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-25 | crypto: lzo - use kvfree() helper | Eric Dumazet | 1 | -4/+1
The kvfree() helper is now available; use it instead of open-coding it. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
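A minimal sketch of the simplification, assuming a context buffer that may have come from either allocator; the function name is illustrative:

    #include <linux/mm.h>

    static void lzo_free_ctx_sketch(void *ctx)
    {
            /* was: is_vmalloc_addr(ctx) ? vfree(ctx) : kfree(ctx) */
            kvfree(ctx);
    }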
2014-06-20 | crypto: testmgr - add 4 more test vectors for GHASH | Ard Biesheuvel | 1 | -4/+45
This adds 4 test vectors for GHASH (one of which is for chunked mode), making a total of 5. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: des_3des - add x86-64 assembly implementation | Jussi Kivilinna | 2 | -5/+30
This patch adds an x86_64 assembly implementation of the Triple DES EDE cipher algorithm. Two assembly implementations are provided. The first is a regular 'one block at a time' encrypt/decrypt function; the second is a 'three blocks at a time' function that gains a performance increase on out-of-order CPUs.

tcrypt test results, Intel Core i5-4570, des3_ede-asm vs des3_ede-generic:

    size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec
    16B     1.21x    1.22x    1.27x    1.36x    1.25x    1.25x
    64B     1.98x    1.96x    1.23x    2.04x    2.01x    2.00x
    256B    2.34x    2.37x    1.21x    2.40x    2.38x    2.39x
    1024B   2.50x    2.47x    1.22x    2.51x    2.52x    2.51x
    8192B   2.51x    2.53x    1.21x    2.56x    2.54x    2.55x

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: tcrypt - add ctr(des3_ede) sync speed test | Jussi Kivilinna | 1 | -0/+6
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: drbg - Add DRBG test code to testmgr | Stephan Mueller | 1 | -0/+247
The DRBG test code implements the CAVS test approach. As discussed for the test vectors, all DRBG types are covered by testing. However, not every backend cipher is covered by testing. To prevent testmgr from complaining about missing tests, the NULL test is registered for all backend ciphers not covered by specific test cases. All currently implemented DRBG types and backend ciphers are defined in SP800-90A; therefore, the fips_allowed flag is set for all of them. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: drbg - DRBG testmgr test vectors | Stephan Mueller | 1 | -0/+843
All types of the DRBG (CTR, HMAC, Hash) are covered with test vectors. In addition, all permutations of DRBG use cases are covered:

* with and without prediction resistance
* with and without an additional information string
* with and without a personalization string

As the DRBG implementation is agnostic of the specific backend cipher, test vectors for only one specific backend cipher are used per DRBG type. For example, the Hash DRBG uses the same code paths irrespective of whether SHA-256 or SHA-512 is used; thus, the SHA-256 test vectors also cover the DRBG code paths that SHA-512 would exercise. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: drbg - compile the DRBG code | Stephan Mueller | 1 | -0/+1
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: drbg - DRBG kernel configuration options | Stephan Mueller | 1 | -1/+35
The different DRBG types CTR, Hash, and HMAC can be enabled or disabled at compile time. At least one DRBG type shall be selected. The default is the HMAC DRBG, as its code base is the smallest. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 | crypto: drbg - SP800-90A Deterministic Random Bit Generator | Stephan Mueller | 1 | -0/+2007
This is a clean-room implementation of the DRBG defined in SP800-90A. All three viable DRBGs defined in the standard are implemented:

* HMAC: the leanest DRBG, compiled by default
* Hash: a more complex DRBG that can be enabled at compile time
* CTR: the most complex DRBG, which can also be enabled at compile time

The DRBG implementation offers the following:

* All three DRBG types are implemented with a derivation function.
* All DRBG types are available with and without prediction resistance.
* All SHA types SHA-1, SHA-256, SHA-384 and SHA-512 are available for the HMAC and Hash DRBGs.
* All AES types AES-128, AES-192 and AES-256 are available for the CTR DRBG.
* A self test is implemented with drbg_healthcheck().
* The FIPS 140-2 continuous self test is implemented.

Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
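A hedged usage sketch, not taken from the patch itself: once registered, a DRBG instance is reachable through the regular crypto_rng interface. The instance name below follows the naming scheme implied by the log (HMAC flavour, no prediction resistance) and is an assumption here:

    #include <crypto/rng.h>
    #include <linux/err.h>

    static int drbg_demo(u8 *out, unsigned int outlen)
    {
            struct crypto_rng *rng;
            int ret;

            /* "drbg_nopr_hmac_sha256" is assumed for illustration */
            rng = crypto_alloc_rng("drbg_nopr_hmac_sha256", 0, 0);
            if (IS_ERR(rng))
                    return PTR_ERR(rng);

            ret = crypto_rng_get_bytes(rng, out, outlen);
            crypto_free_rng(rng);
            return ret;
    }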
2014-06-20 | crypto: lzo - try kmalloc() before vmalloc() | Eric Dumazet | 1 | -2/+9
zswap allocates one LZO context per online cpu. Using vmalloc() for small (16KB) memory areas has the drawbacks of slowing down /proc/vmallocinfo and /proc/meminfo reads, TLB pressure, and poor NUMA locality, as the default NUMA policy at boot time is to interleave pages:

    edumazet:~# grep lzo /proc/vmallocinfo | head -4
    0xffffc90006062000-0xffffc90006067000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
    0xffffc90006067000-0xffffc9000606c000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
    0xffffc9000606c000-0xffffc90006071000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2
    0xffffc90006071000-0xffffc90006076000 20480 lzo_init+0x1b/0x30 pages=4 vmalloc N0=2 N1=2

This patch tries a regular kmalloc() and falls back to vmalloc() in case memory is too fragmented. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
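A minimal sketch of the allocation strategy described above, assuming the LZO compression workspace; the function name is illustrative:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <linux/lzo.h>

    static void *lzo_alloc_workmem(void)
    {
            void *mem;

            /* Try a physically contiguous allocation first, quietly ... */
            mem = kmalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL | __GFP_NOWARN);
            /* ... and fall back to vmalloc() if memory is too fragmented. */
            if (!mem)
                    mem = vmalloc(LZO1X_MEM_COMPRESS);
            return mem;
    }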
2014-06-08 | Merge tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel | Linus Torvalds | 1 | -1/+2
Pull LLVM patches from Behan Webster: "Next set of patches to support compiling the kernel with clang. They've been soaking in linux-next since the last merge window. More still in the works for the next merge window..."

* tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel:
  arm, unwind, LLVMLinux: Enable clang to be used for unwinding the stack
  ARM: LLVMLinux: Change "extern inline" to "static inline" in glue-cache.h
  all: LLVMLinux: Change DWARF flag to support gcc and clang
  net: netfilter: LLVMLinux: vlais-netfilter
  crypto: LLVMLinux: aligned-attribute.patch
2014-06-08 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6 into next | Linus Torvalds | 5 | -53/+1674
Pull crypto updates from Herbert Xu: "Here is the crypto update for 3.16:
  - Added test vectors for SHA/AES-CCM/DES-CBC/3DES-CBC.
  - Fixed a number of error-path memory leaks in tcrypt.
  - Fixed error-path memory leak in caam.
  - Removed unnecessary global mutex from mxs-dcp.
  - Added ahash walk interface that can actually be asynchronous.
  - Cleaned up caam error reporting.
  - Allow crypto_user get operation to be used by non-root users.
  - Add support for SSS module on Exynos.
  - Misc fixes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6: (60 commits)
  crypto: testmgr - add aead cbc des, des3_ede tests
  crypto: testmgr - Fix DMA-API warning
  crypto: cesa - tfm->__crt_alg->cra_type directly
  crypto: sahara - tfm->__crt_alg->cra_name directly
  crypto: padlock - tfm->__crt_alg->cra_name directly
  crypto: n2 - tfm->__crt_alg->cra_name directly
  crypto: dcp - tfm->__crt_alg->cra_name directly
  crypto: cesa - tfm->__crt_alg->cra_name directly
  crypto: ccp - tfm->__crt_alg->cra_name directly
  crypto: geode - Don't use tfm->__crt_alg->cra_name directly
  crypto: geode - Weed out printk() from probe()
  crypto: geode - Consistently use AES_KEYSIZE_128
  crypto: geode - Kill AES_IV_LENGTH
  crypto: geode - Kill AES_MIN_BLOCK_SIZE
  crypto: mxs-dcp - Remove global mutex
  crypto: hash - Add real ahash walk interface
  hwrng: n2-drv - Introduce the use of the managed version of kzalloc
  crypto: caam - reinitialize keys_fit_inline for decrypt and givencrypt
  crypto: s5p-sss - fix multiplatform build
  hwrng: timeriomem - remove unnecessary OOM messages
  ...
2014-06-07 | crypto: LLVMLinux: aligned-attribute.patch | Mark Charlebois | 1 | -1/+2
__attribute__((aligned)) applies the default alignment for the largest scalar type for the target ABI. gcc allows it to be applied inline to a defined type. Clang only allows it to be applied to a type definition (PR11071). Making it into 2 lines makes it more readable and works with both compilers. Author: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com>
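A hedged, generic illustration of the constraint described above (not the actual hunk from this patch): applying the attribute to a type definition, then declaring the variable from that type, is accepted by both gcc and clang:

    /* attribute on the type definition: fine for gcc and clang */
    struct blk_buf {
            unsigned char b[64];
    } __attribute__((aligned));

    /* plain declaration using the already-aligned type */
    static struct blk_buf scratch;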
2014-06-03 | Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next | Linus Torvalds | 1 | -1/+1
Pull core locking updates from Ingo Molnar: "The main changes in this cycle were:
  - reduced/streamlined smp_mb__*() interface that allows more usecases and makes the existing ones less buggy, especially in rarer architectures
  - add rwsem implementation comments
  - bump up lockdep limits"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  rwsem: Add comments to explain the meaning of the rwsem's count field
  lockdep: Increase static allocations
  arch: Mass conversion of smp_mb__*()
  arch,doc: Convert smp_mb__*()
  arch,xtensa: Convert smp_mb__*()
  arch,x86: Convert smp_mb__*()
  arch,tile: Convert smp_mb__*()
  arch,sparc: Convert smp_mb__*()
  arch,sh: Convert smp_mb__*()
  arch,score: Convert smp_mb__*()
  arch,s390: Convert smp_mb__*()
  arch,powerpc: Convert smp_mb__*()
  arch,parisc: Convert smp_mb__*()
  arch,openrisc: Convert smp_mb__*()
  arch,mn10300: Convert smp_mb__*()
  arch,mips: Convert smp_mb__*()
  arch,metag: Convert smp_mb__*()
  arch,m68k: Convert smp_mb__*()
  arch,m32r: Convert smp_mb__*()
  arch,ia64: Convert smp_mb__*()
  ...
2014-05-22 | crypto: testmgr - add aead cbc des, des3_ede tests | Nitesh Lal | 3 | -23/+848
Test vectors were taken from the existing test for CBC(DES3_EDE). Associated data has been added to the test vectors, and the HMAC was computed with Crypto++. The following algos have been covered:

(a) "authenc(hmac(sha1),cbc(des))"
(b) "authenc(hmac(sha1),cbc(des3_ede))"
(c) "authenc(hmac(sha224),cbc(des))"
(d) "authenc(hmac(sha224),cbc(des3_ede))"
(e) "authenc(hmac(sha256),cbc(des))"
(f) "authenc(hmac(sha256),cbc(des3_ede))"
(g) "authenc(hmac(sha384),cbc(des))"
(h) "authenc(hmac(sha384),cbc(des3_ede))"
(i) "authenc(hmac(sha512),cbc(des))"
(j) "authenc(hmac(sha512),cbc(des3_ede))"

Signed-off-by: Vakul Garg <vakul@freescale.com> [NiteshNarayanLal@freescale.com: added hooks for the missing algorithms test and tested the patch] Signed-off-by: Nitesh Lal <NiteshNarayanLal@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-05-22 | crypto: testmgr - Fix DMA-API warning | Tadeusz Struk | 1 | -2/+5
With DMA-API debug enabled, testmgr triggers a "DMA-API: device driver maps memory from stack" warning when tested on a crypto HW accelerator. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-05-21 | crypto: hash - Add real ahash walk interface | Herbert Xu | 1 | -5/+36
Although the existing hash walk interface has already been used by a number of ahash crypto drivers, it turns out that none of them were really asynchronous. They were all essentially polling for completion. That's why nobody has noticed until now that the walk interface couldn't work with a real asynchronous driver since the memory is mapped using kmap_atomic. As we now have a use-case for a real ahash implementation on x86, this patch creates a minimal ahash walk interface. Basically it just calls kmap instead of kmap_atomic and does away with the crypto_yield call. Real ahash crypto drivers don't need to yield since by definition they won't be hogging the CPU. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
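A hedged sketch of how a driver might consume such a walk; the exact helper names and loop shape are assumptions modelled on the existing hash walk interface, and error handling is trimmed:

    #include <crypto/internal/hash.h>

    static int demo_ahash_update(struct ahash_request *req)
    {
            struct crypto_hash_walk walk;
            int nbytes;

            /* The ahash variant maps pages with kmap(), so the data can be
             * handed to hardware and the walk finished from process context. */
            for (nbytes = crypto_ahash_walk_first(req, &walk); nbytes > 0;
                 nbytes = crypto_hash_walk_done(&walk, 0)) {
                    /* feed walk.data / nbytes to the accelerator here */
            }
            return nbytes;
    }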
2014-05-08 | crypto: user - Allow CRYPTO_MSG_GETALG without CAP_NET_ADMIN | Matthias-Christian Ott | 1 | -3/+9
CRYPTO_USER requires CAP_NET_ADMIN for all operations. Most information provided by CRYPTO_MSG_GETALG is also accessible through /proc/modules and AF_ALG. CRYPTO_MSG_GETALG should not require CAP_NET_ADMIN so that processes without CAP_NET_ADMIN can use CRYPTO_MSG_GETALG to get cipher details, such as cipher priorities, for AF_ALG. Signed-off-by: Matthias-Christian Ott <ott@mirix.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-04-28 | crypto: tcrypt - Fix leak of struct aead_request in test_aead_speed() | Christian Engelmayer | 1 | -1/+3
Fix leakage of memory for struct aead_request that is allocated via aead_request_alloc() but not released via aead_request_free(). Reported by Coverity - CID 1163869. Signed-off-by: Christian Engelmayer <cengelma@gmx.at> Reviewed-by: Marek Vasut <marex@denx.de> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-04-28 | crypto: tcrypt - Fix potential leak in test_aead_speed() if crypto_alloc_aead() fails | Christian Engelmayer | 1 | -1/+2
Fix a potential memory leak in the error handling of test_aead_speed(). In case crypto_alloc_aead() fails, the function returns without going through the centralized cleanup path. Reported by Coverity - CID 1163870. Signed-off-by: Christian Engelmayer <cengelma@gmx.at> Reviewed-by: Marek Vasut <marex@denx.de> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-04-28 | crypto: tcrypt - Fix potential leak in test_aead_speed() if aad_size is too big | Christian Engelmayer | 1 | -8/+6
Fix a potential memory leak in the error handling of test_aead_speed(). In case the size check on the associate data length parameter fails, the function goes through the wrong exit label. Reported by Coverity - CID 1163870. Signed-off-by: Christian Engelmayer <cengelma@gmx.at> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-04-24 | net: Use netlink_ns_capable to verify the permissions of netlink messages | Eric W. Biederman | 1 | -1/+1
By passing a netlink socket to a more privileged executable, it is possible to fool that executable into writing data to the socket that happens to be a valid netlink message, and thereby make the privileged executable do something it did not intend to do. To keep this from happening, replace bare capable and ns_capable calls with netlink_capable, netlink_net_capable and netlink_ns_capable calls, which act the same as the previous calls except that they also verify that the opener of the socket had the desired permissions. Reported-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
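A minimal sketch of the kind of check this series switches netlink handlers to; the handler itself is illustrative and the real dispatch details are omitted:

    #include <linux/netlink.h>
    #include <linux/capability.h>

    static int demo_doit(struct sk_buff *skb, struct nlmsghdr *nlh)
    {
            /* was: if (!capable(CAP_NET_ADMIN)) — check the socket opener,
             * not whoever happens to be writing to the socket now. */
            if (!netlink_capable(skb, CAP_NET_ADMIN))
                    return -EPERM;

            /* ... handle the message ... */
            return 0;
    }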
2014-04-18 | arch: Mass conversion of smp_mb__*() | Peter Zijlstra | 1 | -1/+1
Mostly scripted conversion of the smp_mb__* barriers. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-arch@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16 | crypto: testmgr - add empty and large test vectors for SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 | Jussi Kivilinna | 1 | -7/+721
This patch adds large test vectors for the SHA algorithms for better code coverage in optimized assembly implementations. Empty test vectors are also added, as some crypto drivers appear to have special-case handling for empty input. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-04-16 | crypto: testmgr - add test cases for SHA-1, SHA-224, SHA-256 and AES-CCM | Ard Biesheuvel | 1 | -6/+47
This adds test cases for SHA-1, SHA-224, SHA-256 and AES-CCM with an input size that is an exact multiple of the block size. The reason is that some implementations use a different code path for these cases. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: sha - SHA1 transform x86_64 AVX2 | chandramouli narayanan | 1 | -2/+2
This patch adds an x86_64 AVX2 optimization of the SHA1 transform to the crypto subsystem. The patch has been tested with the 3.14.0-rc1 kernel. On a Haswell desktop, with turbo disabled and all cpus running at maximum frequency, tcrypt shows an AVX2 performance improvement over the AVX implementation ranging from 3% for 256-byte updates to 16% for 1024-byte updates. The patch adds sha1_avx2_transform() plus the glue, build and configuration changes needed for the AVX2 optimization of the SHA1 transform. sha1-ssse3 is the module that provides the optimization support (SSSE3/AVX/AVX2) for the low-level SHA1 transform function; when better optimization support is detected, the transform function is overridden accordingly. In the case of AVX2, for performance reasons across data block sizes, either the AVX or the AVX2 transform function is chosen at run time, whichever suits best. The Makefile change therefore appends the necessary objects to the linkage; the patch merely appends the AVX2 transform to the existing build mix and Kconfig support and leaves the configuration build support as is. Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com> Reviewed-by: Marek Vasut <marex@denx.de> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: crypto_wq - Fix late crypto work queue initialization | Tim Chen | 1 | -1/+1
The crypto algorithm modules utilizing the crypto daemon can be used early during system startup. Using module_init does not guarantee that the daemon's work queue is initialized when a crypto algorithm depending on crypto_wq starts. It is necessary to initialize the crypto work queue earlier, at subsystem init time, to make sure it is initialized when used. Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
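A hedged sketch of the ordering fix, assuming the workqueue setup in crypto/crypto_wq.c looks roughly like this; the workqueue name and flags are illustrative:

    #include <linux/workqueue.h>
    #include <linux/init.h>

    struct workqueue_struct *kcrypto_wq;

    static int __init crypto_wq_init(void)
    {
            kcrypto_wq = alloc_workqueue("crypto",
                                         WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1);
            return kcrypto_wq ? 0 : -ENOMEM;
    }
    /* was: module_init(crypto_wq_init) — too late for early users */
    subsys_initcall(crypto_wq_init);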
2014-03-21 | crypto: testmgr - add aead null encryption test vectors | Horia Geanta | 3 | -0/+220
Add test vectors for aead with null encryption and md5 or sha1 authentication, respectively. Input data is taken from the test vectors listed in RFC2410. Signed-off-by: Horia Geanta <horia.geanta@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: export NULL algorithms defines | Horia Geanta | 1 | -5/+1
These defines might be needed by crypto drivers. Signed-off-by: Horia Geanta <horia.geanta@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: hash - Simplify the ahash_finup implementation | Marek Vasut | 1 | -27/+9
The ahash_def_finup() can make use of the request save/restore functions, thus make it so. This simplifies the code a little and unifies the code paths. Note that the same remark about free()ing the req->priv applies here: the req->priv can only be free()'d after the original request is restored. Finally, squash a bug in the invocation of the completion in the ASYNC path. In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err); was called with X=areq->base.data. This is incorrect, as X=&areq->base is the correct value. By analysis of the data structures, we see that areq is of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request' and areq->base.complete is of type crypto_completion_t, which is defined in include/linux/crypto.h as:

    typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);

This is one hint that X should be &areq->base. Next, we can inspect other code that calls the completion callback to get a rough statistical idea of how this callback is used; we can try:

    $ git grep base\.complete\( drivers/crypto/

Finally, by inspecting the ahash_request_set_callback() implementation defined in include/crypto/hash.h, we observe that the .data entry of 'struct crypto_async_request' is intended for arbitrary data, not for the completion argument. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
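A minimal sketch of the completion fix described above; the callback and the way areq is recovered are illustrative, mirroring only the pattern named in the message:

    static void demo_finup_done(struct crypto_async_request *req, int err)
    {
            struct ahash_request *areq = req->data;  /* illustrative recovery */

            /* ... restore the original request state first ... */

            /* was: areq->base.complete(areq->base.data, err); */
            areq->base.complete(&areq->base, err);
    }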
2014-03-21 | crypto: hash - Pull out the functions to save/restore request | Marek Vasut | 1 | -45/+62
The function that saves the original request within a newly adjusted request, and its counterpart that restores the original request, can be re-used by more code in the crypto/ahash.c file. Pull these functions out of the code so they are available. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: hash - Fix the pointer voodoo in unaligned ahash | Marek Vasut | 1 | -7/+49
Add documentation for the pointer voodoo that is happening in crypto/ahash.c in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk of documentation. Moreover, make sure the mangled request is completely restored after finishing this unaligned operation. This means restoring all of .result, .base.data and .base.complete. Also, remove the 'crypto_completion_t complete = ...' line present in the ahash_op_unaligned_done() function; this type actually declares a function pointer, which is very confusing. Finally, and very importantly, make sure req->priv is free()'d only after the original request is restored in ahash_op_unaligned_done(). The req->priv data must not be free()'d before that in ahash_op_unaligned_finish(), since we would be accessing previously free()'d data in ahash_op_unaligned_done() and cause corruption. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 | crypto: allow blkcipher walks over AEAD data | Ard Biesheuvel | 1 | -0/+14
This adds the function blkcipher_aead_walk_virt_block, which allows the caller to use the blkcipher walk API to handle the input and output scatterlists. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 | crypto: remove direct blkcipher_walk dependency on transform | Ard Biesheuvel | 1 | -34/+33
In order to allow other uses of the blkcipher walk API than the blkcipher algos themselves, this patch copies some of the transform data members to the walk struct so the transform is only accessed at walk init time. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-02-25 | CRC32C: Add soft module dependency to load other accelerated crc32c modules | Tim Chen | 2 | -1/+3
We add a soft module dependency on the crc32c module alias to the generic crc32c module so that other hardware-accelerated crc32c modules can be loaded and used before the generic version. We also rename crypto/crc32c.c, which contains the generic crc32c crypto computation, to crypto/crc32c_generic.c according to convention. Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
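A hedged sketch of what a soft module dependency declaration looks like; the module it lives in and the exact dependency string used by this patch are assumptions:

    #include <linux/module.h>

    /* Hint to the module loader: load providers of "crc32c" before this one,
     * so an accelerated implementation can register ahead of the generic one. */
    MODULE_SOFTDEP("pre: crc32c");
    MODULE_ALIAS("crc32c");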
2014-01-24 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 6 | -32/+340
Pull crypto update from Herbert Xu: "Here is the crypto update for 3.14:
  - Improved crypto_memneq helper
  - Use crypto_memneq in arch-specific crypto code
  - Replaced orphaned DCP driver with Freescale MXS DCP driver
  - Added AVX/AVX2 version of AESNI-GCM encode and decode
  - Added AMD Cryptographic Coprocessor (CCP) driver
  - Misc fixes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (41 commits)
  crypto: aesni - fix build on x86 (32bit)
  crypto: mxs - Fix sparse non static symbol warning
  crypto: ccp - CCP device enabled/disabled changes
  crypto: ccp - Cleanup hash invocation calls
  crypto: ccp - Change data length declarations to u64
  crypto: ccp - Check for caller result area before using it
  crypto: ccp - Cleanup scatterlist usage
  crypto: ccp - Apply appropriate gfp_t type to memory allocations
  crypto: drivers - Sort drivers/crypto/Makefile
  ARM: mxs: dts: Enable DCP for MXS
  crypto: mxs - Add Freescale MXS DCP driver
  crypto: mxs - Remove the old DCP driver
  crypto: ahash - Fully restore ahash request before completing
  crypto: aesni - fix build on x86 (32bit)
  crypto: talitos - Remove redundant dev_set_drvdata
  crypto: ccp - Remove redundant dev_set_drvdata
  crypto: crypto4xx - Remove redundant dev_set_drvdata
  crypto: caam - simplify and harden key parsing
  crypto: omap-sham - Fix Polling mode for larger blocks
  crypto: tcrypt - Added speed tests for AEAD crypto alogrithms in tcrypt test suite
  ...
2014-01-05 | crypto: ahash - Fully restore ahash request before completing | Marek Vasut | 1 | -1/+4
When finishing the ahash request, ahash_op_unaligned_done() will call complete() on the request. Yet, this will not call the correct complete callback. The correct complete callback was previously stored in the request's private data, as seen in ahash_op_unaligned(). This patch restores the correct complete callback and the .data field of the request before calling complete() on it. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-12-20 | crypto: tcrypt - Added speed tests for AEAD crypto algorithms in tcrypt test suite | Tim Chen | 2 | -0/+280
Adding simple speed tests for a range of block sizes for AEAD crypto algorithms. Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-12-09 | crypto: memneq - fix for archs without efficient unaligned access | Daniel Borkmann | 1 | -1/+2
Commit fe8c8a126806 introduced a possible build error for archs that do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set. Fix this up by bringing the else braces outside of the ifdef. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Fixes: fe8c8a126806 ("crypto: more robust crypto_memneq") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-By: Cesar Eduardo Barros <cesarb@cesarb.eti.br> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
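A hedged, generic illustration of the shape of such a fix (not the exact hunk): the braces of the else clause must stay outside the #ifdef so the function still compiles when the fast path is configured out:

    static unsigned long cmp_sketch(const unsigned long *a,
                                    const unsigned long *b, int aligned_ok)
    {
            unsigned long neq = 0;

    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
            if (aligned_ok)
                    neq |= *a ^ *b;                  /* word-wise fast path */
            else
    #endif
            {
                    neq |= *(const unsigned char *)a ^
                           *(const unsigned char *)b;  /* byte-wise fallback */
            }
            return neq;
    }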
2013-12-05 | crypto: pcrypt - Fix wrong usage of rcu_dereference() | Mathias Krause | 1 | -1/+1
A kernel with lockdep enabled complains about the wrong usage of rcu_dereference() under a rcu_read_lock_bh() protected region:

    ===============================
    [ INFO: suspicious RCU usage. ]
    3.13.0-rc1+ #126 Not tainted
    -------------------------------
    linux/crypto/pcrypt.c:81 suspicious rcu_dereference_check() usage!

    other info that might help us debug this:
    rcu_scheduler_active = 1, debug_locks = 1
    1 lock held by cryptomgr_test/153:
    #0: (rcu_read_lock_bh){.+....}, at: [<ffffffff812c8075>] pcrypt_do_parallel.isra.2+0x5/0x200

Fix that by using rcu_dereference_bh() instead. Signed-off-by: Mathias Krause <minipli@googlemail.com> Cc: "David S. Miller" <davem@davemloft.net> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
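A minimal, self-contained sketch of the rule the fix applies: inside an rcu_read_lock_bh() section, use the _bh flavour of rcu_dereference(). The structure and pointer below are illustrative, not pcrypt's own:

    #include <linux/rcupdate.h>

    struct demo_state { int val; };
    static struct demo_state __rcu *demo_ptr;

    static int demo_read(void)
    {
            struct demo_state *st;
            int val = 0;

            rcu_read_lock_bh();
            /* was: rcu_dereference(demo_ptr) — trips lockdep under _bh */
            st = rcu_dereference_bh(demo_ptr);
            if (st)
                    val = st->val;
            rcu_read_unlock_bh();
            return val;
    }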
2013-12-05 | crypto: more robust crypto_memneq | Cesar Eduardo Barros | 2 | -30/+54
Disabling compiler optimizations can be fragile, since a new optimization could be added to -O0 or -Os that breaks the assumptions the code is making. Instead of disabling compiler optimizations, use a dummy inline assembly (based on RELOC_HIDE) to block the problematic kinds of optimization, while still allowing other optimizations to be applied to the code. The dummy inline assembly is added after every OR, and has the accumulator variable as its input and output. The compiler is forced to assume that the dummy inline assembly could both depend on the accumulator variable and change the accumulator variable, so it is forced to compute the value correctly before the inline assembly, and cannot assume anything about its value after the inline assembly. This change should be enough to make crypto_memneq work correctly (with data-independent timing) even if it is inlined at its call sites. That can be done later in a followup patch. Compile-tested on x86_64. Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.eti.br> Acked-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
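A simplified, hedged sketch of the technique (the real helper is crypto_memneq() and works word-at-a-time): a dummy asm statement that claims to use and possibly modify the accumulator after every OR keeps the compiler from short-circuiting or collapsing the comparison:

    static unsigned long memneq_sketch(const unsigned char *a,
                                       const unsigned char *b, size_t n)
    {
            unsigned long neq = 0;
            size_t i;

            for (i = 0; i < n; i++) {
                    neq |= a[i] ^ b[i];
                    /* Hide neq's value from the optimizer after each OR. */
                    asm volatile("" : "+r" (neq));
            }
            return neq;
    }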
2013-12-04 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 4 | -18/+22
Pull crypto fixes from Herbert Xu: "This push fixes a number of crashes triggered by a previous crypto self-test update. It also fixes a build problem in the caam driver, as well as a concurrency issue in s390. Finally there is a pair of fixes to bugs in the crypto scatterwalk code and authenc that may lead to crashes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: testmgr - fix sglen in test_aead for case 'dst != src'
  crypto: talitos - fix aead sglen for case 'dst != src'
  crypto: caam - fix aead sglen for case 'dst != src'
  crypto: ccm - Fix handling of zero plaintext when computing mac
  crypto: s390 - Fix aes-xts parameter corruption
  crypto: talitos - corrrectly handle zero-length assoc data
  crypto: scatterwalk - Set the chain pointer indication bit
  crypto: authenc - Find proper IV address in ablkcipher callback
  crypto: caam - Add missing Job Ring include