Age  Commit message  Author  Files  Lines
2016-07-05  crypto: qat - Add RSA CRT mode  Salvatore Benedetto  1  -25/+209
Extend qat driver to use RSA CRT mode when all CRT related components are present in the private key. Simplify code in qat_rsa_setkey by adding qat_rsa_clear_ctx. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: testmgr - Add 4K private key to RSA testvector  Salvatore Benedetto  1  -1/+199
Key generated with openssl. It also contains all fields required for testing CRT mode. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: rsa - Store rest of the private key components  Salvatore Benedetto  3  -5/+100
When parsing a private key, store all non-optional fields. These are required for enabling CRT mode for decrypt and verify. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
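For reference, CRT decryption replaces one full-size private exponentiation with two half-size ones built from p, q, dp, dq and qinv, roughly a 3-4x speedup in theory. A minimal userspace sketch using the classic textbook toy key (not the kernel's MPI code) shows the recombination:

    #include <stdio.h>
    #include <stdint.h>

    /* square-and-multiply modular exponentiation on small numbers */
    static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m)
    {
        uint64_t r = 1;

        b %= m;
        while (e) {
            if (e & 1)
                r = r * b % m;
            b = b * b % m;
            e >>= 1;
        }
        return r;
    }

    int main(void)
    {
        /* textbook toy key: n = p*q = 3233, e = 17, d = 2753 */
        uint64_t p = 61, q = 53, d = 2753;
        uint64_t dp = d % (p - 1);  /* 53 */
        uint64_t dq = d % (q - 1);  /* 49 */
        uint64_t qinv = 38;         /* q^-1 mod p */
        uint64_t c = 2790;          /* ciphertext of m = 65 under e = 17 */

        /* two half-size exponentiations instead of one full-size one */
        uint64_t m1 = modpow(c, dp, p);
        uint64_t m2 = modpow(c, dq, q);
        /* Garner recombination: m = m2 + q * (qinv * (m1 - m2) mod p) */
        uint64_t h = qinv * ((m1 + p - m2 % p) % p) % p;
        uint64_t m = m2 + h * q;

        printf("recovered m = %llu\n", (unsigned long long)m);  /* prints 65 */
        return 0;
    }

With real key sizes the same recombination is done on MPI values, which is why the driver prefers CRT whenever all of the extra components are present.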
2016-07-05  crypto: qat - Use alternative reset methods depending on the specific device  Conor McLoughlin  6  -9/+43
Different product families will use FLR or SBR. Virtual Function devices have no reset method. Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: bfin_crc - Simplify use of devm_ioremap_resource  Amitoj Kaur Chawla  1  -5/+0
Remove unneeded error handling on the result of a call to platform_get_resource when the value is passed to devm_ioremap_resource. The Coccinelle semantic patch that makes this change is as follows:

// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@

-  res = platform_get_resource(pdev, IORESOURCE_MEM, n);
   ... when != res
-  if (res == NULL) { ... \(goto l;\|return ret;\) }
   ... when != res
+  res = platform_get_resource(pdev, IORESOURCE_MEM, n);
   e = devm_ioremap_resource(e1, res);
// </smpl>

Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
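The transformation is safe because devm_ioremap_resource() itself rejects a NULL resource and returns an ERR_PTR, so the explicit check adds nothing. A sketch of the resulting probe pattern (driver and variable names are illustrative):

    #include <linux/platform_device.h>
    #include <linux/io.h>
    #include <linux/err.h>

    static int example_probe(struct platform_device *pdev)
    {
        struct resource *res;
        void __iomem *base;

        /* no NULL check needed: devm_ioremap_resource() validates res */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(base))
            return PTR_ERR(base);

        /* ... use base ... */
        return 0;
    }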
2016-07-05  crypto: caam - add support for RSA algorithm  Tudor Ambarus  9  -1/+789
Add RSA support to caam driver. Initial author is Yashpal Dutta <yashpal.dutta@freescale.com>. Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: testmgr - Set err before proceeding  Salvatore Benedetto  1  -0/+1
Report the correct error in case of failure. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: qat - Switch to new rsa_helper functions  Salvatore Benedetto  5  -55/+21
Drop all asn1-related code and use the new rsa_helper functions rsa_parse_[pub|priv]_key for parsing the key. Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  crypto: powerpc - Add POWER8 optimised crc32c  Anton Blanchard  5  -0/+1745
Use the vector polynomial multiply-sum instructions in POWER8 to speed up crc32c. This is just over 41x faster than the slice-by-8 method that it replaces. Measurements on a 4.1 GHz POWER8 show it sustaining 52 GiB/sec. A simple btrfs write performance test (dd if=/dev/zero of=/mnt/tmpfile bs=1M count=4096; sync) is over 3.7x faster. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-05  powerpc: define FUNC_START/FUNC_END  Anton Blanchard  1  -0/+3
gcc provides FUNC_START/FUNC_END macros to help with creating assembly functions. Mirror these in the kernel so we can more easily share code between userspace and the kernel. FUNC_END is just a stub since we don't currently annotate the end of kernel functions. It might make sense to do a wholesale search and replace, but for now just create a couple of defines. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
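The kernel-side definitions are presumably trivial wrappers; something along these lines (the exact expansion is an assumption here, not quoted from the patch):

    /* powerpc asm macros mirroring the gcc/userspace names */
    #define FUNC_START(name)    _GLOBAL(name)   /* emit a global entry point */
    #define FUNC_END(name)                      /* stub: no end-of-function annotation in-kernel */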
2016-07-03  crypto: rsa-pkcs1pad - Fix regression from leading zeros  Herbert Xu  1  -16/+22
As the software RSA implementation now produces fixed-length output, we need to eliminate leading zeros in the calling code instead. This patch does just that for pkcs1pad signature verification. Fixes: 9b45b7bba3d2 ("crypto: rsa - Generate fixed-length output") Reported-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: sha3 - Add HMAC-SHA3 test modes and test vectors  Raveendra Padasalagi  3  -0/+444
This patch adds HMAC-SHA3 test modes in tcrypt module and related test vectors. Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: omap-sham - increase cra_priority to 400  Bin Liu  1  -12/+12
The arm-neon-sha implementations have cra_priority of 150...300, so increase omap-sham priority to 400 to ensure it is on top of any software alg. Signed-off-by: Bin Liu <b-liu@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: tcrypt - Do not bail on EINPROGRESS in multibuffer hash test  Herbert Xu  1  -1/+3
The multibuffer hash speed test is incorrectly bailing because of an EINPROGRESS return value. This patch fixes it by setting ret to zero if it is equal to -EINPROGRESS. Reported-by: Megha Dey <megha.dey@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
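For an async transform, -EINPROGRESS only means the request was queued, so the speed test should treat it as success; the fix boils down to something like:

        ret = crypto_ahash_digest(req);
        if (ret == -EINPROGRESS)
            ret = 0;    /* queued asynchronously: not an error, wait for completion later */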
2016-07-01  crypto: rsa-pkcs1pad - Avoid copying output when possible  Herbert Xu  1  -67/+45
In the vast majority of cases (all but a 2^-32 fraction on 32-bit and 2^-64 on 64-bit), the result from encryption/signing will require no padding. This patch makes these two operations write their output directly to the final destination. Only in the exceedingly rare cases where a fixup is needed do we copy it out and back to add the leading zeroes. This patch also makes use of the crypto_akcipher_set_crypt API instead of writing the akcipher request directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: rsa-pkcs1pad - Move key size check to setkey  Herbert Xu  1  -30/+26
Rather than repeatedly checking the key size on each operation, we should be checking it once when the key is set. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: rsa-pkcs1pad - Always use GFP_KERNEL  Herbert Xu  1  -16/+6
We don't currently support using akcipher in atomic contexts, so GFP_KERNEL should always be used. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: rsa-pkcs1pad - Remove bogus page splitting  Herbert Xu  1  -14/+5
The helper pkcs1pad_sg_set_buf tries to split a buffer that crosses a page boundary into two SG entries. This is unnecessary. This patch removes that. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
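An SG entry describes a physically contiguous range, and kmalloc'ed buffers are physically contiguous, so one entry suffices even when the buffer straddles a page boundary. A minimal sketch (the helper name is illustrative):

    #include <linux/scatterlist.h>

    static void example_sg_set_buf(struct scatterlist *sg, void *buf,
                                   unsigned int len)
    {
        sg_init_table(sg, 1);
        sg_set_buf(sg, buf, len);   /* a single entry covers the whole buffer */
    }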
2016-07-01  crypto: rsa-pkcs1pad - Require hash to be present  Herbert Xu  1  -53/+30
The only user of rsa-pkcs1pad always uses the hash so there is no reason to support the case of not having a hash. This patch also changes the digest info lookup so that it is only done once during template instantiation rather than on each operation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  lib/mpi: Do not do sg_virt  Herbert Xu  1  -36/+50
Currently the mpi SG helpers use sg_virt which is completely broken. It happens to work with normal kernel memory but will fail with anything that is not linearly mapped. This patch fixes this by using the SG iterator helpers. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
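sg_virt() only works for identity-mapped lowmem pages; the SG mapping iterator maps each entry as it walks the list, so it also works for pages that are not linearly mapped. A hedged sketch of the pattern (not the actual lib/mpi code):

    #include <linux/scatterlist.h>
    #include <linux/kernel.h>
    #include <linux/string.h>

    /* copy the first len bytes of an SG list into a linear buffer */
    static void example_sg_copy(struct scatterlist *sgl, unsigned int nents,
                                u8 *out, size_t len)
    {
        struct sg_mapping_iter miter;

        sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG);
        while (len && sg_miter_next(&miter)) {
            size_t n = min(len, miter.length);

            memcpy(out, miter.addr, n); /* miter.addr is mapped for us */
            out += n;
            len -= n;
        }
        sg_miter_stop(&miter);
    }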
2016-07-01  crypto: rsa - Generate fixed-length output  Herbert Xu  4  -35/+32
Every implementation of RSA that we have naturally generates output with leading zeroes. The one and only user of RSA, pkcs1pad, wants those leading zeroes in place; in fact, because they are currently absent, it has to write them itself. So we shouldn't be stripping leading zeroes in the first place. In fact this patch makes rsa-generic produce output with fixed length so that pkcs1pad does not need to do any extra work. This patch also changes DH to use the new interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: testmgr - Allow leading zeros in RSA  Herbert Xu  1  -27/+24
This patch allows RSA implementations to produce output with leading zeroes. testmgr will skip leading zeroes when comparing the output. This patch also tries to make the RSA test function generic enough to potentially handle other akcipher algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
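Skipping the leading zeroes before comparing is straightforward; in outline (not the actual testmgr code):

    #include <linux/types.h>
    #include <linux/string.h>

    /* compare out[outlen] with expected[explen], ignoring leading zeroes in out */
    static bool example_out_matches(const u8 *out, size_t outlen,
                                    const u8 *expected, size_t explen)
    {
        while (outlen > explen && *out == 0) {
            out++;
            outlen--;
        }
        return outlen == explen && !memcmp(out, expected, explen);
    }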
2016-07-01  crypto: tcrypt - Add speed test for cts  Herbert Xu  1  -0/+8
This patch adds speed tests for cts(cbc(aes)). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: api - Add crypto_inst_setname  Herbert Xu  2  -7/+19
This patch adds the helper crypto_inst_setname because the current helper crypto_alloc_instance2 is no longer useful given that we now look up the algorithm after we allocate the instance object. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: s390/aes - Use skcipher for fallback  Herbert Xu  1  -53/+60
This patch replaces use of the obsolete blkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: sahara - Use skcipher for fallback  Herbert Xu  1  -62/+50
This patch replaces use of the obsolete ablkcipher with skcipher. It also removes shash_fallback which is totally unused. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: qce - Use skcipher for fallback  Herbert Xu  2  -12/+17
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: picoxcell - Use skcipher for fallback  Herbert Xu  1  -29/+31
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: mxs-dcp - Use skcipher for fallback  Herbert Xu  1  -26/+21
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: ccp - Use skcipher for fallback  Herbert Xu  2  -25/+21
This patch replaces use of the obsolete ablkcipher with skcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: aesni - Use crypto_cipher to derive rfc4106 subkey  Herbert Xu  1  -65/+11
Currently aesni uses an async ctr(aes) to derive the rfc4106 subkey, which was presumably copied over from the generic rfc4106 code. Over there it's done that way because we already have a ctr(aes) spawn. But it is simply overkill for aesni since we have to go get a ctr(aes) from scratch anyway. This patch simplifies the subkey derivation by using a straight aes cipher instead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
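The RFC 4106 hash subkey is just one AES encryption of an all-zero block with the supplied key, so a synchronous cipher is all that is needed. A hedged sketch of that derivation (not the aesni code itself):

    #include <linux/crypto.h>
    #include <linux/err.h>
    #include <linux/string.h>
    #include <crypto/aes.h>

    static int example_derive_hash_subkey(const u8 *key, unsigned int keylen,
                                          u8 subkey[AES_BLOCK_SIZE])
    {
        struct crypto_cipher *tfm;
        int err;

        tfm = crypto_alloc_cipher("aes", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        err = crypto_cipher_setkey(tfm, key, keylen);
        if (!err) {
            /* H = E_K(0^128) */
            memset(subkey, 0, AES_BLOCK_SIZE);
            crypto_cipher_encrypt_one(tfm, subkey, subkey);
        }

        crypto_free_cipher(tfm);
        return err;
    }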
2016-07-01  crypto: tcrypt - Use skcipher  Herbert Xu  1  -197/+44
This patch converts tcrypt to use the new skcipher interface as opposed to ablkcipher/blkcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: ahash - Add padding in crypto_ahash_extsize  Herbert Xu  1  -3/+3
The function crypto_ahash_extsize did not include padding when computing the tfm context size. This patch fixes this by using the generic crypto_alg_extsize helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: authenc - Consider ahash ASYNC bit  Herbert Xu  2  -4/+8
As it is, if you get an async ahash with a sync skcipher you'll end up with a sync authenc, which is wrong. This patch fixes it by considering the ASYNC bit from ahash as well. It also fixes a little bug where, if a sync version of authenc is requested, we may still end up using an async ahash. Neither issue should have any effect in practice, as none of the authenc users can request a sync authenc. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
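Schematically, the instance flag has to be derived from both children rather than from the skcipher alone (variable names below are illustrative, not lifted from the patch):

        /* async if either the ahash or the skcipher child is async */
        inst_flags = (ahash_alg_flags | skcipher_alg_flags) & CRYPTO_ALG_ASYNC;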
2016-06-29  crypto: authenc - Remove redundant sg_init_table call.  Harsh Jain  1  -6/+1
Remove redundant sg_init_table call; scatterwalk_ffwd already does the same. Signed-off-by: Harsh Jain <harshjain.prof@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-29  crypto: tcrypt - Fix memory leaks/crashes in multibuffer hash speed test  Herbert Xu  1  -58/+71
This patch resolves a number of issues with the mb speed test function:

* The tfm is never freed.
* Memory is allocated even when we're not using mb.
* When an error occurs we don't wait for completion for other requests.
* When an error occurs during allocation we may leak memory.
* The test function ignores plen but still runs for plen != blen.
* The backlog flag is incorrectly used (may crash).

This patch tries to resolve all these issues as well as making the code consistent with the existing hash speed testing function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
2016-06-28  crypto: tcrypt - Use unsigned long for mb ahash cycle counter  Herbert Xu  1  -5/+5
For the timescales we are working against there is no need to go beyond unsigned long. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: tcrypt - Fix mixing printk/pr_err and obvious indentation issues  Krzysztof Kozlowski  1  -19/+14
The recently added test_mb_ahash_speed() clearly has serious coding style issues. Try to fix some of them:

1. Don't mix pr_err() and printk();
2. Don't wrap strings;
3. Properly align goto statement in if() block;
4. Align wrapped arguments on new line;
5. Don't wrap functions on first argument.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: tcrypt - Add new mode for sha512_mb  Megha Dey  1  -0/+4
Add a new mode to calculate the speed of the sha512_mb algorithm. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: sha512-mb - Crypto computation (x4 AVX2)  Megha Dey  1  -0/+529
This patch introduces the assembly routines to do SHA512 computation on buffers belonging to several jobs at once. The assembly routines are optimized with AVX2 instructions, operating on 4 data lanes in AVX2 registers. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: sha512-mb - Algorithm data structures  Megha Dey  3  -0/+515
This patch introduces the data structures and prototypes of functions needed for computing SHA512 hash using multi-buffer. Included are the structures of the multi-buffer SHA512 job, job scheduler in C and x86 assembly. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: sha512-mb - submit/flush routines for AVX2  Megha Dey  3  -0/+580
This patch introduces the routines used to submit and flush buffers belonging to SHA512 crypto jobs to the SHA512 multibuffer algorithm. It is implemented mostly in assembly optimized with AVX2 instructions. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: sha512-mb - Enable SHA512 multibuffer support  Megha Dey  1  -0/+16
Add the config CRYPTO_SHA512_MB which will enable the computation using the SHA512 multi-buffer algorithm. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: sha512-mb - SHA512 multibuffer job manager and glue code  Megha Dey  3  -0/+1055
This patch introduces the multi-buffer job manager which is responsible for submitting scatter-gather buffers from several SHA512 jobs to the multi-buffer algorithm. It also contains the flush routine that's called by the crypto daemon to complete the job when no new jobs arrive before the deadline of maximum latency of a SHA512 crypto job. The SHA512 multi-buffer crypto algorithm is defined and initialized in this patch. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-28  crypto: ux500 - do not build with -O0  Arnd Bergmann  2  -4/+4
The ARM allmodconfig build currently warns because of the ux500 crypto driver not working well with the jump label implementation that we started using for dynamic debug, which breaks building with 'gcc -O0':

In file included from /git/arm-soc/include/linux/jump_label.h:105:0,
                 from /git/arm-soc/include/linux/dynamic_debug.h:5,
                 from /git/arm-soc/include/linux/printk.h:289,
                 from /git/arm-soc/include/linux/kernel.h:13,
                 from /git/arm-soc/include/linux/clk.h:16,
                 from /git/arm-soc/drivers/crypto/ux500/hash/hash_core.c:16:
/git/arm-soc/arch/arm/include/asm/jump_label.h: In function 'hash_set_dma_transfer':
/git/arm-soc/arch/arm/include/asm/jump_label.h:13:7: error: asm operand 0 probably doesn't match constraints [-Werror]
  asm_volatile_goto("1:\n\t"

Turning off compiler optimizations has never really been supported here, and it's only used when debugging the driver. I have not found a good reason for doing this here, other than a misguided attempt to produce more readable assembly output. Also, the driver is only used in obsolete hardware that almost certainly nobody will spend time debugging any more. This just removes the -O0 flag from the compiler options. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-27  hwrng: omap - Fix assumption that runtime_get_sync will always succeed  Nishanth Menon  1  -2/+14
pm_runtime_get_sync does return an error value that must be checked for error conditions; otherwise, for various reasons, the device may not be enabled and the system will crash due to lack of clock to the hardware module.

Before:
[   12.562784] [00000000] *pgd=fe193835
[   12.562792] Internal error: : 1406 [#1] SMP ARM
[...]
[   12.562864] CPU: 1 PID: 241 Comm: modprobe Not tainted 4.7.0-rc4-next-20160624 #2
[   12.562867] Hardware name: Generic DRA74X (Flattened Device Tree)
[   12.562872] task: ed51f140 ti: ed44c000 task.ti: ed44c000
[   12.562886] PC is at omap4_rng_init+0x20/0x84 [omap_rng]
[   12.562899] LR is at set_current_rng+0xc0/0x154 [rng_core]
[...]

After the proper checks:
[   94.366705] omap_rng 48090000.rng: _od_fail_runtime_resume: FIXME: missing hwmod/omap_dev info
[   94.375767] omap_rng 48090000.rng: Failed to runtime_get device -19
[   94.382351] omap_rng 48090000.rng: initialization failed.

Fixes: 665d92fa85b5 ("hwrng: OMAP: convert to use runtime PM") Cc: Paul Walmsley <paul@pwsan.com> Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
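pm_runtime_get_sync() bumps the usage count even when it fails, so the error path needs a matching put as well as the check; a sketch of the usual pattern:

    #include <linux/device.h>
    #include <linux/pm_runtime.h>

    static int example_runtime_get(struct device *dev)
    {
        int ret;

        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
            dev_err(dev, "Failed to runtime_get device: %d\n", ret);
            pm_runtime_put_noidle(dev); /* undo the usage-count increment */
            return ret;
        }
        return 0;
    }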
2016-06-27  MAINTAINERS: update maintainer for qat  Tadeusz Struk  1  -1/+2
Add Giovanni and Salvatore who will take over the qat maintenance. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-27  crypto: sha1-mb - rename sha-mb to sha1-mb  Megha Dey  10  -4/+4
Until now, there was only support for the SHA1 multibuffer algorithm. Hence, there was just one sha-mb folder. Now, with the introduction of the SHA256 multi-buffer algorithm, it is logical to rename the existing folder to sha1-mb. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-27  crypto: tcrypt - Add speed tests for SHA multibuffer algorithms  Megha Dey  1  -0/+118
The existing test suite to calculate the speed of the SHA algorithms assumes serial (single-buffer) computation of data. With the SHA multibuffer algorithms, we work on 8 lanes of data in parallel; hence the need to introduce a new test suite to calculate the speed for these algorithms. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-27  crypto: sha256-mb - Crypto computation (x8 AVX2)  Megha Dey  1  -0/+593
This patch introduces the assembly routines to do SHA256 computation on buffers belonging to several jobs at once. The assembly routines are optimized with AVX2 instructions, operating on 8 data lanes in AVX2 registers. Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>