|
The header was defining CRCPOLY_LE/BE and CRC32C_POLY_LE, but in fact all of
them are CRC-32 polynomials, so use consistent naming.
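For illustration, consistent naming could end up looking like the sketch
below (the exact macro names are an assumption based on this description;
the values are the standard CRC-32 and CRC-32C constants):

    /* CRC-32 (IEEE 802.3) polynomial, little- and big-endian bit order */
    #define CRC32_POLY_LE   0xedb88320
    #define CRC32_POLY_BE   0x04c11db7

    /* CRC-32C (Castagnoli) polynomial, little-endian bit order */
    #define CRC32C_POLY_LE  0x82f63b78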
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Allow other drivers and parts of the kernel to use the same define for
the CRC32 polynomial, instead of duplicating it in many places. This
does not bring any functional changes apart from moving existing code.
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for probing on ACPI systems, with ACPI HID QCOM8160.
On ACPI systems, clocks are always enabled, the PRNG should
already be enabled, and the register region is read-only.
The driver only verifies that the hardware is already
enabled and never tries to disable or configure it.
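A minimal sketch of how the ACPI match might be wired up (apart from the
QCOM8160 HID, the identifier names below are assumptions):

    static const struct acpi_device_id qcom_rng_acpi_match[] = {
        { "QCOM8160", 0 },
        { }
    };
    MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);

    static struct platform_driver qcom_rng_driver = {
        .probe  = qcom_rng_probe,
        .remove = qcom_rng_remove,
        .driver = {
            .name             = "qcom-rng",
            .of_match_table   = of_match_ptr(qcom_rng_of_match),
            .acpi_match_table = ACPI_PTR(qcom_rng_acpi_match),
        },
    };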
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Jeffrey Hugo <jhugo@codeaurora.org>
[port to crypto API]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Qcom 8996 and later chips feature multiple Execution Environments
(EE), and the secure world is typically responsible for configuring the
prng.
Add driver data of 0 for qcom,prng and 1 for qcom,prng-ee, and use
that to skip the initialization routine.
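A sketch of the idea (only the compatibles and the 0/1 driver data come from
the description above; the other names are illustrative):

    static const struct of_device_id qcom_rng_of_match[] = {
        { .compatible = "qcom,prng",    .data = (void *)0 }, /* OS must configure the PRNG */
        { .compatible = "qcom,prng-ee", .data = (void *)1 }, /* secure world already did */
        { }
    };

    /* in probe(): skip the configure/enable sequence when the match data is non-zero */
    skip_init = (unsigned long)device_get_match_data(&pdev->dev);
    if (!skip_init)
        qcom_rng_enable(rng);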
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Later qcom chips support v2 of the prng, which exposes an EE
(Execution Environment) for the OS to use, so add a new compatible,
qcom,prng-ee, for this.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This ports the Qcom prng from the older hw_random driver. There is no
change in functionality beyond the move from the hw_random framework to
the crypto API.
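Roughly, instead of registering a struct hwrng the driver now registers a
struct rng_alg with the crypto API; a hedged sketch with assumed callback
names and field values:

    static struct rng_alg qcom_rng_alg = {
        .generate = qcom_rng_generate,   /* fill the caller's buffer with PRNG data */
        .seed     = qcom_rng_seed,       /* effectively a no-op for this hardware */
        .seedsize = 0,
        .base     = {
            .cra_name        = "stdrng",
            .cra_driver_name = "qcom-rng",
            .cra_priority    = 300,      /* illustrative value */
            .cra_module      = THIS_MODULE,
        },
    };

    /* in probe()/remove() */
    ret = crypto_register_rng(&qcom_rng_alg);
    crypto_unregister_rng(&qcom_rng_alg);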
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that we are adding a new driver for the prng in crypto, move the
binding as well.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver is for a pseudo-rng, so it should not live under hwrng.
Remove it so that its replacement can be added.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch fixes two typos related to unregistering algorithms supported by
SAHARA v3. In sahara_register_algs() the wrong algorithms are unregistered
in case of an error. In sahara_unregister_algs() the wrong array is used to
determine the iteration count.
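The pattern being fixed is roughly the following (a sketch, not the driver's
exact code; the array names are assumptions):

    /* error path of sahara_register_algs(): unwind only what was registered,
     * from the array that was actually being registered */
    err_aes_algs:
        for (j = 0; j < i; j++)
            crypto_unregister_alg(&aes_algs[j]);

    /* sahara_unregister_algs(): iterate over the matching array */
    for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++)
        crypto_unregister_ahash(&sha_v3_algs[i]);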
Signed-off-by: Michael Müller <michael@fds-team.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In the cipher safexcel_send_req function, GCC warns that
first_rdesc may be used uninitialized. While this should never
happen, this patch removes the warning by initializing this
variable to NULL to make GCC happy.
This was reported by the kbuild test robot.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the appropriate SPDX license identifiers and drop the license text.
This patch is only cosmetic.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is
required to make sure the SP800-56A public key test passes for P-192.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
By adding a zero byte-length for the DH parameter Q value, the public
key verification test is disabled for the given test.
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The CTR DRBG requires two SGLs pointing to input/output buffers for the
CTR AES operation. The used SGLs always have only one entry. Thus, the
SGL can be initialized during allocation time, preventing a
re-initialization of the SGLs during each call.
The performance is increased by about 1 to 3 percent depending on the
requested buffer size.
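The idea, as a hedged sketch (field and buffer names are assumptions):
initialize the one-entry scatterlists once when the DRBG state is allocated,
and per request only re-point them at the current buffers:

    /* at allocation time, once */
    sg_init_table(&drbg->sg_in, 1);
    sg_init_table(&drbg->sg_out, 1);

    /* per CTR AES operation, only update the buffer pointers/lengths */
    sg_set_buf(&drbg->sg_in, inbuf, inlen);
    sg_set_buf(&drbg->sg_out, drbg->outscratchpad, outlen);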
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In case memory resources for *base* were allocated, release them
before return.
Addresses-Coverity-ID: 1471702 ("Resource leak")
Fixes: e3fe0ae12962 ("crypto: dh - add public key verification test")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Cast *val* to u64 in order to give the compiler complete
information about the proper arithmetic to use.
Notice that this variable is used in a context that expects an
expression of type u64 (64 bits, unsigned) and the following
expression is currently being evaluated using 32-bit arithmetic:
val << bit_pos
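For illustration, with a 32-bit val and a bit_pos that can reach 32 or
beyond:

    u32 val = 1;
    u64 mask;

    mask = val << bit_pos;        /* shift performed in 32 bits: high bits are lost */
    mask = (u64)val << bit_pos;   /* shift performed in 64 bits, as intended */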
Addresses-Coverity-ID: 1467425 ("Unintentional integer overflow")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a new CCP/PSP PCI device ID and new PSP register offsets.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In preparation for adding a new PSP device ID that uses different register
offsets, add support to the PSP version data for register offset values.
And then update the code to use these new register offset values.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove some #defines for register offsets that are not used. This
will lessen the changes required when register offsets change between
versions of the device.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a dev_notice() message to the PSP initialization to report when the
PSP initialization has succeeded and the PSP is enabled.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The wait_event() function is used to detect command completion. The
interrupt handler will set the wait condition variable when the interrupt
is triggered. However, the variable used for wait_event() is initialized
after the command has been submitted, which can create a race condition
with the interrupt handler and result in the wait_event() never returning.
Move the initialization of the wait condition variable to just before
command submission.
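Schematically (the identifiers below are assumptions, not necessarily the
driver's exact names), the fix amounts to resetting the wait condition
before kicking off the command:

    /* initialize the wait condition first ... */
    psp->sev_int_rcvd = 0;

    /* ... then submit the command; the IRQ handler sets sev_int_rcvd */
    iowrite32(cmd_reg_val, psp->io_regs + PSP_CMDRESP_OFFSET);

    wait_event(psp->sev_int_queue, psp->sev_int_rcvd);

With the old ordering, the interrupt could fire and set the flag before it
was (re)initialized to zero, so wait_event() could sleep forever.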
Fixes: 200664d5237f ("crypto: ccp: Add Secure Encrypted Virtualization (SEV) command support")
Cc: <stable@vger.kernel.org> # 4.16.x-
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
A debug print about register status post interrupt can happen
quite often. Rate limit it to avoid cluttering the log.
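For instance, using the kernel's rate-limited variant (the message itself is
illustrative):

    dev_dbg_ratelimited(dev, "interrupt after command, irr=0x%08X\n", irr);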
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ccree driver implemented the NIST 800-38A CBC-CS2 ciphertext format,
which only reverses the last two blocks if the stolen ciphertext amount
is non-zero. Move it to the kernel's chosen format of CBC-CS3, which swaps
the final two blocks unconditionally, and rename it to "cts" now that it
complies with the kernel format and passes the self tests.
Ironically, the CryptoCell REE HW does just that, so the fix is dropping
the code that forced it to use plain CBC if the ciphertext was block
aligned.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove legacy code no longer used by anything.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
We were copying our last cipher block into the request for use as IV for
all modes of operation. Fix this by discerning the behaviour based on
the mode of operation used: copy ciphertext for CBC, update counter for
CTR.
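A sketch of the distinction (mode constants and field names are
assumptions):

    if (mode == DRV_CIPHER_CBC) {
        /* next IV is the last ciphertext block that was produced */
        scatterwalk_map_and_copy(req->iv, req->dst,
                                 req->cryptlen - ivsize, ivsize, 0);
    } else if (mode == DRV_CIPHER_CTR) {
        /* next IV is the counter advanced by the number of blocks handled */
        for (i = 0; i < req->cryptlen / AES_BLOCK_SIZE; i++)
            crypto_inc(req->iv, ivsize);
    }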
CC: stable@vger.kernel.org
Fixes: 63ee04c8b491 ("crypto: ccree - add skcipher support")
Reported-by: Hadar Gat <hadar.gat@arm.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The testmgr hash tests were testing init, digest, update and final
methods but not the finup method. Add a test for this one too.
While doing this, make sure we only run the partial tests once with
the digest tests and skip them with the final and finup tests since
they are the same.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The finup() operation was incorrect: padding was missing.
Fix this by setting the ccree HW to enable padding.
Signed-off-by: Hadar Gat <hadar.gat@arm.com>
[ gilad@benyossef.com: refactored for better code sharing ]
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some crypto API users allocating a tfm with crypto_alloc_$FOO() are also
specifying the type flags for $FOO, e.g. crypto_alloc_shash() with
CRYPTO_ALG_TYPE_SHASH. But, that's redundant since the crypto API will
override any specified type flag/mask with the correct ones.
So, remove the unneeded flags.
This patch shouldn't change any actual behavior.
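For example, a caller-side allocation before and after (error handling
omitted):

    /* redundant: the API forces the correct type/mask for shash anyway */
    tfm = crypto_alloc_shash("sha256", CRYPTO_ALG_TYPE_SHASH,
                             CRYPTO_ALG_TYPE_MASK);

    /* the same allocation without the unneeded flags */
    tfm = crypto_alloc_shash("sha256", 0, 0);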
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some skcipher algorithms set .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER. But
this is redundant with the C structure type ('struct skcipher_alg'), and
crypto_register_skcipher() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the skcipher algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the aead algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some ahash algorithms set .cra_type = &crypto_ahash_type. But this is
redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the .cra_type automatically.
Apparently the useless assignment has just been copy+pasted around.
So, remove the useless assignment from all the ahash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Many ahash algorithms set .cra_flags = CRYPTO_ALG_TYPE_AHASH. But this
is redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the ahash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this
is redundant with the C structure type ('struct shash_alg'), and
crypto_register_shash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the shash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
With all the crypto modules enabled on x86, and with a CPU that supports
AVX-2 but not SHA-NI instructions (e.g. Haswell, Broadwell, Skylake),
the "multibuffer" implementations of SHA-1, SHA-256, and SHA-512 are the
highest priority. However, these implementations only perform well when
many hash requests are being submitted concurrently, filling all 8 AVX-2
lanes. Otherwise, they are incredibly slow, as they waste time waiting
for more requests to arrive before proceeding to execute each request.
For example, here are the speeds I see hashing 4096-byte buffers with a
single thread on a Haswell-based processor:
            generic     avx2        mb (multibuffer)
            -------     --------    ----------------
    sha1    602 MB/s    997 MB/s    0.61 MB/s
    sha256  228 MB/s    412 MB/s    0.61 MB/s
    sha512  312 MB/s    559 MB/s    0.61 MB/s
So, the multibuffer implementation is 500 to 1000 times slower than the
other implementations. Note that with smaller buffers or more update()s
per digest, the difference would be even greater.
I believe the vast majority of people are in the boat where the
multibuffer code is much slower, and only a small minority are doing the
highly parallel, hashing-intensive, latency-flexible workloads (maybe
IPsec on servers?) where the multibuffer code may be beneficial. Yet,
people often aren't familiar with all the crypto config options and so
the multibuffer code may inadvertently be built into the kernel.
Also the multibuffer code apparently hasn't been very well tested,
seeing as it was sometimes computing the wrong SHA-256 digest.
So, let's make the multibuffer algorithms low priority. Users who want
to use them can either request them explicitly by driver name, or use
NETLINK_CRYPTO (crypto_user) to increase their priority at runtime.
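For context, when several implementations register the same .cra_name, the
crypto API picks the one with the highest .cra_priority. The values below
are illustrative only, not necessarily the ones used by this patch:

    .cra_name     = "sha256",
    .cra_priority = 100,   /* sha256-generic */

    .cra_name     = "sha256",
    .cra_priority = 170,   /* sha256-avx2 */

    .cra_name     = "sha256",
    .cra_priority = 50,    /* sha256_mb: below generic, never chosen by default */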
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-512 or SHA-384 implementation, as
is desired for sha512_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100, which is
the priority used for many of the other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-256 or SHA-224 implementation, as
is desired for sha256_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100, which is
the priority used for many of the other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
sha1-generic had a cra_priority of 0, so it wasn't possible to have a
lower priority SHA-1 implementation, as is desired for sha1_mb which is
only useful under certain workloads and is otherwise extremely slow.
Change it to priority 100, which is the priority used for many of the
other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
"arch/x86/crypto/sha*-mb" needs a trailing slash, since it refers to
directories. Otherwise get_maintainer.pl doesn't find the entry.
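That is, the entries should look along these lines (paths are illustrative):

    F:  arch/x86/crypto/sha1-mb/
    F:  arch/x86/crypto/sha256-mb/
    F:  arch/x86/crypto/sha512-mb/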
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There is a copy-paste error where sha256_mb_mgr_get_comp_job_avx2()
copies the SHA-256 digest state from sha256_mb_mgr::args::digest to
job_sha256::result_digest. Consequently, the sha256_mb algorithm
sometimes calculates the wrong digest. Fix it.
Reproducer using AF_ALG:
#include <assert.h>
#include <linux/if_alg.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static const __u8 expected[32] =
    "\xad\x7f\xac\xb2\x58\x6f\xc6\xe9\x66\xc0\x04\xd7\xd1\xd1\x6b\x02"
    "\x4f\x58\x05\xff\x7c\xb4\x7c\x7a\x85\xda\xbd\x8b\x48\x89\x2c\xa7";

int main()
{
    int fd;
    struct sockaddr_alg addr = {
        .salg_type = "hash",
        .salg_name = "sha256_mb",
    };
    __u8 data[4096] = { 0 };
    __u8 digest[32];
    int ret;
    int i;

    fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    bind(fd, (void *)&addr, sizeof(addr));
    /* fork() so that two processes hash concurrently over AF_ALG */
    fork();
    fd = accept(fd, 0, 0);
    /* hash an all-zero 4096-byte buffer until the digest comes back wrong */
    do {
        ret = write(fd, data, 4096);
        assert(ret == 4096);
        ret = read(fd, digest, 32);
        assert(ret == 32);
    } while (memcmp(digest, expected, 32) == 0);

    printf("wrong digest: ");
    for (i = 0; i < 32; i++)
        printf("%02x", digest[i]);
    printf("\n");
}
Output was:
wrong digest: ad7facb2000000000000000000000000ffffffef7cb47c7a85dabd8b48892ca7
Fixes: 172b1d6b5a93 ("crypto: sha256-mb - fix ctx pointer and digest copy")
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch's main goal is to improve driver performance by moving the
crypto request from a list to an RDR ring shadow.
This is possible since there is one producer and one consumer for this
RDR request shadow, and one ring descriptor is left unused.
Doing this change eliminates the use of a spinlock when accessing the
descriptor ring and the need to dynamically allocate memory per crypto
request.
The crypto request is placed in the first RDR shadow descriptor only
if there are enough descriptors; when the result handler is invoked,
it fetches the first result descriptor from the RDR shadow.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds support for two new algorithms in the Inside Secure
SafeXcel cryptographic engine driver: ecb(des3_ede) and cbc(des3_ede).
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds support for two algorithms in the Inside Secure SafeXcel
cryptographic engine driver: ecb(des) and cbc(des).
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds support for the hmac(md5) algorithm in the Inside Secure
SafeXcel cryptographic engine driver.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the MD5 algorithm support to the Inside Secure SafeXcel
cryptographic engine driver.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ORO bridge (connected to the EIP197 write channel) does not
generate back pressure towards the EIP197 when its internal FIFO is
full. It assumes that the EIP will not drive more write transactions
than the maximum number of supported outstanding transactions (32).
Hence tx_max_cmd_queue must be configured to 5 (or less).
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds extra steps in the module removal path, to reset the
command and result rings. The corresponding interrupts are cleared, and
the ring address configuration is reset.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: small reworks, commit message]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch updates the TRC configuration so that the version of the
EIP197 engine being used is taken into account, as the configuration
differs between the EIP197B and the EIP197D.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: commit message]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch documents the new compatible used for the eip197d engine, as
this new engine is now supported by the Inside Secure SafeXcel
cryptographic driver.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds support for the eip197d engine to the Inside Secure
SafeXcel cryptographic driver. This new engine is similar to the eip197b
and reuses most of its code.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
So far a single processing engine (PE) was configured and used in the
Inside Secure SafeXcel cryptographic engine driver. Some versions have
more than a single PE. This patch reworks the driver's initialization to
take this into account and to allow configuring more than one PE.
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: some reworks and commit message.]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|