author		Eric Biggers <ebiggers@google.com>	2019-04-10 09:46:31 +0300
committer	Herbert Xu <herbert@gondor.apana.org.au>	2019-04-18 17:14:58 +0300
commit		767f015ea0b7ab9d60432ff6cd06b664fd71f50f (patch)
tree		92dc0f57cf5a5662f25a33a73f8a2af24d74bfd5 /arch/arm/crypto
parent		edaf28e996af69222b2cb40455dbb5459c2b875a (diff)
download	linux-767f015ea0b7ab9d60432ff6cd06b664fd71f50f.tar.xz
crypto: arm/aes-neonbs - don't access already-freed walk.iv
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.
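To make the hazard concrete, here is a minimal sketch of the pattern (the
function name is hypothetical; it mirrors the shape of __xts_crypt() in the
diff below rather than reproducing the driver verbatim):

	/* Sketch of the hazard and its fix (illustrative only). */
	static int xts_crypt_sketch(struct aesbs_xts_ctx *ctx,
				    struct skcipher_request *req)
	{
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, true);
		if (err)
			return err;	/* without this check, the access
					 * below could read walk.iv after
					 * the aligned IV copy was freed */

		/* safe only because err == 0: walk.iv is still valid here */
		crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);

		/* ... per-block XTS processing would follow ... */
		return err;
	}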
arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't
affected by this despite unconditionally accessing walk.iv. However
this is more subtle than desired, and it was actually broken prior to
the alignmask being removed by commit cc477bf64573 ("crypto: arm/aes -
replace bit-sliced OpenSSL NEON code"). Thus, update xts-aes-neonbs to
start checking the return value of skcipher_walk_virt().
Fixes: e4e7f10bfc40 ("ARM: add support for bit sliced AES using NEON instructions")
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'arch/arm/crypto')
-rw-r--r--	arch/arm/crypto/aes-neonbs-glue.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index 07e31941dc67..617c2c99ebfb 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -278,6 +278,8 @@ static int __xts_crypt(struct skcipher_request *req,
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
 
 	crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);
 
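For context, a hedged sketch of how this code path is reached through the
kernel skcipher API (the helper name and choice of flags are illustrative,
not part of the patch); the iv handed to skcipher_request_set_crypt() is
the user-provided IV that the commit message refers to:

	#include <crypto/skcipher.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>

	/* Hypothetical caller: exercises __xts_crypt() via "xts(aes)". */
	static int xts_encrypt_one(const u8 *key, unsigned int keylen, u8 *iv,
				   struct scatterlist *sg, unsigned int len)
	{
		struct crypto_skcipher *tfm;
		struct skcipher_request *req;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_free_tfm;
		}

		err = crypto_skcipher_setkey(tfm, key, keylen);
		if (err)
			goto out_free_req;

		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		/* iv is the caller's IV; walk.iv may be an aligned copy */
		skcipher_request_set_crypt(req, sg, sg, len, iv);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	out_free_req:
		skcipher_request_free(req);
	out_free_tfm:
		crypto_free_skcipher(tfm);
		return err;
	}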