author     Eric Biggers <ebiggers@google.com>              2019-04-10 09:46:32 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org> 2019-05-21 19:50:19 +0300
commit     33b3b30ff96ac8abe89872ab327c85054de35fb0 (patch)
tree       da17a1e4713d224b4c74b1a7b22328ce59d2ecad /crypto
parent     a01e8a31c837d2670d86610581f3492c0b25d135 (diff)
crypto: arm64/aes-neonbs - don't access already-freed walk.iv
commit 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 upstream.
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.
xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv. However this is more
subtle than desired, and unconditionally accessing walk.iv has caused a
real problem in other algorithms. Thus, update xts-aes-neonbs to start
checking the return value of skcipher_walk_virt().
Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>