<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/arch/arm64/crypto, branch v6.6.134</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.6.134</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.6.134'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2024-03-06T14:48:40+00:00</updated>
<entry>
<title>crypto: arm64/neonbs - fix out-of-bounds access on short input</title>
<updated>2024-03-06T14:48:40+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2024-02-23T13:20:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=1291d278b5574819a7266568ce4c28bce9438705'/>
<id>urn:sha1:1291d278b5574819a7266568ce4c28bce9438705</id>
<content type='text'>
commit 1c0cf6d19690141002889d72622b90fc01562ce4 upstream.

The bit-sliced implementation of AES-CTR operates on blocks of 128
bytes, and will fall back to the plain NEON version for tail blocks or
inputs that are shorter than 128 bytes to begin with.

It will call straight into the plain NEON asm helper, which performs all
memory accesses in granules of 16 bytes (the size of a NEON register).
For this reason, the associated plain NEON glue code will copy inputs
shorter than 16 bytes into a temporary buffer, given that this is a rare
occurrence and it is not worth the effort to work around this in the asm
code.

The fallback from the bit-sliced NEON version fails to take this into
account, potentially resulting in out-of-bounds accesses. So clone the
same workaround, and use a temporary buffer for inputs and outputs
shorter than 16 bytes.
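
For reference, a minimal sketch of that bounce-buffer pattern
(identifiers are illustrative, not the actual patch):

  u8 buf[AES_BLOCK_SIZE];
  u8 *out = dst;

  if (unlikely(nbytes &lt; AES_BLOCK_SIZE)) {
          /* the NEON asm accesses memory in 16-byte granules */
          memcpy(buf, src, nbytes);
          src = dst = buf;
  }

  /* ... call the plain NEON asm helper on src/dst as before ... */

  if (unlikely(nbytes &lt; AES_BLOCK_SIZE))
          memcpy(out, buf, nbytes);   /* copy the short result back */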

Fixes: fc074e130051 ("crypto: arm64/aes-neonbs-ctr - fallback to plain NEON for final chunk")
Cc: &lt;stable@vger.kernel.org&gt;
Reported-by: syzbot+f1ceaa1a09ab891e1934@syzkaller.appspotmail.com
Reviewed-by: Eric Biggers &lt;ebiggers@google.com&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/aes - remove Makefile hack</title>
<updated>2023-08-11T11:19:27+00:00</updated>
<author>
<name>Masahiro Yamada</name>
<email>masahiroy@kernel.org</email>
</author>
<published>2023-08-01T10:11:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=ac2d838fb7c479434513c8d4565a111fb805edaa'/>
<id>urn:sha1:ac2d838fb7c479434513c8d4565a111fb805edaa</id>
<content type='text'>
Do it more simply. This also fixes single-target builds.

[before]

  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- arch/arm64/crypto/aes-glue-ce.i
    [snip]
  make[4]: *** No rule to make target 'arch/arm64/crypto/aes-glue-ce.i'.  Stop.

[after]

  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- arch/arm64/crypto/aes-glue-ce.i
    [snip]
    CPP     arch/arm64/crypto/aes-glue-ce.i
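
One way to drop such a hack, sketched here on the assumption that the
variant is selected by a preprocessor symbol, is to give each object
its own trivial wrapper source file, so that kbuild's default rules
(including single-target builds) apply:

  /* aes-glue-ce.c -- hypothetical one-line wrapper */
  #define USE_V8_CRYPTO_EXTENSIONS
  #include "aes-glue.c"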

Signed-off-by: Masahiro Yamada &lt;masahiroy@kernel.org&gt;
Acked-by: Will Deacon &lt;will@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/sha256-glue - Include module.h</title>
<updated>2023-05-19T12:56:59+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2023-05-19T12:41:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=f573db7aa528f11820dcc811bc7791b231d22b1c'/>
<id>urn:sha1:f573db7aa528f11820dcc811bc7791b231d22b1c</id>
<content type='text'>
Include module.h in arch/arm64/crypto/sha256-glue.c as it uses
various macros (such as MODULE_AUTHOR) that are defined there.

Also fix the ordering of types.h.
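
Illustratively, the resulting include block looks like this (a sketch,
not the exact hunk):

  #include &lt;linux/module.h&gt;  /* MODULE_AUTHOR(), MODULE_LICENSE(), ... */
  #include &lt;linux/types.h&gt;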

Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Closes: https://lore.kernel.org/oe-kbuild-all/202305191953.PIB1w80W-lkp@intel.com/
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/aes-neonbs - fix crash with CFI enabled</title>
<updated>2023-03-14T09:06:44+00:00</updated>
<author>
<name>Eric Biggers</name>
<email>ebiggers@google.com</email>
</author>
<published>2023-02-27T06:32:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=47446d7cd42358ca7d7a544f2f7823db03f616ff'/>
<id>urn:sha1:47446d7cd42358ca7d7a544f2f7823db03f616ff</id>
<content type='text'>
aesbs_ecb_encrypt(), aesbs_ecb_decrypt(), aesbs_xts_encrypt(), and
aesbs_xts_decrypt() are called via indirect function calls.  Therefore
they need to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to cause
their type hashes to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI failure if
the compiler doesn't happen to optimize out the indirect calls.
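
For background, the glue code reaches these routines through a
function pointer, so the call site is indirect; a minimal sketch (the
typedef and function names are illustrative):

  /* with CONFIG_CFI_CLANG=y, the indirect call fn() verifies the
   * callee's type hash, so the asm callees must emit a matching
   * hash via SYM_TYPED_FUNC_START */
  typedef void (*aesbs_fn)(u8 out[], u8 const in[], u8 const rk[],
                           int rounds, int blocks);

  static int __ecb_crypt(struct skcipher_request *req, aesbs_fn fn)
  {
          /* ... walk the request, invoking fn() on each chunk ... */
          return 0;
  }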

Fixes: c50d32859e70 ("arm64: Add types to indirect called assembly functions")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers &lt;ebiggers@google.com&gt;
Reviewed-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/sm4-gcm - Fix possible crash in GCM cryption</title>
<updated>2023-02-10T09:20:19+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2023-02-02T08:33:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4e4a08868f15897ca236528771c3733fded42c62'/>
<id>urn:sha1:4e4a08868f15897ca236528771c3733fded42c62</id>
<content type='text'>
An often overlooked aspect of the skcipher walker API is that an
error is not just indicated by a non-zero return value, but by the
fact that walk-&gt;nbytes is zero.

Thus it is an error to call skcipher_walk_done after getting back
walk-&gt;nbytes == 0 from the previous interaction with the walker.

This is because when walk-&gt;nbytes is zero the walker is left in
an undefined state and any further calls to it may try to free
uninitialised stack memory.

The sm4 arm64 gcm code gets this wrong and ends up calling
skcipher_walk_done even when walk-&gt;nbytes is zero.

This patch rewrites the loop in a form that resembles other callers.
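
The usual convention, in minimal sketch form (the processing step is
elided), drives both the loop and the skcipher_walk_done call off a
non-zero walk.nbytes:

  err = skcipher_walk_aead_encrypt(&amp;walk, req, false);

  while (walk.nbytes) {
          unsigned int n = walk.nbytes;

          /* ... process n bytes from walk.src to walk.dst ... */

          err = skcipher_walk_done(&amp;walk, walk.nbytes - n);
  }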

Reported-by: Tianjia Zhang &lt;tianjia.zhang@linux.alibaba.com&gt;
Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Tested-by: Tianjia Zhang &lt;tianjia.zhang@linux.alibaba.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/sm4-ccm - Rewrite skcipher walker loop</title>
<updated>2023-02-10T09:20:19+00:00</updated>
<author>
<name>Tianjia Zhang</name>
<email>tianjia.zhang@linux.alibaba.com</email>
</author>
<published>2023-02-01T12:32:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=3b9d902153f31998a30b0cf4a0aeb090a05005d3'/>
<id>urn:sha1:3b9d902153f31998a30b0cf4a0aeb090a05005d3</id>
<content type='text'>
The fact that an error in the skcipher walker API is indicated not
only by a non-zero return value, but also by walk-&gt;nbytes being
zero, causes the layout of this skcipher walker loop to differ
noticeably from the usual one. That is not a problem in itself, but
it is likely to confuse readers and complicate maintenance.

This patch rewrites the skcipher walker loop and separates the
cryption of the last chunk from the loop, to avoid incorrect calls
into the skcipher walker API. Besides following the usual convention
of checking walk-&gt;nbytes, this also makes the loop's execution
logic clearer and easier to understand.
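
In outline, the rewritten loop has this shape (a hedged sketch, not
the exact patch): full blocks are consumed while more data is still
pending, and the final, possibly partial chunk is finished outside
the loop:

  while (walk.nbytes &amp;&amp; walk.nbytes != walk.total) {
          /* ... process full blocks only ... */
          err = skcipher_walk_done(&amp;walk,
                                   walk.nbytes % SM4_BLOCK_SIZE);
  }

  if (walk.nbytes) {
          /* ... process the final chunk, incl. a partial block ... */
          err = skcipher_walk_done(&amp;walk, 0);
  }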

Signed-off-by: Tianjia Zhang &lt;tianjia.zhang@linux.alibaba.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/aes-ccm - Rewrite skcipher walker loop</title>
<updated>2023-02-10T09:20:19+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2023-01-30T08:58:51+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=57ead1bf1c5430c5f663dfd3399040d90db9c7ab'/>
<id>urn:sha1:57ead1bf1c5430c5f663dfd3399040d90db9c7ab</id>
<content type='text'>
An often overlooked aspect of the skcipher walker API is that an
error is not just indicated by a non-zero return value, but by the
fact that walk-&gt;nbytes is zero.

Thus it is an error to call skcipher_walk_done after getting back
walk-&gt;nbytes == 0 from the previous interaction with the walker.

This is because when walk-&gt;nbytes is zero the walker is left in
an undefined state and any further calls to it may try to free
uninitialised stack memory.

The arm64 ccm code has to deal with zero-length messages, and
it needs to process data even when walk-&gt;nbytes == 0 is returned.
It doesn't have this bug because there is an explicit check for
walk-&gt;nbytes != 0 prior to the skcipher_walk_done call.
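
In sketch form (illustrative, not the exact code), the guard looks
like this:

  /* a zero-length message still needs MAC processing, so the walker
   * may legitimately hand back walk.nbytes == 0 here; only complete
   * the walk when there was data to consume */
  if (walk.nbytes)
          err = skcipher_walk_done(&amp;walk, 0);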

However, the loop is still sufficiently different from the usual
layout and it appears to have been copied into other code which
then ended up with this bug.  This patch rewrites it to follow the
usual convention of checking walk-&gt;nbytes.

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Tested-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/gcm - add RFC4106 support</title>
<updated>2023-01-20T10:29:31+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2022-12-14T17:19:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=9e3457112b9d410f0cad760f91728251fef7e3e3'/>
<id>urn:sha1:9e3457112b9d410f0cad760f91728251fef7e3e3</id>
<content type='text'>
Add support for RFC4106 ESP encapsulation to the accelerated GCM
implementation. This results in a ~10% speedup for IPsec frames of
typical size (~1420 bytes) on Cortex-A53.

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/sm4 - fix possible crash with CFI enabled</title>
<updated>2022-12-30T09:57:42+00:00</updated>
<author>
<name>Tianjia Zhang</name>
<email>tianjia.zhang@linux.alibaba.com</email>
</author>
<published>2022-12-21T07:32:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=736f88689c6912f05d0116917910603a7ba97de7'/>
<id>urn:sha1:736f88689c6912f05d0116917910603a7ba97de7</id>
<content type='text'>
The SM4 CCM/GCM assembly functions for encryption and decryption are
called via indirect function calls.  Therefore they need to use
SYM_TYPED_FUNC_START instead of SYM_FUNC_START to cause their type
hashes to be emitted when the kernel is built with CONFIG_CFI_CLANG=y.
Otherwise, the code crashes with a CFI failure (if the compiler doesn't
happen to optimize out the indirect calls).

Fixes: 67fa3a7fdf80 ("crypto: arm64/sm4 - add CE implementation for CCM mode")
Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang &lt;tianjia.zhang@linux.alibaba.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: arm64/ghash-ce - use frame_push/pop macros consistently</title>
<updated>2022-12-09T10:45:00+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2022-11-29T16:48:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a428636d4c827ebe967aa31a83684b4c8e742ed1'/>
<id>urn:sha1:a428636d4c827ebe967aa31a83684b4c8e742ed1</id>
<content type='text'>
Use the frame_push and frame_pop macros to set up the stack frame so
that return address protections will be enabled automatically when
configured.

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
</feed>
