author      James Morse <james.morse@arm.com>      2022-01-27 19:21:27 +0300
committer   Will Deacon <will@kernel.org>          2022-02-15 18:51:53 +0300
commit      a6aab018829948c1818bed656656df9ae321408b (patch)
tree        c6e298e1de3905d30c019f0cebf81b909f23ccd9
parent      dfd42facf1e4ada021b939b4e19c935dcdd55566 (diff)
download    linux-a6aab018829948c1818bed656656df9ae321408b.tar.xz
arm64: insn: Generate 64 bit mask immediates correctly
When the insn framework is used to encode an AND/ORR/EOR instruction,
aarch64_encode_immediate() is used to pick the immr and imms values.
If the immediate is a 64bit mask with bit 63 set and zeros in any of the
upper 32 bits, the immr value is calculated incorrectly and the wrong
mask is generated.
For example, 0x8000000000000001 should have an immr of 1, but 32 is used,
meaning the resulting mask is 0x0000000300000000.
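
For illustration, a minimal userspace sketch of the decode side (not kernel
code): the architected bitmask immediate is the contiguous-ones base pattern
rotated right by immr, so for the two-bit pattern here immr=1 yields the
intended 0x8000000000000001 while immr=32 yields the bogus
0x0000000300000000. ror64_local() is a local helper, not a kernel API.

#include <stdint.h>
#include <stdio.h>

/* Rotate a 64-bit value right by r bits (r taken mod 64) */
static uint64_t ror64_local(uint64_t v, unsigned int r)
{
	r &= 63;
	return r ? (v >> r) | (v << (64 - r)) : v;
}

int main(void)
{
	uint64_t base = 0x3;	/* two contiguous ones starting at bit 0 */

	/* Correct rotation: prints 0x8000000000000001 */
	printf("immr=1:  0x%016llx\n", (unsigned long long)ror64_local(base, 1));
	/* Rotation from the truncated fls(): prints 0x0000000300000000 */
	printf("immr=32: 0x%016llx\n", (unsigned long long)ror64_local(base, 32));
	return 0;
}

Both rotations act on the same two-bit pattern and differ only in immr, which
is why the miscalculation silently produces a valid-looking but wrong mask.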
It would appear eBPF is unable to hit these cases, as build_insn()'s
imm value is an s32, so when used with BPF_ALU64 the sign-extended
u64 immediate would always have all-1s or all-0s in the upper 32 bits.
KVM does not generate a va_mask with any of the top bits set, as such
VAs wouldn't be usable with TTBR0_EL2.
This happens because the rotation is calculated from fls(~imm), which
takes an unsigned int, but the immediate may be 64bit.
Use fls64() so the 64bit mask doesn't get truncated to a u32.
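
As a hedged sketch of the truncation (userspace, not the kernel
implementation): my_fls()/my_fls64() stand in for the kernel's fls()/fls64(),
returning the 1-based index of the most significant set bit, or 0 for an
all-zero input. The (64 - ror) & 63 step is only there to reproduce the immr
values 1 and 32 from the example above; it is an assumption about how ror
feeds immr, not a copy of the kernel code.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel helpers: 1-based index of the MSB, 0 if no bit set */
static int my_fls(uint32_t x)   { return x ? 32 - __builtin_clz(x) : 0; }
static int my_fls64(uint64_t x) { return x ? 64 - __builtin_clzll(x) : 0; }

int main(void)
{
	uint64_t imm = 0x8000000000000001ULL;
	int ror32 = my_fls((uint32_t)~imm);	/* sees only 0xfffffffe -> 32 */
	int ror64 = my_fls64(~imm);		/* sees 0x7ffffffffffffffe -> 63 */

	printf("fls(~imm)   = %d -> immr = %d\n", ror32, (64 - ror32) & 63);
	printf("fls64(~imm) = %d -> immr = %d\n", ror64, (64 - ror64) & 63);
	return 0;
}

Compiled with any C compiler, this prints immr = 32 for the 32-bit helper and
immr = 1 for the 64-bit one, matching the example above.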
Signed-off-by: James Morse <james.morse@arm.com>
Brown-paper-bag-for: Marc Zyngier <maz@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127162127.2391947-4-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-rw-r--r--  arch/arm64/lib/insn.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index fccfe363e567..e485cd735261 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -1379,7 +1379,7 @@ static u32 aarch64_encode_immediate(u64 imm,
 		 * Compute the rotation to get a continuous set of
 		 * ones, with the first bit set at position 0
 		 */
-		ror = fls(~imm);
+		ror = fls64(~imm);
 	}

 	/*