author		Christophe Leroy <christophe.leroy@csgroup.eu>	2022-03-08 19:12:10 +0300
committer	David S. Miller <davem@davemloft.net>	2022-03-11 13:57:22 +0300
commit		3af722cb735d212554027ec81e2aa2e6bf1ee34d (patch)
tree		e1f7caba28268fb626893bc719540414f3272248 /include/net/checksum.h
parent		8ef1dc4d204a5329eeeaf6edd9b62f6f2a64ebef (diff)
download	linux-3af722cb735d212554027ec81e2aa2e6bf1ee34d.tar.xz
powerpc/net: Implement powerpc specific csum_shift() to remove branch
Today's implementation of csum_shift() leads to branching based on
the parity of 'offset':
000002f8 <csum_block_add>:
2f8: 70 a5 00 01 andi. r5,r5,1
2fc: 41 a2 00 08 beq 304 <csum_block_add+0xc>
300: 54 84 c0 3e rotlwi r4,r4,24
304: 7c 63 20 14 addc r3,r3,r4
308: 7c 63 01 94 addze r3,r3
30c: 4e 80 00 20 blr
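For context, the branch comes from the generic csum_shift() in
include/net/checksum.h (the same helper guarded by the diff at the
bottom of this page), which reads essentially as follows:

static __always_inline __wsum csum_shift(__wsum sum, int offset)
{
	/* rotate sum to align it with a 16b boundary */
	if (offset & 1)
		return (__force __wsum)ror32((__force u32)sum, 8);
	return sum;
}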
Use the first bit of 'offset' directly as the input of the rotation
instead of branching:
000002f8 <csum_block_add>:
2f8: 54 a5 1f 38 rlwinm r5,r5,3,28,28
2fc: 20 a5 00 20 subfic r5,r5,32
300: 5c 84 28 3e rotlw r4,r4,r5
304: 7c 63 20 14 addc r3,r3,r4
308: 7c 63 01 94 addze r3,r3
30c: 4e 80 00 20 blr
And rotate left instead of right to skip one more instruction: a left
rotation uses the shift amount directly, so the 'subfic' computing 32
minus the shift amount is no longer needed. This has no impact on the
final sum (see the standalone check after the listing below).
000002f8 <csum_block_add>:
2f8: 54 a5 1f 38 rlwinm r5,r5,3,28,28
2fc: 5c 84 28 3e rotlw r4,r4,r5
300: 7c 63 20 14 addc r3,r3,r4
304: 7c 63 01 94 addze r3,r3
308: 4e 80 00 20 blr
Seems like only powerpc benefits from a branchless implementation.
Other main architectures like ARM or x86 get better code with
the generic implementation and its branch.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/checksum.h')
-rw-r--r--	include/net/checksum.h	2
1 file changed, 2 insertions, 0 deletions
diff --git a/include/net/checksum.h b/include/net/checksum.h
index 79c67f14c448..6bc783b7a06c 100644
--- a/include/net/checksum.h
+++ b/include/net/checksum.h
@@ -80,6 +80,7 @@ static __always_inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
 	return csum16_add(csum, ~addend);
 }
 
+#ifndef HAVE_ARCH_CSUM_SHIFT
 static __always_inline __wsum csum_shift(__wsum sum, int offset)
 {
 	/* rotate sum to align it with a 16b boundary */
@@ -87,6 +88,7 @@ static __always_inline __wsum csum_shift(__wsum sum, int offset)
 		return (__force __wsum)ror32((__force u32)sum, 8);
 	return sum;
 }
+#endif
 
 static __always_inline __wsum
 csum_block_add(__wsum csum, __wsum csum2, int offset)
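The new #ifndef guard lets an architecture supply its own csum_shift()
by defining HAVE_ARCH_CSUM_SHIFT in its asm/checksum.h. The powerpc
side of this change is not part of this diffstat; a sketch of what such
an override looks like, assuming the rol32()-based form described above
(the exact powerpc hunk may differ):

/* In the architecture's asm/checksum.h, before including net/checksum.h */
#define HAVE_ARCH_CSUM_SHIFT
static __always_inline __wsum csum_shift(__wsum sum, int offset)
{
	/* rotate sum to align it with a 16b boundary */
	return (__force __wsum)rol32((__force u32)sum, (offset & 1) << 3);
}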