author | Linus Torvalds <torvalds@linux-foundation.org> | 2014-09-13 22:24:03 +0400 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-09-13 22:24:03 +0400 |
commit | 23d0db76ffa13ffb95229946e4648568c3c29db5 (patch) | |
tree | a468c5bb0428be35da5200af5c3aa44dde163b81 /include/linux/hash.h | |
parent | 72d931046030beb2d13dad6d205be0e228618432 (diff) | |
download | linux-23d0db76ffa13ffb95229946e4648568c3c29db5.tar.xz | |
Make hash_64() use a 64-bit multiply when appropriate
The hash_64() function historically does the multiply by the
GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because
unlike the 32-bit case, gcc seems unable to turn the constant multiply
into the more appropriate sequence of shifts and adds when required.
However, that means that we generate those shifts and adds even when the
architecture has a fast multiplier, and could just do it better in
hardware.
Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with
"is it a 64-bit architecture") to decide whether to use an integer
multiply or the explicit sequence of shift/add instructions.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
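The point of the change is that the long shift/add sequence is just a spelled-out multiply by GOLDEN_RATIO_PRIME_64 (2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1), so on hardware with a fast multiplier a single multiply is preferable. Below is a minimal standalone C sketch of that equivalence; it is not kernel code, the helper names are invented, and the shift/add steps elided between the two diff hunks further down are filled in from the pre-patch include/linux/hash.h, so treat them as assumptions of this sketch.

```c
/*
 * Standalone sketch (not kernel code): check that the old explicit
 * shift/add sequence in hash_64() computes the same value as a plain
 * 64-bit multiply by GOLDEN_RATIO_PRIME_64.  Helper names are invented;
 * the shift amounts are taken from the pre-patch include/linux/hash.h.
 */
#include <stdint.h>
#include <stdio.h>

#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

/* Old path: the multiply spelled out as shifts and adds/subtracts. */
static uint64_t hash_64_shift_add(uint64_t val, unsigned int bits)
{
	uint64_t hash = val;	/* contributes the "+ 1" term */
	uint64_t n = val;

	n <<= 18; hash -= n;	/* - 2^18 */
	n <<= 33; hash -= n;	/* - 2^51 */
	n <<= 3;  hash += n;	/* + 2^54 */
	n <<= 3;  hash -= n;	/* - 2^57 */
	n <<= 4;  hash += n;	/* + 2^61 */
	n <<= 2;  hash += n;	/* + 2^63 */

	/* High bits are more random, so use them. */
	return hash >> (64 - bits);
}

/* New path when CONFIG_ARCH_HAS_FAST_MULTIPLIER && BITS_PER_LONG == 64. */
static uint64_t hash_64_multiply(uint64_t val, unsigned int bits)
{
	return (val * GOLDEN_RATIO_PRIME_64) >> (64 - bits);
}

int main(void)
{
	const uint64_t samples[] = {
		0, 1, 0xdeadbeefULL, 0x123456789abcdef0ULL, ~0ULL
	};

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		uint64_t a = hash_64_shift_add(samples[i], 32);
		uint64_t b = hash_64_multiply(samples[i], 32);

		printf("%#18llx -> %#10llx %s %#10llx\n",
		       (unsigned long long)samples[i],
		       (unsigned long long)a,
		       a == b ? "==" : "!=",
		       (unsigned long long)b);
	}
	return 0;
}
```

Both forms wrap modulo 2^64, so they agree for every input; the only difference is whether the compiler emits one multiply or roughly a dozen shift/add instructions.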
Diffstat (limited to 'include/linux/hash.h')
-rw-r--r-- | include/linux/hash.h | 4 |
1 files changed, 4 insertions, 0 deletions
diff --git a/include/linux/hash.h b/include/linux/hash.h
index bd1754c7ecef..d0494c399392 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -37,6 +37,9 @@ static __always_inline u64 hash_64(u64 val, unsigned int bits)
 {
 	u64 hash = val;
 
+#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
+	hash = hash * GOLDEN_RATIO_PRIME_64;
+#else
 	/* Sigh, gcc can't optimise this alone like it does for 32 bits. */
 	u64 n = hash;
 	n <<= 18;
@@ -51,6 +54,7 @@ static __always_inline u64 hash_64(u64 val, unsigned int bits)
 	hash += n;
 	n <<= 2;
 	hash += n;
+#endif
 
 	/* High bits are more random, so use them. */
 	return hash >> (64 - bits);
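For context, a hedged usage sketch (plain C with an invented table size and key, not taken from the kernel): because hash_64() keeps the high bits of the product, the result of hash_64(key, bits) is always below 1 << bits and can be used directly as an index into a power-of-two sized table.

```c
#include <stdint.h>
#include <stdio.h>

#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
#define TABLE_BITS 10	/* hypothetical: 1024-bucket table */

/* Same semantics as hash_64() on the fast-multiplier path. */
static inline uint64_t hash_64(uint64_t val, unsigned int bits)
{
	/* High bits are more random, so use them. */
	return (val * GOLDEN_RATIO_PRIME_64) >> (64 - bits);
}

int main(void)
{
	unsigned int counts[1 << TABLE_BITS] = { 0 };
	uint64_t key = 0x123456789abcdef0ULL;	/* hypothetical key */

	counts[hash_64(key, TABLE_BITS)]++;	/* index is < 1 << TABLE_BITS */
	printf("key %#llx -> bucket %llu\n",
	       (unsigned long long)key,
	       (unsigned long long)hash_64(key, TABLE_BITS));
	return 0;
}
```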