path: root/lib/llist.c
author    Will Deacon <will.deacon@arm.com>  2014-04-23 20:52:52 +0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2014-04-28 02:20:05 +0400
commit    ec6931b281797b69e6cf109f9cc94d5a2bf994e0 (patch)
tree      9e8ab9ff709939a2ca13a7a5556b436689e2908b /lib/llist.c
parent    ac6c9e2bed093c4b60e313674fb7aec4f264c3d4 (diff)
download  linux-ec6931b281797b69e6cf109f9cc94d5a2bf994e0.tar.xz
word-at-a-time: avoid undefined behaviour in zero_bytemask macro
The asm-generic, big-endian version of zero_bytemask creates a mask of
bytes preceding the first zero-byte by left shifting ~0ul based on the
position of the first zero byte.

Unfortunately, if the first (top) byte is zero, the output of
prep_zero_mask has only the top bit set, resulting in undefined C
behaviour as we shift left by an amount equal to the width of the type.
As it happens, GCC doesn't manage to spot this through the call to fls(),
but the issue remains if architectures choose to implement their shift
instructions differently. An example would be arch/arm/ (AArch32), where
LSL Rd, Rn, #32 results in Rd == 0x0, whilst on arch/arm64 (AArch64)
LSL Xd, Xn, #64 results in Xd == Xn.

Rather than check explicitly for the problematic shift, this patch adds
an extra shift by 1, replacing fls with __fls. Since zero_bytemask is
never called with a zero argument (has_zero() is used to check the data
first), we don't need to worry about calling __fls(0), which is
undefined.

Cc: <stable@vger.kernel.org>
Cc: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'lib/llist.c')
0 files changed, 0 insertions, 0 deletions