author    Alexander Lobakin <aleksander.lobakin@intel.com>  2024-12-03 20:37:24 +0300
committer Jakub Kicinski <kuba@kernel.org>  2024-12-06 05:41:06 +0300
commit    ca5c94949facce1f67a4a9a9528a27f635ff3e78 (patch)
tree      92336b8821f752fe42f28ff61b898a405393b761 /include/linux
parent    1daa6591ab7d6893cbcd8d02c2dbe43af42d36de (diff)
download  linux-ca5c94949facce1f67a4a9a9528a27f635ff3e78.tar.xz
xsk: align &xdp_buff_xsk harder
After the series "XSk buff on a diet" by Maciej, the largest power of 2 by which sizeof(struct xdp_buff_xsk) is divisible dropped from 16 to 8 on x86_64. sizeof(xdp_buff_xsk) is now 120 bytes which, combined with that 8-byte alignment, leaves 8 bytes free at the end of the second cacheline, so the elements of an array of buffs end up straddling cachelines chaotically.

Use __aligned_largest for this struct. That alignment is usually 16 bytes, which pads the struct to fill exactly two full cachelines and makes an array of them align nicely. ____cacheline_aligned may be excessive here, especially on arches with 128-256 byte cachelines, as well as on 32-bit arches (76 -> 96 bytes on MIPS32R2), while doing no better than _largest.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://patch.msgid.link/20241203173733.3181246-2-aleksander.lobakin@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
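For illustration, a minimal userspace sketch (not part of the patch) of what __aligned_largest does. The kernel defines it in include/linux/compiler_attributes.h as __attribute__((__aligned__)), i.e. "aligned" with no size argument, which GCC/Clang resolve to the largest natural scalar alignment (16 bytes on x86_64). The 120-byte struct below is only a stand-in for struct xdp_buff_xsk, whose real layout lives in include/net/xsk_buff_pool.h:

    /*
     * Userspace sketch only.  "plain" mimics the 120-byte
     * &xdp_buff_xsk after the diet series (the real struct aligns to
     * 8 via its pointer/u64 members; a char array here aligns to 1,
     * but the size arithmetic is the same).  "padded" shows the
     * effect of __aligned_largest.
     */
    #include <stdio.h>

    /* as in include/linux/compiler_attributes.h */
    #define __aligned_largest __attribute__((__aligned__))

    struct plain {
            char data[120];         /* stand-in for the real members */
    };

    struct padded {
            char data[120];
    } __aligned_largest;            /* size rounded up to the alignment */

    int main(void)
    {
            /* 120-byte elements straddle 64-byte cachelines unevenly */
            printf("plain:  size=%zu align=%zu\n",
                   sizeof(struct plain), _Alignof(struct plain));
            /* 128 bytes == exactly two cachelines per array element */
            printf("padded: size=%zu align=%zu\n",
                   sizeof(struct padded), _Alignof(struct padded));
            return 0;
    }

On x86_64 with GCC this should print size=120 for the plain struct but size=128, align=16 for the padded one, i.e. every element of an array covers exactly two 64-byte cachelines.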
Diffstat (limited to 'include/linux')
0 files changed, 0 insertions, 0 deletions