author	Eric Dumazet <edumazet@google.com>	2026-01-16 07:13:59 +0300
committer	Jakub Kicinski <kuba@kernel.org>	2026-01-21 05:19:18 +0300
commit	3fbb5395c7303582757d5090ab8f7ec70dbe2c10 (patch)
tree	3313205c6667c4c908a320e3101fbf4573f48efb	/include/linux/memory_hotplug.h
parent	7333299be4e5cc0d9ccbe9d13c9cd954a00f94d9 (diff)
download	linux-3fbb5395c7303582757d5090ab8f7ec70dbe2c10.tar.xz
net: split kmalloc_reserve() to allow inlining
kmalloc_reserve() is too big to be inlined. Put the slow path in a new
out-of-line function, kmalloc_pfmemalloc(), then let kmalloc_reserve()
set skb->pfmemalloc only when/if the slow path is taken.

This makes __alloc_skb() faster:

- kmalloc_reserve() is now automatically inlined by both gcc and clang.
- No more expensive RMW (skb->pfmemalloc = pfmemalloc).
- No more expensive stack canary (for CONFIG_STACKPROTECTOR_STRONG=y).
- Removal of two prefetches that were coming too late for modern cpus.

Text size increase is quite small compared to the cpu savings (~0.7 %):

$ size net/core/skbuff.clang.before.o net/core/skbuff.clang.after.o
   text    data     bss     dec     hex filename
  72507    5897       0   78404   13244 net/core/skbuff.clang.before.o
  72681    5897       0   78578   132f2 net/core/skbuff.clang.after.o

Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20260116041359.181104-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'include/linux/memory_hotplug.h')
0 files changed, 0 insertions, 0 deletions