author		Marco Elver <elver@google.com>		2026-02-16 17:16:22 +0300
committer	Will Deacon <will@kernel.org>		2026-02-26 21:03:07 +0300
commit		abf1be684dc270b94b7c8782f562959b33766fc0 (patch)
tree		4fd7aa580ec99944b1720bb38cf6fa03349171d8
parent		6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f (diff)
arm64: Optimize __READ_ONCE() with CONFIG_LTO=y
Rework arm64 LTO __READ_ONCE() to improve code generation as follows:
1. Replace the _Generic-based __unqual_scalar_typeof() with the more
complete __rwonce_typeof_unqual(). This strips qualifiers from all
types, not just integer types, which is required so that __u.__val can
be assigned to (it must be non-const) in the non-atomic case (required
for #2).
One subtle point here: with the old __unqual_scalar_typeof(), a
non-integer __val could remain const or volatile within the union if
the passed variable is const or volatile. A volatile __u.__val then
forces a load from the stack; in the const case, it looks odd that the
underlying storage changes while the compiler is told said member is
"const" -- it smells like UB.
2. Eliminate the atomic flag and ternary conditional expression. Move
the fallback volatile load into the default case of the switch,
ensuring __u is unconditionally initialized across all paths.
The statement expression now unconditionally returns __u.__val.
This refactoring appears to help the compiler improve (or fix) code
generation.
With a defconfig + LTO + debug options build, we observe different
codegen for the following functions:
btrfs_reclaim_sweep (708 -> 1032 bytes)
btrfs_sinfo_bg_reclaim_threshold_store (200 -> 204 bytes)
check_mem_access (3652 -> 3692 bytes) [inlined bpf_map_is_rdonly]
console_flush_all (1268 -> 1264 bytes)
console_lock_spinning_disable_and_check (180 -> 176 bytes)
igb_add_filter (640 -> 636 bytes)
igb_config_tx_modes (2404 -> 2400 bytes)
kvm_vcpu_on_spin (480 -> 476 bytes)
map_freeze (376 -> 380 bytes)
netlink_bind (1664 -> 1656 bytes)
nmi_cpu_backtrace (404 -> 400 bytes)
set_rps_cpu (516 -> 520 bytes)
swap_cluster_readahead (944 -> 932 bytes)
tcp_accecn_third_ack (328 -> 336 bytes)
tcp_create_openreq_child (1764 -> 1772 bytes)
tcp_data_queue (5784 -> 5892 bytes)
tcp_ecn_rcv_synack (620 -> 628 bytes)
xen_manage_runstate_time (944 -> 896 bytes)
xen_steal_clock (340 -> 296 bytes)
The size increases in some functions are due to more aggressive
inlining enabled by the better codegen (in this build, e.g.,
bpf_map_is_rdonly is no longer present as a standalone function because
it is inlined completely).
NOTE: The return-value-of-function-drops-qualifiers hack was first
suggested by Al Viro in [1], which notes some limitations that make it
unsuitable as a general __unqual_scalar_typeof() replacement. Notably,
array types are not supported, and GCC 8.1-8.3 still fail. Why use it
here anyway? READ_ONCE() does not support reading whole arrays, and the
GCC problem only affects 3 minor releases of a very old but
still-supported GCC version; moreover, this arm64 READ_ONCE() variant
is currently only activated by LTO builds, which to date are *only
supported by Clang*!
Link: https://lore.kernel.org/all/20260111182010.GH3634291@ZenIV/ [1]
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
 arch/arm64/include/asm/rwonce.h | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
index fc0fb42b0b64..9fd24cef3376 100644
--- a/arch/arm64/include/asm/rwonce.h
+++ b/arch/arm64/include/asm/rwonce.h
@@ -20,6 +20,17 @@
 			     ARM64_HAS_LDAPR)
 
 /*
+ * Replace this with typeof_unqual() when minimum compiler versions are
+ * increased to GCC 14 and Clang 19. For the time being, we need this
+ * workaround, which relies on function return values dropping qualifiers.
+ */
+#define __rwonce_typeof_unqual(x) typeof(({				\
+	__diag_push()							\
+	__diag_ignore_all("-Wignored-qualifiers", "")			\
+	((typeof(x)(*)(void))0)();					\
+	__diag_pop() }))
+
+/*
  * When building with LTO, there is an increased risk of the compiler
  * converting an address dependency headed by a READ_ONCE() invocation
  * into a control dependency and consequently allowing for harmful
@@ -32,8 +43,7 @@
 #define __READ_ONCE(x)							\
 ({									\
	typeof(&(x)) __x = &(x);					\
-	int atomic = 1;							\
-	union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u;	\
+	union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u;	\
	switch (sizeof(x)) {						\
	case 1:								\
		asm volatile(__LOAD_RCPC(b, %w0, %1)			\
@@ -56,9 +66,9 @@
			     : "Q" (*__x) : "memory");			\
		break;							\
	default:							\
-		atomic = 0;						\
+		__u.__val = *(volatile typeof(*__x) *)__x;		\
	}								\
-	atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\
+	__u.__val;							\
 })
 
 #endif	/* !BUILD_VDSO */
