| author | David Yang <mmyangfl@gmail.com> | 2026-01-20 12:21:29 +0300 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2026-01-26 00:14:11 +0300 |
| commit | df153517e4d4403d13d603cccef68255783d396b (patch) | |
| tree | 7aed7d97990e33603a0a9b062aa5e7b1c86cf718 /include/linux/workqueue.h | |
| parent | 3085ff59fec593ae1135608edd8afdf0e5889cad (diff) | |
u64_stats: Introduce u64_stats_copy()
The following (anti-)pattern was observed in the code tree:
    do {
        start = u64_stats_fetch_begin(&pstats->syncp);
        memcpy(&temp, &pstats->stats, sizeof(temp));
    } while (u64_stats_fetch_retry(&pstats->syncp, start));
On 64-bit arches, struct u64_stats_sync is empty and provides no protection
against load/store tearing, especially for memcpy(), for which arches may
provide highly optimized implementations.
In theory the affected code should be converted to u64_stats_t, or use
READ_ONCE()/WRITE_ONCE() properly.
However, since there is a need to copy chunks of statistics, instead of
writing such loops at random places, provide a safe memcpy() variant for
u64_stats.
Signed-off-by: David Yang <mmyangfl@gmail.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20260120092137.2161162-2-mmyangfl@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
