| author | Johannes Weiner <hannes@cmpxchg.org> | 2026-03-02 22:50:18 +0300 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2026-04-05 23:53:17 +0300 |
| commit | 52af721b9421436a5ee8c301043849f804e3e807 (patch) | |
| tree | d9d93908d1bcb6697c45624220f83c993d070005 /include/linux/errqueue.h | |
| parent | 4665aa7e6523c418d6178ef7e23a4159c72b9d3a (diff) | |
| download | linux-52af721b9421436a5ee8c301043849f804e3e807.tar.xz | |
mm: memcg: separate slab stat accounting from objcg charge cache
Cgroup slab metrics are cached per-cpu in the same way as the sub-page charge
cache. However, the intertwined code that manages these dependent caches is
quite difficult to follow.

Specifically, cached slab stat updates occur in consume() if there was
enough charge cache to satisfy the new object. If that fails, whole pages
are charged, and slab stats are updated when the remainder of those
pages, after subtracting the size of the new slab object, is put into the
charge cache. This already juggles a delicate mix of the object size, the
page charge size, and the remainder to put into the byte cache. Doing
slab accounting in this path as well is fragile, and recently caused a
bug where the input parameters of the two caches were mixed up.

Refactor the consume() and refill() paths into unlocked and locked
variants that only do charge caching. Then let the slab path manage its
own lock section and open-code charging and accounting.

This makes the slab stat cache subordinate to the charge cache:
__refill_obj_stock() is called first to prepare it; __account_obj_stock()
follows to hitch a ride.

This results in a minor behavioral change: previously, a mismatching
percpu stock would always be drained in order to set up slab stat
caching, even if there was no byte remainder to put into the charge
cache. Now, the stock is left alone, and slab accounting takes the
uncached path if there is a mismatch. This is exceedingly rare, and it
was probably never worth draining the whole stock just to cache the slab
stat update.
Link: https://lkml.kernel.org/r/20260302195305.620713-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Hao Li <hao.li@linux.dev>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Cc: Johannes Weiner <jweiner@meta.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>