<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/fs/bcachefs/btree_cache.c, branch v6.12.80</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.12.80</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.12.80'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2024-11-07T21:48:21+00:00</updated>
<entry>
<title>bcachefs: btree_cache.freeable list fixes</title>
<updated>2024-11-07T21:48:21+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-10-31T05:17:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=baefd3f849ed956d4c1aee80889093cf0d9c6a94'/>
<id>urn:sha1:baefd3f849ed956d4c1aee80889093cf0d9c6a94</id>
<content type='text'>
When allocating new btree nodes, we were leaving them on the freeable
list - unlocked - allowing them to be reclaimed: ouch.

Additionally, bch2_btree_node_free_never_used() -&gt;
bch2_btree_node_hash_remove() was putting the node on the freeable
list, while bch2_btree_node_free_never_used() itself was putting it
back on the btree update reserve list - so it landed on both lists at
once: ouch.

Originally, the code was written to always keep btree nodes on a list -
live or freeable - and this worked when new nodes were kept locked.

But now with the cycle detector, we can't keep nodes locked that aren't
tracked by the cycle detector; and this is fine as long as they're not
reachable.

We also have better and more robust leak detection now, with memory
allocation profiling, so the original justification no longer applies.
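A C-style pseudocode sketch of the invariant this fix restores (helper
and field names are hypothetical, not the actual fs/bcachefs code):

```
/* Sketch only - hypothetical names, not the real bcachefs code.
 * After the fix, a freshly allocated node is on *neither* list until
 * it is hashed and reachable, so the shrinker cannot reclaim it while
 * it sits unlocked in the allocator's hands. */
struct btree *btree_node_alloc_sketch(struct btree_cache *bc)
{
	struct btree *b;

	mutex_lock(&amp;bc-&gt;lock);
	b = pick_freeable_node(bc);	/* hypothetical helper */
	list_del_init(&amp;b-&gt;list);	/* drop it from the freeable list */
	bc-&gt;nr_freeable--;
	mutex_unlock(&amp;bc-&gt;lock);
	return b;			/* tracked by its caller, not by lists */
}
```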

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Fix error handling in bch2_btree_node_prefetch()</title>
<updated>2024-11-07T21:48:20+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-10-27T22:25:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=72acab3a7c5aee76451fa6054e9608026476a971'/>
<id>urn:sha1:72acab3a7c5aee76451fa6054e9608026476a971</id>
<content type='text'>
Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Rework btree node pinning</title>
<updated>2024-09-21T15:39:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-05T00:49:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=7a51608d0125469664e2daf8e060d6d783924c98'/>
<id>urn:sha1:7a51608d0125469664e2daf8e060d6d783924c98</id>
<content type='text'>
In backpointers fsck, we do a sequential scan of one btree, and check
references to another: extents &lt;-&gt; backpointers.

Checking references generates random lookups, so we want to pin that
btree in memory (or only a range, if it doesn't fit in RAM).

Previously, this was done with a simple check in the shrinker - "if
btree node is in range being pinned, don't free it" - but this generated
OOMs, as our shrinker wasn't well behaved if there was less memory
available than expected.

Instead, we now have two different shrinkers and LRU lists; the second
shrinker being for pinned nodes, with seeks set much higher than normal
- so they can still be freed if necessary, but we'll prefer not to.
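
In shrinker terms, that policy can be sketched as follows (pseudocode;
the field layout and seek values are illustrative, not the actual
commit):

```
/* Two LRU lists, each with its own shrinker.  'seeks' is the shrinker
 * API's cost-to-recreate hint: a higher value makes reclaim scan this
 * list less aggressively.  Values here are illustrative. */
bc-&gt;live[0].shrink-&gt;seeks = DEFAULT_SEEKS;      /* normal btree nodes */
bc-&gt;live[1].shrink-&gt;seeks = DEFAULT_SEEKS * 4;  /* pinned: free last  */
```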

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: split up btree cache counters for live, freeable</title>
<updated>2024-09-21T15:39:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-05T23:37:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=91ddd7151000c0e538cec7fb2f3f86e2268af4d4'/>
<id>urn:sha1:91ddd7151000c0e538cec7fb2f3f86e2268af4d4</id>
<content type='text'>
This is prep for introducing a second live list and shrinker for pinned
nodes.

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: btree cache counters should be size_t</title>
<updated>2024-09-21T15:39:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-05T23:25:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=691f2cba229189033c55f19b904bc6f4bd68b480'/>
<id>urn:sha1:691f2cba229189033c55f19b904bc6f4bd68b480</id>
<content type='text'>
32 bits won't overflow any time soon, but size_t is the correct type for
counting objects in memory.

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Don't count "skipped access bit" as touched in btree cache scan</title>
<updated>2024-09-21T15:39:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-04T21:19:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=ad5dbe3ce533ec13abacad78076050672e3d39eb'/>
<id>urn:sha1:ad5dbe3ce533ec13abacad78076050672e3d39eb</id>
<content type='text'>
Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Use mm_account_reclaimed_pages() when freeing btree nodes</title>
<updated>2024-09-21T15:39:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-04T19:30:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=ff7f756f2be5b37e323ff8215e500ee564551db9'/>
<id>urn:sha1:ff7f756f2be5b37e323ff8215e500ee564551db9</id>
<content type='text'>
When freeing in a shrinker callback, we need to notify memory reclaim,
so it knows forward progress has been made.

Normally this is done in e.g. slab code, but we're not freeing through
slab - or rather we are, but these allocations are big, and use the
kmalloc_large() path.

This is really a bug in the SLUB code, but we're working around it here
for now.
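
The workaround amounts to calling the MM helper by hand when a node's
buffer is freed from the shrinker - roughly (pseudocode;
mm_account_reclaimed_pages() is the real MM interface, the surrounding
names are illustrative):

```
/* Freeing path reached from the shrinker callback (sketch).
 * kmalloc_large() buffers bypass slab's reclaim accounting, so we
 * report the freed pages to the MM layer ourselves. */
static void btree_node_data_free_sketch(struct btree *b)
{
	mm_account_reclaimed_pages(node_buf_bytes(b) / PAGE_SIZE);
	kvfree(b-&gt;data);
	b-&gt;data = NULL;
}
```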

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Use __GFP_ACCOUNT for reclaimable memory</title>
<updated>2024-09-21T15:39:46+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-03T21:42:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=895fbf1cf02f0c913bb443e9ae18a230bbd174c1'/>
<id>urn:sha1:895fbf1cf02f0c913bb443e9ae18a230bbd174c1</id>
<content type='text'>
Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Add pinned to btree cache not freed counters</title>
<updated>2024-09-21T15:35:19+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-09-01T17:36:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=3340dee2350954c549b12c41206e98ed86d3b63e'/>
<id>urn:sha1:3340dee2350954c549b12c41206e98ed86d3b63e</id>
<content type='text'>
Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
<entry>
<title>bcachefs: Drop memalloc_nofs_save() in bch2_btree_node_mem_alloc()</title>
<updated>2024-09-09T13:41:49+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kent.overstreet@linux.dev</email>
</author>
<published>2024-08-19T19:11:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b36f679c99889bbe9c48e09f7c175d1058823ae8'/>
<id>urn:sha1:b36f679c99889bbe9c48e09f7c175d1058823ae8</id>
<content type='text'>
It's really not needed: the only locks used here are the btree cache
lock, which we drop for GFP_WAIT allocations, and btree node locks - but
we also drop those for GFP_WAIT allocations.

Signed-off-by: Kent Overstreet &lt;kent.overstreet@linux.dev&gt;
</content>
</entry>
</feed>
