path: root/fs/bcachefs/io_read.c
Age | Commit message | Author | Files | Lines
2024-09-28 | bcachefs: rename version -> bversion | Kent Overstreet | 1 | -2/+2
Give bversions a more distinct name, to aid in grepping.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: improve error messages in bch2_ec_read_extent() | Kent Overstreet | 1 | -1/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: EIO errcode cleanup | Kent Overstreet | 1 | -1/+1
We want to be using private errcodes whenever possible, for better error messages.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21bcachefs: improve "no device to read from" messageKent Overstreet1-1/+7
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-09 | bcachefs: promote_whole_extents is now a normal option | Kent Overstreet | 1 | -1/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-09 | bcachefs: bchfs_read(): call trans_begin() on every loop iter | Kent Overstreet | 1 | -4/+0
Same as the recent change for __bch2_read(); also, kill the now-unnecessary btree_trans_too_many_iters() calls.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-08-07 | bcachefs: Add missing bch2_trans_begin() call | Kent Overstreet | 1 | -0/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-15 | bcachefs: __bch2_read(): call trans_begin() on every loop iter | Kent Overstreet | 1 | -26/+19
Perusal of /sys/kernel/debug/bcachefs/*/btree_transaction_stats shows that the read path has been accumulating unneeded paths on the reflink btree, which we don't want. The solution is to call bch2_trans_begin(), which drops paths not used on the previous loop iteration.

bch2_readahead:
Max mem used: 0
Transaction duration:
  count: 194235 since mount
                  since mount     recent
  duration of events
    min:          150 ns
    max:            9 ms
    total:        838 ms
    mean:           4 us           6 us
    stddev:        34 us           7 us
  time between events
    min:           10 ns
    max:           15 h
    mean:           2 s           12 s
    stddev:         2 s            3 ms

Maximum allocated btree paths (193):
  path: idx 2  ref 0:0 P   btree=extents l=0 pos 270943112:392:U32_MAX   locks 0
  path: idx 3  ref 1:0 S   btree=extents l=0 pos 270943112:24578:U32_MAX locks 1
  path: idx 4  ref 0:0 P   btree=reflink l=0 pos 0:24773509:0 locks 0
  path: idx 5  ref 0:0 P S btree=reflink l=0 pos 0:24773631:0 locks 1
  path: idx 6  ref 0:0 P S btree=reflink l=0 pos 0:24773759:0 locks 1
  path: idx 7  ref 0:0 P S btree=reflink l=0 pos 0:24773887:0 locks 1
  path: idx 8  ref 0:0 P S btree=reflink l=0 pos 0:24774015:0 locks 1
  path: idx 9  ref 0:0 P S btree=reflink l=0 pos 0:24774143:0 locks 1
  path: idx 10 ref 0:0 P S btree=reflink l=0 pos 0:24774271:0 locks 1
  <many more reflink paths>

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
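For context, the fix follows the usual bcachefs retry-loop shape: call bch2_trans_begin() at the top of every iteration so that btree paths untouched by the previous pass are released. A minimal sketch of that shape only (read_one_extent() and read_is_done() are hypothetical stand-ins, not the actual __bch2_read() body):

	static int read_loop_sketch(struct btree_trans *trans)
	{
		int ret;

		while (1) {
			/* Resets the transaction and drops btree paths that the
			 * previous iteration did not use. */
			bch2_trans_begin(trans);

			ret = read_one_extent(trans);	/* hypothetical per-iteration work */
			if (bch2_err_matches(ret, BCH_ERR_transaction_restart))
				continue;		/* lock restart: redo this iteration */
			if (ret || read_is_done(trans))	/* hypothetical completion check */
				break;
		}
		return ret;
	}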
2024-07-15 | bcachefs: Self healing on read IO error | Kent Overstreet | 1 | -22/+47
This repurposes the promote path, which already knows how to call data_update() after a read: we now automatically rewrite bad data when we get a read error and then successfully retry from a different replica.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-06-29 | bcachefs: Fix bch2_read_retry_nodecode() | Kent Overstreet | 1 | -0/+3
BCH_READ_NODECODE mode - used by the move paths - really wants to use only the original rbio, but the retry path really wants to clone - oof. Make sure to copy the crc of the pointer we read from back to the original rbio, or we'll see spurious checksum errors later.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-06-28 | bcachefs: Delete old faulty bch2_trans_unlock() call | Kent Overstreet | 1 | -1/+0
The unlock is now in read_extent; this fixes an assertion pop in read_from_stale_dirty_pointer().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-06-10 | bcachefs: Check for invalid bucket from bucket_gen(), gc_bucket() | Kent Overstreet | 1 | -8/+22
Turn more asserts into proper recoverable error paths.
Reported-by: syzbot+246b47da27f8e7e7d6fb@syzkaller.appspotmail.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-06-10 | bcachefs: Enable automatic shrinking for rhashtables | Kent Overstreet | 1 | -3/+4
Since the key cache shrinker walks the rhashtable, a mostly empty rhashtable leads to really nasty reclaim performance issues.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
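For reference, automatic shrinking is opted into through the table's rhashtable_params; a minimal generic sketch (hypothetical entry type, not the bcachefs key cache parameters):

	#include <linux/rhashtable.h>

	struct demo_entry {			/* hypothetical hashed object */
		u32			key;
		struct rhash_head	hash;
	};

	static const struct rhashtable_params demo_params = {
		.head_offset		= offsetof(struct demo_entry, hash),
		.key_offset		= offsetof(struct demo_entry, key),
		.key_len		= sizeof(u32),
		/* Let the table shrink back down as entries are removed,
		 * instead of staying at its high-water-mark size. */
		.automatic_shrinking	= true,
	};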
2024-05-09 | bcachefs: kill bch2_dev_bkey_exists() in bch2_read_endio() | Kent Overstreet | 1 | -14/+19
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: bch2_dev_get_ioref() checks for device not present | Kent Overstreet | 1 | -1/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: bch2_dev_get_ioref2(); io_read.c | Kent Overstreet | 1 | -5/+9
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: pass bch_dev to read_from_stale_dirty_pointer() | Kent Overstreet | 1 | -2/+2
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: ptr_stale() -> dev_ptr_stale() | Kent Overstreet | 1 | -2/+2
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: PTR_BUCKET_POS() now takes bch_dev | Kent Overstreet | 1 | -1/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: member helper cleanups | Kent Overstreet | 1 | -4/+4
Some renaming for better consistency:
  bch2_member_exists  -> bch2_member_alive
  bch2_dev_exists     -> bch2_member_exists
  bch2_dev_exists2    -> bch2_dev_exists
  bch_dev_locked      -> bch2_dev_locked
  bch_dev_bkey_exists -> bch2_dev_bkey_exists
  new helper: bch2_dev_safe
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-05-09 | bcachefs: iter/update/trigger/str_hash flag cleanup | Kent Overstreet | 1 | -5/+5
Combine iter/update/trigger/str_hash flags into a single enum, and x-macroize them for a to_text() function later.

These flags are all for a specific iter/key/update context, so it makes sense to group them together - iter/update/trigger flags were already given distinct bits; this cleans up and unifies that handling.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
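To illustrate the x-macro technique generically (hypothetical flag names, not the actual combined bcachefs enum), a single list macro expands once into the enum and again into the strings a to_text() printer would use:

	#define DEMO_FLAGS()		\
		x(intent)		\
		x(prefetch)		\
		x(nopreserve)

	enum demo_flag {
	#define x(n)	DEMO_FLAG_##n,
		DEMO_FLAGS()
	#undef x
		DEMO_FLAG_NR,
	};

	/* Same list, expanded a second time to get printable names. */
	static const char * const demo_flag_strs[] = {
	#define x(n)	#n,
		DEMO_FLAGS()
	#undef x
		NULL,
	};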
2024-05-09 | bcachefs: prt_printf() now respects \r\n\t | Kent Overstreet | 1 | -2/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-03-14 | bcachefs: Prefer struct_size over open coded arithmetic | Erick Archer | 1 | -1/+1
This is an effort to get rid of all multiplications from allocation functions in order to prevent integer overflows [1][2].

As the "op" variable is a pointer to "struct promote_op" and this structure ends in a flexible array:

	struct promote_op {
		[...]
		struct bio_vec bi_inline_vecs[];
	};

and the "t" variable is a pointer to "struct journal_seq_blacklist_table" and this structure also ends in a flexible array:

	struct journal_seq_blacklist_table {
		[...]
		struct journal_seq_blacklist_table_entry {
			u64	start;
			u64	end;
			bool	dirty;
		}	entries[];
	};

the preferred way in the kernel is to use the struct_size() helper to do the arithmetic instead of the argument "size + size * count" in the kzalloc() functions. This way, the code is more readable and safer.

Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments [1]
Link: https://github.com/KSPP/linux/issues/160 [2]
Signed-off-by: Erick Archer <erick.archer@gmx.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
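As a generic illustration of the conversion (the function is hypothetical; struct promote_op and bi_inline_vecs are as quoted above):

	#include <linux/overflow.h>
	#include <linux/slab.h>

	static struct promote_op *promote_alloc_sketch(unsigned nr_vecs)
	{
		struct promote_op *op;

		/* Before:
		 *   kzalloc(sizeof(*op) + sizeof(struct bio_vec) * nr_vecs, GFP_KERNEL)
		 * where the multiplication can overflow and wrap.
		 *
		 * After: struct_size() performs the same computation with
		 * overflow checking (it saturates instead of wrapping). */
		op = kzalloc(struct_size(op, bi_inline_vecs, nr_vecs), GFP_KERNEL);
		return op;
	}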
2024-01-06 | bcachefs: improve checksum error messages | Kent Overstreet | 1 | -3/+8
New helpers:
  - bch2_csum_to_text()
  - bch2_csum_err_msg()
Standardize our checksum error messages a bit, and print out the checksums a bit more nicely.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-01-01 | bcachefs: Improve the nopromote tracepoint | Kent Overstreet | 1 | -13/+18
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-01-01 | bcachefs: Use GFP_KERNEL for promote allocations | Kent Overstreet | 1 | -3/+3
We already have btree locks dropped here - no need for GFP_NOFS.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-01-01 | bcachefs: Replace zero-length arrays with flexible-array members | Gustavo A. R. Silva | 1 | -1/+1
Fake flexible arrays (zero-length and one-element arrays) are deprecated and should be replaced by flexible-array members. So, replace zero-length arrays with flexible-array members in multiple structures.
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
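A generic before/after of that conversion (hypothetical structure, not one of the bcachefs structs):

	/* Before: zero-length array, a GNU extension that defeats compiler
	 * and fortify bounds checking. */
	struct demo_msg_old {
		u32	len;
		u8	data[0];
	};

	/* After: C99 flexible-array member. */
	struct demo_msg_new {
		u32	len;
		u8	data[];
	};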
2024-01-01 | bcachefs: Rename BTREE_INSERT flags | Kent Overstreet | 1 | -1/+1
BTREE_INSERT flags are actually transaction commit flags - rename them for clarity.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-11-26 | bcachefs: Data update path won't accidentally grow replicas | Kent Overstreet | 1 | -1/+1
Previously, there was a bug where if an extent had greater durability than required (because we needed to move a durability=1 pointer and ended up putting it on a durability=2 device), we would submit a write for replicas=2 - the durability of the pointer being rewritten - instead of the number of replicas required to bring it back up to the data_replicas option.

This, plus the allocation path sometimes allocating on a greater-durability device than requested, meant that extents could continue having more and more replicas added as they were being rewritten.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
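A simplified sketch of the intended arithmetic (hypothetical helper, not the actual data-update code): the number of new replicas to write should be driven by the data_replicas option and the durability the extent still has, not by the durability of the pointer being dropped:

	static unsigned replicas_to_write(unsigned data_replicas_opt,
					  unsigned durability_remaining)
	{
		/* Bug: sizing the write by the durability of the pointer being
		 * rewritten (e.g. 2, if it happened to land on a durability=2
		 * device). Fix: only top the extent back up to the option. */
		return durability_remaining >= data_replicas_opt
			? 0
			: data_replicas_opt - durability_remaining;
	}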
2023-11-05 | bcachefs: bch2_ec_read_extent() now takes btree_trans | Kent Overstreet | 1 | -1/+1
We're not supposed to have more than one btree_trans at a time in a given thread - that causes recursive locking deadlocks.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-11-02 | bcachefs: Add IO error counts to bch_member | Kent Overstreet | 1 | -2/+2
We now track IO errors per device since filesystem creation. IO error counts can be viewed in sysfs, or with the 'bcachefs show-super' command.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-23 | bcachefs: Fixes for building in userspace | Kent Overstreet | 1 | -0/+2
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-23 | bcachefs: Heap allocate btree_trans | Kent Overstreet | 1 | -19/+17
We're using more stack than we'd like in a number of functions, and btree_trans is the biggest object that we stack allocate. But we have to do a heap allocation to initialize it anyways, so there's no real downside to heap allocating the entire thing.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-23 | bcachefs: remove duplicated assignment to variable offset_into_extent | Colin Ian King | 1 | -1/+0
Variable offset_into_extent is being assigned zero, and a few statements later it is being re-assigned to the same value. The second assignment is redundant and can be removed. Cleans up clang-scan build warning:

  fs/bcachefs/io.c:2722:3: warning: Value stored to 'offset_into_extent' is never read [deadcode.DeadStores]

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
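The pattern behind that warning, shown generically (hypothetical function, not the actual io.c context):

	static void dead_store_sketch(void)
	{
		unsigned offset_into_extent = 0;	/* first store */

		/* ... code that never reads offset_into_extent ... */

		offset_into_extent = 0;	/* redundant second store of the same value;
					 * removing one of the two silences the
					 * clang deadcode.DeadStores warning */
		(void) offset_into_extent;
	}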
2023-10-23 | bcachefs: trace_read_nopromote() | Kent Overstreet | 1 | -17/+21
Add a tracepoint to print the reason a read wasn't promoted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-23 | bcachefs: Break up io.c | Kent Overstreet | 1 | -0/+1207
More reorganization; this splits io.c up into:
  - io_read.c
  - io_misc.c - fallocate, fpunch, truncate
  - io_write.c
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>