| author | Qu Wenruo <wqu@suse.com> | 2025-10-21 06:51:48 +0300 |
|---|---|---|
| committer | David Sterba <dsterba@suse.com> | 2025-11-25 00:03:38 +0300 |
| commit | 988f693a46d83dc832005a1403ae0471eb1f8964 (patch) | |
| tree | a2e72d5aafa7419debc7b2beacb4de0c01f853b1 | |
| parent | ca428e9b49c77b0bfc6ebbc8536ed854463b26e2 (diff) | |
| download | linux-988f693a46d83dc832005a1403ae0471eb1f8964.tar.xz | |
btrfs: subpage: simplify the PAGECACHE_TAG_TOWRITE handling
In function btrfs_subpage_set_writeback() we need to keep the
PAGECACHE_TAG_TOWRITE tag if the folio is still dirty.
This quirk is needed to support async extents, as a subpage range can
suddenly go writeback without touching other subpage ranges in the
same folio.
However, we can simplify the handling by replacing the open-coded tag
clearing with passing the @keep_write flag, set depending on whether the
folio is dirty.
Since we're already holding the subpage lock, no one is able to change
the dirty/writeback flags, thus it's safe to check the folio's dirty
flag before calling __folio_start_writeback().
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
| -rw-r--r-- | fs/btrfs/subpage.c | 14 |
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 0a4a1ee81e63..80cd27d3267f 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -440,6 +440,7 @@ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info,
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							writeback, start, len);
 	unsigned long flags;
+	bool keep_write;
 
 	spin_lock_irqsave(&bfs->lock, flags);
 	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
@@ -450,18 +451,9 @@ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info,
 	 * assume writeback is complete, and exit too early — violating sync
 	 * ordering guarantees.
 	 */
+	keep_write = folio_test_dirty(folio);
 	if (!folio_test_writeback(folio))
-		__folio_start_writeback(folio, true);
-
-	if (!folio_test_dirty(folio)) {
-		struct address_space *mapping = folio_mapping(folio);
-		XA_STATE(xas, &mapping->i_pages, folio->index);
-		unsigned long xa_flags;
-
-		xas_lock_irqsave(&xas, xa_flags);
-		xas_load(&xas);
-		xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE);
-		xas_unlock_irqrestore(&xas, xa_flags);
-	}
+		__folio_start_writeback(folio, keep_write);
 	spin_unlock_irqrestore(&bfs->lock, flags);
 }
