| author | Qu Wenruo <wqu@suse.com> | 2021-09-27 10:22:02 +0300 |
|---|---|---|
| committer | David Sterba <dsterba@suse.com> | 2021-10-26 20:08:05 +0300 |
| commit | 66448b9d5b6840c230d81cbf10d6ffaeece2d71b (patch) | |
| tree | 37a9b08a8d68fba3b0565351dca1ccbdd31055ad /fs/btrfs/extent_io.c | |
| parent | 741ec653ab58f5f263f2b6df38157997661c7a50 (diff) | |
| download | linux-66448b9d5b6840c230d81cbf10d6ffaeece2d71b.tar.xz | |
btrfs: subpage: make extent_write_locked_range() compatible
There are two sites in extent_write_locked_range() that are not yet
subpage compatible:
- How @nr_pages is calculated
  For subpage we can have the following range with 64K page size:

    0        32K      64K      96K      128K
    |        |////////|////////|        |

  In that case, although 96K - 32K == 64K and it looks like one page is
  enough, the range in fact spans two pages, not one.
  Fix it by using round_up() and round_down() to calculate @nr_pages.
  Also add some extra ASSERT()s to ensure the range passed in is already
  aligned.
- How the page end is calculated
  Currently we just use cur + PAGE_SIZE - 1 to calculate the page end,
  which can't handle the above range layout and will trigger an ASSERT()
  in btrfs_writepage_endio_finish_ordered(), as the range is no longer
  covered by the page range.
  Fix it by taking the page end into consideration; the sketch after
  this list walks through both calculations.
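To make the two fixed calculations concrete, here is a minimal userspace sketch (not part of the patch) that replays them on the 64K-page example above. PAGE_SIZE, PAGE_SHIFT, round_down(), round_up() and min() below are local stand-ins for the kernel helpers of the same names, and the range values are purely illustrative.

```c
/* Minimal userspace illustration of the nr_pages and page-end math. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   (64 * 1024ULL)   /* subpage case: 64K pages */
#define PAGE_SHIFT  16
#define round_down(x, a)  ((x) & ~((a) - 1))
#define round_up(x, a)    round_down((x) + (a) - 1, (a))
#define min(a, b)         ((a) < (b) ? (a) : (b))

int main(void)
{
        uint64_t start = 32 * 1024;     /* 32K, inclusive */
        uint64_t end = 96 * 1024 - 1;   /* range covers [32K, 96K) */

        /* Old formula: 64K of dirty data looks like a single page. */
        unsigned long old_nr_pages = (end - start + PAGE_SIZE) >> PAGE_SHIFT;

        /* Fixed formula: round to page boundaries first, giving two pages. */
        unsigned long nr_pages = (round_up(end, PAGE_SIZE) -
                                  round_down(start, PAGE_SIZE)) >> PAGE_SHIFT;

        printf("old nr_pages = %lu, fixed nr_pages = %lu\n",
               old_nr_pages, nr_pages);

        /* Fixed page-end handling: clamp each step to its page boundary. */
        for (uint64_t cur = start; cur <= end; ) {
                uint64_t cur_end = min(round_down(cur, PAGE_SIZE) +
                                       PAGE_SIZE - 1, end);

                printf("write back [%llu, %llu]\n",
                       (unsigned long long)cur, (unsigned long long)cur_end);
                cur = cur_end + 1;
        }
        return 0;
}
```

Running it reports one page for the old formula and two for the fixed one, and the loop visits [32K, 64K) and [64K, 96K) separately instead of using cur + PAGE_SIZE - 1, which for cur == 32K would claim an end of 96K - 1 and cross into the second page.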
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs/btrfs/extent_io.c')
-rw-r--r-- | fs/btrfs/extent_io.c | 15 |
1 file changed, 11 insertions, 4 deletions
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a010a4058207..655e78ae376e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -5086,15 +5086,14 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
 	u64 cur = start;
-	unsigned long nr_pages = (end - start + PAGE_SIZE) >>
-		PAGE_SHIFT;
+	unsigned long nr_pages;
+	const u32 sectorsize = btrfs_sb(inode->i_sb)->sectorsize;
 	struct extent_page_data epd = {
 		.bio_ctrl = { 0 },
 		.extent_locked = 1,
 		.sync_io = 1,
 	};
 	struct writeback_control wbc_writepages = {
-		.nr_to_write = nr_pages * 2,
 		.sync_mode = WB_SYNC_ALL,
 		.range_start = start,
 		.range_end = end + 1,
@@ -5103,14 +5102,22 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 		.no_cgroup_owner = 1,
 	};
 
+	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize));
+	nr_pages = (round_up(end, PAGE_SIZE) - round_down(start, PAGE_SIZE)) >>
+		   PAGE_SHIFT;
+	wbc_writepages.nr_to_write = nr_pages * 2;
+
 	wbc_attach_fdatawrite_inode(&wbc_writepages, inode);
 	while (cur <= end) {
+		u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end);
+
 		page = find_get_page(mapping, cur >> PAGE_SHIFT);
 		/*
 		 * All pages in the range are locked since
 		 * btrfs_run_delalloc_range(), thus there is no way to clear
 		 * the page dirty flag.
 		 */
+		ASSERT(PageLocked(page));
 		ASSERT(PageDirty(page));
 		clear_page_dirty_for_io(page);
 		ret = __extent_writepage(page, &wbc_writepages, &epd);
@@ -5120,7 +5127,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 			first_error = ret;
 		}
 		put_page(page);
-		cur += PAGE_SIZE;
+		cur = cur_end + 1;
 	}
 
 	if (!found_error)