author | Filipe Manana <fdmanana@suse.com> | 2022-03-23 19:19:23 +0300
---|---|---
committer | David Sterba <dsterba@suse.com> | 2022-05-16 18:03:09 +0300
commit | b023e67512accd01d6daadb0244b3b430f3b2b6e (patch)
tree | 014c735969278d9148fda8005813b3dff2aba510 /tools/perf/scripts/python/exported-sql-viewer.py
parent | b0a66a3137bde6e011e2b14b135e8ea18facb68c (diff)
download | linux-b023e67512accd01d6daadb0244b3b430f3b2b6e.tar.xz
btrfs: avoid blocking on page locks with nowait dio on compressed range
If we are doing NOWAIT direct IO read/write and our inode has compressed
extents, we call filemap_fdatawrite_range() against the range in order
to wait for compressed writeback to complete, since the generic code at
iomap_dio_rw() calls filemap_write_and_wait_range() once, which is not
enough to wait for compressed writeback to complete.

This call to filemap_fdatawrite_range() can block on page locks, since
the first writepages() on a range that we will try to compress results
only in queuing a work to compress the data while holding the pages
locked.

Even though the generic code at iomap_dio_rw() will do the right thing
and return -EAGAIN for NOWAIT requests in case there are pages in the
range, we can still end up at btrfs_dio_iomap_begin() with pages in the
range because either of the following can happen:

1) Memory mapped writes, as we haven't locked the range yet;

2) Buffered reads might have started, which lock the pages, and we do
   the filemap_fdatawrite_range() call before locking the file range.

So don't call filemap_fdatawrite_range() at btrfs_dio_iomap_begin() if we
are doing a NOWAIT read/write. Instead call filemap_range_needs_writeback()
to check if there are any locked, dirty, or under writeback pages, and
return -EAGAIN if that's the case.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>