author     Chandan Babu R <chandanbabu@kernel.org>  2024-04-24 09:57:33 +0300
committer  Chandan Babu R <chandanbabu@kernel.org>  2024-04-24 09:57:33 +0300
commit     b878dbbe2acda5cd285387dd26a68751cfe66485 (patch)
tree       8dea23bb5627e3878c8fe929742b4af5a79c5fc6 /tools/perf/scripts/python/exported-sql-viewer.py
parent     496baa2cb94f3ceea56aa23ee238020102a07de2 (diff)
parent     4ad350ac58627bfe81f71f43f6738e36b4eb75c6 (diff)
Merge tag 'reduce-scrub-iget-overhead-6.10_2024-04-23' of https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux into xfs-6.10-mergeC
xfs: reduce iget overhead in scrub
This patchset reduces iget overhead in two ways. First, an earlier patch
conditionally set DONTCACHE on inodes during xchk_irele, on the grounds
that we know better at irele time whether an inode should be dropped.
Unfortunately, over time that patch morphed into a call to
d_mark_dontcache, which dropped inodes even when they were still
referenced by the dcache. This actually caused *more* recycle overhead
than if we'd simply called xfs_iget to set DONTCACHE only on cache
misses.
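The policy difference above can be illustrated with a toy userspace model
(this is not XFS code; the structures, function names, and the dcache flag
are invented for the sketch). It contrasts unconditionally dropping the
inode at release time, as d_mark_dontcache ended up doing, with honouring
a dcache reference and only marking DONTCACHE on a lookup miss:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of one cached inode. */
struct toy_inode {
	bool cached;     /* present in the inode cache */
	bool dontcache;  /* drop from the cache at final release */
	bool dcache_ref; /* still referenced by the dentry cache */
};

static int misses; /* cache misses, i.e. costly inode recycles */

/* iget: on a miss, reload the inode and set DONTCACHE; a hit is free. */
static void toy_iget(struct toy_inode *ip)
{
	if (!ip->cached) {
		misses++;
		ip->cached = true;
		ip->dontcache = true; /* miss-only DONTCACHE policy */
	}
}

/* Old behaviour: force the drop even if the dcache holds a reference. */
static void toy_irele_force(struct toy_inode *ip)
{
	ip->cached = false;
}

/* Fixed behaviour: only drop a DONTCACHE inode the dcache has let go of. */
static void toy_irele(struct toy_inode *ip)
{
	if (ip->dontcache && !ip->dcache_ref)
		ip->cached = false;
}

/* Scrub a dcache-referenced, initially cached inode n times. */
static int scrub_n(int n, bool force)
{
	struct toy_inode ino = { .cached = true, .dcache_ref = true };

	misses = 0;
	for (int i = 0; i < n; i++) {
		toy_iget(&ino);
		if (force)
			toy_irele_force(&ino);
		else
			toy_irele(&ino);
	}
	return misses;
}
```

With the forced drop, every iget after the first one recycles the inode;
with the miss-only policy, the dcache reference keeps it cached throughout.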
The second patch reduces the cost of untrusted iget for a vectored scrub
call by having the scrubv code maintain a separate refcount to the inode
so that the cache will always hit.
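The second idea can likewise be sketched as a toy userspace model (again
not XFS code; names and the lookup counter are invented). The scrubv
wrapper pins the inode with one reference of its own before running the
per-scrubber igets, so only the first by-handle lookup pays the untrusted
cost:

```c
#include <assert.h>

/* Hypothetical refcounted inode reached by an untrusted file handle. */
struct toy_inode {
	int refcount;
	int untrusted_lookups; /* costly by-handle lookups performed */
};

/* iget by handle: if nothing keeps the inode live, pay for an
 * untrusted lookup; either way, take a reference. */
static void toy_iget_handle(struct toy_inode *ip)
{
	if (ip->refcount == 0)
		ip->untrusted_lookups++;
	ip->refcount++;
}

static void toy_irele(struct toy_inode *ip)
{
	ip->refcount--;
}

/* Run n sub-scrubs; if hold is set, scrubv pins the inode across all. */
static int toy_scrubv(int n, int hold)
{
	struct toy_inode ino = { 0, 0 };
	int i;

	if (hold)
		toy_iget_handle(&ino); /* scrubv's own reference */
	for (i = 0; i < n; i++) {
		toy_iget_handle(&ino); /* per-scrubber iget */
		toy_irele(&ino);
	}
	if (hold)
		toy_irele(&ino);
	return ino.untrusted_lookups;
}
```

Without the held reference each sub-scrub redoes the untrusted lookup;
with it, every per-scrubber iget finds the refcount nonzero and hits.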
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
* tag 'reduce-scrub-iget-overhead-6.10_2024-04-23' of https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux:
xfs: only iget the file once when doing vectored scrub-by-handle
xfs: use dontcache for grabbing inodes during scrub
Diffstat (limited to 'tools/perf/scripts/python/exported-sql-viewer.py')
0 files changed, 0 insertions, 0 deletions