author | Darrick J. Wong <djwong@kernel.org> | 2024-04-22 19:48:23 +0300
---|---|---
committer | Darrick J. Wong <djwong@kernel.org> | 2024-04-24 02:55:17 +0300
commit | 271557de7cbfdecb08e89ae1ca74647ceb57224f (patch) |
tree | 75935093e4f7e4a38a34523211bda8d95b7555ff /fs/xfs/scrub/common.h |
parent | 3f31406aef493b3f19020909d29974e28253f91c (diff) |
download | linux-271557de7cbfdecb08e89ae1ca74647ceb57224f.tar.xz |
xfs: reduce the rate of cond_resched calls inside scrub
We really don't want to call cond_resched every single time we go
through a loop in scrub -- there may be billions of records, and probing
into the scheduler itself has overhead. Reduce this overhead by calling
cond_resched only 10 times per second, and add a counter so that we
check jiffies only once every 1000 records or so.
Surprisingly, this reduces scrub-only fstests runtime by about 2%. I
used the bmapinflate xfs_db command to produce a billion-extent file and
this stupid gadget reduced the scrub runtime by about 4%.
From a stupid microbenchmark of calling these things 1 billion times, I
estimate that cond_resched costs about 5.5ns per call; jiffies costs
about 0.3ns per read; and fatal_signal_pending costs about 0.4ns per
call.
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'fs/xfs/scrub/common.h')
-rw-r--r-- | fs/xfs/scrub/common.h | 25 |
1 files changed, 0 insertions, 25 deletions
diff --git a/fs/xfs/scrub/common.h b/fs/xfs/scrub/common.h
index 39465e39dc5f..3d5f1f6b4b7b 100644
--- a/fs/xfs/scrub/common.h
+++ b/fs/xfs/scrub/common.h
@@ -6,31 +6,6 @@
 #ifndef __XFS_SCRUB_COMMON_H__
 #define __XFS_SCRUB_COMMON_H__
 
-/*
- * We /could/ terminate a scrub/repair operation early.  If we're not
- * in a good place to continue (fatal signal, etc.) then bail out.
- * Note that we're careful not to make any judgements about *error.
- */
-static inline bool
-xchk_should_terminate(
-	struct xfs_scrub	*sc,
-	int			*error)
-{
-	/*
-	 * If preemption is disabled, we need to yield to the scheduler every
-	 * few seconds so that we don't run afoul of the soft lockup watchdog
-	 * or RCU stall detector.
-	 */
-	cond_resched();
-
-	if (fatal_signal_pending(current)) {
-		if (*error == 0)
-			*error = -EINTR;
-		return true;
-	}
-	return false;
-}
-
 int xchk_trans_alloc(struct xfs_scrub *sc, uint resblks);
 int xchk_trans_alloc_empty(struct xfs_scrub *sc);
 void xchk_trans_cancel(struct xfs_scrub *sc);