author | Joe Thornber <ejt@redhat.com> | 2017-05-11 12:07:34 +0300 |
---|---|---|
committer | Mike Snitzer <snitzer@redhat.com> | 2017-05-15 04:54:32 +0300 |
commit | a8cd1eba6135e086109e2b94bf96deb17456ede8 (patch) | |
tree | 4c77d6484c743c7cbd8b0138e336cb229b7194bf /drivers/md/dm-cache-policy-smq.c | |
parent | 072792dcdfc8d5f91a26050e5665285f50afebf5 (diff) | |
download | linux-a8cd1eba6135e086109e2b94bf96deb17456ede8.tar.xz | |
dm cache policy smq: only demote entries in bottom half of the clean multiqueue
Heavy IO load may mean there are very few clean blocks in the cache, and
we risk demoting entries that get hit a lot.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Diffstat (limited to 'drivers/md/dm-cache-policy-smq.c')
-rw-r--r-- | drivers/md/dm-cache-policy-smq.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-cache-policy-smq.c b/drivers/md/dm-cache-policy-smq.c
index 72479bd61e11..a177559f2049 100644
--- a/drivers/md/dm-cache-policy-smq.c
+++ b/drivers/md/dm-cache-policy-smq.c
@@ -1190,7 +1190,7 @@ static void queue_demotion(struct smq_policy *mq)
 	if (unlikely(WARN_ON_ONCE(!mq->migrations_allowed)))
 		return;
 
-	e = q_peek(&mq->clean, mq->clean.nr_levels, true);
+	e = q_peek(&mq->clean, mq->clean.nr_levels / 2, true);
 	if (!e) {
 		if (!clean_target_met(mq, false))
 			queue_writeback(mq);
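To make the effect of the halved q_peek() bound concrete, here is a minimal standalone sketch, not the kernel's smq code: it models a multiqueue as entries tagged with a level (higher level = hotter), and a hypothetical peek_bottom() helper that only considers entries below a given level bound. The names NR_LEVELS, peek_bottom and the struct entry fields are invented for illustration; the real q_peek() walks per-level lists inside struct queue. Passing NR_LEVELS / 2 instead of NR_LEVELS mirrors the change above: when the only clean entries are hot, no demotion candidate is found and the caller would fall back to queue_writeback(), as in the hunk.

```c
/*
 * Toy illustration of restricting demotion to the bottom half of a
 * multiqueue -- NOT the kernel's smq implementation.
 */
#include <stdio.h>
#include <stddef.h>

#define NR_LEVELS 8

struct entry {
	int block;
	int level;	/* 0 = coldest, NR_LEVELS - 1 = hottest */
};

/* Return the coldest entry whose level is below max_level, or NULL. */
static struct entry *peek_bottom(struct entry *es, size_t n, int max_level)
{
	struct entry *best = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (es[i].level >= max_level)
			continue;	/* too hot to be a demotion victim */
		if (!best || es[i].level < best->level)
			best = &es[i];
	}
	return best;
}

int main(void)
{
	/* Under heavy IO the few remaining clean entries tend to be hot. */
	struct entry clean[] = {
		{ .block = 10, .level = 7 },
		{ .block = 11, .level = 6 },
		{ .block = 12, .level = 5 },
	};
	size_t n = sizeof(clean) / sizeof(clean[0]);

	struct entry *any  = peek_bottom(clean, n, NR_LEVELS);	    /* old bound */
	struct entry *cold = peek_bottom(clean, n, NR_LEVELS / 2);  /* new bound */

	if (any)
		printf("old bound: would demote hot block %d (level %d)\n",
		       any->block, any->level);
	if (!cold)
		printf("new bound: no cold candidate; fall back to writeback\n");
	return 0;
}
```

With the old bound the sketch picks block 12 (level 5), a frequently hit entry; with the halved bound it finds nothing and would instead queue writeback to produce colder clean blocks, which matches the rationale in the commit message.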