author		Nick Piggin <npiggin@suse.de>	2009-02-12 06:34:23 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-02-12 19:10:53 +0300
commit		3a4c6800f31ea8395628af5e7e490270ee5d0585 (patch)
tree		5703cf8140e5b58f5f333dbbf7b1514404584e69 /mm
parent		b578f3fcca1e78624dfb5f358776e63711d7fda2 (diff)
download	linux-3a4c6800f31ea8395628af5e7e490270ee5d0585.tar.xz
Fix page writeback thinko, causing Berkeley DB slowdown
A bug was introduced into write_cache_pages cyclic writeout by commit 31a12666d8f0c22235297e1c1575f82061480029 ("mm: write_cache_pages cyclic fix"). The intention (and comments) is that we should cycle back and look for more dirty pages at the beginning of the file if there is no more work to be done.

But the !done condition was dropped from the test. This means that any time the page writeout loop breaks (e.g. due to nr_to_write == 0), we will set index to 0, then goto again. This will set done_index to index, then find done is set, so will proceed to the end of the function. When updating mapping->writeback_index for cyclic writeout, we now use done_index == 0, so we're always cycling back to 0.

This seemed to be causing random mmap writes (slapadd and iozone) to start writing more pages from the LRU, writeout would slow down, and it caused bugzilla entry http://bugzilla.kernel.org/show_bug.cgi?id=12604 about Berkeley DB slowing down dramatically.

With this patch, iozone random write performance is increased nearly 5x on my system (iozone -B -r 4k -s 64k -s 512m -s 1200m on ext2).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Reported-and-tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
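To make the failure mode concrete, here is a small user-space sketch of the cyclic-writeout logic described above. It is a toy model, not the kernel's write_cache_pages(): the variable names (index, end, done, done_index, cycled, writeback_index, nr_to_write) follow the kernel code, but writing a page is reduced to bumping a counter, and the "buggy" flag is an invention of this sketch that switches between the old "if (!cycled)" test and the fixed "if (!cycled && !done)" test.

#include <stdio.h>

typedef unsigned long pgoff_t;

/*
 * Toy model of the range_cyclic retry in write_cache_pages().
 * Returns the value that would be stored in mapping->writeback_index.
 */
static pgoff_t cyclic_writeout(pgoff_t writeback_index, pgoff_t nr_dirty,
			       long nr_to_write, int buggy)
{
	pgoff_t index = writeback_index;	/* resume where we left off last time */
	pgoff_t end = (pgoff_t)-1;
	pgoff_t done_index;
	int cycled = (index == 0);
	int done = 0;

retry:
	done_index = index;
	while (!done && index <= end && index < nr_dirty) {
		done_index = ++index;		/* "wrote" one dirty page */
		if (--nr_to_write <= 0)
			done = 1;		/* budget exhausted: break out early */
	}

	/*
	 * Old test: wrap whenever we have not cycled yet, even though done is
	 * already set.  That re-runs the loop with index = 0, which immediately
	 * resets done_index to 0 and falls through, so the resume position is
	 * pushed back to the start of the file on every call.
	 */
	if (!cycled && (buggy || !done)) {
		cycled = 1;
		index = 0;
		end = writeback_index - 1;
		goto retry;
	}

	return done_index;	/* becomes mapping->writeback_index */
}

int main(void)
{
	/* Resume at page 1000, 10000 dirty pages, budget of 32 pages per call. */
	printf("buggy: next resume point %lu\n", cyclic_writeout(1000, 10000, 32, 1));
	printf("fixed: next resume point %lu\n", cyclic_writeout(1000, 10000, 32, 0));
	return 0;
}

With the old test the first line prints 0, so every pass over a large dirty file restarts from page 0; with the fixed test it prints 1032, i.e. writeout resumes just past the pages already written, which is the behaviour the random-write workloads above (slapadd, iozone, Berkeley DB) depend on.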
Diffstat (limited to 'mm')
-rw-r--r--	mm/page-writeback.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 6106a5c7ed44..3c84128596ba 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1079,7 +1079,7 @@ continue_unlock:
 		pagevec_release(&pvec);
 		cond_resched();
 	}
-	if (!cycled) {
+	if (!cycled && !done) {
 		/*
 		 * range_cyclic:
 		 * We hit the last page and there is more work to be done: wrap