author     Qian Yingjin <qian@ddn.com>  2023-02-08 05:24:00 +0300
committer  Andrew Morton <akpm@linux-foundation.org>  2023-02-17 05:11:58 +0300
commit     5956592ce337330cdff0399a6f8b6a5aea397a8e (patch)
tree       f3266b69e8e9db48688e526ab5efec2ed9a504ba /mm/kfence
parent     ce4d9a1ea35ac5429e822c4106cb2859d5c71f3e (diff)
download   linux-5956592ce337330cdff0399a6f8b6a5aea397a8e.tar.xz
mm/filemap: fix page end in filemap_get_read_batch
I was running traces of the read code against a RAID storage system to understand why read requests were being misaligned against the underlying RAID stripes. I found that the page end offset calculation in filemap_get_read_batch() was off by one. When a read is submitted with end offset 1048575, it calculates an end page index of 256 when it should be 255. "last_index" is the index of the page beyond the end of the read, and it should be skipped when getting a batch of pages for read in filemap_get_read_batch(). The simple patch below fixes the problem. This code was introduced in kernel 5.12.

Link: https://lkml.kernel.org/r/20230208022400.28962-1-coolqyj@163.com
Fixes: cbd59c48ae2b ("mm/filemap: use head pages in generic_file_buffered_read")
Signed-off-by: Qian Yingjin <qian@ddn.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/kfence')
0 files changed, 0 insertions, 0 deletions