author     Kirill A. Shutemov <kirill.shutemov@linux.intel.com>  2017-11-27 06:21:26 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>        2017-11-27 23:26:29 +0300
commit     152e93af3cfe2d29d8136cc0a02a8612507136ee (patch)
tree       19bd28f0ea6af08ba14ae4bfd841b5256f888ee7 /mm/migrate.c
parent     a8f97366452ed491d13cf1e44241bc0b5740b1f0 (diff)
download   linux-152e93af3cfe2d29d8136cc0a02a8612507136ee.tar.xz
mm, thp: Do not make pmd/pud dirty without a reason
Currently we make page table entries dirty all the time regardless of
access type and don't even consider if the mapping is write-protected.
The reasoning is that we don't really need dirty tracking on THP and
making the entry dirty upfront may save some time on first write to the
page.
Unfortunately, such an approach may result in a false-positive
can_follow_write_pmd() for the huge zero page or a read-only shmem file.
Let's only make the page dirty if we are about to write to it anyway
(as we do for small pages).
I've restructured the code to make the entry dirty inside
maybe_p[mu]d_mkwrite(). It also takes into account whether the vma is
write-protected.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--  mm/migrate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 4d0be47a322a..57865fc8cfe3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2068,7 +2068,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	}

 	entry = mk_huge_pmd(new_page, vma->vm_page_prot);
-	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+	entry = maybe_pmd_mkwrite(entry, vma, false);

 	/*
 	 * Clear the old entry under pagetable lock and establish the new PTE.