author | Naoya Horiguchi <naoya.horiguchi@nec.com> | 2022-07-14 07:24:14 +0300
committer | Andrew Morton <akpm@linux-foundation.org> | 2022-08-09 04:06:43 +0300
commit | 3a194f3f8ad01bce00bd7174aaba1563bcc827eb (patch)
tree | fceab93f83fadb0d5567805024fde0589c649c26 /mm/memory-failure.c
parent | c0531714d6e3fd720b7dacc2de2d0503a995bcdc (diff)
download | linux-3a194f3f8ad01bce00bd7174aaba1563bcc827eb.tar.xz
mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry
follow_pud_mask() does not currently support non-present pud entries. As far as
I tested on an x86_64 server, follow_pud_mask() still simply returns
no_page_table() for a non-present pud entry due to pud_bad(), so no severe
user-visible effect should happen. But generally we should call
follow_huge_pud() for a non-present pud entry of a 1GB hugetlb page.
Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
The changes are similar to the previous work on pmd entries in commit
e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()") and
commit cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present
hugepage").
Link: https://lkml.kernel.org/r/20220714042420.1847125-3-naoya.horiguchi@linux.dev
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Liu Shixin <liushixin2@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/memory-failure.c')
0 files changed, 0 insertions, 0 deletions