author		Dev Jain <dev.jain@arm.com>	2026-02-27 17:35:01 +0300
committer	Andrew Morton <akpm@linux-foundation.org>	2026-04-05 23:53:10 +0300
commit		22aa3321992eee0a39fb465e5083f5b8b5e7a82a (patch)
tree		4edf51ffab6b39bba39c9fe445fa513568943ff1
parent		1745ccbd2907db2bdaa843e4abccde4fdaccbe5d (diff)
download	linux-22aa3321992eee0a39fb465e5083f5b8b5e7a82a.tar.xz
khugepaged: remove redundant index check for pmd-folios
Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.

Proof:

(i) Both loops in hpage_collapse_scan_file() and collapse_file(), which
    iterate on the xarray, have the invariant that
    start <= folio->index < start + HPAGE_PMD_NR.

(ii) A folio is always naturally aligned in the pagecache, therefore
     folio_order(folio) == HPAGE_PMD_ORDER =>
     IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true.

(iii) thp_vma_allowable_order() -> thp_vma_suitable_order() requires that
      the virtual offsets in the VMA are aligned to the order, =>
      IS_ALIGNED(start, HPAGE_PMD_NR) == true.

Combining (i), (ii) and (iii), the claim is proven. Therefore, remove
this check. While at it, simplify the comments.

Link: https://lkml.kernel.org/r/20260227143501.1488110-1-dev.jain@arm.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
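[Editor's illustration, not part of the patch.] A minimal userspace sketch of the alignment arithmetic behind (i)-(iii): within a window [start, start + HPAGE_PMD_NR) whose start is HPAGE_PMD_NR-aligned, the only aligned index is start itself. HPAGE_PMD_NR is assumed to be 512 here (a 2 MiB PMD over 4 KiB base pages); the macro name is reused purely for readability.

/*
 * Userspace sketch only, not kernel code: demonstrates that the only
 * HPAGE_PMD_NR-aligned index inside an aligned PMD-sized window is the
 * window's start, which is what makes the folio->index == start check
 * redundant once folio_order(folio) == HPAGE_PMD_ORDER is known.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR 512UL	/* assumed: 2 MiB PMD / 4 KiB pages */

int main(void)
{
	unsigned long start = 7 * HPAGE_PMD_NR;	/* aligned, as per (iii) */
	unsigned long index;

	/* (i): folio->index is always somewhere in [start, start + HPAGE_PMD_NR) */
	for (index = start; index < start + HPAGE_PMD_NR; index++) {
		/* (ii): a PMD-order folio's index is HPAGE_PMD_NR-aligned */
		if (index % HPAGE_PMD_NR == 0)
			assert(index == start);	/* hence index == start */
	}
	printf("only aligned index in the window is start\n");
	return 0;
}

So once (i) bounds folio->index to the window, folio_order(folio) == HPAGE_PMD_ORDER already implies folio->index == start, and the explicit index comparison can be dropped.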
-rw-r--r--	mm/khugepaged.c	14
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 13b0fe50dfc5..ab97423fe837 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2023,9 +2023,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * we locked the first folio, then a THP might be there already.
 		 * This will be discovered on the first iteration.
 		 */
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
-			/* Maybe PMD-mapped */
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			goto out_unlock;
 		}
@@ -2353,15 +2351,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
 			continue;
 		}
 
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
-			/* Maybe PMD-mapped */
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			/*
-			 * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
-			 * by the caller won't touch the page cache, and so
-			 * it's safe to skip LRU and refcount checks before
-			 * returning.
+			 * PMD-sized THP implies that we can only try
+			 * retracting the PTE table.
 			 */
 			folio_put(folio);
 			break;