author      Oscar Salvador <osalvador@suse.de>              2021-04-30 08:57:22 +0300
committer   Linus Torvalds <torvalds@linux-foundation.org>  2021-04-30 21:20:38 +0300
commit      faf1c0008a33d4ac6336f63a358641cf86926fc0
tree        ce5aeea0c8796ef10d3d98a03e6058cb02ec9bad
parent      8d400913c231bd1da74067255816453f96cd35b0
download    linux-faf1c0008a33d4ac6336f63a358641cf86926fc0.tar.xz
x86/vmemmap: optimize for consecutive sections in partial populated PMDs
We can optimize in the case we are adding consecutive sections, so no
memset(PAGE_UNUSED) is needed.

For that purpose, let us keep track of where the unused range of the
previously added memory range begins, so we can compare it with the start
of the range to be added. If they are equal, we know the sections are
being added consecutively.

Let us introduce 'unused_pmd_start', which always holds the beginning of
the unused memory range.

In the case a section does not contiguously follow the previous one, we
know we can memset [unused_pmd_start, PMD_BOUNDARY) with PAGE_UNUSED.

This patch is based on a similar patch by David Hildenbrand:
https://lore.kernel.org/linux-mm/20200722094558.9828-10-david@redhat.com/

Link: https://lkml.kernel.org/r/20210309214050.4674-5-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
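The tracking logic is easiest to see in miniature. Below is a user-space C
sketch of the idea, not the kernel patch itself: it models one PMD's worth
of vmemmap as a flat buffer and uses offsets instead of kernel virtual
addresses. The helper names are modeled on the kernel's, but the buffer
model and the main() driver are assumptions made purely for illustration.

    /*
     * User-space sketch of the 'unused_pmd_start' optimization described
     * above. One 2 MiB PMD's worth of vmemmap is simulated as pmd_range[],
     * indexed by offset rather than by virtual address.
     */
    #include <stdio.h>
    #include <string.h>

    #define PMD_SIZE     (2UL * 1024 * 1024)  /* range covered by one PMD */
    #define PAGE_UNUSED  0xFD                 /* poison byte for unused memmap */

    static unsigned char pmd_range[PMD_SIZE]; /* stands in for the vmemmap */
    static unsigned long unused_pmd_start;    /* start of the pending unused tail */

    /* Poison [unused_pmd_start, PMD boundary) if a tail is still pending. */
    static void vmemmap_flush_unused_pmd(void)
    {
            if (!unused_pmd_start)
                    return;
            printf("poisoning tail [%#lx, %#lx)\n", unused_pmd_start, PMD_SIZE);
            memset(pmd_range + unused_pmd_start, PAGE_UNUSED,
                   PMD_SIZE - unused_pmd_start);
            unused_pmd_start = 0;
    }

    /* Populate [start, end) of the PMD range for a newly added section. */
    static void vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
    {
            if (unused_pmd_start == start) {
                    /*
                     * The new range begins exactly where the previous one
                     * left off: the sections are consecutive, so the
                     * memset(PAGE_UNUSED) is skipped entirely.
                     */
                    unused_pmd_start = 0;
            } else {
                    /*
                     * Non-consecutive: the leftover tail of the previous
                     * range must be poisoned before moving on.
                     */
                    vmemmap_flush_unused_pmd();
            }
            memset(pmd_range + start, 0, end - start);      /* mark range used */
            unused_pmd_start = (end == PMD_SIZE) ? 0 : end; /* remember new tail */
    }

    int main(void)
    {
            /* Two consecutive sections: the second skips the memset. */
            vmemmap_use_sub_pmd(0x000000, 0x040000);
            vmemmap_use_sub_pmd(0x040000, 0x080000);
            /* A gap: the tail [0x080000, PMD_SIZE) is poisoned first. */
            vmemmap_use_sub_pmd(0x100000, 0x140000);
            return 0;
    }

Running this prints a single "poisoning tail" line, for the third call
only: the consecutive second section never triggers the memset, which is
exactly the saving this patch is after.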