path: root/scripts/generate_rust_analyzer.py
authorVasily Gorbik <gor@linux.ibm.com>2024-11-29 03:07:01 +0300
committerGreg Kroah-Hartman <gregkh@linuxfoundation.org>2025-02-08 11:58:18 +0300
commit5f72bf80e74bbbe8c7e4732375b070eabf5f275a (patch)
tree4609ff022bf195547df9bb024f6a471e6b89c196 /scripts/generate_rust_analyzer.py
parent1cf21779596859e6bbb291a7d97fbb079f2b8ab1 (diff)
downloadlinux-5f72bf80e74bbbe8c7e4732375b070eabf5f275a.tar.xz
Revert "s390/mm: Allow large pages for KASAN shadow mapping"
commit cc00550b2ae7ab1c7c56669fc004a13d880aaf0a upstream.

This reverts commit ff123eb7741638d55abf82fac090bb3a543c1e74.

Allowing large pages for KASAN shadow mappings isn't inherently wrong,
but adding POPULATE_KASAN_MAP_SHADOW to large_allowed() exposes an
issue in can_large_pud() and can_large_pmd().

Since commit d8073dc6bc04 ("s390/mm: Allow large pages only for aligned
physical addresses"), both can_large_pud() and can_large_pmd() call
_pa() to check if large page physical addresses are aligned. However,
_pa() has a side effect: it allocates memory in
POPULATE_KASAN_MAP_SHADOW mode. This results in massive memory leaks.

The proper fix would be to address both large_allowed() and _pa()'s
side effects, but for now, revert this change to avoid the leaks.

Fixes: ff123eb77416 ("s390/mm: Allow large pages for KASAN shadow mapping")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
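To make the leak mechanism concrete, the following is a minimal, self-contained sketch, not the actual s390 boot code: names such as to_phys() and can_large_page() are simplified stand-ins for _pa() and can_large_pud()/can_large_pmd(), and the sizes and modes are illustrative. It shows how a translation helper that allocates memory as a side effect in shadow mode leaks one allocation per call when it is used purely as an alignment predicate and its result is discarded.

/*
 * Hypothetical illustration of the leak pattern described in the commit
 * message: an address-translation helper that allocates backing memory as
 * a side effect in one mode, and a "can we use a large page?" predicate
 * that calls it only to test alignment, throwing the allocation away.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>

#define PMD_SIZE (1UL << 20)            /* illustrative large-page size */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

enum populate_mode {
	POPULATE_DIRECT,                /* identity mapping, no allocation */
	POPULATE_KASAN_MAP_SHADOW,      /* shadow memory must be allocated */
};

/* Mimics the problematic helper: in shadow mode it allocates memory. */
static uintptr_t to_phys(uintptr_t addr, size_t size, enum populate_mode mode)
{
	if (mode == POPULATE_KASAN_MAP_SHADOW) {
		void *shadow = aligned_alloc(PMD_SIZE, size);
		return (uintptr_t)shadow;  /* side effect: new allocation */
	}
	return addr;                       /* direct mode: pure translation */
}

/*
 * The predicate only wants a yes/no answer, but every call in shadow mode
 * allocates a block whose address is immediately discarded -> leak.
 */
static bool can_large_page(uintptr_t addr, uintptr_t end,
			   enum populate_mode mode)
{
	size_t size = end - addr;

	return IS_ALIGNED(addr, PMD_SIZE) && size >= PMD_SIZE &&
	       IS_ALIGNED(to_phys(addr, size, mode), PMD_SIZE);
}

int main(void)
{
	/* Probing many ranges leaks one allocation per probe in shadow mode. */
	for (int i = 0; i < 8; i++) {
		uintptr_t addr = (uintptr_t)i * PMD_SIZE;
		bool ok = can_large_page(addr, addr + PMD_SIZE,
					 POPULATE_KASAN_MAP_SHADOW);
		printf("range %d: large page %s (allocation leaked)\n",
		       i, ok ? "possible" : "not possible");
	}
	return 0;
}

This is why the commit message notes that a proper fix would have to address the side effect inside _pa() (or stop large_allowed() from admitting POPULATE_KASAN_MAP_SHADOW) rather than simply allowing large pages for the shadow mapping.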
Diffstat (limited to 'scripts/generate_rust_analyzer.py')
0 files changed, 0 insertions, 0 deletions