author		Dave Hansen <dave@linux.vnet.ibm.com>	2013-01-23 01:24:33 +0400
committer	H. Peter Anvin <hpa@linux.intel.com>	2013-01-26 04:33:23 +0400
commit		d765653445129b7c476758040e3079480775f80a (patch)
tree		b79e3e051de83e6326ad8d3bc08ad3c1c0eb1544 /arch/x86/kernel/kvm.c
parent		f3c4fbb68e93b10c781c0cc462a9d80770244da6 (diff)
download	linux-d765653445129b7c476758040e3079480775f80a.tar.xz
x86, mm: Create slow_virt_to_phys()
This is necessary because __pa() does not work on some kinds of
memory, such as vmalloc() or the alloc_remap() areas on 32-bit
NUMA systems. The vmalloc() code has some functions that do
conversions _like_ this (e.g. vmalloc_to_page()), but they only
work on 4k pages. We would potentially need to handle all the
page sizes used for the kernel linear mapping (4k, 2M, 1G).
In practice, on 32-bit NUMA systems, the percpu areas get stuck
in the alloc_remap() area. Any __pa() call on them will break
and basically return garbage.
This patch introduces a new function slow_virt_to_phys(), which
walks the kernel page tables on x86 and should do precisely
the same logical thing as __pa(), but actually work on a wider
range of memory. It should work on the normal linear mapping,
vmalloc(), kmap(), etc...
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20130122212433.4D1FCA62@kernel.stglabs.ibm.com
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'arch/x86/kernel/kvm.c')
0 files changed, 0 insertions, 0 deletions