From 56425306517ef28a9b480161cdb96d182172bc1d Mon Sep 17 00:00:00 2001
From: "David S. Miller"
Date: Sun, 25 Sep 2005 16:46:57 -0700
Subject: [SPARC64]: Add CONFIG_DEBUG_PAGEALLOC support.

The trick is that we do the kernel linear mapping TLB miss starting
with an instruction sequence like this:

	ba,pt	%xcc, kvmap_load
	 xor	%g2, %g4, %g5

succeeded by an instruction sequence which performs a full page
table walk starting at swapper_pg_dir.

We first take over the trap table from the firmware.  Then, using
this constant PTE generation for the linear mapping area above, we
build the kernel page tables for the linear mapping.

After this is set up, we patch that branch above into a "nop", which
will cause TLB misses to fall through to the full page table walk.

With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.

Signed-off-by: David S. Miller
---
 include/asm-sparc64/cacheflush.h | 5 +++++
 1 file changed, 5 insertions(+)
(limited to 'include/asm-sparc64/cacheflush.h')

diff --git a/include/asm-sparc64/cacheflush.h b/include/asm-sparc64/cacheflush.h
index ededd2659eab..b3f61659ba81 100644
--- a/include/asm-sparc64/cacheflush.h
+++ b/include/asm-sparc64/cacheflush.h
@@ -66,6 +66,11 @@ extern void flush_ptrace_access(struct vm_area_struct *, struct page *,
 #define flush_cache_vmap(start, end)		do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
+/* internal debugging function */
+void kernel_map_pages(struct page *page, int numpages, int enable);
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _SPARC64_CACHEFLUSH_H */
-- 
cgit v1.2.3
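
For illustration of how the declaration added above fits the scheme in the
commit message: once linear-mapping TLB misses fall through to the full page
table walk, DEBUG_PAGEALLOC only has to rewrite the linear-mapping PTEs for a
run of pages and flush the kernel TLB.  The sketch below is not the code from
this commit (the hunk shown only adds the prototype); it assumes a
hypothetical helper kernel_map_range() that walks swapper_pg_dir and rewrites
the PTEs covering a physical range, while page_to_pfn(), PAGE_KERNEL,
__pgprot(), PAGE_OFFSET and flush_tlb_kernel_range() are standard kernel
interfaces.

    /* Hedged sketch, not the commit's implementation: map or unmap a run
     * of pages in the kernel linear mapping for CONFIG_DEBUG_PAGEALLOC.
     */
    void kernel_map_pages(struct page *page, int numpages, int enable)
    {
            unsigned long phys_start = page_to_pfn(page) << PAGE_SHIFT;
            unsigned long phys_end = phys_start + (numpages * PAGE_SIZE);

            /* kernel_map_range() is assumed to rewrite the linear-mapping
             * PTEs for [phys_start, phys_end): valid PAGE_KERNEL protections
             * when enabling, an empty protection when disabling, so that
             * stray accesses fault during the full page table walk.
             */
            kernel_map_range(phys_start, phys_end,
                             enable ? PAGE_KERNEL : __pgprot(0));

            /* Drop any stale kernel TLB entries for the affected range. */
            flush_tlb_kernel_range(PAGE_OFFSET + phys_start,
                                   PAGE_OFFSET + phys_end);
    }

Because the fast-path branch has been patched to a nop, every linear-mapping
miss consults these page tables, so a PTE cleared here reliably turns an
access to a freed page into a fault.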