path: root/arch/xtensa/include/asm/pgtable.h
Age         Commit message                                    (Author; files changed, lines -removed/+added)
2022-03-25  Merge tag 'xtensa-20220325' of https://github.com/jcmvbkbc/linux-xtensa  (Linus Torvalds; 1 file, -0/+4)

    Pull Xtensa updates from Max Filippov:

     - remove dependency on the compiler's libgcc

     - allow selection of internal kernel ABI via Kconfig

     - enable compiler plugins support for gcc-12 or newer

     - various minor cleanups and fixes

    * tag 'xtensa-20220325' of https://github.com/jcmvbkbc/linux-xtensa:
      xtensa: define update_mmu_tlb function
      xtensa: fix xtensa_wsr always writing 0
      xtensa: enable plugin support
      xtensa: clean up kernel exit assembly code
      xtensa: rearrange NMI exit path
      xtensa: merge stack alignment definitions
      xtensa: fix DTC warning unit_address_format
      xtensa: fix stop_machine_cpuslocked call in patch_text
      xtensa: make secondary reset vector support conditional
      xtensa: add kernel ABI selection to Kconfig
      xtensa: don't link with libgcc
      xtensa: add helpers for division, remainder and shifts
      xtensa: add missing XCHAL_HAVE_WINDOWED check
      xtensa: use XCHAL_NUM_AREGS as pt_regs::areg size
      xtensa: rename PT_SIZE to PT_KERNEL_SIZE
      xtensa: Remove unused early_read_config_byte() et al declarations
      xtensa: use strscpy to copy strings
      net: xtensa: use strscpy to copy strings
2022-03-22  xtensa: define update_mmu_tlb function  (Max Filippov; 1 file, -0/+4)

    Before commit f9ce0be71d1f ("mm: Cleanup faultaround and
    finish_fault() codepaths") there was a call to update_mmu_cache in
    alloc_set_pte that invalidated the TLB entry caching the invalid PTE
    which caused a page fault. That commit removed the call, so the
    invalid TLB entry now survives, causing repetitive page faults on the
    CPU that took the initial fault until the entry is eventually
    evicted. This issue is spotted by the xtensa TLB sanity checker.

    Fix this issue by defining an update_mmu_tlb function that flushes
    the TLB entry for the faulting address.

    Cc: stable@vger.kernel.org # 5.12+
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
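    A minimal sketch of such a hook, assuming the usual split between a
    header declaration and a TLB-flush implementation somewhere under
    arch/xtensa/mm (exact file placement is an assumption):

        /* asm/pgtable.h: advertise an arch-specific update_mmu_tlb() */
        void update_mmu_tlb(struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep);
        #define __HAVE_ARCH_UPDATE_MMU_TLB

        /* e.g. arch/xtensa/mm/tlb.c: drop the stale TLB entry for the
           faulting address so the next access reloads the valid PTE */
        void update_mmu_tlb(struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep)
        {
                local_flush_tlb_page(vma, address);
        }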
2022-03-21  arch: Add pmd_pfn() where it is missing  (Mike Rapoport; 1 file, -0/+1)

    We need to use this function in common code, so define it for
    architectures and/or configurations that miss it. The result of
    pmd_pfn() will only be used if TRANSPARENT_HUGEPAGE is enabled, but a
    function or macro called pmd_pfn() must be defined, even on machines
    with two-level page tables.

    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
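    For a two-level architecture like xtensa, a plausible one-liner
    expressed via the existing pmd_page() accessor (the exact form used
    by the commit is an assumption):

        /* pfn of the PTE page a pmd entry points to */
        #define pmd_pfn(pmd)    (page_to_pfn(pmd_page(pmd)))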
2021-07-01  mm: define default value for FIRST_USER_ADDRESS  (Anshuman Khandual; 1 file, -1/+0)

    Currently most platforms define FIRST_USER_ADDRESS as 0UL,
    duplicating the same code all over. Instead, just define a generic
    default value (i.e. 0UL) for FIRST_USER_ADDRESS and let platforms
    override it when required. This makes the code much cleaner and
    smaller. The default FIRST_USER_ADDRESS is skipped in
    <linux/pgtable.h> when the given platform overrides its value via
    <asm/pgtable.h>.

    Link: https://lkml.kernel.org/r/1620615725-24623-1-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Guo Ren <guoren@kernel.org> [csky]
    Acked-by: Stafford Horne <shorne@gmail.com> [openrisc]
    Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> [RISC-V]
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Brian Cain <bcain@codeaurora.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Ley Foon Tan <ley.foon.tan@intel.com>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
    Cc: Stafford Horne <shorne@gmail.com>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Chris Zankel <chris@zankel.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
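    The mechanism, sketched: a guarded default in the generic header that
    an architecture's <asm/pgtable.h> pre-empts simply by defining the
    macro first:

        /* include/linux/pgtable.h: generic default, overridable per arch */
        #ifndef FIRST_USER_ADDRESS
        #define FIRST_USER_ADDRESS      0UL
        #endif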
2021-04-05  xtensa: fix pgprot_noncached assumptions  (Max Filippov; 1 file, -1/+3)

    pgprot_noncached assumes that the cache-bypass attribute is
    represented as zero. This may not always be true. Fix the
    pgprot_noncached definition by adding _PAGE_CA_BYPASS to the result.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
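    A sketch of the fixed macro, assuming _PAGE_CA_MASK masks the xtensa
    cache-attribute bits (the exact shape of the hunk is an assumption):

        #define pgprot_noncached(prot) \
                (__pgprot((pgprot_val(prot) & ~_PAGE_CA_MASK) | \
                          _PAGE_CA_BYPASS))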
2020-11-16  xtensa: fix TLBTEMP area placement  (Max Filippov; 1 file, -1/+1)

    The fast_second_level_miss handler for the TLBTEMP area assumes that
    the page table directory entry for the TLBTEMP address range is 0.
    For that to be true, the TLBTEMP area must be aligned to a 4MB
    boundary and must not share its 4MB region with anything that may use
    a page table. This is not currently the case: TLBTEMP shares space
    with the vmalloc space, which results in the following kinds of
    runtime errors when fast_second_level_miss loads the page table
    directory entry for the vmalloc space instead of fixing up the
    TLBTEMP area:

        Unable to handle kernel paging request at virtual address c7ff0e00
         pc = d0009275, ra = 90009478
        Oops: sig: 9 [#1] PREEMPT
        CPU: 1 PID: 61 Comm: kworker/u9:2 Not tainted 5.10.0-rc3-next-20201110-00007-g1fe4962fa983-dirty #58
        Workqueue: xprtiod xs_stream_data_receive_workfn
        a00: 90009478 d11e1dc0 c7ff0e00 00000020 c7ff0000 00000001 7f8b8107 00000000
        a08: 900c5992 d11e1d90 d0cc88b8 5506e97c 00000000 5506e97c d06c8074 d11e1d90
        pc: d0009275, ps: 00060310, depc: 00000014, excvaddr: c7ff0e00
        lbeg: d0009275, lend: d0009287 lcount: 00000003, sar: 00000010
        Call Trace:
          xs_stream_data_receive_workfn+0x43c/0x770
          process_one_work+0x1a1/0x324
          worker_thread+0x1cc/0x3c0
          kthread+0x10d/0x124
          ret_from_kernel_thread+0xc/0x18

    Cc: stable@vger.kernel.org
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2020-06-09  mm: consolidate pte_index() and pte_offset_*() definitions  (Mike Rapoport; 1 file, -17/+1)

    All architectures define pte_index() as

        (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)

    and all architectures define pte_offset_kernel() as an entry in the
    array of PTEs indexed by the pte_index(). For most architectures the
    pte_offset_kernel() implementation relies on the availability of
    pmd_page_vaddr(), which converts a PMD entry value to the virtual
    address of the page containing the PTEs array.

    Let's move the x86 definitions of the PTE accessors to the generic
    place in <linux/pgtable.h> and then simply drop the respective
    definitions from the other architectures. The architectures that
    didn't provide pmd_page_vaddr() are updated to have it defined.

    The generic implementation of pte_offset_kernel() can be overridden
    by an architecture, and alpha makes use of this because it has
    special ordering requirements for its version of pte_offset_kernel().

    [rppt@linux.ibm.com: v2]
      Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
    [rppt@linux.ibm.com: update]
      Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
    [rppt@linux.ibm.com: update]
      Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
    [akpm@linux-foundation.org: fix x86 warning]
    [sfr@canb.auug.org.au: fix powerpc build]
      Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Cain <bcain@codeaurora.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Ungerer <gerg@linux-m68k.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Ley Foon Tan <ley.foon.tan@intel.com>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Nick Hu <nickhu@andestech.com>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Stafford Horne <shorne@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vincent Chen <deanbo422@gmail.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
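    The consolidated accessors, roughly as the text describes them
    (guarded so an architecture such as alpha can still override):

        /* include/linux/pgtable.h: shared PTE accessors */
        #ifndef pte_index
        #define pte_index(address) \
                (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
        #endif

        #ifndef pte_offset_kernel
        #define pte_offset_kernel(pmd, address) \
                ((pte_t *)pmd_page_vaddr(*(pmd)) + pte_index(address))
        #endif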
2020-06-09  mm: introduce include/linux/pgtable.h  (Mike Rapoport; 1 file, -2/+0)

    The include/linux/pgtable.h is going to be the home of generic page
    table manipulation functions. Start with moving asm-generic/pgtable.h
    to include/linux/pgtable.h and make the latter include asm/pgtable.h.

    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Cain <bcain@codeaurora.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Ungerer <gerg@linux-m68k.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Ley Foon Tan <ley.foon.tan@intel.com>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Nick Hu <nickhu@andestech.com>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Stafford Horne <shorne@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vincent Chen <deanbo422@gmail.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-11  mm/special: create generic fallbacks for pte_special() and pte_mkspecial()  (Anshuman Khandual; 1 file, -3/+0)

    Currently there are many platforms that don't enable
    ARCH_HAS_PTE_SPECIAL but are required to define quite similar
    fallback stubs for special page table entry helpers such as
    pte_special() and pte_mkspecial(), as these get built into generic MM
    without a config check. This creates two generic fallback stub
    definitions for these helpers, eliminating much code duplication.

    The mips platform has a special case where pte_special() and
    pte_mkspecial() visibility is wider than what ARCH_HAS_PTE_SPECIAL
    enablement requires. This restricts that symbol visibility in order
    to avoid the redefinitions, and subsequent build failure, that the
    new generic stubs would otherwise expose.

    The arm platform's set_pte_at() definition needs to be moved into a C
    file just to prevent a build failure.

    [anshuman.khandual@arm.com: use defined(CONFIG_ARCH_HAS_PTE_SPECIAL) in mips per Thomas]
      Link: http://lkml.kernel.org/r/1583851924-21603-1-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Guo Ren <guoren@kernel.org> [csky]
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Stafford Horne <shorne@gmail.com> [openrisc]
    Acked-by: Helge Deller <deller@gmx.de> [parisc]
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Brian Cain <bcain@codeaurora.org>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Sam Creasey <sammy@sammy.net>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Paul Burton <paulburton@kernel.org>
    Cc: Nick Hu <nickhu@andestech.com>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Vincent Chen <deanbo422@gmail.com>
    Cc: Ley Foon Tan <ley.foon.tan@intel.com>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Link: http://lkml.kernel.org/r/1583802551-15406-1-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
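    The fallback stubs, sketched as described:

        #ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
        /* without arch support, no PTE is ever "special" */
        static inline int pte_special(pte_t pte)
        {
                return 0;
        }

        static inline pte_t pte_mkspecial(pte_t pte)
        {
                return pte;
        }
        #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */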
2019-11-26  xtensa: get rid of __ARCH_USE_5LEVEL_HACK  (Mike Rapoport; 1 file, -1/+0)

    xtensa has 2-level page tables and already uses pgtable-nopmd for
    page table folding. Add walks of the p4d level where appropriate and
    drop the usage of __ARCH_USE_5LEVEL_HACK.

    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Message-Id: <1572964400-16542-3-git-send-email-rppt@kernel.org>
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    [fix up arch/xtensa/include/asm/fixmap.h and arch/xtensa/mm/tlb.c]
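    What "walking the p4d level" looks like in practice, sketched as a
    full traversal on a configuration where the extra levels are folded:

        pgd_t *pgd = pgd_offset(mm, addr);
        p4d_t *p4d = p4d_offset(pgd, addr);     /* no-op when p4d is folded */
        pud_t *pud = pud_offset(p4d, addr);     /* no-op when pud is folded */
        pmd_t *pmd = pmd_offset(pud, addr);     /* folded on 2-level xtensa */
        pte_t *pte = pte_offset_kernel(pmd, addr);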
2019-11-26  xtensa: mm: fix PMD folding implementation  (Mike Rapoport; 1 file, -3/+0)

    There was a definition of pmd_offset() in
    arch/xtensa/include/asm/pgtable.h that shadowed the generic
    implementation defined in include/asm-generic/pgtable-nopmd.h. As a
    result, xtensa took shortcuts in page table traversal in several
    places instead of doing proper level unfolding.

    Remove the local override for pmd_offset() and add page table
    unfolding where necessary.

    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Message-Id: <1572964400-16542-2-git-send-email-rppt@kernel.org>
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
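    For reference, the generic folded pmd_offset() from pgtable-nopmd.h
    that the local definition was shadowing (reproduced as a sketch):

        static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
        {
                /* with the pmd folded, the pud entry *is* the pmd */
                return (pmd_t *)pud;
        }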
2019-09-25  mm: consolidate pgtable_cache_init() and pgd_cache_init()  (Mike Rapoport; 1 file, -1/+0)

    Both pgtable_cache_init() and pgd_cache_init() are used to initialize
    the kmem cache for page table allocations on several architectures
    that do not use PAGE_SIZE tables for one or more levels of the page
    table hierarchy.

    Most architectures do not implement these functions and use the
    __weak default NOP implementation of pgd_cache_init(). Since there is
    no such default for pgtable_cache_init(), its empty stub is
    duplicated among most architectures.

    Rename the definitions of pgd_cache_init() to pgtable_cache_init()
    and drop the empty stubs of pgtable_cache_init().

    Link: http://lkml.kernel.org/r/1566457046-22637-1-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Acked-by: Will Deacon <will@kernel.org> [arm64]
    Acked-by: Thomas Gleixner <tglx@linutronix.de> [x86]
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Matthew Wilcox <willy@infradead.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner; 1 file, -4/+1)

    Based on 2 normalized pattern(s):

      this program is free software you can redistribute it and or modify
      it under the terms of the gnu general public license version 2 as
      published by the free software foundation

      this program is free software you can redistribute it and or modify
      it under the terms of the gnu general public license version 2 as
      published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Enrico Weigelt <info@metux.net>
    Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-12  xtensa: platform-specific handling of coherent memory  (Max Filippov; 1 file, -0/+8)

    Memory layout is not fixed for noMMU xtensa configurations. Platforms
    that need to use coherent DMA should implement platform_vaddr_*
    helpers that check the address type (cached/uncached) and convert
    addresses between these types.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
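    A sketch of the helper surface this implies (names follow the
    platform_vaddr_* pattern in the text; the exact signatures are
    assumptions):

        /* hypothetical prototypes a platform would provide */
        bool platform_vaddr_cached(const void *p);
        bool platform_vaddr_uncached(const void *p);
        void *platform_vaddr_to_cached(void *p);
        void *platform_vaddr_to_uncached(void *p);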
2017-12-17  xtensa: add support for KASAN  (Max Filippov; 1 file, -1/+2)

    Cover kernel addresses above 0x90000000 by the shadow map. Enable
    HAVE_ARCH_KASAN when MMU is enabled.

    Provide kasan_early_init that fills the shadow map with writable
    copies of kasan_zero_page; call it right after mmu initialization in
    setup_arch. Provide kasan_init that allocates proper shadow map pages
    from the memblock and puts these pages into the shadow map for
    addresses from the VMALLOC area to the end of KSEG; call it right
    after memblock initialization.

    Don't use KASAN for the boot code, MMU and KASAN initialization, or
    the page fault handler. Make the kernel stack size 4 times larger
    when KASAN is enabled to avoid stack overflows.

    GCC 7.3, 8 or newer is required to build the xtensa kernel with
    KASAN.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-03-09  arch, mm: convert all architectures to use 5level-fixup.h  (Kirill A. Shutemov; 1 file, -0/+1)

    If an architecture uses 4level-fixup.h, we don't need to do anything
    as it includes 5level-fixup.h.

    If an architecture uses pgtable-nop*d.h, define __ARCH_USE_5LEVEL_HACK
    before inclusion of the header. That makes the asm-generic code use
    5level-fixup.h.

    If an architecture has 4-level paging or folds levels on its own,
    include 5level-fixup.h directly.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-24  xtensa: move kernel mapping addresses into kmem_layout.h  (Max Filippov; 1 file, -3/+4)

    Create a header dedicated to memory layout definitions. Include it
    from places where these definitions are needed. Express the vmalloc
    area address, VIRTUAL_MEMORY_ADDRESS and KERNELOFFSET through the
    KSEG address.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2015-11-02  xtensa: nommu: fix USER_RING definition  (Max Filippov; 1 file, -0/+4)

    There's no kernel/user separation in noMMU and PS.RING may not exist.
    Even if it exists, it should not be used, because TLB entries are not
    set up for the user ring on user pages.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
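    A sketch of the conditional definition this implies:

        #ifdef CONFIG_MMU
        #define USER_RING       1       /* user mappings use ring 1 */
        #else
        #define USER_RING       0       /* no kernel/user split without an MMU */
        #endif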
2015-02-12  mm: make FIRST_USER_ADDRESS unsigned long on all archs  (Kirill A. Shutemov; 1 file, -1/+1)

    LKP has triggered a compiler warning after my recent patch "mm:
    account pmd page tables to the process":

      mm/mmap.c: In function 'exit_mmap':
      >> mm/mmap.c:2857:2: warning: right shift count >= width of type [enabled by default]

    The code:

      > 2857        WARN_ON(mm_nr_pmds(mm) >
      > 2858                        round_up(FIRST_USER_ADDRESS, PUD_SIZE) >> PUD_SHIFT);

    In this, on tile, we have FIRST_USER_ADDRESS defined as 0, and
    round_up() keeps the type of its argument, so an int ends up being
    right-shifted by PUD_SHIFT, a count that can meet or exceed the width
    of int.

    I think the best way to fix it is to define FIRST_USER_ADDRESS as
    unsigned long, on every arch for consistency.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Reported-by: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-02-11  xtensa: drop _PAGE_FILE and pte_file()-related helpers  (Kirill A. Shutemov; 1 file, -10/+0)

    We've replaced the remap_file_pages(2) implementation with an
    emulation. Nobody creates non-linear mappings anymore.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Acked-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-21  xtensa: nommu: provide _PAGE_CHG_MASK definition  (Max Filippov; 1 file, -0/+1)

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-10-06  xtensa: implement pgprot_noncached  (Max Filippov; 1 file, -0/+2)

    The default pgprot_noncached doesn't do anything. This leads to
    issues when drivers rely on it to disable caching in userspace
    mappings. Implement pgprot_noncached properly so that caching of
    userspace mappings can be controlled.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2014-08-14  xtensa: fix TLBTEMP_BASE_2 region handling in fast_second_level_miss  (Max Filippov; 1 file, -1/+6)

    The current definition of TLBTEMP_BASE_2 is always 32K above
    TLBTEMP_BASE_1, whereas the fast_second_level_miss handler for the
    TLBTEMP region analyzes virtual address bit (PAGE_SHIFT +
    DCACHE_ALIAS_ORDER) to determine the TLBTEMP region where the fault
    happened. The size of the TLBTEMP region is also checked incorrectly:
    not as 64K, but as twice the data cache way size (which may well be
    less than the instruction cache way size).

    Fix TLBTEMP_BASE_2 to be TLBTEMP_BASE_1 + the data cache way size.
    Provide TLBTEMP_SIZE that is the greater of twice the data cache way
    size or the instruction cache way size, and use it to determine
    whether the second level TLB miss occurred in the TLBTEMP region.

    Practical occurrence of page faults in the TLBTEMP area is extremely
    rare; this code can be tested by deleting all w[di]tlb instructions
    in the tlbtemp_mapping region.

    Cc: stable@vger.kernel.org
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
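    The resulting definitions, sketched from the description (the cache
    way sizes come from the xtensa core configuration headers):

        #define TLBTEMP_BASE_2  (TLBTEMP_BASE_1 + DCACHE_WAY_SIZE)

        #if 2 * DCACHE_WAY_SIZE > ICACHE_WAY_SIZE
        #define TLBTEMP_SIZE    (2 * DCACHE_WAY_SIZE)
        #else
        #define TLBTEMP_SIZE    ICACHE_WAY_SIZE
        #endif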
2014-04-06  xtensa: add HIGHMEM support  (Max Filippov; 1 file, -0/+4)

    Introduce a fixmap area just below the vmalloc region and use it for
    atomic mapping of high memory pages.

    High memory on cores with cache aliasing is not supported and is
    still to be implemented. Fail the build for such configurations for
    now.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2013-11-15  xtensa: use buddy allocator for PTE table  (Kirill A. Shutemov; 1 file, -2/+1)

    At the moment xtensa uses the slab allocator for PTE tables. That
    doesn't work with the split page table lock enabled: slab uses
    page->slab_cache and page->first_page for its pages, and these fields
    share storage with page->ptl.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Chris Zankel <chris@zankel.net>
    Acked-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-11  Merge tag 'xtensa-next-20130710' of git://github.com/czankel/xtensa-linux  (Linus Torvalds; 1 file, -57/+83)

    Pull Xtensa updates from Chris Zankel.

    * tag 'xtensa-next-20130710' of git://github.com/czankel/xtensa-linux: (22 commits)
      xtensa: remove the second argument of __bio_kmap_atomic()
      xtensa: add static function tracer support
      xtensa: Flat DeviceTree copy not future-safe
      xtensa: check TLB sanity on return to userspace
      xtensa: adjust boot parameters address when INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is selected
      xtensa: bootparams: fix typo
      xtensa: tell git to ignore generated .dtb files
      xtensa: ccount based sched_clock
      xtensa: ccount based clockevent implementation
      xtensa: consolidate ccount access routines
      xtensa: cleanup ccount frequency tracking
      xtensa: timex.h: remove unused symbols
      xtensa: tell git to ignore copied zlib source files
      xtensa: fix section mismatch in pcibios_fixup_bus
      xtensa: ISS: fix section mismatch in iss_net_setup
      arch: xtensa: include: asm: compiling issue, need cmpxchg64() defined.
      xtensa: xtfpga: fix section mismatch
      xtensa: remove unused platform_init_irq()
      xtensa: tell git to ignore generated files
      xtensa: flush TLB entries for pages of non-current mm correctly
      ...
2013-06-29  consolidate io_remap_pfn_range definitions  (Al Viro; 1 file, -8/+0)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
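    The per-arch definitions were near-identical, so the consolidation
    presumably leaves a single generic fallback along these lines (guard
    macro and header placement are assumptions):

        /* asm-generic: default unless an architecture overrides it */
        #ifndef io_remap_pfn_range
        #define io_remap_pfn_range remap_pfn_range
        #endif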
2013-05-20  xtensa: fix TLB multihit exceptions  (Chris Zankel; 1 file, -57/+83)

    - set _PAGE_USER in pte_clear to avoid TLB multihit exceptions (see
      the following threads for more details):
        http://lists.linux-xtensa.org/pipermail/linux-xtensa/Week-of-Mon-20130401/
        http://lists.linux-xtensa.org/pipermail/linux-xtensa/Week-of-Mon-20130408/
    - improve documentation of the PTE layout
    - fix PTE mapping for present and 'prot_none' pages for T1050 hw and
      earlier
    - fix pte_file offset and size
    - add a check for the correct number of bits for the swap type

    CC: piet.delaney@gmail.com
    CC: jcmvbkbc@gmail.com
    Signed-off-by: Chris Zankel <chris@zankel.net>
2013-02-24  xtensa: avoid mmap cache aliasing  (Max Filippov; 1 file, -0/+4)

    Provide an arch_get_unmapped_area function that aligns shared memory
    mapping addresses to the bigger of the page size or the cache way
    size. That guarantees that corresponding virtual addresses of shared
    mappings are cached by the same cache sets.

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Chris Zankel <chris@zankel.net>
2012-12-19  xtensa: clean up files to make them code-style compliant  (Chris Zankel; 1 file, -4/+4)

    Remove leading and trailing spaces, trim trailing lines, and wrap
    lines that are longer than 80 characters.

    Signed-off-by: Chris Zankel <chris@zankel.net>
2010-10-27  mm: remove pte_*map_nested()  (Peter Zijlstra; 1 file, -3/+0)

    Since we no longer need to provide KM_type, the whole
    pte_*map_nested() API is now redundant; remove it.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Chris Metcalf <cmetcalf@tilera.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: David Miller <davem@davemloft.net>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-02-20  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself  (Russell King; 1 file, -1/+1)

    On VIVT ARM, when we have multiple shared mappings of the same file
    in the same MM, we need to ensure that we have coherency across all
    copies. We do this via make_coherent() by making the pages
    uncacheable.

    This used to work fine, until we allowed highmem with highpte - we
    now have a page table which is mapped as required, and is not
    available for modification via update_mmu_cache().

    Ralf Baechle suggested getting rid of the PTE value passed to
    update_mmu_cache():

      On MIPS update_mmu_cache() calls __update_tlb() which walks
      pagetables to construct a pointer to the pte again. Passing a
      pte_t * is much more elegant. Maybe we might even replace the pte
      argument with the pte_t?

    Ben Herrenschmidt would also like the pte pointer for PowerPC:

      Passing the ptep in there is exactly what I want. I want that
      -instead- of the PTE value, because I have issue on some ppc cases,
      for I$/D$ coherency, where set_pte_at() may decide to mask out the
      _PAGE_EXEC.

    So, pass the mapped page table pointer into update_mmu_cache(), and
    remove the PTE value, updating all implementations and call sites to
    suit.

    Includes a fix from Stephen Rothwell:

      sparc: fix fallout from update_mmu_cache API change

      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
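    The signature change, sketched:

        /* before: implementations received the PTE by value */
        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte);

        /* after: they receive the mapped page-table slot itself */
        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t *ptep);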
2009-04-03  xtensa: nommu support  (Johannes Weiner; 1 file, -5/+8)

    Add support for !CONFIG_MMU setups.

    Signed-off-by: Johannes Weiner <jw@emlix.com>
    Signed-off-by: Chris Zankel <chris@zankel.net>
2008-11-06  xtensa: move headers files to arch/xtensa/include  (Chris Zankel; 1 file, -0/+416)

    Move all header files for xtensa to arch/xtensa/include, and platform
    and variant header files to the appropriate arch/xtensa/platforms/
    and arch/xtensa/variants/ directories.

    Moving the files also gets rid of all uses of symlinks in the
    Makefile.

    This has already been completed for the majority of the
    architectures, and xtensa is one of the six remaining.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Chris Zankel <chris@zankel.net>