path: root/arch/riscv/mm
Age         Commit message                                                  [Author, Files, Lines]
2021-02-19  RISC-V: Implement ASID allocator  [Anup Patel, 1 file, -4/+261]
Currently, we do a local TLB flush on every MM switch. This is very harsh on performance because we are forcing page table walks after every MM switch. This patch implements an ASID allocator for assigning an ASID to an MM context.

The number of ASIDs is limited in HW, so we create a logical entity named CONTEXTID to assign to an MM context. The lower bits of CONTEXTID are the ASID and the upper bits are a VERSION number. The number of usable ASID bits supported by HW is detected at boot time by writing 1s to the ASID bits in the SATP CSR.

We allocate a new CONTEXTID on the first MM switch of an MM context, where the ASID is allocated from an ASID bitmap and the VERSION is provided by an atomic counter. When allocating a new CONTEXTID, if we run out of available ASIDs, then:
1. We flush the ASID bitmap
2. Increment the current VERSION atomic counter
3. Re-allocate an ASID from the ASID bitmap
4. Flush the TLB on all CPUs
5. Try CONTEXTID re-assignment on all CPUs

Please note that we don't use ASID #0 because it is used at boot time by all CPUs for the initial MM context. Also, a newly created context is always assigned CONTEXTID #0 (i.e. VERSION #0 and ASID #0), which is an invalid context in our implementation.

Using the above approach, we have virtually infinite CONTEXTIDs on top of a limited number of HW ASIDs. This approach is inspired by the ASID allocator used for Linux ARM/ARM64, adapted for RISC-V. Overall, this ASID allocator helps us reduce the rate of local TLB flushes on every CPU, thereby increasing performance.

This patch was tested on the QEMU virt machine, Spike, and the SiFive Unleashed board. On the QEMU virt machine, we see some (approx. 3-5%) performance improvement with the SW-emulated TLBs provided by QEMU. Unfortunately, the ASID bits of the SATP CSR are not implemented on Spike and the SiFive Unleashed board, so we don't see any change in performance there. On real HW with all ASID bits implemented, the performance gains will be larger due to improved sharing of the TLB among different processes.

Signed-off-by: Anup Patel <anup.patel@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
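For illustration only, here is a minimal C sketch of the CONTEXTID idea described above (packing VERSION and ASID, and rolling the VERSION over when the ASID bitmap is exhausted). The names asid_bits, context_version and context_asid_map are made up for this sketch and are not the patch's actual symbols:

    #include <linux/atomic.h>
    #include <linux/bitmap.h>

    static unsigned long asid_bits;           /* usable ASID bits, probed from SATP at boot */
    static atomic_long_t context_version;     /* upper bits of a CONTEXTID */
    static unsigned long *context_asid_map;   /* one bit per hardware ASID */

    /* CONTEXTID = VERSION in the upper bits, ASID in the lower bits. */
    static unsigned long make_contextid(unsigned long version, unsigned long asid)
    {
            return (version << asid_bits) | asid;
    }

    static unsigned long alloc_contextid(void)
    {
            unsigned long num_asids = 1UL << asid_bits;
            unsigned long version = atomic_long_read(&context_version);
            unsigned long asid;

            /* ASID #0 is reserved for the boot-time init context. */
            asid = find_next_zero_bit(context_asid_map, num_asids, 1);
            if (asid >= num_asids) {
                    /* Out of ASIDs: bump VERSION, clear the bitmap, flush TLBs. */
                    version = atomic_long_inc_return(&context_version);
                    bitmap_zero(context_asid_map, num_asids);
                    /* flush_tlb_all() and per-CPU CONTEXTID re-assignment go here */
                    asid = find_next_zero_bit(context_asid_map, num_asids, 1);
            }
            __set_bit(asid, context_asid_map);

            return make_contextid(version, asid);
    }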
2021-02-19  riscv/kasan: add KASAN_VMALLOC support  [Nylon Chen, 1 file, -1/+56]
This follows the x86/s390 approach: instead of mapping the early shadow page to cover the VMALLOC space, prepopulate the top-level page table for the range that would otherwise be empty; the lower levels are filled dynamically upon memory allocation while booting. Signed-off-by: Nylon Chen <nylon7@andestech.com> Signed-off-by: Nick Hu <nickhu@andestech.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
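A rough sketch of the prepopulation idea, assuming RISC-V helpers such as kasan_mem_to_shadow(), pgd_offset_k(), pfn_pgd() and the PAGE_TABLE protection value; this is illustrative, not the exact code added by the patch:

    unsigned long addr;
    unsigned long shadow_start = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
    unsigned long shadow_end = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);

    /*
     * Point every empty top-level entry of the vmalloc shadow range at a
     * freshly allocated, zeroed next-level table; the lower levels are
     * only populated later, when shadow memory is actually needed.
     */
    for (addr = shadow_start; addr < shadow_end; addr += PGDIR_SIZE) {
            pgd_t *pgd = pgd_offset_k(addr);

            if (pgd_none(*pgd)) {
                    void *table = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

                    set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(table)), PAGE_TABLE));
            }
    }
    local_flush_tlb_all();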
2021-02-19  riscv: Convert to reserve_initrd_mem()  [Kefeng Wang, 1 file, -53/+1]
Convert to the generic reserve_initrd_mem() function. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-02-19  riscv: add BUILTIN_DTB support for MMU-enabled targets  [Vitaly Wool, 1 file, -6/+13]
Sometimes, especially in a production system, we may not want to use a "smart bootloader" like U-Boot to load the kernel, ramdisk and device tree from a filesystem on eMMC, but rather load the kernel from a NAND partition and run it as soon as we can. In that case it is convenient to have the device tree compiled into the kernel binary. Since this case is not limited to MMU-less systems, let's support it for those with the MMU enabled too. While at it, provide __dtb_start as a parameter to setup_vm() in the BUILTIN_DTB case, so we don't have to duplicate BUILTIN_DTB-specific processing in the MMU-enabled and MMU-disabled versions of setup_vm(). Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-23  riscv: Fixup pfn_valid error with wrong max_mapnr  [Guo Ren, 1 file, -1/+1]
max_mapnr is the number of PFNs, not an absolute PFN offset. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Fixes: d0d8aae64566 ("RISC-V: Set maximum number of mapped pages correctly") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-16  RISC-V: Set current memblock limit  [Atish Patra, 1 file, -2/+14]
Currently, the Linux kernel cannot use the last 4K bytes of the addressable space because the IS_ERR_VALUE macro treats those addresses as errors. This is an issue for RV32, as the memblock allocator may allocate a chunk of memory from the end of DRAM (2GB), leading to a bad-address error even though the address is technically valid. Fix this issue by limiting memblock if the available memory spans the entire address space. Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
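A minimal sketch of the check being described; the variable names and the exact cutoff are illustrative, not necessarily the patch's literal code:

    phys_addr_t dram_end = memblock_end_of_DRAM();
    /* Highest physical address the kernel can address at all. */
    phys_addr_t max_mapped_addr = __pa(~(uintptr_t)0);

    /*
     * If DRAM reaches the very top of the addressable range, stop
     * memblock from handing out the last 4K: those addresses alias
     * with the IS_ERR_VALUE() error range.
     */
    if (max_mapped_addr == (dram_end - 1))
            memblock_set_current_limit(max_mapped_addr - 4096);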
2021-01-15  riscv: Add uprobes support  [Guo Ren, 1 file, -0/+6]
This patch adds support for uprobes on the riscv architecture. Just like kprobes, it supports single-stepping and instruction simulation. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-15  riscv: Add kprobes support  [Guo Ren, 1 file, -0/+4]
This patch enables kprobes & kretprobes to work with the ftrace interface. It uses a software breakpoint as the single-step mechanism.

Some instructions that can't be single-step executed must be simulated in a kernel execution slot, such as: branch, jal, auipc, la ... Some instructions should be rejected for probing, and we use a blacklist to filter them, such as: ecall, ebreak, ...

We use ebreak & c.ebreak to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. In the execution slot we add an ebreak behind the original instruction to simulate a single-step mechanism.

The patch is based on packi's work [1] and csky's work [2].
- The kprobes_trampoline.S is all from packi's patch
- The single-step mechanism is newly designed for riscv, which has no HW single-step trap
- The simulation code is from csky
- Frankly, all the code refers to other archs' implementations

[1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/
[2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/

Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Co-developed-by: Patrick Stählin <me@packi.ch> Signed-off-by: Patrick Stählin <me@packi.ch> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Zong Li <zong.li@sifive.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Cc: Patrick Stählin <me@packi.ch> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-15  riscv: Fixup patch_text panic in ftrace  [Guo Ren, 1 file, -0/+1]
Just like arm64, we can't trace the functions in the patch_text path. Here is the bug log:

[ 45.234334] Unable to handle kernel paging request at virtual address ffffffd38ae80900
[ 45.242313] Oops [#1]
[ 45.244600] Modules linked in:
[ 45.247678] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.9.0-00025-g9b7db83-dirty #215
[ 45.255797] epc: ffffffe00021689a ra : ffffffe00021718e sp : ffffffe01afabb58
[ 45.262955] gp : ffffffe00136afa0 tp : ffffffe01af94d00 t0 : 0000000000000002
[ 45.270200] t1 : 0000000000000000 t2 : 0000000000000001 s0 : ffffffe01afabc08
[ 45.277443] s1 : ffffffe0013718a8 a0 : 0000000000000000 a1 : ffffffe01afabba8
[ 45.284686] a2 : 0000000000000000 a3 : 0000000000000000 a4 : c4c16ad38ae80900
[ 45.291929] a5 : 0000000000000000 a6 : 0000000000000000 a7 : 0000000052464e43
[ 45.299173] s2 : 0000000000000001 s3 : ffffffe000206a60 s4 : ffffffe000206a60
[ 45.306415] s5 : 00000000000009ec s6 : ffffffe0013718a8 s7 : c4c16ad38ae80900
[ 45.313658] s8 : 0000000000000004 s9 : 0000000000000001 s10: 0000000000000001
[ 45.320902] s11: 0000000000000003 t3 : 0000000000000001 t4 : ffffffffd192fe79
[ 45.328144] t5 : ffffffffb8f80000 t6 : 0000000000040000
[ 45.333472] status: 0000000200000100 badaddr: ffffffd38ae80900 cause: 000000000000000f
[ 45.341514] ---[ end trace d95102172248fdcf ]---
[ 45.346176] note: migration/0[11] exited with preempt_count 1

(gdb) x /2i $pc
=> 0xffffffe00021689a <__do_proc_dointvec+196>: sd zero,0(s7)
   0xffffffe00021689e <__do_proc_dointvec+200>: li s11,0

(gdb) bt
0  __do_proc_dointvec (tbl_data=0x0, table=0xffffffe01afabba8, write=0, buffer=0x0, lenp=0x7bf897061f9a0800, ppos=0x4, conv=0x0, data=0x52464e43) at kernel/sysctl.c:581
1  0xffffffe00021718e in do_proc_dointvec (data=<optimized out>, conv=<optimized out>, ppos=<optimized out>, lenp=<optimized out>, buffer=<optimized out>, write=<optimized out>, table=<optimized out>) at kernel/sysctl.c:964
2  proc_dointvec_minmax (ppos=<optimized out>, lenp=<optimized out>, buffer=<optimized out>, write=<optimized out>, table=<optimized out>) at kernel/sysctl.c:964
3  proc_do_static_key (table=<optimized out>, write=1, buffer=0x0, lenp=0x0, ppos=0x7bf897061f9a0800) at kernel/sysctl.c:1643
4  0xffffffe000206792 in ftrace_make_call (rec=<optimized out>, addr=<optimized out>) at arch/riscv/kernel/ftrace.c:109
5  0xffffffe0002c9c04 in __ftrace_replace_code (rec=0xffffffe01ae40c30, enable=3) at kernel/trace/ftrace.c:2503
6  0xffffffe0002ca0b2 in ftrace_replace_code (mod_flags=<optimized out>) at kernel/trace/ftrace.c:2530
7  0xffffffe0002ca26a in ftrace_modify_all_code (command=5) at kernel/trace/ftrace.c:2677
8  0xffffffe0002ca30e in __ftrace_modify_code (data=<optimized out>) at kernel/trace/ftrace.c:2703
9  0xffffffe0002c13b0 in multi_cpu_stop (data=0x0) at kernel/stop_machine.c:224
10 0xffffffe0002c0fde in cpu_stopper_thread (cpu=<optimized out>) at kernel/stop_machine.c:491
11 0xffffffe0002343de in smpboot_thread_fn (data=0x0) at kernel/smpboot.c:165
12 0xffffffe00022f8b4 in kthread (_create=0xffffffe01af0c040) at kernel/kthread.c:292
13 0xffffffe000201fac in handle_exception () at arch/riscv/kernel/entry.S:236

0xffffffe00020678a <+114>: auipc ra,0xffffe
0xffffffe00020678e <+118>: jalr -118(ra) # 0xffffffe000204714 <patch_text_nosync>
0xffffffe000206792 <+122>: snez a0,a0

(gdb) disassemble patch_text_nosync
Dump of assembler code for function patch_text_nosync:
0xffffffe000204714 <+0>: addi sp,sp,-32
0xffffffe000204716 <+2>: sd s0,16(sp)
0xffffffe000204718 <+4>: sd ra,24(sp)
0xffffffe00020471a <+6>: addi s0,sp,32
0xffffffe00020471c <+8>: auipc ra,0x0
0xffffffe000204720 <+12>: jalr -384(ra) # 0xffffffe00020459c <patch_insn_write>
0xffffffe000204724 <+16>: beqz a0,0xffffffe00020472e <patch_text_nosync+26>
0xffffffe000204726 <+18>: ld ra,24(sp)
0xffffffe000204728 <+20>: ld s0,16(sp)
0xffffffe00020472a <+22>: addi sp,sp,32
0xffffffe00020472c <+24>: ret
0xffffffe00020472e <+26>: sd a0,-24(s0)
0xffffffe000204732 <+30>: auipc ra,0x4
0xffffffe000204736 <+34>: jalr -1464(ra) # 0xffffffe00020817a <flush_icache_all>
0xffffffe00020473a <+38>: ld a0,-24(s0)
0xffffffe00020473e <+42>: ld ra,24(sp)
0xffffffe000204740 <+44>: ld s0,16(sp)
0xffffffe000204742 <+46>: addi sp,sp,32
0xffffffe000204744 <+48>: ret

(gdb) disassemble flush_icache_all-4
Dump of assembler code for function flush_icache_all:
0xffffffe00020817a <+0>: addi sp,sp,-8
0xffffffe00020817c <+2>: sd ra,0(sp)
0xffffffe00020817e <+4>: auipc ra,0xfffff
0xffffffe000208182 <+8>: jalr -1822(ra) # 0xffffffe000206a60 <ftrace_caller>
0xffffffe000208186 <+12>: ld ra,0(sp)
0xffffffe000208188 <+14>: addi sp,sp,8
0xffffffe00020818a <+0>: addi sp,sp,-16
0xffffffe00020818c <+2>: sd s0,0(sp)
0xffffffe00020818e <+4>: sd ra,8(sp)
0xffffffe000208190 <+6>: addi s0,sp,16
0xffffffe000208192 <+8>: li a0,0
0xffffffe000208194 <+10>: auipc ra,0xfffff
0xffffffe000208198 <+14>: jalr -410(ra) # 0xffffffe000206ffa <sbi_remote_fence_i>
0xffffffe00020819c <+18>: ld s0,0(sp)
0xffffffe00020819e <+20>: ld ra,8(sp)
0xffffffe0002081a0 <+22>: addi sp,sp,16
0xffffffe0002081a2 <+24>: ret

(gdb) frame 5
(rec=0xffffffe01ae40c30, enable=3) at kernel/trace/ftrace.c:2503
2503 return ftrace_make_call(rec, ftrace_addr);

(gdb) p /x rec->ip
$2 = 0xffffffe00020817a -> flush_icache_all !

When we modify flush_icache_all's patchable entry with ftrace_caller:
- ftrace_caller is inserted at the flush_icache_all prologue.
- flush_icache_all is then called to sync the I/D-cache, but at that point flush_icache_all itself has only been half modified.

Link: https://lore.kernel.org/linux-riscv/CAJF2gTT=oDWesWe0JVWvTpGi60-gpbNhYLdFWN_5EbyeqoEDdw@mail.gmail.com/T/#t

Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-15  riscv: Fixup wrong ftrace remove cflag  [Guo Ren, 1 file, -1/+1]
We must use $(CC_FLAGS_FTRACE) instead of using -pg directly; otherwise it causes a -fpatchable-function-entry error. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-15  riscv: Add numa support for riscv64 platform  [Atish Patra, 1 file, -1/+3]
Use the generic NUMA implementation to add NUMA support for RISC-V. This is based on Greentime's patch [1], but modified to use the generic NUMA implementation, plus a few more fixes. [1] https://lkml.org/lkml/2020/1/10/233 Co-developed-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-15  riscv: Separate memory init from paging init  [Atish Patra, 1 file, -1/+5]
Currently, we perform some memory init functions in paging init. But, that will be an issue for NUMA support where DT needs to be flattened before numa initialization and memblock_present can only be called after numa initialization. Move memory initialization related functions to a separate function. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Greentime Hu <greentime.hu@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-14  riscv: Fix KASAN memory mapping.  [Nick Hu, 1 file, -2/+2]
Use the virtual address instead of the physical address when translating an address to shadow memory with kasan_mem_to_shadow(). Signed-off-by: Nick Hu <nickhu@andestech.com> Signed-off-by: Nylon Chen <nylon7@andestech.com> Fixes: b10d6bca8720 ("arch, drivers: replace for_each_membock() with for_each_mem_range()") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
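Illustratively, the fix amounts to feeding kasan_mem_to_shadow() the linear-map virtual address of each range instead of its physical address. A sketch (kasan_populate() here stands for the local population helper in arch/riscv/mm/kasan_init.c; details are assumptions):

    phys_addr_t p_start, p_end;
    u64 i;

    for_each_mem_range(i, &p_start, &p_end) {
            void *start = (void *)__va(p_start);   /* was: (void *)p_start */
            void *end = (void *)__va(p_end);

            if (start >= end)
                    break;

            kasan_populate(kasan_mem_to_shadow(start),
                           kasan_mem_to_shadow(end));
    }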
2021-01-08  riscv: Fix builtin DTB handling  [Damien Le Moal, 1 file, -5/+1]
All SiPeed K210 MAIX boards have the exact same vendor, arch and implementation IDs, which prevents differentiating between them to select the correct device tree through the SOC_BUILTIN_DTB_DECLARE() macro. This renders the macro useless and would require changing the sysctl driver code to switch the builtin device tree for the target board. Fix this problem by removing the SOC_BUILTIN_DTB_DECLARE() macro, since it is used only for K210 support. The code searching the builtin DTBs using the vendor, arch and implementation IDs is also removed. Support for builtin DTBs falls back to the simpler and more traditional handling using the CONFIG_BUILTIN_DTB option, similarly to other architectures. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-08  riscv/mm: Prevent kernel modules from accessing user memory without uaccess routines  [Eric Lin, 1 file, -0/+5]
We found this issue in a legacy out-of-tree kernel module which didn't properly access a user-space pointer via get/put_user(). Such an illegal access loops forever in the page fault handler. To resolve this, let it die here. Signed-off-by: Eric Lin <tesheng@andestech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
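The check being described boils down to something like the following in do_page_fault(), using the die_kernel_fault() helper introduced in the companion commit below (a sketch; message text and placement are assumptions):

    /*
     * A fault on a user address taken from kernel mode with the SUM bit
     * clear means the kernel touched user memory without going through
     * the uaccess routines; dying beats looping on the fault forever.
     */
    if (!user_mode(regs) && addr < TASK_SIZE &&
        unlikely(!(regs->status & SR_SUM)))
            die_kernel_fault("access to user memory without uaccess routines",
                             addr, regs);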
2021-01-08  riscv/mm: Introduce a die_kernel_fault() helper function  [Eric Lin, 1 file, -6/+17]
Like arm64, this patch adds a die_kernel_fault() helper to ensure the same semantics for the different kernel faults. Signed-off-by: Eric Lin <tesheng@andestech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
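A sketch of what such a helper typically looks like, modeled on the arm64 counterpart; the exact message format and die()/exit details are assumptions:

    static void die_kernel_fault(const char *msg, unsigned long addr,
                                 struct pt_regs *regs)
    {
            bust_spinlocks(1);

            pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
                     msg, addr);

            bust_spinlocks(0);
            die(regs, "Oops");
            do_exit(SIGKILL);
    }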
2020-12-22  RISC-V: Fix usage of memblock_enforce_memory_limit  [Atish Patra, 1 file, -1/+1]
memblock_enforce_memory_limit accepts the maximum memory size, not the maximum address that can be handled by the kernel. Fix the function invocation accordingly. Fixes: 1bd14a66ee52 ("RISC-V: Remove any memblock representing unusable memory area") Cc: stable@vger.kernel.org Reported-by: Bin Meng <bin.meng@windriver.com> Tested-by: Bin Meng <bin.meng@windriver.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
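In other words, the argument must be a size, not an end address. A sketch of the difference, assuming (as the related commits here state) that -PAGE_OFFSET is the maximum memory size RISC-V supports; the exact expression in the patch may differ:

    /* Wrong: passes an address (the highest mappable physical address). */
    memblock_enforce_memory_limit(max_mapped_addr);

    /* Right: passes a size, i.e. how much memory the kernel can handle. */
    memblock_enforce_memory_limit(-PAGE_OFFSET);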
2020-12-18  Merge tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  [Linus Torvalds, 2 files, -32/+26]
Pull RISC-V updates from Palmer Dabbelt:
 "We have a handful of new kernel features for 5.11:

   - Support for the contiguous memory allocator.
   - Support for IRQ Time Accounting
   - Support for stack tracing
   - Support for strict /dev/mem
   - Support for kernel section protection

  I'm being a bit conservative on the cutoff for this round due to the timing, so this is all the new development I'm going to take for this cycle (even if some of it probably normally would have been OK). There are, however, some fixes on the list that I will likely be sending along either later this week or early next week.

  There is one issue in here: one of my test configurations (PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of the .text.init alignment patch. With any luck we'll sort out the issue, but given how many bugs get fixed all over the place and how unrelated those features seem my guess is that we're just running into something that's been lurking for a while and has already been fixed in the newer QEMU (though I wouldn't be surprised if it's one of these implicit assumptions we have in the boot flow). If it was hardware I'd be strongly inclined to look more closely, but given that users can upgrade their simulators I'm less worried about it"

* tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  arm64: Use the generic devmem_is_allowed()
  arm: Use the generic devmem_is_allowed()
  RISC-V: Use the new generic devmem_is_allowed()
  lib: Add a generic version of devmem_is_allowed()
  riscv: Fixed kernel test robot warning
  riscv: kernel: Drop unused clean rule
  riscv: provide memmove implementation
  RISC-V: Move dynamic relocation section under __init
  RISC-V: Protect all kernel sections including init early
  RISC-V: Align the .init.text section
  RISC-V: Initialize SBI early
  riscv: Enable ARCH_STACKWALK
  riscv: Make stack walk callback consistent with generic code
  riscv: Cleanup stacktrace
  riscv: Add HAVE_IRQ_TIME_ACCOUNTING
  riscv: Enable CMA support
  riscv: Ignore Image.* and loader.bin
  riscv: Clean up boot dir
  riscv: Fix compressed Image formats build
  RISC-V: Add kernel image sections to the resource tree
2020-12-15  arch, mm: make kernel_page_present() always available  [Mike Rapoport, 1 file, -0/+29]
For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add a RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part of the set_memory API, and remove the ugly ifdefery in include/linux/mm.h around the current declarations of kernel_page_present(). Link: https://lkml.kernel.org/r/20201109192128.960-5-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Len Brown <len.brown@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
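For reference, a RISC-V flavored kernel_page_present() is essentially a kernel page-table walk; a simplified sketch that ignores huge leaf mappings:

    bool kernel_page_present(struct page *page)
    {
            unsigned long addr = (unsigned long)page_address(page);
            pgd_t *pgd;
            p4d_t *p4d;
            pud_t *pud;
            pmd_t *pmd;
            pte_t *pte;

            pgd = pgd_offset_k(addr);
            if (!pgd_present(*pgd))
                    return false;

            p4d = p4d_offset(pgd, addr);
            if (!p4d_present(*p4d))
                    return false;

            pud = pud_offset(p4d, addr);
            if (!pud_present(*pud))
                    return false;

            pmd = pmd_offset(pud, addr);
            if (!pmd_present(*pmd))
                    return false;

            pte = pte_offset_kernel(pmd, addr);
            return pte_present(*pte);
    }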
2020-12-15  arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC  [Mike Rapoport, 1 file, -0/+2]
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap pages when page allocation debugging is disabled at runtime. As all the users of __kernel_map_pages() were converted to use debug_pagealloc_map_pages(), it is safe to make it available only when DEBUG_PAGEALLOC is set. Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Len Brown <len.brown@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-11-26  RISC-V: Protect all kernel sections including init early  [Atish Patra, 2 files, -5/+22]
Currently, .init.text & .init.data are intermixed, which makes it impossible to apply different permissions to them. .init.data shouldn't need exec permission, while .init.text shouldn't have write permission. Moreover, the strict permissions are only enforced after /init starts. This leaves the kernel vulnerable to possibly buggy built-in modules. Keep .init.text & .init.data in separate sections so that different permissions are applied to each section. Apply permissions to individual sections as early as possible. This improves the kernel protection under CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the entire _init section after it is freed so that those pages can be used for other purposes. Signed-off-by: Atish Patra <atish.patra@wdc.com> Tested-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-11-21  riscv: Enable CMA support  [Kefeng Wang, 1 file, -2/+6]
riscv has selected HAVE_DMA_CONTIGUOUS, but doesn't call dma_contiguous_reserve(). This calls dma_contiguous_reserve(), which enables CMA. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-11-09  RISC-V: Add kernel image sections to the resource tree  [Nick Kossifidis, 1 file, -27/+0]
This patch (previously part of my kexec/kdump series) populates /proc/iomem with the various sections of the kernel image. We need this for kexec-tools to be able to prepare the crashkernel image for kdump to work. Since resource tree initialization is not related to memory initialization I added the code to kernel/setup.c and removed the original code (derived from the arm64 tree) from mm/init.c. Signed-off-by: Nick Kossifidis <mick@ics.forth.gr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-11-06  RISC-V: Use non-PGD mappings for early DTB access  [Anup Patel, 1 file, -0/+14]
Currently, we use PGD mappings for early DTB mapping in early_pgd but this breaks Linux kernel on SiFive Unleashed because on SiFive Unleashed PMP checks don't work correctly for PGD mappings. To fix early DTB mappings on SiFive Unleashed, we use non-PGD mappings (i.e. PMD) for early DTB access. Fixes: 8f3a2b4a96dc ("RISC-V: Move DT mapping outof fixmap") Signed-off-by: Anup Patel <anup.patel@wdc.com> Reviewed-by: Atish Patra <atish.patra@wdc.com> Tested-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-11-06  riscv: fix pfn_to_virt err in do_page_fault().  [Liu Shaohua, 1 file, -1/+3]
The argument to pfn_to_virt() should be a PFN, not the raw value of CSR_SATP. Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: liush <liush@allwinnertech.com> Reviewed-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
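A sketch of the difference in the vmalloc-fault path of do_page_fault() (index computation and surrounding code are illustrative):

    pgd_t *pgd_k;
    unsigned long pfn;
    int index = pgd_index(addr);

    /* Wrong: treats the whole SATP CSR value as a page frame number. */
    pgd_k = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;

    /* Right: extract the PPN field from SATP first. */
    pfn = csr_read(CSR_SATP) & SATP_PPN;
    pgd_k = (pgd_t *)pfn_to_virt(pfn) + index;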
2020-11-05  RISC-V: Remove any memblock representing unusable memory area  [Atish Patra, 1 file, -11/+7]
RISC-V limits the usable physical memory size to -PAGE_OFFSET. Any memory beyond that size from the DRAM start is unusable. Just remove any memblock regions pointing to that memory, without worrying about computing the maximum size. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-20  Merge tag 'riscv-for-linus-5.10-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  [Linus Torvalds, 3 files, -210/+384]
Pull RISC-V updates from Palmer Dabbelt:
 "A handful of cleanups and new features:

   - A handful of cleanups for our page fault handling
   - Improvements to how we fill out cacheinfo
   - Support for EFI-based systems"

* tag 'riscv-for-linus-5.10-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (22 commits)
  RISC-V: Add page table dump support for uefi
  RISC-V: Add EFI runtime services
  RISC-V: Add EFI stub support.
  RISC-V: Add PE/COFF header for EFI stub
  RISC-V: Implement late mapping page table allocation functions
  RISC-V: Add early ioremap support
  RISC-V: Move DT mapping outof fixmap
  RISC-V: Fix duplicate included thread_info.h
  riscv/mm/fault: Set FAULT_FLAG_INSTRUCTION flag in do_page_fault()
  riscv/mm/fault: Fix inline placement in vmalloc_fault() declaration
  riscv: Add cache information in AUX vector
  riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
  riscv: Set more data to cacheinfo
  riscv/mm/fault: Move access error check to function
  riscv/mm/fault: Move FAULT_FLAG_WRITE handling in do_page_fault()
  riscv/mm/fault: Simplify mm_fault_error()
  riscv/mm/fault: Move fault error handling to mm_fault_error()
  riscv/mm/fault: Simplify fault error handling
  riscv/mm/fault: Move vmalloc fault handling to vmalloc_fault()
  riscv/mm/fault: Move bad area handling to bad_area()
  ...
2020-10-14  memblock: use separate iterators for memory and reserved regions  [Mike Rapoport, 1 file, -1/+1]
for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its users. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Ingo Molnar <mingo@kernel.org> [x86] Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS] Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format] Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-14  arch, drivers: replace for_each_membock() with for_each_mem_range()  [Mike Rapoport, 2 files, -20/+15]
There are several occurrences of the following pattern:

    for_each_memblock(memory, reg) {
            start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
            end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));

            /* do something with start and end */
    }

Using the for_each_mem_range() iterator is more appropriate in such cases and allows simpler and cleaner code.

[akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build] [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring] Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
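For example, the converted form of the pattern above looks like this (a sketch; the loop body is placeholder code):

    phys_addr_t start, end;
    u64 i;

    /*
     * Iterate over the [start, end) ranges of memblock.memory directly
     * instead of reaching into memblock_region internals.
     */
    for_each_mem_range(i, &start, &end) {
            /* do something with start and end */
    }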
2020-10-14  riscv: drop unneeded node initialization  [Mike Rapoport, 1 file, -9/+0]
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is used implicitly during early memory initialization. There is no need to call memblock_set_node(), remove this call and the surrounding code. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-7-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-05  RISC-V: Make sure memblock reserves the memory containing DT  [Atish Patra, 1 file, -0/+1]
Currently, the memory containing the DT is not reserved. Thus, that region of memory can be reallocated or reused for other purposes. This may result in a corrupted DT on the nommu virt board in QEMU. We may not face any issue on Kendryte, as the DT is embedded in the kernel image there. Fixes: 6bd33e1ece52 ("riscv: add nommu support") Cc: stable@vger.kernel.org Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-03  RISC-V: Add page table dump support for uefi  [Atish Patra, 1 file, -6/+42]
Extend the current page table dump support in RISC-V to include efi pages as well. Here is the output of efi runtime page table mappings.

---[ UEFI runtime start ]---
0x0000000020002000-0x0000000020003000 0x00000000be732000 4K PTE D A . . . W R V
0x0000000020018000-0x0000000020019000 0x00000000be738000 4K PTE D A . . . W R V
0x000000002002c000-0x000000002002d000 0x00000000be73c000 4K PTE D A . . . W R V
0x0000000020031000-0x0000000020032000 0x00000000bff61000 4K PTE D A . . X W R V
---[ UEFI runtime end ]---

Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-03  RISC-V: Add EFI runtime services  [Atish Patra, 1 file, -1/+1]
This patch adds EFI runtime service support for RISC-V. Signed-off-by: Atish Patra <atish.patra@wdc.com> [ardb: - Remove the page check] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-03  RISC-V: Implement late mapping page table allocation functions  [Atish Patra, 1 file, -31/+99]
Currently, page table setup is done during setup_vm_final, where fixmap can be used to create the temporary mappings. The physical frames are allocated from the memblock_alloc_* functions. However, this won't work if a page table mapping needs to be created for a different mm context (i.e. efi mm) at a later point in time. Use the generic kernel page allocation functions & macros for any mapping created after setup_vm_final. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-03  RISC-V: Add early ioremap support  [Atish Patra, 1 file, -0/+33]
UEFI uses early IO or memory mappings for runtime services before normal ioremap() is usable. Add the necessary fixmap bindings and pmd mappings for generic ioremap support to work. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-10-03  RISC-V: Move DT mapping outof fixmap  [Anup Patel, 1 file, -14/+12]
Currently, RISC-V reserves 1MB of fixmap memory for the device tree. However, it maps only a single PMD (2MB) for the fixmap, which leaves less than 1MB for other kernel features that also require fixmap, such as early ioremap. The fixmap size could be increased by another 2MB, but that brings additional complexity and changes the virtual memory layout as well; if some additional feature requires fixmap again, it would have to be moved again. Technically, the DT doesn't need a fixmap, as the memory occupied by the DT is only used during boot. That's why we map the device tree in the early page table using two consecutive PGD mappings at lower addresses (< PAGE_OFFSET). This frees a lot of space in the fixmap and also raises the maximum supported device tree size to PGDIR_SIZE. Thus, the init memory section can be used for the same purpose as well. This simplifies the fixmap implementation. Signed-off-by: Anup Patel <anup.patel@wdc.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-19  riscv: Add sfence.vma after early page table changes  [Greentime Hu, 1 file, -4/+3]
This invalidates the local TLB after modifying the page tables during early init, as it's too early to handle spurious faults the way we otherwise do. Fixes: f2c17aabc917 ("RISC-V: Implement compile-time fixed mappings") Reported-by: Syven Wang <syven.wang@sifive.com> Signed-off-by: Syven Wang <syven.wang@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> [Palmer: Cleaned up the commit text] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
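On RISC-V, invalidating the local TLB is a single sfence.vma instruction, typically wrapped as:

    static inline void local_flush_tlb_all(void)
    {
            __asm__ __volatile__ ("sfence.vma" : : : "memory");
    }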
2020-09-16  riscv/mm/fault: Set FAULT_FLAG_INSTRUCTION flag in do_page_fault()  [Pekka Enberg, 1 file, -1/+2]
If the page fault "cause" is EXC_INST_PAGE_FAULT, set the FAULT_FLAG_INSTRUCTION flag to let handle_mm_fault() and friends know about it. This has no functional changes because RISC-V uses the default arch_vma_access_permitted() implementation, which always returns true. However, dax_pmd_fault(), for example, has a tracepoint that uses FAULT_FLAG_INSTRUCTION, so we might as well set it. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Fix inline placement in vmalloc_fault() declaration  [Pekka Enberg, 1 file, -1/+1]
The "inline" keyword is in the wrong place in vmalloc_fault() declaration:

>> arch/riscv/mm/fault.c:56:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
      56 | static void inline vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
         |             ^~~~~~

Fix that up.

Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Move access error check to function  [Pekka Enberg, 1 file, -21/+27]
Move the access error check into an access_error() function to simplify the control flow in do_page_fault(). Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
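The extracted helper looks roughly like this (a sketch matching the description; the exact checks in the patch may differ):

    static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
    {
            switch (cause) {
            case EXC_INST_PAGE_FAULT:
                    if (!(vma->vm_flags & VM_EXEC))
                            return true;
                    break;
            case EXC_LOAD_PAGE_FAULT:
                    if (!(vma->vm_flags & VM_READ))
                            return true;
                    break;
            case EXC_STORE_PAGE_FAULT:
                    if (!(vma->vm_flags & VM_WRITE))
                            return true;
                    break;
            default:
                    panic("%s: unhandled cause %lu", __func__, cause);
            }

            return false;
    }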
2020-09-16  riscv/mm/fault: Move FAULT_FLAG_WRITE handling in do_page_fault()  [Pekka Enberg, 1 file, -1/+3]
Let's handle the translation of EXC_STORE_PAGE_FAULT to FAULT_FLAG_WRITE once before looking up the VMA. This makes it easier to extract access error logic in the next patch. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Simplify mm_fault_error()  [Pekka Enberg, 1 file, -23/+18]
Simplify the mm_fault_error() handling function by eliminating the unnecessary gotos. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Move fault error handling to mm_fault_error()  [Pekka Enberg, 1 file, -25/+31]
This patch moves the fault error handling to mm_fault_error() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Simplify fault error handling  [Pekka Enberg, 1 file, -10/+8]
Move fault error handling after retry logic. This simplifies the code flow and makes it easier to move fault error handling to its own function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Move vmalloc fault handling to vmalloc_fault()  [Pekka Enberg, 1 file, -82/+82]
This patch moves the vmalloc fault handling in do_page_fault() to vmalloc_fault() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Move bad area handling to bad_area()  [Pekka Enberg, 1 file, -27/+40]
This patch moves the bad area handling in do_page_fault() to bad_area() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm/fault: Move no context handling to no_context()  [Pekka Enberg, 1 file, -31/+52]
This patch moves the no context handling in do_page_fault() to no_context() function and converts gotos to calls to the new function. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-09-16  riscv/mm: Simplify retry logic in do_page_fault()  [Pekka Enberg, 1 file, -11/+9]
Let's combine the two retry logic if statements in do_page_fault() to simplify the code. Signed-off-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-08-12  mm/riscv: use general page fault accounting  [Peter Xu, 1 file, -15/+1]
Use the general page fault accounting by passing regs into handle_mm_fault(). This naturally solves the issue of multiple page fault accounting when a page fault is retried. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Pekka Enberg <penberg@kernel.org> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Link: http://lkml.kernel.org/r/20200707225021.200906-18-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
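Concretely, the arch fault handler now just passes regs down, and handle_mm_fault() takes care of the perf events and task counters (a sketch):

    /*
     * handle_mm_fault() now receives pt_regs and performs the
     * PERF_COUNT_SW_PAGE_FAULTS_MAJ/MIN and maj_flt/min_flt accounting
     * itself, so retried faults are no longer double-counted here.
     */
    fault = handle_mm_fault(vma, addr, flags, regs);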
2020-08-12  mm: do page fault accounting in handle_mm_fault  [Peter Xu, 1 file, -1/+1]
Patch series "mm: Page fault accounting cleanups", v5.

This is v5 of the page fault accounting cleanup series. It originates from Gerald Schaefer's report on an issue a week ago regarding incorrect page fault accounting for retried page faults after commit 4064b9827063 ("mm: allow VM_FAULT_RETRY for multiple times"): https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/

What this series does:

- Correct page fault accounting: we do accounting for a page fault (no matter whether it's from #PF handling, or gup, or anything else) only with the attempt that completed the fault. For example, page fault retries should not be counted in page fault counters. The same applies to the perf events.

- Unify the definition of PERF_COUNT_SW_PAGE_FAULTS: currently this perf event is used in an ad-hoc way across different archs.
  Case (1): for many archs it's done at the entry of a page fault handler, so that it will also cover e.g. erroneous faults.
  Case (2): for some other archs, it is only accounted when the page fault is resolved successfully.
  Case (3): there're still quite some archs that have not enabled this perf event.
  Since this series touches nearly all the archs, we unify this perf event to always follow case (1), which is the one that makes the most sense. And since we moved the accounting into handle_mm_fault, the other two MAJ/MIN perf events are well taken care of naturally.

- Unify the definition of "major faults": the definition of "major fault" is slightly changed when used in accounting (not VM_FAULT_MAJOR). More information in patch 1.

- Always account the page fault onto the one that triggered the page fault. This does not matter much for #PF handling, but mostly for gup. More information on this in patch 25.

Patchset layout:
Patch 1: Introduce the accounting in handle_mm_fault(), not enabled.
Patch 2-23: Enable the new accounting for arch #PF handlers one by one.
Patch 24: Enable the new accounting for the rest of the outliers (gup, iommu, etc.)
Patch 25: Clean up the GUP task_struct pointer since it's not needed any more

This patch (of 25):

This is a preparation patch to move page fault accounting into the general code in handle_mm_fault(). This includes both the per-task flt_maj/flt_min counters and the major/minor page fault perf events. To do this, the pt_regs pointer is passed into handle_mm_fault(). PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault handlers. So far, every pt_regs pointer passed into handle_mm_fault() is NULL, which means this patch should have no intended functional change.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Greentime Hu <green.hu@gmail.com> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>