path: root/arch/arm64/mm
2014-12-11  arm64: mm: dump: don't skip final region  [Mark Rutland, 1 file, -3/+0]
If the final page table entry we walk is a valid mapping, the page table dumping code will not log the region this entry is part of, as the final note_page call in ptdump_show will trigger an early return. Luckily this isn't seen on contemporary systems as they typically don't have enough RAM to extend the linear mapping right to the end of the address space. In note_page, we log a region when we reach its end (i.e. we hit an entry immediately afterwards which has different prot bits or is invalid). The final entry has no subsequent entry, so we will not log this immediately. We try to cater for this with a subsequent call to note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and hence we will skip a valid mapping if it spans to the final entry we note. Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with user mappings, so we do not need the check to ensure we don't log user page tables. Due to the way addr is constructed in the walk_* functions, it can never be less than LOWEST_ADDR when walking the page tables, so it is not necessary to avoid dereferencing invalid table addresses. The existing checks for st->current_prot and st->marker[1].start_address are sufficient to ensure we will not print and/or dereference garbage when trying to log information. This patch removes the unnecessary check against LOWEST_ADDR, ensuring we log all regions in the kernel page table, including those which span right to the end of the address space. Cc: Kees Cook <keescook@chromium.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
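For illustration, the fixed flow looks roughly like the sketch below (names follow arch/arm64/mm/dump.c; this is a paraphrase, not the verbatim patch). With the LOWEST_ADDR early return gone from note_page(), the trailing call in ptdump_show() can flush the final region:

	static int ptdump_show(struct seq_file *m, void *v)
	{
		struct pg_state st = {
			.seq = m,
			.marker = address_markers,
		};

		walk_pgd(&st, &init_mm, LOWEST_ADDR);

		/*
		 * Flush the last region. Before this patch, note_page()
		 * contained "if (addr < LOWEST_ADDR) return;", which made
		 * this call (addr == 0) a no-op and dropped any mapping
		 * extending to the end of the address space.
		 */
		note_page(&st, 0, 0, 0);
		return 0;
	}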
2014-12-11  arm64: mm: dump: fix shift warning  [Mark Rutland, 1 file, -1/+1]
When building with 48-bit VAs, it's possible to get the following warning when building the arm64 page table dumping code:

arch/arm64/mm/dump.c: In function ‘walk_pgd’:
arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
  pgd_t *pgd = pgd_offset(mm, 0);
  ^

As pgd_offset is a macro and the second argument is not cast to any particular type, the zero will be given integer type by the compiler. As pgd_offset passes the argument to pgd_index, we then try to shift the 32-bit integer by at least 39 bits (for 4k pages). Elsewhere pgd_offset is passed a second argument of unsigned long type, so let's do the same here by passing '0UL' rather than '0'. Cc: Kees Cook <keescook@chromium.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
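The underlying C issue can be reproduced in isolation (the macro below is a hypothetical stand-in for pgd_index(), shown only to make the type problem concrete):

	#include <stdio.h>

	#define PGDIR_SHIFT 39	/* 4K pages, 48-bit VA */
	#define pgd_index(addr) ((addr) >> PGDIR_SHIFT)

	int main(void)
	{
		/*
		 * pgd_index(0) would shift an int (32-bit) right by 39 bits:
		 * undefined behaviour, and gcc warns
		 * "right shift count >= width of type".
		 */
		unsigned long idx = pgd_index(0UL);	/* 0UL is 64-bit: well-defined */

		printf("%lu\n", idx);
		return 0;
	}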
2014-12-05  arm64: remove the unnecessary arm64_swiotlb_init()  [Ding Tianhong, 1 file, -1/+0]
Commit 3690951fc6d42f3a0903987677d0e592c49dd8db (arm64: Use swiotlb late initialisation) switched the DMA mapping code to swiotlb_late_init_with_default_size(), so arm64_swiotlb_init() is no longer used; remove this function. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-12-01  arm64: compat: align cacheflush syscall with arch/arm  [Vladimir Murzin, 1 file, -1/+5]
Update handling of cacheflush syscall with changes made in arch/arm counterpart: - return error to userspace when flushing syscall fails - split user cache-flushing into interruptible chunks - don't bother rounding to nearest vma Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> [will: changed internal return value from -EINTR to 0 to match arch/arm/] Signed-off-by: Will Deacon <will.deacon@arm.com>
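The resulting compat handler looks roughly like this (a sketch based on the description above; the actual code lives in arch/arm64/kernel/sys_compat.c and details may differ):

	static long __do_compat_cache_op(unsigned long start, unsigned long end)
	{
		long ret;

		do {
			/* flush at most one page before checking for signals */
			unsigned long chunk = min(PAGE_SIZE, end - start);

			if (fatal_signal_pending(current))
				return 0;

			ret = __flush_cache_user_range(start, start + chunk);
			if (ret)
				return ret;	/* propagated to userspace as an error */

			cond_resched();
			start += chunk;
		} while (start < end);

		return 0;
	}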
2014-11-26  arm64: add support to dump the kernel page tables  [Laura Abbott, 2 files, -0/+333]
In a similar manner to arm, it's useful to be able to dump the page tables to verify permissions and memory types. Add a debugfs file to check the page tables. Acked-by: Steve Capper <steve.capper@linaro.org> Tested-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [will: s/BUFFERABLE/NORMAL-NC/] Signed-off-by: Will Deacon <will.deacon@arm.com>
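The debugfs plumbing is the usual seq_file pattern; a sketch modelled on arch/arm64/mm/dump.c, exposing /sys/kernel/debug/kernel_page_tables:

	static int ptdump_open(struct inode *inode, struct file *file)
	{
		return single_open(file, ptdump_show, NULL);
	}

	static const struct file_operations ptdump_fops = {
		.open		= ptdump_open,
		.read		= seq_read,
		.llseek		= seq_lseek,
		.release	= single_release,
	};

	static int ptdump_init(void)
	{
		struct dentry *pe;

		pe = debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
					 &ptdump_fops);
		return pe ? 0 : -ENOMEM;
	}
	device_initcall(ptdump_init);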
2014-11-25  arm64: Factor out fixmap initialization from ioremap  [Laura Abbott, 2 files, -90/+97]
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap. Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add Cortex-A53 cache errata workaround  [Andre Przywara, 1 file, -1/+3]
The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime. Signed-off-by: Andre Przywara <andre.przywara@arm.com> [will: add __maybe_unused to squash gcc warning] Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add alternative runtime patching  [Andre Przywara, 1 file, -0/+2]
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
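Roughly, each alternative records the original and replacement instructions together with the feature/erratum bit that selects it, and a boot-time pass patches the kernel text. A sketch of the mechanism (the struct follows arch/arm64/include/asm/alternative.h; the patch loop is a paraphrase, not the exact implementation):

	struct alt_instr {
		s32 orig_offset;	/* offset to original instruction */
		s32 alt_offset;		/* offset to replacement instruction */
		u16 cpufeature;		/* cpufeature bit set for replacement */
		u8  orig_len;		/* size of original instruction(s) */
		u8  alt_len;		/* size of new instruction(s), <= orig_len */
	};

	static void apply_alternatives(struct alt_instr *begin, struct alt_instr *end)
	{
		struct alt_instr *alt;

		for (alt = begin; alt < end; alt++) {
			u8 *origptr, *replptr;

			if (!cpus_have_cap(alt->cpufeature))
				continue;	/* CPU not affected: leave code as-is */

			BUG_ON(alt->alt_len > alt->orig_len);

			origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
			replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
			memcpy(origptr, replptr, alt->alt_len);
			flush_icache_range((uintptr_t)origptr,
					   (uintptr_t)(origptr + alt->alt_len));
		}
	}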
2014-11-21  arm64: mm: report unhandled level-0 translation faults correctly  [Will Deacon, 1 file, -1/+1]
Translation faults that occur due to the input address being outside of the address range mapped by the relevant base register are reported as level 0 faults in ESR.DFSC. If the faulting access cannot be resolved by the kernel (e.g. because it is not mapped by a vma), then we report "input address range fault" on the console. This was fine until we added support for 48-bit VAs, which actually place PGDs at level 0 and can trigger faults for invalid addresses that are within the range of the page tables. This patch changes the string to report "level 0 translation fault", which is far less confusing. Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-20  arm64: pgalloc: consistently use PGALLOC_GFP  [Mark Rutland, 1 file, -2/+2]
We currently allocate different levels of page tables with a variety of differing flags, and the PGALLOC_GFP flags, intended for use when allocating any level of page table, are only used for ptes in pte_alloc_one. On x86, PGALLOC_GFP is used for all page table allocations. Currently the major differences are:

* __GFP_NOTRACK -- Needed to ensure page tables are always accessible in the presence of kmemcheck to prevent recursive faults. Currently kmemcheck cannot be selected for arm64.
* __GFP_REPEAT -- Causes the allocator to try to reclaim pages and retry upon a failure to allocate.
* __GFP_ZERO -- Sometimes passed explicitly, sometimes zalloc variants are used.

While we've not encountered issues so far, it would be preferable to be consistent. This patch ensures all levels of table are allocated in the same manner, with PGALLOC_GFP. Cc: Steve Capper <steve.capper@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
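A sketch of the resulting pattern (assumed shape; the exact placement of these helpers between asm/pgalloc.h and mm/pgd.c may differ from the patch):

	#define PGALLOC_GFP	(GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)

	static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		return (pmd_t *)__get_free_page(PGALLOC_GFP);
	}

	static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		return (pud_t *)__get_free_page(PGALLOC_GFP);
	}

	/* ptes already used PGALLOC_GFP via pte_alloc_one{,_kernel}() */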
2014-11-18  arm64/mm: Remove hack in mmap randomize layout  [Yann Droneaud, 1 file, -10/+2]
Since commit 8a0a9bd4db63 ('random: make get_random_int() more random'), get_random_int() returns a random value for each call, so the comment and hack introduced in mmap_rnd() as part of commit 1d18c47c735e ('arm64: MMU fault handling and page table management') are incorrect. Commit 1d18c47c735e seems to use the same hack introduced by commit a5adc91a4b44 ('powerpc: Ensure random space between stack and mmaps'), later copied in commit 5a0efea09f42 ('sparc64: Sharpen address space randomization calculations.'). But both architectures were cleaned up as part of commit fa8cbaaf5a68 ('powerpc+sparc64/mm: Remove hack in mmap randomize layout'), as the hack is no longer needed since commit 8a0a9bd4db63. So the present patch removes the comment and the hack around get_random_int() on AArch64's mmap_rnd(). Cc: David S. Miller <davem@davemloft.net> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-06  arm64: fix data type for physical address  [Min-Hua Chen, 1 file, -1/+1]
Use phys_addr_t for physical address in alloc_init_pud. Although phys_addr_t and unsigned long are 64 bit in arm64, it is better to use phys_addr_t to describe physical addresses. Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-10-24  arm64: Fix memblock current_limit with 64K pages and 48-bit VA  [Catalin Marinas, 1 file, -4/+8]
With 48-bit VA space, the 64K page configuration uses 3 levels instead of 2 and PUD_SIZE != PMD_SIZE. Since with 64K pages we only cover PMD_SIZE with the initial swapper_pg_dir populated in head.S, the memblock current_limit needs to be set accordingly in map_mem() to avoid allocating unmapped memory. The memblock current_limit is progressively increased as more blocks are mapped. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-20  arm64: mm: Correct fixmap pagetable types  [Steve Capper, 1 file, -2/+2]
Compiling with STRICT_MM_TYPECHECKS gives the following:

arch/arm64/mm/ioremap.c: In function ‘early_ioremap_init’:
arch/arm64/mm/ioremap.c:152:2: warning: passing argument 3 of ‘pud_populate’ from incompatible pointer type
  pud_populate(&init_mm, pud, bm_pmd);

The data types for bm_pmd and bm_pud are incorrectly set to pte_t. This patch corrects these types. Signed-off-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-20  arm64: Align less than PAGE_SIZE pgds naturally  [Catalin Marinas, 1 file, -2/+16]
When the pgd size is smaller than PAGE_SIZE, pgd_alloc() uses kzalloc() to save space. However, this is not always naturally aligned as required by the architecture. This patch creates a kmem_cache for pgd allocations with the correct alignment. The current kernel configurations with 4K pages + 39-bit VA and 64K pages + 42-bit VA use a full page for the pgd and are not affected. The patch is required for 48-bit VA with 64K pages where the pgd is 512 bytes. Reported-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
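The shape of the fix, close to the actual arch/arm64/mm/pgd.c: a kmem_cache created with align == size guarantees the natural alignment that kzalloc() does not:

	static struct kmem_cache *pgd_cache;

	pgd_t *pgd_alloc(struct mm_struct *mm)
	{
		if (PGD_SIZE == PAGE_SIZE)
			return (pgd_t *)get_zeroed_page(GFP_KERNEL);
		else
			return kmem_cache_zalloc(pgd_cache, GFP_KERNEL);
	}

	void pgd_free(struct mm_struct *mm, pgd_t *pgd)
	{
		if (PGD_SIZE == PAGE_SIZE)
			free_page((unsigned long)pgd);
		else
			kmem_cache_free(pgd_cache, pgd);
	}

	static int __init pgd_cache_init(void)
	{
		/* align == PGD_SIZE gives the naturally aligned pgd required */
		if (PGD_SIZE != PAGE_SIZE)
			pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE,
						      PGD_SIZE, SLAB_PANIC, NULL);
		return 0;
	}
	core_initcall(pgd_cache_init);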
2014-10-10  arm64: mm: enable RCU fast_gup  [Steve Capper, 1 file, -0/+16]
Activate the RCU fast_gup for ARM64. We also need to force THP splits to broadcast an IPI s.t. we block in the fast_gup page walker. As THP splits are comparatively rare, this should not lead to a noticeable performance degradation. Some pre-requisite functions pud_write and pud_page are also added. [akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Steve Capper <steve.capper@linaro.org> Tested-by: Dann Frazier <dann.frazier@canonical.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-10  arm64: add atomic pool for non-coherent and CMA allocations  [Laura Abbott, 1 file, -19/+145]
Neither CMA nor noncoherent allocations support atomic allocations. Add a dedicated atomic pool to support this. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Riley <davidriley@chromium.org> Cc: Olof Johansson <olof@lixom.net> Cc: Ritesh Harjain <ritesh.harjani@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
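Conceptually, the pool is a genalloc arena carved out of memory that was allocated and remapped at boot, from which GFP_ATOMIC requests can then be served without sleeping. A sketch of that pattern (atomic_pool_init()/atomic_pool_alloc() and their parameters are hypothetical names for illustration; only the genalloc calls are real APIs):

	static struct gen_pool *atomic_pool;

	/* carve a pre-allocated, pre-remapped buffer into the atomic pool */
	static int __init atomic_pool_init(void *vaddr, phys_addr_t phys, size_t size)
	{
		int ret;

		atomic_pool = gen_pool_create(PAGE_SHIFT, -1);
		if (!atomic_pool)
			return -ENOMEM;

		ret = gen_pool_add_virt(atomic_pool, (unsigned long)vaddr,
					phys, size, -1);
		if (ret) {
			gen_pool_destroy(atomic_pool);
			atomic_pool = NULL;
			return ret;
		}
		return 0;
	}

	/* atomic (non-blocking) allocations are served from the pool */
	static void *atomic_pool_alloc(size_t size, dma_addr_t *dma_handle)
	{
		unsigned long val = gen_pool_alloc(atomic_pool, size);

		if (!val)
			return NULL;
		*dma_handle = gen_pool_virt_to_phys(atomic_pool, val);
		return (void *)val;
	}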
2014-10-08  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 7 files, -36/+117]
Pull arm64 updates from Catalin Marinas:

 - eBPF JIT compiler for arm64
 - CPU suspend backend for PSCI (firmware interface) with standard idle states defined in DT (generic idle driver to be merged via a different tree)
 - Support for CONFIG_DEBUG_SET_MODULE_RONX
 - Support for unmapped cpu-release-addr (outside kernel linear mapping)
 - set_arch_dma_coherent_ops() implemented and bus notifiers removed
 - EFI_STUB improvements when base of DRAM is occupied
 - Typos in KGDB macros
 - Clean-up to (partially) allow kernel building with LLVM
 - Other clean-ups (extern keyword, phys_addr_t usage)

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
  arm64: Remove unneeded extern keyword
  ARM64: make of_device_ids const
  arm64: Use phys_addr_t type for physical address
  aarch64: filter $x from kallsyms
  arm64: Use DMA_ERROR_CODE to denote failed allocation
  arm64: Fix typos in KGDB macros
  arm64: insn: Add return statements after BUG_ON()
  arm64: debug: don't re-enable debug exceptions on return from el1_dbg
  Revert "arm64: dmi: Add SMBIOS/DMI support"
  arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
  of: amba: use of_dma_configure for AMBA devices
  arm64: dmi: Add SMBIOS/DMI support
  arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
  arm64:mm: initialize max_mapnr using function set_max_mapnr
  setup: Move unmask of async interrupts after possible earlycon setup
  arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
  arm64: pageattr: Correctly adjust unaligned start addresses
  net: bpf: arm64: fix module memory leak when JIT image build fails
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  ...
2014-10-03  Merge branches 'fiq' (early part), 'fixes', 'l2c' (early part) and 'misc' into for-next  [Russell King, 1 file, -3/+8]
2014-10-03  ARM: 8167/1: extend the reserved memory for initrd to be page aligned  [Yalin Wang, 1 file, -1/+7]
This patch extends the start and end addresses of the initrd to be page aligned, so that we can free all of its memory, including any non-page-aligned head or tail pages. If the start or end address of the initrd is not page aligned, those pages cannot be freed by the free_initrd_mem() function. Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-10-02  arm64: Use phys_addr_t type for physical address  [Min-Hua Chen, 1 file, -1/+1]
Change the type of the physical address from unsigned long to phys_addr_t, making valid_phys_addr_range() more readable. Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-02  arm64: Use DMA_ERROR_CODE to denote failed allocation  [Sean Paul, 1 file, -1/+1]
This patch replaces the static assignment of ~0 to dma_handle with DMA_ERROR_CODE to be consistent with other platforms. Signed-off-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22  arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers  [Catalin Marinas, 1 file, -31/+0]
Commit 6ecba8eb51b7 (arm64: Use bus notifiers to set per-device coherent DMA ops) introduced bus notifiers to set the coherent dma ops based on the 'dma-coherent' DT property. Since the generic of_dma_configure() handles this property for platform and AMBA devices, replace the notifiers with set_arch_dma_coherent_ops(). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
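The replacement hook is small; approximately, from arch/arm64/include/asm/dma-mapping.h (a close paraphrase of the patch):

	static inline int set_arch_dma_coherent_ops(struct device *dev)
	{
		/* called by of_dma_configure() when 'dma-coherent' is present */
		set_dma_ops(dev, &coherent_swiotlb_dma_ops);
		return 0;
	}
	#define set_arch_dma_coherent_ops	set_arch_dma_coherent_ops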
2014-09-17  arm64:mm: initialize max_mapnr using function set_max_mapnr  [Ganapatrao Kulkarni, 1 file, -1/+1]
Initialize max_mapnr using the set_max_mapnr() helper function instead of assigning it directly. Also do not add PHYS_PFN_OFFSET to max_pfn, since it already contains it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-15  arm64: LLVMLinux: Fix inline arm64 assembly for use with clang  [Mark Charlebois, 1 file, -1/+1]
Remove '#' from immediate parameter in AARCH64 inline assembly in mmu. This code now works with both gcc and clang. Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12  arm64: pageattr: Correctly adjust unaligned start addresses  [Laura Abbott, 1 file, -1/+2]
The start address needs to be actually updated after it is detected to be unaligned. Adjust it and the end address properly. Reported-by: Zi Shen Lim <zlim.lnx@gmail.com> Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
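A reconstruction of the fixed prologue of change_memory_common() (a sketch, not the verbatim patch; before the fix, the page-masked value was computed but start itself was never moved):

	static int change_memory_common(unsigned long addr, int numpages,
					pgprot_t set_mask, pgprot_t clear_mask)
	{
		unsigned long start = addr;
		unsigned long size = PAGE_SIZE * numpages;
		unsigned long end = start + size;
		struct page_change_data data;
		int ret;

		if (!IS_ALIGNED(addr, PAGE_SIZE)) {
			start &= PAGE_MASK;	/* the fix: round start down... */
			end = start + size;	/* ...and keep end consistent */
			WARN_ON_ONCE(1);
		}

		data.set_mask = set_mask;
		data.clear_mask = clear_mask;

		ret = apply_to_page_range(&init_mm, start, size,
					  change_page_range, &data);
		flush_tlb_kernel_range(start, end);
		return ret;
	}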
2014-09-09  efi/arm64: Fix fdt-related memory reservation  [Mark Salter, 1 file, -2/+1]
Commit 86c8b27a01cf ("arm64: ignore DT memreserve entries when booting in UEFI mode") prevents early_init_fdt_scan_reserved_mem() from being called for arm64 kernels booting via UEFI. This was done because the kernel will use the UEFI memory map to determine reserved memory regions. That approach has problems in that early_init_fdt_scan_reserved_mem() also reserves the FDT itself and any node-specific reserved memory. By chance of some kernel configs, the FDT may be overwritten before it can be unflattened and the kernel will fail to boot. More subtle problems will result if the FDT has node-specific reserved memory which is not really reserved. This patch has the UEFI stub remove the memory reserve map entries from the FDT as it does with the memory nodes. This allows early_init_fdt_scan_reserved_mem() to be called unconditionally so that the other needed reservations are made. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-09-08  arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support  [Laura Abbott, 2 files, -1/+97]
In a similar fashion to other architectures, add the infrastructure and Kconfig to enable DEBUG_SET_MODULE_RONX support. When enabled, module ranges will be marked read-only/no-execute as appropriate. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [will: fixed off-by-one in module end check] Signed-off-by: Will Deacon <will.deacon@arm.com>
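The exported helpers reduce to one common routine that sets/clears pte bits over a range; a sketch of the wrappers (pgprot masks as assumed from the arm64 pte bits, not necessarily the exact patch):

	int set_memory_ro(unsigned long addr, int numpages)
	{
		return change_memory_common(addr, numpages,
					    __pgprot(PTE_RDONLY),
					    __pgprot(PTE_WRITE));
	}

	int set_memory_x(unsigned long addr, int numpages)
	{
		return change_memory_common(addr, numpages,
					    __pgprot(0),
					    __pgprot(PTE_PXN));
	}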
2014-09-08  arm64: convert part of soft_restart() to assembly  [Arun Chandran, 1 file, -0/+15]
The current soft_restart() and setup_restart implementations incorrectly assume that the compiler will not spill/fill values to/from the stack. However this assumption seems to be wrong, revealed by the disassembly of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05.

ffffffc000085224 <soft_restart>:
ffffffc000085224:  a9be7bfd  stp  x29, x30, [sp,#-32]!
ffffffc000085228:  910003fd  mov  x29, sp
ffffffc00008522c:  f9000fa0  str  x0, [x29,#24]
ffffffc000085230:  94003d21  bl   ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234:  94003b33  bl   ffffffc000093f00 <flush_cache_all>
ffffffc000085238:  94003dfa  bl   ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c:  94003b31  bl   ffffffc000093f00 <flush_cache_all>
ffffffc000085240:  b0003321  adrp x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244:  f9400fa0  ldr  x0, [x29,#24]    ----> spilled addr
ffffffc000085248:  f942fc22  ldr  x2, [x1,#1528]   ----> global memstart_addr
ffffffc00008524c:  f0000061  adrp x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250:  91290021  add  x1, x1, #0xa40
ffffffc000085254:  8b010041  add  x1, x2, x1
ffffffc000085258:  d2c00802  mov  x2, #0x4000000000  // #274877906944
ffffffc00008525c:  8b020021  add  x1, x1, x2
ffffffc000085260:  d63f0020  blr  x1
...

Here the compiler generates memory accesses after the cache is disabled, loading stale values for the spilled value and global variable. As we cannot control when the compiler will access memory, we must rewrite the functions in assembly to stash values we need in registers prior to disabling the cache, avoiding the use of memory. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arun Chandran <achandran@mvista.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19  arm64: ignore DT memreserve entries when booting in UEFI mode  [Leif Lindholm, 1 file, -1/+3]
UEFI provides its own method for marking regions to reserve, via the memory map which is also used to initialise memblock. So when using the UEFI memory map, ignore any memreserve entries present in the DT. Reported-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-04  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 4 files, -26/+53]
Pull arm64 updates from Will Deacon:
"Once again, Catalin's off on holiday and I'm looking after the arm64 tree. Please can you pull the following arm64 updates for 3.17? Note that this branch also includes the new GICv3 driver (merged via a stable tag from Jason's irqchip tree), since there is a fix for older binutils on top. Changes include:

 - context tracking support (NO_HZ_FULL) which narrowly missed 3.16
 - vDSO layout rework following Andy's work on x86
 - TEXT_OFFSET fuzzing for bootloader testing
 - /proc/cpuinfo tidy-up
 - preliminary work to support 48-bit virtual addresses, but this is currently disabled until KVM has been ported to use it (the patches do, however, bring some nice clean-up)
 - boot-time CPU sanity checks (especially useful on heterogeneous systems)
 - support for syscall auditing
 - support for CC_STACKPROTECTOR
 - defconfig updates"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (55 commits)
  arm64: add newline to I-cache policy string
  Revert "arm64: dmi: Add SMBIOS/DMI support"
  arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
  arm64: don't call break hooks for BRK exceptions from EL0
  arm64: defconfig: enable devtmpfs mount option
  arm64: vdso: fix build error when switching from LE to BE
  arm64: defconfig: add virtio support for running as a kvm guest
  arm64: gicv3: Allow GICv3 compilation with older binutils
  arm64: fix soft lockup due to large tlb flush range
  arm64/crypto: fix makefile rule for aes-glue-%.o
  arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL
  arm64: Fix barriers used for page table modifications
  arm64: Add support for 48-bit VA space with 64KB page configuration
  arm64: asm/pgtable.h pmd/pud definitions clean-up
  arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS
  arm64: Clean up the initial page table creation in head.S
  arm64: Remove asm/pgtable-*level-types.h files
  arm64: Remove asm/pgtable-*level-hwdef.h files
  arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
  arm64: mm: Implement 4 levels of translation tables
  ...
2014-07-23  arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS  [Catalin Marinas, 1 file, -7/+15]
Rather than guessing what the maximum vmmemap space should be, this patch allows the calculation based on the VA_BITS and sizeof(struct page). The vmalloc space extends to the beginning of the vmemmap space. Since the virtual kernel memory layout now depends on the build configuration, this patch removes the detailed description in Documentation/arm64/memory.txt in favour of information printed during kernel booting. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
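The resulting definitions derive everything from VA_BITS; approximately, from asm/pgtable.h (a close paraphrase of the patch):

	/* one struct page per linear-map page, rounded up to a PUD boundary */
	#define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * \
				      sizeof(struct page), PUD_SIZE)

	#define VMALLOC_START	(UL(0xffffffffffffffff) << VA_BITS)
	#define VMALLOC_END	(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)

	#define vmemmap		((struct page *)(VMALLOC_END + SZ_64K))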
2014-07-23  arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS  [Catalin Marinas, 1 file, -2/+2]
Rather than having several Kconfig options, define int ARM64_PGTABLE_LEVELS which will be also useful in converting some of the pgtable macros. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23  arm64: mm: Implement 4 levels of translation tables  [Jungseok Lee, 3 files, -4/+17]
This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1], due to the following issue. It is a restriction that the kernel logical memory map with 4KB + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since the __phys_to_virt calculation for this region overflows the address space. If a SoC design follows the document [1], more than 32GB of RAM would be placed from 544GB. Even a 64GB system is supposed to use the region from 544GB to 576GB for only 32GB of RAM. Naturally, this leads to enabling 4 levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables be enabled only if the memory map is too sparse or there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com> Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Steve Capper <steve.capper@linaro.org> [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE] [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels] [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23  arm64: Do not initialise the fixmap page tables in head.S  [Catalin Marinas, 1 file, -8/+18]
The early_ioremap_init() function already handles fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23  arm64: Create non-empty ZONE_DMA when DRAM starts above 4GB  [Catalin Marinas, 1 file, -4/+13]
ZONE_DMA is created to allow 32-bit only devices to access memory in the absence of an IOMMU. On systems where the memory starts above 4GB, it is expected that some devices have a DMA offset hardwired to be able to access the bottom of the memory. Linux currently supports DT bindings for the DMA offsets but they are not (easily) available early during boot. This patch tries to guess a DMA offset and assumes that ZONE_DMA corresponds to the 32-bit mask above the start of DRAM. Fixes: 2d5a5612bc (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA) Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Mark Salter <msalter@redhat.com> Tested-by: Mark Salter <msalter@redhat.com> Tested-by: Anup Patel <anup.patel@linaro.org>
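The guess is implemented as (close to the actual patch):

	static phys_addr_t max_zone_dma_phys(void)
	{
		/*
		 * Assume the DMA offset is the 32-bit mask placed at the
		 * start of DRAM, so ZONE_DMA covers the first 4GB of RAM.
		 */
		phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);

		return min(offset + (1ULL << 32), memblock_end_of_DRAM());
	}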
2014-07-10  arm64: place initial page tables above the kernel  [Mark Rutland, 1 file, -8/+4]
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved. To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables here. This patch moves the initial page table to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-08  arm64: export __cpu_{clear,copy}_user_page functions  [Mark Salter, 1 file, -0/+2]
The __cpu_clear_user_page() and __cpu_copy_user_page() functions are not currently exported. This prevents modules from using clear_user_page() and copy_user_page(). Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-04  arm64: mm: Make icache synchronisation logic huge page aware  [Steve Capper, 1 file, -1/+2]
The __sync_icache_dcache routine will only flush the dcache for the first page of a compound page, potentially leading to stale icache data residing further on in a hugetlb page. This patch addresses this issue by taking into consideration the order of the page when flushing the dcache. Reported-by: Mark Brown <broonie@linaro.org> Tested-by: Mark Brown <broonie@linaro.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@vger.kernel.org> # v3.11+
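After the fix, __sync_icache_dcache() flushes the whole compound page, roughly (a sketch based on arch/arm64/mm/flush.c; details may differ from the exact patch):

	void __sync_icache_dcache(pte_t pte, unsigned long addr)
	{
		struct page *page = pte_page(pte);

		/* no flushing needed for anonymous pages */
		if (!page_mapping(page))
			return;

		if (!test_and_set_bit(PG_dcache_clean, &page->flags)) {
			/* was plain PAGE_SIZE: stale lines beyond the head page */
			__flush_dcache_area(page_address(page),
					    PAGE_SIZE << compound_order(page));
			__flush_icache_all();
		} else if (icache_is_aivivt()) {
			__flush_icache_all();
		}
	}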
2014-06-18  arm64: Limit the CMA buffer to 32-bit if ZONE_DMA  [Catalin Marinas, 1 file, -2/+8]
When the CMA buffer is allocated, it is too early to know whether devices will require ZONE_DMA memory. This patch limits the CMA buffer to (DMA_BIT_MASK(32) + 1) if CONFIG_ZONE_DMA is enabled. In addition, it computes the dma_to_phys(DMA_BIT_MASK(32)) before the increment (no current functional change). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-06-06  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next  [Linus Torvalds, 7 files, -118/+40]
Pull arm64 updates from Catalin Marinas:

 - Optimised assembly string/memory routines (based on the AArch64 Cortex Strings library contributed to glibc but re-licensed under GPLv2)
 - Optimised crypto algorithms making use of the ARMv8 crypto extensions (together with kernel API for using FPSIMD instructions in interrupt context)
 - Ftrace support
 - CPU topology parsing from DT
 - ESR_EL1 (Exception Syndrome Register) exposed to user space signal handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
 - 1GB section linear mapping if applicable
 - Barriers usage clean-up
 - Default pgprot clean-up

Conflicts as per Catalin.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
  arm64: kernel: initialize broadcast hrtimer based clock event device
  arm64: ftrace: Add system call tracepoint
  arm64: ftrace: Add CALLER_ADDRx macros
  arm64: ftrace: Add dynamic ftrace support
  arm64: Add ftrace support
  ftrace: Add arm64 support to recordmcount
  arm64: Add 'notrace' attribute to unwind_frame() for ftrace
  arm64: add __ASSEMBLY__ in asm/insn.h
  arm64: Fix linker script entry point
  arm64: lib: Implement optimized string length routines
  arm64: lib: Implement optimized string compare routines
  arm64: lib: Implement optimized memcmp routine
  arm64: lib: Implement optimized memset routine
  arm64: lib: Implement optimized memmove routine
  arm64: lib: Implement optimized memcpy routine
  arm64: defconfig: enable a few more common/useful options in defconfig
  ftrace: Make CALLER_ADDRx macros more generic
  arm64: Fix deadlock scenario with smp_send_stop()
  arm64: Fix machine_shutdown() definition
  arm64: Support arch_irq_work_raise() via self IPIs
  ...
2014-06-06  Merge branch 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next  [Linus Torvalds, 1 file, -18/+47]
Pull ARM64 EFI update from Peter Anvin:
"By agreement with the ARM64 EFI maintainers, we have agreed to make -tip the upstream for all EFI patches. That is why this patchset comes from me :) This patchset enables EFI stub support for ARM64, like we already have on x86"

* 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64: efi: only attempt efi map setup if booting via EFI
  efi/arm64: ignore dtb= when UEFI SecureBoot is enabled
  doc: arm64: add description of EFI stub support
  arm64: efi: add EFI stub
  doc: arm: add UEFI support documentation
  arm64: add EFI runtime services
  efi: Add shared FDT related functions for ARM/ARM64
  arm64: Add function to create identity mappings
  efi: add helper function to get UEFI params from FDT
  doc: efi-stub.txt updates for ARM
  lib: add fdt_empty_tree.c
2014-06-05  hugetlb: restrict hugepage_migration_support() to x86_64  [Naoya Horiguchi, 1 file, -5/+0]
Currently hugepage migration is available for all archs which support pmd-level hugepages, but testing is done only for x86_64 and there are bugs on other archs. So to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking if hugepage migration is supported in vma_migratable(). Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Miller <davem@davemloft.net> Cc: <stable@vger.kernel.org> [3.12+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-06-04  Merge tag 'devicetree-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux into next  [Linus Torvalds, 1 file, -21/+0]
Pull DeviceTree updates from Rob Herring:

 - Another round of clean-up of FDT related code in architecture code. This removes knowledge of internal FDT details from most architectures except powerpc.
 - Conversion of kernel's custom FDT parsing code to use libfdt.
 - DT based initialization for generic serial earlycon. The introduction of generic serial earlycon support went in through the tty tree.
 - Improve the platform device naming for DT probed devices to ensure unique naming and use parent names instead of a global index.
 - Fix a race condition in of_update_property.
 - Unify the various linker section OF match tables and fix several function prototype errors.
 - Update platform_get_irq_byname to work in deferred probe cases.
 - 2 binding doc updates

* tag 'devicetree-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (58 commits)
  of: handle NULL node in next_child iterators
  of/irq: provide more wrappers for !CONFIG_OF
  devicetree: bindings: Document micrel vendor prefix
  dt: bindings: dwc2: fix required value for the phy-names property
  of_pci_irq: kill useless variable in of_irq_parse_pci()
  of/irq: do irq resolution in platform_get_irq_byname()
  of: Add a testcase for of_find_node_by_path()
  of: Make of_find_node_by_path() handle /aliases
  of: Create unlocked version of for_each_child_of_node()
  lib: add glibc style strchrnul() variant
  of: Handle memory@0 node on PPC32 only
  pci/of: Remove dead code
  of: fix race between search and remove in of_update_property()
  of: Use NULL for pointers
  of: Stop naming platform_device using dcr address
  of: Ensure unique names without sacrificing determinism
  tty/serial: pl011: add DT based earlycon support
  of/fdt: add FDT serial scanning for earlycon
  of/fdt: add FDT address translation support
  serial: earlycon: add DT support
  ...
2014-05-16  arm64: fix pud_huge() for 2-level pagetables  [Mark Salter, 1 file, -0/+4]
The following happens when trying to run a kvm guest on a kernel configured for 64k pages. This doesn't happen with 4k pages:

BUG: failure at include/linux/mm.h:297/put_page_testzero()!
Kernel panic - not syncing: BUG!
CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF 3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1
Call trace:
[<fffffe0000096034>] dump_backtrace+0x0/0x16c
[<fffffe00000961b4>] show_stack+0x14/0x1c
[<fffffe000066e648>] dump_stack+0x84/0xb0
[<fffffe0000668678>] panic+0xf4/0x220
[<fffffe000018ec78>] free_reserved_area+0x0/0x110
[<fffffe000018edd8>] free_pages+0x50/0x88
[<fffffe00000a759c>] kvm_free_stage2_pgd+0x30/0x40
[<fffffe00000a5354>] kvm_arch_destroy_vm+0x18/0x44
[<fffffe00000a1854>] kvm_put_kvm+0xf0/0x184
[<fffffe00000a1938>] kvm_vm_release+0x10/0x1c
[<fffffe00001edc1c>] __fput+0xb0/0x288
[<fffffe00001ede4c>] ____fput+0xc/0x14
[<fffffe00000d5a2c>] task_work_run+0xa8/0x11c
[<fffffe0000095c14>] do_notify_resume+0x54/0x58

In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra put_page() on the stage2 pgd, which leads to the BUG in put_page_testzero(). This happens because a pud_huge() test in unmap_range() returns true when it should always be false with 2-level page tables used by 64k pages. This patch removes support for huge puds if 2-level pagetables are being used. Signed-off-by: Mark Salter <msalter@redhat.com> [catalin.marinas@arm.com: removed #ifndef around PUD_SIZE check] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@vger.kernel.org> # v3.11+
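The fixed predicate, close to arch/arm64/mm/hugetlbpage.c:

	int pud_huge(pud_t pud)
	{
	#ifndef __PAGETABLE_PMD_FOLDED
		return !(pud_val(pud) & PUD_TABLE_BIT);
	#else
		/* 2-level (64K page) tables fold the pmd: no real puds exist */
		return 0;
	#endif
	}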
2014-05-16Revert "arm64: Introduce execute-only page access permissions"Catalin Marinas1-2/+3
This reverts commit bc07c2c6e9ed125d362af0214b6313dca180cb08. While the aim is increased security for --x memory maps, it does not protect against kernel level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-14  Merge branch 'dt-bus-name' into for-next  [Rob Herring, 2 files, -2/+36]
2014-05-09  arm64: mm: use inner-shareable barriers for inner-shareable maintenance  [Will Deacon, 2 files, -4/+4]
In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
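For example, flush_tlb_all() after this change (a sketch of asm/tlbflush.h):

	static inline void flush_tlb_all(void)
	{
		dsb(ishst);			/* was dsb(sy): full-system scope */
		asm("tlbi	vmalle1is");
		dsb(ish);			/* inner-shareable completion */
		isb();
	}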
2014-05-09  arm64: mm: Optimise tlb flush logic where we have >4K granule  [Steve Capper, 2 files, -72/+1]
The tlb maintenance functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take into consideration the page granule when looping through the address range, and repeatedly flush tlb entries for the same page when operating with 64K pages. This patch re-works the logic s.t. we instead advance the loop by 1 << (PAGE_SHIFT - 12), to avoid repeating ourselves. Also the routines have been converted from assembler to static inline functions to aid with legibility and potential compiler optimisations. The isb() has been removed from flush_tlb_kernel_range() as it is only needed when changing the execute permission of a mapping. If one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range. Cc: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
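The reworked range flush, approximately (note the stride of 1 << (PAGE_SHIFT - 12): the tlbi address operand is expressed in 4K units, so this advances by exactly one page whatever the granule):

	static inline void flush_tlb_range(struct vm_area_struct *vma,
					   unsigned long start, unsigned long end)
	{
		unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48;
		unsigned long addr;

		start = asid | (start >> 12);
		end = asid | (end >> 12);

		dsb(ishst);
		/* one tlbi per page, regardless of the page size */
		for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
			asm("tlbi	vae1is, %0" : : "r"(addr));
		dsb(ish);
	}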
2014-05-09  arm64: mm: Create gigabyte kernel logical mappings where possible  [Steve Capper, 1 file, -1/+27]
We have the capability to map 1GB level 1 blocks when using a 4K granule. This patch adjusts the create_mapping logic s.t. when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This reduces both the number of levels of lookup required to resolve a kernel logical address and the TLB pressure on cores that support 1GB TLB entries. Signed-off-by: Steve Capper <steve.capper@linaro.org> Tested-by: Jungseok Lee <jays.lee@samsung.com> [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
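A simplified sketch of the adjusted alloc_init_pud() (the real patch also handles rewriting an existing table entry; this paraphrase shows only the alignment test and the block write):

	static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
					  unsigned long end, phys_addr_t phys)
	{
		pud_t *pud = pud_offset(pgd, addr);
		unsigned long next;

		do {
			next = pud_addr_end(addr, end);

			/*
			 * For the 4K granule only, put down a 1GB block when
			 * VA, the range boundary and PA are all 1GB aligned.
			 */
			if ((PAGE_SHIFT == 12) &&
			    ((addr | next | phys) & ~PUD_MASK) == 0)
				set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
			else
				alloc_init_pmd(pud, addr, next, phys);

			phys += next - addr;
		} while (pud++, addr = next, addr != end);
	}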