path: root/arch
Age  Commit message  Author  Files  Lines
2017-02-24  arm64/cpufeature: check correct field width when updating sys_val  Mark Rutland  1  -4/+10
When we're updating a register's sys_val, we use arm64_ftr_value() to find the new field value. This uses cpuid_feature_extract_field() to find the new value, but that implicitly assumes a 4-bit field, so we may extract more bits than we mean to for fields like CTR_EL0.L1ip. This affects update_cpu_ftr_reg(), where we may extract erroneous values for ftr_cur and ftr_new. Depending on the additional bits extracted in either case, we may erroneously detect that the value is mismatched, and we'll try to compute a new safe value. Dependent on these extra bits and feature type, arm64_ftr_safe_value() may pessimistically select the always-safe value, or may erroneously choose either the extracted cur or new value as the safe option. The extra bits will subsequently be masked out in arm64_ftr_set_value(), so we may choose a higher value, yet write back a lower one. Fix this by passing the width down explicitly in arm64_ftr_value(), so we always extract the correct amount. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
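As a rough illustration of the problem (a self-contained sketch, not the kernel's exact helpers): CTR_EL0.L1Ip is a 2-bit field at bit 14, so a hard-coded 4-bit width would also pick up bits [17:16], the low bits of the neighbouring DminLine field, and two CPUs with identical L1Ip values could then look mismatched:

  #include <stdint.h>

  /* Sketch only: extract an unsigned field of 'width' bits starting at 'shift'. */
  static inline uint64_t extract_field_width(uint64_t reg, int shift, int width)
  {
      return (reg << (64 - shift - width)) >> (64 - width);
  }

  static uint64_t ctr_l1ip(uint64_t ctr_el0)
  {
      /* Passing the real width (2) avoids reading into DminLine. */
      return extract_field_width(ctr_el0, 14, 2);
  }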
2017-02-24  Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate"  Mark Rutland  1  -30/+4
This reverts commit 0bfc445dec9dd8130d22c9f4476eed7598524129. When we change the permissions of regions mapped using contiguous entries, the architecture requires us to follow a Break-Before-Make strategy, breaking *all* associated entries before we can change any of the following properties of the entries:

 - presence of the contiguous bit
 - output address
 - attributes
 - permissions

Failure to do so can result in a number of problems (e.g. TLB conflict aborts and/or erroneous results from TLB lookups). See ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit", page D4-1762. We do not take this into account when altering the permissions of kernel segments in mark_rodata_ro(), where we change the permissions of live contiguous entries one-by-one, leaving them transiently inconsistent. This has been observed to result in failures on some fast model configurations. Unfortunately, we cannot follow Break-Before-Make here as we'd have to unmap the kernel text and data used to perform the sequence. For the time being, revert commit 0bfc445dec9dd813 so as to avoid issues resulting from this misuse of the contiguous bit. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <Will.Deacon@arm.com> Cc: stable@vger.kernel.org # v4.10 Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-24  arm64: Avoid clobbering mm in erratum workaround on QDF2400  Shanker Donthineni  1  -1/+1
Commit 38fd94b0275c ("arm64: Work around Falkor erratum 1003") tried to work around a hardware erratum, but actually caused a system crash of its own during switch_mm:

  cpu_do_switch_mm+0x20/0x40
  efi_virtmap_load+0x34/0x40
  virt_efi_get_next_variable+0x64/0xc8
  efivar_init+0x8c/0x348
  efisubsys_init+0xd4/0x270
  do_one_initcall+0x80/0x110
  kernel_init_freeable+0x19c/0x240
  kernel_init+0x10/0x100
  ret_from_fork+0x10/0x50
  Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

In cpu_do_switch_mm, x1 contains the mm_struct pointer, which needs to be preserved by the pre_ttbr0_update_workaround macro rather than passed as a temporary. This patch clobbers x2 and x3 instead, keeping the mm_struct intact after the workaround has run. Fixes: 38fd94b0275c ("arm64: Work around Falkor erratum 1003") Tested-by: Manoj Iyer <manoj.iyer@canonical.com> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-15  arm64/kprobes: consistently handle MRS/MSR with XZR  Mark Rutland  1  -12/+6
Now that we have XZR-safe helpers for fiddling with registers, use these in the arm64 kprobes code rather than open-coding the logic. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-15  arm64: cpufeature: correctly handle MRS to XZR  Mark Rutland  1  -1/+1
In emulate_mrs() we may erroneously write back to the user SP rather than XZR if we trap an MRS instruction where Xt == 31. Use the new pt_regs_write_reg() helper to handle this correctly. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: 77c97b4ee21290f5 ("arm64: cpufeature: Expose CPUID registers by emulation") Cc: Andre Przywara <andre.przywara@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-15  arm64: traps: correctly handle MRS/MSR with XZR  Mark Rutland  1  -2/+4
Currently we hand-roll XZR-safe register handling in user_cache_maint_handler(), though we forget to do the same in ctr_read_handler(), and may erroneously write back to the user SP rather than XZR. Use the new helpers to handle these cases correctly and consistently. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: 116c81f427ff6c53 ("arm64: Work around systems with mismatched cache line sizes") Cc: Andre Przywara <andre.przywara@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-15  arm64: ptrace: add XZR-safe regs accessors  Mark Rutland  1  -0/+20
In A64, XZR and the SP share the same encoding (31), and whether an instruction accesses XZR or SP for a particular register parameter depends on the definition of the instruction. We store the SP in pt_regs::regs[31], and thus when emulating instructions, we must be careful to not erroneously read from or write back to the saved SP. Unfortunately, we often fail to be this careful. In all cases, instructions using a transfer register parameter Xt use this to refer to XZR rather than SP. This patch adds helpers so that we can more easily and consistently handle these cases. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
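A minimal sketch of helpers with the behaviour described above (the related commits in this log refer to them as pt_regs_read_reg()/pt_regs_write_reg(); the struct here is pared down for illustration):

  #include <stdint.h>

  struct pt_regs { uint64_t regs[32]; };    /* regs[31] holds the saved SP */

  /* Encoding 31 names XZR for transfer registers, so reads must yield zero
   * and writes must be dropped rather than touching the saved SP. */
  static inline uint64_t pt_regs_read_reg(const struct pt_regs *regs, int r)
  {
      return (r == 31) ? 0 : regs->regs[r];
  }

  static inline void pt_regs_write_reg(struct pt_regs *regs, int r, uint64_t val)
  {
      if (r != 31)
          regs->regs[r] = val;
  }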
2017-02-15  arm64: include asm/assembler.h in entry-ftrace.S  Arnd Bergmann  1  -0/+1
In a randconfig build I ran into this build error:

  arch/arm64/kernel/entry-ftrace.S: Assembler messages:
  arch/arm64/kernel/entry-ftrace.S:101: Error: unknown mnemonic `ldr_l' -- `ldr_l x2,ftrace_trace_function'

The macro is defined in asm/assembler.h, so we should include that file. Fixes: 829d2bd13392 ("arm64: entry-ftrace.S: avoid open-coded {adr,ldr}_l") Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-15  arm64: fix warning about swapper_pg_dir overflow  Arnd Bergmann  1  -1/+1
With 4 levels of 16KB pages, we get this warning about the fact that we are copying a whole page into an array that is declared as having only two pointers for the top level of the page table:

  arch/arm64/mm/mmu.c: In function 'paging_init':
  arch/arm64/mm/mmu.c:528:2: error: 'memcpy' writing 16384 bytes into a region of size 16 overflows the destination [-Werror=stringop-overflow=]

This is harmless since we actually reserve a whole page in the definition the array comes from; it is only the extern declaration that is short. The pgdir is initialized to zero either way, so copying the actual entries here seems like the best solution. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-10  arm64: Work around Falkor erratum 1003  Christopher Covington  7  -2/+71
The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum is triggered, page table entries using the new translation table base address (BADDR) will be allocated into the TLB using the old ASID. All circumstances leading to the incorrect ASID being cached in the TLB arise when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory operation is in the process of performing a translation using the specific TTBRx_EL1 being written, and the memory operation uses a translation table descriptor designated as non-global. EL2 and EL3 code changing the EL1&0 ASID is not subject to this erratum because hardware is prohibited from performing translations from an out-of-context translation regime.

Consider the following pseudo code:

  write new BADDR and ASID values to TTBRx_EL1

Replacing the above sequence with the one below will ensure that no TLB entries with an incorrect ASID are used by software:

  write reserved value to TTBRx_EL1[ASID]
  ISB
  write new value to TTBRx_EL1[BADDR]
  ISB
  write new value to TTBRx_EL1[ASID]
  ISB

When the above sequence is used, page table entries using the new BADDR value may still be incorrectly allocated into the TLB using the reserved ASID. Yet this will not reduce functionality, since TLB entries incorrectly tagged with the reserved ASID will never be hit by a later instruction. Based on work by Shanker Donthineni <shankerd@codeaurora.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-09  arm64: head.S: Enable EL1 (host) access to SPE when entered at EL2  Will Deacon  1  -4/+15
The SPE architecture requires each exception level to enable access to the SPE controls for the exception level below it, since additional context-switch logic may be required to handle the buffer safely. This patch allows EL1 (host) access to the SPE controls when entered at EL2. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-09  arm64: use is_vmalloc_addr  Miles Chen  1  -1/+1
Use is_vmalloc_addr() to check whether an address is a vmalloc address, instead of checking VMALLOC_START and VMALLOC_END manually. Signed-off-by: Miles Chen <miles.chen@mediatek.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-09  arm64: use linux/sizes.h for constants  Miles Chen  1  -2/+2
Use linux/sizes.h to improve code readability. Signed-off-by: Miles Chen <miles.chen@mediatek.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-08  arm64: uaccess: consistently check object sizes  Mark Rutland  1  -2/+2
Currently in arm64's copy_{to,from}_user, we only check the source/destination object size if access_ok() tells us the user access is permissible. However, in copy_from_user() we'll subsequently zero any remainder on the destination object. If we failed the access_ok() check, that applies to the whole object size, which we didn't check. To ensure that we catch that case, this patch hoists check_object_size() to the start of copy_from_user(), matching __copy_from_user() and __copy_to_user(). To make all of our uaccess copy primitives consistent, the same is done to copy_to_user(). Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
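A simplified sketch of the resulting shape of copy_from_user() (based on the behaviour described above; details of the era's arm64 uaccess wrappers are elided):

  static inline unsigned long copy_from_user(void *to, const void __user *from,
                                             unsigned long n)
  {
      /* Check the destination object before access_ok(), so a failed
       * permission check still validates the size we go on to zero. */
      check_object_size(to, n, false);
      if (access_ok(VERIFY_READ, from, n))
          n = __arch_copy_from_user(to, from, n);
      else
          memset(to, 0, n);    /* zero the whole destination on failure */
      return n;
  }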
2017-02-08  arm64: remove wrong CONFIG_PROC_SYSCTL ifdef  Juri Lelli  1  -2/+0
The sysfs cpu_capacity entry for each CPU has nothing to do with PROC_FS, nor is it in the /proc/sys path. Remove the ifdef. Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Reported-and-suggested-by: Sudeep Holla <sudeep.holla@arm.com> Fixes: be8f185d8af4 ('arm64: add sysfs cpu_capacity attribute') Signed-off-by: Juri Lelli <juri.lelli@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-06  arm64: do not trace atomic operations  Pratyush Anand  1  -1/+1
Atomic operation function symbols are exported when CONFIG_ARM64_LSE_ATOMICS is defined. Prefix them with notrace, so that a user cannot trace these functions; tracing them causes a kernel crash. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-06  arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA  Ard Biesheuvel  1  -0/+4
The NUMA code may get confused by the presence of NOMAP regions within zones, resulting in spurious BUG() checks where the node id deviates from the containing zone's node id. Since the kernel has no business reasoning about node ids of pages it does not own in the first place, enable CONFIG_HOLES_IN_ZONE to ensure that such pages are disregarded. Acked-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-03  arm: perf: use builtin_platform_driver  Geliang Tang  3  -15/+3
Use builtin_platform_driver() helper to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-03  firmware: qcom: scm: Fix interrupted SCM calls  Andy Gross  1  -1/+8
This patch adds a Qualcomm specific quirk to the arm_smccc_smc call. On Qualcomm ARM64 platforms, the SMC call can return before it has completed. If this occurs, the call can be restarted, but it requires using the returned session ID value from the interrupted SMC call. The quirk stores off the session ID from the interrupted call in the quirk structure so that it can be used by the caller. This patch folds in a fix given by Sricharan R: https://lkml.org/lkml/2016/9/28/272 Signed-off-by: Andy Gross <andy.gross@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-03  arm: kernel: Add SMC structure parameter  Andy Gross  5  -18/+25
This patch adds a quirk parameter to the arm_smccc_(smc/hvc) calls. The quirk structure allows for specialized SMC operations due to SoC specific requirements. The current arm_smccc_(smc/hvc) is renamed and macros are used instead to specify the standard arm_smccc_(smc/hvc) or the arm_smccc_(smc/hvc)_quirk function. This patch and partial implementation was suggested by Will Deacon. Signed-off-by: Andy Gross <andy.gross@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
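A hedged sketch of the rename-plus-macro pattern being described (argument lists abbreviated; only the structure of the calls is meant to be accurate):

  struct arm_smccc_res;
  struct arm_smccc_quirk;    /* carries SoC-specific state, e.g. a session ID */

  /* The real function takes the full a0..a7 argument list; abbreviated here. */
  void __arm_smccc_smc(unsigned long a0, unsigned long a1,
                       struct arm_smccc_res *res, struct arm_smccc_quirk *quirk);

  /* Standard callers keep the old name and pass no quirk ... */
  #define arm_smccc_smc(...)          __arm_smccc_smc(__VA_ARGS__, NULL)
  /* ... while quirk-aware callers name the variant explicitly. */
  #define arm_smccc_smc_quirk(...)    __arm_smccc_smc(__VA_ARGS__)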
2017-02-03  efi: arm64: Add vmlinux debug link to the Image binary  Ard Biesheuvel  3  -1/+58
When building with debugging symbols, take the absolute path to the vmlinux binary and add it to the special PE/COFF debug table entry. This allows a debug EFI build to find the vmlinux binary, which is very helpful in debugging, given that the offset where the Image is first loaded by EFI is highly unpredictable. On implementations of UEFI that choose to implement it, this information is exposed via the EFI debug support table, which is a UEFI configuration table that is accessible both by the firmware at boot time and by the OS at runtime, and lists all PE/COFF images loaded by the system. The format of the NB10 Codeview entry is based on the definition used by EDK2, which is our primary reference when it comes to the use of PE/COFF in the context of UEFI firmware. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [will: use realpath instead of shell invocation, as discussed on list] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-02  arm64: ensure __raw_read_system_reg() is self-consistent  Mark Rutland  1  -30/+34
We recently discovered that __raw_read_system_reg() erroneously mapped sysreg IDs to the wrong registers. To ensure that we don't get hit by a similar issue in future, this patch makes __raw_read_system_reg() use a macro for each case statement, ensuring that each case reads the correct register. To ensure that this patch hasn't introduced an issue, I've binary-diffed the object files before and after this patch. No code or data sections differ (though some debug sections differ due to line numbering changes). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
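A sketch of the one-macro-per-case idea (names are illustrative; the point is that the case label and the register actually read can no longer diverge):

  /* Tie the case label and the sysreg that is read to the same token. */
  #define read_sysreg_case(r)    \
      case r: return read_sysreg_s(r)

  static u64 __raw_read_system_reg(u32 sys_id)
  {
      switch (sys_id) {
      read_sysreg_case(SYS_ID_AA64PFR0_EL1);
      read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
      /* ... one line per supported ID register ... */
      default:
          BUG();
          return 0;
      }
  }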
2017-02-02  arm64: fix erroneous __raw_read_system_reg() cases  Mark Rutland  1  -3/+3
Since it was introduced in commit da8d02d19ffdd201 ("arm64/capabilities: Make use of system wide safe value"), __raw_read_system_reg() has erroneously mapped some sysreg IDs to other registers. For the fields in ID_ISAR5_EL1, our local feature detection will be erroneous. We may spuriously detect that a feature is uniformly supported, or may fail to detect when it actually is, meaning some compat hwcaps may be erroneous (or not enforced upon hotplug). This patch corrects the erroneous entries. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: da8d02d19ffdd201 ("arm64/capabilities: Make use of system wide safe value") Reported-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-02  arm64: KVM: Save/restore the host SPE state when entering/leaving a VM  Will Deacon  5  -4/+95
The SPE buffer is virtually addressed, using the page tables of the CPU MMU. Unusually, this means that the EL0/1 page table may be live whilst we're executing at EL2 on non-VHE configurations. When VHE is in use, we can use the same property to profile the guest behind its back. This patch adds the relevant disabling and flushing code to KVM so that the host can make use of SPE without corrupting guest memory, and any attempts by a guest to use SPE will result in a trap. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Alex Bennée <alex.bennee@linaro.org> Cc: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-02  arm64: make use of for_each_node_by_type()  Dmitry Torokhov  1  -2/+2
Instead of open-coding the loop, let's use the canned macro. Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-01  arm64: Work around Falkor erratum 1009  Christopher Covington  4  -4/+36
During a TLB invalidate sequence targeting the inner shareable domain, Falkor may prematurely complete the DSB before all loads and stores using the old translation are observed. Instruction fetches are not subject to the conditions of this erratum. If the original code sequence includes multiple TLB invalidate instructions followed by a single DSB, only one of the TLB instructions needs to be repeated to work around this erratum. While the erratum only applies to cases in which the TLBI specifies the inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or stronger (OSH, SYS), this change applies the workaround more broadly, covering local TLBI, DSB NSH sequences as well, for simplicity. Based on work by Shanker Donthineni <shankerd@codeaurora.org> Signed-off-by: Christopher Covington <cov@codeaurora.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-31  arm64: Improve detection of user/non-user mappings in set_pte(_at)  Catalin Marinas  1  -6/+9
Commit cab15ce604e5 ("arm64: Introduce execute-only page access permissions") allowed a valid user PTE to have the PTE_USER bit clear. As a consequence, the pte_valid_not_user() macro in set_pte() was replaced with pte_valid_global() under the assumption that only user pages have the nG bit set. EFI mappings, however, also have the nG bit set and set_pte() wrongly ignores issuing the DSB+ISB. This patch reinstates the pte_valid_not_user() macro and adds the PTE_UXN bit check since all kernel mappings have this bit set. For clarity, pte_exec() is renamed to pte_user_exec() as it only checks for the absence of PTE_UXN. Consequently, the user executable check in set_pte_at() drops the pte_ng() test since pte_user_exec() is sufficient. Fixes: cab15ce604e5 ("arm64: Introduce execute-only page access permissions") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
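A sketch of the reinstated macro as described (kernel mappings always have PTE_UXN set and PTE_USER clear, so "valid and not user" requires valid + UXN set + USER clear):

  #define pte_valid_not_user(pte) \
      ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))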
2017-01-27  arm64: handle sys and undef traps consistently  Mark Rutland  1  -1/+6
If an EL0 instruction in the SYS class triggers an exception, do_sysinstr looks for a sys64_hook matching the instruction, and if none is found, injects a SIGILL. This mirrors what we do for undefined instruction encodings in do_undefinstr, where we look for an undef_hook matching the instruction, and if none is found, inject a SIGILL. Over time, new SYS instruction encodings may be allocated. Prior to allocation, exceptions resulting from these would be handled by do_undefinstr, whereas after allocation these may be handled by do_sysinstr. To ensure that we have consistent behaviour if and when this happens, it would be beneficial to have do_sysinstr fall back to do_undefinstr. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
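A sketch of the fall-back shape described above (hook matching simplified; names follow the commit text):

  asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs)
  {
      const struct sys64_hook *hook;

      for (hook = sys64_hooks; hook->handler; hook++) {
          if ((esr & hook->esr_mask) == hook->esr_val) {
              hook->handler(esr, regs);
              return;
          }
      }

      /* No hook matched: treat the trap exactly like an undefined
       * instruction instead of injecting SIGILL directly. */
      do_undefinstr(regs);
  }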
2017-01-27  arm64: Use __tlbi() macros in KVM code  Christopher Covington  1  -6/+7
Refactor the KVM code to use the __tlbi macros, which will allow an errata workaround that repeats tlbi dsb sequences to only change one location. This is not intended to change the generated assembly and comparing before and after vmlinux objdump shows no functional changes. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-27  arm64: Define Falkor v1 CPU  Shanker Donthineni  1  -0/+4
Define the MIDR implementer and part number field values for the Qualcomm Datacenter Technologies Falkor processor version 1 in the usual manner. Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org> Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-26  arm64: dma-mapping: Fix dma_mapping_error() when bypassing SWIOTLB  Robin Murphy  1  -1/+8
When bypassing SWIOTLB on small-memory systems, we need to avoid calling into swiotlb_dma_mapping_error() in exactly the same way as we avoid swiotlb_dma_supported(), because the former also relies on SWIOTLB state being initialised. Under the assumptions for which we skip SWIOTLB, dma_map_{single,page}() will only ever return the DMA-offset-adjusted physical address of the page passed in, thus we can report success unconditionally. Fixes: b67a8b29df7e ("arm64: mm: only initialize swiotlb when necessary") CC: stable@vger.kernel.org CC: Jisheng Zhang <jszhang@marvell.com> Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
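A sketch of the fix as described (the 'swiotlb' flag stands in for the arm64 bookkeeping of whether SWIOTLB was initialised; treat it as illustrative):

  static int __swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t addr)
  {
      if (swiotlb)
          return swiotlb_dma_mapping_error(hwdev, addr);
      /* SWIOTLB bypassed: the map routines can only have returned an
       * offset-adjusted physical address, so the mapping always succeeded. */
      return 0;
  }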
2017-01-26  arm64: Kconfig: select COMPAT_BINFMT_ELF only when BINFMT_ELF is set  Kefeng Wang  1  -1/+1
Fix warning: "(COMPAT) selects COMPAT_BINFMT_ELF which has unmet direct dependencies (COMPAT && BINFMT_ELF)" Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-26  arm64: kernel: do not mark reserved memory regions as IORESOURCE_BUSY  Ard Biesheuvel  1  -1/+1
Memory regions marked as NOMAP should not be used for general allocation by the kernel, and should not even be covered by the linear mapping (hence the name). However, drivers or other subsystems (such as ACPI) that access the firmware directly may legally access them, which means it is also reasonable for such drivers to claim them by invoking request_resource(). Currently, this is prevented by the fact that arm64's request_standard_resources() marks reserved regions as IORESOURCE_BUSY. So drop the IORESOURCE_BUSY flag from these requests. Reported-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-26  arm64: Use __pa_symbol for empty_zero_page  Geert Uytterhoeven  1  -1/+1
If CONFIG_DEBUG_VIRTUAL=y and CONFIG_ARM64_SW_TTBR0_PAN=y:

  virt_to_phys used for non-linear address: ffffff8008cc0000 (empty_zero_page+0x0/0x1000)
  WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x28/0x60
  ...
  [<ffffff800809abb4>] __virt_to_phys+0x28/0x60
  [<ffffff8008a02600>] setup_arch+0x46c/0x4d4

Fixes: 2077be6783b5936c ("arm64: Use __pa_symbol for kernel symbols") Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-23  arm64: dma-mapping: Only swizzle DMA ops for IOMMU_DOMAIN_DMA  Will Deacon  1  -5/+12
The arm64 DMA-mapping implementation sets the DMA ops to the IOMMU DMA ops if we detect that an IOMMU is present for the master and the DMA ranges are valid. In the case when the IOMMU domain for the device is not of type IOMMU_DOMAIN_DMA, then we have no business swizzling the ops, since we're not in control of the underlying address space. This patch leaves the DMA ops alone for masters attached to non-DMA IOMMU domains. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-17  arm64: entry-ftrace.S: avoid open-coded {adr,ldr}_l  Mark Rutland  1  -8/+4
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of these in entry-ftrace.S, removing open-coded sequences using adrp. This results in a minor code change, since a temporary register is not used when generating the address for some symbols, but this is fine, as the value of the temporary register is not used elsewhere. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-17  arm64: efi-entry.S: avoid open-coded adr_l  Mark Rutland  1  -6/+3
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of these in efi-entry.S, removing open-coded sequences using adrp. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-17  arm64: head.S: avoid open-coded adr_l  Mark Rutland  1  -2/+1
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of adr_l in head.S, removing an open-coded sequence using adrp. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-17  arm64: cacheinfo: add support to override cache levels via device tree  Sudeep Holla  1  -1/+12
The cache hierarchy can be identified through the Cache Level ID (CLIDR) architected system register. However, in some cases it will provide only the number of cache levels that are integrated into the processor itself; in other words, it can't provide any information about caches that are external and/or transparent. Some platforms need to export information about all such external caches to userspace applications via the sysfs interface. This patch adds support for overriding the cache levels using the device tree, so that such external non-architected caches are taken into account. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Tested-by: Tan Xiaojun <tanxiaojun@huawei.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-13  arm64: errata: Provide macro for major and minor cpu revisions  Robert Richter  3  -11/+15
Definitions of cpu ranges are hard to read if the cpu variant is not zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware revision of a cpu, including variant and (minor) revision. Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
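A sketch of such a macro (bit positions per the arm64 MIDR_EL1 layout, Variant[23:20] and Revision[3:0]; treat the exact form as illustrative):

  /* Fold variant (major) and revision (minor) into one comparable value. */
  #define MIDR_CPU_VAR_REV(var, rev) \
      (((var) << MIDR_VARIANT_SHIFT) | (rev))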
2017-01-13  arm64: mm: use phys_addr_t instead of unsigned long in __map_memblock  Miles Chen  1  -2/+2
Cosmetic change to use phys_addr_t instead of unsigned long for the return value of __pa_symbol(). Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Miles Chen <miles.chen@mediatek.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: Advertise support for Rounding double multiply instructions  Suzuki K Poulose  3  -0/+3
The ARM v8.1 extensions add rounding double multiply add/subtract instructions to the A64 SIMD instruction set. Let userspace know about them via a HWCAP bit. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: Add support for DMA_ATTR_SKIP_CPU_SYNC attribute to swiotlb  Takeshi Kihara  1  -4/+8
This patch adds support for DMA_ATTR_SKIP_CPU_SYNC attribute for dma_{un}map_{page,sg} functions family to swiotlb. DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of the CPU cache for the given buffer assuming that it has been already transferred to 'device' domain. Ported from IOMMU .{un}map_{sg,page} ops. Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Takeshi Kihara <takeshi.kihara.df@renesas.com> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: Add support for CONFIG_DEBUG_VIRTUAL  Laura Abbott  4  -3/+61
x86 has an option CONFIG_DEBUG_VIRTUAL to do additional checks on virt_to_phys calls. The goal is to catch users who are calling virt_to_phys on non-linear addresses immediately. This includes callers using virt_to_phys on image addresses instead of __pa_symbol. As features such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly important. Add checks to catch bad virt_to_phys usage. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: Use __pa_symbol for kernel symbols  Laura Abbott  16  -57/+76
__pa_symbol is technically the macro that should be used for kernel symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL, which will do bounds checking. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
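An illustrative example of the distinction (the buffer argument here is hypothetical, not a call site from the patch):

  static void pa_examples(void *linear_mapped_buffer)
  {
      /* Kernel-image symbols may sit outside the linear map, so convert
       * them with __pa_symbol ... */
      phys_addr_t zero_page_phys = __pa_symbol(empty_zero_page);

      /* ... while linear-map addresses keep using __pa/virt_to_phys,
       * which DEBUG_VIRTUAL can then bounds-check. */
      phys_addr_t buf_phys = __pa(linear_mapped_buffer);

      (void)zero_page_phys;
      (void)buf_phys;
  }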
2017-01-12  arm64: Add cast for virt_to_pfn  Laura Abbott  1  -1/+1
virt_to_pfn lacks a cast at the top level. Don't rely on __virt_to_phys and explicitly cast to unsigned long. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: Move some macros under #ifndef __ASSEMBLY__  Laura Abbott  1  -19/+19
Several macros for various x_to_y exist outside the bounds of an __ASSEMBLY__ guard. Move them in preparation for support for CONFIG_DEBUG_VIRTUAL. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  Merge branch 'aarch64/for-next/debug-virtual' into aarch64/for-next/core  Will Deacon  1  -0/+1
Merge core DEBUG_VIRTUAL changes from Laura Abbott. Later arm and arm64 support depends on these.

* aarch64/for-next/debug-virtual:
  drivers: firmware: psci: Use __pa_symbol for kernel symbol
  mm/usercopy: Switch to using lm_alias
  mm/kasan: Switch to using __pa_symbol and lm_alias
  kexec: Switch to __pa_symbol
  mm: Introduce lm_alias
  mm/cma: Cleanup highmem check
  lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
2017-01-12  arm64: Documentation - Expose CPU feature registers  Suzuki K Poulose  1  -0/+4
Documentation for the infrastructure to expose CPU feature register by emulating MRS. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Dave Martin <dave.martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-12  arm64: cpufeature: Expose CPUID registers by emulation  Suzuki K Poulose  4  -0/+107
This patch adds the hook for emulating MRS instruction to export the 'user visible' value of supported system registers. We emulate only the following id space for system registers: Op0=3, Op1=0, CRn=0, CRm=[0, 4-7] The rest will fall back to SIGILL. This capability is also advertised via a new HWCAP_CPUID. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> [will: add missing static keyword to enable_mrs_emulation] Signed-off-by: Will Deacon <will.deacon@arm.com>
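As a rough self-contained check of the accepted encoding space described above (illustrative only, not the kernel's decode path):

  #include <stdbool.h>

  /* Accept Op0=3, Op1=0, CRn=0, CRm={0,4..7}; anything else gets SIGILL. */
  static bool mrs_id_space_emulated(unsigned int op0, unsigned int op1,
                                    unsigned int crn, unsigned int crm)
  {
      return op0 == 3 && op1 == 0 && crn == 0 &&
             (crm == 0 || (crm >= 4 && crm <= 7));
  }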