path: root/arch/arm64
2017-02-23  arm64: KVM: Take S1 walks into account when determining S2 write faults  [Will Deacon, 1 file, -5/+6]
commit 60e21a0ef54cd836b9eb22c7cb396989b5b11648 upstream. The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was generated by a read or a write instruction. For stage 2 data aborts generated by a stage 1 translation table walk (i.e. the actual page table access faults at EL2), the WnR bit therefore reports whether the instruction generating the walk was a load or a store, *not* whether the page table walker was reading or writing the entry. For page tables marked as read-only at stage 2 (e.g. due to KSM merging them with the tables from another guest), this could result in livelock, where a page table walk generated by a load instruction attempts to set the access flag in the stage 1 descriptor, but fails to trigger CoW in the host since only a read fault is reported. This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to take into account stage 2 faults in stage 1 walks. Since DBM cannot be disabled at EL2 for CPUs that implement it, we assume that these faults are always caused by writes, avoiding the livelock situation at the expense of occasional, spurious CoWs. We could, in theory, do a bit better by checking the guest TCR configuration and inspecting the page table to see why the PTE faulted. However, I doubt this is measurable in practice, and the threat of livelock is real. Cc: Julien Grall <julien.grall@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 3.16: - Keep using ESR_EL2_WNR in the first part of the condition - Adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
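The resulting helper is tiny; a minimal sketch of the reworked check (using the 3.16-era ESR_EL2_WNR name mentioned in the backport note, and the existing kvm_vcpu_dabt_iss1tw() helper) looks like this:

    /* Sketch: stage 2 faults taken on a stage 1 walk are assumed to be writes. */
    static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
    {
        return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_WNR) ||
               kvm_vcpu_dabt_iss1tw(vcpu);    /* AF/DBM update during a S1 walk */
    }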
2017-02-23  arm64: kernel: Init MDCR_EL2 even in the absence of a PMU  [Marc Zyngier, 1 file, -1/+2]
commit 850540351bb1a4fa5f192e5ce55b89928cc57f42 upstream. Commit f436b2ac90a0 ("arm64: kernel: fix architected PMU registers unconditional access") made sure we wouldn't access unimplemented PMU registers, but also left MDCR_EL2 uninitialized in that case, leading to trap bits being potentially left set. Make sure we always write something in that register. Fixes: f436b2ac90a0 ("arm64: kernel: fix architected PMU registers unconditional access") Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2017-02-23  arm64: debug: avoid resetting stepping state machine when TIF_SINGLESTEP  [Will Deacon, 1 file, -2/+4]
commit 3a402a709500c5a3faca2111668c33d96555e35a upstream. When TIF_SINGLESTEP is set for a task, the single-step state machine is enabled and we must take care not to reset it to the active-not-pending state if it is already in the active-pending state. Unfortunately, that's exactly what user_enable_single_step does, by unconditionally setting the SS bit in the SPSR for the current task. This causes failures in the GDB testsuite, where GDB ends up missing expected step traps if the instruction being stepped generates another trap, e.g. PTRACE_EVENT_FORK from an SVC instruction. This patch fixes the problem by preserving the current state of the stepping state machine when TIF_SINGLESTEP is set on the current thread. Reported-by: Yao Qi <yao.qi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
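The fix boils down to arming the SPSR single-step bit only when TIF_SINGLESTEP was not already set; a sketch (helper names as used by the arm64 debug-monitors code):

    void user_enable_single_step(struct task_struct *task)
    {
        struct thread_info *ti = task_thread_info(task);

        /* Don't knock an active-pending state machine back to active-not-pending. */
        if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
            set_regs_spsr_ss(task_pt_regs(task));
    }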
2016-11-20  arm64: perf: reject groups spanning multiple HW PMUs  [Suzuki K. Poulose, 1 file, -6/+15]
commit 8fff105e13041e49b82f92eef034f363a6b1c071 upstream. The perf core implicitly rejects events spanning multiple HW PMUs, as in these cases the event->ctx will differ. However this validation is performed after pmu::event_init() is called in perf_init_event(), and thus pmu::event_init() may be called with a group leader from a different HW PMU. The ARM64 PMU driver does not take this fact into account, and when validating groups assumes that it can call to_arm_pmu(event->pmu) for any HW event. When the event in question is from another HW PMU this is wrong, and results in dereferencing garbage. This patch updates the ARM64 PMU driver to first test for and reject events from other PMUs, moving the to_arm_pmu and related logic after this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with a CCI PMU present:
  Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
  CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
  Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
  task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
  PC is at 0x0
  LR is at validate_event+0x90/0xa8
  pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
  sp : ffffffc07b0a3ba0
  [< (null)>] (null)
  [<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
  [<ffffffc00015d870>] perf_try_init_event+0x34/0x70
  [<ffffffc000164094>] perf_init_event+0xe0/0x10c
  [<ffffffc000164348>] perf_event_alloc+0x288/0x358
  [<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
  Code: bad PC value
Also cleans up the code to use the arm_pmu only when we know that we are dealing with an arm pmu event. Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2016-11-20  crypto: arm64/aes-ctr - fix NULL dereference in tail processing  [Ard Biesheuvel, 1 file, -1/+1]
commit 2db34e78f126c6001d79d3b66ab1abb482dc7caa upstream. The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Reported-by: xiakaixu <xiakaixu@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
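The fix is essentially a one-line change of the tail-path condition; a hedged fragment (surrounding CTR glue code abridged):

    /*
     * (walk.nbytes % AES_BLOCK_SIZE) != 0 implies nbytes != 0 in the
     * non-error case, but stays false when the blkcipher layer failed
     * and left walk.nbytes == 0, so NULL src/dst pointers are never touched.
     */
    if (walk.nbytes % AES_BLOCK_SIZE) {
        /* ... handle the final partial block ... */
    }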
2016-11-20  arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()  [Will Deacon, 1 file, -0/+10]
commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream. smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation to a full barrier, such that prior stores are ordered with respect to loads and stores occurring inside the critical section. Unfortunately, the core code defines the barrier as smp_wmb(), which is insufficient to provide the required ordering guarantees when used in conjunction with our load-acquire-based spinlock implementation. This patch overrides the arm64 definition of smp_mb__before_spinlock() to map to a full smp_mb(). Cc: Peter Zijlstra <peterz@infradead.org> Reported-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
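The override itself is a single line in the arm64 spinlock header; a sketch:

    /*
     * Accesses prior to spin_lock() must be ordered against both loads and
     * stores inside the critical section, which smp_wmb() cannot guarantee
     * for a load-acquire based lock, so use a full barrier.
     */
    #define smp_mb__before_spinlock()   smp_mb()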
2016-11-20  arm64: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO  [James Hogan, 2 files, -0/+3]
commit 3146bc64d12377a74dbda12b96ea32da3774ae07 upstream. AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for the VDSO address. This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for AT_BASE_PLATFORM which arm64 doesn't use, but let's define it now and add the comment above ARCH_DLINFO as found in several other architectures to remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to date. Fixes: f668cd1673aa ("arm64: ELF definitions") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
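The definition is a one-liner in the arm64 auxvec header; a sketch:

    /* ARCH_DLINFO emits exactly one arch-specific entry: AT_SYSINFO_EHDR for the vDSO. */
    #define AT_VECTOR_SIZE_ARCH 1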
2016-11-20  arm64: debug: unmask PSTATE.D earlier  [Will Deacon, 3 files, -2/+2]
commit 2ce39ad15182604beb6c8fa8bed5e46b59fd1082 upstream. Clearing PSTATE.D is one of the requirements for generating a debug exception. The arm64 booting protocol requires that PSTATE.D is set, since many of the debug registers (for example, the hw_breakpoint registers) are UNKNOWN out of reset and could potentially generate spurious, fatal debug exceptions in early boot code if PSTATE.D was clear. Once the debug registers have been safely initialised, PSTATE.D is cleared, however this is currently broken for two reasons: (1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall runs after SMP (and the scheduler) have been initialised, there is no guarantee that it is actually running on the boot CPU. In this case, the boot CPU is left with PSTATE.D set and is not capable of generating debug exceptions. (2) In a preemptible kernel, we may explicitly schedule on the IRQ return path to EL1. If an IRQ occurs with PSTATE.D set in the idle thread, then we may schedule the kthread_init thread, run the postcore_initcall to clear PSTATE.D and then context switch back to the idle thread before returning from the IRQ. The exception return path will then restore PSTATE.D from the stack, and set it again. This patch fixes the problem by moving the clearing of PSTATE.D earlier to proc.S. This has the desirable effect of clearing it in one place for all CPUs, long before we have to worry about the scheduler or any exception handling. We ensure that the previous reset of MDSCR_EL1 has completed before unmasking the exception, so that any spurious exceptions resulting from UNKNOWN debug registers are not generated. Without this patch applied, the kprobes selftests have been seen to fail under KVM, where we end up attempting to step the OOL instruction buffer with PSTATE.D set and therefore fail to complete the step. Acked-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2016-08-23  arm64: mm: remove page_mapping check in __sync_icache_dcache  [Shaokun Zhang, 1 file, -4/+0]
commit 20c27a4270c775d7ed661491af8ac03264d60fc6 upstream. __sync_icache_dcache unconditionally skips the cache maintenance for anonymous pages, under the assumption that flushing is only required in the presence of D-side aliases [see 7249b79f6b4cc ("arm64: Do not flush the D-cache for anonymous pages")]. Unfortunately, this breaks migration of anonymous pages holding self-modifying code, where userspace cannot be reasonably expected to reissue maintenance instructions in response to a migration. This patch fixes the problem by removing the broken page_mapping(page) check from the cache syncing code, otherwise we may end up fetching and executing stale instructions from the PoU. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 3.16: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2016-08-23  arm64: Provide "model name" in /proc/cpuinfo for PER_LINUX32 tasks  [Catalin Marinas, 2 files, -3/+9]
commit e47b020a323d1b2a7b1e9aac86e99eae19463630 upstream. This patch brings the PER_LINUX32 /proc/cpuinfo format more in line with the 32-bit ARM one by providing an additional line: model name : ARMv8 Processor rev X (v8l) Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 3.16: - Adjust filename, context - Open-code MIDR_REVISION()] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2016-08-23  arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str  [Julien Grall, 1 file, -1/+2]
commit f228b494e56d949be8d8ea09d4f973d1979201bf upstream. The loop that browses the array compat_hwcap_str will stop when a NULL is encountered, however NULL is missing at the end of array. This will lead to overrun until a NULL is found somewhere in the following memory. In reality, this works out because the compat_hwcap2_str array tends to follow immediately in memory, and that *is* terminated correctly. Furthermore, the unsigned int compat_elf_hwcap is checked before printing each capability, so we end up doing the right thing because the size of the two arrays is less than 32. Still, this is an obvious mistake and should be fixed. Note for backporting: commit 12d11817eaafa414 ("arm64: Move /proc/cpuinfo handling code") moved this code in v4.4. Prior to that commit, the same change should be made in arch/arm64/kernel/setup.c. Fixes: 44b82b7700d0 "arm64: Fix up /proc/cpuinfo" Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 3.16: adjust filename] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
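The fix just adds the missing terminator; a sketch of the array (middle entries elided):

    static const char *compat_hwcap_str[] = {
        "swp",
        "half",
        "thumb",
        /* ... remaining 32-bit hwcap names ... */
        NULL    /* terminator that the printing loop stops on */
    };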
2016-08-23  arm64: Ensure pmd_present() returns false after pmd_mknotpresent()  [Catalin Marinas, 2 files, -3/+2]
commit 5bb1cc0ff9a6b68871970737e6c4c16919928d8b upstream. Currently, pmd_present() only checks for a non-zero value, returning true even after pmd_mknotpresent() (which only clears the type bits). This patch converts pmd_present() to using pte_present(), similar to the other pmd_*() checks. As a side effect, it will return true for PROT_NONE mappings, though they are not yet used by the kernel with transparent huge pages. For consistency, also change pmd_mknotpresent() to only clear the PMD_SECT_VALID bit, even though the PMD_TABLE_BIT is already 0 for block mappings (no functional change). The unused PMD_SECT_PROT_NONE definition is removed as transparent huge pages use the pte page prot values. Fixes: 9c7e535fcc17 ("arm64: mm: Route pmd thp functions through pte equivalents") Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 3.16: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
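A sketch of the two helpers after the change (macro names as in the arm64 pgtable header of that era):

    #define pmd_present(pmd)    pte_present(pmd_pte(pmd))

    static inline pmd_t pmd_mknotpresent(pmd_t pmd)
    {
        pmd_val(pmd) &= ~PMD_SECT_VALID;    /* clear only the valid bit */
        return pmd;
    }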
2016-06-15  arm64: psci: move psci firmware calls out of line  [Will Deacon, 3 files, -35/+33]
commit f5e0a12ca2d939e47995f73428d9bf1ad372b289 upstream. An arm64 allmodconfig fails to build with GCC 5 due to __asmeq assertions in the PSCI firmware calling code firing due to mcount preambles breaking our assumptions about register allocation of function arguments:
  /tmp/ccDqJsJ6.s: Assembler messages:
  /tmp/ccDqJsJ6.s:60: Error: .err encountered
  /tmp/ccDqJsJ6.s:61: Error: .err encountered
  /tmp/ccDqJsJ6.s:62: Error: .err encountered
  /tmp/ccDqJsJ6.s:99: Error: .err encountered
  /tmp/ccDqJsJ6.s:100: Error: .err encountered
  /tmp/ccDqJsJ6.s:101: Error: .err encountered
This patch fixes the issue by moving the PSCI calls out-of-line into their own assembly files, which are safe from the compiler's meddling fingers. Reported-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [bwh: Backported to 3.16: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Guenter Roeck <linux@roeck-us.net>
2016-06-15  arm64: kernel: fix architected PMU registers unconditional access  [Lorenzo Pieralisi, 3 files, -2/+19]
commit f436b2ac90a095746beb6729b8ee8ed87c9eaede upstream. The Performance Monitors extension is an optional feature of the AArch64 architecture, therefore, in order to access Performance Monitors registers safely, the kernel should detect the architected PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field before accessing them. This patch implements a guard by reading the ID_AA64DFR0_EL1 register PMUVer field to detect the architected PMU presence and prevent accessing PMU system registers if the Performance Monitors extension is not implemented in the core. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Fixes: 60792ad349f3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore") Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
2016-02-17  arm64: errata: Add -mpc-relative-literal-loads to build flags  [dann frazier, 1 file, -0/+1]
commit 67dfa1751ce71e629aad7c438e1678ad41054677 upstream. GCC6 (and Linaro's 2015.12 snapshot of GCC5) has a new default that uses adrp/ldr or adrp/add to address literal pools. When CONFIG_ARM64_ERRATUM_843419 is enabled, modules built with this toolchain fail to load: module libahci: unsupported RELA relocation: 275 This patch fixes the problem by passing '-mpc-relative-literal-loads' to the compiler. Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419") BugLink: http://bugs.launchpad.net/bugs/1533009 Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Suggested-by: Christophe Lyon <christophe.lyon@linaro.org> Signed-off-by: Dann Frazier <dann.frazier@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2016-02-02  arm64: kernel: enforce pmuserenr_el0 initialization and restore  [Lorenzo Pieralisi, 2 files, -3/+2]
commit 60792ad349f3c6dc5735aafefe5dc9121c79e320 upstream. The pmuserenr_el0 register value is architecturally UNKNOWN on reset. Current kernel code resets that register value iff the core pmu device is correctly probed in the kernel. On platforms with missing DT pmu nodes (or disabled perf events in the kernel), the pmu is not probed, therefore the pmuserenr_el0 register is not reset in the kernel, which means that its value retains the reset value that is architecturally UNKNOWN (system may run with eg pmuserenr_el0 == 0x1, which means that PMU counters access is available at EL0, which must be disallowed). This patch adds code that resets pmuserenr_el0 on cold boot and restores it on core resume from shutdown, so that the pmuserenr_el0 setup is always enforced in the kernel. Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2016-02-02  arm64: mdscr_el1: avoid exposing DCC to userspace  [Will Deacon, 1 file, -1/+2]
commit d8d23fa0f27f3b2942a7bbc7378c7735324ed519 upstream. We don't want to expose the DCC to userspace, particularly as there is a kernel console driver for it. This patch resets mdscr_el1 to disable userspace access to the DCC registers on the cold boot path. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2016-02-02  arm64: mm: ensure that the zero page is visible to the page table walker  [Will Deacon, 1 file, -0/+3]
commit 32d6397805d00573ce1fa55f408ce2bca15b0ad3 upstream. In paging_init, we allocate the zero page, memset it to zero and then point TTBR0 to it in order to avoid speculative fetches through the identity mapping. In order to guarantee that the freshly zeroed page is indeed visible to the page table walker, we need to execute a dsb instruction prior to writing the TTBR. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2016-02-02  arm64: Clear out any singlestep state on a ptrace detach operation  [John Blackwood, 1 file, -0/+6]
commit 5db4fd8c52810bd9740c1240ebf89223b171aa70 upstream. Make sure to clear out any ptrace singlestep state when a ptrace(2) PTRACE_DETACH call is made on arm64 systems. Otherwise, the previously ptraced task will die off with a SIGTRAP signal if the debugger just previously singlestepped the ptraced task. Signed-off-by: John Blackwood <john.blackwood@ccur.com> [will: added comment to justify why this is in the arch code] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-12-18  arm64: KVM: Fix AArch32 to AArch64 register mapping  [Marc Zyngier, 2 files, -4/+6]
commit c0f0963464c24e034b858441205455bf2a5d93ad upstream. When running a 32bit guest under a 64bit hypervisor, the ARMv8 architecture defines a mapping of the 32bit registers in the 64bit space. This includes banked registers that are being demultiplexed over the 64bit ones. On exceptions caused by an operation involving a 32bit register, the HW exposes the register number in the ESR_EL2 register. It was so far understood that SW had to distinguish between AArch32 and AArch64 accesses (based on the current AArch32 mode and register number). It turns out that I misinterpreted the ARM ARM, and the clue is in D1.20.1: "For some exceptions, the exception syndrome given in the ESR_ELx identifies one or more register numbers from the issued instruction that generated the exception. Where the exception is taken from an Exception level using AArch32 these register numbers give the AArch64 view of the register." Which means that the HW is already giving us the translated version, and that we shouldn't try to interpret it at all (for example, doing an MMIO operation from the IRQ mode using the LR register leads to very unexpected behaviours). The fix is thus not to perform a call to vcpu_reg32() at all from vcpu_reg(), and use whatever register number is supplied directly. The only case we need to find out about the mapping is when we actively generate a register access, which only occurs when injecting a fault in a guest. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-12-14  arm64: restore bogomips information in /proc/cpuinfo  [Yang Shi, 1 file, -0/+4]
commit 92e788b749862ebe9920360513a718e5dd4da7a9 upstream. As previously reported, some userspace applications depend on bogomips showed by /proc/cpuinfo. Although there is much less legacy impact on aarch64 than arm, it does break libvirt. This patch reverts commit 326b16db9f69 ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo"), but with some tweak due to context change and without the pr_info(). Fixes: 326b16db9f69 ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo") Signed-off-by: Yang Shi <yang.shi@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ luis: backported to 3.16: - file rename: cpuinfo.c -> setup.c - linux/delay.h is already included - adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-12-14  arm64: kernel: pause/unpause function graph tracer in cpu_suspend()  [Lorenzo Pieralisi, 1 file, -0/+10]
commit de818bd4522c40ea02a81b387d2fa86f989c9623 upstream. The function graph tracer adds instrumentation that is required to trace both entry and exit of a function. In particular the function graph tracer updates the "return address" of a function in order to insert a trace callback on function exit. Kernel power management functions like cpu_suspend() are called upon power down entry with functions called "finishers" that are in turn called to trigger the power down sequence but they may not return to the kernel through the normal return path. When the core resumes from low-power it returns to the cpu_suspend() function through the cpu_resume path, which leaves the trace stack frame set-up by the function tracer in an inconsistent state upon return to the kernel when tracing is enabled. This patch fixes the issue by pausing/resuming the function graph tracer on the thread executing cpu_suspend() (i.e. the function call that subsequently triggers the "suspend finishers"), so that the function graph tracer state is kept consistent across functions that enter power down states and never return, by effectively disabling the graph tracer while they are executing. Fixes: 819e50e25d0c ("arm64: Add ftrace support") Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reported-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Suggested-by: Steven Rostedt <rostedt@goodmis.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-12-13  arm64: Fix compat register mappings  [Robin Murphy, 1 file, -8/+8]
commit 5accd17d0eb523350c9ef754d655e379c9bb93b3 upstream. For reasons not entirely apparent, but now enshrined in history, the architectural mapping of AArch32 banked registers to AArch64 registers actually orders SP_<mode> and LR_<mode> backwards compared to the intuitive r13/r14 order, for all modes except FIQ. Fix the compat_<reg>_<mode> macros accordingly, in the hope of avoiding subtle bugs with KVM and AArch32 guests. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-11-16  Revert "ARM64: unwind: Fix PC calculation"  [Will Deacon, 1 file, -5/+1]
commit 9702970c7bd3e2d6fecb642a190269131d4ac16c upstream. This reverts commit e306dfd06fcb44d21c80acb8e5a88d55f3d1cf63. With this patch applied, we were the only architecture making this sort of adjustment to the PC calculation in the unwinder. This causes problems for ftrace, where the PC values are matched against the contents of the stack frames in the callchain and fail to match any records after the address adjustment. Whilst there has been some effort to change ftrace to workaround this, those patches are not yet ready for mainline and, since we're the odd architecture in this regard, let's just step in line with other architectures (like arch/arm/) for now. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-11-16  arm64: errata: use KBUILD_CFLAGS_MODULE for erratum #843419  [Will Deacon, 1 file, -1/+1]
commit b6dd8e0719c0d2d01429639a11b7bc2677de240c upstream. Commit df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419") sets CFLAGS_MODULE to ensure that the large memory model is used by the compiler when building kernel modules. However, CFLAGS_MODULE is an environment variable and intended to be overridden on the command line, which appears to be the case with the Ubuntu kernel packaging system, so use KBUILD_CFLAGS_MODULE instead. Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419") Reported-by: Dann Frazier <dann.frazier@canonical.com> Tested-by: Dann Frazier <dann.frazier@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-10-30  arm64: readahead: fault retry breaks mmap file read random detection  [Mark Salyzyn, 1 file, -0/+1]
commit 569ba74a7ba69f46ce2950bf085b37fea2408385 upstream. This is the arm64 portion of commit 45cac65b0fcd ("readahead: fault retry breaks mmap file read random detection"), which was absent from the initial port and has since gone unnoticed. The original commit says:
> .fault now can retry. The retry can break state machine of .fault. In
> filemap_fault, if page is miss, ra->mmap_miss is increased. In the second
> try, since the page is in page cache now, ra->mmap_miss is decreased. And
> these are done in one fault, so we can't detect random mmap file access.
>
> Add a new flag to indicate .fault is tried once. In the second try, skip
> ra->mmap_miss decreasing. The filemap_fault state machine is ok with it.
With this change, Mark reports that:
> Random read improves by 250%, sequential read improves by 40%, and
> random write by 400% to an eMMC device with dm crypto wrapped around it.
Cc: Shaohua Li <shli@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Riley Andrews <riandrews@android.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
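On arm64 the port is a single extra flag in the VM_FAULT_RETRY path of do_page_fault(); a hedged sketch:

    if (fault & VM_FAULT_RETRY) {
        /*
         * Only allow one retry, and record that the fault has already been
         * tried so filemap_fault() keeps its mmap_miss accounting intact.
         */
        mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
        mm_flags |= FAULT_FLAG_TRIED;
        goto retry;
    }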
2015-10-06  arm64: KVM: Disable virtual timer even if the guest is not using it  [Marc Zyngier, 1 file, -2/+3]
commit c4cbba9fa078f55d9f6d081dbb4aec7cf969e7c7 upstream. When running a guest with the architected timer disabled (with QEMU and the kernel_irqchip=off option, for example), it is important to make sure the timer gets turned off. Otherwise, the guest may try to enable it anyway, leading to a screaming HW interrupt. The fix is to unconditionally turn off the virtual timer on guest exit. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-10-06  arm64: errata: add module build workaround for erratum #843419  [Will Deacon, 3 files, -0/+22]
commit df057cc7b4fa59e9b55f07ffdb6c62bf02e99a00 upstream. Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can lead to a memory access using an incorrect address in certain sequences headed by an ADRP instruction. There is a linker fix to generate veneers for ADRP instructions, but this doesn't work for kernel modules which are built as unlinked ELF objects. This patch adds a new config option for the erratum which, when enabled, builds kernel modules with the mcmodel=large flag. This uses absolute addressing for all kernel symbols, thereby removing the use of ADRP as a PC-relative form of addressing. The ADRP relocs are removed from the module loader so that we fail to load any potentially affected modules. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-10-06  arm64: compat: fix vfp save/restore across signal handlers in big-endian  [Will Deacon, 1 file, -11/+36]
commit bdec97a855ef1e239f130f7a11584721c9a1bf04 upstream. When saving/restoring the VFP registers from a compat (AArch32) signal frame, we rely on the compat registers forming a prefix of the native register file and therefore make use of copy_{to,from}_user to transfer between the native fpsimd_state and the compat_vfp_sigframe. Unfortunately, this doesn't work so well in a big-endian environment. Our fpsimd save/restore code operates directly on 128-bit quantities (Q registers) whereas the compat_vfp_sigframe represents the registers as an array of 64-bit (D) registers. The architecture packs the compat D registers into the Q registers, with the least significant bytes holding the lower register. Consequently, we need to swap the 64-bit halves when converting between these two representations on a big-endian machine. This patch replaces the __copy_{to,from}_user invocations in our compat VFP signal handling code with explicit __put_user loops that operate on 64-bit values and swap them accordingly. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-10-06  arm64: head.S: initialise mdcr_el2 in el2_setup  [Will Deacon, 1 file, -0/+5]
commit d10bcd473301888f957ec4b6b12aa3621be78d59 upstream. When entering the kernel at EL2, we fail to initialise the MDCR_EL2 register which controls debug access and PMU capabilities at EL1. This patch ensures that the register is initialised so that all traps are disabled and all the PMU counters are available to the host. When a guest is scheduled, KVM takes care to configure trapping appropriately. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-09-29  arm64: flush FP/SIMD state correctly after execve()  [Ard Biesheuvel, 1 file, -0/+1]
commit 674c242c9323d3c293fc4f9a3a3a619fe3063290 upstream. When a task calls execve(), its FP/SIMD state is flushed so that none of the original program state is observable by the incoming program. However, since this flushing consists of setting the in-memory copy of the FP/SIMD state to all zeroes, the CPU field is set to CPU 0 as well, which indicates to the lazy FP/SIMD preserve/restore code that the FP/SIMD state does not need to be reread from memory if the task is scheduled again on CPU 0 without any other tasks having entered userland (or used the FP/SIMD in kernel mode) on the same CPU in the meantime. If this happens, the FP/SIMD state of the old program will still be present in the registers when the new program starts. So set the CPU field to the invalid value of NR_CPUS when performing the flush, by calling fpsimd_flush_task_state(). Reported-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com> Reported-by: Janet Liu <janet.liu@spreadtrum.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
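The helper that performs the invalidation is tiny; a sketch (the fix is to call it from fpsimd_flush_thread() right after the state is zeroed):

    /* Mark the in-memory FP/SIMD state as not owned by any CPU. */
    void fpsimd_flush_task_state(struct task_struct *t)
    {
        t->thread.fpsimd_state.cpu = NR_CPUS;
    }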
2015-09-28  arm64: kconfig: Move LIST_POISON to a safe value  [Jeff Vander Stoep, 1 file, -0/+4]
commit bf0c4e04732479f650ff59d1ee82de761c0071f0 upstream. Move the poison pointer offset to 0xdead000000000000, a recognized value that is not mappable by user-space exploits. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Thierry Strudel <tstrudel@google.com> Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-09-03  signal: fix information leak in copy_siginfo_from_user32  [Amanieu d'Antras, 1 file, -2/+0]
commit 3c00cb5e68dc719f2fc73a33b1b230aadfcb1309 upstream. This function can leak kernel stack data when the user siginfo_t has a positive si_code value. The top 16 bits of si_code describe which fields in the siginfo_t union are active, but they are treated inconsistently between copy_siginfo_from_user32, copy_siginfo_to_user32 and copy_siginfo_to_user. copy_siginfo_from_user32 is called from rt_sigqueueinfo and rt_tgsigqueueinfo in which the user has full control over the top 16 bits of si_code. This fixes the following information leaks:
  x86: 8 bytes leaked when sending a signal from a 32-bit process to itself. This leak grows to 16 bytes if the process uses x32. (si_code = __SI_CHLD)
  x86: 100 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = -1)
  sparc: 4 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = any)
parisc and s390 have similar bugs, but they are not vulnerable because rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code to a different process. These bugs are also fixed for consistency. Signed-off-by: Amanieu d'Antras <amanieu@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-09-03  signal: fix information leak in copy_siginfo_to_user  [Amanieu d'Antras, 1 file, -1/+2]
commit 26135022f85105ad725cda103fa069e29e83bd16 upstream. This function may copy the si_addr_lsb, si_lower and si_upper fields to user mode when they haven't been initialized, which can leak kernel stack data to user mode. Just checking the value of si_code is insufficient because the same si_code value is shared between multiple signals. This is solved by checking the value of si_signo in addition to si_code. Signed-off-by: Amanieu d'Antras <amanieu@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [ luis: backported to 3.16: - dropped 2nd hunk in kernel/signal.c ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-09-03  arm64: KVM: Fix host crash when injecting a fault into a 32bit guest  [Marc Zyngier, 1 file, -6/+6]
commit 126c69a0bd0e441bf6766a5d9bf20de011be9f68 upstream. When injecting a fault into a misbehaving 32bit guest, it seems rather idiotic to also inject a 64bit fault that is only going to corrupt the guest state. This leads to a situation where we perform an illegal exception return at EL2 causing the host to crash instead of killing the guest. Just fix the stupid bug that has been there from day 1. Reported-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-08-25  arm64/efi: map the entire UEFI vendor string before reading it  [Ard Biesheuvel, 1 file, -2/+2]
commit f91b1feada0b6f0a4d33648155b3ded2c4e0707e upstream. At boot, the UTF-16 UEFI vendor string is copied from the system table into a char array with a size of 100 bytes. However, this size of 100 bytes is also used for memremapping() the source, which may not be sufficient if the vendor string exceeds 50 UTF-16 characters, and the placement of the vendor string inside a 4 KB page happens to leave the end unmapped. So use the correct '100 * sizeof(efi_char16_t)' for the size of the mapping. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: f84d02755f5a ("arm64: add EFI runtime services") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-07-15  arm64: Don't report clear pmds and puds as huge  [Christoffer Dall, 1 file, -2/+2]
commit fd28f5d439fca77348c129d5b73043a56f8a0296 upstream. The current pmd_huge() and pud_huge() functions simply check if the table bit is not set and reports the entries as huge in that case. This is counter-intuitive as a clear pmd/pud cannot also be a huge pmd/pud, and it is inconsistent with at least arm and x86. To prevent others from making the same mistake as me in looking at code that calls these functions and to fix an issue with KVM on arm64 that causes memory corruption due to incorrect page reference counting resulting from this mistake, let's change the behavior. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Steve Capper <steve.capper@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
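A sketch of the corrected checks: an entry is only reported as huge if it is non-zero (present) and the table bit is clear:

    int pmd_huge(pmd_t pmd)
    {
        return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
    }

    int pud_huge(pud_t pud)
    {
        return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
    }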
2015-07-15  arm64: vdso: work-around broken ELF toolchains in Makefile  [Will Deacon, 1 file, -0/+4]
commit 6f1a6ae87c0c60d7c462ef8fd071f291aa7a9abb upstream. When building the kernel with a bare-metal (ELF) toolchain, the -shared option may not be passed down to collect2, resulting in silent corruption of the vDSO image (in particular, the DYNAMIC section is omitted). The effect of this corruption is that the dynamic linker fails to find the vDSO symbols and libc is instead used for the syscalls that we intended to optimise (e.g. gettimeofday). Functionally, there is no issue as the sigreturn trampoline is still intact and located by the kernel. This patch fixes the problem by explicitly passing -shared to the linker when building the vDSO. Reported-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com> Reported-by: James Greenhalgh <james.greenhalgh@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-07-15  arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP  [Dave P Martin, 1 file, -1/+1]
commit b9bcc919931611498e856eae9bf66337330d04cc upstream. The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto the base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock. This may cause memmap entries that are in use to get freed. In general, you're not likely to hit this problem unless there are at least 2 memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree. This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it. This patch gets the memblock base out of the memblock directly when computing the block end address to ensure the correct value is used. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-07-15  arm64: Do not attempt to use init_mm in reset_context()  [Catalin Marinas, 1 file, -0/+8]
commit 565630d503ef24e44c252bed55571b3a0d68455f upstream. After secondary CPU boot or hotplug, the active_mm of the idle thread is &init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1 and must not be set in TTBR0_EL1. Since when active_mm == &init_mm the TTBR0_EL1 is already set to the reserved value, there is no need to perform any context reset. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
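A sketch of the guard added to reset_context() (surrounding ASID/TTBR0 handling elided):

    static void reset_context(void *info)
    {
        struct mm_struct *mm = current->active_mm;

        /*
         * The idle thread runs with active_mm == &init_mm right after CPU
         * boot/hotplug; TTBR0_EL1 already holds the reserved value then,
         * so there is nothing to reset.
         */
        if (mm == &init_mm)
            return;

        /* ... existing ASID rollover / TTBR0 switch ... */
    }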
2015-05-28  arm64: add missing PAGE_ALIGN() to __dma_free()  [Dean Nelson, 1 file, -1/+3]
commit 2cff98b99c469880ce830cbcde015b53b67e0a7b upstream. __dma_alloc() does a PAGE_ALIGN() on the passed in size argument before doing anything else. __dma_free() does not. And because it doesn't, it is possible to leak memory should size not be an integer multiple of PAGE_SIZE. The solution is to add a PAGE_ALIGN() to __dma_free() like is done in __dma_alloc(). Additionally, this patch removes a redundant PAGE_ALIGN() from __dma_alloc_coherent(), since __dma_alloc_coherent() can only be called from __dma_alloc(), which already does a PAGE_ALIGN() before the call. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Dean Nelson <dnelson@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [ luis: backported to 3.16: based on Dean's 3.19 backport ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
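The essence of the fix is one statement at the top of __dma_free(), mirroring what __dma_alloc() already does; a sketch:

    /* In __dma_free(), before any size-based freeing: */
    size = PAGE_ALIGN(size);    /* free exactly what __dma_alloc() rounded up to */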
2015-05-20  arm64/mm: Remove hack in mmap randomize layout  [Yann Droneaud, 1 file, -10/+2]
commit d6c763afab142a85e4770b4bc2a5f40f256d5c5d upstream. Since commit 8a0a9bd4db63 ('random: make get_random_int() more random'), get_random_int() returns a random value for each call, so the comment and hack introduced in mmap_rnd() as part of commit 1d18c47c735e ('arm64: MMU fault handling and page table management') are incorrect. Commit 1d18c47c735e seems to use the same hack introduced by commit a5adc91a4b44 ('powerpc: Ensure random space between stack and mmaps'), which was later copied in commit 5a0efea09f42 ('sparc64: Sharpen address space randomization calculations.'). But both architectures were cleaned up as part of commit fa8cbaaf5a68 ('powerpc+sparc64/mm: Remove hack in mmap randomize layout') as the hack is no longer needed since commit 8a0a9bd4db63. So the present patch removes the comment and the hack around get_random_int() on AArch64's mmap_rnd(). Cc: David S. Miller <davem@davemloft.net> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Moritz Mühlenhoff <jmm@inutil.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  arm64: KVM: Do not use pgd_index to index stage-2 pgd  [Marc Zyngier, 1 file, -0/+2]
commit 04b8dc85bf4a64517e3cf20e409eeaa503b15cc1 upstream. The kernel's pgd_index macro is designed to index a normal, page-sized array. KVM is a bit different, as we can use concatenated pages to have a bigger address space (for example a 40bit IPA with 4kB pages gives us an 8kB PGD). In the above case, the use of pgd_index will always return an index inside the first 4kB, which makes a guest that has memory above 0x8000000000 rather unhappy, as it spins forever in a page fault, whilst the host happily corrupts the lower pgd. The obvious fix is to get our own kvm_pgd_index that does the right thing(tm). Tested on X-Gene with a hacked kvmtool that put memory at a stupidly high address. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> [ luis: backported to 3.16: used shannon's backport to 3.14 ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
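The replacement helper masks with the stage-2 PGD size rather than the host's; a sketch:

    /* Index into a stage-2 pgd, which may be several concatenated pages long. */
    #define kvm_pgd_index(addr)    (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))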
2015-05-20  arm64: KVM: Fix HCR setting for 32bit guests  [Marc Zyngier, 2 files, -1/+2]
commit 801f6772cecea6cfc7da61aa197716ab64db5f9e upstream. Commit b856a59141b1 (arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu) moved the init of the HCR register to happen later in the init of a vcpu, but left out the fixup done in kvm_reset_vcpu when preparing for a 32bit guest. As a result, the 32bit guest is run as a 64bit guest, but the rest of the kernel still manages it as a 32bit. Fun follows. Moving the fixup to vcpu_reset_hcr solves the problem for good. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
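After the move, the arm64 reset helper looks roughly like this (sketch):

    static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
    {
        vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
        if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
            vcpu->arch.hcr_el2 &= ~HCR_RW;    /* run EL1 in AArch32 */
    }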
2015-05-20  arm64: KVM: Fix TLB invalidation by IPA/VMID  [Marc Zyngier, 1 file, -0/+1]
commit 55e858b75808347378e5117c3c2339f46cc03575 upstream. It took about two years for someone to notice that the IPA passed to TLBI IPAS2E1IS must be shifted by 12 bits. Clearly our reviewing is not as good as it should be... Paper bag time for me. Reported-by: Mario Smarduch <m.smarduch@samsung.com> Tested-by: Mario Smarduch <m.smarduch@samsung.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  arm/arm64: KVM: Introduce stage2_unmap_vm  [Christoffer Dall, 1 file, -0/+1]
commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream. Introduce a new function to unmap user RAM regions in the stage2 page tables. This is needed on reboot (or when the guest turns off the MMU) to ensure we fault in pages again and make the dcache, RAM, and icache coherent. Using unmap_stage2_range for the whole guest physical range does not work, because that unmaps IO regions (such as the GIC) which will not be recreated or in the best case faulted in on a page-by-page basis. Call this function on secondary and subsequent calls to the KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest Stage-1 MMU is off when faulting in pages and make the caches coherent. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu  [Christoffer Dall, 2 files, -1/+5]
commit b856a59141b1066d3c896a0d0231f84dabd040af upstream. When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also reset the HCR, because we now modify the HCR dynamically to enable/disable trapping of guest accesses to the VM registers. This is crucial for reboot of VMs working since otherwise we will not be doing the necessary cache maintenance operations when faulting in pages with the guest MMU off. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  arm64/kvm: Fix assembler compatibility of macros  [Geoff Levand, 1 file, -10/+11]
commit 286fb1cc32b11c18da3573a8c8c37a4f9da16e30 upstream. Some of the macros defined in kvm_arm.h are useful in assembly files, but are not compatible with the assembler. Change any C language integer constant definitions using appended U, UL, or ULL to the UL() preprocessor macro. Also, add a preprocessor include of the asm/memory.h file which defines the UL() macro. Fixes build errors like these when using kvm_arm.h in assembly source files: Error: unexpected characters following instruction at operand 3 -- `and x0,x1,#((1U<<25)-1)' Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc  [Joel Schopp, 2 files, -4/+14]
commit dbff124e29fa24aff9705b354b5f4648cd96e0bb upstream. The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range. [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40 bit IPA range instead of the maximum possible 48 bit PA range. - Christoffer ] Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joel Schopp <joel.schopp@amd.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2015-05-20  ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()  [Ard Biesheuvel, 1 file, -13/+0]
commit a7d079cea2dffb112e26da2566dd84c0ef1fce97 upstream. The ISS encoding for an exception from a Data Abort has a WnR bit[6] that indicates whether the Data Abort was caused by a read or a write instruction. While there are several fields in the encoding that are only valid if the ISV bit[24] is set, WnR is not one of them, so we can read it unconditionally. Instead of fixing both implementations of kvm_is_write_fault() in place, reimplement it just once using kvm_vcpu_dabt_iswrite(), which already does the right thing with respect to the WnR bit. Also fix up the callers to pass 'vcpu' Acked-by: Laszlo Ersek <lersek@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> [ luis: backported to 3.16: used shannon's backport to 3.14 ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
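The shared reimplementation simply defers to the existing fault-decoding helpers; a sketch:

    static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
    {
        if (kvm_vcpu_trap_is_iabt(vcpu))
            return false;    /* instruction aborts are never writes */

        return kvm_vcpu_dabt_iswrite(vcpu);    /* WnR is valid regardless of ISV */
    }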