path: root/arch/arm/include
2015-02-06  ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE  (Steven Capper, 2 files, -3/+4)
commit ded9477984690d026e46dd75e8157392cea3f13f upstream. For LPAE, we have the following means for encoding writable or dirty ptes:

                                 L_PTE_DIRTY   L_PTE_RDONLY
    !pte_dirty && !pte_write          0              1
    !pte_dirty &&  pte_write          0              1
     pte_dirty && !pte_write          1              1
     pte_dirty &&  pte_write          1              0

So we can't distinguish between writeable clean ptes and read only ptes. This can cause problems with ptes being incorrectly flagged as read only when they are writeable but not dirty. This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and adds additional logic to set AP[2] whenever the pte is read only or not dirty. That way we can distinguish between clean writeable ptes and read only ptes. HugeTLB pages will use this new logic automatically. We need to add some logic to Transparent HugePages to ensure that they correctly interpret the revised pgprot permissions (L_PTE_RDONLY has moved and no longer matches PMD_SECT_AP2). In the process of revising THP, the names of the PMD software bits have been prefixed with L_ to make them easier to distinguish from their hardware bit counterparts. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> [hpy: Backported to 3.10 - adjust the context - ignore change related to pmd, because 3.10 does not support HugePage] Signed-off-by: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
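
A minimal standalone sketch of the resulting rule (illustrative only; the hw_ap2() helper is made up for the example, and the real work happens in the set_pte assembly):

    #include <stdbool.h>
    #include <stdio.h>

    /* AP[2] is the hardware "read-only" bit: set it when the pte is marked
     * read-only in software OR when it is still clean, so the first write to
     * a clean page faults and dirty state can be tracked. */
    static bool hw_ap2(bool sw_rdonly, bool sw_dirty)
    {
        return sw_rdonly || !sw_dirty;
    }

    int main(void)
    {
        printf("clean + writable -> AP[2]=%d (write-protected, yet distinguishable)\n",
               hw_ap2(false, false));
        printf("dirty + writable -> AP[2]=%d\n", hw_ap2(false, true));
        printf("read-only        -> AP[2]=%d\n", hw_ap2(true, true));
        return 0;
    }
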
2015-02-06  ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear  (Steven Capper, 1 file, -5/+9)
commit f2950706871c4b6e8c0f0d7c3f62d35930b8de63 upstream. Long descriptors on ARM are 64 bits, and some pte functions such as pte_dirty return a bitwise-and of a flag with the pte value. If the flag to be tested resides in the upper 32 bits of the pte, then we run into the danger of the result being dropped if downcast. For example: gather_stats(page, md, pte_dirty(*pte), 1); where pte_dirty(*pte) is downcast to an int. This patch introduces a new macro pte_isset which performs the bitwise and, then performs a double logical invert (where needed) to ensure predictable downcasting. The logical inverse pte_isclear is also introduced. Equivalent pmd functions for Transparent HugePages have also been added. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> [hpy: Backported to 3.10: - adjust the context - ignore change to pmd, because 3.10 does not support HugePage.] Signed-off-by: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
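
A standalone illustration of the downcast hazard being guarded against (the flag name and bit position are made up; this is not the kernel's pte code):

    #include <stdint.h>
    #include <stdio.h>

    #define FAKE_PTE_DIRTY  (1ULL << 57)   /* assumed flag living above bit 31 */

    /* A bare bitwise-and loses the flag if the result is truncated to int;
     * the double logical invert (!!) collapses it to 0/1 first. */
    #define pte_flag_raw(pte, val)   ((pte) & (val))
    #define pte_flag_safe(pte, val)  (!!((pte) & (val)))

    int main(void)
    {
        uint64_t pte = FAKE_PTE_DIRTY;

        printf("raw and, as int:  %d\n", (int)pte_flag_raw(pte, FAKE_PTE_DIRTY));  /* 0: bit lost */
        printf("!! first, as int: %d\n", (int)pte_flag_safe(pte, FAKE_PTE_DIRTY)); /* 1 */
        return 0;
    }
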
2015-02-06  ARM: 7931/1: Correct virt_addr_valid  (Laura Abbott, 1 file, -1/+2)
commit efea3403d4b7c6d1dd5d5ac3234c161e8b314d66 upstream. The definition of virt_addr_valid is that virt_addr_valid should return true if and only if virt_to_page returns a valid pointer. The current definition of virt_addr_valid only checks against the virtual address range. There's no guarantee that just because a virtual address falls between PAGE_OFFSET and high_memory the associated physical memory has a valid backing struct page. Follow the example of other architectures and convert to pfn_valid to verify that the virtual address is actually valid. The check for an address between PAGE_OFFSET and high_memory is still necessary as vmalloc/highmem addresses are not valid with virt_to_page. Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nico@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
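
Roughly what the tightened check looks like, sketched from the description above (not necessarily the exact upstream macro):

    /* Sketch: valid only if the address sits in the lowmem window AND the
     * corresponding pfn actually has a backing struct page. */
    #define virt_addr_valid(kaddr) \
            (((unsigned long)(kaddr) >= PAGE_OFFSET && \
              (unsigned long)(kaddr) < (unsigned long)high_memory) && \
             pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
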
2015-02-06  ARM: fix asm/memory.h build error  (Russell King, 1 file, -16/+15)
commit b713aa0b15015a65ad5421543b80df86de043d62 upstream. Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is not defined:

    In file included from arch/arm/include/asm/page.h:163:0,
                     from include/linux/mm_types.h:16,
                     from include/linux/sched.h:24,
                     from arch/arm/kernel/asm-offsets.c:13:
    arch/arm/include/asm/memory.h: In function '__virt_to_phys':
    arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
    arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
    arch/arm/include/asm/memory.h: In function '__phys_to_virt':
    arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c06cd4 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions") Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> [hpy: Backported to 3.10: - adjust the context - MPU is not supported by 3.10, so ignore fix to MPU compared with the original patch.] Signed-off-by: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-02-06  ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg()  (Chen Gang, 1 file, -1/+2)
commit 4dcc1cf7316a26e112f5c9fcca531ff98ef44700 upstream. For atomic_cmpxchg(), the type of 'oldval' needs to be 'int' to match the type of "*ptr" (used by the 'ldrex' instruction) and 'old' (used by the 'teq' instruction). Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Chen Gang <gang.chen@asianux.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-02-06  ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h  (Chen Gang, 1 file, -24/+25)
commit 237f12337cfa2175474e4dd015bc07a25eb9080d upstream. atomic* values are signed, and the atomic* functions also need to handle signed values (both parameters and return values), so 32-bit ARM needs to use 'long long' instead of 'u64'. The replacement also fixes a bug in atomic64_add_negative(): a 'u64' is never less than 0. The modifications are: in vim, use the "1,% s/\<u64\>/long long/g" command; remove '__aligned(8)', which is useless for 64-bit; and keep to the 80-column limit after the replacement. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Chen Gang <gang.chen@asianux.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
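
A standalone illustration of the atomic64_add_negative() point (plain C types only; none of this is the kernel's atomic64 implementation):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long u = 0;   /* the old 'u64'-style counter */
        long long s = 0;            /* the new 'long long' counter */

        u -= 1;   /* wraps to a huge positive value */
        s -= 1;   /* becomes -1 */

        /* "result < 0" can never be true for the unsigned variant, so an
         * add_negative()-style test silently stops working. */
        printf("unsigned < 0: %d\n", u < 0 ? 1 : 0);   /* always 0 */
        printf("signed   < 0: %d\n", s < 0 ? 1 : 0);   /* 1 */
        return 0;
    }
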
2015-02-06  ARM: lpae: fix definition of PTE_HWTABLE_PTRS  (Will Deacon, 1 file, -1/+1)
commit e38a517578d6c0f764b0d0f6e26dcdf9f70c69d7 upstream. For 2-level page tables, PTE_HWTABLE_PTRS describes the offset between Linux PTEs and hardware PTEs. On LPAE, there is no distinction (since we have 64-bit descriptors with plenty of space) so PTE_HWTABLE_PTRS should be 0. Unfortunately, it is wrongly defined as PTRS_PER_PTE, meaning that current pte table flushing is off by a page. Luckily, all current LPAE implementations are SMP, so the hardware walker can snoop L1. This patch fixes the broken definition. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-02-06  ARM: fix type of PHYS_PFN_OFFSET to unsigned long  (Cyril Chemparathy, 1 file, -1/+1)
commit 5b20c5b2f014ecc0a6310988af69cd7ede9e7c67 upstream. On LPAE machines, PHYS_OFFSET evaluates to a phys_addr_t and this type is inherited by the PHYS_PFN_OFFSET definition as well. Consequently, the kernel build emits warnings of the form:

    init/main.c: In function 'start_kernel':
    init/main.c:588:7: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'phys_addr_t' [-Wformat]

This patch fixes this warning by pinning down the PFN type to unsigned long. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Subash Patel <subash.rp@samsung.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
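
The fix amounts to forcing the type at the definition; a sketch (not a verbatim copy of asm/memory.h):

    /* Sketch: keep the PFN offset an unsigned long even when PHYS_OFFSET is
     * a 64-bit phys_addr_t under LPAE. */
    #define PHYS_PFN_OFFSET  ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))
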
2015-02-06  ARM: LPAE: use signed arithmetic for mask definitions  (Cyril Chemparathy, 2 files, -4/+4)
commit 926edcc747e2efb3c9add7ed4dbc4e7a3a959d02 upstream. This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing unsigned long math truncates the mask at 32 bits. This clearly does bad things on PAE systems. This patch fixes this problem by defining these masks as signed quantities. We then rely on sign extension to do the right thing. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Subash Patel <subash.rp@samsung.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
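
A standalone illustration of why the signedness matters once such a mask is applied to a 64-bit physical address (uint32_t/int32_t stand in for a 32-bit 'unsigned long'/'long'; the constants are made up):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    /* 32-bit unsigned mask: zero-extends to 64 bits and chops the top bits. */
    #define MASK_U32   (~((uint32_t)(1u << PAGE_SHIFT) - 1))
    /* 32-bit signed mask: sign-extends to 64 bits and preserves them. */
    #define MASK_S32   (~((int32_t)(1 << PAGE_SHIFT) - 1))

    int main(void)
    {
        uint64_t phys = 0x1ffffe123ULL;   /* an address above 4GB, as seen with LPAE */

        printf("unsigned mask: %#llx\n", (unsigned long long)(phys & MASK_U32)); /* 0xffffe000: bit 32 lost */
        printf("signed mask:   %#llx\n", (unsigned long long)(phys & MASK_S32)); /* 0x1ffffe000 */
        return 0;
    }
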
2015-02-06  ARM: mm: correct pte_same behaviour for LPAE  (Steve Capper, 1 file, -0/+17)
commit dde1b65110353517816bcbc58539463396202244 upstream. For 3 levels of paging the PTE_EXT_NG bit will be set for user address ptes that are written to a page table but not for ptes created with mk_pte. This can cause some comparison tests made by pte_same to fail spuriously and lead to other problems. To correct this behaviour, we mask off PTE_EXT_NG for any pte that is present before running the comparison. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Hou Pengyang <houpengyang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
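
Roughly the shape of the masking this adds, sketched from the description (the real helper lives in the LPAE pgtable headers and may differ in detail):

    /* Sketch: ignore the nG bit when comparing present ptes, since ptes
     * written to the page table carry PTE_EXT_NG while ptes built with
     * mk_pte do not. */
    static inline int pte_same(pte_t pte_a, pte_t pte_b)
    {
        pteval_t lhs = pte_val(pte_a);
        pteval_t rhs = pte_val(pte_b);

        if (pte_present(pte_a))
            lhs &= ~PTE_EXT_NG;
        if (pte_present(pte_b))
            rhs &= ~PTE_EXT_NG;

        return lhs == rhs;
    }
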
2015-02-06  ARM: 7829/1: Add ".text.unlikely" and ".text.hot" to arm unwind tables  (Douglas Anderson, 1 file, -0/+2)
commit 849b882b52df0f276d9ffded01d85654aa0da422 upstream. It appears that gcc may put some code in ".text.unlikely" or ".text.hot" sections. Right now those aren't accounted for in unwind tables. Add them. I found some docs about this at: http://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc.pdf Without this, if you have slub_debug turned on, you can get messages that look like this: unwind: Index not found 7f008c50 Signed-off-by: Doug Anderson <dianders@chromium.org> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> [wangkai: backport to 3.10 - adjust context ] Signed-off-by: Wang Kai <morgan.wang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-11-21  ARM: Correct BUG() assembly to ensure it is endian-agnostic  (Ben Dooks, 1 file, -4/+6)
commit 63328070eff2f4fd730c86966a0dbc976147c39f upstream. Currently BUG() uses .word or .hword to create the necessary illegal instructions. However if we are building BE8 then these get swapped by the linker into different illegal instructions in the text. This means that the BUG() macro does not get trapped properly. Change to using <asm/opcodes.h> to provide the necessary ARM instruction building, as we cannot rely on gcc/gas having the `.inst` instructions which were added to try and resolve this issue (reported by Dave Martin <Dave.Martin@arm.com>). Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Cc: Wang Nan <wangnan0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-06-11  ARM: 8051/1: put_user: fix possible data corruption in put_user  (Andrey Ryabinin, 1 file, -1/+2)
commit 537094b64b229bf3ad146042f83e74cf6abe59df upstream. According to the ARM procedure call standard, the r2 register is call-clobbered. So after the result of the x expression has been put into r2, any function call made while evaluating p may overwrite r2. To fix this, the result of the p expression must be saved to a temporary variable before the x expression is assigned to __r2. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
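
The shape of the fix, sketched from that description (heavily simplified; put_user_sketch is a made-up name and the real __put_user_check macro carries more plumbing):

    /* Sketch: evaluate p into a temporary first, so any function call hidden
     * in the p expression runs before x is parked in the r2 register. */
    #define put_user_sketch(x, p)                                          \
    ({                                                                     \
            const typeof(*(p)) __user *__tmp_p = (p);                      \
            register typeof(*(p)) __r2 asm("r2") = (x);                    \
            register const typeof(*(p)) __user *__p asm("r0") = __tmp_p;   \
            /* ... hand __r2/__p to the existing __put_user helpers ... */ \
    })
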
2014-05-06  ARM: 7728/1: mm: Use phys_addr_t properly for ioremap functions  (Laura Abbott, 1 file, -4/+4)
commit 9b97173e785a54c5df0aa23d1e1f680f61e36e43 upstream. Several of the ioremap functions use unsigned long in places resulting in truncation if physical addresses greater than 4G are passed in. Change the types of the functions and the callers accordingly. Cc: Krzysztof Halasa <khc@pm.waw.pl> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Wang Nan <wangnan0@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-05-06  ARM: 8027/1: fix do_div() bug in big-endian systems  (Xiangyu Lu, 1 file, -1/+1)
commit 80bb3ef109ff40a7593d9481c17de9bbc4d7c0e2 upstream. In big-endian systems, "%1" gets the most significant part of the value, causing the instruction to produce the wrong result. When viewing ftrace records on big-endian ARM systems, we found timestamp errors:

    swapper-0     [001] 1325.970000: 0:120:R ==> [001] 16:120:R events/1
    events/1-16   [001] 1325.970000: 16:120:S ==> [001] 0:120:R swapper
    swapper-0     [000] 1325.1000000: 0:120:R + [000] 15:120:R events/0
    swapper-0     [000] 1325.1000000: 0:120:R ==> [000] 15:120:R events/0
    swapper-0     [000] 1326.030000: 0:120:R + [000] 1150:120:R sshd
    swapper-0     [000] 1326.030000: 0:120:R ==> [000] 1150:120:R sshd

Viewing ftrace records calls the do_div(n, base) function, which is implemented in arch/arm/include/asm/div64.h. When n = 10000000 and base = 1000000, do_div(n, base) will execute "umull %Q0, %R0, %1, %Q2". Reviewed-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Alex Wu <wuquanming@huawei.com> Signed-off-by: Xiangyu Lu <luxiangyu@huawei.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
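
For context, a standalone sketch of the %Q/%R operand modifiers involved (illustrative ARM inline asm, not the div64.h code): with a 64-bit operand held in a register pair, %Q names the register holding the least significant half and %R the most significant half, whereas a bare %1 names the first register of the pair, which holds the high half on a BE8 kernel.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t x = 0x1122334455667788ULL;
        uint32_t lo, hi;

        /* %Q2 / %R2 pick the low / high words explicitly, independent of
         * endianness (builds on 32-bit ARM only). */
        asm("mov %0, %Q2\n\t"
            "mov %1, %R2"
            : "=&r" (lo), "=&r" (hi)
            : "r" (x));

        printf("lo=%#x hi=%#x\n", lo, hi);
        return 0;
    }
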
2014-05-06  ARM: 8007/1: Remove extraneous kcmp syscall ignore  (Christopher Covington, 1 file, -1/+0)
commit 95c52fe063351192e0f4ffb70ef9bac1aa26f5a4 upstream. The kcmp system call was ported to ARM in commit 3f7d1fe108dbaefd0c57a41753fc2c90b395f458 "ARM: 7665/1: Wire up kcmp syscall". Fixes: 3f7d1fe108db ("ARM: 7665/1: Wire up kcmp syscall") Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-05-06  ARM: 7954/1: mm: remove remaining domain support from ARMv6  (Will Deacon, 2 files, -6/+1)
commit b6ccb9803e90c16b212cf4ed62913a7591e79a39 upstream. CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user which must trigger CoW appropriately when targeting user pages. The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process). This patch solves this problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are mapped also as kernel r/o. Patch co-developed with Russell King. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-03-31  ARM: move outer_cache declaration out of ifdef  (Rob Herring, 1 file, -2/+2)
commit 0b53c11d533a8f6688d73fad0baf67dd08ec1b90 upstream. Move the outer_cache declaration out of the CONFIG_OUTER_CACHE ifdef so that outer_cache can be used inside an IS_ENABLED condition. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-03-24  ARM: 7811/1: locks: use early clobber in arch_spin_trylock  (Will Deacon, 1 file, -1/+1)
commit afa31d8eb86fc2f25083e675d57ac8173a98f999 upstream. The res variable is written before we've finished with the input operands (namely the lock address), so ensure that we mark it as `early clobber' to avoid unintended register sharing. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Wang Weidong <wangweidong1@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-03-07  ARM: 7812/1: rwlocks: retry trylock operation if strex fails on free lock  (Will Deacon, 1 file, -19/+30)
commit 00efaa0250939dc148e2d3104fb3c18395d24a2d upstream. Commit 15e7e5c1ebf5 ("ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock") modified our arch_spin_trylock to retry the acquisition if the lock appeared uncontended, but the strex failed. This patch does the same for rwlocks, which were missed by the original patch. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Li Zefan <lizefan@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-03-07  ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock  (Will Deacon, 1 file, -11/+14)
commit 15e7e5c1ebf556cd620c9b091e121091ac760f6d upstream. An exclusive store instruction may fail for reasons other than lock contention (e.g. a cache eviction during the critical section) so, in line with other architectures using similar exclusive instructions (alpha, mips, powerpc), retry the trylock operation if the lock appears to be free but the strex reported failure. Reported-by: Tony Thompson <anthony.thompson@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Li Zefan <lizefan@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
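
The retry pattern, sketched in C against a simplified boolean lock (load_excl/store_excl are hypothetical stand-ins for the ldrex/strex inline assembly, and this is not the kernel's ticket-lock implementation):

    /* store_excl() returns 0 on success, non-zero if the exclusive monitor
     * was lost (e.g. due to a cache eviction), mirroring strex. */
    struct sketch_lock { unsigned long v; };

    static inline int sketch_trylock(struct sketch_lock *lock)
    {
        unsigned long contended, res;

        do {
            contended = load_excl(&lock->v);
            res = 0;
            if (!contended)
                res = store_excl(&lock->v, 1);
        } while (!contended && res);  /* looked free but strex failed: retry */

        if (!contended) {
            smp_mb();                 /* acquire barrier, as in the real code */
            return 1;
        }
        return 0;
    }
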
2014-03-07  ARM: 7957/1: add DSB after icache flush in __flush_icache_all()  (Vinayak Kale, 1 file, -0/+1)
commit 39544ac9df20f73e49fc6b9ac19ff533388c82c0 upstream. Add DSB after icache flush to complete the cache maintenance operation. Signed-off-by: Vinayak Kale <vkale@apm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-01-10  clocksource: arch_timer: use virtual counters  (Mark Rutland, 1 file, -9/+0)
commit 0d651e4e65e96989f72236bf83bd4c6e55eb6ce4 upstream. Switching between reading the virtual or physical counters is problematic, as some core code wants a view of time before we're fully set up. Using a function pointer and switching the source after the first read can make time appear to go backwards, and having a check in the read function is an unfortunate block on what we want to be a fast path. Instead, this patch makes us always use the virtual counters. If we're a guest, or don't have hyp mode, we'll use the virtual timers, and as such don't care about CNTVOFF as long as it doesn't change in such a way as to make time appear to travel backwards. As the guest will use the virtual timers, a (potential) KVM host must use the physical timers (which can wake up the host even if they fire while a guest is executing), and hence a host must have CNTVOFF set to zero so as to have a consistent view of time between the physical timers and virtual counters. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-12-12  ARM: fix booting low-vectors machines  (Russell King, 1 file, -1/+1)
commit d8aa712c30148ba26fd89a5dc14de95d4c375184 upstream. Commit f6f91b0d9fd9 (ARM: allow kuser helpers to be removed from the vector page) required two pages for the vectors code. Although the code setting up the initial page tables was updated, the code which allocates page tables for new processes wasn't, and neither was the code which tears down the mappings. Fix this. Fixes: f6f91b0d9fd9 ("ARM: allow kuser helpers to be removed from the vector page") Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-11-04  ARM: 7851/1: check for number of arguments in syscall_get/set_arguments()  (AKASHI Takahiro, 1 file, -0/+6)
commit 3c1532df5c1b54b5f6246cdef94eeb73a39fe43a upstream. In ftrace_syscall_enter(),

    syscall_get_arguments(..., 0, n, ...)
        if (i == 0) { <handle ORIG_r0> ...; n--; }
        memcpy(..., n * sizeof(args[0]));

If the number of arguments (n) is zero and the argument index (i) is also zero in syscall_get_arguments(), none of the arguments should be copied by memcpy(). Otherwise 'n--' can become a big positive number and an unexpected amount of data will be copied. Tracing system calls which take no argument, say sync(void), may hit this case and eventually corrupt the system. This patch fixes the issue both in syscall_get_arguments() and syscall_set_arguments(). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
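
The fix boils down to an early-out before the ORIG_r0 handling; a sketch built from the description (simplified, not a verbatim copy of asm/syscall.h):

    /* Sketch: bail out when there is nothing to copy, so the n-- below can
     * never underflow for zero-argument syscalls such as sync(void). */
    static inline void syscall_get_arguments(struct task_struct *task,
                                             struct pt_regs *regs,
                                             unsigned int i, unsigned int n,
                                             unsigned long *args)
    {
        if (n == 0)
            return;

        if (i == 0) {
            args[0] = regs->ARM_ORIG_r0;   /* handle ORIG_r0 */
            args++;
            i++;
            n--;
        }

        memcpy(args, &regs->ARM_r0 + i, n * sizeof(args[0]));
    }
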
2013-10-18  compiler/gcc4: Add quirk for 'asm goto' miscompilation bug  (Ingo Molnar, 1 file, -1/+1)
commit 3f0116c3238a96bc18ad4b4acefe4e7be32fa861 upstream. Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down a kernel crash to a GCC bug: GCC miscompiles certain 'asm goto' constructs, as outlined here: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 Implement a workaround suggested by Jakub Jelinek. Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com> Reported-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Suggested-by: Jakub Jelinek <jakub@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20131015062351.GA4666@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-20  Fix TLB gather virtual address range invalidation corner cases  (Linus Torvalds, 1 file, -2/+5)
commit 2b047252d087be7f2ba088b4933cd904f92e6fce upstream. Ben Tebulin reported: "Since v3.7.2 on two independent machines a very specific Git repository fails in 9/10 cases on git-fsck due to an SHA1/memory failures. This only occurs on a very specific repository and can be reproduced stably on two independent laptops. Git mailing list ran out of ideas and for me this looks like some very exotic kernel issue" and bisected the failure to the backport of commit 53a59fc67f97 ("mm: limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT"). That commit itself is not actually buggy, but what it does is to make it much more likely to hit the partial TLB invalidation case, since it introduces a new case in tlb_next_batch() that previously only ever happened when running out of memory. The real bug is that the TLB gather virtual memory range setup is subtly buggered. It was introduced in commit 597e1c3580b7 ("mm/mmu_gather: enable tlb flush range in generic mmu_gather"), and the range handling was already fixed at least once in commit e6c495a96ce0 ("mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots"), but that fix was not complete. The problem with the TLB gather virtual address range is that it isn't set up by the initial tlb_gather_mmu() initialization (which didn't get the TLB range information), but it is set up ad-hoc later by the functions that actually flush the TLB. And so any such case that forgot to update the TLB range entries would potentially miss TLB invalidates. Rather than try to figure out exactly which particular ad-hoc range setup was missing (I personally suspect it's the hugetlb case in zap_huge_pmd(), which didn't have the same logic as zap_pte_range() did), this patch just gets rid of the problem at the source: make the TLB range information available to tlb_gather_mmu(), and initialize it when initializing all the other tlb gather fields. This makes the patch larger, but conceptually much simpler. And the end result is much more understandable; even if you want to play games with partial ranges when invalidating the TLB contents in chunks, now the range information is always there, and anybody who doesn't want to bother with it won't introduce subtle bugs. Ben verified that this fixes his problem. Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com> Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au> Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
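
The shape of the interface change, sketched (the start/end plumbing is the point; the parameter and field names follow the description above, not necessarily the exact 3.10 ARM header):

    /* Sketch: the flush range is established once, when the gather is set
     * up, instead of being patched in ad hoc by whoever flushes later. */
    static inline void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
                                      unsigned long start, unsigned long end)
    {
        tlb->mm = mm;
        tlb->fullmm = !(start | (end + 1));   /* 0..-1 means "entire address space" */
        tlb->start = start;
        tlb->end = end;
        /* ... batch initialisation continues as before ... */
    }
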
2013-08-20  ARM: KVM: perform save/restore of PAR  (Marc Zyngier, 1 file, -10/+12)
commit 6a077e4ab9cbfbf279fb955bae05b03781c97013 upstream. Not saving PAR is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR, it could become corrupted by another guest or the host. Saving this register is made slightly more complicated as KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug. [ Slightly tweaked to use an even register as first operand to ldrd and strd operations in interrupts_head.S - Christoffer ] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-12  ARM: 7791/1: a.out: remove partial a.out support  (Will Deacon, 4 files, -84/+0)
commit acfdd4b1f7590d02e9bae3b73bdbbc4a31b05d38 upstream. a.out support on ARM requires that argc, argv and envp are passed in r0-r2 respectively, which requires hacking load_aout_binary to prevent argc being clobbered by the return code. Whilst mainline kernels do set the registers up in start_thread, the aout loader has never carried the hack in mainline. Initialising the registers in this way actually goes against the libc expectations for ELF binaries, where argc, argv and envp are passed on the stack, with r0 being used to hold a pointer to an exit function for cleaning up after the dynamic linker if required. If the pointer is NULL, then it is ignored. When execing an ELF binary, Linux currently zeroes r0, then sets it to argc and then finally clobbers it with the return value of the execve syscall, so we actually end up with:

    r0 = 0
    stack[0] = argc
    r1 = stack[1] = argv
    r2 = stack[2] = envp

libc treats r1 and r2 as undefined. The clobbering of r0 by sys_execve works for user-spawned threads, but when executing an ELF binary from a kernel thread (via call_usermodehelper), the execve is performed on the ret_from_fork path, which restores r0 from the saved pt_regs, resulting in argc being presented to the C library. This has horrible consequences when the application exits, since we have an exit function registered using argc, resulting in a jump to hyperspace. This patch solves the problem by removing the partial a.out support from arch/arm/ altogether. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Ashish Sangwan <ashishsangwan2@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-12  ARM: 7790/1: Fix deferred mm switch on VIVT processors  (Catalin Marinas, 3 files, -5/+18)
commit bdae73cd374e28db544fdd9b77de689a36e3c129 upstream. As of commit b9d4d42ad9 (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the finish_arch_post_lock_switch() function to avoid whole cache flushing with interrupts disabled. The need for deferred mm switch is stored as a thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can have another thread switch before finish_arch_post_lock_switch(). If the new thread has the same mm as the previous 'next' thread, the scheduler will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for the new thread. This patch moves the switch pending flag to the mm_context_t structure since this is specific to the mm rather than thread. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Marc Kleine-Budde <mkl@pengutronix.de> Tested-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-12  ARM: fix nommu builds with 48be69a02 (ARM: move signal handlers into a vdso-like page)  (Russell King, 1 file, -0/+2)
commit 8c0cc8a5d90bc7373a7a9e7f7a40eb41f51e03fc upstream. Olof reports that noMMU builds error out with:

    arch/arm/kernel/signal.c: In function 'setup_return':
    arch/arm/kernel/signal.c:413:25: error: 'mm_context_t' has no member named 'sigpage'

This shows one of the evilnesses of IS_ENABLED(). Get rid of it here and replace it with #ifdef's - and as no noMMU platform can make use of sigpage, depend on CONFIG_MMU not CONFIG_ARM_MPU. Reported-by: Olof Johansson <olof@lixom.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-12  ARM: make vectors page inaccessible from userspace  (Russell King, 1 file, -0/+2)
commit a5463cd3435475386cbbe7b06e01292ac169d36f upstream. If kuser helpers are not provided by the kernel, disable user access to the vectors page. With the kuser helpers gone, there is no reason for this page to be visible to userspace. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-08-12  ARM: move signal handlers into a vdso-like page  (Russell King, 2 files, -0/+5)
commit 48be69a026b2c17350a5ef18a1959a919f60be7d upstream. Move the signal handlers into a VDSO page rather than keeping them in the vectors page. This allows us to place them randomly within this page, and also map the page at a random location within userspace further protecting these code fragments from ROP attacks. The new VDSO page is also poisoned in the same way as the vector page. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-07-22  ARM: 7769/1: Cortex-A15: fix erratum 798181 implementation  (Marc Zyngier, 1 file, -1/+9)
commit 0d0752bca1f9a91fb646647aa4abbb21156f316c upstream. Looking into the active_asids array is not enough, as we also need to look into the reserved_asids array (they both represent processes that are currently running). Also, not holding the ASID allocator lock is racy, as another CPU could schedule that process and trigger a rollover, making the erratum workaround miss an IPI. Exposing this outside of context.c is a little ugly on the side, so let's define a new entry point that the erratum workaround can call to obtain the cpumask. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-06-24  ARM: 7773/1: PJ4B: Add support for errata 4742  (Gregory CLEMENT, 1 file, -0/+9)
This commit fixes the regression on Armada 370 (the kernel hangs during boot) introduced by the commit: "ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead". When coming out of either a Wait for Interrupt (WFI) or a Wait for Event (WFE) IDLE state, a specific timing sensitivity exists between the retiring WFI/WFE instructions and the newly issued subsequent instructions. This sensitivity can result in a CPU hang scenario. The workaround is to insert either a Data Synchronization Barrier (DSB) or Data Memory Barrier (DMB) command immediately after the WFI/WFE instruction. This commit was based on the work of Lior Amsalem, but heavily modified to apply the errata fix dynamically according to the processor type thanks to the suggestions of Russell King and Nicolas Pitre. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Willy Tarreau <w@1wt.eu> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-24  ARM: 7763/1: kernel: fix __cpu_logical_map default initialization  (Lorenzo Pieralisi, 2 files, -1/+3)
The __cpu_logical_map array is statically initialized to 0, which is a valid MPIDR value. To prevent issues with the current implementation, this patch defines an MPIDR_INVALID value, and statically initializes the __cpu_logical_map[] array to it. Entries in the arm_dt_init_cpu_maps() tmp_map array used to stash DT reg properties while parsing DT are initialized with the MPIDR_INVALID value as well for consistency. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-19  Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds, 1 file, -3/+1)
Pull ARM fixes from Russell King: "The larger changes this time are

 - "ARM: 7755/1: handle user space mapped pages in flush_kernel_dcache_page" which fixes more data corruption problems with O_DIRECT
 - "ARM: 7759/1: decouple CPU offlining from reboot/shutdown" which gets us back to working shutdown/reboot on SMP platforms
 - "ARM: 7752/1: errata: LoUIS bit field in CLIDR register is incorrect" which fixes a shutdown regression found in v3.10 on Versatile Express platforms.

The remainder are the quite small, maybe one or two line changes"

* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
  ARM: 7759/1: decouple CPU offlining from reboot/shutdown
  ARM: 7756/1: zImage/virt: remove hyp-stub.S during distclean
  ARM: 7755/1: handle user space mapped pages in flush_kernel_dcache_page
  ARM: 7754/1: Fix the CPU ID and the mask associated to the PJ4B
  ARM: 7753/1: map_init_section flushes incorrect pmd
  ARM: 7752/1: errata: LoUIS bit field in CLIDR register is incorrect
2013-06-17  ARM: 7755/1: handle user space mapped pages in flush_kernel_dcache_page  (Simon Baatz, 1 file, -3/+1)
Commit f8b63c1 made flush_kernel_dcache_page a no-op assuming that the pages it needs to handle are kernel mapped only. However, for example when doing direct I/O, pages with user space mappings may occur. Thus, continue to do lazy flushing if there are no user space mappings. Otherwise, flush the kernel cache lines directly. Signed-off-by: Simon Baatz <gmbnomis@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@vger.kernel.org> # 3.2+ Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-10  Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds, 1 file, -2/+9)
Pull ARM fixes from Russell King: "The biggest two fixes are fixing a compilation error with the decompressor, and a problem with our __my_cpu_offset implementation. Other changes are very trivial and small, which seems to be the way for most -rc stuff."

* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
  ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()
  ARM: 7750/1: update legacy CPU ID in decompressor cache support jump table
  ARM: 7743/1: compressed/head.S: work around new binutils warning
  ARM: 7742/1: topology: export cpu_topology
  ARM: 7737/1: fix kernel decompressor compilation error with CONFIG_DEBUG_SEMIHOSTING
2013-06-06  arch, mm: Remove tlb_fast_mode()  (Peter Zijlstra, 1 file, -23/+4)
Since the introduction of preemptible mmu_gather TLB fast mode has been broken. TLB fast mode relies on there being absolutely no concurrency; it frees pages first and invalidates TLBs later. However now we can get concurrency and stuff goes *bang*. This patch removes all tlb_fast_mode() code; it was found the better option vs trying to patch the hole by entangling tlb invalidation with the scheduler. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Reported-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-06-06  ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()  (Will Deacon, 1 file, -2/+9)
__my_cpu_offset is non-volatile, since we want its value to be cached when we access several per-cpu variables in a row with preemption disabled. This means that we rely on preempt_{en,dis}able to hazard with the operation via the barrier() macro, so that we can't end up migrating CPUs without reloading the per-cpu offset. Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile asm block as a side-effect, and will happily re-order it before other memory clobbers (including those in prempt_disable()) and cache the value. This has been observed to break the cmpxchg logic in the slub allocator, leading to livelock in kmem_cache_alloc in mainline kernels. This patch adds a dummy memory input operand to __my_cpu_offset, forcing it to be ordered with respect to the barrier() macro. Cc: <stable@vger.kernel.org> Cc: Rob Herring <rob.herring@calxeda.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
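
The fix, sketched from that description (close to, but not guaranteed to be, the exact upstream asm; the "Q" input on *sp is the dummy operand being described):

    static inline unsigned long __my_cpu_offset(void)
    {
        unsigned long off;
        register unsigned long *sp asm ("sp");

        /* Read TPIDRPRW; the dummy "Q" (*sp) memory input gives the asm a
         * dependency so it cannot be hoisted across barrier(). */
        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));

        return off;
    }
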
2013-05-21  Merge tag 'ux500-arm-soc-v3.10-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson into fixes  (Olof Johansson, 1 file, -3/+3)
From Linus Walleij, some ux500 fixes for the v3.10-rc series:

 - Fixes up the debug UART
 - Fix dangerous platform data double-assignment
 - Fix auxdata for the ethernet device
 - Select REGULATOR to satisfy Kconfig

* tag 'ux500-arm-soc-v3.10-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson:
  ARM: ux500: select REGULATOR
  ARM: ux500: Provide device enumeration number suffix for SMSC911x
  ARM: ux500: Fix incorrect DEBUG UART virtual addresses
  ARM: ux500: Remove duplicated assignment of ab8500_platdata

Signed-off-by: Olof Johansson <olof@lixom.net>
2013-05-16  Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds, 1 file, -4/+4)
Pull ARM fixes from Russell King: "A small number of fixes for stuff from the last merge window, and in one case (IRQ time accounting) the previous merge window."

* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
  ARM: 7720/1: ARM v6/v7 cmpxchg64 shouldn't clear upper 32 bits of the old/new value
  ARM: 7715/1: MCPM: adapt to GIC changes after upstream merge
  ARM: 7714/1: mmc: mmci: Ensure return value of regulator_enable() is checked
  ARM: 7712/1: Remove trailing whitespace in arch/arm/Makefile
  ARM: 7711/1: dove: fix Dove cpu type from V7 to PJ4
  ARM: finally enable IRQ time accounting config
2013-05-14  ARM: ux500: Fix incorrect DEBUG UART virtual addresses  (Lee Jones, 1 file, -3/+3)
A recent move to get rid of header files which were hindering multiplatform support forced address allocations out of the headers and into the files which were using them. We also lost some useful macros such as IO_ADDRESS(), so the physical -> virtual address conversion has been carried out manually in this case. Unfortunately, an incorrect value was used in the conversion. This patch rectifies the error and ensures earlyprintk works again. Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2013-05-14  ARM: 7720/1: ARM v6/v7 cmpxchg64 shouldn't clear upper 32 bits of the old/new value  (Jaccon Bastiaansen, 1 file, -4/+4)
The implementation of cmpxchg64() for the ARM v6 and v7 architecture casts parameters 2 and 3 (the old and new 64-bit values) to an unsigned long before calling the atomic_cmpxchg64() function. This clears the top 32 bits of the old and new values, resulting in the wrong values being compare-exchanged. Luckily, this only appears to be used for 64-bit sched_clock, which we don't (yet) have on ARM. This bug was introduced by commit 3e0f5a15f500 ("ARM: 7404/1: cmpxchg64: use atomic64 and local64 routines for cmpxchg64"). Cc: <stable@vger.kernel.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jaccon Bastiaansen <jaccon.bastiaansen@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
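
The fix amounts to widening the casts; a sketch of the corrected definition (built from the description, not a verbatim copy of asm/cmpxchg.h):

    /* Sketch: cast through unsigned long long so the high 32 bits of the
     * old and new values survive the call into atomic64_cmpxchg(). */
    #define cmpxchg64(ptr, o, n)                                           \
            ((__typeof__(*(ptr)))atomic64_cmpxchg(                         \
                    container_of((ptr), atomic64_t, counter),              \
                    (unsigned long long)(o),                               \
                    (unsigned long long)(n)))
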
2013-05-08  Merge tag '3.9-rc3-smp-6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen  (Linus Torvalds, 1 file, -0/+1)
Pull ARM Xen SMP updates from Stefano Stabellini: "This contains a bunch of Xen/ARM specific changes, including some fixes, SMP support for Xen on ARM, and moving the xenvm machine from mach-vexpress to mach-virt. The non-Xen files that are touched are arch/arm/Kconfig, to select ARM_PSCI on XEN, and arch/arm/boot/dts/Makefile, to build the xenvm DTB if CONFIG_ARCH_VIRT.

Highlights:
 - Move xenvm to mach-virt.
 - Implement SMP support in Xen on ARM.
 - Add support for machine reboot and power off via Xen hypercalls"

* tag '3.9-rc3-smp-6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen:
  xen/arm: remove duplicated include from enlighten.c
  xen/arm: use sched_op hypercalls for machine reboot and power off
  xenvm: add a simple PSCI node and a second cpu
  xen/arm: XEN selects ARM_PSCI
  xen: move the xenvm machine to mach-virt
  xen/arm: SMP support
  xen/arm: implement HYPERVISOR_vcpu_op
  xen/arm: actually pass a non-NULL percpu pointer to request_percpu_irq
2013-05-07  Merge tag 'cleanup-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds, 3 files, -16/+15)
Pull ARM SoC late cleanups from Arnd Bergmann: "These are cleanups and smaller changes that either depend on earlier feature branches or came in late during the development cycle. We normally try to get all cleanups early, so these are the exceptions:

 - A follow-up on the clocksource reworks, hopefully the last time we need to merge clocksource subsystem changes through arm-soc. A first set of patches was part of the original 3.10 arm-soc cleanup series because of interdependencies with timer drivers now moved out of arch/arm.
 - Migrating the SPEAr13xx platform away from using auxdata for DMA channel descriptions towards using information in device tree, based on the earlier SPEAr multiplatform series
 - A few follow-ups on the Atmel SAMA5 support and other changes for Atmel at91 based on the larger at91 reworks.
 - Moving the armada irqchip implementation to drivers/irqchip
 - Several OMAP cleanups following up on the larger series already merged in 3.10."

* tag 'cleanup-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (50 commits)
  ARM: OMAP4: change the device names in usb_bind_phy
  ARM: OMAP2+: Fix mismerge for timer.c between ff931c82 and da4a686a
  ARM: SPEAr: conditionalize SMP code
  ARM: arch_timer: Silence debug preempt warnings
  ARM: OMAP: remove unused variable
  serial: amba-pl011: fix !CONFIG_DMA_ENGINE case
  ata: arasan: remove the need for platform_data
  ARM: at91/sama5d34ek.dts: remove not needed compatibility string
  ARM: at91: dts: add MCI DMA support
  ARM: at91: dts: add i2c dma support
  ARM: at91: dts: set #dma-cells to the correct value
  ARM: at91: suspend both memory controllers on at91sam9263
  irqchip: armada-370-xp: slightly cleanup irq controller driver
  irqchip: armada-370-xp: move IRQ handler to avoid forward declaration
  irqchip: move IRQ driver for Armada 370/XP
  ARM: mvebu: move L2 cache initialization in init_early()
  devtree: add binding documentation for sp804
  ARM: integrator-cp: convert use CLKSRC_OF for timer init
  ARM: versatile: use OF init for sp804 timer
  ARM: versatile: add versatile dtbs to dtbs target
  ...
2013-05-07  Merge tag 'soc-for-linus-3' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds, 1 file, -1/+1)
Pull ARM SoC platform updates (part 3) from Arnd Bergmann: "This is the third and smallest of the SoC specific updates. Changes include:

 - SMP support for the Xilinx zynq platform
 - Smaller imx changes
 - LPAE support for mvebu
 - Moving the orion5x, kirkwood, dove and mvebu platforms to a common "mbus" driver for their internal devices.

It would be good to get feedback on the location of the "mbus" driver. Since this is used on multiple platforms and may potentially get shared with other architectures (powerpc and arm64), it was moved to drivers/bus/. We expect other similar drivers to get moved to the same place in order to avoid creating more top-level directories under drivers/ or cluttering up the messy drivers/misc/ even more."

* tag 'soc-for-linus-3' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (50 commits)
  ARM: imx: reset_controller may be disabled
  ARM: mvebu: Align the internal registers virtual base to support LPAE
  ARM: mvebu: Limit the DMA zone when LPAE is selected
  arm: plat-orion: remove addr-map code
  arm: mach-mv78xx0: convert to use the mvebu-mbus driver
  arm: mach-orion5x: convert to use mvebu-mbus driver
  arm: mach-dove: convert to use mvebu-mbus driver
  arm: mach-kirkwood: convert to use mvebu-mbus driver
  arm: mach-mvebu: convert to use mvebu-mbus driver
  ARM i.MX53: set CLK_SET_RATE_PARENT flag on the tve_ext_sel clock
  ARM i.MX53: tve_di clock is not part of the CCM, but of TVE
  ARM i.MX53: make tve_ext_sel propagate rate change to PLL
  ARM i.MX53: Remove unused tve_gate clkdev entry
  ARM i.MX5: Remove tve_sel clock from i.MX53 clock tree
  ARM: i.MX5: Add PATA and SRTC clocks
  ARM: imx: do not bring up unavailable cores
  ARM: imx: add initial imx6dl support
  ARM: imx1: mm: add call to mxc_device_init
  ARM: imx_v4_v5_defconfig: Add CONFIG_GPIO_SYSFS
  ARM: imx_v6_v7_defconfig: Select CONFIG_PERF_EVENTS
  ...
2013-05-07  Merge branch 'late/clksrc' into late/cleanup  (Arnd Bergmann, 4 files, -24/+15)
There is no reason to keep the clksrc cleanups separate from the other cleanups, and this resolves some merge conflicts.

Conflicts:
  arch/arm/mach-spear/spear13xx.c
  drivers/irqchip/Makefile
2013-05-06  Merge tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 3 files, -21/+55)
Pull kvm updates from Gleb Natapov: "Highlights of the updates are:

 general:
 - new emulated device API
 - legacy device assignment is now optional
 - irqfd interface is more generic and can be shared between arches

 x86:
 - VMCS shadow support and other nested VMX improvements
 - APIC virtualization and Posted Interrupt hardware support
 - Optimize mmio spte zapping

 ppc:
 - BookE: in-kernel MPIC emulation with irqfd support
 - Book3S: in-kernel XICS emulation (incomplete)
 - Book3S: HV: migration fixes
 - BookE: more debug support preparation
 - BookE: e6500 support

 ARM:
 - reworking of Hyp idmaps

 s390:
 - ioeventfd for virtio-ccw

 And many other bug fixes, cleanups and improvements"

* tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (204 commits)
  kvm: Add compat_ioctl for device control API
  KVM: x86: Account for failing enable_irq_window for NMI window request
  KVM: PPC: Book3S: Add API for in-kernel XICS emulation
  kvm/ppc/mpic: fix missing unlock in set_base_addr()
  kvm/ppc: Hold srcu lock when calling kvm_io_bus_read/write
  kvm/ppc/mpic: remove users
  kvm/ppc/mpic: fix mmio region lists when multiple guests used
  kvm/ppc/mpic: remove default routes from documentation
  kvm: KVM_CAP_IOMMU only available with device assignment
  ARM: KVM: iterate over all CPUs for CPU compatibility check
  KVM: ARM: Fix spelling in error message
  ARM: KVM: define KVM_ARM_MAX_VCPUS unconditionally
  KVM: ARM: Fix API documentation for ONE_REG encoding
  ARM: KVM: promote vfp_host pointer to generic host cpu context
  ARM: KVM: add architecture specific hook for capabilities
  ARM: KVM: perform HYP initilization for hotplugged CPUs
  ARM: KVM: switch to a dual-step HYP init code
  ARM: KVM: rework HYP page table freeing
  ARM: KVM: enforce maximum size for identity mapped code
  ARM: KVM: move to a KVM provided HYP idmap
  ...
Pull kvm updates from Gleb Natapov: "Highlights of the updates are: general: - new emulated device API - legacy device assignment is now optional - irqfd interface is more generic and can be shared between arches x86: - VMCS shadow support and other nested VMX improvements - APIC virtualization and Posted Interrupt hardware support - Optimize mmio spte zapping ppc: - BookE: in-kernel MPIC emulation with irqfd support - Book3S: in-kernel XICS emulation (incomplete) - Book3S: HV: migration fixes - BookE: more debug support preparation - BookE: e6500 support ARM: - reworking of Hyp idmaps s390: - ioeventfd for virtio-ccw And many other bug fixes, cleanups and improvements" * tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (204 commits) kvm: Add compat_ioctl for device control API KVM: x86: Account for failing enable_irq_window for NMI window request KVM: PPC: Book3S: Add API for in-kernel XICS emulation kvm/ppc/mpic: fix missing unlock in set_base_addr() kvm/ppc: Hold srcu lock when calling kvm_io_bus_read/write kvm/ppc/mpic: remove users kvm/ppc/mpic: fix mmio region lists when multiple guests used kvm/ppc/mpic: remove default routes from documentation kvm: KVM_CAP_IOMMU only available with device assignment ARM: KVM: iterate over all CPUs for CPU compatibility check KVM: ARM: Fix spelling in error message ARM: KVM: define KVM_ARM_MAX_VCPUS unconditionally KVM: ARM: Fix API documentation for ONE_REG encoding ARM: KVM: promote vfp_host pointer to generic host cpu context ARM: KVM: add architecture specific hook for capabilities ARM: KVM: perform HYP initilization for hotplugged CPUs ARM: KVM: switch to a dual-step HYP init code ARM: KVM: rework HYP page table freeing ARM: KVM: enforce maximum size for identity mapped code ARM: KVM: move to a KVM provided HYP idmap ...