path: root/arch/arm/kernel
Age | Commit message | Author | Files | Lines
2011-01-13 | ARM: core irq_data conversion. | Lennert Buytenhek | 1 | -7/+10
Signed-off-by: Lennert Buytenhek <buytenh@secretlab.ca>
2011-01-07 | Merge branch 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm | Linus Torvalds | 27 | -3189/+3989
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (416 commits)
  ARM: DMA: add support for DMA debugging
  ARM: PL011: add DMA burst threshold support for ST variants
  ARM: PL011: Add support for transmit DMA
  ARM: PL011: Ensure IRQs are disabled in UART interrupt handler
  ARM: PL011: Separate hardware FIFO size from TTY FIFO size
  ARM: PL011: Allow better handling of vendor data
  ARM: PL011: Ensure error flags are clear at startup
  ARM: PL011: include revision number in boot-time port printk
  ARM: vexpress: add sched_clock() for Versatile Express
  ARM i.MX53: Make MX53 EVK bootable
  ARM i.MX53: Some bug fix about MX53 MSL code
  ARM: 6607/1: sa1100: Update platform device registration
  ARM: 6606/1: sa1100: Fix platform device registration
  ARM i.MX51: rename IPU irqs
  ARM i.MX51: Add ipu clock support
  ARM: imx/mx27_3ds: Add PMIC support
  ARM: DMA: Replace page_to_dma()/dma_to_page() with pfn_to_dma()/dma_to_pfn()
  mx51: fix usb clock support
  MX51: Add support for usb host 2
  arch/arm/plat-mxc/ehci.c: fix errors/typos
  ...
2011-01-07 | Merge branch 'devel-stable' into devel | Russell King | 11 | -296/+538
Conflicts:
  arch/arm/mach-pxa/clock.c
  arch/arm/mach-pxa/clock.h
2011-01-07 | Merge branch 'pgt' (early part) into devel | Russell King | 2 | -42/+6
2011-01-07 | Merge branch 'misc' into devel | Russell King | 13 | -351/+676
Conflicts:
  arch/arm/Kconfig
  arch/arm/common/Makefile
  arch/arm/kernel/Makefile
  arch/arm/kernel/smp.c
2011-01-07 | Merge branch 'smp' into misc | Russell King | 13 | -301/+355
Conflicts:
  arch/arm/kernel/entry-armv.S
  arch/arm/mm/ioremap.c
2011-01-05 | Merge branch 'clksrc' into devel | Russell King | 11 | -2488/+2733
Conflicts:
  arch/arm/mach-vexpress/v2m.c
  arch/arm/plat-omap/counter_32k.c
  arch/arm/plat-versatile/Makefile
2011-01-05 | Merge branches 'ftrace', 'gic', 'io', 'kexec', 'mod', 'sa11x0', 'sh' and 'versatile' into devel | Russell King | 3 | -61/+85
2011-01-05 | Merge commit 'v2.6.37' into perf/core | Ingo Molnar | 2 | -1/+6
Merge reason: Add the final .37 tree.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-24 | ARM: provide an early platform initialization hook | Russell King | 1 | -0/+3
This allows platforms to hook into the initialization early to set up things like scheduler clocks, etc.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-24 | ARM: simplify early machine init hooks | Russell King | 3 | -16/+10
Rather than storing each machine init hook separately, store a pointer to the machine description record and dereference this instead. This pointer is only available while the init sections are present, which is not a problem as we only use it from init code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-24 | ARM: 6538/1: Subarch IRQ handler macros V3 | Magnus Damm | 1 | -29/+2
Per-subarch interrupt handler macros, V3. This patch breaks out code from the irq_handler macro into arch_irq_handler and arch_irq_handler_default. The macros are put in the header file "entry-macro-multi.S". The arch_irq_handler_default macro is designed to be used by irq_handler in entry-armv.S, while arch_irq_handler is suitable for per-subarch use.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-24 | ARM: 6532/1: Allow machine to specify its own IRQ handlers at run-time | eric miao | 2 | -2/+18
Normally, different ARM platforms have different ways to decode the IRQ hardware status and demultiplex to the corresponding IRQ handler. This is highly optimized by the irq_handler macro in entry-armv.S, and each machine defines its own macro to decode the IRQ number. However, this prevents multiple machine classes from being built into a single kernel. By allowing each machine to specify its own handler, and making the function pointer 'handle_arch_irq' point to it at run time, this can be solved. Introduce CONFIG_MULTI_IRQ_HANDLER to allow both solutions to work. Compared with the highly optimized irq_handler macro, the new function must be written with care so as not to lose too much performance, and the IPI handling on SMP is expected to move to the provided arch IRQ handler as well. The assembly code to invoke handle_arch_irq is optimized by Russell King.
Signed-off-by: Eric Miao <eric.miao@canonical.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
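[Editor's illustration] A minimal sketch of the run-time hook idea described above, not the patch itself: the decoder function, the init function and the MY_INTC_STATUS register below are hypothetical, and real machines typically wired the pointer up through their machine description rather than assigning it directly.

#include <linux/io.h>
#include <linux/irq.h>
#include <asm/mach/irq.h>

/* Hypothetical machine-level decoder: read the interrupt controller's
 * status register, turn it into a Linux IRQ number, and dispatch it
 * through the generic IRQ layer. */
static void my_mach_handle_irq(struct pt_regs *regs)
{
	u32 irqnr = readl_relaxed(MY_INTC_STATUS) & 0x1f;

	asm_do_IRQ(irqnr, regs);
}

static void __init my_mach_init_irq(void)
{
	/* ... set up the interrupt controller hardware here ... */

	/* With CONFIG_MULTI_IRQ_HANDLER, install this machine's decoder
	 * at run time instead of relying on a build-time macro. */
	handle_arch_irq = my_mach_handle_irq;
}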
2010-12-24 | ARM: 6540/1: Stop irqsoff trace on return to user | Todd Android Poynor | 1 | -0/+6
If the irqsoff tracer is in use, stop tracing the interrupt disable interval when returning to userspace. Tracing userspace execution time as interrupts disabled time is not helpful for kernel performance analysis purposes. Only do so if the irqsoff tracer is enabled, to avoid overhead for lockdep, which doesn't care. Signed-off-by: Todd Poynor <toddpoynor@google.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-23 | Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel-stable | Russell King | 6 | -14/+147
2010-12-23 | ARM: sched_clock: provide common infrastructure for sched_clock() | Russell King | 2 | -0/+70
Provide common sched_clock() infrastructure for platforms to use to create a 64-bit ns based sched_clock() implementation from a counter running at a non-variable clock rate.

This implementation is based upon maintaining an epoch for the counter and an epoch for the nanosecond time. When we desire a sched_clock() time, we calculate the number of counter ticks since the last epoch update, convert this to nanoseconds and add to the epoch nanoseconds.

We regularly refresh these epochs within the counter wrap interval. We perform a similar calculation as above, and store the new epochs.

We read and write the epochs in such a way that sched_clock() can easily (and locklessly) detect when an update is in progress, and repeat the loading of these constants when they're known not to be stable. The one caveat is that sched_clock() is not called in the middle of an update. We achieve that by disabling IRQs.

Finally, if the clock rate is known at compile time, the counter to ns conversion factors can be specified, allowing sched_clock() to be tightly optimized. We ensure that these factors are correct by providing an initialization function which performs a run-time check.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Olof Johansson <olof@lixom.net>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
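[Editor's illustration] A rough sketch of the epoch scheme described above. It is illustrative only: the variable names, the duplicated-epoch layout and the counter argument are assumptions, and the real code also handles counter masking and performs the epoch refresh with IRQs disabled.

#include <linux/types.h>
#include <asm/system.h>		/* smp_rmb() in kernels of this vintage */

static u32 mult, shift;		/* counter -> ns conversion factors   */
static u32 epoch_cyc;		/* counter value at last epoch update */
static u32 epoch_cyc_copy;	/* second copy, written last/first    */
static u64 epoch_ns;		/* ns value at last epoch update      */

static inline u64 cyc_to_ns(u64 cyc)
{
	return (cyc * mult) >> shift;
}

/* sched_clock() = epoch_ns + ns(ticks since epoch_cyc). Comparing the
 * two copies of epoch_cyc lets readers detect an in-progress epoch
 * update and retry the loads. */
static unsigned long long sched_clock_sketch(u32 counter_now)
{
	u64 ns;
	u32 cyc;

	do {
		cyc = epoch_cyc;
		smp_rmb();
		ns = epoch_ns;
		smp_rmb();
	} while (cyc != epoch_cyc_copy);

	return ns + cyc_to_ns(counter_now - cyc);
}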
2010-12-22 | ARM: pgtable: collect up identity mapping functions | Russell King | 1 | -34/+0
We have two places where we create identity mappings - one when we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also collect the identity mapping deletion function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-22 | ARM: pgtable: remove L2 cache flushes for SMP page table bring-up | Russell King | 1 | -2/+0
The MMU is always configured to read page tables from the L2 cache so there's little point flushing them out of the L2 cache back to RAM. Remove these flushes. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: ensure frame pointer is reinitialized for soft-CPU hotplug | Russell King | 1 | -0/+1
When we soft-CPU hotplug a CPU, we reset the stack pointer and jump back to start_secondary(). This allows us to restart as if the CPU was actually reset. However, we weren't resetting the frame pointer, which could cause problems with backtracing. Reset the frame pointer to zero (which means no parent frame) just like the early assembly code also does. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: split out software TLB maintenance broadcasting | Russell King | 3 | -127/+140
smp.c is becoming too large, so split out the TLB maintenance broadcasting into a separate smp_tlb.c file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: localtimer: clean up local timer on hot unplug | Russell King | 2 | -11/+18
When a CPU is hot unplugged, the generic tick code cleans up the clock event device, but fails to call down to the device's set_mode function to actually shut the device down. To work around this, we've historically had a local_timer_stop() callback out of the hotplug code. However, this adds needless complexity when we have the clock event device itself available. Explicitly call the clock event device's set_mode function with CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shut down without any special external callbacks. When/if the generic code is fixed, percpu_timer_stop() can be killed off.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
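[Editor's illustration] In code, the idea amounts to roughly the following. This is a sketch of what the commit describes, assuming a per-CPU clock event variable named percpu_clockevent; the actual file layout may differ.

#include <linux/clockchips.h>
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(struct clock_event_device, percpu_clockevent);

/* Shut the dying CPU's local clock event device down by calling its
 * set_mode hook directly, since the generic tick code does not do
 * this for us on hot unplug. */
static void percpu_timer_stop(void)
{
	unsigned int cpu = smp_processor_id();
	struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu);

	evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
}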
2010-12-20 | ARM: smp: improve CPU bringup failure diagnostics | Russell King | 1 | -9/+5
We used to print a bland error message which gave no clue as to the failure when we failed to bring up a secondary CPU. Resolve this by separating the two failure cases.

If boot_secondary() fails, we print a message indicating the returned error code from boot_secondary():
  "CPU%u: failed to boot: %d\n", cpu, ret
However, if boot_secondary() succeeded, but the CPU did not appear to mark itself online within the timeout, indicate that it failed to come online:
  "CPU%u: failed to come online\n", cpu
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels. | Dave Martin | 2 | -5/+12
* __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split into halfwords in case of misalignment, since we can't rely on unaligned accesses working before turning the MMU on. No attempt is made to optimise the aligned case, since the number of fixups is typically small, and it seems best to keep the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity-check to ALT_UP() to ensure that the content really is the right size (4 bytes). (No check is done for ALT_SMP(). Possibly, this could be fixed by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus ALT_SMP...SMP_UP_B) into two macros. In the first case, ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro limitations) has not been modified: the affected instruction (mov) has no 16-bit encoding, so the correct instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
  smp_dmb arm  @ assumes 4-byte instructions (for ARM code, e.g. kuser)
  smp_dmb      @ uses W() to ensure 4-byte instructions for ALT_SMP()
  This avoids assembly failures due to use of W() inside smp_dmb when assembling pure-ARM code in the vectors page. There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2 currently assumes little-endian order).

Tested using a single generic realview kernel on:
  ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
  ARM RealView PBX-A9 (SMP)

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: CPU hotplug: ensure correct ordering of unplug | Russell King | 1 | -1/+3
Don't call idle_task_exit() with interrupts disabled, and ensure that we have a memory barrier after interrupts are disabled but before signalling that this CPU has shut down. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: CPU hotplug: move cpu_killed completion to core code | Russell King | 1 | -1/+13
We always need to wait for the dying CPU to reach a safe state before taking it down, irrespective of the requirements of the platform. Move the completion code into the ARM SMP hotplug code rather than having each platform re-implement this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: consolidate trace_hardirqs_off() into common SMP code | Russell King | 1 | -0/+1
All platforms call trace_hardirqs_off() in their secondary startup code, so move this into the core SMP code - it doesn't need to be in the per-platform code. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: consolidate the common parts of smp_prepare_cpus() | Russell King | 1 | -11/+38
There is a certain amount of smp_prepare_cpus() which doesn't belong in the platform support code - that is, code which is invariant to the SMP implementation. Move this code into arch/arm/kernel/smp.c, and add a platform_ prefix to the original function. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: ensure smp_send_stop() waits for CPUs to stop | Russell King | 1 | -3/+15
Wait for CPUs to indicate that they've stopped, after sending the stop IPI, rather than blindly continuing on and hoping that they've stopped in time. Print a warning if we fail to stop the other CPUs. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
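[Editor's illustration] A simplified sketch of the wait-with-timeout idea, not the literal patch: smp_cross_call() and IPI_CPU_STOP are the ARM SMP internals, and the one-second budget and warning text here are assumptions.

#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/smp.h>

/* After requesting the stop, spin for up to ~1s waiting for the other
 * CPUs to drop out of the online mask, and warn if any never do. */
static void smp_send_stop_sketch(void)
{
	unsigned long timeout = 1000000;	/* 1us steps */
	struct cpumask mask;

	cpumask_copy(&mask, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &mask);
	smp_cross_call(&mask, IPI_CPU_STOP);

	while (num_online_cpus() > 1 && timeout--)
		udelay(1);

	if (num_online_cpus() > 1)
		pr_warning("SMP: failed to stop secondary CPUs\n");
}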
2010-12-20 | ARM: SMP: use more sane register allocation for __fixup_smp_on_up | Russell King | 1 | -17/+22
Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to understand which registers can be modified. Also document which registers hold values which must be preserved. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: collect IPI and local timer IRQs for /proc/stat | Russell King | 1 | -0/+15
The IPI and local timer interrupts weren't being properly accounted for in /proc/stat. Collect them from the irq_stat structure, and return their sum. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
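[Editor's illustration] A sketch of what collecting those counts might look like. The function name, the ipi_irqs[]/local_timer_irqs fields and NR_IPI are assumed names; the real accounting lives in the arch irq_stat structure and is read through the accessors introduced elsewhere in this series.

#include <linux/kernel_stat.h>

/* Sum the per-CPU IPI counters (and the local timer count) out of
 * irq_stat so the generic code can fold them into /proc/stat. */
static u64 smp_irq_stat_cpu_sketch(unsigned int cpu)
{
	u64 sum = 0;
	int i;

	for (i = 0; i < NR_IPI; i++)
		sum += __get_irq_stat(cpu, ipi_irqs[i]);

#ifdef CONFIG_LOCAL_TIMERS
	sum += __get_irq_stat(cpu, local_timer_irqs);
#endif

	return sum;
}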
2010-12-20 | ARM: SMP: provide individual IPI interrupt statistics | Russell King | 2 | -6/+24
This separates out the individual IPI interrupt counts from the total IPI count, which allows better visibility of what IPIs are being used for. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: pxa: add iwmmx support for PJ4 | Haojian Zhuang | 3 | -13/+137
iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores, but the instructions for accessing CP0 and CP1 changed in PJ4. Add more files to support iwmmxt on the PJ4 core.
Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-12-20 | ARM: fix /proc/interrupts formatting | Russell King | 3 | -15/+20
As per x86, align the initial column according to how many IRQs we have. Also, provide an English explanation for the 'LOC:' and 'IPI:' lines.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: move ipi_count into irq_stat structure | Russell King | 1 | -12/+2
Move the ipi_count into irq_stat, which allows the ipi_data structure to be entirely removed. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: provide accessors for irq_stat data | Russell King | 1 | -2/+2
Provide __inc_irq_stat() and __get_irq_stat() to increment and read the irq stat counters. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
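[Editor's illustration] In the spirit of this commit, the accessors can be very thin wrappers; a sketch, assuming the generic __IRQ_STAT() indexing helper is used underneath:

#include <linux/irq_cpustat.h>

/* Thin wrappers so callers don't open-code irq_stat[] access. */
#define __inc_irq_stat(cpu, member)	__IRQ_STAT(cpu, member)++
#define __get_irq_stat(cpu, member)	__IRQ_STAT(cpu, member)

Keeping the access behind macros makes it easy to change the underlying storage later without touching every call site.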
2010-12-20 | ARM: include local timer irq stats only when local timers configured | Russell King | 2 | -12/+14
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-20 | ARM: SMP: remove send_ipi_message() | Russell King | 1 | -13/+5
send_ipi_message() does nothing except call smp_cross_call(). As this is a static function, nothing external to this file calls it, so we can easily clean up this now unnecessary indirection. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-18 | Merge branch 'hw-breakpoint' of git://repo.or.cz/linux-2.6/linux-wd into devel-stable | Russell King | 4 | -226/+344
2010-12-18 | ARM: smp: avoid incrementing mm_users on CPU startup | Russell King | 1 | -1/+0
We should not be incrementing mm_users when we startup a secondary CPU - doing so results in mm_users incrementing by one each time we hotplug a CPU, which will eventually wrap, and will cause problems. Other architectures such as x86 do not increment mm_users, but only mm_count, so we follow that pattern. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
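[Editor's illustration] A sketch of the distinction between the two counters; the helper name is made up, and the real logic lives in secondary_start_kernel().

#include <linux/mm_types.h>
#include <linux/sched.h>

/* A secondary CPU borrows init_mm as its active_mm: pin the structure
 * itself via mm_count, and leave mm_users (the count of real
 * address-space users) alone. */
static void secondary_cpu_adopt_mm(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_count);	/* keep the struct mm alive      */
	current->active_mm = mm;	/* ...but do not bump mm_users   */
	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
}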
2010-12-16 | perf: Dynamic pmu types | Peter Zijlstra | 1 | -1/+1
Extend the perf_pmu_register() interface to allow for named and dynamic pmu types. Because we need to support the existing static types we cannot use dynamic types for everything, hence provide a type argument. If we want to enumerate the PMUs they need a name, provide one. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20101117222056.259707703@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-15 | ARM: hw_breakpoint: do not fail initcall if monitor mode is disabled | Will Deacon | 1 | -29/+25
The debug registers can only be manipulated from software if monitor debug mode is enabled. On some cores, this can never be enabled (i.e. the corresponding bit in the DSCR is RAZ/WI). This patch ensures we can handle this hardware configuration and fail gracefully, rather than blow up the kernel during boot. Reported-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-14 | ARM: GIC: move enablement of PPI interrupts to gic.c | Russell King | 1 | -6/+1
Avoid adding nasty genirq-specific code to local timers to enable PPI interrupts. Instead, provide a gic function to do this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-07 | Merge commit 'v2.6.37-rc5' into perf/core | Ingo Molnar | 3 | -1/+10
Merge reason: Pick up the latest -rc. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-06 | ARM: hw_breakpoint: fix warnings generated by sparse | Will Deacon | 1 | -7/+13
sparse doesn't like per-cpu accesses such as:

  static DEFINE_PER_CPU(struct perf_event *, foo[MAXLEN]);
  struct perf_event **bar = __get_cpu_var(foo);

and shouts quite loudly about it:

  | warning: incorrect type in assignment (different modifiers)
  |    expected struct perf_event **slots
  |    got struct perf_event *[noderef] *<noident>

This patch adds casts to these sorts of assignments in hw_breakpoint.c in order to silence the warnings.

Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
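[Editor's illustration] The fix amounts to spelling the conversion out with an explicit cast at the assignment. A sketch, reusing the hypothetical 'foo' array from the warning above (the helper function and array size are made up):

#include <linux/percpu.h>
#include <linux/perf_event.h>

static DEFINE_PER_CPU(struct perf_event *, foo[8]);

/* The per-cpu array decays to a pointer carrying sparse's address-space
 * annotation; cast it explicitly to keep sparse quiet. */
static struct perf_event **this_cpu_foo_slots(void)
{
	return (struct perf_event **)__get_cpu_var(foo);
}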
2010-12-06 | ARM: ptrace: fix style issue with hw_breakpoint interface | Will Deacon | 1 | -2/+2
This patch fixes a trivial style issue in ptrace.c. Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-06 | ARM: hw_breakpoint: disallow per-cpu breakpoints without overflow handler | Will Deacon | 1 | -2/+4
Single-stepping a breakpoint requires us to disable it temporarily so that we don't get stuck in a recursive debug trap. With per-cpu breakpoints this presents a problem where an interrupt can be taken before the single-step has completed and a new task is eventually scheduled. This new task will not hit the breakpoint because it will have been disabled during the previous handling code. This patch disallows per-cpu breakpoints on ARM when an overflow handler is not present. A similar effect can be created by placing breakpoints on a shell and then running applications there. Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-06 | ARM: hw_breakpoint: unify single-stepping code for watchpoints and breakpoints | Will Deacon | 1 | -41/+40
The single-stepping code is currently different depending on whether we are stepping over a breakpoint or a watchpoint. There is no good reason for this, so let's sort it out. This patch adds functions for enabling/disabling single-step for a particular hw_breakpoint and integrates this with the exception handling code. Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-06 | ARM: hw_breakpoint: do not allocate new breakpoints with preemption disabled | Will Deacon | 1 | -57/+77
The watchpoint single-stepping code calls register_user_hw_breakpoint to register a mismatch breakpoint for stepping over the watchpoint. This is performed with preemption disabled, which is unsafe as we may end up scheduling whilst in_atomic(). Furthermore, using the perf API is rather overkill since we are already in the hw-breakpoint backend and only require access to reserved breakpoints anyway. This patch reworks the watchpoint stepping code so that we don't require another perf_event for the mismatch breakpoint. Instead, we hold a separate arch_hw_breakpoint_ctrl struct inside the watchpoint which is used exclusively for stepping. We can check whether or not stepping is enabled when installing or uninstalling the watchpoint and operate on the breakpoint accordingly. Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-06 | ARM: hw_breakpoint: don't advertise reserved breakpoints | Will Deacon | 1 | -89/+117
To permit handling of watchpoint exceptions without signalling a debugger, it is necessary to reserve breakpoint registers for in-kernel use only. This patch ensures that we record and subtract the number of reserved breakpoints from the number of usable breakpoint registers that we advertise to userspace via the ptrace API. Signed-off-by: Will Deacon <will.deacon@arm.com>
2010-12-06 | ARM: hw_breakpoint: disable preemption during debug exception handling | Will Deacon | 3 | -5/+36
On ARM, debug exceptions occur in the form of data or prefetch aborts. One difference is that debug exceptions require access to per-cpu banked registers and data structures which are not saved in the low-level exception code. For kernels built with CONFIG_PREEMPT, there is an unlikely scenario that the debug handler ends up running on a different CPU from the one that originally signalled the event, resulting in random data being read from the wrong registers. This patch adds a debug_entry macro to the low-level exception handling code which checks whether the taken exception is a debug exception. If it is, the preempt count for the faulting process is incremented. After the debug handler has finished, the count is decremented. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>