path: root/arch/powerpc/kernel
Age  Commit message  (Author, files changed, lines -/+)

2008-12-21  powerpc/mm: Introduce MMU features  (Benjamin Herrenschmidt, 11 files, -9/+156)
We're soon running out of CPU features and I need to add some new ones for various MMU related bits, so this patch separates the MMU features from the CPU features. I moved over the 32-bit MMU related ones, added base features for MMU type families, but didn't move over any 64-bit only feature yet.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

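A hedged sketch (not part of the commit) of what splitting MMU features out of the CPU feature word can look like; the flag names, the partial struct cpu_spec, and the mmu_has_feature() helper shown here are illustrative assumptions based on the description above.

    /* Illustrative only: these bit names are made up for the sketch. */
    #define MMU_FTR_HPTE_TABLE      0x00000001      /* classic hash-table MMU */
    #define MMU_FTR_TYPE_44x        0x00000002      /* 440-family software-loaded TLB */

    struct cpu_spec {
            /* ...existing fields... */
            unsigned long   cpu_features;   /* CPU features, as before */
            unsigned long   mmu_features;   /* new, separate MMU feature word */
    };

    extern struct cpu_spec *cur_cpu_spec;

    static inline int mmu_has_feature(unsigned long feature)
    {
            return (cur_cpu_spec->mmu_features & feature) != 0;
    }
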
2008-12-21  powerpc/mm: Split mmu_context handling  (Benjamin Herrenschmidt, 4 files, -4/+14)
This splits the mmu_context handling between 32-bit hash based processors, 64-bit hash based processors and everybody else. This is preliminary work for adding SMP support for BookE processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-21  powerpc/4xx: Extended DCR support v2  (Benjamin Herrenschmidt, 1 file, -2/+2)
This adds support for "extended" DCR addressing via the indirect mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and later). I enabled the feature for now only on AMCC 460 chips.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

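For illustration (not taken from the commit): the difference between the classic and the indirect DCR access is that mfdcrx/mtdcrx take the DCR number in a register instead of encoding it in the opcode, so the DCR address no longer has to be a compile-time constant. The inline-function form below is a sketch; the kernel's real accessors are macros.

    static inline unsigned int mfdcrx(unsigned int dcrn)
    {
            unsigned int val;

            /* DCR number supplied at run time in a GPR */
            asm volatile("mfdcrx %0,%1" : "=r" (val) : "r" (dcrn));
            return val;
    }

    static inline void mtdcrx(unsigned int dcrn, unsigned int val)
    {
            asm volatile("mtdcrx %0,%1" : : "r" (dcrn), "r" (val));
    }
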
2008-12-21  powerpc: Convert sysfs cache code to of_find_next_cache_node()  (Nathan Lynch, 1 file, -6/+1)
Using the common code means that more complete cache information will be provided in sysfs on platforms that don't use the l2-cache property convention.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-21  powerpc: Convert cpu_to_l2cache() to of_find_next_cache_node()  (Nathan Lynch, 1 file, -7/+4)
The smp code uses cache information to populate cpu_core_map; change it to use common code for cache lookup.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-21  powerpc: Add of_find_next_cache_node()  (Nathan Lynch, 1 file, -0/+31)
We have more than one piece of code that looks up cache nodes manually using the "l2-cache" property. Add a common helper routine which does this and also handles ePAPR's "next-level-cache" property as well as the PowerMac convention.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

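A minimal sketch of such a helper, assuming the properties hold a phandle to the next cache level and that PowerMac-style firmware instead hangs the cache as a child node of the CPU; this approximates rather than reproduces the committed code.

    #include <linux/of.h>
    #include <linux/string.h>

    struct device_node *of_find_next_cache_node(struct device_node *np)
    {
            struct device_node *child;
            const phandle *handle;

            /* ePAPR name first, then the traditional "l2-cache" name */
            handle = of_get_property(np, "next-level-cache", NULL);
            if (!handle)
                    handle = of_get_property(np, "l2-cache", NULL);
            if (handle)
                    return of_find_node_by_phandle(*handle);

            /* Some PowerMacs describe the cache as a child of the CPU node */
            for_each_child_of_node(np, child)
                    if (child->type && !strcmp(child->type, "cache"))
                            return child;

            return NULL;
    }
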
2008-12-19  Merge branches 'tracing/ftrace', 'tracing/ring-buffer' and 'tracing/urgent' into tracing/core  (Ingo Molnar, 3 files, -1/+8)
Conflicts:
    include/linux/ftrace.h

2008-12-18  Merge branch 'linus' into cpus4096  (Ingo Molnar, 1 file, -2/+2)

2008-12-18  Merge branch 'linux-2.6' into next  (Paul Mackerras, 1 file, -2/+2)

2008-12-17  powerpc/fsl-booke: Fix the miss interrupt restore  (Dave Liu, 1 file, -2/+2)
Commit e5e774d8833de1a0037be2384efccadf16935675 ("powerpc/fsl-booke: Fix problem with _tlbil_va being interrupted") introduced an issue that causes oopses like this one:

    Kernel BUG at c00b19fc [verbose debug info unavailable]
    Oops: Exception in kernel mode, sig: 5 [#1]
    MPC8572 DS
    Modules linked in:
    NIP: c00b19fc LR: c00b1c34 CTR: c0064e88
    REGS: ef02b7b0 TRAP: 0700   Not tainted  (2.6.28-rc8-00057-g1bda712)
    MSR: 00021000 <ME>  CR: 44048028  XER: 20000000
    TASK = ef02c000[1] 'init' THREAD: ef02a000
    GPR00: 00000001 ef02b860 ef02c000 eec201a0 c0dec2c0 00000000 000078a1 00000400
    GPR08: c00b4e40 000078a1 c048ec00 a1780000 44048028 ecd26917 00000001 ef02b948
    GPR16: ffffffea 0000020c 00000000 00000000 00000003 0000000a 00000000 000078a1
    GPR24: eec201a0 00000000 ed849000 00000400 ef02b95c 00000001 ef02b978 ef02b984
    NIP [c00b19fc] __find_get_block+0x24/0x238
    LR [c00b1c34] __getblk+0x24/0x2a0
    Call Trace:
    [ef02b860] [c017b768] generic_make_request+0x290/0x328 (unreliable)
    [ef02b8b0] [c00b1c34] __getblk+0x24/0x2a0
    [ef02b910] [c00b4ae4] __bread+0x14/0xf8
    [ef02b920] [c00fc228] ext2_get_branch+0xf0/0x138
    [ef02b940] [c00fcc88] ext2_get_block+0xb8/0x828
    [ef02ba00] [c00bbdc8] do_mpage_readpage+0x188/0x808
    [ef02bac0] [c00bc5b4] mpage_readpages+0xec/0x144
    [ef02bb50] [c00fba38] ext2_readpages+0x24/0x34
    [ef02bb60] [c006ade0] __do_page_cache_readahead+0x150/0x230
    [ef02bbb0] [c0064bdc] filemap_fault+0x31c/0x3e0
    [ef02bbf0] [c00728b8] __do_fault+0x60/0x5b0
    [ef02bc50] [c0011e0c] do_page_fault+0x2d8/0x4c4
    [ef02bd10] [c000ed90] handle_page_fault+0xc/0x80
    [ef02bdd0] [c00c7adc] set_brk+0x74/0x9c
    [ef02bdf0] [c00c9274] load_elf_binary+0x70c/0x1180
    [ef02be70] [c00945f0] search_binary_handler+0xa8/0x274
    [ef02bea0] [c0095818] do_execve+0x19c/0x1d4
    [ef02bed0] [c000766c] sys_execve+0x58/0x84
    [ef02bef0] [c000e950] ret_from_syscall+0x0/0x3c
    [ef02bfb0] [c009c6fc] sys_dup+0x24/0x6c
    [ef02bfc0] [c0001e04] init_post+0xb0/0xf0
    [ef02bfd0] [c046c1ac] kernel_init+0xcc/0xf4
    [ef02bff0] [c000e6d0] kernel_thread+0x4c/0x68
    Instruction dump:
    4bffffa4 813f000c 4bffffac 9421ffb0 7c0802a6 7d800026 90010054 bf210034
    91810030 7c0000a6 68008000 54008ffe <0f000000> 3d20c04e 3b29ffb8 38000008

The issue is that the beqlr returns early without re-enabling interrupts.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

2008-12-17  Merge branch 'linus' into cpus4096  (Ingo Molnar, 1 file, -0/+3)

2008-12-16  powerpc: struct device - replace bus_id with dev_name(), dev_set_name()  (Kay Sievers, 3 files, -18/+14)
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>

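The shape of this conversion, shown as an illustrative before/after fragment (not lines from the commit; the "vio%d" name and the unit variable are made up for the example):

    /* before: the name lived in a fixed-size char array inside struct device */
    snprintf(dev->bus_id, BUS_ID_SIZE, "vio%d", unit);
    pr_debug("registering %s\n", dev->bus_id);

    /* after: the driver core owns the name; never poke the field directly */
    dev_set_name(dev, "vio%d", unit);
    pr_debug("registering %s\n", dev_name(dev));
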
2008-12-16  powerpc/pseries: Check for GIQ indicator before calling set-indicator  (Nathan Lynch, 1 file, -0/+26)
Since "Factor out cpu joining/unjoining the GIQ" (b4963255ad5a426f04a0bb15c4315fa4bb40cde9), the WARN_ON in xics_set_cpu_giq() is being triggered during boot on JS20 because the GIQ indicator is not available on that platform. While the warning is harmless and the system runs normally, it's nicer to check for the existence of the indicator before trying to manipulate it.

Implement rtas_indicator_present(), which searches the /rtas/rtas-indicators property for the given indicator token, and use this function in xics_set_cpu_giq(). Also use a WARN statement in xics_set_cpu_giq() to get better information on failure.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Acked-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-16  powerpc: Move smp_hw_index to 32-bit code  (Nathan Lynch, 2 files, -1/+2)
smp_hw_index isn't used on 64-bit, so move it from smp.c to setup_32.c.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-16  powerpc: Remove `have_of' global variable  (Anton Vorontsov, 4 files, -11/+1)
The `have_of' variable is a relic from the arch/ppc time; it isn't useful nowadays.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-16  Merge branch 'merge' into next  (Paul Mackerras, 3 files, -0/+7)

2008-12-14  powerpc/fsl-booke: Fix problem with _tlbil_va being interrupted  (Kumar Gala, 1 file, -0/+3)
An example calling sequence which we did see:

    copy_user_highpage -> kmap_atomic -> flush_tlb_page -> _tlbil_va

We got interrupted after setting up the MAS registers but before the tlbwe, and the interrupt handler that caused the interrupt also did a kmap_atomic (ide code), so on returning from the interrupt the MAS registers no longer contained the proper values. Since we don't save/restore MAS registers for normal interrupts, we need to disable interrupts in _tlbil_va to ensure atomicity.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

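The fix itself is in assembly, but the invariant is easy to show in a conceptual C sketch (the function name is made up and the register handling is approximate): the MAS setup and the tlbwe must not be separated by anything that can also touch the MAS registers.

    static inline void my_tlbil_va(unsigned long address, unsigned int pid)
    {
            unsigned long flags;

            /* An interrupt taken between the MAS setup and the tlbwe could
             * itself run kmap_atomic -> flush_tlb_page -> _tlbil_va and
             * rewrite MAS0-MAS6 underneath us, so the whole search-and-
             * invalidate sequence must run with interrupts off. */
            local_irq_save(flags);
            mtspr(SPRN_MAS6, (pid << 16) & MAS6_SPID0);
            asm volatile("tlbsx 0,%0" : : "r" (address & PAGE_MASK));
            /* ...if the entry was found (MAS1[V] set), clear V and tlbwe... */
            local_irq_restore(flags);
    }
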
2008-12-13  Merge ../linux-2.6-x86  (Rusty Russell, 7 files, -83/+459)
Conflicts:
    arch/x86/kernel/io_apic.c
    kernel/sched.c
    kernel/sched_stats.h

2008-12-13  cpumask: convert struct clock_event_device to cpumask pointers.  (Rusty Russell, 1 file, -1/+1)
Impact: change calling convention of existing clock_event APIs

struct clock_event_device's cpumask field gets changed to take a pointer, as does the ->broadcast function. Another single-patch change. For safety, we BUG_ON() in clockevents_register_device() if it's not set.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
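Roughly what this means for an arch registration site, sketched with powerpc's per-cpu decrementer clock event as the illustration (the dec variable is assumed, not quoted from the patch):

    /* before: .cpumask was a cpumask_t and a full mask was copied into it:
     *      dec->cpumask = cpumask_of_cpu(cpu);
     *
     * after: .cpumask is a const struct cpumask *, so we just point at the
     * per-cpu constant mask; clockevents_register_device() now BUG_ON()s
     * if the pointer is left NULL. */
    dec->cpumask = cpumask_of(cpu);
    clockevents_register_device(dec);
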
2008-12-13  cpumask: make irq_set_affinity() take a const struct cpumask  (Rusty Russell, 1 file, -1/+1)
Impact: change existing irq_chip API

Not much point with a gentle transition here: the struct irq_chip's setaffinity method signature needs to change. Fortunately it is not widely used code, but it hits a few architectures.

Note: in irq_select_affinity() I save a temporary by mangling irq_desc[irq].affinity directly. Ingo, does this break anything?

(Folded in fix from KOSAKI Motohiro)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Acked-by: Ingo Molnar <mingo@redhat.com>
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: jeremy@xensource.com
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>

2008-12-13  cpumask: centralize cpu_online_map and cpu_possible_map  (Rusty Russell, 1 file, -4/+0)
Impact: cleanup

Each SMP arch defines these themselves. Move them to a central location.

Twists:
1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs simply have phys_cpu_present_map replaced everywhere.
3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky so I just manipulate them both in sync.
4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map' declarations.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Travis <travis@sgi.com>
Cc: ink@jurassic.park.msu.ru
Cc: rmk@arm.linux.org.uk
Cc: starvik@axis.com
Cc: tony.luck@intel.com
Cc: takata@linux-m32r.org
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: paulus@samba.org
Cc: schwidefsky@de.ibm.com
Cc: lethal@linux-sh.org
Cc: wli@holomorphy.com
Cc: davem@davemloft.net
Cc: jdike@addtoit.com
Cc: mingo@redhat.com

2008-12-12  Merge branch 'sched/core' into cpus4096  (Ingo Molnar, 2 files, -0/+4)
Conflicts:
    include/linux/ftrace.h
    kernel/sched.c

2008-12-05  powerpc/virtex5: Fix Virtex5 machine check handling  (Grant Likely, 2 files, -0/+4)
The 440x5 core in the Virtex5 uses the 440A type machine check (i.e. they have MCSRR0/MCSRR1). They thus need to call the appropriate fixup function to hook the right variant of the exception. Without this, all machine checks become fatal due to loss of context when entering the exception handler.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>

2008-12-05  Merge branches 'tracing/ftrace', 'tracing/function-graph-tracer' and 'tracing/urgent' into tracing/core  (Ingo Molnar, 1 file, -0/+1)

2008-12-04  Merge commit 'v2.6.28-rc7' into tracing/core  (Ingo Molnar, 4 files, -4/+20)

2008-12-03  powerpc/85xx: Add support for SMP initialization  (Kumar Gala, 1 file, -0/+70)
Added an 85xx-specific smp_ops structure. We use ePAPR-style boot release and the MPIC for IPIs at this point. Additionally added routines for secondary cpu entry and initialization.

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

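A sketch of the general shape of a powerpc smp_ops registration of that era; the 85xx-specific callback names and the init function are assumptions here, not quotes from the patch, and their bodies are omitted.

    /* smp_85xx_kick_cpu / smp_85xx_setup_cpu are assumed to exist:
     * an ePAPR-style spin-table release and per-cpu setup respectively. */
    static struct smp_ops_t smp_85xx_ops = {
            .message_pass   = smp_mpic_message_pass,   /* IPIs go through the MPIC */
            .probe          = smp_mpic_probe,
            .kick_cpu       = smp_85xx_kick_cpu,
            .setup_cpu      = smp_85xx_setup_cpu,
    };

    void __init mpc85xx_smp_init(void)
    {
            smp_ops = &smp_85xx_ops;
    }
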
2008-12-03  powerpc/85xx: minor head_fsl_booke.S cleanup  (Kumar Gala, 1 file, -2/+2)
Removed unused branch labels.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

2008-12-03  powerpc: Better setup of boot page TLB entry  (Trent Piepho, 1 file, -9/+13)
The initial TLB mapping for the kernel boot didn't set the memory coherence attribute, MAS2[M], in SMP mode. This code doesn't yet support booting a secondary processor, but if it did, the secondary would signal the primary by setting a variable such as __secondary_hold_acknowledge. Due to the missing M bit, the primary processor would not snoop the transaction (even if one were broadcast), so if the primary CPU's L1 D-cache held a copy it would never be flushed and the primary would never see the ack. The result would be the primary CPU spinning for a long time, perhaps a full second before giving up, waiting for an ack from the secondary that it could not see because of the stale cache line.

The value of MAS2 for the boot page TLB1 entry is a compile-time constant, so there is no need to calculate it in powerpc assembly language. Also, per the MPC8572 manual section 6.12.5.3, "Bits that represent offsets within a page are ignored and should be cleared." Existing code didn't clear them; this code does. The same applies when the page of KERNELBASE is found: we don't need asm to mask off the lower 12 bits. In the code that computes the address to rfi from, don't hard-code the offset to 24 bytes, but have the assembler figure it out for us.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

2008-12-03  powerpc: Add SPE/EFP math emulation for E500v1/v2 processors.  (Liu Yu, 2 files, -10/+59)
This patch adds handlers for the SPE/EFP exceptions. The code is used to emulate floating-point arithmetic when MSR[SPE] is enabled and an EFP data interrupt or EFP round interrupt is received. This patch has no conflict with, or dependence on, FP math-emu. The code has been tested with TestFloat. The code does not yet emulate SPE/EFP instructions (it is not invoked on a program interrupt), but that could easily be added.

Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

2008-12-03  powerpc/ibmebus: Get rid of the IRQ mapping in ibmebus_free_irq()  (Sebastien Dugue, 1 file, -0/+1)
ibmebus_free_irq() frees the IRQ but does not remove its mapping, which results in stale entries in the map. This fixes it by adding a call to irq_dispose_mapping() in ibmebus_free_irq().

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

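The fix amounts to one extra call; a sketch under the assumption that ibmebus maps the interrupt-source token with irq_find_mapping() on the default host:

    int ibmebus_free_irq(u32 ist, void *dev_id)
    {
            unsigned int irq = irq_find_mapping(NULL, ist);

            free_irq(irq, dev_id);
            irq_dispose_mapping(irq);   /* new: drop the virq <-> ist mapping too */

            return 0;
    }
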
2008-12-03  powerpc: Eliminate NULL test and memset after alloc_bootmem  (Julia Lawall, 1 file, -2/+0)
As noted by Akinobu Mita in commit b1fceac2 ("x86: remove unnecessary memset and NULL check after alloc_bootmem()"), alloc_bootmem and related functions never return NULL and always return a zeroed region of memory. Thus a NULL test or memset after calls to these functions is unnecessary.

This was fixed using the following semantic patch (http://www.emn.fr/x-info/coccinelle/):

    // <smpl>
    @@
    expression E;
    statement S;
    @@

    E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
    ... when != E
    (
    - BUG_ON (E == NULL);
    |
    - if (E == NULL) S
    )

    @@
    expression E,E1;
    @@

    E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
    ... when != E
    - memset(E,0,E1);
    // </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Paul Mackerras <paulus@samba.org>

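The kind of "before" code the semantic patch removes, as an illustrative C fragment (the function and variable names are made up for the example):

    #include <linux/bootmem.h>
    #include <linux/string.h>

    static void __init example_init(unsigned long size)
    {
            void *ptr;

            ptr = alloc_bootmem(size);
            if (ptr == NULL)        /* dead test: alloc_bootmem() panics rather
                                     * than returning NULL on failure */
                    return;
            memset(ptr, 0, size);   /* redundant: bootmem memory is pre-zeroed */

            /* the NULL test and the memset are exactly what the patch deletes */
    }
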
2008-12-03  powerpc: Add sync_*_for_* to dma_ops  (Becky Bruce, 1 file, -0/+26)
We need to swap these out once we start using swiotlb, so add them to dma_ops. Create a CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig option; this is currently enabled automatically if we're CONFIG_NOT_COHERENT_CACHE. In the future, this will also be enabled for builds that need swiotlb. If PPC_NEED_DMA_SYNC_OPS is not defined, the dma_sync_*_for_* ops compile to nothing. Otherwise, they access the dma_ops pointers for the sync ops.

This patch also changes dma_sync_single_range_* to actually sync the range - previously it was using a generous dma_sync_single. dma_sync_single_* is now implemented as a dma_sync_single_range with an offset of 0.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

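A sketch of the described wiring, assuming the era's struct dma_mapping_ops and get_dma_ops() helper; the sync_single_range_for_device field name is an assumption, and this is illustrative rather than the committed code.

    static inline void dma_sync_single_for_device(struct device *dev,
                    dma_addr_t handle, size_t size, enum dma_data_direction dir)
    {
    #ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
            struct dma_mapping_ops *ops = get_dma_ops(dev);

            /* dma_sync_single_* is just the range variant with offset 0 */
            ops->sync_single_range_for_device(dev, handle, 0, size, dir);
    #endif
            /* compiles to nothing when the sync ops are not needed */
    }
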
2008-12-03  powerpc: Allow the max stack trace depth to be configured  (Johannes Berg, 1 file, -1/+1)
On my screen, when something crashes, I only have space for maybe 16 functions of the stack trace before the information above it scrolls off the screen. It's easy to hack the kernel to print out only that much, but it's harder to remember to do it. This introduces a config option for it so that I can keep the setting in my config.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-03  powerpc: Add MSR[CE, DE] to the MSR bits we print on show_regs()  (Kumar Gala, 1 file, -0/+2)
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-03  Merge branch 'merge'  (Paul Mackerras, 5 files, -4/+21)

2008-12-03  powerpc: Fix dma_map_sg() cache flushing on non coherent platforms  (Benjamin Herrenschmidt, 1 file, -0/+1)
On PowerPC 4xx or other non cache-coherent platforms, we lost the appropriate cache flushing in dma_map_sg() when merging the 32 and 64-bit DMA code (commit 4fc665b88a79a45bae8bbf3a05563c27c7337c3d, "powerpc: Merge 32 and 64-bit dma code"). This restores it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-01  powerpc: Fix build for 32-bit SMP configs  (Milton Miller, 1 file, -0/+2)
attr_smt_snooze_delay is only defined for CONFIG_PPC64, so protect the attribute removal with the same condition. This fixes this build error on 32-bit SMP configurations:

    /data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c: In function ‘unregister_cpu_online’:
    /data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: ‘attr_smt_snooze_delay’ undeclared (first use in this function)
    /data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: (Each undeclared identifier is reported only once
    /data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: for each function it appears in.)

Signed-off-by: Paul Mackerras <paulus@samba.org>

2008-12-01  powerpc: Fix system calls on Cell entered with XER.SO=1  (Paul Mackerras, 1 file, -1/+7)
It turns out that on Cell, on a kernel with CONFIG_VIRT_CPU_ACCOUNTING=y, if a program sets the SO (summary overflow) bit in the XER and then does a system call, the SO bit in CR0 will be set on return regardless of whether the system call detected an error. Since CR0.SO is used as the error indication from the system call, this means that all system calls appear to fail.

The reason is that the workaround for the timebase bug on Cell uses a compare instruction. With CONFIG_VIRT_CPU_ACCOUNTING=y, the ACCOUNT_CPU_USER_ENTRY macro reads the timebase, so we end up doing a compare instruction, which copies XER.SO to CR0.SO. Since we were doing this in the system call entry path after clearing CR0.SO but before saving the CR, the saved CR image had CR0.SO set if XER.SO was set on entry.

This fixes it by moving the clearing of CR0.SO to after the ACCOUNT_CPU_USER_ENTRY call in the system call entry path.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2008-12-01  powerpc: Fix IRQ assignment for some PCIe devices  (Adhemerval Zanella, 1 file, -2/+5)
Currently, some PCIe devices on POWER6 machines do not get interrupts assigned correctly. The problem is that OF doesn't create an "interrupt" property for them. The fix is for of_irq_map_pci to fall back to using the value in the PCI interrupt-pin register in config space, as we do when there is no OF device-tree node for the device. I have verified that this works fine with a pair of Squib-E SAS adapters on a P6-570.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

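The fallback described above boils down to reading the interrupt-pin config register when the device tree gives us nothing; a hedged sketch (the wrapper function name is made up, and the real code then feeds the pin value into the usual interrupt-map walk of the parent bridge):

    #include <linux/pci.h>

    static int pci_intpin_fallback(struct pci_dev *pdev)
    {
            u8 pin;

            /* No "interrupts" property: read the config-space interrupt-pin
             * register instead (1 = INTA ... 4 = INTD, 0 = no interrupt). */
            if (pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin) || pin == 0)
                    return -ENODEV;

            return pin;     /* treat it as if the firmware had provided it */
    }
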
2008-11-28  powerpc/ppc32: static ftrace fixes for PPC32  (Steven Rostedt, 1 file, -0/+1)
Impact: fix for PowerPC 32 code

There was some early init code that was not safe for static ftrace to boot on my PowerBook. This code must only use relative addressing, but static mcount performs a compare of the ftrace_trace_function pointer, and gets that with an absolute address. In the early init boot up code, this will cause a fault. This patch removes tracing from the files containing the offending functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-11-28  powerpc: ftrace, use create_branch  (Steven Rostedt, 1 file, -42/+12)
Impact: clean up

Paul Mackerras pointed out that the code to determine if the branch can reach the destination is incorrect. Michael Ellerman suggested pulling out the code from create_branch and using that. Simply using create_branch is probably the best.

Reported-by: Michael Ellerman <michael@ellerman.id.au>
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

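Usage is roughly as follows, assuming the create_branch() helper from arch/powerpc/lib/code-patching.c, which returns 0 when the target cannot be reached with a relative branch; the wrapper name, ip and addr are illustrative.

    static int ftrace_modify_call_site(unsigned long ip, unsigned long addr)
    {
            unsigned int op;

            /* Encode "b <addr>" relative to ip; 0 means addr is out of
             * range for a 24-bit relative branch. */
            op = create_branch((unsigned int *)ip, addr, 0);
            if (!op)
                    return -EINVAL;

            /* ...write 'op' at ip and flush the icache as usual... */
            return 0;
    }
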
2008-11-28  powerpc: ftrace, added missing icache flush  (Steven Rostedt, 1 file, -0/+9)
Impact: fix to PowerPC code modification

After modifying code it is essential to flush the icache. This patch adds the missing flush.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

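The missing step, in the usual form for powerpc code patching; the wrapper name is made up, ip is the patched address, and MCOUNT_INSN_SIZE is the size of the rewritten instruction.

    static void ftrace_flush_patched(unsigned long ip)
    {
            /* Make the rewritten instruction visible to instruction fetch:
             * flush the dcache line to memory and invalidate the icache. */
            flush_icache_range(ip, ip + MCOUNT_INSN_SIZE);
    }
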
2008-11-28  powerpc: ftrace, fix cast aliasing and add code verification  (Steven Rostedt, 1 file, -56/+65)
Impact: clean up and robustness addition

This patch addresses the comments made by Paul Mackerras. It removes the type casting between unsigned int and unsigned char pointers, and replaces them with a use of all unsigned int. Verification that the jump is indeed made to a trampoline has also been added.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-11-28  powerpc: ftrace, do nothing in mcount call for dyn ftrace  (Steven Rostedt, 2 files, -43/+9)
Impact: quicken mcount calls that are not replaced by dyn ftrace

Dynamic ftrace no longer does on-the-fly recording of mcount locations. The mcount locations are now found at compile time. The mcount function no longer needs to store registers and call a stub function. It can now just simply return. Since there are some functions that do not get converted to a nop (.init sections and other code that may disappear), this patch should help speed up that code.

Also, the stub for mcount on PowerPC 32 can not be a simple branch to the link register like it is on PowerPC 64. According to the ABI specification: "The _mcount routine is required to restore the link register from the stack so that the profiling code can be inserted transparently, whether or not the profiled function saves the link register itself." This means that we must restore the link register that was used to make the call to mcount. The minimal mcount function for PPC32 ends up being:

    mcount:
            mflr    r0
            mtctr   r0
            lwz     r0, 4(r1)
            mtlr    r0
            bctr

Where we move the link register used to call mcount into the ctr register, and then restore the link register from the stack. Then we use the ctr register to jump back to the mcount caller. The r0 register is free for us to use.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

2008-11-20  powerpc/ppc32: ftrace, dynamic ftrace to handle modules  (Steven Rostedt, 2 files, -6/+105)
Impact: add ability to trace modules on 32 bit PowerPC

This patch performs the necessary trampoline calls to handle modules with dynamic ftrace on 32 bit PowerPC.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

2008-11-20  powerpc/ppc64: ftrace, handle module trampolines for dyn ftrace  (Steven Rostedt, 2 files, -10/+281)
Impact: Allow 64 bit PowerPC to trace modules with dynamic ftrace

This adds code to handle the PPC64 module trampolines, and allows for PPC64 to use dynamic ftrace.

Thanks to Paul Mackerras for these updates:
 - fix the mod and rec->arch.mod NULL checks.
 - fix to is_bl_op compare.

Thanks to Milton Miller for:
 - finding the nasty race with using two nops, and recommending instead that I use a branch 8 forward.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

2008-11-20  powerpc: ftrace, use probe_kernel API to modify code  (Steven Rostedt, 1 file, -32/+21)
Impact: use cleaner probe_kernel API over assembly

Using the probe_kernel_read/write interface is a much cleaner approach than the current assembly version.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

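The pattern the patch moves to, sketched below; the wrapper name, error codes and the old/new expected-instruction check are illustrative, not the committed code.

    static int patch_insn(unsigned long ip, unsigned int old, unsigned int new)
    {
            unsigned int replaced;

            /* Read the current instruction without risking an unhandled fault */
            if (probe_kernel_read(&replaced, (void *)ip, MCOUNT_INSN_SIZE))
                    return -EFAULT;

            /* Refuse to patch if the site doesn't contain what we expect */
            if (replaced != old)
                    return -EINVAL;

            /* Write the replacement the same, fault-safe way */
            if (probe_kernel_write((void *)ip, &new, MCOUNT_INSN_SIZE))
                    return -EPERM;

            return 0;
    }
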
2008-11-20  powerpc: ftrace, convert to new dynamic ftrace arch API  (Steven Rostedt, 1 file, -5/+62)
Impact: update to PowerPC ftrace arch API

This patch converts PowerPC to use the new dynamic ftrace arch API. Thanks to Paul Mackerras for pointing out the mistakes of my original test_24bit_addr function.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

2008-11-20  powerpc: ftrace, do not latency trace idle  (Steven Rostedt, 1 file, -0/+5)
Impact: fix for irq off latency tracer

When idle is called, interrupts are disabled, but the idle function will still wake up on an interrupt. The problem is that the interrupt-disabled latency tracer will take this call to idle as a latency. This patch disables the latency tracing when going into idle.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>

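The standard way to tell the irqsoff tracer to ignore a stretch of code is the critical-timings bracket; a simplified sketch of how an idle loop might use it (the function name is made up and the real loop juggles preemption state that is omitted here):

    static void example_idle_loop(void)
    {
            while (1) {
                    while (!need_resched()) {
                            /* Tell the irqsoff tracer that the coming
                             * interrupts-off stretch is intentional idling,
                             * not a latency worth reporting. */
                            stop_critical_timings();
                            if (ppc_md.power_save)
                                    ppc_md.power_save();  /* wait for an interrupt */
                            start_critical_timings();
                    }
                    schedule();
            }
    }
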
2008-11-19  powerpc: Provide a separate handler for each IPI action  (Milton Miller, 1 file, -0/+59)
With the new generic smp call function helpers, I noticed the code in smp_message_recv was a single function call in many cases. While getting the message number from the ipi data is easy, we can reduce the path length by a function call and a data-dependent switch by registering separate IPI actions for these simple calls.

Originally I left the ipi action array exposed, but then I realized the registration code should be common too. The three users each had their own name array, so I made a fourth to convert all users to use a common one.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

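A sketch of the registration pattern described above. The PPC_MSG_* message numbers are the existing powerpc IPI messages; the handler functions, name strings and the registration helper shown here are illustrative assumptions.

    static irq_handler_t smp_ipi_action[] = {
            [PPC_MSG_CALL_FUNCTION]        = call_function_action,
            [PPC_MSG_RESCHEDULE]           = reschedule_action,
            [PPC_MSG_CALL_FUNC_SINGLE]     = call_function_single_action,
            [PPC_MSG_DEBUGGER_BREAK]       = debug_ipi_action,
    };

    static const char *smp_ipi_name[] = {
            [PPC_MSG_CALL_FUNCTION]        = "ipi call function",
            [PPC_MSG_RESCHEDULE]           = "ipi reschedule",
            [PPC_MSG_CALL_FUNC_SINGLE]     = "ipi call function single",
            [PPC_MSG_DEBUGGER_BREAK]       = "ipi debugger",
    };

    int smp_request_message_ipi(int virq, int msg)
    {
            /* One interrupt per message: no demultiplexing switch needed */
            return request_irq(virq, smp_ipi_action[msg], IRQF_PERCPU,
                               smp_ipi_name[msg], NULL);
    }
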