path: root/arch/powerpc
2021-02-08  powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce  (Nicholas Piggin, 1 file, -0/+2)
These NMIs can fire at any time, including inside kprobe code, so exclude them from kprobes.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-19-npiggin@gmail.com
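A minimal sketch of the pattern, assuming the sreset and mce handlers are system_reset_exception() and machine_check_exception() (the subject line implies this, but the exact symbols are an assumption):

    #include <linux/kprobes.h>

    /* NMI handlers must not be probed: a kprobe hit inside them would re-enter. */
    NOKPROBE_SYMBOL(system_reset_exception);	/* assumed sreset handler */
    NOKPROBE_SYMBOL(machine_check_exception);	/* assumed mce handler */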
2021-02-08  powerpc/64s: slb comment update  (Nicholas Piggin, 1 file, -13/+15)
This makes a small improvement to the description of the SLB interrupt environment. Move the memory access restrictions into one paragraph and the interrupt restrictions into the next, rather than mixing them.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-18-npiggin@gmail.com
2021-02-08  powerpc/mm: Remove stale do_page_fault comment referring to SLB faults  (Nicholas Piggin, 1 file, -7/+5)
SLB faults no longer call do_page_fault; this was removed somewhere between 2.6.0 and 2.6.12.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-17-npiggin@gmail.com
2021-02-08  powerpc/64s: split do_hash_fault  (Nicholas Piggin, 1 file, -23/+33)
This is required for the subsequent interrupt wrapper implementation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-16-npiggin@gmail.com
2021-02-08  powerpc/64s: move bad_page_fault handling to C  (Nicholas Piggin, 2 files, -12/+4)
This simplifies the code, and is also useful when introducing interrupt handler wrappers, because the wrapper functionality doesn't cope with asm entry code calling into more than one handler function. 32-bit and 64e still have some such cases, which limits some of the ways they can use interrupt wrappers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-15-npiggin@gmail.com
2021-02-08  powerpc: rearrange do_page_fault error case to be inside exception_enter  (Nicholas Piggin, 1 file, -9/+14)
This keeps the context tracking over the entire interrupt handler, which helps later when moving context tracking into interrupt wrappers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-14-npiggin@gmail.com
2021-02-08  powerpc/64s: add do_bad_page_fault_segv handler  (Nicholas Piggin, 3 files, -3/+9)
This function acts like an interrupt handler, so it needs to follow the standard interrupt handler function signature, which will be introduced in a future change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-13-npiggin@gmail.com
2021-02-08  powerpc: bad_page_fault get registers from regs  (Nicholas Piggin, 8 files, -14/+12)
Similar to the previous patch, this makes interrupt handler function types more regular so they can be wrapped with the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-12-npiggin@gmail.com
2021-02-08  powerpc/32: transfer can avoid saving r4/r5 over trace call  (Nicholas Piggin, 1 file, -6/+1)
Now that handlers get all registers from pt_regs, r4 and r5 are no longer live here and may be clobbered.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-11-npiggin@gmail.com
2021-02-08  powerpc: DebugException remove args  (Nicholas Piggin, 4 files, -3/+6)
Like other interrupt handler conversions, switch to getting registers from the pt_regs argument.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-10-npiggin@gmail.com
2021-02-08  powerpc: do_break get registers from regs  (Nicholas Piggin, 3 files, -9/+6)
Similar to the previous patch, this makes interrupt handler function types more regular so they can be wrapped with the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-9-npiggin@gmail.com
2021-02-08  powerpc/fsl_booke/32: CacheLockingException remove args  (Nicholas Piggin, 2 files, -5/+6)
Like other interrupt handler conversions, switch to getting registers from the pt_regs argument.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-8-npiggin@gmail.com
2021-02-08  powerpc: remove arguments from fault handler functions  (Nicholas Piggin, 12 files, -43/+33)
Make the mm fault handlers all just take the pt_regs * argument and load DAR/DSISR from that, as sketched below. Make those that return a value return long. This is done to make the function signatures match other handlers, which will help with a future patch to add wrappers. Explicit arguments could be added for performance, but that would require more wrapper macro variants.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-7-npiggin@gmail.com
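A hedged sketch of the resulting shape; the handler name and the pt_regs fields (dar/dsisr) are real, but the body is illustrative rather than the exact patch:

    /* One regular signature for mm fault handlers: everything comes from pt_regs. */
    long do_page_fault(struct pt_regs *regs)
    {
            unsigned long address = regs->dar;	/* data address register */
            unsigned long error_code = regs->dsisr;	/* fault status */

            /* ... existing fault handling, now using the loaded values ... */
            return 0;
    }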
2021-02-08  powerpc/64s: move the hash fault handling logic to C  (Nicholas Piggin, 3 files, -127/+78)
The fault handling still has some complex logic, particularly around hash table handling, implemented in asm. Implement most of this in C.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-6-npiggin@gmail.com
2021-02-08  powerpc/64s: move DABR match out of handle_page_fault  (Nicholas Piggin, 1 file, -18/+16)
Similar to the 32/s change, move the test and call to the do_break handler to the DSI.
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-5-npiggin@gmail.com
2021-02-08  powerpc/32s: move DABR match out of handle_page_fault  (Christophe Leroy, 2 files, -15/+3)
handle_page_fault() has some code dedicated to book3s/32 to call do_break() when the DSI is a DABR match. On other platforms, do_break() is handled separately. Do the same for book3s/32, doing it earlier in the processing of the DSI. This change also avoids doing the test on ISI.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-4-npiggin@gmail.com
2021-02-08  KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs  (Nicholas Piggin, 1 file, -2/+4)
Interrupts that occur in kernel mode expect that context tracking is set to kernel. Enabling local irqs before context tracking switches from guest to host means interrupts can come in and trigger warnings about wrong context, and possibly worse.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-3-npiggin@gmail.com
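The fix is essentially an ordering change; a sketch of the idea using the generic context-tracking helpers (the exact call sites in the HV code are not shown):

    /* Buggy order: an IRQ arriving here is accounted against guest context. */
    local_irq_enable();
    guest_exit();

    /* Fixed order: switch context tracking back to the host first. */
    guest_exit_irqoff();
    local_irq_enable();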
2021-02-08  powerpc/64s: interrupt exit improve bounding of interrupt recursion  (Nicholas Piggin, 1 file, -22/+33)
When replaying pending soft-masked interrupts as an interrupt returns to an irqs-enabled context, a special case is required if this was an asynchronous interrupt, to avoid unbounded interrupt recursion. This case was not tested for when the asynchronous interrupt hit in user context, because a subsequent nested interrupt would by definition hit in kernel mode, which then exits via the kernel path, which does test this case. There is no reason to allow this for such interrupts: while recursion is bounded at the next level, it is simpler and uses less stack to apply the replay logic consistently. This also expands the comment, which was really pretty poor and didn't explain the problem (I can say that because I wrote it).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-2-npiggin@gmail.com
2021-02-08  powerpc/pasemi: Move PHB discovery  (Oliver O'Halloran, 1 file, -2/+1)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-17-oohall@gmail.com
2021-02-08  powerpc/embedded6xx/mvme5100: Move PHB discovery  (Oliver O'Halloran, 2 files, -7/+14)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-16-oohall@gmail.com
2021-02-08  powerpc/embedded6xx/mpc7448: Move PHB discovery  (Oliver O'Halloran, 1 file, -5/+9)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-15-oohall@gmail.com
2021-02-08  powerpc/embedded6xx/linkstation: Move PHB discovery  (Oliver O'Halloran, 1 file, -3/+7)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-14-oohall@gmail.com
2021-02-08  powerpc/embedded6xx/holly: Move PHB discovery  (Oliver O'Halloran, 1 file, -3/+7)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-13-oohall@gmail.com
2021-02-08  powerpc/chrp: Move PHB discovery  (Oliver O'Halloran, 2 files, -11/+9)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-12-oohall@gmail.com
2021-02-08  powerpc/amigaone: Move PHB discovery  (Oliver O'Halloran, 1 file, -3/+7)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-11-oohall@gmail.com
2021-02-08  powerpc/83xx: Move PHB discovery  (Oliver O'Halloran, 13 files, -2/+12)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-10-oohall@gmail.com
2021-02-08  powerpc/82xx/*: Move PHB discovery  (Oliver O'Halloran, 2 files, -3/+2)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-9-oohall@gmail.com
2021-02-08  powerpc/52xx/mpc5200_simple: Move PHB discovery  (Oliver O'Halloran, 1 file, -2/+1)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-8-oohall@gmail.com
2021-02-08  powerpc/52xx/media5200: Move PHB discovery  (Oliver O'Halloran, 1 file, -2/+1)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-7-oohall@gmail.com
2021-02-08  powerpc/52xx/lite5200: Move PHB discovery  (Oliver O'Halloran, 1 file, -2/+1)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-6-oohall@gmail.com
2021-02-08  powerpc/52xx/efika: Move PHB discovery  (Oliver O'Halloran, 1 file, -2/+1)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-5-oohall@gmail.com
2021-02-08  powerpc/512x: Move PHB discovery  (Oliver O'Halloran, 1 file, -5/+8)
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-4-oohall@gmail.com
2021-02-08  powerpc/pci: Move PHB discovery for PCI_DN using platforms  (Oliver O'Halloran, 8 files, -36/+22)
Make powernv, pseries, powermac and maple use ppc_md.discover_phbs. These platforms need to be done together because they all depend on pci_dn's being created from the DT. The pci_dn contains a pointer to the relevant pci_controller, so they need to be created after the pci_controller structures are available, but before PCI devices are scanned. Currently this ordering is provided by initcalls, and the sequence is:

  1. PHBs are discovered (setup_arch) (early boot, pre-initcalls)
  2. pci_dn are created from the unflattened DT (core initcall)
  3. PHBs are scanned in pcibios_init() (subsys initcall)

The new ppc_md.discover_phbs() function is also a core_initcall, so we can't guarantee ordering between the creation of pci_controllers and the creation of pci_dn's, which require a pci_controller. We could use the postcore or core_sync initcall levels, but it's cleaner to just move the pci_dn setup into the per-PHB inits which occur inside of .discover_phbs() for these platforms; see the sketch after this entry. This brings the boot-time path in line with the PHB hotplug path that is used for pseries DLPAR operations too.
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Squash powermac & maple in to avoid breaking those platforms, convert memblock allocs to use kmalloc to avoid warnings]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-2-oohall@gmail.com
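A sketch of what a per-platform hook might look like under this scheme; pcibios_alloc_controller() and pci_devs_phb_init_dynamic() are existing powerpc helpers, while the function name and loop body are illustrative:

    static void __init example_discover_phbs(void)
    {
            struct device_node *node;
            struct pci_controller *phb;

            for_each_child_of_node(of_root, node) {
                    if (!of_node_is_type(node, "pci"))
                            continue;
                    phb = pcibios_alloc_controller(node);
                    if (!phb)
                            continue;
                    /* pci_dn creation moves here, after the pci_controller exists */
                    pci_devs_phb_init_dynamic(phb);
            }
    }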
2021-02-08  module: remove EXPORT_UNUSED_SYMBOL*  (Christoph Hellwig, 1 file, -1/+0)
EXPORT_UNUSED_SYMBOL* is not actually used anywhere. Remove the unused functionality, as we generally just remove unused code anyway.
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
2021-02-08  powerpc/powernv: remove get_cxl_module  (Christoph Hellwig, 1 file, -22/+0)
The static inline get_cxl_module() function is entirely unused since commit 8bf6b91a5125a ("Revert "powerpc/powernv: Add support for the cxl kernel api on the real phb""), so remove it.
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
2021-02-07  Merge tag 'powerpc-5.11-7' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds, 8 files, -21/+23)
Pull powerpc fixes from Michael Ellerman:
 - A fix for a change we made to __kernel_sigtramp_rt64() which confused glibc's backtrace logic, and also changed the semantics of that symbol, which was arguably an ABI break.
 - A fix for a stack overwrite in our VSX instruction emulation.
 - A couple of fixes for the Makefile logic in the new C VDSO.
Thanks to Masahiro Yamada, Naveen N. Rao, Raoni Fassina Firmino, and Ravi Bangoria.

* tag 'powerpc-5.11-7' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64/signal: Fix regression in __kernel_sigtramp_rt64() semantics
  powerpc/vdso64: remove meaningless vgettimeofday.o build rule
  powerpc/vdso: fix unnecessary rebuilds of vgettimeofday.o
  powerpc/sstep: Fix array out of bound warning
2021-02-06  powerpc/kuap: Allow kernel thread to access userspace after kthread_use_mm  (Aneesh Kumar K.V, 3 files, -9/+12)
This fixes the bad fault reported by KUAP when io_wqe_worker accesses userspace.

  Bug: Read fault blocked by KUAP!
  WARNING: CPU: 1 PID: 101841 at arch/powerpc/mm/fault.c:229 __do_page_fault+0x6b4/0xcd0
  NIP [c00000000009e7e4] __do_page_fault+0x6b4/0xcd0
  LR [c00000000009e7e0] __do_page_fault+0x6b0/0xcd0
  ..........
  Call Trace:
  [c000000016367330] [c00000000009e7e0] __do_page_fault+0x6b0/0xcd0 (unreliable)
  [c0000000163673e0] [c00000000009ee3c] do_page_fault+0x3c/0x120
  [c000000016367430] [c00000000000c848] handle_page_fault+0x10/0x2c
  --- interrupt: 300 at iov_iter_fault_in_readable+0x148/0x6f0
  ..........
  NIP [c0000000008e8228] iov_iter_fault_in_readable+0x148/0x6f0
  LR [c0000000008e834c] iov_iter_fault_in_readable+0x26c/0x6f0
  interrupt: 300
  [c0000000163677e0] [c0000000007154a0] iomap_write_actor+0xc0/0x280
  [c000000016367880] [c00000000070fc94] iomap_apply+0x1c4/0x780
  [c000000016367990] [c000000000710330] iomap_file_buffered_write+0xa0/0x120
  [c0000000163679e0] [c00800000040791c] xfs_file_buffered_aio_write+0x314/0x5e0 [xfs]
  [c000000016367a90] [c0000000006d74bc] io_write+0x10c/0x460
  [c000000016367bb0] [c0000000006d80e4] io_issue_sqe+0x8d4/0x1200
  [c000000016367c70] [c0000000006d8ad0] io_wq_submit_work+0xc0/0x250
  [c000000016367cb0] [c0000000006e2578] io_worker_handle_work+0x498/0x800
  [c000000016367d40] [c0000000006e2cdc] io_wqe_worker+0x3fc/0x4f0
  [c000000016367da0] [c0000000001cb0a4] kthread+0x1c4/0x1d0
  [c000000016367e10] [c00000000000dbf0] ret_from_kernel_thread+0x5c/0x6c

The kernel considers the thread AMR value for a kernel thread to be AMR_KUAP_BLOCKED, hence access to userspace is denied. This is of course not correct, and we should allow userspace access after kthread_use_mm(). To be precise, kthread_use_mm() should inherit the AMR value of the operating address space. But the AMR value is thread-specific, and we inherit the address space, not the thread access restrictions. Because of this, ignore the AMR value when accessing userspace via a kernel thread (a hedged sketch of the idea follows this entry). current_thread_amr/iamr() are updated because we use them in the below stack.

  [  530.710838] CPU: 13 PID: 5587 Comm: io_wqe_worker-0 Tainted: G D 5.11.0-rc6+ #3
  ....
  NIP [c0000000000aa0c8] pkey_access_permitted+0x28/0x90
  LR [c0000000004b9278] gup_pte_range+0x188/0x420
  --- interrupt: 700
  [c00000001c4ef3f0] [0000000000000000] 0x0 (unreliable)
  [c00000001c4ef490] [c0000000004bd39c] gup_pgd_range+0x3ac/0xa20
  [c00000001c4ef5a0] [c0000000004bdd44] internal_get_user_pages_fast+0x334/0x410
  [c00000001c4ef620] [c000000000852028] iov_iter_get_pages+0xf8/0x5c0
  [c00000001c4ef6a0] [c0000000007da44c] bio_iov_iter_get_pages+0xec/0x700
  [c00000001c4ef770] [c0000000006a325c] iomap_dio_bio_actor+0x2ac/0x4f0
  [c00000001c4ef810] [c00000000069cd94] iomap_apply+0x2b4/0x740
  [c00000001c4ef920] [c0000000006a38b8] __iomap_dio_rw+0x238/0x5c0
  [c00000001c4ef9d0] [c0000000006a3c60] iomap_dio_rw+0x20/0x80
  [c00000001c4ef9f0] [c008000001927a30] xfs_file_dio_aio_write+0x1f8/0x650 [xfs]
  [c00000001c4efa60] [c0080000019284dc] xfs_file_write_iter+0xc4/0x130 [xfs]
  [c00000001c4efa90] [c000000000669984] io_write+0x104/0x4b0
  [c00000001c4efbb0] [c00000000066cea4] io_issue_sqe+0x3d4/0xf50
  [c00000001c4efc60] [c000000000670200] io_wq_submit_work+0xb0/0x2f0
  [c00000001c4efcb0] [c000000000674268] io_worker_handle_work+0x248/0x4a0
  [c00000001c4efd30] [c0000000006746e8] io_wqe_worker+0x228/0x2a0
  [c00000001c4efda0] [c00000000019d994] kthread+0x1b4/0x1c0

Fixes: 48a8ab4eeb82 ("powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.")
Reported-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210206025634.521979-1-aneesh.kumar@linux.ibm.com
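A heavily hedged sketch of the idea; default_amr is the existing book3s64 default, but the exact code in the patch may differ:

    /* Sketch: kernel threads have no saved user AMR to consult. */
    static inline u64 current_thread_amr(void)
    {
            if (current->thread.regs)
                    return current->thread.regs->amr;
            /*
             * A kthread that adopted an mm via kthread_use_mm() must not be
             * blocked by a stale thread-specific value, so fall back to the
             * address-space default rather than AMR_KUAP_BLOCKED.
             */
            return default_amr;	/* assumed fallback */
    }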
2021-02-03  powerpc/pci: Add ppc_md.discover_phbs()  (Oliver O'Halloran, 2 files, -0/+13)
On many powerpc platforms the discovery and initialisation of pci_controllers (PHBs) happens inside of setup_arch(). This is very early in boot (pre-initcalls) and means that we're initialising the PHB long before many basic kernel services (slab allocator, debugfs, a real ioremap) are available. On PowerNV this causes an additional problem, since we map the PHB registers with ioremap(). As of commit d538aadc2718 ("powerpc/ioremap: warn on early use of ioremap()") a warning is printed because we're using the "incorrect" API to set up an MMIO mapping in early boot. The kernel does provide early_ioremap(), but that is not intended to create long-lived MMIO mappings, and a separate warning is printed by generic code if early_ioremap() mappings are "leaked."

This is all fixable with dumb hacks like using early_ioremap() to set up the initial mapping and then replacing it with a real ioremap later in boot, but it does raise the question: why the hell are we setting up the PHBs this early in boot? The old and wise claim it's due to "hysterical raisins." Aside from amused grapes, there doesn't appear to be any real reason to maintain the current behaviour. Already most of the newer embedded platforms perform PHB discovery in an arch_initcall, and between the end of setup_arch() and the start of initcalls none of the generic kernel code does anything PCI related. On powerpc, scanning PHBs occurs in a subsys_initcall, so it should be possible to move the PHB discovery to a core, postcore or arch initcall.

This patch adds the ppc_md.discover_phbs hook and a core_initcall stub that calls it (sketched below). The core_initcalls are the earliest to be called, so this avoids any possible issues with dependencies between initcalls. This isn't just an academic issue either, since on pseries and PowerNV EEH init occurs in an arch_initcall and depends on the pci_controllers being available; similarly, the creation of pci_dns occurs at core_initcall_sync (i.e. between core and postcore initcalls). Those problems need to be addressed separately.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Make discover_phbs() static]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-1-oohall@gmail.com
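The stub is small; a sketch consistent with the description above (made static per the [mpe] note):

    static int __init discover_phbs(void)
    {
            if (ppc_md.discover_phbs)
                    ppc_md.discover_phbs();

            return 0;
    }
    core_initcall(discover_phbs);	/* earliest initcall level */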
2021-02-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file, -12/+16)
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-02-02  powerpc/64/signal: Fix regression in __kernel_sigtramp_rt64() semantics  (Raoni Fassina Firmino, 2 files, -2/+11)
Commit 0138ba5783ae ("powerpc/64/signal: Balance return predictor stack in signal trampoline") changed the __kernel_sigtramp_rt64() VDSO and trampoline code, and introduced a regression in the way glibc's backtrace()[1] detects the signal-handler stack frame. Apart from the practical implications, __kernel_sigtramp_rt64() was a VDSO function with the semantics that it is a function you can call from userspace to end signal handling. Now these semantics are no longer valid.

I believe the aforementioned change affects all releases since 5.9.

This patch tries to fix both the semantics and the practical aspect of __kernel_sigtramp_rt64(), returning it to the previous code, whilst keeping the intended behaviour of 0138ba5783ae by adding a new symbol to serve as the jump target from the kernel to the trampoline. Now the trampoline has two parts: a new entry point and the old return point.

[1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2021-January/223194.html

Fixes: 0138ba5783ae ("powerpc/64/signal: Balance return predictor stack in signal trampoline")
Cc: stable@vger.kernel.org # v5.9+
Signed-off-by: Raoni Fassina Firmino <raoni@linux.ibm.com>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Minor tweaks to change log formatting, add stable tag]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210201200505.iz46ubcizipnkcxe@work-tp
2021-02-01  perf/core: Add PERF_SAMPLE_WEIGHT_STRUCT  (Kan Liang, 1 file, -1/+1)
The current PERF_SAMPLE_WEIGHT sample type is very useful for expressing the cost of an action represented by the sample. This allows the profiler to scale the samples to be more informative to the programmer. It can also help to locate a hotspot; e.g., when profiling by memory latencies, expensive loads appear higher up in the histograms. But the current PERF_SAMPLE_WEIGHT sample type is solely determined by one factor. This is a problem if users want two or more factors to contribute to the weight. For example, the Golden Cove core PMU can provide both the instruction latency and the cache latency information as factors for memory profiling.

For current X86 platforms, although meminfo::latency is defined as a u64, only the lower 32 bits contain valid data in practice (no memory access could last longer than 4G cycles). The higher 32 bits can be used to store new factors.

Add a new sample type, PERF_SAMPLE_WEIGHT_STRUCT, to indicate the new sample weight structure. It shares the same space as the PERF_SAMPLE_WEIGHT sample type. Users can apply either the PERF_SAMPLE_WEIGHT sample type or the PERF_SAMPLE_WEIGHT_STRUCT sample type to retrieve the sample weight, but they cannot apply both sample types simultaneously. Currently, only X86 and PowerPC use the PERF_SAMPLE_WEIGHT sample type.
 - For PowerPC, nothing is changed for the PERF_SAMPLE_WEIGHT sample type, and there is no effect for the new PERF_SAMPLE_WEIGHT_STRUCT sample type. PowerPC can re-struct the weight field similarly later.
 - For X86, the same value will be dumped for the PERF_SAMPLE_WEIGHT sample type or the PERF_SAMPLE_WEIGHT_STRUCT sample type for now. The following patches will apply the new factors for the PERF_SAMPLE_WEIGHT_STRUCT sample type.

The fields in the union perf_sample_weight should be shared among different architectures. A generic name is required, but it's hard to abstract a name that applies to all architectures. For example, on X86 the fields store all kinds of latency, while on PowerPC they store MMCRA[TECX/TECM], which is not a latency. So a general name prefix 'var$NUM' is used here.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1611873611-156687-2-git-send-email-kan.liang@linux.intel.com
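A sketch of the shared layout on little-endian, following the 'var$NUM' naming described above (the big-endian variant reverses the field order):

    union perf_sample_weight {
            __u64	full;		/* the PERF_SAMPLE_WEIGHT view */
            struct {			/* the PERF_SAMPLE_WEIGHT_STRUCT view */
                    __u32	var1_dw;	/* e.g. memory/cache latency on X86 */
                    __u16	var2_w;		/* e.g. instruction latency on X86 */
                    __u16	var3_w;
            };
    };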
2021-01-31  Merge tag 'powerpc-5.11-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds, 1 file, -12/+16)
Pull powerpc fix from Michael Ellerman:
 "One fix for a bug in our soft interrupt masking, which could lead to interrupt replaying recursing, causing spurious interrupts. Thanks to Nicholas Piggin"

* tag 'powerpc-5.11-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64s: prevent recursive replay_soft_interrupts causing superfluous interrupt
2021-01-31  powerpc/powernv/pci: Drop pnv_phb->initialized  (Oliver O'Halloran, 2 files, -18/+0)
The pnv_phb->initialized flag is an odd beast. It was added back in 2012 in commit db1266c85261 ("powerpc/powernv: Skip check on PE if necessary") to allow devices to be enabled even if the device had not yet been assigned to a PE. Allowing the device to be enabled before the PE is configured may cause spurious EEH events, since none of the IOMMU context has been set up. I'm not entirely sure why this was ever necessary. My best guess is that it was a workaround for a bug or some other undesirable behaviour from the PCI core. Either way, it's unnecessary now, since as of commit dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE configuration") we can guarantee that the PE will be configured before the PCI core will allow drivers to bind to the device.

It's also worth pointing out that the ->initialized flag is only set in pnv_pci_ioda_create_dbgfs(). That function has its entire body wrapped in #ifdef CONFIG_DEBUG_FS. As a result, for kernels built without debugfs (i.e. petitboot) the other checks in pnv_pci_enable_device_hook() are bypassed entirely.
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200902013657.1753830-1-oohall@gmail.com
2021-01-31  powerpc/xmon: Select CONSOLE_POLL for the 8xx  (Christophe Leroy, 1 file, -0/+1)
The powerpc 8xx requires CONSOLE_POLL to get udbg_putc() and udbg_getc() in the CPM uart driver.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3d10a274516e9be8c4b0dc679a2840cdc1588872.1608716197.git.christophe.leroy@csgroup.eu
2021-01-31  powerpc/xmon: Enable breakpoints on 8xx  (Christophe Leroy, 1 file, -4/+0)
Since commit 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint"), the 8xx has breakpoints, so there is no reason to opt the breakpoint logic out of xmon for the 8xx.
Fixes: 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b0607f1113d1558e73476bb06db0ee16d31a6e5b.1608716197.git.christophe.leroy@csgroup.eu
2021-01-31  powerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is selected  (Christophe Leroy, 2 files, -1/+15)
It is now possible to build a book3s/32 kernel only for CPUs without a hash table. Opt the hash-related code out when CONFIG_PPC_BOOK3S_604 is not selected.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/62df436454ef06e104cc334a0859a2878d7888d5.1608274548.git.christophe.leroy@csgroup.eu
2021-01-31  powerpc/setup: Adjust six seq_printf() calls in show_cpuinfo()  (Markus Elfring, 1 file, -6/+5)
Several pieces of information are written to a seq_file here. Improve the execution speed of this data output by using the most appropriate seq_* helper for each call. This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5b62379e-a35f-4f56-f1b5-6350f76007e7@web.de
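A hypothetical illustration of the kind of substitution such Coccinelle cleanups make; the actual strings in show_cpuinfo() may differ:

    /* Before: printf-style call although the string has no conversions. */
    seq_printf(m, "machine\t\t: ");

    /* After: seq_puts() skips format parsing for constant strings. */
    seq_puts(m, "machine\t\t: ");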
2021-01-31  powerpc/82xx: Use common error handling code in pq2ads_pci_init_irq()  (Markus Elfring, 1 file, -7/+5)
Adjust jump targets so that the exception handling at the end of this function can be reused. This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1a4bafee-562f-5eb4-d2bd-34704f8c5ab3@web.de
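A generic sketch of the pattern, with hypothetical identifiers (example_pci_init_irq(), setup_pic(), request_irqs()):

    static int __init example_pci_init_irq(void)
    {
            struct device_node *np;
            int ret;

            np = of_find_compatible_node(NULL, NULL, "example,pci-pic");
            if (!np)
                    return -ENODEV;

            ret = setup_pic(np);		/* hypothetical helper */
            if (ret)
                    goto out_put_node;

            ret = request_irqs(np);		/* hypothetical helper */
    out_put_node:
            of_node_put(np);		/* single, shared cleanup */
            return ret;
    }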
2021-01-31  powerpc/82xx: Delete an unnecessary of_node_put() call in pq2ads_pci_init_irq()  (Markus Elfring, 1 file, -1/+0)
At one place, a null pointer would be passed to of_node_put() immediately after a call to of_find_compatible_node() failed. Remove this superfluous function call. This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9c060a41-438b-6fb8-d549-37c72fae4898@web.de
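A hypothetical illustration of the removed pattern; the compatible string and error message are illustrative:

    np = of_find_compatible_node(NULL, NULL, "example,pci-pic");
    if (!np) {
            pr_err("pci-pic: node not found\n");
            of_node_put(np);	/* np is NULL here, so this call is a no-op: drop it */
            return -ENODEV;
    }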
2021-01-31  powerpc/pseries: Delete an error message for a failed string duplication in dlpar_store()  (Markus Elfring, 1 file, -3/+1)
Omit an extra message for a memory allocation failure in this function.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Acked-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/535cfec2-782f-61ec-f6fb-c50186ead2af@web.de