path: root/arch/powerpc/lib
Age | Commit message | Author | Files | Lines
2020-07-23 | powerpc/sstep: Let compute tests specify a required cpu feature | Jordan Niethe | 1 | -0/+6
An array of struct compute_test is used to declare tests for compute instructions. Add a cpu_feature field to struct compute_test as an optional way to specify a cpu feature that must be present; if the feature is not present, the test is skipped. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-4-jniethe5@gmail.com
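A rough sketch of the mechanism described above (the field placement and the skip message are assumptions based on this description, not a verbatim quote of the patch):

    struct compute_test {
            char *mnemonic;
            unsigned long cpu_feature;      /* optional CPU_FTR_* bit; 0 means no requirement */
            struct {
                    char *descr;
                    unsigned long flags;
                    struct ppc_inst instr;
                    struct pt_regs regs;
            } subtests[MAX_SUBTESTS + 1];
    };

    /* in the test loop: skip tests whose required feature is absent */
    if (test->cpu_feature && !cpu_has_feature(test->cpu_feature)) {
            show_result(test->mnemonic, "SKIP (!CPU_FTR)");
            continue;
    }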
2020-07-23 | powerpc/sstep: Set NIP in instruction emulation tests | Jordan Niethe | 1 | -0/+3
The tests for emulation of compute instructions execute and emulate an instruction and then compare the results to verify the emulation. In ISA v3.1 there are instructions that operate relative to the NIP. Therefore set the NIP in the regs used for the emulated instruction to the location of the executed instruction so they will give the same result. This is a rework of a patch by Balamuruhan S. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-3-jniethe5@gmail.com
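In other words, the test would do something along these lines before emulating (a sketch; instr_addr is a stand-in for wherever the executed copy of the instruction lives):

    /*
     * Point the emulated NIP at the executed instruction so that
     * NIP-relative instructions (e.g. prefixed pc-relative loads)
     * compute the same effective address in both runs.
     */
    regs.nip = (unsigned long)instr_addr;
    emulate_step(&regs, instr);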
2020-07-23 | powerpc/sstep: Add tests for prefixed floating-point load/stores | Jordan Niethe | 1 | -0/+124
Add tests for the prefixed versions of the floating-point load/stores that are currently tested. This includes the following instructions:

 * Prefixed Load Floating-Point Single (plfs)
 * Prefixed Load Floating-Point Double (plfd)
 * Prefixed Store Floating-Point Single (pstfs)
 * Prefixed Store Floating-Point Double (pstfd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-2-jniethe5@gmail.com
2020-07-23 | powerpc/sstep: Add tests for prefixed integer load/stores | Jordan Niethe | 1 | -0/+85
Add tests for the prefixed versions of the integer load/stores that are currently tested. This includes the following instructions:

 * Prefixed Load Doubleword (pld)
 * Prefixed Load Word and Zero (plwz)
 * Prefixed Store Doubleword (pstd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-1-jniethe5@gmail.com
2020-07-16 | powerpc/ppc-opcode: Move ppc instruction encoding from test_emulate_step | Balamuruhan S | 1 | -99/+56
A few ppc instructions are encoded directly in test_emulate_step.c; consolidate them into ppc-opcode.h and use them from there. Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200624113038.908074-2-bala24@linux.ibm.com
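The encoders in ppc-opcode.h follow a pattern like the one below, shown for add as an illustrative sketch (the opcode image is OR'd with the register fields):

    #define ___PPC_RT(t)    (((t) & 0x1f) << 21)
    #define ___PPC_RA(a)    (((a) & 0x1f) << 16)
    #define ___PPC_RB(b)    (((b) & 0x1f) << 11)

    /* add RT,RA,RB: primary opcode 31, extended opcode 266 */
    #define PPC_RAW_ADD(t, a, b) \
            (0x7c000214 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))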
2020-07-16 | powerpc/pmem: Avoid the barrier in flush routines | Aneesh Kumar K.V | 1 | -6/+0
nvdimm expects the flush routines to just mark the cache clean. The barrier that makes the stores globally visible is done in nvdimm_flush(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200701072235.223558-7-aneesh.kumar@linux.ibm.com
2020-07-16 | powerpc/pmem: Add flush routines using new pmem store and sync instruction | Aneesh Kumar K.V | 1 | -4/+46
Start using the dcbstps; phwsync sequence for flushing persistent memory ranges. The new instructions are implemented as a variant of dcbf and hwsync, and on P8 and P9 they will be executed as those instructions. We avoid using them on older hardware; this helps to avoid difficult-to-debug bugs. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200701072235.223558-4-aneesh.kumar@linux.ibm.com
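A simplified sketch of such a flush loop, assuming encoding helpers along the lines of PPC_DCBSTPS/PPC_PHWSYNC for the new instructions (the function name here is illustrative):

    static inline void clean_pmem_range(unsigned long start, unsigned long stop)
    {
            unsigned long bytes = l1_dcache_bytes();
            void *addr = (void *)(start & ~(bytes - 1));
            unsigned long size = stop - (unsigned long)addr + (bytes - 1);
            unsigned long i;

            /* dcbstps: persistent-store variant of dcbst, one cache line at a time */
            for (i = 0; i < size >> l1_dcache_shift(); i++, addr += bytes)
                    asm volatile(PPC_DCBSTPS(%0, %1) : : "i"(0), "r"(addr) : "memory");

            /* phwsync: order the flushes with respect to the persistence domain */
            asm volatile(PPC_PHWSYNC ::: "memory");
    }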
2020-06-17 | maccess: rename probe_user_{read,write} to copy_{from,to}_user_nofault | Christoph Hellwig | 1 | -3/+3
Better describe what these functions do. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-17 | maccess: rename probe_kernel_{read,write} to copy_{from,to}_kernel_nofault | Christoph Hellwig | 1 | -3/+3
Better describe what these functions do. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-09 | mm: don't include asm/pgtable.h if linux/mm.h is already included | Mike Rapoport | 1 | -1/+0
Patch series "mm: consolidate definitions of page table accessors", v2. The low level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures and sometimes more than once. For instance, we have 31 definition of pgd_offset() for 25 supported architectures. Most of these definitions are actually identical and typically it boils down to, e.g. static inline unsigned long pmd_index(unsigned long address) { return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1); } static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) { return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address); } These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined. For architectures that really need a custom version there is always possibility to override the generic version with the usual ifdefs magic. These patches introduce include/linux/pgtable.h that replaces include/asm-generic/pgtable.h and add the definitions of the page table accessors to the new header. This patch (of 12): The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h> in the files that include <linux/mm.h>. The include statements in such cases are remove with a simple loop: for f in $(git grep -l "include <linux/mm.h>") ; do sed -i -e '/include <asm\/pgtable.h>/ d' $f done Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Rapoport <rppt@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-05 | Merge tag 'powerpc-5.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux | Linus Torvalds | 8 | -334/+816
Pull powerpc updates from Michael Ellerman:

 - Support for userspace to send requests directly to the on-chip GZIP accelerator on Power9.

 - Rework of our lockless page table walking (__find_linux_pte()) to make it safe against parallel page table manipulations without relying on an IPI for serialisation.

 - A series of fixes & enhancements to make our machine check handling more robust.

 - Lots of plumbing to add support for "prefixed" (64-bit) instructions on Power10.

 - Support for using huge pages for the linear mapping on 8xx (32-bit).

 - Remove obsolete Xilinx PPC405/PPC440 support, and an associated sound driver.

 - Removal of some obsolete 40x platforms and associated cruft.

 - Initial support for booting on Power10.

 - Lots of other small features, cleanups & fixes.

Thanks to: Alexey Kardashevskiy, Alistair Popple, Andrew Donnellan, Andrey Abramov, Aneesh Kumar K.V, Balamuruhan S, Bharata B Rao, Bulent Abali, Cédric Le Goater, Chen Zhou, Christian Zigotzky, Christophe JAILLET, Christophe Leroy, Dmitry Torokhov, Emmanuel Nicolet, Erhard F., Gautham R. Shenoy, Geoff Levand, George Spelvin, Greg Kurz, Gustavo A. R. Silva, Gustavo Walbon, Haren Myneni, Hari Bathini, Joel Stanley, Jordan Niethe, Kajol Jain, Kees Cook, Leonardo Bras, Madhavan Srinivasan, Mahesh Salgaonkar, Markus Elfring, Michael Neuling, Michal Simek, Nathan Chancellor, Nathan Lynch, Naveen N. Rao, Nicholas Piggin, Oliver O'Halloran, Paul Mackerras, Pingfan Liu, Qian Cai, Ram Pai, Raphael Moreira Zinsly, Ravi Bangoria, Sam Bobroff, Sandipan Das, Segher Boessenkool, Stephen Rothwell, Sukadev Bhattiprolu, Tyrel Datwyler, Wolfram Sang, Xiongfeng Wang.

* tag 'powerpc-5.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (299 commits)
  powerpc/pseries: Make vio and ibmebus initcalls pseries specific
  cxl: Remove dead Kconfig options
  powerpc: Add POWER10 architected mode
  powerpc/dt_cpu_ftrs: Add MMA feature
  powerpc/dt_cpu_ftrs: Enable Prefixed Instructions
  powerpc/dt_cpu_ftrs: Advertise support for ISA v3.1 if selected
  powerpc: Add support for ISA v3.1
  powerpc: Add new HWCAP bits
  powerpc/64s: Don't set FSCR bits in INIT_THREAD
  powerpc/64s: Save FSCR to init_task.thread.fscr after feature init
  powerpc/64s: Don't let DT CPU features set FSCR_DSCR
  powerpc/64s: Don't init FSCR_DSCR in __init_FSCR()
  powerpc/32s: Fix another build failure with CONFIG_PPC_KUAP_DEBUG
  powerpc/module_64: Use special stub for _mcount() with -mprofile-kernel
  powerpc/module_64: Simplify check for -mprofile-kernel ftrace relocations
  powerpc/module_64: Consolidate ftrace code
  powerpc/32: Disable KASAN with pages bigger than 16k
  powerpc/uaccess: Don't set KUEP by default on book3s/32
  powerpc/uaccess: Don't set KUAP by default on book3s/32
  powerpc/8xx: Reduce time spent in allow_user_access() and friends
  ...
2020-06-05 | powerpc: add support for folded p4d page tables | Mike Rapoport | 1 | -1/+6
Implement primitives necessary for the 4th level folding, add walks of p4d level where appropriate and replace 5level-fixup.h with pgtable-nop4d.h. [rppt@linux.ibm.com: powerpc/xmon: drop unused pgdir variable in show_pte() function] Link: http://lkml.kernel.org/r/20200519181454.GI1059226@linux.ibm.com [rppt@linux.ibm.com: build fix] Link: http://lkml.kernel.org/r/20200423141845.GI13521@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> # 8xx and 83xx Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert+renesas@glider.be> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: James Morse <james.morse@arm.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200414153455.21744-9-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-05-26 | powerpc: Add ppc_inst_as_u64() | Michael Ellerman | 1 | -7/+1
The code patching code wants to get the value of a struct ppc_inst as a u64 when the instruction is prefixed, so we can pass the u64 down to __put_user_asm() and write it with a single store. The optprobes code wants to load a struct ppc_inst as an immediate into a register so it is useful to have it as a u64 to use the existing helper function. Currently this is a bit awkward because the value differs based on the CPU endianness, so add a helper to do the conversion. This fixes the usage in arch_prepare_optimized_kprobe() which was previously incorrect on big endian. Fixes: 650b55b707fd ("powerpc: Add prefixed instructions to instruction data type") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Jordan Niethe <jniethe5@gmail.com> Link: https://lore.kernel.org/r/20200526072630.2487363-1-mpe@ellerman.id.au
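Conceptually the helper is just an endian-aware combination of the prefix and suffix words; a sketch of what it needs to do:

    static inline u64 ppc_inst_as_u64(struct ppc_inst x)
    {
    #ifdef CONFIG_CPU_LITTLE_ENDIAN
            /* LE: the prefix occupies the lower word of the doubleword store */
            return (u64)ppc_inst_suffix(x) << 32 | ppc_inst_val(x);
    #else
            /* BE: the prefix occupies the upper word */
            return (u64)ppc_inst_val(x) << 32 | ppc_inst_suffix(x);
    #endif
    }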
2020-05-26 | powerpc: Add ppc_inst_next() | Michael Ellerman | 1 | -7/+8
In a few places we want to calculate the address of the next instruction. Previously that was simple: we just added 4 bytes, or if using a u32 * we incremented that pointer by 1. But prefixed instructions make it more complicated; we need to advance by either 4 or 8 bytes depending on the actual instruction. We also can't do pointer arithmetic using struct ppc_inst, because it is always 8 bytes in size on 64-bit, even though we might only need to advance by 4 bytes. So add a ppc_inst_next() helper which calculates the location of the next instruction, if the given instruction was located at the given address. Note the instruction doesn't need to actually be at the address in memory. Although it would seem natural for the value to be passed by value, that makes it too easy to write a loop that will read off the end of a page, eg:

    for (; src < end;
         src = ppc_inst_next(src, *src),
         dest = ppc_inst_next(dest, *dest))

As noticed by Christophe and Jordan, if end is the exact end of a page, and the next page is not mapped, this will fault, because *dest will read 8 bytes, 4 bytes into the next page. So the value is passed by reference, and the helper is careful to use ppc_inst_read() on it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Jordan Niethe <jniethe5@gmail.com> Link: https://lore.kernel.org/r/20200522133318.1681406-1-mpe@ellerman.id.au
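Given the by-reference requirement, the helper might look roughly like this (a sketch, assuming the ppc_inst_read() and ppc_inst_len() helpers introduced earlier in the series):

    #define ppc_inst_next(location, value)                          \
    ({                                                              \
            struct ppc_inst _tmp;                                   \
            /* read only the 4 or 8 bytes the instruction owns */   \
            _tmp = ppc_inst_read((struct ppc_inst *)(value));       \
            (void *)(location) + ppc_inst_len(_tmp);                \
    })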
2020-05-18 | powerpc sstep: Add support for prefixed fixed-point arithmetic | Jordan Niethe | 1 | -0/+20
This adds emulation support for the following prefixed Fixed-Point Arithmetic instructions: * Prefixed Add Immediate (paddi) Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> [mpe: Squash in get_op() usage] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-31-jniethe5@gmail.com
2020-05-18 | powerpc sstep: Add support for prefixed load/stores | Jordan Niethe | 1 | -2/+162
This adds emulation support for the following prefixed integer load/stores:

 * Prefixed Load Byte and Zero (plbz)
 * Prefixed Load Halfword and Zero (plhz)
 * Prefixed Load Halfword Algebraic (plha)
 * Prefixed Load Word and Zero (plwz)
 * Prefixed Load Word Algebraic (plwa)
 * Prefixed Load Doubleword (pld)
 * Prefixed Store Byte (pstb)
 * Prefixed Store Halfword (psth)
 * Prefixed Store Word (pstw)
 * Prefixed Store Doubleword (pstd)
 * Prefixed Load Quadword (plq)
 * Prefixed Store Quadword (pstq)

the following prefixed floating-point load/stores:

 * Prefixed Load Floating-Point Single (plfs)
 * Prefixed Load Floating-Point Double (plfd)
 * Prefixed Store Floating-Point Single (pstfs)
 * Prefixed Store Floating-Point Double (pstfd)

and the following prefixed VSX load/stores:

 * Prefixed Load VSX Scalar Doubleword (plxsd)
 * Prefixed Load VSX Scalar Single-Precision (plxssp)
 * Prefixed Load VSX Vector [0|1] (plxv, plxv0, plxv1)
 * Prefixed Store VSX Scalar Doubleword (pstxsd)
 * Prefixed Store VSX Scalar Single-Precision (pstxssp)
 * Prefixed Store VSX Vector [0|1] (pstxv, pstxv0, pstxv1)

Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> [mpe: Use CONFIG_PPC64 not __powerpc64__, use get_op()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-30-jniethe5@gmail.com
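All of these share one effective-address pattern: a 34-bit displacement split across prefix and suffix, added to either a base register or, when the prefix R bit is set, the NIP. A hedged sketch of that shared step (field positions per ISA v3.1 8LS/MLS prefix forms; the function name is illustrative):

    static unsigned long prefixed_ea(u32 prefix, u32 suffix,
                                     const struct pt_regs *regs)
    {
            unsigned int ra = (suffix >> 16) & 0x1f;    /* RA field of the suffix */
            int r = (prefix >> 20) & 1;                 /* R: pc-relative flag */
            long d;

            /* d0 (18 bits, in the prefix) : d1 (16 bits, in the suffix) */
            d = ((long)(prefix & 0x3ffff) << 16) | (suffix & 0xffff);
            d = (d << 30) >> 30;                        /* sign-extend from 34 bits */

            if (r)                                      /* R=1: NIP-relative, RA must be 0 */
                    return regs->nip + d;
            return (ra ? regs->gpr[ra] : 0) + d;
    }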
2020-05-18 | powerpc: Test prefixed instructions in feature fixups | Jordan Niethe | 2 | -0/+144
Expand the feature-fixups self-tests to include tests for prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Use CONFIG_PPC64 not __powerpc64__, add empty inlines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-26-jniethe5@gmail.com
2020-05-18 | powerpc: Test prefixed code patching | Jordan Niethe | 3 | -1/+42
Expand the code-patching self-tests to include tests for patching prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Use CONFIG_PPC64 not __powerpc64__] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-25-jniethe5@gmail.com
2020-05-18 | powerpc: Add prefixed instructions to instruction data type | Jordan Niethe | 4 | -4/+58
For powerpc64, redefine the ppc_inst type so both word and prefixed instructions can be represented. On powerpc32 the type will remain the same. Update places which had assumed instructions to be 4 bytes long. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> [mpe: Rework the get_user_inst() macros to be parameterised, and don't assign to the dest if an error occurred. Use CONFIG_PPC64 not __powerpc64__ in a few places. Address other comments from Christophe. Fix some sparse complaints.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-24-jniethe5@gmail.com
2020-05-18 | powerpc: Make test_translate_branch() independent of instruction length | Jordan Niethe | 1 | -3/+3
test_translate_branch() uses two pointers to instructions within a buffer, p and q, to test patch_branch(). The pointer arithmetic done on them assumes a size of 4. This will not work if the instruction length changes. Instead do the arithmetic relative to the void * to the buffer. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-21-jniethe5@gmail.com
2020-05-18 | powerpc: Introduce a function for reporting instruction length | Jordan Niethe | 1 | -7/+7
Currently all instructions have the same length, but in preparation for prefixed instructions introduce a function for returning instruction length. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-18-jniethe5@gmail.com
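At this point the helper can simply report a constant; once prefixed instructions land, it presumably reduces to something like this sketch:

    static inline int ppc_inst_len(struct ppc_inst x)
    {
            /* prefixed instructions are a doubleword, everything else a word */
            return ppc_inst_prefixed(x) ? 8 : 4;
    }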
2020-05-18 | powerpc: Add a probe_kernel_read_inst() function | Jordan Niethe | 1 | -0/+13
Introduce a probe_kernel_read_inst() function to use in cases where probe_kernel_read() is used for getting an instruction. This will be more useful for prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> [mpe: Don't write to *inst on error] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-15-jniethe5@gmail.com
2020-05-18 | powerpc: Add a probe_user_read_inst() function | Jordan Niethe | 2 | -1/+21
Introduce a probe_user_read_inst() function to use in cases where probe_user_read() is used for getting an instruction. This will be more useful for prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> [mpe: Don't write to *inst on error, fold in __user annotations] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-14-jniethe5@gmail.com
2020-05-18 | powerpc: Use a function for reading instructions | Jordan Niethe | 2 | -14/+16
Prefixed instructions will mean there are instructions of different lengths. As a result, dereferencing a pointer to an instruction will not necessarily give the desired result. Introduce a function for reading instructions from memory into the instruction data type. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-13-jniethe5@gmail.com
2020-05-18 | powerpc: Use a datatype for instructions | Jordan Niethe | 4 | -74/+77
Currently unsigned ints are used to represent instructions on powerpc. This has worked well as instructions have always been 4 byte words. However, ISA v3.1 introduces some changes to instructions that mean this scheme will no longer work as well. This change is Prefixed Instructions. A prefixed instruction is made up of a word prefix followed by a word suffix to make an 8 byte double word instruction. No matter the endianness of the system the prefix always comes first. Prefixed instructions are only planned for powerpc64. Introduce a ppc_inst type to represent both prefixed and word instructions on powerpc64 while keeping it possible to exclusively have word instructions on powerpc32. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix compile error in emulate_spe()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-12-jniethe5@gmail.com
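The resulting type is essentially a prefix word plus, on 64-bit only, a suffix word; a sketch of the shape it takes (the sentinel convention is an assumption based on how a word instruction can be distinguished from a prefixed one):

    struct ppc_inst {
            u32 val;        /* word instruction, or the prefix of a prefixed one */
    #ifdef CONFIG_PPC64
            u32 suffix;     /* a sentinel value (e.g. 0xff) can mark "not prefixed" */
    #endif
    } __packed;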
2020-05-18 | powerpc: Introduce functions for instruction equality | Jordan Niethe | 1 | -6/+6
In preparation for an instruction data type that can not be directly used with the '==' operator, use functions for checking equality. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> Link: https://lore.kernel.org/r/20200506034050.24806-11-jniethe5@gmail.com
2020-05-18 | powerpc: Use a function for getting the instruction op code | Jordan Niethe | 2 | -3/+3
In preparation for using a data type for instructions that can not be directly used with the '>>' operator, use a function for getting the op code of an instruction. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-9-jniethe5@gmail.com
2020-05-18 | powerpc: Use an accessor for instructions | Jordan Niethe | 3 | -146/+148
In preparation for introducing a more complicated instruction type to accommodate prefixed instructions, use an accessor for getting an instruction as a u32. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-8-jniethe5@gmail.com
2020-05-18 | powerpc: Use a macro for creating instructions from u32s | Jordan Niethe | 3 | -66/+69
In preparation for instructions having a more complex data type start using a macro, ppc_inst(), for making an instruction out of a u32. A macro is used so that instructions can be used as initializer elements. Currently this does nothing, but it will allow for creating a data type that can represent prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Change include guard to _ASM_POWERPC_INST_H] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-7-jniethe5@gmail.com
2020-05-18 | powerpc: Change calling convention for create_branch() et. al. | Jordan Niethe | 2 | -57/+82
create_branch(), create_cond_branch() and translate_branch() return the instruction that they create, or return 0 to signal an error. Separate these concerns in preparation for an instruction type that is not just an unsigned int. Write the created instruction through a pointer passed as the first parameter to the function, and use a non-zero return value to signify an error. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-6-jniethe5@gmail.com
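The signatures therefore change shape roughly as follows (a sketch of the before/after convention, using the pre-series unsigned int instruction type):

    /* before: the created instruction is the return value, 0 on error */
    unsigned int create_branch(const unsigned int *addr,
                               unsigned long target, int flags);

    /* after: the instruction comes back through the first parameter,
     * and a non-zero return value signals an error */
    int create_branch(unsigned int *instr, const unsigned int *addr,
                      unsigned long target, int flags);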
2020-04-01 | powerpc/64/sstep: Ifdef the deprecated fast endian switch syscall | Nicholas Piggin | 1 | -2/+3
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200225173541.1549955-25-npiggin@gmail.com
2020-03-25 | powerpc/sstep: Fix DS operand in ld encoding to appropriate value | Balamuruhan S | 1 | -2/+3
The ld instruction's immediate operand is a 14-bit field (DS) concatenated with 0b00 on the right; encode it accordingly. Introduce the macro IMM_DS() to encode DS-form instructions with a 14-bit immediate field. Fixes: 4ceae137bdab ("powerpc: emulate_step() tests for load/store instructions") Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200311102405.392263-1-bala24@linux.ibm.com
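A macro matching that description masks off the low two bits so the displacement stays a multiple of 4 (a sketch; the TEST_LD helper is illustrative of how the test encodings would use it):

    /* DS-form: 14-bit displacement || 0b00, i.e. keep bits 2..15 only */
    #define IMM_DS(i)       ((uintptr_t)(i) & 0xfffc)

    /* e.g. encoding "ld r, offset(base)" in a test */
    #define TEST_LD(r, base, i) \
            (PPC_INST_LD | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i))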
2020-03-17 | powerpc/lib: Fix emulate_step() std test | Nicholas Piggin | 1 | -1/+1
We should be checking that the instruction was stepped *and* that the target register has the right value. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> [mpe: Write change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200226055302.1577954-1-npiggin@gmail.com
2019-12-16 | powerpc: Fix __clear_user() with KUAP enabled | Andrew Donnellan | 2 | -5/+5
The KUAP implementation adds calls in clear_user() to enable and disable access to userspace memory. However, it doesn't add these to __clear_user(), which is used in the ptrace regset code. As there's only one direct user of __clear_user() (the regset code), and the time taken to set the AMR for KUAP purposes is going to dominate the cost of a quick access_ok(), there's not much point having a separate path. Rename __clear_user() to __arch_clear_user(), and make __clear_user() just call clear_user(). Reported-by: syzbot+f25ecf4b2982d8c7a640@syzkaller-ppc64.appspotmail.com Reported-by: Daniel Axtens <dja@axtens.net> Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection") Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> [mpe: Use __arch_clear_user() for the asm version like arm64 & nds32] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191209132221.15328-1-ajd@linux.ibm.com
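After the rename, the wrappers would look roughly like this (a sketch of the structure described above):

    static inline unsigned long clear_user(void __user *addr, unsigned long size)
    {
            unsigned long ret = size;

            might_fault();
            if (likely(access_ok(addr, size))) {
                    allow_write_to_user(addr, size);        /* open the KUAP window */
                    ret = __arch_clear_user(addr, size);
                    prevent_write_to_user(addr, size);      /* close it again */
            }
            return ret;
    }

    /* the only direct user is the regset code; no separate fast path needed */
    static inline unsigned long __clear_user(void __user *addr, unsigned long size)
    {
            return clear_user(addr, size);
    }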
2019-12-05 | powerpc/pmem: Convert to EXPORT_SYMBOL_GPL | Aneesh Kumar K.V | 1 | -2/+2
All other architectures export this as a GPL symbol. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191202064018.155083-1-aneesh.kumar@linux.ibm.com
2019-09-12 | powerpc/memcpy: Fix stack corruption for smaller sizes | Santosh Sivaraj | 1 | -1/+1
For sizes of less than 128 bytes, the code branches out early without saving the stack frame, so when the frame is restored later it drops the caller's frame. Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Santosh Sivaraj <santosh@fossix.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190903214359.23887-1-santosh@fossix.org
2019-08-27 | powerpc/spinlocks: Rename SPLPAR-only spinlocks | Christopher M. Riedl | 1 | -3/+3
The __rw_yield and __spin_yield helpers only pertain to SPLPAR mode. Rename them to make this relationship obvious. Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190813031314.1828-3-cmr@informatik.wtf
2019-08-21 | powerpc/memcpy: Add memcpy_mcsafe for pmem | Balbir Singh | 2 | -1/+243
The pmem infrastructure uses memcpy_mcsafe in the pmem layer to convert a machine check exception encountered during the memcpy into a return value on failure. The return value is the number of bytes remaining to be copied. This patch largely borrows from the copyuser_power7 logic and does not add the VMX optimizations, largely to keep the patch simple. If needed, those optimizations can be folded in. Signed-off-by: Balbir Singh <bsingharora@gmail.com> [arbab@linux.ibm.com: Added symbol export] Co-developed-by: Santosh Sivaraj <santosh@fossix.org> Signed-off-by: Santosh Sivaraj <santosh@fossix.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190820081352.8641-7-santosh@fossix.org
2019-08-05 | powerpc/32: activate ARCH_HAS_PMEM_API and ARCH_HAS_UACCESS_FLUSHCACHE | Christophe Leroy | 1 | -2/+2
PPC32 also has flush_dcache_range(), so it can support ARCH_HAS_PMEM_API and ARCH_HAS_UACCESS_FLUSHCACHE without changes. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a682a2f9db308c5cfe77e45aa3352e41bc9f4e33.1564554634.git.christophe.leroy@c-s.fr
2019-07-14 | Merge tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux | Linus Torvalds | 3 | -9/+6
Pull powerpc updates from Michael Ellerman:
 "Notable changes:

  - Removal of the NPU DMA code, used by the out-of-tree Nvidia driver, as well as some other functions only used by drivers that haven't (yet?) made it upstream.

  - A fix for a bug in our handling of hardware watchpoints (eg. perf record -e mem: ...) which could lead to register corruption and kernel crashes.

  - Enable HAVE_ARCH_HUGE_VMAP, which allows us to use large pages for vmalloc when using the Radix MMU.

  - A large but incremental rewrite of our exception handling code to use gas macros rather than multiple levels of nested CPP macros.

  And the usual small fixes, cleanups and improvements.

  Thanks to: Alastair D'Silva, Alexey Kardashevskiy, Andreas Schwab, Aneesh Kumar K.V, Anju T Sudhakar, Anton Blanchard, Arnd Bergmann, Athira Rajeev, Cédric Le Goater, Christian Lamparter, Christophe Leroy, Christophe Lombard, Christoph Hellwig, Daniel Axtens, Denis Efremov, Enrico Weigelt, Frederic Barrat, Gautham R. Shenoy, Geert Uytterhoeven, Geliang Tang, Gen Zhang, Greg Kroah-Hartman, Greg Kurz, Gustavo Romero, Krzysztof Kozlowski, Madhavan Srinivasan, Masahiro Yamada, Mathieu Malaterre, Michael Neuling, Nathan Lynch, Naveen N. Rao, Nicholas Piggin, Nishad Kamdar, Oliver O'Halloran, Qian Cai, Ravi Bangoria, Sachin Sant, Sam Bobroff, Satheesh Rajendran, Segher Boessenkool, Shaokun Zhang, Shawn Anastasio, Stewart Smith, Suraj Jitindar Singh, Thiago Jung Bauermann, YueHaibing"

* tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (163 commits)
  powerpc/powernv/idle: Fix restore of SPRN_LDBAR for POWER9 stop state.
  powerpc/eeh: Handle hugepages in ioremap space
  ocxl: Update for AFU descriptor template version 1.1
  powerpc/boot: pass CONFIG options in a simpler and more robust way
  powerpc/boot: add {get, put}_unaligned_be32 to xz_config.h
  powerpc/irq: Don't WARN continuously in arch_local_irq_restore()
  powerpc/module64: Use symbolic instructions names.
  powerpc/module32: Use symbolic instructions names.
  powerpc: Move PPC_HA() PPC_HI() and PPC_LO() to ppc-opcode.h
  powerpc/module64: Fix comment in R_PPC64_ENTRY handling
  powerpc/boot: Add lzo support for uImage
  powerpc/boot: Add lzma support for uImage
  powerpc/boot: don't force gzipped uImage
  powerpc/8xx: Add microcode patch to move SMC parameter RAM.
  powerpc/8xx: Use IO accessors in microcode programming.
  powerpc/8xx: replace #ifdefs by IS_ENABLED() in microcode.c
  powerpc/8xx: refactor programming of microcode CPM params.
  powerpc/8xx: refactor printing of microcode patch name.
  powerpc/8xx: Refactor microcode write
  powerpc/8xx: refactor writing of CPM microcode arrays
  ...
2019-07-04 | powerpc/64: flush_inval_dcache_range() becomes flush_dcache_range() | Christophe Leroy | 1 | -4/+4
On most arches that have flush_dcache_range(), including PPC32, the function does a writeback and invalidation of the cache block. On PPC64, flush_dcache_range() only does a writeback, while flush_inval_dcache_range() does the invalidation in addition. Also, it looks like there are no PPC64 platforms within arch/powerpc/ that use flush_dcache_range(). This patch drops the existing 64-bit version of flush_dcache_range() and renames flush_inval_dcache_range() to flush_dcache_range(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-05 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295 | Thomas Gleixner | 1 | -9/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of version 2 of the gnu general public license as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 64 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190529141901.894819585@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 | Thomas Gleixner | 8 | -104/+8
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1334 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 | Thomas Gleixner | 19 | -96/+19
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-28 | powerpc/lib: only build ldstfp.o when CONFIG_PPC_FPU is set | Christophe Leroy | 2 | -5/+2
The entire code in ldstfp.o is enclosed in #ifdef CONFIG_PPC_FPU, so there is no point in building it when this config is not selected. Fixes: cd64d1697cf0 ("powerpc: mtmsrd not defined") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-28 | powerpc/lib: fix redundant inclusion of quad.o | Christophe Leroy | 1 | -1/+1
quad.o is only for PPC64, and already included in obj64-y, so it doesn't have to be in obj-y Fixes: 31bfdb036f12 ("powerpc: Use instruction emulation infrastructure to handle alignment faults") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-02 | powerpc: disable KASAN instrumentation on early/critical files | Christophe Leroy | 1 | -0/+8
All files containing functions that run before kasan_early_init() is called must have KASAN instrumentation disabled. For those files, branch profiling also has to be disabled, otherwise each if () generates a call to ftrace_likely_update(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-05-02 | powerpc: prepare string/mem functions for KASAN | Christophe Leroy | 4 | -9/+27
CONFIG_KASAN implements wrappers for memcpy(), memmove() and memset(). Those wrappers do the verification and then call __memcpy(), __memmove() and __memset() respectively. The arches are therefore expected to rename their optimised functions that way. For files on which KASAN is inhibited, #defines are used to allow them to directly call the optimised versions of the functions without going through the KASAN wrappers. See commit 393f203f5fd5 ("x86_64: kasan: add interceptors for memset/memmove/memcpy functions") for details. Other string / mem functions do not (yet) have kasan wrappers; we therefore have to fall back to the generic versions when KASAN is active, otherwise KASAN checks will be skipped. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Fixups to keep selftests working] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
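The renaming scheme amounts to something like this in the arch string header (a sketch; see the x86 commit referenced above for the original pattern):

    /* the optimised assembly routines get double-underscore names ... */
    void *__memset(void *s, int c, __kernel_size_t count);
    void *__memcpy(void *to, const void *from, __kernel_size_t n);
    void *__memmove(void *to, const void *from, __kernel_size_t n);

    /* ... and files where KASAN is inhibited bypass the wrappers */
    #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
    #define memset(s, c, n)         __memset(s, c, n)
    #define memcpy(dst, src, len)   __memcpy(dst, src, len)
    #define memmove(dst, src, len)  __memmove(dst, src, len)
    #endif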
2019-04-21 | powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() | Russell Currey | 1 | -2/+2
__patch_instruction() is called in early boot, and uses __put_user_size(), which includes the allow/prevent calls to enforce KUAP, which could either be called too early, or in the Radix case, forced to use "early_" versions of functions just to safely handle this one case. __put_user_asm() does not do this, and thus is safe to use both in early boot, and later on since in this case it should only ever be touching kernel memory. __patch_instruction() was previously refactored to use __put_user_size() in order to be able to return -EFAULT, which would allow the kernel to patch instructions in userspace, which should never happen. This has the functional change of causing faults on userspace addresses if KUAP is turned on, which should never happen in practice. A future enhancement could be to double check the patch address is definitely allowed to be tampered with by the kernel. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-04-21 | powerpc: Add a framework for Kernel Userspace Access Protection | Christophe Leroy | 1 | -0/+4
This patch implements a framework for Kernel Userspace Access Protection. Subarches then have the possibility to provide their own implementation by providing setup_kuap() and allow/prevent_user_access(). Some platforms will need to know the area accessed and whether it is accessed for read, write or both; therefore the source, destination and size are handed over to the two functions. mpe: Rename to allow/prevent rather than unlock/lock, and add read/write wrappers. Drop the 32-bit code for now until we have an implementation for it. Add kuap to pt_regs for 64-bit as well as 32-bit. Don't split strings, use pr_crit_ratelimited(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
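The shape of the framework, as a hedged sketch: every user access gets bracketed by the new hooks, and subarches without a KUAP implementation get empty stubs (the example function name below is illustrative, not the kernel's):

    /* generic fallbacks; a subarch with KUAP support overrides these */
    static inline void allow_user_access(void __user *to, const void __user *from,
                                         unsigned long size) { }
    static inline void prevent_user_access(void __user *to, const void __user *from,
                                           unsigned long size) { }

    /* usage pattern in a user-access path */
    static inline unsigned long
    raw_copy_to_user_example(void __user *to, const void *from, unsigned long n)
    {
            unsigned long ret;

            allow_write_to_user(to, n);     /* write-only wrapper around the hook */
            ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
            prevent_write_to_user(to, n);
            return ret;
    }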