path: root/arch/powerpc
Age | Commit message | Author | Files | Lines
2020-03-17 | powerpc/lib: Fix emulate_step() std test | Nicholas Piggin | 1 | -1/+1
We should be checking that the instruction was stepped *and* that the target register has the right value. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> [mpe: Write change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200226055302.1577954-1-npiggin@gmail.com
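A minimal sketch of the corrected check; the instruction, register number and expected value are illustrative, and show_result() is assumed from the selftest:

  /* both conditions must hold: the emulation stepped AND the result is right */
  stepped = emulate_step(&regs, instr);
  if (stepped == 1 && regs.gpr[5] == expected)
          show_result("std", "PASS");
  else
          show_result("std", "FAIL");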
2020-03-17 | powerpc/64s/radix: Fix CONFIG_SMP=n build | Nicholas Piggin | 2 | -1/+7
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200302010410.2957362-1-npiggin@gmail.com
2020-03-17 | powerpc/pmac/smp: Drop unnecessary volatile qualifier | YueHaibing | 1 | -2/+2
core99_l2_cache/core99_l3_cache do not need to be marked as volatile, so remove the qualifier. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200303085604.24952-1-yuehaibing@huawei.com
2020-03-17 | powerpc/pmac/smp: Avoid unused-variable warnings | Ilie Halip | 1 | -4/+4
When building with ppc64_defconfig, the compiler reports that these 2 variables are not used: warning: unused variable 'core99_l2_cache' [-Wunused-variable] warning: unused variable 'core99_l3_cache' [-Wunused-variable] They are only used when CONFIG_PPC64 is not defined. Move them into a section which does the same macro check. Reported-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Ilie Halip <ilie.halip@gmail.com> [mpe: Move them into core99_init_caches() which is their only user] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190920153951.25762-1-ilie.halip@gmail.com
2020-03-17 | powerpc/fsl_booke: Avoid creating duplicate tlb1 entry | Laurentiu Tudor | 1 | -1/+11
In the current implementation, the call to loadcam_multi() is wrapped between switch_to_as1() and restore_to_as0() calls so, when it tries to create its own temporary AS=1 TLB1 entry, it ends up duplicating the existing one created by switch_to_as1(). Add a check to skip creating the temporary entry if already running in AS=1. Fixes: d9e1831a4202 ("powerpc/85xx: Load all early TLB entries at once") Cc: stable@vger.kernel.org # v4.4+ Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com> Acked-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200123111914.2565-1-laurentiu.tudor@nxp.com
2020-03-16 | KVM: Remove unnecessary asm/kvm_host.h includes | Peter Xu | 6 | -6/+0
Remove includes of asm/kvm_host.h from files that already include linux/kvm_host.h to make it more obvious that there is no ordering issue between the two headers. linux/kvm_host.h includes asm/kvm_host.h to pick up architecture specific settings, and this will never change, i.e. including asm/kvm_host.h after linux/kvm_host.h may seem problematic, but in practice is simply redundant. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16 | KVM: Terminate memslot walks via used_slots | Sean Christopherson | 1 | -1/+1
Refactor memslot handling to treat the number of used slots as the de facto size of the memslot array, e.g. return NULL from id_to_memslot() when an invalid index is provided instead of relying on npages==0 to detect an invalid memslot. Rework the sorting and walking of memslots in advance of dynamically sizing memslots to aid bisection and debug, e.g. with luck, a bug in the refactoring will bisect here and/or hit a WARN instead of randomly corrupting memory. Alternatively, a global null/invalid memslot could be returned, i.e. so callers of id_to_memslot() don't have to explicitly check for a NULL memslot, but that approach runs the risk of introducing difficult-to-debug issues, e.g. if the global null slot is modified. Constifying the return from id_to_memslot() to combat such issues is possible, but would require a massive refactoring of arch specific code and would still be susceptible to casting shenanigans. Add function comments to update_memslots() and search_memslots() to explicitly (and loudly) state how memslots are sorted. Opportunistically stuff @hva with a non-canonical value when deleting a private memslot on x86 to detect bogus usage of the freed slot. No functional change intended. Tested-by: Christoffer Dall <christoffer.dall@arm.com> Tested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
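A sketch of the reworked lookup, with field names assumed from the generic memslot code; an out-of-range id now yields NULL instead of a slot with npages == 0:

  static inline struct kvm_memory_slot *
  id_to_memslot(struct kvm_memslots *slots, int id)
  {
          int index = slots->id_to_index[id];

          if (index < 0)
                  return NULL;    /* invalid id: callers must handle NULL */

          return &slots->memslots[index];
  }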
2020-03-16 | KVM: Ensure validity of memslot with respect to kvm_get_dirty_log() | Sean Christopherson | 1 | -5/+1
Rework kvm_get_dirty_log() so that it "returns" the associated memslot on success. A future patch will rework memslot handling such that id_to_memslot() can return NULL, returning the memslot makes it more obvious that the validity of the memslot has been verified, i.e. precludes the need to add validity checks in the arch code that are technically unnecessary. To maintain ordering in s390, move the call to kvm_arch_sync_dirty_log() from s390's kvm_vm_ioctl_get_dirty_log() to the new kvm_get_dirty_log(). This is a nop for PPC, the only other arch that doesn't select KVM_GENERIC_DIRTYLOG_READ_PROTECT, as its sync_dirty_log() is empty. Ideally, moving the sync_dirty_log() call would be done in a separate patch, but it can't be done in a follow-on patch because that would temporarily break s390's ordering. Making the move in a preparatory patch would be functionally correct, but would create an odd scenario where the moved sync_dirty_log() would operate on a "different" memslot due to consuming the result of a different id_to_memslot(). The memslot couldn't actually be different as slots_lock is held, but the code is confusing enough as it is, i.e. moving sync_dirty_log() in this patch is the lesser of all evils. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16 | KVM: Provide common implementation for generic dirty log functions | Sean Christopherson | 2 | -0/+10
Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code. The arch specific implementations are extremely similar, differing only in whether the dirty log needs to be sync'd from hardware (x86) and how the TLBs are flushed. Add new arch hooks to handle sync and TLB flush; the sync will also be used for non-generic dirty log support in a future patch (s390). The ulterior motive for providing a common implementation is to eliminate the dependency between arch and common code with respect to the memslot referenced by the dirty log, i.e. to make it obvious in the code that the validity of the memslot is guaranteed, as a future patch will rework memslot handling such that id_to_memslot() can return NULL. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
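The two arch hooks described above, in a hedged sketch (signatures assumed):

  /* x86 syncs the hardware dirty log here; other arches leave it empty */
  void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);

  /* arch-specific TLB flush once the dirty bitmap has been harvested */
  void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
                                          struct kvm_memory_slot *memslot);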
2020-03-16 | KVM: Simplify kvm_free_memslot() and all its descendents | Sean Christopherson | 6 | -20/+11
Now that all callers of kvm_free_memslot() pass NULL for @dont, remove the param from the top-level routine and all arch's implementations. No functional change intended. Tested-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16KVM: Drop "const" attribute from old memslot in commit_memory_region()Sean Christopherson1-1/+1
Drop the "const" attribute from @old in kvm_arch_commit_memory_region() to allow arch specific code to free arch specific resources in the old memslot without having to cast away the attribute. Freeing resources in kvm_arch_commit_memory_region() paves the way for simplifying kvm_free_memslot() by eliminating the last usage of its @dont param. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16 | KVM: Drop kvm_arch_create_memslot() | Sean Christopherson | 1 | -6/+0
Remove kvm_arch_create_memslot() now that all arch implementations are effectively nops. Removing kvm_arch_create_memslot() eliminates the possibility for arch specific code to allocate memory prior to setting a memslot, which sets the stage for simplifying kvm_free_memslot(). Cc: Janosch Frank <frankja@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16 | KVM: PPC: Move memslot memory allocation into prepare_memory_region() | Sean Christopherson | 6 | -45/+25
Allocate the rmap array during kvm_arch_prepare_memory_region() to pave the way for removing kvm_arch_create_memslot() altogether. Moving PPC's memory allocation only changes the order of kernel memory allocations between PPC and common KVM code. No functional change intended. Acked-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-13 | powerpc/vdso: remove deprecated VDS64_HAS_DESCRIPTORS references | Joe Lawrence | 2 | -29/+0
The original 2005 patch that introduced the powerpc vdso, pre-git ("ppc64: Implement a vDSO and use it for signal trampoline") notes that: ... symbols exposed by the vDSO aren't "normal" function symbols, apps can't be expected to link against them directly, the vDSO's are both seen as if they were linked at 0 and the symbols just contain offsets to the various functions. This is done on purpose to avoid a relocation step (ppc64 functions normally have descriptors with abs addresses in them). When glibc uses those functions, it's expected to use it's own trampolines that know how to reach them. Despite that explanation, there remain dead #ifdef VDS64_HAS_DESCRIPTORS code blocks that provide alternate function definitions that set up function descriptors. Since VDS64_HAS_DESCRIPTORS has been unused for all these years, we might as well finally remove it from the codebase. Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200224211848.26087-1-joe.lawrence@redhat.com
2020-03-13 | powerpc/32: Fix missing NULL pmd check in virt_to_kpte() | Christophe Leroy | 1 | -1/+3
Commit 2efc7c085f05 ("powerpc/32: drop get_pteptr()"), replaced get_pteptr() by virt_to_kpte(). But virt_to_kpte() lacks a NULL pmd check and returns an invalid non NULL pointer when there is no page table. Reported-by: Nick Desaulniers <ndesaulniers@google.com> Fixes: 2efc7c085f05 ("powerpc/32: drop get_pteptr()") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b1177cdfc6af74a3e277bba5d9e708c4b3315ebe.1583575707.git.christophe.leroy@c-s.fr
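A sketch of the fix, assuming the pmd_ptr_k() helper introduced earlier in this series:

  static inline pte_t *virt_to_kpte(unsigned long vaddr)
  {
          pmd_t *pmd = pmd_ptr_k(vaddr);

          /* no page table here: return NULL rather than a bogus pointer */
          return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
  }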
2020-03-13 | powerpc/kasan: Fix shadow memory protection with CONFIG_KASAN_VMALLOC | Christophe Leroy | 1 | -7/+2
With CONFIG_KASAN_VMALLOC, new page tables are created at the time shadow memory for vmalloc area is unmapped. If some parts of the page table still have entries to the zero page shadow memory, the entries are wrongly marked RW. With CONFIG_KASAN_VMALLOC, almost the entire kernel address space is managed by KASAN. To make it simple, just create KASAN page tables for the entire kernel space at kasan_init(). That doesn't use much more space, and that's anyway already done for hash platforms. Fixes: 3d4247fcc938 ("powerpc/32: Add support of KASAN_VMALLOC") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ef5248fc1f496c6b0dfdb59380f24968f25f75c5.1583513368.git.christophe.leroy@c-s.fr
2020-03-12 | ima: add a new CONFIG for loading arch-specific policies | Nayna Jain | 1 | -0/+1
Every time a new architecture defines the IMA architecture specific functions - arch_ima_get_secureboot() and arch_ima_get_policy(), the IMA include file needs to be updated. To avoid this "noise", this patch defines a new IMA Kconfig IMA_SECURE_AND_OR_TRUSTED_BOOT option, allowing the different architectures to select it. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Philipp Rudo <prudo@linux.ibm.com> (s390) Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2020-03-10 | Merge branch 'fixes' into next | Michael Ellerman | 21 | -105/+326
Merge in our fixes branch. In particular we want to merge the TM and KUAP fixes, so we can add selftests for them in next.
2020-03-06 | Merge branch 'perf/urgent' into perf/core, to pick up the latest fixes | Ingo Molnar | 17 | -99/+308
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-05 | powerpc/mm: Fix missing KUAP disable in flush_coherent_icache() | Michael Ellerman | 1 | -0/+2
Stefan reported a strange kernel fault which turned out to be due to a missing KUAP disable in flush_coherent_icache() called from flush_icache_range(). The fault looks like: Kernel attempted to access user page (7fffc30d9c00) - exploit attempt? (uid: 1009) BUG: Unable to handle kernel data access on read at 0x7fffc30d9c00 Faulting instruction address: 0xc00000000007232c Oops: Kernel access of bad area, sig: 11 [#1] LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV CPU: 35 PID: 5886 Comm: sigtramp Not tainted 5.6.0-rc2-gcc-8.2.0-00003-gfc37a1632d40 #79 NIP: c00000000007232c LR: c00000000003b7fc CTR: 0000000000000000 REGS: c000001e11093940 TRAP: 0300 Not tainted (5.6.0-rc2-gcc-8.2.0-00003-gfc37a1632d40) MSR: 900000000280b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 28000884 XER: 00000000 CFAR: c0000000000722fc DAR: 00007fffc30d9c00 DSISR: 08000000 IRQMASK: 0 GPR00: c00000000003b7fc c000001e11093bd0 c0000000023ac200 00007fffc30d9c00 GPR04: 00007fffc30d9c18 0000000000000000 c000001e11093bd4 0000000000000000 GPR08: 0000000000000000 0000000000000001 0000000000000000 c000001e1104ed80 GPR12: 0000000000000000 c000001fff6ab380 c0000000016be2d0 4000000000000000 GPR16: c000000000000000 bfffffffffffffff 0000000000000000 0000000000000000 GPR20: 00007fffc30d9c00 00007fffc30d8f58 00007fffc30d9c18 00007fffc30d9c20 GPR24: 00007fffc30d9c18 0000000000000000 c000001e11093d90 c000001e1104ed80 GPR28: c000001e11093e90 0000000000000000 c0000000023d9d18 00007fffc30d9c00 NIP flush_icache_range+0x5c/0x80 LR handle_rt_signal64+0x95c/0xc2c Call Trace: 0xc000001e11093d90 (unreliable) handle_rt_signal64+0x93c/0xc2c do_notify_resume+0x310/0x430 ret_from_except_lite+0x70/0x74 Instruction dump: 409e002c 7c0802a6 3c62ff31 3863f6a0 f8010080 48195fed 60000000 48fe4c8d 60000000 e8010080 7c0803a6 7c0004ac <7c00ffac> 7c0004ac 4c00012c 38210070 This path through handle_rt_signal64() to setup_trampoline() and flush_icache_range() is only triggered by 64-bit processes that have unmapped their VDSO, which is rare. flush_icache_range() takes a range of addresses to flush. In flush_coherent_icache() we implement an optimisation for CPUs where we know we don't actually have to flush the whole range, we just need to do a single icbi. However we still execute the icbi on the user address of the start of the range we're flushing. On CPUs that also implement KUAP (Power9) that leads to the spurious fault above. We should be able to pass any address, including a kernel address, to the icbi on these CPUs, which would avoid any interaction with KUAP. But I don't want to make that change in a bug fix, just in case it surfaces some strange behaviour on some CPU. So for now just disable KUAP around the icbi. Note the icbi is treated as a load, so we allow read access, not write as you'd expect. Fixes: 890274c2dc4c ("powerpc/64s: Implement KUAP for Radix MMU") Cc: stable@vger.kernel.org # v5.2+ Reported-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200303235708.26004-1-mpe@ellerman.id.au
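A sketch of the shape of the fix, assuming the powerpc KUAP helpers allow_read_from_user()/prevent_read_from_user() and the icbi() wrapper; the read window is opened only around the single icbi:

  if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
          mb(); /* sync */
          allow_read_from_user((const void __user *)addr, L1_CACHE_BYTES);
          icbi((void *)addr);
          prevent_read_from_user((const void __user *)addr, L1_CACHE_BYTES);
          mb(); /* sync */
          isync();
  }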
2020-03-04 | powerpc/numa: Remove late request for home node associativity | Srikar Dronamraju | 3 | -18/+0
With commit ("powerpc/numa: Early request for home node associativity"), commit 2ea626306810 ("powerpc/topology: Get topology for shared processors at boot") which was requesting home node associativity becomes redundant. Hence remove the late request for home node associativity. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135301.24739-6-srikar@linux.vnet.ibm.com
2020-03-04 | powerpc/numa: Early request for home node associativity | Srikar Dronamraju | 1 | -1/+40
Currently the kernel detects if it's running on a shared lpar platform and requests home node associativity before the scheduler sched_domains are set up. However, between the time NUMA setup is initialized and the request for home node associativity, workqueue initializes its per node cpumask. The per node workqueue possible cpumask may turn invalid after home node associativity, resulting in weird situations like the workqueue possible cpumask being a subset of the workqueue online cpumask. This can be fixed by requesting home node associativity earlier, just before NUMA setup. However, at NUMA setup time, the kernel may not be in a position to detect if it's running on a shared lpar platform. So request home node associativity and, if the request fails, fall back on the device tree property. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135301.24739-5-srikar@linux.vnet.ibm.com
2020-03-04 | powerpc/numa: Use cpu node map of first sibling thread | Srikar Dronamraju | 1 | -2/+20
All the sibling threads of a core have to be part of the same node. To ensure that all the sibling threads map to the same node, always lookup/update the cpu-to-node map of the first thread in the core. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135301.24739-4-srikar@linux.vnet.ibm.com
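A sketch of the idea using the existing cpu_first_thread_sibling() helper; the lookup-table and map/update names below are assumptions based on the powerpc NUMA code:

  /* resolve the node through the core's first thread so all siblings agree */
  int fcpu = cpu_first_thread_sibling(cpu);
  int nid = numa_cpu_lookup_table[fcpu];

  if (nid < 0)
          nid = lookup_node_for_cpu(fcpu);  /* hypothetical first-thread lookup */
  map_cpu_to_node(cpu, nid);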
2020-03-04 | powerpc/numa: Handle extra hcall_vphn error cases | Srikar Dronamraju | 1 | -9/+16
Currently the code handles the H_FUNCTION, H_SUCCESS and H_HARDWARE return codes. However, hcall_vphn can return other codes. Now it also handles the H_PARAMETER return code, and the remaining return codes are handled under the default case. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135301.24739-3-srikar@linux.vnet.ibm.com
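A sketch of the extended switch (messages and actions are illustrative):

  switch (rc) {
  case H_SUCCESS:
          break;                  /* associativity retrieved, carry on */
  case H_FUNCTION:
          pr_err_ratelimited("VPHN unsupported by the hypervisor\n");
          break;
  case H_HARDWARE:
          pr_err_ratelimited("hcall_vphn() experienced a hardware fault\n");
          break;
  case H_PARAMETER:
          pr_err_ratelimited("hcall_vphn() was passed an invalid parameter\n");
          break;
  default:
          pr_err_ratelimited("hcall_vphn() returned %ld\n", rc);
          break;
  }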
2020-03-04 | powerpc/vphn: Check for error from hcall_vphn | Srikar Dronamraju | 1 | -1/+2
There is no value in unpacking the associativity if the H_HOME_NODE_ASSOCIATIVITY hcall has returned an error. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135301.24739-2-srikar@linux.vnet.ibm.com
2020-03-04 | powerpc/smp: Use nid as fallback for package_id | Srikar Dronamraju | 2 | -3/+33
package_id is to match cores that are part of the same chip. On PowerNV machines, package_id defaults to chip_id. However, the ibm,chip_id property is not present in the device tree of PowerVM LPARs. Hence lscpu output shows one core per socket and as many sockets as there are cores. To overcome this, use nid as the package_id on PowerVM LPARs.

Before the patch:

  Architecture:        ppc64le
  Byte Order:          Little Endian
  CPU(s):              128
  On-line CPU(s) list: 0-127
  Thread(s) per core:  8
  Core(s) per socket:  1    <----------------------
  Socket(s):           16   <----------------------
  NUMA node(s):        2
  Model:               2.2 (pvr 004e 0202)
  Model name:          POWER9 (architected), altivec supported
  Hypervisor vendor:   pHyp
  Virtualization type: para
  L1d cache:           32K
  L1i cache:           32K
  L2 cache:            512K
  L3 cache:            10240K
  NUMA node0 CPU(s):   0-63
  NUMA node1 CPU(s):   64-127
  #
  # cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
  -1

After the patch:

  Architecture:        ppc64le
  Byte Order:          Little Endian
  CPU(s):              128
  On-line CPU(s) list: 0-127
  Thread(s) per core:  8    <---------------------
  Core(s) per socket:  8    <---------------------
  Socket(s):           2
  NUMA node(s):        2
  Model:               2.2 (pvr 004e 0202)
  Model name:          POWER9 (architected), altivec supported
  Hypervisor vendor:   pHyp
  Virtualization type: para
  L1d cache:           32K
  L1i cache:           32K
  L2 cache:            512K
  L3 cache:            10240K
  NUMA node0 CPU(s):   0-63
  NUMA node1 CPU(s):   64-127
  #
  # cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
  0

Now lscpu output is more in line with the system configuration. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> [mpe: Use pkg_id instead of ppid, tweak change log and comment] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200129135121.24617-1-srikar@linux.vnet.ibm.com
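A sketch of the fallback (helper name and exact guard are assumptions; the real code may derive the nid from the device tree differently):

  static int get_physical_package_id(int cpu)
  {
          int pkg_id = cpu_to_chip_id(cpu);

          /* PowerVM LPARs carry no chip id in the device tree: use the nid */
          if (pkg_id == -1 && firmware_has_feature(FW_FEATURE_LPAR))
                  pkg_id = cpu_to_node(cpu);

          return pkg_id;
  }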
2020-03-04 | powerpc/irq: Use current_stack_pointer in do_IRQ() | Christophe Leroy | 1 | -1/+1
Until commit 7306e83ccf5c ("powerpc: Don't use CURRENT_THREAD_INFO to find the stack"), the current stack base address was obtained by calling current_thread_info(). That inline function was simply masking out the value of r1. In that commit, it was changed to using current_stack_pointer() (since renamed current_stack_frame()), which is a heavier function as it is an outline assembly function which cannot be inlined and which reads the content of the stack at 0(r1). Convert to using current_stack_pointer for getting r1 and masking out its value to obtain the base address of the stack as before. Fixes: 7306e83ccf5c ("powerpc: Don't use CURRENT_THREAD_INFO to find the stack") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200220115141.2707-5-mpe@ellerman.id.au
2020-03-04 | powerpc/irq: use IS_ENABLED() in check_stack_overflow() | Christophe Leroy | 1 | -2/+3
Instead of #ifdef, use IS_ENABLED(CONFIG_DEBUG_STACKOVERFLOW). This enables GCC to check the code for validity even when the option is not selected. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200220115141.2707-4-mpe@ellerman.id.au
2020-03-04 | powerpc/irq: Use current_stack_pointer in check_stack_overflow() | Christophe Leroy | 1 | -1/+1
The purpose of check_stack_overflow() is to verify that the stack has not overflowed. To really know whether the stack pointer is still within boundaries, the check must be done directly on the value of r1. So use current_stack_pointer, which returns the current value of r1, rather than current_stack_frame() which causes a frame to be created and then returns that value. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200220115141.2707-3-mpe@ellerman.id.au
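A sketch combining this and the previous patch; the constants and message follow the usual powerpc code but are assumptions here:

  static inline void check_stack_overflow(void)
  {
          long sp;

          if (!IS_ENABLED(CONFIG_DEBUG_STACKOVERFLOW))
                  return;         /* still compiled and type-checked, then discarded */

          /* use r1 itself, not the address of a freshly created frame */
          sp = current_stack_pointer & (THREAD_SIZE - 1);

          /* check for stack overflow: is there less than 2KB free? */
          if (unlikely(sp < 2048)) {
                  pr_err("do_IRQ: stack overflow: %ld\n", sp);
                  dump_stack();
          }
  }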
2020-03-04 | powerpc: Add current_stack_pointer as a register global | Christophe Leroy | 1 | -0/+2
current_stack_frame() doesn't return the stack pointer, but the caller's stack frame. See commit bfe9a2cfe91a ("powerpc: Reimplement __get_SP() as a function not a define") and commit acf620ecf56c ("powerpc: Rename __get_SP() to current_stack_pointer()") for details. In some cases this is overkill or incorrect, as it doesn't return the current value of r1. So add a current_stack_pointer register global to get the value of r1 directly. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Split out of other patch, tweak change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200220115141.2707-2-mpe@ellerman.id.au
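A minimal sketch of such a register global (where the declaration lives is an assumption):

  /* r1 is the stack pointer on powerpc; expose its live value directly */
  register unsigned long current_stack_pointer asm("r1");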
2020-03-04 | powerpc: Rename current_stack_pointer() to current_stack_frame() | Michael Ellerman | 6 | -10/+10
current_stack_pointer(), which was called __get_SP(), used to just return the value in r1. But that caused problems in some cases, so it was turned into a function in commit bfe9a2cfe91a ("powerpc: Reimplement __get_SP() as a function not a define"). Because it's a function in a separate compilation unit to all its callers, it has the effect of causing a stack frame to be created, and then returns the address of that frame. This is good in some cases like those described in the above commit, but in other cases it's overkill, we just need to know what stack page we're on. On some other arches current_stack_pointer is just a register global giving the stack pointer, and we'd like to do that too. So rename our current_stack_pointer() to current_stack_frame() to make that possible. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Link: https://lore.kernel.org/r/20200220115141.2707-1-mpe@ellerman.id.au
2020-03-04 | powerpc/kernel/sysfs: Add new config option PMU_SYSFS to enable PMU SPRs sysfs file creation | Kajol Jain | 2 | -0/+12
Many of the performance monitoring unit (PMU) SPRs are exposed in sysfs. This may not be desirable since the "perf" API is the primary interface to program the PMU and collect counter data in the system. But that said, we can't remove these sysfs files since we don't know whether anyone/anything is using them. So the patch adds a new CONFIG option 'CONFIG_PMU_SYSFS' (user selectable) to be used in sysfs file creation for PMU SPRs. The new option is disabled by default, but can be enabled if the user needs it. The patch behaviour was tested on powernv and pseries machines, and also with pmac32_defconfig. Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Tested-by: Nageswara R Sastry <nasastry@in.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200214080606.26872-2-kjain@linux.ibm.com
2020-03-04 | powerpc/kernel/sysfs: Refactor current sysfs.c | Madhavan Srinivasan | 1 | -175/+200
An attempt to refactor the current sysfs.c file. To start with, a big chunk of macro #defines and dscr functions are moved to the start of the file. Secondly, HAS_ #define macros are cleaned up based on CONFIG_ options. Finally, new HAS_ macros are added: 1. HAS_PPC_PA6T (for PA6T) to separate out non-PMU SPRs. 2. HAS_PPC_PMC56 to separate out PMC SPRs from HAS_PPC_PMC_CLASSIC, which come under CONFIG_PPC64. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200214080606.26872-1-kjain@linux.ibm.com
2020-03-04 | powerpc/powernv: Add explicit fast-reboot support | Oliver O'Halloran | 2 | -0/+3
Add a way to manually invoke a fast-reboot rather than setting the NVRAM flag. The idea is to allow userspace to invoke a fast-reboot using the optional string argument to the reboot() system call, or using the xmon zr command, so we don't need to leave around a persistent change on a system to use the feature. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200217024833.30580-2-oohall@gmail.com
2020-03-04 | powerpc/powernv: Treat an empty reboot string as default | Oliver O'Halloran | 1 | -1/+1
Treat an empty reboot cmd string the same as a NULL string. This squashes a spurious unsupported reboot message that sometimes gets printed when using xmon. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200217024833.30580-1-oohall@gmail.com
2020-03-04 | powerpc/Makefile: Mark phony targets as PHONY | Michael Ellerman | 2 | -0/+8
Some of our phony targets are not marked as such. This can lead to confusing errors, eg:

  $ make clean
  $ touch install
  $ make install
  make: 'install' is up to date.
  $

Fix it by adding them to the PHONY variable which is marked phony in the top-level Makefile, or in scripts/Makefile.build for the boot Makefile. Suggested-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200219000434.15872-1-mpe@ellerman.id.au
2020-03-04 | powerpc/mm: Don't kmap_atomic() in pte_offset_map() on PPC32 | Christophe Leroy | 2 | -8/+4
On PPC32, pte_offset_map() does a kmap_atomic() in order to support page tables allocated in high memory, just like ARM and x86/32. But since at least 2008 and commit 8054a3428fbe ("powerpc: Remove dead CONFIG_HIGHPTE"), page tables are never allocated in high memory. When the page is in low mem, kmap_atomic() just returns the page address but still disables preemption and page faults. And it is not an inlined function, so we suffer a function call for no reason. Make pte_offset_map() the same as pte_offset_kernel() and make pte_unmap() a no-op, in the same way as PPC64 which doesn't have HIGHMEM. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/03c97f0f6b3790d164822563be80f2fd4713a955.1581932480.git.christophe.leroy@c-s.fr
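A sketch of the resulting PPC32 definitions (macro form assumed):

  /* without highmem page tables, mapping a PTE is plain pointer arithmetic */
  #define pte_offset_map(dir, addr)       pte_offset_kernel((dir), (addr))
  #define pte_unmap(pte)                  do { } while (0)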
2020-03-04 | powerpc/book3s64: Fix error handling in mm_iommu_do_alloc() | Alexey Kardashevskiy | 1 | -18/+21
The last jump to free_exit in mm_iommu_do_alloc() happens after page pointers in struct mm_iommu_table_group_mem_t were already converted to physical addresses. Thus calling put_page() on these physical addresses will likely crash. This moves the loop which calculates the pageshift and converts page struct pointers to physical addresses later after the point when we cannot fail; thus eliminating the need to convert pointers back. Fixes: eb9d7a62c386 ("powerpc/mm_iommu: Fix potential deadlock") Reported-by: Jan Kara <jack@suse.cz> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191223060351.26359-1-aik@ozlabs.ru
2020-03-04 | powerpc/powernv: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 4 | -63/+10
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-6-gregkh@linuxfoundation.org
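The pattern, as a hedged sketch (file name, parent dentry and fops are illustrative); the same shape applies to the similar cleanups below:

  struct dentry *de;

  /* before: error handling that can never usefully change behaviour */
  de = debugfs_create_file("memtrace", 0600, powerpc_debugfs_root,
                           NULL, &memtrace_fops);
  if (!de)
          pr_warn("could not create debugfs entry\n");

  /* after: just create it and move on */
  debugfs_create_file("memtrace", 0600, powerpc_debugfs_root,
                      NULL, &memtrace_fops);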
2020-03-04 | powerpc/cell/axon_msi: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 1 | -5/+1
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-5-gregkh@linuxfoundation.org
2020-03-04 | powerpc/mm: ptdump: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 4 | -20/+11
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-4-gregkh@linuxfoundation.org
2020-03-04 | powerpc/mm: book3s64: hash_utils: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 1 | -5/+2
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-3-gregkh@linuxfoundation.org
2020-03-04 | powerpc/kvm: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 5 | -29/+10
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Because of this cleanup, we get to remove a few fields in struct kvm_arch that are now unused. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [mpe: Fix build error in kvm/timing.c, adapt kvmppc_remove_cpu_debugfs()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-2-gregkh@linuxfoundation.org
2020-03-04 | powerpc/kernel: no need to check return value of debugfs_create functions | Greg Kroah-Hartman | 3 | -29/+9
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200209105901.1620958-1-gregkh@linuxfoundation.org
2020-03-04 | powerpc/83xx: Add some error handling in 'quirk_mpc8360e_qe_enet10()' | Christophe JAILLET | 1 | -1/+4
In some error handling paths, we should call "of_node_put(np_par)", otherwise a resource may leak in case of error. Fixes: 8159df72d43e ("83xx: add support for the kmeter1 board.") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200208140920.7652-1-christophe.jaillet@wanadoo.fr
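A sketch of the added error handling, with the flow condensed and the messages illustrative:

  np_par = of_find_node_by_name(NULL, "par_io");
  if (np_par == NULL) {
          pr_warn("%s: couldn't find par_io node\n", __func__);
          return;
  }
  ret = of_address_to_resource(np_par, 0, &res);
  if (ret) {
          pr_warn("%s: couldn't map par_io registers\n", __func__);
          goto out;               /* previously returned without the put */
  }
  /* ... fix up the pins ... */
  out:
          of_node_put(np_par);    /* balance of_find_node_by_name() */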
2020-03-04 | powerpc/83xx: Fix some typo in some warning message | Christophe JAILLET | 1 | -2/+2
"couldn;t" should be "couldn't". Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200208140904.7521-1-christophe.jaillet@wanadoo.fr
2020-02-28 | powerpc: fix hardware PMU exception bug on PowerVM compatibility mode systems | Desnes A. Nunes do Rosario | 1 | -1/+3
PowerVM systems running compatibility mode on a few Power8 revisions are still vulnerable to the hardware defect that loses PMU exceptions arriving prior to a context switch. The software fix for this issue is enabled through the CPU_FTR_PMAO_BUG cpu_feature bit, nevertheless this bit also needs to be set for PowerVM compatibility mode systems. Fixes: 68f2f0d431d9ea4 ("powerpc: Add a cpu feature CPU_FTR_PMAO_BUG") Signed-off-by: Desnes A. Nunes do Rosario <desnesn@linux.ibm.com> Reviewed-by: Leonardo Bras <leonardo@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200227134715.9715-1-desnesn@linux.ibm.com
2020-02-26 | powerpc/32: drop get_pteptr() | Christophe Leroy | 3 | -42/+7
Commit 8d30c14cab30 ("powerpc/mm: Rework I$/D$ coherency (v3)") and commit 90ac19a8b21b ("[POWERPC] Abolish iopa(), mm_ptov(), io_block_mapping() from arch/powerpc") removed the use of get_pteptr() outside of mm/pgtable_32.c. In mm/pgtable_32.c, the only user of get_pteptr() is change_page_attr() which operates on kernel context and on lowmem pages only. Make virt_to_kpte() available outside of mm/mem.c, use it instead of get_pteptr(), and drop get_pteptr(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/788378c6c3ba5c5298caab7c7f95e6c3c88244b8.1578558199.git.christophe.leroy@c-s.fr
2020-02-26 | powerpc/32: refactor pmd_offset(pud_offset(pgd_offset... | Christophe Leroy | 7 | -12/+23
At several places the pmd pointer is retrieved through the same action: pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr); or pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr); Refactor this by implementing two helpers, pmd_ptr() and pmd_ptr_k(). This will help when adding the p4d level. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7b065c5be35726af4066cab238ee35cabceda1fa.1578558199.git.christophe.leroy@c-s.fr
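A sketch of the two helpers (32-bit powerpc has no separate p4d level yet):

  static inline pmd_t *pmd_ptr(struct mm_struct *mm, unsigned long va)
  {
          return pmd_offset(pud_offset(pgd_offset(mm, va), va), va);
  }

  static inline pmd_t *pmd_ptr_k(unsigned long va)
  {
          return pmd_offset(pud_offset(pgd_offset_k(va), va), va);
  }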
2020-02-26 | powerpc/32: don't restore r0, r6-r8 on exception entry path after trace_hardirqs_off() | Christophe Leroy | 1 | -8/+3
Since commit b86fb88855ea ("powerpc/32: implement fast entry for syscalls on non BOOKE") and commit 1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE"), syscalls don't use the exception entry path anymore. It is therefore pointless to restore r0 and r6-r8 after calling trace_hardirqs_off(). In the meantime, drop the '2:' label which is unused and misleading. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d2c6dc65d27e83964eb05f16a126161ab6455eea.1578388585.git.christophe.leroy@c-s.fr