path: root/arch/powerpc
Age | Commit message | Author | Files | Lines
12 days | powerpc/pseries/cmm: adjust BALLOON_MIGRATE when migrating pages | David Hildenbrand | 1 file, -0/+1
[ Upstream commit 0da2ba35c0d532ca0fe7af698b17d74c4d084b9a ] Let's properly adjust BALLOON_MIGRATE like the other drivers. Note that the INFLATE/DEFLATE events are triggered from the core when enqueueing/dequeueing pages. This was found by code inspection. Link: https://lkml.kernel.org/r/20251021100606.148294-3-david@redhat.com Fixes: fe030c9b85e6 ("powerpc/pseries/cmm: Implement balloon compaction") Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
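For illustration, a minimal hedged sketch of the accounting this class of fix adds, mirroring what virtio_balloon's migration handler does; the callback shape below is an assumption, only the __count_vm_event(BALLOON_MIGRATE) line is the point:

    /* sketch of a balloon driver's migratepage callback (hypothetical shape) */
    static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
                               struct page *newpage, struct page *page,
                               enum migrate_mode mode)
    {
            /* ... move the ballooned PFN from 'page' to 'newpage' ... */

            /*
             * The core only counts BALLOON_INFLATE/BALLOON_DEFLATE when
             * pages are enqueued/dequeued, so the driver must bump the
             * migrate counter itself to keep /proc/vmstat accurate.
             */
            __count_vm_event(BALLOON_MIGRATE);
            return MIGRATEPAGE_SUCCESS;
    }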
12 days | mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() | David Hildenbrand | 1 file, -1/+1
[ Upstream commit 15504b1163007bbfbd9a63460d5c14737c16e96d ] Let's move the removal of the page from the balloon list into the single caller, to remove the dependency on the PG_isolated flag and clarify locking requirements. Note that for now, balloon_page_delete() was used on two paths: (1) Removing a page from the balloon for deflation through balloon_page_list_dequeue() (2) Removing an isolated page from the balloon for migration in the per-driver migration handlers. Isolated pages were already removed from the balloon list during isolation. So instead of relying on the flag, we can just distinguish both cases directly and handle it accordingly in the caller. We'll shuffle the operations a bit such that they logically make more sense (e.g., remove from the list before clearing flags). In balloon migration functions we can now move the balloon_page_finalize() out of the balloon lock and perform the finalization just before dropping the balloon reference. Document that the page lock is currently required when modifying the movability aspects of a page; hopefully we can soon decouple this from the page lock. Link: https://lkml.kernel.org/r/20250704102524.326966-3-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Brendan Jackman <jackmanb@google.com> Cc: Byungchul Park <byungchul@sk.com> Cc: Chengming Zhou <chengming.zhou@linux.dev> Cc: Christian Brauner <brauner@kernel.org> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Eugenio Pérez <eperezma@redhat.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Gregory Price <gourry@gourry.net> Cc: Harry Yoo <harry.yoo@oracle.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Jan Kara <jack@suse.cz> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jason Wang <jasowang@redhat.com> Cc: Jerrin Shaji George <jerrin.shaji-george@broadcom.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Mathew Brost <matthew.brost@intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Naoya Horiguchi <nao.horiguchi@gmail.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Peter Xu <peterx@redhat.com> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Rakie Kim <rakie.kim@sk.com> Cc: Rik van Riel <riel@surriel.com> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Cc: xu xin <xu.xin16@zte.com.cn> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 0da2ba35c0d5 ("powerpc/pseries/cmm: adjust BALLOON_MIGRATE when migrating pages") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | powerpc/64s/radix/kfence: map __kfence_pool at page granularity | Hari Bathini | 3 files, -5/+93
commit 353d7a84c214f184d5a6b62acdec8b4424159b7c upstream. When KFENCE is enabled, total system memory is mapped at page level granularity. But in radix MMU mode, ~3GB additional memory is needed to map 100GB of system memory at page level granularity when compared to using 2MB direct mapping. This is not desired considering KFENCE is designed to be enabled in production kernels [1]. Mapping only the memory allocated for the KFENCE pool at page granularity is sufficient to enable KFENCE support. So, allocate __kfence_pool during bootup and map it at page granularity instead of mapping all system memory at page granularity.

Without patch:
    # cat /proc/meminfo
    MemTotal:       101201920 kB

With patch:
    # cat /proc/meminfo
    MemTotal:       104483904 kB

Note that enabling KFENCE at runtime is disabled for radix MMU for now, as it depends on the ability to split page table mappings and such APIs are not currently implemented for radix MMU. All kfence_test.c testcases passed with this patch. [1] https://lore.kernel.org/all/20201103175841.3495947-2-elver@google.com/ Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240701130021.578240-1-hbathini@linux.ibm.com Cc: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | powerpc/pseries/cmm: call balloon_devinfo_init() also without CONFIG_BALLOON_COMPACTION | David Hildenbrand | 1 file, -1/+1
commit fc6bcf9ac4de76f5e7bcd020b3c0a86faff3f2d5 upstream. Patch series "powerpc/pseries/cmm: two smaller fixes". Two smaller fixes identified while doing a bigger rework. This patch (of 2): We always have to initialize the balloon_dev_info, even when compaction is not configured in: otherwise the containing list and the lock are left uninitialized. Likely not many such configs exist in practice, but let's CC stable to be sure. This was found by code inspection. Link: https://lkml.kernel.org/r/20251021100606.148294-1-david@redhat.com Link: https://lkml.kernel.org/r/20251021100606.148294-2-david@redhat.com Fixes: fe030c9b85e6 ("powerpc/pseries/cmm: Implement balloon compaction") Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | powerpc/64s/slb: Fix SLB multihit issue during SLB preload | Donet Tom | 5 files, -98/+0
commit 00312419f0863964625d6dcda8183f96849412c6 upstream. On systems using the hash MMU, there is a software SLB preload cache that mirrors the entries loaded into the hardware SLB buffer. This preload cache is subject to periodic eviction — typically after every 256 context switches — to remove old entries. To optimize performance, the kernel skips switch_mmu_context() in switch_mm_irqs_off() when the prev and next mm_struct are the same. However, on hash MMU systems, this can lead to inconsistencies between the hardware SLB and the software preload cache. If an SLB entry for a process is evicted from the software cache on one CPU, and the same process later runs on another CPU without executing switch_mmu_context(), the hardware SLB may retain stale entries. If the kernel then attempts to reload that entry, it can trigger an SLB multi-hit error. The following timeline shows how stale SLB entries are created and can cause a multi-hit error when a process moves between CPUs without an MMU context switch:

    CPU 0                                CPU 1
    -----                                -----
    Process P                            swapper/1
    exec
    load_elf_binary
    begin_new_exc
    activate_mm
      switch_mm_irqs_off
        switch_mmu_context
          switch_slb
          /* This invalidates all the entries in the HW and setup the
           * new HW SLB entries as per the preload cache. */
    context_switch
    sched_migrate_task migrates process P to cpu-1

                                         Process swapper/0 context switch (to process P)
                                         (uses mm_struct of Process P)
                                         switch_mm_irqs_off()
                                           switch_slb
                                             load_slb++
                                             /* load_slb becomes 0 here and we evict an
                                              * entry from the preload cache with
                                              * preload_age(). We still keep HW SLB and
                                              * preload cache in sync, that is because
                                              * all HW SLB entries anyways gets evicted
                                              * in switch_slb during SLBIA. We then only
                                              * add those entries back in HW SLB, which
                                              * are currently present in preload_cache
                                              * (after eviction). */
                                         load_elf_binary continues...
                                         setup_new_exec()
                                           slb_setup_new_exec()

                                         sched_switch event
                                         sched_migrate_task migrates process P to cpu-0

    context_switch from swapper/0 to Process P
    switch_mm_irqs_off()
    /* Since both prev and next mm struct are same we don't call
     * switch_mmu_context(). This will cause the HW SLB and SW preload
     * cache to go out of sync in preload_new_slb_context. Because there
     * was an SLB entry which was evicted from both HW and preload cache
     * on cpu-1. Now later in preload_new_slb_context(), when we will try
     * to add the same preload entry again, we will add this to the SW
     * preload cache and then will add it to the HW SLB. Since on cpu-0
     * this entry was never invalidated, hence adding this entry to the HW
     * SLB will cause a SLB multi-hit error. */
    load_elf_binary continues...
    START_THREAD
      start_thread
        preload_new_slb_context
        /* This tries to add a new EA to preload cache which was earlier
         * evicted from both cpu-1 HW SLB and preload cache. This caused the
         * HW SLB of cpu-0 to go out of sync with the SW preload cache. The
         * reason for this was, that when we context switched back on CPU-0,
         * we should have ideally called switch_mmu_context() which will
         * bring the HW SLB entries on CPU-0 in sync with SW preload cache
         * entries by setting up the mmu context properly. But we didn't do
         * that since the prev mm_struct running on cpu-0 was same as the
         * next mm_struct (which is true for swapper / kernel threads). So
         * now when we try to add this new entry into the HW SLB of cpu-0,
         * we hit a SLB multi-hit error. */

    WARNING: CPU: 0 PID: 1810970 at arch/powerpc/mm/book3s64/slb.c:62 assert_slb_presence+0x2c/0x50
    Modules linked in:
    CPU: 0 UID: 0 PID: 1810970 Comm: dd Not tainted 6.16.0-rc3-dirty #12 VOLUNTARY
    Hardware name: IBM pSeries (emulated by qemu) POWER8 (architected) 0x4d0200 0xf000004 of:SLOF,HEAD hv:linux,kvm pSeries
    NIP: c00000000015426c LR: c0000000001543b4 CTR: 0000000000000000
    REGS: c0000000497c77e0 TRAP: 0700 Not tainted (6.16.0-rc3-dirty)
    MSR: 8000000002823033 <SF,VEC,VSX,FP,ME,IR,DR,RI,LE> CR: 28888482 XER: 00000000
    CFAR: c0000000001543b0 IRQMASK: 3
    <...>
    NIP [c00000000015426c] assert_slb_presence+0x2c/0x50
    LR [c0000000001543b4] slb_insert_entry+0x124/0x390
    Call Trace:
      0x7fffceb5ffff (unreliable)
      preload_new_slb_context+0x100/0x1a0
      start_thread+0x26c/0x420
      load_elf_binary+0x1b04/0x1c40
      bprm_execve+0x358/0x680
      do_execveat_common+0x1f8/0x240
      sys_execve+0x58/0x70
      system_call_exception+0x114/0x300
      system_call_common+0x160/0x2c4

From the above analysis, during early exec the hardware SLB is cleared, and entries from the software preload cache are reloaded into hardware by switch_slb. However, preload_new_slb_context and slb_setup_new_exec also attempt to load some of the same entries, which can trigger a multi-hit. In most cases, these additional preloads simply hit existing entries and add nothing new. Removing these functions avoids redundant preloads and eliminates the multi-hit issue. This patch removes these two functions. We tested process switching performance using the context_switch benchmark on POWER9/hash, and observed no regression. Without this patch: 129041 ops/sec. With this patch: 129341 ops/sec. We also measured SLB faults during boot, and the counts are essentially the same with and without this patch. SLB faults without this patch: 19727. SLB faults with this patch: 19786. Fixes: 5434ae74629a ("powerpc/64s/hash: Add a SLB preload cache") cc: stable@vger.kernel.org Suggested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/0ac694ae683494fe8cadbd911a1a5018d5d3c541.1761834163.git.ritesh.list@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | powerpc, mm: Fix mprotect on book3s 32-bit | Dave Vasilevsky | 2 files, -1/+13
commit 78fc63ffa7813e33681839bb33826c24195f0eb7 upstream. On 32-bit book3s with hash-MMUs, tlb_flush() was a no-op. This was unnoticed because all uses until recently were for unmaps, and thus handled by __tlb_remove_tlb_entry(). After commit 4a18419f71cd ("mm/mprotect: use mmu_gather") in kernel 5.19, tlb_gather_mmu() started being used for mprotect as well. This caused mprotect to simply not work on these machines:

    int *ptr = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
    *ptr = 1;                       // force HPTE to be created
    mprotect(ptr, 4096, PROT_READ);
    *ptr = 2;                       // should segfault, but succeeds

Fixed by making tlb_flush() actually flush TLB pages. This finally agrees with the behaviour of book3s64's tlb_flush(). Fixes: 4a18419f71cd ("mm/mprotect: use mmu_gather") Cc: stable@vger.kernel.org Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Dave Vasilevsky <dave@vasilevsky.ca> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20251116-vasi-mprotect-g3-v3-1-59a9bd33ba00@vasilevsky.ca Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
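A hedged sketch of the shape of the fix; the exact header and helper the patch touches are assumptions, with the generic powerpc flush_tlb_mm() standing in for the real flush:

    /* book3s/32 hash: flush the gathered mm when the mmu_gather is torn
     * down, so mprotect()'s permission changes actually reach the hash
     * table/TLB instead of being silently dropped by a no-op tlb_flush(). */
    static inline void tlb_flush(struct mmu_gather *tlb)
    {
            flush_tlb_mm(tlb->mm);
    }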
12 days | powerpc/kexec: Enable SMT before waking offline CPUs | Nysal Jan K.A. | 1 file, -0/+19
commit c2296a1e42418556efbeb5636c4fa6aa6106713a upstream. If SMT is disabled or a partial SMT state is enabled, when a new kernel image is loaded for kexec, on reboot the following warning is observed:

    kexec: Waking offline cpu 228.
    WARNING: CPU: 0 PID: 9062 at arch/powerpc/kexec/core_64.c:223 kexec_prepare_cpus+0x1b0/0x1bc
    [snip]
    NIP kexec_prepare_cpus+0x1b0/0x1bc
    LR  kexec_prepare_cpus+0x1a0/0x1bc
    Call Trace:
      kexec_prepare_cpus+0x1a0/0x1bc (unreliable)
      default_machine_kexec+0x160/0x19c
      machine_kexec+0x80/0x88
      kernel_kexec+0xd0/0x118
      __do_sys_reboot+0x210/0x2c4
      system_call_exception+0x124/0x320
      system_call_vectored_common+0x15c/0x2ec

This occurs as add_cpu() fails due to cpu_bootable() returning false for CPUs that fail the cpu_smt_thread_allowed() check, or for non-primary threads if SMT is disabled. Fix the issue by enabling SMT and resetting the number of SMT threads to the number of threads per core, before attempting to wake up all present CPUs. Fixes: 38253464bc82 ("cpu/SMT: Create topology_smt_thread_allowed()") Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com> Cc: stable@vger.kernel.org # v6.6+ Reviewed-by: Srikar Dronamraju <srikar@linux.ibm.com> Signed-off-by: Nysal Jan K.A. <nysal@linux.ibm.com> Tested-by: Samir M <samir@linux.ibm.com> Reviewed-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20251028105516.26258-1-nysal@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | powerpc/addnote: Fix overflow on 32-bit builds | Ben Collins | 1 file, -3/+4
[ Upstream commit 825ce89a3ef17f84cf2c0eacfa6b8dc9fd11d13f ] The PUT_64[LB]E() macros need to cast the value to unsigned long long like the GET_64[LB]E() macros. Caused lots of warnings when compiled on 32-bit, and clobbered addresses (36-bit P4080). Signed-off-by: Ben Collins <bcollins@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/2025042122-mustard-wrasse-694572@boujee-and-buff Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | powerpc/64s/ptdump: Fix kernel_hash_pagetable dump for ISA v3.00 HPTE format | Ritesh Harjani (IBM) | 1 file, -0/+6
[ Upstream commit eae40a6da63faa9fb63ff61f8fa2b3b57da78a84 ] The HPTE format changed from Power9 (ISA 3.0) onwards. While dumping kernel hash page tables, nothing gets printed on powernv P9+. This patch utilizes the helpers added in the patch tagged in Fixes: to convert the new format to the old format and dump the HPTEs. This fix is only needed for native_find() (powernv), since pseries continues to work fine with the old format. Fixes: 6b243fcfb5f1e ("powerpc/64: Simplify adaptation to new ISA v3.00 HPTE format") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/4c2bb9e5b3cfbc0dd80b61b67cdd3ccfc632684c.1761834163.git.ritesh.list@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | powerpc/64s/hash: Restrict stress_hpt_struct memblock region to within RMA limit | Ritesh Harjani (IBM) | 1 file, -4/+6
[ Upstream commit 17b45ccf09882e0c808ad2cf62acdc90ad968746 ] When HV=0 & IR/DR=0, the Hash MMU is said to be in Virtual Real Addressing Mode during early boot. During this, we should ensure that memory region allocations for stress_hpt_struct happen from within the RMA region, as otherwise the boot might get stuck while doing memset of this region. The history behind why we have the RMA region limitation is better explained in these two patches [1] & [2]. This patch ensures that the memset to stress_hpt_struct during early boot does not cross the ppc64_rma_size boundary. [1]: https://lore.kernel.org/all/20190710052018.14628-1-sjitindarsingh@gmail.com/ [2]: https://lore.kernel.org/all/87wp54usvj.fsf@linux.vnet.ibm.com/ Fixes: 6b34a099faa12 ("powerpc/64s/hash: add stress_hpt kernel boot option to increase hash faults") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/ada1173933ea7617a994d6ee3e54ced8797339fc.1761834163.git.ritesh.list@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | powerpc/32: Fix unpaired stwcx. on interrupt exit | Christophe Leroy | 1 file, -6/+4
[ Upstream commit 10e1c77c3636d815db802ceef588522c2d2d947c ] Commit b96bae3ae2cb ("powerpc/32: Replace ASM exception exit by C exception exit from ppc64") erroneously copied to powerpc/32 the logic from powerpc/64 based on the CPU_FTR_STCX_CHECKS_ADDRESS feature, which is always 0 on powerpc/32. Re-instate the logic implemented by commit b64f87c16f3c ("[POWERPC] Avoid unpaired stwcx. on some processors"), which is based on the CPU_FTR_NEED_PAIRED_STWCX feature. Fixes: b96bae3ae2cb ("powerpc/32: Replace ASM exception exit by C exception exit from ppc64") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/6040b5dbcf5cdaa1cd919fcf0790f12974ea6e5a.1757666244.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-24 | powerpc/eeh: Use result of error_detected() in uevent | Niklas Schnelle | 1 file, -1/+1
[ Upstream commit 704e5dd1c02371dfc7d22e1520102b197a3b628b ] Ever since uevent support was added for AER and EEH with commit 856e1eb9bdd4 ("PCI/AER: Add uevents in AER and EEH error/resume"), it reported PCI_ERS_RESULT_NONE as uevent when recovery begins. Commit 7b42d97e99d3 ("PCI/ERR: Always report current recovery status for udev") subsequently amended AER to report the actual return value of error_detected(). Make the same change to EEH to align it with AER and s390. Suggested-by: Lukas Wunner <lukas@wunner.de> Link: https://lore.kernel.org/linux-pci/aIp6LiKJor9KLVpv@wunner.de/ Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Acked-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Link: https://patch.msgid.link/20250807-add_err_uevents-v5-3-adf85b0620b0@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 | arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c | Menglong Dong | 1 file, -0/+1
[ Upstream commit 35561bab768977c9e05f1f1a9bc00134c85f3e28 ] The include/generated/asm-offsets.h is generated by Kbuild during compilation from arch/SRCARCH/kernel/asm-offsets.c. When we want to generate another similar offsets header file, a circular dependency can arise. For example, say we want to generate an offsets file include/generated/test.h, which is included in include/sched/sched.h. If we generate asm-offsets.h first, it will fail, as include/sched/sched.h is included in asm-offsets.c and include/generated/test.h doesn't exist yet; if we generate test.h first, it can't succeed either, as include/generated/asm-offsets.h is included by it. On x86_64, the macro COMPILE_OFFSETS is used to avoid such circular dependencies: we can generate asm-offsets.h first, and if COMPILE_OFFSETS is defined, we don't include "generated/test.h". Define the macro COMPILE_OFFSETS in all the asm-offsets.c files for this purpose. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
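A minimal sketch of the guard described above; include/generated/test.h is the hypothetical second generated header from the commit message's own example:

    /* arch/<arch>/kernel/asm-offsets.c: define the guard before any include */
    #define COMPILE_OFFSETS
    #include <linux/sched.h>

    /* in the header that wants the second generated file: */
    #ifndef COMPILE_OFFSETS
    #include <generated/test.h>   /* skipped while asm-offsets.c itself builds */
    #endif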
2025-10-29 | powerpc/32: Remove PAGE_KERNEL_TEXT to fix startup failure | Christophe Leroy | 3 files, -15/+3
[ Upstream commit 9316512b717f6f25c4649b3fdb0a905b6a318e9f ] PAGE_KERNEL_TEXT is an old macro that is used to tell the kernel whether kernel text has to be mapped read-only or read-write, based on build time options. But nowadays, with functionalities like jump_labels, static links, etc., more or less all kernels need to be read-write at some point, and some combinations of configs failed to work due to inaccurate setting of PAGE_KERNEL_TEXT. On the other hand, today we have CONFIG_STRICT_KERNEL_RWX, which implements a more controlled access to kernel modifications. Instead of trying to keep PAGE_KERNEL_TEXT accurate with all possible options that may imply kernel text modification, always set kernel text read-write at startup and rely on CONFIG_STRICT_KERNEL_RWX to provide accurate protection. Do this by passing PAGE_KERNEL_X to map_kernel_page() in __mapin_ram_chunk() instead of passing PAGE_KERNEL_TEXT. Once this is done, the only remaining user of PAGE_KERNEL_TEXT is mmu_mark_initmem_nx(), which uses it in a call to setibat(). As setibat() ignores the RW/RO, we can seamlessly replace PAGE_KERNEL_TEXT by PAGE_KERNEL_X here as well and get rid of PAGE_KERNEL_TEXT completely. Reported-by: Erhard Furtner <erhard_f@mailbox.org> Closes: https://lore.kernel.org/all/342b4120-911c-4723-82ec-d8c9b03a8aef@mailbox.org/ Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Tested-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/8e2d793abf87ae3efb8f6dce10f974ac0eda61b8.1757412205.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-10-19 | powerpc/pseries/msi: Fix potential underflow and leak issue | Nam Cao | 1 file, -1/+1
commit 3443ff3be6e59b80d74036bb39f5b6409eb23cc9 upstream. pseries_irq_domain_alloc() allocates interrupts at the parent's interrupt domain. If it fails partway through, all interrupts allocated so far are freed. The number of successfully allocated interrupts is stored in "i". However, "i - 1" interrupts are freed. This is broken:
 - One interrupt is not freed
 - If "i" is zero, "i - 1" wraps around
Correct the number of freed interrupts to "i". Fixes: a5f3d2c17b07 ("powerpc/pseries/pci: Add MSI domains") Signed-off-by: Nam Cao <namcao@linutronix.de> Cc: stable@vger.kernel.org Reviewed-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/a980067f2b256bf716b4cd713bc1095966eed8cd.1754300646.git.namcao@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-10-19 | powerpc/powernv/pci: Fix underflow and leak issue | Nam Cao | 1 file, -1/+1
commit a39087905af9ffecaa237a918a2c03a04e479934 upstream. pnv_irq_domain_alloc() allocates interrupts at the parent's interrupt domain. If it fails partway through, all interrupts allocated so far are freed. The number of successfully allocated interrupts is stored in "i". However, "i - 1" interrupts are freed. This is broken:
 - One interrupt is not freed
 - If "i" is zero, "i - 1" wraps around
Correct the number of freed interrupts to "i". Fixes: 0fcfe2247e75 ("powerpc/powernv/pci: Add MSI domains") Signed-off-by: Nam Cao <namcao@linutronix.de> Cc: stable@vger.kernel.org Reviewed-by: Cédric Le Goater <clg@redhat.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/70f8debe8688e0b467367db769b71c20146a836d.1754300646.git.namcao@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
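This and the pseries fix above correct the same cleanup pattern; a hedged sketch using the generic irq-domain helpers (the drivers' actual allocation calls differ):

    int i, ret;

    for (i = 0; i < nr_irqs; i++) {
            ret = irq_domain_alloc_irqs_parent(domain, virq + i, 1, &info);
            if (ret)
                    goto free;
    }
    return 0;

free:
    /*
     * Exactly "i" interrupts were allocated before the failure, so free
     * exactly "i". The buggy code freed "i - 1": one interrupt leaked,
     * and with i == 0 the count wrapped around.
     */
    irq_domain_free_irqs_parent(domain, virq, i);
    return ret;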
2025-09-04 | powerpc/kvm: Fix ifdef to remove build warning | Madhavan Srinivasan | 1 file, -4/+4
[ Upstream commit 88688a2c8ac6c8036d983ad8b34ce191c46a10aa ] When compiling for a pseries or powernv defconfig with "make C=1", these warnings were reported by the sparse tool in arch/powerpc/kernel/kvm.c:

    arch/powerpc/kernel/kvm.c:635:9: warning: switch with no cases
    arch/powerpc/kernel/kvm.c:646:9: warning: switch with no cases

Currently the #ifdefs sit inside the switch statements, guarding the cases specific to BOOKE and PPC_BOOK3S_32; since neither is enabled in the pseries/powernv defconfigs, the switches are left with no cases. Fix it by moving the #ifdef before the switch(){}. Fixes: cbe487fac7fc0 ("KVM: PPC: Add mtsrin PV code") Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250518044107.39928-1-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-28 | powerpc/boot: Fix build with gcc 15 | Michal Suchanek | 1 file, -0/+1
commit 5a821e2d69e26b51b7f3740b6b0c3462b8cacaff upstream. Similar to x86 the ppc boot code does not build with GCC 15. Copy the fix from commit ee2ab467bddf ("x86/boot: Use '-std=gnu11' to fix build with GCC 15") Signed-off-by: Michal Suchanek <msuchanek@suse.de> Tested-by: Amit Machhiwal <amachhiw@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250331105722.19709-1-msuchanek@suse.de Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-08-28 | powerpc: floppy: Add missing checks after DMA map | Thomas Fourier | 1 file, -1/+4
[ Upstream commit cf183c1730f2634245da35e9b5d53381b787d112 ] The DMA map functions can fail and should be tested for errors. Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250620075602.12575-1-fourier.thomas@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
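The standard idiom such fixes add looks like this (a sketch with generic names; the floppy driver's buffer and direction are assumptions):

    dma_addr_t addr;

    addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, addr))
            return -ENOMEM;   /* bail out instead of using a bogus handle */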
2025-08-28 | (powerpc/512) Fix possible `dma_unmap_single()` on uninitialized pointer | Thomas Fourier | 1 file, -4/+2
[ Upstream commit 760b9b4f6de9a33ca56a05f950cabe82138d25bd ] If the device configuration fails (i.e. `dma_dev->device_config()` returns an error), `sg_dma_address(&sg)` is not initialized and the jump to `err_dma_prep` leads to calling `dma_unmap_single()` on `sg_dma_address(&sg)`. Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250610142918.169540-2-fourier.thomas@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-15 | PCI: pnv_php: Fix surprise plug detection and recovery | Timothy Pearson | 1 file, -0/+3
[ Upstream commit a2a2a6fc2469524caa713036297c542746d148dc ] The existing PowerNV hotplug code did not handle surprise plug events correctly, leading to a complete failure of the hotplug system after device removal and a required reboot to detect new devices. This comes down to two issues:

 1) When a device is surprise removed, often the bridge upstream port will cause a PE freeze on the PHB. If this freeze is not cleared, the MSI interrupts from the bridge hotplug notification logic will not be received by the kernel, stalling all plug events on all slots associated with the PE.

 2) When a device is removed from a slot, regardless of surprise or programmatic removal, the associated PHB/PE is left frozen. If this freeze is not cleared via a fundamental reset, skiboot is unable to clear the freeze and cannot retrain / rescan the slot. This also requires a reboot to clear the freeze and redetect the device in the slot.

Issue the appropriate unfreeze and rescan commands on hotplug events, and don't oops on hotplug if pci_bus_to_OF_node() returns NULL. Signed-off-by: Timothy Pearson <tpearson@raptorengineering.com> [bhelgaas: tidy comments] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/171044224.1359864.1752615546988.JavaMail.zimbra@raptorengineeringinc.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-15 | powerpc/eeh: Make EEH driver device hotplug safe | Timothy Pearson | 2 files, -20/+38
[ Upstream commit 1010b4c012b0d78dfb9d3132b49aa2ef024a07a7 ] Multiple race conditions existed between the PCIe hotplug driver and the EEH driver, leading to a variety of kernel oopses of the same general nature:

    <pcie device unplug>
    <eeh driver trigger>
    <hotplug removal trigger>
    <pcie tree reconfiguration>
    <eeh recovery next step>
    <oops in EEH driver bus iteration loop>

A second class of oops is also seen when the underlying bus disappears during device recovery. Refactor the EEH module to be PCI rescan and remove safe. Also clean up a few minor formatting / readability issues. Signed-off-by: Timothy Pearson <tpearson@raptorengineering.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/1334208367.1359861.1752615503144.JavaMail.zimbra@raptorengineeringinc.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-15 | powerpc/eeh: Export eeh_unfreeze_pe() | Timothy Pearson | 1 file, -0/+1
[ Upstream commit e82b34eed04b0ddcff4548b62633467235672fd3 ] The PowerNV hotplug driver needs to be able to clear any frozen PE(s) on the PHB after surprise removal of a downstream device. Export the eeh_unfreeze_pe() symbol to allow implementation of this functionality in the pnv_php module. Signed-off-by: Timothy Pearson <tpearson@raptorengineering.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/1778535414.1359858.1752615454618.JavaMail.zimbra@raptorengineeringinc.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-15 | arch: powerpc: defconfig: Drop obsolete CONFIG_NET_CLS_TCINDEX | Johan Korsnes | 1 file, -1/+0
[ Upstream commit 75cd37c5f28b85979fd5a65174013010f6b78f27 ] This option was removed from the Kconfig in commit 8c710f75256b ("net/sched: Retire tcindex classifier") but it was not removed from the defconfigs. Fixes: 8c710f75256b ("net/sched: Retire tcindex classifier") Signed-off-by: Johan Korsnes <johan.korsnes@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250323191116.113482-1-johan.korsnes@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-08-01 | crypto: powerpc/poly1305 - add depends on BROKEN for now | Eric Biggers | 1 file, -0/+1
[ Upstream commit bc8169003b41e89fe7052e408cf9fdbecb4017fe ] As discussed in the thread containing https://lore.kernel.org/linux-crypto/20250510053308.GB505731@sol/, the Power10-optimized Poly1305 code is currently not safe to call in softirq context. Disable it for now. It can be re-enabled once it is fixed. Fixes: ba8f8624fde2 ("crypto: poly1305-p10 - Glue code for optmized Poly1305 implementation for ppc64le") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> [ applied to arch/powerpc/crypto/Kconfig ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-07-10 | powerpc/kernel: Fix ppc_save_regs inclusion in build | Madhavan Srinivasan | 1 file, -2/+0
commit 93bd4a80efeb521314485a06d8c21157240497bb upstream. A recent patch fixed an old commit, fc2a5a6161a2 ("powerpc/64s: ppc_save_regs is now needed for all 64s builds"), which was meant to include the build of ppc_save_regs.c only when XMON, KEXEC_CORE and PPC_BOOK3S are enabled. This was valid, since ppc_save_regs was called only in replay_system_reset() of the old irq.c, which was under BOOK3S. But there have been multiple refactorings of irq.c which added a call to ppc_save_regs() from __replay_soft_interrupts -> replay_soft_interrupts, which is part of irq_64.c included under CONFIG_PPC64. And since ppc_save_regs is called in the CRASH_DUMP path as part of crash_setup_regs in kexec.h, CONFIG_PPC32 also needs it. So the recent patch, which re-enabled conditional building of ppc_save_regs.c, caused a build break when none of these (XMON, KEXEC_CORE, BOOK3S) were enabled in the config. Patch to enable building of ppc_save_regs.c by default. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250511041111.841158-1-maddy@linux.ibm.com Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-07-10 | powerpc: Fix struct termio related ioctl macros | Madhavan Srinivasan | 1 file, -4/+4
[ Upstream commit ab107276607af90b13a5994997e19b7b9731e251 ] Since the termio interface is now obsolete, include/uapi/asm/ioctls.h has some constant macros referring to "struct termio"; this caused build failures in userspace:

    In file included from /usr/include/asm/ioctl.h:12,
                     from /usr/include/asm/ioctls.h:5,
                     from tst-ioctls.c:3:
    tst-ioctls.c: In function 'get_TCGETA':
    tst-ioctls.c:12:10: error: invalid application of 'sizeof' to incomplete type 'struct termio'
       12 |   return TCGETA;
          |          ^~~~~~

Even though termios.h provides "struct termio", trying to juggle definitions around to make it compile could introduce regressions. So it is better to open code it. Reported-by: Tulio Magno <tuliom@ascii.art.br> Suggested-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Justin M. Forbes <jforbes@fedoraproject.org> Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> Closes: https://lore.kernel.org/linuxppc-dev/8734dji5wl.fsf@ascii.art.br/ Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250517142237.156665-1-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
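A hedged sketch of the open-coding, assuming sizeof(struct termio) is 20 on powerpc, where _IOR() packs direction 2<<29, a 13-bit size, the type and the number:

    /* before: forces userspace to have a complete struct termio */
    #define TCGETA  _IOR('t', 23, struct termio)
    /* after: same bit pattern, open-coded: 2<<29 | 20<<16 | 't'<<8 | 23 */
    #define TCGETA  0x40147417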
2025-06-27 | powerpc/eeh: Fix missing PE bridge reconfiguration during VFIO EEH recovery | Narayana Murty N | 1 file, -0/+2
[ Upstream commit 33bc69cf6655cf60829a803a45275f11a74899e5 ] VFIO EEH recovery for PCI passthrough devices fails on PowerNV and pseries platforms due to missing host-side PE bridge reconfiguration. In the current implementation, eeh_pe_configure() only performs RTAS or OPAL-based bridge reconfiguration for native host devices, but skips it entirely for PEs managed through VFIO in guest passthrough scenarios. This leads to incomplete EEH recovery when a PCI error affects a passthrough device assigned to a QEMU/KVM guest. Although VFIO triggers the EEH recovery flow through the VFIO_EEH_PE_ENABLE ioctl, the platform-specific bridge reconfiguration step is silently bypassed. As a result, the PE's config space is not fully restored, causing subsequent config space access failures or EEH freeze-on-access errors inside the guest. This patch fixes the issue by ensuring that eeh_pe_configure() always invokes the platform's configure_bridge() callback (e.g., pseries_eeh_phb_configure_bridge) even for VFIO-managed PEs. This ensures that RTAS or OPAL calls to reconfigure the PE bridge are correctly issued on the host side, restoring the PE's configuration space after an EEH event. This fix is essential for reliable EEH recovery in QEMU/KVM guests using VFIO PCI passthrough on PowerNV and pseries systems. Tested with:
 - QEMU/KVM guest using VFIO passthrough (IBM Power9, (LPAR) Power11 host)
 - Injected EEH errors with the pseries EEH errinjct tool on the host; recovery verified on the QEMU guest
 - Verified successful config space access and CAP_EXP DevCtl restoration after recovery
Fixes: 212d16cdca2d ("powerpc/eeh: EEH support for VFIO PCI device") Signed-off-by: Narayana Murty N <nnmlinux@linux.ibm.com> Reviewed-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Ganesh Goudar <ganeshgr@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250508062928.146043-1-nnmlinux@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-27 | powerpc/vdso: Fix build of VDSO32 with pcrel | Christophe Leroy | 2 files, -2/+2
[ Upstream commit b93755f408325170edb2156c6a894ed1cae5f4f6 ] Building vdso32 on power10 with pcrel leads to the following errors:

      VDSO32A arch/powerpc/kernel/vdso/gettimeofday-32.o
    arch/powerpc/kernel/vdso/gettimeofday.S: Assembler messages:
    arch/powerpc/kernel/vdso/gettimeofday.S:40: Error: syntax error; found `@', expected `,'
    arch/powerpc/kernel/vdso/gettimeofday.S:71: Info: macro invoked from here
    arch/powerpc/kernel/vdso/gettimeofday.S:40: Error: junk at end of line: `@notoc'
    arch/powerpc/kernel/vdso/gettimeofday.S:71: Info: macro invoked from here
    ...
    make[2]: *** [arch/powerpc/kernel/vdso/Makefile:85: arch/powerpc/kernel/vdso/gettimeofday-32.o] Error 1
    make[1]: *** [arch/powerpc/Makefile:388: vdso_prepare] Error 2

Once the above is fixed, the following happens:

      VDSO32C arch/powerpc/kernel/vdso/vgettimeofday-32.o
    cc1: error: '-mpcrel' requires '-mcmodel=medium'
    make[2]: *** [arch/powerpc/kernel/vdso/Makefile:89: arch/powerpc/kernel/vdso/vgettimeofday-32.o] Error 1
    make[1]: *** [arch/powerpc/Makefile:388: vdso_prepare] Error 2
    make: *** [Makefile:251: __sub-make] Error 2

Make sure the pcrel version of the CFUNC() macro is used only for powerpc64 builds and remove -mpcrel for powerpc32 builds. Fixes: 7e3a68be42e1 ("powerpc/64: vmlinux support building with PCREL addresing") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/1fa3453f07d42a50a70114da9905bf7b73304fca.1747073669.git.christophe.leroy@csgroup.eu Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-27 | powerpc/pseries/msi: Avoid reading PCI device registers in reduced power states | Gautam Menghani | 1 file, -1/+6
commit 9cc0eafd28c7faef300822992bb08d79cab2a36c upstream. When a system is being suspended to RAM, the PCI devices are also suspended, and the PPC code ends up calling pseries_msi_compose_msg(); this triggers the BUG_ON() in __pci_read_msi_msg() because the device at this point is in a reduced power state. In a reduced power state, the memory mapped registers of the PCI device are not accessible. To replicate the bug:

 1. Make sure deep sleep is selected:
        # cat /sys/power/mem_sleep
        s2idle [deep]

 2. Make sure the console is not suspended (so that dmesg logs are visible):
        echo N > /sys/module/printk/parameters/console_suspend

 3. Suspend the system:
        echo mem > /sys/power/state

To fix this behaviour, read the cached MSI message of the device when the device is not in the PCI_D0 power state, instead of touching the hardware. Fixes: a5f3d2c17b07 ("powerpc/pseries/pci: Add MSI domains") Cc: stable@vger.kernel.org # v5.15+ Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Reviewed-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250305090237.294633-1-gautam@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
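A hedged sketch of the idea in the compose path; the exact helper choice is an assumption, the point is to use the cached message instead of MMIO when the device is powered down:

    static void pseries_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
    {
            struct pci_dev *pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));

            if (pdev->current_state != PCI_D0)
                    get_cached_msi_msg(data->irq, msg);  /* no HW access */
            else
                    __pci_read_msi_msg(irq_data_get_msi_desc(data), msg);
    }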
2025-06-19 | powerpc/vas: Return -EINVAL if the offset is non-zero in mmap() | Haren Myneni | 1 file, -0/+9
[ Upstream commit 0d67f0dee6c9176bc09a5482dd7346e3a0f14d0b ] The user space calls mmap() to map the VAS window paste address, and the kernel returns the complete mapped page for each window. So return -EINVAL if a non-zero value is passed for the offset parameter to mmap(). See Documentation/arch/powerpc/vas-api.rst for mmap() restrictions. Co-developed-by: Jonathan Greental <yonatan02greental@gmail.com> Signed-off-by: Jonathan Greental <yonatan02greental@gmail.com> Reported-by: Jonathan Greental <yonatan02greental@gmail.com> Fixes: dda44eb29c23 ("powerpc/vas: Add VAS user space API") Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250610021227.361980-2-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
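A minimal sketch of the check; the function name follows the vas-api file and is an assumption:

    static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
    {
            /* Each window exposes exactly one paste page; a non-zero
             * offset cannot map anything meaningful. */
            if (vma->vm_pgoff)
                    return -EINVAL;

            /* ... existing paste-address mapping ... */
            return 0;
    }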
2025-06-19 | powerpc/powernv/memtrace: Fix out of bounds issue in memtrace mmap | Ritesh Harjani (IBM) | 1 file, -2/+6
[ Upstream commit cd097df4596f3a1e9d75eb8520162de1eb8485b2 ] memtrace mmap has an out of bounds issue. This patch fixes it by checking that the requested mapping region size stays within the allocated region size. Reported-by: Jonathan Greental <yonatan02greental@gmail.com> Fixes: 08a022ad3dfa ("powerpc/powernv/memtrace: Allow mmaping trace buffers") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250610021227.361980-1-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
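A hedged sketch of the bounds check, with hypothetical field names for the trace-buffer entry:

    unsigned long size = vma->vm_end - vma->vm_start;
    unsigned long off  = vma->vm_pgoff << PAGE_SHIFT;

    /* the requested window must fit entirely inside the buffer */
    if (off >= ent->size || size > ent->size - off)
            return -EINVAL;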
2025-06-19 | powerpc/crash: Fix non-smp kexec preparation | Eddie James | 1 file, -1/+4
[ Upstream commit 882b25af265de8e05c66f72b9a29f6047102958f ] In non-smp configurations, crash_kexec_prepare is never called in the crash shutdown path. One result of this is that the crashing_cpu variable is never set, preventing crash_save_cpu from storing the NT_PRSTATUS elf note in the core dump. Fixes: c7255058b543 ("powerpc/crash: save cpu register data in crash_smp_send_stop()") Signed-off-by: Eddie James <eajames@linux.ibm.com> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250211162054.857762-1-eajames@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-19 | powerpc: do not build ppc_save_regs.o always | Jiri Slaby (SUSE) | 1 file, -1/+1
[ Upstream commit 497b7794aef03d525a5be05ae78dd7137c6861a5 ] The Fixes commit below tried to add CONFIG_PPC_BOOK3S to one of the conditions to enable the build of ppc_save_regs.o. But it failed to do so, in fact. The commit omitted to add a dollar sign. Therefore, ppc_save_regs.o is built always these days (as "(CONFIG_PPC_BOOK3S)" is never an empty string). Fix this by adding the missing dollar sign. Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org> Fixes: fc2a5a6161a2 ("powerpc/64s: ppc_save_regs is now needed for all 64s builds") Acked-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250417105305.397128-1-jirislaby@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-04 | book3s64/radix: Fix compile errors when CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=n | Ritesh Harjani (IBM) | 1 file, -1/+2
[ Upstream commit 29bdc1f1c1df80868fb35bc69d1f073183adc6de ] Fix compile errors when CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=n Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/8231763344223c193e3452eab0ae8ea966aff466.1741609795.git.donettom@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-04 | arch/powerpc/perf: Check the instruction type before creating sample with perf_mem_data_src | Athira Rajeev | 2 files, -1/+23
[ Upstream commit 2ffb26afa64261139e608bf087a0c1fe24d76d4d ] perf mem report sometimes aborts as below (in some corner cases) on powerpc:

    # ./perf mem report 1>out
    *** stack smashing detected ***: terminated
    Aborted (core dumped)

The backtrace is as below:

    __pthread_kill_implementation ()
    raise ()
    abort ()
    __libc_message
    __fortify_fail
    __stack_chk_fail
    hist_entry.lvl_snprintf
    __sort__hpp_entry
    __hist_entry__snprintf
    hists.fprintf
    cmd_report
    cmd_mem

Snippet of code which triggers the issue from tools/perf/util/sort.c:

    static int hist_entry__lvl_snprintf(struct hist_entry *he, char *bf,
                                        size_t size, unsigned int width)
    {
            char out[64];

            perf_mem__lvl_scnprintf(out, sizeof(out), he->mem_info);
            return repsep_snprintf(bf, size, "%-*s", width, out);
    }

The value of "out" is filled from the perf_mem_data_src value. Debugging this further showed that for some corner cases, the value of "data_src" held a garbage value. This resulted in a bigger string size, causing the stack check to fail. The perf mem data source values are captured in the sample via the isa207_get_mem_data_src function. The initial check is to fetch the type of sampled instruction. If the type of instruction is not valid (not a load/store instruction), the function returns. Since commit e16fd7f2cb1a ("perf: Use sample_flags for data_src"), the data_src field is not initialized by the perf_sample_data_init() function. If the PMU driver doesn't set the data_src value to zero when the type is not valid, this will result in an uninitialised value for data_src. The uninitialised value of data_src resulted in the stack check fail followed by the abort for "perf mem report". When requesting data source information in the sample, the instruction type is expected to be a load or store instruction. In ISA v3.0, due to a hardware limitation, there are corner cases where an instruction type other than load or store is observed. In ISA v3.0 and before, values "0" and "7" are considered reserved. In ISA v3.1, value "7" has been used to indicate "larx/stcx". Drop the sample if the instruction type has reserved values for this field, with an ISA version check. Initialize data_src to zero in isa207_get_mem_data_src if the instruction type is not load/store. Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250121131621.39054-1-atrajeev@linux.vnet.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-04 | powerpc/pseries/iommu: memory notifier incorrectly adds TCEs for pmemory | Gaurav Batra | 3 files, -14/+18
[ Upstream commit 6aa989ab2bd0d37540c812b4270006ff794662e7 ] iommu_mem_notifier() is invoked when RAM is dynamically added/removed. This notifier call is responsible for adding/removing TCEs from the Dynamic DMA Window (DDW) when TCEs are pre-mapped. TCEs are pre-mapped only for RAM and not for persistent memory (pmemory). For DMA buffers in pmemory, TCEs are dynamically mapped when the device driver instructs to do so. The issue is that the 'daxctl' command is capable of adding pmemory as "System RAM" after LPAR boot. The command to do so is:

    daxctl reconfigure-device --mode=system-ram dax0.0 --force

This will dynamically add the pmemory range to LPAR RAM, eventually invoking iommu_mem_notifier(). The address range of pmemory is way beyond the max RAM that the LPAR can have, which means this range is beyond the DDW created for the device at device initialization time. As a result, when TCEs are pre-mapped for the pmemory range by iommu_mem_notifier(), the PHYP HCALL returns H_PARAMETER. This fails the daxctl command to add pmemory as RAM. The solution is to not pre-map TCEs for pmemory. Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Tested-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250130183854.92258-1-gbatra@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-04 | powerpc/prom_init: Fixup missing #size-cells on PowerBook6,7 | Andreas Schwab | 1 file, -2/+2
[ Upstream commit 7e67ef889c9ab7246547db73d524459f47403a77 ] Similar to the PowerMac3,1, the PowerBook6,7 is missing the #size-cells property on the i2s node. Depends-on: commit 045b14ca5c36 ("of: WARN on deprecated #address-cells/#size-cells handling") Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Acked-by: Rob Herring (Arm) <robh@kernel.org> [maddy: added "commit" word in depends-on to avoid checkpatch error] Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/875xmizl6a.fsf@igel.home Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-05-09 | powerpc/boot: Fix dash warning | Madhavan Srinivasan | 1 file, -1/+1
[ Upstream commit e3f506b78d921e48a00d005bea5c45ec36a99240 ] Commit b2accfe7ca5b ("powerpc/boot: Check for ld-option support") suppressed linker warnings, but the expression used did not go well with the POSIX shell (dash), resulting in this warning:

    arch/powerpc/boot/wrapper: 237: [: 0: unexpected operator
    ld: warning: arch/powerpc/boot/zImage.epapr has a LOAD segment with RWX permissions

Fix the check to handle the reported warning. The patch also fixes a couple of shellcheck-reported errors for the same line:

    In arch/powerpc/boot/wrapper line 237:
    if [ $(${CROSS}ld -v --no-warn-rwx-segments &>/dev/null; echo $?) -eq 0 ]; then
      SC2046 (warning): Quote this to prevent word splitting.
      SC2086 (info): Double quote to prevent globbing and word splitting.
      SC3020 (warning): In POSIX sh, &> is undefined.

Fixes: b2accfe7ca5b ("powerpc/boot: Check for ld-option support") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Suggested-by: Stephen Rothwell <sfr@canb.auug.org.au> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Reviewed-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250423082154.30625-1-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
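The POSIX-safe shape of such a check, as a sketch (the variable name is illustrative):

    # probe the option directly; no $(...) inside [ ] and no bash-only &>
    if "${CROSS}ld" -v --no-warn-rwx-segments >/dev/null 2>&1; then
        nowarn="--no-warn-rwx-segments"
    fi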
2025-05-09 | powerpc/boot: Check for ld-option support | Madhavan Srinivasan | 1 file, -4/+2
[ Upstream commit b2accfe7ca5bc9f9af28e603b79bdd5ad8df5c0b ] Commit 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") enabled support to add the linker option "--no-warn-rwx-segments" if the version is greater than 2.39. Similar build warnings were reported recently from linker version 2.35.2:

    ld: warning: arch/powerpc/boot/zImage.epapr has a LOAD segment with RWX permissions
    ld: warning: arch/powerpc/boot/zImage.pseries has a LOAD segment with RWX permissions

Fix the warning by checking for "--no-warn-rwx-segments" option support in the linker to enable it, instead of checking for the version range. Fixes: 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Closes: https://lore.kernel.org/linuxppc-dev/61cf556c-4947-4bd6-af63-892fc0966dad@linux.ibm.com/ Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250401004218.24869-1-maddy@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-05-09 | book3s64/radix: Align section vmemmap start address to PAGE_SIZE | Donet Tom | 1 file, -2/+15
[ Upstream commit 9cf7e13fecbab0894f6986fc6986ab2eba8de52e ] A vmemmap altmap is a device-provided region used to provide backing storage for struct pages. For each namespace, the altmap should belong to that same namespace. If the namespaces are created unaligned, there is a chance that the section vmemmap start address could also be unaligned. If the section vmemmap start address is unaligned, the altmap page allocated from the current namespace might be used by the previous namespace also. During the free operation, since the altmap is shared between two namespaces, the previous namespace may detect that the page does not belong to its altmap and incorrectly assume that the page is a normal page. It then attempts to free the normal page, which leads to a kernel crash:

    Kernel attempted to read user page (18) - exploit attempt? (uid: 0)
    BUG: Kernel NULL pointer dereference on read at 0x00000018
    Faulting instruction address: 0xc000000000530c7c
    Oops: Kernel access of bad area, sig: 11 [#1]
    LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
    CPU: 32 PID: 2104 Comm: ndctl Kdump: loaded Tainted: G W
    NIP: c000000000530c7c LR: c000000000530e00 CTR: 0000000000007ffe
    REGS: c000000015e57040 TRAP: 0300 Tainted: G W
    MSR: 800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 84482404
    CFAR: c000000000530dfc DAR: 0000000000000018 DSISR: 40000000 IRQMASK: 0
    GPR00: c000000000530e00 c000000015e572e0 c000000002c5cb00 c00c000101008040
    GPR04: 0000000000000000 0000000000000007 0000000000000001 000000000000001f
    GPR08: 0000000000000005 0000000000000000 0000000000000018 0000000000002000
    GPR12: c0000000001d2fb0 c0000060de6b0080 0000000000000000 c0000060dbf90020
    GPR16: c00c000101008000 0000000000000001 0000000000000000 c000000125b20f00
    GPR20: 0000000000000001 0000000000000000 ffffffffffffffff c00c000101007fff
    GPR24: 0000000000000001 0000000000000000 0000000000000000 0000000000000000
    GPR28: 0000000004040201 0000000000000001 0000000000000000 c00c000101008040
    NIP [c000000000530c7c] get_pfnblock_flags_mask+0x7c/0xd0
    LR [c000000000530e00] free_unref_page_prepare+0x130/0x4f0
    Call Trace:
      free_unref_page+0x50/0x1e0
      free_reserved_page+0x40/0x68
      free_vmemmap_pages+0x98/0xe0
      remove_pte_table+0x164/0x1e8
      remove_pmd_table+0x204/0x2c8
      remove_pud_table+0x1c4/0x288
      remove_pagetable+0x1c8/0x310
      vmemmap_free+0x24/0x50
      section_deactivate+0x28c/0x2a0
      __remove_pages+0x84/0x110
      arch_remove_memory+0x38/0x60
      memunmap_pages+0x18c/0x3d0
      devm_action_release+0x30/0x50
      release_nodes+0x68/0x140
      devres_release_group+0x100/0x190
      dax_pmem_compat_release+0x44/0x80 [dax_pmem_compat]
      device_for_each_child+0x8c/0x100
      dax_pmem_compat_remove+0x2c/0x50 [dax_pmem_compat]
      nvdimm_bus_remove+0x78/0x140 [libnvdimm]
      device_remove+0x70/0xd0

Another issue is that if there is no altmap, a PMD-sized vmemmap page will be allocated from RAM, regardless of the alignment of the section start address. If the section start address is not aligned to the PMD size, a VM_BUG_ON will be triggered when setting the PMD-sized page to page table. In this patch, we are aligning the section vmemmap start address to PAGE_SIZE. After alignment, the start address will not be part of the current namespace, and a normal page will be allocated for the vmemmap mapping of the current section. For the remaining sections, altmaps will be allocated. During the free operation, the normal page will be correctly freed. In the same way, a PMD_SIZE vmemmap page will be allocated only if the section start address is PMD_SIZE-aligned; otherwise, it will fall back to a PAGE-sized vmemmap allocation.

Without this patch
==================

    NS1 start                        NS2 start
     _________________________________________________________
    |               NS1               |          NS2          |
     ---------------------------------------------------------
    | Altmap| Altmap | .....|Altmap| Altmap | ...........
    |  NS1  |  NS1   |      | NS2  |  NS2   |

In the above scenario, NS1 and NS2 are two namespaces. The vmemmap for NS1 comes from Altmap NS1, which belongs to NS1, and the vmemmap for NS2 comes from Altmap NS2, which belongs to NS2. The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared by both NS1 and NS2. During the free operation in NS1, Altmap NS2 is not part of NS1's altmap, causing it to attempt to free an invalid page.

With this patch
===============

    NS1 start                        NS2 start
     _________________________________________________________
    |               NS1               |          NS2          |
     ---------------------------------------------------------
    | Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
    |  NS1  |  NS1   |      |  Page  |  NS2   |  NS2   |

If the vmemmap start for NS2 is not aligned then we are allocating a normal page. NS1 and NS2 vmemmap will be freed correctly. Fixes: 368a0590d954 ("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function") Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/8f98ec2b442977c618f7256cec88eb17dde3f2b9.1741609795.git.donettom@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-25 | powerpc/rtas: Prevent Spectre v1 gadget construction in sys_rtas() | Nathan Lynch | 1 file, -0/+4
commit 0974d03eb479384466d828d65637814bee6b26d7 upstream. Smatch warns: arch/powerpc/kernel/rtas.c:1932 __do_sys_rtas() warn: potential spectre issue 'args.args' [r] (local cap) The 'nargs' and 'nret' locals come directly from a user-supplied buffer and are used as indexes into a small stack-based array and as inputs to copy_to_user() after they are subject to bounds checks. Use array_index_nospec() after the bounds checks to clamp these values for speculative execution. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reported-by: Breno Leitao <leitao@debian.org> Reviewed-by: Breno Leitao <leitao@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240530-sys_rtas-nargs-nret-v1-1-129acddd4d89@linux.ibm.com [Minor context change fixed] Signed-off-by: Cliff Liu <donghua.liu@windriver.com> Signed-off-by: He Zhe <Zhe.He@windriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
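The clamping idiom described above, sketched (the exact bound expressions are assumptions based on the message):

    #include <linux/nospec.h>

    /* after the existing bounds checks on nargs/nret: clamp the
     * user-controlled indexes so speculative execution cannot use
     * out-of-range values */
    nargs = array_index_nospec(nargs, ARRAY_SIZE(args.args) + 1);
    nret  = array_index_nospec(nret, ARRAY_SIZE(args.args) - nargs + 1);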
2025-04-10 | spufs: fix a leak in spufs_create_context() | Al Viro | 1 file, -1/+4
[ Upstream commit 0f5cce3fc55b08ee4da3372baccf4bcd36a98396 ] Leak fixes back in 2008 missed one case - if we are trying to set affinity and spufs_mkdir() fails, we need to drop the reference to neighbor. Fixes: 58119068cb27 "[POWERPC] spufs: Fix memory leak on SPU affinity" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | spufs: fix gang directory lifetimes | Al Viro | 3 files, -8/+49
[ Upstream commit c134deabf4784e155d360744d4a6a835b9de4dd4 ] Prior to "[POWERPC] spufs: Fix gang destroy leaks" we used to have a problem with gang lifetimes - creation of a gang returns an opened gang directory, which normally gets removed when that gets closed, but if somebody has created a context belonging to that gang and kept it alive until the gang got closed, removal failed and we ended up with a leak. Unfortunately, it had been fixed the wrong way. The dentry of the gang directory was no longer pinned, and rmdir on close was gone. One problem was that failure of open kept calling simple_rmdir() as cleanup, which meant an unbalanced dput(). Another bug was in the success case - gang creation incremented the link count on the root directory, but that was no longer undone when the gang got destroyed. Fix consists of
 * reverting the commit in question
 * adding a counter to gang, protected by ->i_rwsem of the gang directory inode
 * having it set to 1 at creation time, dropped in both spufs_dir_close() and spufs_gang_close() and bumped in spufs_create_context(), provided that it's not 0
 * using simple_recursive_removal() to take the gang directory out when the counter reaches zero
Fixes: 877907d37da9 "[POWERPC] spufs: Fix gang destroy leaks" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | spufs: fix a leak on spufs_new_file() failure | Al Viro | 1 file, -1/+3
[ Upstream commit d1ca8698ca1332625d83ea0d753747be66f9906d ] It's called from spufs_fill_dir(), and caller of that will do spufs_rmdir() in case of failure. That does remove everything we'd managed to create, but... the problem dentry is still negative. IOW, it needs to be explicitly dropped. Fixes: 3f51dd91c807 "[PATCH] spufs: fix spufs_fill_dir error path" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | arch/powerpc: drop GENERIC_PTDUMP from mpc885_ads_defconfig | Anshuman Khandual | 1 file, -1/+1
[ Upstream commit 2c5e6ac2db64ace51f66a9f3b3b3ab9553d748e8 ] GENERIC_PTDUMP gets selected on powerpc explicitly and hence can be dropped off from mpc885_ads_defconfig. Replace with CONFIG_PTDUMP_DEBUGFS instead. Link: https://lkml.kernel.org/r/20250226122404.1927473-3-anshuman.khandual@arm.com Fixes: e084728393a5 ("powerpc/ptdump: Convert powerpc to GENERIC_PTDUMP") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Marc Zyngier <maz@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Steven Price <steven.price@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-03-13Revert "KVM: PPC: e500: Mark "struct page" dirty in kvmppc_e500_shadow_map()"Greg Kroah-Hartman1-6/+7
This reverts commit 15d60c13b704f770ba45c58477380d4577cebfa3 which is commit c9be85dabb376299504e0d391d15662c0edf8273 upstream. It should not have been applied. Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5WQ@mail.gmail.com Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-13Revert "KVM: PPC: e500: Mark "struct page" pfn accessed before dropping ↵Greg Kroah-Hartman1-1/+3
mmu_lock" This reverts commit 59e21c4613b0a46f46eb124984928df46d88ad57 which is commit 84cf78dcd9d65c45ab73998d4ad50f433d53fb93 upstream. It should not have been applied. Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5WQ@mail.gmail.com Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-13Revert "KVM: PPC: e500: Use __kvm_faultin_pfn() to handle page faults"Greg Kroah-Hartman1-3/+5
This reverts commit ba3cf83f4a5063edb6ee150633e641954ce30478 which is commit 419cfb983ca93e75e905794521afefcfa07988bb upstream. It should not have been applied. Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5WQ@mail.gmail.com Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-13Revert "KVM: e500: always restore irqs"Greg Kroah-Hartman1-2/+2
This reverts commit b9d93eda1214985d1b3d00a0f9d4306282a5b189 which is commit 87ecfdbc699cc95fac73291b52650283ddcf929d upstream. It should not have been applied. Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5WQ@mail.gmail.com Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>