path: root/arch/s390
2025-07-21  s390/time: Use monotonic clock in get_cycles()  [Sven Schnelle, 1 file, -7/+6]
Otherwise the code might not work correctly when the clock is changed. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
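A minimal sketch of the resulting helper, assuming the usual s390 TOD-to-cycles conversion (the exact shift is illustrative):

    /* get_cycles() now reads the monotonic TOD clock, which is not
     * affected when the clock is changed (e.g. by an STP sync check). */
    static inline cycles_t get_cycles(void)
    {
            return (cycles_t)get_tod_clock_monotonic() >> 2;
    }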
2025-07-20  Add a new optional ",cma" suffix to the crashkernel= command line option  [Jiri Bohac, 1 file, -1/+1]
Patch series "kdump: crashkernel reservation from CMA", v5. This series implements a way to reserve additional crash kernel memory using CMA. Currently, all the memory for the crash kernel is not usable by the 1st (production) kernel. It is also unmapped so that it can't be corrupted by the fault that will eventually trigger the crash. This makes sense for the memory actually used by the kexec-loaded crash kernel image and initrd and the data prepared during the load (vmcoreinfo, ...). However, the reserved space needs to be much larger than that to provide enough run-time memory for the crash kernel and the kdump userspace. Estimating the amount of memory to reserve is difficult. Being too careful makes kdump likely to end in OOM, being too generous takes even more memory from the production system. Also, the reservation only allows reserving a single contiguous block (or two with the "low" suffix). I've seen systems where this fails because the physical memory is fragmented. By reserving additional crashkernel memory from CMA, the main crashkernel reservation can be just large enough to fit the kernel and initrd image, minimizing the memory taken away from the production system. Most of the run-time memory for the crash kernel will be memory previously available to userspace in the production system. As this memory is no longer wasted, the reservation can be done with a generous margin, making kdump more reliable. Kernel memory that we need to preserve for dumping is normally not allocated from CMA, unless it is explicitly allocated as movable. Currently this is only the case for memory ballooning and zswap. Such movable memory will be missing from the vmcore. User data is typically not dumped by makedumpfile. When dumping of user data is intended this new CMA reservation cannot be used. There are five patches in this series: The first adds a new ",cma" suffix to the recenly introduced generic crashkernel parsing code. parse_crashkernel() takes one more argument to store the cma reservation size. The second patch implements reserve_crashkernel_cma() which performs the reservation. If the requested size is not available in a single range, multiple smaller ranges will be reserved. The third patch updates Documentation/, explicitly mentioning the potential DMA corruption of the CMA-reserved memory. The fourth patch adds a short delay before booting the kdump kernel, allowing pending DMA transfers to finish. The fifth patch enables the functionality for x86 as a proof of concept. There are just three things every arch needs to do: - call reserve_crashkernel_cma() - include the CMA-reserved ranges in the physical memory map - exclude the CMA-reserved ranges from the memory available through /proc/vmcore by excluding them from the vmcoreinfo PT_LOAD ranges. Adding other architectures is easy and I can do that as soon as this series is merged. With this series applied, specifying crashkernel=100M craskhernel=1G,cma on the command line will make a standard crashkernel reservation of 100M, where kexec will load the kernel and initrd. An additional 1G will be reserved from CMA, still usable by the production system. The crash kernel will have 1.1G memory available. The 100M can be reliably predicted based on the size of the kernel and initrd. The new cma suffix is completely optional. When no crashkernel=size,cma is specified, everything works as before. This patch (of 5): Add a new cma_size parameter to parse_crashkernel(). 
When not NULL, call __parse_crashkernel to parse the CMA reservation size from "crashkernel=size,cma" and store it in cma_size. Set cma_size to NULL in all calls to parse_crashkernel(). Link: https://lkml.kernel.org/r/aEqnxxfLZMllMC8I@dwarf.suse.cz Link: https://lkml.kernel.org/r/aEqoQckgoTQNULnh@dwarf.suse.cz Signed-off-by: Jiri Bohac <jbohac@suse.cz> Cc: Baoquan He <bhe@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Donald Dutile <ddutile@redhat.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Philipp Rudo <prudo@redhat.com> Cc: Pingfan Liu <piliu@redhat.com> Cc: Tao Liu <ltao@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
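A hedged sketch of the updated call shape; the surrounding parameters of parse_crashkernel() are assumptions and may not match the merged signature exactly, the relevant part is the new, trailing cma_size output pointer:

    unsigned long long crash_size, crash_base, cma_size;
    int ret;

    /* An arch that supports the ",cma" suffix passes a real pointer: */
    ret = parse_crashkernel(boot_command_line, memory_limit,
                            &crash_size, &crash_base, &cma_size);

    /* All other callers opt out by passing NULL: */
    ret = parse_crashkernel(boot_command_line, memory_limit,
                            &crash_size, &crash_base, NULL);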
2025-07-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after rc6  [Alexei Starovoitov, 5 files, -17/+59]
Cross-merge BPF and other fixes after downstream PR. No conflicts. Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-07-18  Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  [Linus Torvalds, 1 file, -1/+9]
Pull bpf fixes from Alexei Starovoitov:

 - Fix handling of BPF arena relocations (Andrii Nakryiko)
 - Fix race in bpf_arch_text_poke() on s390 (Ilya Leoshkevich)
 - Fix use of virt_to_phys() on arm64 when mmapping BTF (Lorenz Bauer)
 - Reject %p% format string in bprintf-like BPF helpers (Paul Chaignon)

* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  libbpf: Fix handling of BPF arena relocations
  btf: Fix virt_to_phys() on arm64 when mmapping BTF
  selftests/bpf: Stress test attaching a BPF prog to another BPF prog
  s390/bpf: Fix bpf_arch_text_poke() with new_addr == NULL again
  selftests/bpf: Add negative test cases for snprintf
  bpf: Reject %p% format string in bprintf-like helpers
2025-07-18  crypto: engine - remove request batching support  [Ovidiu Panait, 2 files, -2/+2]
Remove request batching support from crypto_engine, as there are no drivers using this feature and it doesn't really work that well. Instead of doing batching based on backlog, a more optimal approach would be for the user to handle the batching (similar to how IPsec can hook into GSO to get 64K of data each time or how block encryption can use unit sizes much greater than 4K). Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-07-17  s390/bpf: Fix bpf_arch_text_poke() with new_addr == NULL again  [Ilya Leoshkevich, 1 file, -1/+9]
Commit 7ded842b356d ("s390/bpf: Fix bpf_plt pointer arithmetic") has accidentally removed the critical piece of commit c730fce7c70c ("s390/bpf: Fix bpf_arch_text_poke() with new_addr == NULL"), causing intermittent kernel panics in e.g. perf's on_switch() prog to reappear. Restore the fix and add a comment. Fixes: 7ded842b356d ("s390/bpf: Fix bpf_plt pointer arithmetic") Cc: stable@vger.kernel.org Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/r/20250716194524.48109-2-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
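For context, a hedged sketch of the contract being restored here; a NULL new_addr asks the JIT to remove the old call, i.e. patch in a no-op, rather than emit a branch to address zero:

    /* Unpatch a previously poked call site (e.g. when a trampoline
     * is detached); the s390 JIT must turn this into a nop. */
    err = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL);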
2025-07-16  s390/pai_crypto: Rename PAI Crypto event 4210  [Thomas Richter, 1 file, -1/+1]
The PAI crypto event number 4210 is named PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_256A. According to the z16 and z17 Principles of Operation documents, SA22-7832-13 and SA22-7832-14, the event is named PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_256, without a trailing 'A'. Adjust this event name. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Reviewed-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-07-15  s390/ptrace: Use USER_REGSET_NOTE_TYPE() to specify regset note names  [Dave Martin, 1 file, -21/+21]
Instead of having the core code guess the note name for each regset, use USER_REGSET_NOTE_TYPE() to pick the correct name from elf.h. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kees Cook <kees@kernel.org> Cc: Akihiko Odaki <akihiko.odaki@daynix.com> Cc: linux-s390@vger.kernel.org Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp> Link: https://lore.kernel.org/r/20250701135616.29630-18-Dave.Martin@arm.com Signed-off-by: Kees Cook <kees@kernel.org>
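For illustration, a hedged sketch of what the mechanical conversion looks like in an s390 regset table (the field values are illustrative, not copied from the patch):

    static const struct user_regset s390_regsets[] = {
            {
                    /* was: .core_note_type = NT_PRSTATUS, */
                    USER_REGSET_NOTE_TYPE(PRSTATUS),
                    .n          = sizeof(s390_regs) / sizeof(long),
                    .size       = sizeof(long),
                    .align      = sizeof(long),
                    .regset_get = s390_regs_get,
                    .set        = s390_regs_set,
            },
    };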
2025-07-14  lib/crypto: s390/sha1: Migrate optimized code into library  [Eric Biggers, 5 files, -116/+0]
Instead of exposing the s390-optimized SHA-1 code via s390-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be s390-optimized, and it fixes the longstanding issue where the s390-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250712232329.818226-12-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
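The library hook an architecture implements is small; a hedged sketch, with the block-state type name assumed from the new library code:

    /* Process nblocks full 64-byte SHA-1 blocks (on s390 via the CPACF
     * KIMD instruction); the generic lib/crypto code handles buffering
     * and padding. */
    void sha1_blocks(struct sha1_block_state *state,
                     const u8 *data, size_t nblocks);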
2025-07-14  Merge branch 'tip/sched/urgent'  [Peter Zijlstra, 5 files, -17/+51]
Avoid merge conflicts Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2025-07-14  smpboot: introduce SDTL_INIT() helper to tidy sched topology setup  [Li Chen, 1 file, -5/+5]
Define a small SDTL_INIT(maskfn, flagsfn, name) macro and use it to build the sched_domain_topology_level array. Purely a cleanup; behaviour is unchanged. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20250710105715.66594-2-me@linux.beauty
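A hedged sketch of the helper and its use; the initializer details and the s390 mask/flags function names are assumptions:

    #define SDTL_INIT(maskfn, flagsfn, dname) \
            ((struct sched_domain_topology_level) \
             { .mask = maskfn, .sd_flags = flagsfn, .name = #dname })

    static struct sched_domain_topology_level s390_topology[] = {
            SDTL_INIT(cpu_thread_mask, cpu_smt_flags, SMT),
            SDTL_INIT(cpu_coregroup_mask, cpu_core_flags, MC),
            { NULL, },
    };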
2025-07-11  Merge tag 'nf-next-25-07-10' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next  [Jakub Kicinski, 2 files, -2/+0]
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next (v2)

The following series contains an initial small batch of Netfilter updates for net-next:

1) Remove DCCP conntrack support. Keep the DCCP matches around in order to avoid breakage when loading rulesets, and add a Kconfig option to wrap the code so it can be disabled by distributors.

2) Remove buggy code aiming at shrinking netlink deletion events, then re-add it correctly in another patch. This is to prevent -stable from picking up a fix that breaks old userspace. From Phil Sutter.

3) Add a missing WARN_ON_ONCE() to check for lockdep_commit_lock_is_held() to uncover bugs. From Fedor Pchelkin.

* tag 'nf-next-25-07-10' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next:
  netfilter: nf_tables: adjust lockdep assertions handling
  netfilter: nf_tables: Reintroduce shortened deletion notifications
  netfilter: nf_tables: Drop dead code from fill_*_info routines
  netfilter: conntrack: remove DCCP protocol support
====================
Link: https://patch.msgid.link/20250710010706.2861281-1-pablo@netfilter.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-10  s390/boot: Introduce jump_to_kernel() function  [Ilya Leoshkevich, 5 files, -3/+20]
Introduce a global function that jumps from the decompressor to the decompressed kernel. Put its address into svc_old_psw, from where GDB can take it without loading decompressor symbols. It should be available throughout the entire decompressor execution, because it's placed there statically, and nothing in the decompressor uses the SVC instruction. Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Alexander Gordeev <agordeev@linux.ibm.com> Link: https://lore.kernel.org/r/20250625154220.75300-2-iii@linux.ibm.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
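A hedged sketch of the idea; the prototype and lowcore accessor are assumptions, and the patch places the address statically rather than via the run-time store shown here for clarity:

    /* Lives in the decompressor; hands control to the decompressed
     * kernel and never returns. */
    void __noreturn jump_to_kernel(psw_t *psw);

    /* SVC is unused in the decompressor, so the SVC old-PSW slot is
     * free to hold a breadcrumb that GDB can read without loading
     * decompressor symbols: */
    get_lowcore()->svc_old_psw.addr = (unsigned long)jump_to_kernel;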
2025-07-10  s390/stp: Remove udelay from stp_sync_clock()  [Sven Schnelle, 1 file, -1/+1]
When an stp sync check is handled on a system with multiple cpus each cpu gets a machine check but only the first one actually handles the sync operation. All other CPUs spin waiting for the first one to finish with a short udelay(). But udelay can't be used here as the first CPU modifies tod_clock_base before performing the sync op. During this timeframe get_tod_clock_monotonic() might return a non-monotonic time. The time spent waiting should be very short and udelay is a busy loop anyways, therefore simply remove the udelay. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-07-10  mm/ptdump: take the memory hotplug lock inside ptdump_walk_pgd()  [Anshuman Khandual, 1 file, -2/+0]
Memory hot remove unmaps and tears down various kernel page table regions as required. The ptdump code can race with concurrent modifications of the kernel page tables. When leaf entries are modified concurrently, the dump code may log stale or inconsistent information for a VA range, but this is otherwise not harmful. But when intermediate levels of kernel page table are freed, the dump code will continue to use memory that has been freed and potentially reallocated for another purpose. In such cases, the ptdump code may dereference bogus addresses, leading to a number of potential problems.

To avoid the above mentioned race condition, platforms such as arm64, riscv and s390 take the memory hotplug lock while dumping kernel page tables via the sysfs interface /sys/kernel/debug/kernel_page_tables.

A similar race condition exists while checking for pages that might have been marked W+X via /sys/kernel/debug/kernel_page_tables/check_wx_pages, which in turn calls ptdump_check_wx(). Instead of solving this race condition again, let's just move the memory hotplug lock inside generic ptdump_check_wx(), which will benefit both scenarios.

Drop the get_online_mems() and put_online_mems() combination from all existing platform ptdump code paths.

Link: https://lkml.kernel.org/r/20250620052427.2092093-1-anshuman.khandual@arm.com Fixes: bbd6ec605c0f ("arm64/mm: Enable memory hot remove") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> [s390] Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
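A hedged sketch of the consolidated locking; the inner helper name is hypothetical:

    bool ptdump_check_wx(void)
    {
            bool ret;

            /* Serialize against memory hot remove freeing intermediate
             * page table levels while we walk them. */
            get_online_mems();
            ret = arch_ptdump_check_wx();   /* hypothetical arch hook */
            put_online_mems();
            return ret;
    }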
2025-07-10  mm/percpu: conditionally define _shared_alloc_tag via CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU  [Hao Ge, 2 files, -3/+3]
Recently discovered this entry while checking kallsyms on ARM64:

  ffff800083e509c0 D _shared_alloc_tag

If ARCH_NEEDS_WEAK_PER_CPU is not defined (it is only defined for the s390 and alpha architectures), there is no need to statically define the percpu variable _shared_alloc_tag. Therefore, we need to implement isolation for this purpose.

When building the core kernel code for s390 or alpha, ARCH_NEEDS_WEAK_PER_CPU remains undefined (as it is gated by #if defined(MODULE)). However, when building modules for these architectures, the macro is explicitly defined. Therefore, remove all instances of ARCH_NEEDS_WEAK_PER_CPU from the code and introduce CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU to replace the relevant logic.

We can now conditionally define the percpu variable _shared_alloc_tag based on CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU. This allows architectures (such as s390/alpha) that require weak definitions for percpu variables in modules to include the definition, while others can omit it via compile-time exclusion.

Link: https://lkml.kernel.org/r/20250618015809.1235761-1-hao.ge@linux.dev Signed-off-by: Hao Ge <gehao@kylinos.cn> Suggested-by: Suren Baghdasaryan <surenb@google.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> [s390] Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Cc: Christoph Lameter <cl@linux.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: Dennis Zhou <dennis@kernel.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Tejun Heo <tj@kernel.org> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
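A hedged sketch of the resulting conditional definition; the counter type name is an assumption:

    #ifdef CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU
    /* Only architectures whose modules need weak per-CPU definitions
     * (s390, alpha) get the shared companion variable. */
    DEFINE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
    #endif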
2025-07-09  s390/early: Copy last breaking event address to pt_regs  [Heiko Carstens, 1 file, -0/+1]
In case of an early crash the early program check handler also prints the last breaking event address which is contained within the pt_regs structure. However it is not initialized, and therefore a more or less random value is printed in case of a crash. Copy the last breaking event address from lowcore to pt_regs in case of an early program check to address this. This also makes it easier to analyze early crashes. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
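A hedged one-line sketch of the fix; the lowcore and pt_regs field names are assumed from the s390 layout:

    /* Early program check handler: make the breaking-event address
     * printed by the dump code reflect reality instead of whatever
     * happened to be in the uninitialized pt_regs slot. */
    regs->last_break = get_lowcore()->pgm_last_break;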
2025-07-08  Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux  [Linus Torvalds, 2 files, -0/+5]
Pull crypto library fix from Eric Biggers: "Fix an uninitialized variable in the s390 optimized SHA-1 and SHA-2. Note that my librarification changes also fix this by greatly simplifying how the s390 optimized SHA code is integrated. However, we need this separate fix for 6.16 and older versions."

* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
  crypto: s390/sha - Fix uninitialized variable in SHA-1 and SHA-2
2025-07-04  lib/crypto: sha256: Make library API use strongly-typed contexts  [Eric Biggers, 1 file, -1/+1]
Currently the SHA-224 and SHA-256 library functions can be mixed arbitrarily, even in ways that are incorrect, for example using sha224_init() and sha256_final(). This is because they operate on the same structure, sha256_state. Introduce stronger typing, as I did for SHA-384 and SHA-512.

Also as I did for SHA-384 and SHA-512, use the names *_ctx instead of *_state. The *_ctx names have the following small benefits:

- They're shorter.
- They avoid an ambiguity with the compression function state.
- They're consistent with the well-known OpenSSL API.
- Users usually name the variable 'sctx' anyway, which suggests that *_ctx would be the more natural name for the actual struct.

Therefore: update the SHA-224 and SHA-256 APIs, implementation, and calling code accordingly. In the new structs, also strongly-type the compression function state.

Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250630160645.3198-7-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
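A hedged sketch of the strongly-typed usage the text describes (data/data_len supplied by the caller):

    struct sha256_ctx ctx;          /* was: struct sha256_state */
    u8 digest[SHA256_DIGEST_SIZE];

    sha256_init(&ctx);
    sha256_update(&ctx, data, data_len);
    sha256_final(&ctx, digest);
    /* sha224_init() now takes a struct sha224_ctx, so accidentally
     * pairing it with sha256_final() no longer compiles. */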
2025-07-03  crypto: s390/sha - Fix uninitialized variable in SHA-1 and SHA-2  [Eric Biggers, 2 files, -0/+5]
Commit 88c02b3f79a6 ("s390/sha3: Support sha3 performance enhancements") added the field s390_sha_ctx::first_message_part and made it be used by s390_sha_update() (now s390_sha_update_blocks()). At the time, s390_sha_update() was used by all the s390 SHA-1, SHA-2, and SHA-3 algorithms. However, only the initialization functions for SHA-3 were updated, leaving SHA-1 and SHA-2 using first_message_part uninitialized.

This could cause e.g. the function code CPACF_KIMD_SHA_512 | CPACF_KIMD_NIP to be used instead of just CPACF_KIMD_SHA_512. This apparently was harmless, as the SHA-1 and SHA-2 function codes ignore CPACF_KIMD_NIP; it is recognized only by the SHA-3 function codes (https://lore.kernel.org/r/73477fe9-a1dc-4e38-98a6-eba9921e8afa@linux.ibm.com/). Therefore, this bug was found only when first_message_part was later converted to a boolean and UBSAN detected its uninitialized use.

Regardless, let's fix this by just initializing to zero.

Note: in 6.16, we need to patch SHA-1, SHA-384, and SHA-512. In 6.15 and earlier, we'll also need to patch SHA-224 and SHA-256, as they hadn't yet been librarified (which incidentally fixed this bug).

Fixes: 88c02b3f79a6 ("s390/sha3: Support sha3 performance enhancements") Cc: stable@vger.kernel.org Reported-by: Ingo Franzki <ifranzki@linux.ibm.com> Closes: https://lore.kernel.org/r/12740696-595c-4604-873e-aefe8b405fbf@linux.ibm.com Acked-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20250703172316.7914-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
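The fix itself is a one-liner in each affected init function; a hedged sketch, with the function name illustrative:

    static int s390_sha1_init(struct shash_desc *desc)
    {
            struct s390_sha_ctx *sctx = shash_desc_ctx(desc);

            /* ...existing state setup... */
            sctx->first_message_part = 0;   /* previously uninitialized */
            return 0;
    }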
2025-07-03  netfilter: conntrack: remove DCCP protocol support  [Pablo Neira Ayuso, 2 files, -2/+0]
The DCCP socket family has now been removed from this tree, see: 8bb3212be4b4 ("Merge branch 'net-retire-dccp-socket'"). Remove connection tracking and NAT support for this protocol; this should not pose a problem because no DCCP traffic is expected to be seen on the wire.

As for the code for matching on the dccp header for iptables and nftables, mark it as deprecated and keep it in place. Ruleset restoration is an atomic operation. Without dccp matching support, an astray match on dccp could break this operation, leaving your computer with no policy in place, so let's follow a more conservative approach for matches. Add CONFIG_NFT_EXTHDR_DCCP, which is set to 'n' by default, to deprecate dccp extension support. Similarly, label CONFIG_NETFILTER_XT_MATCH_DCCP as deprecated too and also set it to 'n' by default.

Code to match on the DCCP protocol from ebtables also remains in place; this is just a few checks on IPPROTO_DCCP from the _check() path, which is exercised when a ruleset is loaded. There is another use of IPPROTO_DCCP from the _check() path in the iptables multiport match. Another check for IPPROTO_DCCP from the packet in the reject target is also removed. So let's schedule removal of the dccp matching for a second stage; this should not interfere with the dccp retirement since this is only matching on the dccp header.

Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2025-07-02  fs: introduce file_getattr and file_setattr syscalls  [Andrey Albershteyn, 1 file, -0/+2]
Introduce file_getattr() and file_setattr() syscalls to manipulate inode extended attributes. The syscalls take a pair of file descriptor and pathname, and operate on the inode opened according to openat() semantics. The struct file_attr is passed to obtain/change extended attributes.

This is an alternative to the FS_IOC_FSSETXATTR ioctl, with the difference that the file doesn't need to be open, as we can reference it with a path instead of an fd. By having this we can manipulate inode extended attributes not only on regular files but also on special ones. This is not possible with the FS_IOC_FSSETXATTR ioctl, as with special files we cannot call ioctl() directly on the filesystem inode using an fd.

This patch adds two new syscalls which allow userspace to get/set extended inode attributes on special files by using a parent directory and a path, an *at()-like syscall.

CC: linux-api@vger.kernel.org CC: linux-fsdevel@vger.kernel.org CC: linux-xfs@vger.kernel.org Signed-off-by: Andrey Albershteyn <aalbersh@kernel.org> Link: https://lore.kernel.org/20250630-xattrat-syscall-v6-6-c4e3bc35227b@kernel.org Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Christian Brauner <brauner@kernel.org>
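A hedged userspace sketch of the call shape; the exact argument list and syscall number macro are assumptions based on the description above:

    #include <sys/syscall.h>
    #include <unistd.h>

    struct file_attr fattr = { 0 };

    /* Read inode attributes of a special file via parent dirfd + path;
     * no open fd on the target is needed. */
    long err = syscall(__NR_file_getattr, dirfd, "mydev",
                       &fattr, sizeof(fattr), 0 /* at_flags */);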
2025-07-01  s390/bpf: Describe the frame using a struct instead of constants  [Ilya Leoshkevich, 2 files, -77/+47]
Currently the caller-allocated portion of the stack frame is described using constants, hardcoded values, and an ASCII drawing, making it harder than necessary to ensure that everything is in sync. Declare a struct and use offsetof() and offsetofend() macros to refer to various values stored within the frame. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/r/20250624121501.50536-3-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
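A hedged illustration of the technique; the layout below is hypothetical, not the actual s390 frame:

    struct prog_frame {             /* hypothetical slot layout */
            u64 back_chain;
            u64 scratch[2];
            u64 tail_call_cnt;
    };

    /* Displacement of the tail-call counter from the frame base: */
    off = offsetof(struct prog_frame, tail_call_cnt);
    /* First byte past the scratch area: */
    end = offsetofend(struct prog_frame, scratch);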
2025-07-01  s390/bpf: Centralize frame offset calculations  [Ilya Leoshkevich, 1 file, -30/+26]
The calculation of the distance from %r15 to the caller-allocated portion of the stack frame is copy-pasted into multiple places in the JIT code. Move it to bpf_jit_prog() and save the result into bpf_jit::frame_off, so that the other parts of the JIT can use it. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/r/20250624121501.50536-2-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-06-30  lib/crc: s390: Migrate optimized CRC code into lib/crc/  [Eric Biggers, 6 files, -507/+0]
Move the s390-optimized CRC code from arch/s390/lib/crc* into its new location in lib/crc/s390/, and wire it up in the new way. This new way of organizing the CRC code eliminates the need to artificially split the code for each CRC variant into separate arch and generic modules, enabling better inlining and dead code elimination. For more details, see "lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/". Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com> Link: https://lore.kernel.org/r/20250607200454.73587-10-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30  lib/crypto: s390: Move arch/s390/lib/crypto/ into lib/crypto/  [Eric Biggers, 7 files, -1046/+0]
Move the contents of arch/s390/lib/crypto/ into lib/crypto/s390/. The new code organization makes a lot more sense for how this code actually works and is developed. In particular, it makes it possible to build each algorithm as a single module, with better inlining and dead code elimination. For a more detailed explanation, see the patchset which did this for the CRC library code: https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/. Also see the patchset which did this for SHA-512: https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/ This is just a preparatory commit, which does the move to get the files into their new location but keeps them building the same way as before. Later commits will make the actual improvements to the way the arch-optimized code is integrated for each algorithm. Acked-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20250619191908.134235-7-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30  lib/crypto: s390/sha512: Migrate optimized SHA-512 code to library  [Eric Biggers, 5 files, -164/+0]
Instead of exposing the s390-optimized SHA-512 code via s390-specific crypto_shash algorithms, just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be s390-optimized, and it fixes the longstanding issue where the s390-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250630160320.2888-13-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30  crypto: sha512 - Rename conflicting symbols  [Eric Biggers, 1 file, -4/+4]
Rename existing functions and structs in architecture-optimized SHA-512 code that had names conflicting with the upcoming library interface which will be added to <crypto/sha2.h>: sha384_init, sha512_init, sha512_update, sha384, and sha512. Note: all affected code will be superseded by later commits that migrate the arch-optimized SHA-512 code into the library. This commit simply keeps the kernel building for the initial introduction of the library. Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250630160320.2888-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30  s390/smp: Remove conditional emergency signal order code usage  [Heiko Carstens, 1 file, -4/+1]
pcpu_ec_call() uses either the external call or emergency signal order code to signal (aka send an IPI) to a remote CPU. If the remote CPU is not running the emergency signal order is used. Measurements show that always using the external order code is at least as good, and sometimes even better, than the existing code. Therefore remove emergency signal order code usage from pcpu_ec_call(). Suggested-by: Christian Borntraeger <borntraeger@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
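A hedged sketch of the simplified signalling path; the helper and constant are as used elsewhere in the s390 SMP code:

    /* pcpu_ec_call() now signals the remote CPU unconditionally with
     * the external-call order instead of checking whether the CPU is
     * running and falling back to the emergency-signal order: */
    pcpu_sigp_retry(pcpu, SIGP_EXTERNAL_CALL, 0);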
2025-06-29  Merge branch 'uaccess-key' into features  [Alexander Gordeev, 7 files, -187/+312]
Heiko Carstens says:
===================
A rather large series which is supposed to fix the crash below [1], which was seen when running the memop kernel kvm selftest.

The problem is that cmpxchg_user_key() is executing code with a non-default key. If a system is IPL'ed with "LOAD NORMAL", and in addition the previous system used storage keys where the fetch-protection bit is set for some pages, and cmpxchg_user_key() is located within such a page, a protection exception will happen when executing such code.

The idea of this series is to register all code locations running with a non-default key at compile time. All functions which run with a non-default key then must explicitly call an init function which initializes the storage key of all pages containing such code locations with the default key, which prevents such protection exceptions.

Furthermore, all functions containing code which may be executed with a non-default access key must be marked with __kprobes to prevent out-of-line execution of any instruction of such functions, which would result in the same problem.

By default the kernel will not issue any storage key changing instructions like before, which will preserve the keyless-subset mode optimizations in hosts.

Other possible implementations which I discarded:

- Moving the code to an own section. This would require an s390 specific change to modpost.c, which complains about section mismatches (EX_TABLE entries in non-default text section). No other architecture has something similar, so let's keep this architecture specific hack local.

- Just apply the default storage key to the whole kprobes text section. However this would add special s390 semantics to the kprobes text section, which no other architecture has. History has shown that such hacks fire back sooner or later.

Furthermore, and to keep this whole stuff quite simple, this only works for code locations in core kernel code, not within modules. After this series there is no module code left with such code, and as of now I don't see any new kernel code which runs with a non-default access key.

Note: the original crash can be reproduced by replacing

  page_set_storage_key(real, PAGE_DEFAULT_KEY, 1);

with

  page_set_storage_key(real, 8, 1);

in arch/s390/kernel/skey.c:__skey_regions_initialize(), and then running tools/testing/selftests/kvm/s390/memop from the kernel selftests.

[1]:

Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000000000000 TEID: 000000000000080b
Fault in home space mode while using kernel ASCE.
AS:0000000002528007 R3:00000001ffffc007 S:00000001ffffb801 P:000000000000013d
Oops: 0004 ilc:1 [#1]SMP
Modules linked in:
CPU: 3 UID: 0 PID: 791 Comm: memop Not tainted 6.16.0-rc1-00006-g3b568201d0a6-dirty #11 NONE
Hardware name: IBM 3931 A01 704 (z/VM 7.4.0)
Krnl PSW : 0794f00180000000 000003ffe0f4d91e (__cmpxchg_user_key1+0xbe/0x190)
           R:0 T:1 IO:1 EX:1 Key:9 M:1 W:0 P:0 AS:3 CC:3 PM:0 RI:0 EA:3
Krnl GPRS: 070003ffdfbf6af0 0000000000070000 0000000095b5a300 0000000000000000
           00000000f1000000 0000000000000000 0000000000000090 0000000000000000
           0000000000000040 0000000000000018 000003ff9b23d000 0000037fe0ef7bd8
           000003ffdfbf7500 00000000962e4000 0000037f00ffffff 0000037fe0ef7aa0
Krnl Code: 000003ffe0f4d912: ad03f0a0            stosm 160(%r15),3
           000003ffe0f4d916: a7780000            lhi   %r7,0
          #000003ffe0f4d91a: b20a6000            spka  0(%r6)
          >000003ffe0f4d91e: b2790100            sacf  256
           000003ffe0f4d922: a56f0080            llill %r6,128
           000003ffe0f4d926: 5810a000            l     %r1,0(%r10)
           000003ffe0f4d92a: 141e                nr    %r1,%r14
           000003ffe0f4d92c: c0e7ffffffff        xilf  %r14,4294967295
Call Trace:
 [<000003ffe0f4d91e>] __cmpxchg_user_key1+0xbe/0x190
 [<000003ffe0189c6e>] cmpxchg_guest_abs_with_key+0x2fe/0x370
 [<000003ffe016d28e>] kvm_s390_vm_mem_op_cmpxchg+0x17e/0x350
 [<000003ffe0173284>] kvm_arch_vm_ioctl+0x354/0x6f0
 [<000003ffe015fedc>] kvm_vm_ioctl+0x2cc/0x6e0
 [<000003ffe05348ae>] vfs_ioctl+0x2e/0x70
 [<000003ffe0535e70>] __s390x_sys_ioctl+0xe0/0x100
 [<000003ffe0f40f06>] __do_syscall+0x136/0x340
 [<000003ffe0f4cb2e>] system_call+0x6e/0x90
Last Breaking-Event-Address:
 [<000003ffe0f4d896>] __cmpxchg_user_key1+0x36/0x190
===================
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-29  s390/uaccess: Merge cmpxchg_user_key() inline assemblies  [Heiko Carstens, 1 file, -55/+25]
The inline assemblies for __cmpxchg_user_key1() and __cmpxchg_user_key2() are identical. Get rid of the duplication and provide a common helper function. Suggested-by: Mete Durlu <meted@linux.ibm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-29  s390/uaccess: Prevent kprobes on cmpxchg_user_key() functions  [Heiko Carstens, 1 file, -10/+11]
Code regions within cmpxchg_user_key() functions may be executed with a non-default access key, which may lead to a protection exception if the corresponding page has the fetch-protection bit enabled. There is code in place which initializes the storage keys of such pages when needed. However there is also the possibility of out-of-line execution of such code in case a kprobe is set within such a region. To avoid this problem prevent that any kprobe can be set within the cmpxchg_user_key() functions. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
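A hedged sketch of the annotation; the signature is assumed:

    /* __kprobes places the function in .kprobes.text, so no instruction
     * in it can be copied and single-stepped out of line while a
     * non-default access key is in force. */
    static int __kprobes __cmpxchg_user_key1(u64 uaddr, u8 *uval,
                                             u8 old, u8 new, u8 access_key)
    {
            int rc = 0;

            /* key-switching inline assembly lives here */
            return rc;
    }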
2025-06-29  s390/uaccess: Initialize code pages executed with non-default access key  [Heiko Carstens, 1 file, -5/+21]
cmpxchg_user_key() may be executed with a non-zero key; if the storage key of the page which belongs to the cmpxchg_user_key() code then contains a key with fetch-protection enabled, the result is a protection exception:

Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000000000000 TEID: 000000000000080b
Fault in home space mode while using kernel ASCE.
AS:0000000002528007 R3:00000001ffffc007 S:00000001ffffb801 P:000000000000013d
Oops: 0004 ilc:1 [#1]SMP
Modules linked in:
CPU: 3 UID: 0 PID: 791 Comm: memop Not tainted 6.16.0-rc1-00006-g3b568201d0a6-dirty #11 NONE
Hardware name: IBM 3931 A01 704 (z/VM 7.4.0)
Krnl PSW : 0794f00180000000 000003ffe0f4d91e (__cmpxchg_user_key1+0xbe/0x190)
           R:0 T:1 IO:1 EX:1 Key:9 M:1 W:0 P:0 AS:3 CC:3 PM:0 RI:0 EA:3
Krnl GPRS: 070003ffdfbf6af0 0000000000070000 0000000095b5a300 0000000000000000
           00000000f1000000 0000000000000000 0000000000000090 0000000000000000
           0000000000000040 0000000000000018 000003ff9b23d000 0000037fe0ef7bd8
           000003ffdfbf7500 00000000962e4000 0000037f00ffffff 0000037fe0ef7aa0
Krnl Code: 000003ffe0f4d912: ad03f0a0            stosm 160(%r15),3
           000003ffe0f4d916: a7780000            lhi   %r7,0
          #000003ffe0f4d91a: b20a6000            spka  0(%r6)
          >000003ffe0f4d91e: b2790100            sacf  256
           000003ffe0f4d922: a56f0080            llill %r6,128
           000003ffe0f4d926: 5810a000            l     %r1,0(%r10)
           000003ffe0f4d92a: 141e                nr    %r1,%r14
           000003ffe0f4d92c: c0e7ffffffff        xilf  %r14,4294967295
Call Trace:
 [<000003ffe0f4d91e>] __cmpxchg_user_key1+0xbe/0x190
 [<000003ffe0189c6e>] cmpxchg_guest_abs_with_key+0x2fe/0x370
 [<000003ffe016d28e>] kvm_s390_vm_mem_op_cmpxchg+0x17e/0x350
 [<000003ffe0173284>] kvm_arch_vm_ioctl+0x354/0x6f0
 [<000003ffe015fedc>] kvm_vm_ioctl+0x2cc/0x6e0
 [<000003ffe05348ae>] vfs_ioctl+0x2e/0x70
 [<000003ffe0535e70>] __s390x_sys_ioctl+0xe0/0x100
 [<000003ffe0f40f06>] __do_syscall+0x136/0x340
 [<000003ffe0f4cb2e>] system_call+0x6e/0x90
Last Breaking-Event-Address:
 [<000003ffe0f4d896>] __cmpxchg_user_key1+0x36/0x190

Fix this by defining all code ranges within the cmpxchg_user_key() functions which may be executed with a non-default key, and explicitly initialize storage keys by calling skey_regions_initialize().

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-29  s390/skey: Provide infrastructure for executing with non-default access key  [Heiko Carstens, 4 files, -1/+88]
The current assumption is that kernel code is always executed with access key zero, which means that storage key protection does not apply. However this assumption is not correct: cmpxchg_user_key() may be executed with a non-zero key; if then the storage key of the page which belongs to the cmpxchg_user_key() code contains a key with fetch-protection enabled the result is a protection exception. For several performance optimizations storage keys are not initialized on system boot. To keep these optimizations add infrastructure which allows to define code ranges within functions which are executed with a non-default key. When such code is executed such functions must explicitly call skey_regions_initialize(). This will initialize all storage keys belonging to such code ranges in a way that no protection exceptions happen when the code is executed with a non-default access key. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
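The usage pattern, as a hedged sketch; the registration macro name is hypothetical, while skey_regions_initialize() is the entry point named above:

    /* Register the code range at compile time (hypothetical macro): */
    SKEY_REGION(cmpxchg_user_key_start, cmpxchg_user_key_end);

    /* Callers must initialize the storage keys of the registered pages
     * before first executing such code with a non-default access key: */
    skey_regions_initialize();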
2025-06-29  s390/uaccess: Make cmpxchg_user_key() library code  [Heiko Carstens, 2 files, -181/+224]
Move cmpxchg_user_key() handling to uaccess library code. The generated code is large in any case so that there is hardly any benefit if it is inlined. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-29  s390/page: Add memory clobber to page_set_storage_key()  [Heiko Carstens, 1 file, -2/+4]
Add memory clobbers to the page_set_storage_key() inline assemblies. This allows for data dependencies from other code, which is important to prevent the compiler from reordering instructions if required. Note that this doesn't fix a bug in existing code; this is just a prerequisite for upcoming code changes. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
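A hedged sketch of the change, with the instruction operands simplified; the new part is the "memory" clobber:

    static inline void page_set_storage_key(unsigned long addr,
                                            unsigned char skey, int mapped)
    {
            asm volatile("sske %[skey],%[addr]"
                         : /* no outputs */
                         : [skey] "d" (skey), [addr] "a" (addr)
                         : "memory");   /* create a data dependency so the
                                         * compiler cannot reorder this past
                                         * surrounding accesses */
    }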
2025-06-29  s390/page: Cleanup page_set_storage_key() inline assemblies  [Heiko Carstens, 1 file, -5/+11]
Add extra lines, indentations, and symbolic names for operands in order to make the two page_set_storage_key() inline assemblies a bit more readable. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-28  s390/pci: Allow automatic recovery with minimal driver support  [Niklas Schnelle, 1 file, -15/+29]
According to Documentation/PCI/pci-error-recovery.rst only the error_detected() callback in the err_handler struct is mandatory for a driver to support error recovery. So far s390's error recovery chose a stricter approach also requiring slot_reset() and resume(). Relax this requirement and only require error_detected(). If a callback is not implemented EEH and AER treat this as PCI_ERS_RESULT_NONE. This return value is otherwise used by drivers abstaining from their vote on how to proceed with recovery and currently also not supported by s390's recovery code. So to support missing callbacks in-line with other implementors of the recovery flow, also handle PCI_ERS_RESULT_NONE. Since s390 only does per PCI function recovery and does not do voting, treat PCI_ERS_RESULT_NONE optimistically and proceed through recovery unless other failures prevent this. Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Reviewed-by: Julian Ruess <julianr@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
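For driver authors, a hedged sketch of the minimum now required for recovery on s390:

    static pci_ers_result_t my_error_detected(struct pci_dev *pdev,
                                              pci_channel_state_t state)
    {
            /* quiesce the device ... */
            return PCI_ERS_RESULT_RECOVERED;
    }

    static const struct pci_error_handlers my_err_handlers = {
            .error_detected = my_error_detected,
            /* .slot_reset and .resume may be omitted; a missing callback
             * is treated like PCI_ERS_RESULT_NONE and, on s390, recovery
             * optimistically proceeds. */
    };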
2025-06-28  s390/pci: Do not try re-enabling load/store if device is disabled  [Niklas Schnelle, 1 file, -0/+4]
If a device is disabled, unblocking load/store on its own is not useful, as a full re-enable of the function is necessary anyway. Note that SCLP Write Event Data Action Qualifier 0 (Reset) leaves the device disabled and triggers this case unless the driver already requests a reset. Cc: stable@vger.kernel.org Fixes: 4cdf2f4e24ff ("s390/pci: implement minimal PCI error recovery") Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-28  s390/pci: Fix stale function handles in error handling  [Niklas Schnelle, 1 file, -0/+11]
The error event information for PCI error events contains a function handle for the respective function. This handle is generally captured at the time the error event was recorded. Due to delays in processing or cascading issues, it may happen that during firmware recovery multiple events are generated. When processing these events in order Linux may already have recovered an affected function making the event information stale. Fix this by doing an unconditional CLP List PCI function retrieving the current function handle with the zdev->state_lock held and ignoring the event if its function handle is stale. Cc: stable@vger.kernel.org Fixes: 4cdf2f4e24ff ("s390/pci: implement minimal PCI error recovery") Reviewed-by: Julian Ruess <julianr@linux.ibm.com> Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
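A hedged sketch of the staleness check; the CLP helper name and event-info field are assumptions:

    mutex_lock(&zdev->state_lock);
    clp_refresh_fh(zdev->fid, &fh);   /* assumed CLP List PCI helper */
    if (fh != ccdf->fh) {
            /* The function was already recovered in the meantime;
             * the recorded handle is stale, so ignore this event. */
            mutex_unlock(&zdev->state_lock);
            return;
    }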
2025-06-26  s390/nmi: Print additional information  [Heiko Carstens, 3 files, -5/+75]
In case of an unrecoverable machine check only the machine check interrupt code is printed to the console before the machine is stopped. This makes root cause analysis sometimes hard. Print additional machine check information to make analysis easier. The output now looks like this:

Unrecoverable machine check, code: 00400F5F4C3B0000
6.16.0-rc2-11605-g987a9431e53a-dirty
HW: IBM 3931 A01 704 (z/VM 7.4.0)
PSW: 0706C00180000000 000003FFE0F0462E
PFX: 0000000000070000 LBA: 000003FFE0F0462A
EDC: 0000000000000000 FSA: 0000000000000000
CRS: 0080000014966A12 0000000087CB41C7 0000000000BFF140 0000000000000000
     000000000000FFFF 0000000000BFF140 0000000071000000 0000000087CB41C7
     0000000000008000 0000000000000000 0000000000000000 0000000000000000
     0000000000000000 00000000024C0007 00000000DB000000 0000000000BFF000
GPRS: FFFFFFFF00000000 000003FFE0F0462E E10EA4F489F897A6 0000000000000000
      7FFFFFF2C0413C4C 000003FFE19B7010 0000000000000000 0000000000000000
      0000000000000000 00000001F76B3380 000003FFE15D4050 0000000000000005
      0000000000000000 0000000000070000 000003FFE0F0586C 0000037FE00B7DA0
System stopped

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-26  crypto: s390 - Add selftest support for phmac  [Harald Freudenberger, 1 file, -0/+137]
Add key preparation code to the phmac setkey function for the case that the selftest is running: as long as crypto_ahash_tested() returns false, all setkey() invocations are assumed to carry sheer hmac clear key values and thus need some preparation to work with the phmac implementation. Thus it is possible to use the already available hmac test vectors implemented in the testmanager to test the phmac code. When crypto_ahash_tested() returns true (that is, after larval state) the phmac code assumes the key material is a blob digestible by the pkey kernel module, which converts the blob into a working key for the phmac code. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-26  crypto: s390 - New s390 specific protected key hash phmac  [Harald Freudenberger, 4 files, -0/+914]
Add support for protected key hmac ("phmac") for s390 arch. With the latest machine generation there is now support for protected key (that is a key wrapped by a master key stored in firmware) hmac for sha2 (sha224, sha256, sha384 and sha512) for the s390 specific CPACF instruction kmac. This patch adds support via 4 new ahashes registered as phmac(sha224), phmac(sha256), phmac(sha384) and phmac(sha512). Co-developed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-26  s390/crypto: Add protected key hmac subfunctions for KMAC  [Holger Dengler, 1 file, -0/+4]
The CPACF KMAC instruction supports new subfunctions for protected key hmac. Add defines for these 4 new subfunctions. Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-25  s390/boot: Use -D__DISABLE_EXPORTS  [Petr Pavlu, 1 file, -2/+2]
Files in the arch/s390/boot directory reuse logic from the rest of the kernel by including certain C and assembly files from the kernel and lib directories. Some of these included files contain EXPORT_SYMBOL directives. For instance, arch/s390/boot/cmdline.c includes lib/cmdline.c, which exports the get_option() function. This inclusion triggers genksyms processing for the files in arch/s390/boot, which is unnecessary and slows down the build. Additionally, when KBUILD_SYMTYPES=1 is set, the generated symtypes data contain exported symbols that are duplicated with the main kernel. This duplication can confuse external kABI tools that process the symtypes data. Address this issue by compiling the files in arch/s390/boot with -D__DISABLE_EXPORTS. Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20250620154649.116068-1-petr.pavlu@suse.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-23  s390/boot: Use the full title of the manual for facility bits  [Xose Vazquez Perez, 1 file, -1/+1]
Also indicate the name of the section where facility bits are listed, because the manual has a length of 2124 pages. The current version is Fourteenth Edition (May, 2022) SA22-7832-13 Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: S390 ML <linux-s390@vger.kernel.org> Signed-off-by: Xose Vazquez Perez <xose.vazquez@gmail.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20250616163248.77951-1-xose.vazquez@gmail.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-23  Merge 6.16-rc3 into driver-core-next  [Greg Kroah-Hartman, 1 file, -1/+1]
We need the driver-core fixes that are in 6.16-rc3 into here as well to build on top of. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-06-17  s390: Remove unnecessary include <linux/export.h>  [Heiko Carstens, 9 files, -9/+0]
Remove include <linux/export.h> from all files which do not contain an EXPORT_SYMBOL(). See commit 7d95680d64ac ("scripts/misc-check: check unnecessary #include <linux/export.h> when W=1") for more details. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-17  s390: Explicitly include <linux/export.h>  [Heiko Carstens, 24 files, -0/+30]
Explicitly include <linux/export.h> in files which contain an EXPORT_SYMBOL(). See commit a934a57a42f6 ("scripts/misc-check: check missing #include <linux/export.h> when W=1") for more details. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-17  s390/ptrace: Fix pointer dereferencing in regs_get_kernel_stack_nth()  [Heiko Carstens, 1 file, -1/+1]
The recent change which added READ_ONCE_NOCHECK() to read the nth entry from the kernel stack incorrectly dropped dereferencing of the stack pointer in order to read the requested entry. In result the address of the entry is returned instead of its content. Dereference the pointer again to fix this. Reported-by: Will Deacon <will@kernel.org> Closes: https://lore.kernel.org/r/20250612163331.GA13384@willie-the-truck Fixes: d93a855c31b7 ("s390/ptrace: Avoid KASAN false positives in regs_get_kernel_stack_nth()") Cc: stable@vger.kernel.org Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
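A hedged sketch of the corrected helper; its exact shape is assumed:

    unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
                                            unsigned int n)
    {
            unsigned long addr;

            addr = kernel_stack_pointer(regs) + n * sizeof(long);
            if (!regs_within_kernel_stack(regs, addr))
                    return 0;
            /* Dereference the stack slot again; the regression returned
             * 'addr' itself instead of its content. */
            return READ_ONCE_NOCHECK(*(unsigned long *)addr);
    }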