<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/arch/arm64, branch master</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=master</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=master'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2026-05-01T23:32:42+00:00</updated>
<entry>
<title>Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux</title>
<updated>2026-05-01T23:32:42+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2026-05-01T23:32:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=cd546f7ae2fce8b695c834143b50e712d62ebed8'/>
<id>urn:sha1:cd546f7ae2fce8b695c834143b50e712d62ebed8</id>
<content type='text'>
Pull arm64 fixes from Catalin Marinas:

 - Avoid writing an uninitialised stack variable to POR_EL0 on sigreturn
   if the poe_context record is absent

 - Reserve one more page for the early 4K-page kernel mapping to cover
   the extra [_text, _stext) split introduced by the non-executable
   read-only mapping

 - Force the arch_local_irq_*() wrappers to be __always_inline so that
   noinstr entry and idle paths cannot call out-of-line, instrumentable
   copies

 - Fix potential sign extension in the arm64 SCS unwinder's DWARF
   advance_loc4 decoding

 - Tolerate arm64 ACPI platforms with only WFI and no deeper PSCI idle
   states, restoring cpuidle registration on such systems

 - Include the UAPI &lt;asm/ptrace.h&gt; header in the arm64 GCS libc test
   rather than carrying a duplicate struct user_gcs definition (the
   original #ifdef NT_ARM_GCS was wrong to cover the structure
   definition as it would be masked out if the toolchain defined it)

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: signal: Preserve POR_EL0 if poe_context is missing
  arm64: Reserve an extra page for early kernel mapping
  kselftest/arm64: Include &lt;asm/ptrace.h&gt; for user_gcs definition
  ACPI: arm64: cpuidle: Tolerate platforms with no deep PSCI idle states
  arm64/irqflags: __always_inline the arch_local_irq_*() helpers
  arm64/scs: Fix potential sign extension issue of advance_loc4
</content>
</entry>
<entry>
<title>arm64: signal: Preserve POR_EL0 if poe_context is missing</title>
<updated>2026-05-01T16:44:25+00:00</updated>
<author>
<name>Kevin Brodsky</name>
<email>kevin.brodsky@arm.com</email>
</author>
<published>2026-04-27T12:03:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=030e8a40fff65ca6ac1c04a4d3c08afe72438922'/>
<id>urn:sha1:030e8a40fff65ca6ac1c04a4d3c08afe72438922</id>
<content type='text'>
Commit 2e8a1acea859 ("arm64: signal: Improve POR_EL0 handling to
avoid uaccess failures") delayed the write to POR_EL0 in
rt_sigreturn to avoid spurious uaccess failures. This change however
relies on the poe_context frame record being present: on a system
supporting POE, calling sigreturn without a poe_context record now
results in writing arbitrary data from the kernel stack into POR_EL0.

Fix this by adding a __valid_fields member to struct
user_access_state, and zeroing the struct on allocation.
restore_poe_context() then indicates that the por_el0 field is valid
by setting the corresponding bit in __valid_fields, and
restore_user_access_state() only touches POR_EL0 if there is a valid
value to set it to. This is in line with how POR_EL0 was originally
handled; all frame records are currently optional, except
fpsimd_context.

To ensure that __valid_fields is kept in sync, fields (currently
just por_el0) are now accessed via accessors and prefixed with __ to
discourage direct access.
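The scheme can be sketched in Python (illustrative only; the kernel code is C, and the set below stands in for the __valid_fields bitmask):

```python
class UserAccessState:
    """Illustrative model of struct user_access_state (names mirror the commit)."""
    def __init__(self):
        self.valid_fields = set()   # models __valid_fields; the struct is zeroed on allocation
        self.por_el0 = 0            # models __por_el0; only written via the accessor

    def set_por_el0(self, value):
        # accessor keeps the valid-field tracking in sync with the stored value
        self.por_el0 = value
        self.valid_fields.add('por_el0')

def restore_user_access_state(state, regs):
    # Only write POR_EL0 when sigreturn actually carried a poe_context record.
    if 'por_el0' in state.valid_fields:
        regs['POR_EL0'] = state.por_el0
```

Without the valid marker, regs['POR_EL0'] keeps its previous value rather than receiving uninitialised data, matching how the absent-record case was originally handled.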

Fixes: 2e8a1acea859 ("arm64: signal: Improve POR_EL0 handling to avoid uaccess failures")
Cc: &lt;stable@vger.kernel.org&gt;
Reported-by: Will Deacon &lt;will@kernel.org&gt;
Signed-off-by: Kevin Brodsky &lt;kevin.brodsky@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Reserve an extra page for early kernel mapping</title>
<updated>2026-05-01T15:20:35+00:00</updated>
<author>
<name>Zhaoyang Huang</name>
<email>zhaoyang.huang@unisoc.com</email>
</author>
<published>2026-04-30T08:58:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4d8e74ad4585672489da6145b3328d415f50db82'/>
<id>urn:sha1:4d8e74ad4585672489da6145b3328d415f50db82</id>
<content type='text'>
The final part of the [data, end) segment may overflow into the page
following init_pg_end[1], which is the gap page before early_init_stack[2]:

[1]
crash_arm64_v9.0.1&gt; vtop ffffffed00601000
VIRTUAL           PHYSICAL
ffffffed00601000  83401000

PAGE DIRECTORY: ffffffecffd62000
   PGD: ffffffecffd62da0 =&gt; 10000000833fb003
   PMD: ffffff80033fb018 =&gt; 10000000833fe003
   PTE: ffffff80033fe008 =&gt; 68000083401f03
  PAGE: 83401000

     PTE        PHYSICAL  FLAGS
68000083401f03  83401000  (VALID|SHARED|AF|NG|PXN|UXN)

      PAGE       PHYSICAL      MAPPING       INDEX CNT FLAGS
fffffffec00d0040 83401000                0        0  1 4000 reserved

[2]
ffffffed002c8000 (r) __pi__data
ffffffed0054e000 (d) __pi___bss_start
ffffffed005f5000 (b) __pi_init_pg_dir
ffffffed005fe000 (b) __pi_init_pg_end
ffffffed005ff000 (B) early_init_stack
ffffffed00608000 (b) __pi__end

For 4K pages, the early kernel mapping may use 2MB block entries but the
kernel segments are only 64KB aligned. Segment boundaries that fall
within a 2MB block therefore require a PTE table so that different
attributes can be applied on either side of the boundary.

KERNEL_SEGMENT_COUNT still correctly counts the five permanent kernel
VMAs registered by declare_kernel_vmas(). However, since commit
5973a62efa34 ("arm64: map [_text, _stext) virtual address range
non-executable+read-only"), the early mapper also maps [_text, _stext)
separately from [_stext, _etext). This adds one more early-only split
and can require one more page-table page than the existing
EARLY_SEGMENT_EXTRA_PAGES allowance reserves.

Increase the 4K-page early mapping allowance by one page to cover that
additional split.
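The counting can be illustrated with a small Python sketch (the addresses are hypothetical; only the alignment relationship matters): each segment boundary that is 64 KiB aligned but not 2 MiB aligned forces the early mapper to break a 2 MiB block into a PTE table, costing one page-table page.

```python
SZ_2M = 2 * 1024 * 1024

def extra_pte_pages(boundaries):
    # one PTE-table page per segment boundary that falls inside a 2 MiB block
    return sum(1 for b in boundaries if b % SZ_2M != 0)

base = 0x80000000            # hypothetical, 2 MiB aligned kernel base
_text = base
_stext = base + 0x10000      # 64 KiB aligned, not 2 MiB aligned

without_split = extra_pte_pages([_text])           # [_text, _etext) as one segment
with_split = extra_pte_pages([_text, _stext])      # [_text, _stext) mapped separately
assert with_split == without_split + 1             # one more page-table page needed
```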

Fixes: 5973a62efa34 ("arm64: map [_text, _stext) virtual address range non-executable+read-only")
Assisted-by: TRAE:GLM-5.1
Suggested-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Zhaoyang Huang &lt;zhaoyang.huang@unisoc.com&gt;
[catalin.marinas@arm.com: rewrote part of the commit log]
[catalin.marinas@arm.com: expanded the code comment]
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64/irqflags: __always_inline the arch_local_irq_*() helpers</title>
<updated>2026-04-27T12:13:36+00:00</updated>
<author>
<name>Breno Leitao</name>
<email>leitao@debian.org</email>
</author>
<published>2026-04-21T15:58:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=caecde119e341acd9819cbc1c54edf6caa6c6389'/>
<id>urn:sha1:caecde119e341acd9819cbc1c54edf6caa6c6389</id>
<content type='text'>
The arch_local_irq_*() wrappers in &lt;asm/irqflags.h&gt; dispatch between two
underlying primitives: the __daif_* path on most systems, and the
__pmr_* path on builds that use GIC PMR-based masking (Pseudo-NMI). The
leaf primitives are already __always_inline, but the wrappers themselves
are plain "static inline".

That is unsafe for noinstr callers: nothing prevents the compiler from
emitting an out-of-line copy of e.g. arch_local_irq_disable(), and an
out-of-line copy can be instrumented (ftrace, kcov, sanitizers), which
breaks the noinstr contract on the entry/idle paths that rely on these
helpers.

x86 hit and fixed exactly this class of bug in commit 7a745be1cc90
("x86/entry: __always_inline irqflags for noinstr").

Force-inline all of the arch_local_irq_*() wrappers so they cannot be
emitted out-of-line:

  - arch_local_irq_enable()
  - arch_local_irq_disable()
  - arch_local_save_flags()
  - arch_irqs_disabled_flags()
  - arch_irqs_disabled()
  - arch_local_irq_save()
  - arch_local_irq_restore()

The primary motivation is noinstr safety, but there is a useful side
effect for fleet-wide profiling: when the wrapper is emitted
out-of-line, samples taken inside it during the post-WFI IRQ unmask in
default_idle_call() are attributed to arch_local_irq_enable() rather
than default_idle_call(), and the FP unwinder loses default_idle_call()
from the chain.

Signed-off-by: Breno Leitao &lt;leitao@debian.org&gt;
Reviewed-by: Leonardo Bras &lt;leo.bras@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64/scs: Fix potential sign extension issue of advance_loc4</title>
<updated>2026-04-27T11:16:26+00:00</updated>
<author>
<name>Wentao Guan</name>
<email>guanwentao@uniontech.com</email>
</author>
<published>2026-04-13T09:54:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4023b7424ecd5d38cc75b650d6c1bf630ef8cb40'/>
<id>urn:sha1:4023b7424ecd5d38cc75b650d6c1bf630ef8cb40</id>
<content type='text'>
The expressions (*opcode++ &lt;&lt; 24) and exp * code_alignment_factor
may overflow signed int and become negative.

Fix this by casting each byte to u64 before shifting. Also fix
the misaligned break statement while we are here.

An example of the result can be seen here:
Link: https://godbolt.org/z/zhY8d3595

It may not be a real problem today, but it could become an issue in
the future.
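The hazard can be reproduced in a small Python model of 32-bit and 64-bit two's-complement arithmetic (the kernel fix itself is a C cast to u64 before the shift; 2**24 below stands in for the left shift by 24):

```python
def to_i32(x):
    # interpret the low 32 bits of x as a signed 32-bit value
    x = x % 2**32
    return x - 2**32 if x // 2**31 else x

def to_u64(x):
    # interpret x as an unsigned 64-bit value (models widening to u64)
    return x % 2**64

b = 0x80                          # high byte of an advance_loc4 operand
as_int = to_i32(b * 2**24)        # the shifted byte lands in the sign bit
bad = to_u64(as_int)              # sign-extends when widened to 64 bits
good = to_u64(b * 2**24)          # widen first, as the fix does

assert as_int == -2147483648
assert bad == 0xFFFFFFFF80000000  # corrupted location offset
assert good == 0x0000000080000000 # intended value
```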

Fixes: d499e9627d70 ("arm64/scs: Fix handling of advance_loc4")
Signed-off-by: Wentao Guan &lt;guanwentao@uniontech.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'kvmarm-fixes-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD</title>
<updated>2026-04-27T08:24:34+00:00</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2026-04-27T08:24:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=909eac682c984c3cb02485d5950c2a8d573c1667'/>
<id>urn:sha1:909eac682c984c3cb02485d5950c2a8d573c1667</id>
<content type='text'>
KVM/arm64 fixes for 7.1, take #1

- Allow tracing for non-pKVM, which was accidentally disabled when
  the series was merged

- Rationalise the way the pKVM hypercall ranges are defined by using
  the same mechanism as already used for the vcpu_sysreg enum

- Enforce that SMCCC function numbers relayed by the pKVM proxy are
  actually compliant with the specification

- Fix a couple of feature-to-idreg mappings that resulted in the
  wrong sanitisation being applied

- Fix the GICD_IIDR revision number field that could never be
  written correctly by userspace

- Make kvm_vcpu_initialized() correctly use its parameter instead
  of relying on the surrounding context

- Enforce correct ordering in __pkvm_init_vcpu(), plugging a
  potential pin leak at the same time

- Move __pkvm_init_finalise() to a less dangerous spot, avoiding
  future problems

- Restore functional userspace irqchip support after a four-year
  breakage (last functional kernel was 5.18...). This is obviously
  ripe for garbage collection.

- ... and the usual lot of spelling fixes
</content>
</entry>
<entry>
<title>KVM: arm64: Wake-up from WFI when irqchip is in userspace</title>
<updated>2026-04-24T11:03:57+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2026-04-23T16:36:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4ce98bf0865c349e7026ad9c14f48da264920953'/>
<id>urn:sha1:4ce98bf0865c349e7026ad9c14f48da264920953</id>
<content type='text'>
It appears that there is nothing in the wake-up path that
evaluates whether the in-kernel interrupts are pending unless
we have a vgic.

This means that the userspace irqchip support has been broken for
about four years, and nobody noticed. It was also broken before, as
we wouldn't wake up on a PMU interrupt, but hey, who cares...

It is probably time to remove the feature altogether, because it
was a terrible idea 10 years ago, and it still is.

Fixes: b57de4ffd7c6d ("KVM: arm64: Simplify kvm_cpu_has_pending_timer()")
Link: https://patch.msgid.link/20260423163607.486345-1-maz@kernel.org
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: stable@vger.kernel.org
</content>
</entry>
<entry>
<title>KVM: arm64: Fix initialisation order in __pkvm_init_finalise()</title>
<updated>2026-04-24T11:03:57+00:00</updated>
<author>
<name>Quentin Perret</name>
<email>qperret@google.com</email>
</author>
<published>2026-04-24T08:49:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=5bb0aed57ba944f8c201e4e82ec066e0187e0f85'/>
<id>urn:sha1:5bb0aed57ba944f8c201e4e82ec066e0187e0f85</id>
<content type='text'>
fix_host_ownership() walks the hypervisor's stage-1 page-table to
adjust the host's stage-2 accordingly. Any such adjustment that
requires cache maintenance operations depends on the per-CPU hyp
fixmap being present. However, fix_host_ownership() is currently
called before fix_hyp_pgtable_refcnt() and hyp_create_fixmap(), so
the fixmap does not yet exist when it runs.

This is benign today because the host stage-2 starts empty and no
CMOs are needed, but it becomes a latent crash as soon as
fix_host_ownership() is extended to operate on a non-empty
page-table.

Reorder the calls so that fix_hyp_pgtable_refcnt() and
hyp_create_fixmap() complete before fix_host_ownership() is invoked.
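A toy Python model of the reorder (function names mirror the commit; the bodies are stand-ins for the real EL2 setup):

```python
hyp = {'fixmap_ready': False}

def fix_hyp_pgtable_refcnt():
    pass                      # stage-1 refcount fix-up, irrelevant to ordering here

def hyp_create_fixmap():
    hyp['fixmap_ready'] = True

def fix_host_ownership():
    # a cache maintenance operation without the fixmap would be the
    # latent crash described above
    assert hyp['fixmap_ready'], 'fixmap not created yet'

# corrected order: the fixmap exists before fix_host_ownership() runs
fix_hyp_pgtable_refcnt()
hyp_create_fixmap()
fix_host_ownership()
```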

Fixes: 0d16d12eb26e ("KVM: arm64: Fix-up hyp stage-1 refcounts for all pages mapped at EL2")
Signed-off-by: Quentin Perret &lt;qperret@google.com&gt;
Signed-off-by: Fuad Tabba &lt;tabba@google.com&gt;
Link: https://patch.msgid.link/20260424084908.370776-7-tabba@google.com
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: stable@vger.kernel.org
</content>
</entry>
<entry>
<title>KVM: arm64: Fix pin leak and publication ordering in __pkvm_init_vcpu()</title>
<updated>2026-04-24T11:03:57+00:00</updated>
<author>
<name>Fuad Tabba</name>
<email>tabba@google.com</email>
</author>
<published>2026-04-24T08:49:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=73b9c1e5da84cd69b1a86e374e450817cd051371'/>
<id>urn:sha1:73b9c1e5da84cd69b1a86e374e450817cd051371</id>
<content type='text'>
Two bugs exist in the vCPU initialisation path:

1. If a check fails after hyp_pin_shared_mem() succeeds, the cleanup
   path jumps to 'unlock' without calling unpin_host_vcpu() or
   unpin_host_sve_state(), permanently leaking pin references on the
   host vCPU and SVE state pages.

   Extract a register_hyp_vcpu() helper that performs the checks and
   the store. When register_hyp_vcpu() returns an error, call
   unpin_host_vcpu() and unpin_host_sve_state() inline before falling
   through to the existing 'unlock' label.

2. register_hyp_vcpu() publishes the new vCPU pointer into
   'hyp_vm-&gt;vcpus[]' with a bare store, allowing a concurrent caller
   of pkvm_load_hyp_vcpu() to observe a partially initialised vCPU
   object.

   Ensure the store uses smp_store_release() and the load uses
   smp_load_acquire(). While 'vm_table_lock' currently serialises the
   store and the load, these barriers ensure the reader sees the fully
   initialised 'hyp_vcpu' object even if there were a lockless path or
   if the lock's own ordering guarantees were insufficient for nested
   object initialization.
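The error-path fix (bug 1) can be modelled in Python (names mirror the C helpers; the counter is a stand-in for the real pin accounting):

```python
class HostPages:
    """Stand-in for the host vCPU and SVE state pages."""
    def __init__(self):
        self.pins = 0

def hyp_pin_shared_mem(pages):
    pages.pins += 1

def unpin_host_vcpu(pages):
    pages.pins -= 1

def init_vcpu(pages, register_ok):
    hyp_pin_shared_mem(pages)
    if not register_ok:          # register_hyp_vcpu() rejected the vCPU
        unpin_host_vcpu(pages)   # the fix: drop the pins before 'unlock'
        return False
    return True

pages = HostPages()
assert init_vcpu(pages, register_ok=False) is False
assert pages.pins == 0           # no leaked pin on the failure path
```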

Fixes: 49af6ddb8e5c ("KVM: arm64: Add infrastructure to create and track pKVM instances at EL2")
Reported-by: Ben Simner &lt;ben.simner@cl.cam.ac.uk&gt;
Co-developed-by: Will Deacon &lt;willdeacon@google.com&gt;
Signed-off-by: Will Deacon &lt;willdeacon@google.com&gt;
Signed-off-by: Fuad Tabba &lt;tabba@google.com&gt;
Link: https://patch.msgid.link/20260424084908.370776-6-tabba@google.com
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: stable@vger.kernel.org
</content>
</entry>
<entry>
<title>KVM: arm64: Fix kvm_vcpu_initialized() macro parameter</title>
<updated>2026-04-24T11:03:57+00:00</updated>
<author>
<name>Fuad Tabba</name>
<email>tabba@google.com</email>
</author>
<published>2026-04-24T08:49:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d89fdda7dd8a488f922e1175e6782f781ba8a23b'/>
<id>urn:sha1:d89fdda7dd8a488f922e1175e6782f781ba8a23b</id>
<content type='text'>
The macro is defined with parameter 'v' but the body references the
literal token 'vcpu' instead, causing it to silently operate on whatever
'vcpu' resolves to in the caller's scope rather than the value passed by
the caller. All current call sites happen to use a variable named 'vcpu',
so the bug is latent.
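A Python analogue of the scoping bug (the real fix is a one-token change in the C macro body):

```python
vcpu = {'initialized': True}     # what the name 'vcpu' resolves to at the call site

def kvm_vcpu_initialized_buggy(v):
    # parameter v is ignored; the body reads the name 'vcpu' from the
    # enclosing scope, like the original macro did
    return vcpu['initialized']

def kvm_vcpu_initialized_fixed(v):
    return v['initialized']      # uses the value actually passed in

other = {'initialized': False}
assert kvm_vcpu_initialized_buggy(other) is True   # silently wrong answer
assert kvm_vcpu_initialized_fixed(other) is False
```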

Fixes: e016333745c7 ("KVM: arm64: Only reset vCPU-scoped feature ID regs once")
Signed-off-by: Fuad Tabba &lt;tabba@google.com&gt;
Link: https://patch.msgid.link/20260424084908.370776-5-tabba@google.com
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: stable@vger.kernel.org
</content>
</entry>
</feed>
