<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/arch/arm/mm, branch v6.19.12</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.12</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.12'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2026-03-04T12:20:45+00:00</updated>
<entry>
<title>ARM: 9467/1: mm: Don't use %pK through printk</title>
<updated>2026-03-04T12:20:45+00:00</updated>
<author>
<name>Thomas Weissschuh</name>
<email>thomas.weissschuh@linutronix.de</email>
</author>
<published>2026-01-07T09:56:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=07a44b6c4312d1693259739a8c304451797216b9'/>
<id>urn:sha1:07a44b6c4312d1693259739a8c304451797216b9</id>
<content type='text'>
[ Upstream commit 012ea376a5948b025f260aa45d2a6ec5d96674ea ]

Restricted pointers ("%pK") were never meant to be used
through printk(). They can acquire sleeping locks in atomic contexts.

Switch to %px over the more secure %p as this usage is a debugging aid,
gated behind CONFIG_DEBUG_VIRTUAL and used by WARN().
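
For illustration, the change is of the following form. This is a
paraphrased sketch (the condition, variable names and message text are
placeholders), not the actual arch/arm/mm hunk:

    /* before: restricted pointer specifier */
    WARN(!ok, "non-linear address passed to virt_to_phys: %pK (%pS)\n",
         (void *)addr, (void *)addr);

    /* after: raw pointer; the value is only printed from a WARN() that
     * is compiled in under CONFIG_DEBUG_VIRTUAL */
    WARN(!ok, "non-linear address passed to virt_to_phys: %px (%pS)\n",
         (void *)addr, (void *)addr);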

Link: https://lore.kernel.org/lkml/20250113171731-dc10e3c1-da64-4af0-b767-7c7070468023@linutronix.de/
Signed-off-by: Thomas Weißschuh &lt;thomas.weissschuh@linutronix.de&gt;
Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rmk/linux</title>
<updated>2025-12-10T22:50:48+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-12-10T22:50:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=29ba26af9a9d43d5dbb8aa8e653adeb159d42587'/>
<id>urn:sha1:29ba26af9a9d43d5dbb8aa8e653adeb159d42587</id>
<content type='text'>
Pull ARM updates from Russell King:

 - disable jump label and high PTE for PREEMPT RT kernels

 - fix input operand modification in load_unaligned_zeropad()

 - fix hash_name() / fault path induced warnings

 - fix branch predictor hardening

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rmk/linux:
  ARM: fix branch predictor hardening
  ARM: fix hash_name() fault
  ARM: allow __do_kernel_fault() to report execution of memory faults
  ARM: group is_permission_fault() with is_translation_fault()
  ARM: 9464/1: fix input-only operand modification in load_unaligned_zeropad()
  ARM: 9461/1: Disable HIGHPTE on PREEMPT_RT kernels
  ARM: 9459/1: Disable jump-label on PREEMPT_RT
</content>
</entry>
<entry>
<title>ARM: fix branch predictor hardening</title>
<updated>2025-12-10T12:22:15+00:00</updated>
<author>
<name>Russell King (Oracle)</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2025-12-05T10:52:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=fd2dee1c6e2256f726ba33fd3083a7be0efc80d3'/>
<id>urn:sha1:fd2dee1c6e2256f726ba33fd3083a7be0efc80d3</id>
<content type='text'>
__do_user_fault() may be called with an indeterminate interrupt enable
state, which means we may be preemptible at this point. This causes
problems when calling harden_branch_predictor(). For example, when
called from a data abort, do_alignment_fault()-&gt;do_bad_area().

Move harden_branch_predictor() out of __do_user_fault() and into the
calling contexts.

By moving it into do_kernel_address_page_fault(), we can be sure that
interrupts will be disabled there.

Converting do_translation_fault() to use do_kernel_address_page_fault()
rather than do_bad_area() means that we keep branch predictor handling
for translation faults. Interrupts will also be disabled at this call
site.

do_sect_fault() needs special handling, so detect user mode accesses
to kernel addresses and add an explicit call to branch predictor
hardening.

Finally, add branch predictor hardening to do_alignment() for the
faulting case (user mode accessing kernel addresses) before interrupts
are enabled.

This should cover all cases where harden_branch_predictor() is called,
ensuring that it always runs with interrupts disabled and that it is
called early in each call path.

Reviewed-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Tested-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: fix hash_name() fault</title>
<updated>2025-12-10T12:22:02+00:00</updated>
<author>
<name>Russell King (Oracle)</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2025-12-05T11:03:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=7733bc7d299d682f2723dc38fc7f370b9bf973e9'/>
<id>urn:sha1:7733bc7d299d682f2723dc38fc7f370b9bf973e9</id>
<content type='text'>
Zizhi Wo reports:

"During the execution of hash_name()-&gt;load_unaligned_zeropad(), a
 potential memory access beyond the PAGE boundary may occur. For
 example, when the filename length is near the PAGE_SIZE boundary.
 This triggers a page fault, which leads to a call to
 do_page_fault()-&gt;mmap_read_trylock(). If we can't acquire the lock,
 we have to fall back to the mmap_read_lock() path, which calls
 might_sleep(). This breaks RCU semantics because path lookup occurs
 under an RCU read-side critical section."

This is seen with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_KFENCE=y.

Kernel addresses (with the exception of the vectors/kuser helper
page) do not have VMAs associated with them. If the vectors/kuser
helper page faults, then there are two possibilities:

1. if the fault happened while in kernel mode, then we're basically
   dead, because the CPU won't be able to vector through this page
   to handle the fault.
2. if the fault happened while in user mode, that means the page was
   protected from user access, and we want to fault anyway.

Thus, we can handle kernel addresses from any context entirely
separately without going anywhere near the mmap lock. This gives us
an entirely non-sleeping path for all kernel mode kernel address
faults.

As we handle the kernel address faults before interrupts are enabled,
this change has the side effect of improving the branch predictor
hardening, but does not completely solve the issue.

Reported-by: Zizhi Wo &lt;wozizhi@huaweicloud.com&gt;
Reported-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Link: https://lore.kernel.org/r/20251126090505.3057219-1-wozizhi@huaweicloud.com
Reviewed-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Tested-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: allow __do_kernel_fault() to report execution of memory faults</title>
<updated>2025-12-10T12:21:46+00:00</updated>
<author>
<name>Russell King (Oracle)</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2025-12-05T17:09:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=40b466db1dffb41f0529035c59c5739636d0e5b8'/>
<id>urn:sha1:40b466db1dffb41f0529035c59c5739636d0e5b8</id>
<content type='text'>
Allow __do_kernel_fault() to detect faults caused by executing memory,
so we can provide the same fault message as do_page_fault() does. This
is required when we split the kernel address fault handling from the
main do_page_fault() code path.

Reviewed-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Tested-by: Xie Yuanbin &lt;xieyuanbin1@huawei.com&gt;
Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: group is_permission_fault() with is_translation_fault()</title>
<updated>2025-12-09T09:19:10+00:00</updated>
<author>
<name>Russell King (Oracle)</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2025-12-09T08:35:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=dea20281ac88226615761c570c8ff7adc18e6ac2'/>
<id>urn:sha1:dea20281ac88226615761c570c8ff7adc18e6ac2</id>
<content type='text'>
Group is_permission_fault() with is_translation_fault(), which is
needed to use is_permission_fault() in __do_kernel_fault(). As
this is static inline, there is no need for this to be under
CONFIG_MMU.

Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>Merge tag 'dma-mapping-6.19-2025-12-05' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux</title>
<updated>2025-12-06T17:25:05+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-12-06T17:25:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a7405aa92feec2598cedc1b6c651beb1848240fe'/>
<id>urn:sha1:a7405aa92feec2598cedc1b6c651beb1848240fe</id>
<content type='text'>
Pull dma-mapping updates from Marek Szyprowski:

 - More DMA mapping API refactoring to use physical addresses as the
   primary interface instead of page+offset parameters.

   This time the dma_map_ops callbacks are converted to physical
   addresses, which in turn also results in some simplification of
   architecture-specific code (Leon Romanovsky and Jason Gunthorpe)

 - Clarify that dma_map_benchmark is not a kernel self-test but a
   standalone tool (Qinxin Xia)

* tag 'dma-mapping-6.19-2025-12-05' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
  dma-mapping: remove unused map_page callback
  xen: swiotlb: Convert mapping routine to rely on physical address
  x86: Use physical address for DMA mapping
  sparc: Use physical address DMA mapping
  powerpc: Convert to physical address DMA mapping
  parisc: Convert DMA map_page to map_phys interface
  MIPS/jazzdma: Provide physical address directly
  alpha: Convert mapping routine to rely on physical address
  dma-mapping: remove unused mapping resource callbacks
  xen: swiotlb: Switch to physical address mapping callbacks
  ARM: dma-mapping: Switch to physical address mapping callbacks
  ARM: dma-mapping: Reduce struct page exposure in arch_sync_dma*()
  dma-mapping: convert dummy ops to physical address mapping
  dma-mapping: prepare dma_map_ops to conversion to physical address
  tools/dma: move dma_map_benchmark from selftests to tools/dma
</content>
</entry>
<entry>
<title>syscore: Pass context data to callbacks</title>
<updated>2025-11-14T09:01:52+00:00</updated>
<author>
<name>Thierry Reding</name>
<email>treding@nvidia.com</email>
</author>
<published>2025-10-29T16:33:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a97fbc3ee3e2a536fafaff04f21f45472db71769'/>
<id>urn:sha1:a97fbc3ee3e2a536fafaff04f21f45472db71769</id>
<content type='text'>
Several drivers can benefit from registering per-instance data along
with the syscore operations. To achieve this, move the modifiable fields
out of the syscore_ops structure and into a separate struct syscore that
can be registered with the framework. Add a void * driver data field for
drivers to store contextual data that will be passed to the syscore ops.
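
As a minimal sketch of the shape this describes (the field names and
the register_syscore() declaration are illustrative assumptions, not
the exact upstream interface):

    /* illustrative only: per-instance syscore registration */
    struct syscore {
        struct list_head node;          /* linkage managed by the core */
        const struct syscore_ops *ops;  /* suspend/resume/shutdown callbacks */
        void *data;                     /* driver context handed to the ops */
    };

    void register_syscore(struct syscore *syscore);
    void unregister_syscore(struct syscore *syscore);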

Acked-by: Rafael J. Wysocki (Intel) &lt;rafael@kernel.org&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>ARM: dma-mapping: Switch to physical address mapping callbacks</title>
<updated>2025-10-29T09:27:30+00:00</updated>
<author>
<name>Leon Romanovsky</name>
<email>leonro@nvidia.com</email>
</author>
<published>2025-10-15T09:12:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=50b149be07eb89a1f8bc6af422cd78245fc621a1'/>
<id>urn:sha1:50b149be07eb89a1f8bc6af422cd78245fc621a1</id>
<content type='text'>
Combine the resource and page mapping routines into one function, which
handles both of these flows in the same manner. This conversion allows
us to remove the .map_resource/.unmap_resource callbacks completely.
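
As a rough sketch of the idea only (arm_dma_map_phys() here is a
paraphrase, and using DMA_ATTR_MMIO to mark the former resource flow is
an assumption carried over from the rest of the series, not a quote of
the actual ARM implementation):

    static dma_addr_t arm_dma_map_phys(struct device *dev, phys_addr_t phys,
                                       size_t size, enum dma_data_direction dir,
                                       unsigned long attrs)
    {
        /* MMIO ("resource") mappings are not page backed and get no CPU
         * cache maintenance; page-backed mappings are synced as before. */
        if (!(attrs & (DMA_ATTR_MMIO | DMA_ATTR_SKIP_CPU_SYNC)))
            arch_sync_dma_for_device(phys, size, dir);

        return phys_to_dma(dev, phys);
    }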

Reviewed-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@nvidia.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20251015-remove-map-page-v5-4-3bbfe3a25cdf@kernel.org
</content>
</entry>
<entry>
<title>ARM: dma-mapping: Reduce struct page exposure in arch_sync_dma*()</title>
<updated>2025-10-29T09:27:29+00:00</updated>
<author>
<name>Leon Romanovsky</name>
<email>leonro@nvidia.com</email>
</author>
<published>2025-10-15T09:12:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=52c9aa1adc301b6b6bfe1a6c8ca188da125a332c'/>
<id>urn:sha1:52c9aa1adc301b6b6bfe1a6c8ca188da125a332c</id>
<content type='text'>
As preparation for changing from the .map_page to the .map_phys DMA
callbacks, convert the arch_sync_dma*() functions to use physical
addresses instead of struct page.

Reviewed-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@nvidia.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20251015-remove-map-page-v5-3-3bbfe3a25cdf@kernel.org
</content>
</entry>
</feed>
