Age | Commit message | Author | Files | Lines
2 days | Linux 6.18.30 (tag: v6.18.30) | Greg Kroah-Hartman | 1 | -1/+1
Link: https://lore.kernel.org/r/20260512173938.452574370@linuxfoundation.org Tested-by: Pavel Machek (CIP) <pavel@nabladev.com> Tested-by: Peter Schneider <pschneider1968@googlemail.com> Tested-by: Brett A C Sheffield <bacs@librecast.net> Tested-by: Mark Brown <broonie@kernel.org> Tested-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/r/20260513153744.746440810@linuxfoundation.org Tested-by: Brett A C Sheffield <bacs@librecast.net> Tested-by: Florian Fainelli <florian.fainelli@broadcom.com> Tested-by: Ron Economos <re@w6rz.net> Tested-by: Mark Brown <broonie@kernel.org> Tested-by: Barry K. Nathan <barryn@pobox.com> Tested-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | ksmbd: validate inherited ACE SID length | Shota Zaizen | 1 | -14/+52
commit 996454bc0da84d5a1dedb1a7861823087e01a7ae upstream. smb_inherit_dacl() walks the parent directory DACL loaded from the security descriptor xattr. It verifies that each ACE contains the fixed SID header before using it, but does not verify that the variable-length SID described by sid.num_subauth is fully contained in the ACE. A malformed inheritable ACE can advertise more subauthorities than are present in the ACE. compare_sids() may then read past the ACE. smb_set_ace() also clamps the copied destination SID, but used the unchecked source SID count to compute the inherited ACE size. That could advance the temporary inherited ACE buffer pointer and nt_size accounting past the allocated buffer. Fix this by validating the parent ACE SID count and SID length before using the SID during inheritance. Compute the inherited ACE size from the copied SID so the size matches the bounded destination SID. Reject the inherited DACL if size accumulation would overflow smb_acl.size or the security descriptor allocation size. Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3") Signed-off-by: Shota Zaizen <s@zaizen.me> Acked-by: Namjae Jeon <linkinjeon@kernel.org> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
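The missing bounds check is easy to model in plain C. The sketch below uses simplified stand-ins for the ksmbd on-disk structures (field names and sizes are illustrative, not the kernel definitions); the point is that the SID length implied by num_subauth must be validated against the ACE's declared size before any subauthority is read:

```c
#include <stddef.h>
#include <stdint.h>

#define SID_MAX_SUB_AUTHORITIES 15

/* Simplified stand-ins for the on-disk ACE/SID layout. */
struct smb_sid {
	uint8_t revision;
	uint8_t num_subauth;
	uint8_t authority[6];
	uint32_t sub_auth[SID_MAX_SUB_AUTHORITIES];
};

struct smb_ace {
	uint8_t type;
	uint8_t flags;
	uint16_t size;          /* total ACE size, header included */
	uint32_t access_req;
	struct smb_sid sid;     /* variable length on disk */
};

#define SID_HDR_SIZE offsetof(struct smb_sid, sub_auth)   /* fixed SID part */
#define ACE_HDR_SIZE offsetof(struct smb_ace, sid)        /* fixed ACE part */

/* Return 1 iff the SID advertised by num_subauth fits inside the ACE. */
static int ace_sid_is_bounded(const struct smb_ace *ace)
{
	size_t sid_len;

	if (ace->size < ACE_HDR_SIZE + SID_HDR_SIZE)
		return 0;
	if (ace->sid.num_subauth > SID_MAX_SUB_AUTHORITIES)
		return 0;
	sid_len = SID_HDR_SIZE + (size_t)ace->sid.num_subauth * sizeof(uint32_t);
	return ACE_HDR_SIZE + sid_len <= ace->size;
}

/* Helper to exercise the check with a given ACE size and SID count. */
static int check(uint16_t ace_size, uint8_t num_subauth)
{
	struct smb_ace ace = { .size = ace_size, .sid.num_subauth = num_subauth };

	return ace_sid_is_bounded(&ace);
}
```

Computing the inherited ACE size from the bounded, copied SID rather than from the unchecked source count follows the same principle: size arithmetic must only use values already validated against the buffer.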
2 days | x86/CPU/AMD: Prevent improper isolation of shared resources in Zen2's op cache | Prathyushi Nangia | 3 | -2/+7
commit c21b90f77687075115d989e53a8ec5e2bb427ab1 upstream. Make sure resources are not improperly shared in the op cache and cause instruction corruption this way. Signed-off-by: Prathyushi Nangia <prathyushi.nangia@amd.com> Co-developed-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | rust: pin-init: fix incorrect accessor reference lifetime | Gary Guo | 2 | -46/+73
commit 68bf102226cf2199dc609b67c1e847cad4de4b57 upstream When a field has been initialized, `init!`/`pin_init!` create a reference or pinned reference to the field so it can be accessed later during the initialization of other fields. However, the reference it created is incorrectly `&'static` rather than just the scope of the initializer. This means that you can do init!(Foo { a: 1, _: { let b: &'static u32 = a; } }) which is unsound. This is caused by `&mut (*$slot).$ident`, which actually allows arbitrary lifetime, so this is effectively `'static`. Fix it by adding `let_binding` method on `DropGuard` to shorten lifetime. This results in exactly what we want for these accessors. The safety and invariant comments of `DropGuard` have been reworked; instead of reasoning about what caller can do with the guard, express it in a way that the ownership is transferred to the guard and `forget` takes it back, so the unsafe operations within the `DropGuard` can be more easily justified. Assisted-by: Claude:claude-3-opus Signed-off-by: Gary Guo <gary@garyguo.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | net: stmmac: Prevent NULL deref when RX memory exhausted | Sam Edwards | 1 | -7/+12
[ Upstream commit 0bb05e6adfa99a2ea1fee1125cc0953409f83ed8 ] The CPU receives frames from the MAC through conventional DMA: the CPU allocates buffers for the MAC, then the MAC fills them and returns ownership to the CPU. For each hardware RX queue, the CPU and MAC coordinate through a shared ring array of DMA descriptors: one descriptor per DMA buffer. Each descriptor includes the buffer's physical address and a status flag ("OWN") indicating which side owns the buffer: OWN=0 for CPU, OWN=1 for MAC. The CPU is only allowed to set the flag and the MAC is only allowed to clear it, and both must move through the ring in sequence: thus the ring is used for both "submissions" and "completions." In the stmmac driver, stmmac_rx() bookmarks its position in the ring with the `cur_rx` index. The main receive loop in that function checks for rx_descs[cur_rx].own=0, gives the corresponding buffer to the network stack (NULLing the pointer), and increments `cur_rx` modulo the ring size. After the loop exits, stmmac_rx_refill(), which bookmarks its position with `dirty_rx`, allocates fresh buffers and rearms the descriptors (setting OWN=1). If it fails any allocation, it simply stops early (leaving OWN=0) and will retry where it left off when next called. This means descriptors have a three-stage lifecycle (terms my own):
- `empty` (OWN=1, buffer valid)
- `full` (OWN=0, buffer valid and populated)
- `dirty` (OWN=0, buffer NULL)
But because stmmac_rx() only checks OWN, it confuses `full`/`dirty`. In the past (see 'Fixes:'), there was a bug where the loop could cycle `cur_rx` all the way back to the first descriptor it dirtied, resulting in a NULL dereference when mistaken for `full`.
The aforementioned commit resolved that *specific* failure by capping the loop's iteration limit at `dma_rx_size - 1`, but this is only a partial fix: if the previous stmmac_rx_refill() didn't complete, then there are leftover `dirty` descriptors that the loop might encounter without needing to cycle fully around. The current code therefore panics (see 'Closes:') when stmmac_rx_refill() is memory-starved long enough for `cur_rx` to catch up to `dirty_rx`. Fix this by explicitly checking, before advancing `cur_rx`, if the next entry is dirty; exit the loop if so. This prevents processing of the final, used descriptor until stmmac_rx_refill() succeeds, but fully prevents the `cur_rx == dirty_rx` ambiguity as the previous bugfix intended: so remove the clamp as well. Since stmmac_rx_zc() is a copy-paste-and-tweak of stmmac_rx() and the code structure is identical, any fix to stmmac_rx() will also need a corresponding fix for stmmac_rx_zc(). Therefore, apply the same check there. In stmmac_rx() (not stmmac_rx_zc()), a related bug remains: after the MAC sets OWN=0 on the final descriptor, it will be unable to send any further DMA-complete IRQs until it's given more `empty` descriptors. Currently, the driver simply *hopes* that the next stmmac_rx_refill() succeeds, risking an indefinite stall of the receive process if not. But this is not a regression, so it can be addressed in a future change. Fixes: b6cb4541853c7 ("net: stmmac: avoid rx queue overrun") Closes: https://bugzilla.kernel.org/show_bug.cgi?id=221010 Cc: stable@vger.kernel.org Suggested-by: Russell King <linux@armlinux.org.uk> Signed-off-by: Sam Edwards <CFSworks@gmail.com> Link: https://patch.msgid.link/20260422044503.5349-1-CFSworks@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
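The three-stage lifecycle and the added dirty-entry check can be modeled in a few lines of userspace C (the ring layout and names below are illustrative, not the driver's structures):

```c
#include <stddef.h>

#define RING_SIZE 8

/* Toy model of one RX ring slot. */
struct rx_desc {
	int own;      /* 1: owned by MAC, 0: owned by CPU */
	void *buf;    /* NULL once the buffer was handed to the stack */
};

#define NEXT_ENTRY(x) (((x) + 1) % RING_SIZE)

/* Process completed descriptors; stop on a MAC-owned entry or, per the
 * fix, on a "dirty" entry (OWN=0 but buffer already consumed).
 * Returns the number of frames handed to the stack. */
static int rx_process(struct rx_desc *ring, unsigned int *cur_rx)
{
	int count = 0;

	while (!ring[*cur_rx].own) {
		struct rx_desc *d = &ring[*cur_rx];

		if (!d->buf)      /* dirty: refill has not caught up yet */
			break;
		d->buf = NULL;    /* hand buffer to the network stack */
		count++;
		*cur_rx = NEXT_ENTRY(*cur_rx);
	}
	return count;
}

/* Two full entries, one dirty entry, then a MAC-owned one. */
static int demo(void)
{
	static char payload;
	struct rx_desc ring[RING_SIZE] = {
		{ 0, &payload }, { 0, &payload }, { 0, NULL }, { 1, NULL },
	};
	unsigned int cur = 0;
	int n = rx_process(ring, &cur);

	return n == 2 && cur == 2;
}
```

In this model the loop stops at the dirty descriptor instead of treating its NULL buffer as a received frame, which is exactly the dereference the patch prevents.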
2 days | net: stmmac: rename STMMAC_GET_ENTRY() -> STMMAC_NEXT_ENTRY() | Russell King (Oracle) | 4 | -16/+16
[ Upstream commit 6b4286e0550814cdc4b897f881ec1fa8b0313227 ] STMMAC_GET_ENTRY() doesn't describe what this macro is doing - it is incrementing the provided index for the circular array of descriptors. Replace "GET" with "NEXT" as this better describes the action here. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1w2vba-0000000DbWo-1oL5@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org> Stable-dep-of: 0bb05e6adfa9 ("net: stmmac: Prevent NULL deref when RX memory exhausted") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | crypto: caam - guard HMAC key hex dumps in hash_digest_key | Thorsten Blum | 2 | -4/+4
[ Upstream commit 177730a273b18e195263ed953853273e901b5064 ] Use print_hex_dump_devel() for dumping sensitive HMAC key bytes in hash_digest_key() to avoid leaking secrets at runtime when CONFIG_DYNAMIC_DEBUG is enabled. Fixes: 045e36780f11 ("crypto: caam - ahash hmac support") Fixes: 3f16f6c9d632 ("crypto: caam/qi2 - add support for ahash algorithms") Cc: stable@vger.kernel.org Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | printk: add print_hex_dump_devel() | Thorsten Blum | 1 | -0/+13
[ Upstream commit d134feeb5df33fbf77f482f52a366a44642dba09 ] Add print_hex_dump_devel() as the hex dump equivalent of pr_devel(), which emits output only when DEBUG is enabled, but keeps call sites compiled otherwise. Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: John Ogness <john.ogness@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Stable-dep-of: 177730a273b1 ("crypto: caam - guard HMAC key hex dumps in hash_digest_key") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
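The gating pattern that print_hex_dump_devel() follows, mirroring pr_devel(), can be sketched in userspace (hex_dump_devel and the hex_dump back end below are illustrative stand-ins, not the kernel API):

```c
#include <stdio.h>

static int dump_calls; /* observable side effect for the sketch */

static void hex_dump(const char *prefix, const void *buf, size_t len)
{
	const unsigned char *p = buf;
	size_t i;

	dump_calls++;
	printf("%s", prefix);
	for (i = 0; i < len; i++)
		printf(" %02x", p[i]);
	putchar('\n');
}

/* Emit output only when DEBUG is defined, but keep the call site
 * compiled (and its arguments type-checked) otherwise. */
#ifdef DEBUG
#define hex_dump_devel(prefix, buf, len) hex_dump(prefix, buf, len)
#else
#define hex_dump_devel(prefix, buf, len) \
	do { if (0) hex_dump(prefix, buf, len); } while (0)
#endif

/* Without -DDEBUG, this must produce no output at all. */
static int leaked_without_debug(void)
{
	unsigned char key[4] = { 0xde, 0xad, 0xbe, 0xef };

	hex_dump_devel("hmac key:", key, sizeof(key));
	return dump_calls;
}
```

The `if (0)` arm keeps the call compiled away in non-DEBUG builds while guaranteeing no output, which is the property that keeps HMAC key material out of production logs.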
2 days | erofs: fix unsigned underflow in z_erofs_lz4_handle_overlap() | Junrui Luo | 1 | -0/+1
[ Upstream commit 21e161de2dc660b1bb70ef5b156ab8e6e1cca3ab ] Some crafted images can have illegal (!partial_decoding && m_llen < m_plen) extents, and the LZ4 inplace decompression path can be wrongly hit, but it cannot handle (outpages < inpages) properly: "outpages - inpages" wraps to a large value and the subsequent rq->out[] access reads past the decompressed_pages array. However, such crafted cases can correctly result in a corruption report in the normal LZ4 non-inplace path. Let's add an additional check to fix this for backporting. Reproducible image (base64-encoded gzipped blob): H4sIAJGR12kCA+3SPUoDQRgG4MkmkkZk8QRbRFIIi9hbpEjrHQI5ghfwCN5BLCzTGtLbBI+g dilSJo1CnIm7GEXFxhT6PDDwfrs73/ywIQD/1ePD4r7Ou6ETsrq4mu7XcWfj++Pb58nJU/9i PNtbjhan04/9GtX4qVYc814WDqt6FaX5s+ZwXXeq52lndT6IuVvlblytLMvh4Gzwaf90nsvz 2DF/21+20T/ldgp5s1jXRaN4t/8izsy/OUB6e/Qa79r+JwAAAAAAAL52vQVuGQAAAP6+my1w ywAAAAAAAADwu14ATsEYtgBQAAA= $ mount -t erofs -o cache_strategy=disabled foo.erofs /mnt $ dd if=/mnt/data of=/dev/null bs=4096 count=1 Fixes: 598162d05080 ("erofs: support decompress big pcluster for lz4 backend") Reported-by: Yuhao Jiang <danisjiang@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Junrui Luo <moonafterrain@outlook.com> Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
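The underflow itself is ordinary C unsigned arithmetic, shown here in a minimal sketch (function names are illustrative; the actual fix is a single added bounds check in the kernel):

```c
#include <stddef.h>
#include <stdint.h>

/* With unsigned operands, the difference never goes negative: it wraps
 * to a huge value, which then drives an out-of-bounds array walk. */
static size_t extra_pages_unsafe(unsigned int outpages, unsigned int inpages)
{
	return outpages - inpages; /* wraps if outpages < inpages */
}

/* The guard must compare the two counts *before* subtracting, and
 * reject the in-place path when the output is smaller than the input. */
static int inplace_io_allowed(unsigned int outpages, unsigned int inpages)
{
	return outpages >= inpages;
}
```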
2 days | erofs: tidy up z_erofs_lz4_handle_overlap() | Gao Xiang | 1 | -39/+46
[ Upstream commit 9ae77198d4815c63fc8ebacc659c71d150d1e51b ] - Add some useful comments to explain inplace I/Os and decompression; - Rearrange the code to get rid of one unnecessary goto. Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Stable-dep-of: 21e161de2dc6 ("erofs: fix unsigned underflow in z_erofs_lz4_handle_overlap()") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | hfsplus: fix held lock freed on hfsplus_fill_super() | Zilin Guan | 1 | -1/+3
[ Upstream commit 90c500e4fd83fa33c09bc7ee23b6d9cc487ac733 ] hfsplus_fill_super() calls hfs_find_init() to initialize a search structure, which acquires tree->tree_lock. If the subsequent call to hfsplus_cat_build_key() fails, the function jumps to the out_put_root error label without releasing the lock. The later cleanup path then frees the tree data structure with the lock still held, triggering a held lock freed warning. Fix this by adding the missing hfs_find_exit(&fd) call before jumping to the out_put_root error label. This ensures that tree->tree_lock is properly released on the error path. The bug was originally detected on v6.13-rc1 using an experimental static analysis tool we are developing, and we have verified that the issue persists in the latest mainline kernel. The tool is specifically designed to detect memory management issues. It is currently under active development and not yet publicly available. We confirmed the bug by runtime testing under QEMU with x86_64 defconfig, lockdep enabled, and CONFIG_HFSPLUS_FS=y. To trigger the error path, we used GDB to dynamically shrink the max_unistr_len parameter to 1 before hfsplus_asc2uni() is called. This forces hfsplus_asc2uni() to naturally return -ENAMETOOLONG, which propagates to hfsplus_cat_build_key() and exercises the faulty error path. The following warning was observed during mount:

=========================
WARNING: held lock freed!
7.0.0-rc3-00016-gb4f0dd314b39 #4 Not tainted
-------------------------
mount/174 is freeing memory ffff888103f92000-ffff888103f92fff, with a lock still held there!
ffff888103f920b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x154/0x1e0
2 locks held by mount/174:
 #0: ffff888103f960e0 (&type->s_umount_key#42/1){+.+.}-{4:4}, at: alloc_super.constprop.0+0x167/0xa40
 #1: ffff888103f920b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x154/0x1e0
stack backtrace:
CPU: 2 UID: 0 PID: 174 Comm: mount Not tainted 7.0.0-rc3-00016-gb4f0dd314b39 #4 PREEMPT(lazy)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x82/0xd0
 debug_check_no_locks_freed+0x13a/0x180
 kfree+0x16b/0x510
 ? hfsplus_fill_super+0xcb4/0x18a0
 hfsplus_fill_super+0xcb4/0x18a0
 ? __pfx_hfsplus_fill_super+0x10/0x10
 ? srso_return_thunk+0x5/0x5f
 ? bdev_open+0x65f/0xc30
 ? srso_return_thunk+0x5/0x5f
 ? pointer+0x4ce/0xbf0
 ? trace_contention_end+0x11c/0x150
 ? __pfx_pointer+0x10/0x10
 ? srso_return_thunk+0x5/0x5f
 ? bdev_open+0x79b/0xc30
 ? srso_return_thunk+0x5/0x5f
 ? srso_return_thunk+0x5/0x5f
 ? vsnprintf+0x6da/0x1270
 ? srso_return_thunk+0x5/0x5f
 ? __mutex_unlock_slowpath+0x157/0x740
 ? __pfx_vsnprintf+0x10/0x10
 ? srso_return_thunk+0x5/0x5f
 ? srso_return_thunk+0x5/0x5f
 ? mark_held_locks+0x49/0x80
 ? srso_return_thunk+0x5/0x5f
 ? srso_return_thunk+0x5/0x5f
 ? irqentry_exit+0x17b/0x5e0
 ? trace_irq_disable.constprop.0+0x116/0x150
 ? __pfx_hfsplus_fill_super+0x10/0x10
 ? __pfx_hfsplus_fill_super+0x10/0x10
 get_tree_bdev_flags+0x302/0x580
 ? __pfx_get_tree_bdev_flags+0x10/0x10
 ? vfs_parse_fs_qstr+0x129/0x1a0
 ? __pfx_vfs_parse_fs_qstr+0x3/0x10
 vfs_get_tree+0x89/0x320
 fc_mount+0x10/0x1d0
 path_mount+0x5c5/0x21c0
 ? __pfx_path_mount+0x10/0x10
 ? trace_irq_enable.constprop.0+0x116/0x150
 ? trace_irq_enable.constprop.0+0x116/0x150
 ? srso_return_thunk+0x5/0x5f
 ? srso_return_thunk+0x5/0x5f
 ? kmem_cache_free+0x307/0x540
 ? user_path_at+0x51/0x60
 ? __x64_sys_mount+0x212/0x280
 ? srso_return_thunk+0x5/0x5f
 __x64_sys_mount+0x212/0x280
 ? __pfx___x64_sys_mount+0x10/0x10
 ? srso_return_thunk+0x5/0x5f
 ? trace_irq_enable.constprop.0+0x116/0x150
 ? srso_return_thunk+0x5/0x5f
 do_syscall_64+0x111/0x680
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ffacad55eae
Code: 48 8b 0d 85 1f 0f 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 8
RSP: 002b:00007fff1ab55718 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ffacad55eae
RDX: 000055740c64e5b0 RSI: 000055740c64e630 RDI: 000055740c651ab0
RBP: 000055740c64e380 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000055740c64e5b0 R14: 000055740c651ab0 R15: 000055740c64e380
 </TASK>

After applying this patch, the warning no longer appears. Fixes: 89ac9b4d3d1a ("hfsplus: fix longname handling") CC: stable@vger.kernel.org Signed-off-by: Zilin Guan <zilin@seu.edu.cn> Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com> Tested-by: Viacheslav Dubeyko <slava@dubeyko.com> Signed-off-by: Viacheslav Dubeyko <slava@dubeyko.com> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
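The error-path ordering the fix restores can be modeled in userspace C (find_init/find_exit/tree_free are illustrative stand-ins for the hfsplus helpers):

```c
#include <stddef.h>

struct tree { int lock_held; int freed_while_locked; };
struct find_data { struct tree *tree; };

static void find_init(struct tree *t, struct find_data *fd)
{
	fd->tree = t;
	t->lock_held = 1;      /* models taking tree->tree_lock */
}

static void find_exit(struct find_data *fd)
{
	fd->tree->lock_held = 0;
}

static void tree_free(struct tree *t)
{
	if (t->lock_held)
		t->freed_while_locked = 1; /* the "held lock freed!" case */
}

/* Returns 1 if teardown ran with the lock still held. apply_fix models
 * the added hfs_find_exit(&fd) before the goto. */
static int fill_super(int build_key_fails, int apply_fix)
{
	struct tree t = { 0, 0 };
	struct find_data fd;

	find_init(&t, &fd);
	if (build_key_fails) {
		if (apply_fix)
			find_exit(&fd);
		goto out_put_root;
	}
	find_exit(&fd);
out_put_root:
	tree_free(&t);
	return t.freed_while_locked;
}
```

The invariant is simply that teardown of the tree must never run while the find structure still holds its lock; the added hfs_find_exit() restores that ordering on the failure path.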
2 days | hfsplus: fix uninit-value by validating catalog record size | Deepanshu Kartikey | 5 | -4/+64
[ Upstream commit b6b592275aeff184aa82fcf6abccd833fb71b393 ] Syzbot reported a KMSAN uninit-value issue in hfsplus_strcasecmp(). The root cause is that hfs_brec_read() doesn't validate that the on-disk record size matches the expected size for the record type being read. When mounting a corrupted filesystem, hfs_brec_read() may read less data than expected. For example, when reading a catalog thread record, the debug output showed:
HFSPLUS_BREC_READ: rec_len=520, fd->entrylength=26
HFSPLUS_BREC_READ: WARNING - entrylength (26) < rec_len (520) - PARTIAL READ!
hfs_brec_read() only validates that entrylength is not greater than the buffer size, but doesn't check if it's less than expected. It successfully reads 26 bytes into a 520-byte structure and returns success, leaving 494 bytes uninitialized. This uninitialized data in tmp.thread.nodeName then gets copied by hfsplus_cat_build_key_uni() and used by hfsplus_strcasecmp(), triggering the KMSAN warning when the uninitialized bytes are used as array indices in case_fold(). Fix by introducing hfsplus_brec_read_cat() wrapper that:
1. Calls hfs_brec_read() to read the data
2. Validates the record size based on the type field:
   - Fixed size for folder and file records
   - Variable size for thread records (depends on string length)
3. Returns -EIO if size doesn't match expected
For thread records, check against HFSPLUS_MIN_THREAD_SZ before reading nodeName.length to avoid reading uninitialized data at call sites that don't zero-initialize the entry structure. Also initialize the tmp variable in hfsplus_find_cat() as defensive programming to ensure no uninitialized data even if validation is bypassed.
Reported-by: syzbot+d80abb5b890d39261e72@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=d80abb5b890d39261e72 Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Tested-by: syzbot+d80abb5b890d39261e72@syzkaller.appspotmail.com Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com> Tested-by: Viacheslav Dubeyko <slava@dubeyko.com> Suggested-by: Charalampos Mitrodimas <charmitro@posteo.net> Link: https://lore.kernel.org/all/20260120051114.1281285-1-kartikey406@gmail.com/ [v1] Link: https://lore.kernel.org/all/20260121063109.1830263-1-kartikey406@gmail.com/ [v2] Link: https://lore.kernel.org/all/20260212014233.2422046-1-kartikey406@gmail.com/ [v3] Link: https://lore.kernel.org/all/20260214002100.436125-1-kartikey406@gmail.com/T/ [v4] Link: https://lore.kernel.org/all/20260221061626.15853-1-kartikey406@gmail.com/T/ [v5] Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com> Signed-off-by: Viacheslav Dubeyko <slava@dubeyko.com> Link: https://lore.kernel.org/r/20260307010302.41547-1-kartikey406@gmail.com Signed-off-by: Viacheslav Dubeyko <slava@dubeyko.com> Stable-dep-of: 90c500e4fd83 ("hfsplus: fix held lock freed on hfsplus_fill_super()") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
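The shape of the added validation can be sketched as a pure function (the type codes and record sizes below are illustrative placeholders, not the on-disk HFS+ values):

```c
#include <stddef.h>
#include <stdint.h>

enum rec_type { REC_FOLDER = 1, REC_FILE = 2, REC_THREAD = 3 };

#define FOLDER_REC_SZ 88    /* placeholder fixed sizes */
#define FILE_REC_SZ   248
#define MIN_THREAD_SZ 10    /* enough header to read the name length field */

/* entrylength: bytes the b-tree read actually returned;
 * name_len: thread-record name length in 16-bit units. */
static int cat_record_valid(enum rec_type type, size_t entrylength,
			    uint16_t name_len)
{
	switch (type) {
	case REC_FOLDER:
		return entrylength >= FOLDER_REC_SZ;
	case REC_FILE:
		return entrylength >= FILE_REC_SZ;
	case REC_THREAD:
		/* must be able to read the length field itself first */
		if (entrylength < MIN_THREAD_SZ)
			return 0;
		return entrylength >= MIN_THREAD_SZ + (size_t)name_len * 2;
	default:
		return 0;
	}
}
```

In this sketch a short thread read like the 26-vs-520 case above fails the check, so the caller would return -EIO instead of consuming uninitialized name bytes.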
2 days | firmware: exynos-acpm: Drop fake 'const' on handle pointer | Krzysztof Kozlowski | 6 | -40/+37
[ Upstream commit a2be37eedb52ea26938fa4cc9de1ff84963c57ad ] All the functions operating on the 'handle' pointer are claiming it is a pointer to const thus they should not modify the handle. In fact that's a false statement, because first thing these functions do is drop the cast to const with container_of: struct acpm_info *acpm = handle_to_acpm_info(handle); And with such cast the handle is easily writable with simple: acpm->handle.ops.pmic_ops.read_reg = NULL; The code is not correct logically, either, because functions like acpm_get_by_node() and acpm_handle_put() are meant to modify the handle reference counting, thus they must modify the handle. Modification here happens anyway, even if the reference counting is stored in the container which the handle is part of. The code does not have actual visible bug, but incorrect 'const' annotations could lead to incorrect compiler decisions. Fixes: a88927b534ba ("firmware: add Exynos ACPM protocol driver") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> Link: https://patch.msgid.link/20260224104203.42950-2-krzysztof.kozlowski@oss.qualcomm.com Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> [ dropped hunks for DVFS/clk-acpm files and `acpm_dvfs_ops` struct that don't exist in 6.18 ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | mm, swap: speed up hibernation allocation and writeout | Kairui Song | 1 | -5/+16
[ Upstream commit 396f57b5720024638dbb503f6a4abd988a49d815 ] Since commit 0ff67f990bd4 ("mm, swap: remove swap slot cache"), hibernation has been using the swap slot slow allocation path for simplification, which turns out to cause a regression for some devices because the allocator now rotates clusters too often, leading to slower allocation and more random distribution of data. Fast allocation is not complex, so implement hibernation support as well. Test result with Samsung SSD 830 Series (SATA II, 3.0 Gbps) shows the performance is several times better [1]:
6.19: 324 seconds
After this series: 35 seconds
Link: https://lkml.kernel.org/r/20260216-hibernate-perf-v4-1-1ba9f0bf1ec9@tencent.com Link: https://lore.kernel.org/linux-mm/8b4bdcfa-ce3f-4e23-839f-31367df7c18f@gmx.de/ [1] Signed-off-by: Kairui Song <kasong@tencent.com> Fixes: 0ff67f990bd4 ("mm, swap: remove swap slot cache") Reported-by: Carsten Grohmann <mail@carstengrohmann.de> Closes: https://lore.kernel.org/linux-mm/20260206121151.dea3633d1f0ded7bbf49c22e@linux-foundation.org/ Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <baohua@kernel.org> Cc: Chris Li <chrisl@kernel.org> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [ adjusted helper signatures ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | crypto: qat - fix firmware loading failure for GEN6 devices | Suman Kumar Chakraborty | 3 | -1/+12
[ Upstream commit e7dcb722bb75bb3f3992f580a8728a794732fd7a ] QAT GEN6 hardware requires a minimum 3 us delay during the acceleration engine reset sequence to ensure the hardware fully settles. Without this delay, the firmware load may fail intermittently. Add a delay after placing the AE into reset and before clearing the reset, matching the hardware requirements and ensuring stable firmware loading. Earlier generations remain unaffected. Fixes: 17fd7514ae68 ("crypto: qat - add qat_6xxx driver") Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | crypto: qat - fix indentation of macros in qat_hal.c | Suman Kumar Chakraborty | 1 | -11/+11
[ Upstream commit 4963b39e3a3feed07fbf4d5cc2b5df8498888285 ] The macros in qat_hal.c were using a mixture of tabs and spaces. Update all macro indentation to use tabs consistently, matching the predominant style. This does not introduce any functional change. Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Stable-dep-of: e7dcb722bb75 ("crypto: qat - fix firmware loading failure for GEN6 devices") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | mmc: core: Optimize time for secure erase/trim for some Kingston eMMCs | Luke Wang | 4 | -2/+22
[ Upstream commit d6bf2e64dec87322f2b11565ddb59c0e967f96e3 ] Kingston eMMC IY2964 and IB2932 takes a fixed ~2 seconds for each secure erase/trim operation regardless of size - that is, a single secure erase/trim operation of 1MB takes the same time as 1GB. With default calculated 3.5MB max discard size, secure erase 1GB requires ~300 separate operations taking ~10 minutes total. Add a card quirk, MMC_QUIRK_FIXED_SECURE_ERASE_TRIM_TIME, to set maximum secure erase size for those devices. This allows 1GB secure erase to complete in a single operation, reducing time from 10 minutes to just 2 seconds. Signed-off-by: Luke Wang <ziniu.wang_1@nxp.com> Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | mmc: core: Add quirk for incorrect manufacturing date | Avri Altman | 4 | -0/+15
[ Upstream commit 263ff314cc5602599d481b0912a381555fcbad28 ] Some eMMC vendors need to report manufacturing dates beyond 2025 but are reluctant to update the EXT_CSD revision from 8 to 9. Updating the EXT_CSD revision may involve additional testing or qualification steps with customers. To ease this transition and avoid a full re-qualification process, a workaround is needed. This patch introduces a temporary quirk that re-purposes the year codes corresponding to 2010, 2011, and 2012 to represent the years 2026, 2027, and 2028, respectively. This solution is only valid for this three-year period. After 2028, vendors must update their firmware to set EXT_CSD_REV=9 to continue reporting the correct manufacturing date in compliance with the JEDEC standard. The `MMC_QUIRK_BROKEN_MDT` quirk is introduced and enabled for all Sandisk devices to handle this behavior. Signed-off-by: Avri Altman <avri.altman@sandisk.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Stable-dep-of: d6bf2e64dec8 ("mmc: core: Optimize time for secure erase/trim for some Kingston eMMCs") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | mmc: core: Adjust MDT beyond 2025 | Avri Altman | 1 | -0/+7
[ Upstream commit 3e487a634bc019166e452ea276f7522710eda9f4 ] JEDEC JESD84-B51B, which was released in September 2025, increases the manufacturing year limit for eMMC devices. The eMMC manufacturing year is stored in a 4-bit field in the CID register. Originally, it covered 1997–2012. Later, with EXT_CSD_REV=8, it was extended up to 2025. Now, with EXT_CSD_REV=9, the range is rolled over by another 16 years, up to 2038. The mapping is as follows:

cid[8..11] | rev ≤ 4 | 8 ≥ rev > 4 | rev > 8
---------------------------------------------
 0         | 1997    | 2013        | 2029
 1         | 1998    | 2014        | 2030
 2         | 1999    | 2015        | 2031
 3         | 2000    | 2016        | 2032
 4         | 2001    | 2017        | 2033
 5         | 2002    | 2018        | 2034
 6         | 2003    | 2019        | 2035
 7         | 2004    | 2020        | 2036
 8         | 2005    | 2021        | 2037
 9         | 2006    | 2022        | 2038
10         | 2007    | 2023        |
11         | 2008    | 2024        |
12         | 2009    | 2025        |
13         | 2010    |             | 2026
14         | 2011    |             | 2027
15         | 2012    |             | 2028

Signed-off-by: Avri Altman <avri.altman@sandisk.com> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Stable-dep-of: d6bf2e64dec8 ("mmc: core: Optimize time for secure erase/trim for some Kingston eMMCs") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
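The year table decodes to a small pure function. The sketch below only restates the mapping from the commit message; the function name is illustrative and unassigned codes return -1:

```c
/* Decode the 4-bit CID manufacturing-year code for a given EXT_CSD
 * revision, per the table above. */
static int mmc_mdt_year(unsigned int year_code, unsigned int ext_csd_rev)
{
	if (year_code > 15)
		return -1;
	if (ext_csd_rev <= 4)
		return 1997 + year_code;                  /* 1997..2012 */
	if (ext_csd_rev <= 8)                             /* 2013..2025 */
		return year_code <= 12 ? 2013 + year_code : -1;
	/* rev > 8: 0..9 -> 2029..2038, 13..15 -> 2026..2028 */
	if (year_code <= 9)
		return 2029 + year_code;
	return year_code >= 13 ? 2013 + year_code : -1;
}
```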
2 days | octeon_ep_vf: add NULL check for napi_build_skb() | David Carlier | 1 | -2/+34
[ Upstream commit dd66b42854705e4e4ee7f14d260f86c578bed3e3 ] napi_build_skb() can return NULL on allocation failure. In __octep_vf_oq_process_rx(), the result is used directly without a NULL check in both the single-buffer and multi-fragment paths, leading to a NULL pointer dereference. Add NULL checks after both napi_build_skb() calls, properly advancing descriptors and consuming remaining fragments on failure. Fixes: 1cd3b407977c ("octeon_ep_vf: add Tx/Rx processing and interrupt support") Cc: stable@vger.kernel.org Signed-off-by: David Carlier <devnexen@gmail.com> Link: https://patch.msgid.link/20260409184009.930359-3-devnexen@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> [ inlined missing octep_vf_oq_next_idx() helper as read_idx++ with wraparound ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
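The fix's contract, drop the frame but keep the ring indices consistent, can be modeled in userspace (all names below are illustrative stand-ins for the octeon_ep_vf structures, and the stub models an allocation failure):

```c
#include <stddef.h>

#define OQ_RING_SIZE 16

struct oq {
	void *buf[OQ_RING_SIZE];
	unsigned int read_idx;
	unsigned int dropped;
};

/* Stand-in for napi_build_skb(); always fails here to model OOM. */
static void *build_skb_stub(void *buf)
{
	(void)buf;
	return NULL;
}

/* Consume one frame spanning `frags` descriptors. On skb failure the
 * frame is dropped, but read_idx still advances with wraparound (the
 * inlined octep_vf_oq_next_idx() behaviour noted in the backport). */
static void oq_consume_frame(struct oq *oq, unsigned int frags)
{
	void *skb = build_skb_stub(oq->buf[oq->read_idx]);
	unsigned int i;

	if (!skb)
		oq->dropped++;
	for (i = 0; i < frags; i++) {
		oq->buf[oq->read_idx] = NULL;
		oq->read_idx = (oq->read_idx + 1) % OQ_RING_SIZE;
	}
}

/* A 3-fragment frame starting near the end of the ring must wrap. */
static int oq_demo(void)
{
	struct oq q = { .read_idx = 14 };

	oq_consume_frame(&q, 3);
	return q.read_idx == 1 && q.dropped == 1;
}
```

The key point is that the failure path must consume exactly the descriptors the frame occupied; skipping them (or dereferencing the NULL skb) desynchronizes the ring.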
2 days | hwmon: (powerz) Avoid cacheline sharing for DMA buffer | Thomas Weißschuh | 1 | -1/+4
[ Upstream commit 3023c050af3600bf451153335dea5e073c9a3088 ] Depending on the architecture the transfer buffer may share a cacheline with the following mutex. As the buffer may be used for DMA, that is problematic. Use the high-level DMA helpers to make sure that cacheline sharing can not happen. Also drop the comment, as the helpers are documentation enough. https://sashiko.dev/#/message/20260408175814.934BFC19421%40smtp.kernel.org Fixes: 4381a36abdf1c ("hwmon: add POWER-Z driver") Cc: stable@vger.kernel.org # ca085faabb42: dma-mapping: add __dma_from_device_group_begin()/end() Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20260408-powerz-cacheline-alias-v1-1-1254891be0dd@weissschuh.net Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 days | dma-mapping: add __dma_from_device_group_begin()/end() | Michael S. Tsirkin | 1 | -0/+13
[ Upstream commit ca085faabb42c31ee204235facc5a430cb9e78a9 ] When a structure contains a buffer that DMA writes to alongside fields that the CPU writes to, cache line sharing between the DMA buffer and CPU-written fields can cause data corruption on non-cache-coherent platforms. Add __dma_from_device_group_begin()/end() annotations to ensure proper alignment to prevent this:

struct my_device {
	spinlock_t lock1;
	__dma_from_device_group_begin();
	char dma_buffer1[16];
	char dma_buffer2[16];
	__dma_from_device_group_end();
	spinlock_t lock2;
};

Message-ID: <19163086d5e4704c316f18f6da06bc1c72968904.1767601130.git.mst@redhat.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Petr Tesarik <ptesarik@suse.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Stable-dep-of: 3023c050af36 ("hwmon: (powerz) Avoid cacheline sharing for DMA buffer") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
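The effect of the annotations can be reproduced in userspace with plain alignment attributes. The 64-byte line size here is an assumption (the kernel derives the real value from the architecture), and `my_device` mirrors the example in the commit message:

```c
#include <stddef.h>

#define CACHELINE_SZ 64 /* assumed L1 line size; illustrative only */

/* Group the device-written buffers into a member whose *type* is
 * cacheline-aligned: the member then starts on a line boundary and its
 * size is rounded up to whole lines, so neither lock can share a
 * cacheline with the DMA-written data. */
struct my_device {
	int lock1;                                     /* CPU-written */
	struct {
		char dma_buffer1[16];
		char dma_buffer2[16];
	} __attribute__((aligned(CACHELINE_SZ))) dma;  /* device-written */
	int lock2;                                     /* CPU-written */
};
```

The testable property is purely geometric: the DMA group begins on a line boundary and spans a whole number of lines, which is exactly what the begin/end annotations guarantee.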
2 days | fbdev: defio: Disconnect deferred I/O from the lifetime of struct fb_info | Thomas Zimmermann | 2 | -37/+145
[ Upstream commit 9ded47ad003f09a94b6a710b5c47f4aa5ceb7429 ] Hold state of deferred I/O in struct fb_deferred_io_state. Allocate an instance as part of initializing deferred I/O and remove it only after the final mapping has been closed. If the fb_info and the contained deferred I/O meanwhile goes away, clear struct fb_deferred_io_state.info to invalidate the mapping. Any access will then result in a SIGBUS signal. Fixes a long-standing problem, where a device hot-unplug happens while user space still has an active mapping of the graphics memory. The hot- unplug frees the instance of struct fb_info. Accessing the memory will operate on undefined state. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Fixes: 60b59beafba8 ("fbdev: mm: Deferred IO support") Cc: Helge Deller <deller@gmx.de> Cc: linux-fbdev@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: stable@vger.kernel.org # v2.6.22+ Signed-off-by: Helge Deller <deller@gmx.de> [ replaced kzalloc_obj(*fbdefio_state) with kzalloc(sizeof(*fbdefio_state), GFP_KERNEL) ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysmm/damon/core: disallow non-power of two min_region_sz on damon_start()SeongJae Park1-0/+5
commit 95093e5cb4c5b50a5b1a4b79f2942b62744bd66a upstream. Commit d8f867fa0825 ("mm/damon: add damon_ctx->min_sz_region") introduced a bug that allows unaligned DAMON region address ranges. Commit c80f46ac228b ("mm/damon/core: disallow non-power of two min_region_sz") fixed it, but only for damon_commit_ctx() use case. Still, DAMON sysfs interface can emit non-power of two min_region_sz via damon_start(). Fix the path by adding the is_power_of_2() check on damon_start(). The issue was discovered by sashiko [1]. Link: https://lore.kernel.org/20260411213638.77768-1-sj@kernel.org Link: https://lore.kernel.org/20260403155530.64647-1-sj@kernel.org [1] Fixes: d8f867fa0825 ("mm/damon: add damon_ctx->min_sz_region") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: <stable@vger.kernel.org> # 6.18.x Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysbpf: Fix use-after-free in arena_vm_close on forkAlexei Starovoitov1-3/+16
commit 4fddde2a732de60bb97e3307d4eb69ac5f1d2b74 upstream. arena_vm_open() only bumps vml->mmap_count but never registers the child VMA in arena->vma_list. The vml->vma always points at the parent VMA, so after parent munmap the pointer dangles. If the child then calls bpf_arena_free_pages(), zap_pages() reads the stale vml->vma triggering use-after-free. Fix this by preventing the arena VMA from being inherited across fork with VM_DONTCOPY, and preventing VMA splits via the may_split callback. Also reject mremap with a .mremap callback returning -EINVAL. A same-size mremap(MREMAP_FIXED) on the full arena VMA reaches copy_vma() through the following path:

check_prep_vma()       - returns 0 early: new_len == old_len skips VM_DONTEXPAND check
prep_move_vma()        - vm_start == old_addr and vm_end == old_addr + old_len so may_split is never called
move_vma()
  copy_vma_and_data()
    copy_vma()
      vm_area_dup()    - copies vm_private_data (vml pointer)
      vm_ops->open()   - bumps vml->mmap_count
  vm_ops->mremap()     - returns -EINVAL, rollback unmaps new VMA

The refcount ensures the rollback's arena_vm_close does not free the vml shared with the original VMA. Reported-by: Weiming Shi <bestswngs@gmail.com> Reported-by: Xiang Mei <xmei5@asu.edu> Fixes: 317460317a02 ("bpf: Introduce bpf_arena.") Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Link: https://lore.kernel.org/r/20260413194245.21449-1-alexei.starovoitov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysio_uring/tw: serialize ctx->retry_llist with ->uring_lockJens Axboe1-1/+11
Commit 17666e2d7592c3e85260cafd3950121524acc2c5 upstream. The DEFER_TASKRUN local task work paths all run under ctx->uring_lock, which serializes them with each other and with the rest of the ring's hot paths. io_move_task_work_from_local() is the exception - it's called from io_ring_exit_work() on a kworker without holding the lock and from the iopoll cancelation side right after dropping it. ->work_llist is fine with this, as it's only ever updated via the expected paths. But the ->retry_llist is updated while running, and hence it could potentially race between normal task_work running and the task-has-exited shutdown path. Simply grab ->uring_lock while moving the local work to the fallback list for exit purposes, which nicely serializes it across both the normal additions and the exit prune path. Cc: stable@vger.kernel.org Fixes: f46b9cdb22f7 ("io_uring: limit local tw done") Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysio_uring/kbuf: support min length left for incremental buffersMartin Michaelis3-4/+18
Commit 7deba791ad495ce1d7921683f4f7d1190fa210d1 upstream. Incrementally consumed buffer rings are generally fully consumed, but it's quite possible that the application has a minimum size it needs to meet to avoid truncation. Currently that minimum limit is 1 byte, but this should be a setting that is in the hands of the application. For recvmsg multishot, a prime use case for incrementally consumed buffers, the application may get spurious -EFAULT returned at the end of an incrementally consumed buffer, as less space is available than the headers need. Grab a u32 field in struct io_uring_buf_reg, which the application can use to inform the kernel of the minimum size that should be available in an incrementally consumed buffer. If less than that is available, the current buffer is fully processed and the next one will be picked. Cc: stable@vger.kernel.org Fixes: ae98dbf43d75 ("io_uring/kbuf: add support for incremental buffer consumption") Link: https://github.com/axboe/liburing/issues/1433 Signed-off-by: Martin Michaelis <code@mgjm.de> [axboe: write commit message, change io_buffer_list member name] Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: Use per-root-bridge PCIH flag to skip mem resource fixupHuacai Chen1-0/+5
commit 49f33840dcc907d21313d369e34872880846b61c upstream. When firmware enables 64-bit PCI host bridge support, some root bridges already provide valid 64-bit mem resource windows through ACPI. In this case, the LoongArch-specific mem resource high-bits fixup in acpi_prepare_root_resources() should not be applied unconditionally. Otherwise, the kernel may override the native resource layout derived from firmware, and later BAR assignment can fail to place device BARs into the intended 64-bit address space correctly. Add a per-root-bridge ACPI flag, PCIH, and evaluate it from the current root bridge device scope. When PCIH is set, skip the mem resource high- bits fixup path and let the kernel use the firmware-provided resource description directly. When PCIH is absent or cleared, keep the existing behavior and continue filling the high address bits from the host bridge address. This makes the behavior per-root-bridge configurable and avoids breaking valid 64-bit BAR space allocation on bridges whose 64-bit windows have already been fully described by firmware. Cc: stable@vger.kernel.org Suggested-by: Chao Li <lichao@loongson.cn> Tested-by: Dongyan Qian <qiandongyan@loongson.cn> Signed-off-by: Dongyan Qian <qiandongyan@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: KVM: Use kvm_set_pte() in kvm_flush_pte()Tao Cui1-1/+1
commit 81e18777d61440511451866c7c80b34a8bdd6b33 upstream. kvm_flush_pte() is the only caller that directly assigns *pte instead of using the kvm_set_pte() wrapper. Use the wrapper for consistency with the rest of the file. No functional change intended. Cc: stable@vger.kernel.org Reviewed-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Tao Cui <cuitao@kylinos.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: KVM: Move unconditional delay into timer clear sceneryBibo Mao1-2/+8
commit 5a873d77ba792410a796595a917be6a440f9b7d2 upstream. When a timer interrupt arrives in the guest kernel, the guest kernel clears the timer interrupt and programs the timer with the next incoming event. During this stage, the timer tick is -1 and the timer interrupt status is disabled in the ESTAT register. The KVM hypervisor needs to write zero to the timer tick register, wait for the timer interrupt injection from the HW side, and then clear the timer interrupt. So there is a 2-cycle delay in the KVM hypervisor to emulate this scenario, and the delay is unnecessary if there is no need to clear the timer interrupt. Move the 2-cycle delay into the timer clear scenario and add a timer ESTAT check after the delay, and set the max timer expire value if the timer interrupt still does not arrive. Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: KVM: Fix HW timer interrupt lost when inject interrupt by softwareBibo Mao1-0/+14
commit 2433f3f5724b3af569d9fb411ba728629524738b upstream. With a passthrough HW timer, the timer interrupt is injected by HW. When an emulated CPU interrupt such as SIP0/SIP1/IPI is injected by software, the HW timer interrupt may be lost. Check whether there is a timer tick value inversion before and after injecting the emulated CPU interrupt by software; checking timer enabling by reading the timer cfg register is skipped. If the timer tick value is detected to have changed, the timer should be enabled, and a timer interrupt is injected by software if one is pending. Cc: <stable@vger.kernel.org> Fixes: f45ad5b8aa93 ("LoongArch: KVM: Implement vcpu interrupt operations") Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: KVM: Fix "unreliable stack" for kvm_exc_entryXianglai Li1-1/+1
commit b323a441da602dfdfc24f30d3190cac786ffebf2 upstream. Insert the appropriate UNWIND hint into the kvm_exc_entry assembly function to guide the generation of correct ORC table entries, thereby solving the timeout problem ("unreliable stack") while loading the livepatch-sample module on a physical machine running virtual machines with multiple vcpus. Cc: stable@vger.kernel.org Signed-off-by: Xianglai Li <lixianglai@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: KVM: Cap KVM_CAP_NR_VCPUS by KVM_CAP_MAX_VCPUSQiang Ma1-1/+1
commit b3e31a6650d4cab63f0814c37c0b360372c6ee9e upstream. It doesn't make sense to return the recommended maximum number of vCPUs which exceeds the maximum possible number of vCPUs. Other architectures have already done this, such as commit 57a2e13ebdda ("KVM: MIPS: Cap KVM_CAP_NR_VCPUS by KVM_CAP_MAX_VCPUS") Cc: stable@vger.kernel.org Reviewed-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Qiang Ma <maqianga@uniontech.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysLoongArch: Fix potential ADE in loongson_gpu_fixup_dma_hang()Wentao Guan1-0/+3
commit 8dfa2f8780e486d05b9a0ffce70b8f5fbd62053e upstream. The switch case in loongson_gpu_fixup_dma_hang() may not be DC2 or DC3, and readl(crtc_reg) will access a random address, because the "device" is from "base+PCI_DEVICE_ID", "base" is from "pdev->devfn+1". This is wrong when my platform inserts a discrete GPU:

lspci -tv
-[0000:00]-+-00.0 Loongson Technology LLC Hyper Transport Bridge Controller
...
           +-06.0 Loongson Technology LLC LG100 GPU
           +-06.2 Loongson Technology LLC Device 7a37
...

Add a default switch case to fix the panic as below:

Kernel ade access[#1]:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.136-loong64-desktop-hwe+ #4
pc 90000000017e5534 ra 90000000017e54c0 tp 90000001002f8000 sp 90000001002fb6c0
a0 80000efe00003100 a1 0000000000003100 a2 0000000000000000 a3 0000000000000002
a4 90000001002fb6b4 a5 900000087cdb58fd a6 90000000027af000 a7 0000000000000001
t0 00000000000085b9 t1 000000000000ffff t2 0000000000000000 t3 0000000000000000
t4 fffffffffffffffd t5 00000000fffb6d9c t6 0000000000083b00 t7 00000000000070c0
t8 900000087cdb4d94 u0 900000087cdb58fd s9 90000001002fb826 s0 90000000031c12c8
s1 7fffffffffffff00 s2 90000000031c12d0 s3 0000000000002710 s4 0000000000000000
s5 0000000000000000 s6 9000000100053000 s7 7fffffffffffff00 s8 90000000030d4000
ra: 90000000017e54c0 loongson_gpu_fixup_dma_hang+0x40/0x210
ERA: 90000000017e5534 loongson_gpu_fixup_dma_hang+0xb4/0x210
CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE)
PRMD: 00000004 (PPLV0 +PIE -PWE)
EUEN: 00000000 (-FPE -SXE -ASXE -BTE)
ECFG: 00071c1d (LIE=0,2-4,10-12 VS=7)
ESTAT: 00480000 [ADEM] (IS= ECode=8 EsubCode=1)
BADV: 7fffffffffffff00
PRID: 0014d000 (Loongson-64bit, Loongson-3A6000-HV)
Modules linked in:
Process swapper/0 (pid: 1, threadinfo=(____ptrval____), task=(____ptrval____))
Stack : 0000000000000006 90000001002fb778 90000001002fb704 0000000000000007
        0000000016a65700 90000000017e5690 000000000000ffff ffffffffffffffff
        900000000209f7c0 9000000100053000 900000000209f7a8 9000000000eebc08
        0000000000000000 0000000000000000 0000000000000006 90000001002fb778
        90000001000530b8 90000000027af000 0000000000000000 9000000100054000
        9000000100053000 9000000000ebb70c 9000000100004c00 9000000004000001
        90000001002fb7e4 bae765461f31cb12 0000000000000000 0000000000000000
        0000000000000006 90000000027af000 0000000000000030 90000000027af000
        900000087cd6f800 9000000100053000 0000000000000000 9000000000ebc560
        7a2500147cdaf720 bae765461f31cb12 0000000000000001 0000000000000030
        ...
Call Trace:
[<90000000017e5534>] loongson_gpu_fixup_dma_hang+0xb4/0x210
[<9000000000eebc08>] pci_fixup_device+0x108/0x280
[<9000000000ebb70c>] pci_setup_device+0x24c/0x690
[<9000000000ebc560>] pci_scan_single_device+0xe0/0x140
[<9000000000ebc684>] pci_scan_slot+0xc4/0x280
[<9000000000ebdd00>] pci_scan_child_bus_extend+0x60/0x3f0
[<9000000000f5bc94>] acpi_pci_root_create+0x2b4/0x420
[<90000000017e5e74>] pci_acpi_scan_root+0x2d4/0x440
[<9000000000f5b02c>] acpi_pci_root_add+0x21c/0x3a0
[<9000000000f4ee54>] acpi_bus_attach+0x1a4/0x3c0
[<90000000010e200c>] device_for_each_child+0x6c/0xe0
[<9000000000f4bbf4>] acpi_dev_for_each_child+0x44/0x70
[<9000000000f4ef40>] acpi_bus_attach+0x290/0x3c0
[<90000000010e200c>] device_for_each_child+0x6c/0xe0
[<9000000000f4bbf4>] acpi_dev_for_each_child+0x44/0x70
[<9000000000f4ef40>] acpi_bus_attach+0x290/0x3c0
[<9000000000f5211c>] acpi_bus_scan+0x6c/0x280
[<900000000189c028>] acpi_scan_init+0x194/0x310
[<900000000189bc6c>] acpi_init+0xcc/0x140
[<9000000000220cdc>] do_one_initcall+0x4c/0x310
[<90000000018618fc>] kernel_init_freeable+0x258/0x2d4
[<900000000184326c>] kernel_init+0x28/0x13c
[<9000000000222008>] ret_from_kernel_thread+0xc/0xa4

Cc: stable@vger.kernel.org Fixes: 95db0c9f526d ("LoongArch: Workaround LS2K/LS7A GPU DMA hang bug") Link: https://gist.github.com/opsiff/ebf2dac51b4013d22462f2124c55f807 Link: https://gist.github.com/opsiff/a62f2a73db0492b3c49bf223a339b133 Signed-off-by: Wentao Guan <guanwentao@uniontech.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
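The fix boils down to adding a default case so an unrecognized device id never reaches readl() with an uninitialized register offset. A hedged sketch of the pattern (the device ids and offsets below are invented for illustration, not the real hardware values):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative device ids; in the kernel these come from
 * "base + PCI_DEVICE_ID", which may name a device that is
 * neither display controller. */
enum { DC2 = 2, DC3 = 3 };

static bool crtc_reg_offset(int device, unsigned long *off)
{
	switch (device) {
	case DC2:
		*off = 0x1240; /* illustrative offset */
		return true;
	case DC3:
		*off = 0x1250; /* illustrative offset */
		return true;
	default:
		/* The added default case: refuse unknown ids instead of
		 * leaving *off uninitialized and dereferencing it. */
		return false;
	}
}

static bool known_device(int device)
{
	unsigned long off;

	return crtc_reg_offset(device, &off);
}
```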
2 daysKVM: arm64: Fix pin leak and publication ordering in __pkvm_init_vcpu()Fuad Tabba1-13/+25
commit 73b9c1e5da84cd69b1a86e374e450817cd051371 upstream. Two bugs exist in the vCPU initialisation path: 1. If a check fails after hyp_pin_shared_mem() succeeds, the cleanup path jumps to 'unlock' without calling unpin_host_vcpu() or unpin_host_sve_state(), permanently leaking pin references on the host vCPU and SVE state pages. Extract a register_hyp_vcpu() helper that performs the checks and the store. When register_hyp_vcpu() returns an error, call unpin_host_vcpu() and unpin_host_sve_state() inline before falling through to the existing 'unlock' label. 2. register_hyp_vcpu() publishes the new vCPU pointer into 'hyp_vm->vcpus[]' with a bare store, allowing a concurrent caller of pkvm_load_hyp_vcpu() to observe a partially initialised vCPU object. Ensure the store uses smp_store_release() and the load uses smp_load_acquire(). While 'vm_table_lock' currently serialises the store and the load, these barriers ensure the reader sees the fully initialised 'hyp_vcpu' object even if there were a lockless path or if the lock's own ordering guarantees were insufficient for nested object initialization. Fixes: 49af6ddb8e5c ("KVM: arm64: Add infrastructure to create and track pKVM instances at EL2") Reported-by: Ben Simner <ben.simner@cl.cam.ac.uk> Co-developed-by: Will Deacon <willdeacon@google.com> Signed-off-by: Will Deacon <willdeacon@google.com> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://patch.msgid.link/20260424084908.370776-6-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
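The release/acquire publication pattern described in point 2 can be sketched with C11 atomics as a stand-in for smp_store_release()/smp_load_acquire() (all names here are illustrative, not the pKVM code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct vcpu {
	int initialized;
};

static struct vcpu vcpu_storage;
static _Atomic(struct vcpu *) vcpu_slot; /* NULL until published */

/* Fully initialize the object, then publish the pointer with release
 * ordering so a reader that observes the pointer also observes the
 * initialized fields. */
static void publish_vcpu(void)
{
	vcpu_storage.initialized = 1; /* init completes first */
	atomic_store_explicit(&vcpu_slot, &vcpu_storage,
			      memory_order_release);
}

/* Acquire load pairs with the release store above. */
static int load_vcpu_state(void)
{
	struct vcpu *v = atomic_load_explicit(&vcpu_slot,
					      memory_order_acquire);

	return v ? v->initialized : -1;
}

static int publish_then_read(void)
{
	publish_vcpu();
	return load_vcpu_state();
}
```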
2 daysKVM: arm64: Fix FEAT_Debugv8p9 to check DebugVer, not PMUVerFuad Tabba1-1/+1
commit 7fe2cd4e1a3ad230d8fcc00cc99c4bcce4412a75 upstream. FEAT_Debugv8p9 is incorrectly defined against ID_AA64DFR0_EL1.PMUVer instead of ID_AA64DFR0_EL1.DebugVer. All three consumers of the macro gate features that are architecturally tied to FEAT_Debugv8p9 (DebugVer = 0b1011, DDI0487 M.b A2.2.10): - HDFGRTR2_EL2.nMDSELR_EL1, HDFGWTR2_EL2.nMDSELR_EL1: MDSELR_EL1 is present only when FEAT_Debugv8p9 is implemented (D24.3.21). - MDCR_EL2.EBWE: the Extended Breakpoint and Watchpoint Enable bit is RES0 unless FEAT_Debugv8p9 is implemented (D24.3.17). Neither register has any dependency on PMUVer. FEAT_Debugv8p9 and FEAT_PMUv3p9 are independent. Per DDI0487 M.b A2.2.10, FEAT_Debugv8p9 is unconditionally mandatory from Armv8.9, whereas FEAT_PMUv3p9 is mandatory only when FEAT_PMUv3 is implemented. An Armv8.9 CPU without a PMU has DebugVer = 0b1011 but PMUVer = 0b0000, so the wrong field check would cause KVM to incorrectly treat EBWE and MDSELR_EL1 as RES0 on such hardware. Fixes: 4bc0fe089840 ("KVM: arm64: Add sanitisation for FEAT_FGT2 registers") Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://patch.msgid.link/20260424084908.370776-2-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysKVM: arm64: Fix FEAT_SPE_FnE to use PMSIDR_EL1.FnE, not PMSVerFuad Tabba1-3/+12
commit 08d715338287a1affb4c7ad5733decef4558a5c8 upstream. FEAT_SPE_FnE is architecturally detected via PMSIDR_EL1.FnE [6], not ID_AA64DFR0_EL1.PMSVer. The FEAT_X macro form (register, field, value) cannot encode a PMSIDR_EL1-based feature, so FEAT_SPE_FnE was defined identically to FEAT_SPEv1p2 (ID_AA64DFR0_EL1, PMSVer, V1P2), producing a duplicate that used PMSVer >= V1P2 as a proxy. Replace the macro with feat_spe_fne(), following the same pattern as the sibling feat_spe_fds(): guard on FEAT_SPEv1p2 and read PMSIDR_EL1.FnE [6] directly. Wire the two NEEDS_FEAT consumers to use the new function. Remove the now-unused FEAT_SPE_FnE macro. Fixes: 63d423a7635b ("KVM: arm64: Switch to table-driven FGU configuration") Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://patch.msgid.link/20260424084908.370776-4-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysKVM: arm64: Fix initialisation order in __pkvm_init_finalise()Quentin Perret1-3/+3
commit 5bb0aed57ba944f8c201e4e82ec066e0187e0f85 upstream. fix_host_ownership() walks the hypervisor's stage-1 page-table to adjust the host's stage-2 accordingly. Any such adjustment that requires cache maintenance operations depends on the per-CPU hyp fixmap being present. However, fix_host_ownership() is currently called before fix_hyp_pgtable_refcnt() and hyp_create_fixmap(), so the fixmap does not yet exist when it runs. This is benign today because the host stage-2 starts empty and no CMOs are needed, but it becomes a latent crash as soon as fix_host_ownership() is extended to operate on a non-empty page-table. Reorder the calls so that fix_hyp_pgtable_refcnt() and hyp_create_fixmap() complete before fix_host_ownership() is invoked. Fixes: 0d16d12eb26e ("KVM: arm64: Fix-up hyp stage-1 refcounts for all pages mapped at EL2") Signed-off-by: Quentin Perret <qperret@google.com> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://patch.msgid.link/20260424084908.370776-7-tabba@google.com Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysKVM: arm64: vgic: Fix IIDR revision field extracted from wrong valueDavid Woodhouse2-2/+2
commit a0e6ae45af17e8b27958830595799c702ffbab8d upstream. The uaccess write handlers for GICD_IIDR in both GICv2 and GICv3 extract the revision field from 'reg' (the current IIDR value read back from the emulated distributor) instead of 'val' (the value userspace is trying to write). This means userspace can never actually change the implementation revision — the extracted value is always the current one. Fix the FIELD_GET to use 'val' so that userspace can select a different revision for migration compatibility. Fixes: 49a1a2c70a7f ("KVM: arm64: vgic-v3: Advertise GICR_CTLR.{IR, CES} as a new GICD_IIDR revision") Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Link: https://patch.msgid.link/20260407210949.2076251-2-dwmw2@infradead.org Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
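The bug class is simple to model: the field must be extracted from the value being written, not from the readback. A sketch with an assumed 4-bit revision field (the real IIDR layout differs; this is illustration only):

```c
#include <assert.h>
#include <stdint.h>

#define REV_MASK 0x0fu /* illustrative revision field, bits [3:0] */

/* The buggy handler extracted the revision from 'reg' (current value),
 * so the write was a no-op for that field. */
static uint32_t write_iidr_buggy(uint32_t reg, uint32_t val)
{
	(void)val; /* the written value is never consulted */
	return (reg & ~REV_MASK) | (reg & REV_MASK);
}

/* The fix: take the field from 'val', the value userspace wrote. */
static uint32_t write_iidr_fixed(uint32_t reg, uint32_t val)
{
	return (reg & ~REV_MASK) | (val & REV_MASK);
}
```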
2 daysKVM: arm64: Wake-up from WFI when irqchip is in userspaceMarc Zyngier1-0/+4
commit 4ce98bf0865c349e7026ad9c14f48da264920953 upstream. It appears that there is nothing in the wake-up path that evaluates whether the in-kernel interrupts are pending unless we have a vgic. This means that the userspace irqchip support has been broken for about four years, and nobody noticed. It was also broken before as we wouldn't wake-up on a PMU interrupt, but hey, who cares... It is probably time to remove the feature altogether, because it was a terrible idea 10 years ago, and it still is. Fixes: b57de4ffd7c6d ("KVM: arm64: Simplify kvm_cpu_has_pending_timer()") Link: https://patch.msgid.link/20260423163607.486345-1-maz@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix fsck inconsistency caused by FGGC of node blockYongpeng Yang1-14/+13
commit c3e238bd1f56993f205ef83889d406dfeaf717a8 upstream. During FGGC node block migration, fsck may incorrectly treat the migrated node block as fsync-written data. The reproduction scenario:

root@vm:/mnt/f2fs# seq 1 2048 | xargs -n 1 ./test_sync // write inline inode and sync
root@vm:/mnt/f2fs# rm -f 1
root@vm:/mnt/f2fs# sync
root@vm:/mnt/f2fs# f2fs_io gc_range // move data block in sync mode and not write CP
SPO, "fsck --dry-run" finds the inode has already been checkpointed but still has DENT_BIT_SHIFT set

The root cause is that GC does not clear the dentry mark and fsync mark during node block migration, leading fsck to misinterpret them as user-issued fsync writes. In BGGC mode, node block migration is handled by f2fs_sync_node_pages(), which guarantees the dentry and fsync marks are cleared before writing. This patch moves the set/clear of the fsync|dentry marks into __write_node_folio to make the logic clearer, and ensures the fsync|dentry mark is cleared in FGGC. Cc: stable@kernel.org Fixes: da011cc0da8c ("f2fs: move node pages only in victim section during GC") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix inline data not being written to disk in writeback pathYongpeng Yang3-1/+12
commit fe9b8b30b97102859a9102be7bd2a09803bd90bd upstream. When f2fs_fiemap() is called with `fileinfo->fi_flags` containing the FIEMAP_FLAG_SYNC flag, it attempts to write data to disk before retrieving file mappings via filemap_write_and_wait(). However, there is an issue where the file does not get mapped as expected. The following scenario can occur:

root@vm:/mnt/f2fs# dd if=/dev/zero of=data.3k bs=3k count=1
root@vm:/mnt/f2fs# xfs_io data.3k -c "fiemap -v 0 4096"
data.3k:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..5]:          0..5                 6 0x307

The root cause of this issue is that f2fs_write_single_data_page() only calls f2fs_write_inline_data() to copy data from the data folio to the inode folio, and it clears the dirty flag on the data folio. However, it does not mark the data folio as writeback. When __filemap_fdatawait_range() checks for folios with the writeback flag, it returns early, causing f2fs_fiemap() to report that the file has no mapping. To fix this issue, the solution is to call f2fs_write_single_node_folio() in f2fs_inline_data_fiemap() when getting fiemap with FIEMAP_FLAG_SYNC flags. This patch ensures that the inode folio is written back and the writeback process completes before proceeding. Cc: stable@kernel.org Fixes: 9ffe0fb5f3bb ("f2fs: handle inline data operations") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: refactor f2fs_move_node_folio functionYongpeng Yang1-22/+32
commit 92c20989366e023b74fa0c1028af9436c1917dbf upstream. This patch refactors the f2fs_move_node_folio() function. No logical changes. Cc: stable@kernel.org Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix uninitialized kobject put in f2fs_init_sysfs()Guangshuo Li1-4/+6
commit b635f2ecdb5ad34f9c967cabb704d6bed9382fd0 upstream. In f2fs_init_sysfs(), all failure paths after kset_register() jump to put_kobject, which unconditionally releases both f2fs_tune and f2fs_feat. If kobject_init_and_add(&f2fs_feat, ...) fails, f2fs_tune has not been initialized yet, so calling kobject_put(&f2fs_tune) is invalid. Fix this by splitting the unwind path so each error path only releases objects that were successfully initialized. Fixes: a907f3a68ee26ba4 ("f2fs: add a sysfs entry to reclaim POSIX_FADV_NOREUSE pages") Cc: stable@vger.kernel.org Signed-off-by: Guangshuo Li <lgs201920130244@gmail.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
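The split-unwind idiom the fix applies can be sketched generically (the init/release functions and the counter are hypothetical instrumentation for this sketch, not f2fs code): each error label releases only what was already initialized, so a failure initializing the second object never touches the first uninitialized one.

```c
#include <assert.h>
#include <stdbool.h>

/* Model: object 1 plays the role of f2fs_feat (initialized first),
 * object 2 the role of f2fs_tune.  'released' counts release calls. */
static int init_two(bool first_ok, bool second_ok, int *released)
{
	if (!first_ok)
		return -1;	/* nothing initialized, release nothing */
	if (!second_ok)
		goto put_first;	/* only the first object exists */
	return 0;

put_first:
	(*released)++;		/* kobject_put() analogue, first obj only */
	return -1;
}

static int releases_when_second_fails(void)
{
	int released = 0;

	init_two(true, false, &released);
	return released;
}

static int releases_when_first_fails(void)
{
	int released = 0;

	init_two(false, true, &released);
	return released;
}
```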
2 daysf2fs: fix node_cnt race between extent node destroy and writebackYongpeng Yang1-7/+10
commit ed78aeebef05212ef7dca93bd931e4eff67c113f upstream. f2fs_destroy_extent_node() does not set FI_NO_EXTENT before clearing extent nodes. When called from f2fs_drop_inode() with I_SYNC set, concurrent kworker writeback can insert new extent nodes into the same extent tree, racing with the destroy and triggering f2fs_bug_on() in __destroy_extent_node(). The scenario is as follows:

drop inode                              writeback
- iput
 - f2fs_drop_inode // I_SYNC set
  - f2fs_destroy_extent_node
   - __destroy_extent_node
    - while (node_cnt) {
        write_lock(&et->lock)
        __free_extent_tree
        write_unlock(&et->lock)
                                        - __writeback_single_inode
                                         - f2fs_outplace_write_data
                                          - f2fs_update_read_extent_cache
                                           - __update_extent_tree_range
                                             // FI_NO_EXTENT not set,
                                             // insert new extent node
      }
      // node_cnt == 0, exit while
    - f2fs_bug_on(node_cnt) // node_cnt > 0

Additionally, __update_extent_tree_range() only checks FI_NO_EXTENT for EX_READ type, leaving EX_BLOCK_AGE updates completely unprotected. This patch sets FI_NO_EXTENT under et->lock in __destroy_extent_node(), consistent with other callers (__update_extent_tree_range and __drop_extent_tree), and checks FI_NO_EXTENT for both the EX_READ and EX_BLOCK_AGE trees. Fixes: 3fc5d5a182f6 ("f2fs: fix to shrink read extent node in batches") Cc: stable@vger.kernel.org Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix incorrect multidevice info in trace_f2fs_map_blocks()Yongpeng Yang1-1/+2
commit eb2ca3ca983551a80e16a4a25df5a4ce59df8484 upstream. When f2fs_map_blocks()->f2fs_map_blocks_cached() hits the read extent cache, map->m_multidev_dio is not updated, which leads to incorrect multidevice information being reported by trace_f2fs_map_blocks(). This patch updates map->m_multidev_dio in f2fs_map_blocks_cached() when the read extent cache is hit. Cc: stable@kernel.org Fixes: 0094e98bd147 ("f2fs: factor a f2fs_map_blocks_cached helper") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix incorrect file address mapping when inline inode is unwrittenYongpeng Yang1-4/+9
commit 68a0178981a0f493295afa29f8880246e561494c upstream. When `fileinfo->fi_flags` does not have the `FIEMAP_FLAG_SYNC` bit set and inline data has not been persisted yet, the physical address of the extent is calculated incorrectly for unwritten inline inodes.

root@vm:/mnt/f2fs# dd if=/dev/zero of=data.3k bs=3k count=1
root@vm:/mnt/f2fs# f2fs_io fiemap 0 100 data.3k
Fiemap: offset = 0 len = 100
	logical addr.    physical addr.   length           flags
0	0000000000000000 00000ffffffff16c 0000000000000c00 00000301

This patch fixes the issue by checking if the inode's address is valid. If the inline inode is unwritten, set the physical address to 0 and mark the extent with `FIEMAP_EXTENT_UNKNOWN | FIEMAP_EXTENT_DELALLOC` flags. Cc: stable@kernel.org Fixes: 67f8cf3cee6f ("f2fs: support fiemap for inline_data") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix fsck inconsistency caused by incorrect nat_entry flag usageYongpeng Yang1-9/+5
commit 019f9dda7f66e55eb94cd32e1d3fff5835f73fbc upstream. f2fs_need_dentry_mark() reads nat_entry flags without mutual exclusion with the checkpoint path, which can result in an incorrect inode block marking state. The scenario is as follows:

create & write & fsync 'file A'                 write checkpoint
- f2fs_do_sync_file // inline inode
- f2fs_write_inode // inode folio is dirty
                                                - f2fs_write_checkpoint
                                                 - f2fs_flush_merged_writes
                                                  - f2fs_sync_node_pages
- f2fs_fsync_node_pages // no dirty node
- f2fs_need_inode_block_update // return true
- f2fs_fsync_node_pages // inode dirtied
- f2fs_need_dentry_mark // return true
                                                 - f2fs_flush_nat_entries
                                                 - f2fs_write_checkpoint end
- __write_node_folio // inode with DENT_BIT_SHIFT set

SPO, "fsck --dry-run" finds the inode has already been checkpointed but still has DENT_BIT_SHIFT set

The state observed by f2fs_need_dentry_mark() can differ from the state observed in __write_node_folio() after acquiring sbi->node_write. The root cause is that the semantics of IS_CHECKPOINTED and HAS_FSYNCED_INODE are only guaranteed after the checkpoint write has fully completed. This patch moves set_dentry_mark() into __write_node_folio() and protects it with the sbi->node_write lock. Cc: stable@kernel.org Fixes: 88bd02c9472a ("f2fs: fix conditions to remain recovery information in f2fs_sync_file") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: fix fiemap boundary handling when read extent cache is incompleteYongpeng Yang1-3/+22
commit 95e159ad3e52f7478cfd22e44ec37c9f334f8993 upstream. f2fs_fiemap() calls f2fs_map_blocks() to obtain the block mapping of a file, and then merges contiguous mappings into extents. If the mapping is found in the read extent cache, node blocks do not need to be read. However, in the following scenario, a contiguous extent can be split into two extents:

$ dd if=/dev/zero of=data.128M bs=1M count=128
$ losetup -f data.128M
$ mkfs.f2fs /dev/loop0 -f
$ mount -o mode=lfs /dev/loop0 /mnt/f2fs/
$ cd /mnt/f2fs/
$ dd if=/dev/zero of=data.72M bs=1M count=72 && sync
$ dd if=/dev/zero of=data.4M bs=1M count=4 && sync
$ dd if=/dev/zero of=data.4M bs=1M count=2 seek=2 conv=notrunc && sync
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/zero of=data.4M bs=1M count=2 seek=0 conv=notrunc && sync
$ dd if=/dev/zero of=data.4M bs=1M count=2 seek=0 conv=notrunc && sync
$ f2fs_io fiemap 0 1024 data.4M
Fiemap: offset = 0 len = 1024
	logical addr.    physical addr.   length           flags
0	0000000000000000 0000000006400000 0000000000200000 00001000
1	0000000000200000 0000000006600000 0000000000200000 00001001

Although the physical addresses of the ranges 0~2MB and 2M~4MB are contiguous, the mapping for the 2M~4MB range is not present in memory. When the physical addresses for the 0~2MB range are updated, no merge happens because the adjacent mapping is missing from the in-memory cache. As a result, fiemap reports two separate extents instead of a single contiguous one. The root cause is that the read extent cache does not guarantee that all blocks of an extent are present in memory. Therefore, when the extent length returned by f2fs_map_blocks_cached() is smaller than maxblocks, the remaining mappings are retrieved via f2fs_get_dnode_of_data() to ensure correct fiemap extent boundary handling.
Cc: stable@kernel.org Fixes: cd8fc5226bef ("f2fs: remove the create argument to f2fs_map_blocks") Signed-off-by: Yongpeng Yang <yangyongpeng@xiaomi.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 daysf2fs: add READ_ONCE() for i_blocks in f2fs_update_inode()Cen Zhang1-1/+1
commit 5471834a96fb697874be2ca0b052e74bcf3c23d1 upstream. f2fs_update_inode() reads inode->i_blocks without holding i_lock when serializing it to the on-disk inode, while concurrent truncate or allocation paths may modify i_blocks under i_lock. Since blkcnt_t is u64, this risks torn reads on 32-bit architectures. Following the approach in ext4_inode_blocks_set(), add READ_ONCE() to prevent potential compiler-induced tearing. Fixes: 19f99cee206c ("f2fs: add core inode operations") Cc: stable@vger.kernel.org Signed-off-by: Cen Zhang <zzzccc427@gmail.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
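A userspace model of what READ_ONCE() buys here (the macro below mimics the kernel's volatile-cast definition; it prevents the compiler from splitting or repeating the load, though by itself it does not make a 64-bit load single-instruction atomic on 32-bit hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the kernel's READ_ONCE(): force exactly one
 * access through a volatile-qualified pointer. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

/* Illustrative stand-in for inode->i_blocks; the value straddles both
 * 32-bit halves so a torn read would be visible. */
static uint64_t i_blocks = 0x100000001ull;

static uint64_t snapshot_i_blocks(void)
{
	return READ_ONCE(i_blocks);
}
```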