<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/fs/ntfs3, branch v6.18.21</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.18.21</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.18.21'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2026-03-04T12:20:38+00:00</updated>
<entry>
<title>fs/ntfs3: avoid calling run_get_entry() when run == NULL in ntfs_read_run_nb_ra()</title>
<updated>2026-03-04T12:20:38+00:00</updated>
<author>
<name>Konstantin Komarov</name>
<email>almaz.alexandrovich@paragon-software.com</email>
</author>
<published>2026-02-09T15:07:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=57633b43a603fbfdeeff36276e538bd5a06c46fd'/>
<id>urn:sha1:57633b43a603fbfdeeff36276e538bd5a06c46fd</id>
<content type='text'>
[ Upstream commit c5226b96c08a010ebef5fdf4c90572bcd89e4299 ]

When ntfs_read_run_nb_ra() is invoked with run == NULL, the code later
assumes run is valid and may call run_get_entry(NULL, ...), and it also
uses clen and idx without initializing them. Smatch reported
uninitialized-variable warnings, and the NULL use can lead to undefined
behaviour. Fix this by guarding the run == NULL case so that
run_get_entry() is never called with a NULL run and clen/idx are
initialized before use.
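
The guard can be sketched in plain C; read_run_guarded() and its
-EINVAL fallback are illustrative stand-ins for the actual
ntfs_read_run_nb_ra() change, not the upstream code:

```c
#include "assert.h"
#include "errno.h"

/* Hypothetical stand-in for ntfs_read_run_nb_ra(): bail out on a NULL
 * run instead of reaching run_get_entry(NULL, ...), and give clen/idx
 * defined values before any use. */
static int read_run_guarded(const void *run)
{
	unsigned long long clen = 0;	/* initialized up front */
	unsigned long long idx = 0;

	if (!run)
		return -EINVAL;		/* never pass a NULL run along */

	/* run_get_entry(run, idx, ..., clen) would be safe to call here */
	(void)clen;
	(void)idx;
	return 0;
}
```

With run == NULL the function now fails fast instead of dereferencing
the pointer or reading uninitialized locals.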

Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Reported-by: Dan Carpenter &lt;dan.carpenter@linaro.org&gt;
Closes: https://lore.kernel.org/r/202512230646.v5hrYXL0-lkp@intel.com/
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>ntfs3: fix circular locking dependency in run_unpack_ex</title>
<updated>2026-03-04T12:20:38+00:00</updated>
<author>
<name>Szymon Wilczek</name>
<email>swilczek.lx@gmail.com</email>
</author>
<published>2025-12-27T14:43:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b014372b62237521444ee51384549bdf48b79015'/>
<id>urn:sha1:b014372b62237521444ee51384549bdf48b79015</id>
<content type='text'>
[ Upstream commit 08ce2fee1b869ecbfbd94e0eb2630e52203a2e03 ]

Syzbot reported a circular locking dependency between wnd-&gt;rw_lock
(sbi-&gt;used.bitmap) and ni-&gt;file.run_lock.

The deadlock scenario:
1. ntfs_extend_mft() takes ni-&gt;file.run_lock then wnd-&gt;rw_lock.
2. run_unpack_ex() takes wnd-&gt;rw_lock then tries to acquire
   ni-&gt;file.run_lock inside ntfs_refresh_zone().

This creates an AB-BA deadlock.

Fix this by using down_read_trylock() instead of down_read() when
acquiring run_lock in run_unpack_ex(). If the lock is contended,
skip ntfs_refresh_zone() - the MFT zone will be refreshed on the
next MFT operation. This breaks the circular dependency since we
never block waiting for run_lock while holding wnd-&gt;rw_lock.
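
A minimal userspace model of that trylock pattern, with plain ints
standing in for run_lock and ntfs_refresh_zone() (all names below are
hypothetical, not the fs/ntfs3 symbols):

```c
#include "assert.h"

static int run_lock = 0;	/* 0 = free, 1 = held by another path */
static int zone_refreshed = 0;

/* models down_read_trylock(): never blocks, reports success/failure */
static int trylock_run_lock(void)
{
	if (run_lock)
		return 0;	/* contended: caller must not wait */
	run_lock = 1;
	return 1;
}

static void unlock_run_lock(void)
{
	run_lock = 0;
}

static void hold_run_lock(void)	/* simulate contention from elsewhere */
{
	run_lock = 1;
}

static int zone_was_refreshed(void)
{
	return zone_refreshed;
}

/* models the tail of run_unpack_ex(): refresh the zone only if the
 * lock was acquired, otherwise skip and let a later MFT op do it */
static int maybe_refresh_zone(void)
{
	if (!trylock_run_lock())
		return 0;	/* skipped, but no AB-BA deadlock */
	zone_refreshed = 1;
	unlock_run_lock();
	return 1;
}
```

When the lock is contended the refresh is simply skipped, so this path
never waits on run_lock while the caller holds the bitmap lock.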

Reported-by: syzbot+d27edf9f96ae85939222@syzkaller.appspotmail.com
Tested-by: syzbot+d27edf9f96ae85939222@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=d27edf9f96ae85939222
Signed-off-by: Szymon Wilczek &lt;swilczek.lx@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/ntfs3: drop preallocated clusters for sparse and compressed files</title>
<updated>2026-03-04T12:20:37+00:00</updated>
<author>
<name>Konstantin Komarov</name>
<email>almaz.alexandrovich@paragon-software.com</email>
</author>
<published>2025-12-12T11:27:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b67fb01429fff093ac5dfe4a2cb9822b86382374'/>
<id>urn:sha1:b67fb01429fff093ac5dfe4a2cb9822b86382374</id>
<content type='text'>
[ Upstream commit 3a6aba7f3cf2b46816e08548c254d98de9c74eba ]

Do not keep preallocated clusters for sparse and compressed files.
Preserving preallocation in these cases causes fsx failures when running
with sparse files and preallocation enabled.

Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs: ntfs3: fix infinite loop triggered by zero-sized ATTR_LIST</title>
<updated>2026-03-04T12:20:37+00:00</updated>
<author>
<name>Jaehun Gou</name>
<email>p22gone@gmail.com</email>
</author>
<published>2025-12-02T11:01:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=9779a6eaaabdf47aa57910d352b398ad742e6a5f'/>
<id>urn:sha1:9779a6eaaabdf47aa57910d352b398ad742e6a5f</id>
<content type='text'>
[ Upstream commit 06909b2549d631a47fcda249d34be26f7ca1711d ]

We found an infinite loop bug in the ntfs3 file system that can lead to a
Denial-of-Service (DoS) condition.

A malformed NTFS image can cause an infinite loop when an ATTR_LIST attribute
indicates a zero data size while the driver allocates memory for it.

When ntfs_load_attr_list() processes a resident ATTR_LIST with data_size set
to zero, it still allocates memory because of al_aligned(0). This creates an
inconsistent state where ni-&gt;attr_list.size is zero, but ni-&gt;attr_list.le is
non-null. This causes ni_enum_attr_ex() to incorrectly assume that no attribute
list exists and enumerates only the primary MFT record. When it finds
ATTR_LIST, the code reloads it and restarts the enumeration, repeating
indefinitely. The mount operation never completes, hanging the kernel thread.

This patch adds validation to ensure that data_size is non-zero before memory
allocation. When a zero-sized ATTR_LIST is detected, the function returns
-EINVAL, preventing a DoS vulnerability.
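
The validation amounts to a one-line early return before the
allocation; load_attr_list() below is a hypothetical sketch, not the
real ntfs_load_attr_list() signature:

```c
#include "assert.h"
#include "errno.h"
#include "stddef.h"

/* Reject a zero-sized ATTR_LIST before allocating, so the driver can
 * never end up with size == 0 but a non-NULL attr_list buffer. */
static int load_attr_list(size_t data_size)
{
	if (data_size == 0)
		return -EINVAL;	/* malformed image: refuse it */

	/* ... allocate al_aligned(data_size) bytes and parse ... */
	return 0;
}
```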

Co-developed-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Signed-off-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Co-developed-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jaehun Gou &lt;p22gone@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs: ntfs3: fix infinite loop in attr_load_runs_range on inconsistent metadata</title>
<updated>2026-03-04T12:20:37+00:00</updated>
<author>
<name>Jaehun Gou</name>
<email>p22gone@gmail.com</email>
</author>
<published>2025-12-02T11:01:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=c0b43c45d45f59e7faad48675a50231a210c379b'/>
<id>urn:sha1:c0b43c45d45f59e7faad48675a50231a210c379b</id>
<content type='text'>
[ Upstream commit 4b90f16e4bb5607fb35e7802eb67874038da4640 ]

We found an infinite loop bug in the ntfs3 file system that can lead to a
Denial-of-Service (DoS) condition.

A malformed NTFS image can cause an infinite loop when an attribute header
indicates an empty run list, while directory entries reference it as
containing actual data. In NTFS, setting evcn=-1 with svcn=0 is a valid way
to represent an empty run list, and run_unpack() correctly handles this by
checking if evcn + 1 equals svcn and returning early without parsing any run
data. However, this creates a problem when there is metadata inconsistency,
where the attribute header claims to be empty (evcn=-1) but the caller
expects to read actual data. When run_unpack() immediately returns success
upon seeing this condition, it leaves the runs_tree uninitialized, with
run-&gt;runs still NULL. The calling function attr_load_runs_range() assumes
that a successful return means that the runs were loaded and sets clen to 0,
expecting the next run_lookup_entry() call to succeed. Because runs_tree
remains uninitialized, run_lookup_entry() continues to fail, and the loop
increments vcn by zero (vcn += 0), leading to an infinite loop.

This patch adds a retry counter to detect when run_lookup_entry() fails
consecutively after attr_load_runs_vcn(). If the run is still not found on
the second attempt, it indicates corrupted metadata and returns -EINVAL,
preventing the Denial-of-Service (DoS) vulnerability.
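
The retry-counter idea can be modeled in a few lines; lookup() below is
a toy stand-in for run_lookup_entry() and all constants are
illustrative:

```c
#include "assert.h"
#include "errno.h"

static int lookup_ok = 0;	/* toggle: good vs. corrupt metadata */

static int lookup(void)		/* stand-in for run_lookup_entry() */
{
	return lookup_ok;
}

static void make_lookup_succeed(void)
{
	lookup_ok = 1;
}

/* models attr_load_runs_range(): if the lookup fails twice in a row
 * even after a reload, treat the metadata as corrupt and stop */
static int load_runs_range(void)
{
	unsigned long long vcn = 0, end = 4;
	int retries = 0;

	while (end > vcn) {
		if (!lookup()) {
			retries++;
			if (retries > 1)
				return -EINVAL;	/* no forward progress */
			continue;		/* reload runs, retry once */
		}
		retries = 0;
		vcn++;		/* advance by the found run's length */
	}
	return 0;
}
```

With a permanently failing lookup the loop now exits on the second
consecutive miss instead of spinning with vcn unchanged.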

Co-developed-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Signed-off-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Co-developed-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jaehun Gou &lt;p22gone@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs: ntfs3: check return value of indx_find to avoid infinite loop</title>
<updated>2026-03-04T12:20:37+00:00</updated>
<author>
<name>Jaehun Gou</name>
<email>p22gone@gmail.com</email>
</author>
<published>2025-12-02T10:59:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b0ea441f44ce64fa514a415d4a9e6e2b06e7946c'/>
<id>urn:sha1:b0ea441f44ce64fa514a415d4a9e6e2b06e7946c</id>
<content type='text'>
[ Upstream commit 1732053c8a6b360e2d5afb1b34fe9779398b072c ]

We found an infinite loop bug in the ntfs3 file system that can lead to a
Denial-of-Service (DoS) condition.

A malformed dentry in the ntfs3 filesystem can cause the kernel to hang
during lookup operations. By setting the HAS_SUB_NODE flag in an
INDEX_ENTRY within a directory's INDEX_ALLOCATION block and manipulating the
VCN pointer, an attacker can cause the indx_find() function to repeatedly
read the same block, allocating 4 KB of memory each time. The kernel lacks
VCN loop detection and depth limits, causing memory exhaustion and an OOM
crash.

This patch adds a return value check for fnd_push() to prevent a memory
exhaustion vulnerability caused by infinite loops. When the index exceeds the
size of the fnd-&gt;nodes array, fnd_push() returns -EINVAL. The indx_find()
function checks this return value and stops processing, preventing further
memory allocation.
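
The shape of the fix is a bounded push plus an error check in the
walker; MAX_DEPTH, fnd_push() and indx_find_walk() here are simplified
stand-ins for the fnd-&gt;nodes logic:

```c
#include "assert.h"
#include "errno.h"

enum { MAX_DEPTH = 20 };	/* models the size of the nodes array */
static int depth = 0;

/* models fnd_push(): refuse to grow past the nodes array */
static int fnd_push(void)
{
	if (depth >= MAX_DEPTH)
		return -EINVAL;
	depth++;
	return 0;
}

/* models the descent loop in indx_find(): a crafted VCN cycle would
 * keep descending forever, so propagate fnd_push() failure instead */
static int indx_find_walk(int levels)
{
	int i, err;

	for (i = 0; levels > i; i++) {
		err = fnd_push();
		if (err)
			return err;	/* stop: no unbounded allocation */
	}
	return 0;
}
```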

Co-developed-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Signed-off-by: Seunghun Han &lt;kkamagui@gmail.com&gt;
Co-developed-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jihoon Kwon &lt;kjh010315@gmail.com&gt;
Signed-off-by: Jaehun Gou &lt;p22gone@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>ntfs: -&gt;d_compare() must not block</title>
<updated>2026-03-04T12:19:34+00:00</updated>
<author>
<name>Al Viro</name>
<email>viro@zeniv.linux.org.uk</email>
</author>
<published>2025-11-19T21:15:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=142c444a395f4d26055c8a4473e228bb86283f1e'/>
<id>urn:sha1:142c444a395f4d26055c8a4473e228bb86283f1e</id>
<content type='text'>
[ Upstream commit ca2a04e84af79596e5cd9cfe697d5122ec39c8ce ]

... so don't use __getname() there.  Switch it (and ntfs_d_hash(), while
we are at it) to kmalloc(PATH_MAX, GFP_NOWAIT).  Yes, ntfs_d_hash()
almost certainly can do with smaller allocations, but let ntfs folks
deal with that - keep the allocation size as-is for now.

Stop abusing names_cachep in ntfs, period - various uses of that thing
in there have nothing to do with pathnames; just use k[mz]alloc() and
be done with that.  For now let's keep sizes as-is, but AFAICS none of
the users actually want PATH_MAX.

Signed-off-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/ntfs3: Fix slab-out-of-bounds read in DeleteIndexEntryRoot</title>
<updated>2026-02-26T22:59:36+00:00</updated>
<author>
<name>Jiasheng Jiang</name>
<email>jiashengjiangcool@gmail.com</email>
</author>
<published>2026-01-17T16:50:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=78942172d5bff4d4afed8674abc09cc560ce44a0'/>
<id>urn:sha1:78942172d5bff4d4afed8674abc09cc560ce44a0</id>
<content type='text'>
[ Upstream commit b2bc7c44ed1779fc9eaab9a186db0f0d01439622 ]

In the 'DeleteIndexEntryRoot' case of the 'do_action' function, the
entry size ('esize') is retrieved from the log record without adequate
bounds checking.

Specifically, the code calculates the end of the entry ('e2') using:
    e2 = Add2Ptr(e1, esize);

It then calculates the size for memmove using 'PtrOffset(e2, ...)',
which subtracts the end pointer from the buffer limit. If 'esize' is
maliciously large, 'e2' exceeds the used buffer size. This results in
a negative offset which, when cast to size_t for memmove, is interpreted
as a massive unsigned integer, leading to a heap buffer overflow.

This commit adds a check to ensure that the entry size ('esize') strictly
fits within the remaining used space of the index header before performing
memory operations.
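
The added check reduces to arithmetic on offsets; esize_fits() below is
an illustrative helper, not the upstream code:

```c
#include "assert.h"

/* Does an entry of esize bytes, starting at offset off_e1, fit inside
 * the used part of the index buffer?  Working with offsets means the
 * e2 = Add2Ptr(e1, esize) overrun can never be computed at all. */
static int esize_fits(unsigned int off_e1, unsigned int esize,
		      unsigned int used)
{
	if (off_e1 > used)
		return 0;	/* e1 already out of bounds */
	if (esize > used - off_e1)
		return 0;	/* e2 would pass the used limit */
	return 1;		/* safe to compute e2 and memmove */
}
```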

Fixes: b46acd6a6a62 ("fs/ntfs3: Add NTFS journal")
Signed-off-by: Jiasheng Jiang &lt;jiashengjiangcool@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/ntfs3: prevent infinite loops caused by the next valid being the same</title>
<updated>2026-02-26T22:59:36+00:00</updated>
<author>
<name>Edward Adam Davis</name>
<email>eadavis@qq.com</email>
</author>
<published>2025-12-28T03:53:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4bf3bafb8e0635ed93e3cd4156dcbcc0fb960cb4'/>
<id>urn:sha1:4bf3bafb8e0635ed93e3cd4156dcbcc0fb960cb4</id>
<content type='text'>
[ Upstream commit 27b75ca4e51e3e4554dc85dbf1a0246c66106fd3 ]

When processing valid within the range [valid : pos), if valid cannot
be retrieved correctly, for example, if the value read back is always
the same, the loop makes no forward progress and spins forever, which
matches the hang reported by syzbot [1].

Prevent this by checking the valid value within the loop body and
terminating the loop with -EINVAL if the value is unchanged from the
previous iteration.

[1]
INFO: task syz.4.21:6056 blocked for more than 143 seconds.
Call Trace:
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 ntfs_file_write_iter+0xe6/0x870 fs/ntfs3/file.c:1284
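
A toy version of the loop guard; fetch_valid() models re-reading the
valid size, and all names are hypothetical rather than the fs/ntfs3
code:

```c
#include "assert.h"
#include "errno.h"

static long long cur = 0;
static long long step = 0;	/* 0 models the stuck case */

static long long fetch_valid(void)	/* stand-in for re-reading valid */
{
	cur += step;
	return cur;
}

static void set_step(long long s)
{
	step = s;
	cur = 0;
}

/* models the [valid : pos) loop: bail out with -EINVAL as soon as
 * valid fails to move forward between two iterations */
static int extend_valid(long long valid, long long pos)
{
	long long prev = -1;

	while (pos > valid) {
		if (valid == prev)
			return -EINVAL;	/* no progress: would spin forever */
		prev = valid;
		valid = fetch_valid();
	}
	return 0;
}
```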

Fixes: 4342306f0f0d ("fs/ntfs3: Add file operations and implementation")
Reported-by: syzbot+bcf9e1868c1a0c7e04f1@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=bcf9e1868c1a0c7e04f1
Signed-off-by: Edward Adam Davis &lt;eadavis@qq.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/ntfs3: Initialize new folios before use</title>
<updated>2026-02-26T22:59:36+00:00</updated>
<author>
<name>Bartlomiej Kubik</name>
<email>kubik.bartlomiej@gmail.com</email>
</author>
<published>2025-11-26T22:02:51+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=5a30cc03bde169ad558695b26da6ea7e55f6194a'/>
<id>urn:sha1:5a30cc03bde169ad558695b26da6ea7e55f6194a</id>
<content type='text'>
[ Upstream commit f223ebffa185cc8da934333c5a31ff2d4f992dc9 ]

KMSAN reports an uninitialized value in longest_match_std(), invoked
from ntfs_compress_write(). When new folios are allocated without being
marked uptodate and ni_read_frame() is skipped because the caller expects
the frame to be completely overwritten, some reserved folios may remain
only partially filled, leaving the rest of the memory uninitialized.
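
The underlying rule, zeroing a freshly allocated buffer before a
partial fill so the unwritten tail is never garbage, can be sketched
without any folio machinery (all names here are illustrative):

```c
#include "assert.h"
#include "string.h"

enum { FRAME_SIZE = 64 };
static unsigned char frame[FRAME_SIZE];

/* models reserving a new buffer for a frame that will only be partly
 * overwritten: clear it first so the tail is defined, then copy */
static void fill_frame_prefix(const unsigned char *src, unsigned int len)
{
	memset(frame, 0, FRAME_SIZE);	/* the missing initialization */
	if (len > FRAME_SIZE)
		len = FRAME_SIZE;
	memcpy(frame, src, len);
}

static unsigned char frame_byte(unsigned int i)
{
	return frame[i];
}
```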

Fixes: 584f60ba22f7 ("ntfs3: Convert ntfs_get_frame_pages() to use a folio")
Tested-by: syzbot+08d8956768c96a2c52cf@syzkaller.appspotmail.com
Reported-by: syzbot+08d8956768c96a2c52cf@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=08d8956768c96a2c52cf

Signed-off-by: Bartlomiej Kubik &lt;kubik.bartlomiej@gmail.com&gt;
Signed-off-by: Konstantin Komarov &lt;almaz.alexandrovich@paragon-software.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
</feed>
