path: root/drivers/md
Age | Commit message | Author | Files | Lines
3 days | dm-snapshot: fix 'scheduling while atomic' on real-time kernels | Mikulas Patocka | 2 | -40/+35
[ Upstream commit 8581b19eb2c5ccf06c195d3b5468c3c9d17a5020 ] There is reported 'scheduling while atomic' bug when using dm-snapshot on real-time kernels. The reason for the bug is that the hlist_bl code does preempt_disable() when taking the lock and the kernel attempts to take other spinlocks while holding the hlist_bl lock. Fix this by converting a hlist_bl spinlock into a regular spinlock. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reported-by: Jiping Ma <jiping.ma2@windriver.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
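The conversion described above amounts to trading a bit spinlock (which keeps preemption disabled while held) for a regular spinlock, which PREEMPT_RT turns into a sleeping lock. A minimal sketch of that shape, with illustrative names rather than the actual dm-snapshot structures:

```
#include <linux/spinlock.h>
#include <linux/list.h>

/*
 * Illustrative only: a hash-table bucket protected by a regular spinlock
 * instead of an hlist_bl bit lock. On PREEMPT_RT a spinlock_t becomes a
 * sleeping lock, so taking other spinlocks while it is held no longer
 * triggers "scheduling while atomic".
 */
struct exception_bucket {
	spinlock_t lock;		/* was: implicit bit lock in hlist_bl_head */
	struct hlist_head list;		/* was: struct hlist_bl_head */
};

static void bucket_add(struct exception_bucket *b, struct hlist_node *n)
{
	spin_lock(&b->lock);		/* does not disable preemption on RT */
	hlist_add_head(n, &b->list);
	spin_unlock(&b->lock);
}
```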
3 days | dm-verity: disable recursive forward error correction | Mikulas Patocka | 3 | -7/+2
[ Upstream commit d9f3e47d3fae0c101d9094bc956ed24e7a0ee801 ] There are two problems with the recursive correction: 1. It may cause denial-of-service. In fec_read_bufs, there is a loop that has 253 iterations. For each iteration, we may call verity_hash_for_block recursively. There is a limit of 4 nested recursions - that means that there may be at most 253^4 (4 billion) iterations. Red Hat QE team actually created an image that pushes dm-verity to this limit - and this image just makes the udev-worker process get stuck in the 'D' state. 2. It doesn't work. In fec_read_bufs we store data into the variable "fio->bufs", but fio bufs is shared between recursive invocations, if "verity_hash_for_block" invoked correction recursively, it would overwrite partially filled fio->bufs. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reported-by: Guangwu Zhang <guazhang@redhat.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
12 days | dm pcache: fix segment info indexing | Li Chen | 1 | -1/+4
commit 13ea55ea20176736516b20b9ea2d8cf97dbe74f5 upstream. Segment info indexing also used sizeof(struct) instead of the 4K metadata stride, so info_index could point between slots and subsequent writes would advance incorrectly. Derive info_index from the pointer returned by the segment meta search using PCACHE_SEG_INFO_SIZE and advance to the next slot for future updates. Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Dongsheng Yang <dongsheng.yang@linux.dev> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Zheng Gu <cengku@gmail.com> Cc: stable@vger.kernel.org # 6.18 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
12 days | dm pcache: fix cache info indexing | Li Chen | 1 | -1/+4
commit ee7633178321f5d983db3adfdea9322456cfdaaa upstream. The on-media cache_info index used sizeof(struct) instead of the 4K metadata stride, so gc_percent updates from dmsetup message were written between slots and lost after reboot. Use PCACHE_CACHE_INFO_SIZE in get_cache_info_addr() and align info_index with the slot returned by pcache_meta_find_latest(). Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Dongsheng Yang <dongsheng.yang@linux.dev> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Zheng Gu <cengku@gmail.com> Cc: stable@vger.kernel.org # 6.18 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
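Both pcache fixes above come down to the same arithmetic: on-media metadata slots are laid out at a fixed 4K stride, so an index must be derived by dividing by that stride, not by sizeof(struct). A hedged sketch of the difference; the struct and helper names here are illustrative, with PCACHE_CACHE_INFO_SIZE standing in for the real stride macro:

```
#include <linux/types.h>

#define PCACHE_CACHE_INFO_SIZE 4096	/* metadata slot stride, per the commit */

struct pcache_cache_info_example {	/* illustrative; the real struct is much smaller than 4K */
	u64 seq;
	u32 gc_percent;
	u32 crc;
};

/*
 * Buggy: slots are 4K apart on media, so dividing by sizeof(struct)
 * lands between slots and subsequent writes advance incorrectly.
 */
static inline u32 info_index_buggy(void *base, void *latest)
{
	return ((char *)latest - (char *)base) / sizeof(struct pcache_cache_info_example);
}

/* Fixed: derive the index from the 4K stride used by the on-media layout. */
static inline u32 info_index_fixed(void *base, void *latest)
{
	return ((char *)latest - (char *)base) / PCACHE_CACHE_INFO_SIZE;
}
```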
12 days | dm-bufio: align write boundary on physical block size | Mikulas Patocka | 1 | -4/+6
commit d0ac06ae53be0cdb61f5fe6b62d25d3317c51657 upstream. There may be devices with physical block size larger than 4k. If dm-bufio sends I/O that is not aligned on physical block size, performance is degraded. The 4k minimum alignment limit is there because some SSDs report logical and physical block size 512 despite having 4k internally - so dm-bufio shouldn't send I/Os not aligned on 4k boundary, because they perform badly (the SSD does read-modify-write for them). Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reported-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
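A sketch of the alignment rule described above: clamp the boundary used for write-back so it falls on the larger of 4K and the device's physical block size. Names are illustrative, not the actual dm-bufio code:

```
#include <linux/kernel.h>
#include <linux/blkdev.h>
#include <linux/minmax.h>

/*
 * Pick the boundary (in bytes) that write-back I/O should be aligned to:
 * at least 4K (many 512e SSDs are 4K internally), or the physical block
 * size when the device reports something larger.
 */
static unsigned int write_boundary(struct block_device *bdev)
{
	return max(4096u, bdev_physical_block_size(bdev));
}

/* Round an end offset down to that boundary before submitting the write. */
static loff_t align_write_end(struct block_device *bdev, loff_t end)
{
	return round_down(end, write_boundary(bdev));
}
```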
12 days | dm-ebs: Mark full buffer dirty even on partial write | Uladzislau Rezki (Sony) | 1 | -1/+1
commit 7fa3e7d114abc9cc71cc35d768e116641074ddb4 upstream. When performing a read-modify-write(RMW) operation, any modification to a buffered block must cause the entire buffer to be marked dirty. Marking only a subrange as dirty is incorrect because the underlying device block size(ubs) defines the minimum read/write granularity. A lower device can perform I/O only on regions which are fully aligned and sized to ubs. This change ensures that write-back operations always occur in full ubs-sized chunks, matching the intended emulation semantics of the EBS target. As for user space visible impact, submitting sub-ubs and misaligned I/O for devices which are tuned to ubs sizes only, will reject such requests, therefore it can lead to losing data. Example: 1) Create a 8K nvme device in qemu by adding -device nvme,drive=drv0,serial=foo,logical_block_size=8192,physical_block_size=8192 2) Setup dm-ebs to emulate 512B to 8K mapping urezki@pc638:~/bin$ cat dmsetup.sh lower=/dev/nvme0n1 len=$(blockdev --getsz "$lower") echo "0 $len ebs $lower 0 1 16" | dmsetup create nvme-8k urezki@pc638:~/bin$ offset 0, ebs=1 and ubs=16(in sectors). 3) Create an ext4 filesystem(default 4K block size) urezki@pc638:~/bin$ sudo mkfs.ext4 -F /dev/dm-0 mke2fs 1.47.0 (5-Feb-2023) Discarding device blocks: done Creating filesystem with 2072576 4k blocks and 518144 inodes Filesystem UUID: bd0b6ca6-0506-4e31-86da-8d22c9d50b63 Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632 Allocating group tables: done Writing inode tables: done Creating journal (16384 blocks): done Writing superblocks and filesystem accounting information: mkfs.ext4: Input/output error while writing out and closing file system urezki@pc638:~/bin$ dmesg <snip> [ 1618.875449] buffer_io_error: 1028 callbacks suppressed [ 1618.875456] Buffer I/O error on dev dm-0, logical block 0, lost async page write [ 1618.875527] Buffer I/O error on dev dm-0, logical block 1, lost async page write [ 1618.875602] Buffer I/O error on dev dm-0, logical block 2, lost async page write [ 1618.875620] Buffer I/O error on dev dm-0, logical block 3, lost async page write [ 1618.875639] Buffer I/O error on dev dm-0, logical block 4, lost async page write [ 1618.894316] Buffer I/O error on dev dm-0, logical block 5, lost async page write [ 1618.894358] Buffer I/O error on dev dm-0, logical block 6, lost async page write [ 1618.894380] Buffer I/O error on dev dm-0, logical block 7, lost async page write [ 1618.894405] Buffer I/O error on dev dm-0, logical block 8, lost async page write [ 1618.894427] Buffer I/O error on dev dm-0, logical block 9, lost async page write <snip> Many I/O errors because the lower 8K device rejects sub-ubs/misaligned requests. with a patch: urezki@pc638:~/bin$ sudo mkfs.ext4 -F /dev/dm-0 mke2fs 1.47.0 (5-Feb-2023) Discarding device blocks: done Creating filesystem with 2072576 4k blocks and 518144 inodes Filesystem UUID: 9b54f44f-ef55-4bd4-9e40-c8b775a616ac Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632 Allocating group tables: done Writing inode tables: done Creating journal (16384 blocks): done Writing superblocks and filesystem accounting information: done urezki@pc638:~/bin$ sudo mount /dev/dm-0 /mnt/ urezki@pc638:~/bin$ ls -al /mnt/ total 24 drwxr-xr-x 3 root root 4096 Oct 17 15:13 . drwxr-xr-x 19 root root 4096 Jul 10 19:42 .. drwx------ 2 root root 16384 Oct 17 15:13 lost+found urezki@pc638:~/bin$ After this change: mkfs completes; mount succeeds. 
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
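The fix itself is a one-liner in spirit: after a read-modify-write, mark the whole dm-bufio buffer dirty rather than only the modified byte range, so write-back happens in full underlying-block-size chunks. A hedged sketch (an illustrative wrapper, not the actual dm-ebs function):

```
#include <linux/dm-bufio.h>
#include <linux/string.h>

/*
 * Copy caller data into a buffered block and schedule write-back.
 * Marking only [offset, offset + len) dirty would let dm-bufio issue a
 * sub-block write, which an 8K-sector device rejects; marking the whole
 * buffer dirty keeps write-back aligned to the underlying block size.
 */
static void ebs_modify_block(struct dm_buffer *b, unsigned int offset,
			     const void *src, unsigned int len)
{
	void *data = dm_bufio_get_block_data(b);

	memcpy(data + offset, src, len);
	dm_bufio_mark_buffer_dirty(b);	/* full buffer, not a partial range */
}
```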
12 days | md/raid5: fix possible null-pointer dereferences in raid5_store_group_thread_cnt() | Tuo Li | 1 | -4/+6
[ Upstream commit 7ad6ef91d8745d04aff9cce7bdbc6320d8e05fe9 ] The variable mddev->private is first assigned to conf and then checked: conf = mddev->private; if (!conf) ... If conf is NULL, then mddev->private is also NULL. In this case, null-pointer dereferences can occur when calling raid5_quiesce(): raid5_quiesce(mddev, true); raid5_quiesce(mddev, false); since mddev->private is assigned to conf again in raid5_quiesce(), and conf is dereferenced in several places, for example: conf->quiesce = 0; wake_up(&conf->wait_for_quiescent); To fix this issue, the function should unlock mddev and return before invoking raid5_quiesce() when conf is NULL, following the existing pattern in raid5_change_consistency_policy(). Fixes: fa1944bbe622 ("md/raid5: Wait sync io to finish before changing group cnt") Signed-off-by: Tuo Li <islituo@gmail.com> Reviewed-by: Xiao Ni <xni@redhat.com> Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Link: https://lore.kernel.org/linux-raid/20251225130326.67780-1-islituo@gmail.com Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
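The described fix follows a common sysfs-store pattern: validate the private data while the mddev is still locked and bail out before touching it. A minimal sketch under that assumption (not the exact raid5_store_group_thread_cnt() code; the error value is illustrative):

```
/* Sketch of an early return when mddev->private has not been set up yet. */
static ssize_t store_example(struct mddev *mddev, const char *page, size_t len)
{
	struct r5conf *conf;
	int err;

	err = mddev_lock(mddev);
	if (err)
		return err;

	conf = mddev->private;
	if (!conf) {
		mddev_unlock(mddev);	/* unlock before returning, as in the fix */
		return -ENODEV;
	}

	/* ... only now is it safe to call raid5_quiesce() and dereference conf ... */
	mddev_unlock(mddev);
	return len;
}
```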
12 days | md: Fix static checker warning in analyze_sbs | Li Nan | 1 | -4/+1
[ Upstream commit 00f6c1b4d15d35fadb7f34768a1831c81aaa8936 ] The following warn is reported: drivers/md/md.c:3912 analyze_sbs() warn: iterator 'i' not incremented Fixes: d8730f0cf4ef ("md: Remove deprecated CONFIG_MD_MULTIPATH") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://lore.kernel.org/linux-raid/7e2e95ce-3740-09d8-a561-af6bfb767f18@huaweicloud.com/T/#t Signed-off-by: Li Nan <linan122@huawei.com> Link: https://lore.kernel.org/linux-raid/20251215124412.4015572-1-linan666@huaweicloud.com Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-01-02 | dm-pcache: advance slot index before writing slot | Dongsheng Yang | 2 | -8/+8
commit ebbb90344a7da2421e4b54668b94e81828b8b308 upstream. In dm-pcache, in order to ensure crash-consistency, a dual-copy scheme is used to alternately update metadata, and there is a slot index that records the current slot. However, in the write path the current implementation writes directly to the current slot indexed by slot index, and then advances the slot — which ends up overwriting the existing slot, violating the crash-consistency guarantee. This patch fixes that behavior, preventing metadata from being overwritten incorrectly. In addition, this patch add a missing pmem_wmb() after memcpy_flushcache(). Signed-off-by: Dongsheng Yang <dongsheng.yang@linux.dev> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Zheng Gu <cengku@gmail.com> Cc: stable@vger.kernel.org # 6.18 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
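The crash-consistency rule this commit restores is the usual A/B metadata pattern: always write the copy you are not currently pointing at, persist it, and only then flip the index. A hedged sketch with illustrative names (not the dm-pcache data structures):

```
#include <linux/types.h>

struct meta_example {
	u64 seq;
	u32 crc;
	/* payload ... */
};

/*
 * Illustrative dual-copy update: never overwrite the slot that the
 * current index points at, so a crash mid-write always leaves one
 * valid copy on media.
 */
struct meta_pair_example {
	struct meta_example slot[2];
	u32 current_slot;	/* which slot holds the latest valid copy */
};

static void meta_update(struct meta_pair_example *m, const struct meta_example *src)
{
	u32 next = m->current_slot ^ 1;		/* advance first ... */

	m->slot[next] = *src;			/* ... then write the other copy */
	/* persist slot[next] here (e.g. memcpy_flushcache() followed by pmem_wmb()) */
	m->current_slot = next;			/* publish only after it is durable */
}
```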
2025-12-18 | dm log-writes: Add missing set_freezable() for freezable kthread | Haotian Zhang | 1 | -0/+1
[ Upstream commit ab08f9c8b363297cafaf45475b08f78bf19b88ef ] The log_writes_kthread() calls try_to_freeze() but lacks set_freezable(), rendering the freeze attempt ineffective since kernel threads are non-freezable by default. This prevents proper thread suspension during system suspend/hibernate. Add set_freezable() to explicitly mark the thread as freezable. Fixes: 0e9cebe72459 ("dm: add log writes target") Signed-off-by: Haotian Zhang <vulab@iscas.ac.cn> Reviewed-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
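For reference, the canonical shape of a freezable kthread: without the set_freezable() call, try_to_freeze() is a no-op because kernel threads start out non-freezable. A minimal sketch (not the actual log_writes_kthread, and the work/sleep step is illustrative):

```
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <linux/sched.h>

static int example_kthread(void *data)
{
	set_freezable();	/* opt in: kthreads are non-freezable by default */

	while (!kthread_should_stop()) {
		try_to_freeze();	/* now actually freezes during suspend/hibernate */

		/* ... do one unit of work, then sleep ... */
		schedule_timeout_interruptible(HZ);
	}
	return 0;
}
```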
2025-12-18 | dm-raid: fix possible NULL dereference with undefined raid type | Alexey Simakov | 1 | -0/+2
[ Upstream commit 2f6cfd6d7cb165a7af8877b838a9f6aab4159324 ] rs->raid_type is assigned from get_raid_type_by_ll(), which may return NULL. This NULL value could be dereferenced later in the condition 'if (!(rs_is_raid10(rs) && rt_is_raid0(rs->raid_type)))'. Add a fail-fast check to return early with an error if raid_type is NULL, similar to other uses of this function. Found by Linux Verification Center (linuxtesting.org) with Svace. Fixes: 33e53f06850f ("dm raid: introduce extended superblock and new raid types to support takeover/reshaping") Signed-off-by: Alexey Simakov <bigalex934@gmail.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-18 | md/raid5: fix IO hang when array is broken with IO inflight | Yu Kuai | 1 | -2/+4
[ Upstream commit a913d1f6a7f607c110aeef8b58c8988f47a4b24e ] Following test can cause IO hang: mdadm -CvR /dev/md0 -l10 -n4 /dev/sd[abcd] --assume-clean --chunk=64K --bitmap=none sleep 5 echo 1 > /sys/block/sda/device/delete echo 1 > /sys/block/sdb/device/delete echo 1 > /sys/block/sdc/device/delete echo 1 > /sys/block/sdd/device/delete dd if=/dev/md0 of=/dev/null bs=8k count=1 iflag=direct Root cause: 1) all disks removed, however all rdevs in the array is still in sync, IO will be issued normally. 2) IO failure from sda, and set badblocks failed, sda will be faulty and MD_SB_CHANGING_PENDING will be set. 3) error recovery try to recover this IO from other disks, IO will be issued to sdb, sdc, and sdd. 4) IO failure from sdb, and set badblocks failed again, now array is broken and will become read-only. 5) IO failure from sdc and sdd, however, stripe can't be handled anymore because MD_SB_CHANGING_PENDING is set: handle_stripe handle_stripe if (test_bit MD_SB_CHANGING_PENDING) set_bit STRIPE_HANDLE goto finish // skip handling failed stripe release_stripe if (test_bit STRIPE_HANDLE) list_add_tail conf->hand_list 6) later raid5d can't handle failed stripe as well: raid5d md_check_recovery md_update_sb if (!md_is_rdwr()) // can't clear pending bit return if (test_bit MD_SB_CHANGING_PENDING) break; // can't handle failed stripe Since MD_SB_CHANGING_PENDING can never be cleared for read-only array, fix this problem by skip this checking for read-only array. Link: https://lore.kernel.org/linux-raid/20251117085557.770572-3-yukuai@fnnas.com Fixes: d87f064f5874 ("md: never update metadata when array is read-only.") Signed-off-by: Yu Kuai <yukuai@fnnas.com> Reviewed-by: Li Nan <linan122@huawei.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-18 | md: init bioset in mddev_init | Li Nan | 1 | -36/+33
[ Upstream commit 381a3ce1c0ffed647c9b913e142b099c7e9d5afc ] IO operations may be needed before md_run(), such as updating metadata after writing sysfs. Without bioset, this triggers a NULL pointer dereference as below: BUG: kernel NULL pointer dereference, address: 0000000000000020 Call Trace: md_update_sb+0x658/0xe00 new_level_store+0xc5/0x120 md_attr_store+0xc9/0x1e0 sysfs_kf_write+0x6f/0xa0 kernfs_fop_write_iter+0x141/0x2a0 vfs_write+0x1fc/0x5a0 ksys_write+0x79/0x180 __x64_sys_write+0x1d/0x30 x64_sys_call+0x2818/0x2880 do_syscall_64+0xa9/0x580 entry_SYSCALL_64_after_hwframe+0x4b/0x53 Reproducer ``` mdadm -CR /dev/md0 -l1 -n2 /dev/sd[cd] echo inactive > /sys/block/md0/md/array_state echo 10 > /sys/block/md0/md/new_level ``` mddev_init() can only be called once per mddev, no need to test if bioset has been initialized anymore. Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-3-linan666@huaweicloud.com Fixes: d981ed841930 ("md: Add new_level sysfs interface") Signed-off-by: Li Nan <linan122@huawei.com> Reviewed-by: Xiao Ni <xni@redhat.com> Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-18 | md: delete md_redundancy_group when array is becoming inactive | Li Nan | 1 | -0/+4
[ Upstream commit 0ce112d9171ad766d4c6716951e73f91a0bfc184 ] 'md_redundancy_group' are created in md_run() and deleted in del_gendisk(), but these are not paired. Writing inactive/active to sysfs array_state can trigger md_run() multiple times without del_gendisk(), leading to duplicate creation as below: sysfs: cannot create duplicate filename '/devices/virtual/block/md0/md/sync_action' Call Trace: dump_stack_lvl+0x9f/0x120 dump_stack+0x14/0x20 sysfs_warn_dup+0x96/0xc0 sysfs_add_file_mode_ns+0x19c/0x1b0 internal_create_group+0x213/0x830 sysfs_create_group+0x17/0x20 md_run+0x856/0xe60 ? __x64_sys_openat+0x23/0x30 do_md_run+0x26/0x1d0 array_state_store+0x559/0x760 md_attr_store+0xc9/0x1e0 sysfs_kf_write+0x6f/0xa0 kernfs_fop_write_iter+0x141/0x2a0 vfs_write+0x1fc/0x5a0 ksys_write+0x79/0x180 __x64_sys_write+0x1d/0x30 x64_sys_call+0x2818/0x2880 do_syscall_64+0xa9/0x580 entry_SYSCALL_64_after_hwframe+0x4b/0x53 md: cannot register extra attributes for md0 Creation of it depends on 'pers', its lifecycle cannot be aligned with gendisk. So fix this issue by triggering 'md_redundancy_group' deletion when the array is becoming inactive. Link: https://lore.kernel.org/linux-raid/20251103125757.1405796-2-linan666@huaweicloud.com Fixes: 790abe4d77af ("md: remove/add redundancy group only in level change") Signed-off-by: Li Nan <linan122@huawei.com> Reviewed-by: Xiao Ni <xni@redhat.com> Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-18 | md: avoid repeated calls to del_gendisk | Xiao Ni | 2 | -1/+3
[ Upstream commit 90e3bb44c0a86e245d8e5c6520206fa113acb1ee ] There is a uaf problem which is found by case 23rdev-lifetime: Oops: general protection fault, probably for non-canonical address 0xdead000000000122 RIP: 0010:bdi_unregister+0x4b/0x170 Call Trace: <TASK> __del_gendisk+0x356/0x3e0 mddev_unlock+0x351/0x360 rdev_attr_store+0x217/0x280 kernfs_fop_write_iter+0x14a/0x210 vfs_write+0x29e/0x550 ksys_write+0x74/0xf0 do_syscall_64+0xbb/0x380 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7ff5250a177e The sequence is: 1. rdev remove path gets reconfig_mutex 2. rdev remove path release reconfig_mutex in mddev_unlock 3. md stop calls do_md_stop and sets MD_DELETED 4. rdev remove path calls del_gendisk because MD_DELETED is set 5. md stop path release reconfig_mutex and calls del_gendisk again So there is a race condition we should resolve. This patch adds a flag MD_DO_DELETE to avoid the race condition. Link: https://lore.kernel.org/linux-raid/20251029063419.21700-1-xni@redhat.com Fixes: 9e59d609763f ("md: call del_gendisk in control path") Signed-off-by: Xiao Ni <xni@redhat.com> Suggested-by: Yu Kuai <yukuai@fnnas.com> Reviewed-by: Li Nan <linan122@huawei.com> Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-12-18 | md: fix rcu protection in md_wakeup_thread | Yun Zhou | 2 | -9/+13
[ Upstream commit 0dc76205549b4c25705e54345f211b9f66e018a0 ] We attempted to use RCU to protect the pointer 'thread', but directly passed the value when calling md_wakeup_thread(). This means that the RCU pointer has been acquired before rcu_read_lock(), which renders rcu_read_lock() ineffective and could lead to a use-after-free. Link: https://lore.kernel.org/linux-raid/20251015083227.1079009-1-yun.zhou@windriver.com Fixes: 446931543982 ("md: protect md_thread with rcu") Signed-off-by: Yun Zhou <yun.zhou@windriver.com> Reviewed-by: Li Nan <linan122@huawei.com> Reviewed-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
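The bug pattern is loading an RCU-protected pointer before entering the read-side critical section. A generic sketch of wrong vs. right usage (illustrative structure and variable names, not the md code itself):

```
#include <linux/rcupdate.h>
#include <linux/sched.h>

struct md_thread_example {
	struct task_struct *tsk;
};

static struct md_thread_example __rcu *example_thread;

/*
 * Wrong: the pointer is fetched before rcu_read_lock(), so the object
 * it points at may already be freed by the time it is dereferenced.
 */
static void wake_buggy(void)
{
	struct md_thread_example *t = rcu_dereference(example_thread); /* outside the lock */

	rcu_read_lock();
	if (t)
		wake_up_process(t->tsk);
	rcu_read_unlock();
}

/* Right: dereference the RCU pointer inside the read-side critical section. */
static void wake_fixed(void)
{
	struct md_thread_example *t;

	rcu_read_lock();
	t = rcu_dereference(example_thread);
	if (t)
		wake_up_process(t->tsk);
	rcu_read_unlock();
}
```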
2025-12-18 | md: delete mddev kobj before deleting gendisk kobj | Xiao Ni | 1 | -1/+3
[ Upstream commit cc394b94dc40b661efc9895665abf03640ffff2d ] In sync del gendisk path, it deletes gendisk first and the directory /sys/block/md is removed. Then it releases mddev kobj in a delayed work. If we enable debug log in sysfs_remove_group, we can see the debug log 'sysfs group bitmap not found for kobject md'. It's the reason that the parent kobj has been deleted, so it can't find parent directory. In creating path, it allocs gendisk first, then adds mddev kobj. So it should delete mddev kobj before deleting gendisk. Before commit 9e59d609763f ("md: call del_gendisk in control path"), it releases mddev kobj first. If the kobj hasn't been deleted, it does clean job and deletes the kobj. Then it calls del_gendisk and releases gendisk kobj. So it doesn't need to call kobject_del to delete mddev kobj. After this patch, in sync del gendisk path, the sequence changes. So it needs to call kobject_del to delete mddev kobj. After this patch, the sequence is: 1. kobject del mddev kobj 2. del_gendisk deletes gendisk kobj 3. mddev_delayed_delete releases mddev kobj 4. md_kobj_release releases gendisk kobj Link: https://lore.kernel.org/linux-raid/20250928012424.61370-1-xni@redhat.com Fixes: 9e59d609763f ("md: call del_gendisk in control path") Signed-off-by: Xiao Ni <xni@redhat.com> Reviewed-by: Li Nan <linan122@huawei.com> Signed-off-by: Yu Kuai <yukuai@fnnas.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-21 | dm-verity: fix unreliable memory allocation | Mikulas Patocka | 1 | -5/+1
GFP_NOWAIT allocation may fail anytime. It needs to be changed to GFP_NOIO. There's no need to handle an error because mempool_alloc with GFP_NOIO can't fail. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Reviewed-by: Eric Biggers <ebiggers@kernel.org>
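The behavioral difference: mempool_alloc() with GFP_NOWAIT can return NULL at any time under memory pressure, while a sleeping GFP_NOIO allocation from a mempool eventually falls back to the pool's reserved elements and cannot fail, so the error branch disappears. A hedged sketch (illustrative wrappers, not the dm-verity call sites):

```
#include <linux/mempool.h>
#include <linux/gfp.h>

/* Before: may fail at any time, caller must handle NULL. */
static void *alloc_fec_buffer_nowait(mempool_t *pool)
{
	return mempool_alloc(pool, GFP_NOWAIT);	/* can return NULL */
}

/*
 * After: sleeps and falls back to the pool's reserved elements,
 * so it cannot fail; no error handling needed at the call site.
 */
static void *alloc_fec_buffer(mempool_t *pool)
{
	return mempool_alloc(pool, GFP_NOIO);
}
```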
2025-11-20 | dm: fix failure when empty flush's bi_sector points beyond the device end | Mikulas Patocka | 1 | -1/+1
An empty flush bio can have arbitrary bi_sector. The commit 2b1c6d7a890a introduced a regression that device mapper would fail an empty flush bio with -EIO if the sector pointed beyond the end of the device. The commit introduced an optimization, that optimization would pass flushes to __split_and_process_bio and __split_and_process_bio is not prepared to handle empty bios. Fix this bug by passing only non-empty flushes to __split_and_process_bio - non-empty flushes must have valid bi_sector. Empty bios will go through __send_empty_flush, as they did before the optimization. This problem can be reproduced by running the lvm2 test: make check_local T=lvconvert-thin.sh LVM_TEST_PREFER_BRD=0 Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: 2b1c6d7a890a ("dm: optimize REQ_PREFLUSH with data when using the linear target") Reported-by: Zdenek Kabelac <zkabelac@redhat.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
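A sketch of the routing rule the fix restores, using the helper names mentioned above but with simplified signatures (the real dm.c helpers take more context): only bios that carry data, and therefore a meaningful bi_sector, go through the sector-based split path.

```
#include <linux/bio.h>

/* Simplified routing decision; illustrative, not the actual dm.c code. */
static void dm_route_flush(struct bio *bio)
{
	if ((bio->bi_opf & REQ_PREFLUSH) && !bio_sectors(bio)) {
		/*
		 * Empty flush: bi_sector can be arbitrary (even beyond the
		 * device end), so it must not be split by sector.
		 */
		__send_empty_flush(bio);
		return;
	}

	/* Non-empty bios, including flushes with data, have a valid bi_sector. */
	__split_and_process_bio(bio);
}
```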
2025-11-18 | dm-pcache: zero cache_info before default init | Li Chen | 1 | -1/+1
pcache_meta_find_latest() leaves whatever it last copied into the caller’s buffer even when it returns NULL. For cache_info_init(), that meant cache->cache_info could still contain CRC-bad garbage when no valid metadata exists, leading later initialization paths to read bogus flags. Explicitly memset cache->cache_info in cache_info_init_default() so new-cache paths start from a clean slate. The default sequence number assignment becomes redundant with this reset, so it drops out. Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Reviewed-by: Zheng Gu <cengku@gmail.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-11-18 | dm-pcache: reuse meta_addr in pcache_meta_find_latest | Li Chen | 1 | -1/+1
pcache_meta_find_latest() already computes the metadata address as meta_addr. Reuse that instead of recomputing. Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-11-18 | dm-pcache: allow built-in build and rename flush helper | Li Chen | 4 | -6/+6
CONFIG_BCACHE is tristate, so dm-pcache can also be built-in. Switch the Makefile to use obj-$(CONFIG_DM_PCACHE) so the target can be linked into vmlinux instead of always being a loadable module. Also rename cache_flush() to pcache_cache_flush() to avoid a global symbol clash with sunrpc/cache.c's cache_flush(). Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-10-04 | Merge tag 'for-6.18/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm | Linus Torvalds | 34 | -177/+5613
Pull device mapper updates from Mikulas Patocka:
- a new dm-pcache target for read/write caching on persistent memory
- fix typos in docs
- misc small refactoring
- mark dm-error with DM_TARGET_PASSES_INTEGRITY
- dm-request-based: fix NULL pointer dereference and quiesce_depth out of sync
- dm-linear: optimize REQ_PREFLUSH
- dm-vdo: return error on corrupted metadata
- dm-integrity: support asynchronous hash interface

* tag 'for-6.18/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (27 commits)
  dm raid: use proper md_ro_state enumerators
  dm-integrity: prefer synchronous hash interface
  dm-integrity: enable asynchronous hash interface
  dm-integrity: rename internal_hash
  dm-integrity: add the "offset" argument
  dm-integrity: allocate the recalculate buffer with kmalloc
  dm-integrity: introduce integrity_kmap and integrity_kunmap
  dm-integrity: replace bvec_kmap_local with kmap_local_page
  dm-integrity: use internal variable for digestsize
  dm vdo: return error on corrupted metadata in start_restoring_volume functions
  dm vdo: Update code to use mem_is_zero
  dm: optimize REQ_PREFLUSH with data when using the linear target
  dm-pcache: use int type to store negative error codes
  dm: fix "writen"->"written"
  dm-pcache: cleanup: fix coding style report by checkpatch.pl
  dm-pcache: remove ctrl_lock for pcache_cache_segment
  dm: fix NULL pointer dereference in __dm_suspend()
  dm: fix queue start/stop imbalance under suspend/load/resume races
  dm-pcache: add persistent cache target in device-mapper
  dm error: mark as DM_TARGET_PASSES_INTEGRITY
  ...
2025-10-02 | Merge tag 'for-6.18/block-20250929' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux | Linus Torvalds | 27 | -336/+2333
Pull block updates from Jens Axboe:
- NVMe pull request via Keith:
  - FC target fixes (Daniel)
  - Authentication fixes and updates (Martin, Chris)
  - Admin controller handling (Kamaljit)
  - Target lockdep assertions (Max)
  - Keep-alive updates for discovery (Alastair)
  - Suspend quirk (Georg)
- MD pull request via Yu:
  - Add support for a lockless bitmap. A key feature of the new bitmap is that the IO fast path is lockless. If a user issues lots of write IO to the same bitmap bit in a short time, only the first write has additional overhead to update the bitmap bit; there is no additional overhead for the following writes. Because only written data needs to be resynced or recovered, there is no need to do a full disk resync/recovery when creating a new array or replacing a disk.
- Switch ->getgeo() and ->bios_param() to using struct gendisk rather than struct block_device.
- Rust block changes via Andreas. This series adds configuration via configfs and remote completion to the rnull driver. The series also includes a set of changes to the rust block device driver API: a few cleanup patches, and a few features supporting the rnull changes. The series removes the raw buffer formatting logic from `kernel::block` and improves the logic available in `kernel::string` to support the same use as the removed logic.
- floppy arch cleanups
- Reduce the number of dereferences needed for ublk commands
- Restrict supported sockets for nbd. Mostly done to eliminate a class of issues perpetually reported by syzbot, by using nonsensical socket setups.
- A few s390 dasd block fixes
- Fix a few issues around atomic writes
- Improve DMA iteration for integrity requests
- Improve how iovecs are treated with regards to O_DIRECT alignment constraints. We used to require each segment to adhere to the constraints; now only the request as a whole needs to.
- Clean up and improve p2p support, enabling use of p2p for metadata payloads
- Improve locking of request lookup, using SRCU where appropriate
- Use page references properly for brd, avoiding very long RCU sections
- Fix ordering of recursively submitted IOs
- Clean up and improve updating nr_requests for a live device
- Various fixes and cleanups

* tag 'for-6.18/block-20250929' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (164 commits)
  s390/dasd: enforce dma_alignment to ensure proper buffer validation
  s390/dasd: Return BLK_STS_INVAL for EINVAL from do_dasd_request
  ublk: remove redundant zone op check in ublk_setup_iod()
  nvme: Use non zero KATO for persistent discovery connections
  nvmet: add safety check for subsys lock
  nvme-core: use nvme_is_io_ctrl() for I/O controller check
  nvme-core: do ioccsz/iorcsz validation only for I/O controllers
  nvme-core: add method to check for an I/O controller
  blk-cgroup: fix possible deadlock while configuring policy
  blk-mq: fix null-ptr-deref in blk_mq_free_tags() from error path
  blk-mq: Fix more tag iteration function documentation
  selftests: ublk: fix behavior when fio is not installed
  ublk: don't access ublk_queue in ublk_unmap_io()
  ublk: pass ublk_io to __ublk_complete_rq()
  ublk: don't access ublk_queue in ublk_need_complete_req()
  ublk: don't access ublk_queue in ublk_check_commit_and_fetch()
  ublk: don't pass ublk_queue to ublk_fetch()
  ublk: don't access ublk_queue in ublk_config_io_buf()
  ublk: don't access ublk_queue in ublk_check_fetch_buf()
  ublk: pass q_id and tag to __ublk_check_and_get_req()
  ...
2025-09-30 | Merge tag 'dlm-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm | Linus Torvalds | 1 | -2/+2
Pull dlm updates from David Teigland:
 "This adds a dlm_release_lockspace() flag to request that node-failure recovery be performed for the node leaving the lockspace. The implementation of this flag requires coordination with userland clustering components. It's been requested for use by GFS2"

* tag 'dlm-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm:
  dlm: check for undefined release_option values
  dlm: handle release_option as unsigned
  dlm: move to rinfo for all middle conversion cases
  dlm: handle invalid lockspace member remove
  dlm: add new flag DLM_RELEASE_RECOVER for dlm_lockspace_release
  dlm: add new configfs entry release_recover for lockspace members
  dlm: add new RELEASE_RECOVER uevent attribute for release_lockspace
  dlm: use defines for force values in dlm_release_lockspace
  dlm: check for defined force value in dlm_lockspace_release
2025-09-23 | dm raid: use proper md_ro_state enumerators | Heinz Mauelshagen | 1 | -6/+7
The dm-raid code was using hardcoded integer values to represent the read-only/read-write state of RAID arrays instead of the proper enumeration constants defined in the md_ro_state enumerator type. Changes: - Replace hardcoded integers with the appropriate md_ro_state enumerator values - Add the missing MD_RDONLY setting in the post_suspend function (no failures have been attributed to this inconsistency, the fix ensures correct state transitions for completeness) This improves code clarity and maintainability by using the defined enumeration constants rather than magic numbers, ensuring the code properly conforms to the established API interface. Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: prefer synchronous hash interface | Mikulas Patocka | 1 | -23/+24
The previous patch preferred async interface for the purpose of testing. However, the synchronous interface is faster, so it should be preferred. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: enable asynchronous hash interface | Mikulas Patocka | 1 | -44/+199
This commit enables the asynchronous hash interface in dm-integrity. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: rename internal_hash | Mikulas Patocka | 1 | -8/+11
Rename "internal_hash" to "internal_shash" and introduce a boolean value "internal_hash". Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: add the "offset" argument | Mikulas Patocka | 1 | -14/+35
Make sure that the "data" argument passed to integrity_sector_checksum is always page-aligned and add an "offset" argument that specifies the offset from the start of the page. This will enable us to use the asynchronous hash interface later. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: allocate the recalculate buffer with kmalloc | Mikulas Patocka | 1 | -4/+4
Allocate the recalculate buffer with kmalloc rather than vmalloc. This will be needed later, for the simplification of the asynchronous hash interface. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: introduce integrity_kmap and integrity_kunmap | Mikulas Patocka | 1 | -20/+17
This abstraction will be used later, for the asynchronous hash interface. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: replace bvec_kmap_local with kmap_local_page | Mikulas Patocka | 1 | -7/+6
Replace bvec_kmap_local with kmap_local_page - it will be needed for the upcoming patches that make kmap_local_page optional, depending on whether asynchronous hash interface is used or not. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm-integrity: use internal variable for digestsize | Mikulas Patocka | 1 | -11/+14
Instead of calling digestsize() each time the digestsize for the internal hash is needed, store the digestsize in a new field internal_hash_digestsize within struct dm_integrity_c once and use this value when needed. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm vdo: return error on corrupted metadata in start_restoring_volume functions | Ivan Abramov | 1 | -2/+2
The return values of VDO_ASSERT calls that validate metadata are not acted upon. Return UDS_CORRUPT_DATA in case of an error. Found by Linux Verification Center (linuxtesting.org) with SVACE. Fixes: a4eb7e255517 ("dm vdo: implement the volume index") Signed-off-by: Ivan Abramov <i.abramov@mt-integration.ru> Reviewed-by: Matthew Sakai <msakai@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-23 | dm vdo: Update code to use mem_is_zero | Bruce Johnston | 1 | -14/+3
Remove function that would check if data was all zeroes. Use the built-in kernel function mem_is_zero() instead. Signed-off-by: Bruce Johnston <bjohnsto@redhat.com> Signed-off-by: Matthew Sakai <msakai@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2025-09-19 | Merge tag 'block-6.17-20250918' of git://git.kernel.dk/linux | Linus Torvalds | 5 | -0/+5
Pull block fixes from Jens Axboe:
 "A set of fixes for an issue with md array assembly and drbd for devices supporting write zeros"

* tag 'block-6.17-20250918' of git://git.kernel.dk/linux:
  drbd: init queue_limits->max_hw_wzeroes_unmap_sectors parameter
  md: init queue_limits->max_hw_wzeroes_unmap_sectors parameter
2025-09-18 | dm: optimize REQ_PREFLUSH with data when using the linear target | Mikulas Patocka | 2 | -8/+25
If the table has only linear targets and there is just one underlying device, we can optimize REQ_PREFLUSH with data - we don't have to split it to two bios - a flush and a write. We can pass it to the linear target directly. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Tested-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
2025-09-17 | Merge tag 'for-6.17/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm | Linus Torvalds | 3 | -6/+12
Pull device mapper fixes from Mikulas Patocka:
- fix integer overflow in dm-stripe
- limit tag size in dm-integrity to 255 bytes
- fix 'alignment inconsistency' warning in dm-raid

* tag 'for-6.17/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm-raid: don't set io_min and io_opt for raid1
  dm-integrity: limit MAX_TAG_SIZE to 255
  dm-stripe: fix a possible integer overflow
2025-09-17 | dm-raid: don't set io_min and io_opt for raid1 | Mikulas Patocka | 1 | -2/+4
These commands modprobe brd rd_size=1048576 vgcreate vg /dev/ram* lvcreate -m4 -L10 -n lv vg trigger the following warnings: device-mapper: table: 252:10: adding target device (start sect 0 len 24576) caused an alignment inconsistency device-mapper: table: 252:10: adding target device (start sect 0 len 24576) caused an alignment inconsistency The warnings are caused by the fact that io_min is 512 and physical block size is 4096. If there's chunk-less raid, such as raid1, io_min shouldn't be set to zero because it would be raised to 512 and it would trigger the warning. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Cc: stable@vger.kernel.org
2025-09-16 | md: init queue_limits->max_hw_wzeroes_unmap_sectors parameter | Zhang Yi | 5 | -0/+5
The parameter max_hw_wzeroes_unmap_sectors in queue_limits should be equal to max_write_zeroes_sectors if it is set to a non-zero value. However, the stacked md drivers call md_init_stacking_limits() to initialize this parameter to UINT_MAX but only adjust max_write_zeroes_sectors when setting limits. Therefore, this discrepancy triggers a value check failure in blk_validate_limits(). $ modprobe scsi_debug num_parts=2 dev_size_mb=8 lbprz=1 lbpws=1 $ mdadm --create /dev/md0 --level=0 --raid-device=2 /dev/sda1 /dev/sda2 mdadm: Defaulting to version 1.2 metadata mdadm: RUN_ARRAY failed: Invalid argument Fix this failure by explicitly setting max_hw_wzeroes_unmap_sectors to max_write_zeroes_sectors. Since the linear and raid0 drivers support write zeroes, so they can support unmap write zeroes operation if all of the backend devices support it. However, the raid1/10/5 drivers don't support write zeroes, so we have to set it to zero. Fixes: 0c40d7cb5ef3 ("block: introduce max_{hw|user}_wzeroes_unmap_sectors to queue limits") Reported-by: John Garry <john.g.garry@oracle.com> Closes: https://lore.kernel.org/linux-block/803a2183-a0bb-4b7a-92f1-afc5097630d2@oracle.com/ Signed-off-by: Zhang Yi <yi.zhang@huawei.com> Tested-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Li Nan <linan122@huawei.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/linux-raid/20250910111107.3247530-2-yi.zhang@huaweicloud.com Signed-off-by: Yu Kuai <yukuai3@huawei.com>
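The fix boils down to keeping two queue limits consistent when the stacked driver assembles its limits. A hedged sketch of that invariant, with field names as given in the commit and an illustrative surrounding function:

```
#include <linux/blkdev.h>

/*
 * Keep the unmap write-zeroes limit consistent with the write-zeroes limit,
 * as blk_validate_limits() requires; both end up zero when the level
 * (raid1/10/5) does not support write zeroes at all.
 */
static void example_adjust_wzeroes_limits(struct queue_limits *lim,
					  bool supports_write_zeroes)
{
	if (!supports_write_zeroes)
		lim->max_write_zeroes_sectors = 0;

	lim->max_hw_wzeroes_unmap_sectors = lim->max_write_zeroes_sectors;
}
```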
2025-09-11 | md/md-llbitmap: Use DIV_ROUND_UP_SECTOR_T | Nathan Chancellor | 1 | -8/+8
When building for 32-bit platforms, there are several link (if builtin) or modpost (if a module) errors due to dividends of type 'sector_t' in DIV_ROUND_UP: arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_resize': drivers/md/md-llbitmap.c:1017:(.text+0xae8): undefined reference to `__aeabi_uldivmod' arm-linux-gnueabi-ld: drivers/md/md-llbitmap.c:1020:(.text+0xb10): undefined reference to `__aeabi_uldivmod' arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_end_discard': drivers/md/md-llbitmap.c:1114:(.text+0xf14): undefined reference to `__aeabi_uldivmod' arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_start_discard': drivers/md/md-llbitmap.c:1097:(.text+0x1808): undefined reference to `__aeabi_uldivmod' arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_read_sb': drivers/md/md-llbitmap.c:867:(.text+0x2080): undefined reference to `__aeabi_uldivmod' arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o:drivers/md/md-llbitmap.c:895: more undefined references to `__aeabi_uldivmod' follow Use DIV_ROUND_UP_SECTOR_T instead of DIV_ROUND_UP, which exists to handle this exact situation. Fixes: 5ab829f1971d ("md/md-llbitmap: introduce new lockless bitmap") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
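The link errors come from DIV_ROUND_UP() expanding to a plain 64-bit division on a sector_t dividend, which 32-bit architectures cannot lower without libgcc helpers; DIV_ROUND_UP_SECTOR_T() routes through the 64-bit-safe helper instead. A small illustration (the function itself is hypothetical):

```
#include <linux/kernel.h>
#include <linux/types.h>

static sector_t chunks_needed(sector_t sectors, unsigned int chunk_sectors)
{
	/*
	 * sector_t is 64 bits wide even on 32-bit kernels, so a plain
	 * DIV_ROUND_UP() emits a 64-bit division and the linker asks for
	 * __aeabi_uldivmod (arm) or __udivdi3.
	 */
	/* return DIV_ROUND_UP(sectors, chunk_sectors);   <- links only on 64-bit */

	/* DIV_ROUND_UP_SECTOR_T() uses DIV_ROUND_UP_ULL() underneath. */
	return DIV_ROUND_UP_SECTOR_T(sectors, chunk_sectors);
}
```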
2025-09-10 | md/raid0: convert raid0_make_request() to use bio_submit_split_bioset() | Yu Kuai | 1 | -11/+2
Currently, raid0_make_request() remaps the original bio to the underlying disks to prevent reordered IO. Now that bio_submit_split_bioset() puts the original bio at the head of current->bio_list, it is safe to convert to this helper and the bio will still be ordered. CC: Jan Kara <jack@suse.cz> Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/md-linear: convert to use bio_submit_split_bioset() | Yu Kuai | 1 | -12/+3
Unify bio split code, prepare to fix reordered split IO. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/raid5: convert to use bio_submit_split_bioset() | Yu Kuai | 1 | -6/+4
Unify bio split code, prepare to fix ordering of split IO. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/raid10: convert read/write to use bio_submit_split_bioset() | Yu Kuai | 1 | -29/+13
Unify bio split code, prepare to fix ordering of split IO. The error path is modified a bit, however no functional changes are intended:
- bio_submit_split_bioset() can fail the original bio directly on a split error; set R10BIO_Uptodate in this case to notify raid_end_bio_io() that the original bio is returned already.
- setting R10BIO_Uptodate and setting the error value to -EIO is useless now; for an r10_bio without R10BIO_Uptodate, -EIO will be returned for the original bio.
Discard is not handled, because discard is only split for unaligned head and tail and can be considered a slow path; the reordering there does not matter much. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/raid10: add a new r10bio flag R10BIO_Returned | Yu Kuai | 2 | -3/+7
The new helper bio_submit_split_bioset() can fail the original bio on split errors; prepare to handle this case in raid_end_bio_io(). The flag name mirrors the corresponding r1bio flag name. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/raid1: convert to use bio_submit_split_bioset() | Yu Kuai | 2 | -28/+14
Unify bio split code and prepare to fix the ordering of split IO. Note that bio_submit_split_bioset() can fail the original bio directly on a split error; set R1BIO_Returned in this case to notify raid_end_bio_io() that the original bio has already been returned. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset() | Yu Kuai | 1 | -13/+6
Unify bio split code and prepare to fix the ordering of split IO. Note that commit 319ff40a5427 ("md/raid0: Fix performance regression for large sequential writes") already fixed the ordering of split IO by remapping the bio to the underlying disks before resubmitting it, given that md_submit_bio() has already split it by sectors and raid0_make_request() will split at most once for unaligned IO. This is a bit hacky and will be converted to a general solution later. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-10 | md: fix missing blktrace bio split events | Yu Kuai | 5 | -0/+19
If a bio is split by internal handling such as chunk size or badblocks, the corresponding trace_block_split() call is missing, so blktrace cannot catch BIO split events, making it harder to analyze the BIO sequence. Cc: stable@vger.kernel.org Fixes: 4b1faf931650 ("block: Kill bio_pair_split()") Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>