path: root/drivers/md
2020-03-30  Merge tag 'for-5.7/drivers-2020-03-29' of git://git.kernel.dk/linux-block  [Linus Torvalds, 6 files, -76/+437]
Pull block driver updates from Jens Axboe: - floppy driver cleanup series from Willy - NVMe updates and fixes (Various) - null_blk trace improvements (Chaitanya) - bcache fixes (Coly) - md fixes (via Song) - loop block size change optimizations (Martijn) - scnprintf() use (Takashi) * tag 'for-5.7/drivers-2020-03-29' of git://git.kernel.dk/linux-block: (81 commits) null_blk: add trace in null_blk_zoned.c null_blk: add tracepoint helpers for zoned mode block: add a zone condition debug helper nvme: cleanup namespace identifier reporting in nvme_init_ns_head nvme: rename __nvme_find_ns_head to nvme_find_ns_head nvme: refactor nvme_identify_ns_descs error handling nvme-tcp: Add warning on state change failure at nvme_tcp_setup_ctrl nvme-rdma: Add warning on state change failure at nvme_rdma_setup_ctrl nvme: Fix controller creation races with teardown flow nvme: Make nvme_uninit_ctrl symmetric to nvme_init_ctrl nvme: Fix ctrl use-after-free during sysfs deletion nvme-pci: Re-order nvme_pci_free_ctrl nvme: Remove unused return code from nvme_delete_ctrl_sync nvme: Use nvme_state_terminal helper nvme: release ida resources nvme: Add compat_ioctl handler for NVME_IOCTL_SUBMIT_IO nvmet-tcp: optimize tcp stack TX when data digest is used nvme-fabrics: Use scnprintf() for avoiding potential buffer overflow nvme-multipath: do not reset on unknown status nvmet-rdma: allocate RW ctxs according to mdts ...
2020-03-27  block: simplify queue allocation  [Christoph Hellwig, 3 files, -9/+6]
Current make_request based drivers use either blk_alloc_queue_node or blk_alloc_queue to allocate a queue, and then set up the make_request_fn function pointer and a few parameters using the blk_queue_make_request helper. Simplify this by passing the make_request pointer to blk_alloc_queue, and while at it merge the _node variant into the main helper by always passing a node_id, and remove the superfluous gfp_mask parameter. A lower-level __blk_alloc_queue is kept for the blk-mq case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
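For a make_request based driver, the end result looks roughly like this (the queue variable and request function names below are illustrative, not taken from the patch):

	/* old: allocate the queue, then wire up the make_request function separately */
	q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE);
	if (!q)
		return -ENOMEM;
	blk_queue_make_request(q, my_make_request);

	/* new: one call, make_request passed directly, node id always given */
	q = blk_alloc_queue(my_make_request, NUMA_NO_NODE);
	if (!q)
		return -ENOMEM;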
2020-03-27  bcache: pass the make_request methods to blk_queue_make_request  [Christoph Hellwig, 3 files, -9/+11]
bcache is the only driver not actually passing its make_request methods to blk_queue_make_request, but instead just sets them up manually a little later. Make bcache follow the common way of setting up make_request based queues. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-25  block: move the part_stat* helpers from genhd.h to a new header  [Christoph Hellwig, 2 files, -0/+2]
These macros are just used by a few files. Move them out of genhd.h, which is included everywhere into a new standalone header. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-25  bcache: remove dupplicated declaration from btree.h  [Coly Li, 1 file, -7/+3]
Commit 253a99d95d5b ("bcache: move macro btree() and btree_root() into btree.h") introduced two duplicated declarations into btree.h: typedef int (btree_map_keys_fn)(); int bch_btree_map_keys(); The kbuild test robot <lkp@intel.com> detected and reported this problem, and this patch fixes it by removing the duplicates. Fixes: 253a99d95d5b ("bcache: move macro btree() and btree_root() into btree.h") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-24  block: cleanup how md_autodetect_dev is called  [Christoph Hellwig, 1 file, -0/+1]
Add a new include/linux/raid/detect.h header to declare the md_autodetect_dev prototype which can be shared between md and the partition code. Then use IS_BUILTIN to call it instead of the ifdef magic. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-24  block: remove __bdevname  [Christoph Hellwig, 1 file, -2/+2]
There is no good reason for __bdevname to exist. Just open code printing the string in the callers. For three of them the format string can be trivially merged into existing printk statements, and in init/do_mounts.c we can at least do the scnprintf once at the start of the function, and unconditional of CONFIG_BLOCK to make the output for tiny configfs a little more helpful. Acked-by: Theodore Ts'o <tytso@mit.edu> # for ext4 Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22  bcache: optimize barrier usage for atomic operations  [Coly Li, 2 files, -6/+6]
The idea for this patch comes from Davidlohr Bueso, who posted a patch for bcache that optimizes barrier usage for read-modify-write atomic bitops. The same optimization can also be applied to other locations where smp_mb() is used before or after an atomic operation. This patch replaces smp_mb() with smp_mb__before_atomic() or smp_mb__after_atomic() in btree.c and writeback.c, where the barrier is used to order memory accesses around the atomic operation with respect to other cores. Although these locations are not on hot code paths, it never hurts to make things a little better. Signed-off-by: Coly Li <colyli@suse.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by: Jens Axboe <axboe@kernel.dk>
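The replacement follows this general pattern (the flag and structure names here are only for illustration):

	/* before: unconditional full barrier, even on x86 where the
	 * atomic read-modify-write below is already fully ordered */
	smp_mb();
	set_bit(MY_FLAG, &obj->flags);

	/* after: free on strongly ordered architectures such as x86,
	 * still a full barrier on weakly ordered LL/SC architectures */
	smp_mb__before_atomic();
	set_bit(MY_FLAG, &obj->flags);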
2020-03-22  bcache: optimize barrier usage for Rmw atomic bitops  [Davidlohr Bueso, 1 file, -3/+3]
We can avoid the unnecessary barrier on non LL/SC architectures, such as x86. Instead, use the smp_mb__after_atomic(). Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22  bcache: Use scnprintf() for avoiding potential buffer overflow  [Takashi Iwai, 1 file, -1/+1]
Since snprintf() returns the would-be-output size instead of the actual output size, the succeeding calls may go beyond the given buffer limit. Fix it by replacing with scnprintf(). Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
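The difference matters whenever the return value is used to advance a write position; roughly (buffer and input names are hypothetical):

	char buf[16];
	size_t pos = 0;

	/* snprintf() returns the length the output would have had, so pos can
	 * grow past sizeof(buf) and the next call writes out of bounds */
	pos += snprintf(buf + pos, sizeof(buf) - pos, "%s", long_string);

	/* scnprintf() returns what was actually written, so pos stays within
	 * the buffer */
	pos += scnprintf(buf + pos, sizeof(buf) - pos, "%s", long_string);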
2020-03-22  bcache: make bch_sectors_dirty_init() to be multithreaded  [Coly Li, 2 files, -3/+174]
When attaching a cached device (a.k.a. backing device) to a cache device, bch_sectors_dirty_init() is called to count dirty sectors and stripes (see what bcache_dev_sectors_dirty_add() does) on the cache device. The counting is done by the single-threaded recursive function bch_btree_map_keys(), which iterates all the bcache btree nodes. If the btree has a huge number of nodes, bch_sectors_dirty_init() will take quite a long time. In my testing, if the registering cache set has an existing UUID which matches an already registered cached device, the automatic attachment during registration may take more than 55 minutes. This is too long to wait for bcache to become usable in a real deployment. Fortunately, when bch_sectors_dirty_init() is called no other thread accesses the btree yet, so it is safe to do a read-only, parallelized dirty sector count with multiple threads. This patch creates multiple threads; each thread counts dirty sectors in the sub-tree indexed by a root node key that it fetched, one sub-tree at a time. After a sub-tree is counted, the counting thread fetches another root node key, until the fetched key is NULL. How many threads run in parallel depends on the number of keys in the btree root node and the number of online CPU cores: the thread count is the smaller of the two, but no more than BCH_DIRTY_INIT_THRD_MAX. If there are only 2 keys in the root node, this patch can only make it 2x faster; but with 10 keys in the root node it can be 10x faster. Signed-off-by: Coly Li <colyli@suse.de> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
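Each worker thread follows roughly this loop (a sketch of the scheme described above with invented helper names, not the actual bcache code):

	static int dirty_init_thread(void *arg)
	{
		struct dirty_init_state *state = arg;
		struct bkey *k;

		for (;;) {
			/* claim the next root-node key nobody has taken yet */
			k = fetch_next_root_key(state);
			if (!k)
				break;	/* all sub-trees claimed, thread exits */

			/* count dirty sectors in the sub-tree under this key */
			count_dirty_sectors_subtree(state, k);
		}
		return 0;
	}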
2020-03-22  bcache: make bch_btree_check() to be multithreaded  [Coly Li, 2 files, -3/+188]
When registering a cache device, bch_btree_check() is called to check all btree nodes and make sure the btree is consistent and not corrupted. bch_btree_check() is executed recursively in a single thread; when a lot of data is cached and the btree is huge, it may take a very long time to check all the btree nodes. In my testing, I observed it taking around 50 minutes to finish. While the bcache btree nodes are being checked, the cache set is not running yet and the whole tree is effectively read-only, so it is safe to create multiple threads to check the btree in parallel. This patch creates multiple threads, and each thread checks, one at a time, the sub-tree indexed by a key from the btree root node. The number of parallel threads depends on how many keys are in the btree root node. At most BCH_BTR_CHKTHREAD_MAX (64) threads can be created, but in practice it should be min(cpu-number/2, root-node-keys-number). Signed-off-by: Coly Li <colyli@suse.de> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
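The thread-count heuristic amounts to something like this (variable names are illustrative):

	/* one thread per key in the btree root node, but no more than half of
	 * the online CPUs and never more than BCH_BTR_CHKTHREAD_MAX (64) */
	int nr_threads = min_t(int, root_node_keys, num_online_cpus() / 2);

	nr_threads = min_t(int, nr_threads, BCH_BTR_CHKTHREAD_MAX);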
2020-03-22  bcache: add bcache_ prefix to btree_root() and btree() macros  [Coly Li, 2 files, -9/+10]
This patch renames the macros btree_root() and btree() to bcache_btree_root() and bcache_btree(), to avoid a potential generic name clash in the future. NOTE: for product kernel maintainers, this patch can be skipped if you feel the renaming introduces inconvenience for patch backporting. Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-22  bcache: move macro btree() and btree_root() into btree.h  [Coly Li, 2 files, -59/+67]
In order to accelerate bcache registration, the macros btree() and btree_root() will be referenced outside of btree.c. This patch moves them from btree.c into btree.h, together with the related function declarations, in preparation for the following changes. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-17  md: check arrays is suspended in mddev_detach before call quiesce operations  [Guoqing Jiang, 1 file, -1/+1]
Don't call quiesce(1) and quiesce(0) if array is already suspended, otherwise in level_store, the array is writable after mddev_detach in below part though the intention is to make array writable after resume. mddev_suspend(mddev); mddev_detach(mddev); ... mddev_resume(mddev); And it also causes calltrace as follows in [1]. [48005.653834] WARNING: CPU: 1 PID: 45380 at kernel/kthread.c:510 kthread_park+0x77/0x90 [...] [48005.653976] CPU: 1 PID: 45380 Comm: mdadm Tainted: G OE 5.4.10-arch1-1 #1 [48005.653979] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J4105-ITX, BIOS P1.40 08/06/2018 [48005.653984] RIP: 0010:kthread_park+0x77/0x90 [48005.654015] Call Trace: [48005.654039] r5l_quiesce+0x3c/0x70 [raid456] [48005.654052] raid5_quiesce+0x228/0x2e0 [raid456] [48005.654073] mddev_detach+0x30/0x70 [md_mod] [48005.654090] level_store+0x202/0x670 [md_mod] [48005.654099] ? security_capable+0x40/0x60 [48005.654114] md_attr_store+0x7b/0xc0 [md_mod] [48005.654123] kernfs_fop_write+0xce/0x1b0 [48005.654132] vfs_write+0xb6/0x1a0 [48005.654138] ksys_write+0x67/0xe0 [48005.654146] do_syscall_64+0x4e/0x140 [48005.654155] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [48005.654161] RIP: 0033:0x7fa0c8737497 [1]: https://bugzilla.kernel.org/show_bug.cgi?id=206161 Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com> Signed-off-by: Song Liu <songliubraving@fb.com>
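The fix boils down to guarding the quiesce calls with the suspend state, along these lines (a simplified sketch, not the exact md code):

	static void mddev_detach(struct mddev *mddev)
	{
		/* ... */
		if (mddev->pers && mddev->pers->quiesce && !mddev->suspended) {
			mddev->pers->quiesce(mddev, 1);
			mddev->pers->quiesce(mddev, 0);
		}
		/* ... */
	}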
2020-03-07  Merge tag 'block-5.6-2020-03-07' of git://git.kernel.dk/linux-block  [Linus Torvalds, 2 files, -29/+2]
Pull block fixes from Jens Axboe: "Here are a few fixes that should go into this release. This contains: - Revert of a bad bcache patch from this merge window - Removed unused function (Daniel) - Fixup for the blktrace fix from Jan from this release (Cengiz) - Fix of deeper level bfqq overwrite in BFQ (Carlo)" * tag 'block-5.6-2020-03-07' of git://git.kernel.dk/linux-block: block, bfq: fix overwrite of bfq_group pointer in bfq_find_set_group() blktrace: fix dereference after null check Revert "bcache: ignore pending signals when creating gc and allocator thread" block: Remove used kblockd_schedule_work_on()
2020-03-04  Merge tag 'for-5.6/dm-fixes' of  [Linus Torvalds, 9 files, -67/+92]
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm Pull device mapper fixes from Mike Snitzer: - Fix request-based DM's congestion_fn and actually wire it up to the bdi. - Extend dm-bio-record to track additional struct bio members needed by DM integrity target. - Fix DM core to properly advertise that a device is suspended during unload (between the presuspend and postsuspend hooks). This change is a prereq for related DM integrity and DM writecache fixes. It elevates DM integrity's 'suspending' state tracking to DM core. - Four stable fixes for DM integrity target. - Fix crash in DM cache target due to incorrect work item cancelling. - Fix DM thin metadata lockdep warning that was introduced during 5.6 merge window. - Fix DM zoned target's chunk work refcounting that regressed during recent conversion to refcount_t. - Bump the minor version for DM core and all target versions that have seen interface changes or important fixes during the 5.6 cycle. * tag 'for-5.6/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: dm: bump version of core and various targets dm: fix congested_fn for request-based device dm integrity: use dm_bio_record and dm_bio_restore dm bio record: save/restore bi_end_io and bi_integrity dm zoned: Fix reference counter initial value of chunk works dm writecache: verify watermark during resume dm: report suspended device during destroy dm thin metadata: fix lockdep complaint dm cache: fix a crash due to incorrect work item cancelling dm integrity: fix invalid table returned due to argument count mismatch dm integrity: fix a deadlock due to offloading to an incorrect workqueue dm integrity: fix recalculation when moving from journal mode to bitmap mode
2020-03-03  dm: bump version of core and various targets  [Mike Snitzer, 6 files, -6/+6]
Changes made during the 5.6 cycle warrant bumping the version number for DM core and the targets modified by this commit. It should be noted that dm-thin, dm-crypt and dm-raid already had their target version bumped during the 5.6 merge window. Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-03-03  dm: fix congested_fn for request-based device  [Hou Tao, 1 file, -11/+10]
We neither assign congested_fn for request-based blk-mq devices nor implement it correctly. Fix both. Also, remove the incorrect comment from dm_init_normal_md_queue and rename it to dm_init_congested_fn. Fixes: 4aa9c692e052 ("bdi: separate out congested state into a separate struct") Cc: stable@vger.kernel.org Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-03-03  dm integrity: use dm_bio_record and dm_bio_restore  [Mike Snitzer, 1 file, -23/+9]
In cases where dec_in_flight() has to requeue the integrity_bio_wait work to transfer the rest of the data, the bio's __bi_remaining might already have been decremented to 0, e.g.: if bio passed to underlying data device was split via blk_queue_split(). Use dm_bio_{record,restore} rather than effectively open-coding them in dm-integrity -- these methods now manage __bi_remaining too. Depends-on: f7f0b057a9c1 ("dm bio record: save/restore bi_end_io and bi_integrity") Reported-by: Daniel Glöckner <dg@emlix.com> Suggested-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-03-03  dm bio record: save/restore bi_end_io and bi_integrity  [Mike Snitzer, 1 file, -0/+15]
Also, save/restore __bi_remaining in case the bio was used in a BIO_CHAIN (e.g. due to blk_queue_split). Suggested-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
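Conceptually, the record/restore pair now also covers the fields a chained or split bio can clobber; a rough sketch (simplified, not a verbatim copy of dm-bio-record.h, and omitting the bi_integrity handling):

	struct dm_bio_details {
		/* ... existing fields ... */
		bio_end_io_t *bi_end_io;	/* newly recorded */
		unsigned int __bi_remaining;	/* newly recorded (BIO_CHAIN case) */
	};

	static inline void dm_bio_record(struct dm_bio_details *bd, struct bio *bio)
	{
		bd->bi_end_io = bio->bi_end_io;
		bd->__bi_remaining = atomic_read(&bio->__bi_remaining);
	}

	static inline void dm_bio_restore(struct dm_bio_details *bd, struct bio *bio)
	{
		bio->bi_end_io = bd->bi_end_io;
		atomic_set(&bio->__bi_remaining, bd->__bi_remaining);
	}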
2020-03-03Revert "bcache: ignore pending signals when creating gc and allocator thread"Jens Axboe2-29/+2
This reverts commit 0b96da639a4874311e9b5156405f69ef9fc3bef8. We can't just go flushing random signals, under the assumption that the OOM killer will just do something else. It's not safe from the OOM perspective, and it could also cause other signals to get randomly lost. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-28  dm zoned: Fix reference counter initial value of chunk works  [Shin'ichiro Kawasaki, 1 file, -4/+4]
Dm-zoned initializes reference counters of new chunk works with zero value and refcount_inc() is called to increment the counter. However, the refcount_inc() function handles the addition to zero value as an error and triggers the warning as follows: refcount_t: addition on 0; use-after-free. WARNING: CPU: 7 PID: 1506 at lib/refcount.c:25 refcount_warn_saturate+0x68/0xf0 ... CPU: 7 PID: 1506 Comm: systemd-udevd Not tainted 5.4.0+ #134 ... Call Trace: dmz_map+0x2d2/0x350 [dm_zoned] __map_bio+0x42/0x1a0 __split_and_process_non_flush+0x14a/0x1b0 __split_and_process_bio+0x83/0x240 ? kmem_cache_alloc+0x165/0x220 dm_process_bio+0x90/0x230 ? generic_make_request_checks+0x2e7/0x680 dm_make_request+0x3e/0xb0 generic_make_request+0xcf/0x320 ? memcg_drain_all_list_lrus+0x1c0/0x1c0 submit_bio+0x3c/0x160 ? guard_bio_eod+0x2c/0x130 mpage_readpages+0x182/0x1d0 ? bdev_evict_inode+0xf0/0xf0 read_pages+0x6b/0x1b0 __do_page_cache_readahead+0x1ba/0x1d0 force_page_cache_readahead+0x93/0x100 generic_file_read_iter+0x83a/0xe40 ? __seccomp_filter+0x7b/0x670 new_sync_read+0x12a/0x1c0 vfs_read+0x9d/0x150 ksys_read+0x5f/0xe0 do_syscall_64+0x5b/0x180 entry_SYSCALL_64_after_hwframe+0x44/0xa9 ... After this warning, following refcount API calls for the counter all fail to change the counter value. Fix this by setting the initial reference counter value not zero but one for the new chunk works. Instead, do not call refcount_inc() via dmz_get_chunk_work() for the new chunks works. The failure was observed with linux version 5.4 with CONFIG_REFCOUNT_FULL enabled. Refcount rework was merged to linux version 5.5 by the commit 168829ad09ca ("Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip"). After this commit, CONFIG_REFCOUNT_FULL was removed and the failure was observed regardless of kernel configuration. Linux version 4.20 merged the commit 092b5648760a ("dm zoned: target: use refcount_t for dm zoned reference counters"). Before this commit, dm zoned used atomic_t APIs which does not check addition to zero, then this fix is not necessary. Fixes: 092b5648760a ("dm zoned: target: use refcount_t for dm zoned reference counters") Cc: stable@vger.kernel.org # 5.4+ Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
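The pattern behind the fix, in schematic form (the structure and variable names are illustrative):

	/* before: counter starts at 0, so the first refcount_inc() trips the
	 * "addition on 0; use-after-free" warning and saturates the counter */
	refcount_set(&cw->refcount, 0);
	/* ... later, when the chunk work is looked up ... */
	refcount_inc(&cw->refcount);

	/* after: a freshly allocated chunk work already holds one reference,
	 * so no refcount_inc() is needed when it is first looked up */
	refcount_set(&cw->refcount, 1);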
2020-02-28  dm writecache: verify watermark during resume  [Mikulas Patocka, 1 file, -2/+10]
Verify the watermark upon resume - so that if the target is reloaded with lower watermark, it will start the cleanup process immediately. Fixes: 48debafe4f2f ("dm: add writecache target") Cc: stable@vger.kernel.org # 4.18+ Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-02-28  dm: report suspended device during destroy  [Mikulas Patocka, 3 files, -8/+7]
The function dm_suspended returns true if the target is suspended. However, when the target is being suspended during unload, it returns false. An example where this is a problem: the test "!dm_suspended(wc->ti)" in writecache_writeback is not sufficient, because dm_suspended returns zero while writecache_suspend is in progress. As is, without an enhanced dm_suspended, simply switching from flush_workqueue to drain_workqueue still emits warnings: workqueue writecache-writeback: drain_workqueue() isn't complete after 10 tries workqueue writecache-writeback: drain_workqueue() isn't complete after 100 tries workqueue writecache-writeback: drain_workqueue() isn't complete after 200 tries workqueue writecache-writeback: drain_workqueue() isn't complete after 300 tries workqueue writecache-writeback: drain_workqueue() isn't complete after 400 tries writecache_suspend calls flush_workqueue(wc->writeback_wq) - this function flushes the current work. However, the workqueue may re-queue itself and flush_workqueue doesn't wait for re-queued works to finish. Because of this - the function writecache_writeback continues execution after the device was suspended and then concurrently with writecache_dtr, causing a crash in writecache_writeback. We must use drain_workqueue - that waits until the work and all re-queued works finish. As a prereq for switching to drain_workqueue, this commit fixes dm_suspended to return true after the presuspend hook and before the postsuspend hook - just like during a normal suspend. It allows simplifying the dm-integrity and dm-writecache targets so that they don't have to maintain suspended flags on their own. With this change use of drain_workqueue() can be used effectively. This change was tested with the lvm2 testsuite and cryptsetup testsuite and the are no regressions. Fixes: 48debafe4f2f ("dm: add writecache target") Cc: stable@vger.kernel.org # 4.18+ Reported-by: Corey Marthaler <cmarthal@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
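The difference between the two primitives for a self-requeueing work item, in short:

	/* flush_workqueue() only waits for work queued at the time of the
	 * call; a writeback work that re-queues itself can still be running
	 * (or queued again) when this returns */
	flush_workqueue(wc->writeback_wq);

	/* drain_workqueue() keeps flushing until the queue stays empty, so it
	 * also waits for re-queued instances */
	drain_workqueue(wc->writeback_wq);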
2020-02-27  dm thin metadata: fix lockdep complaint  [Theodore Ts'o, 1 file, -1/+1]
[ 3934.173244] ====================================================== [ 3934.179572] WARNING: possible circular locking dependency detected [ 3934.185884] 5.4.21-xfstests #1 Not tainted [ 3934.190151] ------------------------------------------------------ [ 3934.196673] dmsetup/8897 is trying to acquire lock: [ 3934.201688] ffffffffbce82b18 (shrinker_rwsem){++++}, at: unregister_shrinker+0x22/0x80 [ 3934.210268] but task is already holding lock: [ 3934.216489] ffff92a10cc5e1d0 (&pmd->root_lock){++++}, at: dm_pool_metadata_close+0xba/0x120 [ 3934.225083] which lock already depends on the new lock. [ 3934.564165] Chain exists of: shrinker_rwsem --> &journal->j_checkpoint_mutex --> &pmd->root_lock For a more detailed lockdep report, please see: https://lore.kernel.org/r/20200220234519.GA620489@mit.edu We shouldn't need to hold the lock while are just tearing down and freeing the whole metadata pool structure. Fixes: 44d8ebf436399a4 ("dm thin metadata: use pool locking at end of dm_pool_metadata_close") Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-02-27  dm cache: fix a crash due to incorrect work item cancelling  [Mikulas Patocka, 1 file, -2/+2]
The crash can be reproduced by running the lvm2 testsuite test lvconvert-thin-external-cache.sh for several minutes, e.g.: while :; do make check T=shell/lvconvert-thin-external-cache.sh; done The crash happens in this call chain: do_waker -> policy_tick -> smq_tick -> end_hotspot_period -> clear_bitset -> memset -> __memset -- which accesses an invalid pointer in the vmalloc area. The work entry on the workqueue is executed even after the bitmap was freed. The problem is that cancel_delayed_work doesn't wait for the running work item to finish, so the work item can continue running and re-submitting itself even after cache_postsuspend. In order to make sure that the work item won't be running, we must use cancel_delayed_work_sync. Also, change flush_workqueue to drain_workqueue, so that if some work item submits itself or another work item, we are properly waiting for both of them. Fixes: c6b4fcbad044 ("dm: add cache target") Cc: stable@vger.kernel.org # v3.9 Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
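The change follows this pattern (the field names below are illustrative of a dm-cache style delayed 'waker' work and its workqueue, not copied from the patch):

	/* before: the delayed work may still be running, and re-arming itself,
	 * after this returns, and can then touch memory freed by postsuspend */
	cancel_delayed_work(&cache->waker);
	flush_workqueue(cache->wq);

	/* after: wait for the running instance to finish, and drain any work
	 * items that were re-submitted in the meantime */
	cancel_delayed_work_sync(&cache->waker);
	drain_workqueue(cache->wq);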
2020-02-25  dm integrity: fix invalid table returned due to argument count mismatch  [Mikulas Patocka, 1 file, -1/+1]
If the flag SB_FLAG_RECALCULATE is present in the superblock, but it was not specified on the command line (i.e. ic->recalculate_flag is false), dm-integrity would return invalid table line - the reported number of arguments would not match the real number. Fixes: 468dfca38b1a ("dm integrity: add a bitmap mode") Cc: stable@vger.kernel.org # v5.2+ Reported-by: Ondrej Kozina <okozina@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-02-25  dm integrity: fix a deadlock due to offloading to an incorrect workqueue  [Mikulas Patocka, 1 file, -4/+15]
If we need to perform synchronous I/O in dm_integrity_map_continue(), we must make sure that we are not in the map function - in order to avoid the deadlock due to bio queuing in generic_make_request. To avoid the deadlock, we offload the request to metadata_wq. However, metadata_wq also processes metadata updates for write requests. If there are too many requests that get offloaded to metadata_wq at the beginning of dm_integrity_map_continue, the workqueue metadata_wq becomes clogged and the system is incapable of processing any metadata updates. This causes a deadlock because all the requests that need to do metadata updates wait for metadata_wq to proceed and metadata_wq waits inside wait_and_add_new_range until some existing request releases its range lock (which doesn't happen because the range lock is released after metadata update). In order to fix the deadlock, we create a new workqueue offload_wq and offload requests to it - so that processing of offload_wq is independent from processing of metadata_wq. Fixes: 7eada909bfd7 ("dm: add integrity target") Cc: stable@vger.kernel.org # v4.12+ Reported-by: Heinz Mauelshagen <heinzm@redhat.com> Tested-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-02-25  dm integrity: fix recalculation when moving from journal mode to bitmap mode  [Mikulas Patocka, 1 file, -5/+12]
If we resume a device in bitmap mode and the on-disk format is in journal mode, we must recalculate anything above ic->sb->recalc_sector. Otherwise, there would be non-recalculated blocks which would cause I/O errors. Fixes: 468dfca38b1a ("dm integrity: add a bitmap mode") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-02-16  Merge tag 'block-5.6-2020-02-16' of git://git.kernel.dk/linux-block  [Linus Torvalds, 4 files, -24/+31]
Pull block fixes from Jens Axboe: "Not a lot here, which is great, basically just three small bcache fixes from Coly, and four NVMe fixes via Keith" * tag 'block-5.6-2020-02-16' of git://git.kernel.dk/linux-block: nvme: fix the parameter order for nvme_get_log in nvme_get_fw_slot_info nvme/pci: move cqe check after device shutdown nvme: prevent warning triggered by nvme_stop_keep_alive nvme/tcp: fix bug on double requeue when send fails bcache: remove macro nr_to_fifo_front() bcache: Revert "bcache: shrink btree node cache after bch_btree_check()" bcache: ignore pending signals when creating gc and allocator thread
2020-02-13  bcache: remove macro nr_to_fifo_front()  [Coly Li, 1 file, -5/+2]
Macro nr_to_fifo_front() is only used once, in btree_flush_write(), so it is unnecessary. This patch removes the macro and does the calculation directly in place. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-13  bcache: Revert "bcache: shrink btree node cache after bch_btree_check()"  [Coly Li, 1 file, -17/+0]
This reverts commit 1df3877ff6a4810054237c3259d900ded4468969. In my testing, creating the gc and allocator kernel threads may still fail even when all the cached btree nodes have been freed. It turns out that kthread_run() may fail if there is a pending signal for the current task, and the pending signal is sent by the OOM killer, which is triggered by memory consumption in bch_btree_check(). Therefore explicitly shrinking the bcache btree node cache here does not help. Now that the shrinker callback has been improved and pending signals are ignored before creating kernel threads, this operation is no longer necessary. This patch reverts commit 1df3877ff6a4 ("bcache: shrink btree node cache after bch_btree_check()") because we have a better solution now. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-13  bcache: ignore pending signals when creating gc and allocator thread  [Coly Li, 2 files, -2/+29]
When running a cache set, all the bcache btree nodes of the cache set are checked by bch_btree_check(). If the bcache btree is very large, iterating all the btree nodes will occupy too much system memory and the bcache registering process might be selected and killed by the system OOM killer. kthread_run() will fail if the current process has a pending signal, so the kthread creation in run_cache_set() for the gc and allocator kernel threads will very probably fail for a very large bcache btree. Such an OOM is actually safe, and the registering process will exit once registration is done. Therefore this patch flushes pending signals during cache set start-up, specifically in bch_cache_allocator_start() and bch_gc_thread_start(), to make sure run_cache_set() won't fail for a large cached data set. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-09  Merge branch 'work.misc' of  [Linus Torvalds, 1 file, -6/+10]
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull misc vfs updates from Al Viro: - bmap series from cmaiolino - getting rid of convolutions in copy_mount_options() (use a couple of copy_from_user() instead of the __get_user() crap) * 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: saner copy_mount_options() fibmap: Reject negative block numbers fibmap: Use bmap instead of ->bmap method in ioctl_fibmap ecryptfs: drop direct calls to ->bmap cachefiles: drop direct usage of ->bmap method. fs: Enable bmap() function to properly return errors
2020-02-06  Merge tag 'block-5.6-2020-02-05' of git://git.kernel.dk/linux-block  [Linus Torvalds, 7 files, -15/+64]
Pull more block updates from Jens Axboe: "Some later arrivals, but all fixes at this point: - bcache fix series (Coly) - Series of BFQ fixes (Paolo) - NVMe pull request from Keith with a few minor NVMe fixes - Various little tweaks" * tag 'block-5.6-2020-02-05' of git://git.kernel.dk/linux-block: (23 commits) nvmet: update AEN list and array at one place nvmet: Fix controller use after free nvmet: Fix error print message at nvmet_install_queue function brd: check and limit max_part par nvme-pci: remove nvmeq->tags nvmet: fix dsm failure when payload does not match sgl descriptor nvmet: Pass lockdep expression to RCU lists block, bfq: clarify the goal of bfq_split_bfqq() block, bfq: get a ref to a group when adding it to a service tree block, bfq: remove ifdefs from around gets/puts of bfq groups block, bfq: extend incomplete name of field on_st block, bfq: get extra ref to prevent a queue from being freed during a group move block, bfq: do not insert oom queue into position tree block, bfq: do not plug I/O for bfq_queues with no proc refs bcache: check return value of prio_read() bcache: fix incorrect data type usage in btree_flush_write() bcache: add readahead cache policy options via sysfs interface bcache: explicity type cast in bset_bkey_last() bcache: fix memory corruption in bch_cache_accounting_clear() xen/blkfront: limit allocated memory size to actual use case ...
2020-02-04  proc: convert everything to "struct proc_ops"  [Alexey Dobriyan, 1 file, -8/+7]
The most notable change is DEFINE_SHOW_ATTRIBUTE macro split in seq_file.h. Conversion rule is: llseek => proc_lseek unlocked_ioctl => proc_ioctl xxx => proc_xxx delete ".owner = THIS_MODULE" line [akpm@linux-foundation.org: fix drivers/isdn/capi/kcapi_proc.c] [sfr@canb.auug.org.au: fix kernel/sched/psi.c] Link: http://lkml.kernel.org/r/20200122180545.36222f50@canb.auug.org.au Link: http://lkml.kernel.org/r/20191225172546.GB13378@avx2 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
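Applied to a typical seq_file based proc entry, the conversion looks like this (the foo_* names are placeholders):

	/* before */
	static const struct file_operations foo_proc_fops = {
		.owner   = THIS_MODULE,
		.open    = foo_proc_open,
		.read    = seq_read,
		.llseek  = seq_lseek,
		.release = single_release,
	};

	/* after */
	static const struct proc_ops foo_proc_ops = {
		.proc_open    = foo_proc_open,
		.proc_read    = seq_read,
		.proc_lseek   = seq_lseek,
		.proc_release = single_release,
	};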
2020-02-03  fs: Enable bmap() function to properly return errors  [Carlos Maiolino, 1 file, -6/+10]
Currently, bmap() either returns the physical block number related to the requested file offset, or 0 in case of error or when the requested offset maps into a hole. This patch makes the changes needed for bmap() to properly return errors, using the return value as an error code; a pointer must now be passed to bmap() to be filled with the mapped physical block. The behavior of bmap() on return changes to:
- negative value in case of error
- zero on success, or when the mapping fell into a hole
In case of a hole, *block will be zero too. Since this is a prep patch, for now the only error return is -EINVAL if ->bmap doesn't exist. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
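A caller under the new convention looks roughly like this:

	sector_t block = logical_block;	/* offset to map, in filesystem blocks */
	int error;

	error = bmap(inode, &block);
	if (error)
		return error;	/* e.g. -EINVAL when ->bmap is not implemented */
	if (block == 0)
		;		/* the requested offset falls into a hole */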
2020-02-01  bcache: check return value of prio_read()  [Coly Li, 1 file, -5/+16]
Now, if prio_read() fails while starting a cache set, we can print an error message in run_cache_set() and handle the failure properly. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-01  bcache: fix incorrect data type usage in btree_flush_write()  [Coly Li, 1 file, -1/+2]
Dan Carpenter points out that since commit 2aa8c529387c ("bcache: avoid unnecessary btree nodes flushing in btree_flush_write()"), there is an incorrect data type usage which leads to the following static checker warning:
	drivers/md/bcache/journal.c:444 btree_flush_write()
	warn: 'ref_nr' unsigned <= 0
drivers/md/bcache/journal.c
   422  static void btree_flush_write(struct cache_set *c)
   423  {
   424          struct btree *b, *t, *btree_nodes[BTREE_FLUSH_NR];
   425          unsigned int i, nr, ref_nr;
                                    ^^^^^^
   426          atomic_t *fifo_front_p, *now_fifo_front_p;
   427          size_t mask;
   428
   429          if (c->journal.btree_flushing)
   430                  return;
   431
   432          spin_lock(&c->journal.flush_write_lock);
   433          if (c->journal.btree_flushing) {
   434                  spin_unlock(&c->journal.flush_write_lock);
   435                  return;
   436          }
   437          c->journal.btree_flushing = true;
   438          spin_unlock(&c->journal.flush_write_lock);
   439
   440          /* get the oldest journal entry and check its refcount */
   441          spin_lock(&c->journal.lock);
   442          fifo_front_p = &fifo_front(&c->journal.pin);
   443          ref_nr = atomic_read(fifo_front_p);
   444          if (ref_nr <= 0) {
                    ^^^^^^^^^^^ Unsigned can't be less than zero.
   445                  /*
   446                   * do nothing if no btree node references
   447                   * the oldest journal entry
   448                   */
   449                  spin_unlock(&c->journal.lock);
   450                  goto out;
   451          }
   452          spin_unlock(&c->journal.lock);
As the warning indicates, declaring the local variable ref_nr as unsigned int is wrong: it matches neither atomic_read() nor the "<= 0" check. This patch fixes the error by defining the local variable ref_nr as int. Fixes: 2aa8c529387c ("bcache: avoid unnecessary btree nodes flushing in btree_flush_write()") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-01  bcache: add readahead cache policy options via sysfs interface  [Coly Li, 3 files, -5/+37]
In 2007, high performance SSDs were still expensive; in order to save space for the real workload or metadata, readahead I/Os for non-metadata were bypassed and not cached on the SSD. Nowadays SSD prices have dropped a lot and larger SSDs are available at comfortable prices, so it is no longer necessary to always bypass normal readahead I/Os to save SSD space. This patch adds options for the readahead data cache policy via the sysfs file /sys/block/bcache<N>/readahead_cache_policy. The options are:
- "all": cache all readahead data I/Os.
- "meta-only": only cache metadata, and bypass other regular I/Os.
If users want bcache to keep caching only readahead requests for metadata and bypass regular data readahead, set "meta-only" in this sysfs file. By default, bcache now goes back to caching all readahead requests. Cc: stable@vger.kernel.org Signed-off-by: Coly Li <colyli@suse.de> Acked-by: Eric Wheeler <bcache@linux.ewheeler.net> Cc: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-01  bcache: explicity type cast in bset_bkey_last()  [Coly Li, 1 file, -1/+2]
In bset.h, the macro bset_bkey_last() is defined as bkey_idx((struct bkey *) (i)->d, (i)->keys). Parameter i can be of various data structure types; the macro works as long as the type of i has members 'd' and 'keys'. bset_bkey_last() is also used in the macro csum_set() to calculate the checksum of an on-disk data structure. When csum_set() is used to calculate the checksum of the on-disk bcache super block, the parameter 'i' has type struct cache_sb_disk. Inside struct cache_sb_disk (and also struct cache_sb) the member keys is of type __u16, but bkey_idx() expects an unsigned int (32 bits wide), so there is a problem when passing the parameters to bkey_idx(). The Sparse tool from the Intel 0day kbuild system reports this incompatibility. bkey_idx() is part of the user space API, so the simplest fix is to cast (i)->keys to unsigned int in the macro bset_bkey_last(). Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
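With the fix, the macro gains an explicit cast so the __u16 member is promoted to the unsigned int that bkey_idx() expects, roughly:

	#define bset_bkey_last(i)	bkey_idx((struct bkey *) (i)->d, \
						 (unsigned int) (i)->keys)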
2020-02-01  bcache: fix memory corruption in bch_cache_accounting_clear()  [Coly Li, 1 file, -3/+7]
Commit 83ff9318c44ba ("bcache: not use hard coded memset size in bch_cache_accounting_clear()") tries to make the code more easy to understand by removing the hard coded number with following change, void bch_cache_accounting_clear(...) { memset(&acc->total.cache_hits, 0, - sizeof(unsigned long) * 7); + sizeof(struct cache_stats)); } Unfortunately the change was wrong (it also tells us the original code was not easy to correctly understand). The hard coded number 7 is used because in struct cache_stats, 15 struct cache_stats { 16 struct kobject kobj; 17 18 unsigned long cache_hits; 19 unsigned long cache_misses; 20 unsigned long cache_bypass_hits; 21 unsigned long cache_bypass_misses; 22 unsigned long cache_readaheads; 23 unsigned long cache_miss_collisions; 24 unsigned long sectors_bypassed; 25 26 unsigned int rescale; 27 }; only members in LINE 18-24 want to be set to 0. It is wrong to use 'sizeof(struct cache_stats)' to replace 'sizeof(unsigned long) * 7), the memory objects behind acc->total is staled by this change. Сорокин Артем Сергеевич reports that by the following steps, kernel panic will be triggered, 1. Create new set: make-bcache -B /dev/nvme1n1 -C /dev/sda --wipe-bcache 2. Run in /sys/fs/bcache/<uuid>: echo 1 > clear_stats && cat stats_five_minute/cache_bypass_hits I can reproduce the panic and get following dmesg with KASAN enabled, [22613.172742] ================================================================== [22613.172862] BUG: KASAN: null-ptr-deref in sysfs_kf_seq_show+0x117/0x230 [22613.172864] Read of size 8 at addr 0000000000000000 by task cat/6753 [22613.172870] CPU: 1 PID: 6753 Comm: cat Not tainted 5.5.0-rc7-lp151.28.16-default+ #11 [22613.172872] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/29/2019 [22613.172873] Call Trace: [22613.172964] dump_stack+0x8b/0xbb [22613.172968] ? sysfs_kf_seq_show+0x117/0x230 [22613.172970] ? sysfs_kf_seq_show+0x117/0x230 [22613.173031] __kasan_report+0x176/0x192 [22613.173064] ? pr_cont_kernfs_name+0x40/0x60 [22613.173067] ? sysfs_kf_seq_show+0x117/0x230 [22613.173070] kasan_report+0xe/0x20 [22613.173072] sysfs_kf_seq_show+0x117/0x230 [22613.173105] seq_read+0x199/0x6d0 [22613.173110] vfs_read+0xa5/0x1a0 [22613.173113] ksys_read+0x110/0x160 [22613.173115] ? 
kernel_write+0xb0/0xb0 [22613.173177] do_syscall_64+0x77/0x290 [22613.173238] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [22613.173241] RIP: 0033:0x7fc2c886ac61 [22613.173244] Code: fe ff ff 48 8d 3d c7 a0 09 00 48 83 ec 08 e8 46 03 02 00 66 0f 1f 44 00 00 8b 05 ca fb 2c 00 48 63 ff 85 c0 75 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 55 53 48 89 d5 48 89 [22613.173245] RSP: 002b:00007ffebe776d68 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [22613.173248] RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007fc2c886ac61 [22613.173249] RDX: 0000000000020000 RSI: 00007fc2c8cca000 RDI: 0000000000000003 [22613.173250] RBP: 0000000000020000 R08: ffffffffffffffff R09: 0000000000000000 [22613.173251] R10: 000000000000038c R11: 0000000000000246 R12: 00007fc2c8cca000 [22613.173253] R13: 0000000000000003 R14: 00007fc2c8cca00f R15: 0000000000020000 [22613.173255] ================================================================== [22613.173256] Disabling lock debugging due to kernel taint [22613.173350] BUG: kernel NULL pointer dereference, address: 0000000000000000 [22613.178380] #PF: supervisor read access in kernel mode [22613.180959] #PF: error_code(0x0000) - not-present page [22613.183444] PGD 0 P4D 0 [22613.184867] Oops: 0000 [#1] SMP KASAN PTI [22613.186797] CPU: 1 PID: 6753 Comm: cat Tainted: G B 5.5.0-rc7-lp151.28.16-default+ #11 [22613.191253] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/29/2019 [22613.196706] RIP: 0010:sysfs_kf_seq_show+0x117/0x230 [22613.199097] Code: ff 48 8b 0b 48 8b 44 24 08 48 01 e9 eb a6 31 f6 48 89 cf ba 00 10 00 00 48 89 4c 24 10 e8 b1 e6 e9 ff 4c 89 ff e8 19 07 ea ff <49> 8b 07 48 85 c0 48 89 44 24 08 0f 84 91 00 00 00 49 8b 6d 00 48 [22613.208016] RSP: 0018:ffff8881d4f8fd78 EFLAGS: 00010246 [22613.210448] RAX: 0000000000000000 RBX: ffff8881eb99b180 RCX: ffffffff810d9ef6 [22613.213691] RDX: 0000000000000001 RSI: 0000000000000246 RDI: 0000000000000246 [22613.216893] RBP: 0000000000001000 R08: fffffbfff072ddcd R09: fffffbfff072ddcd [22613.220075] R10: 0000000000000001 R11: fffffbfff072ddcc R12: ffff8881de5c0200 [22613.223256] R13: ffff8881ed175500 R14: ffff8881eb99b198 R15: 0000000000000000 [22613.226290] FS: 00007fc2c8d3d500(0000) GS:ffff8881f2a80000(0000) knlGS:0000000000000000 [22613.229637] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [22613.231993] CR2: 0000000000000000 CR3: 00000001ec89a004 CR4: 00000000003606e0 [22613.234909] Call Trace: [22613.235931] seq_read+0x199/0x6d0 [22613.237259] vfs_read+0xa5/0x1a0 [22613.239229] ksys_read+0x110/0x160 [22613.240590] ? 
kernel_write+0xb0/0xb0 [22613.242040] do_syscall_64+0x77/0x290 [22613.243625] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [22613.245450] RIP: 0033:0x7fc2c886ac61 [22613.246706] Code: fe ff ff 48 8d 3d c7 a0 09 00 48 83 ec 08 e8 46 03 02 00 66 0f 1f 44 00 00 8b 05 ca fb 2c 00 48 63 ff 85 c0 75 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 55 53 48 89 d5 48 89 [22613.253296] RSP: 002b:00007ffebe776d68 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [22613.255835] RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007fc2c886ac61 [22613.258472] RDX: 0000000000020000 RSI: 00007fc2c8cca000 RDI: 0000000000000003 [22613.260807] RBP: 0000000000020000 R08: ffffffffffffffff R09: 0000000000000000 [22613.263188] R10: 000000000000038c R11: 0000000000000246 R12: 00007fc2c8cca000 [22613.265598] R13: 0000000000000003 R14: 00007fc2c8cca00f R15: 0000000000020000 [22613.268729] Modules linked in: scsi_transport_iscsi af_packet iscsi_ibft iscsi_boot_sysfs vmw_vsock_vmci_transport vsock fuse bnep kvm_intel kvm irqbypass crc32_pclmul crc32c_intel ghash_clmulni_intel snd_ens1371 snd_ac97_codec ac97_bus bcache snd_pcm btusb btrtl btbcm btintel crc64 aesni_intel glue_helper crypto_simd vmw_balloon cryptd bluetooth snd_timer snd_rawmidi snd joydev pcspkr e1000 rfkill vmw_vmci soundcore ecdh_generic ecc gameport i2c_piix4 mptctl ac button hid_generic usbhid sr_mod cdrom ata_generic ehci_pci vmwgfx uhci_hcd drm_kms_helper syscopyarea serio_raw sysfillrect sysimgblt fb_sys_fops ttm ehci_hcd mptspi scsi_transport_spi mptscsih ata_piix mptbase ahci usbcore libahci drm sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua [22613.292429] CR2: 0000000000000000 [22613.293563] ---[ end trace a074b26a8508f378 ]--- [22613.295138] RIP: 0010:sysfs_kf_seq_show+0x117/0x230 [22613.296769] Code: ff 48 8b 0b 48 8b 44 24 08 48 01 e9 eb a6 31 f6 48 89 cf ba 00 10 00 00 48 89 4c 24 10 e8 b1 e6 e9 ff 4c 89 ff e8 19 07 ea ff <49> 8b 07 48 85 c0 48 89 44 24 08 0f 84 91 00 00 00 49 8b 6d 00 48 [22613.303553] RSP: 0018:ffff8881d4f8fd78 EFLAGS: 00010246 [22613.305280] RAX: 0000000000000000 RBX: ffff8881eb99b180 RCX: ffffffff810d9ef6 [22613.307924] RDX: 0000000000000001 RSI: 0000000000000246 RDI: 0000000000000246 [22613.310272] RBP: 0000000000001000 R08: fffffbfff072ddcd R09: fffffbfff072ddcd [22613.312685] R10: 0000000000000001 R11: fffffbfff072ddcc R12: ffff8881de5c0200 [22613.315076] R13: ffff8881ed175500 R14: ffff8881eb99b198 R15: 0000000000000000 [22613.318116] FS: 00007fc2c8d3d500(0000) GS:ffff8881f2a80000(0000) knlGS:0000000000000000 [22613.320743] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [22613.322628] CR2: 0000000000000000 CR3: 00000001ec89a004 CR4: 00000000003606e0 Here this patch fixes the following problem by explicity set all the 7 members to 0 in bch_cache_accounting_clear(). Reported-by: Сорокин Артем Сергеевич <a.sorokin@bank-hlynov.ru> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
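After the fix, the function clears exactly the seven counters listed above instead of memset()ing past them; paraphrased:

	void bch_cache_accounting_clear(struct cache_accounting *acc)
	{
		acc->total.cache_hits = 0;
		acc->total.cache_misses = 0;
		acc->total.cache_bypass_hits = 0;
		acc->total.cache_bypass_misses = 0;
		acc->total.cache_readaheads = 0;
		acc->total.cache_miss_collisions = 0;
		acc->total.sectors_bypassed = 0;
	}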
2020-01-30  Merge tag 'for-5.6/dm-changes' of  [Linus Torvalds, 16 files, -101/+578]
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm Pull device mapper updates from Mike Snitzer: - Fix DM core's potential for q->make_request_fn NULL pointer in the unlikely case that a DM device is created without a DM table and then accessed due to upper-layer userspace code or user error. - Fix DM thin-provisioning's metadata_pre_commit_callback to not use memory after it is free'd. Also refactor code to disallow changing the thin-pool's data device once in use -- doing so guarantees smae lifetime of pool's data device relative to the pool metadata. - Fix DM space maps used by DM thinp and DM cache to avoid reuse of a already used block. This race was identified with extremely heavy snapshot use in the context of DM thin provisioning. - Fix DM raid's table status relative to an active rebuild. - Fix DM crypt to use GFP_NOIO rather than GFP_NOFS in call to skcipher_request_alloc(). Also fix benbi IV constructor crash if used in authenticated mode. - Add DM crypt support for Elephant diffuser to allow for Bitlocker compatibility. - Fix DM verity target to not prefetch hash blocks for data that has already been verified. - Fix DM writecache's incorrect flush sequence during commit when in SSD mode. - Improve DM writecache's sequential write performance on SSDs. - Add DM zoned target support for zone sizes smaller than 128MiB. - Add DM multipath 'queue_if_no_path_timeout_secs' module param to allow timeout if path isn't reinstated. This allows users a kernel safety-net against IO hanging indefinitely, due to no active paths, that has historically only been provided by multipathd userspace. - Various DM code cleanups to use true/false rather than 1/0, a variable rename in dm-dust, and fix for a math error in comment for DM thin metadata's ondisk format. * tag 'for-5.6/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (21 commits) dm: fix potential for q->make_request_fn NULL pointer dm writecache: improve performance of large linear writes on SSDs dm mpath: Add timeout mechanism for queue_if_no_path dm thin: change data device's flush_bio to be member of struct pool dm thin: don't allow changing data device during thin-pool reload dm thin: fix use-after-free in metadata_pre_commit_callback dm thin metadata: use pool locking at end of dm_pool_metadata_close dm writecache: fix incorrect flush sequence when doing SSD mode commit dm crypt: fix benbi IV constructor crash if used in authenticated mode dm crypt: Implement Elephant diffuser for Bitlocker compatibility dm space map common: fix to ensure new block isn't already in use dm verity: don't prefetch hash blocks for already-verified data dm crypt: fix GFP flags passed to skcipher_request_alloc() dm thin metadata: Fix trivial math error in on-disk format documentation dm thin metadata: use true/false for bool variable dm snapshot: use true/false for bool variable dm bio prison v2: use true/false for bool variable dm mpath: use true/false for bool variable dm zoned: support zone sizes smaller than 128MiB dm raid: table line rebuild status fixes ...
2020-01-27  Merge tag 'for-5.6/drivers-2020-01-27' of git://git.kernel.dk/linux-block  [Linus Torvalds, 11 files, -236/+469]
Pull block driver updates from Jens Axboe: "Like the core side, not a lot of changes here, just two main items: - Series of patches (via Coly) with fixes for bcache (Coly, Christoph) - MD pull request from Song" * tag 'for-5.6/drivers-2020-01-27' of git://git.kernel.dk/linux-block: (31 commits) bcache: reap from tail of c->btree_cache in bch_mca_scan() bcache: reap c->btree_cache_freeable from the tail in bch_mca_scan() bcache: remove member accessed from struct btree bcache: print written and keys in trace_bcache_btree_write bcache: avoid unnecessary btree nodes flushing in btree_flush_write() bcache: add code comments for state->pool in __btree_sort() lib: crc64: include <linux/crc64.h> for 'crc64_be' bcache: use read_cache_page_gfp to read the superblock bcache: store a pointer to the on-disk sb in the cache and cached_dev structures bcache: return a pointer to the on-disk sb from read_super bcache: transfer the sb_page reference to register_{bdev,cache} bcache: fix use-after-free in register_bcache() bcache: properly initialize 'path' and 'err' in register_bcache() bcache: rework error unwinding in register_bcache bcache: use a separate data structure for the on-disk super block bcache: cached_dev_free needs to put the sb page md/raid1: introduce wait_for_serialization md/raid1: use bucket based mechanism for IO serialization md: introduce a new struct for IO serialization md: don't destroy serial_info_pool if serialize_policy is true ...
2020-01-27  dm: fix potential for q->make_request_fn NULL pointer  [Mike Snitzer, 1 file, -2/+7]
Move blk_queue_make_request() to dm.c:alloc_dev() so that q->make_request_fn is never NULL during the lifetime of a DM device (even one that is created without a DM table). Otherwise generic_make_request() will crash simply by doing:
	dmsetup create -n test
	mount /dev/dm-N /mnt
While at it, move ->congested_data initialization out of dm.c:alloc_dev() and into the bio-based specific init method. Reported-by: Stefan Bader <stefan.bader@canonical.com> BugLink: https://bugs.launchpad.net/bugs/1860231 Fixes: ff36ab34583a ("dm: remove request-based logic from make_request_fn wrapper") Depends-on: c12c9a3c3860c ("dm: various cleanups to md->queue initialization code") Cc: stable@vger.kernel.org Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-01-23  bcache: reap from tail of c->btree_cache in bch_mca_scan()  [Coly Li, 1 file, -5/+5]
When shrinking the btree node cache from c->btree_cache in bch_mca_scan(), the selected node is rotated from the head to the tail of the c->btree_cache list whether or not it is reaped. But in the bcache journal code, when flushing the btree nodes with the oldest journal entry, btree nodes are iterated and selected from the tail of the c->btree_cache list in btree_flush_write(). The list_rotate_left() in bch_mca_scan() makes btree_flush_write() iterate more nodes in c->btree_cache in reverse order. This patch just reaps the selected btree node cache without moving it from the head to the tail of the c->btree_cache list, so bch_mca_scan() no longer messes up the c->btree_cache list for btree_flush_write(). Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-23  bcache: reap c->btree_cache_freeable from the tail in bch_mca_scan()  [Coly Li, 1 file, -3/+3]
In order to skip the most recently freed btree node caches, currently in bch_mca_scan() the first 3 caches in the c->btree_cache_freeable list are skipped when shrinking bcache node caches. The related code in bch_mca_scan() is:
   737  list_for_each_entry_safe(b, t, &c->btree_cache_freeable, list) {
   738          if (nr <= 0)
   739                  goto out;
   740
   741          if (++i > 3 &&
   742              !mca_reap(b, 0, false)) {
                        (lines 743-745, which free the cache memory, are elided)
   746          }
   747          nr--;
   748  }
The problem is, if the virtual memory code calls bch_mca_scan() and the calculated 'nr' is 1 or 2, then nothing will be shrunk in the above loop. In such a case, if the slub/slab manager calls bch_mca_scan() many times with a small scan number, it does not help to shrink cache memory and just wastes CPU cycles. This patch selects btree node caches from the tail of the c->btree_cache_freeable list instead, so the most recently freed caches can still be allocated by mca_alloc(), and at least 1 node can be shrunk. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-23  bcache: remove member accessed from struct btree  [Coly Li, 2 files, -8/+2]
The member 'accessed' of struct btree is used in bch_mca_scan() when shrinking btree node caches. The original idea is: if b->accessed is set, clear it and look at the next btree node cache in the c->btree_cache list, and only shrink the caches whose b->accessed is cleared, so that only cold btree node caches are shrunk. But when I/O pressure is high, it is very likely that b->accessed of a btree node cache will be set again in bch_btree_node_get() before bch_mca_scan() selects it again, leaving bch_mca_scan() no chance to shrink enough memory back to the slub or slab system. This patch removes the member accessed from struct btree; once a btree node cache is selected, it is shrunk immediately. With this change, bch_mca_scan() can release btree node caches more efficiently. Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-23  bcache: avoid unnecessary btree nodes flushing in btree_flush_write()  [Coly Li, 1 file, -5/+75]
the commit 91be66e1318f ("bcache: performance improvement for btree_flush_write()") was an effort to flushing btree node with oldest btree node faster in following methods, - Only iterate dirty btree nodes in c->btree_cache, avoid scanning a lot of clean btree nodes. - Take c->btree_cache as a LRU-like list, aggressively flushing all dirty nodes from tail of c->btree_cache util the btree node with oldest journal entry is flushed. This is to reduce the time of holding c->bucket_lock. Guoju Fang and Shuang Li reported that they observe unexptected extra write I/Os on cache device after applying the above patch. Guoju Fang provideed more detailed diagnose information that the aggressive btree nodes flushing may cause 10x more btree nodes to flush in his workload. He points out when system memory is large enough to hold all btree nodes in memory, c->btree_cache is not a LRU-like list any more. Then the btree node with oldest journal entry is very probably not- close to the tail of c->btree_cache list. In such situation much more dirty btree nodes will be aggressively flushed before the target node is flushed. When slow SATA SSD is used as cache device, such over- aggressive flushing behavior will cause performance regression. After spending a lot of time on debug and diagnose, I find the real condition is more complicated, aggressive flushing dirty btree nodes from tail of c->btree_cache list is not a good solution. - When all btree nodes are cached in memory, c->btree_cache is not a LRU-like list, the btree nodes with oldest journal entry won't be close to the tail of the list. - There can be hundreds dirty btree nodes reference the oldest journal entry, before flushing all the nodes the oldest journal entry cannot be reclaimed. When the above two conditions mixed together, a simply flushing from tail of c->btree_cache list is really NOT a good idea. Fortunately there is still chance to make btree_flush_write() work better. Here is how this patch avoids unnecessary btree nodes flushing, - Only acquire c->journal.lock when getting oldest journal entry of fifo c->journal.pin. In rested locations check the journal entries locklessly, so their values can be changed on other cores in parallel. - In loop list_for_each_entry_safe_reverse(), checking latest front point of fifo c->journal.pin. If it is different from the original point which we get with locking c->journal.lock, it means the oldest journal entry is reclaim on other cores. At this moment, all selected dirty nodes recorded in array btree_nodes[] are all flushed and clean on other CPU cores, it is unncessary to iterate c->btree_cache any longer. Just quit the list_for_each_entry_safe_reverse() loop and the following for-loop will skip all the selected clean nodes. - Find a proper time to quit the list_for_each_entry_safe_reverse() loop. Check the refcount value of orignial fifo front point, if the value is larger than selected node number of btree_nodes[], it means more matching btree nodes should be scanned. Otherwise it means no more matching btee nodes in rest of c->btree_cache list, the loop can be quit. If the original oldest journal entry is reclaimed and fifo front point is updated, the refcount of original fifo front point will be 0, then the loop will be quit too. - Not hold c->bucket_lock too long time. c->bucket_lock is also required for space allocation for cached data, hold it for too long time will block regular I/O requests. 
When iterating the list c->btree_cache, even if there are a lot of matching btree nodes, in order not to hold c->bucket_lock for too long, only BTREE_FLUSH_NR nodes are selected and flushed in the following for-loop. With this patch, only btree nodes referencing the oldest journal entry are flushed to the cache device; there is no more aggressive flushing of unnecessary btree nodes. And in order to avoid blocking regular I/O requests, each time btree_flush_write() is called, at most BTREE_FLUSH_NR btree nodes are selected to flush, even if there are more matching btree nodes in the list c->btree_cache. One more thing to explain: why is it safe to read the front point of c->journal.pin without holding c->journal.lock inside the list_for_each_entry_safe_reverse() loop? When reading the front point of the fifo c->journal.pin, we don't need to know its exact value; we only want to check whether the value differs from the original front point (which is accurate because it was read while c->journal.lock was held). For that purpose it works as expected without holding c->journal.lock. Even if the front point is changed on another CPU core and not yet visible to the local core, and the btree node currently being iterated references a journal entry identical to the originally fetched fifo front point, it is still safe: after taking mutex b->write_lock (with its memory barrier) this btree node will be found clean and skipped, and the loop will quit later while iterating the next node of the list c->btree_cache. Fixes: 91be66e1318f ("bcache: performance improvement for btree_flush_write()") Reported-by: Guoju Fang <fangguoju@gmail.com> Reported-by: Shuang Li <psymon@bonuscloud.io> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>