path: root/drivers/md
2018-05-31  dm: convert to bioset_init()/mempool_init()  [Kent Overstreet]  17 files, -206/+197
Convert dm to embedded bio sets. Acked-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
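The conversion pattern these patches apply looks roughly like the sketch below; the struct and function names are illustrative, not dm's actual code, and it assumes the bioset_init()/bioset_exit() and mempool_init_*()/mempool_exit() helpers introduced by this series.

#include <linux/bio.h>
#include <linux/mempool.h>

/* Illustrative driver state; not the real dm/md/bcache structures. */
struct demo_dev {
	struct bio_set bs;	/* was: struct bio_set *bs;   (bioset_create())  */
	mempool_t buf_pool;	/* was: mempool_t *buf_pool;  (mempool_create()) */
};

static int demo_dev_init(struct demo_dev *d)
{
	int ret;

	/* Initialize the embedded bio_set in place instead of allocating one. */
	ret = bioset_init(&d->bs, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
	if (ret)
		return ret;

	/* Same idea for the mempool: init in place rather than mempool_create(). */
	ret = mempool_init_kmalloc_pool(&d->buf_pool, 16, 128);
	if (ret)
		bioset_exit(&d->bs);
	return ret;
}

static void demo_dev_exit(struct demo_dev *d)
{
	mempool_exit(&d->buf_pool);
	bioset_exit(&d->bs);
}

Embedding the sets avoids a pointer dereference per allocation and one extra allocation/free per device.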
2018-05-31  md: convert to bioset_init()/mempool_init()  [Kent Overstreet]  15 files, -181/+159
Convert md to embedded bio sets. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-31  bcache: convert to bioset_init()/mempool_init()  [Kent Overstreet]  7 files, -52/+37
Convert bcache to embedded bio sets. Reviewed-by: Coly Li <colyli@suse.de> Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-31  block: convert bounce, q->bio_split to bioset_init()/mempool_init()  [Kent Overstreet]  1 file, -1/+1
Convert the core block functionality to embedded bio sets. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-28  bcache: Replace bch_read_string_list() by __sysfs_match_string()  [Andy Shevchenko]  1 file, -35/+9
The kernel library has a common function to match user input from sysfs against an array of strings. Thus, replace bch_read_string_list() with __sysfs_match_string(). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
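For reference, a minimal sketch of how __sysfs_match_string() is typically used in a sysfs store handler; the mode array and handler below are made up for illustration, they are not bcache's.

#include <linux/kernel.h>
#include <linux/string.h>

static const char * const demo_modes[] = { "off", "auto", "always" };

/* Returns bytes consumed or a negative errno, sysfs-style. */
static ssize_t demo_mode_store(const char *buf, size_t size, unsigned int *mode)
{
	/* Matches with sysfs semantics (trailing newline tolerated). */
	int idx = __sysfs_match_string(demo_modes, ARRAY_SIZE(demo_modes), buf);

	if (idx < 0)
		return idx;	/* -EINVAL when nothing matches */

	*mode = idx;		/* index of the matched string */
	return size;
}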
2018-05-28  bcache: Move couple of functions to sysfs.c  [Andy Shevchenko]  3 files, -40/+35
There are a couple of functions that are used exclusively in sysfs.c. Move them there and make them static. Besides the above, this will allow further clean-up. No functional change intended. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-28  bcache: Move couple of string arrays to sysfs.c  [Andy Shevchenko]  3 files, -20/+18
There are a couple of string arrays that are used exclusively in sysfs.c. Move them there and make them static. Besides the above, this will allow further clean-up. No functional change intended. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-28  bcache: stop bcache device when backing device is offline  [Coly Li]  2 files, -0/+55
Currently bcache does not handle backing device failure: if the backing device is offline and disconnected from the system, its bcache device can still be accessed. If the bcache device is in writeback mode, I/O requests can even succeed if they hit the cache device. That is to say, when and how bcache handles an offline backing device is undefined. This patch tries to handle backing device offline in a rather simple way:
- Add a cached_dev->status_update_thread kernel thread to update the backing device status every 1 second.
- Add cached_dev->offline_seconds to record how many seconds the backing device has been observed to be offline. If the backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT (30) seconds, set dc->io_disable to 1 and call bcache_device_stop() to stop the bcache device linked to the offline backing device.
Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT seconds, its bcache device will be removed; user space applications writing to it will then get errors immediately and can handle the device failure in time. This patch is quite simple and does not handle more complicated situations. Once the bcache device is stopped, users need to recover the backing device, then register and attach it manually. Changelog: v3: call wait_for_kthread_stop() before the kernel thread exits. v2: remove "bcache: " prefix when calling pr_warn(). v1: initial version. Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Cc: Michael Lyle <mlyle@lyle.org> Cc: Junhui Tang <tang.junhui@zte.com.cn> Signed-off-by: Jens Axboe <axboe@kernel.dk>
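A rough sketch of the polling thread this patch describes, under the assumption of a simple per-device state struct; struct demo_cached and its fields are illustrative stand-ins for bcache's struct cached_dev, and the real code also has to handle device teardown.

#include <linux/kthread.h>
#include <linux/delay.h>

#define DEMO_OFFLINE_TIMEOUT	30	/* seconds, like BACKING_DEV_OFFLINE_TIMEOUT */

struct demo_cached {
	bool backing_online;	/* stand-in for "backing device still present" */
	bool io_disable;	/* stand-in for dc->io_disable */
};

static int demo_status_update_thread(void *arg)
{
	struct demo_cached *d = arg;
	unsigned int offline_seconds = 0;

	while (!kthread_should_stop()) {
		if (d->backing_online) {
			offline_seconds = 0;
		} else if (++offline_seconds >= DEMO_OFFLINE_TIMEOUT) {
			/* The real code also calls bcache_device_stop() here. */
			d->io_disable = true;
		}
		ssleep(1);	/* re-check the backing device once per second */
	}
	return 0;
}

Such a thread would be started with kthread_run() at register time and stopped with kthread_stop() when the cached device goes away.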
2018-05-14  block: sanitize blk_get_request calling conventions  [Christoph Hellwig]  1 file, -1/+2
Switch everyone to blk_get_request_flags, and then rename blk_get_request_flags to blk_get_request. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  block: consolidate struct request timestamp fields  [Omar Sandoval]  1 file, -1/+1
Currently, struct request has four timestamp fields:
- A start time, set at get_request time, in jiffies, used for iostats
- An I/O start time, set at start_request time, in ktime nanoseconds, used for blk-stats (i.e., wbt, kyber, hybrid polling)
- Another start time and another I/O start time, used for cfq and bfq
These can all be consolidated into one start time and one I/O start time, both in ktime nanoseconds, shaving off up to 16 bytes from struct request depending on the kernel config. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
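As a sketch of the consolidated layout (illustrative field names, not the actual struct request), both consumers can work from the same two ns-resolution timestamps:

#include <linux/ktime.h>
#include <linux/timekeeping.h>
#include <linux/jiffies.h>

/* Illustrative only -- not the real struct request layout. */
struct demo_rq_times {
	u64 start_time_ns;	/* allocation time, replaces the jiffies start time */
	u64 io_start_time_ns;	/* dispatch time, shared by the blk-stats users */
};

static void demo_account_latency(const struct demo_rq_times *t)
{
	u64 now = ktime_get_ns();
	u64 total_ns = now - t->start_time_ns;		/* what iostats wants */
	u64 device_ns = now - t->io_start_time_ns;	/* what wbt/kyber want */

	/* jiffies-granularity consumers can still convert on the fly: */
	unsigned long total_jiffies = nsecs_to_jiffies(total_ns);

	(void)device_ns;
	(void)total_jiffies;
}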
2018-05-03  bcache: use pr_info() to inform duplicated CACHE_SET_IO_DISABLE set  [Coly Li]  1 file, -1/+1
It is possible that multiple I/O requests hit a failed cache device or backing device, so it is quite common that CACHE_SET_IO_DISABLE is already set when a task tries to set the bit from bch_cache_set_error(). Currently the message "CACHE_SET_IO_DISABLE already set" is printed by pr_warn(), which might mislead users into thinking a serious fault has happened in the source code. This patch uses pr_info() to print the information in such a situation, avoiding extra worry. This information is helpful for understanding bcache behavior on cache device failures, so it is kept in the source code. Fixes: 771f393e8ffc9 ("bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-03  bcache: set dc->io_disable to true in conditional_stop_bcache_device()  [Coly Li]  1 file, -0/+14
Commit 7e027ca4b534b ("bcache: add stop_when_cache_set_failed option to backing device") adds the stop_when_cache_set_failed option and stops the bcache device if stop_when_cache_set_failed is auto and there is dirty data on the broken cache device. There may be a small time gap in which the cache set is released and set to NULL but the bcache device is not released yet (because they are released in parallel). During this time gap, dc->c is NULL so CACHE_SET_IO_DISABLE won't be checked, and dc->io_disable is still false, so newly arriving I/O requests will be accepted and go directly to the backing device as if no cache set were attached. If there is dirty data on the cache device, this behavior may introduce inconsistent data. This patch sets dc->io_disable to true before calling bcache_device_stop() to make sure the backing device also rejects new I/O requests, so even in the small time gap no I/O goes directly to the backing device to corrupt data consistency. Fixes: 7e027ca4b534b ("bcache: add stop_when_cache_set_failed option to backing device") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-03  bcache: add wait_for_kthread_stop() in bch_allocator_thread()  [Coly Li]  1 file, -1/+4
When CACHE_SET_IO_DISABLE is set in the cache set flags, the bcache allocator thread routine bch_allocator_thread() may stop its while-loops and exit. Then it is possible to observe the following kernel oops message:
[ 631.068366] bcache: bch_btree_insert() error -5
[ 631.069115] bcache: cached_dev_detach_finish() Caching disabled for sdf
[ 631.070220] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 631.070250] PGD 0 P4D 0
[ 631.070261] Oops: 0002 [#1] SMP PTI
[snipped]
[ 631.070578] Workqueue: events cache_set_flush [bcache]
[ 631.070597] RIP: 0010:exit_creds+0x1b/0x50
[ 631.070610] RSP: 0018:ffffc9000705fe08 EFLAGS: 00010246
[ 631.070626] RAX: 0000000000000001 RBX: ffff880a622ad300 RCX: 000000000000000b
[ 631.070645] RDX: 0000000000000601 RSI: 000000000000000c RDI: 0000000000000000
[ 631.070663] RBP: ffff880a622ad300 R08: ffffea00190c66e0 R09: 0000000000000200
[ 631.070682] R10: ffff880a48123000 R11: ffff880000000000 R12: 0000000000000000
[ 631.070700] R13: ffff880a4b160e40 R14: ffff880a4b160000 R15: 0ffff880667e2530
[ 631.070719] FS: 0000000000000000(0000) GS:ffff880667e00000(0000) knlGS:0000000000000000
[ 631.070740] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 631.070755] CR2: 0000000000000000 CR3: 000000000200a001 CR4: 00000000003606e0
[ 631.070774] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 631.070793] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 631.070811] Call Trace:
[ 631.070828] __put_task_struct+0x55/0x160
[ 631.070845] kthread_stop+0xee/0x100
[ 631.070863] cache_set_flush+0x11d/0x1a0 [bcache]
[ 631.070879] process_one_work+0x146/0x340
[ 631.070892] worker_thread+0x47/0x3e0
[ 631.070906] kthread+0xf5/0x130
[ 631.070917] ? max_active_store+0x60/0x60
[ 631.070930] ? kthread_bind+0x10/0x10
[ 631.070945] ret_from_fork+0x35/0x40
[snipped]
[ 631.071017] RIP: exit_creds+0x1b/0x50 RSP: ffffc9000705fe08
[ 631.071033] CR2: 0000000000000000
[ 631.071045] ---[ end trace 011c63a24b22c927 ]---
[ 631.071085] bcache: bcache_device_free() bcache0 stopped
The reason is that cache_set_flush() tries to call kthread_stop() to stop the allocator thread, but it has already exited due to cache device I/O errors. This patch adds wait_for_kthread_stop() at the tail of bch_allocator_thread() to prevent the thread routine from exiting on its own. The allocator thread is then blocked in wait_for_kthread_stop(), waiting for cache_set_flush() to stop it by calling kthread_stop(). Changelog: v3: add Reviewed-by from Hannes. v2: do not return directly from allocator_wait(); move 'return 0' to the tail of bch_allocator_thread(). v1: initial version. Fixes: 771f393e8ffc ("bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
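The generic shape of that fix, as a sketch: this is the standard kthread "park until stopped" idiom, not bcache's wait_for_kthread_stop() macro verbatim.

#include <linux/kthread.h>
#include <linux/sched.h>

static int demo_allocator_thread(void *arg)
{
	while (!kthread_should_stop()) {
		/* ... allocator work; may break out early on I/O errors ... */
		break;
	}

	/*
	 * Don't let the thread function return on its own: block here so the
	 * teardown path (cache_set_flush() in bcache) can still find a live
	 * thread when it calls kthread_stop().
	 */
	set_current_state(TASK_INTERRUPTIBLE);
	while (!kthread_should_stop()) {
		schedule();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}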
2018-05-03  bcache: count backing device I/O error for writeback I/O  [Coly Li]  1 file, -1/+3
Commit c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") counts backing device I/O requests and sets dc->io_disable to true if the error counter exceeds dc->io_error_limit. But it only counts I/O errors for regular I/O requests and neglects errors of writeback I/Os when the backing device is offline. This patch counts the errors of writeback I/Os: in dirty_endio(), if bio->bi_status is not 0, it means an error happened when writing dirty keys to the backing device, and bch_count_backing_io_errors() is called. With this fix, even if no regular I/O requests arrive, the bcache device may still be stopped for the broken backing device once writeback I/O errors exceed dc->io_error_limit. Fixes: c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-03  bcache: set CACHE_SET_IO_DISABLE in bch_cached_dev_error()  [Coly Li]  1 file, -0/+17
Commit c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") tries to stop the bcache device by calling bcache_device_stop() when too many I/O errors have happened on the backing device. But if there is internal I/O happening on the cache device (writeback scan, garbage collection, etc.), a regular I/O request that triggered the internal I/Os may still hold a refcount on dc->count, and the refcount may only be dropped after the internal I/O has stopped. With this patch, bch_cached_dev_error() checks whether the backing device is attached to a cache set; if so, CACHE_SET_IO_DISABLE is set in the flags of this cache set. Then internal I/Os on the cache device are rejected and stopped immediately, and the bcache device can be stopped. For people who are not familiar with the interesting refcount dependency, let me explain a bit more how the fix works. For example, the writeback thread scans the cache device for dirty data to write back. Before it stops, it holds a refcount on dc->count. When the CACHE_SET_IO_DISABLE bit is set, the internal I/O is stopped, the while-loop in bch_writeback_thread() quits and calls cached_dev_put() to drop dc->count. If this is the last refcount to drop, cached_dev_detach_finish() will be called. This callback in turn calls closure_put(dc->disk.cl) to drop a refcount of the closure dc->disk.cl. If this is the last refcount of this closure to drop, cached_dev_flush() will be called and the cached device is freed. So if CACHE_SET_IO_DISABLE is not set, the bcache device cannot be stopped until all internal cache device I/O has stopped. For a large cache device, with the writeback thread competing for locks with the gc thread, that can be quite a long time to wait. Fixes: c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
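A stripped-down sketch of the error-path idea; the flag word, bit name and helper are illustrative, not bcache's struct cache_set.

#include <linux/bitops.h>
#include <linux/printk.h>

#define DEMO_IO_DISABLE	0	/* bit index, in the spirit of CACHE_SET_IO_DISABLE */

static void demo_cached_dev_error(unsigned long *set_flags)
{
	/*
	 * Once this bit is set, internal I/O paths (writeback scan, gc) are
	 * expected to notice it, bail out and drop their references, so the
	 * device stop can finally complete.
	 */
	if (test_and_set_bit(DEMO_IO_DISABLE, set_flags))
		pr_info("IO disable already set\n");	/* cf. the pr_info() change above */
}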
2018-05-03  bcache: store disk name in struct cache and struct cached_dev  [Coly Li]  5 files, -34/+30
Current code uses bdevname() or bio_devname() to reference the gendisk disk name when bcache needs to display disk names in kernel messages. This was safe before the bcache device failure handling patch set was merged, because when devices failed there was a deadlock that prevented bcache from printing error messages with the gendisk disk name. But after the failure handling patch set was merged, the deadlock is fixed, so it is possible that the gendisk structure bdev->bd_disk is already released when bdevname() is called to reference bdev->bd_disk->disk_name[]. This is why I received bug reports of NULL pointer dereference panics. This patch stores the gendisk disk name in a buffer inside struct cache and struct cached_dev, so printing the name of an offline device no longer references bdev->bd_disk. This patch also avoids extra function calls to bdevname() and bio_devname(). Changelog: v3, add Reviewed-by from Hannes. v2, call bdevname() earlier in register_bdev(). v1, first version with suggestion from Junhui Tang. Fixes: c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") Fixes: 5138ac6748e38 ("bcache: fix misleading error message in bch_count_io_errors()") Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
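A sketch of the approach: copy the name once, while the gendisk is known to be alive, and print the copy later. struct demo_cached_dev is illustrative; the real patch adds similar char arrays to struct cache and struct cached_dev.

#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/printk.h>

struct demo_cached_dev {
	struct block_device *bdev;
	char backing_dev_name[BDEVNAME_SIZE];	/* filled once at register time */
};

static void demo_register_bdev(struct demo_cached_dev *dc)
{
	/* bdev->bd_disk is guaranteed valid here, so cache the name now. */
	bdevname(dc->bdev, dc->backing_dev_name);
}

static void demo_report_offline(struct demo_cached_dev *dc)
{
	/* Error paths print the cached copy instead of touching bd_disk,
	 * which may already be gone for an offline/unplugged device. */
	pr_err("%s: backing device offline\n", dc->backing_dev_name);
}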
2018-04-20  Merge tag 'md/4.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md  [Linus Torvalds]  2 files, -7/+24
Pull MD fixes from Shaohua Li: "Three small fixes for MD:
- md-cluster fix for faulty device from Guoqing
- writehint fix for writebehind IO for raid1 from Mariusz
- a live lock fix for interrupted recovery from Yufen"
* tag 'md/4.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md:
  raid1: copy write hint from master bio to behind bio
  md/raid1: exit sync request if MD_RECOVERY_INTR is set
  md-cluster: don't update recovery_offset for faulty device
2018-04-10  Merge tag 'libnvdimm-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm  [Linus Torvalds]  5 files, -50/+69
Pull libnvdimm updates from Dan Williams: "This cycle was not something I ever want to repeat as there were several late changes that have only now just settled. Half of the branch up to commit d2c997c0f145 ("fs, dax: use page->mapping to warn...") has been in -next for several releases. The of_pmem driver and the address range scrub rework were late arrivals, and the dax work was scaled back at the last moment. The of_pmem driver missed a previous merge window due to an oversight. A sense of obligation to rectify that miss is why it is included for 4.17. It has acks from PowerPC folks. Stephen reported a build failure that only occurs when merging it with your latest tree; for now I have fixed that up by disabling modular builds of of_pmem. A test merge with your tree has received a build success report from the 0day robot over 156 configs. An initial version of the ARS rework was submitted before the merge window. It is self-contained to libnvdimm, a net code reduction, and passing all unit tests. The filesystem-dax changes are based on the wait_var_event() functionality from tip/sched/core. However, late review feedback showed that those changes regressed truncate performance to a large degree. The branch was rewound to drop the truncate behavior change and now only includes preparation patches and cleanups (with full acks and reviews). The finalization of this dax-dma-vs-truncate work will need to wait for 4.18. Summary:
- A rework of the filesystem-dax implementation provides for detection of unmap operations (truncate / hole punch) colliding with in-progress device-DMA. A fix for these collisions remains a work-in-progress pending resolution of truncate latency and starvation regressions.
- The of_pmem driver expands the users of libnvdimm outside of x86 and ACPI to describe an implementation of persistent memory on PowerPC with Open Firmware / Device tree.
- Address Range Scrub (ARS) handling is completely rewritten to account for the fact that ARS may run for 100s of seconds and there is no platform defined way to cancel it. ARS will now no longer block namespace initialization.
- The NVDIMM Namespace Label implementation is updated to handle label areas as small as 1K, down from 128K.
- Miscellaneous cleanups and updates to unit test infrastructure"
* tag 'libnvdimm-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (39 commits)
  libnvdimm, of_pmem: workaround OF_NUMA=n build error
  nfit, address-range-scrub: add module option to skip initial ars
  nfit, address-range-scrub: rework and simplify ARS state machine
  nfit, address-range-scrub: determine one platform max_ars value
  powerpc/powernv: Create platform devs for nvdimm buses
  doc/devicetree: Persistent memory region bindings
  libnvdimm: Add device-tree based driver
  libnvdimm: Add of_node to region and bus descriptors
  libnvdimm, region: quiet region probe
  libnvdimm, namespace: use a safe lookup for dimm device name
  libnvdimm, dimm: fix dpa reservation vs uninitialized label area
  libnvdimm, testing: update the default smart ctrl_temperature
  libnvdimm, testing: Add emulation for smart injection commands
  nfit, address-range-scrub: introduce nfit_spa->ars_state
  libnvdimm: add an api to cast a 'struct nd_region' to its 'struct device'
  nfit, address-range-scrub: fix scrub in-progress reporting
  dax, dm: allow device-mapper to operate without dax support
  dax: introduce CONFIG_DAX_DRIVER
  fs, dax: use page->mapping to warn if truncate collides with a busy page
  ext2, dax: introduce ext2_dax_aops
  ...
2018-04-09  Merge branch 'for-4.17/dax' into libnvdimm-for-next  [Dan Williams]  5 files, -50/+69
2018-04-09  raid1: copy write hint from master bio to behind bio  [Mariusz Dabrowski]  1 file, -0/+2
Signed-off-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com> Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Reviewed-by: Pawel Baldysiak <pawel.baldysiak@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
2018-04-09  md/raid1: exit sync request if MD_RECOVERY_INTR is set  [Yufen Yu]  1 file, -5/+18
We met a sync thread stuck as follows:
raid1_sync_request+0x2c9/0xb50
md_do_sync+0x983/0xfa0
md_thread+0x11c/0x160
kthread+0x111/0x130
ret_from_fork+0x35/0x40
0xffffffffffffffff
At the same time, there is a stuck mdadm thread (mdadm --manage /dev/md2 --add /dev/sda). It is trying to stop the sync thread:
kthread_stop+0x42/0xf0
md_unregister_thread+0x3a/0x70
md_reap_sync_thread+0x15/0x160
action_store+0x142/0x2a0
md_attr_store+0x6c/0xb0
kernfs_fop_write+0x102/0x180
__vfs_write+0x33/0x170
vfs_write+0xad/0x1a0
SyS_write+0x52/0xc0
do_syscall_64+0x6e/0x190
entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Debug tools show that the sync thread is waiting in raise_barrier() until raid1d() ends all normal I/O bios in bio_end_io_list (introduced in commit 55ce74d4bfe1). But raid1d() cannot end these bios if the MD_CHANGE_PENDING bit is set. It needs to take the mddev->reconfig_mutex lock and then clear the bit in md_check_recovery(). However, the lock is held by mdadm in action_store(). Thus, there is a loop: mdadm waits for the sync thread to stop, the sync thread waits for raid1d() to end bios, and raid1d() waits for mdadm to release the mddev->reconfig_mutex lock before it can end the bios. Fix this by checking MD_RECOVERY_INTR while waiting in raise_barrier(), so that the sync thread can exit while mdadm is stopping it. Fixes: 55ce74d4bfe1 ("md/raid1: ensure device failure recorded before write request returns.") Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: Yufen Yu <yuyufen@huawei.com> Signed-off-by: Shaohua Li <shli@fb.com>
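Very roughly, the shape of the fix is to make the barrier wait also give up when recovery is being interrupted; the helper below illustrates that idea with made-up parameters, not md's actual raise_barrier().

#include <linux/wait.h>
#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/errno.h>

static int demo_raise_barrier(wait_queue_head_t *wq, atomic_t *pending,
			      unsigned long *recovery, int intr_bit)
{
	/* Wake up either when the barrier can be raised or when the sync
	 * thread is being interrupted (MD_RECOVERY_INTR in the real code). */
	wait_event(*wq, atomic_read(pending) == 0 ||
			test_bit(intr_bit, recovery));

	if (test_bit(intr_bit, recovery))
		return -EINTR;	/* caller aborts the sync request instead of looping */

	return 0;
}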
2018-04-09  md-cluster: don't update recovery_offset for faulty device  [Guoqing Jiang]  1 file, -2/+4
A device could become faulty while the clustered array is handling a METADATA_UPDATED msg, so we don't need to call read_rdev for this device. Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
2018-04-06  Merge tag 'for-4.17/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm  [Linus Torvalds]  24 files, -451/+441
Pull device mapper updates from Mike Snitzer:
- DM core passthrough ioctl fix to retain reference to DM table, and that table's block devices, while issuing the ioctl to one of those block devices.
- DM core passthrough ioctl fix to _not_ override the fmode_t used to issue the ioctl. Overriding by using the fmode_t that the block device was originally open with during DM table load is a liability.
- Add DM core support for secure erase forwarding and update the DM linear and DM striped targets to support them.
- A DM core 4.16 stable fix to allow abnormal IO (e.g. discard, write same, write zeroes) for targets that make use of the non-splitting IO variant (as is done for multipath or thinp when layered directly on NVMe).
- Allow DM targets to return a payload in response to a DM message that they are sent. This is useful for DM targets that would like to provide statistics data in response to DM messages.
- Update DM bufio to support non-power-of-2 block sizes. Numerous other related changes prepare the DM bufio code for this support.
- Fix DM crypt to use a bounded amount of memory across the entire system. This is to avoid OOM that can otherwise occur in response to certain pathological IO workloads (e.g. discarding a large DM crypt device).
- Add a 'check_at_most_once' feature to the DM verity target to allow verity to be used on mobile devices that have very limited resources.
- Fix the DM integrity target to fail early if a keyed algorithm (e.g. HMAC) is to be used but the key isn't set.
- Add non-power-of-2 support to the DM unstripe target.
- Eliminate the use of a Variable Length Array in the DM stripe target.
- Update the DM log-writes target to record metadata (REQ_META flag).
- DM raid fixes for its nosync status and some variable range issues.
* tag 'for-4.17/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (28 commits)
  dm: remove fmode_t argument from .prepare_ioctl hook
  dm: hold DM table for duration of ioctl rather than use blkdev_get
  dm raid: fix parse_raid_params() variable range issue
  dm verity: make verity_for_io_block static
  dm verity: add 'check_at_most_once' option to only validate hashes once
  dm bufio: don't embed a bio in the dm_buffer structure
  dm bufio: support non-power-of-two block sizes
  dm bufio: use slab cache for dm_buffer structure allocations
  dm bufio: reorder fields in dm_buffer structure
  dm bufio: relax alignment constraint on slab cache
  dm bufio: remove code that merges slab caches
  dm bufio: get rid of slab cache name allocations
  dm bufio: move dm-bufio.h to include/linux/
  dm bufio: delete outdated comment
  dm: add support for secure erase forwarding
  dm: backfill abnormal IO support to non-splitting IO submission
  dm raid: fix nosync status
  dm mpath: use DM_MAPIO_SUBMITTED instead of magic number 0 in process_queued_bios()
  dm stripe: get rid of a Variable Length Array (VLA)
  dm log writes: record metadata flag for better flags record
  ...
2018-04-06  Merge tag 'for-4.17/block-20180402' of git://git.kernel.dk/linux-block  [Linus Torvalds]  26 files, -155/+582
Pull block layer updates from Jens Axboe: "It's a pretty quiet round this time, which is nice. This contains:
- series from Bart, cleaning up the way we set/test/clear atomic queue flags.
- series from Bart, fixing races between gendisk and queue registration and removal.
- set of bcache fixes and improvements from various folks, by way of Michael Lyle.
- set of lightnvm updates from Matias, most of it being the 1.2 to 2.0 transition.
- removal of unused DIO flags from Nikolay.
- blk-mq/sbitmap memory ordering fixes from Omar.
- divide-by-zero fix for BFQ from Paolo.
- minor documentation patches from Randy.
- timeout fix from Tejun.
- Alpha "can't write a char atomically" fix from Mikulas.
- set of NVMe fixes by way of Keith.
- bsg and bsg-lib improvements from Christoph.
- a few sed-opal fixes from Jonas.
- cdrom check-disk-change deadlock fix from Maurizio.
- various little fixes, comment fixes, etc from various folks"
* tag 'for-4.17/block-20180402' of git://git.kernel.dk/linux-block: (139 commits)
  blk-mq: Directly schedule q->timeout_work when aborting a request
  blktrace: fix comment in blktrace_api.h
  lightnvm: remove function name in strings
  lightnvm: pblk: remove some unnecessary NULL checks
  lightnvm: pblk: don't recover unwritten lines
  lightnvm: pblk: implement 2.0 support
  lightnvm: pblk: implement get log report chunk
  lightnvm: pblk: rename ppaf* to addrf*
  lightnvm: pblk: check for supported version
  lightnvm: implement get log report chunk helpers
  lightnvm: make address conversions depend on generic device
  lightnvm: add support for 2.0 address format
  lightnvm: normalize geometry nomenclature
  lightnvm: complete geo structure with maxoc*
  lightnvm: add shorten OCSSD version in geo
  lightnvm: add minor version to generic geometry
  lightnvm: simplify geometry structure
  lightnvm: pblk: refactor init/exit sequences
  lightnvm: Avoid validation of default op value
  lightnvm: centralize permission check for lightnvm ioctl
  ...
2018-04-04  dm: remove fmode_t argument from .prepare_ioctl hook  [Mike Snitzer]  8 files, -25/+14
Use the fmode_t that is passed to dm_blk_ioctl() rather than inconsistently (varies across targets) drop it on the floor by overriding it with the fmode_t stored in 'struct dm_dev'. All the persistent reservation functions weren't using the fmode_t they got back from .prepare_ioctl so remove them. Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-04  dm: hold DM table for duration of ioctl rather than use blkdev_get  [Mike Snitzer]  1 file, -53/+44
Commit 519049afead ("dm: use blkdev_get rather than bdgrab when issuing pass-through ioctl") inadvertently introduced a regression relative to users of device cgroups that issue ioctls (e.g. libvirt). Using blkdev_get() in DM's passthrough ioctl support implicitly introduced a cgroup permissions check that would fail unless care were taken to add all devices in the IO stack to the device cgroup. E.g. rather than just adding the top-level DM multipath device to the cgroup, all the underlying devices would need to be allowed. Fix this, to no longer require allowing all underlying devices, by simply holding the live DM table (which includes the table's original blkdev_get() reference on the block device that the ioctl will be issued to) for the duration of the ioctl. Also, bump the DM ioctl version so a user can know that their device cgroup allow workaround is no longer needed. Reported-by: Michal Privoznik <mprivozn@redhat.com> Suggested-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: 519049afead ("dm: use blkdev_get rather than bdgrab when issuing pass-through ioctl") Cc: stable@vger.kernel.org # 4.16 Signed-off-by: Mike Snitzer <snitzer@redhat.com>
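A sketch of the resulting ioctl shape, using the exported dm_get_live_table()/dm_put_live_table() pair; the target lookup and the actual ioctl call are elided, and the function below is illustrative rather than dm.c's real code.

#include <linux/device-mapper.h>
#include <linux/errno.h>

static int demo_dm_ioctl(struct mapped_device *md, unsigned int cmd,
			 unsigned long arg)
{
	struct dm_table *table;
	int srcu_idx, r = -ENOTTY;

	table = dm_get_live_table(md, &srcu_idx);	/* pins the live table (SRCU) */
	if (table) {
		/*
		 * ... find the single target, grab its underlying bdev, and
		 * issue the ioctl to it here.  Because the table is held for
		 * the whole call, the target's original blkdev_get() reference
		 * keeps the device open -- no extra blkdev_get(), and so no
		 * extra device-cgroup permission check.
		 */
		r = 0;
	}
	dm_put_live_table(md, srcu_idx);
	return r;
}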
2018-04-04  dm raid: fix parse_raid_params() variable range issue  [Heinz Mauelshagen]  1 file, -8/+19
parse_raid_params() compares variable "int value" with INT_MAX. E.g. related Coverity report excerpt: CID 1364818 (#2 of 3): Operands don't affect result (CONSTANT_EXPRESSION_RESULT) [select issue] 1433 if (value > INT_MAX) { Fix by changing checks to avoid INT_MAX. Whilst on it, avoid unnecessary checks against constants and add check for sane recovery speed min/max. Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-04  dm verity: make verity_for_io_block static  [Wei Yongjun]  1 file, -2/+2
Fixes the following sparse warning: drivers/md/dm-verity-target.c:375:6: warning: symbol 'verity_for_io_block' was not declared. Should it be static? Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm verity: add 'check_at_most_once' option to only validate hashes once  [Patrik Torstensson]  2 files, -5/+60
This allows platforms that are CPU/memory constrained to verify data blocks only the first time they are read from the data device, rather than every time. As such, it provides a reduced level of security because only offline tampering of the data device's content will be detected, not online tampering. Hash blocks are still verified each time they are read from the hash device, since verification of hash blocks is less performance critical than data blocks, and a hash block will not be verified any more after all the data blocks it covers have been verified anyway. This option introduces a bitset that is used to check if a block has been validated before or not. A block can be validated more than once as there is no thread protection for the bitset. These changes were developed and tested on entry-level Android Go devices. Signed-off-by: Patrik Torstensson <totte@google.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
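A sketch of the bookkeeping the option adds: one bit per data block, consulted before hashing and set after a successful check. The names below are illustrative, not dm-verity's internals.

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>

struct demo_verity {
	sector_t data_blocks;
	unsigned long *validated;	/* one bit per data block */
};

static int demo_verity_alloc_bitset(struct demo_verity *v)
{
	v->validated = vzalloc(BITS_TO_LONGS(v->data_blocks) *
			       sizeof(unsigned long));
	return v->validated ? 0 : -ENOMEM;
}

static bool demo_block_already_validated(struct demo_verity *v, sector_t blk)
{
	return test_bit(blk, v->validated);
}

static void demo_block_mark_validated(struct demo_verity *v, sector_t blk)
{
	/* Called after the hash check passed.  There is no locking here, so
	 * two concurrent readers may both verify the same block once -- the
	 * harmless race the commit message mentions. */
	set_bit(blk, v->validated);
}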
2018-04-03  dm bufio: don't embed a bio in the dm_buffer structure  [Mikulas Patocka]  1 file, -60/+45
The bio structure consumes a substantial part of dm_buffer. The bio structure is only needed when doing I/O on the buffer, thus we don't have to embed it in the buffer. Allocate the bio structure only when doing I/O. We don't need to create a bio_set because, in case of allocation failure, dm-bufio falls back to using dm-io (which keeps its own bio_set). Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: support non-power-of-two block sizes  [Mikulas Patocka]  1 file, -25/+39
Support block sizes that are not a power-of-two (but they must be a multiple of 512b). As always, a slab cache is used for allocations. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: use slab cache for dm_buffer structure allocations  [Mikulas Patocka]  1 file, -9/+19
kmalloc pads allocations to the next power of two; using a slab cache avoids this. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: reorder fields in dm_buffer structure  [Mikulas Patocka]  1 file, -5/+5
Reorder fields in dm_buffer structure to improve packing and reduce structure size. The compiler allocates 32-bit integer for field 'enum data_mode', so change it to unsigned char. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: relax alignment constraint on slab cache  [Mikulas Patocka]  1 file, -2/+2
The I/O buffer doesn't have to be aligned on block size granularity, relax alignment to ARCH_KMALLOC_MINALIGN (required to allow DMA from slab cache memory on some architectures). Also, set SLAB_RECLAIM_ACCOUNT so that the memory allocated from the cache is accounted as reclaimable and doesn't inflate the 'used' entry in the free command. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
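Taken together, the slab-cache changes above amount to creating a per-client cache roughly like this sketch (the cache name is illustrative):

#include <linux/slab.h>

static struct kmem_cache *demo_create_buffer_cache(unsigned int block_size)
{
	/*
	 * Alignment is only ARCH_KMALLOC_MINALIGN (enough for DMA from slab
	 * memory where an architecture requires it), not the block size
	 * itself, and SLAB_RECLAIM_ACCOUNT makes the allocations show up as
	 * reclaimable rather than plain 'used' memory.
	 */
	return kmem_cache_create("demo_bufio_cache", block_size,
				 ARCH_KMALLOC_MINALIGN,
				 SLAB_RECLAIM_ACCOUNT, NULL);
}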
2018-04-03  dm bufio: remove code that merges slab caches  [Mikulas Patocka]  1 file, -39/+14
All slab allocators can merge duplicate caches. So dm-bufio doesn't need extra slab merging logic. Instead it can just allocate one slab cache per client and let the allocator merge them. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: get rid of slab cache name allocations  [Mikulas Patocka]  1 file, -18/+3
dm-bufio keeps the dm_bufio_cache_names array that holds names of the slab caches. Since the commit db265eca7700 ("mm/sl[aou]b: Move duping of slab name to slab_common.c"), the kernel automatically duplicates the slab cache name when creating the slab cache, so we no longer have to keep the name allocated. Remove the code that allocates the slab names and keeps them around. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: move dm-bufio.h to include/linux/  [Mikulas Patocka]  6 files, -156/+8
Move dm-bufio.h to include/linux/ so that external GPL'd DM target modules can use it. It is better to allow the use of dm-bufio than force external modules to implement the equivalent buffered IO mechanism in some new way. The hope is this will encourage the use of dm-bufio; which will then make it easier for a GPL'd external DM target module to be included upstream. A couple dm-bufio EXPORT_SYMBOL exports have also been updated to use EXPORT_SYMBOL_GPL. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm bufio: delete outdated comment  [Mikulas Patocka]  1 file, -4/+0
This comment was true when dm-bufio was written but, since 4.3, bios can now have arbitrary size and the driver splits them. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm: add support for secure erase forwarding  [Denis Semakin]  4 files, -0/+46
Set QUEUE_FLAG_SECERASE in DM device's queue_flags if a DM table's data devices support secure erase. Also, add support for secure erase to both the linear and striped targets. Signed-off-by: Denis Semakin <d.semakin@omprussia.ru> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
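A sketch of what opting in looks like for a simple bio-based target, assuming the num_secure_erase_bios field this series adds to struct dm_target; the constructor below is illustrative, not the actual linear/striped code.

#include <linux/device-mapper.h>
#include <linux/blkdev.h>

static int demo_target_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	/* ... parse args, dm_get_device(), etc. ... */
	ti->num_discard_bios = 1;
	ti->num_secure_erase_bios = 1;	/* forward secure erase bios too */
	return 0;
}

/* DM core only keeps QUEUE_FLAG_SECERASE on the DM device if every
 * underlying data device advertises support: */
static bool demo_dev_supports_secure_erase(struct block_device *bdev)
{
	return blk_queue_secure_erase(bdev_get_queue(bdev));
}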
2018-04-03  dm: backfill abnormal IO support to non-splitting IO submission  [Mike Snitzer]  1 file, -7/+23
Otherwise, these abnormal IOs would be sent to the DM target regardless of whether the target advertised support for them. Factor out __process_abnormal_io() from __split_and_process_non_flush() so that discards, write same, etc may be conditionally processed. Fixes: 978e51ba3 ("dm: optimize bio-based NVMe IO submission") Cc: stable@vger.kernel.org # 4.16 Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm raid: fix nosync status  [Heinz Mauelshagen]  1 file, -1/+2
Fix a race for "nosync" activations providing "aa.." device health characters and "0/N" sync ratio rather than "AA..." and "N/N". Occurs when status for the raid set is retrieved during resume before the MD sync thread starts and clears the MD_RECOVERY_NEEDED flag. Cc: stable@vger.kernel.org # 4.16+ Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm mpath: use DM_MAPIO_SUBMITTED instead of magic number 0 in process_queued_bios()  [Wang Sheng-Hui]  1 file, -1/+1
Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm stripe: get rid of a Variable Length Array (VLA)  [Tycho Andersen]  1 file, -5/+5
Ideally, we'd like to get rid of all VLAs in the kernel and add -Wvla to the build args: https://lkml.org/lkml/2018/3/7/621 This one is a simple case, since we don't actually need the VLA at all: we can just iterate over the stripes twice, once to emit their names, and the second time to emit status (i.e. trade memory for time). Since the number of stripes is probably low, this is hopefully not that expensive. Signed-off-by: Tycho Andersen <tycho@tycho.ws> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
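A sketch of the two-pass emission using DMEMIT() from include/linux/device-mapper.h; struct demo_stripes stands in for dm-stripe's real state.

#include <linux/device-mapper.h>

struct demo_stripes {
	unsigned int count;
	const char **dev_name;		/* one name per stripe */
	const char *dev_health;		/* one 'A'/'D' character per stripe */
};

static void demo_stripe_status(struct demo_stripes *sc, char *result,
			       unsigned int maxlen)
{
	unsigned int sz = 0;	/* DMEMIT() expects 'sz', 'result' and 'maxlen' in scope */
	unsigned int i;

	for (i = 0; i < sc->count; i++)		/* first pass: device names */
		DMEMIT("%s ", sc->dev_name[i]);

	for (i = 0; i < sc->count; i++)		/* second pass: health characters */
		DMEMIT("%c", sc->dev_health[i]);
}

Writing straight into the status buffer twice trades a second loop for the on-stack VLA, which is cheap since the stripe count is small.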
2018-04-03  dm log writes: record metadata flag for better flags record  [Qu Wenruo]  1 file, -4/+8
So developers can distinguish data and metadata bios more easily. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm integrity: fail early if required HMAC key is not available  [Milan Broz]  1 file, -0/+3
Since crypto API commit 9fa68f62004 ("crypto: hash - prevent using keyed hashes without setting key"), dm-integrity cannot use keyed algorithms without the key being set. dm-integrity recognizes this too late (during use of HMAC), so it allows creation and formatting of the superblock, but the device is in fact unusable. Fix it by detecting the key requirement in the integrity table constructor. Signed-off-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm: remove unused macro DM_MOD_NAME_SIZE  [Wang Sheng-Hui]  1 file, -2/+0
Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm unstripe: remove unnecessary header includes  [Heinz Mauelshagen]  1 file, -6/+0
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm unstripe: remove superfluous module init error path message  [Heinz Mauelshagen]  1 file, -7/+1
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Reviewed-by: Scott Bauer <Scott.Bauer@intel.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2018-04-03  dm unstripe: add "dm-unstriped" module alias  [Heinz Mauelshagen]  1 file, -0/+1
Because this target's kernel module is named dm-unstripe.ko, lvm2's DM module autoload capability cannot load it: lvm2 looks for dm-unstriped.ko, since the target name is "unstriped". Add the "dm-unstriped" module alias to resolve this oversight. NOTE: this isn't needed for the "striped" target, despite its source file being named dm-stripe.c, because it is part of dm-mod.ko. Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
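The alias itself is a one-liner in the module source, e.g.:

#include <linux/module.h>

/* Let requests for "dm-unstriped" (what lvm2 asks for, derived from the
 * target name) load this module even though the file is dm-unstripe.ko. */
MODULE_ALIAS("dm-unstriped");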
2018-04-03  dm unstripe: support non-power-of-2 chunk size  [Heinz Mauelshagen]  1 file, -12/+10
Address "FIXME: must support non power of 2 chunk_size, dm-stripe.c does". Bump target version to indicate change. Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Tested-by: Scott Bauer <Scott.Bauer@intel.com> Reviewed-by: Scott Bauer <Scott.Bauer@intel.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
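For reference, the non-power-of-two mapping in dm-stripe.c that the FIXME points at boils down to sector_div() arithmetic; here is a condensed sketch with illustrative names, mirroring dm-stripe's logic rather than dm-unstripe's actual code.

#include <linux/types.h>
#include <linux/blkdev.h>

static sector_t demo_map_sector(sector_t sector, unsigned int chunk_sectors,
				unsigned int stripes, unsigned int *stripe)
{
	sector_t chunk = sector;
	sector_t offset_in_chunk = sector_div(chunk, chunk_sectors);

	/* chunk is now sector / chunk_sectors; the remainder picks the stripe: */
	*stripe = sector_div(chunk, stripes);

	/* offset of that chunk on the chosen stripe device: */
	return chunk * chunk_sectors + offset_in_chunk;
}

With a power-of-two chunk size the same result can be had with masks and shifts, which is why the old code insisted on it; sector_div() lifts that restriction at the cost of a division.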