author      Ming Lei <ming.lei@redhat.com>              2019-04-30 04:52:27 +0300
committer   Jens Axboe <axboe@kernel.dk>                2019-05-04 16:24:08 +0300
commit      2f8f1336a48bd5186de3476da0a3e2ec06d0533a (patch)
tree        1bc1ee8510442e16f00ba1cf82ebfb0dfb926017 /include/linux/blkdev.h
parent      7c6c5b7c9186e3fb5b10afb8e5f710ae661144c6 (diff)
download    linux-2f8f1336a48bd5186de3476da0a3e2ec06d0533a.tar.xz
blk-mq: always free hctx after request queue is freed
In the normal queue cleanup path, the hctx is released after the request queue
is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), an
hctx may be freed earlier because the number of hw queues is shrinking. This
easily causes a use-after-free: one implicit rule is that it is safe to call
almost all block layer APIs as long as the request queue is alive, so an hctx
may be retrieved through one such API and then freed by
blk_mq_update_nr_hw_queues(), triggering the use-after-free.

Fix this issue by always freeing the hctx after the request queue is released.
If some hctxs are removed in blk_mq_update_nr_hw_queues(), hold them on a new
per-queue list and try to reuse them later when the NUMA node matches.

Cc: Dongli Zhang <dongli.zhang@oracle.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: linux-scsi@vger.kernel.org
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James E. J. Bottomley <jejb@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
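The header hunk below only adds the list and the lock; the matching reuse logic
lives in blk-mq proper. As a rough C sketch of that allocation side (the helper
names blk_mq_alloc_hctx()/blk_mq_init_hctx() and the hctx_list member are
assumptions based on the description above, not shown in this diff), allocation
would first scan the per-queue list for a dead hctx on the requested NUMA node
before falling back to a fresh allocation:

/*
 * Sketch only: reuse a dead hctx whose NUMA node matches, otherwise
 * allocate a new one. The helper names and the hctx_list member are
 * assumed here, not taken from this hunk.
 */
static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
		struct blk_mq_tag_set *set, struct request_queue *q,
		int hctx_idx, int node)
{
	struct blk_mq_hw_ctx *hctx = NULL, *tmp;

	/* try to reuse a dead hctx sitting on the matching NUMA node */
	spin_lock(&q->unused_hctx_lock);
	list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) {
		if (tmp->numa_node == node) {
			hctx = tmp;
			break;
		}
	}
	if (hctx)
		list_del_init(&hctx->hctx_list);
	spin_unlock(&q->unused_hctx_lock);

	if (!hctx)
		hctx = blk_mq_alloc_hctx(q, set, node);
	if (!hctx)
		return NULL;

	if (blk_mq_init_hctx(q, set, hctx, hctx_idx)) {
		/* error handling (freeing the hctx) elided in this sketch */
		return NULL;
	}

	return hctx;
}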
Diffstat (limited to 'include/linux/blkdev.h')
-rw-r--r--   include/linux/blkdev.h   7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bd3e3f09bfa0..1aafeb923e7b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -535,6 +535,13 @@ struct request_queue {
 
 	struct mutex		sysfs_lock;
 
+	/*
+	 * for reusing dead hctx instance in case of updating
+	 * nr_hw_queues
+	 */
+	struct list_head	unused_hctx_list;
+	spinlock_t		unused_hctx_lock;
+
 	atomic_t		mq_freeze_depth;
 
 #if defined(CONFIG_BLK_DEV_BSG)
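For completeness, a minimal sketch of how the two new fields would be set up
and fed, assuming an hctx_list member on the hw ctx and illustrative placement
of the snippets (queue allocation and the hw-queue-shrink path); neither piece
is part of this header hunk:

/* Sketch: field setup at queue allocation time (illustrative only). */
INIT_LIST_HEAD(&q->unused_hctx_list);
spin_lock_init(&q->unused_hctx_lock);

/*
 * Sketch: when __blk_mq_update_nr_hw_queues() shrinks the hw queue
 * count, a retired hctx is parked on the per-queue list instead of
 * being freed immediately; the final free happens only after the
 * request queue itself is released (see blk_mq_release() above).
 */
spin_lock(&q->unused_hctx_lock);
list_add(&hctx->hctx_list, &q->unused_hctx_list);
spin_unlock(&q->unused_hctx_lock);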