| author | Sagi Grimberg <sagi@grimberg.me> | 2021-02-11 01:04:00 +0300 |
|---|---|---|
| committer | Christoph Hellwig <hch@lst.de> | 2021-02-11 10:04:51 +0300 |
| commit | e11e5116171dedeaf63735931e72ad5de0f30ed5 (patch) | |
| tree | 7c3ebd61de0d84d8b251672d682a34138544bc4e /drivers/nvme | |
| parent | 4bdf260362b3be529d170b04662638fd6dc52241 (diff) | |
| download | linux-e11e5116171dedeaf63735931e72ad5de0f30ed5.tar.xz | |
nvme-tcp: fix crash triggered with a dataless request submission
Write-zeroes has a bio, but does not have any data buffers associated
with it. Hence we should not initialize the request iter for it (which
attempts to reference bi_io_vec and crashes).
--
run blktests nvme/012 at 2021-02-05 21:53:34
BUG: kernel NULL pointer dereference, address: 0000000000000008
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
CPU: 15 PID: 12069 Comm: kworker/15:2H Tainted: G S I 5.11.0-rc6+ #1
Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.10.0 11/12/2020
Workqueue: kblockd blk_mq_run_work_fn
RIP: 0010:nvme_tcp_init_iter+0x7d/0xd0 [nvme_tcp]
RSP: 0018:ffffbd084447bd18 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffa0bba9f3ce80 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000002000000
RBP: ffffa0ba8ac6fec0 R08: 0000000002000000 R09: 0000000000000000
R10: 0000000002800809 R11: 0000000000000000 R12: 0000000000000000
R13: ffffa0bba9f3cf90 R14: 0000000000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffffa0c9ff9c0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000008 CR3: 00000001c9c6c005 CR4: 00000000007706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
nvme_tcp_queue_rq+0xef/0x330 [nvme_tcp]
blk_mq_dispatch_rq_list+0x11c/0x7c0
? blk_mq_flush_busy_ctxs+0xf6/0x110
__blk_mq_sched_dispatch_requests+0x12b/0x170
blk_mq_sched_dispatch_requests+0x30/0x60
__blk_mq_run_hw_queue+0x2b/0x60
process_one_work+0x1cb/0x360
? process_one_work+0x360/0x360
worker_thread+0x30/0x370
? process_one_work+0x360/0x360
kthread+0x116/0x130
? kthread_park+0x80/0x80
ret_from_fork+0x1f/0x30
--
Fixes: cb9b870fba3e ("nvme-tcp: fix wrong setting of request iov_iter")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme')
-rw-r--r-- | drivers/nvme/host/tcp.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 619b0d8f6e38..69f59d2c5799 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2271,7 +2271,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	req->data_len = blk_rq_nr_phys_segments(rq) ?
 				blk_rq_payload_bytes(rq) : 0;
 	req->curr_bio = rq->bio;
-	if (req->curr_bio)
+	if (req->curr_bio && req->data_len)
 		nvme_tcp_init_iter(req, rq_data_dir(rq));
 
 	if (rq_data_dir(rq) == WRITE &&