<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/drivers/nvme/target/loop.c, branch v4.17.1</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v4.17.1</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v4.17.1'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2018-05-03T15:37:50+00:00</updated>
<entry>
<title>nvmet: switch loopback target state to connecting when resetting</title>
<updated>2018-05-03T15:37:50+00:00</updated>
<author>
<name>Johannes Thumshirn</name>
<email>jthumshirn@suse.de</email>
</author>
<published>2018-05-03T15:00:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=8bfc3b4c6f9de815de4ab73784b9419348266a65'/>
<id>urn:sha1:8bfc3b4c6f9de815de4ab73784b9419348266a65</id>
<content type='text'>
After commit bb06ec31452f ("nvme: expand nvmf_check_if_ready checks")
resetting of the loopback nvme target failed as we forgot to switch
its state to NVME_CTRL_CONNECTING before we reconnect the admin
queues. Therefore the checks in nvmf_check_if_ready() take the
reject_io path and thus we couldn't send out an identify controller
command to reconnect.

Change the controller state to NVME_CTRL_CONNECTING after tearing down
the old connection and before re-establishing the connection.
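The ordering this patch establishes can be sketched as a minimal model
(illustrative only; the state names echo the kernel's NVME_CTRL_* values,
and the helper is hypothetical, not the actual driver code):

```python
# Simplified stand-ins for the controller states involved; this models
# the ordering of transitions, not the driver itself.
RESETTING, CONNECTING, LIVE = "RESETTING", "CONNECTING", "LIVE"

def reset_ctrl(trace):
    """Hypothetical reset sequence: switch to CONNECTING after tearing
    down the old connection and before reconnecting the admin queues."""
    trace.append(RESETTING)
    # ... tear down the old connection ...
    trace.append(CONNECTING)  # the transition this patch adds
    # ... reconnect admin queues; identify controller now passes
    # the nvmf_check_if_ready() checks ...
    trace.append(LIVE)
    return trace

print(reset_ctrl([]))  # RESETTING, then CONNECTING, then LIVE
```

Without the middle transition, the controller would still be in RESETTING
when the admin queues reconnect, and the checks would reject the io.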

Fixes: bb06ec31452f ("nvme: expand nvmf_check_if_ready checks")
Signed-off-by: Johannes Thumshirn &lt;jthumshirn@suse.de&gt;
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>nvme: expand nvmf_check_if_ready checks</title>
<updated>2018-04-12T15:58:27+00:00</updated>
<author>
<name>James Smart</name>
<email>jsmart2021@gmail.com</email>
</author>
<published>2018-04-12T15:16:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=bb06ec31452fb2da1594f88035c2ecea4e0652f4'/>
<id>urn:sha1:bb06ec31452fb2da1594f88035c2ecea4e0652f4</id>
<content type='text'>
The nvmf_check_if_ready() checks that were added are very simplistic.
As such, the routine allows a lot of cases to fail ios during windows
of reset or re-connection. In cases where no multipath options are
present, the error goes back to the caller - the filesystem or
application. Not good.

The common routine was rewritten and calling syntax slightly expanded
so that per-transport is_ready routines don't need to be present.
The transports now call the routine directly. The routine is now a
fabrics routine rather than an inline function.

The routine now looks at controller state to decide the action to
take. Some states mandate io failure. Others define the condition where
a command can be accepted.  When the decision is unclear, a generic
queue-or-reject check is made to look for failfast or multipath ios and
only fails the io if it is so marked. Otherwise, the io will be queued
and wait for the controller state to resolve.
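The decision flow described above can be sketched as a small model. The
state and flag names echo the kernel's, but this is an illustration of
the logic, not the actual nvmf_check_if_ready() implementation:

```python
# Illustrative model of the per-state io decision.
ACCEPT, REJECT, QUEUE = "accept", "reject", "queue"

def check_if_ready(state, queue_live, is_connect, failfast_or_mpath):
    if state == "LIVE":
        return ACCEPT                 # normal operation
    if state in ("DELETING", "DEAD"):
        return REJECT                 # these states mandate io failure
    # NEW / CONNECTING: the connect command itself must get through
    if is_connect and not queue_live:
        return ACCEPT
    # Unclear case: fail only failfast/multipath ios; queue the rest
    # until the controller state resolves.
    return REJECT if failfast_or_mpath else QUEUE
```

A multipath io in the CONNECTING window is rejected so a another path can
be tried, while a plain io is queued rather than failed back to the caller.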

Admin commands issued via ioctl share a live admin queue with commands
from the transport for controller init. The ioctls could be intermixed
with the initialization commands. It's possible for the ioctl cmd to
be issued prior to the controller being enabled. To block this, the
ioctl admin commands need to be distinguished from admin commands used
for controller init. Added a USERCMD nvme_req(req)-&gt;rq_flags bit to
reflect this division and set it on ioctl requests.  As the
nvmf_check_if_ready() routine is called prior to nvme_setup_cmd(),
ensure that commands allocated by the ioctl path (actually anything
in core.c) prep the nvme_req(req) before starting the io. This will
preserve the USERCMD flag during execution and/or retry.
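The flag handling amounts to the following sketch. The USERCMD bit name
follows the commit; the helpers and the dict standing in for the request
are hypothetical:

```python
# Sketch of marking ioctl-issued admin commands with a dedicated
# rq_flags bit so they can be distinguished from controller-init
# commands.
NVME_REQ_USERCMD = 1  # illustrative rq_flags bit

def init_req():
    # prep the request's flags before starting the io so they
    # survive execution and retry
    return {"rq_flags": 0}

def submit_user_cmd(req):
    req["rq_flags"] |= NVME_REQ_USERCMD  # set on ioctl requests
    return req

req = submit_user_cmd(init_req())
```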

Signed-off-by: James Smart &lt;james.smart@broadcom.com&gt;
Reviewed-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Reviewed-by: Johannes Thumshirn &lt;jthumshirn@suse.de&gt;
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>nvme-loop: fix kernel oops in case of unhandled command</title>
<updated>2018-04-12T15:58:27+00:00</updated>
<author>
<name>Ming Lei</name>
<email>ming.lei@redhat.com</email>
</author>
<published>2018-04-12T15:16:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=11d9ea6f2ca69237d35d6c55755beba3e006b106'/>
<id>urn:sha1:11d9ea6f2ca69237d35d6c55755beba3e006b106</id>
<content type='text'>
When nvmet_req_init() fails, __nvmet_req_complete() is called
to handle the target request via .queue_response(), so
nvme_loop_queue_response() shouldn't be called again for
handling the failure.

This patch fixes this case in the following way:

- move blk_mq_start_request() before nvmet_req_init(), so that
nvme_loop_queue_response() can properly complete this host
request

- don't call nvme_cleanup_cmd() which is done in nvme_loop_complete_rq()

- don't call nvme_loop_queue_response() which is done via
.queue_response()
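The reordering can be sketched as follows. The names echo the drivers'
functions, but the bodies are an illustrative model of the control flow,
not the actual loop driver:

```python
# Model of the fixed flow: start the request first, and on
# nvmet_req_init() failure let the .queue_response() path complete it,
# instead of completing the same request a second time.
responses = []

def queue_response():
    responses.append("completed")  # __nvmet_req_complete() path

def nvmet_req_init(valid_cmd):
    if not valid_cmd:
        queue_response()  # failure is reported via .queue_response()
        return False
    return True

def loop_queue_rq(valid_cmd):
    # blk_mq_start_request() comes first, so the completion path works
    if not nvmet_req_init(valid_cmd):
        return  # do NOT call queue_response() or cleanup again here
    # ... execute the command; completion arrives later ...

loop_queue_rq(False)  # unhandled command: completed exactly once
```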

Signed-off-by: Ming Lei &lt;ming.lei@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
[trimmed changelog]
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>nvmet: constify struct nvmet_fabrics_ops</title>
<updated>2018-03-26T14:53:43+00:00</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-03-20T19:41:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=e929f06d9eaab4dba14e730ef18aa85b76465db9'/>
<id>urn:sha1:e929f06d9eaab4dba14e730ef18aa85b76465db9</id>
<content type='text'>
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>nvmet-loop: use blk_rq_payload_bytes for sgl selection</title>
<updated>2018-02-22T08:45:34+00:00</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-02-22T15:24:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=796b0b8d8dea191d9f64e0be8ab58d8f3586bcde'/>
<id>urn:sha1:796b0b8d8dea191d9f64e0be8ab58d8f3586bcde</id>
<content type='text'>
blk_rq_bytes does the wrong thing for special payloads like discards and
might cause the driver to not set up a SGL.

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Reviewed-by: Johannes Thumshirn &lt;jthumshirn@suse.de&gt;
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
</content>
</entry>
<entry>
<title>nvme: host delete_work and reset_work on separate workqueues</title>
<updated>2018-01-15T16:09:30+00:00</updated>
<author>
<name>Roy Shterman</name>
<email>roys@lightbitslabs.com</email>
</author>
<published>2018-01-14T10:39:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b227c59b9b5b8ae52639c8980af853d2f654f90a'/>
<id>urn:sha1:b227c59b9b5b8ae52639c8980af853d2f654f90a</id>
<content type='text'>
We need to ensure that delete_work will be hosted on a different
workqueue than all the works we flush or cancel from it.
Otherwise we may hit a circular dependency warning [1].

Also, given that delete_work flushes reset_work, host reset_work
on nvme_reset_wq and delete_work on nvme_delete_wq. In addition,
fix the flushing in the individual drivers to flush nvme_delete_wq
when draining queued deletes.

[1]:
[  178.491942] =============================================
[  178.492718] [ INFO: possible recursive locking detected ]
[  178.493495] 4.9.0-rc4-c844263313a8-lb #3 Tainted: G           OE
[  178.494382] ---------------------------------------------
[  178.495160] kworker/5:1/135 is trying to acquire lock:
[  178.495894]  ("nvme-wq"){++++.+}, at: [&lt;ffffffffa70ac206&gt;] flush_work+0x1a6/0x2d0
[  178.497670] but task is already holding lock:
[  178.498499]  ("nvme-wq"){++++.+}, at: [&lt;ffffffffa70ad6c2&gt;] process_one_work+0x162/0x6a0
[  178.500343] other info that might help us debug this:
[  178.501269]  Possible unsafe locking scenario:

[  178.502113]        CPU0
[  178.502472]        ----
[  178.502829]   lock("nvme-wq");
[  178.503716]   lock("nvme-wq");
[  178.504601]                 *** DEADLOCK ***

[  178.505441]  May be due to missing lock nesting notation

[  178.506453] 2 locks held by kworker/5:1/135:
[  178.507068]  #0: ("nvme-wq"){++++.+}, at: [&lt;ffffffffa70ad6c2&gt;] process_one_work+0x162/0x6a0
[  178.509004]  #1: ((&amp;ctrl-&gt;delete_work)){+.+.+.}, at: [&lt;ffffffffa70ad6c2&gt;] process_one_work+0x162/0x6a0
[  178.511070] stack backtrace:
[  178.511693] CPU: 5 PID: 135 Comm: kworker/5:1 Tainted: G           OE   4.9.0-rc4-c844263313a8-lb #3
[  178.512974] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.1-1ubuntu1 04/01/2014
[  178.514247] Workqueue: nvme-wq nvme_del_ctrl_work [nvme_tcp]
[  178.515071]  ffffc2668175bae0 ffffffffa7450823 ffffffffa88abd80 ffffffffa88abd80
[  178.516195]  ffffc2668175bb98 ffffffffa70eb012 ffffffffa8d8d90d ffff9c472e9ea700
[  178.517318]  ffff9c472e9ea700 ffff9c4700000000 ffff9c4700007200 ab83be61bec0d50e
[  178.518443] Call Trace:
[  178.518807]  [&lt;ffffffffa7450823&gt;] dump_stack+0x85/0xc2
[  178.519542]  [&lt;ffffffffa70eb012&gt;] __lock_acquire+0x17d2/0x18f0
[  178.520377]  [&lt;ffffffffa75839a7&gt;] ? serial8250_console_putchar+0x27/0x30
[  178.521330]  [&lt;ffffffffa7583980&gt;] ? wait_for_xmitr+0xa0/0xa0
[  178.522174]  [&lt;ffffffffa70ac1eb&gt;] ? flush_work+0x18b/0x2d0
[  178.522975]  [&lt;ffffffffa70eb7cb&gt;] lock_acquire+0x11b/0x220
[  178.523753]  [&lt;ffffffffa70ac206&gt;] ? flush_work+0x1a6/0x2d0
[  178.524535]  [&lt;ffffffffa70ac229&gt;] flush_work+0x1c9/0x2d0
[  178.525291]  [&lt;ffffffffa70ac206&gt;] ? flush_work+0x1a6/0x2d0
[  178.526077]  [&lt;ffffffffa70a9cf0&gt;] ? flush_workqueue_prep_pwqs+0x220/0x220
[  178.527040]  [&lt;ffffffffa70ae7cf&gt;] __cancel_work_timer+0x10f/0x1d0
[  178.527907]  [&lt;ffffffffa70fecb9&gt;] ? vprintk_default+0x29/0x40
[  178.528726]  [&lt;ffffffffa71cb507&gt;] ? printk+0x48/0x50
[  178.529434]  [&lt;ffffffffa70ae8c3&gt;] cancel_delayed_work_sync+0x13/0x20
[  178.530381]  [&lt;ffffffffc042100b&gt;] nvme_stop_ctrl+0x5b/0x70 [nvme_core]
[  178.531314]  [&lt;ffffffffc0403dcc&gt;] nvme_del_ctrl_work+0x2c/0x50 [nvme_tcp]
[  178.532271]  [&lt;ffffffffa70ad741&gt;] process_one_work+0x1e1/0x6a0
[  178.533101]  [&lt;ffffffffa70ad6c2&gt;] ? process_one_work+0x162/0x6a0
[  178.533954]  [&lt;ffffffffa70adc4e&gt;] worker_thread+0x4e/0x490
[  178.534735]  [&lt;ffffffffa70adc00&gt;] ? process_one_work+0x6a0/0x6a0
[  178.535588]  [&lt;ffffffffa70adc00&gt;] ? process_one_work+0x6a0/0x6a0
[  178.536441]  [&lt;ffffffffa70b48cf&gt;] kthread+0xff/0x120
[  178.537149]  [&lt;ffffffffa70b47d0&gt;] ? kthread_park+0x60/0x60
[  178.538094]  [&lt;ffffffffa70b47d0&gt;] ? kthread_park+0x60/0x60
[  178.538900]  [&lt;ffffffffa78e332a&gt;] ret_from_fork+0x2a/0x40

Signed-off-by: Roy Shterman &lt;roys@lightbitslabs.com&gt;
Signed-off-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>nvme-fabrics: protect against module unload during create_ctrl</title>
<updated>2018-01-08T10:01:56+00:00</updated>
<author>
<name>Roy Shterman</name>
<email>roys@lightbitslabs.com</email>
</author>
<published>2017-12-25T12:18:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=0de5cd367c6aa2a31a1c931628f778f79f8ef22e'/>
<id>urn:sha1:0de5cd367c6aa2a31a1c931628f778f79f8ef22e</id>
<content type='text'>
NVMe transport driver module unload may (and usually does) trigger
iteration over the active controllers and delete them all (sometimes
under a mutex).  However, a controller can be created concurrently with
module unload, which can lead to leakage of resources (most importantly
char device node leakage) in case the controller creation occurred after
the unload's delete and drain sequence.  To protect against this, we take
a module reference to guarantee that the nvme transport driver is not
unloaded while creating a controller.
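The guard can be modeled as below. try_module_get() and module_put() are
the real kernel APIs the commit refers to; here a plain counter and flag
stand in for them, and the surrounding code is a simplified model, not
the fabrics code:

```python
# Simplified model: controller creation takes a reference on the
# transport module so a concurrent unload cannot proceed underneath it.
class Transport:
    def __init__(self):
        self.refcount = 0
        self.unloading = False

def try_module_get(t):
    if t.unloading:
        return False      # unload already started: refuse create_ctrl
    t.refcount += 1
    return True

def module_put(t):
    t.refcount -= 1

def create_ctrl(t):
    if not try_module_get(t):
        return False
    # ... allocate controller, char device node, etc. ...
    module_put(t)
    return True
```

A create attempted after unload has begun fails cleanly instead of
racing the delete-and-drain sequence and leaking the char device node.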

Signed-off-by: Roy Shterman &lt;roys@lightbitslabs.com&gt;
Signed-off-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Reviewed-by: Max Gurtovoy &lt;maxg@mellanox.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>nvme-loop: check if queue is ready in queue_rq</title>
<updated>2017-11-20T07:28:36+00:00</updated>
<author>
<name>Sagi Grimberg</name>
<email>sagi@grimberg.me</email>
</author>
<published>2017-10-24T12:25:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=9d7fab04b95e8c26014a9bfc1c943b8360b44c17'/>
<id>urn:sha1:9d7fab04b95e8c26014a9bfc1c943b8360b44c17</id>
<content type='text'>
In case the queue is not LIVE (fully functional and connected at the nvmf
level), we cannot allow any commands other than connect to pass through.

Add a new queue state flag NVME_LOOP_Q_LIVE which is set after nvmf connect
and cleared in queue teardown.
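The gate in queue_rq amounts to the following sketch. The flag name
follows the commit; everything else is an illustrative model of the
check, not the loop driver:

```python
# Only connect commands may pass until the queue carries the LIVE flag.
Q_LIVE = "NVME_LOOP_Q_LIVE"

def queue_rq_allowed(queue_flags, is_connect):
    if Q_LIVE in queue_flags:  # set after nvmf connect, cleared on teardown
        return True
    return is_connect          # otherwise only connect may pass

flags = set()
assert not queue_rq_allowed(flags, is_connect=False)  # regular io blocked
assert queue_rq_allowed(flags, is_connect=True)       # connect allowed
flags.add(Q_LIVE)
assert queue_rq_allowed(flags, is_connect=False)      # now everything passes
```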

Signed-off-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>nvmet: better data length validation</title>
<updated>2017-11-11T02:53:25+00:00</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2017-11-09T13:29:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=5e62d5c993e6889cd314d5b5de6b670152109a0e'/>
<id>urn:sha1:5e62d5c993e6889cd314d5b5de6b670152109a0e</id>
<content type='text'>
Currently the NVMe target stores the expected data length in req-&gt;data_len
and uses that for data transfer decisions, but that does not take the
actual transfer length in the SGLs into account.  So this adds a new
transfer_len field, into which the transport drivers store the actual
transfer length.  We then check that the two match before actually
executing the command.

The FC transport driver already had such a field, which is removed in
favour of the common one.
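The validation reduces to the sketch below. The field names follow the
commit; the dict is a simplified stand-in for the target request
structure, not the real struct:

```python
# The transport stores the actual SGL transfer length; execution is
# refused when it does not match the length the command implies.
def check_transfer_len(req):
    return req["data_len"] == req["transfer_len"]  # must match to execute

ok  = {"data_len": 4096, "transfer_len": 4096}
bad = {"data_len": 4096, "transfer_len": 512}  # SGLs shorter than expected
assert check_transfer_len(ok)
assert not check_transfer_len(bad)
```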

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>nvme: remove handling of multiple AEN requests</title>
<updated>2017-11-11T02:53:25+00:00</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2017-11-07T22:13:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=ad22c355b707a8d8d48e282aadc01c0b0604b2e9'/>
<id>urn:sha1:ad22c355b707a8d8d48e282aadc01c0b0604b2e9</id>
<content type='text'>
The driver can handle tracking only one AEN request, so this patch
removes handling for multiple ones.

Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: James Smart &lt;james.smart@broadcom.com&gt;
Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
</feed>
