<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/block, branch v5.2.13</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v5.2.13</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v5.2.13'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2019-08-29T06:30:18+00:00</updated>
<entry>
<title>block, bfq: handle NULL return value by bfq_init_rq()</title>
<updated>2019-08-29T06:30:18+00:00</updated>
<author>
<name>Paolo Valente</name>
<email>paolo.valente@linaro.org</email>
</author>
<published>2019-08-07T17:21:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=371879acb44ece108ad1efbbbbf61fc75a08eee9'/>
<id>urn:sha1:371879acb44ece108ad1efbbbbf61fc75a08eee9</id>
<content type='text'>
[ Upstream commit fd03177c33b287c6541f4048f1d67b7b45a1abc9 ]

As reported in [1], the call bfq_init_rq(rq) may return NULL in case
of OOM (in particular, if rq-&gt;elv.icq is NULL because memory
allocation failed in ioc_create_icq()).

This commit handles this circumstance.

[1] https://lkml.org/lkml/2019/7/22/824
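
As a rough sketch of the guard this implies (not the literal upstream
diff; the surrounding insert-path code is elided):

        bfqq = bfq_init_rq(rq);
        if (!bfqq) {
                /* OOM: rq-&gt;elv.icq could not be allocated, so there is
                 * no bfq_queue to attach the request to; dispatch it
                 * without bfq bookkeeping instead of dereferencing NULL. */
                ...
        }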

Cc: Hsin-Yi Wang &lt;hsinyi@google.com&gt;
Cc: Nicolas Boichat &lt;drinkcat@chromium.org&gt;
Cc: Doug Anderson &lt;dianders@chromium.org&gt;
Reported-by: Guenter Roeck &lt;linux@roeck-us.net&gt;
Reported-by: Hsin-Yi Wang &lt;hsinyi@google.com&gt;
Reviewed-by: Guenter Roeck &lt;linux@roeck-us.net&gt;
Signed-off-by: Paolo Valente &lt;paolo.valente@linaro.org&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>blk-mq: move cancel of requeue_work to the front of blk_exit_queue</title>
<updated>2019-08-25T14:10:24+00:00</updated>
<author>
<name>zhengbin</name>
<email>zhengbin13@huawei.com</email>
</author>
<published>2019-08-12T12:36:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=63e2c0200e4f3d9dbbdefcd32f07f82518a41fda'/>
<id>urn:sha1:63e2c0200e4f3d9dbbdefcd32f07f82518a41fda</id>
<content type='text'>
commit e26cc08265dda37d2acc8394604f220ef412299d upstream.

blk_exit_queue will free elevator_data, while blk_mq_requeue_work
will access it. Move cancel of requeue_work to the front of
blk_exit_queue to avoid use-after-free.

blk_exit_queue                blk_mq_requeue_work
  __elevator_exit               blk_mq_run_hw_queues
    blk_mq_exit_sched             blk_mq_run_hw_queue
      dd_exit_queue                 blk_mq_hctx_has_pending
        kfree(elevator_data)          blk_mq_sched_has_work
                                        dd_has_work
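
As a rough sketch, the ordering the fix establishes in the queue
release path (assuming __blk_release_queue() as the call site) looks
like this:

        /* cancel requeue_work before blk_exit_queue() frees the
         * elevator_data it may still touch */
        if (queue_is_mq(q))
                cancel_delayed_work_sync(&amp;q-&gt;requeue_work);

        blk_exit_queue(q);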

Fixes: fbc2a15e3433 ("blk-mq: move cancel of requeue_work into blk_mq_release")
Cc: stable@vger.kernel.org
Reviewed-by: Ming Lei &lt;ming.lei@redhat.com&gt;
Signed-off-by: zhengbin &lt;zhengbin13@huawei.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>rq-qos: use a mb for got_token</title>
<updated>2019-08-16T08:11:00+00:00</updated>
<author>
<name>Josef Bacik</name>
<email>josef@toxicpanda.com</email>
</author>
<published>2019-07-16T20:19:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=ae3afb0ab0b6650644f7d0487d01d62322e4ac58'/>
<id>urn:sha1:ae3afb0ab0b6650644f7d0487d01d62322e4ac58</id>
<content type='text'>
[ Upstream commit ac38297f7038cd5b80d66f8809c7bbf5b70031f3 ]

Oleg noticed that our checking of data.got_token is unsafe in the
cleanup case and should really use a memory barrier.  Use a wmb() on the
write side and an rmb() on the read side.  We don't need one in the main
loop since we're saved by set_current_state().
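
A sketch of the barrier pairing, with the waker assumed to be
rq_qos_wake_function() and the read side the cleanup path in
rq_qos_wait() (approximate, not the literal diff):

        /* waker side (rq_qos_wake_function(), which holds a pointer to
         * the sleeper's on-stack data) */
        data-&gt;got_token = true;
        smp_wmb();              /* publish got_token before the wakeup */
        wake_up_process(data-&gt;task);

        /* read side: cleanup path in rq_qos_wait(), after finish_wait() */
        smp_rmb();              /* pairs with the waker's smp_wmb() */
        if (data.got_token)
                ...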

Reviewed-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Signed-off-by: Josef Bacik &lt;josef@toxicpanda.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>rq-qos: set ourself TASK_UNINTERRUPTIBLE after we schedule</title>
<updated>2019-08-16T08:11:00+00:00</updated>
<author>
<name>Josef Bacik</name>
<email>josef@toxicpanda.com</email>
</author>
<published>2019-07-16T20:19:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=32d1d7051c67ecd8f070c45785efab0322380db3'/>
<id>urn:sha1:32d1d7051c67ecd8f070c45785efab0322380db3</id>
<content type='text'>
[ Upstream commit d14a9b389a86a5154b704bc88ce8dd37c701456a ]

In case we get a spurious wakeup, we need to make sure to re-set
ourselves to TASK_UNINTERRUPTIBLE so we don't busy-wait.
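
Roughly, in the rq_qos_wait() wait loop (a sketch, not the literal
diff):

        do {
                if (data.got_token)
                        break;
                ...
                io_schedule();
                /* a spurious wakeup leaves us in TASK_RUNNING; re-arm the
                 * sleep state so the next iteration blocks instead of
                 * busy-waiting */
                set_current_state(TASK_UNINTERRUPTIBLE);
        } while (1);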

Reviewed-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Signed-off-by: Josef Bacik &lt;josef@toxicpanda.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>rq-qos: don't reset has_sleepers on spurious wakeups</title>
<updated>2019-08-16T08:11:00+00:00</updated>
<author>
<name>Josef Bacik</name>
<email>josef@toxicpanda.com</email>
</author>
<published>2019-07-16T20:19:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=2b6c7c7c9cfa054a3f77d78a836003c12780d514'/>
<id>urn:sha1:2b6c7c7c9cfa054a3f77d78a836003c12780d514</id>
<content type='text'>
[ Upstream commit 64e7ea875ef63b2801be7954cf7257d1bfccc266 ]

If we raced with somebody else getting an inflight counter, we could
fail to get one even with no sleepers on the list, and thus need to go
to sleep.  In this case has_sleepers should be true, because we are now
relying on the waker to get our inflight counter for us, and in the case
of spurious wakeups we'd still want this to be the case.  So set
has_sleepers to true if we went to sleep, to make sure we're woken up
the proper way.
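
In the same rq_qos_wait() loop, the change amounts to something like
(a sketch, not the literal diff):

        io_schedule();
        /* once we have slept, the waker owns handing us our inflight
         * counter; remember that across spurious wakeups */
        has_sleepers = true;
        set_current_state(TASK_UNINTERRUPTIBLE);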

Reviewed-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Signed-off-by: Josef Bacik &lt;josef@toxicpanda.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>block/bio-integrity: fix a memory leak bug</title>
<updated>2019-07-31T05:24:53+00:00</updated>
<author>
<name>Wenwen Wang</name>
<email>wenwen@cs.uga.edu</email>
</author>
<published>2019-07-11T19:22:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=e9094b6117362ff8b020f26cff02d67bf30112d0'/>
<id>urn:sha1:e9094b6117362ff8b020f26cff02d67bf30112d0</id>
<content type='text'>
[ Upstream commit e7bf90e5afe3aa1d1282c1635a49e17a32c4ecec ]

In bio_integrity_prep(), a kernel buffer is allocated through kmalloc() to
hold integrity metadata. Later on, the buffer will be attached to the bio
structure through bio_integrity_add_page(), which returns the number of
bytes of integrity metadata attached. In some situations (for example,
if the integrity vector is already full), bio_integrity_add_page() may
return 0. As a result, bio_integrity_prep()
needs to be terminated with 'false' returned to indicate this error.
However, the allocated kernel buffer is not freed on this execution path,
leading to a memory leak.

To fix this issue, free the allocated buffer before returning from
bio_integrity_prep().
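
A sketch of the resulting error path in bio_integrity_prep() (names and
flow approximate, not the literal diff):

        buf = kmalloc(len, GFP_NOIO | q-&gt;bounce_gfp);
        ...
        ret = bio_integrity_add_page(bio, virt_to_page(buf), bytes, offset);
        if (ret == 0) {
                kfree(buf);             /* was leaked before this fix */
                status = BLK_STS_RESOURCE;
                goto err_end_io;
        }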

Reviewed-by: Ming Lei &lt;ming.lei@redhat.com&gt;
Acked-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Wenwen Wang &lt;wenwen@cs.uga.edu&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>block: init flush rq ref count to 1</title>
<updated>2019-07-31T05:24:52+00:00</updated>
<author>
<name>Josef Bacik</name>
<email>josef@toxicpanda.com</email>
</author>
<published>2019-03-07T21:37:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=65dfe3fafd1ce222e68b7821cae5bbc928aeb199'/>
<id>urn:sha1:65dfe3fafd1ce222e68b7821cae5bbc928aeb199</id>
<content type='text'>
[ Upstream commit b554db147feea39617b533ab6bca247c91c6198a ]

We discovered a problem in newer kernels where a disconnect of a NBD
device while the flush request was pending would result in a hang.  This
is because the blk mq timeout handler does

        if (!refcount_inc_not_zero(&amp;rq-&gt;ref))
                return true;

to determine if it's ok to run the timeout handler for the request.
Flush_rq's don't have a ref count set, so we'd skip running the timeout
handler for this request and it would just sit there in limbo forever.

Fix this by always setting the refcount of any request going through
blk_rq_init() to 1.  I tested this with a nbd-server that dropped flush
requests to verify that it hung, and then tested with this patch to
verify I got the timeout as expected and the error handling kicked in.
Thanks,
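
A sketch of the change, assuming blk_rq_init() in block/blk-core.c is
the init path touched (not the literal diff):

        void blk_rq_init(struct request_queue *q, struct request *rq)
        {
                memset(rq, 0, sizeof(*rq));
                ...
                refcount_set(&amp;rq-&gt;ref, 1);      /* give flush_rq a live reference */
        }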

Signed-off-by: Josef Bacik &lt;josef@toxicpanda.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>block: Limit zone array allocation size</title>
<updated>2019-07-28T06:27:24+00:00</updated>
<author>
<name>Damien Le Moal</name>
<email>damien.lemoal@wdc.com</email>
</author>
<published>2019-07-01T05:09:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=0362a47aae58b9d9def01df8fc0d87b83ab83192'/>
<id>urn:sha1:0362a47aae58b9d9def01df8fc0d87b83ab83192</id>
<content type='text'>
commit 26202928fafad8bda8b478edb7e62c885be623d7 upstream.

Limit the size of the struct blk_zone array used in
blk_revalidate_disk_zones() to avoid memory allocation failures leading
to disk revalidation failure. Also further reduce the likelihood of
such failures by using kvcalloc() (that is, a vmalloc()-based
allocation) instead of allocating contiguous pages with alloc_pages().
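
A sketch of the allocation pattern; the cap variable here is
illustrative, not the upstream constant name:

        /* cap the number of zones reported per call and let kvcalloc()
         * fall back to vmalloc() for large arrays */
        nr_zones = min(nr_zones, max_zones_per_report);  /* illustrative cap */
        zones = kvcalloc(nr_zones, sizeof(struct blk_zone), GFP_NOIO);
        if (!zones)
                return -ENOMEM;
        ...
        kvfree(zones);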

Fixes: 515ce6061312 ("scsi: sd_zbc: Fix sd_zbc_report_zones() buffer allocation")
Fixes: e76239a3748c ("block: add a report_zones method")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Damien Le Moal &lt;damien.lemoal@wdc.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blkcg: update blkcg_print_stat() to handle larger outputs</title>
<updated>2019-07-26T07:11:11+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2019-06-13T22:30:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d483ea783a93f5cd59c93aa0314fde7267a0968f'/>
<id>urn:sha1:d483ea783a93f5cd59c93aa0314fde7267a0968f</id>
<content type='text'>
commit f539da82f2158916e154d206054e0efd5df7ab61 upstream.

Depending on the number of devices, blkcg stats can go over the
default seqfile buf size.  seqfile normally retries with a larger
buffer, but since the -&gt;pd_stat() addition, blkcg_print_stat() doesn't
tell seqfile that an overflow has happened, so the output gets printed
truncated.  Fix it by calling seq_commit() with -1 on possible
overflows.
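
A sketch of the overflow signalling (variable names approximate, not
the literal diff):

        off += scnprintf(buf + off, size - off, ...);
        if (off &lt; size)
                seq_commit(sf, off);
        else
                seq_commit(sf, -1);     /* tell seq_file to retry with a bigger buffer */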

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Fixes: 903d23f0a354 ("blk-cgroup: allow controllers to output their own stats")
Cc: stable@vger.kernel.org # v4.19+
Cc: Josef Bacik &lt;jbacik@fb.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-iolatency: clear use_delay when io.latency is set to zero</title>
<updated>2019-07-26T07:11:10+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2019-06-13T22:30:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=c600458bb020e592ad29bf2449957e0a43f026b2'/>
<id>urn:sha1:c600458bb020e592ad29bf2449957e0a43f026b2</id>
<content type='text'>
commit 5de0073fcd50cc1f150895a7bb04d3cf8067b1d7 upstream.

If use_delay was non-zero when the latency target of a cgroup was set
to zero, it stays stuck until io.latency is enabled on the cgroup
again.  This keeps readahead disabled for the cgroup, impacting
performance negatively.
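
A sketch of the fix in the limit-setting path; the oldval/val naming is
assumed, not quoted from the diff:

        /* the latency target was cleared for this cgroup: drop any
         * leftover delay so readahead is re-enabled */
        if (oldval &amp;&amp; !val)
                blkcg_clear_delay(blkg);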

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Josef Bacik &lt;jbacik@fb.com&gt;
Fixes: d70675121546 ("block: introduce blk-iolatency io controller")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
