<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/drivers/s390, branch v4.19.16</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v4.19.16</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v4.19.16'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2019-01-13T08:51:03+00:00</updated>
<entry>
<title>scsi: zfcp: fix posting too many status read buffers leading to adapter shutdown</title>
<updated>2019-01-13T08:51:03+00:00</updated>
<author>
<name>Steffen Maier</name>
<email>maier@linux.ibm.com</email>
</author>
<published>2018-12-06T16:31:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=06fd6847c4a15207d266a92914a29a04096961c4'/>
<id>urn:sha1:06fd6847c4a15207d266a92914a29a04096961c4</id>
<content type='text'>
commit 60a161b7e5b2a252ff0d4c622266a7d8da1120ce upstream.

Suppose adapter (open) recovery is between having opened the QDIO queues
and before (the end of) the initial posting of status read buffers (SRBs).
This time window can be seconds long, because
FSF_PROT_HOST_CONNECTION_INITIALIZING causes, by design, a retry loop with
exponentially increasing sleeps in the function that performs exchange
config data during recovery [zfcp_erp_adapter_strat_fsf_xconf()]. The
recovery is triggered by a local link up.

Suppose an event occurs for which the FCP channel would send an unsolicited
notification to zfcp by means of a previously posted SRB.  We saw it with
local cable pull (link down) in multi-initiator zoning with multiple
NPIV-enabled subchannels of the same shared FCP channel.

As soon as zfcp_erp_adapter_strategy_open_fsf() starts posting the initial
status read buffers from within the adapter's ERP thread, the channel does
send an unsolicited notification.

Since v2.6.27 commit d26ab06ede83 ("[SCSI] zfcp: receiving an unsolicted
status can lead to I/O stall"), zfcp_fsf_status_read_handler() schedules
adapter-&gt;stat_work to re-fill the just consumed SRB from a work item.

Now the ERP thread and the work item post SRBs in parallel.  Both contexts
call the helper function zfcp_status_read_refill().  The tracking of
missing (to be posted / re-filled) SRBs is not thread-safe, because the
atomic_read() and the atomic_dec() are separate operations so that the
decrement can depend on posting success. Hence, both contexts can see
atomic_read(&amp;adapter-&gt;stat_miss) == 1 and one of them posts one SRB
too many. Zfcp then gets QDIO_ERROR_SLSB_STATE on the output queue
(trace tag "qdireq1"), leading to zfcp_erp_adapter_shutdown() in
zfcp_qdio_handler_error().

An obvious and seemingly clean fix would be to schedule stat_work from the
ERP thread and wait for it to finish. This would serialize all SRB
re-fills. However, we already have another work item waiting on the ERP
thread: adapter-&gt;scan_work runs zfcp_fc_scan_ports() which calls
zfcp_fc_eval_gpn_ft(). The latter calls zfcp_erp_wait() to wait for all the
open port recoveries during zfcp auto port scan, but in fact it waits for
any pending recovery including an adapter recovery. This approach leads to
a deadlock.  [see also v3.19 commit 18f87a67e6d6 ("zfcp: auto port scan
resiliency"); v2.6.37 commit d3e1088d6873
("[SCSI] zfcp: No ERP escalation on gpn_ft eval");
v2.6.28 commit fca55b6fb587
("[SCSI] zfcp: fix deadlock between wq triggered port scan and ERP")
fixing v2.6.27 commit c57a39a45a76
("[SCSI] zfcp: wait until adapter is finished with ERP during auto-port");
v2.6.27 commit cc8c282963bd
("[SCSI] zfcp: Automatically attach remote ports")]

Instead make the accounting of missing SRBs atomic for parallel execution
in both the ERP thread and adapter-&gt;stat_work.
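
A minimal sketch of such atomic accounting (illustrative only, not
necessarily the exact upstream change; post_one_status_read_buffer() is a
hypothetical stand-in for the actual posting path):

    /* consume one "missing SRB" credit atomically, so two contexts can
     * never both post an SRB for the same credit
     */
    while (atomic_add_unless(&amp;adapter-&gt;stat_miss, -1, 0)) {
        if (post_one_status_read_buffer(adapter)) {
            /* posting failed: hand the credit back */
            atomic_inc(&amp;adapter-&gt;stat_miss);
            break;
        }
    }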

Signed-off-by: Steffen Maier &lt;maier@linux.ibm.com&gt;
Fixes: d26ab06ede83 ("[SCSI] zfcp: receiving an unsolicted status can lead to I/O stall")
Cc: &lt;stable@vger.kernel.org&gt; #2.6.27+
Reviewed-by: Jens Remus &lt;jremus@linux.ibm.com&gt;
Signed-off-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>s390/cio: Fix cleanup when unsupported IDA format is used</title>
<updated>2018-12-17T08:24:31+00:00</updated>
<author>
<name>Eric Farman</name>
<email>farman@linux.ibm.com</email>
</author>
<published>2018-11-09T02:39:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=61170596e1c0715519720947a9a3b8249a729101'/>
<id>urn:sha1:61170596e1c0715519720947a9a3b8249a729101</id>
<content type='text'>
[ Upstream commit b89e242eee8d4cd8261d8d821c62c5d1efc454d0 ]

Direct returns from within a loop are rude, but that doesn't mean they get
to skip releasing the memory that was acquired beforehand.
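
A minimal sketch of the resulting cleanup pattern (the names here are
hypothetical; only the shape of the fix comes from the description above):

    for (i = 0; i &lt; count; i++) {
        if (!ida_format_supported(ccw + i)) {   /* hypothetical check */
            ret = -EOPNOTSUPP;
            goto out_free;  /* don't return from inside the loop */
        }
    }
    return 0;
out_free:
    kfree(ccw);     /* release the memory acquired beforehand */
    return ret;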

Signed-off-by: Eric Farman &lt;farman@linux.ibm.com&gt;
Message-Id: &lt;20181109023937.96105-3-farman@linux.ibm.com&gt;
Reviewed-by: Farhan Ali &lt;alifm@linux.ibm.com&gt;
Reviewed-by: Pierre Morel &lt;pmorel@linux.ibm.com&gt;
Acked-by: Halil Pasic &lt;pasic@linux.ibm.com&gt;
Signed-off-by: Cornelia Huck &lt;cohuck@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>s390/cio: Fix cleanup of pfn_array alloc failure</title>
<updated>2018-12-17T08:24:31+00:00</updated>
<author>
<name>Eric Farman</name>
<email>farman@linux.ibm.com</email>
</author>
<published>2018-11-09T02:39:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a4f21114d83ec0abc311e71f11a0e05f670a0cac'/>
<id>urn:sha1:a4f21114d83ec0abc311e71f11a0e05f670a0cac</id>
<content type='text'>
[ Upstream commit 806212f91c874b24cf9eb4a9f180323671b6c5ed ]

If pfn_array_alloc fails somehow, we need to release the pfn_array_table
that was malloc'd earlier.
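
A minimal sketch of the error path described above (the call signatures
are assumed; only the unwinding order is taken from the commit message):

    pat = kcalloc(nr, sizeof(*pat), GFP_KERNEL);    /* pfn_array_table */
    if (!pat)
        return -ENOMEM;

    ret = pfn_array_alloc(pat, iova, len);          /* signature assumed */
    if (ret &lt; 0) {
        kfree(pat);     /* release the earlier allocation on failure */
        return ret;
    }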

Signed-off-by: Eric Farman &lt;farman@linux.ibm.com&gt;
Message-Id: &lt;20181109023937.96105-2-farman@linux.ibm.com&gt;
Acked-by: Halil Pasic &lt;pasic@linux.ibm.com&gt;
Signed-off-by: Cornelia Huck &lt;cohuck@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>virtio/s390: fix race in ccw_io_helper()</title>
<updated>2018-12-13T08:16:18+00:00</updated>
<author>
<name>Halil Pasic</name>
<email>pasic@linux.ibm.com</email>
</author>
<published>2018-09-26T16:48:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=2a622040a8bce2bb7e646ca73a328b4e86ba336f'/>
<id>urn:sha1:2a622040a8bce2bb7e646ca73a328b4e86ba336f</id>
<content type='text'>
commit 78b1a52e05c9db11d293342e8d6d8a230a04b4e7 upstream.

While ccw_io_helper() appears to be intended to be exclusive, in the sense
that it is supposed to facilitate I/O for at most one thread at any given
time, there is actually nothing ensuring that threads won't pile up at
vcdev-&gt;wait_q. If they do, all threads get woken up and may see a status
that belongs to some other request than their own. This can lead to bugs.
For an example see:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1788432

This race normally does not cause any problems. The operations provided
by struct virtio_config_ops are usually invoked in a well-defined
sequence, normally don't fail, and are normally used quite infrequently.

Yet, if some of these operations are triggered directly via sysfs
attributes, as in the case described by the referenced bug, userspace is
given an opportunity to force the race by increasing the frequency of the
given operations.

Let us fix the problem by ensuring that, for each device, we finish
processing the previous request before starting a new one.
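
As a generic illustration of such per-device serialization (a sketch
only, not the actual patch, which keeps working with vcdev-&gt;wait_q;
io_lock and submit_ccw_and_wait() are hypothetical names):

    /* allow only one request in flight per device at a time */
    mutex_lock(&amp;vcdev-&gt;io_lock);
    ret = submit_ccw_and_wait(vcdev, ccw, intparm, flag);
    status = vcdev-&gt;curr_status;    /* now guaranteed to be ours */
    mutex_unlock(&amp;vcdev-&gt;io_lock);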

Signed-off-by: Halil Pasic &lt;pasic@linux.ibm.com&gt;
Reported-by: Colin Ian King &lt;colin.king@canonical.com&gt;
Cc: stable@vger.kernel.org
Message-Id: &lt;20180925121309.58524-3-pasic@linux.ibm.com&gt;
Signed-off-by: Cornelia Huck &lt;cohuck@redhat.com&gt;
Signed-off-by: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>virtio/s390: avoid race on vcdev-&gt;config</title>
<updated>2018-12-13T08:16:18+00:00</updated>
<author>
<name>Halil Pasic</name>
<email>pasic@linux.ibm.com</email>
</author>
<published>2018-09-26T16:48:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=79f6e9facb8b53ef2097a9aa35c7069b5627301a'/>
<id>urn:sha1:79f6e9facb8b53ef2097a9aa35c7069b5627301a</id>
<content type='text'>
commit 2448a299ec416a80f699940a86f4a6d9a4f643b1 upstream.

Currently we have a race on vcdev-&gt;config in virtio_ccw_get_config() and
in virtio_ccw_set_config().

This normally does not cause problems, as these are usually infrequent
operations. However, for some devices writing to/reading from the config
space can be triggered through sysfs attributes. For these, userspace can
force the race by increasing the frequency.
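
A generic illustration of closing such a window (a sketch assuming a
per-device lock named config_lock; the actual patch may serialize
differently):

    /* make the cached-config read-modify-write atomic w.r.t. racers */
    mutex_lock(&amp;vcdev-&gt;config_lock);
    memcpy(vcdev-&gt;config + offset, buf, len);   /* update cached copy */
    ccw_write_config(vcdev);                    /* hypothetical write-out */
    mutex_unlock(&amp;vcdev-&gt;config_lock);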

Signed-off-by: Halil Pasic &lt;pasic@linux.ibm.com&gt;
Cc: stable@vger.kernel.org
Message-Id: &lt;20180925121309.58524-2-pasic@linux.ibm.com&gt;
Signed-off-by: Cornelia Huck &lt;cohuck@redhat.com&gt;
Signed-off-by: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>s390/ism: clear dmbe_mask bit before SMC IRQ handling</title>
<updated>2018-12-13T08:16:11+00:00</updated>
<author>
<name>Ursula Braun</name>
<email>ubraun@linux.ibm.com</email>
</author>
<published>2018-11-12T16:06:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b65fa443e56ee2742ed91c6dfcaa4ee428076e30'/>
<id>urn:sha1:b65fa443e56ee2742ed91c6dfcaa4ee428076e30</id>
<content type='text'>
[ Upstream commit 007b656851ed7f94ba0fa358ac3e5d7705da6846 ]

SMC-D stress workload showed connection stalls. Since the firmware
decides to skip raising an interrupt if the SBA DMBE mask bit is
still set, this SBA DMBE mask bit should be cleared before the
IRQ handling in the SMC code runs. Otherwise small windows remain in
which interrupts for incoming data can be missed.
SMC-D currently does not care about the old value of the SBA DMBE
mask.
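
A minimal sketch of the ordering this implies in the interrupt handler
(field and helper names assumed from context, not a verbatim quote of the
driver):

    ism-&gt;sba-&gt;dmbe_mask[bit] = 0;       /* clear the mask bit first ... */
    barrier();
    smcd_handle_irq(ism-&gt;smcd, bit);    /* ... then run SMC IRQ handling */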

Acked-by: Sebastian Ott &lt;sebott@linux.ibm.com&gt;
Signed-off-by: Ursula Braun &lt;ubraun@linux.ibm.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>s390/qeth: fix length check in SNMP processing</title>
<updated>2018-12-05T18:31:59+00:00</updated>
<author>
<name>Julian Wiedmann</name>
<email>jwi@linux.ibm.com</email>
</author>
<published>2018-11-28T15:20:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=711e3d37275c40bb07f538c40048a2ee34f34f70'/>
<id>urn:sha1:711e3d37275c40bb07f538c40048a2ee34f34f70</id>
<content type='text'>
[ Upstream commit 9a764c1e59684c0358e16ccaafd870629f2cfe67 ]

The response for a SNMP request can consist of multiple parts, which
the cmd callback stages into a kernel buffer until all parts have been
received. If the callback detects that the staging buffer provides
insufficient space, it bails out with error.
This processing is buggy for the first part of the response - while it
initially checks for a length of 'data_len', it later copies an
additional amount of 'offsetof(struct qeth_snmp_cmd, data)' bytes.

Fix the calculation of 'data_len' for the first part of the response.
This also nicely cleans up the memcpy code.
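
A minimal sketch of the kind of check implied by the description (the
first_part flag and space_left are placeholders, not the driver's actual
variables):

    size_t need = data_len;

    if (first_part)     /* the header is copied along with part one */
        need += offsetof(struct qeth_snmp_cmd, data);
    if (need &gt; space_left)
        return -ENOSPC; /* staging buffer provides insufficient space */
    memcpy(dst, src, need);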

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Julian Wiedmann &lt;jwi@linux.ibm.com&gt;
Reviewed-by: Ursula Braun &lt;ubraun@linux.ibm.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>s390/qeth: unregister netdevice only when registered</title>
<updated>2018-11-27T15:13:03+00:00</updated>
<author>
<name>Julian Wiedmann</name>
<email>jwi@linux.ibm.com</email>
</author>
<published>2018-11-02T18:04:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=99b9de47a8b64d534582d26c80f4220e6189df60'/>
<id>urn:sha1:99b9de47a8b64d534582d26c80f4220e6189df60</id>
<content type='text'>
[ Upstream commit 30356d08159d7899438e94503ae322a8b881e205 ]

qeth only registers its netdevice when the qeth device is first set
online. Thus a device that has never been set online will trigger
a WARN ("network todo 'hsi%d' but state 0") in unregister_netdev() when
removed.

Fix this by protecting the unregister step, just like we already protect
against repeated registering of the netdevice.
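
A minimal sketch of such a guard (mirroring the standard netdev idiom;
whether the patch tests reg_state or a driver-private flag is not spelled
out above):

    /* only unregister a netdevice that actually got registered */
    if (card-&gt;dev-&gt;reg_state == NETREG_REGISTERED)
        unregister_netdev(card-&gt;dev);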

Fixes: d3d1b205e89f ("s390/qeth: allocate netdevice early")
Reported-by: Karsten Graul &lt;kgraul@linux.ibm.com&gt;
Signed-off-by: Julian Wiedmann &lt;jwi@linux.ibm.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>s390/qeth: fix HiperSockets sniffer</title>
<updated>2018-11-27T15:13:03+00:00</updated>
<author>
<name>Julian Wiedmann</name>
<email>jwi@linux.ibm.com</email>
</author>
<published>2018-11-02T18:04:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d005b563ae0cef9712077ba4407d771c75bd4194'/>
<id>urn:sha1:d005b563ae0cef9712077ba4407d771c75bd4194</id>
<content type='text'>
[ Upstream commit bd74a7f9cc033cf4d405788f80292268987dc0c5 ]

Sniffing mode for L3 HiperSockets requires that no IP addresses are
registered with the HW. The preferred way to achieve this is for
userspace to delete all the IPs on the interface. But qeth is expected
to also tolerate a configuration where that is not the case, by skipping
the IP registration when in sniffer mode.
Since commit 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
reworked the IP registration logic in the L3 subdriver, this no longer
works. When the qeth device is set online, qeth_l3_recover_ip() now
unconditionally registers all unicast addresses from our internal
IP table.

While we could fix this particular problem by skipping
qeth_l3_recover_ip() on a sniffer device, the more future-proof change
is to skip the IP address registration at the lowest level. This way we
a) catch any future code path that attempts to register an IP address
   without considering the sniffer scenario, and
b) continue to build up our internal IP table, so that if sniffer mode
   is switched off later we can operate just like normal.
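
A minimal sketch of such a lowest-level guard (placing it in the
registration helper is an assumption based on the context above, not a
quote of the patch):

    /* in the lowest-level address registration helper: */
    if (card-&gt;options.sniffer)
        return 0;   /* keep the internal IP table entry, skip the HW */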

Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Julian Wiedmann &lt;jwi@linux.ibm.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 's390-4.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux</title>
<updated>2018-10-10T06:44:35+00:00</updated>
<author>
<name>Greg Kroah-Hartman</name>
<email>gregkh@linuxfoundation.org</email>
</author>
<published>2018-10-10T06:44:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=3d647e62686f447ca4c9ba6b4500fed5f0368a98'/>
<id>urn:sha1:3d647e62686f447ca4c9ba6b4500fed5f0368a98</id>
<content type='text'>
Martin writes:
  "s390 fixes for 4.19-rc8

   Four more patches for 4.19:
    - Fix resume after suspend-to-disk if resume-CPU != suspend-CPU
    - Fix vfio-ccw check for pinned pages
    - Two patches to avoid a usercopy-whitelist warning in vfio-ccw"

* tag 's390-4.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/cio: Fix how vfio-ccw checks pinned pages
  s390/cio: Refactor alloc of ccw_io_region
  s390/cio: Convert ccw_io_region to pointer
  s390/hibernate: fix error handling when suspend cpu != resume cpu
</content>
</entry>
</feed>
