<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/include/linux/iova.h, branch v6.19.11</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.11</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.11'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2024-05-07T11:29:45+00:00</updated>
<entry>
<title>iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices</title>
<updated>2024-05-07T11:29:45+00:00</updated>
<author>
<name>Michael Kelley</name>
<email>mhklinux@outlook.com</email>
</author>
<published>2024-04-08T04:11:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=2650073f1b5858008c32712f3d9e1e808ce7e967'/>
<id>urn:sha1:2650073f1b5858008c32712f3d9e1e808ce7e967</id>
<content type='text'>
iommu_dma_map_page() allocates swiotlb memory as a bounce buffer when an
untrusted device wants to map only part of the memory in a granule.  The
goal is to prevent the untrusted device from having DMA access to unrelated
kernel data that may be sharing the granule.  To meet this goal, the
bounce buffer itself is zeroed, and any additional swiotlb memory up to
alloc_size after the bounce buffer end (i.e., "post-padding") is also
zeroed.

However, as of commit 901c7280ca0d ("Reinstate some of "swiotlb: rework
"fix info leak with DMA_FROM_DEVICE"""), swiotlb_tbl_map_single() always
initializes the contents of the bounce buffer to the original memory.
Zeroing the bounce buffer is redundant and probably wrong per the
discussion in that commit. Only the post-padding needs to be zeroed.

Also, when the DMA min_align_mask is non-zero, the allocated bounce
buffer space may not start on a granule boundary.  The swiotlb memory
from the granule boundary to the start of the allocated bounce buffer
might belong to some unrelated bounce buffer. So as described in the
"second issue" in [1], it can't be zeroed to protect against untrusted
devices. But as of commit af133562d5af ("swiotlb: extend buffer
pre-padding to alloc_align_mask if necessary"), swiotlb_tbl_map_single()
allocates pre-padding slots when necessary to meet min_align_mask
requirements, making it possible to zero the pre-padding area as well.

Finally, iommu_dma_map_page() uses the swiotlb for untrusted devices
and also for certain kmalloc() memory. Current code does the zeroing
for both cases, but it is needed only for the untrusted device case.

Fix all of this by updating iommu_dma_map_page() to zero both the
pre-padding and post-padding areas, but not the actual bounce buffer.
Do this only in the case where the bounce buffer is used because
of an untrusted device.
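
As a rough user-space sketch of that end state (zero_bounce_padding() and
its arguments are illustrative, not the kernel implementation), only the
padding on either side of the bounce buffer is cleared, while the buffer
itself keeps the copy of the original data that swiotlb already made:

  #include &lt;stddef.h&gt;
  #include &lt;string.h&gt;

  /* alloc_start/alloc_size describe the whole swiotlb allocation,
   * already padded out to granule boundaries; buf/buf_size describe
   * the bounce buffer proper inside it. */
  static void zero_bounce_padding(void *alloc_start, size_t alloc_size,
                                  void *buf, size_t buf_size)
  {
          char *astart = alloc_start;
          char *bstart = buf;

          /* Pre-padding: granule-aligned start up to the buffer. */
          memset(astart, 0, bstart - astart);
          /* Post-padding: end of the buffer up to alloc_size. */
          memset(bstart + buf_size, 0,
                 (astart + alloc_size) - (bstart + buf_size));
  }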

[1] https://lore.kernel.org/all/20210929023300.335969-1-stevensd@google.com/

Signed-off-by: Michael Kelley &lt;mhklinux@outlook.com&gt;
Reviewed-by: Petr Tesarik &lt;petr@tesarici.cz&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Fix module config properly</title>
<updated>2022-09-26T11:31:20+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2022-09-13T11:47:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4f58330fcc8482aa90674e1f40f601e82f18ed4a'/>
<id>urn:sha1:4f58330fcc8482aa90674e1f40f601e82f18ed4a</id>
<content type='text'>
IOMMU_IOVA is intended to be an optional library for users to select as
and when they desire. Now that it can be a module, built-in code which
has chosen not to select it should not fail to link if it happens to
have been selected as a module by someone else. Replace
IS_ENABLED() with IS_REACHABLE() to do the right thing.
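
An illustrative header fragment of the pattern (not the literal iova.h
contents): built-in callers fall back to inline stubs whenever the real
implementation is not reachable from their position in the build, i.e.
when the library is built as a module and the caller is not:

  #include &lt;linux/kconfig.h&gt;
  #include &lt;linux/types.h&gt;

  #if IS_REACHABLE(CONFIG_IOMMU_IOVA)
  struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
                          unsigned long limit_pfn, bool size_aligned);
  #else
  static inline struct iova *alloc_iova(struct iova_domain *iovad,
                                        unsigned long size,
                                        unsigned long limit_pfn,
                                        bool size_aligned)
  {
          return NULL;
  }
  #endif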

CC: Thierry Reding &lt;thierry.reding@gmail.com&gt;
Reported-by: John Garry &lt;john.garry@huawei.com&gt;
Fixes: 15bbdec3931e ("iommu: Make the iova library a module")
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Thierry Reding &lt;treding@nvidia.com&gt;
Link: https://lore.kernel.org/r/548c2f683ca379aface59639a8f0cccc3a1ac050.1663069227.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>dma-iommu: add iommu_dma_opt_mapping_size()</title>
<updated>2022-07-19T04:05:45+00:00</updated>
<author>
<name>John Garry</name>
<email>john.garry@huawei.com</email>
</author>
<published>2022-07-14T11:15:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=6d9870b7e5def2450e21316515b9efc0529204dd'/>
<id>urn:sha1:6d9870b7e5def2450e21316515b9efc0529204dd</id>
<content type='text'>
Add the IOMMU callback for the DMA mapping API dma_opt_mapping_size(), which
allows drivers to know the optimal mapping limit and thus cap the IOVA
lengths they request.

This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit must always be newly allocated, which may be quite slow.
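
A hedged driver-side sketch (foo_max_transfer_bytes() is a made-up helper)
of how a submitter might use the new limit to cap the transfer size it
advertises, so IOVA allocations stay within the rcache-backed fast path:

  #include &lt;linux/dma-mapping.h&gt;
  #include &lt;linux/minmax.h&gt;

  static unsigned int foo_max_transfer_bytes(struct device *dev,
                                             unsigned int hw_max_bytes)
  {
          /* Optimal mapping size as reported by the DMA layer. */
          size_t opt = dma_opt_mapping_size(dev);

          return min_t(unsigned int, hw_max_bytes, opt);
  }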

Signed-off-by: John Garry &lt;john.garry@huawei.com&gt;
Reviewed-by: Damien Le Moal &lt;damien.lemoal@opensource.wdc.com&gt;
Acked-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Acked-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Separate out rcache init</title>
<updated>2022-02-14T14:43:15+00:00</updated>
<author>
<name>John Garry</name>
<email>john.garry@huawei.com</email>
</author>
<published>2022-02-03T09:59:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=32e92d9f6f876721cc4f565f8369d542b6266380'/>
<id>urn:sha1:32e92d9f6f876721cc4f565f8369d542b6266380</id>
<content type='text'>
Currently the rcache structures are allocated for all IOVA domains, even if
they do not use the "fast" alloc+free interface. This is wasteful of memory.

In addition, failures in init_iova_rcaches() are not handled safely, which is
less than ideal.

Make "fast" users call a separate rcache init explicitly, which includes
error checking.
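
A sketch of the resulting calling convention for a "fast" user
(foo_setup_iova() is illustrative): the rcaches are set up by a separate
call whose failure must now be checked:

  #include &lt;linux/iova.h&gt;

  static int foo_setup_iova(struct iova_domain *iovad,
                            unsigned long granule, unsigned long start_pfn)
  {
          init_iova_domain(iovad, granule, start_pfn);

          /* Rcache setup is now explicit and may fail. */
          return iova_domain_init_rcaches(iovad);
  }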

Signed-off-by: John Garry &lt;john.garry@huawei.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Acked-by: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Link: https://lore.kernel.org/r/1643882360-241739-1-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Temporarily include dma-mapping.h from iova.h</title>
<updated>2021-12-20T12:53:26+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2021-12-20T12:34:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=aade40b62745cf0b4e8a17d43652c5faff354e6b'/>
<id>urn:sha1:aade40b62745cf0b4e8a17d43652c5faff354e6b</id>
<content type='text'>
Some users of iova.h still expect that dma-mapping.h is also included.
Re-add the include to fix compile failures in the iommu tree, until
these users are updated.

Acked-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Link: https://lore.kernel.org/r/20211220123448.19996-1-joro@8bytes.org
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu: Move flush queue data into iommu_dma_cookie</title>
<updated>2021-12-20T08:03:05+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2021-12-17T15:31:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a17e3026bc4da9135ca9a42ec0b1fa67f95172e3'/>
<id>urn:sha1:a17e3026bc4da9135ca9a42ec0b1fa67f95172e3</id>
<content type='text'>
Complete the move into iommu-dma by refactoring the flush queues
themselves to belong to the DMA cookie rather than the IOVA domain.

The refactoring may as well extend to some minor cosmetic aspects
too, to help us stay one step ahead of the style police.

Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Link: https://lore.kernel.org/r/24304722005bc6f144e2a1fdd865d1465722fc2e.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/vt-d: Use put_pages_list</title>
<updated>2021-12-20T08:03:05+00:00</updated>
<author>
<name>Matthew Wilcox (Oracle)</name>
<email>willy@infradead.org</email>
</author>
<published>2021-12-17T15:31:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=87f60cc65d24939353b40aa1d9297fea080cdf8d'/>
<id>urn:sha1:87f60cc65d24939353b40aa1d9297fea080cdf8d</id>
<content type='text'>
page-&gt;freelist is for the use of slab.  We already have the ability
to free a list of pages in the core mm, but it requires the use of a
list_head and for the pages to be chained together through page-&gt;lru.
Switch the Intel IOMMU and IOVA code over to using put_pages_list().
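
A minimal sketch of the interface being adopted (the page allocation here
is only for illustration, not the IOMMU code): pages chained through
page-&gt;lru on a list_head are released with a single call:

  #include &lt;linux/gfp.h&gt;
  #include &lt;linux/list.h&gt;
  #include &lt;linux/mm.h&gt;

  static void foo_free_pages_example(void)
  {
          LIST_HEAD(freelist);
          struct page *page;
          int i;

          for (i = 0; i &lt; 4; i++) {
                  page = alloc_page(GFP_KERNEL);
                  if (!page)
                          break;
                  list_add_tail(&amp;page-&gt;lru, &amp;freelist);
          }

          /* Frees every page chained on the list via page-&gt;lru. */
          put_pages_list(&amp;freelist);
  }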

Signed-off-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
[rm: split from original patch, cosmetic tweaks, fix fq entries]
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Lu Baolu &lt;baolu.lu@linux.intel.com&gt;
Link: https://lore.kernel.org/r/2115b560d9a0ce7cd4b948bd51a2b7bde8fdfd59.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Squash flush_cb abstraction</title>
<updated>2021-12-20T08:03:05+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2021-12-17T15:30:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=649ad9835a3783bcb6c69368fa939e0010abb2c6'/>
<id>urn:sha1:649ad9835a3783bcb6c69368fa939e0010abb2c6</id>
<content type='text'>
Once again, with iommu-dma now being the only flush queue user, we no
longer need the extra level of indirection through flush_cb. Squash that
and let the flush queue code call the domain method directly. This does
mean temporarily having to carry an additional copy of the IOMMU domain
pointer around instead, but only until a later patch untangles it again.

Reviewed-by: John Garry &lt;john.garry@huawei.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Link: https://lore.kernel.org/r/e3f9b4acdd6640012ef4fbc819ac868d727b64a9.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Squash entry_dtor abstraction</title>
<updated>2021-12-20T08:03:05+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2021-12-17T15:30:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d5c383f2c98ac58c210b266cdaf7b86bc32d1ad1'/>
<id>urn:sha1:d5c383f2c98ac58c210b266cdaf7b86bc32d1ad1</id>
<content type='text'>
All flush queues are driven by iommu-dma now, so there is no need to
abstract entry_dtor or its data any more. Squash the now-canonical
implementation directly into the IOVA code to get it out of the way.

Reviewed-by: John Garry &lt;john.garry@huawei.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Link: https://lore.kernel.org/r/2260f8de00ab5e0f9d2a1cf8978e6ae7cd4f182c.1639753638.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu: Delete iommu_dma_free_cpu_cached_iovas()</title>
<updated>2021-04-07T08:30:47+00:00</updated>
<author>
<name>John Garry</name>
<email>john.garry@huawei.com</email>
</author>
<published>2021-03-25T12:30:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=149448b353e2517ecc6eced7d9f46e9f3e08b89e'/>
<id>urn:sha1:149448b353e2517ecc6eced7d9f46e9f3e08b89e</id>
<content type='text'>
Function iommu_dma_free_cpu_cached_iovas() no longer has any caller, so
delete it.

With that, function free_cpu_cached_iovas() may be made static.

Signed-off-by: John Garry &lt;john.garry@huawei.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Link: https://lore.kernel.org/r/1616675401-151997-4-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
</feed>
