<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/include/linux/iova.h, branch v4.14.223</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v4.14.223</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v4.14.223'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2019-08-04T07:32:02+00:00</updated>
<entry>
<title>iommu/iova: Fix compilation error with !CONFIG_IOMMU_IOVA</title>
<updated>2019-08-04T07:32:02+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2019-07-23T07:51:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=417084878e022072086d2de916347b93941bd829'/>
<id>urn:sha1:417084878e022072086d2de916347b93941bd829</id>
<content type='text'>
commit 201c1db90cd643282185a00770f12f95da330eca upstream.

The stub function for !CONFIG_IOMMU_IOVA needs to be
'static inline'.
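
A minimal sketch of the corrected stub, assuming the helper in
question is the has_iova_flush_queue() added by the commit in the
Fixes tag below:

    /* include/linux/iova.h, !CONFIG_IOMMU_IOVA case (sketch) */
    static inline bool has_iova_flush_queue(struct iova_domain *iovad)
    {
            return false;
    }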

Fixes: effa467870c76 ("iommu/vt-d: Don't queue_iova() if there is no flush queue")
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
Signed-off-by: Dmitry Safonov &lt;dima@arista.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>iommu/vt-d: Don't queue_iova() if there is no flush queue</title>
<updated>2019-08-04T07:32:02+00:00</updated>
<author>
<name>Dmitry Safonov</name>
<email>dima@arista.com</email>
</author>
<published>2019-07-16T21:38:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=fc2ec1855e536950637b734b0c85caefcea7c3c6'/>
<id>urn:sha1:fc2ec1855e536950637b734b0c85caefcea7c3c6</id>
<content type='text'>
commit effa467870c7612012885df4e246bdb8ffd8e44c upstream.

The Intel VT-d driver was reworked to use the common deferred-flushing
implementation. Previously there was one global per-CPU flush queue;
now there is one per domain.

Before deferring a flush, the queue should be allocated and initialized.

Currently only domains with the IOMMU_DOMAIN_DMA type initialize their
flush queue. It is probably worth initializing it for static or
unmanaged domains too, but that may be arguable; I'm leaving it to the
iommu folks.

Prevent queuing an iova flush if the domain doesn't have a queue.
The defensive check seems worth keeping even if the queue were
initialized for all kinds of domains, and it is easy to backport.
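
As a sketch, the resulting path in intel_unmap() looks roughly like
this (simplified; helper and variable names follow the upstream
patch):

    if (!intel_iommu_strict &amp;&amp;
        has_iova_flush_queue(&amp;domain-&gt;iovad)) {
            /* Deferred: put the range on the per-domain queue. */
            queue_iova(&amp;domain-&gt;iovad, iova_pfn, nrpages,
                       (unsigned long)freelist);
    } else {
            /* No queue: flush the IOTLB synchronously, free now. */
            iommu_flush_iotlb_psi(iommu, domain, start_pfn,
                                  nrpages, !freelist, 0);
            dma_free_pagelist(freelist);
            free_iova_fast(&amp;domain-&gt;iovad, iova_pfn, nrpages);
    }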

On the 4.19.43 stable kernel this has a user-visible effect:
previously there were crashes for devices in the si domain, e.g. on
SATA devices:

 BUG: spinlock bad magic on CPU#6, swapper/0/1
  lock: 0xffff88844f582008, .magic: 00000000, .owner: &lt;none&gt;/-1, .owner_cpu: 0
 CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
 Call Trace:
  &lt;IRQ&gt;
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x45/0x115
  intel_unmap+0x107/0x113
  intel_unmap_sg+0x6b/0x76
  __ata_qc_complete+0x7f/0x103
  ata_qc_complete+0x9b/0x26a
  ata_qc_complete_multiple+0xd0/0xe3
  ahci_handle_port_interrupt+0x3ee/0x48a
  ahci_handle_port_intr+0x73/0xa9
  ahci_single_level_irq_intr+0x40/0x60
  __handle_irq_event_percpu+0x7f/0x19a
  handle_irq_event_percpu+0x32/0x72
  handle_irq_event+0x38/0x56
  handle_edge_irq+0x102/0x121
  handle_irq+0x147/0x15c
  do_IRQ+0x66/0xf2
  common_interrupt+0xf/0xf
 RIP: 0010:__do_softirq+0x8c/0x2df

The same happens for USB devices that use ehci-pci:
 BUG: spinlock bad magic on CPU#0, swapper/0/1
  lock: 0xffff88844f402008, .magic: 00000000, .owner: &lt;none&gt;/-1, .owner_cpu: 0
 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
 Call Trace:
  &lt;IRQ&gt;
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x77/0x145
  intel_unmap+0x107/0x113
  intel_unmap_page+0xe/0x10
  usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
  usb_hcd_unmap_urb_for_dma+0x17/0x100
  unmap_urb_for_dma+0x22/0x24
  __usb_hcd_giveback_urb+0x51/0xc3
  usb_giveback_urb_bh+0x97/0xde
  tasklet_action_common.isra.4+0x5f/0xa1
  tasklet_action+0x2d/0x30
  __do_softirq+0x138/0x2df
  irq_exit+0x7d/0x8b
  smp_apic_timer_interrupt+0x10f/0x151
  apic_timer_interrupt+0xf/0x20
  &lt;/IRQ&gt;
 RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39

Cc: David Woodhouse &lt;dwmw2@infradead.org&gt;
Cc: Joerg Roedel &lt;joro@8bytes.org&gt;
Cc: Lu Baolu &lt;baolu.lu@linux.intel.com&gt;
Cc: iommu@lists.linux-foundation.org
Cc: &lt;stable@vger.kernel.org&gt; # 4.14+
Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
Signed-off-by: Dmitry Safonov &lt;dima@arista.com&gt;
Reviewed-by: Lu Baolu &lt;baolu.lu@linux.intel.com&gt;
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
[v4.14-port notes:
o minor conflict with the untrusted-IOMMU-devices check under the
  if-condition
o setup_timer() near one chunk is timer_setup() in v5.3]
Signed-off-by: Dmitry Safonov &lt;dima@arista.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Add flush timer</title>
<updated>2017-08-15T16:23:52+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-08-10T14:58:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=9a005a800ae817c2c90ef117d7cd77614d866777'/>
<id>urn:sha1:9a005a800ae817c2c90ef117d7cd77614d866777</id>
<content type='text'>
Add a timer to flush entries from the Flush-Queues every
10ms. This makes sure that no stale TLB entries remain for
too long after an IOVA has been unmapped.
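
A sketch of the rearm logic in queue_iova(), assuming the fq_timer
and fq_timer_on fields this patch adds and a 10ms timeout constant:

    #define IOVA_FQ_TIMEOUT 10      /* milliseconds */

    /* Arm the timer once; the cmpxchg avoids a rearm on every add. */
    if (!atomic_read(&amp;iovad-&gt;fq_timer_on) &amp;&amp;
        !atomic_cmpxchg(&amp;iovad-&gt;fq_timer_on, 0, 1))
            mod_timer(&amp;iovad-&gt;fq_timer,
                      jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));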

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Add locking to Flush-Queues</title>
<updated>2017-08-15T16:23:52+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-08-10T14:31:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=8109c2a2f8463852dddd6a1c3fcf262047c0c124'/>
<id>urn:sha1:8109c2a2f8463852dddd6a1c3fcf262047c0c124</id>
<content type='text'>
The lock is taken from the same CPU most of the time, but having it
allows the queue to be flushed from another CPU as well if necessary.

This will be used by a timer to regularly flush any pending
IOVAs from the Flush-Queues.
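
As a sketch, draining the queue then nests under the per-queue lock
(simplified; fq_ring_free() is the drain helper):

    unsigned long flags;

    spin_lock_irqsave(&amp;fq-&gt;lock, flags);
    fq_ring_free(iovad, fq);        /* free entries under the lock */
    spin_unlock_irqrestore(&amp;fq-&gt;lock, flags);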

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Add flush counters to Flush-Queue implementation</title>
<updated>2017-08-15T16:23:51+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-08-10T14:14:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=fb418dab8a4f01dde0c025d15145c589ec02796b'/>
<id>urn:sha1:fb418dab8a4f01dde0c025d15145c589ec02796b</id>
<content type='text'>
There are two counters:

	* fq_flush_start_cnt  - Increased when a TLB flush
	                        is started.

	* fq_flush_finish_cnt - Increased when a TLB flush
				is finished.

The fq_flush_start_cnt is assigned to every Flush-Queue
entry on its creation. When freeing entries from the
Flush-Queue, the value in the entry is compared to the
fq_flush_finish_cnt. The entry can only be freed when its
value is less than the value of fq_flush_finish_cnt.

The reason for these counters is to take advantage of IOMMU TLB
flushes that happened on other CPUs. Those flushes already covered the
Flush-Queue entries on the other CPUs, so those entries can be freed
without flushing the TLB again.

This makes it less likely that the Flush-Queue is full and
saves IOMMU TLB flushes.
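
A simplified sketch of the free check (field names follow the patch):

    /* Entries whose start counter is below the global finish
     * counter were covered by a flush on some CPU: free them. */
    u64 counter = atomic64_read(&amp;iovad-&gt;fq_flush_finish_cnt);

    fq_ring_for_each(idx, fq) {
            if (fq-&gt;entries[idx].counter &gt;= counter)
                    break;
            free_iova_fast(iovad, fq-&gt;entries[idx].iova_pfn,
                           fq-&gt;entries[idx].pages);
            fq-&gt;head = (fq-&gt;head + 1) % IOVA_FQ_SIZE;
    }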

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Implement Flush-Queue ring buffer</title>
<updated>2017-08-15T16:23:51+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-08-10T13:49:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=1928210107edd4fa786199fef6b875d3af3bef88'/>
<id>urn:sha1:1928210107edd4fa786199fef6b875d3af3bef88</id>
<content type='text'>
Add a function that inserts entries into the Flush-Queue ring
buffer. If the buffer is full, call the flush callback and
free the entries.
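
A sketch of the add path, simplified from queue_iova():

    struct iova_fq *fq = get_cpu_ptr(iovad-&gt;fq);
    unsigned idx;

    if (fq_full(fq)) {
            /* Ring full: flush the IOMMU TLB, then free entries. */
            iovad-&gt;flush_cb(iovad);
            fq_ring_free(iovad, fq);
    }

    idx = fq_ring_add(fq);
    fq-&gt;entries[idx].iova_pfn = pfn;
    fq-&gt;entries[idx].pages    = pages;
    fq-&gt;entries[idx].data     = data;

    put_cpu_ptr(iovad-&gt;fq);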

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Add flush-queue data structures</title>
<updated>2017-08-15T16:23:50+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-08-10T12:44:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=42f87e71c3df12d8f29ec1bb7b47772ffaeaf1ee'/>
<id>urn:sha1:42f87e71c3df12d8f29ec1bb7b47772ffaeaf1ee</id>
<content type='text'>
This patch adds the basic data-structures to implement
flush-queues in the generic IOVA code. It also adds the
initialization and destroy routines for these data
structures.

The initialization routine is designed so that the use of
this feature is optional for the users of IOVA code.
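
A sketch of the per-CPU queue layout (sizes and field names as in
this patch):

    #define IOVA_FQ_SIZE    256

    struct iova_fq_entry {
            unsigned long iova_pfn;
            unsigned long pages;    /* number of pages */
            unsigned long data;     /* opaque, passed to the dtor */
    };

    struct iova_fq {
            struct iova_fq_entry entries[IOVA_FQ_SIZE];
            unsigned head, tail;
    };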

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: Fix compile error with CONFIG_IOMMU_IOVA=m</title>
<updated>2017-03-22T23:06:17+00:00</updated>
<author>
<name>Joerg Roedel</name>
<email>jroedel@suse.de</email>
</author>
<published>2017-03-22T23:06:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=b4d8c7aea15efa8c6272c58d78296f8b017c4c6a'/>
<id>urn:sha1:b4d8c7aea15efa8c6272c58d78296f8b017c4c6a</id>
<content type='text'>
The #ifdef in iova.h only catches the CONFIG_IOMMU_IOVA=y
case, so that compilation as a module fails with duplicate
function definition errors. Fix it by catching both cases in
the #if.
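
The change amounts to (sketch):

    -#ifdef CONFIG_IOMMU_IOVA
    +#if IS_ENABLED(CONFIG_IOMMU_IOVA)

IS_ENABLED() evaluates to 1 for both =y and =m, so the inline stubs
are now compiled only when the option is disabled entirely.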

Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu: Add dummy implementations for !IOMMU_IOVA</title>
<updated>2017-03-22T14:54:07+00:00</updated>
<author>
<name>Thierry Reding</name>
<email>treding@nvidia.com</email>
</author>
<published>2017-03-20T19:11:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=21aff52ab2c831c2f07d48e2fa8d4bab26a66992'/>
<id>urn:sha1:21aff52ab2c831c2f07d48e2fa8d4bab26a66992</id>
<content type='text'>
Currently, building code which uses the API guarded by the IOMMU_IOVA
Kconfig symbol will fail to link if IOMMU_IOVA is not enabled. Often
this code will be using the API provided by the IOMMU_API Kconfig
symbol, but support for this can be optional, with code falling back
to contiguous memory. This commit implements dummy functions for the
IOVA API so that it can be compiled out.

With both IOMMU_API and IOMMU_IOVA optional, code can now be built with
or without support for IOMMU without having to resort to #ifdefs in the
user code.
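
The pattern, sketched here for alloc_iova() as one example of the
guarded calls:

    #ifdef CONFIG_IOMMU_IOVA
    struct iova *alloc_iova(struct iova_domain *iovad,
                            unsigned long size,
                            unsigned long limit_pfn,
                            bool size_aligned);
    #else
    static inline struct iova *alloc_iova(struct iova_domain *iovad,
                                          unsigned long size,
                                          unsigned long limit_pfn,
                                          bool size_aligned)
    {
            return NULL;
    }
    #endif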

Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
</content>
</entry>
<entry>
<title>iommu/iova: introduce per-cpu caching to iova allocation</title>
<updated>2016-04-20T19:42:24+00:00</updated>
<author>
<name>Omer Peleg</name>
<email>omer@cs.technion.ac.il</email>
</author>
<published>2016-04-20T08:34:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=9257b4a206fc0229dd5f84b78e4d1ebf3f91d270'/>
<id>urn:sha1:9257b4a206fc0229dd5f84b78e4d1ebf3f91d270</id>
<content type='text'>
IOVA allocation has two problems that impede high-throughput I/O.
First, it can do a linear search over the allocated IOVA ranges.
Second, the rbtree spinlock that serializes IOVA allocations becomes
contended.

Address these problems by creating an API for caching allocated IOVA
ranges, so that the IOVA allocator isn't accessed frequently.  This
patch adds a per-CPU cache, from which CPUs can alloc/free IOVAs
without taking the rbtree spinlock.  The per-CPU caches are backed by
a global cache, to avoid invoking the (linear-time) IOVA allocator
without needing to make the per-CPU cache size excessive.  This design
is based on magazines, as described in "Magazines and Vmem: Extending
the Slab Allocator to Many CPUs and Arbitrary Resources" (currently
available at https://www.usenix.org/legacy/event/usenix01/bonwick.html)

Adding caching on top of the existing rbtree allocator maintains the
property that IOVAs are densely packed in the IO virtual address space,
which is important for keeping IOMMU page table usage low.

To keep the cache size reasonable, we bound the IOVA space a CPU can
cache by 32 MiB (we cache a bounded number of IOVA ranges, and only
ranges of size &lt;= 128 KiB).  The shared global cache is bounded at
4 MiB of IOVA space.
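
A simplified sketch of the resulting allocation fast path (names
follow the patch; the retry-after-cache-flush path is trimmed):

    unsigned long alloc_iova_fast(struct iova_domain *iovad,
                                  unsigned long size,
                                  unsigned long limit_pfn)
    {
            struct iova *new_iova;
            unsigned long iova_pfn;

            /* Fast path: per-CPU magazine, no rbtree spinlock. */
            iova_pfn = iova_rcache_get(iovad, size, limit_pfn);
            if (iova_pfn)
                    return iova_pfn;

            /* Slow path: the (linear-time) rbtree allocator. */
            new_iova = alloc_iova(iovad, size, limit_pfn, true);
            if (!new_iova)
                    return 0;

            return new_iova-&gt;pfn_lo;
    }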

Signed-off-by: Omer Peleg &lt;omer@cs.technion.ac.il&gt;
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison &lt;mad@cs.technion.ac.il&gt;
Reviewed-by: Shaohua Li &lt;shli@fb.com&gt;
Reviewed-by: Ben Serebrin &lt;serebrin@google.com&gt;
[dwmw2: split out VT-d part into a separate patch]
Signed-off-by: David Woodhouse &lt;David.Woodhouse@intel.com&gt;
</content>
</entry>
</feed>
