<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/drivers/gpu/host1x/syncpt.c, branch v6.6.132</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.6.132</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.6.132'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2026-01-11T14:21:11+00:00</updated>
<entry>
<title>gpu: host1x: Fix race in syncpt alloc/free</title>
<updated>2026-01-11T14:21:11+00:00</updated>
<author>
<name>Mainak Sen</name>
<email>msen@nvidia.com</email>
</author>
<published>2025-07-07T09:17:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=6245cce711e2cdb2cc75c0bb8632952e36f8c972'/>
<id>urn:sha1:6245cce711e2cdb2cc75c0bb8632952e36f8c972</id>
<content type='text'>
[ Upstream commit c7d393267c497502fa737607f435f05dfe6e3d9b ]

Fix a race condition between host1x_syncpt_alloc() and
host1x_syncpt_put() by using kref_put_mutex() instead of kref_put()
plus manual mutex locking.

This ensures that no thread can acquire syncpt_mutex after the
refcount drops to zero but before syncpt_release() acquires it,
preventing a syncpoint from being allocated while it is still being
cleaned up by a previous release.

Also remove the explicit mutex locking in syncpt_release(), since
kref_put_mutex() handles it atomically.
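
As a rough sketch (simplified from the driver, with the actual
cleanup elided; field names follow host1x but this is not the
verbatim patch):

  static void syncpt_release(struct kref *ref)
  {
          struct host1x_syncpt *sp =
                  container_of(ref, struct host1x_syncpt, ref);

          /* syncpt_mutex is already held here: kref_put_mutex()
           * acquired it together with the final reference drop. */

          /* ... reset and clean up the syncpoint ... */

          mutex_unlock(&amp;sp-&gt;host-&gt;syncpt_mutex);
  }

  void host1x_syncpt_put(struct host1x_syncpt *sp)
  {
          /* The lock and the final put are atomic with respect to
           * other syncpt_mutex users, closing the alloc/free race. */
          kref_put_mutex(&amp;sp-&gt;ref, syncpt_release,
                         &amp;sp-&gt;host-&gt;syncpt_mutex);
  }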

Signed-off-by: Mainak Sen &lt;msen@nvidia.com&gt;
Fixes: f5ba33fb9690 ("gpu: host1x: Reserve VBLANK syncpoints at initialization")
Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
Link: https://lore.kernel.org/r/20250707-host1x-syncpt-race-fix-v1-1-28b0776e70bc@nvidia.com
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Don't rely on dma_fence_wait_timeout return value</title>
<updated>2023-04-04T12:24:24+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2023-03-01T13:51:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=c1aaee94380874fd40f7bb8417c597aba3f72c75'/>
<id>urn:sha1:c1aaee94380874fd40f7bb8417c597aba3f72c75</id>
<content type='text'>
dma_fence_wait_timeout() (along with a host of other jiffies-based
timeout functions) returns zero both when the wait times out and when
it completes during the last jiffy before the timeout. As such, the
return value alone cannot distinguish success from timeout.

To avoid confusing callers by returning -EAGAIN before the timeout
period has actually elapsed, re-check whether the fence was signaled
after the wait returns.
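
A minimal sketch of the resulting pattern (simplified; error
propagation and fence setup elided):

  err = dma_fence_wait_timeout(fence, true, timeout);
  if (err == 0) {
          /* Zero may mean either a timeout or completion during the
           * last jiffy; the fence state disambiguates the two. */
          if (dma_fence_is_signaled(fence))
                  err = 1;
  }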

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: External timeout/cancellation for fences</title>
<updated>2023-01-26T14:55:38+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2023-01-19T13:09:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d5179020f5ce44fd449790a9c12ef6c1a90a2ca7'/>
<id>urn:sha1:d5179020f5ce44fd449790a9c12ef6c1a90a2ca7</id>
<content type='text'>
Currently all fences have a 30 second timeout to ensure they are
cleaned up if they never complete otherwise. However, this
one-size-fits-all solution doesn't actually fit every case, such as
syncpoint waiting, where we want to allow timeouts longer than
30 seconds. As such, we want to give the caller control over fence
cancellation (and maybe eventually get rid of the internal timeout
altogether).

Add this cancellation mechanism by providing a function that enters
the timeout path on demand, and change the syncpoint wait function to
use it.
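
A hedged sketch of the intended caller pattern (fence creation
elided; of the calls below, only host1x_fence_cancel() is introduced
by this change):

  err = dma_fence_wait_timeout(fence, true, wait_jiffies);
  if (err == 0) {
          /* Caller-chosen timeout expired: enter the timeout path
           * explicitly instead of waiting for an internal timer. */
          host1x_fence_cancel(fence);
  }
  dma_fence_put(fence);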

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Implement syncpoint wait using DMA fences</title>
<updated>2023-01-26T14:55:38+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2023-01-19T13:09:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=f0fb260a0cdb014b22a5f7733279c205f2cba62a'/>
<id>urn:sha1:f0fb260a0cdb014b22a5f7733279c205f2cba62a</id>
<content type='text'>
In anticipation of removal of the intr API, move host1x_syncpt_wait
to use DMA fences instead. As of this patch, this means that waits
have a 30 second maximum timeout because of the implicit timeout
we have with fences, but that will be lifted in a follow-up patch.
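
The rough shape of the fence-based wait (a simplified sketch, not the
exact patch; error handling trimmed):

  fence = host1x_fence_create(sp, threshold);
  if (IS_ERR(fence))
          return PTR_ERR(fence);

  /* The fence carries an implicit 30 second timeout for now. */
  err = dma_fence_wait_timeout(fence, true, timeout);
  dma_fence_put(fence);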

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Always return syncpoint value when waiting</title>
<updated>2022-02-16T16:20:53+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2022-02-07T13:19:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=184b58fa816fb5ee1854daf0d430766422bf2a77'/>
<id>urn:sha1:184b58fa816fb5ee1854daf0d430766422bf2a77</id>
<content type='text'>
The new TegraDRM UAPI uses a syncpoint wait with the timeout set to
zero to read the current syncpoint value. To support that, we need to
always return the syncpoint value when waiting.
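
A short sketch of the resulting behavior (variable names
illustrative):

  u32 value;

  /* timeout == 0 degenerates into a read: value is filled in even
   * though the wait itself reports a timeout. */
  err = host1x_syncpt_wait(sp, threshold, 0, &amp;value);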

Fixes: 44e961381354 ("drm/tegra: Implement syncpoint wait UAPI")
Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Fix hang on Tegra186+</title>
<updated>2022-01-27T17:27:02+00:00</updated>
<author>
<name>Dmitry Osipenko</name>
<email>digetx@gmail.com</email>
</author>
<published>2021-12-23T14:46:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=22d7ee32f1fb6d51ef8cf657c6685ca04745755a'/>
<id>urn:sha1:22d7ee32f1fb6d51ef8cf657c6685ca04745755a</id>
<content type='text'>
Tegra186+ hangs if the host1x hardware is disabled at kernel boot
time because we touch hardware before runtime PM has resumed it. Move
sync point assignment initialization to the RPM-resume callback.
Older SoCs were unaffected because they skip that sync point
initialization.
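
A hedged sketch of the move (the helper name below is illustrative,
not the exact upstream call; clock and reset handling elided):

  static int host1x_runtime_resume(struct device *dev)
  {
          struct host1x *host = dev_get_drvdata(dev);

          /* ... enable clocks and deassert resets first ... */

          /* Only now is it safe to touch the sync point assignment
           * registers: the unit is guaranteed to be powered. */
          host1x_syncpt_assignment_init(host);

          return 0;
  }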

Tested-by: Jon Hunter &lt;jonathanh@nvidia.com&gt; # T186
Reported-by: Jon Hunter &lt;jonathanh@nvidia.com&gt; # T186
Fixes: 6b6776e2ab8a ("gpu: host1x: Add initial runtime PM and OPP support")
Signed-off-by: Dmitry Osipenko &lt;digetx@gmail.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Add initial runtime PM and OPP support</title>
<updated>2021-12-16T13:07:07+00:00</updated>
<author>
<name>Dmitry Osipenko</name>
<email>digetx@gmail.com</email>
</author>
<published>2021-11-30T23:23:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=6b6776e2ab8ac7086cf31ad339411df7681715b7'/>
<id>urn:sha1:6b6776e2ab8ac7086cf31ad339411df7681715b7</id>
<content type='text'>
Add runtime PM and OPP support to the Host1x driver. For starters we
keep host1x always-on, because dynamic power management requires a
major refactoring of the driver code: lots of code paths are missing
RPM handling, and some of these paths will be removed in the future.
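
A minimal sketch of the always-on arrangement (assumed shape, not the
verbatim probe code):

  /* Enable runtime PM, then immediately take a reference so the
   * device stays powered until full RPM coverage lands. */
  pm_runtime_enable(dev);

  err = pm_runtime_resume_and_get(dev);
  if (err)
          return err;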

Reviewed-by: Ulf Hansson &lt;ulf.hansson@linaro.org&gt;
Tested-by: Peter Geis &lt;pgwipeout@gmail.com&gt; # Ouya T30
Tested-by: Paul Fertser &lt;fercerpav@gmail.com&gt; # PAZ00 T20
Tested-by: Nicolas Chauvet &lt;kwizart@gmail.com&gt; # PAZ00 T20 and TK1 T124
Tested-by: Matt Merhar &lt;mattmerhar@protonmail.com&gt; # Ouya T30
Signed-off-by: Dmitry Osipenko &lt;digetx@gmail.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Add no-recovery mode</title>
<updated>2021-08-10T12:40:23+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-06-10T11:04:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=c78f837ae3d1e532ff4eb90155b42d7a2e892a3f'/>
<id>urn:sha1:c78f837ae3d1e532ff4eb90155b42d7a2e892a3f</id>
<content type='text'>
Add a new property for jobs to enable or disable recovery, i.e.
CPU increments of syncpoints to the maximum value on job timeout.
This allows for a more solid model for hung jobs, where userspace
doesn't need to guess whether a syncpoint increment happened because
the job completed or because the job timeout was triggered.

On job timeout, we stop the channel, NOP all future jobs on the
channel using the same syncpoint, mark the syncpoint as locked
and resume the channel from the next job, if any.

The future jobs are NOPed because, without the CPU increments,
the value of the syncpoint is no longer synchronized,
and any waiters would become confused if a future job incremented
the syncpoint. The syncpoint is marked locked to ensure that any
future jobs cannot increment the syncpoint either, until the
application has recognized the situation and reallocated the
syncpoint.
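
The opt-out itself is a per-job flag (sketch; the job setup around it
is elided, and the field name follows this change):

  /* Timeouts now NOP this job's queued successors instead of
   * force-incrementing the syncpoint on the CPU. */
  job-&gt;syncpt_recovery = false;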

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Reserve VBLANK syncpoints at initialization</title>
<updated>2021-03-31T15:42:13+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=f5ba33fb9690566c382624637125827b5512e766'/>
<id>urn:sha1:f5ba33fb9690566c382624637125827b5512e766</id>
<content type='text'>
On T20-T148 chips, the bootloader can set up a boot splash screen
with DC configured to increment syncpoints 26/27 at VBLANK. Because
of this, we shouldn't allow these syncpoints to be allocated until DC
has been reset and will no longer increment them in the background.

As such, on these chips, reserve those two syncpoints at
initialization, and only mark them free once the DC
driver has indicated it's safe to do so.
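
A sketch of the reservation at initialization time (assuming the
per-SoC flag introduced by this change; simplified):

  if (host-&gt;info-&gt;reserve_vblank_syncpts) {
          /* Hold a reference on the two VBLANK syncpoints so the
           * allocator cannot hand them out until DC releases them. */
          kref_init(&amp;host-&gt;syncpt[26].ref);
          kref_init(&amp;host-&gt;syncpt[27].ref);
  }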

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
<entry>
<title>gpu: host1x: Reset max value when freeing a syncpoint</title>
<updated>2021-03-31T15:42:13+00:00</updated>
<author>
<name>Mikko Perttunen</name>
<email>mperttunen@nvidia.com</email>
</author>
<published>2021-03-29T13:38:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=aded42ada6eacfa11d349b158e993f66e4741aa7'/>
<id>urn:sha1:aded42ada6eacfa11d349b158e993f66e4741aa7</id>
<content type='text'>
With job recovery becoming optional, syncpoints may have a mismatch
between their value and max value when freed. As such, when freeing,
set the max value to the current value of the syncpoint so that it
is in a sane state for the next user.
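
A minimal sketch of the release-path change (simplified; the rest of
the release path is elided):

  static void syncpt_release(struct kref *ref)
  {
          struct host1x_syncpt *sp =
                  container_of(ref, struct host1x_syncpt, ref);

          /* Resynchronize max_val with the current hardware value so
           * the next user starts from a sane state. */
          atomic_set(&amp;sp-&gt;max_val, host1x_syncpt_read(sp));

          /* ... rest of the release path ... */
  }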

Signed-off-by: Mikko Perttunen &lt;mperttunen@nvidia.com&gt;
Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
</content>
</entry>
</feed>
