path: root/include/linux/reservation.h
Age | Commit message | Author | Files | Lines
2019-02-28 | dma-buf: clarify locking documentation for reservation_object_get_excl | Lucas Stach | 1 | -1/+2
The documentation was misleading, as for a lot of use-cases holding the RCU read side lock is sufficient.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180111165302.25556-2-l.stach@pengutronix.de
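To illustrate what the clarified rule permits, a minimal read-side sketch; the wrapper function and the "resv" parameter are hypothetical, while dma_fence_get_rcu_safe() and the fence_excl member come from the kernel headers:

    #include <linux/dma-fence.h>
    #include <linux/rcupdate.h>
    #include <linux/reservation.h>

    /* Readers only need the RCU read side lock; writers still take resv->lock. */
    static struct dma_fence *peek_exclusive_fence(struct reservation_object *resv)
    {
            struct dma_fence *fence;

            rcu_read_lock();
            /* dma_fence_get_rcu_safe() re-checks the pointer, so a fence that is
             * concurrently replaced or freed is never handed back */
            fence = dma_fence_get_rcu_safe(&resv->fence_excl);
            rcu_read_unlock();

            return fence;   /* drop with dma_fence_put() when done */
    }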
2018-10-25 | dma-buf: test shared slot allocation when mutex debugging is active | Christian König | 1 | -0/+5
Set shared_max to the number of shared fences right before we release the lock. This way every attempt to add a shared fence without previously reserving a slot will cause an error.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Acked-by: Junwei Zhang <Jerry.Zhang@amd.com>
Link: https://patchwork.kernel.org/patch/10626147/
2018-10-25 | dma-buf: allow reserving more than one shared fence slot | Christian König | 1 | -1/+2
Let's support simultaneous submissions to multiple engines.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.kernel.org/patch/10626149/
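For illustration, a sketch of the reserve-then-add pattern this enables for multi-engine submissions; the wrapper, "fences" and "count" are hypothetical, the reservation_object_* calls are the ones from this header:

    #include <linux/dma-fence.h>
    #include <linux/reservation.h>

    static int publish_submission_fences(struct reservation_object *resv,
                                         struct dma_fence **fences,
                                         unsigned int count)
    {
            unsigned int i;
            int ret;

            ret = reservation_object_lock(resv, NULL);
            if (ret)
                    return ret;

            /* reserve all slots up front so the adds below cannot fail;
             * with mutex debugging the patch above catches missing reserves */
            ret = reservation_object_reserve_shared(resv, count);
            if (!ret) {
                    for (i = 0; i < count; i++)
                            reservation_object_add_shared_fence(resv, fences[i]);
            }

            reservation_object_unlock(resv);
            return ret;
    }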
2018-10-25 | dma-buf: remove shared fence staging in reservation object | Christian König | 1 | -4/+0
No need for that any more. Just replace the list when there isn't enough room any more for the additional fence.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.kernel.org/patch/10626143/
2017-11-13 | dma-buf: add reservation_object_lock_interruptible() | Christian König | 1 | -0/+23
That's the only wrapper function missing and necessary to clean up TTM.
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171109085909.1653-3-christian.koenig@amd.com
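A short usage sketch, assuming the caller already drives a ww_acquire_ctx acquire sequence; everything except the reservation_object_* calls is illustrative:

    #include <linux/reservation.h>
    #include <linux/ww_mutex.h>

    static int update_fences_interruptible(struct reservation_object *resv,
                                           struct ww_acquire_ctx *ctx)
    {
            int ret;

            /* returns -EINTR if a signal arrives, or -EDEADLK on a lock-order
             * conflict that the caller has to back off from */
            ret = reservation_object_lock_interruptible(resv, ctx);
            if (ret)
                    return ret;

            /* ... inspect or replace fences while the lock is held ... */

            reservation_object_unlock(resv);
            return 0;
    }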
2017-08-14 | dma-buf: add reservation_object_copy_fences (v2) | Christian König | 1 | -0/+3
Allows us to copy all the fences in a reservation object to another one.
v2: handle NULL src_list
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1502384509-10465-2-git-send-email-alexander.deucher@amd.com
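A usage sketch, e.g. for cloning a buffer object; the wrapper is hypothetical, and it assumes the destination object is the one that has to be locked across the copy:

    #include <linux/reservation.h>

    static int clone_resv_fences(struct reservation_object *dst,
                                 struct reservation_object *src)
    {
            int ret;

            ret = reservation_object_lock(dst, NULL);
            if (ret)
                    return ret;

            /* duplicates both the shared list and the exclusive fence */
            ret = reservation_object_copy_fences(dst, src);

            reservation_object_unlock(dst);
            return ret;
    }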
2017-02-27 | dma-buf/reservation: Wrap ww_mutex_trylock | Chris Wilson | 1 | -0/+20
In a similar fashion to reservation_object_lock() and reservation_object_unlock(), ww_mutex_trylock is also useful and so is worth wrapping for consistency.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Add __must_check Joonas wants.]
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170221093000.22802-1-chris@chris-wilson.co.uk
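A sketch of a non-blocking busy check built on the new wrapper; the function and its pessimistic fallback are illustrative, and only the exclusive fence is inspected for brevity:

    #include <linux/dma-fence.h>
    #include <linux/reservation.h>

    static bool resv_is_busy(struct reservation_object *resv)
    {
            struct dma_fence *excl;
            bool busy = true;       /* report "busy" when contended */

            /* __must_check: the result decides whether we may touch resv */
            if (!reservation_object_trylock(resv))
                    return busy;

            /* reservation_object_get_excl() requires the lock to be held */
            excl = reservation_object_get_excl(resv);
            busy = excl && !dma_fence_is_signaled(excl);

            reservation_object_unlock(resv);
            return busy;
    }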
2016-12-12 | dma-buf: Provide wrappers for reservation's lock | Chris Wilson | 1 | -0/+34
Joonas complained that writing ww_mutex_lock(&resv->lock, ctx) was too intrusive compared to reservation_object_lock(resv, ctx);
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161115154642.31850-1-chris@chris-wilson.co.uk
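A minimal sketch of the wrappers replacing the open-coded ww_mutex calls; the function, "fence" and the NULL acquire context (single-lock case) are illustrative:

    #include <linux/dma-fence.h>
    #include <linux/reservation.h>

    static int publish_excl_fence(struct reservation_object *resv,
                                  struct dma_fence *fence)
    {
            int ret;

            /* was: ww_mutex_lock(&resv->lock, NULL) */
            ret = reservation_object_lock(resv, NULL);
            if (ret)
                    return ret;

            reservation_object_add_excl_fence(resv, fence);

            /* was: ww_mutex_unlock(&resv->lock) */
            reservation_object_unlock(resv);
            return 0;
    }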
2016-11-15 | dma-buf: Use fence_get_rcu_safe() for retrieving the exclusive fence | Chris Wilson | 1 | -9/+6
The current code is subject to a race where we may try to acquire a reference on a stale fence:
[13703.335118] WARNING: CPU: 1 PID: 14975 at ./include/linux/kref.h:46 i915_gem_object_wait+0x1a3/0x1c0
[13703.335184] Modules linked in:
[13703.335202] CPU: 1 PID: 14975 Comm: gem_concurrent_ Not tainted 4.9.0-rc4+ #26
[13703.335216] Hardware name: / , BIOS PYBSWCEL.86A.0027.2015.0507.1758 05/07/2015
[13703.335233] ffffc90002f5bcc8 ffffffff812807de 0000000000000000 0000000000000000
[13703.335257] ffffc90002f5bd08 ffffffff81073811 0000002e80000000 ffff88026bf7c780
[13703.335279] 7fffffffffffffff 0000000000000001 ffff88027045a550 ffff88026bf7c780
[13703.335301] Call Trace:
[13703.335316] [<ffffffff812807de>] dump_stack+0x4d/0x6f
[13703.335331] [<ffffffff81073811>] __warn+0xc1/0xe0
[13703.335343] [<ffffffff810738e8>] warn_slowpath_null+0x18/0x20
[13703.335355] [<ffffffff813ac443>] i915_gem_object_wait+0x1a3/0x1c0
[13703.335367] [<ffffffff813ae8ec>] i915_gem_set_domain_ioctl+0xcc/0x330
[13703.335386] [<ffffffff813534ab>] drm_ioctl+0x1cb/0x410
[13703.335400] [<ffffffff813ae820>] ? i915_gem_obj_prepare_shmem_write+0x1d0/0x1d0
[13703.335416] [<ffffffff8135359b>] ? drm_ioctl+0x2bb/0x410
[13703.335429] [<ffffffff8117d32f>] do_vfs_ioctl+0x8f/0x5c0
[13703.335442] [<ffffffff8117d89c>] SyS_ioctl+0x3c/0x70
[13703.335456] [<ffffffff815a07a4>] entry_SYSCALL_64_fastpath+0x17/0x98
[13703.335558] ---[ end trace fd24176416ba6981 ]---
[13703.382778] general protection fault: 0000 [#1] SMP
[13703.382802] Modules linked in:
[13703.382816] CPU: 1 PID: 14967 Comm: gem_concurrent_ Tainted: G W 4.9.0-rc4+ #26
[13703.382828] Hardware name: / , BIOS PYBSWCEL.86A.0027.2015.0507.1758 05/07/2015
[13703.382841] task: ffff880275458000 task.stack: ffffc90002f18000
[13703.382849] RIP: 0010:[<ffffffff813b3534>] [<ffffffff813b3534>] i915_gem_request_retire+0x2b4/0x320
[13703.382870] RSP: 0018:ffffc90002f1bbc8 EFLAGS: 00010293
[13703.382878] RAX: dead000000000200 RBX: ffff88026bf7dce8 RCX: dead000000000100
[13703.382887] RDX: dead000000000100 RSI: ffff88026bf7c930 RDI: ffff88026bf7dd00
[13703.382897] RBP: ffffc90002f1bbf8 R08: 00000000ffffffff R09: ffff88026b89a000
[13703.382905] R10: 0000000000000001 R11: ffff88026bbe8fe0 R12: ffff88026bf7c000
[13703.382913] R13: ffff880275af8000 R14: ffff88026bf7c180 R15: dead000000000200
[13703.382922] FS: 00007f89e787d740(0000) GS:ffff88027fd00000(0000) knlGS:0000000000000000
[13703.382934] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[13703.382942] CR2: 00007f9053d2e000 CR3: 000000026d414000 CR4: 00000000001006e0
[13703.382951] Stack:
[13703.382958] ffff880275413000 ffffc90002f1bde8 ffff880275af8000 ffff880274e8a600
[13703.382976] ffff880276a06000 ffffc90002f1bde8 ffffc90002f1bc38 ffffffff813b48c5
[13703.382995] ffffc90002f1bc00 ffffc90002f1bde8 ffff88026972a440 0000000000000000
[13703.383021] Call Trace:
[13703.383032] [<ffffffff813b48c5>] i915_gem_request_alloc+0xa5/0x350
[13703.383043] [<ffffffff813a17c3>] i915_gem_do_execbuffer.isra.41+0x7b3/0x18b0
[13703.383055] [<ffffffff813b144c>] ? i915_gem_object_get_sg+0x25c/0x2b0
[13703.383065] [<ffffffff813b1d4d>] ? i915_gem_object_get_page+0x1d/0x50
[13703.383076] [<ffffffff813b28cc>] ? i915_gem_pwrite_ioctl+0x66c/0x6d0
[13703.383086] [<ffffffff813a2c25>] i915_gem_execbuffer2+0x95/0x1e0
[13703.383096] [<ffffffff813534ab>] drm_ioctl+0x1cb/0x410
[13703.383105] [<ffffffff813a2b90>] ? i915_gem_execbuffer+0x2d0/0x2d0
[13703.383117] [<ffffffff810c3df0>] ? hrtimer_start_range_ns+0x1a0/0x310
[13703.383128] [<ffffffff8117d32f>] do_vfs_ioctl+0x8f/0x5c0
[13703.383140] [<ffffffff810c60e8>] ? SyS_timer_settime+0x118/0x1a0
[13703.383150] [<ffffffff8117d89c>] SyS_ioctl+0x3c/0x70
[13703.383162] [<ffffffff815a07a4>] entry_SYSCALL_64_fastpath+0x17/0x98
[13703.383172] Code: 49 39 c6 48 8d 70 e8 48 8d 5f e8 75 16 eb 47 48 8d 43 18 48 8b 53 18 48 89 de 49 39 c6 48 8d 5a e8 74 33 48 8b 56 08 48 8b 46 10 <48> 89 42 08 48 89 10 f6 46 38 01 48 89 4e 08 4c 89 7e 10 74 cf
[13703.383557] RIP [<ffffffff813b3534>] i915_gem_request_retire+0x2b4/0x320
[13703.383570] RSP <ffffc90002f1bbc8>
[13703.383586] ---[ end trace fd24176416ba6982 ]---
This is fixed by using kref_get_unless_zero() as a full memory barrier to validate the fence is still the current exclusive fence before returning it back to the caller. (Note the fix only requires using dma_fence_get_rcu() and correct handling, but we may as well use the helper rather than inline equivalent code.)
Note: Issue can only be hit with the i915 driver.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Fixes: d07f0e59b2c7 ("drm/i915: Move GEM activity tracking into a common struct reservation_object")
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161114115540.31155-1-chris@chris-wilson.co.uk
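To make the fix concrete, a sketch of the acquire-and-recheck loop that fence_get_rcu_safe() encapsulates; the standalone helper below is illustrative, while the fence_excl member and the dma_fence calls are real:

    #include <linux/dma-fence.h>
    #include <linux/rcupdate.h>
    #include <linux/reservation.h>

    static struct dma_fence *get_excl_fence_safe(struct reservation_object *resv)
    {
            struct dma_fence *fence;

            rcu_read_lock();
            for (;;) {
                    fence = rcu_dereference(resv->fence_excl);
                    if (!fence)
                            break;

                    /* refcount already hit zero: the fence is being freed, retry */
                    if (!dma_fence_get_rcu(fence))
                            continue;

                    /* still the current exclusive fence? then we are done */
                    if (fence == rcu_access_pointer(resv->fence_excl))
                            break;

                    /* replaced under us: drop the stale reference and retry */
                    dma_fence_put(fence);
            }
            rcu_read_unlock();

            return fence;
    }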
2016-10-25 | dma-buf: Rename struct fence to dma_fence | Chris Wilson | 1 | -14/+14
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room. A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing!
(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel
coccinelle script:
@@
@@
- struct fence
+ struct dma_fence
@@
@@
- struct fence_ops
+ struct dma_fence_ops
@@
@@
- struct fence_cb
+ struct dma_fence_cb
@@
@@
- struct fence_array
+ struct dma_fence_array
@@
@@
- enum fence_flag_bits
+ enum dma_fence_flag_bits
@@
@@
(
- fence_init
+ dma_fence_init
|
- fence_release
+ dma_fence_release
|
- fence_free
+ dma_fence_free
|
- fence_get
+ dma_fence_get
|
- fence_get_rcu
+ dma_fence_get_rcu
|
- fence_put
+ dma_fence_put
|
- fence_signal
+ dma_fence_signal
|
- fence_signal_locked
+ dma_fence_signal_locked
|
- fence_default_wait
+ dma_fence_default_wait
|
- fence_add_callback
+ dma_fence_add_callback
|
- fence_remove_callback
+ dma_fence_remove_callback
|
- fence_enable_sw_signaling
+ dma_fence_enable_sw_signaling
|
- fence_is_signaled_locked
+ dma_fence_is_signaled_locked
|
- fence_is_signaled
+ dma_fence_is_signaled
|
- fence_is_later
+ dma_fence_is_later
|
- fence_later
+ dma_fence_later
|
- fence_wait_timeout
+ dma_fence_wait_timeout
|
- fence_wait_any_timeout
+ dma_fence_wait_any_timeout
|
- fence_wait
+ dma_fence_wait
|
- fence_context_alloc
+ dma_fence_context_alloc
|
- fence_array_create
+ dma_fence_array_create
|
- to_fence_array
+ to_dma_fence_array
|
- fence_is_array
+ dma_fence_is_array
|
- trace_fence_emit
+ trace_dma_fence_emit
|
- FENCE_TRACE
+ DMA_FENCE_TRACE
|
- FENCE_WARN
+ DMA_FENCE_WARN
|
- FENCE_ERR
+ DMA_FENCE_ERR
)
(
...
)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
2016-05-31 | reservation: add headerdoc comments | Rob Clark | 1 | -0/+53
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
2016-05-08 | reservation: add reservation_object_get_excl_rcu() | Rob Clark | 1 | -0/+18
In the atomic modesetting path, each driver simply wants to grab a ref to the exclusive fence from a reservation object to store in the incoming drm_plane_state, without doing the whole RCU dance. Since each driver will need to do this, let's make a helper.
v2: rename to _rcu instead of _unlocked to be more consistent
Signed-off-by: Rob Clark <robdclark@gmail.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
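A sketch of the intended caller, using the pre-rename struct fence naming that was current for this patch; the wrapper is hypothetical, and the returned reference would be stored in the driver's drm_plane_state:

    #include <linux/fence.h>
    #include <linux/reservation.h>

    static struct fence *grab_excl_for_plane(struct reservation_object *resv)
    {
            /* takes its own reference (or returns NULL) without the ww_mutex
             * or an open-coded RCU retry loop; drop with fence_put() */
            return reservation_object_get_excl_rcu(resv);
    }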
2014-07-09 | reservation: add support for read-only access using rcu | Maarten Lankhorst | 1 | -16/+36
This adds some extra functions to deal with rcu.
reservation_object_get_fences_rcu() will obtain the list of shared and exclusive fences without obtaining the ww_mutex.
reservation_object_wait_timeout_rcu() will wait on all fences of the reservation_object, without obtaining the ww_mutex.
reservation_object_test_signaled_rcu() will test if all fences of the reservation_object are signaled without using the ww_mutex.
reservation_object_get_excl and reservation_object_get_list require the reservation object to be held, updating requires write_seqcount_begin/end. If only the exclusive fence is needed, rcu_dereference followed by fence_get_rcu can be used, if the shared fences are needed it's recommended to use the supplied functions.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
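As an example of the lock-free wait helper, a sketch that just waits for a buffer to go idle; the wrapper, the 10 second timeout and the error mapping are illustrative:

    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/reservation.h>

    static int wait_resv_idle(struct reservation_object *resv)
    {
            long lret;

            /* all fences (shared + exclusive), interruptible, 10 second cap,
             * and no ww_mutex taken anywhere in the process */
            lret = reservation_object_wait_timeout_rcu(resv, true, true,
                                                       msecs_to_jiffies(10000));
            if (lret < 0)
                    return lret;            /* e.g. -ERESTARTSYS */
            if (!lret)
                    return -ETIMEDOUT;      /* fences still pending */
            return 0;
    }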
2014-07-09 | reservation: update api and add some helpers | Maarten Lankhorst | 1 | -7/+49
Move the list of shared fences to a struct, and return it in reservation_object_get_list(). Add reservation_object_get_excl to get the exclusive fence.
Add reservation_object_reserve_shared(), which reserves space in the reservation_object for 1 more shared fence.
reservation_object_add_shared_fence() and reservation_object_add_excl_fence() are used to assign a new fence to a reservation_object pointer, to complete a reservation.
Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
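A sketch of completing a reservation with these helpers, using the struct fence naming and the one-slot reserve_shared() signature introduced here (both are reworked by later patches above); the wrapper itself is hypothetical:

    #include <linux/fence.h>
    #include <linux/reservation.h>
    #include <linux/ww_mutex.h>

    static int append_shared_fence(struct reservation_object *resv,
                                   struct fence *fence)
    {
            int ret;

            ww_mutex_lock(&resv->lock, NULL);

            /* make sure a slot exists before publishing the fence */
            ret = reservation_object_reserve_shared(resv);
            if (!ret)
                    reservation_object_add_shared_fence(resv, fence);

            ww_mutex_unlock(&resv->lock);
            return ret;
    }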
2014-07-09 | reservation: add support for fences to enable cross-device synchronisation | Maarten Lankhorst | 1 | -1/+19
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-07-12 | mutex: Move ww_mutex definitions to ww_mutex.h | Maarten Lankhorst | 1 | -1/+1
Move the definitions for wound/wait mutexes out to a separate header, ww_mutex.h. This reduces clutter in mutex.h, and increases readability.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Link: http://lkml.kernel.org/r/51D675DC.3000907@canonical.com
[ Tidied up the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-06-28 | reservation: cross-device reservation support, v4 | Maarten Lankhorst | 1 | -0/+62
This adds support for a generic reservations framework that can be hooked up to ttm and dma-buf and allows easy sharing of reservations across devices.
The idea is that a dma-buf and ttm object both will get a pointer to a struct reservation_object, which has to be reserved before anything is done with the contents of the dma-buf.
Changes since v1:
- Fix locking issue in ticket_reserve, which could cause mutex_unlock to be called too many times.
Changes since v2:
- All fence related calls and members have been taken out for now, what's left is the bare minimum to be useful for ttm locking conversion.
Changes since v3:
- Removed helper functions too. The documentation has an example implementation for locking. With the move to ww_mutex there is no need to have much logic any more.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
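A sketch of how a driver would use the bare framework added here (no fences yet): embed a reservation_object in a buffer and reserve it through the common ww_mutex class before touching the contents. The buffer type and functions are illustrative:

    #include <linux/reservation.h>
    #include <linux/ww_mutex.h>

    struct my_buffer {
            struct reservation_object resv;
            /* ... backing storage, driver state ... */
    };

    static void my_buffer_init(struct my_buffer *buf)
    {
            /* initializes resv.lock against the common reservation_ww_class */
            reservation_object_init(&buf->resv);
    }

    static void my_buffer_access(struct my_buffer *buf)
    {
            /* single-lock case: no acquire context, so no -EDEADLK to handle */
            ww_mutex_lock(&buf->resv.lock, NULL);
            /* ... the dma-buf / ttm object contents may be used here ... */
            ww_mutex_unlock(&buf->resv.lock);
    }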