author    Matthew Brost <matthew.brost@intel.com>    2024-04-02 01:19:11 +0300
committer Lucas De Marchi <lucas.demarchi@intel.com> 2024-04-03 17:11:00 +0300
commit    37c15c4aae1fe3f67efd2641db8d8c25c2d524ab (patch)
tree      222ddb9241237b0e8400a4854f292e0988981f26 /drivers/gpu/drm/xe/xe_preempt_fence.c
parent    9f18b55b6d3f77b9e778257efdec385d2d5dfa8e (diff)
drm/xe: Use ordered wq for preempt fence waiting
Preempt fences can sleep waiting for an exec queue suspend operation to
complete. If the system_unbound_wq is used for waiting and the number of
waiters exceeds max_active, this will result in other users of the
system_unbound_wq getting starved. Use a device private work queue for
preempt fences to avoid starvation of the system_unbound_wq.

Even though suspend operations can complete out-of-order, all suspend
operations within a VM need to complete before the preempt rebind worker
can start. With that, use a device private ordered wq for preempt fence
waiting.

v2:
 - Add comment about cleanup on failure (Matt R)
 - Update commit message (Lucas)

Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240401221913.139672-2-matthew.brost@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
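The workqueue this patch queues onto (preempt_fence_wq, visible in the hunk
below) is allocated elsewhere in the driver, not in this file. As a rough,
hedged sketch of the pattern the commit message describes, a device-private
ordered workqueue would typically be created at device init with
alloc_ordered_workqueue() and destroyed on teardown or on a later init
failure. The helper names and the "xe-preempt-fence" label below are
illustrative assumptions, not taken from the patch; only the field name
preempt_fence_wq comes from the diff.

	/*
	 * Hedged sketch only (not part of this patch): allocating the
	 * device-private ordered workqueue that the hunk below queues onto.
	 * Assumes the usual kernel headers (linux/workqueue.h) and the xe
	 * driver's struct xe_device.
	 */
	static int xe_preempt_fence_wq_init_sketch(struct xe_device *xe)
	{
		/*
		 * An ordered wq runs at most one work item at a time, in
		 * queueing order, so preempt fence waiters no longer compete
		 * for system_unbound_wq's max_active slots.
		 */
		xe->preempt_fence_wq = alloc_ordered_workqueue("xe-preempt-fence", 0);
		if (!xe->preempt_fence_wq)
			return -ENOMEM;

		return 0;
	}

	static void xe_preempt_fence_wq_fini_sketch(struct xe_device *xe)
	{
		/* Cleanup on device teardown or on a later init failure. */
		destroy_workqueue(xe->preempt_fence_wq);
	}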
Diffstat (limited to 'drivers/gpu/drm/xe/xe_preempt_fence.c')
-rw-r--r--	drivers/gpu/drm/xe/xe_preempt_fence.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
index 7bce2a332603..7d50c6e89d8e 100644
--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
+++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
@@ -49,7 +49,7 @@ static bool preempt_fence_enable_signaling(struct dma_fence *fence)
 	struct xe_exec_queue *q = pfence->q;
 
 	pfence->error = q->ops->suspend(q);
-	queue_work(system_unbound_wq, &pfence->preempt_work);
+	queue_work(q->vm->xe->preempt_fence_wq, &pfence->preempt_work);
 	return true;
 }
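For context on why this waiting needs its own queue: the work item queued in
the hunk above sleeps until the suspend issued by q->ops->suspend() completes,
then signals the preempt fence. A hedged approximation of its shape is
sketched below; the suspend_wait() op and the embedded fence field name (base)
are assumptions, while preempt_work, error and q are taken from the diff.

	/*
	 * Hedged approximation, not the driver's actual code: the rough shape
	 * of the work item queued above.  It can block waiting for the
	 * suspend to finish, which is why parking many of these on
	 * system_unbound_wq could starve its other users.
	 */
	static void preempt_fence_work_sketch(struct work_struct *w)
	{
		struct xe_preempt_fence *pfence =
			container_of(w, struct xe_preempt_fence, preempt_work);
		struct xe_exec_queue *q = pfence->q;

		if (!pfence->error)
			q->ops->suspend_wait(q);	/* may sleep for a while */
		else
			dma_fence_set_error(&pfence->base, pfence->error);

		dma_fence_signal(&pfence->base);
	}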