path: root/drivers/gpu/drm/i915/i915_vma.c
author    Chris Wilson <chris@chris-wilson.co.uk>    2017-06-16 17:05:24 +0300
committer Chris Wilson <chris@chris-wilson.co.uk>    2017-06-16 18:54:05 +0300
commit    7dd4f6729f9243bd7046c6f04c107a456bda38eb (patch)
tree      b3f453d82aee261b40dc54142966c1fb24e9c2a2 /drivers/gpu/drm/i915/i915_vma.c
parent    1a71cf2fa646799d4397a49b223549d8617fece0 (diff)
drm/i915: Async GPU relocation processing
If the user requires patching of their batch or auxiliary buffers, we
currently make the alterations on the CPU. If they are active on the GPU
at the time, we wait under the struct_mutex for them to finish executing
before we rewrite the contents. This happens, for example, when shared
relocation trees are used between different contexts with separate
address spaces (and the buffers then have different addresses in each):
the 3D state must be adjusted between execution on each context.

However, we don't need to use the CPU to do the relocation patching; we
can instead queue commands for the GPU to perform it and use fences to
serialise the operation with both current and future activity, so that
the operation on the GPU appears just as atomic as performing it
immediately.

Performing the relocation rewrites on the GPU is not free: in terms of
pure throughput, the number of relocations/s is roughly halved - but,
more importantly, so is the time spent under the struct_mutex.

v2: Break out the request/batch allocation for clearer error flow.
v3: A few asserts to ensure rq ordering is maintained.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
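To illustrate the idea, a minimal sketch of queueing a GPU write for one
relocation instead of patching it with the CPU. The eb context parameter
and the reloc_gpu_reserve() helper are hypothetical stand-ins for this
sketch, not the kernel's actual API; MI_STORE_DWORD_IMM_GEN4,
lower_32_bits() and upper_32_bits() are real i915/kernel identifiers, and
the 4-dword encoding shown is the gen8+ 64-bit-address form.

    /*
     * Sketch only: emit a GPU command that writes the relocated
     * address into the target buffer, rather than mapping the pages
     * and writing with the CPU. reloc_gpu_reserve() is a hypothetical
     * helper returning command-stream space in a pool batch whose
     * request is fenced against the target's current activity.
     */
    static int reloc_entry_gpu(struct i915_execbuffer *eb,
                               u64 offset, u64 target_addr)
    {
            u32 *cs;

            /* 4 dwords: opcode + 64b destination address + value */
            cs = reloc_gpu_reserve(eb, 4);  /* hypothetical helper */
            if (IS_ERR(cs))
                    return PTR_ERR(cs);

            *cs++ = MI_STORE_DWORD_IMM_GEN4;       /* gen8+: 64b addr */
            *cs++ = lower_32_bits(offset);         /* where to patch */
            *cs++ = upper_32_bits(offset);
            *cs++ = lower_32_bits(target_addr);    /* relocated value */

            return 0;
    }

Because the pool batch carrying these commands is submitted as its own
request, the driver's existing fence tracking orders the write after the
target's current activity and before any future use, which is what makes
the rewrite appear atomic to userspace.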
Diffstat (limited to 'drivers/gpu/drm/i915/i915_vma.c')
0 files changed, 0 insertions, 0 deletions