author    Dave Airlie <airlied@redhat.com>  2022-07-22 08:51:26 +0300
committer Dave Airlie <airlied@redhat.com>  2022-07-22 08:51:31 +0300
commit    417c1c1963549e9a48b83ada59d90258e38c6594 (patch)
tree      1b5c36833e1c7b2ea3756767f498bf615436d952 /Documentation/gpu
parent    cb6b81b21bd9cf09d72b7fe711be1b55001eb166 (diff)
parent    17cd10a44a8962860ff4ba351b2a290e752dbbde (diff)
download  linux-417c1c1963549e9a48b83ada59d90258e38c6594.tar.xz
Merge tag 'drm-intel-gt-next-2022-07-13' of git://anongit.freedesktop.org/drm/drm-intel into drm-next
Driver uAPI changes:
- All related to Small BAR support (all by Matt Auld):
* add probed_cpu_visible_size
* expose the avail memory region tracking
* apply ALLOC_GPU_ONLY by default
* add NEEDS_CPU_ACCESS hint
* tweak error capture on recoverable contexts
Driver highlights:
- Add Small BAR support (Matt)
- Add MeteorLake support (RK)
- Add support for LMEM PCIe resizable BAR (Akeem)
Driver important fixes:
- ttm related fixes (Matt Auld)
- Fix a performance regression related to waitboost (Chris)
- Fix GT resets (Chris)
Driver others:
- Add GuC SLPC selftest (Vinay)
- Fix ADL-N GuC load (Daniele)
- Add platform workaround (Gustavo, Matt Roper)
- DG2 and ATS-M device ID updates (Matt Roper)
- Add VM_BIND doc rfc with uAPI documentation (Niranjana)
- Fix use-after-free in vma destruction (Thomas)
- Async flush of GuC log regions (Alan)
- Fixes in selftests (Chris, Dan, Andrzej)
- Convert to drm_dbg (Umesh)
- Disable OA sseu config param for newer hardware (Umesh)
- Multi-cast register steering changes (Matt Roper)
- Add lmem_bar_size modparam (Priyanka)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Ys85pcMYLkqF/HtB@intel.com
Diffstat (limited to 'Documentation/gpu')
-rw-r--r--  Documentation/gpu/rfc/i915_small_bar.h    189
-rw-r--r--  Documentation/gpu/rfc/i915_small_bar.rst   47
-rw-r--r--  Documentation/gpu/rfc/i915_vm_bind.h      291
-rw-r--r--  Documentation/gpu/rfc/i915_vm_bind.rst    245
-rw-r--r--  Documentation/gpu/rfc/index.rst             8
5 files changed, 780 insertions, 0 deletions
diff --git a/Documentation/gpu/rfc/i915_small_bar.h b/Documentation/gpu/rfc/i915_small_bar.h
new file mode 100644
index 000000000000..6003c81d5aa4
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_small_bar.h
@@ -0,0 +1,189 @@
+/**
+ * struct __drm_i915_memory_region_info - Describes one region as known to the
+ * driver.
+ *
+ * Note this is using both struct drm_i915_query_item and struct drm_i915_query.
+ * For this new query we are adding the new query id DRM_I915_QUERY_MEMORY_REGIONS
+ * at &drm_i915_query_item.query_id.
+ */
+struct __drm_i915_memory_region_info {
+        /** @region: The class:instance pair encoding */
+        struct drm_i915_gem_memory_class_instance region;
+
+        /** @rsvd0: MBZ */
+        __u32 rsvd0;
+
+        /**
+         * @probed_size: Memory probed by the driver
+         *
+         * Note that it should not be possible to ever encounter a zero value
+         * here, also note that no current region type will ever return -1 here.
+         * Although for future region types, this might be a possibility. The
+         * same applies to the other size fields.
+         */
+        __u64 probed_size;
+
+        /**
+         * @unallocated_size: Estimate of memory remaining
+         *
+         * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable accounting.
+         * Without this (or if this is an older kernel) the value here will
+         * always equal the @probed_size. Note this is only currently tracked
+         * for I915_MEMORY_CLASS_DEVICE regions (for other types the value here
+         * will always equal the @probed_size).
+         */
+        __u64 unallocated_size;
+
+        union {
+                /** @rsvd1: MBZ */
+                __u64 rsvd1[8];
+                struct {
+                        /**
+                         * @probed_cpu_visible_size: Memory probed by the driver
+                         * that is CPU accessible.
+                         *
+                         * This will always be <= @probed_size, and the
+                         * remainder (if there is any) will not be CPU
+                         * accessible.
+                         *
+                         * On systems without small BAR, the @probed_size will
+                         * always equal the @probed_cpu_visible_size, since all
+                         * of it will be CPU accessible.
+                         *
+                         * Note this is only tracked for
+                         * I915_MEMORY_CLASS_DEVICE regions (for other types the
+                         * value here will always equal the @probed_size).
+                         *
+                         * Note that if the value returned here is zero, then
+                         * this must be an old kernel which lacks the relevant
+                         * small-bar uAPI support (including
+                         * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS), but on
+                         * such systems we should never actually end up with a
+                         * small BAR configuration, assuming we are able to load
+                         * the kernel module. Hence it should be safe to treat
+                         * this the same as when @probed_cpu_visible_size ==
+                         * @probed_size.
+                         */
+                        __u64 probed_cpu_visible_size;
+
+                        /**
+                         * @unallocated_cpu_visible_size: Estimate of CPU
+                         * visible memory remaining
+                         *
+                         * Note this is only tracked for
+                         * I915_MEMORY_CLASS_DEVICE regions (for other types the
+                         * value here will always equal the
+                         * @probed_cpu_visible_size).
+                         *
+                         * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
+                         * accounting. Without this the value here will always
+                         * equal the @probed_cpu_visible_size. Note this is only
+                         * currently tracked for I915_MEMORY_CLASS_DEVICE
+                         * regions (for other types the value here will also
+                         * always equal the @probed_cpu_visible_size).
+                         *
+                         * If this is an older kernel the value here will be
+                         * zero, see also @probed_cpu_visible_size.
+                         */
+                        __u64 unallocated_cpu_visible_size;
+                };
+        };
+};
+
+/**
+ * struct __drm_i915_gem_create_ext - Existing gem_create behaviour, with added
+ * extension support using struct i915_user_extension.
+ *
+ * Note that new buffer flags should be added here, at least for the stuff that
+ * is immutable. Previously we would have two ioctls, one to create the object
+ * with gem_create, and another to apply various parameters, however this
+ * creates some ambiguity for the params which are considered immutable. Also in
+ * general we're phasing out the various SET/GET ioctls.
+ */
+struct __drm_i915_gem_create_ext {
+        /**
+         * @size: Requested size for the object.
+         *
+         * The (page-aligned) allocated size for the object will be returned.
+         *
+         * Note that for some devices we might have further minimum
+         * page-size restrictions (larger than 4K), like for device local-memory.
+         * However in general the final size here should always reflect any
+         * rounding up, if for example using the I915_GEM_CREATE_EXT_MEMORY_REGIONS
+         * extension to place the object in device local-memory. The kernel will
+         * always select the largest minimum page-size for the set of possible
+         * placements as the value to use when rounding up the @size.
+         */
+        __u64 size;
+
+        /**
+         * @handle: Returned handle for the object.
+         *
+         * Object handles are nonzero.
+         */
+        __u32 handle;
+
+        /**
+         * @flags: Optional flags.
+         *
+         * Supported values:
+         *
+         * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS - Signal to the kernel that
+         * the object will need to be accessed via the CPU.
+         *
+         * Only valid when placing objects in I915_MEMORY_CLASS_DEVICE, and only
+         * strictly required on configurations where some subset of the device
+         * memory is directly visible/mappable through the CPU (which we also
+         * call small BAR), like on some DG2+ systems. Note that this is quite
+         * undesirable, but due to various factors like the client CPU, BIOS etc.
+         * it's something we can expect to see in the wild. See
+         * &__drm_i915_memory_region_info.probed_cpu_visible_size for how to
+         * determine if this applies to the system.
+         *
+         * Note that one of the placements MUST be I915_MEMORY_CLASS_SYSTEM, to
+         * ensure the kernel can always spill the allocation to system memory,
+         * if the object can't be allocated in the mappable part of
+         * I915_MEMORY_CLASS_DEVICE.
+         *
+         * Also note that since the kernel only supports flat-CCS on objects
+         * that can *only* be placed in I915_MEMORY_CLASS_DEVICE, we therefore
+         * don't support I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS together with
+         * flat-CCS.
+         *
+         * Without this hint, the kernel will assume that non-mappable
+         * I915_MEMORY_CLASS_DEVICE is preferred for this object. Note that the
+         * kernel can still migrate the object to the mappable part, as a last
+         * resort, if userspace ever CPU faults this object, but this might be
+         * expensive, and so ideally should be avoided.
+         *
+         * On older kernels which lack the relevant small-bar uAPI support (see
+         * also &__drm_i915_memory_region_info.probed_cpu_visible_size),
+         * usage of the flag will result in an error, but it should NEVER be
+         * possible to end up with a small BAR configuration, assuming we can
+         * also successfully load the i915 kernel module. In such cases the
+         * entire I915_MEMORY_CLASS_DEVICE region will be CPU accessible, and as
+         * such there are zero restrictions on where the object can be placed.
+         */
+#define I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (1 << 0)
+        __u32 flags;
+
+        /**
+         * @extensions: The chain of extensions to apply to this object.
+         *
+         * This will be useful in the future when we need to support several
+         * different extensions, and we need to apply more than one when
+         * creating the object. See struct i915_user_extension.
+         *
+         * If we don't supply any extensions then we get the same old gem_create
+         * behaviour.
+         *
+         * For I915_GEM_CREATE_EXT_MEMORY_REGIONS usage see
+         * struct drm_i915_gem_create_ext_memory_regions.
+         *
+         * For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see
+         * struct drm_i915_gem_create_ext_protected_content.
+         */
+#define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0
+#define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 1
+        __u64 extensions;
+};
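As an illustration of the flag, here is a minimal userspace sketch of creating a CPU-accessible object in device local-memory. It assumes the non-prefixed uAPI names (drm_i915_gem_create_ext, drm_i915_gem_create_ext_memory_regions, DRM_IOCTL_I915_GEM_CREATE_EXT) from <drm/i915_drm.h> that this RFC header mirrors; error handling is trimmed:

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Create a BO preferring device local-memory, with the mandatory
 * system-memory fallback, and hint that the CPU must be able to map it. */
static int create_cpu_visible_bo(int fd, __u64 size, __u32 *handle)
{
        struct drm_i915_gem_memory_class_instance placements[] = {
                { .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
                { .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
        };
        struct drm_i915_gem_create_ext_memory_regions regions = {
                .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
                .num_regions = 2,
                .regions = (uintptr_t)placements,
        };
        struct drm_i915_gem_create_ext create = {
                .size = size,
                .flags = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS,
                .extensions = (uintptr_t)&regions,
        };

        if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
                return -errno;

        *handle = create.handle; /* @size was rounded up in place */
        return 0;
}

Note the I915_MEMORY_CLASS_SYSTEM placement in the list: as the kernel-doc above requires, it gives the kernel somewhere to spill the allocation if the mappable part of local-memory is exhausted.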
diff --git a/Documentation/gpu/rfc/i915_small_bar.rst b/Documentation/gpu/rfc/i915_small_bar.rst
new file mode 100644
index 000000000000..d6c03ce3b862
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_small_bar.rst
@@ -0,0 +1,47 @@
+==========================
+I915 Small BAR RFC Section
+==========================
+Starting from DG2 we will have resizable BAR support for device local-memory (i.e.
+I915_MEMORY_CLASS_DEVICE), but in some cases the final BAR size might still be
+smaller than the total probed_size. In such cases, only some subset of
+I915_MEMORY_CLASS_DEVICE will be CPU accessible (for example the first 256M),
+while the remainder is only accessible via the GPU.
+
+I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS flag
+----------------------------------------------
+New gem_create_ext flag to tell the kernel that a BO will require CPU access.
+This becomes important when placing an object in I915_MEMORY_CLASS_DEVICE, where
+underneath the device has a small BAR, meaning only some portion of it is CPU
+accessible. Without this flag the kernel will assume that CPU access is not
+required, and prioritize using the non-CPU visible portion of
+I915_MEMORY_CLASS_DEVICE.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_small_bar.h
+   :functions: __drm_i915_gem_create_ext
+
+probed_cpu_visible_size attribute
+---------------------------------
+New struct __drm_i915_memory_region attribute which returns the total size of the
+CPU accessible portion, for the particular region. This should only be
+applicable for I915_MEMORY_CLASS_DEVICE. We also report the
+unallocated_cpu_visible_size, alongside the unallocated_size.
+
+Vulkan will need this as part of creating a separate VkMemoryHeap with the
+VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set, to represent the CPU visible portion,
+where the total size of the heap needs to be known. It also wants to be able to
+give a rough estimate of how memory can potentially be allocated.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_small_bar.h
+   :functions: __drm_i915_memory_region_info
+
+Error Capture restrictions
+--------------------------
+With error capture we have two new restrictions:
+
+    1) Error capture is best effort on small BAR systems; if the pages are not
+       CPU accessible, at the time of capture, then the kernel is free to skip
+       trying to capture them.
+
+    2) On discrete and newer integrated platforms we now reject error capture
+       on recoverable contexts. In the future the kernel may want to blit during
+       error capture, when for example something is not currently CPU accessible.
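The probed_cpu_visible_size attribute described above is reached through the existing two-pass region query. A hedged sketch, again assuming the non-prefixed drm_i915_query / drm_i915_query_memory_regions names from <drm/i915_drm.h>:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Standard two-pass query: the first call sizes the blob, the second fills it. */
static struct drm_i915_query_memory_regions *query_memory_regions(int fd)
{
        struct drm_i915_query_item item = {
                .query_id = DRM_I915_QUERY_MEMORY_REGIONS,
        };
        struct drm_i915_query query = {
                .num_items = 1,
                .items_ptr = (uintptr_t)&item,
        };
        struct drm_i915_query_memory_regions *info;

        if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
                return NULL;

        info = calloc(1, item.length);
        if (!info)
                return NULL;

        item.data_ptr = (uintptr_t)info;
        if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query)) {
                free(info);
                return NULL;
        }
        return info; /* caller walks info->regions[0..num_regions) and frees */
}

A Vulkan driver would then, for each I915_MEMORY_CLASS_DEVICE entry, use probed_cpu_visible_size to size the VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT heap, treating a zero value as an older kernel per the kernel-doc above.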
diff --git a/Documentation/gpu/rfc/i915_vm_bind.h b/Documentation/gpu/rfc/i915_vm_bind.h
new file mode 100644
index 000000000000..8a8fcd4fceac
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,291 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_VM_BIND_VERSION
+ *
+ * VM_BIND feature version supported.
+ * See typedef drm_i915_getparam_t param.
+ *
+ * Specifies the VM_BIND feature version supported.
+ * The following versions of VM_BIND have been defined:
+ *
+ * 0: No VM_BIND support.
+ *
+ * 1: In VM_UNBIND calls, the UMD must specify the exact mappings created
+ *    previously with VM_BIND, the ioctl will not support unbinding multiple
+ *    mappings or splitting them. Similarly, VM_BIND calls will not replace
+ *    any existing mappings.
+ *
+ * 2: The restrictions on unbinding partial or multiple mappings are
+ *    lifted. Similarly, binding will replace any mappings in the given range.
+ *
+ * See struct drm_i915_gem_vm_bind and struct drm_i915_gem_vm_unbind.
+ */
+#define I915_PARAM_VM_BIND_VERSION 57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * The older execbuf2 ioctl will not support VM_BIND mode of operation.
+ * For VM_BIND mode, we have new execbuf3 ioctl which will not accept any
+ * execlist (See struct drm_i915_gem_execbuffer3 for more details).
+ */
+#define I915_VM_CREATE_FLAGS_USE_VM_BIND (1 << 0)
+
+/* VM_BIND related ioctls */
+#define DRM_I915_GEM_VM_BIND            0x3d
+#define DRM_I915_GEM_VM_UNBIND          0x3e
+#define DRM_I915_GEM_EXECBUFFER3        0x3f
+
+#define DRM_IOCTL_I915_GEM_VM_BIND      DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_BIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_VM_UNBIND    DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_UNBIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_EXECBUFFER3  DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_EXECBUFFER3, struct drm_i915_gem_execbuffer3)
+
+/**
+ * struct drm_i915_gem_timeline_fence - An input or output timeline fence.
+ *
+ * The operation will wait for input fence to signal.
+ *
+ * The returned output fence will be signaled after the completion of the
+ * operation.
+ */
+struct drm_i915_gem_timeline_fence {
+        /** @handle: User's handle for a drm_syncobj to wait on or signal. */
+        __u32 handle;
+
+        /**
+         * @flags: Supported flags are:
+         *
+         * I915_TIMELINE_FENCE_WAIT:
+         * Wait for the input fence before the operation.
+         *
+         * I915_TIMELINE_FENCE_SIGNAL:
+         * Return operation completion fence as output.
+         */
+        __u32 flags;
+#define I915_TIMELINE_FENCE_WAIT (1 << 0)
+#define I915_TIMELINE_FENCE_SIGNAL (1 << 1)
+#define __I915_TIMELINE_FENCE_UNKNOWN_FLAGS (-(I915_TIMELINE_FENCE_SIGNAL << 1))
+
+        /**
+         * @value: A point in the timeline.
+         * Value must be 0 for a binary drm_syncobj. A value of 0 for a
+         * timeline drm_syncobj is invalid as it turns a drm_syncobj into a
+         * binary one.
+         */
+        __u64 value;
+};
+
+/**
+ * struct drm_i915_gem_vm_bind - VA to object mapping to bind.
+ *
+ * This structure is passed to VM_BIND ioctl and specifies the mapping of GPU
+ * virtual address (VA) range to the section of an object that should be bound
+ * in the device page table of the specified address space (VM).
+ * The VA range specified must be unique (i.e., not currently bound) and can
+ * be mapped to whole object or a section of the object (partial binding).
+ * Multiple VA mappings can be created to the same section of the object
+ * (aliasing).
+ *
+ * The @start, @offset and @length must be 4K page aligned. However the DG2
+ * and XEHPSDV have 64K page size for device local memory and have compact
+ * page tables. On those platforms, for binding device local-memory objects,
+ * the @start, @offset and @length must be 64K aligned. Also, UMDs should not
+ * mix the local memory 64K page and the system memory 4K page bindings in the
+ * same 2M range.
+ *
+ * Error code -EINVAL will be returned if @start, @offset and @length are not
+ * properly aligned. In version 1 (See I915_PARAM_VM_BIND_VERSION), error code
+ * -ENOSPC will be returned if the VA range specified can't be reserved.
+ *
+ * VM_BIND/UNBIND ioctl calls executed on different CPU threads concurrently
+ * are not ordered. Furthermore, parts of the VM_BIND operation can be done
+ * asynchronously, if a valid @fence is specified.
+ */
+struct drm_i915_gem_vm_bind {
+        /** @vm_id: VM (address space) id to bind */
+        __u32 vm_id;
+
+        /** @handle: Object handle */
+        __u32 handle;
+
+        /** @start: Virtual Address start to bind */
+        __u64 start;
+
+        /** @offset: Offset in object to bind */
+        __u64 offset;
+
+        /** @length: Length of mapping to bind */
+        __u64 length;
+
+        /**
+         * @flags: Supported flags are:
+         *
+         * I915_GEM_VM_BIND_CAPTURE:
+         * Capture this mapping in the dump upon GPU error.
+         *
+         * Note that @fence carries its own flags.
+         */
+        __u64 flags;
+#define I915_GEM_VM_BIND_CAPTURE (1 << 0)
+
+        /**
+         * @fence: Timeline fence for bind completion signaling.
+         *
+         * Timeline fence is of format struct drm_i915_gem_timeline_fence.
+         *
+         * It is an out fence, hence using I915_TIMELINE_FENCE_WAIT flag
+         * is invalid, and an error will be returned.
+         *
+         * If I915_TIMELINE_FENCE_SIGNAL flag is not set, then out fence
+         * is not requested and binding is completed synchronously.
+         */
+        struct drm_i915_gem_timeline_fence fence;
+
+        /**
+         * @extensions: Zero-terminated chain of extensions.
+         *
+         * For future extensions. See struct i915_user_extension.
+         */
+        __u64 extensions;
+};
+
+/**
+ * struct drm_i915_gem_vm_unbind - VA to object mapping to unbind.
+ *
+ * This structure is passed to VM_UNBIND ioctl and specifies the GPU virtual
+ * address (VA) range that should be unbound from the device page table of the
+ * specified address space (VM). VM_UNBIND will force unbind the specified
+ * range from device page table without waiting for any GPU job to complete.
+ * It is the UMD's responsibility to ensure the mapping is no longer in use
+ * before calling VM_UNBIND.
+ *
+ * If the specified mapping is not found, the ioctl will simply return without
+ * any error.
+ *
+ * VM_BIND/UNBIND ioctl calls executed on different CPU threads concurrently
+ * are not ordered. Furthermore, parts of the VM_UNBIND operation can be done
+ * asynchronously, if a valid @fence is specified.
+ */
+struct drm_i915_gem_vm_unbind {
+        /** @vm_id: VM (address space) id to bind */
+        __u32 vm_id;
+
+        /** @rsvd: Reserved, MBZ */
+        __u32 rsvd;
+
+        /** @start: Virtual Address start to unbind */
+        __u64 start;
+
+        /** @length: Length of mapping to unbind */
+        __u64 length;
+
+        /**
+         * @flags: Currently reserved, MBZ.
+         *
+         * Note that @fence carries its own flags.
+         */
+        __u64 flags;
+
+        /**
+         * @fence: Timeline fence for unbind completion signaling.
+         *
+         * Timeline fence is of format struct drm_i915_gem_timeline_fence.
+         *
+         * It is an out fence, hence using I915_TIMELINE_FENCE_WAIT flag
+         * is invalid, and an error will be returned.
+         *
+         * If I915_TIMELINE_FENCE_SIGNAL flag is not set, then out fence
+         * is not requested and unbinding is completed synchronously.
+         */
+        struct drm_i915_gem_timeline_fence fence;
+
+        /**
+         * @extensions: Zero-terminated chain of extensions.
+         *
+         * For future extensions. See struct i915_user_extension.
+         */
+        __u64 extensions;
+};
+
+/**
+ * struct drm_i915_gem_execbuffer3 - Structure for DRM_I915_GEM_EXECBUFFER3
+ * ioctl.
+ *
+ * DRM_I915_GEM_EXECBUFFER3 ioctl only works in VM_BIND mode and VM_BIND mode
+ * only works with this ioctl for submission.
+ * See I915_VM_CREATE_FLAGS_USE_VM_BIND.
+ */
+struct drm_i915_gem_execbuffer3 {
+        /**
+         * @ctx_id: Context id
+         *
+         * Only contexts with user engine map are allowed.
+         */
+        __u32 ctx_id;
+
+        /**
+         * @engine_idx: Engine index
+         *
+         * An index in the user engine map of the context specified by @ctx_id.
+         */
+        __u32 engine_idx;
+
+        /**
+         * @batch_address: Batch gpu virtual address/es.
+         *
+         * For normal submission, it is the gpu virtual address of the batch
+         * buffer. For parallel submission, it is a pointer to an array of
+         * batch buffer gpu virtual addresses with array size equal to the
+         * number of (parallel) engines involved in that submission (See
+         * struct i915_context_engines_parallel_submit).
+         */
+        __u64 batch_address;
+
+        /** @flags: Currently reserved, MBZ */
+        __u64 flags;
+
+        /** @rsvd1: Reserved, MBZ */
+        __u32 rsvd1;
+
+        /** @fence_count: Number of fences in @timeline_fences array. */
+        __u32 fence_count;
+
+        /**
+         * @timeline_fences: Pointer to an array of timeline fences.
+         *
+         * Timeline fences are of format struct drm_i915_gem_timeline_fence.
+         */
+        __u64 timeline_fences;
+
+        /** @rsvd2: Reserved, MBZ */
+        __u64 rsvd2;
+
+        /**
+         * @extensions: Zero-terminated chain of extensions.
+         *
+         * For future extensions. See struct i915_user_extension.
+         */
+        __u64 extensions;
+};
+
+/**
+ * struct drm_i915_gem_create_ext_vm_private - Extension to make the object
+ * private to the specified VM.
+ *
+ * See struct drm_i915_gem_create_ext.
+ */
+struct drm_i915_gem_create_ext_vm_private {
+#define I915_GEM_CREATE_EXT_VM_PRIVATE 2
+        /** @base: Extension link. See struct i915_user_extension. */
+        struct i915_user_extension base;
+
+        /** @vm_id: Id of the VM to which the object is private */
+        __u32 vm_id;
+};
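To tie the pieces together, a sketch of the intended call flow: VM creation with the opt-in flag, then a synchronous bind. This is proposed uAPI (RFC only), so everything past the existing DRM_IOCTL_I915_GEM_VM_CREATE / struct drm_i915_gem_vm_control is an assumption based on the header above:

#include <errno.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>
/* DRM_IOCTL_I915_GEM_VM_BIND and struct drm_i915_gem_vm_bind exist only in
 * this RFC header, not in mainline i915_drm.h. */

/* Create a VM in VM_BIND mode, then synchronously bind one whole BO. */
static int create_vm_and_bind(int fd, __u32 bo_handle, __u64 gpu_va,
                              __u64 length, __u32 *vm_id)
{
        struct drm_i915_gem_vm_control ctl = {
                .flags = I915_VM_CREATE_FLAGS_USE_VM_BIND, /* opt-in at create */
        };
        struct drm_i915_gem_vm_bind bind = {
                .handle = bo_handle,
                .start = gpu_va,  /* 4K aligned; 64K for DG2 local memory */
                .offset = 0,      /* bind from the start of the object */
                .length = length, /* could also be a partial section */
                /* no I915_TIMELINE_FENCE_SIGNAL: bind completes synchronously */
        };

        if (ioctl(fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl))
                return -errno;

        bind.vm_id = ctl.vm_id;
        if (ioctl(fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind))
                return -errno;

        *vm_id = ctl.vm_id;
        return 0;
}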
diff --git a/Documentation/gpu/rfc/i915_vm_bind.rst b/Documentation/gpu/rfc/i915_vm_bind.rst
new file mode 100644
index 000000000000..9a1dcdf2799e
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.rst
@@ -0,0 +1,245 @@
+==========================================
+I915 VM_BIND feature design and use cases
+==========================================
+
+VM_BIND feature
+================
+DRM_I915_GEM_VM_BIND/UNBIND ioctls allow a UMD to bind/unbind GEM buffer
+objects (BOs) or sections of BOs at specified GPU virtual addresses on a
+specified address space (VM). These mappings (also referred to as persistent
+mappings) will be persistent across multiple GPU submissions (execbuf calls)
+issued by the UMD, without the user having to provide a list of all required
+mappings during each submission (as required by the older execbuf mode).
+
+The VM_BIND/UNBIND calls allow UMDs to request a timeline out fence for
+signaling the completion of bind/unbind operation.
+
+The VM_BIND feature is advertised to the user via I915_PARAM_VM_BIND_VERSION.
+The user has to opt in to VM_BIND mode of binding for an address space (VM)
+at VM creation time via the I915_VM_CREATE_FLAGS_USE_VM_BIND extension.
+
+VM_BIND/UNBIND ioctl calls executed on different CPU threads concurrently are
+not ordered. Furthermore, parts of the VM_BIND/UNBIND operations can be done
+asynchronously, when a valid out fence is specified.
+
+VM_BIND features include:
+
+* Multiple Virtual Address (VA) mappings can map to the same physical pages
+  of an object (aliasing).
+* VA mapping can map to a partial section of the BO (partial binding).
+* Support capture of persistent mappings in the dump upon GPU error.
+* Support for userptr gem objects (no special uapi is required for this).
+
+TLB flush consideration
+------------------------
+The i915 driver flushes the TLB for each submission and when an object's
+pages are released. The VM_BIND/UNBIND operation will not do any additional
+TLB flush. Any VM_BIND mapping added will be in the working set for subsequent
+submissions on that VM and will not be in the working set for currently running
+batches (which would require additional TLB flushes, which is not supported).
+
+Execbuf ioctl in VM_BIND mode
+-------------------------------
+A VM in VM_BIND mode will not support the older execbuf mode of binding.
+The execbuf ioctl handling in VM_BIND mode differs significantly from the
+older execbuf2 ioctl (See struct drm_i915_gem_execbuffer2).
+Hence, a new execbuf3 ioctl has been added to support VM_BIND mode. (See
+struct drm_i915_gem_execbuffer3). The execbuf3 ioctl will not accept any
+execlist. Hence, no support for implicit sync. It is expected that the below
+work will be able to support requirements of object dependency setting in all
+use cases:
+
+"dma-buf: Add an API for exporting sync files"
+(https://lwn.net/Articles/859290/)
+
+The new execbuf3 ioctl only works in VM_BIND mode and the VM_BIND mode only
+works with the execbuf3 ioctl for submission. All BOs mapped on that VM
+(through VM_BIND calls) at the time of the execbuf3 call are deemed required
+for that submission.
+
+The execbuf3 ioctl directly specifies the batch addresses instead of as
+object handles as in the execbuf2 ioctl. The execbuf3 ioctl will also not
+support many of the older features like in/out/submit fences, fence array,
+default gem context and many more (See struct drm_i915_gem_execbuffer3).
+
+In VM_BIND mode, VA allocation is completely managed by the user instead of
+the i915 driver. Hence, VA assignment and eviction are not applicable in
+VM_BIND mode. Also, for determining object activeness, VM_BIND mode will not
+be using the i915_vma active reference tracking. It will instead use the
+dma-resv object for that (See `VM_BIND dma_resv usage`_).
+
+So, a lot of existing code supporting the execbuf2 ioctl, like relocations, VA
+evictions, vma lookup table, implicit sync, vma active reference tracking etc.,
+is not applicable for the execbuf3 ioctl. Hence, all execbuf3 specific handling
+should be in a separate file and only functionality common to these ioctls
+should be shared code where possible.
+
+VM_PRIVATE objects
+-------------------
+By default, BOs can be mapped on multiple VMs and can also be dma-buf
+exported. Hence these BOs are referred to as Shared BOs.
+During each execbuf submission, the request fence must be added to the
+dma-resv fence list of all shared BOs mapped on the VM.
+
+The VM_BIND feature introduces an optimization where the user can create a BO
+which is private to a specified VM via the I915_GEM_CREATE_EXT_VM_PRIVATE flag
+during BO creation. Unlike Shared BOs, these VM private BOs can only be mapped
+on the VM they are private to and can't be dma-buf exported.
+All private BOs of a VM share the dma-resv object. Hence during each execbuf
+submission, they need only one dma-resv fence list updated. Thus, the fast
+path (where required mappings are already bound) submission latency is O(1)
+w.r.t the number of VM private BOs.
+
+VM_BIND locking hierarchy
+-------------------------
+The locking design here supports the older (execlist based) execbuf mode, the
+newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
+system allocator support (See `Shared Virtual Memory (SVM) support`_).
+The older execbuf mode and the newer VM_BIND mode without page faults manage
+residency of backing storage using dma_fence. The VM_BIND mode with page faults
+and the system allocator support do not use any dma_fence at all.
+
+VM_BIND locking order is as below.
+
+1) Lock-A: A vm_bind mutex will protect vm_bind lists. This lock is taken in
+   vm_bind/vm_unbind ioctl calls, in the execbuf path and while releasing the
+   mapping.
+
+   In future, when GPU page faults are supported, we can potentially use a
+   rwsem instead, so that multiple page fault handlers can take the read side
+   lock to lookup the mapping and hence can run in parallel.
+   The older execbuf mode of binding does not need this lock.
+
+2) Lock-B: The object's dma-resv lock will protect i915_vma state and needs to
+   be held while binding/unbinding a vma in the async worker and while updating
+   dma-resv fence list of an object. Note that private BOs of a VM will all
+   share a dma-resv object.
+
+   The future system allocator support will use the HMM prescribed locking
+   instead.
+
+3) Lock-C: Spinlock/s to protect some of the VM's lists like the list of
+   invalidated vmas (due to eviction and userptr invalidation) etc.
+
+When GPU page faults are supported, the execbuf path does not take any of these
+locks. There we will simply smash the new batch buffer address into the ring and
+then tell the scheduler to run that. The lock taking only happens from the page
+fault handler, where we take lock-A in read mode, whichever lock-B we need to
+find the backing storage (dma_resv lock for gem objects, and hmm/core mm for
+system allocator) and some additional locks (lock-D) for taking care of page
+table races. Page fault mode should not need to ever manipulate the vm lists,
+so won't ever need lock-C.
+
+VM_BIND LRU handling
+---------------------
+We need to ensure VM_BIND mapped objects are properly LRU tagged to avoid
+performance degradation. We will also need support for bulk LRU movement of
+VM_BIND objects to avoid additional latencies in execbuf path.
+
+The page table pages are similar to VM_BIND mapped objects (See
+`Evictable page table allocations`_) and are maintained per VM and need to
+be pinned in memory when the VM is made active (i.e., upon an execbuf call
+with that VM). So, bulk LRU movement of page table pages is also needed.
+
+VM_BIND dma_resv usage
+-----------------------
+Fences need to be added to all VM_BIND mapped objects. During each execbuf
+submission, they are added with DMA_RESV_USAGE_BOOKKEEP usage to prevent
+over sync (See enum dma_resv_usage). One can override it with either
+DMA_RESV_USAGE_READ or DMA_RESV_USAGE_WRITE usage during explicit object
+dependency setting.
+
+Note that DRM_I915_GEM_WAIT and DRM_I915_GEM_BUSY ioctls do not check for
+DMA_RESV_USAGE_BOOKKEEP usage and hence should not be used for end of batch
+check. Instead, the execbuf3 out fence should be used for end of batch check
+(See struct drm_i915_gem_execbuffer3).
+
+Also, in VM_BIND mode, use dma-resv apis for determining object activeness
+(See dma_resv_test_signaled() and dma_resv_wait_timeout()) and do not use the
+older i915_vma active reference tracking which is deprecated. This should be
+easier to get working with the current TTM backend.
+
+Mesa use case
+--------------
+VM_BIND can potentially reduce the CPU overhead in Mesa (both Vulkan and Iris),
+hence improving performance of CPU-bound applications. It also allows us to
+implement Vulkan's Sparse Resources. With increasing GPU hardware performance,
+reducing CPU overhead becomes more impactful.
+
+
+Other VM_BIND use cases
+========================
+
+Long running Compute contexts
+------------------------------
+Usage of dma-fence expects that it completes in a reasonable amount of time.
+Compute on the other hand can be long running. Hence it is appropriate for
+compute to use user/memory fence (See `User/Memory Fence`_) and dma-fence usage
+must be limited to in-kernel consumption only.
+
+Where GPU page faults are not available, the kernel driver, upon buffer
+invalidation, will initiate a suspend (preemption) of the long running context,
+finish the invalidation, revalidate the BO and then resume the compute context.
+This is done by having a per-context preempt fence which is enabled when
+someone tries to wait on it and triggers the context preemption.
+
+User/Memory Fence
+~~~~~~~~~~~~~~~~~~
+A user/memory fence is an <address, value> pair. To signal the user fence, the
+specified value will be written at the specified virtual address and the
+waiting process woken up. A user fence can be signaled either by the GPU or
+the kernel async worker (like upon bind completion). The user can wait on a
+user fence with a new user fence wait ioctl.
+
+Here is some prior work on this:
+https://patchwork.freedesktop.org/patch/349417/
+
+Low Latency Submission
+~~~~~~~~~~~~~~~~~~~~~~~
+Allows compute UMD to directly submit GPU jobs instead of through the execbuf
+ioctl. This is made possible by VM_BIND not being synchronized against
+execbuf. VM_BIND allows bind/unbind of mappings required for the directly
+submitted jobs.
+
+Debugger
+---------
+With the debug event interface, a user space process (the debugger) is able to
+keep track of and act upon resources created by another process (the debugged
+process) and attached to the GPU via the vm_bind interface.
+
+GPU page faults
+----------------
+GPU page faults, when supported (in the future), will only be supported in the
+VM_BIND mode. While both the older execbuf mode and the newer VM_BIND mode of
+binding will require using dma-fence to ensure residency, the GPU page faults
+mode, when supported, will not use any dma-fence, as residency is purely
+managed by installing and removing/invalidating page table entries.
+
+Page level hints settings
+--------------------------
+VM_BIND allows any hints setting per mapping instead of per BO. Possible hints
+include placement and atomicity. Sub-BO level placement hint will be even more
+relevant with upcoming GPU on-demand page fault support.
+
+Page level Cache/CLOS settings
+-------------------------------
+VM_BIND allows cache/CLOS settings per mapping instead of per BO.
+
+Evictable page table allocations
+---------------------------------
+Make page table allocations evictable and manage them similar to VM_BIND
+mapped objects. Page table pages are similar to persistent mappings of a
+VM (the difference here is that the page table pages will not have an i915_vma
+structure and after swapping pages back in, the parent page link needs to be
+updated).
+
+Shared Virtual Memory (SVM) support
+------------------------------------
+The VM_BIND interface can be used to map system memory directly (without gem
+BO abstraction) using the HMM interface. SVM is only supported with GPU page
+faults enabled.
+
+VM_BIND UAPI
+=============
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_vm_bind.h
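As a worked example of the end-of-batch rule from the dma_resv section above, here is a sketch of an execbuf3 submission that requests a timeline out fence and then waits on it through the standard drm_syncobj uAPI. The execbuf3 side is the proposed (unmerged) interface from the RFC header; the syncobj wait is existing uAPI:

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>            /* drm_syncobj uAPI (existing) */
#include <drm/i915_drm.h>
/* struct drm_i915_gem_execbuffer3 and its ioctl come from the RFC header. */

static int submit_and_wait(int fd, __u32 ctx_id, __u64 batch_va,
                           __u32 syncobj, __u64 point)
{
        struct drm_i915_gem_timeline_fence out_fence = {
                .handle = syncobj,
                .flags = I915_TIMELINE_FENCE_SIGNAL, /* out fence only */
                .value = point,
        };
        struct drm_i915_gem_execbuffer3 execbuf = {
                .ctx_id = ctx_id,          /* must carry a user engine map */
                .engine_idx = 0,
                .batch_address = batch_va, /* VA bound earlier via VM_BIND */
                .fence_count = 1,
                .timeline_fences = (uintptr_t)&out_fence,
        };
        struct drm_syncobj_timeline_wait wait = {
                .handles = (uintptr_t)&syncobj,
                .points = (uintptr_t)&point,
                .timeout_nsec = INT64_MAX,
                .count_handles = 1,
        };

        if (ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER3, &execbuf))
                return -errno;

        /* GEM_WAIT/GEM_BUSY skip BOOKKEEP fences, so wait on the out fence. */
        return ioctl(fd, DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT, &wait) ? -errno : 0;
}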
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 91e93a705230..476719771eef 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -23,3 +23,11 @@ host such documentation:
 .. toctree::
 
    i915_scheduler.rst
+
+.. toctree::
+
+   i915_small_bar.rst
+
+.. toctree::
+
+   i915_vm_bind.rst