author	Andres Rodriguez <andresx7@gmail.com>	2017-03-07 00:27:55 +0300
committer	Alex Deucher <alexander.deucher@amd.com>	2017-05-31 23:49:02 +0300
commit	795f2813e628bcf57a69f2dfe413360d14a1d7f4 (patch)
tree	f60bfe602590fde4bd170c263a569cd8147ffdd0	/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
parent	effd924d2f3b9c52d5bd8137c3803e83f719a290 (diff)
drm/amdgpu: implement lru amdgpu_queue_mgr policy for compute v4
Use an LRU policy to map usermode rings to HW compute queues.

Most compute clients use one queue, and usually the first queue
available. This results in poor pipe/queue work distribution when
multiple compute apps are running. In most cases pipe 0 queue 0 is
the only queue that gets used.

In order to better distribute work across multiple HW queues, we adopt
a policy to map the usermode ring ids to the LRU HW queue. This fixes a
large majority of multi-app compute workloads sharing the same HW
queue, even though 7 other queues are available.

v2: use ring->funcs->type instead of ring->hw_ip
v3: remove amdgpu_queue_mapper_funcs
v4: change ring_lru_list_lock to spinlock, grab only once in lru_get()

Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
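The general shape of such an LRU pick can be sketched as follows. This is an illustrative sketch only, not the actual amdgpu implementation: the struct, field, and function names (example_ring, lru_node, example_lru_get) are hypothetical. It shows the idea referenced in v4 of the commit message: take the least recently used matching entry under a single spinlock acquisition and rotate it to the tail so the next request lands on a different HW queue.

	/* Illustrative LRU-pick sketch; names are hypothetical, not amdgpu's. */
	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct example_ring {
		struct list_head lru_node;	/* position in the device LRU list */
		int type;			/* e.g. a compute ring type */
	};

	static struct example_ring *
	example_lru_get(struct list_head *lru_list, spinlock_t *lock, int type)
	{
		struct example_ring *ring, *found = NULL;

		/* Lock is taken exactly once for the whole lookup. */
		spin_lock(lock);
		list_for_each_entry(ring, lru_list, lru_node) {
			if (ring->type != type)
				continue;
			/* Least recently used match: move it to the tail. */
			list_move_tail(&ring->lru_node, lru_list);
			found = ring;
			break;
		}
		spin_unlock(lock);

		return found;
	}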
Diffstat (limited to 'drivers/gpu/drm/amd/amdgpu/amdgpu_device.c')
-rw-r--r--	drivers/gpu/drm/amd/amdgpu/amdgpu_device.c	3
1 file changed, 3 insertions, 0 deletions
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index e731c4876a09..cce94d836221 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2073,6 +2073,9 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	INIT_LIST_HEAD(&adev->gtt_list);
 	spin_lock_init(&adev->gtt_list_lock);
 
+	INIT_LIST_HEAD(&adev->ring_lru_list);
+	spin_lock_init(&adev->ring_lru_list_lock);
+
 	INIT_DELAYED_WORK(&adev->late_init_work, amdgpu_late_init_func_handler);
 
 	if (adev->asic_type >= CHIP_BONAIRE) {
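For context, the hunk above only adds the list head and spinlock initialization in amdgpu_device_init(); the rest of the policy lives elsewhere in the series. A hedged sketch of the init/population pattern implied by these two lines is shown below; the struct and function names (example_device, example_device_init_lru, example_ring_register) are illustrative, not the real amdgpu ones.

	/* Hypothetical sketch of the init/population pattern; not amdgpu code. */
	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct example_device {
		struct list_head ring_lru_list;
		spinlock_t ring_lru_list_lock;
	};

	static void example_device_init_lru(struct example_device *dev)
	{
		/* Mirrors the two lines added in the hunk above. */
		INIT_LIST_HEAD(&dev->ring_lru_list);
		spin_lock_init(&dev->ring_lru_list_lock);
	}

	static void example_ring_register(struct example_device *dev,
					  struct list_head *ring_node)
	{
		/* New rings start at the tail, i.e. most recently used. */
		spin_lock(&dev->ring_lru_list_lock);
		list_add_tail(ring_node, &dev->ring_lru_list);
		spin_unlock(&dev->ring_lru_list_lock);
	}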