author | Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> | 2014-08-03 15:01:10 +0400
committer | Dave Airlie <airlied@redhat.com> | 2014-08-05 04:53:18 +0400
commit | 22e71691fd54c637800d10816bbeba9cf132d218
tree | 33fe0b7ce147361bdc7b32c7c8838c14796f8ac8 /drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
parent | 46c2df68f03a236b30808bba361f10900c88d95e
download | linux-22e71691fd54c637800d10816bbeba9cf132d218.tar.xz
drm/ttm: Use mutex_trylock() to avoid deadlock inside shrinker functions.
I can observe that a RHEL7 environment stalls at 100% CPU usage when a
certain type of memory pressure is applied. While the shrinker functions
are being called by shrink_slab() before the OOM killer is triggered, the
stall lasts for many minutes.
One of the reasons for this stall is that
ttm_dma_pool_shrink_count()/ttm_dma_pool_shrink_scan() are called and
block on mutex_lock(&_manager->lock). A GFP_KERNEL allocation made with
_manager->lock held causes anyone (including kswapd) to deadlock when
these functions are called due to memory pressure. This patch changes
"mutex_lock();" to "if (!mutex_trylock()) return ...;" in order to
avoid the deadlock.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: stable <stable@kernel.org> [3.3+]
Signed-off-by: Dave Airlie <airlied@redhat.com>
Diffstat (limited to 'drivers/gpu/drm/ttm/ttm_page_alloc_dma.c')
-rw-r--r-- | drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 6
1 file changed, 4 insertions, 2 deletions
```diff
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index d8e59f7b58b2..524cc1a2c1fa 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -1014,7 +1014,8 @@ ttm_dma_pool_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	if (list_empty(&_manager->pools))
 		return SHRINK_STOP;
 
-	mutex_lock(&_manager->lock);
+	if (!mutex_trylock(&_manager->lock))
+		return SHRINK_STOP;
 	if (!_manager->npools)
 		goto out;
 	pool_offset = ++start_pool % _manager->npools;
@@ -1047,7 +1048,8 @@ ttm_dma_pool_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	struct device_pools *p;
 	unsigned long count = 0;
 
-	mutex_lock(&_manager->lock);
+	if (!mutex_trylock(&_manager->lock))
+		return 0;
 	list_for_each_entry(p, &_manager->pools, pools)
 		count += p->pool->npages_free;
 	mutex_unlock(&_manager->lock);
```
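For context, the change above is an instance of a general rule: shrinker callbacks run on memory-reclaim paths, so they must not block on a lock that can be held across a GFP_KERNEL allocation, or the allocating task can end up waiting on itself through direct reclaim. The sketch below shows that non-blocking pattern in isolation; it is a minimal, hypothetical example, and everything prefixed with my_ is invented for illustration. Only mutex_trylock(), SHRINK_STOP, DEFAULT_SEEKS, and the count_objects/scan_objects callback signatures are the real kernel APIs the patch relies on.

```c
#include <linux/mutex.h>
#include <linux/shrinker.h>

/* Hypothetical pool manager; the real patch uses ttm's _manager. */
struct my_pool_manager {
	struct mutex lock;		/* may be held across GFP_KERNEL allocations */
	unsigned long nr_free;		/* objects that could be reclaimed */
};

static struct my_pool_manager my_manager;

static unsigned long
my_pool_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
	unsigned long count;

	/* Report zero reclaimable objects instead of sleeping on the lock. */
	if (!mutex_trylock(&my_manager.lock))
		return 0;
	count = my_manager.nr_free;
	mutex_unlock(&my_manager.lock);
	return count;
}

static unsigned long
my_pool_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
{
	unsigned long freed = 0;

	/* SHRINK_STOP tells shrink_slab() to give up on this shrinker for now. */
	if (!mutex_trylock(&my_manager.lock))
		return SHRINK_STOP;
	/* ... free up to sc->nr_to_scan objects here, counting them in 'freed' ... */
	mutex_unlock(&my_manager.lock);
	return freed;
}

static struct shrinker my_pool_shrinker = {
	.count_objects	= my_pool_shrink_count,
	.scan_objects	= my_pool_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
};
/* register_shrinker(&my_pool_shrinker) would wire this up at init time. */
```

The trade-off is that a contended shrinker simply reports no progress for that pass; reclaim moves on to other shrinkers and retries later, which is strictly better than the multi-minute lockup described in the commit message.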