| author | Jan Kara <jack@suse.cz> | 2013-10-04 17:29:06 +0400 |
|---|---|---|
| committer | Roland Dreier <roland@purestorage.com> | 2013-11-09 02:43:11 +0400 |
| commit | 4adcf7fb6783e354aab38824d803fa8c4f8e8a27 (patch) | |
| tree | 6aaa86095a45cf4c845090f025c0a1543d5cbbcb /drivers/infiniband/hw/qib | |
| parent | 959f58544b7f20c92d5eb43d1232c96c15c01bfb (diff) | |
| download | linux-4adcf7fb6783e354aab38824d803fa8c4f8e8a27.tar.xz | |
IB/ipath: Convert ipath_user_sdma_pin_pages() to use get_user_pages_fast()
ipath_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
ipath_user_sdma_pin_pages(), we don't seem to need mmap_sem at all.
Even more interestingly, ipath_user_sdma_queue_pkts() (and
also ipath_user_sdma_coalesce(), called somewhat later) calls
copy_from_user(), which can hit a page fault, and we then deadlock
trying to take mmap_sem while handling that fault. So just make
ipath_user_sdma_pin_pages() use get_user_pages_fast() and leave
mmap_sem locking to the mm code.
This deadlock has actually been observed in the wild when the node
is under memory pressure.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
[ Merged in fix for call to get_user_pages_fast from Tetsuo Handa
<penguin-kernel@I-love.SAKURA.ne.jp>. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Diffstat (limited to 'drivers/infiniband/hw/qib')
0 files changed, 0 insertions, 0 deletions
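
The conversion boils down to replacing a get_user_pages() call, which requires the caller to hold mmap_sem, with get_user_pages_fast(), which manages mmap_sem internally. Below is a minimal sketch of that before/after pattern using the 3.12-era GUP signatures; the pin_user_buffer_locked()/pin_user_buffer_fast() helpers and the write/force flag values are illustrative assumptions, not the actual ipath code.

```c
/*
 * Illustrative sketch only: shows the locking pattern this commit removes.
 * The helpers below are hypothetical and only approximate the ipath code.
 */
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Before: get_user_pages() requires mmap_sem to be held, so some caller in
 * the path must take it. In the ipath case it was held across the whole
 * packet-queueing path, where a copy_from_user() fault could then deadlock
 * trying to take mmap_sem again.
 */
static int pin_user_buffer_locked(unsigned long addr, int npages,
				  struct page **pages)
{
	int ret;

	down_read(&current->mm->mmap_sem);
	ret = get_user_pages(current, current->mm, addr, npages,
			     0 /* write */, 1 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);

	return ret;
}

/*
 * After: get_user_pages_fast() takes mmap_sem itself only if it has to fall
 * back to the slow path, so the caller never holds it across
 * copy_from_user() and the deadlock window disappears.
 */
static int pin_user_buffer_fast(unsigned long addr, int npages,
				struct page **pages)
{
	return get_user_pages_fast(addr, npages, 0 /* write */, pages);
}
```

In the actual patch the explicit mmap_sem handling simply goes away at the call site, which is what removes the deadlock described in the commit message.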