author      Eric Dumazet <edumazet@google.com>      2020-07-30 04:57:55 +0300
committer   Jason Gunthorpe <jgg@nvidia.com>        2020-07-31 20:12:30 +0300
commit      928da37a229f344424ffc89c9a58feb2368bb018
tree        d38b2349a494ef530af1599b89f19908a46a1138 /drivers/infiniband
parent      395f2e8fd340c5bfad026f5968b56ec34cf20dd1
download    linux-928da37a229f344424ffc89c9a58feb2368bb018.tar.xz
RDMA/umem: Add a schedule point in ib_umem_get()
Mapping as little as 64GB can take more than 10 seconds, triggering issues
on kernels with CONFIG_PREEMPT_NONE=y.
ib_umem_get() already splits the work into 2MB units on x86_64; adding a
cond_resched() to this long-running loop is enough to solve the issue.
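
For illustration, this is the standard kernel pattern for bounding work
between scheduler checks. The 2MB figure comes from pinning at most
PAGE_SIZE / sizeof(struct page *) = 512 pages (512 x 4KB = 2MB) per
pin_user_pages_fast() call. Below is a minimal sketch, not the actual
umem.c code: process_batch() and process_all_pages() are hypothetical
stand-ins for the real pinning work.

#include <linux/kernel.h>
#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/sched.h>	/* cond_resched() */

/* 512 entries on x86_64: one batch covers 512 * 4KB = 2MB. */
#define BATCH_PAGES	(PAGE_SIZE / sizeof(struct page *))

void process_batch(unsigned long n);	/* hypothetical per-batch work */

/*
 * Walk a large range in bounded batches, offering the scheduler a
 * chance to run between batches; on CONFIG_PREEMPT_NONE=y kernels
 * nothing else can preempt this loop otherwise.
 */
static void process_all_pages(unsigned long npages)
{
	while (npages) {
		unsigned long batch = min_t(unsigned long, npages,
					    BATCH_PAGES);

		cond_resched();		/* yield if a reschedule is pending */
		process_batch(batch);
		npages -= batch;
	}
}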
Note that sg_alloc_table() can still take more than 100 ms, which is also
problematic. This might be addressed later in ib_umem_add_sg_table(), by
adding new blocks to the sgl on demand (see the sketch below).
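
As a rough, hypothetical illustration of that follow-up idea, the
kernel's sg_chain() primitive can link fixed-size blocks of scatterlist
entries together, so the table could grow as pages are pinned instead of
being sized up front. SG_BLOCK_ENTS and sgl_add_block() below are
invented for this sketch and do not reflect the current
ib_umem_add_sg_table() implementation.

#include <linux/slab.h>
#include <linux/scatterlist.h>

#define SG_BLOCK_ENTS	128	/* arbitrary per-block entry count */

/*
 * Allocate one fixed-size block of scatterlist entries and chain it
 * onto the previous block; sg_chain() repurposes the last entry of
 * 'prev' as a link to the new block.
 */
static struct scatterlist *sgl_add_block(struct scatterlist *prev)
{
	struct scatterlist *sgl;

	sgl = kmalloc_array(SG_BLOCK_ENTS, sizeof(*sgl), GFP_KERNEL);
	if (!sgl)
		return NULL;
	sg_init_table(sgl, SG_BLOCK_ENTS);
	if (prev)
		sg_chain(prev, SG_BLOCK_ENTS, sgl);
	return sgl;
}

Chaining this way would amortize the allocation cost across the pinning
loop, at the price of one lost entry per block for the chain link.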
Link: https://lore.kernel.org/r/20200730015755.1827498-1-edumazet@google.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Diffstat (limited to 'drivers/infiniband')
-rw-r--r--   drivers/infiniband/core/umem.c   1
1 file changed, 1 insertion, 0 deletions
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 82455a1392f1..831bff8d52e5 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
+		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
 					  min_t(unsigned long, npages,
 						PAGE_SIZE /