author     Qing Huang <qing.huang@oracle.com>  2017-05-19 02:33:53 +0300
committer  Doug Ledford <dledford@redhat.com>  2017-06-02 00:20:12 +0300
commit     53376fedb9da54c0d3b0bd3a6edcbeb681692909 (patch)
tree       74f8555562491ee89c8674d4b8ca052d48d12034 /drivers/infiniband/core/umem.c
parent     f937d93a9122d1510ca6e4bb8d860aedcf9408c3 (diff)
download   linux-53376fedb9da54c0d3b0bd3a6edcbeb681692909.tar.xz
RDMA/core: don't set page dirty bit if it's already set
This change optimizes kernel memory deregistration. __ib_umem_release() used to call set_page_dirty_lock() on every writable page in its memory region; the purpose of that call is to keep data synced between the CPU and the DMA device if swapping happens after the memory has been deregistered. Now we skip setting the page dirty bit when it has already been set by the kernel prior to calling __ib_umem_release(). This cuts memory deregistration time by half or more in our application simulation tests.

Signed-off-by: Qing Huang <qing.huang@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Diffstat (limited to 'drivers/infiniband/core/umem.c')
 drivers/infiniband/core/umem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 3dbf811d3c51..21e60b1e2ff4 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -58,7 +58,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
 
 		page = sg_page(sg);
-		if (umem->writable && dirty)
+		if (!PageDirty(page) && umem->writable && dirty)
 			set_page_dirty_lock(page);
 		put_page(page);
 	}
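
For readers who want the surrounding context, below is a minimal sketch of the page-release loop in __ib_umem_release() after this patch. It is reconstructed from the hunk above rather than copied verbatim from umem.c; the standalone function form and the name release_pages_sketch() are assumptions made purely for illustration.

/*
 * Sketch of the page-release loop in __ib_umem_release() after this
 * patch. Reconstructed from the hunk above for illustration; the
 * wrapper name release_pages_sketch() and the standalone form are
 * assumptions, not the actual umem.c code.
 */
#include <linux/mm.h>           /* set_page_dirty_lock(), put_page() */
#include <linux/page-flags.h>   /* PageDirty() */
#include <linux/scatterlist.h>  /* for_each_sg(), sg_page() */
#include <rdma/ib_umem.h>       /* struct ib_umem */

static void release_pages_sketch(struct ib_umem *umem, int dirty)
{
	struct scatterlist *sg;
	struct page *page;
	int i;

	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
		page = sg_page(sg);
		/*
		 * set_page_dirty_lock() has to take the page lock, which is
		 * the costly step this patch avoids: skip it when the page
		 * is already marked dirty.
		 */
		if (!PageDirty(page) && umem->writable && dirty)
			set_page_dirty_lock(page);
		put_page(page);
	}
}

Per the commit message, the rationale is that redirtying a page the kernel has already marked dirty is redundant work, and set_page_dirty_lock() is comparatively expensive because of the page lock it takes, so the cheap PageDirty() test pays off across large registrations.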