author		Chuck Lever <chuck.lever@oracle.com>	2021-05-01 22:38:02 +0300
committer	Trond Myklebust <trond.myklebust@hammerspace.com>	2021-05-02 02:42:22 +0300
commit		9e895cd9649abe4392c59d14e31b0f5667d082d2 (patch)
tree		65806dfb3557a48362d1548ab031ff1498345d32 /net/sunrpc/xprtrdma
parent		f8f7e0fb22b2e75be55f2f0c13e229e75b0eac07 (diff)
download	linux-9e895cd9649abe4392c59d14e31b0f5667d082d2.tar.xz
xprtrdma: Fix a NULL dereference in frwr_unmap_sync()
The normal mechanism that invalidates and unmaps MRs is frwr_unmap_async(). frwr_unmap_sync() is used only when an RPC Reply bearing Write or Reply chunks has been lost (ie, almost never).

Coverity found that after commit 9a301cafc861 ("xprtrdma: Move fr_linv_done field to struct rpcrdma_mr"), the while() loop in frwr_unmap_sync() exits only once @mr is NULL, unconditionally causing subsequent dereferences of @mr to Oops.

I've tested this fix by creating a client that skips invoking frwr_unmap_async() when RPC Replies complete. That forces all invalidation tasks to fall upon frwr_unmap_sync(). Simple workloads with this fix applied to the adulterated client work as designed.

Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1504556 ("Null pointer dereferences")
Fixes: 9a301cafc861 ("xprtrdma: Move fr_linv_done field to struct rpcrdma_mr")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
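To make the failure mode concrete, below is a minimal, self-contained user-space sketch of the pattern the commit message describes. The list helper, the simplified struct layouts, the container_of() definition, and the main() driver are stand-ins invented for illustration; only the loop shape and the container_of() recovery of the last MR mirror the actual patch.

/* Sketch only: shows why @mr is NULL after the chaining loop and how
 * recovering the owner of the final WR via container_of() fixes it.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct ib_send_wr {
	struct ib_send_wr *next;
};

struct rpcrdma_mr {
	struct rpcrdma_mr *mr_next;	/* stand-in for list linkage */
	struct ib_send_wr mr_invwr;	/* LOCAL_INV work request */
};

/* Pop one MR off a singly linked list; returns NULL when the list is empty. */
static struct rpcrdma_mr *mr_pop(struct rpcrdma_mr **list)
{
	struct rpcrdma_mr *mr = *list;

	if (mr)
		*list = mr->mr_next;
	return mr;
}

static void unmap_sync_sketch(struct rpcrdma_mr *registered)
{
	struct ib_send_wr *first = NULL, **prev = &first, *last = NULL;
	struct rpcrdma_mr *mr;

	/* Chain one LOCAL_INV WR per registered MR. The loop exits only
	 * when mr_pop() returns NULL, so @mr is always NULL afterwards.
	 */
	while ((mr = mr_pop(&registered))) {
		last = &mr->mr_invwr;
		last->next = NULL;
		*prev = last;
		prev = &last->next;
	}

	/* Before the fix, any later use of @mr dereferenced NULL.
	 * The fix recovers the MR that owns the final WR in the chain;
	 * the kernel path reaches here only with at least one MR
	 * registered, so @last is non-NULL.
	 */
	mr = container_of(last, struct rpcrdma_mr, mr_invwr);

	printf("chain head %p, last MR %p\n", (void *)first, (void *)mr);
}

int main(void)
{
	struct rpcrdma_mr a = { 0 }, b = { 0 };

	a.mr_next = &b;
	unmap_sync_sketch(&a);
	return 0;
}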
Diffstat (limited to 'net/sunrpc/xprtrdma')
-rw-r--r--	net/sunrpc/xprtrdma/frwr_ops.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 285d73246fc2..229fcc9a9064 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -530,6 +530,7 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 		*prev = last;
 		prev = &last->next;
 	}
+	mr = container_of(last, struct rpcrdma_mr, mr_invwr);
 
 	/* Strong send queue ordering guarantees that when the
 	 * last WR in the chain completes, all WRs in the chain