author    Chuck Lever <chuck.lever@oracle.com>  2020-12-18 20:28:58 +0300
committer Chuck Lever <chuck.lever@oracle.com>  2020-12-18 20:28:58 +0300
commit    7b723008f9c95624c848fad661c01b06e47b20da
tree      9cf7454596dd5ee45b66339d7a61be9dffefbd8f /net/sunrpc
parent    4a85a6a3320b4a622315d2e0ea91a1d2b013bce4
NFSD: Restore NFSv4 decoding's SAVEMEM functionality
While converting the NFSv4 decoder to use xdr_stream-based XDR processing, I removed the old SAVEMEM() macro. This macro wrapped a bit of logic that avoided a memory allocation by recognizing when the decoded item resides in a linear section of the Receive buffer. In that case, it returned a pointer into that buffer instead of allocating a bounce buffer.

The bounce buffer is necessary only when xdr_inline_decode() has placed the decoded item in the xdr_stream's scratch buffer, which disappears the next time xdr_inline_decode() is called with that xdr_stream. That happens only if the data item crosses a page boundary in the receive buffer, an exceedingly rare occurrence.

Allocating a bounce buffer every time results in a minor performance regression that was introduced by the recent NFSv4 decoder overhaul. Let's restore the previous behavior. On average, it saves about 1.5 kmalloc() calls per COMPOUND.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
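For context, below is a minimal sketch of the SAVEMEM idea the message describes: hand back a pointer into the stable, linear receive buffer when the decoded item already lives there, and fall back to an allocated bounce-buffer copy only when the item was assembled in a temporary scratch area (in the real decoder, when xdr_inline_decode() had to reassemble an item that crosses a page boundary). This is a userspace illustration, not the kernel implementation; decode_ctx and savemem_or_copy() are hypothetical names.

	/*
	 * Illustrative userspace sketch of the SAVEMEM pattern described above;
	 * this is not the kernel code, and all names here are hypothetical.
	 */
	#include <stdlib.h>
	#include <string.h>

	struct decode_ctx {
		char   *buf;          /* linear section of the receive buffer (stable) */
		size_t  buflen;
		char    scratch[64];  /* temporary area, reused by the next decode call */
	};

	/*
	 * If @item points into the stable receive buffer, hand it back unchanged:
	 * no allocation needed. If it points elsewhere (e.g. the scratch area,
	 * whose contents vanish on the next decode call), copy it into a freshly
	 * allocated bounce buffer that the caller must free.
	 */
	static void *savemem_or_copy(struct decode_ctx *ctx, void *item, size_t len)
	{
		char *p = item;

		if (p >= ctx->buf && p + len <= ctx->buf + ctx->buflen)
			return item;            /* common case: zero-copy reference */

		void *bounce = malloc(len);     /* rare case: item was assembled in scratch */
		if (!bounce)
			return NULL;
		memcpy(bounce, item, len);
		return bounce;
	}

The design point is the same one the commit message makes: the copy path exists only for correctness in the rare page-crossing case, so taking the zero-copy path whenever the item is already in stable memory avoids an allocation on nearly every decode.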
Diffstat (limited to 'net/sunrpc')
0 files changed, 0 insertions, 0 deletions