author     Linus Torvalds <torvalds@linux-foundation.org>  2015-10-31 02:53:57 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2015-11-01 02:12:10 +0300
commit     f3f86e33dc3da437fa4f204588ce7c78ea756982 (patch)
tree       b74105990139f8102016b786bff0e29768ef7828 /include/linux/sizes.h
parent     8a28d67457b613258aa0578ccece206d166f2b9f (diff)
download   linux-f3f86e33dc3da437fa4f204588ce7c78ea756982.tar.xz
vfs: Fix pathological performance case for __alloc_fd()
Al Viro points out that:

 > > * [Linux-specific aside] our __alloc_fd() can degrade quite badly
 > > with some use patterns.  The cacheline pingpong in the bitmap is
 > > probably inevitable, unless we accept considerably heavier memory
 > > footprint, but we also have a case when alloc_fd() takes O(n) and
 > > it's _not_ hard to trigger - close(3);open(...); will have the next
 > > open() after that scanning the entire in-use bitmap.

And Eric Dumazet has a somewhat realistic multithreaded microbenchmark
that opens and closes a lot of sockets with minimal work per socket.

This patch largely fixes it.  We keep a 2nd-level bitmap of the open
file bitmaps, showing which words are already full.  So then we can
traverse that second-level bitmap to efficiently skip already allocated
file descriptors.

On his benchmark, this improves performance by up to an order of
magnitude, by avoiding the excessive open file bitmap scanning.

Tested-and-acked-by: Eric Dumazet <edumazet@google.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
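The sketch below illustrates the second-level bitmap idea the commit message describes: one bit per descriptor in the low-level bitmap, plus one bit per completely-full word in a higher-level bitmap, so the allocator can step over full words instead of re-scanning every in-use bit. This is a minimal user-space sketch for GCC/Clang, not the kernel's implementation; the names (fd_bits, full_bits, alloc_fd, free_fd) and sizes are illustrative assumptions.

/* Minimal sketch of a two-level fd bitmap; illustrative names, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define MAX_FDS  1024
#define BITS     64
#define NWORDS   (MAX_FDS / BITS)              /* words in the fd bitmap            */
#define NFULL    ((NWORDS + BITS - 1) / BITS)  /* words in the second-level bitmap  */

static uint64_t fd_bits[NWORDS];   /* one bit per descriptor            */
static uint64_t full_bits[NFULL];  /* one bit per completely-full word  */

/* Lowest clear bit; caller guarantees word != ~0ULL. */
static unsigned lowest_clear(uint64_t word) { return (unsigned)__builtin_ctzll(~word); }

/* Find and claim the lowest free descriptor, or return -1 when none is left. */
static int alloc_fd(void)
{
    for (unsigned fw = 0; fw < NFULL; fw++) {
        if (full_bits[fw] == ~0ULL)
            continue;                              /* all 64 words here are full   */
        unsigned w = fw * BITS + lowest_clear(full_bits[fw]);
        if (w >= NWORDS)
            return -1;                             /* every real word is full      */
        unsigned bit = lowest_clear(fd_bits[w]);   /* word is not full by invariant */
        fd_bits[w] |= 1ULL << bit;
        if (fd_bits[w] == ~0ULL)                   /* word just became full        */
            full_bits[fw] |= 1ULL << (w % BITS);
        return (int)(w * BITS + bit);
    }
    return -1;
}

static void free_fd(int fd)
{
    unsigned w = (unsigned)fd / BITS;
    fd_bits[w] &= ~(1ULL << (fd % BITS));
    full_bits[w / BITS] &= ~(1ULL << (w % BITS));  /* word is no longer full       */
}

int main(void)
{
    for (int i = 0; i < 200; i++)                  /* fill descriptors 0..199      */
        alloc_fd();
    free_fd(3);                                    /* the close(3); open(...) case */
    printf("next fd = %d\n", alloc_fd());          /* prints 3 without a bit-by-bit scan */
    return 0;
}

In the close(3); open(...) pattern from the commit message, the scan consults full_bits first, so fully allocated words below the hole are skipped in bulk rather than being walked one descriptor at a time.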
Diffstat (limited to 'include/linux/sizes.h')
0 files changed, 0 insertions, 0 deletions