path: root/include/linux/kernel_stat.h
authorPablo Neira Ayuso <pablo@netfilter.org>2026-01-21 03:08:45 +0300
committerFlorian Westphal <fw@strlen.de>2026-01-22 19:18:13 +0300
commit7e43e0a1141deec651a60109dab3690854107298 (patch)
tree46e943032883960dfc6bc4288a8afded6ffde214 /include/linux/kernel_stat.h
parentf175b46d9134f708358b5404730c6dfa200fbf3c (diff)
netfilter: nft_set_rbtree: translate rbtree to array for binary search
The rbtree can temporarily store overlapping inactive elements during
transaction processing, leading to false negative lookups. To address this
issue, this patch adds a .commit function that walks the rbtree to build an
array of ordered, non-overlapping intervals. This conversion compacts the two
singleton elements that represent the start and the end of an interval into a
single interval object, for space efficiency. Binary search is O(log n),
similar to rbtree lookup time, so performance should be comparable; the
implementation available in lib/bsearch.c and include/linux/bsearch.h is used
for this purpose. This slightly increases memory consumption, since the new
array stores pointers to the start and the end of each interval.

With this patch:

 # time nft -f 100k-intervals-set.nft
 real	0m4.218s
 user	0m3.544s
 sys	0m0.400s

Without this patch:

 # time nft -f 100k-intervals-set.nft
 real	0m3.920s
 user	0m3.547s
 sys	0m0.276s

With this patch, with IPv4 intervals:

 baseline rbtree (match on first field only): 15254954pps

Without this patch:

 baseline rbtree (match on first field only): 10256119pps

This provides a ~50% improvement in matching intervals from the packet path.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Florian Westphal <fw@strlen.de>
Diffstat (limited to 'include/linux/kernel_stat.h')
0 files changed, 0 insertions, 0 deletions