author	Florian Westphal <fw@strlen.de>	2016-11-22 16:44:19 +0300
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2018-03-18 13:18:54 +0300
commit	db6a0cbeb940898dd56167241f68972b50890566
tree	ff2657259b0e2175f6dfe992beada033aee1b693 /include
parent	dac4448faf499b2597536aecfdb46eeace17a243
netfilter: x_tables: pack percpu counter allocations
commit ae0ac0ed6fcf5af3be0f63eb935f483f44a402d2 upstream.
Instead of allocating each xt_counter individually, allocate 4k chunks
and then use these for counter allocation requests.

This should speed up rule evaluation by increasing data locality; it
also speeds up ruleset loading because we reduce the number of calls to
the percpu allocator.

As Eric points out, we can't use PAGE_SIZE: the page allocator would
fail on arches with a 64k page size.
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/netfilter/x_tables.h	7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
index 58f37f8a3d8b..9bfeb88fb940 100644
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -375,8 +375,13 @@ static inline unsigned long ifname_compare_aligned(const char *_a,
 	return ret;
 }
 
+struct xt_percpu_counter_alloc_state {
+	unsigned int off;
+	const char __percpu *mem;
+};
 
-bool xt_percpu_counter_alloc(struct xt_counters *counters);
+bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
+			     struct xt_counters *counter);
 void xt_percpu_counter_free(struct xt_counters *cnt);
 
 static inline struct xt_counters *
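The hunk above only shows the header side of the change: the new per-ruleset
allocation state and the widened prototype. The allocator itself lives in
net/netfilter/x_tables.c; roughly, it hands out sizeof(struct xt_counters)
slots from a 4k percpu chunk until the chunk is exhausted, then grabs a fresh
one. A sketch of that logic, reconstructed from the commit description (names
and details follow the upstream commit ae0ac0ed6fcf and may differ slightly):

#define XT_PCPU_BLOCK_SIZE 4096

bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
			     struct xt_counters *counter)
{
	/* a chunk must hold at least two counters for chunking to pay off */
	BUILD_BUG_ON(XT_PCPU_BLOCK_SIZE < (sizeof(*counter) * 2));

	if (nr_cpu_ids <= 1)
		return true;	/* uniprocessor: no percpu counters needed */

	if (!state->mem) {
		/* carve a new 4k chunk out of the percpu allocator */
		state->mem = __alloc_percpu(XT_PCPU_BLOCK_SIZE,
					    XT_PCPU_BLOCK_SIZE);
		if (!state->mem)
			return false;
	}
	/* hand the caller the next free slot in the current chunk */
	counter->pcnt = (__force unsigned long)(state->mem + state->off);
	state->off += sizeof(*counter);
	if (state->off > (XT_PCPU_BLOCK_SIZE - sizeof(*counter))) {
		/* chunk exhausted; the next call allocates a fresh one */
		state->mem = NULL;
		state->off = 0;
	}
	return true;
}

Callers zero one xt_percpu_counter_alloc_state at the start of a ruleset
translation and pass it to every per-rule allocation, so counters for
consecutive rules land next to each other inside the same percpu chunk, which
is where the data-locality win during rule evaluation comes from.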