author    | Toke Høiland-Jørgensen <toke@redhat.com> | 2021-06-24 19:05:56 +0300
committer | Daniel Borkmann <daniel@iogearbox.net> | 2021-06-24 20:43:11 +0300
commit    | 77151ccf10659d4066074f278402032f3265f0cc (patch)
tree      | 0e91b4e0e041da86beb85813ee2fa49e8ab497c0 /net/sched/act_bpf.c
parent    | 782347b6bcad07ddb574422e01e22c92e05928c8 (diff)
download  | linux-77151ccf10659d4066074f278402032f3265f0cc.tar.xz
bpf, sched: Remove unneeded rcu_read_lock() around BPF program invocation
The rcu_read_lock() calls in cls_bpf and act_bpf are redundant: on the TX
side, there's already a call to rcu_read_lock_bh() in __dev_queue_xmit(),
and on RX there's a covering rcu_read_lock() in
netif_receive_skb{,_list}_internal().
With the previous patches we also amended the lockdep checks in the map
code to not require any particular RCU flavour, so we can just get rid of
the rcu_read_lock()s.
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210624160609.292325-7-toke@redhat.com
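For illustration, the calling convention the commit message relies on can be
sketched roughly as follows. This is a simplified, hypothetical example
(struct tc_hook, run_tc_hook() and xmit_path() are made-up names, not kernel
APIs): the covering RCU read-side section is opened by the networking core,
so the hook itself only needs an rcu_dereference() whose lockdep condition is
satisfied by either RCU flavour.

/* Hypothetical sketch only -- not code from this patch. */
#include <linux/rcupdate.h>
#include <linux/skbuff.h>
#include <linux/filter.h>

struct tc_hook {
	struct bpf_prog __rcu *filter;
};

static u32 run_tc_hook(struct tc_hook *hook, struct sk_buff *skb)
{
	struct bpf_prog *filter;

	/*
	 * No rcu_read_lock()/rcu_read_unlock() pair here: on TX the caller
	 * (__dev_queue_xmit()) already holds rcu_read_lock_bh(), and on RX
	 * netif_receive_skb{,_list}_internal() holds rcu_read_lock().  The
	 * check below is satisfied by either flavour, since
	 * rcu_dereference_check() also accepts rcu_read_lock_held().
	 */
	filter = rcu_dereference_check(hook->filter, rcu_read_lock_bh_held());
	return bpf_prog_run(filter, skb);
}

static u32 xmit_path(struct tc_hook *hook, struct sk_buff *skb)
{
	u32 res;

	rcu_read_lock_bh();	/* covering section, as in __dev_queue_xmit() */
	res = run_tc_hook(hook, skb);
	rcu_read_unlock_bh();
	return res;
}

The point of the patch is exactly that this outer section already exists on
both paths, so the inner rcu_read_lock()/rcu_read_unlock() pair removed below
was pure overhead.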
Diffstat (limited to 'net/sched/act_bpf.c')
-rw-r--r-- | net/sched/act_bpf.c | 2
1 file changed, 0 insertions, 2 deletions
diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
index e48e980c3b93..e409a0005717 100644
--- a/net/sched/act_bpf.c
+++ b/net/sched/act_bpf.c
@@ -43,7 +43,6 @@ static int tcf_bpf_act(struct sk_buff *skb, const struct tc_action *act,
 	tcf_lastuse_update(&prog->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(prog->common.cpu_bstats), skb);
 
-	rcu_read_lock();
 	filter = rcu_dereference(prog->filter);
 	if (at_ingress) {
 		__skb_push(skb, skb->mac_len);
@@ -56,7 +55,6 @@ static int tcf_bpf_act(struct sk_buff *skb, const struct tc_action *act,
 	}
 	if (skb_sk_is_prefetched(skb) && filter_res != TC_ACT_OK)
 		skb_orphan(skb);
-	rcu_read_unlock();
 
 	/* A BPF program may overwrite the default action opcode.
 	 * Similarly as in cls_bpf, if filter_res == -1 we use the
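For reference, the "no particular RCU flavour" lockdep checks mentioned in
the commit message, which the earlier patches in this series applied to the
BPF map lookup paths, have roughly the following shape. This is an
illustrative sketch (assert_bpf_rcu_context() is a made-up helper name), not
a quote of the map code.

#include <linux/bug.h>
#include <linux/rcupdate.h>
#include <linux/rcupdate_trace.h>

/* Sketch: warn unless *some* RCU read-side section protects the lookup. */
static void assert_bpf_rcu_context(void)
{
	WARN_ON_ONCE(!rcu_read_lock_held() &&
		     !rcu_read_lock_trace_held() &&
		     !rcu_read_lock_bh_held());
}

With a check of that shape in place, a map lookup performed by the BPF
program while running under rcu_read_lock_bh() alone (the TX path here) no
longer trips lockdep, so the explicit rcu_read_lock() around the program
invocation is not needed for that purpose either.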