author		Tao Liu <thomas.liu@ucloud.cn>	2021-05-13 16:08:00 +0300
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2021-06-03 09:59:13 +0300
commit		5c01181700ab93b3892c0a2730915a3bc1f582f3 (patch)
tree		2921681f53cc06b72dbad17f4e1267fa42ce7cc2 /net
parent		5bfdc481d8123f35a0fade7453c9641f2305b643 (diff)
download	linux-5c01181700ab93b3892c0a2730915a3bc1f582f3.tar.xz
openvswitch: meter: fix race when getting now_ms.
[ Upstream commit e4df1b0c24350a0f00229ff895a91f1072bd850d ]

We have observed meters working unexpectedly when traffic is 3+Gbit/s
with multiple connections.

now_ms is not protected by meter->lock, so we may get a negative
long_delta_ms when another cpu has updated meter->used. Then:

    delta_ms = (u32)long_delta_ms;

yields a very large value, and

    band->bucket += delta_ms * band->rate;

gives a wrong band->bucket.

The OpenVswitch userspace datapath fixed the same issue[1] some time
ago, and we port that implementation to the kernel datapath.

[1] https://patchwork.ozlabs.org/project/openvswitch/patch/20191025114436.9746-1-i.maximets@ovn.org/

Fixes: 96fbc13d7e77 ("openvswitch: Add meter infrastructure")
Signed-off-by: Tao Liu <thomas.liu@ucloud.cn>
Suggested-by: Ilya Maximets <i.maximets@ovn.org>
Reviewed-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
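For illustration, here is a minimal user-space C sketch (not kernel code) of the
wraparound the patch guards against. The sample timestamps, the "rate" value and
the main() harness are made up; only the (u32) cast of a negative delta and the
clamp-to-zero fix mirror the meter code.

/*
 * Sketch: casting a small negative delta to u32 wraps to a huge value,
 * which is what inflated band->bucket before this patch. Values are
 * illustrative, not taken from the kernel.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t now_ms = 1000;                        /* this CPU's time snapshot */
	int64_t used   = 1003;                        /* another CPU already advanced meter->used */
	int64_t long_delta_ms = now_ms - used;        /* -3 */

	uint32_t delta_ms = (uint32_t)long_delta_ms;  /* wraps to 4294967293 */
	uint64_t rate = 1000;                         /* illustrative band rate */
	uint64_t bucket = 0;

	bucket += (uint64_t)delta_ms * rate;          /* enormous bogus credit */
	printf("unclamped delta_ms = %" PRIu32 ", bucket = %" PRIu64 "\n",
	       delta_ms, bucket);

	/* The fix: clamp a negative delta to 0 before the cast. */
	if (long_delta_ms < 0)
		long_delta_ms = 0;
	delta_ms = (uint32_t)long_delta_ms;
	printf("clamped delta_ms = %" PRIu32 "\n", delta_ms);
	return 0;
}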
Diffstat (limited to 'net')
-rw-r--r--	net/openvswitch/meter.c	8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
index 541eea74ef7a..c37e09223cbb 100644
--- a/net/openvswitch/meter.c
+++ b/net/openvswitch/meter.c
@@ -460,6 +460,14 @@ bool ovs_meter_execute(struct datapath *dp, struct sk_buff *skb,
 	spin_lock(&meter->lock);
 	long_delta_ms = (now_ms - meter->used); /* ms */
+	if (long_delta_ms < 0) {
+		/* This condition means that we have several threads fighting
+		 * for a meter lock, and the one who received the packets a
+		 * bit later wins. Assuming that all racing threads received
+		 * packets at the same time to avoid overflow.
+		 */
+		long_delta_ms = 0;
+	}
 	/* Make sure delta_ms will not be too large, so that bucket will not
 	 * wrap around below.