author | Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> | 2014-03-11 00:42:51 +0400
---|---|---
committer | Rafael J. Wysocki <rafael.j.wysocki@intel.com> | 2014-03-20 16:43:48 +0400
commit | e30a293e8ad7e6048d6d88bcc114094f964bd67b (patch) |
tree | 143269e0c4f0d4df9a3c9bef40964afab32fe3db /net/core/flow.c |
parent | 576378249c8e0a020aafeaa702c834dff81dd596 (diff) |
download | linux-e30a293e8ad7e6048d6d88bcc114094f964bd67b.tar.xz |
net/core/flow.c: Fix CPU hotplug callback registration
Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:
    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();
This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
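To make the ABBA ordering concrete, here is a minimal userspace sketch of the two lock-ordering paths, with pthread mutexes standing in for cpu_hotplug.lock and cpu_add_remove_lock (the kernel call chains named in the comments are approximate, and the program intentionally deadlocks):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Userspace stand-ins for cpu_hotplug.lock and cpu_add_remove_lock. */
    static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t add_remove_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Buggy registration path: get_online_cpus() then register_cpu_notifier(),
     * i.e. hotplug_lock -> add_remove_lock. */
    static void *subsys_init(void *unused)
    {
            pthread_mutex_lock(&hotplug_lock);      /* get_online_cpus() */
            sleep(1);                               /* widen the race window */
            pthread_mutex_lock(&add_remove_lock);   /* register_cpu_notifier() */
            puts("registration done");
            pthread_mutex_unlock(&add_remove_lock);
            pthread_mutex_unlock(&hotplug_lock);
            return NULL;
    }

    /* Concurrent hotplug operation: cpu_down() takes the locks in the
     * opposite order, i.e. add_remove_lock -> hotplug_lock. */
    static void *hotplug_op(void *unused)
    {
            pthread_mutex_lock(&add_remove_lock);   /* cpu_maps_update_begin() */
            sleep(1);
            pthread_mutex_lock(&hotplug_lock);      /* cpu_hotplug_begin() */
            puts("hotplug done");
            pthread_mutex_unlock(&hotplug_lock);
            pthread_mutex_unlock(&add_remove_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, subsys_init, NULL);
            pthread_create(&b, NULL, hotplug_op, NULL);

            /* Each thread now holds one lock and waits for the other,
             * so the joins below never return. */
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }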
Instead, the correct and race-free way of performing the callback
registration is:
    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();
Fix the code in net/core/flow.c by using this latter form of callback
registration.
Cc: Li RongQing <roy.qing.li@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Diffstat (limited to 'net/core/flow.c')
-rw-r--r-- | net/core/flow.c | 8 |
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/core/flow.c b/net/core/flow.c
index dfa602ceb8cd..9a2151f5d593 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -456,6 +456,8 @@ static int __init flow_cache_init(struct flow_cache *fc)
 	if (!fc->percpu)
 		return -ENOMEM;
 
+	cpu_notifier_register_begin();
+
 	for_each_online_cpu(i) {
 		if (flow_cache_cpu_prepare(fc, i))
 			goto err;
@@ -463,7 +465,9 @@ static int __init flow_cache_init(struct flow_cache *fc)
 	fc->hotcpu_notifier = (struct notifier_block){
 		.notifier_call = flow_cache_cpu,
 	};
-	register_hotcpu_notifier(&fc->hotcpu_notifier);
+	__register_hotcpu_notifier(&fc->hotcpu_notifier);
+
+	cpu_notifier_register_done();
 
 	setup_timer(&fc->rnd_timer, flow_cache_new_hashrnd,
 		    (unsigned long) fc);
@@ -479,6 +483,8 @@ err:
 		fcp->hash_table = NULL;
 	}
 
+	cpu_notifier_register_done();
+
 	free_percpu(fc->percpu);
 	fc->percpu = NULL;