author | Thomas Gleixner <tglx@linutronix.de> | 2012-05-07 21:59:47 +0400
committer | Thomas Gleixner <tglx@linutronix.de> | 2012-05-08 14:35:05 +0400
commit | 9cd75e13de2dcf32ecc21c7f277cff3c0ced059e (patch)
tree | ee14db83846f870aff7cc9e4c5e20633bc2c0136 /arch/powerpc/kernel/idle.c
parent | 392d9215782595e92afb318c0d48c930f8e571f0 (diff)
download | linux-9cd75e13de2dcf32ecc21c7f277cff3c0ced059e.tar.xz
powerpc: Fix broken cpu_idle_wait() implementation
commit 771dae818 (powerpc/cpuidle: Add cpu_idle_wait() to allow
switching of idle routines) implemented cpu_idle_wait() for powerpc.
The changelog says:
"The equivalent routine for x86 is in arch/x86/kernel/process.c
but the powerpc implementation is different.":
Unfortunately the changelog is completely useless as it does not tell
_WHY_ it is different.
Aside from being different, the implementation is patently wrong.
The rescheduling IPI is asynchronous. That means there is no guarantee
that the other cores have executed the IPI when cpu_idle_wait()
returns. But that's the whole purpose of this function: to guarantee
that no CPU uses the old idle handler anymore.
Use the smp_call_function() based implementation, which fulfils the
requirements.
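For illustration only (a sketch, not part of the patch; kick_cpu_async()
and kick_cpus_sync() are made-up names, while the smp_* calls are the
real kernel APIs), the difference boils down to the wait argument:

#include <linux/smp.h>

static void do_nothing(void *unused)
{
}

/*
 * Old scheme: fire and forget.  smp_send_reschedule() only queues an
 * IPI and returns immediately, so the target CPU may still be running
 * the old idle routine when the caller carries on.
 */
static void kick_cpu_async(int cpu)
{
	smp_send_reschedule(cpu);
}

/*
 * New scheme: with wait == 1, smp_call_function() does not return
 * until every other online CPU has executed do_nothing() in interrupt
 * context, i.e. has left whatever idle routine it was sitting in and
 * will see the new handler on the next idle entry.
 */
static void kick_cpus_sync(void)
{
	smp_call_function(do_nothing, NULL, 1);
}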
[ This code is going to be replaced by a core version to remove all the
pointless copies in arch/*, but this one should go to stable ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Arun R Bharadwaj <arun.r.bharadwaj@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120507175651.980164748@linutronix.de
Cc: stable@vger.kernel.org
Diffstat (limited to 'arch/powerpc/kernel/idle.c')
-rw-r--r-- | arch/powerpc/kernel/idle.c | 14
1 file changed, 5 insertions, 9 deletions
diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
index 6d2209ac0c44..04d79093d7a1 100644
--- a/arch/powerpc/kernel/idle.c
+++ b/arch/powerpc/kernel/idle.c
@@ -113,6 +113,9 @@ void cpu_idle(void)
 	}
 }
 
+static void do_nothing(void *unused)
+{
+}
 
 /*
  * cpu_idle_wait - Used to ensure that all the CPUs come out of the old
@@ -123,16 +126,9 @@ void cpu_idle(void)
  */
 void cpu_idle_wait(void)
 {
-	int cpu;
 	smp_mb();
-
-	/* kick all the CPUs so that they exit out of old idle routine */
-	get_online_cpus();
-	for_each_online_cpu(cpu) {
-		if (cpu != smp_processor_id())
-			smp_send_reschedule(cpu);
-	}
-	put_online_cpus();
+	/* kick all the CPUs so that they exit out of pm_idle */
+	smp_call_function(do_nothing, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(cpu_idle_wait);
 
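For completeness, a hedged usage sketch of the guarantee from the
caller's side; installed_idle and my_new_idle() are hypothetical names,
not identifiers from the powerpc tree:

#include <linux/smp.h>

extern void (*installed_idle)(void);	/* hypothetical idle-routine hook */
extern void cpu_idle_wait(void);

static void my_new_idle(void)
{
	/* the new low-power loop would live here */
}

static void switch_idle_routine(void)
{
	/* Publish the new handler first ... */
	installed_idle = my_new_idle;

	/*
	 * ... then wait.  On return no CPU can still be executing the
	 * old handler, so its code and data may safely go away.
	 */
	cpu_idle_wait();
}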