author     Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2021-10-22 19:04:02 +0300
committer  Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2021-10-26 16:52:07 +0300
commit     8d89835b0467b7e618c1c93603c1aff85a0c3c66
tree       87d331bf331b6c26f4ff870dd39f730b585402f5   kernel/power
parent     928265e3601cde78c7e0a3e518a93b27defed3b1
PM: suspend: Do not pause cpuidle in the suspend-to-idle path
It is pointless to pause cpuidle in the suspend-to-idle path,
because it is going to be resumed in the same path later and
pausing it does not serve any particular purpose in that case.
Rework the code to avoid doing that.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
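The effect of the change can be modeled with a small standalone sketch. This is plain userspace C rather than kernel code: the enum constants and the cpuidle_pause()/cpuidle_resume()/noirq-phase helpers below are illustrative stubs standing in for the real kernel symbols, and only the conditional skip that suspend_enter() gains in this patch is mirrored.

/*
 * Standalone model of the patched suspend_enter() flow.  All functions
 * and enum values here are stubs for illustration only; they are not
 * the kernel's implementations.
 */
#include <stdio.h>

enum pm_state { PM_SUSPEND_TO_IDLE, PM_SUSPEND_MEM };

static void cpuidle_pause(void)  { puts("  cpuidle paused"); }
static void cpuidle_resume(void) { puts("  cpuidle resumed"); }
static void noirq_phase(void)    { puts("  noirq suspend/resume phase"); }

/* Mirrors the patch: pause/resume cpuidle only for states other than s2idle. */
static void suspend_enter_model(enum pm_state state)
{
	if (state != PM_SUSPEND_TO_IDLE)
		cpuidle_pause();

	noirq_phase();

	if (state != PM_SUSPEND_TO_IDLE)
		cpuidle_resume();
}

int main(void)
{
	puts("suspend-to-idle:");
	suspend_enter_model(PM_SUSPEND_TO_IDLE);	/* no pause/resume pair */

	puts("suspend-to-RAM:");
	suspend_enter_model(PM_SUSPEND_MEM);		/* pause, then resume */

	return 0;
}

For the other suspend states the pause/resume pair is kept, so their behavior there is unchanged; only the suspend-to-idle path stops touching cpuidle around the noirq phase.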
Diffstat (limited to 'kernel/power')
 kernel/power/suspend.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index eb75f394a059..529d7818513f 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -97,7 +97,6 @@ static void s2idle_enter(void)
 	raw_spin_unlock_irq(&s2idle_lock);
 
 	cpus_read_lock();
-	cpuidle_resume();
 
 	/* Push all the CPUs into the idle loop. */
 	wake_up_all_idle_cpus();
@@ -105,7 +104,6 @@ static void s2idle_enter(void)
 	swait_event_exclusive(s2idle_wait_head,
 			      s2idle_state == S2IDLE_STATE_WAKE);
 
-	cpuidle_pause();
 	cpus_read_unlock();
 
 	raw_spin_lock_irq(&s2idle_lock);
@@ -405,6 +403,9 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
 	if (error)
 		goto Devices_early_resume;
 
+	if (state != PM_SUSPEND_TO_IDLE)
+		cpuidle_pause();
+
 	error = dpm_suspend_noirq(PMSG_SUSPEND);
 	if (error) {
 		pr_err("noirq suspend of devices failed\n");
@@ -459,6 +460,9 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
 	dpm_resume_noirq(PMSG_RESUME);
 
  Platform_early_resume:
+	if (state != PM_SUSPEND_TO_IDLE)
+		cpuidle_resume();
+
 	platform_resume_early(state);
 
  Devices_early_resume: