author     Linus Torvalds <torvalds@linux-foundation.org>   2017-11-14 06:43:50 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>   2017-11-14 06:43:50 +0300
commit     bd2cd7d5a8f83ddc761025f42a3ca8e56351a6cc
tree       6ea70f09f32544f895020e198dac632145332cc2 /drivers/acpi
parent     b29c6ef7bb1257853c1e31616d84f55e561cf631
parent     990a848d537e4da966907c8ccec95bc568f2911c
download   linux-bd2cd7d5a8f83ddc761025f42a3ca8e56351a6cc.tar.xz
Merge tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"There are no real big ticket items here this time.
The most noticeable change is probably the relocation of the OPP
(Operating Performance Points) framework to its own directory under
drivers/ as it has grown big enough for that. Also Viresh is now going
to maintain it and send pull requests for it to me, so you will see
this change in the git history going forward (but still not right
now).
Another noticeable set of changes is the modifications of the PM core,
the PCI subsystem and the ACPI PM domain to allow for more integration
between system-wide suspend/resume and runtime PM. For now it's just a
way to avoid resuming devices from runtime suspend unnecessarily
during system suspend (if the driver sets a flag to indicate its
readiness for that); an analogous mechanism to allow devices to stay
suspended after system resume is in the works.
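As a rough sketch of the opt-in described above (assuming the DPM_FLAG_SMART_SUSPEND and DPM_FLAG_SMART_PREPARE flags and the dev_pm_set_driver_flags() helper introduced in this cycle; the driver and device names are hypothetical), a driver would simply set the flags at probe time:

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver, shown only to illustrate where the flags are set. */
static int foo_probe(struct platform_device *pdev)
{
	/*
	 * Tell the PM core (and the ACPI PM domain / PCI bus type) that this
	 * driver can cope with the device staying runtime-suspended across
	 * system suspend, so it need not be resumed only to be suspended again.
	 */
	dev_pm_set_driver_flags(&pdev->dev,
				DPM_FLAG_SMART_SUSPEND | DPM_FLAG_SMART_PREPARE);

	pm_runtime_enable(&pdev->dev);
	return 0;
}

static int foo_remove(struct platform_device *pdev)
{
	pm_runtime_disable(&pdev->dev);
	return 0;
}

static struct platform_driver foo_driver = {
	.probe = foo_probe,
	.remove = foo_remove,
	.driver = {
		.name = "foo-pm-flags-example",
	},
};
module_platform_driver(foo_driver);
MODULE_LICENSE("GPL");
```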
In addition to that, there are changes to support frequency-invariant
CPU utilization metrics in the scheduler and in the schedutil cpufreq
governor on ARM, and changes that add support for device performance
states to the generic power domains (genpd) framework.
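For the genpd performance-state support mentioned above, the consumer side boils down to a single call; the sketch below assumes the dev_pm_genpd_set_performance_state() helper added by this series, and the state value is purely illustrative:

```c
#include <linux/device.h>
#include <linux/pm_domain.h>

/*
 * Hypothetical consumer: ask the power domain that 'dev' is attached to
 * for at least performance state 3 (the numbering is platform-specific).
 * The genpd core forwards the request to the domain's
 * ->set_performance_state() callback, if the domain provides one.
 */
static int foo_boost_domain(struct device *dev)
{
	return dev_pm_genpd_set_performance_state(dev, 3);
}
```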
The rest is mostly fixes and cleanups of various sorts.
Specifics:
- Relocate the OPP (Operating Performance Points) framework to its
own directory under drivers/ and add support for power domain
performance states to it (Viresh Kumar).
- Modify the PM core, the PCI bus type and the ACPI PM domain to
support power management driver flags allowing device drivers to
specify their capabilities and preferences regarding the handling
of devices with runtime PM enabled during system suspend/resume and
clean up that code somewhat (Rafael Wysocki, Ulf Hansson).
- Add frequency-invariant accounting support to the task scheduler on
ARM and ARM64 (Dietmar Eggemann).
- Fix the PM QoS device resume latency framework to prevent "no
restriction" requests from overriding requests with specific
requirements and drop the confusing PM_QOS_FLAG_REMOTE_WAKEUP
device PM QoS flag (Rafael Wysocki).
- Drop legacy class suspend/resume operations from the PM core and
drop legacy bus type suspend and resume callbacks from ARM/locomo
(Rafael Wysocki).
- Add min/max frequency support to devfreq and clean it up somewhat
(Chanwoo Choi).
- Rework wakeup support in the generic power domains (genpd)
framework and update some of its users accordingly (Geert
Uytterhoeven).
- Convert timers in the PM core to use timer_setup() (Kees Cook); see the short timer_setup() sketch after this list.
- Add support for exposing the SLP_S0 (Low Power S0 Idle) residency
counter based on the LPIT ACPI table on Intel platforms (Srinivas
Pandruvada).
- Add per-CPU PM QoS resume latency support to the ladder cpuidle
governor (Ramesh Thomas).
- Fix a deadlock between the wakeup notify handler and the notifier
removal in the ACPI core (Ville Syrjälä).
- Fix a cpufreq schedutil governor issue causing it to use stale
cached frequency values sometimes (Viresh Kumar).
- Fix an issue in the system suspend core support code causing wakeup
event detection to fail in some cases (Rajat Jain).
- Fix the generic power domains (genpd) framework to prevent the PM
core from using the direct-complete optimization with it as that is
guaranteed to fail (Ulf Hansson).
- Fix a minor issue in the cpuidle core and clean it up a bit (Gaurav
Jindal, Nicholas Piggin).
- Fix and clean up the intel_idle and ARM cpuidle drivers (Jason
Baron, Len Brown, Leo Yan).
- Fix a couple of minor issues in the OPP framework and clean it up
(Arvind Yadav, Fabio Estevam, Sudeep Holla, Tobias Jordan).
- Fix and clean up some cpufreq drivers and fix a minor issue in the
cpufreq statistics code (Arvind Yadav, Bhumika Goyal, Fabio
Estevam, Gautham Shenoy, Gustavo Silva, Marek Szyprowski, Masahiro
Yamada, Robert Jarzmik, Zumeng Chen).
- Fix minor issues in the system suspend and hibernation core, in
power management documentation and in the AVS (Adaptive Voltage
Scaling) framework (Helge Deller, Himanshu Jha, Joe Perches, Rafael
Wysocki).
- Fix some issues in the cpupower utility and document that Shuah
Khan is going to maintain it going forward (Prarit Bhargava, Shuah
Khan)"
* tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (88 commits)
tools/power/cpupower: add libcpupower.so.0.0.1 to .gitignore
tools/power/cpupower: Add 64 bit library detection
intel_idle: Graceful probe failure when MWAIT is disabled
cpufreq: schedutil: Reset cached_raw_freq when not in sync with next_freq
freezer: Fix typo in freezable_schedule_timeout() comment
PM / s2idle: Clear the events_check_enabled flag
cpufreq: stats: Handle the case when trans_table goes beyond PAGE_SIZE
cpufreq: arm_big_little: make cpufreq_arm_bL_ops structures const
cpufreq: arm_big_little: make function arguments and structure pointer const
cpuidle: Avoid assignment in if () argument
cpuidle: Clean up cpuidle_enable_device() error handling a bit
ACPI / PM: Fix acpi_pm_notifier_lock vs flush_workqueue() deadlock
PM / Domains: Fix genpd to deal with drivers returning 1 from ->prepare()
cpuidle: ladder: Add per CPU PM QoS resume latency support
PM / QoS: Fix device resume latency framework
PM / domains: Rework governor code to be more consistent
PM / Domains: Remove gpd_dev_ops.active_wakeup() callback
soc: rockchip: power-domain: Use GENPD_FLAG_ACTIVE_WAKEUP
soc: mediatek: Use GENPD_FLAG_ACTIVE_WAKEUP
ARM: shmobile: pm-rmobile: Use GENPD_FLAG_ACTIVE_WAKEUP
...
Diffstat (limited to 'drivers/acpi')
-rw-r--r--  drivers/acpi/Kconfig     |   5
-rw-r--r--  drivers/acpi/Makefile    |   1
-rw-r--r--  drivers/acpi/acpi_lpit.c | 162
-rw-r--r--  drivers/acpi/acpi_lpss.c |  95
-rw-r--r--  drivers/acpi/device_pm.c | 277
-rw-r--r--  drivers/acpi/internal.h  |   6
-rw-r--r--  drivers/acpi/osl.c       |  42
-rw-r--r--  drivers/acpi/scan.c      |   1
8 files changed, 425 insertions, 164 deletions
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..4cb763a01f4d 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -81,6 +81,11 @@ endif
 config ACPI_SPCR_TABLE
     bool
 
+config ACPI_LPIT
+    bool
+    depends on X86_64
+    default y
+
 config ACPI_SLEEP
     bool
     depends on SUSPEND || HIBERNATION
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..168e14d29d31 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -57,6 +57,7 @@ acpi-$(CONFIG_DEBUG_FS) += debugfs.o
 acpi-$(CONFIG_ACPI_NUMA) += numa.o
 acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
 acpi-y += acpi_lpat.o
+acpi-$(CONFIG_ACPI_LPIT) += acpi_lpit.o
 acpi-$(CONFIG_ACPI_GENERIC_GSI) += irq.o
 acpi-$(CONFIG_ACPI_WATCHDOG) += acpi_watchdog.o
diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
new file mode 100644
index 000000000000..e94e478dd18b
--- /dev/null
+++ b/drivers/acpi/acpi_lpit.c
@@ -0,0 +1,162 @@
+
+/*
+ * acpi_lpit.c - LPIT table processing functions
+ *
+ * Copyright (C) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/acpi.h>
+#include <asm/msr.h>
+#include <asm/tsc.h>
+
+struct lpit_residency_info {
+    struct acpi_generic_address gaddr;
+    u64 frequency;
+    void __iomem *iomem_addr;
+};
+
+/* Storage for an memory mapped and FFH based entries */
+static struct lpit_residency_info residency_info_mem;
+static struct lpit_residency_info residency_info_ffh;
+
+static int lpit_read_residency_counter_us(u64 *counter, bool io_mem)
+{
+    int err;
+
+    if (io_mem) {
+        u64 count = 0;
+        int error;
+
+        error = acpi_os_read_iomem(residency_info_mem.iomem_addr, &count,
+                                   residency_info_mem.gaddr.bit_width);
+        if (error)
+            return error;
+
+        *counter = div64_u64(count * 1000000ULL, residency_info_mem.frequency);
+        return 0;
+    }
+
+    err = rdmsrl_safe(residency_info_ffh.gaddr.address, counter);
+    if (!err) {
+        u64 mask = GENMASK_ULL(residency_info_ffh.gaddr.bit_offset +
+                               residency_info_ffh.gaddr.bit_width - 1,
+                               residency_info_ffh.gaddr.bit_offset);
+
+        *counter &= mask;
+        *counter >>= residency_info_ffh.gaddr.bit_offset;
+        *counter = div64_u64(*counter * 1000000ULL, residency_info_ffh.frequency);
+        return 0;
+    }
+
+    return -ENODATA;
+}
+
+static ssize_t low_power_idle_system_residency_us_show(struct device *dev,
+                                                       struct device_attribute *attr,
+                                                       char *buf)
+{
+    u64 counter;
+    int ret;
+
+    ret = lpit_read_residency_counter_us(&counter, true);
+    if (ret)
+        return ret;
+
+    return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_system_residency_us);
+
+static ssize_t low_power_idle_cpu_residency_us_show(struct device *dev,
+                                                    struct device_attribute *attr,
+                                                    char *buf)
+{
+    u64 counter;
+    int ret;
+
+    ret = lpit_read_residency_counter_us(&counter, false);
+    if (ret)
+        return ret;
+
+    return sprintf(buf, "%llu\n", counter);
+}
+static DEVICE_ATTR_RO(low_power_idle_cpu_residency_us);
+
+int lpit_read_residency_count_address(u64 *address)
+{
+    if (!residency_info_mem.gaddr.address)
+        return -EINVAL;
+
+    *address = residency_info_mem.gaddr.address;
+
+    return 0;
+}
+
+static void lpit_update_residency(struct lpit_residency_info *info,
+                                  struct acpi_lpit_native *lpit_native)
+{
+    info->frequency = lpit_native->counter_frequency ?
+                        lpit_native->counter_frequency : tsc_khz * 1000;
+    if (!info->frequency)
+        info->frequency = 1;
+
+    info->gaddr = lpit_native->residency_counter;
+    if (info->gaddr.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+        info->iomem_addr = ioremap_nocache(info->gaddr.address,
+                                           info->gaddr.bit_width / 8);
+        if (!info->iomem_addr)
+            return;
+
+        /* Silently fail, if cpuidle attribute group is not present */
+        sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+                                &dev_attr_low_power_idle_system_residency_us.attr,
+                                "cpuidle");
+    } else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
+        /* Silently fail, if cpuidle attribute group is not present */
+        sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+                                &dev_attr_low_power_idle_cpu_residency_us.attr,
+                                "cpuidle");
+    }
+}
+
+static void lpit_process(u64 begin, u64 end)
+{
+    while (begin + sizeof(struct acpi_lpit_native) < end) {
+        struct acpi_lpit_native *lpit_native = (struct acpi_lpit_native *)begin;
+
+        if (!lpit_native->header.type && !lpit_native->header.flags) {
+            if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY &&
+                !residency_info_mem.gaddr.address) {
+                lpit_update_residency(&residency_info_mem, lpit_native);
+            } else if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
+                       !residency_info_ffh.gaddr.address) {
+                lpit_update_residency(&residency_info_ffh, lpit_native);
+            }
+        }
+        begin += lpit_native->header.length;
+    }
+}
+
+void acpi_init_lpit(void)
+{
+    acpi_status status;
+    u64 lpit_begin;
+    struct acpi_table_lpit *lpit;
+
+    status = acpi_get_table(ACPI_SIG_LPIT, 0, (struct acpi_table_header **)&lpit);
+
+    if (ACPI_FAILURE(status))
+        return;
+
+    lpit_begin = (u64)lpit + sizeof(*lpit);
+    lpit_process(lpit_begin, lpit_begin + lpit->header.length);
+}
diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
index 032ae44710e5..de7385b824e1 100644
--- a/drivers/acpi/acpi_lpss.c
+++ b/drivers/acpi/acpi_lpss.c
@@ -693,7 +693,7 @@ static int acpi_lpss_activate(struct device *dev)
     struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
     int ret;
 
-    ret = acpi_dev_runtime_resume(dev);
+    ret = acpi_dev_resume(dev);
     if (ret)
         return ret;
 
@@ -713,43 +713,9 @@ static void acpi_lpss_dismiss(struct device *dev)
 {
-    acpi_dev_runtime_suspend(dev);
+    acpi_dev_suspend(dev, false);
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int acpi_lpss_suspend_late(struct device *dev)
-{
-    struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
-    int ret;
-
-    ret = pm_generic_suspend_late(dev);
-    if (ret)
-        return ret;
-
-    if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
-        acpi_lpss_save_ctx(dev, pdata);
-
-    return acpi_dev_suspend_late(dev);
-}
-
-static int acpi_lpss_resume_early(struct device *dev)
-{
-    struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
-    int ret;
-
-    ret = acpi_dev_resume_early(dev);
-    if (ret)
-        return ret;
-
-    acpi_lpss_d3_to_d0_delay(pdata);
-
-    if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
-        acpi_lpss_restore_ctx(dev, pdata);
-
-    return pm_generic_resume_early(dev);
-}
-#endif /* CONFIG_PM_SLEEP */
-
 /* IOSF SB for LPSS island */
 #define LPSS_IOSF_UNIT_LPIOEP 0xA0
 #define LPSS_IOSF_UNIT_LPIO1 0xAB
@@ -835,19 +801,15 @@ static void lpss_iosf_exit_d3_state(void)
     mutex_unlock(&lpss_iosf_mutex);
 }
 
-static int acpi_lpss_runtime_suspend(struct device *dev)
+static int acpi_lpss_suspend(struct device *dev, bool wakeup)
 {
     struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
     int ret;
 
-    ret = pm_generic_runtime_suspend(dev);
-    if (ret)
-        return ret;
-
     if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
         acpi_lpss_save_ctx(dev, pdata);
 
-    ret = acpi_dev_runtime_suspend(dev);
+    ret = acpi_dev_suspend(dev, wakeup);
 
     /*
      * This call must be last in the sequence, otherwise PMC will return
@@ -860,7 +822,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
     return ret;
 }
 
-static int acpi_lpss_runtime_resume(struct device *dev)
+static int acpi_lpss_resume(struct device *dev)
 {
     struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
     int ret;
@@ -872,7 +834,7 @@ static int acpi_lpss_runtime_resume(struct device *dev)
     if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
         lpss_iosf_exit_d3_state();
 
-    ret = acpi_dev_runtime_resume(dev);
+    ret = acpi_dev_resume(dev);
     if (ret)
         return ret;
 
@@ -881,7 +843,41 @@ static int acpi_lpss_runtime_resume(struct device *dev)
     if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
         acpi_lpss_restore_ctx(dev, pdata);
 
-    return pm_generic_runtime_resume(dev);
+    return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+    int ret;
+
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        return 0;
+
+    ret = pm_generic_suspend_late(dev);
+    return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+}
+
+static int acpi_lpss_resume_early(struct device *dev)
+{
+    int ret = acpi_lpss_resume(dev);
+
+    return ret ? ret : pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+    int ret = pm_generic_runtime_suspend(dev);
+
+    return ret ? ret : acpi_lpss_suspend(dev, true);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+    int ret = acpi_lpss_resume(dev);
+
+    return ret ? ret : pm_generic_runtime_resume(dev);
 }
 #endif /* CONFIG_PM */
@@ -894,13 +890,20 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
 #ifdef CONFIG_PM
 #ifdef CONFIG_PM_SLEEP
         .prepare = acpi_subsys_prepare,
-        .complete = pm_complete_with_resume_check,
+        .complete = acpi_subsys_complete,
         .suspend = acpi_subsys_suspend,
         .suspend_late = acpi_lpss_suspend_late,
+        .suspend_noirq = acpi_subsys_suspend_noirq,
+        .resume_noirq = acpi_subsys_resume_noirq,
         .resume_early = acpi_lpss_resume_early,
         .freeze = acpi_subsys_freeze,
+        .freeze_late = acpi_subsys_freeze_late,
+        .freeze_noirq = acpi_subsys_freeze_noirq,
+        .thaw_noirq = acpi_subsys_thaw_noirq,
         .poweroff = acpi_subsys_suspend,
         .poweroff_late = acpi_lpss_suspend_late,
+        .poweroff_noirq = acpi_subsys_suspend_noirq,
+        .restore_noirq = acpi_subsys_resume_noirq,
         .restore_early = acpi_lpss_resume_early,
 #endif
         .runtime_suspend = acpi_lpss_runtime_suspend,
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index fbcc73f7a099..e4ffaeec9ec2 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -387,6 +387,7 @@ EXPORT_SYMBOL(acpi_bus_power_manageable);
 
 #ifdef CONFIG_PM
 static DEFINE_MUTEX(acpi_pm_notifier_lock);
+static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
 
 void acpi_pm_wakeup_event(struct device *dev)
 {
@@ -443,24 +444,25 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
     if (!dev && !func)
         return AE_BAD_PARAMETER;
 
-    mutex_lock(&acpi_pm_notifier_lock);
+    mutex_lock(&acpi_pm_notifier_install_lock);
 
     if (adev->wakeup.flags.notifier_present)
         goto out;
 
-    adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
-    adev->wakeup.context.dev = dev;
-    adev->wakeup.context.func = func;
-
     status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
                                          acpi_pm_notify_handler, NULL);
     if (ACPI_FAILURE(status))
         goto out;
 
+    mutex_lock(&acpi_pm_notifier_lock);
+    adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
+    adev->wakeup.context.dev = dev;
+    adev->wakeup.context.func = func;
     adev->wakeup.flags.notifier_present = true;
+    mutex_unlock(&acpi_pm_notifier_lock);
 
 out:
-    mutex_unlock(&acpi_pm_notifier_lock);
+    mutex_unlock(&acpi_pm_notifier_install_lock);
     return status;
 }
@@ -472,7 +474,7 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
 {
     acpi_status status = AE_BAD_PARAMETER;
 
-    mutex_lock(&acpi_pm_notifier_lock);
+    mutex_lock(&acpi_pm_notifier_install_lock);
 
     if (!adev->wakeup.flags.notifier_present)
         goto out;
@@ -483,14 +485,15 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
     if (ACPI_FAILURE(status))
         goto out;
 
+    mutex_lock(&acpi_pm_notifier_lock);
     adev->wakeup.context.func = NULL;
     adev->wakeup.context.dev = NULL;
     wakeup_source_unregister(adev->wakeup.ws);
-
     adev->wakeup.flags.notifier_present = false;
+    mutex_unlock(&acpi_pm_notifier_lock);
 
 out:
-    mutex_unlock(&acpi_pm_notifier_lock);
+    mutex_unlock(&acpi_pm_notifier_install_lock);
     return status;
 }
@@ -581,8 +584,7 @@ static int acpi_dev_pm_get_state(struct device *dev, struct acpi_device *adev,
         d_min = ret;
         wakeup = device_may_wakeup(dev) && adev->wakeup.flags.valid &&
              adev->wakeup.sleep_state >= target_state;
-    } else if (dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) !=
-            PM_QOS_FLAGS_NONE) {
+    } else {
         wakeup = adev->wakeup.flags.valid;
     }
@@ -848,48 +850,48 @@ static int acpi_dev_pm_full_power(struct acpi_device *adev)
 }
 
 /**
- * acpi_dev_runtime_suspend - Put device into a low-power state using ACPI.
+ * acpi_dev_suspend - Put device into a low-power state using ACPI.
  * @dev: Device to put into a low-power state.
+ * @wakeup: Whether or not to enable wakeup for the device.
  *
- * Put the given device into a runtime low-power state using the standard ACPI
+ * Put the given device into a low-power state using the standard ACPI
  * mechanism. Set up remote wakeup if desired, choose the state to put the
  * device into (this checks if remote wakeup is expected to work too), and set
  * the power state of the device.
  */
-int acpi_dev_runtime_suspend(struct device *dev)
+int acpi_dev_suspend(struct device *dev, bool wakeup)
 {
     struct acpi_device *adev = ACPI_COMPANION(dev);
-    bool remote_wakeup;
+    u32 target_state = acpi_target_system_state();
     int error;
 
     if (!adev)
         return 0;
 
-    remote_wakeup = dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) >
-                PM_QOS_FLAGS_NONE;
-    if (remote_wakeup) {
-        error = acpi_device_wakeup_enable(adev, ACPI_STATE_S0);
+    if (wakeup && acpi_device_can_wakeup(adev)) {
+        error = acpi_device_wakeup_enable(adev, target_state);
         if (error)
             return -EAGAIN;
+    } else {
+        wakeup = false;
     }
 
-    error = acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
-    if (error && remote_wakeup)
+    error = acpi_dev_pm_low_power(dev, adev, target_state);
+    if (error && wakeup)
         acpi_device_wakeup_disable(adev);
 
     return error;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_suspend);
+EXPORT_SYMBOL_GPL(acpi_dev_suspend);
 
 /**
- * acpi_dev_runtime_resume - Put device into the full-power state using ACPI.
+ * acpi_dev_resume - Put device into the full-power state using ACPI.
  * @dev: Device to put into the full-power state.
  *
  * Put the given device into the full-power state using the standard ACPI
- * mechanism at run time. Set the power state of the device to ACPI D0 and
- * disable remote wakeup.
+ * mechanism. Set the power state of the device to ACPI D0 and disable wakeup.
  */
-int acpi_dev_runtime_resume(struct device *dev)
+int acpi_dev_resume(struct device *dev)
 {
     struct acpi_device *adev = ACPI_COMPANION(dev);
     int error;
@@ -901,7 +903,7 @@ int acpi_dev_runtime_resume(struct device *dev)
     acpi_device_wakeup_disable(adev);
     return error;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
+EXPORT_SYMBOL_GPL(acpi_dev_resume);
 
 /**
  * acpi_subsys_runtime_suspend - Suspend device using ACPI.
@@ -913,7 +915,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
 int acpi_subsys_runtime_suspend(struct device *dev)
 {
     int ret = pm_generic_runtime_suspend(dev);
-    return ret ? ret : acpi_dev_runtime_suspend(dev);
+    return ret ? ret : acpi_dev_suspend(dev, true);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
@@ -926,68 +928,33 @@ EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
  */
 int acpi_subsys_runtime_resume(struct device *dev)
 {
-    int ret = acpi_dev_runtime_resume(dev);
+    int ret = acpi_dev_resume(dev);
     return ret ? ret : pm_generic_runtime_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
 
 #ifdef CONFIG_PM_SLEEP
-/**
- * acpi_dev_suspend_late - Put device into a low-power state using ACPI.
- * @dev: Device to put into a low-power state.
- *
- * Put the given device into a low-power state during system transition to a
- * sleep state using the standard ACPI mechanism. Set up system wakeup if
- * desired, choose the state to put the device into (this checks if system
- * wakeup is expected to work too), and set the power state of the device.
- */
-int acpi_dev_suspend_late(struct device *dev)
+static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
 {
-    struct acpi_device *adev = ACPI_COMPANION(dev);
-    u32 target_state;
-    bool wakeup;
-    int error;
-
-    if (!adev)
-        return 0;
-
-    target_state = acpi_target_system_state();
-    wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev);
-    if (wakeup) {
-        error = acpi_device_wakeup_enable(adev, target_state);
-        if (error)
-            return error;
-    }
+    u32 sys_target = acpi_target_system_state();
+    int ret, state;
 
-    error = acpi_dev_pm_low_power(dev, adev, target_state);
-    if (error && wakeup)
-        acpi_device_wakeup_disable(adev);
+    if (!pm_runtime_suspended(dev) || !adev ||
+        device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
+        return true;
 
-    return error;
-}
-EXPORT_SYMBOL_GPL(acpi_dev_suspend_late);
+    if (sys_target == ACPI_STATE_S0)
+        return false;
 
-/**
- * acpi_dev_resume_early - Put device into the full-power state using ACPI.
- * @dev: Device to put into the full-power state.
- *
- * Put the given device into the full-power state using the standard ACPI
- * mechanism during system transition to the working state. Set the power
- * state of the device to ACPI D0 and disable remote wakeup.
- */
-int acpi_dev_resume_early(struct device *dev)
-{
-    struct acpi_device *adev = ACPI_COMPANION(dev);
-    int error;
+    if (adev->power.flags.dsw_present)
+        return true;
 
-    if (!adev)
-        return 0;
+    ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
+    if (ret)
+        return true;
 
-    error = acpi_dev_pm_full_power(adev);
-    acpi_device_wakeup_disable(adev);
-    return error;
+    return state != adev->power.state;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
 
 /**
  * acpi_subsys_prepare - Prepare device for system transition to a sleep state.
@@ -996,39 +963,53 @@ EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
 int acpi_subsys_prepare(struct device *dev)
 {
     struct acpi_device *adev = ACPI_COMPANION(dev);
-    u32 sys_target;
-    int ret, state;
-
-    ret = pm_generic_prepare(dev);
-    if (ret < 0)
-        return ret;
-
-    if (!adev || !pm_runtime_suspended(dev)
-        || device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
-        return 0;
+    if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
+        int ret = dev->driver->pm->prepare(dev);
 
-    sys_target = acpi_target_system_state();
-    if (sys_target == ACPI_STATE_S0)
-        return 1;
+        if (ret < 0)
+            return ret;
 
-    if (adev->power.flags.dsw_present)
-        return 0;
+        if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+            return 0;
+    }
 
-    ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
-    return !ret && state == adev->power.state;
+    return !acpi_dev_needs_resume(dev, adev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
 
 /**
+ * acpi_subsys_complete - Finalize device's resume during system resume.
+ * @dev: Device to handle.
+ */
+void acpi_subsys_complete(struct device *dev)
+{
+    pm_generic_complete(dev);
+    /*
+     * If the device had been runtime-suspended before the system went into
+     * the sleep state it is going out of and it has never been resumed till
+     * now, resume it in case the firmware powered it up.
+     */
+    if (dev->power.direct_complete && pm_resume_via_firmware())
+        pm_request_resume(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_complete);
+
+/**
  * acpi_subsys_suspend - Run the device driver's suspend callback.
  * @dev: Device to handle.
  *
- * Follow PCI and resume devices suspended at run time before running their
- * system suspend callbacks.
+ * Follow PCI and resume devices from runtime suspend before running their
+ * system suspend callbacks, unless the driver can cope with runtime-suspended
+ * devices during system suspend and there are no ACPI-specific reasons for
+ * resuming them.
  */
 int acpi_subsys_suspend(struct device *dev)
 {
-    pm_runtime_resume(dev);
+    if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+        acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
+        pm_runtime_resume(dev);
+
     return pm_generic_suspend(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
@@ -1042,12 +1023,48 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
  */
 int acpi_subsys_suspend_late(struct device *dev)
 {
-    int ret = pm_generic_suspend_late(dev);
-    return ret ? ret : acpi_dev_suspend_late(dev);
+    int ret;
+
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        return 0;
+
+    ret = pm_generic_suspend_late(dev);
+    return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
 
 /**
+ * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
+ * @dev: Device to suspend.
+ */
+int acpi_subsys_suspend_noirq(struct device *dev)
+{
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        return 0;
+
+    return pm_generic_suspend_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
+
+/**
+ * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_resume_noirq(struct device *dev)
+{
+    /*
+     * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+     * during system suspend, so update their runtime PM status to "active"
+     * as they will be put into D0 going forward.
+     */
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        pm_runtime_set_active(dev);
+
+    return pm_generic_resume_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
+
+/**
  * acpi_subsys_resume_early - Resume device using ACPI.
  * @dev: Device to Resume.
  *
@@ -1057,7 +1074,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
 int acpi_subsys_resume_early(struct device *dev)
 {
-    int ret = acpi_dev_resume_early(dev);
+    int ret = acpi_dev_resume(dev);
     return ret ? ret : pm_generic_resume_early(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);
@@ -1074,11 +1091,60 @@ int acpi_subsys_freeze(struct device *dev)
      * runtime-suspended devices should not be touched during freeze/thaw
      * transitions.
      */
-    pm_runtime_resume(dev);
+    if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+        pm_runtime_resume(dev);
+
     return pm_generic_freeze(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
 
+/**
+ * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_late(struct device *dev)
+{
+
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        return 0;
+
+    return pm_generic_freeze_late(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);
+
+/**
+ * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_noirq(struct device *dev)
+{
+
+    if (dev_pm_smart_suspend_and_suspended(dev))
+        return 0;
+
+    return pm_generic_freeze_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
+
+/**
+ * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_thaw_noirq(struct device *dev)
+{
+    /*
+     * If the device is in runtime suspend, the "thaw" code may not work
+     * correctly with it, so skip the driver callback and make the PM core
+     * skip all of the subsequent "thaw" callbacks for the device.
+     */
+    if (dev_pm_smart_suspend_and_suspended(dev)) {
+        dev->power.direct_complete = true;
+        return 0;
+    }
+
+    return pm_generic_thaw_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
 #endif /* CONFIG_PM_SLEEP */
 
 static struct dev_pm_domain acpi_general_pm_domain = {
@@ -1087,13 +1153,20 @@ static struct dev_pm_domain acpi_general_pm_domain = {
         .runtime_resume = acpi_subsys_runtime_resume,
 #ifdef CONFIG_PM_SLEEP
         .prepare = acpi_subsys_prepare,
-        .complete = pm_complete_with_resume_check,
+        .complete = acpi_subsys_complete,
         .suspend = acpi_subsys_suspend,
         .suspend_late = acpi_subsys_suspend_late,
+        .suspend_noirq = acpi_subsys_suspend_noirq,
+        .resume_noirq = acpi_subsys_resume_noirq,
         .resume_early = acpi_subsys_resume_early,
         .freeze = acpi_subsys_freeze,
+        .freeze_late = acpi_subsys_freeze_late,
+        .freeze_noirq = acpi_subsys_freeze_noirq,
+        .thaw_noirq = acpi_subsys_thaw_noirq,
         .poweroff = acpi_subsys_suspend,
         .poweroff_late = acpi_subsys_suspend_late,
+        .poweroff_noirq = acpi_subsys_suspend_noirq,
+        .restore_noirq = acpi_subsys_resume_noirq,
         .restore_early = acpi_subsys_resume_early,
 #endif
     },
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 4361c4415b4f..fc8c43e76707 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -248,4 +248,10 @@ void acpi_watchdog_init(void);
 static inline void acpi_watchdog_init(void) {}
 #endif
 
+#ifdef CONFIG_ACPI_LPIT
+void acpi_init_lpit(void);
+#else
+static inline void acpi_init_lpit(void) { }
+#endif
+
 #endif /* _ACPI_INTERNAL_H_ */
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index db78d353bab1..3bb46cb24a99 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -663,6 +663,29 @@ acpi_status acpi_os_write_port(acpi_io_address port, u32 value, u32 width)
 
 EXPORT_SYMBOL(acpi_os_write_port);
 
+int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width)
+{
+
+    switch (width) {
+    case 8:
+        *(u8 *) value = readb(virt_addr);
+        break;
+    case 16:
+        *(u16 *) value = readw(virt_addr);
+        break;
+    case 32:
+        *(u32 *) value = readl(virt_addr);
+        break;
+    case 64:
+        *(u64 *) value = readq(virt_addr);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
 acpi_status
 acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 {
@@ -670,6 +693,7 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
     unsigned int size = width / 8;
     bool unmap = false;
     u64 dummy;
+    int error;
 
     rcu_read_lock();
     virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
@@ -684,22 +708,8 @@ acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
     if (!value)
         value = &dummy;
 
-    switch (width) {
-    case 8:
-        *(u8 *) value = readb(virt_addr);
-        break;
-    case 16:
-        *(u16 *) value = readw(virt_addr);
-        break;
-    case 32:
-        *(u32 *) value = readl(virt_addr);
-        break;
-    case 64:
-        *(u64 *) value = readq(virt_addr);
-        break;
-    default:
-        BUG();
-    }
+    error = acpi_os_read_iomem(virt_addr, value, width);
+    BUG_ON(error);
 
     if (unmap)
         iounmap(virt_addr);
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 602f8ff212f2..81367edc8a10 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -2122,6 +2122,7 @@ int __init acpi_scan_init(void)
     acpi_int340x_thermal_init();
     acpi_amba_init();
     acpi_watchdog_init();
+    acpi_init_lpit();
 
     acpi_scan_add_handler(&generic_device_handler);
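The new drivers/acpi/acpi_lpit.c above registers the low_power_idle_system_residency_us and low_power_idle_cpu_residency_us attributes in the CPU subsystem's "cpuidle" group. A minimal user-space sketch for reading them follows; the sysfs paths are inferred from that code and may be absent on systems without an LPIT table:

```c
#include <stdio.h>

/* Paths inferred from the "cpuidle" attribute group used in acpi_lpit.c. */
static const char *paths[] = {
	"/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us",
	"/sys/devices/system/cpu/cpuidle/low_power_idle_cpu_residency_us",
};

int main(void)
{
	for (unsigned int i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
		FILE *f = fopen(paths[i], "r");
		unsigned long long us;

		if (!f)
			continue;	/* no LPIT table or counter not reported */
		if (fscanf(f, "%llu", &us) == 1)
			printf("%s: %llu us\n", paths[i], us);
		fclose(f);
	}
	return 0;
}
```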