mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-19 02:34:01 +08:00

Merge branches 'pm-core', 'pm-domains', 'pm-sleep', 'acpi-pm' and 'pm-cpuidle'

Merge changes in the PM core, system-wide PM infrastructure, generic
power domains (genpd) framework, ACPI PM infrastructure and cpuidle
for 4.19.

* pm-core:
  driver core: Add flag to autoremove device link on supplier unbind
  driver core: Rename flag AUTOREMOVE to AUTOREMOVE_CONSUMER

* pm-domains:
  PM / Domains: Introduce dev_pm_domain_attach_by_name()
  PM / Domains: Introduce option to attach a device by name to genpd
  PM / Domains: dt: Add a power-domain-names property

* pm-sleep:
  PM / reboot: Eliminate race between reboot and suspend
  PM / hibernate: Mark expected switch fall-through
  x86/power/hibernate_64: Remove VLA usage
  PM / hibernate: cast PAGE_SIZE to int when comparing with error code

* acpi-pm:
  ACPI / PM: save NVS memory for ASUS 1025C laptop
  ACPI / PM: Default to s2idle in all machines supporting LP S0

* pm-cpuidle:
  ARM: cpuidle: silence error on driver registration failure
Rafael J. Wysocki 2018-08-14 09:48:10 +02:00
25 changed files with 186 additions and 89 deletions


@@ -114,18 +114,26 @@ Required properties:
 - power-domains : A list of PM domain specifiers, as defined by bindings of
 		the power controller that is the PM domain provider.
 
+Optional properties:
+ - power-domain-names : A list of power domain name strings sorted in the same
+		order as the power-domains property. Consumer drivers will use
+		power-domain-names to match power domains with power-domains
+		specifiers.
+
 Example:
 
 	leaky-device@12350000 {
 		compatible = "foo,i-leak-current";
 		reg = <0x12350000 0x1000>;
 		power-domains = <&power 0>;
+		power-domain-names = "io";
 	};
 
 	leaky-device@12351000 {
 		compatible = "foo,i-leak-current";
 		reg = <0x12351000 0x1000>;
 		power-domains = <&power 0>, <&power 1> ;
+		power-domain-names = "io", "clk";
 	};
 
 The first example above defines a typical PM domain consumer device, which is


@@ -81,10 +81,14 @@ integration is desired.
 Two other flags are specifically targeted at use cases where the device
 link is added from the consumer's ``->probe`` callback: ``DL_FLAG_RPM_ACTIVE``
 can be specified to runtime resume the supplier upon addition of the
-device link. ``DL_FLAG_AUTOREMOVE`` causes the device link to be automatically
-purged when the consumer fails to probe or later unbinds. This obviates
-the need to explicitly delete the link in the ``->remove`` callback or in
-the error path of the ``->probe`` callback.
+device link. ``DL_FLAG_AUTOREMOVE_CONSUMER`` causes the device link to be
+automatically purged when the consumer fails to probe or later unbinds.
+This obviates the need to explicitly delete the link in the ``->remove``
+callback or in the error path of the ``->probe`` callback.
+
+Similarly, when the device link is added from supplier's ``->probe`` callback,
+``DL_FLAG_AUTOREMOVE_SUPPLIER`` causes the device link to be automatically
+purged when the supplier fails to probe or later unbinds.
 
 Limitations
 ===========


@@ -204,26 +204,26 @@ VI. Are there any precautions to be taken to prevent freezing failures?
 
 Yes, there are.
 
-First of all, grabbing the 'pm_mutex' lock to mutually exclude a piece of code
+First of all, grabbing the 'system_transition_mutex' lock to mutually exclude a piece of code
 from system-wide sleep such as suspend/hibernation is not encouraged.
 If possible, that piece of code must instead hook onto the suspend/hibernation
 notifiers to achieve mutual exclusion. Look at the CPU-Hotplug code
 (kernel/cpu.c) for an example.
 
-However, if that is not feasible, and grabbing 'pm_mutex' is deemed necessary,
-it is strongly discouraged to directly call mutex_[un]lock(&pm_mutex) since
+However, if that is not feasible, and grabbing 'system_transition_mutex' is deemed necessary,
+it is strongly discouraged to directly call mutex_[un]lock(&system_transition_mutex) since
 that could lead to freezing failures, because if the suspend/hibernate code
-successfully acquired the 'pm_mutex' lock, and hence that other entity failed
+successfully acquired the 'system_transition_mutex' lock, and hence that other entity failed
 to acquire the lock, then that task would get blocked in TASK_UNINTERRUPTIBLE
 state. As a consequence, the freezer would not be able to freeze that task,
 leading to freezing failure.
 
 However, the [un]lock_system_sleep() APIs are safe to use in this scenario,
 since they ask the freezer to skip freezing this task, since it is anyway
-"frozen enough" as it is blocked on 'pm_mutex', which will be released
+"frozen enough" as it is blocked on 'system_transition_mutex', which will be released
 only after the entire suspend/hibernation sequence is complete.
 
 So, to summarize, use [un]lock_system_sleep() instead of directly using
-mutex_[un]lock(&pm_mutex). That would prevent freezing failures.
+mutex_[un]lock(&system_transition_mutex). That would prevent freezing failures.
 
 V. Miscellaneous
 /sys/power/pm_freeze_timeout controls how long it will cost at most to freeze


@@ -32,7 +32,7 @@ More details follow:
 				sysfs file
 				    |
 				    v
-			Acquire pm_mutex lock
+		Acquire system_transition_mutex lock
 				    |
 				    v
 		     Send PM_SUSPEND_PREPARE
@@ -96,10 +96,10 @@ execution during resume):
 
 	* thaw tasks
 	* send PM_POST_SUSPEND notifications
-	* Release pm_mutex lock.
+	* Release system_transition_mutex lock.
 
-It is to be noted here that the pm_mutex lock is acquired at the very
+It is to be noted here that the system_transition_mutex lock is acquired at the very
 beginning, when we are just starting out to suspend, and then released only
 after the entire cycle is complete (i.e., suspend + resume).


@ -233,29 +233,35 @@ struct restore_data_record {
*/ */
static int get_e820_md5(struct e820_table *table, void *buf) static int get_e820_md5(struct e820_table *table, void *buf)
{ {
struct scatterlist sg; struct crypto_shash *tfm;
struct crypto_ahash *tfm; struct shash_desc *desc;
int size; int size;
int ret = 0; int ret = 0;
tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC); tfm = crypto_alloc_shash("md5", 0, 0);
if (IS_ERR(tfm)) if (IS_ERR(tfm))
return -ENOMEM; return -ENOMEM;
{ desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
AHASH_REQUEST_ON_STACK(req, tfm); GFP_KERNEL);
size = offsetof(struct e820_table, entries) + sizeof(struct e820_entry) * table->nr_entries; if (!desc) {
ahash_request_set_tfm(req, tfm); ret = -ENOMEM;
sg_init_one(&sg, (u8 *)table, size); goto free_tfm;
ahash_request_set_callback(req, 0, NULL, NULL);
ahash_request_set_crypt(req, &sg, buf, size);
if (crypto_ahash_digest(req))
ret = -EINVAL;
ahash_request_zero(req);
} }
crypto_free_ahash(tfm);
desc->tfm = tfm;
desc->flags = 0;
size = offsetof(struct e820_table, entries) +
sizeof(struct e820_entry) * table->nr_entries;
if (crypto_shash_digest(desc, (u8 *)table, size, buf))
ret = -EINVAL;
kzfree(desc);
free_tfm:
crypto_free_shash(tfm);
return ret; return ret;
} }


@@ -338,6 +338,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
 		DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
 		},
 	},
+	{
+	.callback = init_nvs_save_s3,
+	.ident = "Asus 1025C",
+	.matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "1025C"),
+		},
+	},
 	/*
 	 * https://bugzilla.kernel.org/show_bug.cgi?id=189431
 	 * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
@@ -718,9 +726,6 @@ static const struct acpi_device_id lps0_device_ids[] = {
 #define ACPI_LPS0_ENTRY		5
 #define ACPI_LPS0_EXIT		6
 
-#define ACPI_LPS0_SCREEN_MASK	((1 << ACPI_LPS0_SCREEN_OFF) | (1 << ACPI_LPS0_SCREEN_ON))
-#define ACPI_LPS0_PLATFORM_MASK	((1 << ACPI_LPS0_ENTRY) | (1 << ACPI_LPS0_EXIT))
-
 static acpi_handle lps0_device_handle;
 static guid_t lps0_dsm_guid;
 static char lps0_dsm_func_mask;
@@ -924,17 +929,14 @@ static int lps0_device_attach(struct acpi_device *adev,
 	if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) {
 		char bitmask = *(char *)out_obj->buffer.pointer;
 
-		if ((bitmask & ACPI_LPS0_PLATFORM_MASK) == ACPI_LPS0_PLATFORM_MASK ||
-		    (bitmask & ACPI_LPS0_SCREEN_MASK) == ACPI_LPS0_SCREEN_MASK) {
-			lps0_dsm_func_mask = bitmask;
-			lps0_device_handle = adev->handle;
-			/*
-			 * Use suspend-to-idle by default if the default
-			 * suspend mode was not set from the command line.
-			 */
-			if (mem_sleep_default > PM_SUSPEND_MEM)
-				mem_sleep_current = PM_SUSPEND_TO_IDLE;
-		}
+		lps0_dsm_func_mask = bitmask;
+		lps0_device_handle = adev->handle;
+		/*
+		 * Use suspend-to-idle by default if the default
+		 * suspend mode was not set from the command line.
+		 */
+		if (mem_sleep_default > PM_SUSPEND_MEM)
+			mem_sleep_current = PM_SUSPEND_TO_IDLE;
 
 		acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
 				  bitmask);


@@ -178,10 +178,10 @@ void device_pm_move_to_tail(struct device *dev)
  * of the link. If DL_FLAG_PM_RUNTIME is not set, DL_FLAG_RPM_ACTIVE will be
  * ignored.
  *
- * If the DL_FLAG_AUTOREMOVE is set, the link will be removed automatically
- * when the consumer device driver unbinds from it. The combination of both
- * DL_FLAG_AUTOREMOVE and DL_FLAG_STATELESS set is invalid and will cause NULL
- * to be returned.
+ * If the DL_FLAG_AUTOREMOVE_CONSUMER is set, the link will be removed
+ * automatically when the consumer device driver unbinds from it.
+ * The combination of both DL_FLAG_AUTOREMOVE_CONSUMER and DL_FLAG_STATELESS
+ * set is invalid and will cause NULL to be returned.
  *
  * A side effect of the link creation is re-ordering of dpm_list and the
  * devices_kset list by moving the consumer device and all devices depending
@@ -198,7 +198,8 @@ struct device_link *device_link_add(struct device *consumer,
 	struct device_link *link;
 
 	if (!consumer || !supplier ||
-	    ((flags & DL_FLAG_STATELESS) && (flags & DL_FLAG_AUTOREMOVE)))
+	    ((flags & DL_FLAG_STATELESS) &&
+	     (flags & DL_FLAG_AUTOREMOVE_CONSUMER)))
 		return NULL;
 
 	device_links_write_lock();
@@ -479,7 +480,7 @@ static void __device_links_no_driver(struct device *dev)
 		if (link->flags & DL_FLAG_STATELESS)
 			continue;
 
-		if (link->flags & DL_FLAG_AUTOREMOVE)
+		if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER)
 			kref_put(&link->kref, __device_link_del);
 		else if (link->status != DL_STATE_SUPPLIER_UNBIND)
 			WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
@@ -515,8 +516,18 @@ void device_links_driver_cleanup(struct device *dev)
 		if (link->flags & DL_FLAG_STATELESS)
 			continue;
 
-		WARN_ON(link->flags & DL_FLAG_AUTOREMOVE);
+		WARN_ON(link->flags & DL_FLAG_AUTOREMOVE_CONSUMER);
 		WARN_ON(link->status != DL_STATE_SUPPLIER_UNBIND);
+
+		/*
+		 * autoremove the links between this @dev and its consumer
+		 * devices that are not active, i.e. where the link state
+		 * has moved to DL_STATE_SUPPLIER_UNBIND.
+		 */
+		if (link->status == DL_STATE_SUPPLIER_UNBIND &&
+		    link->flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
+			kref_put(&link->kref, __device_link_del);
+
 		WRITE_ONCE(link->status, DL_STATE_DORMANT);
 	}


@@ -152,6 +152,23 @@ struct device *dev_pm_domain_attach_by_id(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(dev_pm_domain_attach_by_id);
 
+/**
+ * dev_pm_domain_attach_by_name - Associate a device with one of its PM domains.
+ * @dev: The device used to lookup the PM domain.
+ * @name: The name of the PM domain.
+ *
+ * For a detailed function description, see dev_pm_domain_attach_by_id().
+ */
+struct device *dev_pm_domain_attach_by_name(struct device *dev,
+					    char *name)
+{
+	if (dev->pm_domain)
+		return ERR_PTR(-EEXIST);
+
+	return genpd_dev_pm_attach_by_name(dev, name);
+}
+EXPORT_SYMBOL_GPL(dev_pm_domain_attach_by_name);
+
 /**
  * dev_pm_domain_detach - Detach a device from its PM domain.
  * @dev: Device to detach.


@@ -2374,6 +2374,30 @@ struct device *genpd_dev_pm_attach_by_id(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(genpd_dev_pm_attach_by_id);
 
+/**
+ * genpd_dev_pm_attach_by_name - Associate a device with one of its PM domains.
+ * @dev: The device used to lookup the PM domain.
+ * @name: The name of the PM domain.
+ *
+ * Parse device's OF node to find a PM domain specifier using the
+ * power-domain-names DT property. For further description see
+ * genpd_dev_pm_attach_by_id().
+ */
+struct device *genpd_dev_pm_attach_by_name(struct device *dev, char *name)
+{
+	int index;
+
+	if (!dev->of_node)
+		return NULL;
+
+	index = of_property_match_string(dev->of_node, "power-domain-names",
+					 name);
+	if (index < 0)
+		return NULL;
+
+	return genpd_dev_pm_attach_by_id(dev, index);
+}
+
 static const struct of_device_id idle_state_match[] = {
 	{ .compatible = "domain-idle-state", },
 	{ }


@@ -105,7 +105,8 @@ static int __init arm_idle_init_cpu(int cpu)
 
 	ret = cpuidle_register_driver(drv);
 	if (ret) {
-		pr_err("Failed to register cpuidle driver\n");
+		if (ret != -EBUSY)
+			pr_err("Failed to register cpuidle driver\n");
 		goto out_kfree_drv;
 	}


@@ -2312,7 +2312,7 @@ static int tegra_dc_couple(struct tegra_dc *dc)
 	 * POWER_CONTROL registers during CRTC enabling.
 	 */
 	if (dc->soc->coupled_pm && dc->pipe == 1) {
-		u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE;
+		u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;
 		struct device_link *link;
 		struct device *partner;


@@ -128,7 +128,8 @@ ipu_pre_lookup_by_phandle(struct device *dev, const char *name, int index)
 	list_for_each_entry(pre, &ipu_pre_list, list) {
 		if (pre_node == pre->dev->of_node) {
 			mutex_unlock(&ipu_pre_list_mutex);
-			device_link_add(dev, pre->dev, DL_FLAG_AUTOREMOVE);
+			device_link_add(dev, pre->dev,
+					DL_FLAG_AUTOREMOVE_CONSUMER);
 			of_node_put(pre_node);
 			return pre;
 		}


@@ -100,7 +100,8 @@ ipu_prg_lookup_by_phandle(struct device *dev, const char *name, int ipu_id)
 	list_for_each_entry(prg, &ipu_prg_list, list) {
 		if (prg_node == prg->dev->of_node) {
 			mutex_unlock(&ipu_prg_list_mutex);
-			device_link_add(dev, prg->dev, DL_FLAG_AUTOREMOVE);
+			device_link_add(dev, prg->dev,
+					DL_FLAG_AUTOREMOVE_CONSUMER);
 			prg->id = ipu_id;
 			of_node_put(prg_node);
 			return prg;


@@ -209,7 +209,7 @@ static int imx_pgc_power_domain_probe(struct platform_device *pdev)
 		goto genpd_err;
 	}
 
-	device_link_add(dev, dev->parent, DL_FLAG_AUTOREMOVE);
+	device_link_add(dev, dev->parent, DL_FLAG_AUTOREMOVE_CONSUMER);
 
 	return 0;


@@ -90,7 +90,7 @@ extern void bus_remove_file(struct bus_type *, struct bus_attribute *);
  * @num_vf:	Called to find out how many virtual functions a device on this
  *		bus supports.
  * @dma_configure:	Called to setup DMA configuration on a device on
-			this bus.
+ *			this bus.
  * @pm:		Power management operations of this bus, callback the specific
  *		device driver's pm-ops.
  * @iommu_ops:  IOMMU specific operations for this bus, used to attach IOMMU
@@ -784,14 +784,16 @@ enum device_link_state {
  * Device link flags.
  *
  * STATELESS: The core won't track the presence of supplier/consumer drivers.
- * AUTOREMOVE: Remove this link automatically on consumer driver unbind.
+ * AUTOREMOVE_CONSUMER: Remove the link automatically on consumer driver unbind.
  * PM_RUNTIME: If set, the runtime PM framework will use this link.
  * RPM_ACTIVE: Run pm_runtime_get_sync() on the supplier during link creation.
+ * AUTOREMOVE_SUPPLIER: Remove the link automatically on supplier driver unbind.
  */
 #define DL_FLAG_STATELESS		BIT(0)
-#define DL_FLAG_AUTOREMOVE		BIT(1)
+#define DL_FLAG_AUTOREMOVE_CONSUMER	BIT(1)
 #define DL_FLAG_PM_RUNTIME		BIT(2)
 #define DL_FLAG_RPM_ACTIVE		BIT(3)
+#define DL_FLAG_AUTOREMOVE_SUPPLIER	BIT(4)
 
 /**
  * struct device_link - Device link representation.


@@ -239,6 +239,8 @@ unsigned int of_genpd_opp_to_performance_state(struct device *dev,
 int genpd_dev_pm_attach(struct device *dev);
 struct device *genpd_dev_pm_attach_by_id(struct device *dev,
 					 unsigned int index);
+struct device *genpd_dev_pm_attach_by_name(struct device *dev,
+					   char *name);
 #else /* !CONFIG_PM_GENERIC_DOMAINS_OF */
 static inline int of_genpd_add_provider_simple(struct device_node *np,
 					       struct generic_pm_domain *genpd)
@@ -290,6 +292,12 @@ static inline struct device *genpd_dev_pm_attach_by_id(struct device *dev,
 	return NULL;
 }
 
+static inline struct device *genpd_dev_pm_attach_by_name(struct device *dev,
+							 char *name)
+{
+	return NULL;
+}
+
 static inline
 struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
 {
@@ -301,6 +309,8 @@ struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
 int dev_pm_domain_attach(struct device *dev, bool power_on);
 struct device *dev_pm_domain_attach_by_id(struct device *dev,
 					  unsigned int index);
+struct device *dev_pm_domain_attach_by_name(struct device *dev,
+					    char *name);
 void dev_pm_domain_detach(struct device *dev, bool power_off);
 void dev_pm_domain_set(struct device *dev, struct dev_pm_domain *pd);
 #else
@@ -313,6 +323,11 @@ static inline struct device *dev_pm_domain_attach_by_id(struct device *dev,
 {
 	return NULL;
 }
+static inline struct device *dev_pm_domain_attach_by_name(struct device *dev,
+							  char *name)
+{
+	return NULL;
+}
 static inline void dev_pm_domain_detach(struct device *dev, bool power_off) {}
 static inline void dev_pm_domain_set(struct device *dev,
 				     struct dev_pm_domain *pd) {}


@@ -414,7 +414,7 @@ static inline bool hibernation_available(void) { return false; }
 #define PM_RESTORE_PREPARE	0x0005 /* Going to restore a saved image */
 #define PM_POST_RESTORE		0x0006 /* Restore failed */
 
-extern struct mutex pm_mutex;
+extern struct mutex system_transition_mutex;
 
 #ifdef CONFIG_PM_SLEEP
 void save_processor_state(void);


@@ -15,7 +15,9 @@
 atomic_t system_freezing_cnt = ATOMIC_INIT(0);
 EXPORT_SYMBOL(system_freezing_cnt);
 
-/* indicate whether PM freezing is in effect, protected by pm_mutex */
+/* indicate whether PM freezing is in effect, protected by
+ * system_transition_mutex
+ */
 bool pm_freezing;
 bool pm_nosig_freezing;


@@ -338,7 +338,7 @@ static int create_image(int platform_mode)
  * hibernation_snapshot - Quiesce devices and create a hibernation image.
  * @platform_mode: If set, use platform driver to prepare for the transition.
  *
- * This routine must be called with pm_mutex held.
+ * This routine must be called with system_transition_mutex held.
  */
 int hibernation_snapshot(int platform_mode)
 {
@@ -500,8 +500,9 @@ static int resume_target_kernel(bool platform_mode)
  * hibernation_restore - Quiesce devices and restore from a hibernation image.
  * @platform_mode: If set, use platform driver to prepare for the transition.
  *
- * This routine must be called with pm_mutex held. If it is successful, control
- * reappears in the restored target kernel in hibernation_snapshot().
+ * This routine must be called with system_transition_mutex held. If it is
+ * successful, control reappears in the restored target kernel in
+ * hibernation_snapshot().
  */
 int hibernation_restore(int platform_mode)
 {
@@ -638,6 +639,7 @@ static void power_down(void)
 		break;
 	case HIBERNATION_PLATFORM:
 		hibernation_platform_enter();
+		/* Fall through */
 	case HIBERNATION_SHUTDOWN:
 		if (pm_power_off)
 			kernel_power_off();
@@ -805,13 +807,13 @@ static int software_resume(void)
 	 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs
 	 * is configured into the kernel. Since the regular hibernate
 	 * trigger path is via sysfs which takes a buffer mutex before
-	 * calling hibernate functions (which take pm_mutex) this can
-	 * cause lockdep to complain about a possible ABBA deadlock
+	 * calling hibernate functions (which take system_transition_mutex)
+	 * this can cause lockdep to complain about a possible ABBA deadlock
 	 * which cannot happen since we're in the boot code here and
 	 * sysfs can't be invoked yet. Therefore, we use a subclass
 	 * here to avoid lockdep complaining.
 	 */
-	mutex_lock_nested(&pm_mutex, SINGLE_DEPTH_NESTING);
+	mutex_lock_nested(&system_transition_mutex, SINGLE_DEPTH_NESTING);
 
 	if (swsusp_resume_device)
 		goto Check_image;
@@ -899,7 +901,7 @@ static int software_resume(void)
 	atomic_inc(&snapshot_device_available);
 	/* For success case, the suspend path will release the lock */
  Unlock:
-	mutex_unlock(&pm_mutex);
+	mutex_unlock(&system_transition_mutex);
 	pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
 	return error;
  Close_Finish:


@@ -15,17 +15,16 @@
 #include <linux/workqueue.h>
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
+#include <linux/suspend.h>
 
 #include "power.h"
 
-DEFINE_MUTEX(pm_mutex);
-
 #ifdef CONFIG_PM_SLEEP
 
 void lock_system_sleep(void)
 {
 	current->flags |= PF_FREEZER_SKIP;
-	mutex_lock(&pm_mutex);
+	mutex_lock(&system_transition_mutex);
 }
 EXPORT_SYMBOL_GPL(lock_system_sleep);
 
@@ -37,8 +36,9 @@ void unlock_system_sleep(void)
 	 *
 	 * Reason:
 	 * Fundamentally, we just don't need it, because freezing condition
-	 * doesn't come into effect until we release the pm_mutex lock,
-	 * since the freezer always works with pm_mutex held.
+	 * doesn't come into effect until we release the
+	 * system_transition_mutex lock, since the freezer always works with
+	 * system_transition_mutex held.
 	 *
 	 * More importantly, in the case of hibernation,
 	 * unlock_system_sleep() gets called in snapshot_read() and
@@ -47,7 +47,7 @@ void unlock_system_sleep(void)
 	 * enter the refrigerator, thus causing hibernation to lockup.
 	 */
 	current->flags &= ~PF_FREEZER_SKIP;
-	mutex_unlock(&pm_mutex);
+	mutex_unlock(&system_transition_mutex);
 }
 EXPORT_SYMBOL_GPL(unlock_system_sleep);


@@ -556,7 +556,7 @@ static int enter_state(suspend_state_t state)
 	} else if (!valid_state(state)) {
 		return -EINVAL;
 	}
-	if (!mutex_trylock(&pm_mutex))
+	if (!mutex_trylock(&system_transition_mutex))
 		return -EBUSY;
 
 	if (state == PM_SUSPEND_TO_IDLE)
@@ -590,7 +590,7 @@ static int enter_state(suspend_state_t state)
 	pm_pr_dbg("Finishing wakeup.\n");
 	suspend_finish();
  Unlock:
-	mutex_unlock(&pm_mutex);
+	mutex_unlock(&system_transition_mutex);
 	return error;
 }


@@ -923,7 +923,7 @@ int swsusp_write(unsigned int flags)
 	}
 	memset(&snapshot, 0, sizeof(struct snapshot_handle));
 	error = snapshot_read_next(&snapshot);
-	if (error < PAGE_SIZE) {
+	if (error < (int)PAGE_SIZE) {
 		if (error >= 0)
 			error = -EFAULT;
 
@@ -1483,7 +1483,7 @@ int swsusp_read(unsigned int *flags_p)
 	memset(&snapshot, 0, sizeof(struct snapshot_handle));
 	error = snapshot_write_next(&snapshot);
-	if (error < PAGE_SIZE)
+	if (error < (int)PAGE_SIZE)
 		return error < 0 ? error : -EFAULT;
 	header = (struct swsusp_info *)data_of(snapshot);
 	error = get_swap_reader(&handle, flags_p);


@@ -216,7 +216,7 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	if (!mutex_trylock(&pm_mutex))
+	if (!mutex_trylock(&system_transition_mutex))
 		return -EBUSY;
 
 	lock_device_hotplug();
@@ -394,7 +394,7 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
 	}
 
 	unlock_device_hotplug();
-	mutex_unlock(&pm_mutex);
+	mutex_unlock(&system_transition_mutex);
 
 	return error;
 }


@@ -294,7 +294,7 @@ void kernel_power_off(void)
 }
 EXPORT_SYMBOL_GPL(kernel_power_off);
 
-static DEFINE_MUTEX(reboot_mutex);
+DEFINE_MUTEX(system_transition_mutex);
 
 /*
  * Reboot system call: for obvious reasons only root may call it,
@@ -338,7 +338,7 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
 	if ((cmd == LINUX_REBOOT_CMD_POWER_OFF) && !pm_power_off)
 		cmd = LINUX_REBOOT_CMD_HALT;
 
-	mutex_lock(&reboot_mutex);
+	mutex_lock(&system_transition_mutex);
 	switch (cmd) {
 	case LINUX_REBOOT_CMD_RESTART:
 		kernel_restart(NULL);
@@ -389,7 +389,7 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd,
 		ret = -EINVAL;
 		break;
 	}
-	mutex_unlock(&reboot_mutex);
+	mutex_unlock(&system_transition_mutex);
 	return ret;
 }


@@ -155,16 +155,17 @@ static inline void set_pcppage_migratetype(struct page *page, int migratetype)
  * The following functions are used by the suspend/hibernate code to temporarily
  * change gfp_allowed_mask in order to avoid using I/O during memory allocations
  * while devices are suspended. To avoid races with the suspend/hibernate code,
- * they should always be called with pm_mutex held (gfp_allowed_mask also should
- * only be modified with pm_mutex held, unless the suspend/hibernate code is
- * guaranteed not to run in parallel with that modification).
+ * they should always be called with system_transition_mutex held
+ * (gfp_allowed_mask also should only be modified with system_transition_mutex
+ * held, unless the suspend/hibernate code is guaranteed not to run in parallel
+ * with that modification).
  */
 
 static gfp_t saved_gfp_mask;
 
 void pm_restore_gfp_mask(void)
 {
-	WARN_ON(!mutex_is_locked(&pm_mutex));
+	WARN_ON(!mutex_is_locked(&system_transition_mutex));
 	if (saved_gfp_mask) {
 		gfp_allowed_mask = saved_gfp_mask;
 		saved_gfp_mask = 0;
@@ -173,7 +174,7 @@ void pm_restore_gfp_mask(void)
 
 void pm_restrict_gfp_mask(void)
 {
-	WARN_ON(!mutex_is_locked(&pm_mutex));
+	WARN_ON(!mutex_is_locked(&system_transition_mutex));
 	WARN_ON(saved_gfp_mask);
 	saved_gfp_mask = gfp_allowed_mask;
 	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);