Move CPU device specific code out of the generic OPP library and add it
to cpu.c.
Along with that, create a core-internal opp.h header, which will be used
to share structures and function prototypes within the OPP core.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
OPP code is expanding and is already present in multiple directories
(cpufreq and power). Let's move it to its own directory to manage it
better.
This also moves/renames the cpufreq_opp file to cpu.c, as it will
contain helpers for CPU devices. It's not just about cpufreq; other
frameworks can use OPPs as well.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
That's the naming convention followed in most of the OPP core, but a few
routines didn't follow it; fix them.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The free-table routines are the opposite of the init-table ones, and must
be named to make that clear. The opposite of 'init' is 'exit', but that
doesn't really suit here.
Replace 'init' with 'add' and 'free' with 'remove'.
Reported-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
We already have a better API to get the OPP descriptor block's node from
the CPU node. Let's reuse that instead of creating our own routines for
the same thing; that cleans up the code a lot.
This also removes a check we had earlier (as we are now using the generic
API): we used to verify that the operating-points-v2 property contained a
single phandle rather than multiple phandles.
Removing this check isn't an issue because we only parse the first entry
with of_parse_phandle(), so if a user passes multiple phandles, that's
really their problem :)
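A minimal sketch of the lookup this relies on (the wrapper name here is
hypothetical; only of_parse_phandle() is the real API):

static struct device_node *get_opp_desc_node(struct device_node *cpu_np)
{
        /*
         * of_parse_phandle() returns the node behind the first phandle in
         * "operating-points-v2" (with a reference taken); any additional
         * phandles are simply ignored, which is why the old "single
         * phandle" check can go away.
         */
        return of_parse_phandle(cpu_np, "operating-points-v2", 0);
}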
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* pm-cpu:
kernel/cpu_pm: fix cpu_cluster_pm_exit comment
* pm-cpuidle:
cpuidle/coupled: Add sanity check for safe_state_index
* pm-domains:
staging: board: Migrate away from __pm_genpd_name_add_device()
PM / Domains: Ensure subdomain is not in use before removing
PM / Domains: Try power off masters in error path of __pm_genpd_poweron()
There is no point returning suspend_opp if it is disabled by the core,
as we can't use it at all. Fix it.
Fixes: 4eafbd15b6 ("PM / OPP: add dev_pm_opp_get_suspend_opp() helper")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The function pm_genpd_remove_subdomain() removes a subdomain from a
generic PM domain; however, it does not check whether the subdomain has
any slave domains or devices attached before doing so. Therefore, add a
test to verify that the subdomain has no slave domains associated and no
devices attached before removing it.
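A rough sketch of the kind of check being added (the list field names are
assumptions about struct generic_pm_domain, not a verbatim copy of the
patch):

/* Illustrative only: refuse removal while the subdomain is still in use. */
static bool genpd_subdomain_in_use(struct generic_pm_domain *subdomain)
{
        return !list_empty(&subdomain->slave_links) ||
               !list_empty(&subdomain->dev_list);
}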
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
While powering up a genpd, its domain masters are powered up first.
In the error path of __pm_genpd_poweron(), we didn't try to power off
these domain masters. Let's deal with that to avoid leaving unused PM
domains powered.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Merge tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
"From the number of commits perspective, the biggest items are ACPICA
and cpufreq changes with the latter taking the lead (over 50 commits).
On the cpufreq front, there are many cleanups and minor fixes in the
core and governors, driver updates etc. We also have a new cpufreq
driver for Mediatek MT8173 chips.
ACPICA mostly updates its debug infrastructure and adds a number of
fixes and cleanups for good measure.
The Operating Performance Points (OPP) framework is updated with new
DT bindings and support for them among other things.
We have a few updates of the generic power domains framework and a
reorganization of the ACPI device enumeration code and bus type
operations.
And a lot of fixes and cleanups all over.
Included is one branch from the MFD tree as it contains some
PM-related driver core and ACPI PM changes a few other commits are
based on.
Specifics:
- ACPICA update to upstream revision 20150818 including method
tracing extensions to allow more in-depth AML debugging in the
kernel and a number of assorted fixes and cleanups (Bob Moore, Lv
Zheng, Markus Elfring).
- ACPI sysfs code updates and a documentation update related to AML
method tracing (Lv Zheng).
- ACPI EC driver fix related to serialized evaluations of _Qxx
methods and ACPI tools updates allowing the EC userspace tool to be
built from the kernel source (Lv Zheng).
- ACPI processor driver updates preparing it for future introduction
of CPPC support and ACPI PCC mailbox driver updates (Ashwin
Chaugule).
- ACPI interrupts enumeration fix for a regression related to the
handling of IRQ attribute conflicts between MADT and the ACPI
namespace (Jiang Liu).
- Fixes related to ACPI device PM (Mika Westerberg, Srinidhi
Kasagar).
- ACPI device registration code reorganization to separate the
sysfs-related code and bus type operations from the rest (Rafael J
Wysocki).
- Assorted cleanups in the ACPI core (Jarkko Nikula, Mathias Krause,
Andy Shevchenko, Rafael J Wysocki, Nicolas Iooss).
- ACPI cpufreq driver and ia64 cpufreq driver fixes and cleanups (Pan
Xinhui, Rafael J Wysocki).
- cpufreq core cleanups on top of the previous changes allowing it to
preserve its sysfs directories over system suspend/resume (Viresh
Kumar, Rafael J Wysocki, Sebastian Andrzej Siewior).
- cpufreq fixes and cleanups related to governors (Viresh Kumar).
- cpufreq updates (core and the cpufreq-dt driver) related to the
turbo/boost mode support (Viresh Kumar, Bartlomiej Zolnierkiewicz).
- New DT bindings for Operating Performance Points (OPP), support for
them in the OPP framework and in the cpufreq-dt driver plus related
OPP framework fixes and cleanups (Viresh Kumar).
- cpufreq powernv driver updates (Shilpasri G Bhat).
- New cpufreq driver for Mediatek MT8173 (Pi-Cheng Chen).
- Assorted cpufreq driver (speedstep-lib, sfi, integrator) cleanups
and fixes (Abhilash Jindal, Andrzej Hajda, Cristian Ardelean).
- intel_pstate driver updates including Skylake-S support, support
for enabling HW P-states per CPU and an additional vendor bypass
list entry (Kristen Carlson Accardi, Chen Yu, Ethan Zhao).
- cpuidle core fixes related to the handling of coupled idle states
(Xunlei Pang).
- intel_idle driver updates including Skylake Client support and
support for freeze-mode-specific idle states (Len Brown).
- Driver core updates related to power management (Andy Shevchenko,
Rafael J Wysocki).
- Generic power domains framework fixes and cleanups (Jon Hunter,
Geert Uytterhoeven, Rajendra Nayak, Ulf Hansson).
- Device PM QoS framework update to allow the latency tolerance
setting to be exposed to user space via sysfs (Mika Westerberg).
- devfreq support for PPMUv2 in Exynos5433 and a fix for an incorrect
exynos-ppmu DT binding (Chanwoo Choi, Javier Martinez Canillas).
- System sleep support updates (Alan Stern, Len Brown, SungEun Kim).
- rockchip-io AVS support updates (Heiko Stuebner).
- PM core clocks support fixup (Colin Ian King).
- Power capping RAPL driver update including support for Skylake H/S
and Broadwell-H (Radivoje Jovanovic, Seiichi Ikarashi).
- Generic device properties framework fixes related to the handling
of static (driver-provided) property sets (Andy Shevchenko).
- turbostat and cpupower updates (Len Brown, Shilpasri G Bhat,
Shreyas B Prabhu)"
* tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (180 commits)
cpufreq: speedstep-lib: Use monotonic clock
cpufreq: powernv: Increase the verbosity of OCC console messages
cpufreq: sfi: use kmemdup rather than duplicating its implementation
cpufreq: drop !cpufreq_driver check from cpufreq_parse_governor()
cpufreq: rename cpufreq_real_policy as cpufreq_user_policy
cpufreq: remove redundant 'policy' field from user_policy
cpufreq: remove redundant 'governor' field from user_policy
cpufreq: update user_policy.* on success
cpufreq: use memcpy() to copy policy
cpufreq: remove redundant CPUFREQ_INCOMPATIBLE notifier event
cpufreq: mediatek: Add MT8173 cpufreq driver
dt-bindings: mediatek: Add MT8173 CPU DVFS clock bindings
PM / Domains: Fix typo in description of genpd_dev_pm_detach()
PM / Domains: Remove unusable governor dummies
PM / Domains: Make pm_genpd_init() available to modules
PM / domains: Align column headers and data in pm_genpd_summary output
powercap / RAPL: disable the 2nd power limit properly
tools: cpupower: Fix error when running cpupower monitor
PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
PM / OPP: Fix static checker warning (broken 64bit big endian systems)
...
* pm-sleep:
PM / suspend: make sync() on suspend-to-RAM build-time optional
PM / sleep: Allow devices without runtime PM to do direct-complete
PM / autosleep: Use workqueue for user space wakeup sources garbage collector
* pm-domains:
PM / Domains: Fix typo in description of genpd_dev_pm_detach()
PM / Domains: Remove unusable governor dummies
PM / Domains: Make pm_genpd_init() available to modules
PM / domains: Align column headers and data in pm_genpd_summary output
PM / Domains: Return -EPROBE_DEFER if we fail to init or turn-on domain
PM / Domains: Correct unit address in power-controller example
PM / Domains: Remove intermediate states from the power off sequence
* pm-avs:
PM / AVS: rockchip-io: add io selectors and supplies for rk3368
PM / AVS: rockchip-io: depend on CONFIG_POWER_AVS
* pm-cpuidle:
cpuidle/coupled: Remove redundant 'dev' argument of cpuidle_state_is_coupled()
cpuidle/coupled: Remove cpuidle_device::safe_state_index
intel_idle: Skylake Client Support
intel_idle: allow idle states to be freeze-mode specific
* pm-devfreq:
PM / devfreq: exynos-ppmu: Update documentation to support PPMUv2
PM / devfreq: exynos-ppmu: Add the support of PPMUv2 for Exynos5433
PM / devfreq: event: Remove incorrect property in exynos-ppmu DT binding
* pm-clk:
PM / clk: don't return int on __pm_clk_enable()
* pm-opp:
PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
PM / OPP: Fix static checker warning (broken 64bit big endian systems)
PM / OPP: Free resources and properly return error on failure
cpufreq-dt: make scaling_boost_freqs sysfs attr available when boost is enabled
cpufreq: dt: Add support for turbo/boost mode
cpufreq: dt: Add support for operating-points-v2 bindings
cpufreq: Allow drivers to enable boost support after registering driver
cpufreq: Update boost flag while initializing freq table from OPPs
PM / OPP: add dev_pm_opp_is_turbo() helper
PM / OPP: Add helpers for initializing CPU OPPs
PM / OPP: Add support for opp-suspend
PM / OPP: Add OPP sharing information to OPP library
PM / OPP: Add clock-latency-ns support
PM / OPP: Add support to parse "operating-points-v2" bindings
PM / OPP: Break _opp_add_dynamic() into smaller functions
PM / OPP: Allocate dev_opp from _add_device_opp()
PM / OPP: Create _remove_device_opp() for freeing dev_opp
PM / OPP: Relocate few routines
PM / OPP: Create a directory for opp bindings
PM / OPP: Update bindings to make opp-hz a 64 bit value
The function genpd_dev_pm_detach() detaches a device from a PM domain;
however, in its description, the "dev" argument is described as the
device to "attach" instead of "detach". Correct this.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Export the pm_genpd_init symbol so that it can be used in loadable
kernel modules.
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
"domain": header is indented by 4, data by 0 spaces => 0 spaces
"/device": header is indented by 11, data by 4 spaces => 4 spaces
"slaves": header is indented by 47, data by 49 spaces => 48 spaces
Ruler:
1234567890123456789012345678901234567890123456789012345678901234567890
Before:
    domain                      status         slaves
           /device                                      runtime status
----------------------------------------------------------------------
a3sp                            on              a2us
    /devices/platform/e60b0000.i2c                suspended
After:
domain                          status          slaves
    /device                                             runtime status
----------------------------------------------------------------------
a3sp                            on              a2us
    /devices/platform/e60b0000.i2c              suspended
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
IS_ERR(_OR_NULL) already contains an 'unlikely' compiler hint and there
is no need to add it again in its callers. Drop it.
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Dan Carpenter reported (generated with a static checker):
drivers/base/power/opp.c:949 _opp_add_static_v2()
warn: passing casted pointer '&new_opp->clock_latency_ns' to
'of_property_read_u32()' 64 vs 32.
This code will break on 64-bit, big-endian machines.
Fix this by reading the value into a u32 variable first and then
assigning it to the unsigned long variable.
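A minimal sketch of the resulting pattern (illustrative; names follow the
report above):

        u32 val;

        /*
         * Read the 32-bit DT property into a u32 first, then widen it.
         * Passing &new_opp->clock_latency_ns (an unsigned long) straight
         * to of_property_read_u32() only happens to work on 32-bit and
         * little-endian systems.
         */
        if (!of_property_read_u32(np, "clock-latency-ns", &val))
                new_opp->clock_latency_ns = val;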
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Suggested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
_of_init_opp_table_v2() isn't freeing up resources on some errors, and
the error values returned aren't always correct either.
This fixes the following problems:
- Return -ENOENT if no entries are found in the table.
- Use IS_ERR() to properly check the return value of _find_device_opp().
- Return the error value with PTR_ERR() in the above case.
- Free the table if _find_device_opp() fails.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Pull RCU changes from Paul E. McKenney:
- The combination of tree geometry-initialization simplifications
and OS-jitter-reduction changes to expedited grace periods.
These two are stacked due to the large number of conflicts
that would otherwise result.
[ With one addition, a temporary commit to silence a lockdep false
positive. Additional changes to the expedited grace-period
primitives (queued for 4.4) remove the cause of this false
positive, and therefore include a revert of this temporary commit. ]
- Documentation updates.
- Torture-test updates.
- Miscellaneous fixes.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add a dev_pm_opp_is_turbo() helper to verify whether an OPP is to be used
only for turbo mode.
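For instance, a frequency-table builder might use it roughly like this (a
sketch; 'boost_enabled' and opp_usable() are hypothetical, not part of the
patch):

/* Only expose turbo-only OPPs when boost mode is enabled. */
static bool opp_usable(struct dev_pm_opp *opp, bool boost_enabled)
{
        return boost_enabled || !dev_pm_opp_is_turbo(opp);
}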
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
With "operating-points-v2" its possible to tell which devices share
OPPs. We already have infrastructure to decode that information.
This patch adds following APIs:
- of_get_cpus_sharing_opps: Returns cpumask of CPUs sharing OPPs (only
valid with v2 bindings).
- of_cpumask_init_opp_table: Initializes OPPs for all CPUs present in
cpumask.
- of_cpumask_free_opp_table: Frees OPPs for all CPUs present in cpumask.
- set_cpus_sharing_opps: Sets which CPUs share OPPs (only valid with old
OPP bindings, as this information isn't present in DT).
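A usage sketch for a cpufreq-style driver (the exact signatures are
assumptions based on the names above, not copied from the patch):

        struct cpumask shared_cpus;
        int ret;

        /* Which CPUs share OPPs with cpu_dev (v2 bindings only)? */
        ret = of_get_cpus_sharing_opps(cpu_dev, &shared_cpus);
        if (ret)
                cpumask_copy(&shared_cpus, cpumask_of(cpu_dev->id));

        /* Populate OPP tables for all of them, free them on teardown. */
        ret = of_cpumask_init_opp_table(&shared_cpus);
        if (ret)
                return ret;
        ...
        of_cpumask_free_opp_table(&shared_cpus);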
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
With "operating-points-v2" bindings, it's possible to specify the OPP to
which the device must be switched before suspending.
This patch adds support for getting that information.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
An OPP can be shared by multiple devices; for example, it's very common
for CPUs to share OPPs when they share clock/voltage rails.
This patch adds support for shared OPPs to the OPP library.
Instead of a single device, dev_opp will now contain a list of devices
that use it. It also detects whether the device we are trying to
initialize OPPs for shares OPPs with a device added earlier, and in that
case updates the list of devices managed by the dev_opp instead of
duplicating the OPPs again.
The same infrastructure will be used for the old OPP bindings, with
later patches.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
With "operating-points-v2" bindings, clock-latency is defined per OPP.
Users of this value expect a single value which defines the latency to
switch to any clock rate. Find maximum clock-latency-ns from the OPP
table to service requests from such users.
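The computation amounts to a simple maximum over the table, roughly like
this (field and variable names assumed for illustration):

        unsigned long latency = 0;
        struct dev_pm_opp *opp;

        /* One value for all OPPs: the worst-case transition latency. */
        list_for_each_entry(opp, &dev_opp->opp_list, node)
                if (opp->clock_latency_ns > latency)
                        latency = opp->clock_latency_ns;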
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This adds support in the OPP library to parse and create the list of OPPs
from operating-points-v2 bindings. It takes care of most of the properties
of the new bindings (except shared-opp, which will be handled separately).
For backward compatibility, we keep supporting the earlier bindings: we
search for the new bindings first, and if they aren't present we look for
the old, deprecated ones.
A few things are marked as TODO:
- Support for multiple OPP tables
- Support for multiple regulators
They should be fixed separately.
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Later commits will add support for the new OPP bindings, and this will be
required then. So let's do it in a separate patch to make it easily
reviewable.
Another change worth noticing is INIT_LIST_HEAD(&opp->node). We weren't
doing this earlier, as we never tried to delete a list node before it was
added to the list. That won't be the case anymore: we might try to delete
a node (just to reuse the same code paths) without it ever having been
added to the list.
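A sketch of why the initialization matters (illustrative only):

        INIT_LIST_HEAD(&opp->node);     /* node now points to itself */
        ...
        /*
         * Error path reached before list_add(&opp->node, ...):
         * list_del() on a self-linked node is safe; on an uninitialized
         * node it would dereference garbage.
         */
        list_del(&opp->node);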
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There is no need to complicate _opp_add_dynamic() with allocation of
dev_opp as well. Allocate it from _add_device_opp() instead.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This will be used from multiple places later. Let's create a separate
routine for it.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
In order to prepare for the later commits, this relocates a few routines
towards the top, as they will be used earlier in the code.
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
When a device is probed, the function dev_pm_domain_attach() is called
to see if there is a power-domain that is associated with the device and
needs to be turned on. If dev_pm_domain_attach() does not return
-EPROBE_DEFER then the device will be probed.
For devices using genpd, dev_pm_domain_attach() will call
genpd_dev_pm_attach(). If genpd_dev_pm_attach() does not find a power
domain associated with the device then it returns an error code not
equal to -EPROBE_DEFER to allow the device to be probed. However, if
genpd_dev_pm_attach() does find a power-domain that is associated with
the device, then it does not return -EPROBE_DEFER on failure and hence
the device will still be probed. Furthermore, genpd_dev_pm_attach() does
not check the error code returned by pm_genpd_poweron() to see if the
power-domain was turned on successfully.
Fix this by checking the return code from pm_genpd_poweron() and
returning -EPROBE_DEFER from genpd_dev_pm_attach() on failure, if there
is a power-domain associated with the device.
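The intended behaviour, roughly (a sketch, not the literal hunk):

        ret = pm_genpd_poweron(pd);
        if (ret < 0)
                /* PM domain found but could not be powered on: retry later. */
                return -EPROBE_DEFER;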
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Genpd's ->runtime_suspend() (assigned to pm_genpd_runtime_suspend())
doesn't immediately walk the hierarchy of ->runtime_suspend() callbacks.
Instead, pm_genpd_runtime_suspend() calls pm_genpd_poweroff() which
postpones that until *all* the devices in the genpd are runtime suspended.
When pm_genpd_poweroff() discovers that the last device in the genpd is
about to be runtime suspended, it calls __pm_genpd_save_device() for *all*
the devices in the genpd sequentially. Furthermore,
__pm_genpd_save_device() invokes the ->start() callback, walks the
hierarchy of the ->runtime_suspend() callbacks and invokes the ->stop()
callback. This causes a "thundering herd" problem.
Let's address this issue by having pm_genpd_runtime_suspend() immediately
walk the hierarchy of the ->runtime_suspend() callbacks, instead of
postponing that to the power off sequence via pm_genpd_poweroff(). If the
selected ->runtime_suspend() callback doesn't return an error code, call
pm_genpd_poweroff() to see if it's feasible to also power off the PM
domain.
Adopting this change enables us to simplify parts of the code in genpd,
for example the locking mechanism. Additionally, it gives some positive
side effects, as described below.
i)
One device's ->runtime_resume() latency is no longer affected by other
devices' latencies in a genpd.
The complexity genpd has in order to support aborting the power off
sequence suffers from latency issues. More precisely, a device that is
requested to be runtime resumed may end up waiting for
__pm_genpd_save_device() to complete its operations for *another* device,
because pm_genpd_poweroff() can't act on an abort request while it waits
for __pm_genpd_save_device() to return.
As this patch removes the intermediate states in pm_genpd_poweroff() while
powering off the PM domain, we no longer need the ability to abort that
sequence.
ii)
Make pm_runtime[_status]_suspended() reliable when used with genpd.
Until the last device in a genpd becomes idle, pm_genpd_runtime_suspend()
will return 0 without actually walking the hierarchy of the
->runtime_suspend() callbacks. However, by returning 0 the runtime PM core
considers the device as runtime_suspended, so
pm_runtime[_status]_suspended() will return true, even though the device
isn't (yet) runtime suspended.
After this patch, since pm_genpd_runtime_suspend() immediately walks the
hierarchy of the ->runtime_suspend() callbacks,
pm_runtime[_status]_suspended() will accurately reflect the status of the
device.
iii)
Enable fine-grained PM through runtime PM callbacks in drivers/subsystems.
There are currently cases where drivers/subsystems implement runtime PM
callbacks to deploy fine-grained PM (e.g. gating clocks, moving pinctrl to
a power-save state, etc.). When genpd is used, pm_genpd_runtime_suspend()
postpones invoking these callbacks until *all* the devices in the genpd
are runtime suspended. In essence, one runtime resumed device prevents
fine-grained PM for other devices within the same genpd.
After this patch, since pm_genpd_runtime_suspend() immediately walks the
hierarchy of the ->runtime_suspend() callbacks, fine-grained PM is enabled
throughout all the levels of runtime PM callbacks.
iv)
Enable fine-grained PM for IRQ safe devices
Per the definition of an IRQ safe device, its runtime PM callbacks must
be able to execute in atomic context. In the path where genpd walks the
hierarchy of the ->runtime_suspend() callbacks for the device, it uses a
mutex; therefore, genpd prevents that path from being executed for IRQ
safe devices.
As this patch changes pm_genpd_runtime_suspend() to immediately walk the
hierarchy of the ->runtime_suspend() callbacks and without needing to use
a mutex, fine-grained PM is enabled throughout all the levels of runtime
PM callbacks for IRQ safe devices.
Unfortunately this patch also comes with a drawback, as described in the
summary below.
Drivers'/subsystems' runtime PM callbacks may be invoked even when the
genpd hasn't actually powered off the PM domain, potentially introducing
unnecessary latency.
However, in most cases, saving/restoring register contexts for devices are
typically fast operations or can be optimized in device specific ways
(e.g. shadow copies of register contents in memory, device-specific checks
to see if context has been lost before restoring context, etc.).
Still, in some cases the driver/subsystem may suffer from latency if
runtime PM is used in a very fine-grained manner (e.g. for each IO request
or xfer). To prevent that extra overhead, the driver/subsystem may deploy
the runtime PM autosuspend feature.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Lina Iyer <lina.iyer@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Typically when a device is created the bus core it belongs to (for example
PCI) does not know if the device supports things like latency tolerance.
This is left to the driver that binds to the device in question. However,
at that time the device has already been created and there is no way to set
its dev->power.set_latency_tolerance anymore.
So follow what has been done for other PM QoS attributes as well and allow
drivers to expose and hide latency tolerance from userspace, if the device
supports it.
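A sketch of how a driver that knows its device supports latency tolerance
might use this (foo_probe()/foo_remove() are hypothetical):

static int foo_probe(struct device *dev)
{
        int ret;

        /* Create the latency tolerance sysfs attribute for user space. */
        ret = dev_pm_qos_expose_latency_tolerance(dev);
        if (ret)
                return ret;
        ...
        return 0;
}

static void foo_remove(struct device *dev)
{
        dev_pm_qos_hide_latency_tolerance(dev);
}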
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
This commit renames rcu_lockdep_assert() to RCU_LOCKDEP_WARN() for
consistency with the WARN() series of macros. This also requires
inverting the sense of the conditional, which this commit also does.
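For example (the message text here is only illustrative):

        /* Before: fires when the condition is false. */
        rcu_lockdep_assert(rcu_read_lock_held(),
                           "suspicious rcu_dereference_check() usage");

        /* After: fires when the condition is true, like WARN(). */
        RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
                         "suspicious rcu_dereference_check() usage");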
Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Don't unset the direct_complete flag on devices that have runtime PM
disabled, if they are runtime suspended.
This is needed because otherwise ancestor devices wouldn't be able to
do direct_complete without adding runtime PM support to all their
descendants.
This also removes pm_runtime_suspended_if_enabled(), because it's now
unused.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Static analysis by cppcheck found an issue that was recently introduced by
commit 471f7707b6 ("PM / clock_ops: make __pm_clk_enable more generic"),
where the return status in ret was not being initialised and garbage was
returned when ce->status >= PCE_STATUS_ERROR.
The fact that ret is not checked by the caller, and that ret is only used
internally in __pm_clk_enable() to check whether clk_enable() succeeded,
means we can stop returning it and instead turn __pm_clk_enable() into a
function with a void return.
Fixes: 471f7707b6 ("PM / clock_ops: make __pm_clk_enable more generic")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
If dev_pm_attach_wake_irq() fails, the device's power.wakeirq field
should not be set to point to the struct wake_irq passed to that
function, as that object will be freed going forward.
For this reason, make dev_pm_attach_wake_irq() first call
device_wakeup_attach_irq() and only set the device's power.wakeirq
field if that's successful.
That requires device_wakeup_attach_irq() to be called under the
device's power.lock lock, but since dev_pm_attach_wake_irq() is
the only caller of it, the requisite changes are easy to make.
Fixes: 4990d4fe32 (PM / Wakeirq: Add automated device wake IRQ handling)
Reported-by: Felipe Balbi <balbi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
If pm_genpd_{add,remove}_device() keeps on failing with -EAGAIN, we end
up with an infinite loop in genpd_dev_pm_{at,de}tach().
This may happen due to a genpd.prepared_count imbalance. This is a bug
elsewhere, but it will result in a system lock up, possibly during
reboot of an otherwise functioning system.
To avoid this, put a limit on the maximum number of loop iterations,
using an exponential back-off mechanism. If the limit is reached, the
operation will just fail. An error message is already printed.
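A rough sketch of such a bounded exponential back-off (the helper name and
the cap are hypothetical, not taken from the patch):

        unsigned int delay_ms = 1;

        for (;;) {
                ret = attach_once(dev);         /* hypothetical single attempt */
                if (ret != -EAGAIN || delay_ms > MAX_BACKOFF_MS)
                        break;
                msleep(delay_ms);
                delay_ms <<= 1;                 /* 1, 2, 4, 8, ... ms */
        }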
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* pm-clk:
PM / clk: Print acquired clock name in addition to con_id
PM / clk: Fix clock error check in __pm_clk_add()
drivers: sh: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: davinci: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: omap1: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: keystone: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
PM / clock_ops: Provide default runtime ops to users
* pm-domains:
PM / Domains: Skip timings during syscore suspend/resume
* powercap:
powercap / RAPL: Support Knights Landing
powercap / RAPL: Floor frequency setting in Atom SoC
* pm-sleep:
PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
PM / wakeup: add a dummy wakeup_source to record statistics
PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND
PM / sleep: Return -EBUSY from suspend_enter() on wakeup detection
PM / tick: Add tracepoints for suspend-to-idle diagnostics
PM / sleep: Fix symbol name in a comment in kernel/power/main.c
leds / PM: fix hibernation on arm when gpio-led used with CPU led trigger
ARM: omap-device: use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS
bus: omap_l3_noc: add missed callbacks for suspend-to-disk
PM / sleep: Add macro to define common noirq system PM callbacks
PM / sleep: Refine diagnostic messages in enter_state()
PM / wakeup: validate wakeup source before activating it.
* pm-runtime:
PM / Runtime: Update last_busy in rpm_resume
PM / runtime: add note about re-calling in during device probe()
Currently the con_id of the acquired clock is printed for debugging
purposes. But in several cases, the con_id is NULL, which doesn't
provide much debugging information when printed. These cases are:
- When explicitly passing a NULL con_id (which means the first clock
tied to the device, if available),
- When not using pm_clk_add(), but pm_clk_add_clk() (which takes a
"struct clk *" directly).
Hence print the actual clock name in addition to (and not instead of;
thanks Grygorii Strashko!) the con_id.
Note that the clock name is not available with legacy clock frameworks,
and the hex pointer address will be printed instead.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The PM Domain code uses ktime_get() to perform various latency
measurements. However, if ktime_get() is called while timekeeping is
suspended, the following warning is printed:
WARNING: CPU: 0 PID: 1340 at kernel/time/timekeeping.c:576 ktime_get+0x3
This happens when resuming the PM Domain that contains the clock events
source, which calls pm_genpd_syscore_poweron(). Chain of operations is:
timekeeping_resume()
{
  clockevents_resume()
    sh_cmt_clock_event_resume()
      pm_genpd_syscore_poweron()
        pm_genpd_sync_poweron()
          genpd_syscore_switch()
            genpd_power_on()
              ktime_get(), but timekeeping_suspended == 1
  ...
  timekeeping_suspended = 0;
}
Fix this by adding a "timed" parameter to genpd_power_{on,off}() and
pm_genpd_sync_power{off,on}(), to indicate whether latency measurements
are allowed. This parameter is passed as false in
genpd_syscore_switch() (i.e. during syscore suspend/resume), and true in
all other cases.
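Simplified, the power-on side ends up shaped like this (a sketch under the
description above, not the full function):

static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
        ktime_t time_start;
        int ret;

        if (!timed)
                /* Syscore path: timekeeping may be suspended, skip timing. */
                return genpd->power_on(genpd);

        time_start = ktime_get();
        ret = genpd->power_on(genpd);
        /* ... compare the elapsed time against genpd->power_on_latency_ns ... */
        return ret;
}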
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Move the trace_device_pm_callback locations for dpm_prepare and dpm_complete
to encompass the attempt to capture the device mutex prior to callback. This
is needed by analyze_suspend to identify gaps in the trace output caused by
the delay in locking the mutex for a device.
Signed-off-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It turns out we can automate the handling of device_may_wakeup()
quite a bit by using the kernel wakeup source list, as suggested
by Rafael J. Wysocki <rjw@rjwysocki.net>.
And as some hardware has a separate dedicated wake-up interrupt
in addition to the IO interrupt, we can automate the handling by
adding a generic threaded interrupt handler that just calls the
device PM runtime to wake up the device.
This allows dropping code from device drivers, as we currently
do this in multiple ways, and often wrongly.
For most drivers, we should be able to drop the following
boilerplate code from runtime_suspend and runtime_resume
functions:
...
device_init_wakeup(dev, true);
...
if (device_may_wakeup(dev))
        enable_irq_wake(irq);
...
if (device_may_wakeup(dev))
        disable_irq_wake(irq);
...
device_init_wakeup(dev, false);
...
We can replace it with just the following init and exit
time code:
...
device_init_wakeup(dev, true);
dev_pm_set_wake_irq(dev, irq);
...
dev_pm_clear_wake_irq(dev);
device_init_wakeup(dev, false);
...
And for hardware with dedicated wake-up interrupts:
...
device_init_wakeup(dev, true);
dev_pm_set_dedicated_wake_irq(dev, irq);
...
dev_pm_clear_wake_irq(dev);
device_init_wakeup(dev, false);
...
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
If we don't update last_busy in rpm_resume, devices can go back
to sleep immediately after resume. This happens at least in
cases where the device has been powered off and does not have
any interrupt pending until there's something in the FIFO.
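The driver-visible idiom this aligns rpm_resume() with looks roughly like
the following (a sketch, not the patch itself):

        /* Note recent activity so autosuspend doesn't fire immediately. */
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);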
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
In the final iteration of commit 245bd6f6af ("PM / clock_ops: Add
pm_clk_add_clk()"), a refcount increment was added by Grygorii Strashko.
However, the accompanying IS_ERR() check operates on the wrong clock
pointer, which is always zero at this point, i.e. not an error.
This may lead to a NULL pointer dereference later, when __clk_get()
tries to dereference an error pointer.
Check the passed clock pointer instead to fix this.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Fixes: 245bd6f6af ("PM / clock_ops: Add pm_clk_add_clk()")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
After a wakeup_source is destroyed, we lose all information about it, such
as how long this wakeup_source has been active. Add a dummy wakeup_source
to record such info.
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Most users of PM clocks do the exact same things in the runtime
suspend/resume callbacks. Provide them with USE_PM_CLK_RUNTIME_OPS so
as to avoid/remove boilerplate code.
Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
A rogue wakeup source not registered in the wakeup_sources list is not
visible from wakeup_sources_stats_show. Check whether the wakeup source is
registered properly by looking at its timer struct.
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The seq_printf return value, because it's frequently misused,
will eventually be converted to void.
See: commit 1f33c41c03 ("seq_file: Rename seq_overflow() to
seq_has_overflowed() and make public")
Signed-off-by: Joe Perches <joe@perches.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Len Brown <len.brown@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
pm_genpd_remove_device() tries hard to validate the generic PM domain
passed to it, but the validation is not complete.
dev->pm_domain contains a struct dev_pm_domain, which is the "base
class" of generic PM domains. Other users of dev_pm_domains include
stuff like vga_switcheroo. Hence, a device could have a generic PM
domain or a vga_switcheroo PM domain in dev->pm_domain.
We need to be certain that the PM domain is actually valid before we
try to remove it. We can do this easily as we have a way to get the
current validated generic PM domain for a struct device. This must
match the generic PM domain being requested for removal.
Convert the code to use this alternative validation method instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The PM domain code contains two methods to get the generic PM domain
for a struct device. One is dev_to_genpd() which is only safe when
we know for certain that the device has a generic PM domain attached.
The other is coded into genpd_dev_pm_detach() which ensures that the
PM domain in the struct device is a generic PM domain (and so is safer).
This commit factors out the safer version, documents it, and hides the
unsafe dev_to_genpd().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
PM domains are rather noisy; scheduling behaviour can cause callbacks
to take longer, which causes them to spit out a warning-level message
each time a callback takes a little longer than the previous time.
There really isn't a need for this, except when debugging.
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Buses that currently support attaching devices to their PM domains
invoke the dev_pm_domain_attach() API from their ->probe()
callbacks. During the attach procedure, genpd powers up the PM domain.
In those scenarios where the bus/driver doesn't need to access its device
during probe, it may leave it in the runtime PM suspended state, since
that's also the default state. That way, no notifications through
the runtime PM callbacks will reach the PM domain during probe.
For genpd, the consequence of the above scenario is that the PM domain
will remain powered. Therefore, implement the struct dev_pm_domain's
->sync() callback, which is invoked from the driver core after the
bus/driver has probed the device. It allows genpd to power off the PM
domain if it's unused.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Ulf: Updated patch according to updates in driver core ]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Occasionally, the system can't come back up after suspend/resume due to
problems in the device suspending phase. This patch makes the PM_TRACE
infrastructure cover the device suspending phase of the suspend/resume
process, so the information in the RTC can tell developers which device
suspend function made the system hang.
Signed-off-by: Zhonghui Fu <zhonghui.fu@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Export the pm_system_wakeup function to allow irq handlers to deal with
system wakeup.
This is needed for shared IRQ lines where one of the handlers is
registered with IRQF_NO_SUSPEND, while the other ones want to configure
it as a wakeup source.
In this specific case, the irq core does not handle the wakeup process
and leaves the decision to each irq handler.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reviewed-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
To keep consistency with the rest of the file, use 'genpd' as the
name of the 'struct generic_pm_domain' pointer instead of 'gpd'.
This is just a rename, no functional changes.
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This patch reduces the kernel size by removing error messages that duplicate
the normal OOM message.
A simplified version of the semantic patch that finds this problem is as
follows: (http://coccinelle.lip6.fr)
@@
identifier f,print,l;
expression e;
constant char[] c;
@@
e = \(kzalloc\|kmalloc\|devm_kzalloc\|devm_kmalloc\)(...);
if (e == NULL) {
<+...
- print(...,c,...);
... when any
(
goto l;
|
return ...;
)
...+> }
Signed-off-by: Quentin Lambert <lambert.quentin@gmail.com>
Acked-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* pm-domains:
PM: Convert dev_pm_put_subsys_data() into a void function
PM: Update function header for dev_pm_get_subsys_data()
PM / Domains: Handle errors from genpd's ->attach_dev() callback
PM / Domains: Re-order initialization of generic_pm_domain_data
PM / Domains: Free pm_subsys_data in error path in __pm_genpd_add_device()
PM / Domains: Eliminate the mutex for the generic_pm_domain_data
PM / Domains: Don't check for an existing device when adding a new
PM / Domains: Don't allow an existing generic_pm_domain_data
PM / Domains: Remove reference counting for the generic_pm_domain_data
PM / Domains: Rename __pm_genpd_alloc|free_dev_data()
PM / Domains: Remove pm_genpd_dev_need_restore() API
Clients using the dev_pm_put_subsys_data() API aren't interested in a
return value. They only care about decreasing a reference to the device's
pm_subsys_data. So, let's convert the API to a void function, which seems
like a reasonable thing to do anyway.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The commit "PM: Make dev_pm_get_subsys_data() always return 0 on success"
changed the return value from dev_pm_get_subsys_data(). Let's update the
comment in the function header to reflect this change as well.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The optional genpd ->attach_dev() callback is invoked from
__pm_genpd_add_device(). Let's handle errors returned by this callback
and propagate the error code.
When __pm_genpd_add_device() is invoked through the generic OF-based PM
domain look-up path, the device is being probed. Returning an error
will mean the device won't be attached to its PM domain. Errors of
-EPROBE_DEFER get special treatment and are propagated to the driver
core.
Therefore this change also enables the ->attach_dev() callback to
request a deferred probe sequence.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Move the initialization of the struct generic_pm_domain_data into
genpd_alloc_dev_data(), including the assignment of the device's
->pm_domain() callback. Make corresponding changes to
genpd_free_dev_data().
These changes will make the related code more readable. They will also
shrink the critical regions where genpd's mutex is held and where the
device's power-related spinlock is held.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The error path in __pm_genpd_add_device() didn't decrease the reference
to the struct pm_subsys_data.
Let's move the calls to dev_pm_get|put_subsys_data() into
genpd_alloc|free_dev_data() to fix this issue and thus prevent a
potential memory leak.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
While adding devices to their PM domains, dev_pm_qos_add_notifier() was
invoked while allocating the generic_pm_domain_data for the device.
Since the generic_pm_domain_data's device pointer is assigned after
allocation, the ->genpd_dev_pm_qos_notifier() callback could be called
before a valid pointer to the device was available. A similar scenario
existed while removing a device from a genpd.
To cope with these scenarios, a mutex was used to protect the pointer to
the device.
By re-ordering the sequence in which dev_pm_qos_add|remove_notifier() are
invoked, we make sure the ->genpd_dev_pm_qos_notifier() callback is
always called with a valid device pointer available.
In this way, we eliminate the need for protecting the pointer and thus
can remove the mutex from the struct generic_pm_domain_data.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
When adding a device to a genpd, we no longer need to walk genpd's list
of existing devices to verify it hasn't already been added.
Instead we can now rely on the verification of not allowing existing
generic_pm_domain_data for a device, since that has the same meaning.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
When adding a device to a genpd, a struct generic_pm_domain_data is
allocated per device.
Verify that there is no existing generic_pm_domain_data for the device
we are about to add, since its presence tells us the device has already
been added to a genpd.
When genpd supported PM domain device callbacks, this was a valid
scenario. Now it isn't, so let's return an error code.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The reference counting was needed when genpd supported PM domain device
callbacks. Since this option has been removed, let's also remove the
reference counting of the struct generic_pm_domain_data.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
In a step towards getting consistent names for functions in genpd, rename
the internal __pm_genpd_alloc|free_dev_data() into
genpd_alloc|free_dev_data().
As discussed on the linux-pm list, let's move towards the following
name rules:
Internal functions:
genpd_*
_genpd_*
__genpd_*
External functions:
pm_genpd_*
_pm_genpd_*
__pm_genpd_*
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There are currently no users of this API, so let's remove it.
Additionally, if such a feature is needed in the future, a better option
is likely to use pm_runtime_set_active|suspended() in some form.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Add lockdep asserts for holding the RCU lock when calling
dev_pm_opp_get_freq() and dev_pm_opp_get_voltage() to aid in detecting
RCU misuses.
These are often called after dev_pm_opp_find_freq_ceil/exact(), which
already asserts the RCU lock. However, one could make an error by
releasing the lock too early, just after dev_pm_opp_find_freq_ceil().
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Add lockdep asserts for holding dev->power.lock to the non-static
functions which require it. They could be used outside of the file, so
the asserts may help in detecting locking misuse.
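For example (the function name here is hypothetical):

static void rpm_check_something(struct device *dev)
{
        /* Complain (with lockdep enabled) if the caller forgot the lock. */
        lockdep_assert_held(&dev->power.lock);
        ...
}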
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The kernel doc has bit-rotted over time. Re-sync it with Locking and
Return information, document all functions properly, and ensure that
./scripts/kernel-doc -v ./drivers/base/power/opp.c >/dev/null returns
no errors.
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
All exported functions use the dev_pm_* prefix and all static functions
are now standardized with a _ prefix. This is better than having to deal
with multiple function naming styles within the same file.
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Allows user drivers such as devfreq to be modules.
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* pm-domains:
PM / Domains: Export of_genpd_get_from_provider function
* powercap:
powercap / RAPL: add IDs for future Xeon CPUs
* pm-tools:
tools / cpupower: Fix no idle state information return value
tools / cpupower: Correctly detect if running as root
* pm-opp:
PM / OPP: do error handling at the bottom of dev_pm_opp_add_dynamic()
PM / OPP: handle allocation of device_opp in a separate routine
PM / OPP: reuse find_device_opp() instead of duplicating code
PM / OPP: Staticize __dev_pm_opp_remove()
PM / OPP: replace kfree with kfree_rcu while freeing 'struct device_opp'
* pm-cpufreq:
MAINTAINERS: add entry for intel_pstate
intel_pstate: Add a few comments
intel_pstate: add kernel parameter to force loading
* pm-tools:
Revert "tools: cpupower: fix return checks for sysfs_get_idlestate_count()"
A lot of callers miss the fact that dev_pm_opp_get_opp_count() needs to
be called under the RCU lock. Given that RCU read-side critical sections
can safely be nested, instead of providing a *_locked() API, let's take
the RCU lock inside dev_pm_opp_get_opp_count() and leave the callers as
they are.
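The shape of the change, roughly (a sketch; the internal helper name is
hypothetical):

int dev_pm_opp_get_opp_count(struct device *dev)
{
        int count;

        /*
         * RCU read-side critical sections nest, so this is safe even for
         * callers that already hold the RCU lock themselves.
         */
        rcu_read_lock();
        count = _opp_count(dev);        /* hypothetical internal helper */
        rcu_read_unlock();

        return count;
}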
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Not having an OPP defined for a device is not a crime; we should not
splat a warning in this case. Also, it seems that we are ready to accept
an invalid dev (find_device_opp() will return ERR_PTR(-EINVAL) then), so
let's not crash in dev_name() in that case.
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Certain OPP APIs need to be called under RCU lock; let's add a few
rcu_lockdep_assert() calls to warn about potential misuse.
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This function looks up a PM domain from the provider. This will be
useful for adding parent/child domain relationships from SoC-specific
code. The caller of the function must make sure that the PM domain
provider is already registered.
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This makes it less error prone and moves common resource deallocation to
a single place.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Move the 'device_opp' allocation code into a separate routine to keep only
the necessary part in dev_pm_opp_add_dynamic().
Also do s/sizeof(struct device_opp)/sizeof(*dev_opp)/ and remove the print
message on kzalloc() failure, as checkpatch warns about that.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reuse find_device_opp() in opp_set_availability() instead of duplicating code.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It's a local routine and need not be accessible outside of opp.c.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Somehow one of the instances of freeing resources failed to use
kfree_rcu() and used kfree() instead. This might cause problems, as the
node might be referenced by readers under RCU locks and we must wait for
the RCU grace period as well.
While we are at it, also update the comment over 'struct device_opp' to
mention why we wait for both RCU and SRCU grace periods.
Fixes: 129eec55df (PM / OPP Introduce APIs to remove OPPs)
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
By mistake we called find_device_opp() twice in of_free_opp_table(); fix
it. The generated diff doesn't show the problem well, so here is the code
snippet:
void of_free_opp_table(struct device *dev)
{
        struct device_opp *dev_opp = find_device_opp(dev);
        struct dev_pm_opp *opp, *tmp;

        /* Check for existing list for 'dev' */
        dev_opp = find_device_opp(dev);
        ...
}
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Right now we use dev_opp's value to fill new_opp->dev_opp before dev_opp
has actually been found/allocated. Move that to a later point, where
dev_opp is valid.
Fixes: a7470db6fe (PM / OPP don't match for existing OPPs when list is empty)
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>