Power management and ACPI material for v4.6-rc3


Merge tag 'pm+acpi-4.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI fixes from Rafael Wysocki:
 "Fixes for some issues discovered after recent changes and for some
  that have just been found lately regardless of those changes
  (intel_pstate, intel_idle, PM core, mailbox/pcc, turbostat) plus
  support for some new CPU models (intel_idle, Intel RAPL driver,
  turbostat) and documentation updates (intel_pstate, PM core).

  Specifics:

   - intel_pstate fixes for two issues exposed by the recent switch over
     from using timers and for one issue introduced during the 4.4 cycle
     plus new comments describing data structures used by the driver
     (Rafael Wysocki, Srinivas Pandruvada).

   - intel_idle fixes related to CPU offline/online (Richard Cochran).

   - intel_idle support (new CPU IDs and state definitions mostly) for
     Skylake-X and Kabylake processors (Len Brown).

   - PCC mailbox driver fix for an out-of-bounds memory access that may
     cause the kernel to panic() (Shanker Donthineni).

   - New (missing) CPU ID for one apparently overlooked Haswell model in
     the Intel RAPL power capping driver (Srinivas Pandruvada).

   - Fix for the PM core's wakeup IRQs framework to make it work after
     wakeup settings reconfiguration from sysfs (Grygorii Strashko).

   - Runtime PM documentation update to make it describe what needs to
     be done during device removal more precisely (Krzysztof Kozlowski).

   - Stale comment removal cleanup in the cpufreq-dt driver (Viresh
     Kumar).

   - turbostat utility fixes and support for Broxton, Skylake-X and
     Kabylake processors (Len Brown)"

* tag 'pm+acpi-4.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (28 commits)
  PM / wakeirq: fix wakeirq setting after wakup re-configuration from sysfs
  tools/power turbostat: work around RC6 counter wrap
  tools/power turbostat: initial KBL support
  tools/power turbostat: initial SKX support
  tools/power turbostat: decode BXT TSC frequency via CPUID
  tools/power turbostat: initial BXT support
  tools/power turbostat: print IRTL MSRs
  tools/power turbostat: SGX state should print only if --debug
  intel_idle: Add KBL support
  intel_idle: Add SKX support
  intel_idle: Clean up all registered devices on exit.
  intel_idle: Propagate hot plug errors.
  intel_idle: Don't overreact to a cpuidle registration failure.
  intel_idle: Setup the timer broadcast only on successful driver load.
  intel_idle: Avoid a double free of the per-CPU data.
  intel_idle: Fix dangling registration on error path.
  intel_idle: Fix deallocation order on the driver exit path.
  intel_idle: Remove redundant initialization calls.
  intel_idle: Fix a helper function's return value.
  intel_idle: remove useless return from void function.
  ...
Merged by Linus Torvalds, 2016-04-09 11:03:48 -07:00
commit 40bca9dbab
9 changed files with 380 additions and 62 deletions

Documentation/power/runtime_pm.txt

@@ -586,6 +586,10 @@ drivers to make their ->remove() callbacks avoid races with runtime PM directly,
but also it allows of more flexibility in the handling of devices during the
removal of their drivers.
Drivers in ->remove() callback should undo the runtime PM changes done
in ->probe(). Usually this means calling pm_runtime_disable(),
pm_runtime_dont_use_autosuspend() etc.
The user space can effectively disallow the driver of the device to power manage
it at run time by changing the value of its /sys/devices/.../power/control
attribute to "on", which causes pm_runtime_forbid() to be called. In principle,
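
A minimal sketch of the pattern the new paragraph asks for, assuming a hypothetical platform driver whose ->probe() set up runtime PM with autosuspend (the foo_* names are illustrative; the pm_runtime_* helpers are the standard ones):

    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    static int foo_probe(struct platform_device *pdev)
    {
            pm_runtime_set_autosuspend_delay(&pdev->dev, 1000);
            pm_runtime_use_autosuspend(&pdev->dev);
            pm_runtime_enable(&pdev->dev);
            return 0;
    }

    static int foo_remove(struct platform_device *pdev)
    {
            /* Undo the runtime PM changes done in ->probe(). */
            pm_runtime_disable(&pdev->dev);
            pm_runtime_dont_use_autosuspend(&pdev->dev);
            return 0;
    }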

arch/x86/include/asm/msr-index.h

@@ -167,6 +167,14 @@
#define MSR_PKG_C9_RESIDENCY 0x00000631
#define MSR_PKG_C10_RESIDENCY 0x00000632
/* Interrupt Response Limit */
#define MSR_PKGC3_IRTL 0x0000060a
#define MSR_PKGC6_IRTL 0x0000060b
#define MSR_PKGC7_IRTL 0x0000060c
#define MSR_PKGC8_IRTL 0x00000633
#define MSR_PKGC9_IRTL 0x00000634
#define MSR_PKGC10_IRTL 0x00000635
/* Run Time Average Power Limiting (RAPL) Interface */
#define MSR_RAPL_POWER_UNIT 0x00000606
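
All six IRTL registers share one layout, which turbostat's new print_irtl() (further down) decodes: bits 9:0 hold a time value, the field above it selects a time unit, and bit 15 is a valid flag. A sketch of that decode, mirroring the masks print_irtl() uses:

    static const unsigned int irtl_time_units[] = {
            1, 32, 1024, 32768, 1048576, 33554432, 0, 0 };

    /* Convert a PKGCn_IRTL MSR value to nanoseconds; 0 if not valid. */
    static unsigned long long irtl_to_ns(unsigned long long msr)
    {
            if (!(msr & (1 << 15)))         /* bit 15: valid */
                    return 0;
            return (msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3];
    }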

drivers/base/power/wakeup.c

@@ -246,6 +246,8 @@ static int device_wakeup_attach(struct device *dev, struct wakeup_source *ws)
return -EEXIST;
}
dev->power.wakeup = ws;
if (dev->power.wakeirq)
device_wakeup_attach_irq(dev, dev->power.wakeirq);
spin_unlock_irq(&dev->power.lock);
return 0;
}
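
For context, before this change toggling /sys/devices/.../power/wakeup off and back on recreated the device's wakeup source without re-attaching dev->power.wakeirq, so a wakeirq configured at probe time silently stopped working. A sketch of the driver-side setup this fix preserves (hypothetical driver; device_init_wakeup() and dev_pm_set_wake_irq() are the standard helpers):

    #include <linux/platform_device.h>
    #include <linux/pm_wakeirq.h>

    static int foo_probe(struct platform_device *pdev)
    {
            int irq = platform_get_irq(pdev, 0);

            if (irq < 0)
                    return irq;
            /* Creates the wakeup source that sysfs can destroy/recreate. */
            device_init_wakeup(&pdev->dev, true);
            /* Attaches the IRQ that device_wakeup_attach() now re-attaches. */
            return dev_pm_set_wake_irq(&pdev->dev, irq);
    }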

drivers/cpufreq/cpufreq-dt.c

@@ -4,9 +4,6 @@
* Copyright (C) 2014 Linaro.
* Viresh Kumar <viresh.kumar@linaro.org>
*
* The OPP code in function set_target() is reused from
* drivers/cpufreq/omap-cpufreq.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.

drivers/cpufreq/intel_pstate.c

@@ -64,6 +64,25 @@ static inline int ceiling_fp(int32_t x)
return ret;
}
/**
* struct sample - Store performance sample
* @core_pct_busy: Ratio of APERF/MPERF in percent, which is actual
* performance during last sample period
* @busy_scaled: Scaled busy value which is used to calculate next
* P state. This can be different than core_pct_busy
* to account for cpu idle period
* @aperf: Difference of actual performance frequency clock count
* read from APERF MSR between last and current sample
* @mperf: Difference of maximum performance frequency clock count
* read from MPERF MSR between last and current sample
* @tsc: Difference of time stamp counter between last and
* current sample
* @freq: Effective frequency calculated from APERF/MPERF
* @time: Current time from scheduler
*
* This structure is used in the cpudata structure to store performance sample
* data for choosing next P State.
*/
struct sample {
int32_t core_pct_busy;
int32_t busy_scaled;
@@ -74,6 +93,20 @@ struct sample {
u64 time;
};
/**
* struct pstate_data - Store P state data
* @current_pstate: Current requested P state
* @min_pstate: Min P state possible for this platform
* @max_pstate: Max P state possible for this platform
* @max_pstate_physical:This is physical Max P state for a processor
* This can be higher than the max_pstate which can
* be limited by platform thermal design power limits
* @scaling: Scaling factor to convert frequency to cpufreq
* frequency units
* @turbo_pstate: Max Turbo P state possible for this platform
*
* Stores the per cpu model P state limits and current P state.
*/
struct pstate_data {
int current_pstate;
int min_pstate;
@@ -83,6 +116,19 @@ struct pstate_data {
int turbo_pstate;
};
/**
* struct vid_data - Stores voltage information data
* @min: VID data for this platform corresponding to
* the lowest P state
* @max: VID data corresponding to the highest P State.
* @turbo: VID data for turbo P state
* @ratio: Ratio of (vid max - vid min) /
* (max P state - Min P State)
*
* Stores the voltage data for DVFS (Dynamic Voltage and Frequency Scaling)
* This data is used in Atom platforms, where in addition to target P state,
* the voltage data needs to be specified to select next P State.
*/
struct vid_data {
int min;
int max;
@@ -90,6 +136,18 @@ struct vid_data {
int32_t ratio;
};
/**
* struct _pid - Stores PID data
* @setpoint: Target set point for busyness or performance
* @integral: Storage for accumulated error values
* @p_gain: PID proportional gain
* @i_gain: PID integral gain
* @d_gain: PID derivative gain
* @deadband: PID deadband
* @last_err: Last error storage for integral part of PID calculation
*
* Stores PID coefficients and last error for PID controller.
*/
struct _pid {
int setpoint;
int32_t integral;
@@ -100,6 +158,23 @@ struct _pid {
int32_t last_err;
};
/**
* struct cpudata - Per CPU instance data storage
* @cpu: CPU number for this instance data
* @update_util: CPUFreq utility callback information
* @pstate: Stores P state limits for this CPU
* @vid: Stores VID limits for this CPU
* @pid: Stores PID parameters for this CPU
* @last_sample_time: Last Sample time
* @prev_aperf: Last APERF value read from APERF MSR
* @prev_mperf: Last MPERF value read from MPERF MSR
* @prev_tsc: Last timestamp counter (TSC) value
* @prev_cummulative_iowait: IO Wait time difference from last and
* current sample
* @sample: Storage for storing last Sample data
*
* This structure stores per CPU instance data for all CPUs.
*/
struct cpudata {
int cpu;
@@ -118,6 +193,19 @@ struct cpudata {
};
static struct cpudata **all_cpu_data;
/**
* struct pid_adjust_policy - Stores static PID configuration data
* @sample_rate_ms: PID calculation sample rate in ms
* @sample_rate_ns: Sample rate calculation in ns
* @deadband: PID deadband
* @setpoint: PID Setpoint
* @p_gain_pct: PID proportional gain
* @i_gain_pct: PID integral gain
* @d_gain_pct: PID derivative gain
*
* Stores per CPU model static PID configuration data.
*/
struct pstate_adjust_policy {
int sample_rate_ms;
s64 sample_rate_ns;
@@ -128,6 +216,20 @@ struct pstate_adjust_policy {
int i_gain_pct;
};
/**
* struct pstate_funcs - Per CPU model specific callbacks
* @get_max: Callback to get maximum non turbo effective P state
* @get_max_physical: Callback to get maximum non turbo physical P state
* @get_min: Callback to get minimum P state
* @get_turbo: Callback to get turbo P state
* @get_scaling: Callback to get frequency scaling factor
* @get_val: Callback to convert P state to actual MSR write value
* @get_vid: Callback to get VID data for Atom platforms
* @get_target_pstate: Callback to a function to calculate next P state to use
*
* Core and Atom CPU models have different way to get P State limits. This
* structure is used to store those callbacks.
*/
struct pstate_funcs {
int (*get_max)(void);
int (*get_max_physical)(void);
@@ -139,6 +241,11 @@ struct pstate_funcs {
int32_t (*get_target_pstate)(struct cpudata *);
};
/**
* struct cpu_defaults- Per CPU model default config data
* @pid_policy: PID config data
* @funcs: Callback function data
*/
struct cpu_defaults {
struct pstate_adjust_policy pid_policy;
struct pstate_funcs funcs;
@@ -151,6 +258,34 @@ static struct pstate_adjust_policy pid_params;
static struct pstate_funcs pstate_funcs;
static int hwp_active;
/**
* struct perf_limits - Store user and policy limits
* @no_turbo: User requested turbo state from intel_pstate sysfs
* @turbo_disabled: Platform turbo status either from msr
* MSR_IA32_MISC_ENABLE or when maximum available pstate
* matches the maximum turbo pstate
* @max_perf_pct: Effective maximum performance limit in percentage, this
* is minimum of either limits enforced by cpufreq policy
* or limits from user set limits via intel_pstate sysfs
* @min_perf_pct: Effective minimum performance limit in percentage, this
* is maximum of either limits enforced by cpufreq policy
* or limits from user set limits via intel_pstate sysfs
* @max_perf: This is a scaled value between 0 to 255 for max_perf_pct
* This value is used to limit max pstate
* @min_perf: This is a scaled value between 0 to 255 for min_perf_pct
* This value is used to limit min pstate
* @max_policy_pct: The maximum performance in percentage enforced by
* cpufreq setpolicy interface
* @max_sysfs_pct: The maximum performance in percentage enforced by
* intel pstate sysfs interface
* @min_policy_pct: The minimum performance in percentage enforced by
* cpufreq setpolicy interface
* @min_sysfs_pct: The minimum performance in percentage enforced by
* intel pstate sysfs interface
*
* Storage for user and policy defined limits.
*/
struct perf_limits {
int no_turbo;
int turbo_disabled;
@@ -910,7 +1045,14 @@ static inline bool intel_pstate_sample(struct cpudata *cpu, u64 time)
cpu->prev_aperf = aperf;
cpu->prev_mperf = mperf;
cpu->prev_tsc = tsc;
return true;
/*
* First time this function is invoked in a given cycle, all of the
* previous sample data fields are equal to zero or stale and they must
* be populated with meaningful numbers for things to work, so assume
* that sample.time will always be reset before setting the utilization
* update hook and make the caller skip the sample then.
*/
return !!cpu->last_sample_time;
}
static inline int32_t get_avg_frequency(struct cpudata *cpu)
@@ -984,8 +1126,7 @@ static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
* enough period of time to adjust our busyness.
*/
duration_ns = cpu->sample.time - cpu->last_sample_time;
if ((s64)duration_ns > pid_params.sample_rate_ns * 3
&& cpu->last_sample_time > 0) {
if ((s64)duration_ns > pid_params.sample_rate_ns * 3) {
sample_ratio = div_fp(int_tofp(pid_params.sample_rate_ns),
int_tofp(duration_ns));
core_busy = mul_fp(core_busy, sample_ratio);
@@ -1100,10 +1241,8 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
intel_pstate_get_cpu_pstates(cpu);
intel_pstate_busy_pid_reset(cpu);
intel_pstate_sample(cpu, 0);
cpu->update_util.func = intel_pstate_update_util;
cpufreq_set_update_util_data(cpunum, &cpu->update_util);
pr_debug("intel_pstate: controlling: cpu %d\n", cpunum);
@@ -1122,22 +1261,54 @@ static unsigned int intel_pstate_get(unsigned int cpu_num)
return get_avg_frequency(cpu);
}
static void intel_pstate_set_update_util_hook(unsigned int cpu_num)
{
struct cpudata *cpu = all_cpu_data[cpu_num];
/* Prevent intel_pstate_update_util() from using stale data. */
cpu->sample.time = 0;
cpufreq_set_update_util_data(cpu_num, &cpu->update_util);
}
static void intel_pstate_clear_update_util_hook(unsigned int cpu)
{
cpufreq_set_update_util_data(cpu, NULL);
synchronize_sched();
}
static void intel_pstate_set_performance_limits(struct perf_limits *limits)
{
limits->no_turbo = 0;
limits->turbo_disabled = 0;
limits->max_perf_pct = 100;
limits->max_perf = int_tofp(1);
limits->min_perf_pct = 100;
limits->min_perf = int_tofp(1);
limits->max_policy_pct = 100;
limits->max_sysfs_pct = 100;
limits->min_policy_pct = 0;
limits->min_sysfs_pct = 0;
}
static int intel_pstate_set_policy(struct cpufreq_policy *policy)
{
if (!policy->cpuinfo.max_freq)
return -ENODEV;
if (policy->policy == CPUFREQ_POLICY_PERFORMANCE &&
policy->max >= policy->cpuinfo.max_freq) {
pr_debug("intel_pstate: set performance\n");
intel_pstate_clear_update_util_hook(policy->cpu);
if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
limits = &performance_limits;
if (hwp_active)
intel_pstate_hwp_set(policy->cpus);
return 0;
if (policy->max >= policy->cpuinfo.max_freq) {
pr_debug("intel_pstate: set performance\n");
intel_pstate_set_performance_limits(limits);
goto out;
}
} else {
pr_debug("intel_pstate: set powersave\n");
limits = &powersave_limits;
}
pr_debug("intel_pstate: set powersave\n");
limits = &powersave_limits;
limits->min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
limits->min_policy_pct = clamp_t(int, limits->min_policy_pct, 0 , 100);
limits->max_policy_pct = DIV_ROUND_UP(policy->max * 100,
@@ -1163,6 +1334,9 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
limits->max_perf = div_fp(int_tofp(limits->max_perf_pct),
int_tofp(100));
out:
intel_pstate_set_update_util_hook(policy->cpu);
if (hwp_active)
intel_pstate_hwp_set(policy->cpus);
@@ -1187,8 +1361,7 @@ static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
pr_debug("intel_pstate: CPU %d exiting\n", cpu_num);
cpufreq_set_update_util_data(cpu_num, NULL);
synchronize_sched();
intel_pstate_clear_update_util_hook(cpu_num);
if (hwp_active)
return;
@@ -1455,8 +1628,7 @@ out:
get_online_cpus();
for_each_online_cpu(cpu) {
if (all_cpu_data[cpu]) {
cpufreq_set_update_util_data(cpu, NULL);
synchronize_sched();
intel_pstate_clear_update_util_hook(cpu);
kfree(all_cpu_data[cpu]);
}
}
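
The struct _pid kernel-doc above describes a classic PID controller that turns the measured busyness into the next P-state request. A simplified sketch of one controller step, consistent with those fields (the driver's fixed-point helpers and percent scaling are omitted, so this is illustrative only, not the driver's actual math):

    /* One PID step: signed adjustment derived from the busyness error. */
    static int32_t pid_step_sketch(struct _pid *pid, int32_t busy)
    {
            int32_t err = pid->setpoint - busy;
            int32_t dterm;

            if (err > -pid->deadband && err < pid->deadband)
                    err = 0;                /* ignore noise near the setpoint */

            dterm = pid->d_gain * (err - pid->last_err);
            pid->last_err = err;
            pid->integral += err;           /* accumulated error for the I term */

            return pid->p_gain * err + pid->i_gain * pid->integral + dterm;
    }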

drivers/idle/intel_idle.c

@@ -660,6 +660,35 @@ static struct cpuidle_state skl_cstates[] = {
.enter = NULL }
};
static struct cpuidle_state skx_cstates[] = {
{
.name = "C1-SKX",
.desc = "MWAIT 0x00",
.flags = MWAIT2flg(0x00),
.exit_latency = 2,
.target_residency = 2,
.enter = &intel_idle,
.enter_freeze = intel_idle_freeze, },
{
.name = "C1E-SKX",
.desc = "MWAIT 0x01",
.flags = MWAIT2flg(0x01),
.exit_latency = 10,
.target_residency = 20,
.enter = &intel_idle,
.enter_freeze = intel_idle_freeze, },
{
.name = "C6-SKX",
.desc = "MWAIT 0x20",
.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 133,
.target_residency = 600,
.enter = &intel_idle,
.enter_freeze = intel_idle_freeze, },
{
.enter = NULL }
};
static struct cpuidle_state atom_cstates[] = {
{
.name = "C1E-ATM",
@@ -818,8 +847,11 @@ static int cpu_hotplug_notify(struct notifier_block *n,
* driver in this case
*/
dev = per_cpu_ptr(intel_idle_cpuidle_devices, hotcpu);
if (!dev->registered)
intel_idle_cpu_init(hotcpu);
if (dev->registered)
break;
if (intel_idle_cpu_init(hotcpu))
return NOTIFY_BAD;
break;
}
@@ -904,6 +936,10 @@ static const struct idle_cpu idle_cpu_skl = {
.disable_promotion_to_c1e = true,
};
static const struct idle_cpu idle_cpu_skx = {
.state_table = skx_cstates,
.disable_promotion_to_c1e = true,
};
static const struct idle_cpu idle_cpu_avn = {
.state_table = avn_cstates,
@@ -945,6 +981,9 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
ICPU(0x56, idle_cpu_bdw),
ICPU(0x4e, idle_cpu_skl),
ICPU(0x5e, idle_cpu_skl),
ICPU(0x8e, idle_cpu_skl),
ICPU(0x9e, idle_cpu_skl),
ICPU(0x55, idle_cpu_skx),
ICPU(0x57, idle_cpu_knl),
{}
};
@@ -987,22 +1026,15 @@ static int __init intel_idle_probe(void)
icpu = (const struct idle_cpu *)id->driver_data;
cpuidle_state_table = icpu->state_table;
if (boot_cpu_has(X86_FEATURE_ARAT)) /* Always Reliable APIC Timer */
lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE;
else
on_each_cpu(__setup_broadcast_timer, (void *)true, 1);
pr_debug(PREFIX "v" INTEL_IDLE_VERSION
" model 0x%X\n", boot_cpu_data.x86_model);
pr_debug(PREFIX "lapic_timer_reliable_states 0x%x\n",
lapic_timer_reliable_states);
return 0;
}
/*
* intel_idle_cpuidle_devices_uninit()
* unregister, free cpuidle_devices
* Unregisters the cpuidle devices.
*/
static void intel_idle_cpuidle_devices_uninit(void)
{
@@ -1013,9 +1045,6 @@ static void intel_idle_cpuidle_devices_uninit(void)
dev = per_cpu_ptr(intel_idle_cpuidle_devices, i);
cpuidle_unregister_device(dev);
}
free_percpu(intel_idle_cpuidle_devices);
return;
}
/*
@@ -1111,7 +1140,7 @@ static void intel_idle_state_table_update(void)
* intel_idle_cpuidle_driver_init()
* allocate, initialize cpuidle_states
*/
static int __init intel_idle_cpuidle_driver_init(void)
static void __init intel_idle_cpuidle_driver_init(void)
{
int cstate;
struct cpuidle_driver *drv = &intel_idle_driver;
@@ -1163,18 +1192,10 @@ static int __init intel_idle_cpuidle_driver_init(void)
drv->state_count += 1;
}
if (icpu->auto_demotion_disable_flags)
on_each_cpu(auto_demotion_disable, NULL, 1);
if (icpu->byt_auto_demotion_disable_flag) {
wrmsrl(MSR_CC6_DEMOTION_POLICY_CONFIG, 0);
wrmsrl(MSR_MC6_DEMOTION_POLICY_CONFIG, 0);
}
if (icpu->disable_promotion_to_c1e) /* each-cpu is redundant */
on_each_cpu(c1e_promotion_disable, NULL, 1);
return 0;
}
@@ -1193,7 +1214,6 @@ static int intel_idle_cpu_init(int cpu)
if (cpuidle_register_device(dev)) {
pr_debug(PREFIX "cpuidle_register_device %d failed!\n", cpu);
intel_idle_cpuidle_devices_uninit();
return -EIO;
}
@@ -1218,40 +1238,51 @@ static int __init intel_idle_init(void)
if (retval)
return retval;
intel_idle_cpuidle_devices = alloc_percpu(struct cpuidle_device);
if (intel_idle_cpuidle_devices == NULL)
return -ENOMEM;
intel_idle_cpuidle_driver_init();
retval = cpuidle_register_driver(&intel_idle_driver);
if (retval) {
struct cpuidle_driver *drv = cpuidle_get_driver();
printk(KERN_DEBUG PREFIX "intel_idle yielding to %s",
drv ? drv->name : "none");
free_percpu(intel_idle_cpuidle_devices);
return retval;
}
intel_idle_cpuidle_devices = alloc_percpu(struct cpuidle_device);
if (intel_idle_cpuidle_devices == NULL)
return -ENOMEM;
cpu_notifier_register_begin();
for_each_online_cpu(i) {
retval = intel_idle_cpu_init(i);
if (retval) {
intel_idle_cpuidle_devices_uninit();
cpu_notifier_register_done();
cpuidle_unregister_driver(&intel_idle_driver);
free_percpu(intel_idle_cpuidle_devices);
return retval;
}
}
__register_cpu_notifier(&cpu_hotplug_notifier);
if (boot_cpu_has(X86_FEATURE_ARAT)) /* Always Reliable APIC Timer */
lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE;
else
on_each_cpu(__setup_broadcast_timer, (void *)true, 1);
cpu_notifier_register_done();
pr_debug(PREFIX "lapic_timer_reliable_states 0x%x\n",
lapic_timer_reliable_states);
return 0;
}
static void __exit intel_idle_exit(void)
{
intel_idle_cpuidle_devices_uninit();
cpuidle_unregister_driver(&intel_idle_driver);
struct cpuidle_device *dev;
int i;
cpu_notifier_register_begin();
@@ -1259,9 +1290,15 @@ static void __exit intel_idle_exit(void)
on_each_cpu(__setup_broadcast_timer, (void *)false, 1);
__unregister_cpu_notifier(&cpu_hotplug_notifier);
for_each_possible_cpu(i) {
dev = per_cpu_ptr(intel_idle_cpuidle_devices, i);
cpuidle_unregister_device(dev);
}
cpu_notifier_register_done();
return;
cpuidle_unregister_driver(&intel_idle_driver);
free_percpu(intel_idle_cpuidle_devices);
}
module_init(intel_idle_init);
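
Because the interleaved diff above is hard to follow, here is a condensed reading of the init order that results from this series, with its unwind path (a sketch of the flow, not a verbatim copy; the ARAT check and timer-broadcast setup between the notifier calls are elided):

    intel_idle_cpuidle_devices = alloc_percpu(struct cpuidle_device);
    if (intel_idle_cpuidle_devices == NULL)
            return -ENOMEM;

    intel_idle_cpuidle_driver_init();
    retval = cpuidle_register_driver(&intel_idle_driver);
    if (retval) {
            free_percpu(intel_idle_cpuidle_devices);   /* undo the alloc */
            return retval;
    }

    cpu_notifier_register_begin();
    for_each_online_cpu(i) {
            retval = intel_idle_cpu_init(i);
            if (retval) {                   /* unwind in reverse order */
                    intel_idle_cpuidle_devices_uninit();
                    cpu_notifier_register_done();
                    cpuidle_unregister_driver(&intel_idle_driver);
                    free_percpu(intel_idle_cpuidle_devices);
                    return retval;
            }
    }
    __register_cpu_notifier(&cpu_hotplug_notifier);
    /* ... broadcast-timer setup ... */
    cpu_notifier_register_done();
    return 0;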

drivers/mailbox/pcc.c

@@ -361,8 +361,6 @@ static int __init acpi_pcc_probe(void)
struct acpi_generic_address *db_reg;
struct acpi_pcct_hw_reduced *pcct_ss;
pcc_mbox_channels[i].con_priv = pcct_entry;
pcct_entry = (struct acpi_subtable_header *)
((unsigned long) pcct_entry + pcct_entry->length);
/* If doorbell is in system memory cache the virt address */
pcct_ss = (struct acpi_pcct_hw_reduced *)pcct_entry;
@@ -370,6 +368,8 @@ static int __init acpi_pcc_probe(void)
if (db_reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
pcc_doorbell_vaddr[i] = acpi_os_ioremap(db_reg->address,
db_reg->bit_width/8);
pcct_entry = (struct acpi_subtable_header *)
((unsigned long) pcct_entry + pcct_entry->length);
}
pcc_mbox_ctrl.num_chans = count;
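
The out-of-bounds access is easier to see without the diff interleaving: the loop used to advance pcct_entry to the next PCCT subtable before casting it to read the current entry's doorbell register, so the final iteration dereferenced memory past the end of the table. The fixed ordering, condensed (the doorbell_register member name comes from the driver source, not from the hunk above):

    for (i = 0; i < count; i++) {
            pcc_mbox_channels[i].con_priv = pcct_entry;

            /* Read the CURRENT entry's doorbell first ... */
            pcct_ss = (struct acpi_pcct_hw_reduced *)pcct_entry;
            db_reg = &pcct_ss->doorbell_register;
            if (db_reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
                    pcc_doorbell_vaddr[i] = acpi_os_ioremap(db_reg->address,
                                                            db_reg->bit_width / 8);

            /* ... and only then step to the next subtable. */
            pcct_entry = (struct acpi_subtable_header *)
                    ((unsigned long)pcct_entry + pcct_entry->length);
    }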

drivers/powercap/intel_rapl.c

@@ -1091,6 +1091,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
RAPL_CPU(0x3f, rapl_defaults_hsw_server),/* Haswell servers */
RAPL_CPU(0x4f, rapl_defaults_hsw_server),/* Broadwell servers */
RAPL_CPU(0x45, rapl_defaults_core),/* Haswell ULT */
RAPL_CPU(0x46, rapl_defaults_core),/* Haswell */
RAPL_CPU(0x47, rapl_defaults_core),/* Broadwell-H */
RAPL_CPU(0x4E, rapl_defaults_core),/* Skylake */
RAPL_CPU(0x4C, rapl_defaults_cht),/* Braswell/Cherryview */

tools/power/x86/turbostat/turbostat.c

@@ -66,6 +66,8 @@ unsigned int do_slm_cstates;
unsigned int use_c1_residency_msr;
unsigned int has_aperf;
unsigned int has_epb;
unsigned int do_irtl_snb;
unsigned int do_irtl_hsw;
unsigned int units = 1000000; /* MHz etc */
unsigned int genuine_intel;
unsigned int has_invariant_tsc;
@@ -187,7 +189,7 @@ struct pkg_data {
unsigned long long pkg_any_core_c0;
unsigned long long pkg_any_gfxe_c0;
unsigned long long pkg_both_core_gfxe_c0;
unsigned long long gfx_rc6_ms;
long long gfx_rc6_ms;
unsigned int gfx_mhz;
unsigned int package_id;
unsigned int energy_pkg; /* MSR_PKG_ENERGY_STATUS */
@@ -621,8 +623,14 @@ int format_counters(struct thread_data *t, struct core_data *c,
outp += sprintf(outp, "%8d", p->pkg_temp_c);
/* GFXrc6 */
if (do_gfx_rc6_ms)
outp += sprintf(outp, "%8.2f", 100.0 * p->gfx_rc6_ms / 1000.0 / interval_float);
if (do_gfx_rc6_ms) {
if (p->gfx_rc6_ms == -1) { /* detect counter reset */
outp += sprintf(outp, " ***.**");
} else {
outp += sprintf(outp, "%8.2f",
p->gfx_rc6_ms / 10.0 / interval_float);
}
}
/* GFXMHz */
if (do_gfx_mhz)
@@ -766,7 +774,12 @@ delta_package(struct pkg_data *new, struct pkg_data *old)
old->pc10 = new->pc10 - old->pc10;
old->pkg_temp_c = new->pkg_temp_c;
old->gfx_rc6_ms = new->gfx_rc6_ms - old->gfx_rc6_ms;
/* flag an error when rc6 counter resets/wraps */
if (old->gfx_rc6_ms > new->gfx_rc6_ms)
old->gfx_rc6_ms = -1;
else
old->gfx_rc6_ms = new->gfx_rc6_ms - old->gfx_rc6_ms;
old->gfx_mhz = new->gfx_mhz;
DELTA_WRAP32(new->energy_pkg, old->energy_pkg);
@@ -1296,6 +1309,7 @@ int hsw_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL__3, PCL__6, PCL__7, PCL_7S,
int slv_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
int amt_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
int bxt_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
static void
@@ -1579,6 +1593,47 @@ dump_config_tdp(void)
fprintf(outf, " lock=%d", (unsigned int)(msr >> 31) & 1);
fprintf(outf, ")\n");
}
unsigned int irtl_time_units[] = {1, 32, 1024, 32768, 1048576, 33554432, 0, 0 };
void print_irtl(void)
{
unsigned long long msr;
get_msr(base_cpu, MSR_PKGC3_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC3_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
get_msr(base_cpu, MSR_PKGC6_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC6_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
get_msr(base_cpu, MSR_PKGC7_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC7_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
if (!do_irtl_hsw)
return;
get_msr(base_cpu, MSR_PKGC8_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC8_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
get_msr(base_cpu, MSR_PKGC9_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC9_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
get_msr(base_cpu, MSR_PKGC10_IRTL, &msr);
fprintf(outf, "cpu%d: MSR_PKGC10_IRTL: 0x%08llx (", base_cpu, msr);
fprintf(outf, "%svalid, %lld ns)\n", msr & (1 << 15) ? "" : "NOT",
(msr & 0x3FF) * irtl_time_units[(msr >> 10) & 0x3]);
}
void free_fd_percpu(void)
{
int i;
@@ -2144,6 +2199,9 @@ int probe_nhm_msrs(unsigned int family, unsigned int model)
case 0x56: /* BDX-DE */
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
case 0x55: /* SKX */
pkg_cstate_limits = hsw_pkg_cstate_limits;
break;
case 0x37: /* BYT */
@@ -2156,6 +2214,9 @@ int probe_nhm_msrs(unsigned int family, unsigned int model)
case 0x57: /* PHI */
pkg_cstate_limits = phi_pkg_cstate_limits;
break;
case 0x5C: /* BXT */
pkg_cstate_limits = bxt_pkg_cstate_limits;
break;
default:
return 0;
}
@@ -2248,6 +2309,9 @@ int has_config_tdp(unsigned int family, unsigned int model)
case 0x56: /* BDX-DE */
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
case 0x55: /* SKX */
case 0x57: /* Knights Landing */
return 1;
@@ -2585,13 +2649,19 @@ void rapl_probe(unsigned int family, unsigned int model)
case 0x47: /* BDW */
do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_GFX | RAPL_PKG_POWER_INFO;
break;
case 0x5C: /* BXT */
do_rapl = RAPL_PKG | RAPL_PKG_POWER_INFO;
break;
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
break;
case 0x3F: /* HSX */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
case 0x55: /* SKX */
case 0x57: /* KNL */
do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_POWER_INFO | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
break;
@@ -2871,6 +2941,10 @@ int has_snb_msrs(unsigned int family, unsigned int model)
case 0x56: /* BDX-DE */
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
case 0x55: /* SKX */
case 0x5C: /* BXT */
return 1;
}
return 0;
@@ -2879,9 +2953,14 @@ int has_snb_msrs(unsigned int family, unsigned int model)
/*
* HSW adds support for additional MSRs:
*
* MSR_PKG_C8_RESIDENCY 0x00000630
* MSR_PKG_C9_RESIDENCY 0x00000631
* MSR_PKG_C10_RESIDENCY 0x00000632
* MSR_PKG_C8_RESIDENCY 0x00000630
* MSR_PKG_C9_RESIDENCY 0x00000631
* MSR_PKG_C10_RESIDENCY 0x00000632
*
* MSR_PKGC8_IRTL 0x00000633
* MSR_PKGC9_IRTL 0x00000634
* MSR_PKGC10_IRTL 0x00000635
*
*/
int has_hsw_msrs(unsigned int family, unsigned int model)
{
@@ -2893,6 +2972,9 @@ int has_hsw_msrs(unsigned int family, unsigned int model)
case 0x3D: /* BDW */
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
case 0x5C: /* BXT */
return 1;
}
return 0;
@@ -2914,6 +2996,8 @@ int has_skl_msrs(unsigned int family, unsigned int model)
switch (model) {
case 0x4E: /* SKL */
case 0x5E: /* SKL */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
return 1;
}
return 0;
@@ -3187,7 +3271,7 @@ void process_cpuid()
if (debug)
decode_misc_enable_msr();
if (max_level >= 0x7) {
if (max_level >= 0x7 && debug) {
int has_sgx;
ecx = 0;
@@ -3221,7 +3305,15 @@
switch(model) {
case 0x4E: /* SKL */
case 0x5E: /* SKL */
crystal_hz = 24000000; /* 24 MHz */
case 0x8E: /* KBL */
case 0x9E: /* KBL */
crystal_hz = 24000000; /* 24.0 MHz */
break;
case 0x55: /* SKX */
crystal_hz = 25000000; /* 25.0 MHz */
break;
case 0x5C: /* BXT */
crystal_hz = 19200000; /* 19.2 MHz */
break;
default:
crystal_hz = 0;
@@ -3254,11 +3346,13 @@ void process_cpuid()
do_nhm_platform_info = do_nhm_cstates = do_smi = probe_nhm_msrs(family, model);
do_snb_cstates = has_snb_msrs(family, model);
do_irtl_snb = has_snb_msrs(family, model);
do_pc2 = do_snb_cstates && (pkg_cstate_limit >= PCL__2);
do_pc3 = (pkg_cstate_limit >= PCL__3);
do_pc6 = (pkg_cstate_limit >= PCL__6);
do_pc7 = do_snb_cstates && (pkg_cstate_limit >= PCL__7);
do_c8_c9_c10 = has_hsw_msrs(family, model);
do_irtl_hsw = has_hsw_msrs(family, model);
do_skl_residency = has_skl_msrs(family, model);
do_slm_cstates = is_slm(family, model);
do_knl_cstates = is_knl(family, model);
@@ -3564,6 +3658,9 @@ void turbostat_init()
if (debug)
for_all_cpus(print_thermal, ODD_COUNTERS);
if (debug && do_irtl_snb)
print_irtl();
}
int fork_it(char **argv)
@@ -3629,7 +3726,7 @@ int get_and_dump_counters(void)
}
void print_version() {
fprintf(outf, "turbostat version 4.11 27 Feb 2016"
fprintf(outf, "turbostat version 4.12 5 Apr 2016"
" - Len Brown <lenb@kernel.org>\n");
}
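
On the models given crystal_hz values above, turbostat derives the TSC rate from CPUID leaf 0x15, which reports the TSC/crystal-clock ratio as EBX/EAX (and, on some parts, the crystal frequency itself in ECX). A standalone sketch of the calculation as process_cpuid() uses it (the 24 MHz default here is just the SKL/KBL table value from the diff):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax_crystal = 0, ebx_tsc = 0, ecx_hz = 0, edx = 0;
            unsigned long long crystal_hz = 24000000, tsc_hz = 0;

            __get_cpuid(0x15, &eax_crystal, &ebx_tsc, &ecx_hz, &edx);
            if (ecx_hz)                     /* crystal enumerated directly */
                    crystal_hz = ecx_hz;
            if (crystal_hz && ebx_tsc && eax_crystal)
                    tsc_hz = crystal_hz * ebx_tsc / eax_crystal;

            printf("TSC: %llu Hz\n", tsc_hz);
            return 0;
    }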