#include <linux/sched.h>
#include <linux/sched/sysctl.h>
#include <linux/sched/rt.h>
#include <linux/u64_stats_sync.h>
sched/deadline: Add SCHED_DEADLINE structures & implementation

Introduces the data structures, constants and symbols needed for the
SCHED_DEADLINE implementation.

Core data structures of SCHED_DEADLINE are defined, along with their
initializers. Hooks for checking whether a task belongs to the new policy
are also added where they are needed.

Adds a scheduling class, in sched/dl.c, and a new policy called
SCHED_DEADLINE. It is an implementation of the Earliest Deadline
First (EDF) scheduling algorithm, augmented with a mechanism (called
Constant Bandwidth Server, CBS) that makes it possible to isolate
the behaviour of tasks from each other.

The typical -deadline task is made up of a computation phase
(instance) which is activated in a periodic or sporadic fashion. The
expected (maximum) duration of such a computation is called the task's
runtime; the time interval by which each instance needs to be completed
is called the task's relative deadline. The task's absolute deadline
is dynamically calculated as the time instant a task (better, an
instance) activates plus the relative deadline.

The EDF algorithm selects the task with the smallest absolute
deadline as the one to be executed first, while the CBS ensures that
each task runs for at most its runtime in every (relative) deadline-length
time interval, avoiding any interference between different tasks
(bandwidth isolation).

Thanks to this feature, tasks that do not strictly comply with
the computational model sketched above can also effectively use the new
policy.

To summarize, this patch:
 - introduces the data structures, constants and symbols needed;
 - implements the core logic of the scheduling algorithm in the new
   scheduling class file;
 - provides all the glue code between the new scheduling class and
   the core scheduler and refines the interactions between sched/dl
   and the other existing scheduling classes.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Michael Trimarchi <michael@amarulasolutions.com>
Signed-off-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-4-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
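To make the model above concrete, here is a minimal, self-contained sketch (illustrative names only, none of these exist in sched/dl.c) of how an absolute deadline is derived at activation time and how the EDF rule picks the next task:

/* Illustrative sketch only; these names do not exist in the kernel. */
struct demo_dl_task {
	unsigned long long runtime;	 /* worst-case execution time per instance */
	unsigned long long rel_deadline; /* relative deadline of each instance */
	unsigned long long abs_deadline; /* activation time + rel_deadline */
};

/* On activation of an instance: absolute deadline = now + relative deadline. */
static void demo_dl_activate(struct demo_dl_task *t, unsigned long long now)
{
	t->abs_deadline = now + t->rel_deadline;
}

/* EDF: among the runnable tasks, run the one with the earliest absolute deadline. */
static struct demo_dl_task *demo_edf_pick(struct demo_dl_task **tasks, int n)
{
	struct demo_dl_task *best = NULL;
	int i;

	for (i = 0; i < n; i++)
		if (!best || tasks[i]->abs_deadline < best->abs_deadline)
			best = tasks[i];
	return best;
}

The CBS part then clamps each task to at most its runtime of execution per deadline-length interval, which is what provides the bandwidth isolation mentioned above.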
#include <linux/sched/deadline.h>
#include <linux/binfmts.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/stop_machine.h>
sched/rt: Use IPI to trigger RT task push migration instead of pulling

When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.

Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.

The test that was run was the following:

 cyclictest --numa -p95 -m -d0 -i100

This created a thread on each CPU that set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us, wakes up and measures
its latency.

cyclictest is maintained at:
 git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git

What happened was that another RT task would be scheduled on one of the
CPUs that was running our test, when the other CPU tests went to sleep and
scheduled idle. This caused the "pull" operation to execute on all
these CPUs. Each one of these saw the RT task that was overloaded on
the CPU of the test that was still running, and each one tried
to grab that task in a thundering-herd way.

To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq of the overloaded CPU. As the sched
domains on this box were rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple effect with the
rq locks, especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balancing on these CPUs would also block on these locks, and the wait
time escalated.

I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, only to find out that the
CPU we picked isn't in the task's affinity.

Instead of doing the PULL, I now have the CPUs that want the pull
send an IPI to the overloaded CPU, and let that CPU pick which
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.

With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger within seconds.

I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.

When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
is enabled, the IPI is sent to the overloaded CPU to do a push.

To enable or disable this at run time:

 # mount -t debugfs nodev /sys/kernel/debug
 # echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
 # echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features

Update: The original patch would send an IPI to all CPUs in the RT overload
list. But that could theoretically cause the reverse issue. That is, there
could be lots of overloaded RT queues and one CPU lowers its priority. It would
then send an IPI to all the overloaded RT queues and they could then all try
to grab the rq lock of the CPU lowering its priority, and then we have the
same problem.

The latest design sends out only one IPI to the first overloaded CPU. It tries to
push any tasks that it can, and then looks for the next overloaded CPU that can
push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
tasks with priorities greater than the source CPU are covered. In case the
source CPU lowers its priority again, a flag is set to tell the IPI traversal to
restart with the first RT overloaded CPU after the source CPU.

Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
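As a rough illustration of the chained-IPI design described in the update above, here is a small user-space simulation (hypothetical and self-contained, not kernel code): a single IPI travels from one overloaded CPU to the next until every overloaded CPU that can push toward the source has been covered.

#include <stdio.h>

#define NR_CPUS 4

/* Example RT-overload state: CPUs 1 and 3 have pushable RT tasks. */
static const int overloaded[NR_CPUS] = { 0, 1, 0, 1 };

int main(void)
{
	int src = 0;	/* the CPU that just lowered its priority */

	/*
	 * Only one IPI is in flight at a time: conceptually, each overloaded
	 * CPU pushes what it can toward 'src' and then forwards the IPI to
	 * the next overloaded CPU, stopping once the walk has gone all the
	 * way around back to the source.
	 */
	for (int step = 1; step < NR_CPUS; step++) {
		int cpu = (src + step) % NR_CPUS;

		if (overloaded[cpu])
			printf("IPI -> CPU%d: push eligible RT tasks toward CPU%d\n",
			       cpu, src);
	}
	return 0;
}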
#include <linux/irq_work.h>
#include <linux/tick.h>
#include <linux/slab.h>

#include "cpupri.h"
#include "cpudeadline.h"
#include "cpuacct.h"

#ifdef CONFIG_SCHED_DEBUG
#define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
#else
#define SCHED_WARN_ON(x) ((void)(x))
#endif

struct rq;
struct cpuidle_state;

/* task_struct::on_rq states: */
#define TASK_ON_RQ_QUEUED 1
sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state

This is a new p->on_rq state which will be used to indicate that a task
is in the process of migrating between two RQs. It allows us to get
rid of double_rq_lock(), which we previously used to change the rq of
a queued task.

Let's consider an example. To move a task between src_rq and
dst_rq we will do the following:

	raw_spin_lock(&src_rq->lock);
	/* p is a task which is queued on src_rq */
	p = ...;
	dequeue_task(src_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, dst_cpu);
	raw_spin_unlock(&src_rq->lock);

	/*
	 * Both RQs are unlocked here.
	 * Task p is dequeued from src_rq
	 * but its on_rq value is not zero.
	 */

	raw_spin_lock(&dst_rq->lock);
	p->on_rq = TASK_ON_RQ_QUEUED;
	enqueue_task(dst_rq, p, 0);
	raw_spin_unlock(&dst_rq->lock);

While p->on_rq is TASK_ON_RQ_MIGRATING, the task is considered to be
"migrating", and other scheduler actions on it are not available to
parallel callers. A parallel caller spins until the migration is
completed.

The unavailable actions are changing the CPU affinity, changing the
priority, etc.; in other words, all of the functionality related to the
task which used to require task_rq(p)->lock.

To implement TASK_ON_RQ_MIGRATING support we primarily rely on
the following fact. Most scheduler users (from which we are
protecting a migrating task) use task_rq_lock() and
__task_rq_lock() to get the lock of task_rq(p). These primitives
know that the task's CPU may change, and they spin while the
lock of the right RQ is not held. We add one more condition to
them, so they will also spin until the migration is finished.

Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1408528062.23412.88.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
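A rough, paraphrased sketch of the extra condition this adds to the locking primitives (not a verbatim copy of kernel/sched/core.c): besides re-checking that the rq we locked is still task_rq(p), callers now also wait for any in-flight migration to finish before returning.

/* Paraphrased sketch; the real logic lives in __task_rq_lock()/task_rq_lock(). */
static inline struct rq *demo_task_rq_lock(struct task_struct *p)
{
	struct rq *rq;

	for (;;) {
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		if (rq == task_rq(p) && !task_on_rq_migrating(p))
			return rq;	/* right rq, and no migration in flight */
		raw_spin_unlock(&rq->lock);

		while (task_on_rq_migrating(p))
			cpu_relax();	/* spin until the migration completes */
	}
}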
#define TASK_ON_RQ_MIGRATING 2

extern __read_mostly int scheduler_running;

extern unsigned long calc_load_update;
extern atomic_long_t calc_load_tasks;

extern void calc_global_load_tick(struct rq *this_rq);
extern long calc_load_fold_active(struct rq *this_rq, long adjust);

#ifdef CONFIG_SMP
extern void cpu_load_update_active(struct rq *this_rq);
#else
static inline void cpu_load_update_active(struct rq *this_rq) { }
#endif

/*
 * Helpers for converting nanosecond timing to jiffy resolution
 */
#define NS_TO_JIFFIES(TIME) ((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))

sched/fair: Generalize the load/util averages resolution definition

Integer metrics need fixed point arithmetic. In sched/fair, a few
metrics, e.g., weight, load, load_avg, util_avg, freq, and capacity,
may have different fixed point ranges, which makes their update and
usage error-prone.

In order to avoid errors relating to the fixed point range, we
define a basic fixed point range, and then formalize all metrics to
be based on that basic range.

The basic range is 1024 or (1 << 10). Further, one can recursively
apply the basic range to obtain a larger range.

Pointed out by Ben Segall, weight (visible to user, e.g., NICE-0 has
1024) and load (e.g., NICE_0_LOAD) have independent ranges, but they
must be well calibrated.

Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: lizefan@huawei.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1459829551-21625-2-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

/*
 * Increase resolution of nice-level calculations for 64-bit architectures.
 * The extra resolution improves shares distribution and load balancing of
 * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
 * hierarchies, especially on larger systems. This is not a user-visible change
 * and does not change the user-interface for setting shares/weights.
 *
 * We increase resolution only if we have enough bits to allow this increased
 * resolution (i.e. 64bit). The costs for increasing resolution when 32bit are
 * pretty high and the returns do not justify the increased costs.
 *
 * Really only required when CONFIG_FAIR_GROUP_SCHED is also set, but to
 * increase coverage and consistency always enable it on 64bit platforms.
 */
#ifdef CONFIG_64BIT
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT)
#else
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) (w)
# define scale_load_down(w) (w)
#endif

/*
 * Task weight (visible to users) and its load (invisible to users) have
 * independent resolution, but they should be well calibrated. We use
 * scale_load() and scale_load_down(w) to convert between them. The
 * following must be true:
 *
 *  scale_load(sched_prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))]) == NICE_0_LOAD
 *
 */
#define NICE_0_LOAD (1L << NICE_0_LOAD_SHIFT)
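A worked example of the calibration requirement above, assuming SCHED_FIXEDPOINT_SHIFT is 10 (its usual value) and a nice-0 user-visible weight of 1024:

/*
 * Worked example (assuming SCHED_FIXEDPOINT_SHIFT == 10):
 *
 * 64-bit: NICE_0_LOAD_SHIFT = 10 + 10 = 20, so NICE_0_LOAD = 1 << 20 = 1048576,
 *         and scale_load(1024) = 1024 << 10 = 1048576 == NICE_0_LOAD.
 *
 * 32-bit: NICE_0_LOAD_SHIFT = 10, so NICE_0_LOAD = 1024,
 *         and scale_load(1024) = 1024 == NICE_0_LOAD.
 */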
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks

In order for deadline scheduling to be effective and useful, it is
important to have some method of keeping the allocation of the available
CPU bandwidth to tasks and task groups under control.
This is usually called "admission control" and if it is not performed
at all, no guarantee can be given on the actual scheduling of the
-deadline tasks.

Since RT-throttling was introduced, each task group has had a
bandwidth associated with it, calculated as a certain amount of
runtime over a period. Moreover, to make it possible to manipulate
such bandwidth, readable/writable controls have been added to both
procfs (for system wide settings) and cgroupfs (for per-group
settings).

Therefore, the same interface is being used for controlling the
bandwidth distribution to -deadline tasks and task groups, i.e.,
new controls with similar names, equivalent meaning and the same
usage paradigm are added.

However, more discussion is needed in order to figure out how
we want to manage SCHED_DEADLINE bandwidth at the task group level.
Therefore, this patch adds a less sophisticated, but actually
very sensible, mechanism to ensure that a certain utilization
cap is not exceeded in each root_domain (the single rq for !SMP
configurations).

Another main difference between deadline bandwidth management and
RT-throttling is that -deadline tasks have bandwidth on their own
(while -rt tasks don't!), and thus we don't need a higher level
throttling mechanism to enforce the desired bandwidth.

This patch, therefore:

 - adds system wide deadline bandwidth management by means of:
    * /proc/sys/kernel/sched_dl_runtime_us,
    * /proc/sys/kernel/sched_dl_period_us,
   that determine (i.e., runtime / period) the total bandwidth
   available on each CPU of each root_domain for -deadline tasks;

 - couples the RT and deadline bandwidth management, i.e., enforces
   that the sum of the bandwidth devoted to -rt and -deadline tasks
   stays below 100%.

This means that, for a root_domain comprising M CPUs, -deadline tasks
can be created while the sum of their bandwidths stays below:

   M * (sched_dl_runtime_us / sched_dl_period_us)

It is also possible to disable this bandwidth management logic, and
thus be free to oversubscribe the system up to any arbitrary level.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-12-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
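For example (illustrative numbers only, not defaults taken from this file): with sched_dl_runtime_us = 950000 and sched_dl_period_us = 1000000 on a root_domain spanning M = 4 CPUs, -deadline tasks are admitted as long as the sum of their individual bandwidths (runtime_i / period_i) stays below 4 * 950000 / 1000000 = 3.8.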
/*
 * Single value that decides SCHED_DEADLINE internal math precision.
 * 10 -> just above 1us
 * 9  -> just above 0.5us
 */
#define DL_SCALE (10)

/*
 * These are the 'tuning knobs' of the scheduler:
 */

/*
 * single value that denotes runtime == period, ie unlimited time.
 */
#define RUNTIME_INF ((u64)~0ULL)

static inline int idle_policy(int policy)
{
	return policy == SCHED_IDLE;
}
sched: Add new scheduler syscalls to support an extended scheduling parameters ABI

Add the syscalls needed for supporting scheduling algorithms
with extended scheduling parameters (e.g., SCHED_DEADLINE).

In general, this makes it possible to specify a periodic/sporadic task
that executes for a given amount of runtime at each instance and is
scheduled according to the urgency of its own timing constraints,
i.e.:

 - a (maximum/typical) instance execution time,
 - a minimum interval between consecutive instances,
 - a time constraint by which each instance must be completed.

Thus, both the data structure that holds the scheduling parameters of
the tasks and the system calls dealing with it must be extended.
Unfortunately, modifying the existing struct sched_param would break
the ABI and result in potentially serious compatibility issues with
legacy binaries.

For these reasons, this patch:

 - defines the new struct sched_attr, containing all the fields
   that are necessary for specifying a task in the computational
   model described above;

 - defines and implements the new scheduling related syscalls that
   manipulate it, i.e., sched_setattr() and sched_getattr().

Syscalls are introduced for x86 (32 and 64 bits) and ARM only, as a
proof of concept and for developing and testing purposes. Making them
available on other architectures is straightforward.

Since no "user" for these new parameters is introduced in this patch,
the implementation of the new system calls is just identical to their
already existing counterparts. Future patches that implement scheduling
policies able to exploit the new data structure must also take care of
modifying the sched_*attr() calls in accordance with their own purposes.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
[ Rewrote to use sched_attr. ]
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Removed sched_setscheduler2() for now. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-3-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
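A minimal user-space sketch of the resulting interface (based on the usual sched_setattr(2) usage; it assumes a libc that exposes SYS_sched_setattr, and defines its own struct sched_attr because glibc provides neither a wrapper nor the structure):

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6	/* value from the UAPI headers */
#endif

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	/* SCHED_DEADLINE parameters, in nanoseconds: */
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	= 10 * 1000 * 1000,	/* 10ms of budget ...  */
		.sched_deadline	= 30 * 1000 * 1000,	/* ... within 30ms ... */
		.sched_period	= 100 * 1000 * 1000,	/* ... every 100ms     */
	};

	/* pid 0: the calling thread; the flags argument must currently be 0. */
	if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
		perror("sched_setattr");
		return 1;
	}
	return 0;
}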
static inline int fair_policy(int policy)
{
	return policy == SCHED_NORMAL || policy == SCHED_BATCH;
}

static inline int rt_policy(int policy)
{
	return policy == SCHED_FIFO || policy == SCHED_RR;
}

static inline int dl_policy(int policy)
{
	return policy == SCHED_DEADLINE;
}

static inline bool valid_policy(int policy)
{
	return idle_policy(policy) || fair_policy(policy) ||
		rt_policy(policy) || dl_policy(policy);
}

static inline int task_has_rt_policy(struct task_struct *p)
{
	return rt_policy(p->policy);
}

static inline int task_has_dl_policy(struct task_struct *p)
{
	return dl_policy(p->policy);
}
sched/deadline: Add SCHED_DEADLINE inheritance logic

Some method to deal with rt-mutexes and make sched_dl interact with
the current PI code is needed, raising non-trivial issues that need
(according to us) to be solved with some restructuring of the PI code
(i.e., going toward a proxy-execution-ish implementation).

This is under development; in the meanwhile, as a temporary solution,
what this commit does is:

 - ensure a pi-lock owner with waiters is never throttled down. Instead,
   when it runs out of runtime, it immediately gets replenished and its
   deadline is postponed;

 - the scheduling parameters (relative deadline and default runtime)
   used for those replenishments --during the whole period it holds the
   pi-lock-- are the ones of the waiting task with the earliest deadline.

Acting this way, we provide some kind of boosting to the lock-owner,
still by using the existing (actually, slightly modified by the previous
commit) pi-architecture.

We would stress the fact that this is only a surely needed, but far from
clean, solution to the problem. In the end it's only a way to re-start
discussion within the community. So, as always, comments, ideas, rants,
etc. are welcome! :-)

Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Added !RT_MUTEXES build fix. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-11-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
/*
 * Tells if entity @a should preempt entity @b.
 */
static inline bool
dl_entity_preempt(struct sched_dl_entity *a, struct sched_dl_entity *b)
{
	return dl_time_before(a->deadline, b->deadline);
}

/*
 * This is the priority-queue data structure of the RT scheduling class:
 */
struct rt_prio_array {
	DECLARE_BITMAP(bitmap, MAX_RT_PRIO+1); /* include 1 bit for delimiter */
	struct list_head queue[MAX_RT_PRIO];
};
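A hedged sketch (paraphrasing what sched/rt.c does, not a copy of it) of how this structure is consumed: the first set bit gives the highest-priority non-empty queue, and the entity at the head of that list runs next. The delimiter bit at MAX_RT_PRIO is kept set so the bit search always terminates.

/* Illustrative helper; the real code lives in pick_next_rt_entity(). */
static inline struct sched_rt_entity *
demo_pick_next_rt_entity(struct rt_prio_array *array)
{
	int idx = sched_find_first_bit(array->bitmap);

	if (idx >= MAX_RT_PRIO)
		return NULL;	/* only the delimiter bit is set: nothing queued */

	return list_first_entry(&array->queue[idx],
				struct sched_rt_entity, run_list);
}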
sched,perf: Fix periodic timers

In the below two commits (see Fixes) we have periodic timers that can
stop themselves when they're no longer required, but need to be
(re)-started when their idle condition changes.

A further complication is that we want the timer handler to always do
the forward so that it will always correctly deal with the overruns,
and we do not want to race such that the handler has already decided
to stop, but the (external) restart sees the timer still active and we
end up with a 'lost' timer.

The problem with the current code is that the re-start can come before
the callback does the forward, at which point the forward from the
callback will WARN about forwarding an enqueued timer.

Now, conceptually it's easy to detect if you're before or after the fwd
by comparing the expiration time against the current time. Of course,
that's expensive (and racy) because we don't have the current time.

Alternatively one could cache this state inside the timer, but then
everybody pays the overhead of maintaining this extra state, and that
is undesired.

The only other option that I could see is the external timer_active
variable, which I tried to kill before. I would love a nicer interface
for this seemingly simple 'problem' but alas.

Fixes: 272325c4821f ("perf: Fix mux_interval hrtimer wreckage")
Fixes: 77a4d1a1b9a1 ("sched: Cleanup bandwidth timers")
Cc: pjt@google.com
Cc: tglx@linutronix.de
Cc: klamm@yandex-team.ru
Cc: mingo@kernel.org
Cc: bsegall@google.com
Cc: hpa@zytor.com
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150514102311.GX21418@twins.programming.kicks-ass.net

struct rt_bandwidth {
	/* nests inside the rq lock: */
	raw_spinlock_t rt_runtime_lock;
	ktime_t rt_period;
	u64 rt_runtime;
	struct hrtimer rt_period_timer;
	unsigned int rt_period_active;
};
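A hedged sketch of the (re)start pattern that commit describes, as it applies to rt_period_timer (paraphrased, not the actual kernel/sched/rt.c code): the restart path arms the timer only when the handler is not already keeping it active, with rt_period_active serialized under rt_runtime_lock.

/* Illustrative only; names other than the struct fields are hypothetical. */
static inline void demo_start_rt_period_timer(struct rt_bandwidth *rt_b)
{
	raw_spin_lock(&rt_b->rt_runtime_lock);
	if (!rt_b->rt_period_active) {
		rt_b->rt_period_active = 1;
		hrtimer_forward_now(&rt_b->rt_period_timer, rt_b->rt_period);
		hrtimer_start_expires(&rt_b->rt_period_timer,
				      HRTIMER_MODE_ABS_PINNED);
	}
	raw_spin_unlock(&rt_b->rt_runtime_lock);
}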
void __dl_clear_params(struct task_struct *p);
/*
 * To keep the bandwidth of -deadline tasks and groups under control
 * we need some place where:
 *  - store the maximum -deadline bandwidth of the system (the group);
 *  - cache the fraction of that bandwidth that is currently allocated.
 *
 * This is all done in the data structure below. It is similar to the
 * one used for RT-throttling (rt_bandwidth), with the main difference
 * that, since here we are only interested in admission control, we
 * do not decrease any runtime while the group "executes", nor do we
 * need a timer to replenish it.
 *
 * With respect to SMP, the bandwidth is given on a per-CPU basis,
 * meaning that:
 *  - dl_bw (< 100%) is the bandwidth of the system (group) on each CPU;
 *  - dl_total_bw array contains, in the i-th element, the currently
 *    allocated bandwidth on the i-th CPU.
 * Moreover, groups consume bandwidth on each CPU, while tasks only
 * consume bandwidth on the CPU they're running on.
 * Finally, dl_total_bw_cpu is used to cache the index of dl_total_bw
 * that will be shown the next time the proc or cgroup controls are
 * read. It in turn can be changed by writing to its own control.
 */
struct dl_bandwidth {
	raw_spinlock_t dl_runtime_lock;
	u64 dl_runtime;
	u64 dl_period;
};

static inline int dl_bandwidth_enabled(void)
{
	return sysctl_sched_rt_runtime >= 0;
}
extern struct dl_bw *dl_bw_of(int i);

struct dl_bw {
	raw_spinlock_t lock;
	u64 bw, total_bw;
};

static inline
void __dl_clear(struct dl_bw *dl_b, u64 tsk_bw)
{
	dl_b->total_bw -= tsk_bw;
}

static inline
void __dl_add(struct dl_bw *dl_b, u64 tsk_bw)
{
	dl_b->total_bw += tsk_bw;
}

static inline
bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
{
	return dl_b->bw != -1 &&
	       dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
}
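Tying this back to the admission-control changelog earlier in this file, a hedged usage sketch (hypothetical wrapper; bandwidths are runtime/period ratios in the same fixed-point scale as dl_bw::bw, e.g. as produced by to_ratio() in core.c):

static inline bool demo_dl_task_fits(struct dl_bw *dl_b, int cpus, u64 new_bw)
{
	bool fits;

	raw_spin_lock(&dl_b->lock);
	/* old_bw == 0: the task carried no -deadline bandwidth before. */
	fits = !__dl_overflow(dl_b, cpus, 0, new_bw);
	if (fits)
		__dl_add(dl_b, new_bw);		/* reserve the new bandwidth */
	raw_spin_unlock(&dl_b->lock);

	return fits;
}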
extern struct mutex sched_domains_mutex;

#ifdef CONFIG_CGROUP_SCHED

#include <linux/cgroup.h>

struct cfs_rq;
struct rt_rq;

extern struct list_head task_groups;

struct cfs_bandwidth {
#ifdef CONFIG_CFS_BANDWIDTH
	raw_spinlock_t lock;
	ktime_t period;
	u64 quota, runtime;
	s64 hierarchical_quota;
	u64 runtime_expires;

	int idle, period_active;
	struct hrtimer period_timer, slack_timer;
	struct list_head throttled_cfs_rq;

	/* statistics */
	int nr_periods, nr_throttled;
	u64 throttled_time;
#endif
};
/* task group related information */
struct task_group {
	struct cgroup_subsys_state css;

#ifdef CONFIG_FAIR_GROUP_SCHED
	/* schedulable entities of this group on each cpu */
	struct sched_entity **se;
	/* runqueue "owned" by this group on each cpu */
	struct cfs_rq **cfs_rq;
	unsigned long shares;

#ifdef CONFIG_SMP
	/*
	 * load_avg can be heavily contended at clock tick time, so put
	 * it in its own cacheline separated from the fields above which
	 * will also be accessed at each tick.
	 */
	atomic_long_t load_avg ____cacheline_aligned;
#endif
#endif

#ifdef CONFIG_RT_GROUP_SCHED
	struct sched_rt_entity **rt_se;
	struct rt_rq **rt_rq;

	struct rt_bandwidth rt_bandwidth;
#endif

	struct rcu_head rcu;
	struct list_head list;

	struct task_group *parent;
	struct list_head siblings;
	struct list_head children;

#ifdef CONFIG_SCHED_AUTOGROUP
	struct autogroup *autogroup;
#endif

	struct cfs_bandwidth cfs_bandwidth;
};

#ifdef CONFIG_FAIR_GROUP_SCHED
#define ROOT_TASK_GROUP_LOAD NICE_0_LOAD

/*
 * A weight of 0 or 1 can cause arithmetic problems.
 * The weight of a cfs_rq is the sum of the weights of the entities
 * queued on that cfs_rq, so the weight of an entity should not be
 * too large, nor should the shares value of a task group.
 * (The default weight is 1024 - so there's no practical
 * limitation from this.)
 */
#define MIN_SHARES (1UL << 1)
#define MAX_SHARES (1UL << 18)
#endif

typedef int (*tg_visitor)(struct task_group *, void *);

extern int walk_tg_tree_from(struct task_group *from,
			     tg_visitor down, tg_visitor up, void *data);

/*
 * Iterate the full tree, calling @down when first entering a node and @up when
 * leaving it for the final time.
 *
 * Caller must hold rcu_lock or sufficient equivalent.
 */
static inline int walk_tg_tree(tg_visitor down, tg_visitor up, void *data)
{
	return walk_tg_tree_from(&root_task_group, down, up, data);
}
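A hedged usage sketch (hypothetical visitor, not from this file): counting the task groups in the hierarchy with a trivial callback. Returning non-zero from a visitor aborts the walk.

static int demo_count_tg(struct task_group *tg, void *data)
{
	(*(int *)data)++;
	return 0;		/* keep walking */
}

static inline int demo_nr_task_groups(void)
{
	int count = 0;

	rcu_read_lock();	/* the walk requires RCU (or equivalent) */
	walk_tg_tree(demo_count_tg, tg_nop, &count);
	rcu_read_unlock();

	return count;
}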
|
|
|
|
|
|
|
|
extern int tg_nop(struct task_group *tg, void *data);
|
|
|
|
|
|
|
|
extern void free_fair_sched_group(struct task_group *tg);
|
|
|
|
extern int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent);
|
2016-06-22 20:58:02 +08:00
|
|
|
extern void online_fair_sched_group(struct task_group *tg);
|
2016-01-22 05:24:16 +08:00
|
|
|
extern void unregister_fair_sched_group(struct task_group *tg);
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
|
|
|
|
struct sched_entity *se, int cpu,
|
|
|
|
struct sched_entity *parent);
|
|
|
|
extern void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
|
|
|
|
|
|
|
|
extern void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b);
|
sched: Cleanup bandwidth timers
Roman reported a 3 cpu lockup scenario involving __start_cfs_bandwidth().
The more I look at that code the more I'm convinced its crack, that
entire __start_cfs_bandwidth() thing is brain melting, we don't need to
cancel a timer before starting it, *hrtimer_start*() will happily remove
the timer for you if its still enqueued.
Removing that, removes a big part of the problem, no more ugly cancel
loop to get stuck in.
So now, if I understand things right, the entire reason you have this
cfs_b->lock guarded ->timer_active nonsense is to make sure we don't
accidentally lose the timer.
It appears to me that it should be possible to guarantee that same by
unconditionally (re)starting the timer when !queued. Because regardless
what hrtimer::function will return, if we beat it to (re)enqueue the
timer, it doesn't matter.
Now, because hrtimers don't come with any serialization guarantees we
must ensure both handler and (re)start loop serialize their access to
the hrtimer to avoid both trying to forward the timer at the same
time.
Update the rt bandwidth timer to match.
This effectively reverts: 09dc4ab03936 ("sched/fair: Fix
tg_set_cfs_bandwidth() deadlock on rq->lock").
Reported-by: Roman Gushchin <klamm@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20150415095011.804589208@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-04-15 17:41:57 +08:00
extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);

extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq);

extern void free_rt_sched_group(struct task_group *tg);
extern int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent);
extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
		struct sched_rt_entity *rt_se, int cpu,
		struct sched_rt_entity *parent);

extern struct task_group *sched_create_group(struct task_group *parent);
extern void sched_online_group(struct task_group *tg,
			       struct task_group *parent);
extern void sched_destroy_group(struct task_group *tg);
extern void sched_offline_group(struct task_group *tg);

extern void sched_move_task(struct task_struct *tsk);

#ifdef CONFIG_FAIR_GROUP_SCHED
extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

#ifdef CONFIG_SMP
extern void set_task_rq_fair(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next);
#else /* !CONFIG_SMP */
static inline void set_task_rq_fair(struct sched_entity *se,
				    struct cfs_rq *prev, struct cfs_rq *next) { }
#endif /* CONFIG_SMP */
#endif /* CONFIG_FAIR_GROUP_SCHED */

#else /* CONFIG_CGROUP_SCHED */

struct cfs_bandwidth { };

#endif /* CONFIG_CGROUP_SCHED */

/* CFS-related fields in a runqueue */
struct cfs_rq {
	struct load_weight load;
	unsigned int nr_running, h_nr_running;

	u64 exec_clock;
	u64 min_vruntime;
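	/*
	 * On 32-bit kernels a 64-bit read is not atomic: updates to
	 * min_vruntime are paired (via a write barrier) with an update of
	 * the copy below, so lock-free readers can re-read until both
	 * values agree.
	 */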
#ifndef CONFIG_64BIT
	u64 min_vruntime_copy;
#endif

	struct rb_root tasks_timeline;
	struct rb_node *rb_leftmost;

	/*
	 * 'curr' points to the currently running entity on this cfs_rq.
	 * It is set to NULL otherwise (i.e., when none are currently running).
	 * 'next', 'last' and 'skip' are buddy hints consulted when picking
	 * the next entity to run.
	 */
	struct sched_entity *curr, *next, *last, *skip;

#ifdef CONFIG_SCHED_DEBUG
	unsigned int nr_spread_over;
#endif

#ifdef CONFIG_SMP
	/*
	 * CFS load tracking
	 */
	struct sched_avg avg;
	u64 runnable_load_sum;
	unsigned long runnable_load_avg;
#ifdef CONFIG_FAIR_GROUP_SCHED
	unsigned long tg_load_avg_contrib;
#endif
	/* Load/utilization carried by tasks that have left this cfs_rq. */
	atomic_long_t removed_load_avg, removed_util_avg;
#ifndef CONFIG_64BIT
	u64 load_last_update_time_copy;
#endif

#ifdef CONFIG_FAIR_GROUP_SCHED
	/*
	 *   h_load = weight * f(tg)
	 *
	 * Where f(tg) is the recursive weight fraction assigned to
	 * this group.
	 */
	unsigned long h_load;
	u64 last_h_load_update;
	struct sched_entity *h_load_next;
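	/*
	 * Worked example for the formula above (illustrative numbers only):
	 * if this group's entity gets 1/4 of its parent cfs_rq's load and
	 * the parent's entity in turn gets 1/2 of the root cfs_rq's load,
	 * then f(tg) = 1/2 * 1/4 = 1/8, so a task of weight 1024 queued
	 * here contributes h_load = 1024 * 1/8 = 128 to load-balancing
	 * decisions. last_h_load_update and h_load_next let task_h_load()
	 * recompute this product lazily, at most once per jiffy per cfs_rq.
	 */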
#endif /* CONFIG_FAIR_GROUP_SCHED */
#endif /* CONFIG_SMP */

#ifdef CONFIG_FAIR_GROUP_SCHED
	struct rq *rq;	/* cpu runqueue to which this cfs_rq is attached */

	/*
	 * leaf cfs_rqs are those that hold tasks (lowest schedulable entity
	 * in a hierarchy). Non-leaf runqueues hold other higher schedulable
	 * entities (like users, containers etc.)
	 *
	 * leaf_cfs_rq_list ties together the list of leaf cfs_rq's in a CPU.
	 * This list is used during load balancing.
	 */
	int on_list;
	struct list_head leaf_cfs_rq_list;
	struct task_group *tg;	/* group that "owns" this runqueue */

#ifdef CONFIG_CFS_BANDWIDTH
	int runtime_enabled;
	u64 runtime_expires;
	s64 runtime_remaining;

	u64 throttled_clock, throttled_clock_task;
	u64 throttled_clock_task_time;
	int throttled, throttle_count;
	struct list_head throttled_list;
#endif /* CONFIG_CFS_BANDWIDTH */
#endif /* CONFIG_FAIR_GROUP_SCHED */
};
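/*
 * Illustrative sketch only (hypothetical helper, not part of the original
 * header): with CFS bandwidth control, a cfs_rq becomes a throttling
 * candidate once the runtime cached in runtime_remaining is exhausted.
 */
#if defined(CONFIG_FAIR_GROUP_SCHED) && defined(CONFIG_CFS_BANDWIDTH)
static inline bool cfs_rq_runtime_exhausted(struct cfs_rq *cfs_rq)
{
	/* runtime_remaining goes negative as the group overruns its slice */
	return cfs_rq->runtime_enabled && cfs_rq->runtime_remaining <= 0;
}
#endif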

static inline int rt_bandwidth_enabled(void)
{
	/* Writing -1 to sched_rt_runtime_us disables RT throttling. */
	return sysctl_sched_rt_runtime >= 0;
}

/* RT IPI pull logic requires IRQ_WORK */
#ifdef CONFIG_IRQ_WORK
# define HAVE_RT_PUSH_IPI
#endif

/* Real-Time classes' related field in a runqueue: */
struct rt_rq {
	struct rt_prio_array active;
	unsigned int rt_nr_running;
	unsigned int rr_nr_running;
#if defined CONFIG_SMP || defined CONFIG_RT_GROUP_SCHED
	struct {
		int curr; /* highest queued rt task prio */
#ifdef CONFIG_SMP
		int next; /* next highest */
#endif
	} highest_prio;
#endif
#ifdef CONFIG_SMP
	unsigned long rt_nr_migratory;
	unsigned long rt_nr_total;
	int overloaded;
	struct plist_head pushable_tasks;
#ifdef HAVE_RT_PUSH_IPI
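	/*
	 * With the RT_PUSH_IPI feature, instead of every CPU that wants to
	 * pull grabbing the overloaded runqueue's lock, an IPI is sent to
	 * the overloaded CPU, which then pushes a task to a CPU it picks
	 * itself. The fields below drive that IPI traversal.
	 */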
	int push_flags;
	int push_cpu;
	struct irq_work push_work;
	raw_spinlock_t push_lock;
#endif
#endif /* CONFIG_SMP */
	int rt_queued;

	int rt_throttled;
	u64 rt_time;
	u64 rt_runtime;
	/* Nests inside the rq lock: */
	raw_spinlock_t rt_runtime_lock;

#ifdef CONFIG_RT_GROUP_SCHED
	unsigned long rt_nr_boosted;

	struct rq *rq;
	struct task_group *tg;
#endif
};

/* Deadline class' related fields in a runqueue */
struct dl_rq {
	/* runqueue is an rbtree, ordered by deadline */
	struct rb_root rb_root;
	struct rb_node *rb_leftmost;

	unsigned long dl_nr_running;

#ifdef CONFIG_SMP
	/*
	 * Deadline values of the currently executing and the
	 * earliest ready task on this rq. Caching these facilitates
	 * the decision whether or not a ready but not running task
	 * should migrate somewhere else.
	 */
	struct {
		u64 curr;
		u64 next;
	} earliest_dl;

	unsigned long dl_nr_migratory;
	int overloaded;

	/*
	 * Tasks on this rq that can be pushed away. They are kept in
	 * an rb-tree, ordered by tasks' deadlines, with caching
	 * of the leftmost (earliest deadline) element.
	 */
	struct rb_root pushable_dl_tasks_root;
	struct rb_node *pushable_dl_tasks_leftmost;
#else
	struct dl_bw dl_bw;
#endif
};
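/*
 * Illustrative sketch only (hypothetical helper, not part of the original
 * header): the cached earliest_dl values above reduce "should this queued
 * task migrate?" to a deadline comparison, here against what the runqueue
 * is currently executing, using dl_time_before() from
 * <linux/sched/deadline.h>.
 */
#ifdef CONFIG_SMP
static inline bool dl_deadline_beats_curr(struct dl_rq *dl_rq, u64 deadline)
{
	/* Under EDF, the smaller absolute deadline wins. */
	return dl_time_before(deadline, dl_rq->earliest_dl.curr);
}
#endif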

#ifdef CONFIG_SMP

/*
 * We add the notion of a root-domain which will be used to define per-domain
 * variables. Each exclusive cpuset essentially defines an island domain by
 * fully partitioning the member cpus from any other cpuset. Whenever a new
 * exclusive cpuset is created, we also create and attach a new root-domain
 * object.
 */
struct root_domain {
	atomic_t refcount;
	atomic_t rto_count;
	struct rcu_head rcu;
	cpumask_var_t span;
	cpumask_var_t online;

	/* Indicate more than one runnable task for any CPU */
	bool overload;

	/*
	 * The bit corresponding to a CPU gets set here if such CPU has more
	 * than one runnable -deadline task (as it is below for RT tasks).
	 */
	cpumask_var_t dlo_mask;
	atomic_t dlo_count;
	struct dl_bw dl_bw;
	struct cpudl cpudl;
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
Introduces the data structures needed for implementing dynamic
migration of -deadline tasks, together with the logic for checking
whether runqueues are overloaded with -deadline tasks and for
choosing where a task should migrate when that is the case.
Also adds dynamic migrations to SCHED_DEADLINE, so that tasks can
be moved among CPUs when necessary. It is also possible to bind a
task to a (set of) CPU(s), thus restricting its capability of
migrating, or forbidding migrations altogether.
The very same approach used in sched_rt is utilised:
- -deadline tasks are kept in CPU-specific runqueues,
- -deadline tasks are migrated among runqueues to achieve the
following:
* on an M-CPU system the M earliest-deadline ready tasks
are always running;
* the affinity/cpusets settings of all the -deadline tasks are
always respected.
Therefore, this very special form of "load balancing" is done with
an active method, i.e., the scheduler pushes or pulls tasks between
runqueues when they are woken up and/or (de)scheduled.
IOW, every time a preemption occurs, the descheduled task might be sent
to some other CPU (depending on its deadline) to continue executing
(push). On the other hand, every time a CPU becomes idle, it might pull
the second earliest-deadline ready task from some other CPU.
To enforce this, a pull operation is always attempted before taking any
scheduling decision (pre_schedule()), as well as a push one after each
scheduling decision (post_schedule()). In addition, when a task arrives
or wakes up, the best CPU on which to resume it is selected taking into
account its affinity mask and the system topology, but also its deadline.
E.g., from the scheduling point of view, the best CPU on which to wake
up (and also to push) a task is the one running the task
with the latest deadline among the M executing ones.
In order to facilitate these decisions, per-runqueue "caching" of the
deadlines of the currently running and of the first ready task is used.
Queued but not running tasks are also parked in another rb-tree to
speed up pushes.
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-5-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 21:43:38 +08:00
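As a rough illustration of the push-target choice described in the message above (a standalone sketch, not the kernel's cpudl helper; all names here are made up): among the CPUs allowed by the task's affinity mask, the preferred destination is the one whose currently running task has the latest deadline, and only if that deadline is later than the deadline of the task being pushed.

#include <stdint.h>

#define DEMO_NR_CPUS	8

/* absolute deadline currently being served on each CPU */
static uint64_t demo_curr_deadline[DEMO_NR_CPUS];

/*
 * Pick a push target for a task with absolute deadline 'dl' and affinity
 * mask 'allowed' (bit i set => CPU i allowed). Returns -1 if no allowed
 * CPU is running a later deadline, i.e. pushing would not help.
 */
static int demo_find_push_target(uint64_t dl, unsigned long allowed)
{
	uint64_t latest = dl;
	int cpu, best = -1;

	for (cpu = 0; cpu < DEMO_NR_CPUS; cpu++) {
		if (!(allowed & (1UL << cpu)))
			continue;
		/* a later deadline means a less urgent task is running there */
		if (demo_curr_deadline[cpu] > latest) {
			latest = demo_curr_deadline[cpu];
			best = cpu;
		}
	}
	return best;
}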
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
/*
|
|
|
|
* The "RT overload" flag: it gets set if a CPU has more than
|
|
|
|
* one runnable RT task.
|
|
|
|
*/
|
|
|
|
cpumask_var_t rto_mask;
|
|
|
|
struct cpupri cpupri;
|
2016-08-02 02:53:35 +08:00
|
|
|
|
|
|
|
unsigned long max_cpu_capacity;
|
2011-10-25 16:00:11 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
extern struct root_domain def_root_domain;
|
|
|
|
|
|
|
|
#endif /* CONFIG_SMP */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is the main, per-CPU runqueue data structure.
|
|
|
|
*
|
|
|
|
* Locking rule: code paths that want to lock multiple runqueues
|
|
|
|
* (such as the load balancing or the thread migration code) must
|
|
|
|
* order their lock acquire operations by ascending runqueue address.
|
|
|
|
*/
|
|
|
|
struct rq {
|
|
|
|
/* runqueue lock: */
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* nr_running and cpu_load should be in the same cacheline because
|
|
|
|
* remote CPUs use both these fields when doing load calculation.
|
|
|
|
*/
|
2012-04-26 19:12:27 +08:00
|
|
|
unsigned int nr_running;
|
2013-10-07 18:29:33 +08:00
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
|
|
|
unsigned int nr_numa_running;
|
|
|
|
unsigned int nr_preferred_running;
|
|
|
|
#endif
|
2011-10-25 16:00:11 +08:00
|
|
|
#define CPU_LOAD_IDX_MAX 5
|
|
|
|
unsigned long cpu_load[CPU_LOAD_IDX_MAX];
|
2011-08-11 05:21:01 +08:00
|
|
|
#ifdef CONFIG_NO_HZ_COMMON
|
2016-04-19 23:36:51 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
unsigned long last_load_update_tick;
|
|
|
|
#endif /* CONFIG_SMP */
|
2011-12-02 09:07:32 +08:00
|
|
|
unsigned long nohz_flags;
|
2016-04-19 23:36:51 +08:00
|
|
|
#endif /* CONFIG_NO_HZ_COMMON */
|
2013-05-03 09:39:05 +08:00
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
|
|
|
unsigned long last_sched_tick;
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif
|
|
|
|
/* capture load from *all* tasks on this cpu: */
|
|
|
|
struct load_weight load;
|
|
|
|
unsigned long nr_load_updates;
|
|
|
|
u64 nr_switches;
|
|
|
|
|
|
|
|
struct cfs_rq cfs;
|
|
|
|
struct rt_rq rt;
|
2013-11-28 18:14:43 +08:00
|
|
|
struct dl_rq dl;
|
2011-10-25 16:00:11 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
/* list of leaf cfs_rq on this cpu: */
|
|
|
|
struct list_head leaf_cfs_rq_list;
|
2016-11-08 17:53:43 +08:00
|
|
|
struct list_head *tmp_alone_branch;
|
2012-08-09 03:46:40 +08:00
|
|
|
#endif /* CONFIG_FAIR_GROUP_SCHED */
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
/*
|
|
|
|
* This is part of a global counter where only the total sum
|
|
|
|
* over all CPUs matters. A task can increase this counter on
|
|
|
|
* one CPU and if it got migrated afterwards it may decrease
|
|
|
|
* it on another CPU. Always updated under the runqueue lock:
|
|
|
|
*/
|
|
|
|
unsigned long nr_uninterruptible;
|
|
|
|
|
|
|
|
struct task_struct *curr, *idle, *stop;
|
|
|
|
unsigned long next_balance;
|
|
|
|
struct mm_struct *prev_mm;
|
|
|
|
|
2015-01-05 18:18:11 +08:00
|
|
|
unsigned int clock_skip_update;
|
2011-10-25 16:00:11 +08:00
|
|
|
u64 clock;
|
|
|
|
u64 clock_task;
|
|
|
|
|
|
|
|
atomic_t nr_iowait;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
struct root_domain *rd;
|
|
|
|
struct sched_domain *sd;
|
|
|
|
|
2014-05-27 06:19:38 +08:00
|
|
|
unsigned long cpu_capacity;
|
2015-02-27 23:54:09 +08:00
|
|
|
unsigned long cpu_capacity_orig;
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2015-06-11 20:46:37 +08:00
|
|
|
struct callback_head *balance_callback;
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
unsigned char idle_balance;
|
|
|
|
/* For active balancing */
|
|
|
|
int active_balance;
|
|
|
|
int push_cpu;
|
|
|
|
struct cpu_stop_work active_balance_work;
|
|
|
|
/* cpu of this runqueue: */
|
|
|
|
int cpu;
|
|
|
|
int online;
|
|
|
|
|
2012-02-21 04:49:09 +08:00
|
|
|
struct list_head cfs_tasks;
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
u64 rt_avg;
|
|
|
|
u64 age_stamp;
|
|
|
|
u64 idle_stamp;
|
|
|
|
u64 avg_idle;
|
2013-09-14 02:26:52 +08:00
|
|
|
|
|
|
|
/* This is used to determine avg_idle's max value */
|
|
|
|
u64 max_idle_balance_cost;
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_IRQ_TIME_ACCOUNTING
|
|
|
|
u64 prev_irq_time;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_PARAVIRT
|
|
|
|
u64 prev_steal_time;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
|
|
|
|
u64 prev_steal_time_rq;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* calc_load related fields */
|
|
|
|
unsigned long calc_load_update;
|
|
|
|
long calc_load_active;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_HRTICK
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
int hrtick_csd_pending;
|
|
|
|
struct call_single_data hrtick_csd;
|
|
|
|
#endif
|
|
|
|
struct hrtimer hrtick_timer;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHEDSTATS
|
|
|
|
/* latency stats */
|
|
|
|
struct sched_info rq_sched_info;
|
|
|
|
unsigned long long rq_cpu_time;
|
|
|
|
/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
|
|
|
|
|
|
|
|
/* sys_sched_yield() stats */
|
|
|
|
unsigned int yld_count;
|
|
|
|
|
|
|
|
/* schedule() stats */
|
|
|
|
unsigned int sched_count;
|
|
|
|
unsigned int sched_goidle;
|
|
|
|
|
|
|
|
/* try_to_wake_up() stats */
|
|
|
|
unsigned int ttwu_count;
|
|
|
|
unsigned int ttwu_local;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
struct llist_head wake_list;
|
|
|
|
#endif
|
2014-09-04 23:32:09 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_CPU_IDLE
|
|
|
|
/* Must be inspected within an RCU lock section */
|
|
|
|
struct cpuidle_state *idle_state;
|
|
|
|
#endif
|
2011-10-25 16:00:11 +08:00
|
|
|
};
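The locking rule documented above struct rq (lock acquire operations on multiple runqueues must be ordered by ascending runqueue address) can be illustrated with a minimal user-space sketch; the kernel has its own helper for this, so the code below only shows why a global order prevents deadlock:

#include <pthread.h>

struct demo_rq {
	pthread_mutex_t lock;
	/* ... per-CPU state ... */
};

/*
 * Always take the lower-addressed lock first, so two CPUs locking the
 * same pair of runqueues can never each hold one lock while waiting
 * for the other (ABBA deadlock).
 */
static void demo_double_rq_lock(struct demo_rq *rq1, struct demo_rq *rq2)
{
	if (rq1 == rq2) {
		pthread_mutex_lock(&rq1->lock);
		return;
	}
	if (rq1 < rq2) {
		pthread_mutex_lock(&rq1->lock);
		pthread_mutex_lock(&rq2->lock);
	} else {
		pthread_mutex_lock(&rq2->lock);
		pthread_mutex_lock(&rq1->lock);
	}
}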
|
|
|
|
|
|
|
|
static inline int cpu_of(struct rq *rq)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
return rq->cpu;
|
|
|
|
#else
|
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2016-05-09 16:38:41 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_SMT
|
|
|
|
|
|
|
|
extern struct static_key_false sched_smt_present;
|
|
|
|
|
|
|
|
extern void __update_idle_core(struct rq *rq);
|
|
|
|
|
|
|
|
static inline void update_idle_core(struct rq *rq)
|
|
|
|
{
|
|
|
|
if (static_branch_unlikely(&sched_smt_present))
|
|
|
|
__update_idle_core(rq);
|
|
|
|
}
|
|
|
|
|
|
|
|
#else
|
|
|
|
static inline void update_idle_core(struct rq *rq) { }
|
|
|
|
#endif
|
|
|
|
|
2014-08-14 01:28:12 +08:00
|
|
|
DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2011-12-07 22:07:31 +08:00
|
|
|
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
|
2014-08-18 01:30:27 +08:00
|
|
|
#define this_rq() this_cpu_ptr(&runqueues)
|
2011-12-07 22:07:31 +08:00
|
|
|
#define task_rq(p) cpu_rq(task_cpu(p))
|
|
|
|
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
|
2014-08-18 01:30:27 +08:00
|
|
|
#define raw_rq() raw_cpu_ptr(&runqueues)
|
2011-12-07 22:07:31 +08:00
|
|
|
|
2015-01-05 18:18:10 +08:00
|
|
|
static inline u64 __rq_clock_broken(struct rq *rq)
|
|
|
|
{
|
2015-04-29 04:00:20 +08:00
|
|
|
return READ_ONCE(rq->clock);
|
2015-01-05 18:18:10 +08:00
|
|
|
}
|
|
|
|
|
2013-04-12 07:51:02 +08:00
|
|
|
static inline u64 rq_clock(struct rq *rq)
|
|
|
|
{
|
2015-01-05 18:18:10 +08:00
|
|
|
lockdep_assert_held(&rq->lock);
|
2013-04-12 07:51:02 +08:00
|
|
|
return rq->clock;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u64 rq_clock_task(struct rq *rq)
|
|
|
|
{
|
2015-01-05 18:18:10 +08:00
|
|
|
lockdep_assert_held(&rq->lock);
|
2013-04-12 07:51:02 +08:00
|
|
|
return rq->clock_task;
|
|
|
|
}
|
|
|
|
|
2015-01-05 18:18:11 +08:00
|
|
|
#define RQCF_REQ_SKIP 0x01
|
|
|
|
#define RQCF_ACT_SKIP 0x02
|
|
|
|
|
|
|
|
static inline void rq_clock_skip_update(struct rq *rq, bool skip)
|
|
|
|
{
|
|
|
|
lockdep_assert_held(&rq->lock);
|
|
|
|
if (skip)
|
|
|
|
rq->clock_skip_update |= RQCF_REQ_SKIP;
|
|
|
|
else
|
|
|
|
rq->clock_skip_update &= ~RQCF_REQ_SKIP;
|
|
|
|
}
|
|
|
|
|
2014-10-17 15:29:49 +08:00
|
|
|
#ifdef CONFIG_NUMA
|
2014-10-17 15:29:50 +08:00
|
|
|
enum numa_topology_type {
|
|
|
|
NUMA_DIRECT,
|
|
|
|
NUMA_GLUELESS_MESH,
|
|
|
|
NUMA_BACKPLANE,
|
|
|
|
};
|
|
|
|
extern enum numa_topology_type sched_numa_topology_type;
|
2014-10-17 15:29:49 +08:00
|
|
|
extern int sched_max_numa_distance;
|
|
|
|
extern bool find_numa_distance(int distance);
|
|
|
|
#endif
|
|
|
|
|
2013-10-07 18:28:57 +08:00
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
2014-10-31 08:13:31 +08:00
|
|
|
/* The regions in numa_faults array from task_struct */
|
|
|
|
enum numa_faults_stats {
|
|
|
|
NUMA_MEM = 0,
|
|
|
|
NUMA_CPU,
|
|
|
|
NUMA_MEMBUF,
|
|
|
|
NUMA_CPUBUF
|
|
|
|
};
|
2013-10-07 18:29:33 +08:00
|
|
|
extern void sched_setnuma(struct task_struct *p, int node);
|
2013-10-07 18:29:02 +08:00
|
|
|
extern int migrate_task_to(struct task_struct *p, int cpu);
|
2013-10-07 18:29:16 +08:00
|
|
|
extern int migrate_swap(struct task_struct *, struct task_struct *);
|
2013-10-07 18:28:57 +08:00
|
|
|
#endif /* CONFIG_NUMA_BALANCING */
|
|
|
|
|
2011-12-07 22:07:31 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
|
2015-06-11 20:46:37 +08:00
|
|
|
static inline void
|
|
|
|
queue_balance_callback(struct rq *rq,
|
|
|
|
struct callback_head *head,
|
|
|
|
void (*func)(struct rq *rq))
|
|
|
|
{
|
|
|
|
lockdep_assert_held(&rq->lock);
|
|
|
|
|
|
|
|
if (unlikely(head->next))
|
|
|
|
return;
|
|
|
|
|
|
|
|
head->func = (void (*)(struct callback_head *))func;
|
|
|
|
head->next = rq->balance_callback;
|
|
|
|
rq->balance_callback = head;
|
|
|
|
}
|
|
|
|
|
2014-06-05 01:31:18 +08:00
|
|
|
extern void sched_ttwu_pending(void);
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#define rcu_dereference_check_sched_domain(p) \
|
|
|
|
rcu_dereference_check((p), \
|
|
|
|
lockdep_is_held(&sched_domains_mutex))
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The domain tree (rq->sd) is protected by RCU's quiescent state transition.
|
|
|
|
* See detach_destroy_domains: synchronize_sched for details.
|
|
|
|
*
|
|
|
|
* The domain tree of any CPU may only be accessed from within
|
|
|
|
* preempt-disabled sections.
|
|
|
|
*/
|
|
|
|
#define for_each_domain(cpu, __sd) \
|
2011-12-07 22:07:31 +08:00
|
|
|
for (__sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd); \
|
|
|
|
__sd; __sd = __sd->parent)
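A short usage sketch of the iterator defined above (the surrounding function is hypothetical; rcu_read_lock()/rcu_read_unlock() mark the RCU read-side section that the preceding comment requires):

/* Sketch: count the sched_domain levels above @cpu. */
static int demo_count_domain_levels(int cpu)
{
	struct sched_domain *sd;
	int levels = 0;

	rcu_read_lock();		/* the domain tree is RCU protected */
	for_each_domain(cpu, sd)
		levels++;
	rcu_read_unlock();

	return levels;
}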
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2011-11-18 03:08:23 +08:00
|
|
|
#define for_each_lower_domain(sd) for (; sd; sd = sd->child)
|
|
|
|
|
2011-12-07 22:07:31 +08:00
|
|
|
/**
|
|
|
|
* highest_flag_domain - Return highest sched_domain containing flag.
|
|
|
|
* @cpu: The cpu whose highest level of sched domain is to
|
|
|
|
* be returned.
|
|
|
|
* @flag: The flag to check for the highest sched_domain
|
|
|
|
* for the given cpu.
|
|
|
|
*
|
|
|
|
* Returns the highest sched_domain of a cpu which contains the given flag.
|
|
|
|
*/
|
|
|
|
static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
|
|
|
|
{
|
|
|
|
struct sched_domain *sd, *hsd = NULL;
|
|
|
|
|
|
|
|
for_each_domain(cpu, sd) {
|
|
|
|
if (!(sd->flags & flag))
|
|
|
|
break;
|
|
|
|
hsd = sd;
|
|
|
|
}
|
|
|
|
|
|
|
|
return hsd;
|
|
|
|
}
|
|
|
|
|
2013-10-07 18:29:17 +08:00
|
|
|
static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
|
|
|
|
{
|
|
|
|
struct sched_domain *sd;
|
|
|
|
|
|
|
|
for_each_domain(cpu, sd) {
|
|
|
|
if (sd->flags & flag)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return sd;
|
|
|
|
}
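A usage sketch for the two helpers above (SD_SHARE_PKG_RESOURCES and the called helpers are real kernel symbols, the function itself is hypothetical): this is the kind of lookup used to populate the per-CPU sd_llc pointers declared just below.

/* Sketch: report the span of @cpu's last-level-cache domain, if any. */
static void demo_report_llc(int cpu)
{
	struct sched_domain *sd;

	rcu_read_lock();
	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
	if (sd)
		pr_info("cpu%d: LLC domain spans %u CPUs\n",
			cpu, cpumask_weight(sched_domain_span(sd)));
	rcu_read_unlock();
}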
|
|
|
|
|
2011-12-07 22:07:31 +08:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_llc);
|
2013-07-04 12:56:46 +08:00
|
|
|
DECLARE_PER_CPU(int, sd_llc_size);
|
2011-12-07 22:07:31 +08:00
|
|
|
DECLARE_PER_CPU(int, sd_llc_id);
|
2016-05-09 16:38:01 +08:00
|
|
|
DECLARE_PER_CPU(struct sched_domain_shared *, sd_llc_shared);
|
2013-10-07 18:29:17 +08:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_numa);
|
2013-10-30 11:12:52 +08:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_asym);
|
2011-12-07 22:07:31 +08:00
|
|
|
|
2014-05-27 06:19:37 +08:00
|
|
|
struct sched_group_capacity {
|
2013-03-05 16:06:23 +08:00
|
|
|
atomic_t ref;
|
|
|
|
/*
|
2016-04-05 12:12:27 +08:00
|
|
|
* CPU capacity of this group, SCHED_CAPACITY_SCALE being max capacity
|
2014-05-27 06:19:37 +08:00
|
|
|
* for a single CPU.
|
2013-03-05 16:06:23 +08:00
|
|
|
*/
|
2016-10-14 21:41:09 +08:00
|
|
|
unsigned long capacity;
|
|
|
|
unsigned long min_capacity; /* Min per-CPU capacity in group */
|
2013-03-05 16:06:23 +08:00
|
|
|
unsigned long next_update;
|
2014-05-27 06:19:37 +08:00
|
|
|
int imbalance; /* XXX unrelated to capacity but shared group state */
|
2013-03-05 16:06:23 +08:00
|
|
|
|
|
|
|
unsigned long cpumask[0]; /* iteration mask */
|
|
|
|
};
|
|
|
|
|
|
|
|
struct sched_group {
|
|
|
|
struct sched_group *next; /* Must be a circular list */
|
|
|
|
atomic_t ref;
|
|
|
|
|
|
|
|
unsigned int group_weight;
|
2014-05-27 06:19:37 +08:00
|
|
|
struct sched_group_capacity *sgc;
|
2013-03-05 16:06:23 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The CPUs this group covers.
|
|
|
|
*
|
|
|
|
* NOTE: this field is variable length. (Allocated dynamically
|
|
|
|
* by attaching extra space to the end of the structure,
|
|
|
|
* depending on how many CPUs the kernel has booted up with)
|
|
|
|
*/
|
|
|
|
unsigned long cpumask[0];
|
|
|
|
};
|
|
|
|
|
|
|
|
static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
|
|
|
|
{
|
|
|
|
return to_cpumask(sg->cpumask);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* cpumask masking which cpus in the group are allowed to iterate up the domain
|
|
|
|
* tree.
|
|
|
|
*/
|
|
|
|
static inline struct cpumask *sched_group_mask(struct sched_group *sg)
|
|
|
|
{
|
2014-05-27 06:19:37 +08:00
|
|
|
return to_cpumask(sg->sgc->cpumask);
|
2013-03-05 16:06:23 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
|
|
|
|
* @group: The group whose first cpu is to be returned.
|
|
|
|
*/
|
|
|
|
static inline unsigned int group_first_cpu(struct sched_group *group)
|
|
|
|
{
|
|
|
|
return cpumask_first(sched_group_cpus(group));
|
|
|
|
}
|
|
|
|
|
2012-05-31 20:47:33 +08:00
|
|
|
extern int group_balance_cpu(struct sched_group *sg);
|
|
|
|
|
2016-02-23 05:26:51 +08:00
|
|
|
#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
|
|
|
|
void register_sched_domain_sysctl(void);
|
|
|
|
void unregister_sched_domain_sysctl(void);
|
|
|
|
#else
|
|
|
|
static inline void register_sched_domain_sysctl(void)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
static inline void unregister_sched_domain_sysctl(void)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2014-06-05 01:31:18 +08:00
|
|
|
#else
|
|
|
|
|
|
|
|
static inline void sched_ttwu_pending(void) { }
|
|
|
|
|
2011-12-07 22:07:31 +08:00
|
|
|
#endif /* CONFIG_SMP */
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2011-11-16 00:14:39 +08:00
|
|
|
#include "stats.h"
|
|
|
|
#include "auto_group.h"
|
2011-10-25 16:00:11 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_CGROUP_SCHED
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return the group to which this task belongs.
|
|
|
|
*
|
2013-08-09 08:11:22 +08:00
|
|
|
* We cannot use task_css() and friends because the cgroup subsystem
|
|
|
|
* changes that value before the cgroup_subsys::attach() method is called,
|
|
|
|
* therefore we cannot pin it and might observe the wrong value.
|
2012-06-22 19:36:05 +08:00
|
|
|
*
|
|
|
|
* The same is true for autogroup's p->signal->autogroup->tg, the autogroup
|
|
|
|
* core changes this before calling sched_move_task().
|
|
|
|
*
|
|
|
|
* Instead we use a 'copy' which is updated from sched_move_task() while
|
|
|
|
* holding both task_struct::pi_lock and rq::lock.
|
2011-10-25 16:00:11 +08:00
|
|
|
*/
|
|
|
|
static inline struct task_group *task_group(struct task_struct *p)
|
|
|
|
{
|
2012-06-22 19:36:05 +08:00
|
|
|
return p->sched_task_group;
|
2011-10-25 16:00:11 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
|
|
|
|
static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_FAIR_GROUP_SCHED) || defined(CONFIG_RT_GROUP_SCHED)
|
|
|
|
struct task_group *tg = task_group(p);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
2015-10-24 00:16:19 +08:00
|
|
|
set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
|
2011-10-25 16:00:11 +08:00
|
|
|
p->se.cfs_rq = tg->cfs_rq[cpu];
|
|
|
|
p->se.parent = tg->se[cpu];
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
p->rt.rt_rq = tg->rt_rq[cpu];
|
|
|
|
p->rt.parent = tg->rt_se[cpu];
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
#else /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
|
|
|
static inline void set_task_rq(struct task_struct *p, unsigned int cpu) { }
|
|
|
|
static inline struct task_group *task_group(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
#endif /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
|
|
|
static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
|
|
|
|
{
|
|
|
|
set_task_rq(p, cpu);
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* After ->cpu is set up to a new value, task_rq_lock(p, ...) can be
|
|
|
|
* successfully executed on another CPU. We must ensure that updates of
|
|
|
|
* per-task data have been completed by this moment.
|
|
|
|
*/
|
|
|
|
smp_wmb();
|
2016-09-14 05:29:24 +08:00
|
|
|
#ifdef CONFIG_THREAD_INFO_IN_TASK
|
|
|
|
p->cpu = cpu;
|
|
|
|
#else
|
2011-10-25 16:00:11 +08:00
|
|
|
task_thread_info(p)->cpu = cpu;
|
2016-09-14 05:29:24 +08:00
|
|
|
#endif
|
2013-10-07 18:29:16 +08:00
|
|
|
p->wake_cpu = cpu;
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Tunables that become constants when CONFIG_SCHED_DEBUG is off:
|
|
|
|
*/
|
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
2012-02-24 15:31:31 +08:00
|
|
|
# include <linux/static_key.h>
|
2011-10-25 16:00:11 +08:00
|
|
|
# define const_debug __read_mostly
|
|
|
|
#else
|
|
|
|
# define const_debug const
|
|
|
|
#endif
|
|
|
|
|
|
|
|
extern const_debug unsigned int sysctl_sched_features;
|
|
|
|
|
|
|
|
#define SCHED_FEAT(name, enabled) \
|
|
|
|
__SCHED_FEAT_##name ,
|
|
|
|
|
|
|
|
enum {
|
2011-11-16 00:14:39 +08:00
|
|
|
#include "features.h"
|
2011-07-06 20:20:14 +08:00
|
|
|
__SCHED_FEAT_NR,
|
2011-10-25 16:00:11 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
#undef SCHED_FEAT
|
|
|
|
|
2011-07-06 20:20:14 +08:00
|
|
|
#if defined(CONFIG_SCHED_DEBUG) && defined(HAVE_JUMP_LABEL)
|
|
|
|
#define SCHED_FEAT(name, enabled) \
|
2012-02-24 15:31:31 +08:00
|
|
|
static __always_inline bool static_branch_##name(struct static_key *key) \
|
2011-07-06 20:20:14 +08:00
|
|
|
{ \
|
2014-07-02 23:52:41 +08:00
|
|
|
return static_key_##enabled(key); \
|
2011-07-06 20:20:14 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
#include "features.h"
|
|
|
|
|
|
|
|
#undef SCHED_FEAT
|
|
|
|
|
2012-02-24 15:31:31 +08:00
|
|
|
extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
|
2011-07-06 20:20:14 +08:00
|
|
|
#define sched_feat(x) (static_branch_##x(&sched_feat_keys[__SCHED_FEAT_##x]))
|
|
|
|
#else /* !(SCHED_DEBUG && HAVE_JUMP_LABEL) */
|
2011-10-25 16:00:11 +08:00
|
|
|
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
|
2011-07-06 20:20:14 +08:00
|
|
|
#endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
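Either way the call site looks the same; a small sketch (HRTICK is an existing entry in features.h, the wrapper function is illustrative only):

/* Sketch: only consider arming the high-resolution tick when the feature is on. */
static inline bool demo_want_hrtick(void)
{
	return sched_feat(HRTICK);
}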
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2015-08-12 00:24:21 +08:00
|
|
|
extern struct static_key_false sched_numa_balancing;
|
2016-02-05 17:08:36 +08:00
|
|
|
extern struct static_key_false sched_schedstats;
|
2012-10-25 20:16:43 +08:00
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
static inline u64 global_rt_period(void)
|
|
|
|
{
|
|
|
|
return (u64)sysctl_sched_rt_period * NSEC_PER_USEC;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u64 global_rt_runtime(void)
|
|
|
|
{
|
|
|
|
if (sysctl_sched_rt_runtime < 0)
|
|
|
|
return RUNTIME_INF;
|
|
|
|
|
|
|
|
return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
|
|
|
|
}
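For concreteness (the numbers below are the usual defaults for these sysctls; treat them as illustrative rather than guaranteed):

/* sysctl_sched_rt_runtime = 950000 us, sysctl_sched_rt_period = 1000000 us  */
/* global_rt_runtime() -> 950000  * NSEC_PER_USEC =  950000000 ns (0.95 s)   */
/* global_rt_period()  -> 1000000 * NSEC_PER_USEC = 1000000000 ns (1.00 s)   */
/* i.e. realtime classes may consume at most 95% of each CPU per period;     */
/* writing -1 to sched_rt_runtime_us makes global_rt_runtime() return        */
/* RUNTIME_INF and disables the throttling.                                  */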
|
|
|
|
|
|
|
|
static inline int task_current(struct rq *rq, struct task_struct *p)
|
|
|
|
{
|
|
|
|
return rq->curr == p;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int task_running(struct rq *rq, struct task_struct *p)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
return p->on_cpu;
|
|
|
|
#else
|
|
|
|
return task_current(rq, p);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2014-08-20 17:47:32 +08:00
|
|
|
static inline int task_on_rq_queued(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return p->on_rq == TASK_ON_RQ_QUEUED;
|
|
|
|
}
|
2011-10-25 16:00:11 +08:00
|
|
|
|
sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state
This is a new p->on_rq state which will be used to indicate that a task
is in the process of migrating between two RQs. It allows us to get
rid of double_rq_lock(), which we previously used to change the rq of
a queued task.
Let's consider an example. To move a task between src_rq and
dst_rq we will do the following:
raw_spin_lock(&src_rq->lock);
/* p is a task which is queued on src_rq */
p = ...;
dequeue_task(src_rq, p, 0);
p->on_rq = TASK_ON_RQ_MIGRATING;
set_task_cpu(p, dst_cpu);
raw_spin_unlock(&src_rq->lock);
/*
* Both RQs are unlocked here.
* Task p is dequeued from src_rq
* but its on_rq value is not zero.
*/
raw_spin_lock(&dst_rq->lock);
p->on_rq = TASK_ON_RQ_QUEUED;
enqueue_task(dst_rq, p, 0);
raw_spin_unlock(&dst_rq->lock);
While p->on_rq is TASK_ON_RQ_MIGRATING, the task is considered to be
"migrating", and other scheduler actions on it are
not available to parallel callers. A parallel caller
spins until the migration is completed.
The unavailable actions are changing of cpu affinity, changing
of priority, etc.; in other words, all the functionality which used
to require task_rq(p)->lock before (and is related to the task).
To implement TASK_ON_RQ_MIGRATING support we primarily rely on
the following fact. Most scheduler users (from which we are
protecting a migrating task) use task_rq_lock() and
__task_rq_lock() to get the lock of task_rq(p). These primitives
know that the task's cpu may change, and they spin while the
lock of the right RQ is not held. We add one more condition to
them, so they will also spin until the migration is
finished.
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1408528062.23412.88.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-20 17:47:42 +08:00
|
|
|
static inline int task_on_rq_migrating(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return p->on_rq == TASK_ON_RQ_MIGRATING;
|
|
|
|
}
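A hedged sketch of the lock-and-spin logic the commit message above describes, using the helper just defined (simplified and renamed; modeled on, but not identical to, the kernel's __task_rq_lock()): the caller keeps retrying until it holds the lock of the rq the task is actually on and the task is not in the middle of a migration.

static struct rq *demo_task_rq_lock(struct task_struct *p)
{
	struct rq *rq;

	for (;;) {
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		/*
		 * Won the race: p is still on this rq and not being migrated
		 * away, so rq->lock really does protect it.
		 */
		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
			return rq;
		raw_spin_unlock(&rq->lock);

		/* spin until the migration completes, then try again */
		while (unlikely(task_on_rq_migrating(p)))
			cpu_relax();
	}
}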
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#ifndef prepare_arch_switch
|
|
|
|
# define prepare_arch_switch(next) do { } while (0)
|
|
|
|
#endif
|
2011-11-28 05:43:10 +08:00
|
|
|
#ifndef finish_arch_post_lock_switch
|
|
|
|
# define finish_arch_post_lock_switch() do { } while (0)
|
|
|
|
#endif
|
2011-10-25 16:00:11 +08:00
|
|
|
|
|
|
|
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* We can optimise this out completely for !SMP, because the
|
|
|
|
* SMP rebalancing from interrupt is the only thing that cares
|
|
|
|
* here.
|
|
|
|
*/
|
|
|
|
next->on_cpu = 1;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* After ->on_cpu is cleared, the task can be moved to a different CPU.
|
|
|
|
* We must ensure this doesn't happen until the switch is completely
|
|
|
|
* finished.
|
2015-09-29 20:45:09 +08:00
|
|
|
*
|
2015-10-06 20:36:17 +08:00
|
|
|
* In particular, the load of prev->state in finish_task_switch() must
|
|
|
|
* happen before this.
|
|
|
|
*
|
2016-04-04 16:57:12 +08:00
|
|
|
* Pairs with the smp_cond_load_acquire() in try_to_wake_up().
|
2011-10-25 16:00:11 +08:00
|
|
|
*/
|
2015-09-29 20:45:09 +08:00
|
|
|
smp_store_release(&prev->on_cpu, 0);
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_DEBUG_SPINLOCK
|
|
|
|
/* this is a valid case when another task releases the spinlock */
|
|
|
|
rq->lock.owner = current;
|
|
|
|
#endif
|
|
|
|
/*
|
|
|
|
* If we are tracking spinlock dependencies then we have to
|
|
|
|
* fix up the runqueue lock - which gets 'carried over' from
|
|
|
|
* prev into current:
|
|
|
|
*/
|
|
|
|
spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
|
|
|
|
|
|
|
|
raw_spin_unlock_irq(&rq->lock);
|
|
|
|
}
|
|
|
|
|
2013-03-05 16:06:38 +08:00
|
|
|
/*
|
|
|
|
* wake flags
|
|
|
|
*/
|
|
|
|
#define WF_SYNC 0x01 /* waker goes to sleep after wakeup */
|
|
|
|
#define WF_FORK 0x02 /* child wakeup after fork */
|
|
|
|
#define WF_MIGRATED 0x4 /* internal use, task got migrated */
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
/*
|
|
|
|
* To aid in avoiding the subversion of "niceness" due to uneven distribution
|
|
|
|
* of tasks with abnormal "nice" values across CPUs the contribution that
|
|
|
|
* each task makes to its run queue's load is weighted according to its
|
|
|
|
* scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
|
|
|
|
* scaled version of the new time slice allocation that they receive on time
|
|
|
|
* slice expiry etc.
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define WEIGHT_IDLEPRIO 3
|
|
|
|
#define WMULT_IDLEPRIO 1431655765
|
|
|
|
|
2015-11-30 12:59:43 +08:00
|
|
|
extern const int sched_prio_to_weight[40];
|
|
|
|
extern const u32 sched_prio_to_wmult[40];
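The two tables referenced above encode the "roughly 10% CPU per nice level" rule: approximately weight(nice) ~ 1024 / 1.25^nice, with wmult(nice) ~ 2^32 / weight(nice) kept as a fixed-point inverse so hot paths can multiply instead of divide. A standalone sketch of that relationship (approximate by construction; the kernel uses the precomputed tables, not this math):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Approximate reconstruction of the prio-to-weight mapping (nice in [-20, 19]). */
static unsigned long demo_weight(int nice)
{
	return (unsigned long)(1024.0 / pow(1.25, nice) + 0.5);
}

int main(void)
{
	int nice;

	for (nice = -20; nice <= 19; nice += 13) {
		unsigned long w = demo_weight(nice);
		/* the same 2^32/weight inverse that the wmult table precomputes */
		uint32_t wmult = (uint32_t)(((uint64_t)1 << 32) / w);

		printf("nice %3d: weight ~%5lu  wmult ~%u\n", nice, w, wmult);
	}
	return 0;
}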
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2016-01-18 22:27:07 +08:00
|
|
|
/*
|
|
|
|
* {de,en}queue flags:
|
|
|
|
*
|
|
|
|
* DEQUEUE_SLEEP - task is no longer runnable
|
|
|
|
* ENQUEUE_WAKEUP - task just became runnable
|
|
|
|
*
|
|
|
|
* SAVE/RESTORE - an otherwise spurious dequeue/enqueue, done to ensure tasks
|
|
|
|
* are in a known state which allows modification. Such pairs
|
|
|
|
* should preserve as much state as possible.
|
|
|
|
*
|
|
|
|
* MOVE - paired with SAVE/RESTORE, explicitly does not preserve the location
|
|
|
|
* in the runqueue.
|
|
|
|
*
|
|
|
|
* ENQUEUE_HEAD - place at front of runqueue (tail if not specified)
|
|
|
|
* ENQUEUE_REPLENISH - CBS (replenish runtime and postpone deadline)
|
2016-05-11 00:24:37 +08:00
|
|
|
* ENQUEUE_MIGRATED - the task was migrated during wakeup
|
2016-01-18 22:27:07 +08:00
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define DEQUEUE_SLEEP 0x01
|
|
|
|
#define DEQUEUE_SAVE 0x02 /* matches ENQUEUE_RESTORE */
|
|
|
|
#define DEQUEUE_MOVE 0x04 /* matches ENQUEUE_MOVE */
|
|
|
|
|
sched/core: Fix task and run queue sched_info::run_delay inconsistencies
Mike Meyer reported the following bug:
> During evaluation of some performance data, it was discovered thread
> and run queue run_delay accounting data was inconsistent with the other
> accounting data that was collected. Further investigation found under
> certain circumstances execution time was leaking into the task and
> run queue accounting of run_delay.
>
> Consider the following sequence:
>
> a. thread is running.
> b. thread moves between cgroups, changes scheduling class or priority.
> c. thread sleeps OR
> d. thread involuntarily gives up cpu.
>
> a. implies:
>
> thread->sched_info.last_queued = 0
>
> a. and b. results in the following:
>
> 1. dequeue_task(rq, thread)
>
> sched_info_dequeued(rq, thread)
> delta = 0
>
> sched_info_reset_dequeued(thread)
> thread->sched_info.last_queued = 0
>
> thread->sched_info.run_delay += delta
>
> 2. enqueue_task(rq, thread)
>
> sched_info_queued(rq, thread)
>
> /* thread is still on cpu at this point. */
> thread->sched_info.last_queued = task_rq(thread)->clock;
>
> c. results in:
>
> dequeue_task(rq, thread)
>
> sched_info_dequeued(rq, thread)
>
> /* delta is execution time not run_delay. */
> delta = task_rq(thread)->clock - thread->sched_info.last_queued
>
> sched_info_reset_dequeued(thread)
> thread->sched_info.last_queued = 0
>
> thread->sched_info.run_delay += delta
>
> Since thread was running between enqueue_task(rq, thread) and
> dequeue_task(rq, thread), the delta above is really execution
> time and not run_delay.
>
> d. results in:
>
> __sched_info_switch(thread, next_thread)
>
> sched_info_depart(rq, thread)
>
> sched_info_queued(rq, thread)
>
> /* last_queued not updated due to being non-zero */
> return
>
> Since thread was running between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread), the execution time
> between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread) now will become
> associated with run_delay due to when last_queued was last updated.
>
This alternative patch solves the problem by not calling
sched_info_{de,}queued() in {de,en}queue_task(). Therefore the
sched_info state is preserved and things work as expected.
By inlining the {de,en}queue_task() functions the new condition
becomes (mostly) a compile-time constant and we'll not emit any new
branch instructions.
It even shrinks the code (due to inlining {en,de}queue_task()):
$ size defconfig-build/kernel/sched/core.o defconfig-build/kernel/sched/core.o.orig
text data bss dec hex filename
64019 23378 2344 89741 15e8d defconfig-build/kernel/sched/core.o
64149 23378 2344 89871 15f0f defconfig-build/kernel/sched/core.o.orig
Reported-by: Mike Meyer <Mike.Meyer@Teradata.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20150930154413.GO3604@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-09-30 23:44:13 +08:00
|
|
|
#define ENQUEUE_WAKEUP 0x01
|
2016-01-18 22:27:07 +08:00
|
|
|
#define ENQUEUE_RESTORE 0x02
|
|
|
|
#define ENQUEUE_MOVE 0x04
|
|
|
|
|
|
|
|
#define ENQUEUE_HEAD 0x08
|
|
|
|
#define ENQUEUE_REPLENISH 0x10
|
2013-03-05 16:06:55 +08:00
|
|
|
#ifdef CONFIG_SMP
|
2016-05-11 00:24:37 +08:00
|
|
|
#define ENQUEUE_MIGRATED 0x20
|
2013-03-05 16:06:55 +08:00
|
|
|
#else
|
2016-05-11 00:24:37 +08:00
|
|
|
#define ENQUEUE_MIGRATED 0x00
|
2013-03-05 16:06:55 +08:00
|
|
|
#endif
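A hedged sketch of the SAVE/RESTORE pairing described in the comment block above (the function is illustrative; dequeue_task()/enqueue_task() stand in for the core.c helpers of the same name): the task is taken out of the runqueue, its attributes are changed, and it is put back so that no queueing state is lost.

/*
 * Sketch: change a scheduling attribute of @p on @rq while keeping its
 * queued/running state consistent (DEQUEUE_SAVE pairs with ENQUEUE_RESTORE,
 * DEQUEUE_MOVE with ENQUEUE_MOVE).
 */
static void demo_change_attr(struct rq *rq, struct task_struct *p)
{
	int queued = task_on_rq_queued(p);
	int running = task_current(rq, p);

	if (queued)
		dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_MOVE);
	if (running)
		put_prev_task(rq, p);

	/* ... modify priority / policy / group here ... */

	if (queued)
		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_MOVE);
	if (running)
		set_curr_task(rq, p);
}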
|
|
|
|
|
2014-02-14 19:25:08 +08:00
|
|
|
#define RETRY_TASK ((void *)-1UL)
|
|
|
|
|
2013-03-05 16:06:55 +08:00
|
|
|
struct sched_class {
|
|
|
|
const struct sched_class *next;
|
|
|
|
|
|
|
|
void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
void (*yield_task) (struct rq *rq);
|
|
|
|
bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
|
|
|
|
|
|
|
|
void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
2012-02-11 13:05:00 +08:00
|
|
|
/*
|
|
|
|
* It is the responsibility of the pick_next_task() method that will
|
|
|
|
* return the next task to call put_prev_task() on the @prev task or
|
|
|
|
* something equivalent.
|
2014-02-14 19:25:08 +08:00
|
|
|
*
|
|
|
|
* May return RETRY_TASK when it finds a higher prio class has runnable
|
|
|
|
* tasks.
|
2012-02-11 13:05:00 +08:00
|
|
|
*/
|
|
|
|
struct task_struct * (*pick_next_task) (struct rq *rq,
|
2015-08-02 01:25:08 +08:00
|
|
|
struct task_struct *prev,
|
|
|
|
struct pin_cookie cookie);
|
2013-03-05 16:06:55 +08:00
|
|
|
void (*put_prev_task) (struct rq *rq, struct task_struct *p);
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
2013-10-07 18:29:16 +08:00
|
|
|
int (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags);
|
2015-09-23 14:55:59 +08:00
|
|
|
void (*migrate_task_rq)(struct task_struct *p);
|
2013-03-05 16:06:55 +08:00
|
|
|
|
|
|
|
void (*task_woken) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
|
|
|
|
void (*set_cpus_allowed)(struct task_struct *p,
|
|
|
|
const struct cpumask *newmask);
|
|
|
|
|
|
|
|
void (*rq_online)(struct rq *rq);
|
|
|
|
void (*rq_offline)(struct rq *rq);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
void (*set_curr_task) (struct rq *rq);
|
|
|
|
void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
|
|
|
|
void (*task_fork) (struct task_struct *p);
|
2013-11-07 21:43:35 +08:00
|
|
|
void (*task_dead) (struct task_struct *p);
|
2013-03-05 16:06:55 +08:00
|
|
|
|
2014-10-27 22:40:52 +08:00
|
|
|
/*
|
|
|
|
* The switched_from() call is allowed to drop rq->lock, therefore we
|
|
|
|
* cannot assume the switched_from/switched_to pair is serialized by
|
|
|
|
* rq->lock. They are however serialized by p->pi_lock.
|
|
|
|
*/
|
2013-03-05 16:06:55 +08:00
|
|
|
void (*switched_from) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
void (*switched_to) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
|
|
|
|
int oldprio);
|
|
|
|
|
|
|
|
unsigned int (*get_rr_interval) (struct rq *rq,
|
|
|
|
struct task_struct *task);
|
|
|
|
|
sched/cputime: Fix clock_nanosleep()/clock_gettime() inconsistency
Commit d670ec13178d0 "posix-cpu-timers: Cure SMP wobbles" fixes one glibc
test case at the cost of breaking another one. After that commit, calling
clock_nanosleep(TIMER_ABSTIME, X) and then clock_gettime(&Y) can result
in the Y time being smaller than the X time.
The reproducer/tester can be found further below; it can be compiled and run with:
gcc -o tst-cpuclock2 tst-cpuclock2.c -pthread
while ./tst-cpuclock2 ; do : ; done
This reproducer, when running on a buggy kernel, will complain
about "clock_gettime difference too small".
The issue happens because on start, in thread_group_cputimer(), we initialize
sum_exec_runtime of the cputimer with the threads' runtime not yet accounted, and
then add the threads' runtime to the running cputimer again on the scheduler
tick, making its sum_exec_runtime bigger than the actual threads' runtime.
KOSAKI Motohiro posted a fix for this problem, but that patch was never
applied: https://lkml.org/lkml/2013/5/26/191 .
This patch takes a different approach to cure the problem. It calls
update_curr() when the cputimer starts, which assures we will have updated
stats of the running threads, and on the next scheduler tick we will account
only the runtime that elapsed since the cputimer start. That also assures we
have a consistent state between the cpu times of individual threads and the cpu
time of the process consisting of those threads.
Full reproducer (tst-cpuclock2.c):
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <time.h>
#include <pthread.h>
#include <stdint.h>
#include <inttypes.h>
/* Parameters for the Linux kernel ABI for CPU clocks. */
#define CPUCLOCK_SCHED 2
#define MAKE_PROCESS_CPUCLOCK(pid, clock) \
((~(clockid_t) (pid) << 3) | (clockid_t) (clock))
static pthread_barrier_t barrier;
/* Help advance the clock. */
static void *chew_cpu(void *arg)
{
pthread_barrier_wait(&barrier);
while (1) ;
return NULL;
}
/* Don't use the glibc wrapper. */
static int do_nanosleep(int flags, const struct timespec *req)
{
clockid_t clock_id = MAKE_PROCESS_CPUCLOCK(0, CPUCLOCK_SCHED);
return syscall(SYS_clock_nanosleep, clock_id, flags, req, NULL);
}
static int64_t tsdiff(const struct timespec *before, const struct timespec *after)
{
int64_t before_i = before->tv_sec * 1000000000ULL + before->tv_nsec;
int64_t after_i = after->tv_sec * 1000000000ULL + after->tv_nsec;
return after_i - before_i;
}
int main(void)
{
int result = 0;
pthread_t th;
pthread_barrier_init(&barrier, NULL, 2);
if (pthread_create(&th, NULL, chew_cpu, NULL) != 0) {
perror("pthread_create");
return 1;
}
pthread_barrier_wait(&barrier);
/* The test. */
struct timespec before, after, sleeptimeabs;
int64_t sleepdiff, diffabs;
const struct timespec sleeptime = {.tv_sec = 0,.tv_nsec = 100000000 };
/* The relative nanosleep. Not sure why this is needed, but its presence
seems to make it easier to reproduce the problem. */
if (do_nanosleep(0, &sleeptime) != 0) {
perror("clock_nanosleep");
return 1;
}
/* Get the current time. */
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &before) < 0) {
perror("clock_gettime[2]");
return 1;
}
/* Compute the absolute sleep time based on the current time. */
uint64_t nsec = before.tv_nsec + sleeptime.tv_nsec;
sleeptimeabs.tv_sec = before.tv_sec + nsec / 1000000000;
sleeptimeabs.tv_nsec = nsec % 1000000000;
/* Sleep for the computed time. */
if (do_nanosleep(TIMER_ABSTIME, &sleeptimeabs) != 0) {
perror("absolute clock_nanosleep");
return 1;
}
/* Get the time after the sleep. */
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &after) < 0) {
perror("clock_gettime[3]");
return 1;
}
/* The time after sleep should always be equal to or after the absolute sleep
time passed to clock_nanosleep. */
sleepdiff = tsdiff(&sleeptimeabs, &after);
if (sleepdiff < 0) {
printf("absolute clock_nanosleep woke too early: %" PRId64 "\n", sleepdiff);
result = 1;
printf("Before %llu.%09llu\n", before.tv_sec, before.tv_nsec);
printf("After %llu.%09llu\n", after.tv_sec, after.tv_nsec);
printf("Sleep %llu.%09llu\n", sleeptimeabs.tv_sec, sleeptimeabs.tv_nsec);
}
/* The difference between the timestamps taken before and after the
clock_nanosleep call should be equal to or more than the duration of the
sleep. */
diffabs = tsdiff(&before, &after);
if (diffabs < sleeptime.tv_nsec) {
printf("clock_gettime difference too small: %" PRId64 "\n", diffabs);
result = 1;
}
pthread_cancel(th);
return result;
}
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141112155843.GA24803@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-11-12 23:58:44 +08:00
|
|
|
void (*update_curr) (struct rq *rq);
|
|
|
|
|
2016-06-17 19:38:55 +08:00
|
|
|
#define TASK_SET_GROUP 0
|
|
|
|
#define TASK_MOVE_GROUP 1
|
|
|
|
|
2013-03-05 16:06:55 +08:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
2016-06-17 19:38:55 +08:00
|
|
|
void (*task_change_group) (struct task_struct *p, int type);
|
2013-03-05 16:06:55 +08:00
|
|
|
#endif
|
|
|
|
};
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2014-02-12 17:49:30 +08:00
|
|
|
static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
|
|
|
|
{
|
|
|
|
prev->sched_class->put_prev_task(rq, prev);
|
|
|
|
}
|
|
|
|
|
2016-09-21 04:00:38 +08:00
|
|
|
static inline void set_curr_task(struct rq *rq, struct task_struct *curr)
|
|
|
|
{
|
|
|
|
curr->sched_class->set_curr_task(rq);
|
|
|
|
}
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#define sched_class_highest (&stop_sched_class)
|
|
|
|
#define for_each_class(class) \
|
|
|
|
for (class = sched_class_highest; class; class = class->next)
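A hedged sketch of how the class list and RETRY_TASK are used together (simplified from the core pick loop; the function name is made up and error handling is elided): classes are walked in priority order, and RETRY_TASK means a higher-priority class became runnable while a lower one was being queried, so the walk restarts.

static struct task_struct *demo_pick_next_task(struct rq *rq,
					       struct task_struct *prev,
					       struct pin_cookie cookie)
{
	const struct sched_class *class;
	struct task_struct *p;

again:
	for_each_class(class) {
		p = class->pick_next_task(rq, prev, cookie);
		if (p) {
			if (unlikely(p == RETRY_TASK))
				goto again;
			return p;
		}
	}
	return NULL;	/* the idle class always returns a task in practice */
}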
|
|
|
|
|
|
|
|
extern const struct sched_class stop_sched_class;
|
2013-11-28 18:14:43 +08:00
|
|
|
extern const struct sched_class dl_sched_class;
|
2011-10-25 16:00:11 +08:00
|
|
|
extern const struct sched_class rt_sched_class;
|
|
|
|
extern const struct sched_class fair_sched_class;
|
|
|
|
extern const struct sched_class idle_sched_class;
|
|
|
|
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
|
2014-05-27 06:19:37 +08:00
|
|
|
extern void update_group_capacity(struct sched_domain *sd, int cpu);
|
2013-03-07 10:00:26 +08:00
|
|
|
|
2014-01-06 19:34:38 +08:00
|
|
|
extern void trigger_load_balance(struct rq *rq);
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2015-05-15 23:43:35 +08:00
|
|
|
extern void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask);
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif
|
|
|
|
|
2014-09-04 23:32:09 +08:00
|
|
|
#ifdef CONFIG_CPU_IDLE
|
|
|
|
static inline void idle_set_state(struct rq *rq,
|
|
|
|
struct cpuidle_state *idle_state)
|
|
|
|
{
|
|
|
|
rq->idle_state = idle_state;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct cpuidle_state *idle_get_state(struct rq *rq)
|
|
|
|
{
|
2016-09-21 04:34:51 +08:00
|
|
|
SCHED_WARN_ON(!rcu_read_lock_held());
|
2014-09-04 23:32:09 +08:00
|
|
|
return rq->idle_state;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static inline void idle_set_state(struct rq *rq,
|
|
|
|
struct cpuidle_state *idle_state)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct cpuidle_state *idle_get_state(struct rq *rq)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void sysrq_sched_debug_show(void);
|
|
|
|
extern void sched_init_granularity(void);
|
|
|
|
extern void update_max_interval(void);
|
2013-11-07 21:43:38 +08:00
|
|
|
|
|
|
|
extern void init_sched_dl_class(void);
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void init_sched_rt_class(void);
|
|
|
|
extern void init_sched_fair_class(void);
|
|
|
|
|
2014-06-29 04:03:57 +08:00
|
|
|
extern void resched_curr(struct rq *rq);
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void resched_cpu(int cpu);
|
|
|
|
|
|
|
|
extern struct rt_bandwidth def_rt_bandwidth;
|
|
|
|
extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
|
|
|
|
|
2013-11-07 21:43:45 +08:00
|
|
|
extern struct dl_bandwidth def_dl_bandwidth;
|
|
|
|
extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
|
2013-11-28 18:14:43 +08:00
|
|
|
extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
|
|
|
|
|
2013-11-07 21:43:45 +08:00
|
|
|
unsigned long to_ratio(u64 period, u64 runtime);
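As a minimal sketch of the admission-control arithmetic described in the commit message above (an illustration, not the in-tree to_ratio() definition; the 20-bit fixed-point scale, the helper name and the use of div64_u64() from <linux/math64.h> are assumptions here):

#include <linux/math64.h>

/*
 * Hedged sketch: convert a runtime/period pair into a fixed-point
 * utilization with 20 fractional bits, so that admission control can
 * compare the sum of task bandwidths in a root_domain against
 * M << 20 for M CPUs, per the commit message above.
 */
static inline unsigned long dl_bw_to_ratio_sketch(u64 period, u64 runtime)
{
        if (period == 0)                /* treat a zero period as "no bandwidth" */
                return 0;

        return div64_u64(runtime << 20, period);
}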
|
|
|
|
|
2015-07-15 08:04:39 +08:00
|
|
|
extern void init_entity_runnable_average(struct sched_entity *se);
|
2016-03-30 04:30:56 +08:00
|
|
|
extern void post_init_entity_util_avg(struct sched_entity *se);
|
2013-06-20 10:18:47 +08:00
|
|
|
|
2015-07-18 04:25:49 +08:00
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
|
|
|
extern bool sched_can_stop_tick(struct rq *rq);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Tick may be needed by tasks in the runqueue depending on their policy and
|
|
|
|
* requirements. If the tick is needed, let's send the target an IPI to kick it out of
|
|
|
|
* nohz mode if necessary.
|
|
|
|
*/
|
|
|
|
static inline void sched_update_tick_dependency(struct rq *rq)
|
|
|
|
{
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
if (!tick_nohz_full_enabled())
|
|
|
|
return;
|
|
|
|
|
|
|
|
cpu = cpu_of(rq);
|
|
|
|
|
|
|
|
if (!tick_nohz_full_cpu(cpu))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (sched_can_stop_tick(rq))
|
|
|
|
tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
|
|
|
|
else
|
|
|
|
tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static inline void sched_update_tick_dependency(struct rq *rq) { }
|
|
|
|
#endif
|
|
|
|
|
2014-05-09 07:00:14 +08:00
|
|
|
static inline void add_nr_running(struct rq *rq, unsigned count)
|
2011-10-25 16:00:11 +08:00
|
|
|
{
|
2014-05-09 07:00:14 +08:00
|
|
|
unsigned prev_nr = rq->nr_running;
|
|
|
|
|
|
|
|
rq->nr_running = prev_nr + count;
|
2013-04-20 20:35:09 +08:00
|
|
|
|
2014-05-09 07:00:14 +08:00
|
|
|
if (prev_nr < 2 && rq->nr_running >= 2) {
|
2014-06-24 03:16:49 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
if (!rq->rd->overload)
|
|
|
|
rq->rd->overload = true;
|
|
|
|
#endif
|
|
|
|
}
|
2015-07-18 04:25:49 +08:00
|
|
|
|
|
|
|
sched_update_tick_dependency(rq);
|
2011-10-25 16:00:11 +08:00
|
|
|
}
|
|
|
|
|
2014-05-09 07:00:14 +08:00
|
|
|
static inline void sub_nr_running(struct rq *rq, unsigned count)
|
2011-10-25 16:00:11 +08:00
|
|
|
{
|
2014-05-09 07:00:14 +08:00
|
|
|
rq->nr_running -= count;
|
2015-07-18 04:25:49 +08:00
|
|
|
/* Check if we still need preemption */
|
|
|
|
sched_update_tick_dependency(rq);
|
2011-10-25 16:00:11 +08:00
|
|
|
}
|
|
|
|
|
2013-05-03 09:39:05 +08:00
|
|
|
static inline void rq_last_tick_reset(struct rq *rq)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
|
|
|
rq->last_sched_tick = jiffies;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void update_rq_clock(struct rq *rq);
|
|
|
|
|
|
|
|
extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
|
|
|
extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
|
|
|
extern const_debug unsigned int sysctl_sched_time_avg;
|
|
|
|
extern const_debug unsigned int sysctl_sched_nr_migrate;
|
|
|
|
extern const_debug unsigned int sysctl_sched_migration_cost;
|
|
|
|
|
|
|
|
static inline u64 sched_avg_period(void)
|
|
|
|
{
|
|
|
|
return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_HRTICK
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Use hrtick when:
|
|
|
|
* - enabled by features
|
|
|
|
* - hrtimer is actually high res
|
|
|
|
*/
|
|
|
|
static inline int hrtick_enabled(struct rq *rq)
|
|
|
|
{
|
|
|
|
if (!sched_feat(HRTICK))
|
|
|
|
return 0;
|
|
|
|
if (!cpu_active(cpu_of(rq)))
|
|
|
|
return 0;
|
|
|
|
return hrtimer_is_hres_active(&rq->hrtick_timer);
|
|
|
|
}
|
|
|
|
|
|
|
|
void hrtick_start(struct rq *rq, u64 delay);
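As a usage illustration only (a sketch, not code from this file): a scheduling class would typically guard hrtick_start() with hrtick_enabled() and program the timer for whatever is left of the current slice. The helper name and the fixed 1 ms slice below are made-up placeholders.

/*
 * Hedged sketch: arm the high-resolution preemption tick for the
 * remainder of a (hypothetical) fixed 1 ms slice. Real callers derive
 * the remaining slice from their own accounting.
 */
static inline void hrtick_arm_sketch(struct rq *rq, u64 ran_ns)
{
        const u64 slice_ns = 1000000;           /* assumed 1 ms slice */

        if (!hrtick_enabled(rq))
                return;

        if (ran_ns < slice_ns)
                hrtick_start(rq, slice_ns - ran_ns);
}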
|
|
|
|
|
2011-11-22 22:20:07 +08:00
|
|
|
#else
|
|
|
|
|
|
|
|
static inline int hrtick_enabled(struct rq *rq)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#endif /* CONFIG_SCHED_HRTICK */
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
extern void sched_avg_update(struct rq *rq);
|
2015-03-23 21:19:05 +08:00
|
|
|
|
|
|
|
#ifndef arch_scale_freq_capacity
|
|
|
|
static __always_inline
|
|
|
|
unsigned long arch_scale_freq_capacity(struct sched_domain *sd, int cpu)
|
|
|
|
{
|
|
|
|
return SCHED_CAPACITY_SCALE;
|
|
|
|
}
|
|
|
|
#endif
|
2015-02-27 23:54:08 +08:00
|
|
|
|
2015-08-15 00:23:10 +08:00
|
|
|
#ifndef arch_scale_cpu_capacity
|
|
|
|
static __always_inline
|
|
|
|
unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
|
|
|
|
{
|
2015-08-15 07:04:41 +08:00
|
|
|
if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
|
2015-08-15 00:23:10 +08:00
|
|
|
return sd->smt_gain / sd->span_weight;
|
|
|
|
|
|
|
|
return SCHED_CAPACITY_SCALE;
|
|
|
|
}
|
|
|
|
#endif
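Both hooks above are overridable defaults: an architecture can provide frequency- or CPU-aware scaling by defining the corresponding macro before this header is seen. A hedged sketch of what such an override could look like (the helper name and the 80% capacity value are purely illustrative):

/*
 * Hedged sketch of an architecture override. The arch would map the
 * hook onto its helper from asm/topology.h, e.g.
 *   #define arch_scale_cpu_capacity example_scale_cpu_capacity
 * before this header is included. Returning less than
 * SCHED_CAPACITY_SCALE marks the CPU as slower than the fastest one.
 */
static __always_inline
unsigned long example_scale_cpu_capacity(struct sched_domain *sd, int cpu)
{
        return (SCHED_CAPACITY_SCALE * 4) / 5;  /* assume ~80% capacity */
}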
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
|
|
|
|
{
|
2015-02-27 23:54:08 +08:00
|
|
|
rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
|
2011-10-25 16:00:11 +08:00
|
|
|
sched_avg_update(rq);
|
|
|
|
}
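For context, a simplified, hedged sketch of the aging that sched_avg_update() is expected to apply to the rt_avg accumulated above (the helper name is made up and the real implementation in kernel/sched/core.c may differ in detail):

/*
 * Hedged sketch: each time half of the sysctl_sched_time_avg window
 * (i.e. sched_avg_period()) elapses, decay the accumulated RT time so
 * that old activity stops dominating the estimate.
 */
static inline void sched_avg_update_sketch(struct rq *rq)
{
        s64 period = sched_avg_period();

        while ((s64)(rq_clock(rq) - rq->age_stamp) > period) {
                rq->age_stamp += period;        /* consume one half-window */
                rq->rt_avg /= 2;                /* halve the RT average */
        }
}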
|
|
|
|
#else
|
|
|
|
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
|
|
|
|
static inline void sched_avg_update(struct rq *rq) { }
|
|
|
|
#endif
|
|
|
|
|
2015-08-01 03:28:18 +08:00
|
|
|
struct rq_flags {
|
|
|
|
unsigned long flags;
|
2015-08-02 01:25:08 +08:00
|
|
|
struct pin_cookie cookie;
|
2015-08-01 03:28:18 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
|
2016-04-28 22:16:33 +08:00
|
|
|
__acquires(rq->lock);
|
2015-08-01 03:28:18 +08:00
|
|
|
struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
|
2015-02-17 20:22:25 +08:00
|
|
|
__acquires(p->pi_lock)
|
2016-04-28 22:16:33 +08:00
|
|
|
__acquires(rq->lock);
|
2015-02-17 20:22:25 +08:00
|
|
|
|
2015-08-01 03:28:18 +08:00
|
|
|
static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
|
2015-02-17 20:22:25 +08:00
|
|
|
__releases(rq->lock)
|
|
|
|
{
|
2015-08-02 01:25:08 +08:00
|
|
|
lockdep_unpin_lock(&rq->lock, rf->cookie);
|
2015-02-17 20:22:25 +08:00
|
|
|
raw_spin_unlock(&rq->lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
2015-08-01 03:28:18 +08:00
|
|
|
task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
|
2015-02-17 20:22:25 +08:00
|
|
|
__releases(rq->lock)
|
|
|
|
__releases(p->pi_lock)
|
|
|
|
{
|
2015-08-02 01:25:08 +08:00
|
|
|
lockdep_unpin_lock(&rq->lock, rf->cookie);
|
2015-02-17 20:22:25 +08:00
|
|
|
raw_spin_unlock(&rq->lock);
|
2015-08-01 03:28:18 +08:00
|
|
|
raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
|
2015-02-17 20:22:25 +08:00
|
|
|
}
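A short usage sketch of the pairing expected from these helpers (illustrative only): task_rq_lock() returns the task's runqueue with p->pi_lock and rq->lock held and interrupts disabled, and the matching task_rq_unlock() must be given the same rq_flags cookie.

/*
 * Hedged sketch: read a value that is only stable while the task's
 * runqueue is locked, using the lock/unlock pair defined above.
 */
static inline int task_cpu_locked_sketch(struct task_struct *p)
{
        struct rq_flags rf;
        struct rq *rq;
        int cpu;

        rq = task_rq_lock(p, &rf);      /* takes p->pi_lock and rq->lock */
        cpu = cpu_of(rq);
        task_rq_unlock(rq, p, &rf);     /* drops both, restores irq state */

        return cpu;
}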
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
#ifdef CONFIG_PREEMPT
|
|
|
|
|
|
|
|
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* fair double_lock_balance: Safely acquires both rq->locks in a fair
|
|
|
|
* way at the expense of forcing extra atomic operations in all
|
|
|
|
* invocations. This assures that the double_lock is acquired using the
|
|
|
|
* same underlying policy as the spinlock_t on this architecture, which
|
|
|
|
* reduces latency compared to the unfair variant below. However, it
|
|
|
|
* also adds more overhead and therefore may reduce throughput.
|
|
|
|
*/
|
|
|
|
static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
|
|
|
|
__releases(this_rq->lock)
|
|
|
|
__acquires(busiest->lock)
|
|
|
|
__acquires(this_rq->lock)
|
|
|
|
{
|
|
|
|
raw_spin_unlock(&this_rq->lock);
|
|
|
|
double_rq_lock(this_rq, busiest);
|
|
|
|
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
#else
|
|
|
|
/*
|
|
|
|
* Unfair double_lock_balance: Optimizes throughput at the expense of
|
|
|
|
* latency by eliminating extra atomic operations when the locks are
|
|
|
|
* already in proper order on entry. This favors lower cpu-ids and will
|
|
|
|
* grant the double lock to lower cpus over higher ids under contention,
|
|
|
|
* regardless of entry order into the function.
|
|
|
|
*/
|
|
|
|
static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
|
|
|
|
__releases(this_rq->lock)
|
|
|
|
__acquires(busiest->lock)
|
|
|
|
__acquires(this_rq->lock)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(!raw_spin_trylock(&busiest->lock))) {
|
|
|
|
if (busiest < this_rq) {
|
|
|
|
raw_spin_unlock(&this_rq->lock);
|
|
|
|
raw_spin_lock(&busiest->lock);
|
|
|
|
raw_spin_lock_nested(&this_rq->lock,
|
|
|
|
SINGLE_DEPTH_NESTING);
|
|
|
|
ret = 1;
|
|
|
|
} else
|
|
|
|
raw_spin_lock_nested(&busiest->lock,
|
|
|
|
SINGLE_DEPTH_NESTING);
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
#endif /* CONFIG_PREEMPT */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* double_lock_balance - lock the busiest runqueue, this_rq is locked already.
|
|
|
|
*/
|
|
|
|
static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
|
|
|
|
{
|
|
|
|
if (unlikely(!irqs_disabled())) {
|
|
|
|
/* printk() doesn't work well under rq->lock */
|
|
|
|
raw_spin_unlock(&this_rq->lock);
|
|
|
|
BUG_ON(1);
|
|
|
|
}
|
|
|
|
|
|
|
|
return _double_lock_balance(this_rq, busiest);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void double_unlock_balance(struct rq *this_rq, struct rq *busiest)
|
|
|
|
__releases(busiest->lock)
|
|
|
|
{
|
|
|
|
raw_spin_unlock(&busiest->lock);
|
|
|
|
lock_set_subclass(&this_rq->lock.dep_map, 0, _RET_IP_);
|
|
|
|
}
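Usage sketch (illustrative, not from this file): a balancing path that already holds this_rq->lock uses double_lock_balance() to also take the busiest runqueue's lock, must re-validate any cached state when the call reports that this_rq->lock was dropped, and afterwards releases only the second lock.

/*
 * Hedged sketch of the calling convention: this_rq->lock is held on
 * entry and on exit; a non-zero return from double_lock_balance()
 * means this_rq->lock was released and re-acquired, so anything read
 * from this_rq before the call may be stale.
 */
static inline void pull_one_task_sketch(struct rq *this_rq, struct rq *busiest)
{
        if (double_lock_balance(this_rq, busiest)) {
                /* this_rq->lock was dropped; re-check this_rq state here */
        }

        /* ... move a task from busiest to this_rq ... */

        double_unlock_balance(this_rq, busiest);
}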
|
|
|
|
|
sched: Fix race in migrate_swap_stop()
There is a subtle race in migrate_swap, when task P, on CPU A, decides to swap
places with task T, on CPU B.
Task P:
- call migrate_swap
Task T:
- go to sleep, removing itself from the runqueue
Task P:
- double lock the runqueues on CPU A & B
Task T:
- get woken up, place itself on the runqueue of CPU C
Task P:
- see that task T is on a runqueue, and pretend to remove it
from the runqueue on CPU B
Now CPUs B & C both have corrupted scheduler data structures.
This patch fixes it by holding the pi_lock for both of the tasks
involved in the migrate swap. This prevents task T from waking up,
and placing itself onto another runqueue, until after migrate_swap
has released all locks.
This means that, when migrate_swap checks, task T will be either
on the runqueue where it was originally seen, or not on any
runqueue at all. migrate_swap() deals correctly with both of those cases.
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: hannes@cmpxchg.org
Cc: aarcange@redhat.com
Cc: srikar@linux.vnet.ibm.com
Cc: tglx@linutronix.de
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/20131010181722.GO13848@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-11 02:17:22 +08:00
|
|
|
static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
|
|
|
|
{
|
|
|
|
if (l1 > l2)
|
|
|
|
swap(l1, l2);
|
|
|
|
|
|
|
|
spin_lock(l1);
|
|
|
|
spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
|
|
|
|
}
|
|
|
|
|
2014-04-07 16:55:15 +08:00
|
|
|
static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
|
|
|
|
{
|
|
|
|
if (l1 > l2)
|
|
|
|
swap(l1, l2);
|
|
|
|
|
|
|
|
spin_lock_irq(l1);
|
|
|
|
spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
|
|
|
|
}
|
|
|
|
|
sched: Fix race in migrate_swap_stop()
2013-10-11 02:17:22 +08:00
|
|
|
static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
|
|
|
{
|
|
|
|
if (l1 > l2)
|
|
|
|
swap(l1, l2);
|
|
|
|
|
|
|
|
raw_spin_lock(l1);
|
|
|
|
raw_spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
|
|
|
|
}
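In the spirit of the migrate_swap() fix described above, a hedged sketch of how double_raw_lock() can serialize against wakeups of both tasks by taking their pi_locks before the runqueues are inspected (the function name and body are illustrative):

/*
 * Hedged sketch: with both pi_locks held, neither task can be woken
 * and moved to another runqueue while the two runqueues are being
 * double-locked and compared. Assumes interrupts are already disabled
 * by the caller (as in stopper context).
 */
static inline void lock_two_task_pi_locks_sketch(struct task_struct *src,
                                                 struct task_struct *dst)
{
        double_raw_lock(&src->pi_lock, &dst->pi_lock);

        /* ... double_rq_lock(task_rq(src), task_rq(dst)) and do the swap ... */

        raw_spin_unlock(&dst->pi_lock);
        raw_spin_unlock(&src->pi_lock);
}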
|
|
|
|
|
2011-10-25 16:00:11 +08:00
|
|
|
/*
|
|
|
|
* double_rq_lock - safely lock two runqueues
|
|
|
|
*
|
|
|
|
* Note this does not disable interrupts like task_rq_lock,
|
|
|
|
* you need to do so manually before calling.
|
|
|
|
*/
|
|
|
|
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
|
|
|
|
__acquires(rq1->lock)
|
|
|
|
__acquires(rq2->lock)
|
|
|
|
{
|
|
|
|
BUG_ON(!irqs_disabled());
|
|
|
|
if (rq1 == rq2) {
|
|
|
|
raw_spin_lock(&rq1->lock);
|
|
|
|
__acquire(rq2->lock); /* Fake it out ;) */
|
|
|
|
} else {
|
|
|
|
if (rq1 < rq2) {
|
|
|
|
raw_spin_lock(&rq1->lock);
|
|
|
|
raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
|
|
|
|
} else {
|
|
|
|
raw_spin_lock(&rq2->lock);
|
|
|
|
raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* double_rq_unlock - safely unlock two runqueues
|
|
|
|
*
|
|
|
|
* Note this does not restore interrupts like task_rq_unlock,
|
|
|
|
* you need to do so manually after calling.
|
|
|
|
*/
|
|
|
|
static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2)
|
|
|
|
__releases(rq1->lock)
|
|
|
|
__releases(rq2->lock)
|
|
|
|
{
|
|
|
|
raw_spin_unlock(&rq1->lock);
|
|
|
|
if (rq1 != rq2)
|
|
|
|
raw_spin_unlock(&rq2->lock);
|
|
|
|
else
|
|
|
|
__release(rq2->lock);
|
|
|
|
}
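As the comments above note, these helpers do not touch the interrupt state themselves; a hedged usage sketch with explicitly managed irq flags (illustrative name):

/*
 * Hedged sketch: the caller is responsible for interrupts, e.g. via
 * local_irq_save()/local_irq_restore() around the double-lock section.
 */
static inline void with_both_rqs_locked_sketch(struct rq *rq1, struct rq *rq2)
{
        unsigned long flags;

        local_irq_save(flags);
        double_rq_lock(rq1, rq2);

        /* ... operate on both runqueues here ... */

        double_rq_unlock(rq1, rq2);
        local_irq_restore(flags);
}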
|
|
|
|
|
|
|
|
#else /* CONFIG_SMP */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* double_rq_lock - safely lock two runqueues
|
|
|
|
*
|
|
|
|
* Note this does not disable interrupts like task_rq_lock,
|
|
|
|
* you need to do so manually before calling.
|
|
|
|
*/
|
|
|
|
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
|
|
|
|
__acquires(rq1->lock)
|
|
|
|
__acquires(rq2->lock)
|
|
|
|
{
|
|
|
|
BUG_ON(!irqs_disabled());
|
|
|
|
BUG_ON(rq1 != rq2);
|
|
|
|
raw_spin_lock(&rq1->lock);
|
|
|
|
__acquire(rq2->lock); /* Fake it out ;) */
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* double_rq_unlock - safely unlock two runqueues
|
|
|
|
*
|
|
|
|
* Note this does not restore interrupts like task_rq_unlock,
|
|
|
|
* you need to do so manually after calling.
|
|
|
|
*/
|
|
|
|
static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2)
|
|
|
|
__releases(rq1->lock)
|
|
|
|
__releases(rq2->lock)
|
|
|
|
{
|
|
|
|
BUG_ON(rq1 != rq2);
|
|
|
|
raw_spin_unlock(&rq1->lock);
|
|
|
|
__release(rq2->lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
#endif
|
|
|
|
|
|
|
|
extern struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq);
|
|
|
|
extern struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq);
|
2015-06-26 01:21:41 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
2011-10-25 16:00:11 +08:00
|
|
|
extern void print_cfs_stats(struct seq_file *m, int cpu);
|
|
|
|
extern void print_rt_stats(struct seq_file *m, int cpu);
|
2014-10-31 06:39:33 +08:00
|
|
|
extern void print_dl_stats(struct seq_file *m, int cpu);
|
2015-06-26 01:21:41 +08:00
|
|
|
extern void
|
|
|
|
print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);
|
2015-06-26 01:21:43 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
|
|
|
extern void
|
|
|
|
show_numa_stats(struct task_struct *p, struct seq_file *m);
|
|
|
|
extern void
|
|
|
|
print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
|
|
|
|
unsigned long tpf, unsigned long gsf, unsigned long gpf);
|
|
|
|
#endif /* CONFIG_NUMA_BALANCING */
|
|
|
|
#endif /* CONFIG_SCHED_DEBUG */
|
2011-10-25 16:00:11 +08:00
|
|
|
|
|
|
|
extern void init_cfs_rq(struct cfs_rq *cfs_rq);
|
2015-03-03 19:50:27 +08:00
|
|
|
extern void init_rt_rq(struct rt_rq *rt_rq);
|
|
|
|
extern void init_dl_rq(struct dl_rq *dl_rq);
|
2011-10-25 16:00:11 +08:00
|
|
|
|
2013-10-17 02:16:12 +08:00
|
|
|
extern void cfs_bandwidth_usage_inc(void);
|
|
|
|
extern void cfs_bandwidth_usage_dec(void);
|
2011-12-02 09:07:32 +08:00
|
|
|
|
2011-08-11 05:21:01 +08:00
|
|
|
#ifdef CONFIG_NO_HZ_COMMON
|
2011-12-02 09:07:32 +08:00
|
|
|
enum rq_nohz_flag_bits {
|
|
|
|
NOHZ_TICK_STOPPED,
|
|
|
|
NOHZ_BALANCE_KICK,
|
|
|
|
};
|
|
|
|
|
|
|
|
#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
|
2016-03-10 19:54:20 +08:00
|
|
|
|
|
|
|
extern void nohz_balance_exit_idle(unsigned int cpu);
|
|
|
|
#else
|
|
|
|
static inline void nohz_balance_exit_idle(unsigned int cpu) { }
|
2011-12-02 09:07:32 +08:00
|
|
|
#endif
|
2012-06-16 21:57:37 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_IRQ_TIME_ACCOUNTING
|
2016-09-26 08:29:20 +08:00
|
|
|
struct irqtime {
|
|
|
|
u64 hardirq_time;
|
|
|
|
u64 softirq_time;
|
|
|
|
u64 irq_start_time;
|
|
|
|
struct u64_stats_sync sync;
|
|
|
|
};
|
2012-06-16 21:57:37 +08:00
|
|
|
|
2016-09-26 08:29:20 +08:00
|
|
|
DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
|
2012-06-16 21:57:37 +08:00
|
|
|
|
|
|
|
static inline u64 irq_time_read(int cpu)
|
|
|
|
{
|
2016-09-26 08:29:20 +08:00
|
|
|
struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
|
|
|
|
unsigned int seq;
|
|
|
|
u64 total;
|
2012-06-16 21:57:37 +08:00
|
|
|
|
|
|
|
do {
|
2016-09-26 08:29:20 +08:00
|
|
|
seq = __u64_stats_fetch_begin(&irqtime->sync);
|
|
|
|
total = irqtime->softirq_time + irqtime->hardirq_time;
|
|
|
|
} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
|
2012-06-16 21:57:37 +08:00
|
|
|
|
2016-09-26 08:29:20 +08:00
|
|
|
return total;
|
2012-06-16 21:57:37 +08:00
|
|
|
}
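For symmetry, a hedged sketch of the writer side of this seqcount-protected accounting (the helper name is made up; the in-tree update path lives in kernel/sched/cputime.c):

/*
 * Hedged sketch: writers bracket updates with u64_stats_update_begin()/
 * u64_stats_update_end() so that irq_time_read() above sees either the
 * old or the new totals, never a torn 64-bit value on 32-bit kernels.
 */
static inline void irqtime_account_hardirq_sketch(u64 delta)
{
        struct irqtime *irqtime = this_cpu_ptr(&cpu_irqtime);

        u64_stats_update_begin(&irqtime->sync);
        irqtime->hardirq_time += delta;
        u64_stats_update_end(&irqtime->sync);
}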
|
|
|
|
#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
|
2016-03-11 03:44:47 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_CPU_FREQ
|
|
|
|
DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* cpufreq_update_util - Take a note about CPU utilization changes.
|
2016-08-10 09:11:17 +08:00
|
|
|
* @rq: Runqueue to carry out the update for.
|
2016-08-17 04:14:55 +08:00
|
|
|
* @flags: Update reason flags.
|
2016-03-11 03:44:47 +08:00
|
|
|
*
|
2016-08-17 04:14:55 +08:00
|
|
|
* This function is called by the scheduler on the CPU whose utilization is
|
|
|
|
* being updated.
|
2016-03-11 03:44:47 +08:00
|
|
|
*
|
|
|
|
* It can only be called from RCU-sched read-side critical sections.
|
|
|
|
*
|
|
|
|
* The way cpufreq is currently arranged requires it to evaluate the CPU
|
|
|
|
* performance state (frequency/voltage) on a regular basis to prevent it from
|
|
|
|
* being stuck in a completely inadequate performance level for too long.
|
|
|
|
* That is not guaranteed to happen if the updates are only triggered from CFS,
|
|
|
|
* though, because they may not be coming in if RT or deadline tasks are active
|
|
|
|
* all the time (or there are RT and DL tasks only).
|
|
|
|
*
|
|
|
|
* As a workaround for that issue, this function is called by the RT and DL
|
|
|
|
* sched classes to trigger extra cpufreq updates to prevent it from stalling,
|
|
|
|
* but that really is a band-aid. Going forward it should be replaced with
|
|
|
|
* solutions targeted more specifically at RT and DL tasks.
|
|
|
|
*/
|
2016-08-10 09:11:17 +08:00
|
|
|
static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
|
2016-03-11 03:44:47 +08:00
|
|
|
{
|
2016-08-17 04:14:55 +08:00
|
|
|
struct update_util_data *data;
|
|
|
|
|
|
|
|
data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
|
|
|
|
if (data)
|
2016-08-10 09:11:17 +08:00
|
|
|
data->func(data, rq_clock(rq), flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
|
|
|
|
{
|
|
|
|
if (cpu_of(rq) == smp_processor_id())
|
|
|
|
cpufreq_update_util(rq, flags);
|
2016-03-11 03:44:47 +08:00
|
|
|
}
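On the consumer side, a cpufreq governor installs the per-CPU hook that cpufreq_update_util() invokes. A hedged registration sketch, assuming the cpufreq_add_update_util_hook() helper exported for governors (the callback and structure names are illustrative):

/*
 * Hedged sketch of a governor-side hook: the callback is invoked from
 * scheduler paths, typically with the runqueue lock held, so it should
 * only do lightweight work (note the update, kick a worker) here.
 */
struct gov_cpu_sketch {
        struct update_util_data update_util;    /* embedded hook data */
};

static void gov_update_hook_sketch(struct update_util_data *data, u64 time,
                                   unsigned int flags)
{
        struct gov_cpu_sketch *gc = container_of(data, struct gov_cpu_sketch,
                                                 update_util);

        /* ... record @time and @flags, request a frequency re-evaluation ... */
        (void)gc;
}

static inline void gov_start_cpu_sketch(struct gov_cpu_sketch *gc, int cpu)
{
        cpufreq_add_update_util_hook(cpu, &gc->update_util,
                                     gov_update_hook_sketch);
}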
|
|
|
|
#else
|
2016-08-10 09:11:17 +08:00
|
|
|
static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
|
|
|
|
static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {}
|
2016-03-11 03:44:47 +08:00
|
|
|
#endif /* CONFIG_CPU_FREQ */
|
2016-03-25 00:42:50 +08:00
|
|
|
|
2016-04-02 07:09:12 +08:00
|
|
|
#ifdef arch_scale_freq_capacity
|
|
|
|
#ifndef arch_scale_freq_invariant
|
|
|
|
#define arch_scale_freq_invariant() (true)
|
|
|
|
#endif
|
|
|
|
#else /* arch_scale_freq_capacity */
|
|
|
|
#define arch_scale_freq_invariant() (false)
|
|
|
|
#endif
|