Merge branches 'consolidate.2019.04.09a', 'doc.2019.03.26b', 'fixes.2019.03.26b', 'srcu.2019.03.26b', 'stall.2019.03.26b' and 'torture.2019.03.26b' into HEAD
consolidate.2019.04.09a: Lingering RCU flavor consolidation cleanups.
doc.2019.03.26b: Documentation updates.
fixes.2019.03.26b: Miscellaneous fixes.
srcu.2019.03.26b: SRCU updates.
stall.2019.03.26b: RCU CPU stall warning updates.
torture.2019.03.26b: Torture-test updates.
commit 6cdbc07a5a
@@ -155,8 +155,7 @@ keeping lock contention under control at all tree levels regardless
 of the level of loading on the system.
 
 </p><p>RCU updaters wait for normal grace periods by registering
-RCU callbacks, either directly via <tt>call_rcu()</tt> and
-friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>),
+RCU callbacks, either directly via <tt>call_rcu()</tt>
 or indirectly via <tt>synchronize_rcu()</tt> and friends.
 RCU callbacks are represented by <tt>rcu_head</tt> structures,
 which are queued on <tt>rcu_data</tt> structures while they are
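For readers skimming the consolidated wording above, the two update-side idioms it names can be sketched as follows. This is a minimal illustration only; struct foo, foo_lock, gp, and the helper names are invented for the example rather than taken from the patch:

        struct foo {
                int a;
                struct rcu_head rh;
        };

        static DEFINE_SPINLOCK(foo_lock);
        static struct foo __rcu *gp;

        static void foo_reclaim(struct rcu_head *rhp)
        {
                kfree(container_of(rhp, struct foo, rh));
        }

        /* Direct: call_rcu() registers a callback to run after a grace period. */
        static void foo_update_async(struct foo *newp)
        {
                struct foo *oldp;

                spin_lock(&foo_lock);
                oldp = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));
                rcu_assign_pointer(gp, newp);
                spin_unlock(&foo_lock);
                if (oldp)
                        call_rcu(&oldp->rh, foo_reclaim);
        }

        /* Indirect: synchronize_rcu() blocks for a full grace period. */
        static void foo_update_sync(struct foo *newp)
        {
                struct foo *oldp;

                spin_lock(&foo_lock);
                oldp = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));
                rcu_assign_pointer(gp, newp);
                spin_unlock(&foo_lock);
                synchronize_rcu();
                kfree(oldp);
        }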
@@ -56,6 +56,7 @@ sections.
 RCU-preempt Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=y</tt> kernels implement RCU-preempt.
 The overall flow of the handling of a given CPU by an RCU-preempt
 expedited grace period is shown in the following diagram:
 
@@ -139,6 +140,7 @@ or offline, among other things.
 RCU-sched Expedited Grace Periods</a></h2>
 
 <p>
+<tt>CONFIG_PREEMPT=n</tt> kernels implement RCU-sched.
 The overall flow of the handling of a given CPU by an RCU-sched
 expedited grace period is shown in the following diagram:
 
@@ -146,7 +148,7 @@ expedited grace period is shown in the following diagram:
 
 <p>
 As with RCU-preempt, RCU-sched's
-<tt>synchronize_sched_expedited()</tt> ignores offline and
+<tt>synchronize_rcu_expedited()</tt> ignores offline and
 idle CPUs, again because they are in remotely detectable
 quiescent states.
 However, because the
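Whatever the build, the rename in the last hunk means an expedited grace period is requested the same way in both configurations. A hedged three-line sketch, with gp, oldp, and newp as placeholders:

        rcu_assign_pointer(gp, newp);
        synchronize_rcu_expedited();    /* Fast grace period; may IPI non-idle CPUs. */
        kfree(oldp);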
@@ -34,12 +34,11 @@ Similarly, any code that happens before the beginning of a given RCU grace
 period is guaranteed to see the effects of all accesses following the end
 of that grace period that are within RCU read-side critical sections.
 
-<p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
-for which RCU-sched read-side critical sections include any region
+<p>Note well that RCU-sched read-side critical sections include any region
 of code for which preemption is disabled.
 Given that each individual machine instruction can be thought of as
 an extremely small region of preemption-disabled code, one can think of
-<tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids.
+<tt>synchronize_rcu()</tt> as <tt>smp_mb()</tt> on steroids.
 
 <p>RCU updaters use this guarantee by splitting their updates into
 two phases, one of which is executed before the grace period and
@@ -81,18 +81,19 @@ currently executing on some other CPU. We therefore cannot free
 up any data structures used by the old NMI handler until execution
 of it completes on all other CPUs.
 
-One way to accomplish this is via synchronize_sched(), perhaps as
+One way to accomplish this is via synchronize_rcu(), perhaps as
 follows:
 
         unset_nmi_callback();
-        synchronize_sched();
+        synchronize_rcu();
         kfree(my_nmi_data);
 
-This works because synchronize_sched() blocks until all CPUs complete
-any preemption-disabled segments of code that they were executing.
-Since NMI handlers disable preemption, synchronize_sched() is guaranteed
+This works because (as of v4.20) synchronize_rcu() blocks until all
+CPUs complete any preemption-disabled segments of code that they were
+executing.
+Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
 not to return until all ongoing NMI handlers exit. It is therefore safe
-to free up the handler's data as soon as synchronize_sched() returns.
+to free up the handler's data as soon as synchronize_rcu() returns.
 
 Important note: for this to work, the architecture in question must
 invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
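A slightly fuller, self-contained sketch of the teardown pattern documented above; it assumes the document's unset_nmi_callback(), and struct my_nmi_state is an invented type:

        static struct my_nmi_state *my_nmi_data;

        static void teardown_my_nmi_handler(void)
        {
                struct my_nmi_state *old = my_nmi_data;

                unset_nmi_callback();   /* No new invocations of the old handler. */
                synchronize_rcu();      /* All in-flight NMI handlers have exited. */
                kfree(old);             /* Now safe to free the handler's data. */
        }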
@@ -86,10 +86,8 @@ even on a UP system. So do not do it! Even on a UP system, the RCU
 infrastructure -must- respect grace periods, and -must- invoke callbacks
 from a known environment in which no locks are held.
 
-It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
-immediately on an UP system. It is also safe for synchronize_rcu()
-to return immediately on UP systems, except when running preemptable
-RCU.
+Note that it -is- safe for synchronize_rcu() to return immediately on
+UP systems, including !PREEMPT SMP builds running on UP systems.
 
 Quick Quiz #3: Why can't synchronize_rcu() return immediately on
         UP systems running preemptable RCU?
@@ -182,16 +182,13 @@ over a rather long period of time, but improvements are always welcome!
        when publicizing a pointer to a structure that can
        be traversed by an RCU read-side critical section.
 
-5.     If call_rcu(), or a related primitive such as call_rcu_bh(),
-       call_rcu_sched(), or call_srcu() is used, the callback function
-       will be called from softirq context. In particular, it cannot
-       block.
+5.     If call_rcu() or call_srcu() is used, the callback function will
+       be called from softirq context. In particular, it cannot block.
 
-6.     Since synchronize_rcu() can block, it cannot be called from
-       any sort of irq context. The same rule applies for
-       synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
-       synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
-       synchronize_sched_expedite(), and synchronize_srcu_expedited().
+6.     Since synchronize_rcu() can block, it cannot be called
+       from any sort of irq context. The same rule applies
+       for synchronize_srcu(), synchronize_rcu_expedited(), and
+       synchronize_srcu_expedited().
 
        The expedited forms of these primitives have the same semantics
        as the non-expedited forms, but expediting is both expensive and
@@ -212,20 +209,20 @@ over a rather long period of time, but improvements are always welcome!
        of the system, especially to real-time workloads running on
        the rest of the system.
 
-7.     If the updater uses call_rcu() or synchronize_rcu(), then the
-       corresponding readers must use rcu_read_lock() and
-       rcu_read_unlock(). If the updater uses call_rcu_bh() or
-       synchronize_rcu_bh(), then the corresponding readers must
-       use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the
-       updater uses call_rcu_sched() or synchronize_sched(), then
-       the corresponding readers must disable preemption, possibly
-       by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
-       If the updater uses synchronize_srcu() or call_srcu(), then
-       the corresponding readers must use srcu_read_lock() and
+7.     As of v4.20, a given kernel implements only one RCU flavor,
+       which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
+       If the updater uses call_rcu() or synchronize_rcu(),
+       then the corresponding readers may use rcu_read_lock() and
+       rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+       or any pair of primitives that disables and re-enables preemption,
+       for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
+       If the updater uses synchronize_srcu() or call_srcu(),
+       then the corresponding readers must use srcu_read_lock() and
        srcu_read_unlock(), and with the same srcu_struct. The rules for
        the expedited primitives are the same as for their non-expedited
        counterparts. Mixing things up will result in confusion and
-       broken kernels.
+       broken kernels, and has even resulted in an exploitable security
+       issue.
 
        One exception to this rule: rcu_read_lock() and rcu_read_unlock()
        may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
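To illustrate the relaxed matching rule in the rewritten item 7: with the consolidated flavor, any of the following read-side markings pairs correctly with a call_rcu() or synchronize_rcu() updater. The gp pointer and do_something_with() are placeholders:

        /* Equivalently: rcu_read_lock(), rcu_read_lock_bh(),
         * preempt_disable(), or local_irq_disable(). */
        rcu_read_lock_sched();
        p = rcu_dereference(gp);
        if (p)
                do_something_with(p->a);
        rcu_read_unlock_sched();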
@@ -288,8 +285,7 @@ over a rather long period of time, but improvements are always welcome!
        d.      Periodically invoke synchronize_rcu(), permitting a limited
                number of updates per grace period.
 
-       The same cautions apply to call_rcu_bh(), call_rcu_sched(),
-       call_srcu(), and kfree_rcu().
+       The same cautions apply to call_srcu() and kfree_rcu().
 
        Note that although these primitives do take action to avoid memory
        exhaustion when any given CPU has too many callbacks, a determined
@@ -322,7 +318,7 @@ over a rather long period of time, but improvements are always welcome!
 
 11.    Any lock acquired by an RCU callback must be acquired elsewhere
        with softirq disabled, e.g., via spin_lock_irqsave(),
-       spin_lock_bh(), etc. Failing to disable irq on a given
+       spin_lock_bh(), etc. Failing to disable softirq on a given
        acquisition of that lock will result in deadlock as soon as
        the RCU softirq handler happens to run your RCU callback while
        interrupting that acquisition's critical section.
@@ -335,13 +331,16 @@ over a rather long period of time, but improvements are always welcome!
        must use whatever locking or other synchronization is required
        to safely access and/or modify that data structure.
 
-       RCU callbacks are -usually- executed on the same CPU that executed
-       the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
-       but are by -no- means guaranteed to be. For example, if a given
-       CPU goes offline while having an RCU callback pending, then that
-       RCU callback will execute on some surviving CPU. (If this was
-       not the case, a self-spawning RCU callback would prevent the
-       victim CPU from ever going offline.)
+       Do not assume that RCU callbacks will be executed on the same
+       CPU that executed the corresponding call_rcu() or call_srcu().
+       For example, if a given CPU goes offline while having an RCU
+       callback pending, then that RCU callback will execute on some
+       surviving CPU. (If this was not the case, a self-spawning RCU
+       callback would prevent the victim CPU from ever going offline.)
+       Furthermore, CPUs designated by rcu_nocbs= might well -always-
+       have their RCU callbacks executed on some other CPUs, in fact,
+       for some real-time workloads, this is the whole point of using
+       the rcu_nocbs= kernel boot parameter.
 
 13.    Unlike other forms of RCU, it -is- permissible to block in an
        SRCU read-side critical section (demarked by srcu_read_lock()
@@ -381,11 +380,11 @@ over a rather long period of time, but improvements are always welcome!
 
        SRCU's expedited primitive (synchronize_srcu_expedited())
        never sends IPIs to other CPUs, so it is easier on
-       real-time workloads than is synchronize_rcu_expedited(),
-       synchronize_rcu_bh_expedited() or synchronize_sched_expedited().
+       real-time workloads than is synchronize_rcu_expedited().
 
-       Note that rcu_dereference() and rcu_assign_pointer() relate to
-       SRCU just as they do to other forms of RCU.
+       Note that rcu_assign_pointer() relates to SRCU just as it does to
+       other forms of RCU, but instead of rcu_dereference() you should
+       use srcu_dereference() in order to avoid lockdep splats.
 
 14.    The whole point of call_rcu(), synchronize_rcu(), and friends
        is to wait until all pre-existing readers have finished before
@@ -405,6 +404,9 @@ over a rather long period of time, but improvements are always welcome!
        read-side critical sections. It is the responsibility of the
        RCU update-side primitives to deal with this.
 
+       For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
+       immediately after an srcu_read_unlock() to get a full barrier.
+
 16.    Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
        __rcu sparse checks to validate your RCU code. These can help
        find problems as follows:
@@ -428,22 +430,19 @@ over a rather long period of time, but improvements are always welcome!
        These debugging aids can help you find problems that are
        otherwise extremely difficult to spot.
 
-17.    If you register a callback using call_rcu(), call_rcu_bh(),
-       call_rcu_sched(), or call_srcu(), and pass in a function defined
-       within a loadable module, then it is necessary to wait for
-       all pending callbacks to be invoked after the last invocation
-       and before unloading that module. Note that it is absolutely
-       -not- sufficient to wait for a grace period! The current (say)
-       synchronize_rcu() implementation waits only for all previous
-       callbacks registered on the CPU that synchronize_rcu() is running
-       on, but it is -not- guaranteed to wait for callbacks registered
-       on other CPUs.
+17.    If you register a callback using call_rcu() or call_srcu(), and
+       pass in a function defined within a loadable module, then it is
+       necessary to wait for all pending callbacks to be invoked after
+       the last invocation and before unloading that module. Note that
+       it is absolutely -not- sufficient to wait for a grace period!
+       The current (say) synchronize_rcu() implementation is -not-
+       guaranteed to wait for callbacks registered on other CPUs.
+       Or even on the current CPU if that CPU recently went offline
+       and came back online.
 
        You instead need to use one of the barrier functions:
 
        o       call_rcu() -> rcu_barrier()
-       o       call_rcu_bh() -> rcu_barrier()
-       o       call_rcu_sched() -> rcu_barrier()
        o       call_srcu() -> srcu_barrier()
 
        However, these barrier functions are absolutely -not- guaranteed
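A hedged sketch of the unload-time ordering that item 17 requires; the mymod names, my_srcu, and stop_posting_callbacks() are invented for illustration:

        static void my_cb(struct rcu_head *rhp)
        {
                kfree(container_of(rhp, struct my_obj, rh));
        }

        static void __exit mymod_exit(void)
        {
                stop_posting_callbacks();       /* Hypothetical: no new call_rcu()/call_srcu(). */
                rcu_barrier();                  /* Wait for pending call_rcu() callbacks. */
                srcu_barrier(&my_srcu);         /* Wait for pending call_srcu() callbacks. */
        }
        module_exit(mymod_exit);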
@@ -52,10 +52,10 @@ o If I am running on a uniprocessor kernel, which can only do one
 o      How can I see where RCU is currently used in the Linux kernel?
 
        Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu",
-       "rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh",
-       "srcu_read_lock", "srcu_read_unlock", "synchronize_rcu",
-       "synchronize_net", "synchronize_srcu", and the other RCU
-       primitives. Or grab one of the cscope databases from:
+       "rcu_read_lock_bh", "rcu_read_unlock_bh", "srcu_read_lock",
+       "srcu_read_unlock", "synchronize_rcu", "synchronize_net",
+       "synchronize_srcu", and the other RCU primitives. Or grab one
+       of the cscope databases from:
 
        http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html
 
@@ -351,3 +351,106 @@ garbage values.
 
 In short, rcu_dereference() is -not- optional when you are going to
 dereference the resulting pointer.
+
+
+WHICH MEMBER OF THE rcu_dereference() FAMILY SHOULD YOU USE?
+
+First, please avoid using rcu_dereference_raw() and also please avoid
+using rcu_dereference_check() and rcu_dereference_protected() with a
+second argument with a constant value of 1 (or true, for that matter).
+With that caution out of the way, here is some guidance for which
+member of the rcu_dereference() to use in various situations:
+
+1.     If the access needs to be within an RCU read-side critical
+       section, use rcu_dereference(). With the new consolidated
+       RCU flavors, an RCU read-side critical section is entered
+       using rcu_read_lock(), anything that disables bottom halves,
+       anything that disables interrupts, or anything that disables
+       preemption.
+
+2.     If the access might be within an RCU read-side critical section
+       on the one hand, or protected by (say) my_lock on the other,
+       use rcu_dereference_check(), for example:
+
+               p1 = rcu_dereference_check(p->rcu_protected_pointer,
+                                          lockdep_is_held(&my_lock));
+
+
+3.     If the access might be within an RCU read-side critical section
+       on the one hand, or protected by either my_lock or your_lock on
+       the other, again use rcu_dereference_check(), for example:
+
+               p1 = rcu_dereference_check(p->rcu_protected_pointer,
+                                          lockdep_is_held(&my_lock) ||
+                                          lockdep_is_held(&your_lock));
+
+4.     If the access is on the update side, so that it is always protected
+       by my_lock, use rcu_dereference_protected():
+
+               p1 = rcu_dereference_protected(p->rcu_protected_pointer,
+                                              lockdep_is_held(&my_lock));
+
+       This can be extended to handle multiple locks as in #3 above,
+       and both can be extended to check other conditions as well.
+
+5.     If the protection is supplied by the caller, and is thus unknown
+       to this code, that is the rare case when rcu_dereference_raw()
+       is appropriate. In addition, rcu_dereference_raw() might be
+       appropriate when the lockdep expression would be excessively
+       complex, except that a better approach in that case might be to
+       take a long hard look at your synchronization design. Still,
+       there are data-locking cases where any one of a very large number
+       of locks or reference counters suffices to protect the pointer,
+       so rcu_dereference_raw() does have its place.
+
+       However, its place is probably quite a bit smaller than one
+       might expect given the number of uses in the current kernel.
+       Ditto for its synonym, rcu_dereference_check( ... , 1), and
+       its close relative, rcu_dereference_protected(... , 1).
+
+
+SPARSE CHECKING OF RCU-PROTECTED POINTERS
+
+The sparse static-analysis tool checks for direct access to RCU-protected
+pointers, which can result in "interesting" bugs due to compiler
+optimizations involving invented loads and perhaps also load tearing.
+For example, suppose someone mistakenly does something like this:
+
+       p = q->rcu_protected_pointer;
+       do_something_with(p->a);
+       do_something_else_with(p->b);
+
+If register pressure is high, the compiler might optimize "p" out
+of existence, transforming the code to something like this:
+
+       do_something_with(q->rcu_protected_pointer->a);
+       do_something_else_with(q->rcu_protected_pointer->b);
+
+This could fatally disappoint your code if q->rcu_protected_pointer
+changed in the meantime. Nor is this a theoretical problem: Exactly
+this sort of bug cost Paul E. McKenney (and several of his innocent
+colleagues) a three-day weekend back in the early 1990s.
+
+Load tearing could of course result in dereferencing a mashup of a pair
+of pointers, which also might fatally disappoint your code.
+
+These problems could have been avoided simply by making the code instead
+read as follows:
+
+       p = rcu_dereference(q->rcu_protected_pointer);
+       do_something_with(p->a);
+       do_something_else_with(p->b);
+
+Unfortunately, these sorts of bugs can be extremely hard to spot during
+review. This is where the sparse tool comes into play, along with the
+"__rcu" marker. If you mark a pointer declaration, whether in a structure
+or as a formal parameter, with "__rcu", which tells sparse to complain if
+this pointer is accessed directly. It will also cause sparse to complain
+if a pointer not marked with "__rcu" is accessed using rcu_dereference()
+and friends. For example, ->rcu_protected_pointer might be declared as
+follows:
+
+       struct foo __rcu *rcu_protected_pointer;
+
+Use of "__rcu" is opt-in. If you choose not to use it, then you should
+ignore the sparse warnings.
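To make the sparse discussion above concrete, here is a small sketch of an annotated declaration plus a conforming reader; struct bar and the surrounding function are invented for the example:

        struct foo {
                int a;
                int b;
        };

        struct bar {
                struct foo __rcu *rcu_protected_pointer;        /* sparse-checked */
        };

        static void reader(struct bar *q)
        {
                struct foo *p;

                rcu_read_lock();
                p = rcu_dereference(q->rcu_protected_pointer);  /* clean under sparse */
                if (p)
                        pr_info("a=%d b=%d\n", p->a, p->b);
                /* "p = q->rcu_protected_pointer;" here would draw a sparse warning. */
                rcu_read_unlock();
        }

Such code can be checked by building with sparse enabled, for example "make C=1" to check files about to be recompiled or "make C=2" to check all source files.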
@@ -83,16 +83,15 @@ Pseudo-code using rcu_barrier() is as follows:
 2. Execute rcu_barrier().
 3. Allow the module to be unloaded.
 
-There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
-functions for the other flavors of RCU, and you of course must match
-the flavor of rcu_barrier() with that of call_rcu(). If your module
-uses multiple flavors of call_rcu(), then it must also use multiple
+There is also an srcu_barrier() function for SRCU, and you of course
+must match the flavor of rcu_barrier() with that of call_rcu(). If your
+module uses multiple flavors of call_rcu(), then it must also use multiple
 flavors of rcu_barrier() when unloading that module. For example, if
-it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
+it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
 srcu_struct_2(), then the following three lines of code will be required
 when unloading:
 
- 1 rcu_barrier_bh();
+ 1 rcu_barrier();
  2 srcu_barrier(&srcu_struct_1);
  3 srcu_barrier(&srcu_struct_2);
 
@@ -185,12 +184,12 @@ module invokes call_rcu() from timers, you will need to first cancel all
 the timers, and only then invoke rcu_barrier() to wait for any remaining
 RCU callbacks to complete.
 
-Of course, if your module uses call_rcu_bh(), you will need to invoke
-rcu_barrier_bh() before unloading. Similarly, if your module uses
-call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
-unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
-call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
-rcu_barrier_bh(), and rcu_barrier_sched().
+Of course, if your module uses call_rcu(), you will need to invoke
+rcu_barrier() before unloading. Similarly, if your module uses
+call_srcu(), you will need to invoke srcu_barrier() before unloading,
+and on the same srcu_struct structure. If your module uses call_rcu()
+-and- call_srcu(), then you will need to invoke rcu_barrier() -and-
+srcu_barrier().
 
 
 Implementing rcu_barrier()
@@ -223,8 +222,8 @@ shown below. Note that the final "1" in on_each_cpu()'s argument list
 ensures that all the calls to rcu_barrier_func() will have completed
 before on_each_cpu() returns. Line 9 then waits for the completion.
 
-This code was rewritten in 2008 to support rcu_barrier_bh() and
-rcu_barrier_sched() in addition to the original rcu_barrier().
+This code was rewritten in 2008 and several times thereafter, but this
+still gives the general idea.
 
 The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
 to post an RCU callback, as follows:
@@ -310,7 +310,7 @@ reader, updater, and reclaimer.
 
 
     rcu_assign_pointer()
                             +--------+
     +---------------------->| reader |---------+
     |                       +--------+         |
     |                           |              |
@@ -318,12 +318,12 @@ reader, updater, and reclaimer.
     |                           |              | rcu_read_lock()
     |                           |              | rcu_read_unlock()
     |        rcu_dereference()  |              |
     +---------+                 |              |
-    | updater |<---------------------+              |
+    | updater |<----------------+              |
     +---------+                                V
     |                                    +-----------+
     +----------------------------------->| reclaimer |
                                          +-----------+
       Defer:
       synchronize_rcu() & call_rcu()
 
|
@ -3623,7 +3623,9 @@
|
|||||||
see CONFIG_RAS_CEC help text.
|
see CONFIG_RAS_CEC help text.
|
||||||
|
|
||||||
rcu_nocbs= [KNL]
|
rcu_nocbs= [KNL]
|
||||||
The argument is a cpu list, as described above.
|
The argument is a cpu list, as described above,
|
||||||
|
except that the string "all" can be used to
|
||||||
|
specify every CPU on the system.
|
||||||
|
|
||||||
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
|
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
|
||||||
the specified list of CPUs to be no-callback CPUs.
|
the specified list of CPUs to be no-callback CPUs.
|
||||||
|
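For example, on a CONFIG_RCU_NOCB_CPU=y kernel, either of the following boot-line fragments offloads callback invocation, the second using the newly documented "all" string (the CPU list in the first is purely illustrative):

        rcu_nocbs=1-7
        rcu_nocbs=all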
MAINTAINERS (16 changed lines)
@@ -8983,7 +8983,7 @@ R: Daniel Lustig <dlustig@nvidia.com>
 L:     linux-kernel@vger.kernel.org
 L:     linux-arch@vger.kernel.org
 S:     Supported
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:     tools/memory-model/
 F:     Documentation/atomic_bitops.txt
 F:     Documentation/atomic_t.txt
@@ -13031,9 +13031,9 @@ M: Josh Triplett <josh@joshtriplett.org>
 R:     Steven Rostedt <rostedt@goodmis.org>
 R:     Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 R:     Lai Jiangshan <jiangshanlai@gmail.com>
-L:     linux-kernel@vger.kernel.org
+L:     rcu@vger.kernel.org
 S:     Supported
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:     tools/testing/selftests/rcutorture
 
 RDC R-321X SoC
@@ -13079,10 +13079,10 @@ R: Steven Rostedt <rostedt@goodmis.org>
 R:     Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 R:     Lai Jiangshan <jiangshanlai@gmail.com>
 R:     Joel Fernandes <joel@joelfernandes.org>
-L:     linux-kernel@vger.kernel.org
+L:     rcu@vger.kernel.org
 W:     http://www.rdrop.com/users/paulmck/RCU/
 S:     Supported
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:     Documentation/RCU/
 X:     Documentation/RCU/torture.txt
 F:     include/linux/rcu*
@@ -14234,10 +14234,10 @@ M: "Paul E. McKenney" <paulmck@linux.ibm.com>
 M:     Josh Triplett <josh@joshtriplett.org>
 R:     Steven Rostedt <rostedt@goodmis.org>
 R:     Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-L:     linux-kernel@vger.kernel.org
+L:     rcu@vger.kernel.org
 W:     http://www.rdrop.com/users/paulmck/RCU/
 S:     Supported
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:     include/linux/srcu*.h
 F:     kernel/rcu/srcu*.c
 
@@ -15684,7 +15684,7 @@ M: "Paul E. McKenney" <paulmck@linux.ibm.com>
 M:     Josh Triplett <josh@joshtriplett.org>
 L:     linux-kernel@vger.kernel.org
 S:     Supported
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
 F:     Documentation/RCU/torture.txt
 F:     kernel/torture.c
 F:     kernel/rcu/rcutorture.c
@@ -388,7 +388,7 @@ static void nvme_free_ns_head(struct kref *ref)
        nvme_mpath_remove_disk(head);
        ida_simple_remove(&head->subsys->ns_ida, head->instance);
        list_del_init(&head->entry);
-       cleanup_srcu_struct_quiesced(&head->srcu);
+       cleanup_srcu_struct(&head->srcu);
        nvme_put_subsystem(head->subsys);
        kfree(head);
 }
@@ -878,9 +878,11 @@ static inline void rcu_head_init(struct rcu_head *rhp)
 static inline bool
 rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 {
-       if (READ_ONCE(rhp->func) == f)
+       rcu_callback_t func = READ_ONCE(rhp->func);
+
+       if (func == f)
                return true;
-       WARN_ON_ONCE(READ_ONCE(rhp->func) != (rcu_callback_t)~0L);
+       WARN_ON_ONCE(func != (rcu_callback_t)~0L);
        return false;
 }
 
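A hedged sketch of how this helper pair is intended to be used from debugging code; struct my_obj, objp, and my_cb() are invented names:

        struct my_obj {
                struct rcu_head rh;
        };

        static void my_cb(struct rcu_head *rhp)
        {
                kfree(container_of(rhp, struct my_obj, rh));
        }

        /* At allocation time, mark the rcu_head as not yet posted. */
        rcu_head_init(&objp->rh);

        /* ...later: call_rcu(&objp->rh, my_cb); ... */

        /* Debug-side check: true once the rcu_head has been passed to
         * call_rcu() with my_cb. The single READ_ONCE() in the patched
         * version keeps the comparison and the WARN_ON_ONCE() looking
         * at the same snapshot of ->func. */
        if (rcu_head_after_call_rcu(&objp->rh, my_cb))
                pr_info("my_obj callback is pending\n");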
@@ -56,45 +56,11 @@ struct srcu_struct { };
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
               void (*func)(struct rcu_head *head));
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced);
+void cleanup_srcu_struct(struct srcu_struct *ssp);
 int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
 void synchronize_srcu(struct srcu_struct *ssp);
 
-/**
- * cleanup_srcu_struct - deconstruct a sleep-RCU structure
- * @ssp: structure to clean up.
- *
- * Must invoke this after you are finished using a given srcu_struct that
- * was initialized via init_srcu_struct(), else you leak memory.
- */
-static inline void cleanup_srcu_struct(struct srcu_struct *ssp)
-{
-       _cleanup_srcu_struct(ssp, false);
-}
-
-/**
- * cleanup_srcu_struct_quiesced - deconstruct a quiesced sleep-RCU structure
- * @ssp: structure to clean up.
- *
- * Must invoke this after you are finished using a given srcu_struct that
- * was initialized via init_srcu_struct(), else you leak memory. Also,
- * all grace-period processing must have completed.
- *
- * "Completed" means that the last synchronize_srcu() and
- * synchronize_srcu_expedited() calls must have returned before the call
- * to cleanup_srcu_struct_quiesced(). It also means that the callback
- * from the last call_srcu() must have been invoked before the call to
- * cleanup_srcu_struct_quiesced(), but you can use srcu_barrier() to help
- * with this last. Violating these rules will get you a WARN_ON() splat
- * (with high probability, anyway), and will also cause the srcu_struct
- * to be leaked.
- */
-static inline void cleanup_srcu_struct_quiesced(struct srcu_struct *ssp)
-{
-       _cleanup_srcu_struct(ssp, true);
-}
-
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
 /**
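With the quiesced variant gone, the whole SRCU lifecycle reduces to the following hedged sketch; my_srcu and the function names are invented:

        static struct srcu_struct my_srcu;

        static int __init my_init(void)
        {
                return init_srcu_struct(&my_srcu);
        }

        static void my_reader(void)
        {
                int idx;

                idx = srcu_read_lock(&my_srcu);
                /* Sleepable read-side critical section. */
                srcu_read_unlock(&my_srcu, idx);
        }

        static void __exit my_exit(void)
        {
                srcu_barrier(&my_srcu);         /* Needed only if call_srcu() was used. */
                cleanup_srcu_struct(&my_srcu);  /* The sole cleanup API after this patch. */
        }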
@@ -829,7 +829,9 @@ static void lock_torture_cleanup(void)
                                        "End of test: SUCCESS");
 
        kfree(cxt.lwsa);
+       cxt.lwsa = NULL;
        kfree(cxt.lrsa);
+       cxt.lrsa = NULL;
 
 end:
        torture_cleanup_end();
@@ -233,6 +233,7 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 #ifdef CONFIG_RCU_STALL_COMMON
 
 extern int rcu_cpu_stall_suppress;
+extern int rcu_cpu_stall_timeout;
 int rcu_jiffies_till_stall_check(void);
 
 #define rcu_ftrace_dump_stall_suppress() \
@@ -494,6 +494,10 @@ rcu_perf_cleanup(void)
 
        if (torture_cleanup_begin())
                return;
+       if (!cur_ops) {
+               torture_cleanup_end();
+               return;
+       }
 
        if (reader_tasks) {
                for (i = 0; i < nrealreaders; i++)
@@ -614,6 +618,7 @@ rcu_perf_init(void)
                pr_cont("\n");
                WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST));
                firsterr = -EINVAL;
+               cur_ops = NULL;
                goto unwind;
        }
        if (cur_ops->init)
@@ -299,7 +299,6 @@ struct rcu_torture_ops {
        int irq_capable;
        int can_boost;
        int extendables;
-       int ext_irq_conflict;
        const char *name;
 };
 
@@ -592,12 +591,7 @@ static void srcu_torture_init(void)
 
 static void srcu_torture_cleanup(void)
 {
-       static DEFINE_TORTURE_RANDOM(rand);
-
-       if (torture_random(&rand) & 0x800)
-               cleanup_srcu_struct(&srcu_ctld);
-       else
-               cleanup_srcu_struct_quiesced(&srcu_ctld);
+       cleanup_srcu_struct(&srcu_ctld);
        srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */
 }
 
@@ -1160,7 +1154,7 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
        unsigned long randmask2 = randmask1 >> 3;
 
        WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
-       /* Most of the time lots of bits, half the time only one bit. */
+       /* Mostly only one bit (need preemption!), sometimes lots of bits. */
        if (!(randmask1 & 0x7))
                mask = mask & randmask2;
        else
@@ -1170,10 +1164,6 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
            ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
             (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
                mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
-       if ((mask & RCUTORTURE_RDR_IRQ) &&
-           !(mask & cur_ops->ext_irq_conflict) &&
-           (oldmask & cur_ops->ext_irq_conflict))
-               mask |= cur_ops->ext_irq_conflict; /* Or if readers object. */
        return mask ?: RCUTORTURE_RDR_RCU;
 }
 
@@ -1848,7 +1838,7 @@ static int rcutorture_oom_notify(struct notifier_block *self,
        WARN(1, "%s invoked upon OOM during forward-progress testing.\n",
             __func__);
        rcu_torture_fwd_cb_hist();
-       rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat) / 2));
+       rcu_fwd_progress_check(1 + (jiffies - READ_ONCE(rcu_fwd_startat)) / 2);
        WRITE_ONCE(rcu_fwd_emergency_stop, true);
        smp_mb(); /* Emergency stop before free and wait to avoid hangs. */
        pr_info("%s: Freed %lu RCU callbacks.\n",
@@ -2094,6 +2084,10 @@ rcu_torture_cleanup(void)
                        cur_ops->cb_barrier();
                return;
        }
+       if (!cur_ops) {
+               torture_cleanup_end();
+               return;
+       }
 
        rcu_torture_barrier_cleanup();
        torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
@@ -2267,6 +2261,7 @@ rcu_torture_init(void)
                pr_cont("\n");
                WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST));
                firsterr = -EINVAL;
+               cur_ops = NULL;
                goto unwind;
        }
        if (cur_ops->fqs == NULL && fqs_duration != 0) {
@@ -76,19 +76,16 @@ EXPORT_SYMBOL_GPL(init_srcu_struct);
  * Must invoke this after you are finished using a given srcu_struct that
  * was initialized via init_srcu_struct(), else you leak memory.
  */
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
+void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
        WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]);
-       if (quiesced)
-               WARN_ON(work_pending(&ssp->srcu_work));
-       else
-               flush_work(&ssp->srcu_work);
+       flush_work(&ssp->srcu_work);
        WARN_ON(ssp->srcu_gp_running);
        WARN_ON(ssp->srcu_gp_waiting);
        WARN_ON(ssp->srcu_cb_head);
        WARN_ON(&ssp->srcu_cb_head != ssp->srcu_cb_tail);
 }
-EXPORT_SYMBOL_GPL(_cleanup_srcu_struct);
+EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Removes the count for the old reader from the appropriate element of
@@ -360,8 +360,14 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
        return SRCU_INTERVAL;
 }
 
-/* Helper for cleanup_srcu_struct() and cleanup_srcu_struct_quiesced(). */
-void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
+/**
+ * cleanup_srcu_struct - deconstruct a sleep-RCU structure
+ * @ssp: structure to clean up.
+ *
+ * Must invoke this after you are finished using a given srcu_struct that
+ * was initialized via init_srcu_struct(), else you leak memory.
+ */
+void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
        int cpu;
 
@@ -369,24 +375,14 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
                return; /* Just leak it! */
        if (WARN_ON(srcu_readers_active(ssp)))
                return; /* Just leak it! */
-       if (quiesced) {
-               if (WARN_ON(delayed_work_pending(&ssp->work)))
-                       return; /* Just leak it! */
-       } else {
-               flush_delayed_work(&ssp->work);
-       }
+       flush_delayed_work(&ssp->work);
        for_each_possible_cpu(cpu) {
                struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
 
-               if (quiesced) {
-                       if (WARN_ON(timer_pending(&sdp->delay_work)))
-                               return; /* Just leak it! */
-                       if (WARN_ON(work_pending(&sdp->work)))
-                               return; /* Just leak it! */
-               } else {
-                       del_timer_sync(&sdp->delay_work);
-                       flush_work(&sdp->work);
-               }
+               del_timer_sync(&sdp->delay_work);
+               flush_work(&sdp->work);
+               if (WARN_ON(rcu_segcblist_n_cbs(&sdp->srcu_cblist)))
+                       return; /* Forgot srcu_barrier(), so just leak it! */
        }
        if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) ||
            WARN_ON(srcu_readers_active(ssp))) {
@@ -397,7 +393,7 @@ void _cleanup_srcu_struct(struct srcu_struct *ssp, bool quiesced)
        free_percpu(ssp->sda);
        ssp->sda = NULL;
 }
-EXPORT_SYMBOL_GPL(_cleanup_srcu_struct);
+EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
@@ -52,7 +52,7 @@ void rcu_qs(void)
        local_irq_save(flags);
        if (rcu_ctrlblk.donetail != rcu_ctrlblk.curtail) {
                rcu_ctrlblk.donetail = rcu_ctrlblk.curtail;
-               raise_softirq(RCU_SOFTIRQ);
+               raise_softirq_irqoff(RCU_SOFTIRQ);
        }
        local_irq_restore(flags);
 }
@@ -102,11 +102,6 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
 /* Number of rcu_nodes at specified level. */
 int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
 int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */
-/* panic() on RCU Stall sysctl. */
-int sysctl_panic_on_rcu_stall __read_mostly;
-/* Commandeer a sysrq key to dump RCU's tree. */
-static bool sysrq_rcu;
-module_param(sysrq_rcu, bool, 0444);
 
 /*
  * The rcu_scheduler_active variable is initialized to the value
@@ -149,7 +144,7 @@ static void sync_sched_exp_online_cleanup(int cpu);
 
 /* rcuc/rcub kthread realtime priority */
 static int kthread_prio = IS_ENABLED(CONFIG_RCU_BOOST) ? 1 : 0;
-module_param(kthread_prio, int, 0644);
+module_param(kthread_prio, int, 0444);
 
 /* Delay in jiffies for grace-period initialization delays, debug only. */
 
@@ -406,7 +401,7 @@ static bool rcu_kick_kthreads;
  */
 static ulong jiffies_till_sched_qs = ULONG_MAX;
 module_param(jiffies_till_sched_qs, ulong, 0444);
-static ulong jiffies_to_sched_qs; /* Adjusted version of above if not default */
+static ulong jiffies_to_sched_qs; /* See adjust_jiffies_till_sched_qs(). */
 module_param(jiffies_to_sched_qs, ulong, 0444); /* Display only! */
 
 /*
@@ -424,6 +419,7 @@ static void adjust_jiffies_till_sched_qs(void)
                WRITE_ONCE(jiffies_to_sched_qs, jiffies_till_sched_qs);
                return;
        }
+       /* Otherwise, set to third fqs scan, but bound below on large system. */
        j = READ_ONCE(jiffies_till_first_fqs) +
            2 * READ_ONCE(jiffies_till_next_fqs);
        if (j < HZ / 10 + nr_cpu_ids / RCU_JIFFIES_FQS_DIV)
@@ -512,74 +508,6 @@ static const char *gp_state_getname(short gs)
        return gp_state_names[gs];
 }
 
-/*
- * Show the state of the grace-period kthreads.
- */
-void show_rcu_gp_kthreads(void)
-{
-       int cpu;
-       unsigned long j;
-       unsigned long ja;
-       unsigned long jr;
-       unsigned long jw;
-       struct rcu_data *rdp;
-       struct rcu_node *rnp;
-
-       j = jiffies;
-       ja = j - READ_ONCE(rcu_state.gp_activity);
-       jr = j - READ_ONCE(rcu_state.gp_req_activity);
-       jw = j - READ_ONCE(rcu_state.gp_wake_time);
-       pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n",
-               rcu_state.name, gp_state_getname(rcu_state.gp_state),
-               rcu_state.gp_state,
-               rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL,
-               ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq),
-               (long)READ_ONCE(rcu_state.gp_seq),
-               (long)READ_ONCE(rcu_get_root()->gp_seq_needed),
-               READ_ONCE(rcu_state.gp_flags));
-       rcu_for_each_node_breadth_first(rnp) {
-               if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed))
-                       continue;
-               pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n",
-                       rnp->grplo, rnp->grphi, (long)rnp->gp_seq,
-                       (long)rnp->gp_seq_needed);
-               if (!rcu_is_leaf_node(rnp))
-                       continue;
-               for_each_leaf_node_possible_cpu(rnp, cpu) {
-                       rdp = per_cpu_ptr(&rcu_data, cpu);
-                       if (rdp->gpwrap ||
-                           ULONG_CMP_GE(rcu_state.gp_seq,
-                                        rdp->gp_seq_needed))
-                               continue;
-                       pr_info("\tcpu %d ->gp_seq_needed %ld\n",
-                               cpu, (long)rdp->gp_seq_needed);
-               }
-       }
-       /* sched_show_task(rcu_state.gp_kthread); */
-}
-EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads);
-
-/* Dump grace-period-request information due to commandeered sysrq. */
-static void sysrq_show_rcu(int key)
-{
-       show_rcu_gp_kthreads();
-}
-
-static struct sysrq_key_op sysrq_rcudump_op = {
-       .handler = sysrq_show_rcu,
-       .help_msg = "show-rcu(y)",
-       .action_msg = "Show RCU tree",
-       .enable_mask = SYSRQ_ENABLE_DUMP,
-};
-
-static int __init rcu_sysrq_init(void)
-{
-       if (sysrq_rcu)
-               return register_sysrq_key('y', &sysrq_rcudump_op);
-       return 0;
-}
-early_initcall(rcu_sysrq_init);
-
 /*
  * Send along grace-period-related data for rcutorture diagnostics.
  */
@@ -1033,27 +961,6 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp)
        return 0;
 }
 
-/*
- * Handler for the irq_work request posted when a grace period has
- * gone on for too long, but not yet long enough for an RCU CPU
- * stall warning. Set state appropriately, but just complain if
- * there is unexpected state on entry.
- */
-static void rcu_iw_handler(struct irq_work *iwp)
-{
-       struct rcu_data *rdp;
-       struct rcu_node *rnp;
-
-       rdp = container_of(iwp, struct rcu_data, rcu_iw);
-       rnp = rdp->mynode;
-       raw_spin_lock_rcu_node(rnp);
-       if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
-               rdp->rcu_iw_gp_seq = rnp->gp_seq;
-               rdp->rcu_iw_pending = false;
-       }
-       raw_spin_unlock_rcu_node(rnp);
-}
-
 /*
  * Return true if the specified CPU has passed through a quiescent
  * state by virtue of being in or having passed through an dynticks
@ -1167,295 +1074,6 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
-static void record_gp_stall_check_time(void)
-{
-	unsigned long j = jiffies;
-	unsigned long j1;
-
-	rcu_state.gp_start = j;
-	j1 = rcu_jiffies_till_stall_check();
-	/* Record ->gp_start before ->jiffies_stall. */
-	smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
-	rcu_state.jiffies_resched = j + j1 / 2;
-	rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
-}
-
-/*
- * Complain about starvation of grace-period kthread.
- */
-static void rcu_check_gp_kthread_starvation(void)
-{
-	struct task_struct *gpk = rcu_state.gp_kthread;
-	unsigned long j;
-
-	j = jiffies - READ_ONCE(rcu_state.gp_activity);
-	if (j > 2 * HZ) {
-		pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
-		       rcu_state.name, j,
-		       (long)rcu_seq_current(&rcu_state.gp_seq),
-		       READ_ONCE(rcu_state.gp_flags),
-		       gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
-		       gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
-		if (gpk) {
-			pr_err("RCU grace-period kthread stack dump:\n");
-			sched_show_task(gpk);
-			wake_up_process(gpk);
-		}
-	}
-}
-
-/*
- * Dump stacks of all tasks running on stalled CPUs.  First try using
- * NMIs, but fall back to manual remote stack tracing on architectures
- * that don't support NMI-based stack dumps.  The NMI-triggered stack
- * traces are more accurate because they are printed by the target CPU.
- */
-static void rcu_dump_cpu_stacks(void)
-{
-	int cpu;
-	unsigned long flags;
-	struct rcu_node *rnp;
-
-	rcu_for_each_leaf_node(rnp) {
-		raw_spin_lock_irqsave_rcu_node(rnp, flags);
-		for_each_leaf_node_possible_cpu(rnp, cpu)
-			if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu))
-				if (!trigger_single_cpu_backtrace(cpu))
-					dump_cpu_task(cpu);
-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-	}
-}
-
-/*
- * If too much time has passed in the current grace period, and if
- * so configured, go kick the relevant kthreads.
- */
-static void rcu_stall_kick_kthreads(void)
-{
-	unsigned long j;
-
-	if (!rcu_kick_kthreads)
-		return;
-	j = READ_ONCE(rcu_state.jiffies_kick_kthreads);
-	if (time_after(jiffies, j) && rcu_state.gp_kthread &&
-	    (rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) {
-		WARN_ONCE(1, "Kicking %s grace-period kthread\n",
-			  rcu_state.name);
-		rcu_ftrace_dump(DUMP_ALL);
-		wake_up_process(rcu_state.gp_kthread);
-		WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ);
-	}
-}
-
-static void panic_on_rcu_stall(void)
-{
-	if (sysctl_panic_on_rcu_stall)
-		panic("RCU Stall\n");
-}
-
-static void print_other_cpu_stall(unsigned long gp_seq)
-{
-	int cpu;
-	unsigned long flags;
-	unsigned long gpa;
-	unsigned long j;
-	int ndetected = 0;
-	struct rcu_node *rnp = rcu_get_root();
-	long totqlen = 0;
-
-	/* Kick and suppress, if so configured. */
-	rcu_stall_kick_kthreads();
-	if (rcu_cpu_stall_suppress)
-		return;
-
-	/*
-	 * OK, time to rat on our buddy...
-	 * See Documentation/RCU/stallwarn.txt for info on how to debug
-	 * RCU CPU stall warnings.
-	 */
-	pr_err("INFO: %s detected stalls on CPUs/tasks:", rcu_state.name);
-	print_cpu_stall_info_begin();
-	rcu_for_each_leaf_node(rnp) {
-		raw_spin_lock_irqsave_rcu_node(rnp, flags);
-		ndetected += rcu_print_task_stall(rnp);
-		if (rnp->qsmask != 0) {
-			for_each_leaf_node_possible_cpu(rnp, cpu)
-				if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
-					print_cpu_stall_info(cpu);
-					ndetected++;
-				}
-		}
-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-	}
-
-	print_cpu_stall_info_end();
-	for_each_possible_cpu(cpu)
-		totqlen += rcu_get_n_cbs_cpu(cpu);
-	pr_cont("(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n",
-		smp_processor_id(), (long)(jiffies - rcu_state.gp_start),
-		(long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
-	if (ndetected) {
-		rcu_dump_cpu_stacks();
-
-		/* Complain about tasks blocking the grace period. */
-		rcu_print_detail_task_stall();
-	} else {
-		if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) {
-			pr_err("INFO: Stall ended before state dump start\n");
-		} else {
-			j = jiffies;
-			gpa = READ_ONCE(rcu_state.gp_activity);
-			pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n",
-			       rcu_state.name, j - gpa, j, gpa,
-			       READ_ONCE(jiffies_till_next_fqs),
-			       rcu_get_root()->qsmask);
-			/* In this case, the current CPU might be at fault. */
-			sched_show_task(current);
-		}
-	}
-	/* Rewrite if needed in case of slow consoles. */
-	if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
-		WRITE_ONCE(rcu_state.jiffies_stall,
-			   jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
-
-	rcu_check_gp_kthread_starvation();
-
-	panic_on_rcu_stall();
-
-	rcu_force_quiescent_state();  /* Kick them all. */
-}
-
-static void print_cpu_stall(void)
-{
-	int cpu;
-	unsigned long flags;
-	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
-	struct rcu_node *rnp = rcu_get_root();
-	long totqlen = 0;
-
-	/* Kick and suppress, if so configured. */
-	rcu_stall_kick_kthreads();
-	if (rcu_cpu_stall_suppress)
-		return;
-
-	/*
-	 * OK, time to rat on ourselves...
-	 * See Documentation/RCU/stallwarn.txt for info on how to debug
-	 * RCU CPU stall warnings.
-	 */
-	pr_err("INFO: %s self-detected stall on CPU", rcu_state.name);
-	print_cpu_stall_info_begin();
-	raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);
-	print_cpu_stall_info(smp_processor_id());
-	raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags);
-	print_cpu_stall_info_end();
-	for_each_possible_cpu(cpu)
-		totqlen += rcu_get_n_cbs_cpu(cpu);
-	pr_cont(" (t=%lu jiffies g=%ld q=%lu)\n",
-		jiffies - rcu_state.gp_start,
-		(long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
-
-	rcu_check_gp_kthread_starvation();
-
-	rcu_dump_cpu_stacks();
-
-	raw_spin_lock_irqsave_rcu_node(rnp, flags);
-	/* Rewrite if needed in case of slow consoles. */
-	if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
-		WRITE_ONCE(rcu_state.jiffies_stall,
-			   jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
-	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-
-	panic_on_rcu_stall();
-
-	/*
-	 * Attempt to revive the RCU machinery by forcing a context switch.
-	 *
-	 * A context switch would normally allow the RCU state machine to make
-	 * progress and it could be we're stuck in kernel space without context
-	 * switches for an entirely unreasonable amount of time.
-	 */
-	set_tsk_need_resched(current);
-	set_preempt_need_resched();
-}
-
-static void check_cpu_stall(struct rcu_data *rdp)
-{
-	unsigned long gs1;
-	unsigned long gs2;
-	unsigned long gps;
-	unsigned long j;
-	unsigned long jn;
-	unsigned long js;
-	struct rcu_node *rnp;
-
-	if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) ||
-	    !rcu_gp_in_progress())
-		return;
-	rcu_stall_kick_kthreads();
-	j = jiffies;
-
-	/*
-	 * Lots of memory barriers to reject false positives.
-	 *
-	 * The idea is to pick up rcu_state.gp_seq, then
-	 * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally
-	 * another copy of rcu_state.gp_seq.  These values are updated in
-	 * the opposite order with memory barriers (or equivalent) during
-	 * grace-period initialization and cleanup.  Now, a false positive
-	 * can occur if we get a new value of rcu_state.gp_start and an old
-	 * value of rcu_state.jiffies_stall.  But given the memory barriers,
-	 * the only way that this can happen is if one grace period ends
-	 * and another starts between these two fetches.  This is detected
-	 * by comparing the second fetch of rcu_state.gp_seq with the
-	 * previous fetch from rcu_state.gp_seq.
-	 *
-	 * Given this check, comparisons of jiffies, rcu_state.jiffies_stall,
-	 * and rcu_state.gp_start suffice to forestall false positives.
-	 */
-	gs1 = READ_ONCE(rcu_state.gp_seq);
-	smp_rmb(); /* Pick up ->gp_seq first... */
-	js = READ_ONCE(rcu_state.jiffies_stall);
-	smp_rmb(); /* ...then ->jiffies_stall before the rest... */
-	gps = READ_ONCE(rcu_state.gp_start);
-	smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */
-	gs2 = READ_ONCE(rcu_state.gp_seq);
-	if (gs1 != gs2 ||
-	    ULONG_CMP_LT(j, js) ||
-	    ULONG_CMP_GE(gps, js))
-		return; /* No stall or GP completed since entering function. */
-	rnp = rdp->mynode;
-	jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
-	if (rcu_gp_in_progress() &&
-	    (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
-	    cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
-
-		/* We haven't checked in, so go dump stack. */
-		print_cpu_stall();
-
-	} else if (rcu_gp_in_progress() &&
-		   ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
-		   cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
-
-		/* They had a few time units to dump stack, so complain. */
-		print_other_cpu_stall(gs2);
-	}
-}
-
-/**
- * rcu_cpu_stall_reset - prevent further stall warnings in current grace period
- *
- * Set the stall-warning timeout way off into the future, thus preventing
- * any RCU CPU stall-warning messages from appearing in the current set of
- * RCU grace periods.
- *
- * The caller must disable hard irqs.
- */
-void rcu_cpu_stall_reset(void)
-{
-	WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2);
-}
-
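check_cpu_stall() above (moved below into tree_stall.h) validates its reads with the double-fetch pattern described in its header comment: read the sequence number, read the guarded data, then re-read the sequence number and bail if it moved. A minimal userspace analogue of that pattern, with invented names and C11 atomics standing in for the kernel's READ_ONCE()/smp_rmb() machinery:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct gp_state {
		atomic_ulong gp_seq;		/* bumped as grace periods start/end */
		atomic_ulong jiffies_stall;	/* stall deadline for the current GP */
	};

	/* Returns true only if no grace period began or ended mid-read. */
	static bool read_stall_deadline(struct gp_state *s, unsigned long *deadline)
	{
		unsigned long gs1, gs2;

		gs1 = atomic_load_explicit(&s->gp_seq, memory_order_acquire);
		*deadline = atomic_load_explicit(&s->jiffies_stall,
						 memory_order_acquire);
		gs2 = atomic_load_explicit(&s->gp_seq, memory_order_acquire);
		return gs1 == gs2;
	}

	int main(void)
	{
		struct gp_state s = { 8, 10042 };
		unsigned long deadline;

		if (read_stall_deadline(&s, &deadline))
			printf("consistent deadline: %lu\n", deadline);
		return 0;
	}

As in the kernel code, a caller that sees the sequence number change simply treats the snapshot as stale rather than reporting a stall.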
/* Trace-event wrapper function for trace_rcu_future_grace_period. */
static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
			      unsigned long gp_seq_req, const char *s)
@@ -1585,7 +1203,7 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
static void rcu_gp_kthread_wake(void)
{
	if ((current == rcu_state.gp_kthread &&
-	     !in_interrupt() && !in_serving_softirq()) ||
+	     !in_irq() && !in_serving_softirq()) ||
	    !READ_ONCE(rcu_state.gp_flags) ||
	    !rcu_state.gp_kthread)
		return;
@@ -2295,11 +1913,10 @@ rcu_report_qs_rdp(int cpu, struct rcu_data *rdp)
		return;
	}
	mask = rdp->grpmask;
+	rdp->core_needs_qs = false;
	if ((rnp->qsmask & mask) == 0) {
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	} else {
-		rdp->core_needs_qs = false;
-
		/*
		 * This GP can't end until cpu checks in, so all of our
		 * callbacks can be processed during the next GP.
@@ -2548,11 +2165,11 @@ void rcu_sched_clock_irq(int user)
}

/*
- * Scan the leaf rcu_node structures, processing dyntick state for any that
- * have not yet encountered a quiescent state, using the function specified.
- * Also initiate boosting for any threads blocked on the root rcu_node.
- *
- * The caller must have suppressed start of new grace periods.
+ * Scan the leaf rcu_node structures.  For each structure on which all
+ * CPUs have reported a quiescent state and on which there are tasks
+ * blocking the current grace period, initiate RCU priority boosting.
+ * Otherwise, invoke the specified function to check dyntick state for
+ * each CPU that has not yet reported a quiescent state.
 */
static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
{
@@ -2635,101 +2252,6 @@ void rcu_force_quiescent_state(void)
}
EXPORT_SYMBOL_GPL(rcu_force_quiescent_state);

-/*
- * This function checks for grace-period requests that fail to motivate
- * RCU to come out of its idle mode.
- */
-void
-rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
-			 const unsigned long gpssdelay)
-{
-	unsigned long flags;
-	unsigned long j;
-	struct rcu_node *rnp_root = rcu_get_root();
-	static atomic_t warned = ATOMIC_INIT(0);
-
-	if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed))
-		return;
-	j = jiffies; /* Expensive access, and in common case don't get here. */
-	if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
-	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
-	    atomic_read(&warned))
-		return;
-
-	raw_spin_lock_irqsave_rcu_node(rnp, flags);
-	j = jiffies;
-	if (rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
-	    time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
-	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
-	    atomic_read(&warned)) {
-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-		return;
-	}
-	/* Hold onto the leaf lock to make others see warned==1. */
-
-	if (rnp_root != rnp)
-		raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */
-	j = jiffies;
-	if (rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
-	    time_before(j, rcu_state.gp_req_activity + gpssdelay) ||
-	    time_before(j, rcu_state.gp_activity + gpssdelay) ||
-	    atomic_xchg(&warned, 1)) {
-		raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */
-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-		return;
-	}
-	WARN_ON(1);
-	if (rnp_root != rnp)
-		raw_spin_unlock_rcu_node(rnp_root);
-	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-	show_rcu_gp_kthreads();
-}
-
-/*
- * Do a forward-progress check for rcutorture.  This is normally invoked
- * due to an OOM event.  The argument "j" gives the time period during
- * which rcutorture would like progress to have been made.
- */
-void rcu_fwd_progress_check(unsigned long j)
-{
-	unsigned long cbs;
-	int cpu;
-	unsigned long max_cbs = 0;
-	int max_cpu = -1;
-	struct rcu_data *rdp;
-
-	if (rcu_gp_in_progress()) {
-		pr_info("%s: GP age %lu jiffies\n",
-			__func__, jiffies - rcu_state.gp_start);
-		show_rcu_gp_kthreads();
-	} else {
-		pr_info("%s: Last GP end %lu jiffies ago\n",
-			__func__, jiffies - rcu_state.gp_end);
-		preempt_disable();
-		rdp = this_cpu_ptr(&rcu_data);
-		rcu_check_gp_start_stall(rdp->mynode, rdp, j);
-		preempt_enable();
-	}
-	for_each_possible_cpu(cpu) {
-		cbs = rcu_get_n_cbs_cpu(cpu);
-		if (!cbs)
-			continue;
-		if (max_cpu < 0)
-			pr_info("%s: callbacks", __func__);
-		pr_cont(" %d: %lu", cpu, cbs);
-		if (cbs <= max_cbs)
-			continue;
-		max_cbs = cbs;
-		max_cpu = cpu;
-	}
-	if (max_cpu >= 0)
-		pr_cont("\n");
-}
-EXPORT_SYMBOL_GPL(rcu_fwd_progress_check);
-
/* Perform RCU core processing work for the current CPU. */
static __latent_entropy void rcu_core(struct softirq_action *unused)
{
@@ -3559,13 +3081,11 @@ static int rcu_pm_notify(struct notifier_block *self,
	switch (action) {
	case PM_HIBERNATION_PREPARE:
	case PM_SUSPEND_PREPARE:
-		if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */
-			rcu_expedite_gp();
+		rcu_expedite_gp();
		break;
	case PM_POST_HIBERNATION:
	case PM_POST_SUSPEND:
-		if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */
-			rcu_unexpedite_gp();
+		rcu_unexpedite_gp();
		break;
	default:
		break;
@@ -3742,8 +3262,7 @@ static void __init rcu_init_geometry(void)
		jiffies_till_first_fqs = d;
	if (jiffies_till_next_fqs == ULONG_MAX)
		jiffies_till_next_fqs = d;
-	if (jiffies_till_sched_qs == ULONG_MAX)
-		adjust_jiffies_till_sched_qs();
+	adjust_jiffies_till_sched_qs();

	/* If the compile-time values are accurate, just leave. */
	if (rcu_fanout_leaf == RCU_FANOUT_LEAF &&
@@ -3858,5 +3377,6 @@ void __init rcu_init(void)
	srcu_init();
}

+#include "tree_stall.h"
#include "tree_exp.h"
#include "tree_plugin.h"
kernel/rcu/tree.h

@@ -393,15 +393,13 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name;

int rcu_dynticks_snap(struct rcu_data *rdp);

-/* Forward declarations for rcutree_plugin.h */
+/* Forward declarations for tree_plugin.h */
static void rcu_bootup_announce(void);
static void rcu_qs(void);
static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
#ifdef CONFIG_HOTPLUG_CPU
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-static void rcu_print_detail_task_stall(void);
-static int rcu_print_task_stall(struct rcu_node *rnp);
static int rcu_print_task_exp_stall(struct rcu_node *rnp);
static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
static void rcu_flavor_sched_clock_irq(int user);
@@ -418,9 +416,6 @@ static void rcu_prepare_for_idle(void);
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
static bool rcu_preempt_need_deferred_qs(struct task_struct *t);
static void rcu_preempt_deferred_qs(struct task_struct *t);
-static void print_cpu_stall_info_begin(void);
-static void print_cpu_stall_info(int cpu);
-static void print_cpu_stall_info_end(void);
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
static bool rcu_nocb_cpu_needs_barrier(int cpu);
static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
@@ -445,3 +440,10 @@ static void rcu_bind_gp_kthread(void);
static bool rcu_nohz_full_cpu(void);
static void rcu_dynticks_task_enter(void);
static void rcu_dynticks_task_exit(void);
+
+/* Forward declarations for tree_stall.h */
+static void record_gp_stall_check_time(void);
+static void rcu_iw_handler(struct irq_work *iwp);
+static void check_cpu_stall(struct rcu_data *rdp);
+static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
+				     const unsigned long gpssdelay);
kernel/rcu/tree_exp.h

@@ -10,6 +10,7 @@
#include <linux/lockdep.h>

static void rcu_exp_handler(void *unused);
+static int rcu_print_task_exp_stall(struct rcu_node *rnp);

/*
 * Record the start of an expedited grace period.
@@ -633,7 +634,7 @@ static void rcu_exp_handler(void *unused)
		raw_spin_lock_irqsave_rcu_node(rnp, flags);
		if (rnp->expmask & rdp->grpmask) {
			rdp->deferred_qs = true;
-			WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);
+			t->rcu_read_unlock_special.b.exp_hint = true;
		}
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
		return;
@@ -648,7 +649,7 @@ static void rcu_exp_handler(void *unused)
	 *
	 * If the CPU is fully enabled (or if some buggy RCU-preempt
	 * read-side critical section is being used from idle), just
-	 * invoke rcu_preempt_defer_qs() to immediately report the
+	 * invoke rcu_preempt_deferred_qs() to immediately report the
	 * quiescent state.  We cannot use rcu_read_unlock_special()
	 * because we are in an interrupt handler, which will cause that
	 * function to take an early exit without doing anything.
@@ -670,6 +671,27 @@ static void sync_sched_exp_online_cleanup(int cpu)
{
}

+/*
+ * Scan the current list of tasks blocked within RCU read-side critical
+ * sections, printing out the tid of each that is blocking the current
+ * expedited grace period.
+ */
+static int rcu_print_task_exp_stall(struct rcu_node *rnp)
+{
+	struct task_struct *t;
+	int ndetected = 0;
+
+	if (!rnp->exp_tasks)
+		return 0;
+	t = list_entry(rnp->exp_tasks->prev,
+		       struct task_struct, rcu_node_entry);
+	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
+		pr_cont(" P%d", t->pid);
+		ndetected++;
+	}
+	return ndetected;
+}
+
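Note the scan pattern in the function just added: it starts not at the head of ->blkd_tasks but at the saved ->exp_tasks position, and walks from there to the tail. A stripped-down userspace sketch of resuming from a saved mid-list position (plain pointers stand in for the kernel's list_head machinery; all names here are illustrative):

	#include <stdio.h>

	struct blkd_task {
		int pid;
		struct blkd_task *next;		/* toward the tail of the list */
	};

	/* Print every task from a saved mid-list position to the tail. */
	static int print_exp_stall_from(const struct blkd_task *first_blocking)
	{
		const struct blkd_task *t;
		int ndetected = 0;

		for (t = first_blocking; t; t = t->next) {
			printf(" P%d", t->pid);
			ndetected++;
		}
		return ndetected;
	}

	int main(void)
	{
		struct blkd_task c = { 30, NULL };
		struct blkd_task b = { 20, &c };
		struct blkd_task a = { 10, &b };	/* not blocking the expedited GP */
		int n;

		(void)a;
		n = print_exp_stall_from(&b);		/* prints " P20 P30" */
		printf(" (%d tasks)\n", n);
		return 0;
	}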
#else /* #ifdef CONFIG_PREEMPT_RCU */

/* Invoked on each online non-idle CPU for expedited quiescent state. */
@@ -709,6 +731,16 @@ static void sync_sched_exp_online_cleanup(int cpu)
	WARN_ON_ONCE(ret);
}

+/*
+ * Because preemptible RCU does not exist, we never have to check for
+ * tasks blocked within RCU read-side critical sections that are
+ * blocking the current expedited grace period.
+ */
+static int rcu_print_task_exp_stall(struct rcu_node *rnp)
+{
+	return 0;
+}
+
#endif /* #else #ifdef CONFIG_PREEMPT_RCU */

/**
kernel/rcu/tree_plugin.h

@@ -285,7 +285,7 @@ static void rcu_qs(void)
			       TPS("cpuqs"));
		__this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
		barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */
-		current->rcu_read_unlock_special.b.need_qs = false;
+		WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, false);
	}
}
@@ -642,100 +642,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
	rcu_preempt_deferred_qs_irqrestore(t, flags);
}

-/*
- * Dump detailed information for all tasks blocking the current RCU
- * grace period on the specified rcu_node structure.
- */
-static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
-{
-	unsigned long flags;
-	struct task_struct *t;
-
-	raw_spin_lock_irqsave_rcu_node(rnp, flags);
-	if (!rcu_preempt_blocked_readers_cgp(rnp)) {
-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-		return;
-	}
-	t = list_entry(rnp->gp_tasks->prev,
-		       struct task_struct, rcu_node_entry);
-	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
-		/*
-		 * We could be printing a lot while holding a spinlock.
-		 * Avoid triggering hard lockup.
-		 */
-		touch_nmi_watchdog();
-		sched_show_task(t);
-	}
-	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-}
-
-/*
- * Dump detailed information for all tasks blocking the current RCU
- * grace period.
- */
-static void rcu_print_detail_task_stall(void)
-{
-	struct rcu_node *rnp = rcu_get_root();
-
-	rcu_print_detail_task_stall_rnp(rnp);
-	rcu_for_each_leaf_node(rnp)
-		rcu_print_detail_task_stall_rnp(rnp);
-}
-
-static void rcu_print_task_stall_begin(struct rcu_node *rnp)
-{
-	pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
-	       rnp->level, rnp->grplo, rnp->grphi);
-}
-
-static void rcu_print_task_stall_end(void)
-{
-	pr_cont("\n");
-}
-
-/*
- * Scan the current list of tasks blocked within RCU read-side critical
- * sections, printing out the tid of each.
- */
-static int rcu_print_task_stall(struct rcu_node *rnp)
-{
-	struct task_struct *t;
-	int ndetected = 0;
-
-	if (!rcu_preempt_blocked_readers_cgp(rnp))
-		return 0;
-	rcu_print_task_stall_begin(rnp);
-	t = list_entry(rnp->gp_tasks->prev,
-		       struct task_struct, rcu_node_entry);
-	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
-		pr_cont(" P%d", t->pid);
-		ndetected++;
-	}
-	rcu_print_task_stall_end();
-	return ndetected;
-}
-
-/*
- * Scan the current list of tasks blocked within RCU read-side critical
- * sections, printing out the tid of each that is blocking the current
- * expedited grace period.
- */
-static int rcu_print_task_exp_stall(struct rcu_node *rnp)
-{
-	struct task_struct *t;
-	int ndetected = 0;
-
-	if (!rnp->exp_tasks)
-		return 0;
-	t = list_entry(rnp->exp_tasks->prev,
-		       struct task_struct, rcu_node_entry);
-	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
-		pr_cont(" P%d", t->pid);
-		ndetected++;
-	}
-	return ndetected;
-}
-
/*
 * Check that the list of blocked tasks for the newly completed grace
 * period is in fact empty.  It is a serious bug to complete a grace
@@ -804,19 +710,25 @@ static void rcu_flavor_sched_clock_irq(int user)

/*
 * Check for a task exiting while in a preemptible-RCU read-side
- * critical section, clean up if so.  No need to issue warnings,
- * as debug_check_no_locks_held() already does this if lockdep
- * is enabled.
+ * critical section, clean up if so.  No need to issue warnings, as
+ * debug_check_no_locks_held() already does this if lockdep is enabled.
+ * Besides, if this function does anything other than just immediately
+ * return, there was a bug of some sort.  Spewing warnings from this
+ * function is like as not to simply obscure important prior warnings.
 */
void exit_rcu(void)
{
	struct task_struct *t = current;

-	if (likely(list_empty(&current->rcu_node_entry)))
+	if (unlikely(!list_empty(&current->rcu_node_entry))) {
+		t->rcu_read_lock_nesting = 1;
+		barrier();
+		WRITE_ONCE(t->rcu_read_unlock_special.b.blocked, true);
+	} else if (unlikely(t->rcu_read_lock_nesting)) {
+		t->rcu_read_lock_nesting = 1;
+	} else {
		return;
-	t->rcu_read_lock_nesting = 1;
-	barrier();
-	t->rcu_read_unlock_special.b.blocked = true;
+	}
	__rcu_read_unlock();
	rcu_preempt_deferred_qs(current);
}
@@ -979,33 +891,6 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
}
static void rcu_preempt_deferred_qs(struct task_struct *t) { }

-/*
- * Because preemptible RCU does not exist, we never have to check for
- * tasks blocked within RCU read-side critical sections.
- */
-static void rcu_print_detail_task_stall(void)
-{
-}
-
-/*
- * Because preemptible RCU does not exist, we never have to check for
- * tasks blocked within RCU read-side critical sections.
- */
-static int rcu_print_task_stall(struct rcu_node *rnp)
-{
-	return 0;
-}
-
-/*
- * Because preemptible RCU does not exist, we never have to check for
- * tasks blocked within RCU read-side critical sections that are
- * blocking the current expedited grace period.
- */
-static int rcu_print_task_exp_stall(struct rcu_node *rnp)
-{
-	return 0;
-}
-
/*
 * Because there is no preemptible RCU, there can be no readers blocked,
 * so there is no need to check for blocked tasks.  So check only for
@@ -1185,8 +1070,6 @@ static int rcu_boost_kthread(void *arg)
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
	__releases(rnp->lock)
{
-	struct task_struct *t;
-
	raw_lockdep_assert_held_rcu_node(rnp);
	if (!rcu_preempt_blocked_readers_cgp(rnp) && rnp->exp_tasks == NULL) {
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
@@ -1200,9 +1083,8 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
		if (rnp->exp_tasks == NULL)
			rnp->boost_tasks = rnp->gp_tasks;
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-		t = rnp->boost_kthread_task;
-		if (t)
-			rcu_wake_cond(t, rnp->boost_kthread_status);
+		rcu_wake_cond(rnp->boost_kthread_task,
+			      rnp->boost_kthread_status);
	} else {
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}
@@ -1649,98 +1531,6 @@ static void rcu_cleanup_after_idle(void)

#endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */

-#ifdef CONFIG_RCU_FAST_NO_HZ
-
-static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
-{
-	struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
-
-	sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c",
-		rdp->last_accelerate & 0xffff, jiffies & 0xffff,
-		".l"[rdp->all_lazy],
-		".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)],
-		".D"[!rdp->tick_nohz_enabled_snap]);
-}
-
-#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
-
-static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
-{
-	*cp = '\0';
-}
-
-#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
-
-/* Initiate the stall-info list. */
-static void print_cpu_stall_info_begin(void)
-{
-	pr_cont("\n");
-}
-
-/*
- * Print out diagnostic information for the specified stalled CPU.
- *
- * If the specified CPU is aware of the current RCU grace period, then
- * print the number of scheduling clock interrupts the CPU has taken
- * during the time that it has been aware.  Otherwise, print the number
- * of RCU grace periods that this CPU is ignorant of, for example, "1"
- * if the CPU was aware of the previous grace period.
- *
- * Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info.
- */
-static void print_cpu_stall_info(int cpu)
-{
-	unsigned long delta;
-	char fast_no_hz[72];
-	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-	char *ticks_title;
-	unsigned long ticks_value;
-
-	/*
-	 * We could be printing a lot while holding a spinlock.  Avoid
-	 * triggering hard lockup.
-	 */
-	touch_nmi_watchdog();
-
-	ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq);
-	if (ticks_value) {
-		ticks_title = "GPs behind";
-	} else {
-		ticks_title = "ticks this GP";
-		ticks_value = rdp->ticks_this_gp;
-	}
-	print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
-	delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
-	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
-	       cpu,
-	       "O."[!!cpu_online(cpu)],
-	       "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
-	       "N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)],
-	       !IS_ENABLED(CONFIG_IRQ_WORK) ? '?' :
-			rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
-			"!."[!delta],
-	       ticks_value, ticks_title,
-	       rcu_dynticks_snap(rdp) & 0xfff,
-	       rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
-	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
-	       READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
-	       fast_no_hz);
-}
-
-/* Terminate the stall-info list. */
-static void print_cpu_stall_info_end(void)
-{
-	pr_err("\t");
-}
-
-/* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */
-static void zero_cpu_stall_ticks(struct rcu_data *rdp)
-{
-	rdp->ticks_this_gp = 0;
-	rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id());
-	WRITE_ONCE(rdp->last_fqs_resched, jiffies);
-}
-
#ifdef CONFIG_RCU_NOCB_CPU

/*
@@ -1766,11 +1556,22 @@ static void zero_cpu_stall_ticks(struct rcu_data *rdp)
 */


-/* Parse the boot-time rcu_nocb_mask CPU list from the kernel parameters. */
+/*
+ * Parse the boot-time rcu_nocb_mask CPU list from the kernel parameters.
+ * The string after the "rcu_nocbs=" is either "all" for all CPUs, or a
+ * comma-separated list of CPUs and/or CPU ranges.  If an invalid list is
+ * given, a warning is emitted and all CPUs are offloaded.
+ */
static int __init rcu_nocb_setup(char *str)
{
	alloc_bootmem_cpumask_var(&rcu_nocb_mask);
-	cpulist_parse(str, rcu_nocb_mask);
+	if (!strcasecmp(str, "all"))
+		cpumask_setall(rcu_nocb_mask);
+	else
+		if (cpulist_parse(str, rcu_nocb_mask)) {
+			pr_warn("rcu_nocbs= bad CPU range, all CPUs set\n");
+			cpumask_setall(rcu_nocb_mask);
+		}
	return 1;
}
__setup("rcu_nocbs=", rcu_nocb_setup);
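With this change, the boot parameter accepts either of the following forms (the CPU list is illustrative):

	rcu_nocbs=all
	rcu_nocbs=1,3-7

and a malformed list now produces a warning and offloads all CPUs instead of being silently mis-parsed.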
kernel/rcu/tree_stall.h (new file, 709 lines)

@@ -0,0 +1,709 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * RCU CPU stall warnings for normal RCU grace periods
+ *
+ * Copyright IBM Corporation, 2019
+ *
+ * Author: Paul E. McKenney <paulmck@linux.ibm.com>
+ */
+
+//////////////////////////////////////////////////////////////////////////////
+//
+// Controlling CPU stall warnings, including delay calculation.
+
+/* panic() on RCU Stall sysctl. */
+int sysctl_panic_on_rcu_stall __read_mostly;
+
+#ifdef CONFIG_PROVE_RCU
+#define RCU_STALL_DELAY_DELTA	(5 * HZ)
+#else
+#define RCU_STALL_DELAY_DELTA	0
+#endif
+
+/* Limit-check stall timeouts specified at boottime and runtime. */
+int rcu_jiffies_till_stall_check(void)
+{
+	int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout);
+
+	/*
+	 * Limit check must be consistent with the Kconfig limits
+	 * for CONFIG_RCU_CPU_STALL_TIMEOUT.
+	 */
+	if (till_stall_check < 3) {
+		WRITE_ONCE(rcu_cpu_stall_timeout, 3);
+		till_stall_check = 3;
+	} else if (till_stall_check > 300) {
+		WRITE_ONCE(rcu_cpu_stall_timeout, 300);
+		till_stall_check = 300;
+	}
+	return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
+}
+EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
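(For scale: with the common default CONFIG_RCU_CPU_STALL_TIMEOUT=21 and HZ=250, this returns 21 * 250 = 5250 jiffies, that is, 21 seconds, plus another 5 * HZ of slack when CONFIG_PROVE_RCU is set.)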
+/* Don't do RCU CPU stall warnings during long sysrq printouts. */
+void rcu_sysrq_start(void)
+{
+	if (!rcu_cpu_stall_suppress)
+		rcu_cpu_stall_suppress = 2;
+}
+
+void rcu_sysrq_end(void)
+{
+	if (rcu_cpu_stall_suppress == 2)
+		rcu_cpu_stall_suppress = 0;
+}
+
+/* Don't print RCU CPU stall warnings during a kernel panic. */
+static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr)
+{
+	rcu_cpu_stall_suppress = 1;
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block rcu_panic_block = {
+	.notifier_call = rcu_panic,
+};
+
+static int __init check_cpu_stall_init(void)
+{
+	atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block);
+	return 0;
+}
+early_initcall(check_cpu_stall_init);
+
+/* If so specified via sysctl, panic, yielding cleaner stall-warning output. */
+static void panic_on_rcu_stall(void)
+{
+	if (sysctl_panic_on_rcu_stall)
+		panic("RCU Stall\n");
+}
+
+/**
+ * rcu_cpu_stall_reset - prevent further stall warnings in current grace period
+ *
+ * Set the stall-warning timeout way off into the future, thus preventing
+ * any RCU CPU stall-warning messages from appearing in the current set of
+ * RCU grace periods.
+ *
+ * The caller must disable hard irqs.
+ */
+void rcu_cpu_stall_reset(void)
+{
+	WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2);
+}
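Both rcu_cpu_stall_reset() above and the deadline rewrites below lean on wraparound-safe unsigned comparison, which is what lets jiffies + ULONG_MAX / 2 act as a deadline that can never arrive. A self-contained illustration (the macro matches the kernel's definition in kernel/rcu/rcu.h; the harness around it is invented):

	#include <assert.h>
	#include <limits.h>

	/* Same definition as the kernel's ULONG_CMP_GE() in kernel/rcu/rcu.h. */
	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

	int main(void)
	{
		unsigned long now = ULONG_MAX - 10;	/* counter about to wrap */
		unsigned long deadline = now + 20;	/* wraps past zero */

		assert(!ULONG_CMP_GE(now, deadline));		/* not yet due, despite wrap */
		assert(ULONG_CMP_GE(deadline + 1, deadline));	/* due once passed */
		/* jiffies + ULONG_MAX / 2 is half the counter space away: never due. */
		assert(!ULONG_CMP_GE(now, now + ULONG_MAX / 2));
		return 0;
	}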
//////////////////////////////////////////////////////////////////////////////
|
||||||
|
//
|
||||||
|
// Interaction with RCU grace periods
|
||||||
|
|
||||||
|
/* Start of new grace period, so record stall time (and forcing times). */
|
||||||
|
static void record_gp_stall_check_time(void)
|
||||||
|
{
|
||||||
|
unsigned long j = jiffies;
|
||||||
|
unsigned long j1;
|
||||||
|
|
||||||
|
rcu_state.gp_start = j;
|
||||||
|
j1 = rcu_jiffies_till_stall_check();
|
||||||
|
/* Record ->gp_start before ->jiffies_stall. */
|
||||||
|
smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
|
||||||
|
rcu_state.jiffies_resched = j + j1 / 2;
|
||||||
|
rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Zero ->ticks_this_gp and snapshot the number of RCU softirq handlers. */
|
||||||
|
static void zero_cpu_stall_ticks(struct rcu_data *rdp)
|
||||||
|
{
|
||||||
|
rdp->ticks_this_gp = 0;
|
||||||
|
rdp->softirq_snap = kstat_softirqs_cpu(RCU_SOFTIRQ, smp_processor_id());
|
||||||
|
WRITE_ONCE(rdp->last_fqs_resched, jiffies);
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* If too much time has passed in the current grace period, and if
|
||||||
|
* so configured, go kick the relevant kthreads.
|
||||||
|
*/
|
||||||
|
static void rcu_stall_kick_kthreads(void)
|
||||||
|
{
|
||||||
|
unsigned long j;
|
||||||
|
|
||||||
|
if (!rcu_kick_kthreads)
|
||||||
|
return;
|
||||||
|
j = READ_ONCE(rcu_state.jiffies_kick_kthreads);
|
||||||
|
if (time_after(jiffies, j) && rcu_state.gp_kthread &&
|
||||||
|
(rcu_gp_in_progress() || READ_ONCE(rcu_state.gp_flags))) {
|
||||||
|
WARN_ONCE(1, "Kicking %s grace-period kthread\n",
|
||||||
|
rcu_state.name);
|
||||||
|
rcu_ftrace_dump(DUMP_ALL);
|
||||||
|
wake_up_process(rcu_state.gp_kthread);
|
||||||
|
WRITE_ONCE(rcu_state.jiffies_kick_kthreads, j + HZ);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Handler for the irq_work request posted about halfway into the RCU CPU
|
||||||
|
* stall timeout, and used to detect excessive irq disabling. Set state
|
||||||
|
* appropriately, but just complain if there is unexpected state on entry.
|
||||||
|
*/
|
||||||
|
static void rcu_iw_handler(struct irq_work *iwp)
|
||||||
|
{
|
||||||
|
struct rcu_data *rdp;
|
||||||
|
struct rcu_node *rnp;
|
||||||
|
|
||||||
|
rdp = container_of(iwp, struct rcu_data, rcu_iw);
|
||||||
|
rnp = rdp->mynode;
|
||||||
|
raw_spin_lock_rcu_node(rnp);
|
||||||
|
if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
|
||||||
|
rdp->rcu_iw_gp_seq = rnp->gp_seq;
|
||||||
|
rdp->rcu_iw_pending = false;
|
||||||
|
}
|
||||||
|
raw_spin_unlock_rcu_node(rnp);
|
||||||
|
}
|
||||||
|
|
||||||
|
//////////////////////////////////////////////////////////////////////////////
|
||||||
|
//
|
||||||
|
// Printing RCU CPU stall warnings
|
||||||
|
|
||||||
|
#ifdef CONFIG_PREEMPT
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Dump detailed information for all tasks blocking the current RCU
|
||||||
|
* grace period on the specified rcu_node structure.
|
||||||
|
*/
|
||||||
|
static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
struct task_struct *t;
|
||||||
|
|
||||||
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
|
if (!rcu_preempt_blocked_readers_cgp(rnp)) {
|
||||||
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
t = list_entry(rnp->gp_tasks->prev,
|
||||||
|
struct task_struct, rcu_node_entry);
|
||||||
|
list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
|
||||||
|
/*
|
||||||
|
* We could be printing a lot while holding a spinlock.
|
||||||
|
* Avoid triggering hard lockup.
|
||||||
|
*/
|
||||||
|
touch_nmi_watchdog();
|
||||||
|
sched_show_task(t);
|
||||||
|
}
|
||||||
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Scan the current list of tasks blocked within RCU read-side critical
|
||||||
|
* sections, printing out the tid of each.
|
||||||
|
*/
|
||||||
|
static int rcu_print_task_stall(struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
struct task_struct *t;
|
||||||
|
int ndetected = 0;
|
||||||
|
|
||||||
|
if (!rcu_preempt_blocked_readers_cgp(rnp))
|
||||||
|
return 0;
|
||||||
|
pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
|
||||||
|
rnp->level, rnp->grplo, rnp->grphi);
|
||||||
|
t = list_entry(rnp->gp_tasks->prev,
|
||||||
|
struct task_struct, rcu_node_entry);
|
||||||
|
list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
|
||||||
|
pr_cont(" P%d", t->pid);
|
||||||
|
ndetected++;
|
||||||
|
}
|
||||||
|
pr_cont("\n");
|
||||||
|
return ndetected;
|
||||||
|
}
|
||||||
|
|
||||||
|
#else /* #ifdef CONFIG_PREEMPT */
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Because preemptible RCU does not exist, we never have to check for
|
||||||
|
* tasks blocked within RCU read-side critical sections.
|
||||||
|
*/
|
||||||
|
static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Because preemptible RCU does not exist, we never have to check for
|
||||||
|
* tasks blocked within RCU read-side critical sections.
|
||||||
|
*/
|
||||||
|
static int rcu_print_task_stall(struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
#endif /* #else #ifdef CONFIG_PREEMPT */
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Dump stacks of all tasks running on stalled CPUs. First try using
|
||||||
|
* NMIs, but fall back to manual remote stack tracing on architectures
|
||||||
|
* that don't support NMI-based stack dumps. The NMI-triggered stack
|
||||||
|
* traces are more accurate because they are printed by the target CPU.
|
||||||
|
*/
|
||||||
|
static void rcu_dump_cpu_stacks(void)
|
||||||
|
{
|
||||||
|
int cpu;
|
||||||
|
unsigned long flags;
|
||||||
|
struct rcu_node *rnp;
|
||||||
|
|
||||||
|
rcu_for_each_leaf_node(rnp) {
|
||||||
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
|
for_each_leaf_node_possible_cpu(rnp, cpu)
|
||||||
|
if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu))
|
||||||
|
if (!trigger_single_cpu_backtrace(cpu))
|
||||||
|
dump_cpu_task(cpu);
|
||||||
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#ifdef CONFIG_RCU_FAST_NO_HZ
|
||||||
|
|
||||||
|
static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
|
||||||
|
{
|
||||||
|
struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
|
||||||
|
|
||||||
|
sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c",
|
||||||
|
rdp->last_accelerate & 0xffff, jiffies & 0xffff,
|
||||||
|
".l"[rdp->all_lazy],
|
||||||
|
".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)],
|
||||||
|
".D"[!!rdp->tick_nohz_enabled_snap]);
|
||||||
|
}
|
||||||
|
|
||||||
|
#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
|
||||||
|
|
||||||
|
static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
|
||||||
|
{
|
||||||
|
*cp = '\0';
|
||||||
|
}
|
||||||
|
|
||||||
|
#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Print out diagnostic information for the specified stalled CPU.
|
||||||
|
*
|
||||||
|
* If the specified CPU is aware of the current RCU grace period, then
|
||||||
|
* print the number of scheduling clock interrupts the CPU has taken
|
||||||
|
* during the time that it has been aware. Otherwise, print the number
|
||||||
|
* of RCU grace periods that this CPU is ignorant of, for example, "1"
|
||||||
|
* if the CPU was aware of the previous grace period.
|
||||||
|
*
|
||||||
|
* Also print out idle and (if CONFIG_RCU_FAST_NO_HZ) idle-entry info.
|
||||||
|
*/
|
||||||
|
static void print_cpu_stall_info(int cpu)
|
||||||
|
{
|
||||||
|
unsigned long delta;
|
||||||
|
char fast_no_hz[72];
|
||||||
|
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
|
||||||
|
char *ticks_title;
|
||||||
|
unsigned long ticks_value;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* We could be printing a lot while holding a spinlock. Avoid
|
||||||
|
* triggering hard lockup.
|
||||||
|
*/
|
||||||
|
touch_nmi_watchdog();
|
||||||
|
|
||||||
|
ticks_value = rcu_seq_ctr(rcu_state.gp_seq - rdp->gp_seq);
|
||||||
|
if (ticks_value) {
|
||||||
|
ticks_title = "GPs behind";
|
||||||
|
} else {
|
||||||
|
ticks_title = "ticks this GP";
|
||||||
|
ticks_value = rdp->ticks_this_gp;
|
||||||
|
}
|
||||||
|
print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
|
||||||
|
delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
|
||||||
|
pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
|
||||||
|
cpu,
|
||||||
|
"O."[!!cpu_online(cpu)],
|
||||||
|
"o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
|
||||||
|
"N."[!!(rdp->grpmask & rdp->mynode->qsmaskinitnext)],
|
||||||
|
!IS_ENABLED(CONFIG_IRQ_WORK) ? '?' :
|
||||||
|
rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
|
||||||
|
"!."[!delta],
|
||||||
|
ticks_value, ticks_title,
|
||||||
|
rcu_dynticks_snap(rdp) & 0xfff,
|
||||||
|
rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
|
||||||
|
rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
|
||||||
|
READ_ONCE(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
|
||||||
|
fast_no_hz);
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Complain about starvation of grace-period kthread. */
|
||||||
|
static void rcu_check_gp_kthread_starvation(void)
|
||||||
|
{
|
||||||
|
struct task_struct *gpk = rcu_state.gp_kthread;
|
||||||
|
unsigned long j;
|
||||||
|
|
||||||
|
j = jiffies - READ_ONCE(rcu_state.gp_activity);
|
||||||
|
if (j > 2 * HZ) {
|
||||||
|
pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
|
||||||
|
rcu_state.name, j,
|
||||||
|
(long)rcu_seq_current(&rcu_state.gp_seq),
|
||||||
|
READ_ONCE(rcu_state.gp_flags),
|
||||||
|
gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
|
||||||
|
gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
|
||||||
|
if (gpk) {
|
||||||
|
pr_err("RCU grace-period kthread stack dump:\n");
|
||||||
|
sched_show_task(gpk);
|
||||||
|
wake_up_process(gpk);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void print_other_cpu_stall(unsigned long gp_seq)
|
||||||
|
{
|
||||||
|
int cpu;
|
||||||
|
unsigned long flags;
|
||||||
|
unsigned long gpa;
|
||||||
|
unsigned long j;
|
||||||
|
int ndetected = 0;
|
||||||
|
struct rcu_node *rnp;
|
||||||
|
long totqlen = 0;
|
||||||
|
|
||||||
|
/* Kick and suppress, if so configured. */
|
||||||
|
rcu_stall_kick_kthreads();
|
||||||
|
if (rcu_cpu_stall_suppress)
|
||||||
|
return;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* OK, time to rat on our buddy...
|
||||||
|
* See Documentation/RCU/stallwarn.txt for info on how to debug
|
||||||
|
* RCU CPU stall warnings.
|
||||||
|
*/
|
||||||
|
pr_err("INFO: %s detected stalls on CPUs/tasks:\n", rcu_state.name);
|
||||||
|
rcu_for_each_leaf_node(rnp) {
|
||||||
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
|
ndetected += rcu_print_task_stall(rnp);
|
||||||
|
if (rnp->qsmask != 0) {
|
||||||
|
for_each_leaf_node_possible_cpu(rnp, cpu)
|
||||||
|
if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
|
||||||
|
print_cpu_stall_info(cpu);
|
||||||
|
ndetected++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
|
}
|
||||||
|
|
||||||
|
for_each_possible_cpu(cpu)
|
||||||
|
totqlen += rcu_get_n_cbs_cpu(cpu);
|
||||||
|
pr_cont("\t(detected by %d, t=%ld jiffies, g=%ld, q=%lu)\n",
|
||||||
|
smp_processor_id(), (long)(jiffies - rcu_state.gp_start),
|
||||||
|
(long)rcu_seq_current(&rcu_state.gp_seq), totqlen);
|
||||||
|
if (ndetected) {
|
||||||
|
rcu_dump_cpu_stacks();
|
||||||
|
|
||||||
|
/* Complain about tasks blocking the grace period. */
|
||||||
|
rcu_for_each_leaf_node(rnp)
|
||||||
|
rcu_print_detail_task_stall_rnp(rnp);
|
||||||
|
} else {
|
||||||
|
if (rcu_seq_current(&rcu_state.gp_seq) != gp_seq) {
|
||||||
|
pr_err("INFO: Stall ended before state dump start\n");
|
||||||
|
} else {
|
||||||
|
j = jiffies;
|
||||||
|
gpa = READ_ONCE(rcu_state.gp_activity);
|
||||||
|
pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n",
|
||||||
|
rcu_state.name, j - gpa, j, gpa,
|
||||||
|
READ_ONCE(jiffies_till_next_fqs),
|
||||||
|
rcu_get_root()->qsmask);
|
||||||
|
/* In this case, the current CPU might be at fault. */
|
||||||
|
sched_show_task(current);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
/* Rewrite if needed in case of slow consoles. */
|
||||||
|
if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
|
||||||
|
WRITE_ONCE(rcu_state.jiffies_stall,
|
||||||
|
jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
|
||||||
|
|
||||||
|
rcu_check_gp_kthread_starvation();
|
||||||
|
|
||||||
|
panic_on_rcu_stall();
|
||||||
|
|
||||||
|
rcu_force_quiescent_state(); /* Kick them all. */
|
||||||
|
}
|
||||||
|
|
||||||
|
static void print_cpu_stall(void)
{
	int cpu;
	unsigned long flags;
	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
	struct rcu_node *rnp = rcu_get_root();
	long totqlen = 0;

	/* Kick and suppress, if so configured. */
	rcu_stall_kick_kthreads();
	if (rcu_cpu_stall_suppress)
		return;

	/*
	 * OK, time to rat on ourselves...
	 * See Documentation/RCU/stallwarn.txt for info on how to debug
	 * RCU CPU stall warnings.
	 */
	pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name);
	raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);
	print_cpu_stall_info(smp_processor_id());
	raw_spin_unlock_irqrestore_rcu_node(rdp->mynode, flags);
	for_each_possible_cpu(cpu)
		totqlen += rcu_get_n_cbs_cpu(cpu);
	pr_cont("\t(t=%lu jiffies g=%ld q=%lu)\n",
		jiffies - rcu_state.gp_start,
		(long)rcu_seq_current(&rcu_state.gp_seq), totqlen);

	rcu_check_gp_kthread_starvation();

	rcu_dump_cpu_stacks();

	raw_spin_lock_irqsave_rcu_node(rnp, flags);
	/* Rewrite if needed in case of slow consoles. */
	if (ULONG_CMP_GE(jiffies, READ_ONCE(rcu_state.jiffies_stall)))
		WRITE_ONCE(rcu_state.jiffies_stall,
			   jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);

	panic_on_rcu_stall();

	/*
	 * Attempt to revive the RCU machinery by forcing a context switch.
	 *
	 * A context switch would normally allow the RCU state machine to make
	 * progress and it could be we're stuck in kernel space without context
	 * switches for an entirely unreasonable amount of time.
	 */
	set_tsk_need_resched(current);
	set_preempt_need_resched();
}

static void check_cpu_stall(struct rcu_data *rdp)
{
	unsigned long gs1;
	unsigned long gs2;
	unsigned long gps;
	unsigned long j;
	unsigned long jn;
	unsigned long js;
	struct rcu_node *rnp;

	if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) ||
	    !rcu_gp_in_progress())
		return;
	rcu_stall_kick_kthreads();
	j = jiffies;

	/*
	 * Lots of memory barriers to reject false positives.
	 *
	 * The idea is to pick up rcu_state.gp_seq, then
	 * rcu_state.jiffies_stall, then rcu_state.gp_start, and finally
	 * another copy of rcu_state.gp_seq.  These values are updated in
	 * the opposite order with memory barriers (or equivalent) during
	 * grace-period initialization and cleanup.  Now, a false positive
	 * can occur if we get a new value of rcu_state.gp_start and an old
	 * value of rcu_state.jiffies_stall.  But given the memory barriers,
	 * the only way that this can happen is if one grace period ends
	 * and another starts between these two fetches.  This is detected
	 * by comparing the second fetch of rcu_state.gp_seq with the
	 * previous fetch from rcu_state.gp_seq.
	 *
	 * Given this check, comparisons of jiffies, rcu_state.jiffies_stall,
	 * and rcu_state.gp_start suffice to forestall false positives.
	 */
	gs1 = READ_ONCE(rcu_state.gp_seq);
	smp_rmb(); /* Pick up ->gp_seq first... */
	js = READ_ONCE(rcu_state.jiffies_stall);
	smp_rmb(); /* ...then ->jiffies_stall before the rest... */
	gps = READ_ONCE(rcu_state.gp_start);
	smp_rmb(); /* ...and finally ->gp_start before ->gp_seq again. */
	gs2 = READ_ONCE(rcu_state.gp_seq);
	if (gs1 != gs2 ||
	    ULONG_CMP_LT(j, js) ||
	    ULONG_CMP_GE(gps, js))
		return; /* No stall or GP completed since entering function. */
	rnp = rdp->mynode;
	jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
	if (rcu_gp_in_progress() &&
	    (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
	    cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {

		/* We haven't checked in, so go dump stack. */
		print_cpu_stall();

	} else if (rcu_gp_in_progress() &&
		   ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
		   cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {

		/* They had a few time units to dump stack, so complain. */
		print_other_cpu_stall(gs2);
	}
}

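The fetch/re-fetch dance above is a snapshot-validation pattern: if the two reads of rcu_state.gp_seq differ, one grace period ended and another began mid-snapshot, so the sampled values are discarded. A minimal userspace sketch of the same idea, assuming C11 acquire loads in place of READ_ONCE() plus smp_rmb() (not the kernel's implementation, and the variable names are placeholders):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong gp_seq;        /* advanced at GP initialization and cleanup */
static atomic_ulong jiffies_stall; /* stall deadline, guarded by gp_seq */
static atomic_ulong gp_start;      /* GP start time, guarded by gp_seq */

/* Returns true and fills in *js/*gps only if no GP boundary raced with us. */
static bool snapshot_stall_state(unsigned long *js, unsigned long *gps)
{
	unsigned long gs1, gs2;

	gs1 = atomic_load_explicit(&gp_seq, memory_order_acquire);
	*js = atomic_load_explicit(&jiffies_stall, memory_order_acquire);
	*gps = atomic_load_explicit(&gp_start, memory_order_acquire);
	gs2 = atomic_load_explicit(&gp_seq, memory_order_acquire);

	return gs1 == gs2; /* mismatch means the snapshot is inconsistent */
}

The writer's obligation is symmetric: it must update the guarded values and bump the sequence in the opposite order, with matching release ordering, for the reader's rejection test to be sound.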
//////////////////////////////////////////////////////////////////////////////
//
// RCU forward-progress mechanisms, including of callback invocation.

/*
 * Show the state of the grace-period kthreads.
 */
void show_rcu_gp_kthreads(void)
{
	int cpu;
	unsigned long j;
	unsigned long ja;
	unsigned long jr;
	unsigned long jw;
	struct rcu_data *rdp;
	struct rcu_node *rnp;

	j = jiffies;
	ja = j - READ_ONCE(rcu_state.gp_activity);
	jr = j - READ_ONCE(rcu_state.gp_req_activity);
	jw = j - READ_ONCE(rcu_state.gp_wake_time);
	pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n",
		rcu_state.name, gp_state_getname(rcu_state.gp_state),
		rcu_state.gp_state,
		rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL,
		ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq),
		(long)READ_ONCE(rcu_state.gp_seq),
		(long)READ_ONCE(rcu_get_root()->gp_seq_needed),
		READ_ONCE(rcu_state.gp_flags));
	rcu_for_each_node_breadth_first(rnp) {
		if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed))
			continue;
		pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n",
			rnp->grplo, rnp->grphi, (long)rnp->gp_seq,
			(long)rnp->gp_seq_needed);
		if (!rcu_is_leaf_node(rnp))
			continue;
		for_each_leaf_node_possible_cpu(rnp, cpu) {
			rdp = per_cpu_ptr(&rcu_data, cpu);
			if (rdp->gpwrap ||
			    ULONG_CMP_GE(rcu_state.gp_seq,
					 rdp->gp_seq_needed))
				continue;
			pr_info("\tcpu %d ->gp_seq_needed %ld\n",
				cpu, (long)rdp->gp_seq_needed);
		}
	}
	/* sched_show_task(rcu_state.gp_kthread); */
}
EXPORT_SYMBOL_GPL(show_rcu_gp_kthreads);

/*
 * This function checks for grace-period requests that fail to motivate
 * RCU to come out of its idle mode.
 */
static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
				     const unsigned long gpssdelay)
{
	unsigned long flags;
	unsigned long j;
	struct rcu_node *rnp_root = rcu_get_root();
	static atomic_t warned = ATOMIC_INIT(0);

	if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() ||
	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed))
		return;
	j = jiffies; /* Expensive access, and in common case don't get here. */
	if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
	    atomic_read(&warned))
		return;

	raw_spin_lock_irqsave_rcu_node(rnp, flags);
	j = jiffies;
	if (rcu_gp_in_progress() ||
	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
	    time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
	    atomic_read(&warned)) {
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
		return;
	}
	/* Hold onto the leaf lock to make others see warned==1. */

	if (rnp_root != rnp)
		raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */
	j = jiffies;
	if (rcu_gp_in_progress() ||
	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
	    time_before(j, rcu_state.gp_req_activity + gpssdelay) ||
	    time_before(j, rcu_state.gp_activity + gpssdelay) ||
	    atomic_xchg(&warned, 1)) {
		raw_spin_unlock_rcu_node(rnp_root); /* irqs remain disabled. */
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
		return;
	}
	WARN_ON(1);
	if (rnp_root != rnp)
		raw_spin_unlock_rcu_node(rnp_root);
	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	show_rcu_gp_kthreads();
}

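rcu_check_gp_start_stall() filters cheaply without any lock, re-tests under the lock, and finally uses atomic_xchg() so that at most one CPU ever emits the warning. A minimal pthread-based sketch of this lock-recheck-warn-once shape, under the assumption that a caller-supplied still_stalled() predicate stands in for the grace-period checks:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool warned;

/* Returns true for exactly one caller, and only if the stall persists. */
static bool maybe_warn_once(bool (*still_stalled)(void))
{
	bool do_warn = false;

	if (atomic_load(&warned) || !still_stalled())
		return false;                   /* cheap unlocked filter */
	pthread_mutex_lock(&lock);
	if (still_stalled() && !atomic_exchange(&warned, true))
		do_warn = true;                 /* we won the right to warn */
	pthread_mutex_unlock(&lock);
	return do_warn;
}

The unlocked test keeps the common (no-stall) path free of lock contention; the locked re-test plus the exchange make the warning both race-free and one-shot.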
/*
 * Do a forward-progress check for rcutorture.  This is normally invoked
 * due to an OOM event.  The argument "j" gives the time period during
 * which rcutorture would like progress to have been made.
 */
void rcu_fwd_progress_check(unsigned long j)
{
	unsigned long cbs;
	int cpu;
	unsigned long max_cbs = 0;
	int max_cpu = -1;
	struct rcu_data *rdp;

	if (rcu_gp_in_progress()) {
		pr_info("%s: GP age %lu jiffies\n",
			__func__, jiffies - rcu_state.gp_start);
		show_rcu_gp_kthreads();
	} else {
		pr_info("%s: Last GP end %lu jiffies ago\n",
			__func__, jiffies - rcu_state.gp_end);
		preempt_disable();
		rdp = this_cpu_ptr(&rcu_data);
		rcu_check_gp_start_stall(rdp->mynode, rdp, j);
		preempt_enable();
	}
	for_each_possible_cpu(cpu) {
		cbs = rcu_get_n_cbs_cpu(cpu);
		if (!cbs)
			continue;
		if (max_cpu < 0)
			pr_info("%s: callbacks", __func__);
		pr_cont(" %d: %lu", cpu, cbs);
		if (cbs <= max_cbs)
			continue;
		max_cbs = cbs;
		max_cpu = cpu;
	}
	if (max_cpu >= 0)
		pr_cont("\n");
}
EXPORT_SYMBOL_GPL(rcu_fwd_progress_check);

/* Commandeer a sysrq key to dump RCU's tree. */
static bool sysrq_rcu;
module_param(sysrq_rcu, bool, 0444);

/* Dump grace-period-request information due to commandeered sysrq. */
static void sysrq_show_rcu(int key)
{
	show_rcu_gp_kthreads();
}

static struct sysrq_key_op sysrq_rcudump_op = {
	.handler = sysrq_show_rcu,
	.help_msg = "show-rcu(y)",
	.action_msg = "Show RCU tree",
	.enable_mask = SYSRQ_ENABLE_DUMP,
};

static int __init rcu_sysrq_init(void)
{
	if (sysrq_rcu)
		return register_sysrq_key('y', &sysrq_rcudump_op);
	return 0;
}
early_initcall(rcu_sysrq_init);

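Once registered, the commandeered key behaves like any other magic-sysrq key: on a kernel built with CONFIG_MAGIC_SYSRQ=y and booted with sysrq_rcu=1, writing the key character to the trigger file (for example, echo y > /proc/sysrq-trigger) invokes sysrq_show_rcu() and prints the same grace-period state that show_rcu_gp_kthreads() produces. The module parameter is read-only (mode 0444) by design, so the key binding is decided once at boot rather than fought over at runtime.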
@@ -424,68 +424,11 @@ EXPORT_SYMBOL_GPL(do_trace_rcu_torture_read);
 #endif
 
 #ifdef CONFIG_RCU_STALL_COMMON
 
-#ifdef CONFIG_PROVE_RCU
-#define RCU_STALL_DELAY_DELTA (5 * HZ)
-#else
-#define RCU_STALL_DELAY_DELTA 0
-#endif
-
 int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */
 EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress);
-static int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
-
 module_param(rcu_cpu_stall_suppress, int, 0644);
+int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
 module_param(rcu_cpu_stall_timeout, int, 0644);
-
-int rcu_jiffies_till_stall_check(void)
-{
-	int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout);
-
-	/*
-	 * Limit check must be consistent with the Kconfig limits
-	 * for CONFIG_RCU_CPU_STALL_TIMEOUT.
-	 */
-	if (till_stall_check < 3) {
-		WRITE_ONCE(rcu_cpu_stall_timeout, 3);
-		till_stall_check = 3;
-	} else if (till_stall_check > 300) {
-		WRITE_ONCE(rcu_cpu_stall_timeout, 300);
-		till_stall_check = 300;
-	}
-	return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
-}
-EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
-
-void rcu_sysrq_start(void)
-{
-	if (!rcu_cpu_stall_suppress)
-		rcu_cpu_stall_suppress = 2;
-}
-
-void rcu_sysrq_end(void)
-{
-	if (rcu_cpu_stall_suppress == 2)
-		rcu_cpu_stall_suppress = 0;
-}
-
-static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr)
-{
-	rcu_cpu_stall_suppress = 1;
-	return NOTIFY_DONE;
-}
-
-static struct notifier_block rcu_panic_block = {
-	.notifier_call = rcu_panic,
-};
-
-static int __init check_cpu_stall_init(void)
-{
-	atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block);
-	return 0;
-}
-early_initcall(check_cpu_stall_init);
-
 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
 
 #ifdef CONFIG_TASKS_RCU

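The removed rcu_jiffies_till_stall_check() moves with the rest of the stall-warning machinery in this series rather than disappearing. Its arithmetic is simple: clamp the rcu_cpu_stall_timeout module parameter to the Kconfig range [3, 300] seconds, then convert to jiffies. A userspace sketch of that computation (an assumption for illustration; HZ and the PROVE_RCU delta are hard-coded here):

#include <stdio.h>

#define HZ 1000
#define RCU_STALL_DELAY_DELTA 0 /* (5 * HZ) when CONFIG_PROVE_RCU=y */

static int jiffies_till_stall_check(int timeout_s)
{
	/* The kernel also writes the clamped value back to the parameter. */
	if (timeout_s < 3)
		timeout_s = 3;
	else if (timeout_s > 300)
		timeout_s = 300;
	return timeout_s * HZ + RCU_STALL_DELAY_DELTA;
}

int main(void)
{
	/* Default CONFIG_RCU_CPU_STALL_TIMEOUT of 21 s at HZ=1000: 21000 jiffies. */
	printf("%d\n", jiffies_till_stall_check(21));
	return 0;
}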
@@ -88,6 +88,8 @@ bool torture_offline(int cpu, long *n_offl_attempts, long *n_offl_successes,
 
 	if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu))
 		return false;
+	if (num_online_cpus() <= 1)
+		return false;  /* Can't offline the last CPU. */
 
 	if (verbose > 1)
 		pr_alert("%s" TORTURE_FLAG

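The added check closes a real gap: cpu_is_hotpluggable() only says that a given CPU may in principle be offlined, so with every other CPU already down the torture machinery could previously try to take out the final one. Refusing whenever num_online_cpus() <= 1 keeps at least one CPU running the test.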
@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Extract the number of CPUs expected from the specified Kconfig-file
 # fragment by checking CONFIG_SMP and CONFIG_NR_CPUS.  If the specified
@@ -7,23 +8,9 @@
 #
 # Usage: configNR_CPUS.sh config-frag
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 cf=$1
 if test ! -r $cf

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # config_override.sh base override
 #
@@ -6,23 +7,9 @@
 # that conflict with any in override, concatenating what remains and
 # sending the result to standard output.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2017
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 base=$1
 if test -r $base

@@ -1,23 +1,11 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
+#
 # Usage: configcheck.sh .config .config-template
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 T=${TMPDIR-/tmp}/abat-chk-config.sh.$$
 trap 'rm -rf $T' 0
@@ -26,6 +14,7 @@ mkdir $T
 cat $1 > $T/.config
 
 cat $2 | sed -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' |
+grep -v '^CONFIG_INITRAMFS_SOURCE' |
 awk '
 {
 		print "if grep -q \"" $0 "\" < '"$T/.config"'";

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Usage: configinit.sh config-spec-file build-output-dir results-dir
 #
@@ -14,23 +15,9 @@
 # for example, "O=/tmp/foo".  If this argument is omitted, the .config
 # file will be generated directly in the current directory.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 T=${TMPDIR-/tmp}/configinit.sh.$$
 trap 'rm -rf $T' 0

@@ -1,26 +1,13 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Get an estimate of how CPU-hoggy to be.
 #
 # Usage: cpus2use.sh
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 ncpus=`grep '^processor' /proc/cpuinfo | wc -l`
 idlecpus=`mpstat | tail -1 | \

@@ -1,24 +1,11 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Shell functions for the rest of the scripts.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 # bootparam_hotplug_cpu bootparam-string
 #

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Alternate sleeping and spinning on randomly selected CPUs.  The purpose
 # of this script is to inflict random OS jitter on a concurrently running
@@ -11,23 +12,9 @@
 # sleepmax: Maximum microseconds to sleep, defaults to one second.
 # spinmax: Maximum microseconds to spin, defaults to one millisecond.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2016
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 me=$(($1 * 1000))
 duration=$2

@@ -1,26 +1,13 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Build a kvm-ready Linux kernel from the tree in the current directory.
 #
 # Usage: kvm-build.sh config-template build-dir resdir
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 config_template=${1}
 if test -z "$config_template" -o ! -f "$config_template" -o ! -r "$config_template"

@@ -1,4 +1,5 @@
 #!/bin/sh
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Invoke a text editor on all console.log files for all runs with diagnostics,
 # that is, on all such files having a console.log.diags counterpart.
@@ -10,6 +11,10 @@
 #
 # The "directory" above should end with the date/time directory, for example,
 # "tools/testing/selftests/rcutorture/res/2018.02.25-14:27:27".
+#
+# Copyright (C) IBM Corporation, 2018
+#
+# Author: Paul E. McKenney <paulmck@linux.ibm.com>
 
 rundir="${1}"
 if test -z "$rundir" -o ! -d "$rundir"

@@ -1,26 +1,13 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Analyze a given results directory for locktorture progress.
 #
 # Usage: kvm-recheck-lock.sh resdir
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2014
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 i="$1"
 if test -d "$i" -a -r "$i"

@@ -1,26 +1,13 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Analyze a given results directory for rcutorture progress.
 #
 # Usage: kvm-recheck-rcu.sh resdir
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2014
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 i="$1"
 if test -d "$i" -a -r "$i"

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Analyze a given results directory for rcuperf performance measurements,
 # looking for ftrace data.  Exits with 0 if data was found, analyzed, and
@@ -7,23 +8,9 @@
 #
 # Usage: kvm-recheck-rcuperf-ftrace.sh resdir
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2016
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 i="$1"
 . functions.sh

@@ -1,26 +1,13 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Analyze a given results directory for rcuperf performance measurements.
 #
 # Usage: kvm-recheck-rcuperf.sh resdir
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2016
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 i="$1"
 if test -d "$i" -a -r "$i"

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Given the results directories for previous KVM-based torture runs,
 # check the build and console output for errors.  Given a directory
@@ -6,23 +7,9 @@
 #
 # Usage: kvm-recheck.sh resdir ...
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
 . functions.sh

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Run a kvm-based test of the specified tree on the specified configs.
 # Fully automated run and error checking, no graphics console.
@@ -20,23 +21,9 @@
 #
 # More sophisticated argument parsing is clearly needed.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 T=${TMPDIR-/tmp}/kvm-test-1-run.sh.$$
 trap 'rm -rf $T' 0

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Run a series of tests under KVM.  By default, this series is specified
 # by the relevant CFLIST file, but can be overridden by the --configs
@@ -6,23 +7,9 @@
 #
 # Usage: kvm.sh [ options ]
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 scriptname=$0
 args="$*"

@@ -1,21 +1,8 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Create an initrd directory if one does not already exist.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
 # Author: Connor Shu <Connor.Shu@ibm.com>

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Check the build output from an rcutorture run for goodness.
 # The "file" is a pathname on the local system, and "title" is
@@ -8,23 +9,9 @@
 #
 # Usage: parse-build.sh file title
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 F=$1
 title=$2

@@ -1,4 +1,5 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Check the console output from an rcutorture run for oopses.
 # The "file" is a pathname on the local system, and "title" is
@@ -6,23 +7,9 @@
 #
 # Usage: parse-console.sh file title
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2011
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 T=${TMPDIR-/tmp}/parse-console.sh.$$
 file="$1"

@@ -1,24 +1,11 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Kernel-version-dependent shell functions for the rest of the scripts.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2014
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 # locktorture_param_onoff bootparam-string config-file
 #

@@ -1,24 +1,11 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Kernel-version-dependent shell functions for the rest of the scripts.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2013
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 # rcutorture_param_n_barrier_cbs bootparam-string
 #

@@ -1,24 +1,11 @@
 #!/bin/bash
+# SPDX-License-Identifier: GPL-2.0+
 #
 # Torture-suite-dependent shell functions for the rest of the scripts.
 #
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
 # Copyright (C) IBM Corporation, 2015
 #
-# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
 
 # per_version_boot_params bootparam-string config-file seconds
 #
