.. SPDX-License-Identifier: GPL-2.0

==============================
Using RCU's CPU Stall Detector
==============================

This document first discusses what sorts of issues RCU's CPU stall
detector can locate, and then discusses kernel parameters and Kconfig
options that can be used to fine-tune the detector's operation. Finally,
this document explains the stall detector's "splat" format.

What Causes RCU CPU Stall Warnings?
===================================

So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:

- A CPU looping in an RCU read-side critical section.

- A CPU looping with interrupts disabled.

- A CPU looping with preemption disabled.

- A CPU looping with bottom halves disabled.

- For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the
  kernel without potentially invoking schedule(). If the looping
  in the kernel is really expected and desirable behavior, you
  might need to add some calls to cond_resched(), as shown in the
  sketch following this list.

- Booting Linux using a console connection that is too slow to
  keep up with the boot-time console-message rate. For example,
  a 115Kbaud serial console can be *way* too slow to keep up
  with boot-time message rates, and will frequently result in
  RCU CPU stall warning messages, especially if you have added
  debug printk()s.

- Anything that prevents RCU's grace-period kthreads from running.
  This can result in the "All QSes seen" console-log message.
  This message will include information on when the kthread last
  ran and how often it should be expected to run. It can also
  result in the ``rcu_.*kthread starved for`` console-log message,
  which will include additional debugging information.

- A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
  happen to preempt a low-priority task in the middle of an RCU
  read-side critical section. This is especially damaging if
  that low-priority task is not permitted to run on any other CPU,
  in which case the next RCU grace period can never complete, which
  will eventually cause the system to run out of memory and hang.
  While the system is in the process of running itself out of
  memory, you might see stall-warning messages.

- A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
  is running at a higher priority than the RCU softirq threads.
  This will prevent RCU callbacks from ever being invoked,
  and in a CONFIG_PREEMPT_RCU kernel will further prevent
  RCU grace periods from ever completing. Either way, the
  system will eventually run out of memory and hang. In the
  CONFIG_PREEMPT_RCU case, you might see stall-warning
  messages.

  You can use the rcutree.kthread_prio kernel boot parameter to
  increase the scheduling priority of RCU's kthreads, which can
  help avoid this problem. However, please note that doing this
  can increase your system's context-switch rate and thus degrade
  performance.

- A periodic interrupt whose handler takes longer than the time
  interval between successive pairs of interrupts. This can
  prevent RCU's kthreads and softirq handlers from running.
  Note that certain high-overhead debugging options, for example
  the function_graph tracer, can result in interrupt handlers taking
  considerably longer than normal, which can in turn result in
  RCU CPU stall warnings.

- Testing a workload on a fast system, tuning the stall-warning
  timeout down to just barely avoid RCU CPU stall warnings, and then
  running the same workload with the same stall-warning timeout on a
  slow system. Note that thermal throttling and on-demand governors
  can cause a single system to be sometimes fast and sometimes slow!

- A hardware or software issue shuts off the scheduler-clock
  interrupt on a CPU that is not in dyntick-idle mode. This
  problem really has happened, and seems to be most likely to
  result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.

- A hardware or software issue that prevents time-based wakeups
  from occurring. These issues can range from misconfigured or
  buggy timer hardware through bugs in the interrupt or exception
  path (whether hardware, firmware, or software) through bugs
  in Linux's timer subsystem through bugs in the scheduler, and,
  yes, even including bugs in RCU itself. It can also result in
  the ``rcu_.*timer wakeup didn't happen for`` console-log message,
  which will include additional debugging information.

- A low-level kernel issue that either fails to invoke one of the
  variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
  ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
  hand, or that invokes one of them too many times on the other.
  Historically, the most frequent issue has been an omission
  of either irq_enter() or irq_exit(), which in turn invoke
  ct_irq_enter() or ct_irq_exit(), respectively. Building your
  kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types
  of issues, which sometimes arise in architecture-specific code.

- A bug in the RCU implementation.

- A hardware failure. This is quite unlikely, but is not at all
  uncommon in large datacenters. In one memorable case some decades
  back, a CPU failed in a running system, becoming unresponsive,
  but not causing an immediate crash. This resulted in a series
  of RCU CPU stall warnings, eventually leading to the realization
  that the CPU had failed.
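
Returning to the cond_resched() item above, the following is a minimal,
hypothetical sketch (the function and its process_one_item() helper are
made up for illustration) of how a long-running in-kernel loop might
provide the quiescent states that a !CONFIG_PREEMPTION kernel would
otherwise never see::

    #include <linux/sched.h>    /* for cond_resched() */

    /* Hypothetical example: a long-running loop in kernel context. */
    static void process_many_items(unsigned long *items, unsigned long n)
    {
        unsigned long i;

        for (i = 0; i < n; i++) {
            process_one_item(items[i]);    /* assumed per-item helper */

            /*
             * Give the scheduler (and thus RCU) a chance.  Note that
             * cond_resched() must not be called with preemption or
             * interrupts disabled, nor from within an RCU read-side
             * critical section.
             */
            cond_resched();
        }
    }

Without the cond_resched() call, a sufficiently large value of n on a
!CONFIG_PREEMPTION kernel could keep this CPU from reaching a quiescent
state for long enough to trigger an RCU CPU stall warning.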

The RCU, RCU-sched, RCU-tasks, and RCU-tasks-trace implementations have
CPU stall warnings. Note that SRCU does *not* have CPU stall warnings.
Please note that RCU only detects CPU stalls when there is a grace period
in progress. No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing. For information on RCU's event tracing,
see include/trace/events/rcu.h.

Fine-Tuning the RCU CPU Stall Detector
======================================

The rcupdate.rcu_cpu_stall_suppress module parameter disables RCU's
CPU stall detector, which detects conditions that unduly delay RCU grace
periods. This module parameter enables CPU stall detection by default,
but may be overridden via boot-time parameter or at runtime via sysfs.
The stall detector's idea of what constitutes "unduly delayed" is
controlled by a set of kernel configuration variables and cpp macros:

CONFIG_RCU_CPU_STALL_TIMEOUT
----------------------------

This kernel configuration parameter defines the period of time
that RCU will wait from the beginning of a grace period until it
issues an RCU CPU stall warning. This time period is normally
21 seconds.

This configuration parameter may be changed at runtime via the
/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout, however
this parameter is checked only at the beginning of a cycle.
So if you are 10 seconds into a 40-second stall, setting this
sysfs parameter to (say) five will shorten the timeout for the
*next* stall, or the following warning for the current stall
(assuming the stall lasts long enough). It will not affect the
timing of the next warning for the current stall.
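
If you prefer updating this parameter from a program rather than from the
shell, the following userspace sketch (a hypothetical example that simply
writes the value of five suggested above, and that must run as root)
writes to the same sysfs file::

    /* Hypothetical userspace helper: shorten the RCU CPU stall timeout. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "5\n");    /* new timeout in seconds */
        return fclose(f) ? 1 : 0;
    }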

Stall-warning messages may be enabled and disabled completely via
/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

CONFIG_RCU_EXP_CPU_STALL_TIMEOUT
--------------------------------

Same as the CONFIG_RCU_CPU_STALL_TIMEOUT parameter but only for
the expedited grace period. This parameter defines the period
of time that RCU will wait from the beginning of an expedited
grace period until it issues an RCU CPU stall warning. This time
period is normally 20 milliseconds on Android devices. A zero
value causes the CONFIG_RCU_CPU_STALL_TIMEOUT value to be used,
after conversion to milliseconds.

This configuration parameter may be changed at runtime via the
/sys/module/rcupdate/parameters/rcu_exp_cpu_stall_timeout, however
this parameter is checked only at the beginning of a cycle. If you
are in a current stall cycle, setting it to a new value will change
the timeout for the *next* stall.

Stall-warning messages may be enabled and disabled completely via
/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

RCU_STALL_DELAY_DELTA
---------------------

Although the lockdep facility is extremely useful, it does add
some overhead. Therefore, under CONFIG_PROVE_RCU, the
RCU_STALL_DELAY_DELTA macro allows five extra seconds before
giving an RCU CPU stall warning message. (This is a cpp
macro, not a kernel configuration parameter.)

RCU_STALL_RAT_DELAY
-------------------

The CPU stall detector tries to make the offending CPU print its
own warnings, as this often gives better-quality stack traces.
However, if the offending CPU does not detect its own stall in
the number of jiffies specified by RCU_STALL_RAT_DELAY, then
some other CPU will complain. This delay is normally set to
two jiffies. (This is a cpp macro, not a kernel configuration
parameter.)

rcupdate.rcu_task_stall_timeout
-------------------------------

This boot/sysfs parameter controls the RCU-tasks and
RCU-tasks-trace stall warning intervals. A value of zero or less
suppresses RCU-tasks stall warnings. A positive value sets the
stall-warning interval in seconds. An RCU-tasks stall warning
starts with the line::

    INFO: rcu_tasks detected stalls on tasks:

And continues with the output of sched_show_task() for each
task stalling the current RCU-tasks grace period.

An RCU-tasks-trace stall warning starts (and continues) similarly::

    INFO: rcu_tasks_trace detected stalls on tasks

Interpreting RCU's CPU Stall-Detector "Splats"
==============================================

For non-RCU-tasks flavors of RCU, when a CPU detects that some other
CPU is stalling, it will print a message similar to the following::

    INFO: rcu_sched detected stalls on CPUs/tasks:
    2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
    16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
    (detected by 32, t=2603 jiffies, g=7075, q=625)

This message indicates that CPU 32 detected that CPUs 2 and 16 were both
causing stalls, and that the stall was affecting RCU-sched. This message
will normally be followed by stack dumps for each CPU. Please note that
PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
the tasks will be indicated by PID, for example, "P3421". It is even
possible for an rcu_state stall to be caused by both CPUs *and* tasks,
in which case the offending CPUs and tasks will all be called out in the
list. In some cases, CPUs will detect themselves stalling, which will
result in a self-detected stall.

CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
the RCU core for the past three grace periods. In contrast, CPU 16's "(0
ticks this GP)" indicates that this CPU has not taken any scheduling-clock
interrupts during the current stalled grace period.
The "idle=" portion of the message prints the dyntick-idle state.
|
|
|
|
The hex number before the first "/" is the low-order 12 bits of the
|
2017-08-18 03:29:22 +08:00
|
|
|
dynticks counter, which will have an even-numbered value if the CPU
|
|
|
|
is in dyntick-idle mode and an odd-numbered value otherwise. The hex
|
|
|
|
number between the two "/"s is the value of the nesting, which will be
|
|
|
|
a small non-negative number if in the idle loop (as shown above) and a
|
2022-11-05 05:39:32 +08:00
|
|
|
very large positive number otherwise. The number following the final
|
|
|
|
"/" is the NMI nesting, which will be a small non-negative number.
|
2012-01-21 09:35:55 +08:00
|
|
|
|
2013-03-07 05:37:09 +08:00
|
|
|
The "softirq=" portion of the message tracks the number of RCU softirq
|
|
|
|
handlers that the stalled CPU has executed. The number before the "/"
|
|
|
|
is the number that had executed since boot at the time that this CPU
|
|
|
|
last noted the beginning of a grace period, which might be the current
|
|
|
|
(stalled) grace period, or it might be some earlier grace period (for
|
|
|
|
example, if the CPU might have been in dyntick-idle mode for an extended
|
2021-05-25 17:31:52 +08:00
|
|
|
time period). The number after the "/" is the number that have executed
|
2013-03-07 05:37:09 +08:00
|
|
|
since boot until the current time. If this latter number stays constant
|
|
|
|
across repeated stall-warning messages, it is possible that RCU's softirq
|
|
|
|
handlers are no longer able to execute on this CPU. This can happen if
|
|
|
|
the stalled CPU is spinning with interrupts are disabled, or, in -rt
|
|
|
|
kernels, if a high-priority process is starving RCU's softirq handler.
|
|
|
|
|
2018-10-30 13:15:59 +08:00
|
|
|
The "fqs=" shows the number of force-quiescent-state idle/offline
|
2017-08-18 03:29:22 +08:00
|
|
|
detection passes that the grace-period kthread has made across this
|
|
|
|
CPU since the last time that this CPU noted the beginning of a grace
|
|
|
|
period.
|
|
|
|
|
|
|
|
The "detected by" line indicates which CPU detected the stall (in this
|
2018-05-03 03:39:42 +08:00
|
|
|
case, CPU 32), how many jiffies have elapsed since the start of the grace
|
|
|
|
period (in this case 2603), the grace-period sequence number (7075), and
|
|
|
|
an estimate of the total number of RCU callbacks queued across all CPUs
|
|
|
|
(625 in this case).
|
2017-08-18 03:29:22 +08:00
|
|
|
|
|
|
|

If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
the following::

    INFO: Stall ended before state dump start

This is rare, but does happen from time to time in real life. It is also
possible for a zero-jiffy stall to be flagged in this case, depending
on how the stall warning and the grace-period initialization happen to
interact. Please note that it is not possible to entirely eliminate this
sort of false positive without resorting to things like stop_machine(),
which is overkill for this sort of problem.

If all CPUs and tasks have passed through quiescent states, but the
grace period has nevertheless failed to end, the stall-warning splat
will include something like the following::

    All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0

The "23807" indicates that it has been more than 23 thousand jiffies
since the grace-period kthread ran. The "jiffies_till_next_fqs"
indicates how frequently that kthread should run, giving the number
of jiffies between force-quiescent-state scans, in this case three,
which is way less than 23807. Finally, the root rcu_node structure's
->qsmask field is printed, which will normally be zero.

If the relevant grace-period kthread has been unable to run prior to
the stall warning, as was the case in the "All QSes seen" line above,
the following additional line is printed::

    rcu_sched kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
    Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.

Starving the grace-period kthreads of CPU time can of course result
in RCU CPU stall warnings even when all CPUs and tasks have passed
through the required quiescent states. The "g" number shows the current
grace-period sequence number, the "f" precedes the ->gp_flags command
to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
kthread is waiting for a short timeout, the "state" precedes the value
of the task_struct ->state field, and the "cpu" indicates that the
grace-period kthread last ran on CPU 5.

If the relevant grace-period kthread does not wake from FQS wait in a
reasonable time, then the following additional line is printed::

    kthread timer wakeup didn't happen for 23804 jiffies! g7076 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402

The "23804" indicates that the kthread's timer expired more than 23
thousand jiffies ago. The rest of the line has meaning similar to the
kthread starvation case.

Additionally, the following line is printed::

    Possible timer handling issue on cpu=4 timer-softirq=11142

Here "cpu" indicates that the grace-period kthread last ran on CPU 4,
where it queued the fqs timer. The number following the "timer-softirq"
is the current ``TIMER_SOFTIRQ`` count on cpu 4. If this value does not
change on successive RCU CPU stall warnings, there is further reason to
suspect a timer problem.

These messages are usually followed by stack dumps of the CPUs and tasks
involved in the stall. These stack traces can help you locate the cause
of the stall, keeping in mind that the CPU detecting the stall will have
an interrupt frame that is mainly devoted to detecting the stall.

Multiple Warnings From One Stall
================================

If a stall lasts long enough, multiple stall-warning messages will
be printed for it. The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message. It can be helpful to compare the
stack dumps for the different messages for the same stalled grace period.

Stall Warnings for Expedited Grace Periods
==========================================

If an expedited grace period detects a stall, it will place a message
like the following in dmesg::

    INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.

This indicates that CPU 7 has failed to respond to a reschedule IPI.
The three periods (".") following the CPU number indicate that the CPU
is online (otherwise the first period would instead have been "O"),
that the CPU was online at the beginning of the expedited grace period
(otherwise the second period would have instead been "o"), and that
the CPU has been online at least once since boot (otherwise, the third
period would instead have been "N"). The number before the "jiffies"
indicates that the expedited grace period has been going on for 21,119
jiffies. The number following the "s:" indicates that the expedited
grace-period sequence counter is 73. The fact that this last value is
odd indicates that an expedited grace period is in flight. The number
following "root:" is a bitmask that indicates which children of the root
rcu_node structure correspond to CPUs and/or tasks that are blocking the
current expedited grace period. If the tree had more than one level,
additional hex numbers would be printed for the states of the other
rcu_node structures in the tree.

As with normal grace periods, PREEMPT_RCU builds can be stalled by
tasks as well as by CPUs, and the tasks will be indicated by PID,
for example, "P3421".

It is entirely possible to see stall warnings from normal and from
expedited grace periods at about the same time during the same run.

RCU_CPU_STALL_CPUTIME
=====================

In kernels built with CONFIG_RCU_CPU_STALL_CPUTIME=y or booted with
rcupdate.rcu_cpu_stall_cputime=1, the following additional information
is supplied with each RCU CPU stall warning::

    rcu:          hardirqs   softirqs   csw/system
    rcu:  number:      624         45            0
    rcu: cputime:       69          1         2425   ==> 2500(ms)

These statistics are collected during the sampling period. The values
in row "number:" are the number of hard interrupts, number of soft
interrupts, and number of context switches on the stalled CPU. The
first three values in row "cputime:" indicate the CPU time in
milliseconds consumed by hard interrupts, soft interrupts, and tasks
on the stalled CPU. The last number is the measurement interval, again
in milliseconds. Because user-mode tasks normally do not cause RCU CPU
stalls, these tasks are typically kernel tasks, which is why only the
system CPU time is considered.

The sampling period is shown as follows::

    |<------------first timeout---------->|<-----second timeout----->|
    |<--half timeout-->|<--half timeout-->|                          |
    |                  |<--first period-->|                          |
    |                  |<-----------second sampling period---------->|
    |                  |                  |                          |
    snapshot time point    1st-stall                  2nd-stall

The following describes four typical scenarios:

1. A CPU looping with interrupts disabled.

   ::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:        0          0            0
      rcu: cputime:        0          0            0   ==> 2500(ms)

   Because interrupts have been disabled throughout the measurement
   interval, there are no interrupts and no context switches.
   Furthermore, because CPU time consumption was measured using interrupt
   handlers, the system CPU consumption is misleadingly measured as zero.
   This scenario will normally also have "(0 ticks this GP)" printed on
   this CPU's summary line.

2. A CPU looping with bottom halves disabled.

   This is similar to the previous example, but with a non-zero number
   of (and CPU time consumed by) hard interrupts, along with non-zero
   CPU time consumed by in-kernel execution::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:      624          0            0
      rcu: cputime:       49          0         2446   ==> 2500(ms)

   The fact that there are zero softirqs gives a hint that these were
   disabled, perhaps via local_bh_disable(). It is of course possible
   that there were no softirqs, perhaps because all events that would
   result in softirq execution are confined to other CPUs. In this case,
   the diagnosis should continue as shown in the next example.

3. A CPU looping with preemption disabled.

   Here, only the number of context switches is zero::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:      624         45            0
      rcu: cputime:       69          1         2425   ==> 2500(ms)

   This situation hints that the stalled CPU was looping with preemption
   disabled.

4. No looping, but massive hard and soft interrupts.

   ::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:       xx         xx            0
      rcu: cputime:       xx         xx            0   ==> 2500(ms)

   Here, the number and CPU time of hard interrupts are all non-zero,
   but the number of context switches and the in-kernel CPU time consumed
   are zero. The number and cputime of soft interrupts will usually be
   non-zero, but could be zero, for example, if the CPU was spinning
   within a single hard interrupt handler.

If this type of RCU CPU stall warning can be reproduced, you can
narrow it down by looking at /proc/interrupts or by writing code to
trace each interrupt, for example, by referring to show_interrupts().
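
As one hypothetical example of such tracing (the handler name, device
pointer, and message format below are made up for illustration), a
suspect interrupt handler could be instrumented with trace_printk(),
whose output then appears in the ftrace buffer::

    /* Hypothetical sketch: log every invocation of a suspect handler. */
    #include <linux/interrupt.h>
    #include <linux/kernel.h>
    #include <linux/smp.h>

    static irqreturn_t my_device_irq(int irq, void *dev_id)
    {
        trace_printk("irq %d fired on CPU %d\n", irq, smp_processor_id());

        /* ... the handler's existing work would go here ... */

        return IRQ_HANDLED;
    }

Comparing the resulting trace against the stall-warning timestamps can
show whether a storm of these interrupts lines up with the stall.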