.. _NMI_rcu_doc:

Using RCU to Protect Dynamic NMI Handlers
=========================================

Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers.  This document describes
how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
work in "arch/x86/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation::

	static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
	{
		return 0;
	}

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action::

	static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler::

	void do_nmi(struct pt_regs *regs, long error_code)
	{
		int cpu;

		nmi_enter();

		cpu = smp_processor_id();
		++nmi_count(cpu);

		if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
			default_do_nmi(regs);

		nmi_exit();
	}

The do_nmi() function processes each NMI.  It first disables preemption
in the same way that a hardware irq would, then increments the per-CPU
count of NMIs.  It then invokes the NMI handler stored in the nmi_callback
function pointer.  If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI.  Finally,
preemption is restored.
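
The same dispatch pattern can be sketched in userspace, outside the
kernel.  The following is a minimal analogue with hypothetical names
(dummy_cb(), dispatch(), default_action() and so on are illustrative,
not kernel APIs); a C11 atomic load stands in for rcu_dereference_sched(),
since plain userspace code has no RCU read-side primitives::

	#include <assert.h>
	#include <stdatomic.h>
	#include <stdio.h>

	typedef int (*cb_t)(int event);

	/* "Dummy" callback: does nothing and says so by returning zero. */
	static int dummy_cb(int event) { (void)event; return 0; }

	/* Global pointer to the current callback, like nmi_callback. */
	static _Atomic cb_t callback = dummy_cb;

	/* Default machine-specific action, like default_do_nmi(). */
	static int default_action(int event) { return 100 + event; }

	/* Like do_nmi(): invoke the callback; fall back if it declines. */
	static int dispatch(int event)
	{
		cb_t cb = atomic_load_explicit(&callback,
					       memory_order_acquire);
		int handled = cb(event);

		return handled ? handled : default_action(event);
	}

	/* A real handler that claims only event 7. */
	static int my_cb(int event) { return event == 7 ? 7 : 0; }

	int main(void)
	{
		assert(dispatch(7) == 107);	/* dummy declines: fallback */
		atomic_store_explicit(&callback, my_cb,
				      memory_order_release);
		assert(dispatch(7) == 7);	/* new callback handles it */
		assert(dispatch(3) == 103);	/* callback declines: fallback */
		printf("ok\n");
		return 0;
	}

Unlike the kernel code, this sketch returns a value from the fallback
path purely so the behavior can be checked; the structure of the
pointer-based dispatch is what corresponds to do_nmi().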

In theory, rcu_dereference_sched() is not needed, since this code runs
only on i386, which in theory does not need rcu_dereference_sched()
anyway.  However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.

Quick Quiz:
	Why might the rcu_dereference_sched() be necessary on Alpha, given
	that the code referenced by the pointer is read-only?

:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`

Back to the discussion of NMI and RCU::

	void set_nmi_callback(nmi_callback_t callback)
	{
		rcu_assign_pointer(nmi_callback, callback);
	}

The set_nmi_callback() function registers an NMI handler.  Note that any
data that is to be used by the callback must be initialized *before*
the call to set_nmi_callback().  On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values::

	void unset_nmi_callback(void)
	{
		rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
	}

This function unregisters an NMI handler, restoring the original
dummy_nmi_callback().  However, there may well be an NMI handler
currently executing on some other CPU.  We therefore cannot free
up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_rcu(), perhaps as
follows::

	unset_nmi_callback();
	synchronize_rcu();
	kfree(my_nmi_data);

This works because (as of v4.20) synchronize_rcu() blocks until all
CPUs complete any preemption-disabled segments of code that they were
executing.  Since NMI handlers disable preemption, synchronize_rcu()
is guaranteed not to return until all ongoing NMI handlers exit.  It
is therefore safe to free up the handler's data as soon as
synchronize_rcu() returns.
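
The unpublish / wait / free ordering above can be sketched in userspace
with hypothetical names.  Here a simple active-reader counter and a
spin-wait (wait_for_readers()) stand in for synchronize_rcu(), and an
atomic pointer store stands in for unset_nmi_callback(); this is only a
single-threaded illustration of the ordering, not a real grace-period
implementation::

	#include <assert.h>
	#include <stdatomic.h>
	#include <stdlib.h>

	static atomic_int active_readers;

	static void reader_enter(void) { atomic_fetch_add(&active_readers, 1); }
	static void reader_exit(void)  { atomic_fetch_sub(&active_readers, 1); }

	/* Stand-in for synchronize_rcu(): wait out all current readers. */
	static void wait_for_readers(void)
	{
		while (atomic_load(&active_readers) != 0)
			;	/* a real implementation would sleep */
	}

	static _Atomic(int *) shared_data;

	int main(void)
	{
		int *my_data = malloc(sizeof(*my_data));

		*my_data = 42;
		atomic_store(&shared_data, my_data);	/* publish */

		/* Teardown, in the order the text prescribes: */
		atomic_store(&shared_data, NULL);	/* unpublish    */
		wait_for_readers();			/* grace period */
		free(my_data);				/* now safe     */

		assert(atomic_load(&shared_data) == NULL);
		return 0;
	}

Freeing before wait_for_readers() returned would be the userspace
equivalent of calling kfree() before synchronize_rcu(): a reader that
fetched the old pointer could still be dereferencing freed memory.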

Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.

.. _answer_quick_quiz_NMI:

Answer to Quick Quiz:
	Why might the rcu_dereference_sched() be necessary on Alpha, given
	that the code referenced by the pointer is read-only?

	The caller to set_nmi_callback() might well have
	initialized some data that is to be used by the new NMI
	handler.  In this case, the rcu_dereference_sched() would
	be needed, because otherwise a CPU that received an NMI
	just after the new handler was set might see the pointer
	to the new NMI handler, but the old pre-initialized
	version of the handler's data.

	This same sad story can happen on other CPUs when using
	a compiler with aggressive pointer-value speculation
	optimizations.

	More important, the rcu_dereference_sched() makes it
	clear to someone reading the code that the pointer is
	being protected by RCU-sched.
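
The publication guarantee described in this answer can be sketched in
userspace C11, again with hypothetical names: a release store stands in
for rcu_assign_pointer() and an acquire load for rcu_dereference_sched(),
so a reader that sees the new handler pointer is also guaranteed to see
the handler's initialized data::

	#include <assert.h>
	#include <stdatomic.h>
	#include <stddef.h>

	struct handler {
		int data_ready;	/* stands in for the handler's data */
	};

	static struct handler new_handler;
	static _Atomic(struct handler *) current_handler;

	static void install(struct handler *h)
	{
		h->data_ready = 1;	/* initialize before publishing */
		/* rcu_assign_pointer() analogue: release orders the init. */
		atomic_store_explicit(&current_handler, h,
				      memory_order_release);
	}

	static int invoke(void)
	{
		/* rcu_dereference_sched() analogue: acquire pairs with
		 * the release above. */
		struct handler *h = atomic_load_explicit(&current_handler,
							 memory_order_acquire);
		return h ? h->data_ready : -1;
	}

	int main(void)
	{
		assert(invoke() == -1);	/* nothing published yet */
		install(&new_handler);
		assert(invoke() == 1);	/* sees initialized data */
		return 0;
	}

With a plain (non-acquire) load and no address-dependency ordering, the
Alpha scenario from the answer could occur: the reader sees the new
pointer but stale contents behind it.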