Merge branches 'doc.2018.08.30a', 'dynticks.2018.08.30b', 'srcu.2018.08.30b' and 'torture.2018.08.29a' into HEAD

doc.2018.08.30a: Documentation updates
dynticks.2018.08.30b: RCU flavor consolidation updates and cleanups
srcu.2018.08.30b: SRCU updates
torture.2018.08.29a: Torture-test updates
commit b56ada1209
@@ -2398,30 +2398,9 @@ when invoked from a CPU-hotplug notifier.
 <p>
 RCU depends on the scheduler, and the scheduler uses RCU to
 protect some of its data structures.
-This means the scheduler is forbidden from acquiring
-the runqueue locks and the priority-inheritance locks
-in the middle of an outermost RCU read-side critical section unless either
-(1) it releases them before exiting that same
-RCU read-side critical section, or
-(2) interrupts are disabled across
-that entire RCU read-side critical section.
-This same prohibition also applies (recursively!) to any lock that is acquired
-while holding any lock to which this prohibition applies.
-Adhering to this rule prevents preemptible RCU from invoking
-<tt>rcu_read_unlock_special()</tt> while either runqueue or
-priority-inheritance locks are held, thus avoiding deadlock.
-
-<p>
-Prior to v4.4, it was only necessary to disable preemption across
-RCU read-side critical sections that acquired scheduler locks.
-In v4.4, expedited grace periods started using IPIs, and these
-IPIs could force a <tt>rcu_read_unlock()</tt> to take the slowpath.
-Therefore, this expedited-grace-period change required disabling of
-interrupts, not just preemption.
-
-<p>
-For RCU's part, the preemptible-RCU <tt>rcu_read_unlock()</tt>
-implementation must be written carefully to avoid similar deadlocks.
+The preemptible-RCU <tt>rcu_read_unlock()</tt>
+implementation must therefore be written carefully to avoid deadlocks
+involving the scheduler's runqueue and priority-inheritance locks.
 In particular, <tt>rcu_read_unlock()</tt> must tolerate an
 interrupt where the interrupt handler invokes both
 <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>.
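
To make the tolerance requirement above concrete, here is a minimal hypothetical sketch (the handler and reader names are illustrative, not part of this patch): an interrupt handler that itself enters an RCU read-side critical section can fire while the task-level reader is executing rcu_read_unlock(), which is why the implementation briefly uses negative nesting levels.

	#include <linux/interrupt.h>
	#include <linux/rcupdate.h>

	/* Hypothetical interrupt handler that itself uses RCU. */
	static irqreturn_t my_irq_handler(int irq, void *dev_id)
	{
		rcu_read_lock();
		/* ... access RCU-protected data owned by this driver ... */
		rcu_read_unlock();
		return IRQ_HANDLED;
	}

	/* Task-level reader; the interrupt above may arrive inside rcu_read_unlock(). */
	static void my_task_level_reader(void)
	{
		rcu_read_lock();
		/* ... read-side work ... */
		rcu_read_unlock();
	}
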
@@ -2430,7 +2409,7 @@ negative nesting levels to avoid destructive recursion via
 interrupt handler's use of RCU.
 
 <p>
-This pair of mutual scheduler-RCU requirements came as a
+This scheduler-RCU requirement came as a
 <a href="https://lwn.net/Articles/453002/">complete surprise</a>.
 
 <p>
@@ -2441,9 +2420,28 @@ when running context-switch-heavy workloads when built with
 <tt>CONFIG_NO_HZ_FULL=y</tt>
 <a href="http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf">did come as a surprise [PDF]</a>.
 RCU has made good progress towards meeting this requirement, even
-for context-switch-have <tt>CONFIG_NO_HZ_FULL=y</tt> workloads,
+for context-switch-heavy <tt>CONFIG_NO_HZ_FULL=y</tt> workloads,
 but there is room for further improvement.
 
+<p>
+In the past, it was forbidden to disable interrupts across an
+<tt>rcu_read_unlock()</tt> unless that interrupt-disabled region
+of code also included the matching <tt>rcu_read_lock()</tt>.
+Violating this restriction could result in deadlocks involving the
+scheduler's runqueue and priority-inheritance spinlocks.
+This restriction was lifted when interrupt-disabled calls to
+<tt>rcu_read_unlock()</tt> started deferring the reporting of
+the resulting RCU-preempt quiescent state until the end of that
+interrupts-disabled region.
+This deferred reporting means that the scheduler's runqueue and
+priority-inheritance locks cannot be held while reporting an RCU-preempt
+quiescent state, which lifts the earlier restriction, at least from
+a deadlock perspective.
+Unfortunately, real-time systems using RCU priority boosting may
+need this restriction to remain in effect because deferred
+quiescent-state reporting also defers deboosting, which in turn
+degrades real-time latencies.
+
 <h3><a name="Tracing and RCU">Tracing and RCU</a></h3>
 
 <p>
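
A minimal sketch of the now-permitted pattern described above (the function name is hypothetical): interrupts may be disabled across rcu_read_unlock() even though the matching rcu_read_lock() lies outside the interrupts-disabled region, because the RCU-preempt quiescent state is simply reported later, once interrupts are re-enabled.

	#include <linux/irqflags.h>
	#include <linux/rcupdate.h>

	static void reader_unlocking_with_irqs_off(void)
	{
		unsigned long flags;

		rcu_read_lock();
		/* ... read-side accesses ... */
		local_irq_save(flags);
		rcu_read_unlock();	/* quiescent-state report is deferred */
		/* ... more work with interrupts disabled ... */
		local_irq_restore(flags);
	}
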
@@ -3595,7 +3595,14 @@
 Set required age in jiffies for a
 given grace period before RCU starts
 soliciting quiescent-state help from
-rcu_note_context_switch().
+rcu_note_context_switch(). If not specified, the
+kernel will calculate a value based on the most
+recent settings of rcutree.jiffies_till_first_fqs
+and rcutree.jiffies_till_next_fqs.
+This calculated value may be viewed in
+rcutree.jiffies_to_sched_qs. Any attempt to
+set rcutree.jiffies_to_sched_qs will be
+cheerfully overwritten.
 
 rcutree.jiffies_till_first_fqs= [KNL]
 Set delay from grace-period initialization to
@@ -3863,12 +3870,6 @@
 rcupdate.rcu_self_test= [KNL]
 Run the RCU early boot self tests
 
-rcupdate.rcu_self_test_bh= [KNL]
-Run the RCU bh early boot self tests
-
-rcupdate.rcu_self_test_sched= [KNL]
-Run the RCU sched early boot self tests
-
 rdinit= [KNL]
 Format: <full_path>
 Run specified binary instead of /init from the ramdisk,
@@ -182,7 +182,7 @@ static inline void list_replace_rcu(struct list_head *old,
 * @list: the RCU-protected list to splice
 * @prev: points to the last element of the existing list
 * @next: points to the first element of the existing list
-* @sync: function to sync: synchronize_rcu(), synchronize_sched(), ...
+* @sync: synchronize_rcu, synchronize_rcu_expedited, ...
 *
 * The list pointed to by @prev and @next can be RCU-read traversed
 * concurrently with this function.
@@ -240,7 +240,7 @@ static inline void __list_splice_init_rcu(struct list_head *list,
 * designed for stacks.
 * @list: the RCU-protected list to splice
 * @head: the place in the existing list to splice the first list into
-* @sync: function to sync: synchronize_rcu(), synchronize_sched(), ...
+* @sync: synchronize_rcu, synchronize_rcu_expedited, ...
 */
 static inline void list_splice_init_rcu(struct list_head *list,
 struct list_head *head,
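
As a usage sketch for the updated @sync parameter (the list names are hypothetical), a grace-period wait function such as synchronize_rcu() is passed directly:

	#include <linux/rculist.h>

	static LIST_HEAD(src_list);	/* hypothetical source list */
	static LIST_HEAD(dst_list);	/* hypothetical destination list */

	static void move_all_entries(void)
	{
		/* @sync is the grace-period wait primitive, here synchronize_rcu. */
		list_splice_init_rcu(&src_list, &dst_list, synchronize_rcu);
	}
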
@@ -255,7 +255,7 @@ static inline void list_splice_init_rcu(struct list_head *list,
 * list, designed for queues.
 * @list: the RCU-protected list to splice
 * @head: the place in the existing list to splice the first list into
-* @sync: function to sync: synchronize_rcu(), synchronize_sched(), ...
+* @sync: synchronize_rcu, synchronize_rcu_expedited, ...
 */
 static inline void list_splice_tail_init_rcu(struct list_head *list,
 struct list_head *head,
@@ -359,13 +359,12 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
-* This primitive may safely run concurrently with the _rcu list-mutation
-* primitives such as list_add_rcu(), but requires some implicit RCU
-* read-side guarding. One example is running within a special
-* exception-time environment where preemption is disabled and where
-* lockdep cannot be invoked (in which case updaters must use RCU-sched,
-* as in synchronize_sched(), call_rcu_sched(), and friends). Another
-* example is when items are added to the list, but never deleted.
+* This primitive may safely run concurrently with the _rcu
+* list-mutation primitives such as list_add_rcu(), but requires some
+* implicit RCU read-side guarding. One example is running within a special
+* exception-time environment where preemption is disabled and where lockdep
+* cannot be invoked. Another example is when items are added to the list,
+* but never deleted.
 */
 #define list_entry_lockless(ptr, type, member) \
 container_of((typeof(ptr))READ_ONCE(ptr), type, member)
@@ -376,13 +375,12 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
 * @head: the head for your list.
 * @member: the name of the list_struct within the struct.
 *
-* This primitive may safely run concurrently with the _rcu list-mutation
-* primitives such as list_add_rcu(), but requires some implicit RCU
-* read-side guarding. One example is running within a special
-* exception-time environment where preemption is disabled and where
-* lockdep cannot be invoked (in which case updaters must use RCU-sched,
-* as in synchronize_sched(), call_rcu_sched(), and friends). Another
-* example is when items are added to the list, but never deleted.
+* This primitive may safely run concurrently with the _rcu
+* list-mutation primitives such as list_add_rcu(), but requires some
+* implicit RCU read-side guarding. One example is running within a special
+* exception-time environment where preemption is disabled and where lockdep
+* cannot be invoked. Another example is when items are added to the list,
+* but never deleted.
 */
 #define list_for_each_entry_lockless(pos, head, member) \
 for (pos = list_entry_lockless((head)->next, typeof(*pos), member); \
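
A hypothetical add-only list illustrates the "added but never deleted" case mentioned in the comment; the element type, list, and lookup helper are invented for this sketch:

	#include <linux/rculist.h>

	struct my_node {
		int key;
		struct list_head link;
	};

	static LIST_HEAD(addonly_list);	/* entries are added but never deleted */

	static struct my_node *find_key(int key)
	{
		struct my_node *p;

		/* Safe without rcu_read_lock() only because nothing is ever removed. */
		list_for_each_entry_lockless(p, &addonly_list, link)
			if (p->key == key)
				return p;
		return NULL;
	}
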
@@ -48,23 +48,14 @@
 #define ulong2long(a) (*(long *)(&(a)))
 
 /* Exported common interfaces */
-
-#ifdef CONFIG_PREEMPT_RCU
 void call_rcu(struct rcu_head *head, rcu_callback_t func);
-#else /* #ifdef CONFIG_PREEMPT_RCU */
-#define call_rcu call_rcu_sched
-#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-
-void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
-void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
-void synchronize_sched(void);
 void rcu_barrier_tasks(void);
+void synchronize_rcu(void);
 
 #ifdef CONFIG_PREEMPT_RCU
 
 void __rcu_read_lock(void);
 void __rcu_read_unlock(void);
-void synchronize_rcu(void);
 
 /*
 * Defined as a macro as it is a very low level header included from
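
With the consolidated API above, a single call_rcu() covers the cases that previously needed call_rcu_bh() or call_rcu_sched(). A minimal sketch (structure and callback names are hypothetical):

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_obj {
		struct rcu_head rh;
		int payload;
	};

	static void my_obj_free_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct my_obj, rh));
	}

	static void my_obj_release(struct my_obj *p)
	{
		call_rcu(&p->rh, my_obj_free_cb);	/* freed after a grace period */
	}
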
@@ -88,11 +79,6 @@ static inline void __rcu_read_unlock(void)
 preempt_enable();
 }
 
-static inline void synchronize_rcu(void)
-{
-synchronize_sched();
-}
-
 static inline int rcu_preempt_depth(void)
 {
 return 0;
@@ -103,8 +89,6 @@ static inline int rcu_preempt_depth(void)
 /* Internal to kernel */
 void rcu_init(void);
 extern int rcu_scheduler_active __read_mostly;
-void rcu_sched_qs(void);
-void rcu_bh_qs(void);
 void rcu_check_callbacks(int user);
 void rcu_report_dead(unsigned int cpu);
 void rcutree_migrate_callbacks(int cpu);
@@ -135,11 +119,10 @@ static inline void rcu_init_nohz(void) { }
 * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
 * @a: Code that RCU needs to pay attention to.
 *
-* RCU, RCU-bh, and RCU-sched read-side critical sections are forbidden
-* in the inner idle loop, that is, between the rcu_idle_enter() and
-* the rcu_idle_exit() -- RCU will happily ignore any such read-side
-* critical sections. However, things like powertop need tracepoints
-* in the inner idle loop.
+* RCU read-side critical sections are forbidden in the inner idle loop,
+* that is, between the rcu_idle_enter() and the rcu_idle_exit() -- RCU
+* will happily ignore any such read-side critical sections. However,
+* things like powertop need tracepoints in the inner idle loop.
 *
 * This macro provides the way out: RCU_NONIDLE(do_something_with_RCU())
 * will tell RCU that it needs to pay attention, invoke its argument
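
A short hypothetical sketch of the macro being documented: the tracepoint name is invented, but the RCU_NONIDLE() wrapper is exactly what the comment describes for code running in the inner idle loop.

	#include <linux/rcupdate.h>

	static void idle_loop_instrumentation(void)
	{
		/* trace_my_idle_event() is a hypothetical tracepoint. */
		RCU_NONIDLE(trace_my_idle_event());
	}
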
@@ -167,20 +150,16 @@ static inline void rcu_init_nohz(void) { }
 if (READ_ONCE((t)->rcu_tasks_holdout)) \
 WRITE_ONCE((t)->rcu_tasks_holdout, false); \
 } while (0)
-#define rcu_note_voluntary_context_switch(t) \
-do { \
-rcu_all_qs(); \
-rcu_tasks_qs(t); \
-} while (0)
+#define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t)
 void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks(void);
 void exit_tasks_rcu_start(void);
 void exit_tasks_rcu_finish(void);
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define rcu_tasks_qs(t) do { } while (0)
-#define rcu_note_voluntary_context_switch(t) rcu_all_qs()
-#define call_rcu_tasks call_rcu_sched
-#define synchronize_rcu_tasks synchronize_sched
+#define rcu_note_voluntary_context_switch(t) do { } while (0)
+#define call_rcu_tasks call_rcu
+#define synchronize_rcu_tasks synchronize_rcu
 static inline void exit_tasks_rcu_start(void) { }
 static inline void exit_tasks_rcu_finish(void) { }
 #endif /* #else #ifdef CONFIG_TASKS_RCU */
@@ -325,9 +304,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 * Helper functions for rcu_dereference_check(), rcu_dereference_protected()
 * and rcu_assign_pointer(). Some of these could be folded into their
 * callers, but they are left separate in order to ease introduction of
-* multiple flavors of pointers to match the multiple flavors of RCU
-* (e.g., __rcu_bh, * __rcu_sched, and __srcu), should this make sense in
-* the future.
+* multiple pointers markings to match different RCU implementations
+* (e.g., __srcu), should this make sense in the future.
 */
 
 #ifdef __CHECKER__
@@ -686,14 +664,9 @@ static inline void rcu_read_unlock(void)
 /**
 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
 *
-* This is equivalent of rcu_read_lock(), but to be used when updates
-* are being done using call_rcu_bh() or synchronize_rcu_bh(). Since
-* both call_rcu_bh() and synchronize_rcu_bh() consider completion of a
-* softirq handler to be a quiescent state, a process in RCU read-side
-* critical section must be protected by disabling softirqs. Read-side
-* critical sections in interrupt context can use just rcu_read_lock(),
-* though this should at least be commented to avoid confusing people
-* reading the code.
+* This is equivalent of rcu_read_lock(), but also disables softirqs.
+* Note that anything else that disables softirqs can also serve as
+* an RCU read-side critical section.
 *
 * Note that rcu_read_lock_bh() and the matching rcu_read_unlock_bh()
 * must occur in the same context, for example, it is illegal to invoke
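
A minimal reader sketch matching the revised comment (function name hypothetical): rcu_read_lock_bh() marks a read-side critical section and also disables softirqs.

	#include <linux/rcupdate.h>

	static void reader_vs_softirq_updaters(void)
	{
		rcu_read_lock_bh();	/* also disables softirqs */
		/* ... access data whose updaters rely on softirq-disabled readers ... */
		rcu_read_unlock_bh();
	}
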
@@ -726,10 +699,9 @@ static inline void rcu_read_unlock_bh(void)
 /**
 * rcu_read_lock_sched() - mark the beginning of a RCU-sched critical section
 *
-* This is equivalent of rcu_read_lock(), but to be used when updates
-* are being done using call_rcu_sched() or synchronize_rcu_sched().
-* Read-side critical sections can also be introduced by anything that
-* disables preemption, including local_irq_disable() and friends.
+* This is equivalent of rcu_read_lock(), but disables preemption.
+* Read-side critical sections can also be introduced by anything else
+* that disables preemption, including local_irq_disable() and friends.
 *
 * Note that rcu_read_lock_sched() and the matching rcu_read_unlock_sched()
 * must occur in the same context, for example, it is illegal to invoke
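
And the corresponding sketch for the preemption-disabling variant (function name hypothetical):

	#include <linux/rcupdate.h>

	static void reader_vs_preempt_disabled_updaters(void)
	{
		rcu_read_lock_sched();	/* disables preemption */
		/* ... access data whose updaters wait for preemption-disabled readers ... */
		rcu_read_unlock_sched();
	}
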
@@ -885,4 +857,96 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #endif /* #else #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
 
 
+/* Has the specified rcu_head structure been handed to call_rcu()? */
+
+/*
+* rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
+* @rhp: The rcu_head structure to initialize.
+*
+* If you intend to invoke rcu_head_after_call_rcu() to test whether a
+* given rcu_head structure has already been passed to call_rcu(), then
+* you must also invoke this rcu_head_init() function on it just after
+* allocating that structure. Calls to this function must not race with
+* calls to call_rcu(), rcu_head_after_call_rcu(), or callback invocation.
+*/
+static inline void rcu_head_init(struct rcu_head *rhp)
+{
+rhp->func = (rcu_callback_t)~0L;
+}
+
+/*
+* rcu_head_after_call_rcu - Has this rcu_head been passed to call_rcu()?
+* @rhp: The rcu_head structure to test.
+* @func: The function passed to call_rcu() along with @rhp.
+*
+* Returns @true if the @rhp has been passed to call_rcu() with @func,
+* and @false otherwise. Emits a warning in any other case, including
+* the case where @rhp has already been invoked after a grace period.
+* Calls to this function must not race with callback invocation. One way
+* to avoid such races is to enclose the call to rcu_head_after_call_rcu()
+* in an RCU read-side critical section that includes a read-side fetch
+* of the pointer to the structure containing @rhp.
+*/
+static inline bool
+rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
+{
+if (READ_ONCE(rhp->func) == f)
+return true;
+WARN_ON_ONCE(READ_ONCE(rhp->func) != (rcu_callback_t)~0L);
+return false;
+}
+
+
+/* Transitional pre-consolidation compatibility definitions. */
+
+static inline void synchronize_rcu_bh(void)
+{
+synchronize_rcu();
+}
+
+static inline void synchronize_rcu_bh_expedited(void)
+{
+synchronize_rcu_expedited();
+}
+
+static inline void call_rcu_bh(struct rcu_head *head, rcu_callback_t func)
+{
+call_rcu(head, func);
+}
+
+static inline void rcu_barrier_bh(void)
+{
+rcu_barrier();
+}
+
+static inline void synchronize_sched(void)
+{
+synchronize_rcu();
+}
+
+static inline void synchronize_sched_expedited(void)
+{
+synchronize_rcu_expedited();
+}
+
+static inline void call_rcu_sched(struct rcu_head *head, rcu_callback_t func)
+{
+call_rcu(head, func);
+}
+
+static inline void rcu_barrier_sched(void)
+{
+rcu_barrier();
+}
+
+static inline unsigned long get_state_synchronize_sched(void)
+{
+return get_state_synchronize_rcu();
+}
+
+static inline void cond_synchronize_sched(unsigned long oldstate)
+{
+cond_synchronize_rcu(oldstate);
+}
+
 #endif /* __LINUX_RCUPDATE_H */
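
A hypothetical usage sketch for the rcu_head_init()/rcu_head_after_call_rcu() pair added above (structure, callback, and helper names are invented):

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_item {
		struct rcu_head rh;
		int data;
	};

	static void my_item_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct my_item, rh));
	}

	static struct my_item *my_item_alloc(void)
	{
		struct my_item *p = kzalloc(sizeof(*p), GFP_KERNEL);

		if (p)
			rcu_head_init(&p->rh);	/* required before rcu_head_after_call_rcu() */
		return p;
	}

	static bool my_item_is_queued(struct my_item *p)
	{
		/* Caller must prevent races with callback invocation, e.g. via RCU. */
		return rcu_head_after_call_rcu(&p->rh, my_item_cb);
	}
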
@@ -33,17 +33,17 @@ do { \
 
 /**
 * synchronize_rcu_mult - Wait concurrently for multiple grace periods
-* @...: List of call_rcu() functions for the flavors to wait on.
+* @...: List of call_rcu() functions for different grace periods to wait on
 *
-* This macro waits concurrently for multiple flavors of RCU grace periods.
-* For example, synchronize_rcu_mult(call_rcu, call_rcu_bh) would wait
-* on concurrent RCU and RCU-bh grace periods. Waiting on a give SRCU
+* This macro waits concurrently for multiple types of RCU grace periods.
+* For example, synchronize_rcu_mult(call_rcu, call_rcu_tasks) would wait
+* on concurrent RCU and RCU-tasks grace periods. Waiting on a give SRCU
 * domain requires you to write a wrapper function for that SRCU domain's
 * call_srcu() function, supplying the corresponding srcu_struct.
 *
-* If Tiny RCU, tell _wait_rcu_gp() not to bother waiting for RCU
-* or RCU-bh, given that anywhere synchronize_rcu_mult() can be called
-* is automatically a grace period.
+* If Tiny RCU, tell _wait_rcu_gp() does not bother waiting for RCU,
+* given that anywhere synchronize_rcu_mult() can be called is automatically
+* a grace period.
 */
 #define synchronize_rcu_mult(...) \
 _wait_rcu_gp(IS_ENABLED(CONFIG_TINY_RCU), __VA_ARGS__)
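
A sketch of the updated example in the comment above (assumes CONFIG_TASKS_RCU so that call_rcu_tasks() exists; the wrapper name is hypothetical):

	#include <linux/rcupdate_wait.h>

	static void wait_for_rcu_and_rcu_tasks(void)
	{
		/* Waits concurrently for a normal RCU and an RCU-tasks grace period. */
		synchronize_rcu_mult(call_rcu, call_rcu_tasks);
	}
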
@@ -27,12 +27,6 @@
 
 #include <linux/ktime.h>
 
-struct rcu_dynticks;
-static inline int rcu_dynticks_snap(struct rcu_dynticks *rdtp)
-{
-return 0;
-}
-
 /* Never flag non-existent other CPUs! */
 static inline bool rcu_eqs_special_set(int cpu) { return false; }
 
@@ -46,53 +40,28 @@ static inline void cond_synchronize_rcu(unsigned long oldstate)
 might_sleep();
 }
 
-static inline unsigned long get_state_synchronize_sched(void)
-{
-return 0;
-}
-
-static inline void cond_synchronize_sched(unsigned long oldstate)
-{
-might_sleep();
-}
-
-extern void rcu_barrier_bh(void);
-extern void rcu_barrier_sched(void);
+extern void rcu_barrier(void);
 
 static inline void synchronize_rcu_expedited(void)
 {
-synchronize_sched(); /* Only one CPU, so pretty fast anyway!!! */
+synchronize_rcu();
 }
 
-static inline void rcu_barrier(void)
-{
-rcu_barrier_sched(); /* Only one CPU, so only one list of callbacks! */
-}
-
-static inline void synchronize_rcu_bh(void)
-{
-synchronize_sched();
-}
-
-static inline void synchronize_rcu_bh_expedited(void)
-{
-synchronize_sched();
-}
-
-static inline void synchronize_sched_expedited(void)
-{
-synchronize_sched();
-}
-
-static inline void kfree_call_rcu(struct rcu_head *head,
-rcu_callback_t func)
+static inline void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 call_rcu(head, func);
 }
 
+void rcu_qs(void);
+
+static inline void rcu_softirq_qs(void)
+{
+rcu_qs();
+}
+
 #define rcu_note_context_switch(preempt) \
 do { \
-rcu_sched_qs(); \
+rcu_qs(); \
 rcu_tasks_qs(current); \
 } while (0)
 
@@ -108,6 +77,7 @@ static inline int rcu_needs_cpu(u64 basemono, u64 *nextevt)
 */
 static inline void rcu_virt_note_context_switch(int cpu) { }
 static inline void rcu_cpu_stall_reset(void) { }
+static inline int rcu_jiffies_till_stall_check(void) { return 21 * HZ; }
 static inline void rcu_idle_enter(void) { }
 static inline void rcu_idle_exit(void) { }
 static inline void rcu_irq_enter(void) { }
@@ -115,6 +85,11 @@ static inline void rcu_irq_exit_irqson(void) { }
 static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
 static inline void exit_rcu(void) { }
+static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
+{
+return false;
+}
+static inline void rcu_preempt_deferred_qs(struct task_struct *t) { }
 #ifdef CONFIG_SRCU
 void rcu_scheduler_starting(void);
 #else /* #ifndef CONFIG_SRCU */
@@ -30,6 +30,7 @@
 #ifndef __LINUX_RCUTREE_H
 #define __LINUX_RCUTREE_H
 
+void rcu_softirq_qs(void);
 void rcu_note_context_switch(bool preempt);
 int rcu_needs_cpu(u64 basem, u64 *nextevt);
 void rcu_cpu_stall_reset(void);
@@ -44,41 +45,13 @@ static inline void rcu_virt_note_context_switch(int cpu)
 rcu_note_context_switch(false);
 }
 
-void synchronize_rcu_bh(void);
-void synchronize_sched_expedited(void);
 void synchronize_rcu_expedited(void);
-
 void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
 
-/**
-* synchronize_rcu_bh_expedited - Brute-force RCU-bh grace period
-*
-* Wait for an RCU-bh grace period to elapse, but use a "big hammer"
-* approach to force the grace period to end quickly. This consumes
-* significant time on all CPUs and is unfriendly to real-time workloads,
-* so is thus not recommended for any sort of common-case code. In fact,
-* if you are using synchronize_rcu_bh_expedited() in a loop, please
-* restructure your code to batch your updates, and then use a single
-* synchronize_rcu_bh() instead.
-*
-* Note that it is illegal to call this function while holding any lock
-* that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
-* to call this function from a CPU-hotplug notifier. Failing to observe
-* these restriction will result in deadlock.
-*/
-static inline void synchronize_rcu_bh_expedited(void)
-{
-synchronize_sched_expedited();
-}
-
 void rcu_barrier(void);
-void rcu_barrier_bh(void);
-void rcu_barrier_sched(void);
 bool rcu_eqs_special_set(int cpu);
 unsigned long get_state_synchronize_rcu(void);
 void cond_synchronize_rcu(unsigned long oldstate);
-unsigned long get_state_synchronize_sched(void);
-void cond_synchronize_sched(unsigned long oldstate);
 
 void rcu_idle_enter(void);
 void rcu_idle_exit(void);
@@ -93,7 +66,9 @@ void rcu_scheduler_starting(void);
 extern int rcu_scheduler_active __read_mostly;
 void rcu_end_inkernel_boot(void);
 bool rcu_is_watching(void);
+#ifndef CONFIG_PREEMPT
 void rcu_all_qs(void);
+#endif
 
 /* RCUtree hotplug events */
 int rcutree_prepare_cpu(unsigned int cpu);
@@ -571,12 +571,8 @@ union rcu_special {
 struct {
 u8 blocked;
 u8 need_qs;
-u8 exp_need_qs;
-
-/* Otherwise the compiler can store garbage here: */
-u8 pad;
 } b; /* Bits. */
-u32 s; /* Set of bits. */
+u16 s; /* Set of bits. */
 };
 
 enum perf_event_task_context {
@@ -105,12 +105,13 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN2 2
 
 #define __SRCU_STRUCT_INIT(name, pcpu_name) \
 { \
 .sda = &pcpu_name, \
 .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
-.srcu_gp_seq_needed = 0 - 1, \
-__SRCU_DEP_MAP_INIT(name) \
-}
+.srcu_gp_seq_needed = -1UL, \
+.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \
+__SRCU_DEP_MAP_INIT(name) \
+}
 
 /*
 * Define and initialize a srcu struct at build time.
@@ -77,7 +77,7 @@ void torture_shutdown_absorb(const char *title);
 int torture_shutdown_init(int ssecs, void (*cleanup)(void));
 
 /* Task stuttering, which forces load/no-load transitions. */
-void stutter_wait(const char *title);
+bool stutter_wait(const char *title);
 int torture_stutter_init(int s);
 
 /* Initialization and cleanup. */
@@ -393,9 +393,8 @@ TRACE_EVENT(rcu_quiescent_state_report,
 * Tracepoint for quiescent states detected by force_quiescent_state().
 * These trace events include the type of RCU, the grace-period number
 * that was blocked by the CPU, the CPU itself, and the type of quiescent
-* state, which can be "dti" for dyntick-idle mode, "kick" when kicking
-* a CPU that has been in dyntick-idle mode for too long, or "rqc" if the
-* CPU got a quiescent state via its rcu_qs_ctr.
+* state, which can be "dti" for dyntick-idle mode or "kick" when kicking
+* a CPU that has been in dyntick-idle mode for too long.
 */
 TRACE_EVENT(rcu_fqs,
 
@@ -705,20 +704,20 @@ TRACE_EVENT(rcu_torture_read,
 );
 
 /*
-* Tracepoint for _rcu_barrier() execution. The string "s" describes
-* the _rcu_barrier phase:
-* "Begin": _rcu_barrier() started.
-* "EarlyExit": _rcu_barrier() piggybacked, thus early exit.
-* "Inc1": _rcu_barrier() piggyback check counter incremented.
-* "OfflineNoCB": _rcu_barrier() found callback on never-online CPU
-* "OnlineNoCB": _rcu_barrier() found online no-CBs CPU.
-* "OnlineQ": _rcu_barrier() found online CPU with callbacks.
-* "OnlineNQ": _rcu_barrier() found online CPU, no callbacks.
+* Tracepoint for rcu_barrier() execution. The string "s" describes
+* the rcu_barrier phase:
+* "Begin": rcu_barrier() started.
+* "EarlyExit": rcu_barrier() piggybacked, thus early exit.
+* "Inc1": rcu_barrier() piggyback check counter incremented.
+* "OfflineNoCB": rcu_barrier() found callback on never-online CPU
+* "OnlineNoCB": rcu_barrier() found online no-CBs CPU.
+* "OnlineQ": rcu_barrier() found online CPU with callbacks.
+* "OnlineNQ": rcu_barrier() found online CPU, no callbacks.
 * "IRQ": An rcu_barrier_callback() callback posted on remote CPU.
 * "IRQNQ": An rcu_barrier_callback() callback found no callbacks.
 * "CB": An rcu_barrier_callback() invoked a callback, not the last.
 * "LastCB": An rcu_barrier_callback() invoked the last callback.
-* "Inc2": _rcu_barrier() piggyback check counter incremented.
+* "Inc2": rcu_barrier() piggyback check counter incremented.
 * The "cpu" argument is the CPU or -1 if meaningless, the "cnt" argument
 * is the count of remaining callbacks, and "done" is the piggybacking count.
 */
@@ -196,7 +196,7 @@ config RCU_BOOST
 This option boosts the priority of preempted RCU readers that
 block the current preemptible RCU grace period for too long.
 This option also prevents heavy loads from blocking RCU
-callback invocation for all flavors of RCU.
+callback invocation.
 
 Say Y here if you are working with real-time apps or heavy loads
 Say N here if you are unsure.
@@ -225,12 +225,12 @@ config RCU_NOCB_CPU
 callback invocation to energy-efficient CPUs in battery-powered
 asymmetric multiprocessors.
 
-This option offloads callback invocation from the set of
-CPUs specified at boot time by the rcu_nocbs parameter.
-For each such CPU, a kthread ("rcuox/N") will be created to
-invoke callbacks, where the "N" is the CPU being offloaded,
-and where the "x" is "b" for RCU-bh, "p" for RCU-preempt, and
-"s" for RCU-sched. Nothing prevents this kthread from running
+This option offloads callback invocation from the set of CPUs
+specified at boot time by the rcu_nocbs parameter. For each
+such CPU, a kthread ("rcuox/N") will be created to invoke
+callbacks, where the "N" is the CPU being offloaded, and where
+the "p" for RCU-preempt (PREEMPT kernels) and "s" for RCU-sched
+(!PREEMPT kernels). Nothing prevents this kthread from running
 on the specified CPUs, but (1) the kthreads may be preempted
 between each callback, and (2) affinity or cgroups can be used
 to force the kthreads to run on whatever set of CPUs is desired.
@@ -176,8 +176,9 @@ static inline unsigned long rcu_seq_diff(unsigned long new, unsigned long old)
 
 /*
 * debug_rcu_head_queue()/debug_rcu_head_unqueue() are used internally
-* by call_rcu() and rcu callback execution, and are therefore not part of the
-* RCU API. Leaving in rcupdate.h because they are used by all RCU flavors.
+* by call_rcu() and rcu callback execution, and are therefore not part
+* of the RCU API. These are in rcupdate.h because they are used by all
+* RCU implementations.
 */
 
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
@@ -223,6 +224,7 @@ void kfree(const void *);
 */
 static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 {
+rcu_callback_t f;
 unsigned long offset = (unsigned long)head->func;
 
 rcu_lock_acquire(&rcu_callback_map);
@@ -233,7 +235,9 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 return true;
 } else {
 RCU_TRACE(trace_rcu_invoke_callback(rn, head);)
-head->func(head);
+f = head->func;
+WRITE_ONCE(head->func, (rcu_callback_t)0L);
+f(head);
 rcu_lock_release(&rcu_callback_map);
 return false;
 }
@@ -328,40 +332,35 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)
 }
 }
 
-/* Returns first leaf rcu_node of the specified RCU flavor. */
-#define rcu_first_leaf_node(rsp) ((rsp)->level[rcu_num_lvls - 1])
+/* Returns a pointer to the first leaf rcu_node structure. */
+#define rcu_first_leaf_node() (rcu_state.level[rcu_num_lvls - 1])
 
 /* Is this rcu_node a leaf? */
 #define rcu_is_leaf_node(rnp) ((rnp)->level == rcu_num_lvls - 1)
 
 /* Is this rcu_node the last leaf? */
-#define rcu_is_last_leaf_node(rsp, rnp) ((rnp) == &(rsp)->node[rcu_num_nodes - 1])
+#define rcu_is_last_leaf_node(rnp) ((rnp) == &rcu_state.node[rcu_num_nodes - 1])
 
 /*
-* Do a full breadth-first scan of the rcu_node structures for the
-* specified rcu_state structure.
+* Do a full breadth-first scan of the {s,}rcu_node structures for the
+* specified state structure (for SRCU) or the only rcu_state structure
+* (for RCU).
 */
-#define rcu_for_each_node_breadth_first(rsp, rnp) \
-for ((rnp) = &(rsp)->node[0]; \
-(rnp) < &(rsp)->node[rcu_num_nodes]; (rnp)++)
+#define srcu_for_each_node_breadth_first(sp, rnp) \
+for ((rnp) = &(sp)->node[0]; \
+(rnp) < &(sp)->node[rcu_num_nodes]; (rnp)++)
+#define rcu_for_each_node_breadth_first(rnp) \
+srcu_for_each_node_breadth_first(&rcu_state, rnp)
 
 /*
-* Do a breadth-first scan of the non-leaf rcu_node structures for the
-* specified rcu_state structure. Note that if there is a singleton
-* rcu_node tree with but one rcu_node structure, this loop is a no-op.
+* Scan the leaves of the rcu_node hierarchy for the rcu_state structure.
+* Note that if there is a singleton rcu_node tree with but one rcu_node
+* structure, this loop -will- visit the rcu_node structure. It is still
+* a leaf node, even if it is also the root node.
 */
-#define rcu_for_each_nonleaf_node_breadth_first(rsp, rnp) \
-for ((rnp) = &(rsp)->node[0]; !rcu_is_leaf_node(rsp, rnp); (rnp)++)
-
-/*
-* Scan the leaves of the rcu_node hierarchy for the specified rcu_state
-* structure. Note that if there is a singleton rcu_node tree with but
-* one rcu_node structure, this loop -will- visit the rcu_node structure.
-* It is still a leaf node, even if it is also the root node.
-*/
-#define rcu_for_each_leaf_node(rsp, rnp) \
-for ((rnp) = rcu_first_leaf_node(rsp); \
-(rnp) < &(rsp)->node[rcu_num_nodes]; (rnp)++)
+#define rcu_for_each_leaf_node(rnp) \
+for ((rnp) = rcu_first_leaf_node(); \
+(rnp) < &rcu_state.node[rcu_num_nodes]; (rnp)++)
 
 /*
 * Iterate over all possible CPUs in a leaf RCU node.
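
An illustrative-only sketch of the reworked iterator, usable from code internal to kernel/rcu/ where rcu_state and struct rcu_node are visible; the helper name is invented:

	static void count_leaf_rcu_nodes(void)
	{
		struct rcu_node *rnp;
		int n = 0;

		rcu_for_each_leaf_node(rnp)
			n++;
		pr_info("%d leaf rcu_node structures\n", n);
	}
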
@@ -435,6 +434,12 @@
 
 #endif /* #if defined(SRCU) || !defined(TINY_RCU) */
 
+#ifdef CONFIG_SRCU
+void srcu_init(void);
+#else /* #ifdef CONFIG_SRCU */
+static inline void srcu_init(void) { }
+#endif /* #else #ifdef CONFIG_SRCU */
+
 #ifdef CONFIG_TINY_RCU
 /* Tiny RCU doesn't expedite, as its purpose in life is instead to be tiny. */
 static inline bool rcu_gp_is_normal(void) { return true; }
@@ -515,29 +520,19 @@ void srcutorture_get_gp_data(enum rcutorture_type test_type,
 
 #ifdef CONFIG_TINY_RCU
 static inline unsigned long rcu_get_gp_seq(void) { return 0; }
-static inline unsigned long rcu_bh_get_gp_seq(void) { return 0; }
-static inline unsigned long rcu_sched_get_gp_seq(void) { return 0; }
 static inline unsigned long rcu_exp_batches_completed(void) { return 0; }
-static inline unsigned long rcu_exp_batches_completed_sched(void) { return 0; }
 static inline unsigned long
 srcu_batches_completed(struct srcu_struct *sp) { return 0; }
 static inline void rcu_force_quiescent_state(void) { }
-static inline void rcu_bh_force_quiescent_state(void) { }
-static inline void rcu_sched_force_quiescent_state(void) { }
 static inline void show_rcu_gp_kthreads(void) { }
 static inline int rcu_get_gp_kthreads_prio(void) { return 0; }
 #else /* #ifdef CONFIG_TINY_RCU */
 unsigned long rcu_get_gp_seq(void);
-unsigned long rcu_bh_get_gp_seq(void);
-unsigned long rcu_sched_get_gp_seq(void);
 unsigned long rcu_exp_batches_completed(void);
-unsigned long rcu_exp_batches_completed_sched(void);
 unsigned long srcu_batches_completed(struct srcu_struct *sp);
 void show_rcu_gp_kthreads(void);
 int rcu_get_gp_kthreads_prio(void);
 void rcu_force_quiescent_state(void);
-void rcu_bh_force_quiescent_state(void);
-void rcu_sched_force_quiescent_state(void);
 extern struct workqueue_struct *rcu_gp_wq;
 extern struct workqueue_struct *rcu_par_gp_wq;
 #endif /* #else #ifdef CONFIG_TINY_RCU */
@@ -189,36 +189,6 @@ static struct rcu_perf_ops rcu_ops = {
 .name = "rcu"
 };
 
-/*
-* Definitions for rcu_bh perf testing.
-*/
-
-static int rcu_bh_perf_read_lock(void) __acquires(RCU_BH)
-{
-rcu_read_lock_bh();
-return 0;
-}
-
-static void rcu_bh_perf_read_unlock(int idx) __releases(RCU_BH)
-{
-rcu_read_unlock_bh();
-}
-
-static struct rcu_perf_ops rcu_bh_ops = {
-.ptype = RCU_BH_FLAVOR,
-.init = rcu_sync_perf_init,
-.readlock = rcu_bh_perf_read_lock,
-.readunlock = rcu_bh_perf_read_unlock,
-.get_gp_seq = rcu_bh_get_gp_seq,
-.gp_diff = rcu_seq_diff,
-.exp_completed = rcu_exp_batches_completed_sched,
-.async = call_rcu_bh,
-.gp_barrier = rcu_barrier_bh,
-.sync = synchronize_rcu_bh,
-.exp_sync = synchronize_rcu_bh_expedited,
-.name = "rcu_bh"
-};
-
 /*
 * Definitions for srcu perf testing.
 */
@@ -305,36 +275,6 @@ static struct rcu_perf_ops srcud_ops = {
 .name = "srcud"
 };
 
-/*
-* Definitions for sched perf testing.
-*/
-
-static int sched_perf_read_lock(void)
-{
-preempt_disable();
-return 0;
-}
-
-static void sched_perf_read_unlock(int idx)
-{
-preempt_enable();
-}
-
-static struct rcu_perf_ops sched_ops = {
-.ptype = RCU_SCHED_FLAVOR,
-.init = rcu_sync_perf_init,
-.readlock = sched_perf_read_lock,
-.readunlock = sched_perf_read_unlock,
-.get_gp_seq = rcu_sched_get_gp_seq,
-.gp_diff = rcu_seq_diff,
-.exp_completed = rcu_exp_batches_completed_sched,
-.async = call_rcu_sched,
-.gp_barrier = rcu_barrier_sched,
-.sync = synchronize_sched,
-.exp_sync = synchronize_sched_expedited,
-.name = "sched"
-};
-
 /*
 * Definitions for RCU-tasks perf testing.
 */
@@ -611,7 +551,7 @@ rcu_perf_cleanup(void)
 kfree(writer_n_durations);
 }
 
-/* Do flavor-specific cleanup operations. */
+/* Do torture-type-specific cleanup operations. */
 if (cur_ops->cleanup != NULL)
 cur_ops->cleanup();
 
@@ -661,8 +601,7 @@ rcu_perf_init(void)
 long i;
 int firsterr = 0;
 static struct rcu_perf_ops *perf_ops[] = {
-&rcu_ops, &rcu_bh_ops, &srcu_ops, &srcud_ops, &sched_ops,
-&tasks_ops,
+&rcu_ops, &srcu_ops, &srcud_ops, &tasks_ops,
 };
 
 if (!torture_init_begin(perf_type, verbose))
@@ -680,6 +619,7 @@ rcu_perf_init(void)
 for (i = 0; i < ARRAY_SIZE(perf_ops); i++)
 pr_cont(" %s", perf_ops[i]->name);
 pr_cont("\n");
+WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST));
 firsterr = -EINVAL;
 goto unwind;
 }
@@ -66,15 +66,19 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com> and Josh Triplett <josh@jos
 /* Bits for ->extendables field, extendables param, and related definitions. */
 #define RCUTORTURE_RDR_SHIFT 8 /* Put SRCU index in upper bits. */
 #define RCUTORTURE_RDR_MASK ((1 << RCUTORTURE_RDR_SHIFT) - 1)
-#define RCUTORTURE_RDR_BH 0x1 /* Extend readers by disabling bh. */
-#define RCUTORTURE_RDR_IRQ 0x2 /* ... disabling interrupts. */
-#define RCUTORTURE_RDR_PREEMPT 0x4 /* ... disabling preemption. */
-#define RCUTORTURE_RDR_RCU 0x8 /* ... entering another RCU reader. */
-#define RCUTORTURE_RDR_NBITS 4 /* Number of bits defined above. */
-#define RCUTORTURE_MAX_EXTEND (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | \
-RCUTORTURE_RDR_PREEMPT)
+#define RCUTORTURE_RDR_BH 0x01 /* Extend readers by disabling bh. */
+#define RCUTORTURE_RDR_IRQ 0x02 /* ... disabling interrupts. */
+#define RCUTORTURE_RDR_PREEMPT 0x04 /* ... disabling preemption. */
+#define RCUTORTURE_RDR_RBH 0x08 /* ... rcu_read_lock_bh(). */
+#define RCUTORTURE_RDR_SCHED 0x10 /* ... rcu_read_lock_sched(). */
+#define RCUTORTURE_RDR_RCU 0x20 /* ... entering another RCU reader. */
+#define RCUTORTURE_RDR_NBITS 6 /* Number of bits defined above. */
+#define RCUTORTURE_MAX_EXTEND \
+(RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | RCUTORTURE_RDR_PREEMPT | \
+RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED)
 #define RCUTORTURE_RDR_MAX_LOOPS 0x7 /* Maximum reader extensions. */
 /* Must be power of two minus one. */
+#define RCUTORTURE_RDR_MAX_SEGS (RCUTORTURE_RDR_MAX_LOOPS + 3)
 
 torture_param(int, cbflood_inter_holdoff, HZ,
 "Holdoff between floods (jiffies)");
@@ -89,6 +93,12 @@ torture_param(int, fqs_duration, 0,
 "Duration of fqs bursts (us), 0 to disable");
 torture_param(int, fqs_holdoff, 0, "Holdoff time within fqs bursts (us)");
 torture_param(int, fqs_stutter, 3, "Wait time between fqs bursts (s)");
+torture_param(bool, fwd_progress, 1, "Test grace-period forward progress");
+torture_param(int, fwd_progress_div, 4, "Fraction of CPU stall to wait");
+torture_param(int, fwd_progress_holdoff, 60,
+"Time between forward-progress tests (s)");
+torture_param(bool, fwd_progress_need_resched, 1,
+"Hide cond_resched() behind need_resched()");
 torture_param(bool, gp_cond, false, "Use conditional/async GP wait primitives");
 torture_param(bool, gp_exp, false, "Use expedited GP wait primitives");
 torture_param(bool, gp_normal, false,
@@ -125,7 +135,7 @@ torture_param(int, verbose, 1,
 
 static char *torture_type = "rcu";
 module_param(torture_type, charp, 0444);
-MODULE_PARM_DESC(torture_type, "Type of RCU to torture (rcu, rcu_bh, ...)");
+MODULE_PARM_DESC(torture_type, "Type of RCU to torture (rcu, srcu, ...)");
 
 static int nrealreaders;
 static int ncbflooders;
@@ -137,6 +147,7 @@ static struct task_struct **cbflood_task;
 static struct task_struct *fqs_task;
 static struct task_struct *boost_tasks[NR_CPUS];
 static struct task_struct *stall_task;
+static struct task_struct *fwd_prog_task;
 static struct task_struct **barrier_cbs_tasks;
 static struct task_struct *barrier_task;
 
@@ -197,6 +208,18 @@ static const char * const rcu_torture_writer_state_names[] = {
 	"RTWS_STOPPING",
 };
 
+/* Record reader segment types and duration for first failing read. */
+struct rt_read_seg {
+	int rt_readstate;
+	unsigned long rt_delay_jiffies;
+	unsigned long rt_delay_ms;
+	unsigned long rt_delay_us;
+	bool rt_preempted;
+};
+static int err_segs_recorded;
+static struct rt_read_seg err_segs[RCUTORTURE_RDR_MAX_SEGS];
+static int rt_read_nsegs;
+
 static const char *rcu_torture_writer_state_getname(void)
 {
 	unsigned int i = READ_ONCE(rcu_torture_writer_state);
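Each rt_read_seg above records at most one delay (jiffies, ms, or us) plus whether the segment was preempted. As a rough sketch of how the recorded segments could be totaled (assumed helper, not part of the patch; uses jiffies_to_usecs() for the conversion):

/* Illustration only: total delay recorded for the first failing read. */
static unsigned long err_segs_total_delay_us(void)
{
	unsigned long us = 0;
	int i;

	for (i = 0; i < rt_read_nsegs; i++) {
		us += jiffies_to_usecs(err_segs[i].rt_delay_jiffies);
		us += err_segs[i].rt_delay_ms * 1000;
		us += err_segs[i].rt_delay_us;
	}
	return us;
}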
@@ -278,7 +301,8 @@ struct rcu_torture_ops {
 	void (*init)(void);
 	void (*cleanup)(void);
 	int (*readlock)(void);
-	void (*read_delay)(struct torture_random_state *rrsp);
+	void (*read_delay)(struct torture_random_state *rrsp,
+			   struct rt_read_seg *rtrsp);
 	void (*readunlock)(int idx);
 	unsigned long (*get_gp_seq)(void);
 	unsigned long (*gp_diff)(unsigned long new, unsigned long old);
@@ -291,6 +315,7 @@ struct rcu_torture_ops {
 	void (*cb_barrier)(void);
 	void (*fqs)(void);
 	void (*stats)(void);
+	int (*stall_dur)(void);
 	int irq_capable;
 	int can_boost;
 	int extendables;
@@ -310,12 +335,13 @@ static int rcu_torture_read_lock(void) __acquires(RCU)
 	return 0;
 }
 
-static void rcu_read_delay(struct torture_random_state *rrsp)
+static void
+rcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 {
 	unsigned long started;
 	unsigned long completed;
 	const unsigned long shortdelay_us = 200;
-	const unsigned long longdelay_ms = 50;
+	unsigned long longdelay_ms = 300;
 	unsigned long long ts;
 
 	/* We want a short delay sometimes to make a reader delay the grace
@@ -325,16 +351,23 @@ static void rcu_read_delay(struct torture_random_state *rrsp)
 	if (!(torture_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) {
 		started = cur_ops->get_gp_seq();
 		ts = rcu_trace_clock_local();
+		if (preempt_count() & (SOFTIRQ_MASK | HARDIRQ_MASK))
+			longdelay_ms = 5; /* Avoid triggering BH limits. */
 		mdelay(longdelay_ms);
+		rtrsp->rt_delay_ms = longdelay_ms;
 		completed = cur_ops->get_gp_seq();
 		do_trace_rcu_torture_read(cur_ops->name, NULL, ts,
 					  started, completed);
 	}
-	if (!(torture_random(rrsp) % (nrealreaders * 2 * shortdelay_us)))
+	if (!(torture_random(rrsp) % (nrealreaders * 2 * shortdelay_us))) {
 		udelay(shortdelay_us);
+		rtrsp->rt_delay_us = shortdelay_us;
+	}
 	if (!preempt_count() &&
-	    !(torture_random(rrsp) % (nrealreaders * 500)))
+	    !(torture_random(rrsp) % (nrealreaders * 500))) {
 		torture_preempt_schedule();  /* QS only if preemptible. */
+		rtrsp->rt_preempted = true;
+	}
 }
 
 static void rcu_torture_read_unlock(int idx) __releases(RCU)
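As a back-of-the-envelope check on the gating above (illustration only): torture_random() % N hits zero about once per N calls, so with longdelay_ms now 300 the long mdelay() path is taken roughly once every nrealreaders * 2000 * 300 calls, the short udelay() roughly once every nrealreaders * 400 calls, and the preemption attempt roughly once every nrealreaders * 500 calls.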
@ -429,52 +462,13 @@ static struct rcu_torture_ops rcu_ops = {
|
|||||||
.cb_barrier = rcu_barrier,
|
.cb_barrier = rcu_barrier,
|
||||||
.fqs = rcu_force_quiescent_state,
|
.fqs = rcu_force_quiescent_state,
|
||||||
.stats = NULL,
|
.stats = NULL,
|
||||||
|
.stall_dur = rcu_jiffies_till_stall_check,
|
||||||
.irq_capable = 1,
|
.irq_capable = 1,
|
||||||
.can_boost = rcu_can_boost(),
|
.can_boost = rcu_can_boost(),
|
||||||
|
.extendables = RCUTORTURE_MAX_EXTEND,
|
||||||
.name = "rcu"
|
.name = "rcu"
|
||||||
};
|
};
|
||||||
|
|
||||||
/*
|
|
||||||
* Definitions for rcu_bh torture testing.
|
|
||||||
*/
|
|
||||||
|
|
||||||
static int rcu_bh_torture_read_lock(void) __acquires(RCU_BH)
|
|
||||||
{
|
|
||||||
rcu_read_lock_bh();
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
static void rcu_bh_torture_read_unlock(int idx) __releases(RCU_BH)
|
|
||||||
{
|
|
||||||
rcu_read_unlock_bh();
|
|
||||||
}
|
|
||||||
|
|
||||||
static void rcu_bh_torture_deferred_free(struct rcu_torture *p)
|
|
||||||
{
|
|
||||||
call_rcu_bh(&p->rtort_rcu, rcu_torture_cb);
|
|
||||||
}
|
|
||||||
|
|
||||||
static struct rcu_torture_ops rcu_bh_ops = {
|
|
||||||
.ttype = RCU_BH_FLAVOR,
|
|
||||||
.init = rcu_sync_torture_init,
|
|
||||||
.readlock = rcu_bh_torture_read_lock,
|
|
||||||
.read_delay = rcu_read_delay, /* just reuse rcu's version. */
|
|
||||||
.readunlock = rcu_bh_torture_read_unlock,
|
|
||||||
.get_gp_seq = rcu_bh_get_gp_seq,
|
|
||||||
.gp_diff = rcu_seq_diff,
|
|
||||||
.deferred_free = rcu_bh_torture_deferred_free,
|
|
||||||
.sync = synchronize_rcu_bh,
|
|
||||||
.exp_sync = synchronize_rcu_bh_expedited,
|
|
||||||
.call = call_rcu_bh,
|
|
||||||
.cb_barrier = rcu_barrier_bh,
|
|
||||||
.fqs = rcu_bh_force_quiescent_state,
|
|
||||||
.stats = NULL,
|
|
||||||
.irq_capable = 1,
|
|
||||||
.extendables = (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ),
|
|
||||||
.ext_irq_conflict = RCUTORTURE_RDR_RCU,
|
|
||||||
.name = "rcu_bh"
|
|
||||||
};
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Don't even think about trying any of these in real life!!!
|
* Don't even think about trying any of these in real life!!!
|
||||||
* The names includes "busted", and they really means it!
|
* The names includes "busted", and they really means it!
|
||||||
@@ -531,7 +525,8 @@ static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
 	return srcu_read_lock(srcu_ctlp);
 }
 
-static void srcu_read_delay(struct torture_random_state *rrsp)
+static void
+srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 {
 	long delay;
 	const long uspertick = 1000000 / HZ;
@@ -541,10 +536,12 @@ static void srcu_read_delay(struct torture_random_state *rrsp)
 
 	delay = torture_random(rrsp) %
 		(nrealreaders * 2 * longdelay * uspertick);
-	if (!delay && in_task())
+	if (!delay && in_task()) {
 		schedule_timeout_interruptible(longdelay);
-	else
-		rcu_read_delay(rrsp);
+		rtrsp->rt_delay_jiffies = longdelay;
+	} else {
+		rcu_read_delay(rrsp, rtrsp);
+	}
 }
 
 static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
@ -662,48 +659,6 @@ static struct rcu_torture_ops busted_srcud_ops = {
|
|||||||
.name = "busted_srcud"
|
.name = "busted_srcud"
|
||||||
};
|
};
|
||||||
|
|
||||||
/*
|
|
||||||
* Definitions for sched torture testing.
|
|
||||||
*/
|
|
||||||
|
|
||||||
static int sched_torture_read_lock(void)
|
|
||||||
{
|
|
||||||
preempt_disable();
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
static void sched_torture_read_unlock(int idx)
|
|
||||||
{
|
|
||||||
preempt_enable();
|
|
||||||
}
|
|
||||||
|
|
||||||
static void rcu_sched_torture_deferred_free(struct rcu_torture *p)
|
|
||||||
{
|
|
||||||
call_rcu_sched(&p->rtort_rcu, rcu_torture_cb);
|
|
||||||
}
|
|
||||||
|
|
||||||
static struct rcu_torture_ops sched_ops = {
|
|
||||||
.ttype = RCU_SCHED_FLAVOR,
|
|
||||||
.init = rcu_sync_torture_init,
|
|
||||||
.readlock = sched_torture_read_lock,
|
|
||||||
.read_delay = rcu_read_delay, /* just reuse rcu's version. */
|
|
||||||
.readunlock = sched_torture_read_unlock,
|
|
||||||
.get_gp_seq = rcu_sched_get_gp_seq,
|
|
||||||
.gp_diff = rcu_seq_diff,
|
|
||||||
.deferred_free = rcu_sched_torture_deferred_free,
|
|
||||||
.sync = synchronize_sched,
|
|
||||||
.exp_sync = synchronize_sched_expedited,
|
|
||||||
.get_state = get_state_synchronize_sched,
|
|
||||||
.cond_sync = cond_synchronize_sched,
|
|
||||||
.call = call_rcu_sched,
|
|
||||||
.cb_barrier = rcu_barrier_sched,
|
|
||||||
.fqs = rcu_sched_force_quiescent_state,
|
|
||||||
.stats = NULL,
|
|
||||||
.irq_capable = 1,
|
|
||||||
.extendables = RCUTORTURE_MAX_EXTEND,
|
|
||||||
.name = "sched"
|
|
||||||
};
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Definitions for RCU-tasks torture testing.
|
* Definitions for RCU-tasks torture testing.
|
||||||
*/
|
*/
|
||||||
@ -1116,7 +1071,8 @@ rcu_torture_writer(void *arg)
|
|||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
rcu_torture_current_version++;
|
WRITE_ONCE(rcu_torture_current_version,
|
||||||
|
rcu_torture_current_version + 1);
|
||||||
/* Cycle through nesting levels of rcu_expedite_gp() calls. */
|
/* Cycle through nesting levels of rcu_expedite_gp() calls. */
|
||||||
if (can_expedite &&
|
if (can_expedite &&
|
||||||
!(torture_random(&rand) & 0xff & (!!expediting - 1))) {
|
!(torture_random(&rand) & 0xff & (!!expediting - 1))) {
|
||||||
@ -1132,7 +1088,10 @@ rcu_torture_writer(void *arg)
|
|||||||
!rcu_gp_is_normal();
|
!rcu_gp_is_normal();
|
||||||
}
|
}
|
||||||
rcu_torture_writer_state = RTWS_STUTTER;
|
rcu_torture_writer_state = RTWS_STUTTER;
|
||||||
stutter_wait("rcu_torture_writer");
|
if (stutter_wait("rcu_torture_writer"))
|
||||||
|
for (i = 0; i < ARRAY_SIZE(rcu_tortures); i++)
|
||||||
|
if (list_empty(&rcu_tortures[i].rtort_free))
|
||||||
|
WARN_ON_ONCE(1);
|
||||||
} while (!torture_must_stop());
|
} while (!torture_must_stop());
|
||||||
/* Reset expediting back to unexpedited. */
|
/* Reset expediting back to unexpedited. */
|
||||||
if (expediting > 0)
|
if (expediting > 0)
|
||||||
@ -1199,7 +1158,8 @@ static void rcu_torture_timer_cb(struct rcu_head *rhp)
|
|||||||
* change, do a ->read_delay().
|
* change, do a ->read_delay().
|
||||||
*/
|
*/
|
||||||
static void rcutorture_one_extend(int *readstate, int newstate,
|
static void rcutorture_one_extend(int *readstate, int newstate,
|
||||||
struct torture_random_state *trsp)
|
struct torture_random_state *trsp,
|
||||||
|
struct rt_read_seg *rtrsp)
|
||||||
{
|
{
|
||||||
int idxnew = -1;
|
int idxnew = -1;
|
||||||
int idxold = *readstate;
|
int idxold = *readstate;
|
||||||
@ -1208,6 +1168,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
|
|||||||
|
|
||||||
WARN_ON_ONCE(idxold < 0);
|
WARN_ON_ONCE(idxold < 0);
|
||||||
WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
|
WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
|
||||||
|
rtrsp->rt_readstate = newstate;
|
||||||
|
|
||||||
/* First, put new protection in place to avoid critical-section gap. */
|
/* First, put new protection in place to avoid critical-section gap. */
|
||||||
if (statesnew & RCUTORTURE_RDR_BH)
|
if (statesnew & RCUTORTURE_RDR_BH)
|
||||||
@ -1216,6 +1177,10 @@ static void rcutorture_one_extend(int *readstate, int newstate,
|
|||||||
local_irq_disable();
|
local_irq_disable();
|
||||||
if (statesnew & RCUTORTURE_RDR_PREEMPT)
|
if (statesnew & RCUTORTURE_RDR_PREEMPT)
|
||||||
preempt_disable();
|
preempt_disable();
|
||||||
|
if (statesnew & RCUTORTURE_RDR_RBH)
|
||||||
|
rcu_read_lock_bh();
|
||||||
|
if (statesnew & RCUTORTURE_RDR_SCHED)
|
||||||
|
rcu_read_lock_sched();
|
||||||
if (statesnew & RCUTORTURE_RDR_RCU)
|
if (statesnew & RCUTORTURE_RDR_RCU)
|
||||||
idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
|
idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
|
||||||
|
|
||||||
@ -1226,12 +1191,16 @@ static void rcutorture_one_extend(int *readstate, int newstate,
|
|||||||
local_bh_enable();
|
local_bh_enable();
|
||||||
if (statesold & RCUTORTURE_RDR_PREEMPT)
|
if (statesold & RCUTORTURE_RDR_PREEMPT)
|
||||||
preempt_enable();
|
preempt_enable();
|
||||||
|
if (statesold & RCUTORTURE_RDR_RBH)
|
||||||
|
rcu_read_unlock_bh();
|
||||||
|
if (statesold & RCUTORTURE_RDR_SCHED)
|
||||||
|
rcu_read_unlock_sched();
|
||||||
if (statesold & RCUTORTURE_RDR_RCU)
|
if (statesold & RCUTORTURE_RDR_RCU)
|
||||||
cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
|
cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
|
||||||
|
|
||||||
/* Delay if neither beginning nor end and there was a change. */
|
/* Delay if neither beginning nor end and there was a change. */
|
||||||
if ((statesnew || statesold) && *readstate && newstate)
|
if ((statesnew || statesold) && *readstate && newstate)
|
||||||
cur_ops->read_delay(trsp);
|
cur_ops->read_delay(trsp, rtrsp);
|
||||||
|
|
||||||
/* Update the reader state. */
|
/* Update the reader state. */
|
||||||
if (idxnew == -1)
|
if (idxnew == -1)
|
||||||
@ -1260,18 +1229,19 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
|
|||||||
{
|
{
|
||||||
int mask = rcutorture_extend_mask_max();
|
int mask = rcutorture_extend_mask_max();
|
||||||
unsigned long randmask1 = torture_random(trsp) >> 8;
|
unsigned long randmask1 = torture_random(trsp) >> 8;
|
||||||
unsigned long randmask2 = randmask1 >> 1;
|
unsigned long randmask2 = randmask1 >> 3;
|
||||||
|
|
||||||
WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
|
WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
|
||||||
/* Half the time lots of bits, half the time only one bit. */
|
/* Most of the time lots of bits, half the time only one bit. */
|
||||||
if (randmask1 & 0x1)
|
if (!(randmask1 & 0x7))
|
||||||
mask = mask & randmask2;
|
mask = mask & randmask2;
|
||||||
else
|
else
|
||||||
mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
|
mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
|
||||||
|
/* Can't enable bh w/irq disabled. */
|
||||||
if ((mask & RCUTORTURE_RDR_IRQ) &&
|
if ((mask & RCUTORTURE_RDR_IRQ) &&
|
||||||
!(mask & RCUTORTURE_RDR_BH) &&
|
((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
|
||||||
(oldmask & RCUTORTURE_RDR_BH))
|
(!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
|
||||||
mask |= RCUTORTURE_RDR_BH; /* Can't enable bh w/irq disabled. */
|
mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
|
||||||
if ((mask & RCUTORTURE_RDR_IRQ) &&
|
if ((mask & RCUTORTURE_RDR_IRQ) &&
|
||||||
!(mask & cur_ops->ext_irq_conflict) &&
|
!(mask & cur_ops->ext_irq_conflict) &&
|
||||||
(oldmask & cur_ops->ext_irq_conflict))
|
(oldmask & cur_ops->ext_irq_conflict))
|
||||||
@ -1283,20 +1253,25 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
|
|||||||
* Do a randomly selected number of extensions of an existing RCU read-side
|
* Do a randomly selected number of extensions of an existing RCU read-side
|
||||||
* critical section.
|
* critical section.
|
||||||
*/
|
*/
|
||||||
static void rcutorture_loop_extend(int *readstate,
|
static struct rt_read_seg *
|
||||||
struct torture_random_state *trsp)
|
rcutorture_loop_extend(int *readstate, struct torture_random_state *trsp,
|
||||||
|
struct rt_read_seg *rtrsp)
|
||||||
{
|
{
|
||||||
int i;
|
int i;
|
||||||
|
int j;
|
||||||
int mask = rcutorture_extend_mask_max();
|
int mask = rcutorture_extend_mask_max();
|
||||||
|
|
||||||
WARN_ON_ONCE(!*readstate); /* -Existing- RCU read-side critsect! */
|
WARN_ON_ONCE(!*readstate); /* -Existing- RCU read-side critsect! */
|
||||||
if (!((mask - 1) & mask))
|
if (!((mask - 1) & mask))
|
||||||
return; /* Current RCU flavor not extendable. */
|
return rtrsp; /* Current RCU reader not extendable. */
|
||||||
i = (torture_random(trsp) >> 3) & RCUTORTURE_RDR_MAX_LOOPS;
|
/* Bias towards larger numbers of loops. */
|
||||||
while (i--) {
|
i = (torture_random(trsp) >> 3);
|
||||||
|
i = ((i | (i >> 3)) & RCUTORTURE_RDR_MAX_LOOPS) + 1;
|
||||||
|
for (j = 0; j < i; j++) {
|
||||||
mask = rcutorture_extend_mask(*readstate, trsp);
|
mask = rcutorture_extend_mask(*readstate, trsp);
|
||||||
rcutorture_one_extend(readstate, mask, trsp);
|
rcutorture_one_extend(readstate, mask, trsp, &rtrsp[j]);
|
||||||
}
|
}
|
||||||
|
return &rtrsp[j];
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@ -1306,16 +1281,20 @@ static void rcutorture_loop_extend(int *readstate,
|
|||||||
*/
|
*/
|
||||||
static bool rcu_torture_one_read(struct torture_random_state *trsp)
|
static bool rcu_torture_one_read(struct torture_random_state *trsp)
|
||||||
{
|
{
|
||||||
|
int i;
|
||||||
unsigned long started;
|
unsigned long started;
|
||||||
unsigned long completed;
|
unsigned long completed;
|
||||||
int newstate;
|
int newstate;
|
||||||
struct rcu_torture *p;
|
struct rcu_torture *p;
|
||||||
int pipe_count;
|
int pipe_count;
|
||||||
int readstate = 0;
|
int readstate = 0;
|
||||||
|
struct rt_read_seg rtseg[RCUTORTURE_RDR_MAX_SEGS] = { { 0 } };
|
||||||
|
struct rt_read_seg *rtrsp = &rtseg[0];
|
||||||
|
struct rt_read_seg *rtrsp1;
|
||||||
unsigned long long ts;
|
unsigned long long ts;
|
||||||
|
|
||||||
newstate = rcutorture_extend_mask(readstate, trsp);
|
newstate = rcutorture_extend_mask(readstate, trsp);
|
||||||
rcutorture_one_extend(&readstate, newstate, trsp);
|
rcutorture_one_extend(&readstate, newstate, trsp, rtrsp++);
|
||||||
started = cur_ops->get_gp_seq();
|
started = cur_ops->get_gp_seq();
|
||||||
ts = rcu_trace_clock_local();
|
ts = rcu_trace_clock_local();
|
||||||
p = rcu_dereference_check(rcu_torture_current,
|
p = rcu_dereference_check(rcu_torture_current,
|
||||||
@ -1325,12 +1304,12 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)
|
|||||||
torturing_tasks());
|
torturing_tasks());
|
||||||
if (p == NULL) {
|
if (p == NULL) {
|
||||||
/* Wait for rcu_torture_writer to get underway */
|
/* Wait for rcu_torture_writer to get underway */
|
||||||
rcutorture_one_extend(&readstate, 0, trsp);
|
rcutorture_one_extend(&readstate, 0, trsp, rtrsp);
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
if (p->rtort_mbtest == 0)
|
if (p->rtort_mbtest == 0)
|
||||||
atomic_inc(&n_rcu_torture_mberror);
|
atomic_inc(&n_rcu_torture_mberror);
|
||||||
rcutorture_loop_extend(&readstate, trsp);
|
rtrsp = rcutorture_loop_extend(&readstate, trsp, rtrsp);
|
||||||
preempt_disable();
|
preempt_disable();
|
||||||
pipe_count = p->rtort_pipe_count;
|
pipe_count = p->rtort_pipe_count;
|
||||||
if (pipe_count > RCU_TORTURE_PIPE_LEN) {
|
if (pipe_count > RCU_TORTURE_PIPE_LEN) {
|
||||||
@ -1351,8 +1330,17 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)
|
|||||||
}
|
}
|
||||||
__this_cpu_inc(rcu_torture_batch[completed]);
|
__this_cpu_inc(rcu_torture_batch[completed]);
|
||||||
preempt_enable();
|
preempt_enable();
|
||||||
rcutorture_one_extend(&readstate, 0, trsp);
|
rcutorture_one_extend(&readstate, 0, trsp, rtrsp);
|
||||||
WARN_ON_ONCE(readstate & RCUTORTURE_RDR_MASK);
|
WARN_ON_ONCE(readstate & RCUTORTURE_RDR_MASK);
|
||||||
|
|
||||||
|
/* If error or close call, record the sequence of reader protections. */
|
||||||
|
if ((pipe_count > 1 || completed > 1) && !xchg(&err_segs_recorded, 1)) {
|
||||||
|
i = 0;
|
||||||
|
for (rtrsp1 = &rtseg[0]; rtrsp1 < rtrsp; rtrsp1++)
|
||||||
|
err_segs[i++] = *rtrsp1;
|
||||||
|
rt_read_nsegs = i;
|
||||||
|
}
|
||||||
|
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -1387,6 +1375,9 @@ static void rcu_torture_timer(struct timer_list *unused)
|
|||||||
static int
|
static int
|
||||||
rcu_torture_reader(void *arg)
|
rcu_torture_reader(void *arg)
|
||||||
{
|
{
|
||||||
|
unsigned long lastsleep = jiffies;
|
||||||
|
long myid = (long)arg;
|
||||||
|
int mynumonline = myid;
|
||||||
DEFINE_TORTURE_RANDOM(rand);
|
DEFINE_TORTURE_RANDOM(rand);
|
||||||
struct timer_list t;
|
struct timer_list t;
|
||||||
|
|
||||||
@ -1402,6 +1393,12 @@ rcu_torture_reader(void *arg)
|
|||||||
}
|
}
|
||||||
if (!rcu_torture_one_read(&rand))
|
if (!rcu_torture_one_read(&rand))
|
||||||
schedule_timeout_interruptible(HZ);
|
schedule_timeout_interruptible(HZ);
|
||||||
|
if (time_after(jiffies, lastsleep)) {
|
||||||
|
schedule_timeout_interruptible(1);
|
||||||
|
lastsleep = jiffies + 10;
|
||||||
|
}
|
||||||
|
while (num_online_cpus() < mynumonline && !torture_must_stop())
|
||||||
|
schedule_timeout_interruptible(HZ / 5);
|
||||||
stutter_wait("rcu_torture_reader");
|
stutter_wait("rcu_torture_reader");
|
||||||
} while (!torture_must_stop());
|
} while (!torture_must_stop());
|
||||||
if (irqreader && cur_ops->irq_capable) {
|
if (irqreader && cur_ops->irq_capable) {
|
||||||
@ -1655,6 +1652,121 @@ static int __init rcu_torture_stall_init(void)
|
|||||||
return torture_create_kthread(rcu_torture_stall, NULL, stall_task);
|
return torture_create_kthread(rcu_torture_stall, NULL, stall_task);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* State structure for forward-progress self-propagating RCU callback. */
|
||||||
|
struct fwd_cb_state {
|
||||||
|
struct rcu_head rh;
|
||||||
|
int stop;
|
||||||
|
};
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Forward-progress self-propagating RCU callback function. Because
|
||||||
|
* callbacks run from softirq, this function is an implicit RCU read-side
|
||||||
|
* critical section.
|
||||||
|
*/
|
||||||
|
static void rcu_torture_fwd_prog_cb(struct rcu_head *rhp)
|
||||||
|
{
|
||||||
|
struct fwd_cb_state *fcsp = container_of(rhp, struct fwd_cb_state, rh);
|
||||||
|
|
||||||
|
if (READ_ONCE(fcsp->stop)) {
|
||||||
|
WRITE_ONCE(fcsp->stop, 2);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
cur_ops->call(&fcsp->rh, rcu_torture_fwd_prog_cb);
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Carry out grace-period forward-progress testing. */
|
||||||
|
static int rcu_torture_fwd_prog(void *args)
|
||||||
|
{
|
||||||
|
unsigned long cver;
|
||||||
|
unsigned long dur;
|
||||||
|
struct fwd_cb_state fcs;
|
||||||
|
unsigned long gps;
|
||||||
|
int idx;
|
||||||
|
int sd;
|
||||||
|
int sd4;
|
||||||
|
bool selfpropcb = false;
|
||||||
|
unsigned long stopat;
|
||||||
|
int tested = 0;
|
||||||
|
int tested_tries = 0;
|
||||||
|
static DEFINE_TORTURE_RANDOM(trs);
|
||||||
|
|
||||||
|
VERBOSE_TOROUT_STRING("rcu_torture_fwd_progress task started");
|
||||||
|
if (!IS_ENABLED(CONFIG_SMP) || !IS_ENABLED(CONFIG_RCU_BOOST))
|
||||||
|
set_user_nice(current, MAX_NICE);
|
||||||
|
if (cur_ops->call && cur_ops->sync && cur_ops->cb_barrier) {
|
||||||
|
init_rcu_head_on_stack(&fcs.rh);
|
||||||
|
selfpropcb = true;
|
||||||
|
}
|
||||||
|
do {
|
||||||
|
schedule_timeout_interruptible(fwd_progress_holdoff * HZ);
|
||||||
|
if (selfpropcb) {
|
||||||
|
WRITE_ONCE(fcs.stop, 0);
|
||||||
|
cur_ops->call(&fcs.rh, rcu_torture_fwd_prog_cb);
|
||||||
|
}
|
||||||
|
cver = READ_ONCE(rcu_torture_current_version);
|
||||||
|
gps = cur_ops->get_gp_seq();
|
||||||
|
sd = cur_ops->stall_dur() + 1;
|
||||||
|
sd4 = (sd + fwd_progress_div - 1) / fwd_progress_div;
|
||||||
|
dur = sd4 + torture_random(&trs) % (sd - sd4);
|
||||||
|
stopat = jiffies + dur;
|
||||||
|
while (time_before(jiffies, stopat) && !torture_must_stop()) {
|
||||||
|
idx = cur_ops->readlock();
|
||||||
|
udelay(10);
|
||||||
|
cur_ops->readunlock(idx);
|
||||||
|
if (!fwd_progress_need_resched || need_resched())
|
||||||
|
cond_resched();
|
||||||
|
}
|
||||||
|
tested_tries++;
|
||||||
|
if (!time_before(jiffies, stopat) && !torture_must_stop()) {
|
||||||
|
tested++;
|
||||||
|
cver = READ_ONCE(rcu_torture_current_version) - cver;
|
||||||
|
gps = rcutorture_seq_diff(cur_ops->get_gp_seq(), gps);
|
||||||
|
WARN_ON(!cver && gps < 2);
|
||||||
|
pr_alert("%s: Duration %ld cver %ld gps %ld\n", __func__, dur, cver, gps);
|
||||||
|
}
|
||||||
|
if (selfpropcb) {
|
||||||
|
WRITE_ONCE(fcs.stop, 1);
|
||||||
|
cur_ops->sync(); /* Wait for running CB to complete. */
|
||||||
|
cur_ops->cb_barrier(); /* Wait for queued callbacks. */
|
||||||
|
}
|
||||||
|
/* Avoid slow periods, better to test when busy. */
|
||||||
|
stutter_wait("rcu_torture_fwd_prog");
|
||||||
|
} while (!torture_must_stop());
|
||||||
|
if (selfpropcb) {
|
||||||
|
WARN_ON(READ_ONCE(fcs.stop) != 2);
|
||||||
|
destroy_rcu_head_on_stack(&fcs.rh);
|
||||||
|
}
|
||||||
|
/* Short runs might not contain a valid forward-progress attempt. */
|
||||||
|
WARN_ON(!tested && tested_tries >= 5);
|
||||||
|
pr_alert("%s: tested %d tested_tries %d\n", __func__, tested, tested_tries);
|
||||||
|
torture_kthread_stopping("rcu_torture_fwd_prog");
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* If forward-progress checking is requested and feasible, spawn the thread. */
|
||||||
|
static int __init rcu_torture_fwd_prog_init(void)
|
||||||
|
{
|
||||||
|
if (!fwd_progress)
|
||||||
|
return 0; /* Not requested, so don't do it. */
|
||||||
|
if (!cur_ops->stall_dur || cur_ops->stall_dur() <= 0) {
|
||||||
|
VERBOSE_TOROUT_STRING("rcu_torture_fwd_prog_init: Disabled, unsupported by RCU flavor under test");
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
if (stall_cpu > 0) {
|
||||||
|
VERBOSE_TOROUT_STRING("rcu_torture_fwd_prog_init: Disabled, conflicts with CPU-stall testing");
|
||||||
|
if (IS_MODULE(CONFIG_RCU_TORTURE_TESTS))
|
||||||
|
return -EINVAL; /* In module, can fail back to user. */
|
||||||
|
WARN_ON(1); /* Make sure rcutorture notices conflict. */
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
if (fwd_progress_holdoff <= 0)
|
||||||
|
fwd_progress_holdoff = 1;
|
||||||
|
if (fwd_progress_div <= 0)
|
||||||
|
fwd_progress_div = 4;
|
||||||
|
return torture_create_kthread(rcu_torture_fwd_prog,
|
||||||
|
NULL, fwd_prog_task);
|
||||||
|
}
|
||||||
|
|
||||||
/* Callback function for RCU barrier testing. */
|
/* Callback function for RCU barrier testing. */
|
||||||
static void rcu_torture_barrier_cbf(struct rcu_head *rcu)
|
static void rcu_torture_barrier_cbf(struct rcu_head *rcu)
|
||||||
{
|
{
|
||||||
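A sketch of the duration selection used by the new forward-progress kthread (assumed helper mirroring the dur computation above, not part of the patch): each attempt runs for between roughly 1/fwd_progress_div of the RCU CPU stall timeout and the full timeout, so a healthy grace period has to make progress within the test window without tripping a stall warning.

/* Illustration only; mirrors the dur calculation in rcu_torture_fwd_prog(). */
static unsigned long pick_fwd_prog_duration(struct torture_random_state *trsp)
{
	int sd = cur_ops->stall_dur() + 1;
	int sd4 = (sd + fwd_progress_div - 1) / fwd_progress_div;

	return sd4 + torture_random(trsp) % (sd - sd4);
}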
@ -1817,6 +1929,7 @@ static enum cpuhp_state rcutor_hp;
|
|||||||
static void
|
static void
|
||||||
rcu_torture_cleanup(void)
|
rcu_torture_cleanup(void)
|
||||||
{
|
{
|
||||||
|
int firsttime;
|
||||||
int flags = 0;
|
int flags = 0;
|
||||||
unsigned long gp_seq = 0;
|
unsigned long gp_seq = 0;
|
||||||
int i;
|
int i;
|
||||||
@ -1828,6 +1941,7 @@ rcu_torture_cleanup(void)
|
|||||||
}
|
}
|
||||||
|
|
||||||
rcu_torture_barrier_cleanup();
|
rcu_torture_barrier_cleanup();
|
||||||
|
torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
|
||||||
torture_stop_kthread(rcu_torture_stall, stall_task);
|
torture_stop_kthread(rcu_torture_stall, stall_task);
|
||||||
torture_stop_kthread(rcu_torture_writer, writer_task);
|
torture_stop_kthread(rcu_torture_writer, writer_task);
|
||||||
|
|
||||||
@ -1860,7 +1974,7 @@ rcu_torture_cleanup(void)
|
|||||||
cpuhp_remove_state(rcutor_hp);
|
cpuhp_remove_state(rcutor_hp);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Wait for all RCU callbacks to fire, then do flavor-specific
|
* Wait for all RCU callbacks to fire, then do torture-type-specific
|
||||||
* cleanup operations.
|
* cleanup operations.
|
||||||
*/
|
*/
|
||||||
if (cur_ops->cb_barrier != NULL)
|
if (cur_ops->cb_barrier != NULL)
|
||||||
@ -1870,6 +1984,33 @@ rcu_torture_cleanup(void)
|
|||||||
|
|
||||||
rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
|
rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
|
||||||
|
|
||||||
|
if (err_segs_recorded) {
|
||||||
|
pr_alert("Failure/close-call rcutorture reader segments:\n");
|
||||||
|
if (rt_read_nsegs == 0)
|
||||||
|
pr_alert("\t: No segments recorded!!!\n");
|
||||||
|
firsttime = 1;
|
||||||
|
for (i = 0; i < rt_read_nsegs; i++) {
|
||||||
|
pr_alert("\t%d: %#x ", i, err_segs[i].rt_readstate);
|
||||||
|
if (err_segs[i].rt_delay_jiffies != 0) {
|
||||||
|
pr_cont("%s%ldjiffies", firsttime ? "" : "+",
|
||||||
|
err_segs[i].rt_delay_jiffies);
|
||||||
|
firsttime = 0;
|
||||||
|
}
|
||||||
|
if (err_segs[i].rt_delay_ms != 0) {
|
||||||
|
pr_cont("%s%ldms", firsttime ? "" : "+",
|
||||||
|
err_segs[i].rt_delay_ms);
|
||||||
|
firsttime = 0;
|
||||||
|
}
|
||||||
|
if (err_segs[i].rt_delay_us != 0) {
|
||||||
|
pr_cont("%s%ldus", firsttime ? "" : "+",
|
||||||
|
err_segs[i].rt_delay_us);
|
||||||
|
firsttime = 0;
|
||||||
|
}
|
||||||
|
pr_cont("%s\n",
|
||||||
|
err_segs[i].rt_preempted ? "preempted" : "");
|
||||||
|
|
||||||
|
}
|
||||||
|
}
|
||||||
if (atomic_read(&n_rcu_torture_error) || n_rcu_torture_barrier_error)
|
if (atomic_read(&n_rcu_torture_error) || n_rcu_torture_barrier_error)
|
||||||
rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE");
|
rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE");
|
||||||
else if (torture_onoff_failures())
|
else if (torture_onoff_failures())
|
||||||
@ -1939,12 +2080,12 @@ static void rcu_test_debug_objects(void)
|
|||||||
static int __init
|
static int __init
|
||||||
rcu_torture_init(void)
|
rcu_torture_init(void)
|
||||||
{
|
{
|
||||||
int i;
|
long i;
|
||||||
int cpu;
|
int cpu;
|
||||||
int firsterr = 0;
|
int firsterr = 0;
|
||||||
static struct rcu_torture_ops *torture_ops[] = {
|
static struct rcu_torture_ops *torture_ops[] = {
|
||||||
&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
|
&rcu_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
|
||||||
&busted_srcud_ops, &sched_ops, &tasks_ops,
|
&busted_srcud_ops, &tasks_ops,
|
||||||
};
|
};
|
||||||
|
|
||||||
if (!torture_init_begin(torture_type, verbose))
|
if (!torture_init_begin(torture_type, verbose))
|
||||||
@ -1963,6 +2104,7 @@ rcu_torture_init(void)
|
|||||||
for (i = 0; i < ARRAY_SIZE(torture_ops); i++)
|
for (i = 0; i < ARRAY_SIZE(torture_ops); i++)
|
||||||
pr_cont(" %s", torture_ops[i]->name);
|
pr_cont(" %s", torture_ops[i]->name);
|
||||||
pr_cont("\n");
|
pr_cont("\n");
|
||||||
|
WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST));
|
||||||
firsterr = -EINVAL;
|
firsterr = -EINVAL;
|
||||||
goto unwind;
|
goto unwind;
|
||||||
}
|
}
|
||||||
@ -2013,6 +2155,8 @@ rcu_torture_init(void)
|
|||||||
per_cpu(rcu_torture_batch, cpu)[i] = 0;
|
per_cpu(rcu_torture_batch, cpu)[i] = 0;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
err_segs_recorded = 0;
|
||||||
|
rt_read_nsegs = 0;
|
||||||
|
|
||||||
/* Start up the kthreads. */
|
/* Start up the kthreads. */
|
||||||
|
|
||||||
@ -2044,7 +2188,7 @@ rcu_torture_init(void)
|
|||||||
goto unwind;
|
goto unwind;
|
||||||
}
|
}
|
||||||
for (i = 0; i < nrealreaders; i++) {
|
for (i = 0; i < nrealreaders; i++) {
|
||||||
firsterr = torture_create_kthread(rcu_torture_reader, NULL,
|
firsterr = torture_create_kthread(rcu_torture_reader, (void *)i,
|
||||||
reader_tasks[i]);
|
reader_tasks[i]);
|
||||||
if (firsterr)
|
if (firsterr)
|
||||||
goto unwind;
|
goto unwind;
|
||||||
@ -2098,6 +2242,9 @@ rcu_torture_init(void)
|
|||||||
if (firsterr)
|
if (firsterr)
|
||||||
goto unwind;
|
goto unwind;
|
||||||
firsterr = rcu_torture_stall_init();
|
firsterr = rcu_torture_stall_init();
|
||||||
|
if (firsterr)
|
||||||
|
goto unwind;
|
||||||
|
firsterr = rcu_torture_fwd_prog_init();
|
||||||
if (firsterr)
|
if (firsterr)
|
||||||
goto unwind;
|
goto unwind;
|
||||||
firsterr = rcu_torture_barrier_init();
|
firsterr = rcu_torture_barrier_init();
|
||||||
|
@ -34,6 +34,8 @@
|
|||||||
#include "rcu.h"
|
#include "rcu.h"
|
||||||
|
|
||||||
int rcu_scheduler_active __read_mostly;
|
int rcu_scheduler_active __read_mostly;
|
||||||
|
static LIST_HEAD(srcu_boot_list);
|
||||||
|
static bool srcu_init_done;
|
||||||
|
|
||||||
static int init_srcu_struct_fields(struct srcu_struct *sp)
|
static int init_srcu_struct_fields(struct srcu_struct *sp)
|
||||||
{
|
{
|
||||||
@ -46,6 +48,7 @@ static int init_srcu_struct_fields(struct srcu_struct *sp)
|
|||||||
sp->srcu_gp_waiting = false;
|
sp->srcu_gp_waiting = false;
|
||||||
sp->srcu_idx = 0;
|
sp->srcu_idx = 0;
|
||||||
INIT_WORK(&sp->srcu_work, srcu_drive_gp);
|
INIT_WORK(&sp->srcu_work, srcu_drive_gp);
|
||||||
|
INIT_LIST_HEAD(&sp->srcu_work.entry);
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -179,8 +182,12 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *rhp,
|
|||||||
*sp->srcu_cb_tail = rhp;
|
*sp->srcu_cb_tail = rhp;
|
||||||
sp->srcu_cb_tail = &rhp->next;
|
sp->srcu_cb_tail = &rhp->next;
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
if (!READ_ONCE(sp->srcu_gp_running))
|
if (!READ_ONCE(sp->srcu_gp_running)) {
|
||||||
schedule_work(&sp->srcu_work);
|
if (likely(srcu_init_done))
|
||||||
|
schedule_work(&sp->srcu_work);
|
||||||
|
else if (list_empty(&sp->srcu_work.entry))
|
||||||
|
list_add(&sp->srcu_work.entry, &srcu_boot_list);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(call_srcu);
|
EXPORT_SYMBOL_GPL(call_srcu);
|
||||||
|
|
||||||
@ -204,3 +211,21 @@ void __init rcu_scheduler_starting(void)
|
|||||||
{
|
{
|
||||||
rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
|
rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Queue work for srcu_struct structures with early boot callbacks.
|
||||||
|
* The work won't actually execute until the workqueue initialization
|
||||||
|
* phase that takes place after the scheduler starts.
|
||||||
|
*/
|
||||||
|
void __init srcu_init(void)
|
||||||
|
{
|
||||||
|
struct srcu_struct *sp;
|
||||||
|
|
||||||
|
srcu_init_done = true;
|
||||||
|
while (!list_empty(&srcu_boot_list)) {
|
||||||
|
sp = list_first_entry(&srcu_boot_list,
|
||||||
|
struct srcu_struct, srcu_work.entry);
|
||||||
|
list_del_init(&sp->srcu_work.entry);
|
||||||
|
schedule_work(&sp->srcu_work);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
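The boot-time deferral added here (and again for Tree SRCU below) follows one pattern: before srcu_init() runs, work items are parked on srcu_boot_list instead of being queued, and srcu_init() drains the list once workqueues are usable. A generic sketch of the same pattern (illustrative names, not part of the patch; the work's list entry must have been initialized, as init_srcu_struct_fields() now does):

/* Illustration only: defer schedule_work() until initialization is done. */
static LIST_HEAD(example_boot_list);
static bool example_init_done;

static void example_schedule_work(struct work_struct *work)
{
	if (likely(example_init_done))
		schedule_work(work);
	else if (list_empty(&work->entry))	/* Needs INIT_LIST_HEAD(). */
		list_add(&work->entry, &example_boot_list);
}

static void __init example_init(void)
{
	struct work_struct *work;

	example_init_done = true;
	while (!list_empty(&example_boot_list)) {
		work = list_first_entry(&example_boot_list,
					struct work_struct, entry);
		list_del_init(&work->entry);
		schedule_work(work);
	}
}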
@ -51,6 +51,10 @@ module_param(exp_holdoff, ulong, 0444);
|
|||||||
static ulong counter_wrap_check = (ULONG_MAX >> 2);
|
static ulong counter_wrap_check = (ULONG_MAX >> 2);
|
||||||
module_param(counter_wrap_check, ulong, 0444);
|
module_param(counter_wrap_check, ulong, 0444);
|
||||||
|
|
||||||
|
/* Early-boot callback-management, so early that no lock is required! */
|
||||||
|
static LIST_HEAD(srcu_boot_list);
|
||||||
|
static bool __read_mostly srcu_init_done;
|
||||||
|
|
||||||
static void srcu_invoke_callbacks(struct work_struct *work);
|
static void srcu_invoke_callbacks(struct work_struct *work);
|
||||||
static void srcu_reschedule(struct srcu_struct *sp, unsigned long delay);
|
static void srcu_reschedule(struct srcu_struct *sp, unsigned long delay);
|
||||||
static void process_srcu(struct work_struct *work);
|
static void process_srcu(struct work_struct *work);
|
||||||
@ -105,7 +109,7 @@ static void init_srcu_struct_nodes(struct srcu_struct *sp, bool is_static)
|
|||||||
rcu_init_levelspread(levelspread, num_rcu_lvl);
|
rcu_init_levelspread(levelspread, num_rcu_lvl);
|
||||||
|
|
||||||
/* Each pass through this loop initializes one srcu_node structure. */
|
/* Each pass through this loop initializes one srcu_node structure. */
|
||||||
rcu_for_each_node_breadth_first(sp, snp) {
|
srcu_for_each_node_breadth_first(sp, snp) {
|
||||||
spin_lock_init(&ACCESS_PRIVATE(snp, lock));
|
spin_lock_init(&ACCESS_PRIVATE(snp, lock));
|
||||||
WARN_ON_ONCE(ARRAY_SIZE(snp->srcu_have_cbs) !=
|
WARN_ON_ONCE(ARRAY_SIZE(snp->srcu_have_cbs) !=
|
||||||
ARRAY_SIZE(snp->srcu_data_have_cbs));
|
ARRAY_SIZE(snp->srcu_data_have_cbs));
|
||||||
@ -235,7 +239,6 @@ static void check_init_srcu_struct(struct srcu_struct *sp)
|
|||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
WARN_ON_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INIT);
|
|
||||||
/* The smp_load_acquire() pairs with the smp_store_release(). */
|
/* The smp_load_acquire() pairs with the smp_store_release(). */
|
||||||
if (!rcu_seq_state(smp_load_acquire(&sp->srcu_gp_seq_needed))) /*^^^*/
|
if (!rcu_seq_state(smp_load_acquire(&sp->srcu_gp_seq_needed))) /*^^^*/
|
||||||
return; /* Already initialized. */
|
return; /* Already initialized. */
|
||||||
@ -561,7 +564,7 @@ static void srcu_gp_end(struct srcu_struct *sp)
|
|||||||
|
|
||||||
/* Initiate callback invocation as needed. */
|
/* Initiate callback invocation as needed. */
|
||||||
idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs);
|
idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs);
|
||||||
rcu_for_each_node_breadth_first(sp, snp) {
|
srcu_for_each_node_breadth_first(sp, snp) {
|
||||||
spin_lock_irq_rcu_node(snp);
|
spin_lock_irq_rcu_node(snp);
|
||||||
cbs = false;
|
cbs = false;
|
||||||
last_lvl = snp >= sp->level[rcu_num_lvls - 1];
|
last_lvl = snp >= sp->level[rcu_num_lvls - 1];
|
||||||
@ -701,7 +704,11 @@ static void srcu_funnel_gp_start(struct srcu_struct *sp, struct srcu_data *sdp,
|
|||||||
rcu_seq_state(sp->srcu_gp_seq) == SRCU_STATE_IDLE) {
|
rcu_seq_state(sp->srcu_gp_seq) == SRCU_STATE_IDLE) {
|
||||||
WARN_ON_ONCE(ULONG_CMP_GE(sp->srcu_gp_seq, sp->srcu_gp_seq_needed));
|
WARN_ON_ONCE(ULONG_CMP_GE(sp->srcu_gp_seq, sp->srcu_gp_seq_needed));
|
||||||
srcu_gp_start(sp);
|
srcu_gp_start(sp);
|
||||||
queue_delayed_work(rcu_gp_wq, &sp->work, srcu_get_delay(sp));
|
if (likely(srcu_init_done))
|
||||||
|
queue_delayed_work(rcu_gp_wq, &sp->work,
|
||||||
|
srcu_get_delay(sp));
|
||||||
|
else if (list_empty(&sp->work.work.entry))
|
||||||
|
list_add(&sp->work.work.entry, &srcu_boot_list);
|
||||||
}
|
}
|
||||||
spin_unlock_irqrestore_rcu_node(sp, flags);
|
spin_unlock_irqrestore_rcu_node(sp, flags);
|
||||||
}
|
}
|
||||||
@ -980,7 +987,7 @@ EXPORT_SYMBOL_GPL(synchronize_srcu_expedited);
|
|||||||
* There are memory-ordering constraints implied by synchronize_srcu().
|
* There are memory-ordering constraints implied by synchronize_srcu().
|
||||||
* On systems with more than one CPU, when synchronize_srcu() returns,
|
* On systems with more than one CPU, when synchronize_srcu() returns,
|
||||||
* each CPU is guaranteed to have executed a full memory barrier since
|
* each CPU is guaranteed to have executed a full memory barrier since
|
||||||
* the end of its last corresponding SRCU-sched read-side critical section
|
* the end of its last corresponding SRCU read-side critical section
|
||||||
* whose beginning preceded the call to synchronize_srcu(). In addition,
|
* whose beginning preceded the call to synchronize_srcu(). In addition,
|
||||||
* each CPU having an SRCU read-side critical section that extends beyond
|
* each CPU having an SRCU read-side critical section that extends beyond
|
||||||
* the return from synchronize_srcu() is guaranteed to have executed a
|
* the return from synchronize_srcu() is guaranteed to have executed a
|
||||||
@ -1308,3 +1315,17 @@ static int __init srcu_bootup_announce(void)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
early_initcall(srcu_bootup_announce);
|
early_initcall(srcu_bootup_announce);
|
||||||
|
|
||||||
|
void __init srcu_init(void)
|
||||||
|
{
|
||||||
|
struct srcu_struct *sp;
|
||||||
|
|
||||||
|
srcu_init_done = true;
|
||||||
|
while (!list_empty(&srcu_boot_list)) {
|
||||||
|
sp = list_first_entry(&srcu_boot_list, struct srcu_struct,
|
||||||
|
work.work.entry);
|
||||||
|
check_init_srcu_struct(sp);
|
||||||
|
list_del_init(&sp->work.work.entry);
|
||||||
|
queue_work(rcu_gp_wq, &sp->work.work);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
@ -46,69 +46,27 @@ struct rcu_ctrlblk {
|
|||||||
};
|
};
|
||||||
|
|
||||||
/* Definition for rcupdate control block. */
|
/* Definition for rcupdate control block. */
|
||||||
static struct rcu_ctrlblk rcu_sched_ctrlblk = {
|
static struct rcu_ctrlblk rcu_ctrlblk = {
|
||||||
.donetail = &rcu_sched_ctrlblk.rcucblist,
|
.donetail = &rcu_ctrlblk.rcucblist,
|
||||||
.curtail = &rcu_sched_ctrlblk.rcucblist,
|
.curtail = &rcu_ctrlblk.rcucblist,
|
||||||
};
|
};
|
||||||
|
|
||||||
static struct rcu_ctrlblk rcu_bh_ctrlblk = {
|
void rcu_barrier(void)
|
||||||
.donetail = &rcu_bh_ctrlblk.rcucblist,
|
|
||||||
.curtail = &rcu_bh_ctrlblk.rcucblist,
|
|
||||||
};
|
|
||||||
|
|
||||||
void rcu_barrier_bh(void)
|
|
||||||
{
|
{
|
||||||
wait_rcu_gp(call_rcu_bh);
|
wait_rcu_gp(call_rcu);
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL(rcu_barrier_bh);
|
EXPORT_SYMBOL(rcu_barrier);
|
||||||
|
|
||||||
void rcu_barrier_sched(void)
|
/* Record an rcu quiescent state. */
|
||||||
|
void rcu_qs(void)
|
||||||
{
|
{
|
||||||
wait_rcu_gp(call_rcu_sched);
|
unsigned long flags;
|
||||||
}
|
|
||||||
EXPORT_SYMBOL(rcu_barrier_sched);
|
|
||||||
|
|
||||||
/*
|
local_irq_save(flags);
|
||||||
* Helper function for rcu_sched_qs() and rcu_bh_qs().
|
if (rcu_ctrlblk.donetail != rcu_ctrlblk.curtail) {
|
||||||
* Also irqs are disabled to avoid confusion due to interrupt handlers
|
rcu_ctrlblk.donetail = rcu_ctrlblk.curtail;
|
||||||
* invoking call_rcu().
|
raise_softirq(RCU_SOFTIRQ);
|
||||||
*/
|
|
||||||
static int rcu_qsctr_help(struct rcu_ctrlblk *rcp)
|
|
||||||
{
|
|
||||||
if (rcp->donetail != rcp->curtail) {
|
|
||||||
rcp->donetail = rcp->curtail;
|
|
||||||
return 1;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
|
||||||
* Record an rcu quiescent state. And an rcu_bh quiescent state while we
|
|
||||||
* are at it, given that any rcu quiescent state is also an rcu_bh
|
|
||||||
* quiescent state. Use "+" instead of "||" to defeat short circuiting.
|
|
||||||
*/
|
|
||||||
void rcu_sched_qs(void)
|
|
||||||
{
|
|
||||||
unsigned long flags;
|
|
||||||
|
|
||||||
local_irq_save(flags);
|
|
||||||
if (rcu_qsctr_help(&rcu_sched_ctrlblk) +
|
|
||||||
rcu_qsctr_help(&rcu_bh_ctrlblk))
|
|
||||||
raise_softirq(RCU_SOFTIRQ);
|
|
||||||
local_irq_restore(flags);
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
|
||||||
* Record an rcu_bh quiescent state.
|
|
||||||
*/
|
|
||||||
void rcu_bh_qs(void)
|
|
||||||
{
|
|
||||||
unsigned long flags;
|
|
||||||
|
|
||||||
local_irq_save(flags);
|
|
||||||
if (rcu_qsctr_help(&rcu_bh_ctrlblk))
|
|
||||||
raise_softirq(RCU_SOFTIRQ);
|
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -120,34 +78,33 @@ void rcu_bh_qs(void)
|
|||||||
*/
|
*/
|
||||||
void rcu_check_callbacks(int user)
|
void rcu_check_callbacks(int user)
|
||||||
{
|
{
|
||||||
if (user)
|
if (user) {
|
||||||
rcu_sched_qs();
|
rcu_qs();
|
||||||
if (user || !in_softirq())
|
} else if (rcu_ctrlblk.donetail != rcu_ctrlblk.curtail) {
|
||||||
rcu_bh_qs();
|
set_tsk_need_resched(current);
|
||||||
|
set_preempt_need_resched();
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/* Invoke the RCU callbacks whose grace period has elapsed. */
|
||||||
* Invoke the RCU callbacks on the specified rcu_ctrlkblk structure
|
static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
|
||||||
* whose grace period has elapsed.
|
|
||||||
*/
|
|
||||||
static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
|
|
||||||
{
|
{
|
||||||
struct rcu_head *next, *list;
|
struct rcu_head *next, *list;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
/* Move the ready-to-invoke callbacks to a local list. */
|
/* Move the ready-to-invoke callbacks to a local list. */
|
||||||
local_irq_save(flags);
|
local_irq_save(flags);
|
||||||
if (rcp->donetail == &rcp->rcucblist) {
|
if (rcu_ctrlblk.donetail == &rcu_ctrlblk.rcucblist) {
|
||||||
/* No callbacks ready, so just leave. */
|
/* No callbacks ready, so just leave. */
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
list = rcp->rcucblist;
|
list = rcu_ctrlblk.rcucblist;
|
||||||
rcp->rcucblist = *rcp->donetail;
|
rcu_ctrlblk.rcucblist = *rcu_ctrlblk.donetail;
|
||||||
*rcp->donetail = NULL;
|
*rcu_ctrlblk.donetail = NULL;
|
||||||
if (rcp->curtail == rcp->donetail)
|
if (rcu_ctrlblk.curtail == rcu_ctrlblk.donetail)
|
||||||
rcp->curtail = &rcp->rcucblist;
|
rcu_ctrlblk.curtail = &rcu_ctrlblk.rcucblist;
|
||||||
rcp->donetail = &rcp->rcucblist;
|
rcu_ctrlblk.donetail = &rcu_ctrlblk.rcucblist;
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
|
|
||||||
/* Invoke the callbacks on the local list. */
|
/* Invoke the callbacks on the local list. */
|
||||||
@ -162,37 +119,31 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
|
|
||||||
{
|
|
||||||
__rcu_process_callbacks(&rcu_sched_ctrlblk);
|
|
||||||
__rcu_process_callbacks(&rcu_bh_ctrlblk);
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Wait for a grace period to elapse. But it is illegal to invoke
|
* Wait for a grace period to elapse. But it is illegal to invoke
|
||||||
* synchronize_sched() from within an RCU read-side critical section.
|
* synchronize_rcu() from within an RCU read-side critical section.
|
||||||
* Therefore, any legal call to synchronize_sched() is a quiescent
|
* Therefore, any legal call to synchronize_rcu() is a quiescent
|
||||||
* state, and so on a UP system, synchronize_sched() need do nothing.
|
* state, and so on a UP system, synchronize_rcu() need do nothing.
|
||||||
* Ditto for synchronize_rcu_bh(). (But Lai Jiangshan points out the
|
* (But Lai Jiangshan points out the benefits of doing might_sleep()
|
||||||
* benefits of doing might_sleep() to reduce latency.)
|
* to reduce latency.)
|
||||||
*
|
*
|
||||||
* Cool, huh? (Due to Josh Triplett.)
|
* Cool, huh? (Due to Josh Triplett.)
|
||||||
*/
|
*/
|
||||||
void synchronize_sched(void)
|
void synchronize_rcu(void)
|
||||||
{
|
{
|
||||||
RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
|
RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
|
||||||
lock_is_held(&rcu_lock_map) ||
|
lock_is_held(&rcu_lock_map) ||
|
||||||
lock_is_held(&rcu_sched_lock_map),
|
lock_is_held(&rcu_sched_lock_map),
|
||||||
"Illegal synchronize_sched() in RCU read-side critical section");
|
"Illegal synchronize_rcu() in RCU read-side critical section");
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(synchronize_sched);
|
EXPORT_SYMBOL_GPL(synchronize_rcu);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Helper function for call_rcu() and call_rcu_bh().
|
* Post an RCU callback to be invoked after the end of an RCU grace
|
||||||
|
* period. But since we have but one CPU, that would be after any
|
||||||
|
* quiescent state.
|
||||||
*/
|
*/
|
||||||
static void __call_rcu(struct rcu_head *head,
|
void call_rcu(struct rcu_head *head, rcu_callback_t func)
|
||||||
rcu_callback_t func,
|
|
||||||
struct rcu_ctrlblk *rcp)
|
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
@ -201,39 +152,20 @@ static void __call_rcu(struct rcu_head *head,
|
|||||||
head->next = NULL;
|
head->next = NULL;
|
||||||
|
|
||||||
local_irq_save(flags);
|
local_irq_save(flags);
|
||||||
*rcp->curtail = head;
|
*rcu_ctrlblk.curtail = head;
|
||||||
rcp->curtail = &head->next;
|
rcu_ctrlblk.curtail = &head->next;
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
|
|
||||||
if (unlikely(is_idle_task(current))) {
|
if (unlikely(is_idle_task(current))) {
|
||||||
/* force scheduling for rcu_sched_qs() */
|
/* force scheduling for rcu_qs() */
|
||||||
resched_cpu(0);
|
resched_cpu(0);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
EXPORT_SYMBOL_GPL(call_rcu);
|
||||||
/*
|
|
||||||
* Post an RCU callback to be invoked after the end of an RCU-sched grace
|
|
||||||
* period. But since we have but one CPU, that would be after any
|
|
||||||
* quiescent state.
|
|
||||||
*/
|
|
||||||
void call_rcu_sched(struct rcu_head *head, rcu_callback_t func)
|
|
||||||
{
|
|
||||||
__call_rcu(head, func, &rcu_sched_ctrlblk);
|
|
||||||
}
|
|
||||||
EXPORT_SYMBOL_GPL(call_rcu_sched);
|
|
||||||
|
|
||||||
/*
|
|
||||||
* Post an RCU bottom-half callback to be invoked after any subsequent
|
|
||||||
* quiescent state.
|
|
||||||
*/
|
|
||||||
void call_rcu_bh(struct rcu_head *head, rcu_callback_t func)
|
|
||||||
{
|
|
||||||
__call_rcu(head, func, &rcu_bh_ctrlblk);
|
|
||||||
}
|
|
||||||
EXPORT_SYMBOL_GPL(call_rcu_bh);
|
|
||||||
|
|
||||||
void __init rcu_init(void)
|
void __init rcu_init(void)
|
||||||
{
|
{
|
||||||
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
|
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
|
||||||
rcu_early_boot_tests();
|
rcu_early_boot_tests();
|
||||||
|
srcu_init();
|
||||||
}
|
}
|
||||||
|
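With the rcu_bh and rcu_sched flavors folded into plain RCU here, Tiny RCU callers no longer choose a flavor-specific API. A usage sketch under that assumption (example structure and callback, not part of the patch):

/* Illustration only: the consolidated API covers bh and sched readers too. */
struct example_node {
	struct rcu_head rh;
	int data;
};

static void example_node_reclaim(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct example_node, rh));
}

static void example_node_release(struct example_node *ep)
{
	call_rcu(&ep->rh, example_node_reclaim);
}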
kernel/rcu/tree.c (2241): diff suppressed because it is too large
@ -34,34 +34,9 @@
|
|||||||
|
|
||||||
#include "rcu_segcblist.h"
|
#include "rcu_segcblist.h"
|
||||||
|
|
||||||
/*
|
|
||||||
* Dynticks per-CPU state.
|
|
||||||
*/
|
|
||||||
struct rcu_dynticks {
|
|
||||||
long dynticks_nesting; /* Track process nesting level. */
|
|
||||||
long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */
|
|
||||||
atomic_t dynticks; /* Even value for idle, else odd. */
|
|
||||||
bool rcu_need_heavy_qs; /* GP old, need heavy quiescent state. */
|
|
||||||
unsigned long rcu_qs_ctr; /* Light universal quiescent state ctr. */
|
|
||||||
bool rcu_urgent_qs; /* GP old need light quiescent state. */
|
|
||||||
#ifdef CONFIG_RCU_FAST_NO_HZ
|
|
||||||
bool all_lazy; /* Are all CPU's CBs lazy? */
|
|
||||||
unsigned long nonlazy_posted;
|
|
||||||
/* # times non-lazy CBs posted to CPU. */
|
|
||||||
unsigned long nonlazy_posted_snap;
|
|
||||||
/* idle-period nonlazy_posted snapshot. */
|
|
||||||
unsigned long last_accelerate;
|
|
||||||
/* Last jiffy CBs were accelerated. */
|
|
||||||
unsigned long last_advance_all;
|
|
||||||
/* Last jiffy CBs were all advanced. */
|
|
||||||
int tick_nohz_enabled_snap; /* Previously seen value from sysfs. */
|
|
||||||
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
|
|
||||||
};
|
|
||||||
|
|
||||||
/* Communicate arguments to a workqueue handler. */
|
/* Communicate arguments to a workqueue handler. */
|
||||||
struct rcu_exp_work {
|
struct rcu_exp_work {
|
||||||
smp_call_func_t rew_func;
|
smp_call_func_t rew_func;
|
||||||
struct rcu_state *rew_rsp;
|
|
||||||
unsigned long rew_s;
|
unsigned long rew_s;
|
||||||
struct work_struct rew_work;
|
struct work_struct rew_work;
|
||||||
};
|
};
|
||||||
@ -170,7 +145,7 @@ struct rcu_node {
|
|||||||
* are indexed relative to this interval rather than the global CPU ID space.
|
* are indexed relative to this interval rather than the global CPU ID space.
|
||||||
* This generates the bit for a CPU in node-local masks.
|
* This generates the bit for a CPU in node-local masks.
|
||||||
*/
|
*/
|
||||||
#define leaf_node_cpu_bit(rnp, cpu) (1UL << ((cpu) - (rnp)->grplo))
|
#define leaf_node_cpu_bit(rnp, cpu) (BIT((cpu) - (rnp)->grplo))
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Union to allow "aggregate OR" operation on the need for a quiescent
|
* Union to allow "aggregate OR" operation on the need for a quiescent
|
||||||
@ -189,12 +164,11 @@ struct rcu_data {
|
|||||||
/* 1) quiescent-state and grace-period handling : */
|
/* 1) quiescent-state and grace-period handling : */
|
||||||
unsigned long gp_seq; /* Track rsp->rcu_gp_seq counter. */
|
unsigned long gp_seq; /* Track rsp->rcu_gp_seq counter. */
|
||||||
unsigned long gp_seq_needed; /* Track rsp->rcu_gp_seq_needed ctr. */
|
unsigned long gp_seq_needed; /* Track rsp->rcu_gp_seq_needed ctr. */
|
||||||
unsigned long rcu_qs_ctr_snap;/* Snapshot of rcu_qs_ctr to check */
|
|
||||||
/* for rcu_all_qs() invocations. */
|
|
||||||
union rcu_noqs cpu_no_qs; /* No QSes yet for this CPU. */
|
union rcu_noqs cpu_no_qs; /* No QSes yet for this CPU. */
|
||||||
bool core_needs_qs; /* Core waits for quiesc state. */
|
bool core_needs_qs; /* Core waits for quiesc state. */
|
||||||
bool beenonline; /* CPU online at least once. */
|
bool beenonline; /* CPU online at least once. */
|
||||||
bool gpwrap; /* Possible ->gp_seq wrap. */
|
bool gpwrap; /* Possible ->gp_seq wrap. */
|
||||||
|
bool deferred_qs; /* This CPU awaiting a deferred QS? */
|
||||||
struct rcu_node *mynode; /* This CPU's leaf of hierarchy */
|
struct rcu_node *mynode; /* This CPU's leaf of hierarchy */
|
||||||
unsigned long grpmask; /* Mask to apply to leaf qsmask. */
|
unsigned long grpmask; /* Mask to apply to leaf qsmask. */
|
||||||
unsigned long ticks_this_gp; /* The number of scheduling-clock */
|
unsigned long ticks_this_gp; /* The number of scheduling-clock */
|
||||||
@ -213,23 +187,27 @@ struct rcu_data {
|
|||||||
long blimit; /* Upper limit on a processed batch */
|
long blimit; /* Upper limit on a processed batch */
|
||||||
|
|
||||||
/* 3) dynticks interface. */
|
/* 3) dynticks interface. */
|
||||||
struct rcu_dynticks *dynticks; /* Shared per-CPU dynticks state. */
|
|
||||||
int dynticks_snap; /* Per-GP tracking for dynticks. */
|
int dynticks_snap; /* Per-GP tracking for dynticks. */
|
||||||
|
long dynticks_nesting; /* Track process nesting level. */
|
||||||
/* 4) reasons this CPU needed to be kicked by force_quiescent_state */
|
long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */
|
||||||
unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */
|
atomic_t dynticks; /* Even value for idle, else odd. */
|
||||||
unsigned long cond_resched_completed;
|
bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */
|
||||||
/* Grace period that needs help */
|
bool rcu_urgent_qs; /* GP old need light quiescent state. */
|
||||||
/* from cond_resched(). */
|
|
||||||
|
|
||||||
/* 5) _rcu_barrier(), OOM callbacks, and expediting. */
|
|
||||||
struct rcu_head barrier_head;
|
|
||||||
#ifdef CONFIG_RCU_FAST_NO_HZ
|
#ifdef CONFIG_RCU_FAST_NO_HZ
|
||||||
struct rcu_head oom_head;
|
bool all_lazy; /* Are all CPU's CBs lazy? */
|
||||||
|
unsigned long nonlazy_posted; /* # times non-lazy CB posted to CPU. */
|
||||||
|
unsigned long nonlazy_posted_snap;
|
||||||
|
/* Nonlazy_posted snapshot. */
|
||||||
|
unsigned long last_accelerate; /* Last jiffy CBs were accelerated. */
|
||||||
|
unsigned long last_advance_all; /* Last jiffy CBs were all advanced. */
|
||||||
|
int tick_nohz_enabled_snap; /* Previously seen value from sysfs. */
|
||||||
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
|
#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
|
||||||
|
|
||||||
|
/* 4) rcu_barrier(), OOM callbacks, and expediting. */
|
||||||
|
struct rcu_head barrier_head;
|
||||||
int exp_dynticks_snap; /* Double-check need for IPI. */
|
int exp_dynticks_snap; /* Double-check need for IPI. */
|
||||||
|
|
||||||
/* 6) Callback offloading. */
|
/* 5) Callback offloading. */
|
||||||
#ifdef CONFIG_RCU_NOCB_CPU
|
#ifdef CONFIG_RCU_NOCB_CPU
|
||||||
struct rcu_head *nocb_head; /* CBs waiting for kthread. */
|
struct rcu_head *nocb_head; /* CBs waiting for kthread. */
|
||||||
struct rcu_head **nocb_tail;
|
struct rcu_head **nocb_tail;
|
||||||
@ -256,7 +234,7 @@ struct rcu_data {
|
|||||||
/* Leader CPU takes GP-end wakeups. */
|
/* Leader CPU takes GP-end wakeups. */
|
||||||
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
|
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
|
||||||
|
|
||||||
/* 7) Diagnostic data, including RCU CPU stall warnings. */
|
/* 6) Diagnostic data, including RCU CPU stall warnings. */
|
||||||
unsigned int softirq_snap; /* Snapshot of softirq activity. */
|
unsigned int softirq_snap; /* Snapshot of softirq activity. */
|
||||||
/* ->rcu_iw* fields protected by leaf rcu_node ->lock. */
|
/* ->rcu_iw* fields protected by leaf rcu_node ->lock. */
|
||||||
struct irq_work rcu_iw; /* Check for non-irq activity. */
|
struct irq_work rcu_iw; /* Check for non-irq activity. */
|
||||||
@ -266,9 +244,9 @@ struct rcu_data {
|
|||||||
short rcu_ofl_gp_flags; /* ->gp_flags at last offline. */
|
short rcu_ofl_gp_flags; /* ->gp_flags at last offline. */
|
||||||
unsigned long rcu_onl_gp_seq; /* ->gp_seq at last online. */
|
unsigned long rcu_onl_gp_seq; /* ->gp_seq at last online. */
|
||||||
short rcu_onl_gp_flags; /* ->gp_flags at last online. */
|
short rcu_onl_gp_flags; /* ->gp_flags at last online. */
|
||||||
|
unsigned long last_fqs_resched; /* Time of last rcu_resched(). */
|
||||||
|
|
||||||
int cpu;
|
int cpu;
|
||||||
struct rcu_state *rsp;
|
|
||||||
};
|
};
|
||||||
|
|
||||||
/* Values for nocb_defer_wakeup field in struct rcu_data. */
|
/* Values for nocb_defer_wakeup field in struct rcu_data. */
|
||||||
@ -314,8 +292,6 @@ struct rcu_state {
|
|||||||
struct rcu_node *level[RCU_NUM_LVLS + 1];
|
struct rcu_node *level[RCU_NUM_LVLS + 1];
|
||||||
/* Hierarchy levels (+1 to */
|
/* Hierarchy levels (+1 to */
|
||||||
/* shut bogus gcc warning) */
|
/* shut bogus gcc warning) */
|
||||||
struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */
|
|
||||||
call_rcu_func_t call; /* call_rcu() flavor. */
|
|
||||||
int ncpus; /* # CPUs seen so far. */
|
int ncpus; /* # CPUs seen so far. */
|
||||||
|
|
||||||
/* The following fields are guarded by the root rcu_node's lock. */
|
/* The following fields are guarded by the root rcu_node's lock. */
|
||||||
@ -334,7 +310,7 @@ struct rcu_state {
|
|||||||
atomic_t barrier_cpu_count; /* # CPUs waiting on. */
|
atomic_t barrier_cpu_count; /* # CPUs waiting on. */
|
||||||
struct completion barrier_completion; /* Wake at barrier end. */
|
struct completion barrier_completion; /* Wake at barrier end. */
|
||||||
unsigned long barrier_sequence; /* ++ at start and end of */
|
unsigned long barrier_sequence; /* ++ at start and end of */
|
||||||
/* _rcu_barrier(). */
|
/* rcu_barrier(). */
|
||||||
/* End of fields guarded by barrier_mutex. */
|
/* End of fields guarded by barrier_mutex. */
|
||||||
|
|
||||||
struct mutex exp_mutex; /* Serialize expedited GP. */
|
struct mutex exp_mutex; /* Serialize expedited GP. */
|
||||||
@ -366,9 +342,8 @@ struct rcu_state {
|
|||||||
/* jiffies. */
|
/* jiffies. */
|
||||||
const char *name; /* Name of structure. */
|
const char *name; /* Name of structure. */
|
||||||
char abbr; /* Abbreviated name. */
|
char abbr; /* Abbreviated name. */
|
||||||
struct list_head flavors; /* List of RCU flavors. */
|
|
||||||
|
|
||||||
spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
|
raw_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
|
||||||
/* Synchronize offline with */
|
/* Synchronize offline with */
|
||||||
/* GP pre-initialization. */
|
/* GP pre-initialization. */
|
||||||
};
|
};
|
||||||
@ -388,7 +363,6 @@ struct rcu_state {
|
|||||||
#define RCU_GP_CLEANUP 7 /* Grace-period cleanup started. */
|
#define RCU_GP_CLEANUP 7 /* Grace-period cleanup started. */
|
||||||
#define RCU_GP_CLEANED 8 /* Grace-period cleanup complete. */
|
#define RCU_GP_CLEANED 8 /* Grace-period cleanup complete. */
|
||||||
|
|
||||||
#ifndef RCU_TREE_NONCORE
|
|
||||||
static const char * const gp_state_names[] = {
|
static const char * const gp_state_names[] = {
|
||||||
"RCU_GP_IDLE",
|
"RCU_GP_IDLE",
|
||||||
"RCU_GP_WAIT_GPS",
|
"RCU_GP_WAIT_GPS",
|
||||||
@ -400,13 +374,29 @@ static const char * const gp_state_names[] = {
|
|||||||
"RCU_GP_CLEANUP",
|
"RCU_GP_CLEANUP",
|
||||||
"RCU_GP_CLEANED",
|
"RCU_GP_CLEANED",
|
||||||
};
|
};
|
||||||
#endif /* #ifndef RCU_TREE_NONCORE */
|
|
||||||
|
|
||||||
extern struct list_head rcu_struct_flavors;
|
/*
|
||||||
|
* In order to export the rcu_state name to the tracing tools, it
|
||||||
/* Sequence through rcu_state structures for each RCU flavor. */
|
* needs to be added in the __tracepoint_string section.
|
||||||
#define for_each_rcu_flavor(rsp) \
|
* This requires defining a separate variable tp_<sname>_varname
|
||||||
list_for_each_entry((rsp), &rcu_struct_flavors, flavors)
|
* that points to the string being used, and this will allow
|
||||||
|
* the tracing userspace tools to be able to decipher the string
|
||||||
|
* address to the matching string.
|
||||||
|
*/
|
||||||
|
#ifdef CONFIG_PREEMPT_RCU
|
||||||
|
#define RCU_ABBR 'p'
|
||||||
|
#define RCU_NAME_RAW "rcu_preempt"
|
||||||
|
#else /* #ifdef CONFIG_PREEMPT_RCU */
|
||||||
|
#define RCU_ABBR 's'
|
||||||
|
#define RCU_NAME_RAW "rcu_sched"
|
||||||
|
#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
|
||||||
|
#ifndef CONFIG_TRACING
|
||||||
|
#define RCU_NAME RCU_NAME_RAW
|
||||||
|
#else /* #ifdef CONFIG_TRACING */
|
||||||
|
static char rcu_name[] = RCU_NAME_RAW;
|
||||||
|
static const char *tp_rcu_varname __used __tracepoint_string = rcu_name;
|
||||||
|
#define RCU_NAME rcu_name
|
||||||
|
#endif /* #else #ifdef CONFIG_TRACING */
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* RCU implementation internal declarations:
|
* RCU implementation internal declarations:
|
||||||
@@ -419,7 +409,7 @@ extern struct rcu_state rcu_bh_state;
 extern struct rcu_state rcu_preempt_state;
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
 
-int rcu_dynticks_snap(struct rcu_dynticks *rdtp);
+int rcu_dynticks_snap(struct rcu_data *rdp);
 
 #ifdef CONFIG_RCU_BOOST
 DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
@ -428,45 +418,37 @@ DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
|
|||||||
DECLARE_PER_CPU(char, rcu_cpu_has_work);
|
DECLARE_PER_CPU(char, rcu_cpu_has_work);
|
||||||
#endif /* #ifdef CONFIG_RCU_BOOST */
|
#endif /* #ifdef CONFIG_RCU_BOOST */
|
||||||
|
|
||||||
#ifndef RCU_TREE_NONCORE
|
|
||||||
|
|
||||||
/* Forward declarations for rcutree_plugin.h */
|
/* Forward declarations for rcutree_plugin.h */
|
||||||
static void rcu_bootup_announce(void);
|
static void rcu_bootup_announce(void);
|
||||||
static void rcu_preempt_note_context_switch(bool preempt);
|
static void rcu_qs(void);
|
||||||
static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
|
static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
|
||||||
#ifdef CONFIG_HOTPLUG_CPU
|
#ifdef CONFIG_HOTPLUG_CPU
|
||||||
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
|
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
|
||||||
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
|
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
|
||||||
static void rcu_print_detail_task_stall(struct rcu_state *rsp);
|
static void rcu_print_detail_task_stall(void);
|
||||||
static int rcu_print_task_stall(struct rcu_node *rnp);
|
static int rcu_print_task_stall(struct rcu_node *rnp);
|
||||||
static int rcu_print_task_exp_stall(struct rcu_node *rnp);
|
static int rcu_print_task_exp_stall(struct rcu_node *rnp);
|
||||||
static void rcu_preempt_check_blocked_tasks(struct rcu_state *rsp,
|
static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
|
||||||
struct rcu_node *rnp);
|
static void rcu_flavor_check_callbacks(int user);
|
||||||
static void rcu_preempt_check_callbacks(void);
|
|
||||||
void call_rcu(struct rcu_head *head, rcu_callback_t func);
|
void call_rcu(struct rcu_head *head, rcu_callback_t func);
|
||||||
static void __init __rcu_init_preempt(void);
|
static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck);
|
||||||
static void dump_blkd_tasks(struct rcu_state *rsp, struct rcu_node *rnp,
|
|
||||||
int ncheck);
|
|
||||||
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
|
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
|
||||||
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
|
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
|
||||||
static void invoke_rcu_callbacks_kthread(void);
|
static void invoke_rcu_callbacks_kthread(void);
|
||||||
static bool rcu_is_callbacks_kthread(void);
|
static bool rcu_is_callbacks_kthread(void);
|
||||||
#ifdef CONFIG_RCU_BOOST
|
|
||||||
static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
|
|
||||||
struct rcu_node *rnp);
|
|
||||||
#endif /* #ifdef CONFIG_RCU_BOOST */
|
|
||||||
static void __init rcu_spawn_boost_kthreads(void);
|
static void __init rcu_spawn_boost_kthreads(void);
|
||||||
static void rcu_prepare_kthreads(int cpu);
|
static void rcu_prepare_kthreads(int cpu);
|
||||||
static void rcu_cleanup_after_idle(void);
|
static void rcu_cleanup_after_idle(void);
|
||||||
static void rcu_prepare_for_idle(void);
|
static void rcu_prepare_for_idle(void);
|
||||||
static void rcu_idle_count_callbacks_posted(void);
|
static void rcu_idle_count_callbacks_posted(void);
|
||||||
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
|
static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
|
||||||
|
static bool rcu_preempt_need_deferred_qs(struct task_struct *t);
|
||||||
|
static void rcu_preempt_deferred_qs(struct task_struct *t);
|
||||||
static void print_cpu_stall_info_begin(void);
|
static void print_cpu_stall_info_begin(void);
|
||||||
static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
|
static void print_cpu_stall_info(int cpu);
|
||||||
static void print_cpu_stall_info_end(void);
|
static void print_cpu_stall_info_end(void);
|
||||||
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
|
static void zero_cpu_stall_ticks(struct rcu_data *rdp);
|
||||||
static void increment_cpu_stall_ticks(void);
|
static bool rcu_nocb_cpu_needs_barrier(int cpu);
|
||||||
static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu);
|
|
||||||
static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
|
static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
|
||||||
static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq);
|
static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq);
|
||||||
static void rcu_init_one_nocb(struct rcu_node *rnp);
|
static void rcu_init_one_nocb(struct rcu_node *rnp);
|
||||||
@ -481,11 +463,11 @@ static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
|
|||||||
static void rcu_spawn_all_nocb_kthreads(int cpu);
|
static void rcu_spawn_all_nocb_kthreads(int cpu);
|
||||||
static void __init rcu_spawn_nocb_kthreads(void);
|
static void __init rcu_spawn_nocb_kthreads(void);
|
||||||
#ifdef CONFIG_RCU_NOCB_CPU
|
#ifdef CONFIG_RCU_NOCB_CPU
|
||||||
static void __init rcu_organize_nocb_kthreads(struct rcu_state *rsp);
|
static void __init rcu_organize_nocb_kthreads(void);
|
||||||
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
|
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
|
||||||
static bool init_nocb_callback_list(struct rcu_data *rdp);
|
static bool init_nocb_callback_list(struct rcu_data *rdp);
|
||||||
static void rcu_bind_gp_kthread(void);
|
static void rcu_bind_gp_kthread(void);
|
||||||
static bool rcu_nohz_full_cpu(struct rcu_state *rsp);
|
static bool rcu_nohz_full_cpu(void);
|
||||||
static void rcu_dynticks_task_enter(void);
|
static void rcu_dynticks_task_enter(void);
|
||||||
static void rcu_dynticks_task_exit(void);
|
static void rcu_dynticks_task_exit(void);
|
||||||
|
|
||||||
@ -496,5 +478,3 @@ void srcu_offline_cpu(unsigned int cpu);
|
|||||||
void srcu_online_cpu(unsigned int cpu) { }
|
void srcu_online_cpu(unsigned int cpu) { }
|
||||||
void srcu_offline_cpu(unsigned int cpu) { }
|
void srcu_offline_cpu(unsigned int cpu) { }
|
||||||
#endif /* #else #ifdef CONFIG_SRCU */
|
#endif /* #else #ifdef CONFIG_SRCU */
|
||||||
|
|
||||||
#endif /* #ifndef RCU_TREE_NONCORE */
|
|
||||||
|
@@ -25,39 +25,39 @@
 /*
  * Record the start of an expedited grace period.
  */
-static void rcu_exp_gp_seq_start(struct rcu_state *rsp)
+static void rcu_exp_gp_seq_start(void)
 {
-	rcu_seq_start(&rsp->expedited_sequence);
+	rcu_seq_start(&rcu_state.expedited_sequence);
 }
 
 /*
  * Return then value that expedited-grace-period counter will have
  * at the end of the current grace period.
  */
-static __maybe_unused unsigned long rcu_exp_gp_seq_endval(struct rcu_state *rsp)
+static __maybe_unused unsigned long rcu_exp_gp_seq_endval(void)
 {
-	return rcu_seq_endval(&rsp->expedited_sequence);
+	return rcu_seq_endval(&rcu_state.expedited_sequence);
 }
 
 /*
  * Record the end of an expedited grace period.
  */
-static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
+static void rcu_exp_gp_seq_end(void)
 {
-	rcu_seq_end(&rsp->expedited_sequence);
+	rcu_seq_end(&rcu_state.expedited_sequence);
 	smp_mb(); /* Ensure that consecutive grace periods serialize. */
 }
 
 /*
  * Take a snapshot of the expedited-grace-period counter.
  */
-static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
+static unsigned long rcu_exp_gp_seq_snap(void)
 {
 	unsigned long s;
 
 	smp_mb(); /* Caller's modifications seen first by other CPUs. */
-	s = rcu_seq_snap(&rsp->expedited_sequence);
-	trace_rcu_exp_grace_period(rsp->name, s, TPS("snap"));
+	s = rcu_seq_snap(&rcu_state.expedited_sequence);
+	trace_rcu_exp_grace_period(rcu_state.name, s, TPS("snap"));
 	return s;
 }
 
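Editorial note: the rcu_exp_gp_seq_*() wrappers above drive a sequence counter whose low-order state bit(s) say whether an expedited grace period is in flight, so a requester can snapshot the value it needs and later ask whether some other caller already completed it. The fragment below sketches that snapshot/done idea with a single state bit and no concurrency control; it is an illustration only, not the kernel's rcu_seq_*() implementation, which keeps two state bits and uses proper memory ordering.

#include <stdbool.h>

static unsigned long seq;	/* Low bit set while a GP is running. */

static void gp_seq_start(void) { seq++; }	/* Now odd: GP in progress. */
static void gp_seq_end(void)   { seq++; }	/* Now even: GP completed.  */

/* Value the counter must reach for a full GP starting after this point. */
static unsigned long gp_seq_snap(void)
{
	return (seq + 3) & ~0x1UL;
}

/* Has a full GP elapsed since the snapshot was taken? */
static bool gp_seq_done(unsigned long snap)
{
	return (long)(seq - snap) >= 0;
}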
@@ -66,9 +66,9 @@ static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
  * if a full expedited grace period has elapsed since that snapshot
  * was taken.
  */
-static bool rcu_exp_gp_seq_done(struct rcu_state *rsp, unsigned long s)
+static bool rcu_exp_gp_seq_done(unsigned long s)
 {
-	return rcu_seq_done(&rsp->expedited_sequence, s);
+	return rcu_seq_done(&rcu_state.expedited_sequence, s);
 }
 
 /*
@ -78,26 +78,26 @@ static bool rcu_exp_gp_seq_done(struct rcu_state *rsp, unsigned long s)
|
|||||||
* ever been online. This means that this function normally takes its
|
* ever been online. This means that this function normally takes its
|
||||||
* no-work-to-do fastpath.
|
* no-work-to-do fastpath.
|
||||||
*/
|
*/
|
||||||
static void sync_exp_reset_tree_hotplug(struct rcu_state *rsp)
|
static void sync_exp_reset_tree_hotplug(void)
|
||||||
{
|
{
|
||||||
bool done;
|
bool done;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
unsigned long mask;
|
unsigned long mask;
|
||||||
unsigned long oldmask;
|
unsigned long oldmask;
|
||||||
int ncpus = smp_load_acquire(&rsp->ncpus); /* Order against locking. */
|
int ncpus = smp_load_acquire(&rcu_state.ncpus); /* Order vs. locking. */
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
struct rcu_node *rnp_up;
|
struct rcu_node *rnp_up;
|
||||||
|
|
||||||
/* If no new CPUs onlined since last time, nothing to do. */
|
/* If no new CPUs onlined since last time, nothing to do. */
|
||||||
if (likely(ncpus == rsp->ncpus_snap))
|
if (likely(ncpus == rcu_state.ncpus_snap))
|
||||||
return;
|
return;
|
||||||
rsp->ncpus_snap = ncpus;
|
rcu_state.ncpus_snap = ncpus;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Each pass through the following loop propagates newly onlined
|
* Each pass through the following loop propagates newly onlined
|
||||||
* CPUs for the current rcu_node structure up the rcu_node tree.
|
* CPUs for the current rcu_node structure up the rcu_node tree.
|
||||||
*/
|
*/
|
||||||
rcu_for_each_leaf_node(rsp, rnp) {
|
rcu_for_each_leaf_node(rnp) {
|
||||||
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
if (rnp->expmaskinit == rnp->expmaskinitnext) {
|
if (rnp->expmaskinit == rnp->expmaskinitnext) {
|
||||||
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
@ -135,13 +135,13 @@ static void sync_exp_reset_tree_hotplug(struct rcu_state *rsp)
|
|||||||
* Reset the ->expmask values in the rcu_node tree in preparation for
|
* Reset the ->expmask values in the rcu_node tree in preparation for
|
||||||
* a new expedited grace period.
|
* a new expedited grace period.
|
||||||
*/
|
*/
|
||||||
static void __maybe_unused sync_exp_reset_tree(struct rcu_state *rsp)
|
static void __maybe_unused sync_exp_reset_tree(void)
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
|
|
||||||
sync_exp_reset_tree_hotplug(rsp);
|
sync_exp_reset_tree_hotplug();
|
||||||
rcu_for_each_node_breadth_first(rsp, rnp) {
|
rcu_for_each_node_breadth_first(rnp) {
|
||||||
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
WARN_ON_ONCE(rnp->expmask);
|
WARN_ON_ONCE(rnp->expmask);
|
||||||
rnp->expmask = rnp->expmaskinit;
|
rnp->expmask = rnp->expmaskinit;
|
||||||
@ -194,7 +194,7 @@ static bool sync_rcu_preempt_exp_done_unlocked(struct rcu_node *rnp)
|
|||||||
*
|
*
|
||||||
* Caller must hold the specified rcu_node structure's ->lock.
|
* Caller must hold the specified rcu_node structure's ->lock.
|
||||||
*/
|
*/
|
||||||
static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
static void __rcu_report_exp_rnp(struct rcu_node *rnp,
|
||||||
bool wake, unsigned long flags)
|
bool wake, unsigned long flags)
|
||||||
__releases(rnp->lock)
|
__releases(rnp->lock)
|
||||||
{
|
{
|
||||||
@ -212,7 +212,7 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
|||||||
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
if (wake) {
|
if (wake) {
|
||||||
smp_mb(); /* EGP done before wake_up(). */
|
smp_mb(); /* EGP done before wake_up(). */
|
||||||
swake_up_one(&rsp->expedited_wq);
|
swake_up_one(&rcu_state.expedited_wq);
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
@ -229,20 +229,19 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
|||||||
* Report expedited quiescent state for specified node. This is a
|
* Report expedited quiescent state for specified node. This is a
|
||||||
* lock-acquisition wrapper function for __rcu_report_exp_rnp().
|
* lock-acquisition wrapper function for __rcu_report_exp_rnp().
|
||||||
*/
|
*/
|
||||||
static void __maybe_unused rcu_report_exp_rnp(struct rcu_state *rsp,
|
static void __maybe_unused rcu_report_exp_rnp(struct rcu_node *rnp, bool wake)
|
||||||
struct rcu_node *rnp, bool wake)
|
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
|
||||||
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
__rcu_report_exp_rnp(rsp, rnp, wake, flags);
|
__rcu_report_exp_rnp(rnp, wake, flags);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Report expedited quiescent state for multiple CPUs, all covered by the
|
* Report expedited quiescent state for multiple CPUs, all covered by the
|
||||||
* specified leaf rcu_node structure.
|
* specified leaf rcu_node structure.
|
||||||
*/
|
*/
|
||||||
static void rcu_report_exp_cpu_mult(struct rcu_state *rsp, struct rcu_node *rnp,
|
static void rcu_report_exp_cpu_mult(struct rcu_node *rnp,
|
||||||
unsigned long mask, bool wake)
|
unsigned long mask, bool wake)
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
@ -253,23 +252,23 @@ static void rcu_report_exp_cpu_mult(struct rcu_state *rsp, struct rcu_node *rnp,
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
rnp->expmask &= ~mask;
|
rnp->expmask &= ~mask;
|
||||||
__rcu_report_exp_rnp(rsp, rnp, wake, flags); /* Releases rnp->lock. */
|
__rcu_report_exp_rnp(rnp, wake, flags); /* Releases rnp->lock. */
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Report expedited quiescent state for specified rcu_data (CPU).
|
* Report expedited quiescent state for specified rcu_data (CPU).
|
||||||
*/
|
*/
|
||||||
static void rcu_report_exp_rdp(struct rcu_state *rsp, struct rcu_data *rdp,
|
static void rcu_report_exp_rdp(struct rcu_data *rdp)
|
||||||
bool wake)
|
|
||||||
{
|
{
|
||||||
rcu_report_exp_cpu_mult(rsp, rdp->mynode, rdp->grpmask, wake);
|
WRITE_ONCE(rdp->deferred_qs, false);
|
||||||
|
rcu_report_exp_cpu_mult(rdp->mynode, rdp->grpmask, true);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Common code for synchronize_{rcu,sched}_expedited() work-done checking. */
|
/* Common code for work-done checking. */
|
||||||
static bool sync_exp_work_done(struct rcu_state *rsp, unsigned long s)
|
static bool sync_exp_work_done(unsigned long s)
|
||||||
{
|
{
|
||||||
if (rcu_exp_gp_seq_done(rsp, s)) {
|
if (rcu_exp_gp_seq_done(s)) {
|
||||||
trace_rcu_exp_grace_period(rsp->name, s, TPS("done"));
|
trace_rcu_exp_grace_period(rcu_state.name, s, TPS("done"));
|
||||||
/* Ensure test happens before caller kfree(). */
|
/* Ensure test happens before caller kfree(). */
|
||||||
smp_mb__before_atomic(); /* ^^^ */
|
smp_mb__before_atomic(); /* ^^^ */
|
||||||
return true;
|
return true;
|
||||||
@ -284,28 +283,28 @@ static bool sync_exp_work_done(struct rcu_state *rsp, unsigned long s)
|
|||||||
* with the mutex held, indicating that the caller must actually do the
|
* with the mutex held, indicating that the caller must actually do the
|
||||||
* expedited grace period.
|
* expedited grace period.
|
||||||
*/
|
*/
|
||||||
static bool exp_funnel_lock(struct rcu_state *rsp, unsigned long s)
|
static bool exp_funnel_lock(unsigned long s)
|
||||||
{
|
{
|
||||||
struct rcu_data *rdp = per_cpu_ptr(rsp->rda, raw_smp_processor_id());
|
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, raw_smp_processor_id());
|
||||||
struct rcu_node *rnp = rdp->mynode;
|
struct rcu_node *rnp = rdp->mynode;
|
||||||
struct rcu_node *rnp_root = rcu_get_root(rsp);
|
struct rcu_node *rnp_root = rcu_get_root();
|
||||||
|
|
||||||
/* Low-contention fastpath. */
|
/* Low-contention fastpath. */
|
||||||
if (ULONG_CMP_LT(READ_ONCE(rnp->exp_seq_rq), s) &&
|
if (ULONG_CMP_LT(READ_ONCE(rnp->exp_seq_rq), s) &&
|
||||||
(rnp == rnp_root ||
|
(rnp == rnp_root ||
|
||||||
ULONG_CMP_LT(READ_ONCE(rnp_root->exp_seq_rq), s)) &&
|
ULONG_CMP_LT(READ_ONCE(rnp_root->exp_seq_rq), s)) &&
|
||||||
mutex_trylock(&rsp->exp_mutex))
|
mutex_trylock(&rcu_state.exp_mutex))
|
||||||
goto fastpath;
|
goto fastpath;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Each pass through the following loop works its way up
|
* Each pass through the following loop works its way up
|
||||||
* the rcu_node tree, returning if others have done the work or
|
* the rcu_node tree, returning if others have done the work or
|
||||||
* otherwise falls through to acquire rsp->exp_mutex. The mapping
|
* otherwise falls through to acquire ->exp_mutex. The mapping
|
||||||
* from CPU to rcu_node structure can be inexact, as it is just
|
* from CPU to rcu_node structure can be inexact, as it is just
|
||||||
* promoting locality and is not strictly needed for correctness.
|
* promoting locality and is not strictly needed for correctness.
|
||||||
*/
|
*/
|
||||||
for (; rnp != NULL; rnp = rnp->parent) {
|
for (; rnp != NULL; rnp = rnp->parent) {
|
||||||
if (sync_exp_work_done(rsp, s))
|
if (sync_exp_work_done(s))
|
||||||
return true;
|
return true;
|
||||||
|
|
||||||
/* Work not done, either wait here or go up. */
|
/* Work not done, either wait here or go up. */
|
||||||
@ -314,68 +313,29 @@ static bool exp_funnel_lock(struct rcu_state *rsp, unsigned long s)
|
|||||||
|
|
||||||
/* Someone else doing GP, so wait for them. */
|
/* Someone else doing GP, so wait for them. */
|
||||||
spin_unlock(&rnp->exp_lock);
|
spin_unlock(&rnp->exp_lock);
|
||||||
trace_rcu_exp_funnel_lock(rsp->name, rnp->level,
|
trace_rcu_exp_funnel_lock(rcu_state.name, rnp->level,
|
||||||
rnp->grplo, rnp->grphi,
|
rnp->grplo, rnp->grphi,
|
||||||
TPS("wait"));
|
TPS("wait"));
|
||||||
wait_event(rnp->exp_wq[rcu_seq_ctr(s) & 0x3],
|
wait_event(rnp->exp_wq[rcu_seq_ctr(s) & 0x3],
|
||||||
sync_exp_work_done(rsp, s));
|
sync_exp_work_done(s));
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
rnp->exp_seq_rq = s; /* Followers can wait on us. */
|
rnp->exp_seq_rq = s; /* Followers can wait on us. */
|
||||||
spin_unlock(&rnp->exp_lock);
|
spin_unlock(&rnp->exp_lock);
|
||||||
trace_rcu_exp_funnel_lock(rsp->name, rnp->level, rnp->grplo,
|
trace_rcu_exp_funnel_lock(rcu_state.name, rnp->level,
|
||||||
rnp->grphi, TPS("nxtlvl"));
|
rnp->grplo, rnp->grphi, TPS("nxtlvl"));
|
||||||
}
|
}
|
||||||
mutex_lock(&rsp->exp_mutex);
|
mutex_lock(&rcu_state.exp_mutex);
|
||||||
fastpath:
|
fastpath:
|
||||||
if (sync_exp_work_done(rsp, s)) {
|
if (sync_exp_work_done(s)) {
|
||||||
mutex_unlock(&rsp->exp_mutex);
|
mutex_unlock(&rcu_state.exp_mutex);
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
rcu_exp_gp_seq_start(rsp);
|
rcu_exp_gp_seq_start();
|
||||||
trace_rcu_exp_grace_period(rsp->name, s, TPS("start"));
|
trace_rcu_exp_grace_period(rcu_state.name, s, TPS("start"));
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Invoked on each online non-idle CPU for expedited quiescent state. */
|
|
||||||
static void sync_sched_exp_handler(void *data)
|
|
||||||
{
|
|
||||||
struct rcu_data *rdp;
|
|
||||||
struct rcu_node *rnp;
|
|
||||||
struct rcu_state *rsp = data;
|
|
||||||
|
|
||||||
rdp = this_cpu_ptr(rsp->rda);
|
|
||||||
rnp = rdp->mynode;
|
|
||||||
if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
|
|
||||||
__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp))
|
|
||||||
return;
|
|
||||||
if (rcu_is_cpu_rrupt_from_idle()) {
|
|
||||||
rcu_report_exp_rdp(&rcu_sched_state,
|
|
||||||
this_cpu_ptr(&rcu_sched_data), true);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
__this_cpu_write(rcu_sched_data.cpu_no_qs.b.exp, true);
|
|
||||||
/* Store .exp before .rcu_urgent_qs. */
|
|
||||||
smp_store_release(this_cpu_ptr(&rcu_dynticks.rcu_urgent_qs), true);
|
|
||||||
resched_cpu(smp_processor_id());
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Send IPI for expedited cleanup if needed at end of CPU-hotplug operation. */
|
|
||||||
static void sync_sched_exp_online_cleanup(int cpu)
|
|
||||||
{
|
|
||||||
struct rcu_data *rdp;
|
|
||||||
int ret;
|
|
||||||
struct rcu_node *rnp;
|
|
||||||
struct rcu_state *rsp = &rcu_sched_state;
|
|
||||||
|
|
||||||
rdp = per_cpu_ptr(rsp->rda, cpu);
|
|
||||||
rnp = rdp->mynode;
|
|
||||||
if (!(READ_ONCE(rnp->expmask) & rdp->grpmask))
|
|
||||||
return;
|
|
||||||
ret = smp_call_function_single(cpu, sync_sched_exp_handler, rsp, 0);
|
|
||||||
WARN_ON_ONCE(ret);
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Select the CPUs within the specified rcu_node that the upcoming
|
* Select the CPUs within the specified rcu_node that the upcoming
|
||||||
* expedited grace period needs to wait for.
|
* expedited grace period needs to wait for.
|
||||||
@ -391,7 +351,6 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
|
|||||||
struct rcu_exp_work *rewp =
|
struct rcu_exp_work *rewp =
|
||||||
container_of(wp, struct rcu_exp_work, rew_work);
|
container_of(wp, struct rcu_exp_work, rew_work);
|
||||||
struct rcu_node *rnp = container_of(rewp, struct rcu_node, rew);
|
struct rcu_node *rnp = container_of(rewp, struct rcu_node, rew);
|
||||||
struct rcu_state *rsp = rewp->rew_rsp;
|
|
||||||
|
|
||||||
func = rewp->rew_func;
|
func = rewp->rew_func;
|
||||||
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
@ -400,15 +359,14 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
|
|||||||
mask_ofl_test = 0;
|
mask_ofl_test = 0;
|
||||||
for_each_leaf_node_cpu_mask(rnp, cpu, rnp->expmask) {
|
for_each_leaf_node_cpu_mask(rnp, cpu, rnp->expmask) {
|
||||||
unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
|
unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
|
||||||
struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
|
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
|
||||||
struct rcu_dynticks *rdtp = per_cpu_ptr(&rcu_dynticks, cpu);
|
|
||||||
int snap;
|
int snap;
|
||||||
|
|
||||||
if (raw_smp_processor_id() == cpu ||
|
if (raw_smp_processor_id() == cpu ||
|
||||||
!(rnp->qsmaskinitnext & mask)) {
|
!(rnp->qsmaskinitnext & mask)) {
|
||||||
mask_ofl_test |= mask;
|
mask_ofl_test |= mask;
|
||||||
} else {
|
} else {
|
||||||
snap = rcu_dynticks_snap(rdtp);
|
snap = rcu_dynticks_snap(rdp);
|
||||||
if (rcu_dynticks_in_eqs(snap))
|
if (rcu_dynticks_in_eqs(snap))
|
||||||
mask_ofl_test |= mask;
|
mask_ofl_test |= mask;
|
||||||
else
|
else
|
||||||
@ -429,17 +387,16 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
|
|||||||
/* IPI the remaining CPUs for expedited quiescent state. */
|
/* IPI the remaining CPUs for expedited quiescent state. */
|
||||||
for_each_leaf_node_cpu_mask(rnp, cpu, rnp->expmask) {
|
for_each_leaf_node_cpu_mask(rnp, cpu, rnp->expmask) {
|
||||||
unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
|
unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
|
||||||
struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
|
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
|
||||||
|
|
||||||
if (!(mask_ofl_ipi & mask))
|
if (!(mask_ofl_ipi & mask))
|
||||||
continue;
|
continue;
|
||||||
retry_ipi:
|
retry_ipi:
|
||||||
if (rcu_dynticks_in_eqs_since(rdp->dynticks,
|
if (rcu_dynticks_in_eqs_since(rdp, rdp->exp_dynticks_snap)) {
|
||||||
rdp->exp_dynticks_snap)) {
|
|
||||||
mask_ofl_test |= mask;
|
mask_ofl_test |= mask;
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
ret = smp_call_function_single(cpu, func, rsp, 0);
|
ret = smp_call_function_single(cpu, func, NULL, 0);
|
||||||
if (!ret) {
|
if (!ret) {
|
||||||
mask_ofl_ipi &= ~mask;
|
mask_ofl_ipi &= ~mask;
|
||||||
continue;
|
continue;
|
||||||
@ -450,7 +407,7 @@ retry_ipi:
|
|||||||
(rnp->expmask & mask)) {
|
(rnp->expmask & mask)) {
|
||||||
/* Online, so delay for a bit and try again. */
|
/* Online, so delay for a bit and try again. */
|
||||||
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("selectofl"));
|
trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("selectofl"));
|
||||||
schedule_timeout_uninterruptible(1);
|
schedule_timeout_uninterruptible(1);
|
||||||
goto retry_ipi;
|
goto retry_ipi;
|
||||||
}
|
}
|
||||||
@ -462,33 +419,31 @@ retry_ipi:
|
|||||||
/* Report quiescent states for those that went offline. */
|
/* Report quiescent states for those that went offline. */
|
||||||
mask_ofl_test |= mask_ofl_ipi;
|
mask_ofl_test |= mask_ofl_ipi;
|
||||||
if (mask_ofl_test)
|
if (mask_ofl_test)
|
||||||
rcu_report_exp_cpu_mult(rsp, rnp, mask_ofl_test, false);
|
rcu_report_exp_cpu_mult(rnp, mask_ofl_test, false);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Select the nodes that the upcoming expedited grace period needs
|
* Select the nodes that the upcoming expedited grace period needs
|
||||||
* to wait for.
|
* to wait for.
|
||||||
*/
|
*/
|
||||||
static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
|
static void sync_rcu_exp_select_cpus(smp_call_func_t func)
|
||||||
smp_call_func_t func)
|
|
||||||
{
|
{
|
||||||
int cpu;
|
int cpu;
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
|
|
||||||
trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
|
trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("reset"));
|
||||||
sync_exp_reset_tree(rsp);
|
sync_exp_reset_tree();
|
||||||
trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("select"));
|
trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("select"));
|
||||||
|
|
||||||
/* Schedule work for each leaf rcu_node structure. */
|
/* Schedule work for each leaf rcu_node structure. */
|
||||||
rcu_for_each_leaf_node(rsp, rnp) {
|
rcu_for_each_leaf_node(rnp) {
|
||||||
rnp->exp_need_flush = false;
|
rnp->exp_need_flush = false;
|
||||||
if (!READ_ONCE(rnp->expmask))
|
if (!READ_ONCE(rnp->expmask))
|
||||||
continue; /* Avoid early boot non-existent wq. */
|
continue; /* Avoid early boot non-existent wq. */
|
||||||
rnp->rew.rew_func = func;
|
rnp->rew.rew_func = func;
|
||||||
rnp->rew.rew_rsp = rsp;
|
|
||||||
if (!READ_ONCE(rcu_par_gp_wq) ||
|
if (!READ_ONCE(rcu_par_gp_wq) ||
|
||||||
rcu_scheduler_active != RCU_SCHEDULER_RUNNING ||
|
rcu_scheduler_active != RCU_SCHEDULER_RUNNING ||
|
||||||
rcu_is_last_leaf_node(rsp, rnp)) {
|
rcu_is_last_leaf_node(rnp)) {
|
||||||
/* No workqueues yet or last leaf, do direct call. */
|
/* No workqueues yet or last leaf, do direct call. */
|
||||||
sync_rcu_exp_select_node_cpus(&rnp->rew.rew_work);
|
sync_rcu_exp_select_node_cpus(&rnp->rew.rew_work);
|
||||||
continue;
|
continue;
|
||||||
@ -505,12 +460,12 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Wait for workqueue jobs (if any) to complete. */
|
/* Wait for workqueue jobs (if any) to complete. */
|
||||||
rcu_for_each_leaf_node(rsp, rnp)
|
rcu_for_each_leaf_node(rnp)
|
||||||
if (rnp->exp_need_flush)
|
if (rnp->exp_need_flush)
|
||||||
flush_work(&rnp->rew.rew_work);
|
flush_work(&rnp->rew.rew_work);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
static void synchronize_sched_expedited_wait(void)
|
||||||
{
|
{
|
||||||
int cpu;
|
int cpu;
|
||||||
unsigned long jiffies_stall;
|
unsigned long jiffies_stall;
|
||||||
@ -518,16 +473,16 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
unsigned long mask;
|
unsigned long mask;
|
||||||
int ndetected;
|
int ndetected;
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
struct rcu_node *rnp_root = rcu_get_root(rsp);
|
struct rcu_node *rnp_root = rcu_get_root();
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("startwait"));
|
trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("startwait"));
|
||||||
jiffies_stall = rcu_jiffies_till_stall_check();
|
jiffies_stall = rcu_jiffies_till_stall_check();
|
||||||
jiffies_start = jiffies;
|
jiffies_start = jiffies;
|
||||||
|
|
||||||
for (;;) {
|
for (;;) {
|
||||||
ret = swait_event_timeout_exclusive(
|
ret = swait_event_timeout_exclusive(
|
||||||
rsp->expedited_wq,
|
rcu_state.expedited_wq,
|
||||||
sync_rcu_preempt_exp_done_unlocked(rnp_root),
|
sync_rcu_preempt_exp_done_unlocked(rnp_root),
|
||||||
jiffies_stall);
|
jiffies_stall);
|
||||||
if (ret > 0 || sync_rcu_preempt_exp_done_unlocked(rnp_root))
|
if (ret > 0 || sync_rcu_preempt_exp_done_unlocked(rnp_root))
|
||||||
@ -537,9 +492,9 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
continue;
|
continue;
|
||||||
panic_on_rcu_stall();
|
panic_on_rcu_stall();
|
||||||
pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {",
|
pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {",
|
||||||
rsp->name);
|
rcu_state.name);
|
||||||
ndetected = 0;
|
ndetected = 0;
|
||||||
rcu_for_each_leaf_node(rsp, rnp) {
|
rcu_for_each_leaf_node(rnp) {
|
||||||
ndetected += rcu_print_task_exp_stall(rnp);
|
ndetected += rcu_print_task_exp_stall(rnp);
|
||||||
for_each_leaf_node_possible_cpu(rnp, cpu) {
|
for_each_leaf_node_possible_cpu(rnp, cpu) {
|
||||||
struct rcu_data *rdp;
|
struct rcu_data *rdp;
|
||||||
@ -548,7 +503,7 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
if (!(rnp->expmask & mask))
|
if (!(rnp->expmask & mask))
|
||||||
continue;
|
continue;
|
||||||
ndetected++;
|
ndetected++;
|
||||||
rdp = per_cpu_ptr(rsp->rda, cpu);
|
rdp = per_cpu_ptr(&rcu_data, cpu);
|
||||||
pr_cont(" %d-%c%c%c", cpu,
|
pr_cont(" %d-%c%c%c", cpu,
|
||||||
"O."[!!cpu_online(cpu)],
|
"O."[!!cpu_online(cpu)],
|
||||||
"o."[!!(rdp->grpmask & rnp->expmaskinit)],
|
"o."[!!(rdp->grpmask & rnp->expmaskinit)],
|
||||||
@ -556,11 +511,11 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
|
pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
|
||||||
jiffies - jiffies_start, rsp->expedited_sequence,
|
jiffies - jiffies_start, rcu_state.expedited_sequence,
|
||||||
rnp_root->expmask, ".T"[!!rnp_root->exp_tasks]);
|
rnp_root->expmask, ".T"[!!rnp_root->exp_tasks]);
|
||||||
if (ndetected) {
|
if (ndetected) {
|
||||||
pr_err("blocking rcu_node structures:");
|
pr_err("blocking rcu_node structures:");
|
||||||
rcu_for_each_node_breadth_first(rsp, rnp) {
|
rcu_for_each_node_breadth_first(rnp) {
|
||||||
if (rnp == rnp_root)
|
if (rnp == rnp_root)
|
||||||
continue; /* printed unconditionally */
|
continue; /* printed unconditionally */
|
||||||
if (sync_rcu_preempt_exp_done_unlocked(rnp))
|
if (sync_rcu_preempt_exp_done_unlocked(rnp))
|
||||||
@ -572,7 +527,7 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
}
|
}
|
||||||
pr_cont("\n");
|
pr_cont("\n");
|
||||||
}
|
}
|
||||||
rcu_for_each_leaf_node(rsp, rnp) {
|
rcu_for_each_leaf_node(rnp) {
|
||||||
for_each_leaf_node_possible_cpu(rnp, cpu) {
|
for_each_leaf_node_possible_cpu(rnp, cpu) {
|
||||||
mask = leaf_node_cpu_bit(rnp, cpu);
|
mask = leaf_node_cpu_bit(rnp, cpu);
|
||||||
if (!(rnp->expmask & mask))
|
if (!(rnp->expmask & mask))
|
||||||
@ -590,21 +545,21 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
|
|||||||
* grace period. Also update all the ->exp_seq_rq counters as needed
|
* grace period. Also update all the ->exp_seq_rq counters as needed
|
||||||
* in order to avoid counter-wrap problems.
|
* in order to avoid counter-wrap problems.
|
||||||
*/
|
*/
|
||||||
static void rcu_exp_wait_wake(struct rcu_state *rsp, unsigned long s)
|
static void rcu_exp_wait_wake(unsigned long s)
|
||||||
{
|
{
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
|
|
||||||
synchronize_sched_expedited_wait(rsp);
|
synchronize_sched_expedited_wait();
|
||||||
rcu_exp_gp_seq_end(rsp);
|
rcu_exp_gp_seq_end();
|
||||||
trace_rcu_exp_grace_period(rsp->name, s, TPS("end"));
|
trace_rcu_exp_grace_period(rcu_state.name, s, TPS("end"));
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Switch over to wakeup mode, allowing the next GP, but -only- the
|
* Switch over to wakeup mode, allowing the next GP, but -only- the
|
||||||
* next GP, to proceed.
|
* next GP, to proceed.
|
||||||
*/
|
*/
|
||||||
mutex_lock(&rsp->exp_wake_mutex);
|
mutex_lock(&rcu_state.exp_wake_mutex);
|
||||||
|
|
||||||
rcu_for_each_node_breadth_first(rsp, rnp) {
|
rcu_for_each_node_breadth_first(rnp) {
|
||||||
if (ULONG_CMP_LT(READ_ONCE(rnp->exp_seq_rq), s)) {
|
if (ULONG_CMP_LT(READ_ONCE(rnp->exp_seq_rq), s)) {
|
||||||
spin_lock(&rnp->exp_lock);
|
spin_lock(&rnp->exp_lock);
|
||||||
/* Recheck, avoid hang in case someone just arrived. */
|
/* Recheck, avoid hang in case someone just arrived. */
|
||||||
@ -613,24 +568,23 @@ static void rcu_exp_wait_wake(struct rcu_state *rsp, unsigned long s)
|
|||||||
spin_unlock(&rnp->exp_lock);
|
spin_unlock(&rnp->exp_lock);
|
||||||
}
|
}
|
||||||
smp_mb(); /* All above changes before wakeup. */
|
smp_mb(); /* All above changes before wakeup. */
|
||||||
wake_up_all(&rnp->exp_wq[rcu_seq_ctr(rsp->expedited_sequence) & 0x3]);
|
wake_up_all(&rnp->exp_wq[rcu_seq_ctr(rcu_state.expedited_sequence) & 0x3]);
|
||||||
}
|
}
|
||||||
trace_rcu_exp_grace_period(rsp->name, s, TPS("endwake"));
|
trace_rcu_exp_grace_period(rcu_state.name, s, TPS("endwake"));
|
||||||
mutex_unlock(&rsp->exp_wake_mutex);
|
mutex_unlock(&rcu_state.exp_wake_mutex);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Common code to drive an expedited grace period forward, used by
|
* Common code to drive an expedited grace period forward, used by
|
||||||
* workqueues and mid-boot-time tasks.
|
* workqueues and mid-boot-time tasks.
|
||||||
*/
|
*/
|
||||||
static void rcu_exp_sel_wait_wake(struct rcu_state *rsp,
|
static void rcu_exp_sel_wait_wake(smp_call_func_t func, unsigned long s)
|
||||||
smp_call_func_t func, unsigned long s)
|
|
||||||
{
|
{
|
||||||
/* Initialize the rcu_node tree in preparation for the wait. */
|
/* Initialize the rcu_node tree in preparation for the wait. */
|
||||||
sync_rcu_exp_select_cpus(rsp, func);
|
sync_rcu_exp_select_cpus(func);
|
||||||
|
|
||||||
/* Wait and clean up, including waking everyone. */
|
/* Wait and clean up, including waking everyone. */
|
||||||
rcu_exp_wait_wake(rsp, s);
|
rcu_exp_wait_wake(s);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@ -641,15 +595,14 @@ static void wait_rcu_exp_gp(struct work_struct *wp)
|
|||||||
struct rcu_exp_work *rewp;
|
struct rcu_exp_work *rewp;
|
||||||
|
|
||||||
rewp = container_of(wp, struct rcu_exp_work, rew_work);
|
rewp = container_of(wp, struct rcu_exp_work, rew_work);
|
||||||
rcu_exp_sel_wait_wake(rewp->rew_rsp, rewp->rew_func, rewp->rew_s);
|
rcu_exp_sel_wait_wake(rewp->rew_func, rewp->rew_s);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Given an rcu_state pointer and a smp_call_function() handler, kick
|
* Given a smp_call_function() handler, kick off the specified
|
||||||
* off the specified flavor of expedited grace period.
|
* implementation of expedited grace period.
|
||||||
*/
|
*/
|
||||||
static void _synchronize_rcu_expedited(struct rcu_state *rsp,
|
static void _synchronize_rcu_expedited(smp_call_func_t func)
|
||||||
smp_call_func_t func)
|
|
||||||
{
|
{
|
||||||
struct rcu_data *rdp;
|
struct rcu_data *rdp;
|
||||||
struct rcu_exp_work rew;
|
struct rcu_exp_work rew;
|
||||||
@ -658,72 +611,38 @@ static void _synchronize_rcu_expedited(struct rcu_state *rsp,
|
|||||||
|
|
||||||
/* If expedited grace periods are prohibited, fall back to normal. */
|
/* If expedited grace periods are prohibited, fall back to normal. */
|
||||||
if (rcu_gp_is_normal()) {
|
if (rcu_gp_is_normal()) {
|
||||||
wait_rcu_gp(rsp->call);
|
wait_rcu_gp(call_rcu);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Take a snapshot of the sequence number. */
|
/* Take a snapshot of the sequence number. */
|
||||||
s = rcu_exp_gp_seq_snap(rsp);
|
s = rcu_exp_gp_seq_snap();
|
||||||
if (exp_funnel_lock(rsp, s))
|
if (exp_funnel_lock(s))
|
||||||
return; /* Someone else did our work for us. */
|
return; /* Someone else did our work for us. */
|
||||||
|
|
||||||
/* Ensure that load happens before action based on it. */
|
/* Ensure that load happens before action based on it. */
|
||||||
if (unlikely(rcu_scheduler_active == RCU_SCHEDULER_INIT)) {
|
if (unlikely(rcu_scheduler_active == RCU_SCHEDULER_INIT)) {
|
||||||
/* Direct call during scheduler init and early_initcalls(). */
|
/* Direct call during scheduler init and early_initcalls(). */
|
||||||
rcu_exp_sel_wait_wake(rsp, func, s);
|
rcu_exp_sel_wait_wake(func, s);
|
||||||
} else {
|
} else {
|
||||||
/* Marshall arguments & schedule the expedited grace period. */
|
/* Marshall arguments & schedule the expedited grace period. */
|
||||||
rew.rew_func = func;
|
rew.rew_func = func;
|
||||||
rew.rew_rsp = rsp;
|
|
||||||
rew.rew_s = s;
|
rew.rew_s = s;
|
||||||
INIT_WORK_ONSTACK(&rew.rew_work, wait_rcu_exp_gp);
|
INIT_WORK_ONSTACK(&rew.rew_work, wait_rcu_exp_gp);
|
||||||
queue_work(rcu_gp_wq, &rew.rew_work);
|
queue_work(rcu_gp_wq, &rew.rew_work);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Wait for expedited grace period to complete. */
|
/* Wait for expedited grace period to complete. */
|
||||||
rdp = per_cpu_ptr(rsp->rda, raw_smp_processor_id());
|
rdp = per_cpu_ptr(&rcu_data, raw_smp_processor_id());
|
||||||
rnp = rcu_get_root(rsp);
|
rnp = rcu_get_root();
|
||||||
wait_event(rnp->exp_wq[rcu_seq_ctr(s) & 0x3],
|
wait_event(rnp->exp_wq[rcu_seq_ctr(s) & 0x3],
|
||||||
sync_exp_work_done(rsp, s));
|
sync_exp_work_done(s));
|
||||||
smp_mb(); /* Workqueue actions happen before return. */
|
smp_mb(); /* Workqueue actions happen before return. */
|
||||||
|
|
||||||
/* Let the next expedited grace period start. */
|
/* Let the next expedited grace period start. */
|
||||||
mutex_unlock(&rsp->exp_mutex);
|
mutex_unlock(&rcu_state.exp_mutex);
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
|
||||||
* synchronize_sched_expedited - Brute-force RCU-sched grace period
|
|
||||||
*
|
|
||||||
* Wait for an RCU-sched grace period to elapse, but use a "big hammer"
|
|
||||||
* approach to force the grace period to end quickly. This consumes
|
|
||||||
* significant time on all CPUs and is unfriendly to real-time workloads,
|
|
||||||
* so is thus not recommended for any sort of common-case code. In fact,
|
|
||||||
* if you are using synchronize_sched_expedited() in a loop, please
|
|
||||||
* restructure your code to batch your updates, and then use a single
|
|
||||||
* synchronize_sched() instead.
|
|
||||||
*
|
|
||||||
* This implementation can be thought of as an application of sequence
|
|
||||||
* locking to expedited grace periods, but using the sequence counter to
|
|
||||||
* determine when someone else has already done the work instead of for
|
|
||||||
* retrying readers.
|
|
||||||
*/
|
|
||||||
void synchronize_sched_expedited(void)
|
|
||||||
{
|
|
||||||
struct rcu_state *rsp = &rcu_sched_state;
|
|
||||||
|
|
||||||
RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
|
|
||||||
lock_is_held(&rcu_lock_map) ||
|
|
||||||
lock_is_held(&rcu_sched_lock_map),
|
|
||||||
"Illegal synchronize_sched_expedited() in RCU read-side critical section");
|
|
||||||
|
|
||||||
/* If only one CPU, this is automatically a grace period. */
|
|
||||||
if (rcu_blocking_is_gp())
|
|
||||||
return;
|
|
||||||
|
|
||||||
_synchronize_rcu_expedited(rsp, sync_sched_exp_handler);
|
|
||||||
}
|
|
||||||
EXPORT_SYMBOL_GPL(synchronize_sched_expedited);
|
|
||||||
|
|
||||||
#ifdef CONFIG_PREEMPT_RCU
|
#ifdef CONFIG_PREEMPT_RCU
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@ -733,34 +652,78 @@ EXPORT_SYMBOL_GPL(synchronize_sched_expedited);
|
|||||||
* ->expmask fields in the rcu_node tree. Otherwise, immediately
|
* ->expmask fields in the rcu_node tree. Otherwise, immediately
|
||||||
* report the quiescent state.
|
* report the quiescent state.
|
||||||
*/
|
*/
|
||||||
static void sync_rcu_exp_handler(void *info)
|
static void sync_rcu_exp_handler(void *unused)
|
||||||
{
|
{
|
||||||
struct rcu_data *rdp;
|
unsigned long flags;
|
||||||
struct rcu_state *rsp = info;
|
struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
|
||||||
|
struct rcu_node *rnp = rdp->mynode;
|
||||||
struct task_struct *t = current;
|
struct task_struct *t = current;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Within an RCU read-side critical section, request that the next
|
* First, the common case of not being in an RCU read-side
|
||||||
* rcu_read_unlock() report. Unless this RCU read-side critical
|
* critical section. If also enabled or idle, immediately
|
||||||
* section has already blocked, in which case it is already set
|
* report the quiescent state, otherwise defer.
|
||||||
* up for the expedited grace period to wait on it.
|
|
||||||
*/
|
*/
|
||||||
if (t->rcu_read_lock_nesting > 0 &&
|
if (!t->rcu_read_lock_nesting) {
|
||||||
!t->rcu_read_unlock_special.b.blocked) {
|
if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
|
||||||
t->rcu_read_unlock_special.b.exp_need_qs = true;
|
rcu_dynticks_curr_cpu_in_eqs()) {
|
||||||
|
rcu_report_exp_rdp(rdp);
|
||||||
|
} else {
|
||||||
|
rdp->deferred_qs = true;
|
||||||
|
set_tsk_need_resched(t);
|
||||||
|
set_preempt_need_resched();
|
||||||
|
}
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* We are either exiting an RCU read-side critical section (negative
|
* Second, the less-common case of being in an RCU read-side
|
||||||
* values of t->rcu_read_lock_nesting) or are not in one at all
|
* critical section. In this case we can count on a future
|
||||||
* (zero value of t->rcu_read_lock_nesting). Or we are in an RCU
|
* rcu_read_unlock(). However, this rcu_read_unlock() might
|
||||||
* read-side critical section that blocked before this expedited
|
* execute on some other CPU, but in that case there will be
|
||||||
* grace period started. Either way, we can immediately report
|
* a future context switch. Either way, if the expedited
|
||||||
* the quiescent state.
|
* grace period is still waiting on this CPU, set ->deferred_qs
|
||||||
|
* so that the eventual quiescent state will be reported.
|
||||||
|
* Note that there is a large group of race conditions that
|
||||||
|
* can have caused this quiescent state to already have been
|
||||||
|
* reported, so we really do need to check ->expmask.
|
||||||
*/
|
*/
|
||||||
rdp = this_cpu_ptr(rsp->rda);
|
if (t->rcu_read_lock_nesting > 0) {
|
||||||
rcu_report_exp_rdp(rsp, rdp, true);
|
raw_spin_lock_irqsave_rcu_node(rnp, flags);
|
||||||
|
if (rnp->expmask & rdp->grpmask)
|
||||||
|
rdp->deferred_qs = true;
|
||||||
|
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* The final and least likely case is where the interrupted
|
||||||
|
* code was just about to or just finished exiting the RCU-preempt
|
||||||
|
* read-side critical section, and no, we can't tell which.
|
||||||
|
* So either way, set ->deferred_qs to flag later code that
|
||||||
|
* a quiescent state is required.
|
||||||
|
*
|
||||||
|
* If the CPU is fully enabled (or if some buggy RCU-preempt
|
||||||
|
* read-side critical section is being used from idle), just
|
||||||
|
* invoke rcu_preempt_defer_qs() to immediately report the
|
||||||
|
* quiescent state. We cannot use rcu_read_unlock_special()
|
||||||
|
* because we are in an interrupt handler, which will cause that
|
||||||
|
* function to take an early exit without doing anything.
|
||||||
|
*
|
||||||
|
* Otherwise, force a context switch after the CPU enables everything.
|
||||||
|
*/
|
||||||
|
rdp->deferred_qs = true;
|
||||||
|
if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
|
||||||
|
WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs())) {
|
||||||
|
rcu_preempt_deferred_qs(t);
|
||||||
|
} else {
|
||||||
|
set_tsk_need_resched(t);
|
||||||
|
set_preempt_need_resched();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/* PREEMPT=y, so no PREEMPT=n expedited grace period to clean up after. */
|
||||||
|
static void sync_sched_exp_online_cleanup(int cpu)
|
||||||
|
{
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -780,11 +743,11 @@ static void sync_rcu_exp_handler(void *info)
  * you are using synchronize_rcu_expedited() in a loop, please restructure
  * your code to batch your updates, and then Use a single synchronize_rcu()
  * instead.
+ *
+ * This has the same semantics as (but is more brutal than) synchronize_rcu().
  */
 void synchronize_rcu_expedited(void)
 {
-	struct rcu_state *rsp = rcu_state_p;
-
 	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
 			 lock_is_held(&rcu_lock_map) ||
 			 lock_is_held(&rcu_sched_lock_map),
@@ -792,19 +755,82 @@ void synchronize_rcu_expedited(void)
 
 	if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE)
 		return;
-	_synchronize_rcu_expedited(rsp, sync_rcu_exp_handler);
+	_synchronize_rcu_expedited(sync_rcu_exp_handler);
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
 #else /* #ifdef CONFIG_PREEMPT_RCU */
 
+/* Invoked on each online non-idle CPU for expedited quiescent state. */
+static void sync_sched_exp_handler(void *unused)
+{
+	struct rcu_data *rdp;
+	struct rcu_node *rnp;
+
+	rdp = this_cpu_ptr(&rcu_data);
+	rnp = rdp->mynode;
+	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
+	    __this_cpu_read(rcu_data.cpu_no_qs.b.exp))
+		return;
+	if (rcu_is_cpu_rrupt_from_idle()) {
+		rcu_report_exp_rdp(this_cpu_ptr(&rcu_data));
+		return;
+	}
+	__this_cpu_write(rcu_data.cpu_no_qs.b.exp, true);
+	/* Store .exp before .rcu_urgent_qs. */
+	smp_store_release(this_cpu_ptr(&rcu_data.rcu_urgent_qs), true);
+	set_tsk_need_resched(current);
+	set_preempt_need_resched();
+}
+
+/* Send IPI for expedited cleanup if needed at end of CPU-hotplug operation. */
+static void sync_sched_exp_online_cleanup(int cpu)
+{
+	struct rcu_data *rdp;
+	int ret;
+	struct rcu_node *rnp;
+
+	rdp = per_cpu_ptr(&rcu_data, cpu);
+	rnp = rdp->mynode;
+	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask))
+		return;
+	ret = smp_call_function_single(cpu, sync_sched_exp_handler, NULL, 0);
+	WARN_ON_ONCE(ret);
+}
+
 /*
- * Wait for an rcu-preempt grace period, but make it happen quickly.
- * But because preemptible RCU does not exist, map to rcu-sched.
+ * Because a context switch is a grace period for !PREEMPT, any
+ * blocking grace-period wait automatically implies a grace period if
+ * there is only one CPU online at any point time during execution of
+ * either synchronize_rcu() or synchronize_rcu_expedited().  It is OK to
+ * occasionally incorrectly indicate that there are multiple CPUs online
+ * when there was in fact only one the whole time, as this just adds some
+ * overhead: RCU still operates correctly.
  */
+static int rcu_blocking_is_gp(void)
+{
+	int ret;
+
+	might_sleep();  /* Check for RCU read-side critical section. */
+	preempt_disable();
+	ret = num_online_cpus() <= 1;
+	preempt_enable();
+	return ret;
+}
+
+/* PREEMPT=n implementation of synchronize_rcu_expedited(). */
 void synchronize_rcu_expedited(void)
 {
-	synchronize_sched_expedited();
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
+			 lock_is_held(&rcu_lock_map) ||
+			 lock_is_held(&rcu_sched_lock_map),
+			 "Illegal synchronize_rcu_expedited() in RCU read-side critical section");
+
+	/* If only one CPU, this is automatically a grace period. */
+	if (rcu_blocking_is_gp())
+		return;
+
+	_synchronize_rcu_expedited(sync_sched_exp_handler);
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
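Editorial note: for reference, the usual caller-side shape of the expedited primitive shown above is sketched below — unlink an element, call synchronize_rcu_expedited() on a genuinely latency-critical path (ordinary paths should prefer synchronize_rcu() or call_rcu(), as the kernel-doc above stresses), then free. The struct foo, foo_list, foo_lock, and remove_foo() names are hypothetical, used only to show the pattern.

#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical RCU-protected list element, for illustration only. */
struct foo {
	struct list_head list;
	int key;
};

static LIST_HEAD(foo_list);
static DEFINE_SPINLOCK(foo_lock);

/* Updater: unlink under the lock, wait for pre-existing readers, free. */
static void remove_foo(struct foo *p)
{
	spin_lock(&foo_lock);
	list_del_rcu(&p->list);
	spin_unlock(&foo_lock);

	synchronize_rcu_expedited();	/* All pre-existing readers are done. */
	kfree(p);
}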
[File diff suppressed because it is too large]
@@ -203,11 +203,7 @@ void rcu_test_sync_prims(void)
 	if (!IS_ENABLED(CONFIG_PROVE_RCU))
 		return;
 	synchronize_rcu();
-	synchronize_rcu_bh();
-	synchronize_sched();
 	synchronize_rcu_expedited();
-	synchronize_rcu_bh_expedited();
-	synchronize_sched_expedited();
 }
 
 #if !defined(CONFIG_TINY_RCU) || defined(CONFIG_SRCU)
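The dropped _bh and _sched calls reflect the flavor consolidation: a single synchronize_rcu() now waits for bh-disabled and preemption-disabled regions as well as classic rcu_read_lock() readers. An illustrative sketch (not part of this commit; the function name is hypothetical):

#include <linux/rcupdate.h>
#include <linux/bottom_half.h>
#include <linux/preempt.h>

static void example_consolidated_readers(void)
{
	rcu_read_lock();	/* Classic RCU reader. */
	rcu_read_unlock();

	local_bh_disable();	/* A bh-disabled region is now also an RCU reader. */
	local_bh_enable();

	preempt_disable();	/* So is a preemption-disabled region. */
	preempt_enable();

	synchronize_rcu();	/* Waits for pre-existing readers of all three kinds. */
}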
@@ -298,7 +294,7 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_held);
  *
  * Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
  *
- * Note that rcu_read_lock() is disallowed if the CPU is either idle or
+ * Note that rcu_read_lock_bh() is disallowed if the CPU is either idle or
  * offline from an RCU perspective, so check for those as well.
  */
 int rcu_read_lock_bh_held(void)
@@ -336,7 +332,7 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
 	int i;
 	int j;
 
-	/* Initialize and register callbacks for each flavor specified. */
+	/* Initialize and register callbacks for each crcu_array element. */
 	for (i = 0; i < n; i++) {
 		if (checktiny &&
 		    (crcu_array[i] == call_rcu ||
@@ -472,6 +468,7 @@ int rcu_jiffies_till_stall_check(void)
 	}
 	return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
 }
+EXPORT_SYMBOL_GPL(rcu_jiffies_till_stall_check);
 
 void rcu_sysrq_start(void)
 {
@@ -701,19 +698,19 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 
 		/*
 		 * Wait for all pre-existing t->on_rq and t->nvcsw
-		 * transitions to complete.  Invoking synchronize_sched()
+		 * transitions to complete.  Invoking synchronize_rcu()
 		 * suffices because all these transitions occur with
-		 * interrupts disabled.  Without this synchronize_sched(),
+		 * interrupts disabled.  Without this synchronize_rcu(),
 		 * a read-side critical section that started before the
 		 * grace period might be incorrectly seen as having started
 		 * after the grace period.
 		 *
-		 * This synchronize_sched() also dispenses with the
+		 * This synchronize_rcu() also dispenses with the
 		 * need for a memory barrier on the first store to
 		 * ->rcu_tasks_holdout, as it forces the store to happen
 		 * after the beginning of the grace period.
 		 */
-		synchronize_sched();
+		synchronize_rcu();
 
 		/*
 		 * There were callbacks, so we need to wait for an
@@ -740,7 +737,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 		 * This does only part of the job, ensuring that all
 		 * tasks that were previously exiting reach the point
 		 * where they have disabled preemption, allowing the
-		 * later synchronize_sched() to finish the job.
+		 * later synchronize_rcu() to finish the job.
 		 */
 		synchronize_srcu(&tasks_rcu_exit_srcu);
 
@@ -790,20 +787,20 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 		 * cause their RCU-tasks read-side critical sections to
 		 * extend past the end of the grace period.  However,
 		 * because these ->nvcsw updates are carried out with
-		 * interrupts disabled, we can use synchronize_sched()
+		 * interrupts disabled, we can use synchronize_rcu()
 		 * to force the needed ordering on all such CPUs.
 		 *
-		 * This synchronize_sched() also confines all
+		 * This synchronize_rcu() also confines all
 		 * ->rcu_tasks_holdout accesses to be within the grace
 		 * period, avoiding the need for memory barriers for
 		 * ->rcu_tasks_holdout accesses.
 		 *
-		 * In addition, this synchronize_sched() waits for exiting
+		 * In addition, this synchronize_rcu() waits for exiting
 		 * tasks to complete their final preempt_disable() region
 		 * of execution, cleaning up after the synchronize_srcu()
 		 * above.
 		 */
-		synchronize_sched();
+		synchronize_rcu();
 
 		/* Invoke the callbacks. */
 		while (list) {
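The comments above rely on synchronize_rcu() ordering updates against pre-existing read-side critical sections, which after flavor consolidation include interrupts-disabled regions. A minimal sketch of that pattern (not part of this commit; all example_* names are hypothetical):

#include <linux/rcupdate.h>
#include <linux/irqflags.h>
#include <linux/printk.h>

static int example_flag;

static void example_updater(void)
{
	WRITE_ONCE(example_flag, 1);	/* No explicit barrier needed here... */
	synchronize_rcu();		/* ...the grace period orders the store. */
	/*
	 * All interrupts-disabled regions that began before the grace
	 * period have now completed, and any region beginning later is
	 * guaranteed to observe example_flag == 1.
	 */
}

static void example_reader(void)
{
	unsigned long flags;

	local_irq_save(flags);		/* Interrupts-disabled region acts as a reader. */
	if (READ_ONCE(example_flag))
		pr_info("example: saw the flag\n");
	local_irq_restore(flags);
}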
@@ -870,15 +867,10 @@ static void __init rcu_tasks_bootup_oddness(void)
 #ifdef CONFIG_PROVE_RCU
 
 /*
- * Early boot self test parameters, one for each flavor
+ * Early boot self test parameters.
  */
 static bool rcu_self_test;
-static bool rcu_self_test_bh;
-static bool rcu_self_test_sched;
 
 module_param(rcu_self_test, bool, 0444);
-module_param(rcu_self_test_bh, bool, 0444);
-module_param(rcu_self_test_sched, bool, 0444);
 
 static int rcu_self_test_counter;
 
@@ -888,25 +880,16 @@ static void test_callback(struct rcu_head *r)
 	pr_info("RCU test callback executed %d\n", rcu_self_test_counter);
 }
 
+DEFINE_STATIC_SRCU(early_srcu);
+
 static void early_boot_test_call_rcu(void)
 {
 	static struct rcu_head head;
+	static struct rcu_head shead;
 
 	call_rcu(&head, test_callback);
-}
-
-static void early_boot_test_call_rcu_bh(void)
-{
-	static struct rcu_head head;
-
-	call_rcu_bh(&head, test_callback);
-}
-
-static void early_boot_test_call_rcu_sched(void)
-{
-	static struct rcu_head head;
-
-	call_rcu_sched(&head, test_callback);
+	if (IS_ENABLED(CONFIG_SRCU))
+		call_srcu(&early_srcu, &shead, test_callback);
 }
 
 void rcu_early_boot_tests(void)
@@ -915,10 +898,6 @@ void rcu_early_boot_tests(void)
 
 	if (rcu_self_test)
 		early_boot_test_call_rcu();
-	if (rcu_self_test_bh)
-		early_boot_test_call_rcu_bh();
-	if (rcu_self_test_sched)
-		early_boot_test_call_rcu_sched();
 	rcu_test_sync_prims();
 }
 
@@ -930,16 +909,11 @@ static int rcu_verify_early_boot_tests(void)
 	if (rcu_self_test) {
 		early_boot_test_counter++;
 		rcu_barrier();
+		if (IS_ENABLED(CONFIG_SRCU)) {
+			early_boot_test_counter++;
+			srcu_barrier(&early_srcu);
+		}
 	}
-	if (rcu_self_test_bh) {
-		early_boot_test_counter++;
-		rcu_barrier_bh();
-	}
-	if (rcu_self_test_sched) {
-		early_boot_test_counter++;
-		rcu_barrier_sched();
-	}
-
 	if (rcu_self_test_counter != early_boot_test_counter) {
 		WARN_ON(1);
 		ret = -1;
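The new early-boot SRCU test pairs call_srcu() with srcu_barrier() so that the callback is known to have run before the counter is checked. A self-contained sketch of that pairing (not part of this commit; all example_* names are hypothetical):

#include <linux/srcu.h>
#include <linux/bug.h>

DEFINE_STATIC_SRCU(example_srcu);

static int example_cb_ran;

static void example_cb(struct rcu_head *rhp)
{
	WRITE_ONCE(example_cb_ran, 1);
}

static void example_srcu_selftest(void)
{
	static struct rcu_head head;

	call_srcu(&example_srcu, &head, example_cb);
	srcu_barrier(&example_srcu);	/* Returns only after example_cb() has run. */
	WARN_ON_ONCE(!READ_ONCE(example_cb_ran));
}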
@@ -301,7 +301,8 @@ restart:
 		pending >>= softirq_bit;
 	}
 
-	rcu_bh_qs();
+	if (__this_cpu_read(ksoftirqd) == current)
+		rcu_softirq_qs();
 	local_irq_disable();
 
 	pending = local_softirq_pending();
@@ -573,7 +573,7 @@ static int stutter;
  * Block until the stutter interval ends.  This must be called periodically
  * by all running kthreads that need to be subject to stuttering.
  */
-void stutter_wait(const char *title)
+bool stutter_wait(const char *title)
 {
 	int spt;
 
@@ -590,6 +590,7 @@ void stutter_wait(const char *title)
 		}
 		torture_shutdown_absorb(title);
 	}
+	return !!spt;
 }
 EXPORT_SYMBOL_GPL(stutter_wait);
 
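With stutter_wait() now returning a bool, a torture kthread can detect whether it actually paused. A hypothetical usage sketch (not part of this commit; example_torture_kthread() is invented for illustration):

#include <linux/torture.h>
#include <linux/printk.h>

static int example_torture_kthread(void *arg)
{
	do {
		/* ... perform one iteration of torture work here ... */
		if (stutter_wait("example_torture_kthread"))
			pr_info("example: resumed after a stutter pause\n");
	} while (!torture_must_stop());
	torture_kthread_stopping("example_torture_kthread");
	return 0;
}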
@@ -120,7 +120,6 @@ then
 	parse-build.sh $resdir/Make.out $title
 else
 	# Build failed.
-	cp $builddir/Make*.out $resdir
 	cp $builddir/.config $resdir || :
 	echo Build failed, not running KVM, see $resdir.
 	if test -f $builddir.wait
@@ -3,9 +3,7 @@ TREE02
 TREE03
 TREE04
 TREE05
-TREE06
 TREE07
-TREE08
 TREE09
 SRCU-N
 SRCU-P
@@ -1 +1,2 @@
 rcutorture.torture_type=srcud
+rcupdate.rcu_self_test=1
@@ -1 +1,2 @@
 rcutorture.torture_type=srcud
+rcupdate.rcu_self_test=1
@@ -1,3 +1 @@
 rcupdate.rcu_self_test=1
-rcupdate.rcu_self_test_bh=1
-rcutorture.torture_type=rcu_bh
@@ -1,4 +1,4 @@
-rcutorture.torture_type=rcu_bh maxcpus=8 nr_cpus=43
+maxcpus=8 nr_cpus=43
 rcutree.gp_preinit_delay=3
 rcutree.gp_init_delay=3
 rcutree.gp_cleanup_delay=3
@@ -1 +1 @@
-rcutorture.torture_type=rcu_bh rcutree.rcu_fanout_leaf=4 nohz_full=1-7
+rcutree.rcu_fanout_leaf=4 nohz_full=1-7
@@ -1,5 +1,4 @@
-rcutorture.torture_type=sched
-rcupdate.rcu_self_test_sched=1
 rcutree.gp_preinit_delay=3
 rcutree.gp_init_delay=3
 rcutree.gp_cleanup_delay=3
+rcupdate.rcu_self_test=1
@@ -1,6 +1,4 @@
 rcupdate.rcu_self_test=1
-rcupdate.rcu_self_test_bh=1
-rcupdate.rcu_self_test_sched=1
 rcutree.rcu_fanout_exact=1
 rcutree.gp_preinit_delay=3
 rcutree.gp_init_delay=3
@@ -1,5 +1,3 @@
-rcutorture.torture_type=sched
 rcupdate.rcu_self_test=1
-rcupdate.rcu_self_test_sched=1
 rcutree.rcu_fanout_exact=1
 rcu_nocbs=0-7