
Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
 "The main changes in this cycle were:

   - changes related to No-CBs CPUs and NO_HZ_FULL

   - RCU-tasks implementation

   - torture-test updates

   - miscellaneous fixes

   - locktorture updates

   - RCU documentation updates"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (81 commits)
  workqueue: Use cond_resched_rcu_qs macro
  workqueue: Add quiescent state between work items
  locktorture: Cleanup header usage
  locktorture: Cannot hold read and write lock
  locktorture: Fix __acquire annotation for spinlock irq
  locktorture: Support rwlocks
  rcu: Eliminate deadlock between CPU hotplug and expedited grace periods
  locktorture: Document boot/module parameters
  rcutorture: Rename rcutorture_runnable parameter
  locktorture: Add test scenario for rwsem_lock
  locktorture: Add test scenario for mutex_lock
  locktorture: Make torture scripting account for new _runnable name
  locktorture: Introduce torture context
  locktorture: Support rwsems
  locktorture: Add infrastructure for torturing read locks
  torture: Address race in module cleanup
  locktorture: Make statistics generic
  locktorture: Teach about lock debugging
  locktorture: Support mutexes
  locktorture: Add documentation
  ...
Linus Torvalds committed 2014-10-13 15:44:12 +02:00
commit d6dd50e07c
63 changed files with 1935 additions and 546 deletions


@ -56,8 +56,20 @@ RCU_STALL_RAT_DELAY
	two jiffies.  (This is a cpp macro, not a kernel configuration
	parameter.)

-When a CPU detects that it is stalling, it will print a message similar
-to the following:
+rcupdate.rcu_task_stall_timeout
+
+	This boot/sysfs parameter controls the RCU-tasks stall warning
+	interval.  A value of zero or less suppresses RCU-tasks stall
+	warnings.  A positive value sets the stall-warning interval
+	in jiffies.  An RCU-tasks stall warning starts with the line:
+
+		INFO: rcu_tasks detected stalls on tasks:
+
+	And continues with the output of sched_show_task() for each
+	task stalling the current RCU-tasks grace period.
+
+For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
+it will print a message similar to the following:

INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
@ -174,8 +186,12 @@ o A CPU looping with preemption disabled. This condition can
o	A CPU looping with bottom halves disabled.  This condition can
	result in RCU-sched and RCU-bh stalls.

-o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
-	without invoking schedule().
+o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the
+	kernel without invoking schedule().  Note that cond_resched()
+	does not necessarily prevent RCU CPU stall warnings.  Therefore,
+	if the looping in the kernel is really expected and desirable
+	behavior, you might need to replace some of the cond_resched()
+	calls with calls to cond_resched_rcu_qs().

o	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
	happen to preempt a low-priority task in the middle of an RCU
@ -208,11 +224,10 @@ o A hardware failure. This is quite unlikely, but has occurred
	This resulted in a series of RCU CPU stall warnings, eventually
	leading the realization that the CPU had failed.

-The RCU, RCU-sched, and RCU-bh implementations have CPU stall warning.
-SRCU does not have its own CPU stall warnings, but its calls to
-synchronize_sched() will result in RCU-sched detecting RCU-sched-related
-CPU stalls.  Please note that RCU only detects CPU stalls when there is
-a grace period in progress.  No grace period, no CPU stall warnings.
+The RCU, RCU-sched, RCU-bh, and RCU-tasks implementations have CPU stall
+warning.  Note that SRCU does -not- have CPU stall warnings.  Please note
+that RCU only detects CPU stalls when there is a grace period in progress.
+No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
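
The hunk above suggests replacing some cond_resched() calls with cond_resched_rcu_qs() when a loop is expected to run in the kernel for a long time. A minimal sketch of that pattern follows; struct example_item, example_scan(), and the per-element work are illustrative stand-ins, not part of this merge:

	#include <linux/list.h>
	#include <linux/sched.h>
	#include <linux/rcupdate.h>	/* cond_resched_rcu_qs() */

	struct example_item {		/* illustrative only */
		struct list_head node;
		unsigned long payload;
	};

	static void example_scan(struct list_head *head)
	{
		struct example_item *p;

		list_for_each_entry(p, head, node) {
			/* ... per-element work would go here ... */

			/*
			 * cond_resched() alone does not necessarily report a
			 * quiescent state to RCU-tasks; cond_resched_rcu_qs()
			 * notes the voluntary context switch and then calls
			 * cond_resched().
			 */
			cond_resched_rcu_qs();
		}
	}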


@ -1723,6 +1723,49 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
lockd.nlm_udpport=M [NFS] Assign UDP port.
Format: <integer>
locktorture.nreaders_stress= [KNL]
Set the number of locking read-acquisition kthreads.
Defaults to being automatically set based on the
number of online CPUs.
locktorture.nwriters_stress= [KNL]
Set the number of locking write-acquisition kthreads.
locktorture.onoff_holdoff= [KNL]
Set time (s) after boot for CPU-hotplug testing.
locktorture.onoff_interval= [KNL]
Set time (s) between CPU-hotplug operations, or
zero to disable CPU-hotplug testing.
locktorture.shuffle_interval= [KNL]
Set task-shuffle interval (jiffies). Shuffling
tasks allows some CPUs to go into dyntick-idle
mode during the locktorture test.
locktorture.shutdown_secs= [KNL]
Set time (s) after boot system shutdown. This
is useful for hands-off automated testing.
locktorture.stat_interval= [KNL]
Time (s) between statistics printk()s.
locktorture.stutter= [KNL]
Time (s) to stutter testing, for example,
specifying five seconds causes the test to run for
five seconds, wait for five seconds, and so on.
This tests the locking primitive's ability to
transition abruptly to and from idle.
locktorture.torture_runnable= [BOOT]
Start locktorture running at boot time.
locktorture.torture_type= [KNL]
Specify the locking implementation to test.
locktorture.verbose= [KNL]
Enable additional printk() statements.
logibm.irq= [HW,MOUSE] Logitech Bus Mouse Driver
Format: <irq>
@ -2900,6 +2943,24 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Lazy RCU callbacks are those which RCU can
prove do nothing more than free memory.
rcutorture.cbflood_inter_holdoff= [KNL]
Set holdoff time (jiffies) between successive
callback-flood tests.
rcutorture.cbflood_intra_holdoff= [KNL]
Set holdoff time (jiffies) between successive
bursts of callbacks within a given callback-flood
test.
rcutorture.cbflood_n_burst= [KNL]
Set the number of bursts making up a given
callback-flood test. Set this to zero to
disable callback-flood testing.
rcutorture.cbflood_n_per_burst= [KNL]
Set the number of callbacks to be registered
in a given burst of a callback-flood test.
rcutorture.fqs_duration= [KNL]
Set duration of force_quiescent_state bursts.
@ -2939,7 +3000,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Set time (s) between CPU-hotplug operations, or
zero to disable CPU-hotplug testing.
-rcutorture.rcutorture_runnable= [BOOT]
+rcutorture.torture_runnable= [BOOT]
Start rcutorture running at boot time.
rcutorture.shuffle_interval= [KNL]
@ -3001,6 +3062,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
rcupdate.rcu_cpu_stall_timeout= [KNL]
Set timeout for RCU CPU stall warning messages.
rcupdate.rcu_task_stall_timeout= [KNL]
Set timeout in jiffies for RCU task stall warning
messages. Disable with a value less than or equal
to zero.
rdinit= [KNL]
Format: <full_path>
Run specified binary instead of /init from the ramdisk,


@ -0,0 +1,147 @@
Kernel Lock Torture Test Operation
CONFIG_LOCK_TORTURE_TEST
The CONFIG_LOCK_TORTURE_TEST config option provides a kernel module
that runs torture tests on core kernel locking primitives.  The kernel
module, 'locktorture', may be built after the fact on the running
kernel to be tested, if desired.  The tests periodically output status
messages via printk(), which can be examined via the dmesg command
(perhaps grepping for "torture").  The test is started when the module
is loaded, and stops when the module is unloaded.  This program is based
on how RCU is tortured, via rcutorture.

This torture test consists of creating a number of kernel threads that
acquire the lock and hold it for a specific amount of time, thus
simulating different critical-region behaviors.  The amount of contention
on the lock can be increased by enlarging this critical-region hold time
and/or by creating more kthreads.
MODULE PARAMETERS
This module has the following parameters:
** Locktorture-specific **
nwriters_stress Number of kernel threads that will stress exclusive lock
ownership (writers). The default value is twice the number
of online CPUs.
nreaders_stress Number of kernel threads that will stress shared lock
ownership (readers). The default is the same as the number of
writer threads (nwriters_stress). If the user did not specify
nwriters_stress, both readers and writers default to the number
of online CPUs.
torture_type Type of lock to torture. By default, only spinlocks will
be tortured. This module can torture the following locks,
with string values as follows:
o "lock_busted": Simulates a buggy lock implementation.
o "spin_lock": spin_lock() and spin_unlock() pairs.
o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
pairs.
o "rw_lock": read/write lock() and unlock() rwlock pairs.
o "rw_lock_irq": read/write lock_irq() and unlock_irq()
rwlock pairs.
o "mutex_lock": mutex_lock() and mutex_unlock() pairs.
o "rwsem_lock": read/write down() and up() semaphore pairs.
torture_runnable Start locktorture at boot time in the case where the
module is built into the kernel, otherwise wait for
torture_runnable to be set via sysfs before starting.
By default it will begin once the module is loaded.
** Torture-framework (RCU + locking) **
shutdown_secs The number of seconds to run the test before terminating
the test and powering off the system. The default is
zero, which disables test termination and system shutdown.
This capability is useful for automated testing.
onoff_interval The number of seconds between each attempt to execute a
randomly selected CPU-hotplug operation. Defaults
to zero, which disables CPU hotplugging. In
CONFIG_HOTPLUG_CPU=n kernels, locktorture will silently
refuse to do any CPU-hotplug operations regardless of
what value is specified for onoff_interval.
onoff_holdoff The number of seconds to wait until starting CPU-hotplug
operations. This would normally only be used when
locktorture was built into the kernel and started
automatically at boot time, in which case it is useful
in order to avoid confusing boot-time code with CPUs
coming and going. This parameter is only useful if
CONFIG_HOTPLUG_CPU is enabled.
stat_interval Number of seconds between statistics-related printk()s.
By default, locktorture will report stats every 60 seconds.
Setting the interval to zero causes the statistics to be
printed -only- when the module is unloaded.
stutter The length of time to run the test before pausing for this
same period of time. Defaults to "stutter=5", so as
to run and pause for (roughly) five-second intervals.
Specifying "stutter=0" causes the test to run continuously
without pausing, which is the old default behavior.
shuffle_interval The number of seconds to keep the test threads affinitied
to a particular subset of the CPUs, defaults to 3 seconds.
Used in conjunction with test_no_idle_hz.
verbose Enable verbose debugging printing, via printk(). Enabled
by default. This extra information is mostly related to
high-level errors and reports from the main 'torture'
framework.
STATISTICS
Statistics are printed in the following format:
spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
       (A)          (B)           (C)            (D)          (E)
(A): Lock type that is being tortured -- torture_type parameter.
(B): Number of writer lock acquisitions. If dealing with a read/write primitive
a second "Reads" statistics line is printed.
(C): Number of times the lock was acquired.
(D): Min and max number of times threads failed to acquire the lock.
(E): true/false values if there were errors acquiring the lock. This should
-only- be positive if there is a bug in the locking primitive's
implementation. Otherwise a lock should never fail (i.e., spin_lock()).
Of course, the same applies for (C), above. A dummy example of this is
the "lock_busted" type.
USAGE
The following script may be used to torture locks:
#!/bin/sh
modprobe locktorture
sleep 3600
rmmod locktorture
dmesg | grep torture:
The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors. The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
two are self-explanatory, while the last indicates that while there
were no locking failures, CPU-hotplug problems were detected.
Also see: Documentation/RCU/torture.txt


@ -574,30 +574,14 @@ However, stores are not speculated. This means that ordering -is- provided
in the following example:

	q = ACCESS_ONCE(a);
-	if (ACCESS_ONCE(q)) {
+	if (q) {
		ACCESS_ONCE(b) = p;
	}

-Please note that ACCESS_ONCE() is not optional!  Without the ACCESS_ONCE(),
-the compiler is within its rights to transform this example:
-
-	q = a;
-	if (q) {
-		b = p; /* BUG: Compiler can reorder!!! */
-		do_something();
-	} else {
-		b = p; /* BUG: Compiler can reorder!!! */
-		do_something_else();
-	}
-
-into this, which of course defeats the ordering:
-
-	b = p;
-	q = a;
-	if (q)
-		do_something();
-	else
-		do_something_else();
+Please note that ACCESS_ONCE() is not optional!  Without the
+ACCESS_ONCE(), the compiler might combine the load from 'a' with other
+loads from 'a', and the store to 'b' with other stores to 'b', with
+possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
@ -605,11 +589,12 @@ to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
-	b = p;  /* BUG: Compiler can reorder!!! */
+	b = p;  /* BUG: Compiler and CPU can both reorder!!! */
	do_something();

-The solution is again ACCESS_ONCE() and barrier(), which preserves the
-ordering between the load from variable 'a' and the store to variable 'b':
+So don't leave out the ACCESS_ONCE().
+
+It is tempting to try to enforce ordering on identical stores on both
+branches of the "if" statement as follows:

	q = ACCESS_ONCE(a);
	if (q) {
@ -622,18 +607,11 @@ ordering between the load from variable 'a' and the store to variable 'b':
		do_something_else();
	}

-The initial ACCESS_ONCE() is required to prevent the compiler from
-proving the value of 'a', and the pair of barrier() invocations are
-required to prevent the compiler from pulling the two identical stores
-to 'b' out from the legs of the "if" statement.
-
-It is important to note that control dependencies absolutely require
-a conditional.  For example, the following "optimized" version of
-the above example breaks ordering, which is why the barrier() invocations
-are absolutely required if you have identical stores in both legs of
-the "if" statement:
+Unfortunately, current compilers will transform this as follows at high
+optimization levels:

	q = ACCESS_ONCE(a);
-	barrier();
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
@ -643,21 +621,36 @@ the "if" statement:
		do_something_else();
	}

-It is of course legal for the prior load to be part of the conditional,
-for example, as follows:
+Now there is no conditional between the load from 'a' and the store to
+'b', which means that the CPU is within its rights to reorder them:
+The conditional is absolutely required, and must be present in the
+assembly code even after all compiler optimizations have been applied.
+Therefore, if you need ordering in this example, you need explicit
+memory barriers, for example, smp_store_release():

-	if (ACCESS_ONCE(a) > 0) {
-		barrier();
-		ACCESS_ONCE(b) = q / 2;
+	q = ACCESS_ONCE(a);
+	if (q) {
+		smp_store_release(&b, p);
		do_something();
	} else {
-		barrier();
-		ACCESS_ONCE(b) = q / 3;
+		smp_store_release(&b, p);
		do_something_else();
	}

-This will again ensure that the load from variable 'a' is ordered before the
-stores to variable 'b'.
+In contrast, without explicit memory barriers, two-legged-if control
+ordering is guaranteed only when the stores differ, for example:
+
+	q = ACCESS_ONCE(a);
+	if (q) {
+		ACCESS_ONCE(b) = p;
+		do_something();
+	} else {
+		ACCESS_ONCE(b) = r;
+		do_something_else();
+	}
+
+The initial ACCESS_ONCE() is still required to prevent the compiler from
+proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
@ -665,12 +658,10 @@ the needed conditional. For example:

	q = ACCESS_ONCE(a);
	if (q % MAX) {
-		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
-		barrier();
-		ACCESS_ONCE(b) = p;
+		ACCESS_ONCE(b) = r;
		do_something_else();
	}
@ -682,9 +673,12 @@ transform the above code into the following:

		ACCESS_ONCE(b) = p;
		do_something_else();

-This transformation loses the ordering between the load from variable 'a'
-and the store to variable 'b'.  If you are relying on this ordering, you
-should do something like the following:
+Given this transformation, the CPU is not required to respect the ordering
+between the load from variable 'a' and the store to variable 'b'.  It is
+tempting to add a barrier(), but this does not help.  The conditional
+is gone, and the barrier won't bring it back.  Therefore, if you are
+relying on this ordering, you should make sure that MAX is greater than
+one, perhaps as follows:

	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1);  /* Order load from a with store to b. */
@ -692,35 +686,45 @@ should do something like the following:
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
-		ACCESS_ONCE(b) = p;
+		ACCESS_ONCE(b) = r;
		do_something_else();
	}

+Please note once again that the stores to 'b' differ.  If they were
+identical, as noted earlier, the compiler could pull this store outside
+of the 'if' statement.
+
Finally, control dependencies do -not- provide transitivity.  This is
-demonstrated by two related examples:
+demonstrated by two related examples, with the initial values of
+x and y both being zero:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
-	if (r1 >= 0)              if (r2 >= 0)
+	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
-then adding the following two CPUs would guarantee a related assertion:
+then adding the following CPU would guarantee a related assertion:

-	CPU 2                     CPU 3
-	=====================     =====================
-	ACCESS_ONCE(x) = 2;       ACCESS_ONCE(y) = 2;
+	CPU 2
+	=====================
+	ACCESS_ONCE(x) = 2;

-	assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */
+	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

-But because control dependencies do -not- provide transitivity, the
-above assertion can fail after the combined four-CPU example completes.
-If you need the four-CPU example to provide ordering, you will need
-smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.
+But because control dependencies do -not- provide transitivity, the above
+assertion can fail after the combined three-CPU example completes.  If you
+need the three-CPU example to provide ordering, you will need smp_mb()
+between the loads and stores in the CPU 0 and CPU 1 code fragments,
+that is, just before or just after the "if" statements.
+
+These two examples are the LB and WWC litmus tests from this paper:
+http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
+site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:


@ -367,7 +367,7 @@ static struct fdtable *close_files(struct files_struct * files)
		struct file * file = xchg(&fdt->fd[i], NULL);
		if (file) {
			filp_close(file, files);
-			cond_resched();
+			cond_resched_rcu_qs();
		}
	}
	i++;


@ -213,6 +213,7 @@ extern struct bus_type cpu_subsys;
extern void cpu_hotplug_begin(void);
extern void cpu_hotplug_done(void);
extern void get_online_cpus(void);
+extern bool try_get_online_cpus(void);
extern void put_online_cpus(void);
extern void cpu_hotplug_disable(void);
extern void cpu_hotplug_enable(void);
@ -230,6 +231,7 @@ int cpu_down(unsigned int cpu);
static inline void cpu_hotplug_begin(void) {}
static inline void cpu_hotplug_done(void) {}
#define get_online_cpus()	do { } while (0)
+#define try_get_online_cpus()	true
#define put_online_cpus()	do { } while (0)
#define cpu_hotplug_disable()	do { } while (0)
#define cpu_hotplug_enable()	do { } while (0)


@ -111,12 +111,21 @@ extern struct group_info init_groups;
#ifdef CONFIG_PREEMPT_RCU
#define INIT_TASK_RCU_PREEMPT(tsk)					\
	.rcu_read_lock_nesting = 0,					\
-	.rcu_read_unlock_special = 0,					\
+	.rcu_read_unlock_special.s = 0,					\
	.rcu_node_entry = LIST_HEAD_INIT(tsk.rcu_node_entry),		\
	INIT_TASK_RCU_TREE_PREEMPT()
#else
#define INIT_TASK_RCU_PREEMPT(tsk)
#endif
#ifdef CONFIG_TASKS_RCU
#define INIT_TASK_RCU_TASKS(tsk) \
.rcu_tasks_holdout = false, \
.rcu_tasks_holdout_list = \
LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list), \
.rcu_tasks_idle_cpu = -1,
#else
#define INIT_TASK_RCU_TASKS(tsk)
#endif
extern struct cred init_cred;
@ -224,6 +233,7 @@ extern struct task_group root_task_group;
	INIT_FTRACE_GRAPH						\
	INIT_TRACE_RECURSION						\
	INIT_TASK_RCU_PREEMPT(tsk)					\
+	INIT_TASK_RCU_TASKS(tsk)					\
	INIT_CPUSET_SEQ(tsk)						\
	INIT_RT_MUTEXES(tsk)						\
	INIT_VTIME(tsk)							\


@ -510,6 +510,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
#define lock_map_acquire(l)		lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
#define lock_map_acquire_read(l)	lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
+#define lock_map_acquire_tryread(l)	lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
#define lock_map_release(l)		lock_release(l, 1, _THIS_IP_)

#ifdef CONFIG_PROVE_LOCKING


@ -47,14 +47,12 @@
#include <asm/barrier.h>

extern int rcu_expedited; /* for sysctl */
-#ifdef CONFIG_RCU_TORTURE_TEST
-extern int rcutorture_runnable; /* for sysctl */
-#endif /* #ifdef CONFIG_RCU_TORTURE_TEST */

enum rcutorture_type {
	RCU_FLAVOR,
	RCU_BH_FLAVOR,
	RCU_SCHED_FLAVOR,
+	RCU_TASKS_FLAVOR,
	SRCU_FLAVOR,
	INVALID_RCU_FLAVOR
};
@ -197,6 +195,28 @@ void call_rcu_sched(struct rcu_head *head,
void synchronize_sched(void);
/**
* call_rcu_tasks() - Queue an RCU for invocation task-based grace period
* @head: structure to be used for queueing the RCU updates.
* @func: actual callback function to be invoked after the grace period
*
* The callback function will be invoked some time after a full grace
* period elapses, in other words after all currently executing RCU
* read-side critical sections have completed. call_rcu_tasks() assumes
* that the read-side critical sections end at a voluntary context
* switch (not a preemption!), entry into idle, or transition to usermode
* execution. As such, there are no read-side primitives analogous to
* rcu_read_lock() and rcu_read_unlock() because this primitive is intended
* to determine that all tasks have passed through a safe state, not so
* much for data-strcuture synchronization.
*
* See the description of call_rcu() for more detailed information on
* memory ordering guarantees.
*/
void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
void synchronize_rcu_tasks(void);
void rcu_barrier_tasks(void);
#ifdef CONFIG_PREEMPT_RCU

void __rcu_read_lock(void);
@ -238,8 +258,8 @@ static inline int rcu_preempt_depth(void)

/* Internal to kernel */
void rcu_init(void);
-void rcu_sched_qs(int cpu);
-void rcu_bh_qs(int cpu);
+void rcu_sched_qs(void);
+void rcu_bh_qs(void);
void rcu_check_callbacks(int cpu, int user);
struct notifier_block;
void rcu_idle_enter(void);
@ -269,6 +289,14 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
				     struct task_struct *next) { }
#endif /* CONFIG_RCU_USER_QS */
#ifdef CONFIG_RCU_NOCB_CPU
void rcu_init_nohz(void);
#else /* #ifdef CONFIG_RCU_NOCB_CPU */
static inline void rcu_init_nohz(void)
{
}
#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
/**
 * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
 * @a: Code that RCU needs to pay attention to.
@ -294,6 +322,36 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
		rcu_irq_exit(); \
	} while (0)
/*
* Note a voluntary context switch for RCU-tasks benefit. This is a
* macro rather than an inline function to avoid #include hell.
*/
#ifdef CONFIG_TASKS_RCU
#define TASKS_RCU(x) x
extern struct srcu_struct tasks_rcu_exit_srcu;
#define rcu_note_voluntary_context_switch(t) \
do { \
if (ACCESS_ONCE((t)->rcu_tasks_holdout)) \
ACCESS_ONCE((t)->rcu_tasks_holdout) = false; \
} while (0)
#else /* #ifdef CONFIG_TASKS_RCU */
#define TASKS_RCU(x) do { } while (0)
#define rcu_note_voluntary_context_switch(t) do { } while (0)
#endif /* #else #ifdef CONFIG_TASKS_RCU */
/**
* cond_resched_rcu_qs - Report potential quiescent states to RCU
*
* This macro resembles cond_resched(), except that it is defined to
* report potential quiescent states to RCU-tasks even if the cond_resched()
* machinery were to be shut off, as some advocate for PREEMPT kernels.
*/
#define cond_resched_rcu_qs() \
do { \
rcu_note_voluntary_context_switch(current); \
cond_resched(); \
} while (0)
#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
bool __rcu_is_watching(void);
#endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
@ -349,7 +407,7 @@ bool rcu_lockdep_current_cpu_online(void);
#else /* #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
static inline bool rcu_lockdep_current_cpu_online(void)
{
-	return 1;
+	return true;
}
#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
@ -371,41 +429,7 @@ extern struct lockdep_map rcu_sched_lock_map;
extern struct lockdep_map rcu_callback_map;
int debug_lockdep_rcu_enabled(void);

-/**
- * rcu_read_lock_held() - might we be in RCU read-side critical section?
- *
- * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU
- * read-side critical section.  In absence of CONFIG_DEBUG_LOCK_ALLOC,
- * this assumes we are in an RCU read-side critical section unless it can
- * prove otherwise.  This is useful for debug checks in functions that
- * require that they be called within an RCU read-side critical section.
- *
- * Checks debug_lockdep_rcu_enabled() to prevent false positives during boot
- * and while lockdep is disabled.
- *
- * Note that rcu_read_lock() and the matching rcu_read_unlock() must
- * occur in the same context, for example, it is illegal to invoke
- * rcu_read_unlock() in process context if the matching rcu_read_lock()
- * was invoked from within an irq handler.
- *
- * Note that rcu_read_lock() is disallowed if the CPU is either idle or
- * offline from an RCU perspective, so check for those as well.
- */
-static inline int rcu_read_lock_held(void)
-{
-	if (!debug_lockdep_rcu_enabled())
-		return 1;
-	if (!rcu_is_watching())
-		return 0;
-	if (!rcu_lockdep_current_cpu_online())
-		return 0;
-	return lock_is_held(&rcu_lock_map);
-}
-
-/*
- * rcu_read_lock_bh_held() is defined out of line to avoid #include-file
- * hell.
- */
+int rcu_read_lock_held(void);
+
int rcu_read_lock_bh_held(void);

/**
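
The rcupdate.h hunks above add the RCU-tasks update-side API: call_rcu_tasks(), synchronize_rcu_tasks(), and rcu_barrier_tasks(). A minimal, hypothetical usage sketch follows; struct example_tramp, its fields, and both functions are illustrative stand-ins rather than code from this merge. It shows memory being freed only after all tasks have passed through a voluntary context switch, idle, or usermode execution:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct example_tramp {			/* illustrative only */
		struct rcu_head rh;
		void *text;			/* e.g., generated code */
	};

	static void example_tramp_free(struct rcu_head *rh)
	{
		struct example_tramp *t = container_of(rh, struct example_tramp, rh);

		/* Runs after an RCU-tasks grace period has elapsed. */
		kfree(t->text);
		kfree(t);
	}

	static void example_tramp_retire(struct example_tramp *t)
	{
		/* Queue the callback; no read-side primitive is required. */
		call_rcu_tasks(&t->rh, example_tramp_free);
	}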


@ -80,7 +80,7 @@ static inline void kfree_call_rcu(struct rcu_head *head,
static inline void rcu_note_context_switch(int cpu)
{
-	rcu_sched_qs(cpu);
+	rcu_sched_qs();
}

/*


@ -1213,6 +1213,13 @@ struct sched_dl_entity {
	struct hrtimer dl_timer;
};

union rcu_special {
	struct {
		bool blocked;
		bool need_qs;
	} b;
	short s;
};

struct rcu_node;

enum perf_event_task_context {
@ -1265,12 +1272,18 @@ struct task_struct {
#ifdef CONFIG_PREEMPT_RCU
	int rcu_read_lock_nesting;
-	char rcu_read_unlock_special;
+	union rcu_special rcu_read_unlock_special;
	struct list_head rcu_node_entry;
#endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_TREE_PREEMPT_RCU
	struct rcu_node *rcu_blocked_node;
#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
+#ifdef CONFIG_TASKS_RCU
+	unsigned long rcu_tasks_nvcsw;
+	bool rcu_tasks_holdout;
+	struct list_head rcu_tasks_holdout_list;
+	int rcu_tasks_idle_cpu;
+#endif /* #ifdef CONFIG_TASKS_RCU */

#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
	struct sched_info sched_info;
@ -2014,29 +2027,21 @@ extern void task_clear_jobctl_trapping(struct task_struct *task);
extern void task_clear_jobctl_pending(struct task_struct *task,
				      unsigned int mask);

+static inline void rcu_copy_process(struct task_struct *p)
+{
#ifdef CONFIG_PREEMPT_RCU
-
-#define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
-#define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */
-
-static inline void rcu_copy_process(struct task_struct *p)
-{
	p->rcu_read_lock_nesting = 0;
-	p->rcu_read_unlock_special = 0;
+	p->rcu_read_unlock_special.s = 0;
-#ifdef CONFIG_TREE_PREEMPT_RCU
	p->rcu_blocked_node = NULL;
-#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
	INIT_LIST_HEAD(&p->rcu_node_entry);
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
+#ifdef CONFIG_TASKS_RCU
+	p->rcu_tasks_holdout = false;
+	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
+	p->rcu_tasks_idle_cpu = -1;
+#endif /* #ifdef CONFIG_TASKS_RCU */
}
-
-#else
-
-static inline void rcu_copy_process(struct task_struct *p)
-{
-}
-
-#endif

static inline void tsk_restore_flags(struct task_struct *task,
				     unsigned long orig_flags, unsigned long flags)
{


@ -51,7 +51,7 @@
/* Definitions for online/offline exerciser. */
int torture_onoff_init(long ooholdoff, long oointerval);
-char *torture_onoff_stats(char *page);
+void torture_onoff_stats(void);
bool torture_onoff_failures(void);

/* Low-rider random number generator. */
@ -77,7 +77,8 @@ int torture_stutter_init(int s);
/* Initialization and cleanup. */
bool torture_init_begin(char *ttype, bool v, int *runnable);
void torture_init_end(void);
-bool torture_cleanup(void);
+bool torture_cleanup_begin(void);
+void torture_cleanup_end(void);
bool torture_must_stop(void);
bool torture_must_stop_irq(void);
void torture_kthread_stopping(char *title);


@ -180,9 +180,12 @@ TRACE_EVENT(rcu_grace_period_init,
 * argument is a string as follows:
 *
 *	"WakeEmpty": Wake rcuo kthread, first CB to empty list.
+ *	"WakeEmptyIsDeferred": Wake rcuo kthread later, first CB to empty list.
 *	"WakeOvf": Wake rcuo kthread, CB list is huge.
+ *	"WakeOvfIsDeferred": Wake rcuo kthread later, CB list is huge.
 *	"WakeNot": Don't wake rcuo kthread.
 *	"WakeNotPoll": Don't wake rcuo kthread because it is polling.
+ *	"DeferredWake": Carried out the "IsDeferred" wakeup.
 *	"Poll": Start of new polling cycle for rcu_nocb_poll.
 *	"Sleep": Sleep waiting for CBs for !rcu_nocb_poll.
 *	"WokeEmpty": rcuo kthread woke to find empty list.


@ -507,6 +507,16 @@ config PREEMPT_RCU
This option enables preemptible-RCU code that is common between
TREE_PREEMPT_RCU and, in the old days, TINY_PREEMPT_RCU.
config TASKS_RCU
bool "Task_based RCU implementation using voluntary context switch"
default n
help
This option enables a task-based RCU implementation that uses
only voluntary context switch (not preemption!), idle, and
user-mode execution as quiescent states.
If unsure, say N.
config RCU_STALL_COMMON
def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
help
@ -737,7 +747,7 @@ choice
config RCU_NOCB_CPU_NONE
bool "No build_forced no-CBs CPUs"
-depends on RCU_NOCB_CPU && !NO_HZ_FULL_ALL
+depends on RCU_NOCB_CPU
help
This option does not force any of the CPUs to be no-CBs CPUs.
Only CPUs designated by the rcu_nocbs= boot parameter will be
@ -751,7 +761,7 @@ config RCU_NOCB_CPU_NONE
config RCU_NOCB_CPU_ZERO
bool "CPU 0 is a build_forced no-CBs CPU"
-depends on RCU_NOCB_CPU && !NO_HZ_FULL_ALL
+depends on RCU_NOCB_CPU
help
This option forces CPU 0 to be a no-CBs CPU, so that its RCU
callbacks are invoked by a per-CPU kthread whose name begins


@ -583,6 +583,7 @@ asmlinkage __visible void __init start_kernel(void)
	early_irq_init();
	init_IRQ();
	tick_init();
+	rcu_init_nohz();
	init_timers();
	hrtimers_init();
	softirq_init();


@ -79,6 +79,8 @@ static struct {
/* Lockdep annotations for get/put_online_cpus() and cpu_hotplug_begin/end() */
#define cpuhp_lock_acquire_read() lock_map_acquire_read(&cpu_hotplug.dep_map)
+#define cpuhp_lock_acquire_tryread() \
+				  lock_map_acquire_tryread(&cpu_hotplug.dep_map)
#define cpuhp_lock_acquire()      lock_map_acquire(&cpu_hotplug.dep_map)
#define cpuhp_lock_release()      lock_map_release(&cpu_hotplug.dep_map)
@ -91,10 +93,22 @@ void get_online_cpus(void)
	mutex_lock(&cpu_hotplug.lock);
	cpu_hotplug.refcount++;
	mutex_unlock(&cpu_hotplug.lock);
}
EXPORT_SYMBOL_GPL(get_online_cpus);
bool try_get_online_cpus(void)
{
if (cpu_hotplug.active_writer == current)
return true;
if (!mutex_trylock(&cpu_hotplug.lock))
return false;
cpuhp_lock_acquire_tryread();
cpu_hotplug.refcount++;
mutex_unlock(&cpu_hotplug.lock);
return true;
}
EXPORT_SYMBOL_GPL(try_get_online_cpus);
void put_online_cpus(void)
{
	if (cpu_hotplug.active_writer == current)
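
The kernel/cpu.c hunk above adds try_get_online_cpus(), a non-blocking variant of get_online_cpus() used by the "Eliminate deadlock between CPU hotplug and expedited grace periods" change in the shortlog. A minimal, hypothetical caller sketch follows; example_hotplug_sensitive_work() and example_slow_path() are illustrative stand-ins, not part of this merge:

	#include <linux/cpu.h>

	static void example_slow_path(void);	/* illustrative fallback */

	static void example_hotplug_sensitive_work(void)
	{
		if (!try_get_online_cpus()) {
			/* A hotplug operation is in flight; do not block on it. */
			example_slow_path();
			return;
		}
		/* ... work that needs the set of online CPUs to stay stable ... */
		put_online_cpus();
	}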


@ -667,6 +667,7 @@ void do_exit(long code)
{
	struct task_struct *tsk = current;
	int group_dead;
+	TASKS_RCU(int tasks_rcu_i);

	profile_task_exit(tsk);
@ -775,6 +776,7 @@ void do_exit(long code)
	 */
	flush_ptrace_hw_breakpoint(tsk);

+	TASKS_RCU(tasks_rcu_i = __srcu_read_lock(&tasks_rcu_exit_srcu));
	exit_notify(tsk, group_dead);
	proc_exit_connector(tsk);
#ifdef CONFIG_NUMA
@ -814,6 +816,7 @@ void do_exit(long code)
	if (tsk->nr_dirtied)
		__this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied);
	exit_rcu();
+	TASKS_RCU(__srcu_read_unlock(&tasks_rcu_exit_srcu, tasks_rcu_i));

	/*
	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed


@ -20,30 +20,20 @@
 * Author: Paul E. McKenney <paulmck@us.ibm.com>
 *	Based on kernel/rcu/torture.c.
 */
-#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/init.h>
#include <linux/module.h>
#include <linux/kthread.h>
-#include <linux/err.h>
#include <linux/spinlock.h>
+#include <linux/rwlock.h>
+#include <linux/mutex.h>
+#include <linux/rwsem.h>
#include <linux/smp.h>
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/atomic.h>
-#include <linux/bitops.h>
-#include <linux/completion.h>
#include <linux/moduleparam.h>
-#include <linux/percpu.h>
-#include <linux/notifier.h>
-#include <linux/reboot.h>
-#include <linux/freezer.h>
-#include <linux/cpu.h>
#include <linux/delay.h>
-#include <linux/stat.h>
#include <linux/slab.h>
-#include <linux/trace_clock.h>
-#include <asm/byteorder.h>
#include <linux/torture.h>

MODULE_LICENSE("GPL");
@ -51,6 +41,8 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com>");
torture_param(int, nwriters_stress, -1, torture_param(int, nwriters_stress, -1,
"Number of write-locking stress-test threads"); "Number of write-locking stress-test threads");
torture_param(int, nreaders_stress, -1,
"Number of read-locking stress-test threads");
torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)"); torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)");
torture_param(int, onoff_interval, 0, torture_param(int, onoff_interval, 0,
"Time between CPU hotplugs (s), 0=disable"); "Time between CPU hotplugs (s), 0=disable");
@ -66,30 +58,28 @@ torture_param(bool, verbose, true,
static char *torture_type = "spin_lock"; static char *torture_type = "spin_lock";
module_param(torture_type, charp, 0444); module_param(torture_type, charp, 0444);
MODULE_PARM_DESC(torture_type, MODULE_PARM_DESC(torture_type,
"Type of lock to torture (spin_lock, spin_lock_irq, ...)"); "Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
static atomic_t n_lock_torture_errors;
static struct task_struct *stats_task; static struct task_struct *stats_task;
static struct task_struct **writer_tasks; static struct task_struct **writer_tasks;
static struct task_struct **reader_tasks;
static int nrealwriters_stress;
static bool lock_is_write_held; static bool lock_is_write_held;
static bool lock_is_read_held;
struct lock_writer_stress_stats { struct lock_stress_stats {
long n_write_lock_fail; long n_lock_fail;
long n_write_lock_acquired; long n_lock_acquired;
}; };
static struct lock_writer_stress_stats *lwsa;
#if defined(MODULE) #if defined(MODULE)
#define LOCKTORTURE_RUNNABLE_INIT 1 #define LOCKTORTURE_RUNNABLE_INIT 1
#else #else
#define LOCKTORTURE_RUNNABLE_INIT 0 #define LOCKTORTURE_RUNNABLE_INIT 0
#endif #endif
int locktorture_runnable = LOCKTORTURE_RUNNABLE_INIT; int torture_runnable = LOCKTORTURE_RUNNABLE_INIT;
module_param(locktorture_runnable, int, 0444); module_param(torture_runnable, int, 0444);
MODULE_PARM_DESC(locktorture_runnable, "Start locktorture at module init"); MODULE_PARM_DESC(torture_runnable, "Start locktorture at module init");
/* Forward reference. */ /* Forward reference. */
static void lock_torture_cleanup(void); static void lock_torture_cleanup(void);
@ -102,12 +92,25 @@ struct lock_torture_ops {
int (*writelock)(void); int (*writelock)(void);
void (*write_delay)(struct torture_random_state *trsp); void (*write_delay)(struct torture_random_state *trsp);
void (*writeunlock)(void); void (*writeunlock)(void);
int (*readlock)(void);
void (*read_delay)(struct torture_random_state *trsp);
void (*readunlock)(void);
unsigned long flags; unsigned long flags;
const char *name; const char *name;
}; };
static struct lock_torture_ops *cur_ops; struct lock_torture_cxt {
int nrealwriters_stress;
int nrealreaders_stress;
bool debug_lock;
atomic_t n_lock_torture_errors;
struct lock_torture_ops *cur_ops;
struct lock_stress_stats *lwsa; /* writer statistics */
struct lock_stress_stats *lrsa; /* reader statistics */
};
static struct lock_torture_cxt cxt = { 0, 0, false,
ATOMIC_INIT(0),
NULL, NULL};
/* /*
* Definitions for lock torture testing. * Definitions for lock torture testing.
*/ */
@ -123,10 +126,10 @@ static void torture_lock_busted_write_delay(struct torture_random_state *trsp)
/* We want a long delay occasionally to force massive contention. */ /* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) % if (!(torture_random(trsp) %
(nrealwriters_stress * 2000 * longdelay_us))) (cxt.nrealwriters_stress * 2000 * longdelay_us)))
mdelay(longdelay_us); mdelay(longdelay_us);
#ifdef CONFIG_PREEMPT #ifdef CONFIG_PREEMPT
if (!(torture_random(trsp) % (nrealwriters_stress * 20000))) if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
preempt_schedule(); /* Allow test to be preempted. */ preempt_schedule(); /* Allow test to be preempted. */
#endif #endif
} }
@ -140,6 +143,9 @@ static struct lock_torture_ops lock_busted_ops = {
.writelock = torture_lock_busted_write_lock, .writelock = torture_lock_busted_write_lock,
.write_delay = torture_lock_busted_write_delay, .write_delay = torture_lock_busted_write_delay,
.writeunlock = torture_lock_busted_write_unlock, .writeunlock = torture_lock_busted_write_unlock,
.readlock = NULL,
.read_delay = NULL,
.readunlock = NULL,
.name = "lock_busted" .name = "lock_busted"
}; };
@ -160,13 +166,13 @@ static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
* we want a long delay occasionally to force massive contention. * we want a long delay occasionally to force massive contention.
*/ */
if (!(torture_random(trsp) % if (!(torture_random(trsp) %
(nrealwriters_stress * 2000 * longdelay_us))) (cxt.nrealwriters_stress * 2000 * longdelay_us)))
mdelay(longdelay_us); mdelay(longdelay_us);
if (!(torture_random(trsp) % if (!(torture_random(trsp) %
(nrealwriters_stress * 2 * shortdelay_us))) (cxt.nrealwriters_stress * 2 * shortdelay_us)))
udelay(shortdelay_us); udelay(shortdelay_us);
#ifdef CONFIG_PREEMPT #ifdef CONFIG_PREEMPT
if (!(torture_random(trsp) % (nrealwriters_stress * 20000))) if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
preempt_schedule(); /* Allow test to be preempted. */ preempt_schedule(); /* Allow test to be preempted. */
#endif #endif
} }
@ -180,39 +186,253 @@ static struct lock_torture_ops spin_lock_ops = {
.writelock = torture_spin_lock_write_lock, .writelock = torture_spin_lock_write_lock,
.write_delay = torture_spin_lock_write_delay, .write_delay = torture_spin_lock_write_delay,
.writeunlock = torture_spin_lock_write_unlock, .writeunlock = torture_spin_lock_write_unlock,
.readlock = NULL,
.read_delay = NULL,
.readunlock = NULL,
.name = "spin_lock" .name = "spin_lock"
}; };
static int torture_spin_lock_write_lock_irq(void) static int torture_spin_lock_write_lock_irq(void)
__acquires(torture_spinlock_irq) __acquires(torture_spinlock)
{ {
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&torture_spinlock, flags); spin_lock_irqsave(&torture_spinlock, flags);
cur_ops->flags = flags; cxt.cur_ops->flags = flags;
return 0; return 0;
} }
static void torture_lock_spin_write_unlock_irq(void) static void torture_lock_spin_write_unlock_irq(void)
__releases(torture_spinlock) __releases(torture_spinlock)
{ {
spin_unlock_irqrestore(&torture_spinlock, cur_ops->flags); spin_unlock_irqrestore(&torture_spinlock, cxt.cur_ops->flags);
} }
static struct lock_torture_ops spin_lock_irq_ops = { static struct lock_torture_ops spin_lock_irq_ops = {
.writelock = torture_spin_lock_write_lock_irq, .writelock = torture_spin_lock_write_lock_irq,
.write_delay = torture_spin_lock_write_delay, .write_delay = torture_spin_lock_write_delay,
.writeunlock = torture_lock_spin_write_unlock_irq, .writeunlock = torture_lock_spin_write_unlock_irq,
.readlock = NULL,
.read_delay = NULL,
.readunlock = NULL,
.name = "spin_lock_irq" .name = "spin_lock_irq"
}; };
static DEFINE_RWLOCK(torture_rwlock);
static int torture_rwlock_write_lock(void) __acquires(torture_rwlock)
{
write_lock(&torture_rwlock);
return 0;
}
static void torture_rwlock_write_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 2;
const unsigned long longdelay_ms = 100;
/* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
else
udelay(shortdelay_us);
}
static void torture_rwlock_write_unlock(void) __releases(torture_rwlock)
{
write_unlock(&torture_rwlock);
}
static int torture_rwlock_read_lock(void) __acquires(torture_rwlock)
{
read_lock(&torture_rwlock);
return 0;
}
static void torture_rwlock_read_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 10;
const unsigned long longdelay_ms = 100;
/* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) %
(cxt.nrealreaders_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
else
udelay(shortdelay_us);
}
static void torture_rwlock_read_unlock(void) __releases(torture_rwlock)
{
read_unlock(&torture_rwlock);
}
static struct lock_torture_ops rw_lock_ops = {
.writelock = torture_rwlock_write_lock,
.write_delay = torture_rwlock_write_delay,
.writeunlock = torture_rwlock_write_unlock,
.readlock = torture_rwlock_read_lock,
.read_delay = torture_rwlock_read_delay,
.readunlock = torture_rwlock_read_unlock,
.name = "rw_lock"
};
static int torture_rwlock_write_lock_irq(void) __acquires(torture_rwlock)
{
unsigned long flags;
write_lock_irqsave(&torture_rwlock, flags);
cxt.cur_ops->flags = flags;
return 0;
}
static void torture_rwlock_write_unlock_irq(void)
__releases(torture_rwlock)
{
write_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags);
}
static int torture_rwlock_read_lock_irq(void) __acquires(torture_rwlock)
{
unsigned long flags;
read_lock_irqsave(&torture_rwlock, flags);
cxt.cur_ops->flags = flags;
return 0;
}
static void torture_rwlock_read_unlock_irq(void)
__releases(torture_rwlock)
{
write_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags);
}
static struct lock_torture_ops rw_lock_irq_ops = {
.writelock = torture_rwlock_write_lock_irq,
.write_delay = torture_rwlock_write_delay,
.writeunlock = torture_rwlock_write_unlock_irq,
.readlock = torture_rwlock_read_lock_irq,
.read_delay = torture_rwlock_read_delay,
.readunlock = torture_rwlock_read_unlock_irq,
.name = "rw_lock_irq"
};
static DEFINE_MUTEX(torture_mutex);
static int torture_mutex_lock(void) __acquires(torture_mutex)
{
mutex_lock(&torture_mutex);
return 0;
}
static void torture_mutex_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = 100;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 5);
else
mdelay(longdelay_ms / 5);
#ifdef CONFIG_PREEMPT
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
preempt_schedule(); /* Allow test to be preempted. */
#endif
}
static void torture_mutex_unlock(void) __releases(torture_mutex)
{
mutex_unlock(&torture_mutex);
}
static struct lock_torture_ops mutex_lock_ops = {
.writelock = torture_mutex_lock,
.write_delay = torture_mutex_delay,
.writeunlock = torture_mutex_unlock,
.readlock = NULL,
.read_delay = NULL,
.readunlock = NULL,
.name = "mutex_lock"
};
static DECLARE_RWSEM(torture_rwsem);
static int torture_rwsem_down_write(void) __acquires(torture_rwsem)
{
down_write(&torture_rwsem);
return 0;
}
static void torture_rwsem_write_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = 100;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 10);
else
mdelay(longdelay_ms / 10);
#ifdef CONFIG_PREEMPT
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
preempt_schedule(); /* Allow test to be preempted. */
#endif
}
static void torture_rwsem_up_write(void) __releases(torture_rwsem)
{
up_write(&torture_rwsem);
}
static int torture_rwsem_down_read(void) __acquires(torture_rwsem)
{
down_read(&torture_rwsem);
return 0;
}
static void torture_rwsem_read_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = 100;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 2);
else
mdelay(longdelay_ms / 2);
#ifdef CONFIG_PREEMPT
if (!(torture_random(trsp) % (cxt.nrealreaders_stress * 20000)))
preempt_schedule(); /* Allow test to be preempted. */
#endif
}
static void torture_rwsem_up_read(void) __releases(torture_rwsem)
{
up_read(&torture_rwsem);
}
static struct lock_torture_ops rwsem_lock_ops = {
.writelock = torture_rwsem_down_write,
.write_delay = torture_rwsem_write_delay,
.writeunlock = torture_rwsem_up_write,
.readlock = torture_rwsem_down_read,
.read_delay = torture_rwsem_read_delay,
.readunlock = torture_rwsem_up_read,
.name = "rwsem_lock"
};
/* /*
* Lock torture writer kthread. Repeatedly acquires and releases * Lock torture writer kthread. Repeatedly acquires and releases
* the lock, checking for duplicate acquisitions. * the lock, checking for duplicate acquisitions.
*/ */
static int lock_torture_writer(void *arg) static int lock_torture_writer(void *arg)
{ {
struct lock_writer_stress_stats *lwsp = arg; struct lock_stress_stats *lwsp = arg;
static DEFINE_TORTURE_RANDOM(rand); static DEFINE_TORTURE_RANDOM(rand);
VERBOSE_TOROUT_STRING("lock_torture_writer task started"); VERBOSE_TOROUT_STRING("lock_torture_writer task started");
@ -221,47 +441,86 @@ static int lock_torture_writer(void *arg)
do { do {
if ((torture_random(&rand) & 0xfffff) == 0) if ((torture_random(&rand) & 0xfffff) == 0)
schedule_timeout_uninterruptible(1); schedule_timeout_uninterruptible(1);
cur_ops->writelock();
cxt.cur_ops->writelock();
if (WARN_ON_ONCE(lock_is_write_held)) if (WARN_ON_ONCE(lock_is_write_held))
lwsp->n_write_lock_fail++; lwsp->n_lock_fail++;
lock_is_write_held = 1; lock_is_write_held = 1;
lwsp->n_write_lock_acquired++; if (WARN_ON_ONCE(lock_is_read_held))
cur_ops->write_delay(&rand); lwsp->n_lock_fail++; /* rare, but... */
lwsp->n_lock_acquired++;
cxt.cur_ops->write_delay(&rand);
lock_is_write_held = 0; lock_is_write_held = 0;
cur_ops->writeunlock(); cxt.cur_ops->writeunlock();
stutter_wait("lock_torture_writer"); stutter_wait("lock_torture_writer");
} while (!torture_must_stop()); } while (!torture_must_stop());
torture_kthread_stopping("lock_torture_writer"); torture_kthread_stopping("lock_torture_writer");
return 0; return 0;
} }
/*
* Lock torture reader kthread. Repeatedly acquires and releases
* the reader lock.
*/
static int lock_torture_reader(void *arg)
{
struct lock_stress_stats *lrsp = arg;
static DEFINE_TORTURE_RANDOM(rand);
VERBOSE_TOROUT_STRING("lock_torture_reader task started");
set_user_nice(current, MAX_NICE);
do {
if ((torture_random(&rand) & 0xfffff) == 0)
schedule_timeout_uninterruptible(1);
cxt.cur_ops->readlock();
lock_is_read_held = 1;
if (WARN_ON_ONCE(lock_is_write_held))
lrsp->n_lock_fail++; /* rare, but... */
lrsp->n_lock_acquired++;
cxt.cur_ops->read_delay(&rand);
lock_is_read_held = 0;
cxt.cur_ops->readunlock();
stutter_wait("lock_torture_reader");
} while (!torture_must_stop());
torture_kthread_stopping("lock_torture_reader");
return 0;
}
 /*
  * Create an lock-torture-statistics message in the specified buffer.
  */
-static void lock_torture_printk(char *page)
+static void __torture_print_stats(char *page,
+				  struct lock_stress_stats *statp, bool write)
 {
 	bool fail = 0;
-	int i;
+	int i, n_stress;
 	long max = 0;
-	long min = lwsa[0].n_write_lock_acquired;
+	long min = statp[0].n_lock_acquired;
 	long long sum = 0;

-	for (i = 0; i < nrealwriters_stress; i++) {
-		if (lwsa[i].n_write_lock_fail)
+	n_stress = write ? cxt.nrealwriters_stress : cxt.nrealreaders_stress;
+	for (i = 0; i < n_stress; i++) {
+		if (statp[i].n_lock_fail)
 			fail = true;
-		sum += lwsa[i].n_write_lock_acquired;
-		if (max < lwsa[i].n_write_lock_fail)
-			max = lwsa[i].n_write_lock_fail;
-		if (min > lwsa[i].n_write_lock_fail)
-			min = lwsa[i].n_write_lock_fail;
+		sum += statp[i].n_lock_acquired;
+		if (max < statp[i].n_lock_fail)
+			max = statp[i].n_lock_fail;
+		if (min > statp[i].n_lock_fail)
+			min = statp[i].n_lock_fail;
 	}

-	page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG);
 	page += sprintf(page,
-			"Writes: Total: %lld Max/Min: %ld/%ld %s Fail: %d %s\n",
+			"%s: Total: %lld Max/Min: %ld/%ld %s Fail: %d %s\n",
+			write ? "Writes" : "Reads ",
 			sum, max, min, max / 2 > min ? "???" : "",
 			fail, fail ? "!!!" : "");
 	if (fail)
-		atomic_inc(&n_lock_torture_errors);
+		atomic_inc(&cxt.n_lock_torture_errors);
 }
 /*
@@ -274,18 +533,35 @@ static void lock_torture_printk(char *page)
  */
 static void lock_torture_stats_print(void)
 {
-	int size = nrealwriters_stress * 200 + 8192;
+	int size = cxt.nrealwriters_stress * 200 + 8192;
 	char *buf;

+	if (cxt.cur_ops->readlock)
+		size += cxt.nrealreaders_stress * 200 + 8192;
+
 	buf = kmalloc(size, GFP_KERNEL);
 	if (!buf) {
 		pr_err("lock_torture_stats_print: Out of memory, need: %d",
 		       size);
 		return;
 	}
-	lock_torture_printk(buf);
+
+	__torture_print_stats(buf, cxt.lwsa, true);
 	pr_alert("%s", buf);
 	kfree(buf);
+
+	if (cxt.cur_ops->readlock) {
+		buf = kmalloc(size, GFP_KERNEL);
+		if (!buf) {
+			pr_err("lock_torture_stats_print: Out of memory, need: %d",
+			       size);
+			return;
+		}
+
+		__torture_print_stats(buf, cxt.lrsa, false);
+		pr_alert("%s", buf);
+		kfree(buf);
+	}
 }
 /*
@@ -312,9 +588,10 @@ lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
 				const char *tag)
 {
 	pr_alert("%s" TORTURE_FLAG
-		 "--- %s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
-		 torture_type, tag, nrealwriters_stress, stat_interval, verbose,
-		 shuffle_interval, stutter, shutdown_secs,
+		 "--- %s%s: nwriters_stress=%d nreaders_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
+		 torture_type, tag, cxt.debug_lock ? " [debug]": "",
+		 cxt.nrealwriters_stress, cxt.nrealreaders_stress, stat_interval,
+		 verbose, shuffle_interval, stutter, shutdown_secs,
 		 onoff_interval, onoff_holdoff);
 }
@@ -322,46 +599,59 @@ static void lock_torture_cleanup(void)
 {
 	int i;

-	if (torture_cleanup())
+	if (torture_cleanup_begin())
 		return;

 	if (writer_tasks) {
-		for (i = 0; i < nrealwriters_stress; i++)
+		for (i = 0; i < cxt.nrealwriters_stress; i++)
 			torture_stop_kthread(lock_torture_writer,
 					     writer_tasks[i]);
 		kfree(writer_tasks);
 		writer_tasks = NULL;
 	}

+	if (reader_tasks) {
+		for (i = 0; i < cxt.nrealreaders_stress; i++)
+			torture_stop_kthread(lock_torture_reader,
+					     reader_tasks[i]);
+		kfree(reader_tasks);
+		reader_tasks = NULL;
+	}
+
 	torture_stop_kthread(lock_torture_stats, stats_task);
 	lock_torture_stats_print();  /* -After- the stats thread is stopped! */

-	if (atomic_read(&n_lock_torture_errors))
-		lock_torture_print_module_parms(cur_ops,
+	if (atomic_read(&cxt.n_lock_torture_errors))
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: FAILURE");
 	else if (torture_onoff_failures())
-		lock_torture_print_module_parms(cur_ops,
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: LOCK_HOTPLUG");
 	else
-		lock_torture_print_module_parms(cur_ops,
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: SUCCESS");
+	torture_cleanup_end();
 }
 static int __init lock_torture_init(void)
 {
-	int i;
+	int i, j;
 	int firsterr = 0;
 	static struct lock_torture_ops *torture_ops[] = {
-		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
+		&lock_busted_ops,
+		&spin_lock_ops, &spin_lock_irq_ops,
+		&rw_lock_ops, &rw_lock_irq_ops,
+		&mutex_lock_ops,
+		&rwsem_lock_ops,
 	};

-	if (!torture_init_begin(torture_type, verbose, &locktorture_runnable))
+	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
 		return -EBUSY;

 	/* Process args and tell the world that the torturer is on the job. */
 	for (i = 0; i < ARRAY_SIZE(torture_ops); i++) {
-		cur_ops = torture_ops[i];
-		if (strcmp(torture_type, cur_ops->name) == 0)
+		cxt.cur_ops = torture_ops[i];
+		if (strcmp(torture_type, cxt.cur_ops->name) == 0)
 			break;
 	}
 	if (i == ARRAY_SIZE(torture_ops)) {
@@ -374,31 +664,69 @@ static int __init lock_torture_init(void)
 		torture_init_end();
 		return -EINVAL;
 	}
-	if (cur_ops->init)
-		cur_ops->init(); /* no "goto unwind" prior to this point!!! */
+	if (cxt.cur_ops->init)
+		cxt.cur_ops->init(); /* no "goto unwind" prior to this point!!! */

 	if (nwriters_stress >= 0)
-		nrealwriters_stress = nwriters_stress;
+		cxt.nrealwriters_stress = nwriters_stress;
 	else
-		nrealwriters_stress = 2 * num_online_cpus();
-	lock_torture_print_module_parms(cur_ops, "Start of test");
+		cxt.nrealwriters_stress = 2 * num_online_cpus();
#ifdef CONFIG_DEBUG_MUTEXES
if (strncmp(torture_type, "mutex", 5) == 0)
cxt.debug_lock = true;
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
if ((strncmp(torture_type, "spin", 4) == 0) ||
(strncmp(torture_type, "rw_lock", 7) == 0))
cxt.debug_lock = true;
#endif
 	/* Initialize the statistics so that each run gets its own numbers. */
 	lock_is_write_held = 0;
-	lwsa = kmalloc(sizeof(*lwsa) * nrealwriters_stress, GFP_KERNEL);
-	if (lwsa == NULL) {
-		VERBOSE_TOROUT_STRING("lwsa: Out of memory");
+	cxt.lwsa = kmalloc(sizeof(*cxt.lwsa) * cxt.nrealwriters_stress, GFP_KERNEL);
+	if (cxt.lwsa == NULL) {
+		VERBOSE_TOROUT_STRING("cxt.lwsa: Out of memory");
 		firsterr = -ENOMEM;
 		goto unwind;
 	}
-	for (i = 0; i < nrealwriters_stress; i++) {
-		lwsa[i].n_write_lock_fail = 0;
-		lwsa[i].n_write_lock_acquired = 0;
+	for (i = 0; i < cxt.nrealwriters_stress; i++) {
+		cxt.lwsa[i].n_lock_fail = 0;
+		cxt.lwsa[i].n_lock_acquired = 0;
 	}

-	/* Start up the kthreads. */
+	if (cxt.cur_ops->readlock) {
if (nreaders_stress >= 0)
cxt.nrealreaders_stress = nreaders_stress;
else {
/*
* By default distribute evenly the number of
* readers and writers. We still run the same number
* of threads as the writer-only locks default.
*/
if (nwriters_stress < 0) /* user doesn't care */
cxt.nrealwriters_stress = num_online_cpus();
cxt.nrealreaders_stress = cxt.nrealwriters_stress;
}
lock_is_read_held = 0;
cxt.lrsa = kmalloc(sizeof(*cxt.lrsa) * cxt.nrealreaders_stress, GFP_KERNEL);
if (cxt.lrsa == NULL) {
VERBOSE_TOROUT_STRING("cxt.lrsa: Out of memory");
firsterr = -ENOMEM;
kfree(cxt.lwsa);
goto unwind;
}
for (i = 0; i < cxt.nrealreaders_stress; i++) {
cxt.lrsa[i].n_lock_fail = 0;
cxt.lrsa[i].n_lock_acquired = 0;
}
}
lock_torture_print_module_parms(cxt.cur_ops, "Start of test");
/* Prepare torture context. */
 	if (onoff_interval > 0) {
 		firsterr = torture_onoff_init(onoff_holdoff * HZ,
 					      onoff_interval * HZ);
@@ -422,18 +750,51 @@ static int __init lock_torture_init(void)
 		goto unwind;
 	}

-	writer_tasks = kzalloc(nrealwriters_stress * sizeof(writer_tasks[0]),
+	writer_tasks = kzalloc(cxt.nrealwriters_stress * sizeof(writer_tasks[0]),
 			       GFP_KERNEL);
 	if (writer_tasks == NULL) {
 		VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory");
 		firsterr = -ENOMEM;
 		goto unwind;
 	}
-	for (i = 0; i < nrealwriters_stress; i++) {
-		firsterr = torture_create_kthread(lock_torture_writer, &lwsa[i],
+	if (cxt.cur_ops->readlock) {
reader_tasks = kzalloc(cxt.nrealreaders_stress * sizeof(reader_tasks[0]),
GFP_KERNEL);
if (reader_tasks == NULL) {
VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory");
firsterr = -ENOMEM;
goto unwind;
}
}
/*
* Create the kthreads and start torturing (oh, those poor little locks).
*
* TODO: Note that we interleave writers with readers, giving writers a
* slight advantage, by creating its kthread first. This can be modified
* for very specific needs, or even let the user choose the policy, if
* ever wanted.
*/
for (i = 0, j = 0; i < cxt.nrealwriters_stress ||
j < cxt.nrealreaders_stress; i++, j++) {
if (i >= cxt.nrealwriters_stress)
goto create_reader;
/* Create writer. */
firsterr = torture_create_kthread(lock_torture_writer, &cxt.lwsa[i],
 						  writer_tasks[i]);
 		if (firsterr)
 			goto unwind;
create_reader:
if (cxt.cur_ops->readlock == NULL || (j >= cxt.nrealreaders_stress))
continue;
/* Create reader. */
firsterr = torture_create_kthread(lock_torture_reader, &cxt.lrsa[j],
reader_tasks[j]);
if (firsterr)
goto unwind;
 	}

 	if (stat_interval > 0) {
 		firsterr = torture_create_kthread(lock_torture_stats, NULL,
View File: kernel/rcu/rcutorture.c
@@ -49,11 +49,19 @@
 #include <linux/trace_clock.h>
 #include <asm/byteorder.h>
 #include <linux/torture.h>
+#include <linux/vmalloc.h>

 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com> and Josh Triplett <josh@joshtriplett.org>");
torture_param(int, cbflood_inter_holdoff, HZ,
"Holdoff between floods (jiffies)");
torture_param(int, cbflood_intra_holdoff, 1,
"Holdoff between bursts (jiffies)");
torture_param(int, cbflood_n_burst, 3, "# bursts in flood, zero to disable");
torture_param(int, cbflood_n_per_burst, 20000,
"# callbacks per burst in flood");
 torture_param(int, fqs_duration, 0,
 	      "Duration of fqs bursts (us), 0 to disable");
 torture_param(int, fqs_holdoff, 0, "Holdoff time within fqs bursts (us)");
@@ -96,10 +104,12 @@ module_param(torture_type, charp, 0444);
 MODULE_PARM_DESC(torture_type, "Type of RCU to torture (rcu, rcu_bh, ...)");

 static int nrealreaders;
+static int ncbflooders;
 static struct task_struct *writer_task;
 static struct task_struct **fakewriter_tasks;
 static struct task_struct **reader_tasks;
 static struct task_struct *stats_task;
+static struct task_struct **cbflood_task;
 static struct task_struct *fqs_task;
 static struct task_struct *boost_tasks[NR_CPUS];
 static struct task_struct *stall_task;
@@ -138,6 +148,7 @@ static long n_rcu_torture_boosts;
 static long n_rcu_torture_timers;
 static long n_barrier_attempts;
 static long n_barrier_successes;
+static atomic_long_t n_cbfloods;
 static struct list_head rcu_torture_removed;

 static int rcu_torture_writer_state;
@@ -157,9 +168,9 @@ static int rcu_torture_writer_state;
 #else
 #define RCUTORTURE_RUNNABLE_INIT 0
 #endif
-int rcutorture_runnable = RCUTORTURE_RUNNABLE_INIT;
-module_param(rcutorture_runnable, int, 0444);
-MODULE_PARM_DESC(rcutorture_runnable, "Start rcutorture at boot");
+static int torture_runnable = RCUTORTURE_RUNNABLE_INIT;
+module_param(torture_runnable, int, 0444);
+MODULE_PARM_DESC(torture_runnable, "Start rcutorture at boot");

 #if defined(CONFIG_RCU_BOOST) && !defined(CONFIG_HOTPLUG_CPU)
 #define rcu_can_boost() 1
@@ -182,7 +193,7 @@ static u64 notrace rcu_trace_clock_local(void)
 #endif /* #else #ifdef CONFIG_RCU_TRACE */

 static unsigned long boost_starttime;	/* jiffies of next boost test start. */
-DEFINE_MUTEX(boost_mutex);		/* protect setting boost_starttime */
+static DEFINE_MUTEX(boost_mutex);	/* protect setting boost_starttime */
 					/* and boost task create/destroy. */
 static atomic_t barrier_cbs_count;	/* Barrier callbacks registered. */
 static bool barrier_phase;		/* Test phase. */
@@ -242,7 +253,7 @@ struct rcu_torture_ops {
 	void (*call)(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
 	void (*cb_barrier)(void);
 	void (*fqs)(void);
-	void (*stats)(char *page);
+	void (*stats)(void);
 	int irq_capable;
 	int can_boost;
 	const char *name;
@@ -525,21 +536,21 @@ static void srcu_torture_barrier(void)
 	srcu_barrier(&srcu_ctl);
 }

-static void srcu_torture_stats(char *page)
+static void srcu_torture_stats(void)
 {
 	int cpu;
 	int idx = srcu_ctl.completed & 0x1;

-	page += sprintf(page, "%s%s per-CPU(idx=%d):",
-		       torture_type, TORTURE_FLAG, idx);
+	pr_alert("%s%s per-CPU(idx=%d):",
+		 torture_type, TORTURE_FLAG, idx);
 	for_each_possible_cpu(cpu) {
 		long c0, c1;

 		c0 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[!idx];
 		c1 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[idx];
-		page += sprintf(page, " %d(%ld,%ld)", cpu, c0, c1);
+		pr_cont(" %d(%ld,%ld)", cpu, c0, c1);
 	}
-	sprintf(page, "\n");
+	pr_cont("\n");
 }

 static void srcu_torture_synchronize_expedited(void)
@@ -601,6 +612,52 @@ static struct rcu_torture_ops sched_ops = {
 	.name		= "sched"
 };
#ifdef CONFIG_TASKS_RCU
/*
* Definitions for RCU-tasks torture testing.
*/
static int tasks_torture_read_lock(void)
{
return 0;
}
static void tasks_torture_read_unlock(int idx)
{
}
static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
{
call_rcu_tasks(&p->rtort_rcu, rcu_torture_cb);
}
static struct rcu_torture_ops tasks_ops = {
.ttype = RCU_TASKS_FLAVOR,
.init = rcu_sync_torture_init,
.readlock = tasks_torture_read_lock,
.read_delay = rcu_read_delay, /* just reuse rcu's version. */
.readunlock = tasks_torture_read_unlock,
.completed = rcu_no_completed,
.deferred_free = rcu_tasks_torture_deferred_free,
.sync = synchronize_rcu_tasks,
.exp_sync = synchronize_rcu_tasks,
.call = call_rcu_tasks,
.cb_barrier = rcu_barrier_tasks,
.fqs = NULL,
.stats = NULL,
.irq_capable = 1,
.name = "tasks"
};
#define RCUTORTURE_TASKS_OPS &tasks_ops,
#else /* #ifdef CONFIG_TASKS_RCU */
#define RCUTORTURE_TASKS_OPS
#endif /* #else #ifdef CONFIG_TASKS_RCU */
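The read-side hooks above are deliberately empty: an RCU-tasks read-side critical section is simply any stretch of kernel code that neither blocks voluntarily nor returns to user space, so there is nothing for a reader hook to do. A minimal, hedged sketch of the update-side API that this flavor exercises follows; the trampoline structure and helper names are hypothetical, only call_rcu_tasks() and synchronize_rcu_tasks() come from this series.

/* Hypothetical usage sketch of the RCU-tasks update side. */
struct trampoline {
	struct rcu_head rh;
	/* generated code would live here */
};

static void trampoline_free_cb(struct rcu_head *rhp)
{
	struct trampoline *tp = container_of(rhp, struct trampoline, rh);

	kfree(tp);	/* no task can still be executing inside it */
}

static void retire_trampoline(struct trampoline *tp)
{
	/* Asynchronously wait until every task has either blocked
	 * voluntarily or run in user space, then free the trampoline.
	 * synchronize_rcu_tasks() would be the blocking equivalent. */
	call_rcu_tasks(&tp->rh, trampoline_free_cb);
}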
 /*
  * RCU torture priority-boost testing.  Runs one real-time thread per
  * CPU for moderate bursts, repeatedly registering RCU callbacks and
@@ -667,7 +724,7 @@ static int rcu_torture_boost(void *arg)
 			}
 			call_rcu_time = jiffies;
 		}
-		cond_resched();
+		cond_resched_rcu_qs();
 		stutter_wait("rcu_torture_boost");
 		if (torture_must_stop())
 			goto checkwait;
@@ -707,6 +764,58 @@ checkwait:	stutter_wait("rcu_torture_boost");
 	return 0;
 }
static void rcu_torture_cbflood_cb(struct rcu_head *rhp)
{
}
/*
* RCU torture callback-flood kthread. Repeatedly induces bursts of calls
* to call_rcu() or analogous, increasing the probability of occurrence
* of callback-overflow corner cases.
*/
static int
rcu_torture_cbflood(void *arg)
{
int err = 1;
int i;
int j;
struct rcu_head *rhp;
if (cbflood_n_per_burst > 0 &&
cbflood_inter_holdoff > 0 &&
cbflood_intra_holdoff > 0 &&
cur_ops->call &&
cur_ops->cb_barrier) {
rhp = vmalloc(sizeof(*rhp) *
cbflood_n_burst * cbflood_n_per_burst);
err = !rhp;
}
if (err) {
VERBOSE_TOROUT_STRING("rcu_torture_cbflood disabled: Bad args or OOM");
while (!torture_must_stop())
schedule_timeout_interruptible(HZ);
return 0;
}
VERBOSE_TOROUT_STRING("rcu_torture_cbflood task started");
do {
schedule_timeout_interruptible(cbflood_inter_holdoff);
atomic_long_inc(&n_cbfloods);
WARN_ON(signal_pending(current));
for (i = 0; i < cbflood_n_burst; i++) {
for (j = 0; j < cbflood_n_per_burst; j++) {
cur_ops->call(&rhp[i * cbflood_n_per_burst + j],
rcu_torture_cbflood_cb);
}
schedule_timeout_interruptible(cbflood_intra_holdoff);
WARN_ON(signal_pending(current));
}
cur_ops->cb_barrier();
stutter_wait("rcu_torture_cbflood");
} while (!torture_must_stop());
torture_kthread_stopping("rcu_torture_cbflood");
return 0;
}
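For scale, with the default parameters each flood posts cbflood_n_burst * cbflood_n_per_burst = 3 * 20000 = 60,000 callbacks, and the vmalloc() above sizes its array accordingly; the helper below just spells out that arithmetic and is not part of the commit (struct rcu_head is two pointers, so 16 bytes on 64-bit builds).

/* Illustrative only: size of the callback-flood array for given parameters. */
static size_t cbflood_array_bytes(int n_burst, int n_per_burst)
{
	/* Defaults: 16 * 3 * 20000 = 960000 bytes, a bit under 1 MB. */
	return sizeof(struct rcu_head) * n_burst * n_per_burst;
}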
 /*
  * RCU torture force-quiescent-state kthread.  Repeatedly induces
  * bursts of calls to force_quiescent_state(), increasing the probability
@@ -1019,7 +1128,7 @@ rcu_torture_reader(void *arg)
 		__this_cpu_inc(rcu_torture_batch[completed]);
 		preempt_enable();
 		cur_ops->readunlock(idx);
-		cond_resched();
+		cond_resched_rcu_qs();
 		stutter_wait("rcu_torture_reader");
 	} while (!torture_must_stop());
 	if (irqreader && cur_ops->irq_capable) {
@@ -1031,10 +1140,15 @@ rcu_torture_reader(void *arg)
 }

 /*
- * Create an RCU-torture statistics message in the specified buffer.
+ * Print torture statistics.  Caller must ensure that there is only
+ * one call to this function at a given time!!!  This is normally
+ * accomplished by relying on the module system to only have one copy
+ * of the module loaded, and then by giving the rcu_torture_stats
+ * kthread full control (or the init/cleanup functions when rcu_torture_stats
+ * thread is not running).
  */
 static void
-rcu_torture_printk(char *page)
+rcu_torture_stats_print(void)
 {
 	int cpu;
 	int i;
@@ -1052,55 +1166,61 @@ rcu_torture_printk(char *page)
 		if (pipesummary[i] != 0)
 			break;
 	}
page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG);
page += sprintf(page, pr_alert("%s%s ", torture_type, TORTURE_FLAG);
"rtc: %p ver: %lu tfle: %d rta: %d rtaf: %d rtf: %d ", pr_cont("rtc: %p ver: %lu tfle: %d rta: %d rtaf: %d rtf: %d ",
rcu_torture_current, rcu_torture_current,
rcu_torture_current_version, rcu_torture_current_version,
list_empty(&rcu_torture_freelist), list_empty(&rcu_torture_freelist),
atomic_read(&n_rcu_torture_alloc), atomic_read(&n_rcu_torture_alloc),
atomic_read(&n_rcu_torture_alloc_fail), atomic_read(&n_rcu_torture_alloc_fail),
atomic_read(&n_rcu_torture_free)); atomic_read(&n_rcu_torture_free));
page += sprintf(page, "rtmbe: %d rtbke: %ld rtbre: %ld ", pr_cont("rtmbe: %d rtbke: %ld rtbre: %ld ",
atomic_read(&n_rcu_torture_mberror), atomic_read(&n_rcu_torture_mberror),
n_rcu_torture_boost_ktrerror, n_rcu_torture_boost_ktrerror,
n_rcu_torture_boost_rterror); n_rcu_torture_boost_rterror);
page += sprintf(page, "rtbf: %ld rtb: %ld nt: %ld ", pr_cont("rtbf: %ld rtb: %ld nt: %ld ",
n_rcu_torture_boost_failure, n_rcu_torture_boost_failure,
n_rcu_torture_boosts, n_rcu_torture_boosts,
n_rcu_torture_timers); n_rcu_torture_timers);
page = torture_onoff_stats(page); torture_onoff_stats();
page += sprintf(page, "barrier: %ld/%ld:%ld", pr_cont("barrier: %ld/%ld:%ld ",
n_barrier_successes, n_barrier_successes,
n_barrier_attempts, n_barrier_attempts,
n_rcu_torture_barrier_error); n_rcu_torture_barrier_error);
page += sprintf(page, "\n%s%s ", torture_type, TORTURE_FLAG); pr_cont("cbflood: %ld\n", atomic_long_read(&n_cbfloods));
pr_alert("%s%s ", torture_type, TORTURE_FLAG);
if (atomic_read(&n_rcu_torture_mberror) != 0 || if (atomic_read(&n_rcu_torture_mberror) != 0 ||
n_rcu_torture_barrier_error != 0 || n_rcu_torture_barrier_error != 0 ||
n_rcu_torture_boost_ktrerror != 0 || n_rcu_torture_boost_ktrerror != 0 ||
n_rcu_torture_boost_rterror != 0 || n_rcu_torture_boost_rterror != 0 ||
n_rcu_torture_boost_failure != 0 || n_rcu_torture_boost_failure != 0 ||
i > 1) { i > 1) {
page += sprintf(page, "!!! "); pr_cont("%s", "!!! ");
atomic_inc(&n_rcu_torture_error); atomic_inc(&n_rcu_torture_error);
WARN_ON_ONCE(1); WARN_ON_ONCE(1);
} }
page += sprintf(page, "Reader Pipe: "); pr_cont("Reader Pipe: ");
for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++) for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++)
page += sprintf(page, " %ld", pipesummary[i]); pr_cont(" %ld", pipesummary[i]);
page += sprintf(page, "\n%s%s ", torture_type, TORTURE_FLAG); pr_cont("\n");
page += sprintf(page, "Reader Batch: ");
pr_alert("%s%s ", torture_type, TORTURE_FLAG);
pr_cont("Reader Batch: ");
for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++) for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++)
page += sprintf(page, " %ld", batchsummary[i]); pr_cont(" %ld", batchsummary[i]);
page += sprintf(page, "\n%s%s ", torture_type, TORTURE_FLAG); pr_cont("\n");
page += sprintf(page, "Free-Block Circulation: ");
pr_alert("%s%s ", torture_type, TORTURE_FLAG);
pr_cont("Free-Block Circulation: ");
for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++) { for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++) {
page += sprintf(page, " %d", pr_cont(" %d", atomic_read(&rcu_torture_wcount[i]));
atomic_read(&rcu_torture_wcount[i]));
} }
page += sprintf(page, "\n"); pr_cont("\n");
if (cur_ops->stats) if (cur_ops->stats)
cur_ops->stats(page); cur_ops->stats();
if (rtcv_snap == rcu_torture_current_version && if (rtcv_snap == rcu_torture_current_version &&
rcu_torture_current != NULL) { rcu_torture_current != NULL) {
int __maybe_unused flags; int __maybe_unused flags;
@ -1109,40 +1229,15 @@ rcu_torture_printk(char *page)
rcutorture_get_gp_data(cur_ops->ttype, rcutorture_get_gp_data(cur_ops->ttype,
&flags, &gpnum, &completed); &flags, &gpnum, &completed);
page += sprintf(page, pr_alert("??? Writer stall state %d g%lu c%lu f%#x\n",
"??? Writer stall state %d g%lu c%lu f%#x\n", rcu_torture_writer_state,
rcu_torture_writer_state, gpnum, completed, flags);
gpnum, completed, flags);
show_rcu_gp_kthreads(); show_rcu_gp_kthreads();
rcutorture_trace_dump(); rcutorture_trace_dump();
} }
rtcv_snap = rcu_torture_current_version; rtcv_snap = rcu_torture_current_version;
} }
/*
* Print torture statistics. Caller must ensure that there is only
* one call to this function at a given time!!! This is normally
* accomplished by relying on the module system to only have one copy
* of the module loaded, and then by giving the rcu_torture_stats
* kthread full control (or the init/cleanup functions when rcu_torture_stats
* thread is not running).
*/
static void
rcu_torture_stats_print(void)
{
int size = nr_cpu_ids * 200 + 8192;
char *buf;
buf = kmalloc(size, GFP_KERNEL);
if (!buf) {
pr_err("rcu-torture: Out of memory, need: %d", size);
return;
}
rcu_torture_printk(buf);
pr_alert("%s", buf);
kfree(buf);
}
/* /*
* Periodically prints torture statistics, if periodic statistics printing * Periodically prints torture statistics, if periodic statistics printing
* was specified via the stat_interval module parameter. * was specified via the stat_interval module parameter.
@ -1295,7 +1390,8 @@ static int rcu_torture_barrier_cbs(void *arg)
if (atomic_dec_and_test(&barrier_cbs_count)) if (atomic_dec_and_test(&barrier_cbs_count))
wake_up(&barrier_wq); wake_up(&barrier_wq);
} while (!torture_must_stop()); } while (!torture_must_stop());
cur_ops->cb_barrier(); if (cur_ops->cb_barrier != NULL)
cur_ops->cb_barrier();
destroy_rcu_head_on_stack(&rcu); destroy_rcu_head_on_stack(&rcu);
torture_kthread_stopping("rcu_torture_barrier_cbs"); torture_kthread_stopping("rcu_torture_barrier_cbs");
return 0; return 0;
@ -1418,7 +1514,7 @@ rcu_torture_cleanup(void)
int i; int i;
rcutorture_record_test_transition(); rcutorture_record_test_transition();
if (torture_cleanup()) { if (torture_cleanup_begin()) {
if (cur_ops->cb_barrier != NULL) if (cur_ops->cb_barrier != NULL)
cur_ops->cb_barrier(); cur_ops->cb_barrier();
return; return;
@ -1447,6 +1543,8 @@ rcu_torture_cleanup(void)
torture_stop_kthread(rcu_torture_stats, stats_task); torture_stop_kthread(rcu_torture_stats, stats_task);
torture_stop_kthread(rcu_torture_fqs, fqs_task); torture_stop_kthread(rcu_torture_fqs, fqs_task);
for (i = 0; i < ncbflooders; i++)
torture_stop_kthread(rcu_torture_cbflood, cbflood_task[i]);
if ((test_boost == 1 && cur_ops->can_boost) || if ((test_boost == 1 && cur_ops->can_boost) ||
test_boost == 2) { test_boost == 2) {
unregister_cpu_notifier(&rcutorture_cpu_nb); unregister_cpu_notifier(&rcutorture_cpu_nb);
@ -1468,6 +1566,7 @@ rcu_torture_cleanup(void)
"End of test: RCU_HOTPLUG"); "End of test: RCU_HOTPLUG");
else else
rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS"); rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS");
torture_cleanup_end();
} }
#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
@ -1534,9 +1633,10 @@ rcu_torture_init(void)
int firsterr = 0; int firsterr = 0;
static struct rcu_torture_ops *torture_ops[] = { static struct rcu_torture_ops *torture_ops[] = {
&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops, &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops,
RCUTORTURE_TASKS_OPS
}; };
if (!torture_init_begin(torture_type, verbose, &rcutorture_runnable)) if (!torture_init_begin(torture_type, verbose, &torture_runnable))
return -EBUSY; return -EBUSY;
/* Process args and tell the world that the torturer is on the job. */ /* Process args and tell the world that the torturer is on the job. */
@ -1693,6 +1793,24 @@ rcu_torture_init(void)
goto unwind; goto unwind;
if (object_debug) if (object_debug)
rcu_test_debug_objects(); rcu_test_debug_objects();
if (cbflood_n_burst > 0) {
/* Create the cbflood threads */
ncbflooders = (num_online_cpus() + 3) / 4;
cbflood_task = kcalloc(ncbflooders, sizeof(*cbflood_task),
GFP_KERNEL);
if (!cbflood_task) {
VERBOSE_TOROUT_ERRSTRING("out of memory");
firsterr = -ENOMEM;
goto unwind;
}
for (i = 0; i < ncbflooders; i++) {
firsterr = torture_create_kthread(rcu_torture_cbflood,
NULL,
cbflood_task[i]);
if (firsterr)
goto unwind;
}
}
rcutorture_record_test_transition(); rcutorture_record_test_transition();
torture_init_end(); torture_init_end();
return 0; return 0;
View File: kernel/rcu/tiny.c
@ -51,7 +51,7 @@ static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
#include "tiny_plugin.h" #include "tiny_plugin.h"
/* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */ /* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcu/tree.c. */
static void rcu_idle_enter_common(long long newval) static void rcu_idle_enter_common(long long newval)
{ {
if (newval) { if (newval) {
@ -62,7 +62,7 @@ static void rcu_idle_enter_common(long long newval)
} }
RCU_TRACE(trace_rcu_dyntick(TPS("Start"), RCU_TRACE(trace_rcu_dyntick(TPS("Start"),
rcu_dynticks_nesting, newval)); rcu_dynticks_nesting, newval));
if (!is_idle_task(current)) { if (IS_ENABLED(CONFIG_RCU_TRACE) && !is_idle_task(current)) {
struct task_struct *idle __maybe_unused = idle_task(smp_processor_id()); struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
RCU_TRACE(trace_rcu_dyntick(TPS("Entry error: not idle task"), RCU_TRACE(trace_rcu_dyntick(TPS("Entry error: not idle task"),
@ -72,7 +72,7 @@ static void rcu_idle_enter_common(long long newval)
current->pid, current->comm, current->pid, current->comm,
idle->pid, idle->comm); /* must be idle task! */ idle->pid, idle->comm); /* must be idle task! */
} }
rcu_sched_qs(0); /* implies rcu_bh_qsctr_inc(0) */ rcu_sched_qs(); /* implies rcu_bh_inc() */
barrier(); barrier();
rcu_dynticks_nesting = newval; rcu_dynticks_nesting = newval;
} }
@ -114,7 +114,7 @@ void rcu_irq_exit(void)
} }
EXPORT_SYMBOL_GPL(rcu_irq_exit); EXPORT_SYMBOL_GPL(rcu_irq_exit);
/* Common code for rcu_idle_exit() and rcu_irq_enter(), see kernel/rcutree.c. */ /* Common code for rcu_idle_exit() and rcu_irq_enter(), see kernel/rcu/tree.c. */
static void rcu_idle_exit_common(long long oldval) static void rcu_idle_exit_common(long long oldval)
{ {
if (oldval) { if (oldval) {
@ -123,7 +123,7 @@ static void rcu_idle_exit_common(long long oldval)
return; return;
} }
RCU_TRACE(trace_rcu_dyntick(TPS("End"), oldval, rcu_dynticks_nesting)); RCU_TRACE(trace_rcu_dyntick(TPS("End"), oldval, rcu_dynticks_nesting));
if (!is_idle_task(current)) { if (IS_ENABLED(CONFIG_RCU_TRACE) && !is_idle_task(current)) {
struct task_struct *idle __maybe_unused = idle_task(smp_processor_id()); struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
RCU_TRACE(trace_rcu_dyntick(TPS("Exit error: not idle task"), RCU_TRACE(trace_rcu_dyntick(TPS("Exit error: not idle task"),
@ -217,7 +217,7 @@ static int rcu_qsctr_help(struct rcu_ctrlblk *rcp)
* are at it, given that any rcu quiescent state is also an rcu_bh * are at it, given that any rcu quiescent state is also an rcu_bh
* quiescent state. Use "+" instead of "||" to defeat short circuiting. * quiescent state. Use "+" instead of "||" to defeat short circuiting.
*/ */
void rcu_sched_qs(int cpu) void rcu_sched_qs(void)
{ {
unsigned long flags; unsigned long flags;
@ -231,7 +231,7 @@ void rcu_sched_qs(int cpu)
/* /*
* Record an rcu_bh quiescent state. * Record an rcu_bh quiescent state.
*/ */
void rcu_bh_qs(int cpu) void rcu_bh_qs(void)
{ {
unsigned long flags; unsigned long flags;
@ -251,9 +251,11 @@ void rcu_check_callbacks(int cpu, int user)
{ {
RCU_TRACE(check_cpu_stalls()); RCU_TRACE(check_cpu_stalls());
if (user || rcu_is_cpu_rrupt_from_idle()) if (user || rcu_is_cpu_rrupt_from_idle())
rcu_sched_qs(cpu); rcu_sched_qs();
else if (!in_softirq()) else if (!in_softirq())
rcu_bh_qs(cpu); rcu_bh_qs();
if (user)
rcu_note_voluntary_context_switch(current);
} }
/* /*
View File: kernel/rcu/tree.c
@ -79,9 +79,18 @@ static struct lock_class_key rcu_fqs_class[RCU_NUM_LVLS];
* the tracing userspace tools to be able to decipher the string * the tracing userspace tools to be able to decipher the string
* address to the matching string. * address to the matching string.
*/ */
#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \ #ifdef CONFIG_TRACING
# define DEFINE_RCU_TPS(sname) \
static char sname##_varname[] = #sname; \ static char sname##_varname[] = #sname; \
static const char *tp_##sname##_varname __used __tracepoint_string = sname##_varname; \ static const char *tp_##sname##_varname __used __tracepoint_string = sname##_varname;
# define RCU_STATE_NAME(sname) sname##_varname
#else
# define DEFINE_RCU_TPS(sname)
# define RCU_STATE_NAME(sname) __stringify(sname)
#endif
#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \
DEFINE_RCU_TPS(sname) \
struct rcu_state sname##_state = { \ struct rcu_state sname##_state = { \
.level = { &sname##_state.node[0] }, \ .level = { &sname##_state.node[0] }, \
.call = cr, \ .call = cr, \
@ -93,7 +102,7 @@ struct rcu_state sname##_state = { \
.orphan_donetail = &sname##_state.orphan_donelist, \ .orphan_donetail = &sname##_state.orphan_donelist, \
.barrier_mutex = __MUTEX_INITIALIZER(sname##_state.barrier_mutex), \ .barrier_mutex = __MUTEX_INITIALIZER(sname##_state.barrier_mutex), \
.onoff_mutex = __MUTEX_INITIALIZER(sname##_state.onoff_mutex), \ .onoff_mutex = __MUTEX_INITIALIZER(sname##_state.onoff_mutex), \
.name = sname##_varname, \ .name = RCU_STATE_NAME(sname), \
.abbr = sabbr, \ .abbr = sabbr, \
}; \ }; \
DEFINE_PER_CPU(struct rcu_data, sname##_data) DEFINE_PER_CPU(struct rcu_data, sname##_data)
@ -188,22 +197,24 @@ static int rcu_gp_in_progress(struct rcu_state *rsp)
* one since the start of the grace period, this just sets a flag. * one since the start of the grace period, this just sets a flag.
* The caller must have disabled preemption. * The caller must have disabled preemption.
*/ */
void rcu_sched_qs(int cpu) void rcu_sched_qs(void)
{ {
struct rcu_data *rdp = &per_cpu(rcu_sched_data, cpu); if (!__this_cpu_read(rcu_sched_data.passed_quiesce)) {
trace_rcu_grace_period(TPS("rcu_sched"),
if (rdp->passed_quiesce == 0) __this_cpu_read(rcu_sched_data.gpnum),
trace_rcu_grace_period(TPS("rcu_sched"), rdp->gpnum, TPS("cpuqs")); TPS("cpuqs"));
rdp->passed_quiesce = 1; __this_cpu_write(rcu_sched_data.passed_quiesce, 1);
}
} }
void rcu_bh_qs(int cpu) void rcu_bh_qs(void)
{ {
struct rcu_data *rdp = &per_cpu(rcu_bh_data, cpu); if (!__this_cpu_read(rcu_bh_data.passed_quiesce)) {
trace_rcu_grace_period(TPS("rcu_bh"),
if (rdp->passed_quiesce == 0) __this_cpu_read(rcu_bh_data.gpnum),
trace_rcu_grace_period(TPS("rcu_bh"), rdp->gpnum, TPS("cpuqs")); TPS("cpuqs"));
rdp->passed_quiesce = 1; __this_cpu_write(rcu_bh_data.passed_quiesce, 1);
}
} }
static DEFINE_PER_CPU(int, rcu_sched_qs_mask); static DEFINE_PER_CPU(int, rcu_sched_qs_mask);
@ -278,7 +289,7 @@ static void rcu_momentary_dyntick_idle(void)
void rcu_note_context_switch(int cpu) void rcu_note_context_switch(int cpu)
{ {
trace_rcu_utilization(TPS("Start context switch")); trace_rcu_utilization(TPS("Start context switch"));
rcu_sched_qs(cpu); rcu_sched_qs();
rcu_preempt_note_context_switch(cpu); rcu_preempt_note_context_switch(cpu);
if (unlikely(raw_cpu_read(rcu_sched_qs_mask))) if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))
rcu_momentary_dyntick_idle(); rcu_momentary_dyntick_idle();
@ -526,6 +537,7 @@ static void rcu_eqs_enter_common(struct rcu_dynticks *rdtp, long long oldval,
atomic_inc(&rdtp->dynticks); atomic_inc(&rdtp->dynticks);
smp_mb__after_atomic(); /* Force ordering with next sojourn. */ smp_mb__after_atomic(); /* Force ordering with next sojourn. */
WARN_ON_ONCE(atomic_read(&rdtp->dynticks) & 0x1); WARN_ON_ONCE(atomic_read(&rdtp->dynticks) & 0x1);
rcu_dynticks_task_enter();
/* /*
* It is illegal to enter an extended quiescent state while * It is illegal to enter an extended quiescent state while
@ -642,6 +654,7 @@ void rcu_irq_exit(void)
static void rcu_eqs_exit_common(struct rcu_dynticks *rdtp, long long oldval, static void rcu_eqs_exit_common(struct rcu_dynticks *rdtp, long long oldval,
int user) int user)
{ {
rcu_dynticks_task_exit();
smp_mb__before_atomic(); /* Force ordering w/previous sojourn. */ smp_mb__before_atomic(); /* Force ordering w/previous sojourn. */
atomic_inc(&rdtp->dynticks); atomic_inc(&rdtp->dynticks);
/* CPUs seeing atomic_inc() must see later RCU read-side crit sects */ /* CPUs seeing atomic_inc() must see later RCU read-side crit sects */
@ -819,7 +832,7 @@ bool notrace __rcu_is_watching(void)
*/ */
bool notrace rcu_is_watching(void) bool notrace rcu_is_watching(void)
{ {
int ret; bool ret;
preempt_disable(); preempt_disable();
ret = __rcu_is_watching(); ret = __rcu_is_watching();
@ -1647,7 +1660,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
rnp->level, rnp->grplo, rnp->level, rnp->grplo,
rnp->grphi, rnp->qsmask); rnp->grphi, rnp->qsmask);
raw_spin_unlock_irq(&rnp->lock); raw_spin_unlock_irq(&rnp->lock);
cond_resched(); cond_resched_rcu_qs();
} }
mutex_unlock(&rsp->onoff_mutex); mutex_unlock(&rsp->onoff_mutex);
@ -1668,7 +1681,7 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
if (fqs_state == RCU_SAVE_DYNTICK) { if (fqs_state == RCU_SAVE_DYNTICK) {
/* Collect dyntick-idle snapshots. */ /* Collect dyntick-idle snapshots. */
if (is_sysidle_rcu_state(rsp)) { if (is_sysidle_rcu_state(rsp)) {
isidle = 1; isidle = true;
maxj = jiffies - ULONG_MAX / 4; maxj = jiffies - ULONG_MAX / 4;
} }
force_qs_rnp(rsp, dyntick_save_progress_counter, force_qs_rnp(rsp, dyntick_save_progress_counter,
@ -1677,14 +1690,15 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
fqs_state = RCU_FORCE_QS; fqs_state = RCU_FORCE_QS;
} else { } else {
/* Handle dyntick-idle and offline CPUs. */ /* Handle dyntick-idle and offline CPUs. */
isidle = 0; isidle = false;
force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj); force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
} }
/* Clear flag to prevent immediate re-entry. */ /* Clear flag to prevent immediate re-entry. */
if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) { if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
raw_spin_lock_irq(&rnp->lock); raw_spin_lock_irq(&rnp->lock);
smp_mb__after_unlock_lock(); smp_mb__after_unlock_lock();
ACCESS_ONCE(rsp->gp_flags) &= ~RCU_GP_FLAG_FQS; ACCESS_ONCE(rsp->gp_flags) =
ACCESS_ONCE(rsp->gp_flags) & ~RCU_GP_FLAG_FQS;
raw_spin_unlock_irq(&rnp->lock); raw_spin_unlock_irq(&rnp->lock);
} }
return fqs_state; return fqs_state;
@ -1736,7 +1750,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
/* smp_mb() provided by prior unlock-lock pair. */ /* smp_mb() provided by prior unlock-lock pair. */
nocb += rcu_future_gp_cleanup(rsp, rnp); nocb += rcu_future_gp_cleanup(rsp, rnp);
raw_spin_unlock_irq(&rnp->lock); raw_spin_unlock_irq(&rnp->lock);
cond_resched(); cond_resched_rcu_qs();
} }
rnp = rcu_get_root(rsp); rnp = rcu_get_root(rsp);
raw_spin_lock_irq(&rnp->lock); raw_spin_lock_irq(&rnp->lock);
@ -1785,8 +1799,8 @@ static int __noreturn rcu_gp_kthread(void *arg)
/* Locking provides needed memory barrier. */ /* Locking provides needed memory barrier. */
if (rcu_gp_init(rsp)) if (rcu_gp_init(rsp))
break; break;
cond_resched(); cond_resched_rcu_qs();
flush_signals(current); WARN_ON(signal_pending(current));
trace_rcu_grace_period(rsp->name, trace_rcu_grace_period(rsp->name,
ACCESS_ONCE(rsp->gpnum), ACCESS_ONCE(rsp->gpnum),
TPS("reqwaitsig")); TPS("reqwaitsig"));
@ -1828,11 +1842,11 @@ static int __noreturn rcu_gp_kthread(void *arg)
trace_rcu_grace_period(rsp->name, trace_rcu_grace_period(rsp->name,
ACCESS_ONCE(rsp->gpnum), ACCESS_ONCE(rsp->gpnum),
TPS("fqsend")); TPS("fqsend"));
cond_resched(); cond_resched_rcu_qs();
} else { } else {
/* Deal with stray signal. */ /* Deal with stray signal. */
cond_resched(); cond_resched_rcu_qs();
flush_signals(current); WARN_ON(signal_pending(current));
trace_rcu_grace_period(rsp->name, trace_rcu_grace_period(rsp->name,
ACCESS_ONCE(rsp->gpnum), ACCESS_ONCE(rsp->gpnum),
TPS("fqswaitsig")); TPS("fqswaitsig"));
@ -1928,7 +1942,7 @@ static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
{ {
WARN_ON_ONCE(!rcu_gp_in_progress(rsp)); WARN_ON_ONCE(!rcu_gp_in_progress(rsp));
raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags); raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */ rcu_gp_kthread_wake(rsp);
} }
/* /*
@ -2210,8 +2224,6 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
/* Adjust any no-longer-needed kthreads. */ /* Adjust any no-longer-needed kthreads. */
rcu_boost_kthread_setaffinity(rnp, -1); rcu_boost_kthread_setaffinity(rnp, -1);
/* Remove the dead CPU from the bitmasks in the rcu_node hierarchy. */
/* Exclude any attempts to start a new grace period. */ /* Exclude any attempts to start a new grace period. */
mutex_lock(&rsp->onoff_mutex); mutex_lock(&rsp->onoff_mutex);
raw_spin_lock_irqsave(&rsp->orphan_lock, flags); raw_spin_lock_irqsave(&rsp->orphan_lock, flags);
@ -2393,8 +2405,8 @@ void rcu_check_callbacks(int cpu, int user)
* at least not while the corresponding CPU is online. * at least not while the corresponding CPU is online.
*/ */
rcu_sched_qs(cpu); rcu_sched_qs();
rcu_bh_qs(cpu); rcu_bh_qs();
} else if (!in_softirq()) { } else if (!in_softirq()) {
@ -2405,11 +2417,13 @@ void rcu_check_callbacks(int cpu, int user)
* critical section, so note it. * critical section, so note it.
*/ */
rcu_bh_qs(cpu); rcu_bh_qs();
} }
rcu_preempt_check_callbacks(cpu); rcu_preempt_check_callbacks(cpu);
if (rcu_pending(cpu)) if (rcu_pending(cpu))
invoke_rcu_core(); invoke_rcu_core();
if (user)
rcu_note_voluntary_context_switch(current);
trace_rcu_utilization(TPS("End scheduler-tick")); trace_rcu_utilization(TPS("End scheduler-tick"));
} }
@ -2432,7 +2446,7 @@ static void force_qs_rnp(struct rcu_state *rsp,
struct rcu_node *rnp; struct rcu_node *rnp;
rcu_for_each_leaf_node(rsp, rnp) { rcu_for_each_leaf_node(rsp, rnp) {
cond_resched(); cond_resched_rcu_qs();
mask = 0; mask = 0;
raw_spin_lock_irqsave(&rnp->lock, flags); raw_spin_lock_irqsave(&rnp->lock, flags);
smp_mb__after_unlock_lock(); smp_mb__after_unlock_lock();
@ -2449,7 +2463,7 @@ static void force_qs_rnp(struct rcu_state *rsp,
for (; cpu <= rnp->grphi; cpu++, bit <<= 1) { for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
if ((rnp->qsmask & bit) != 0) { if ((rnp->qsmask & bit) != 0) {
if ((rnp->qsmaskinit & bit) != 0) if ((rnp->qsmaskinit & bit) != 0)
*isidle = 0; *isidle = false;
if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj)) if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
mask |= bit; mask |= bit;
} }
@ -2505,9 +2519,10 @@ static void force_quiescent_state(struct rcu_state *rsp)
raw_spin_unlock_irqrestore(&rnp_old->lock, flags); raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
return; /* Someone beat us to it. */ return; /* Someone beat us to it. */
} }
ACCESS_ONCE(rsp->gp_flags) |= RCU_GP_FLAG_FQS; ACCESS_ONCE(rsp->gp_flags) =
ACCESS_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS;
raw_spin_unlock_irqrestore(&rnp_old->lock, flags); raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */ rcu_gp_kthread_wake(rsp);
} }
/* /*
@ -2925,11 +2940,6 @@ static int synchronize_sched_expedited_cpu_stop(void *data)
* restructure your code to batch your updates, and then use a single * restructure your code to batch your updates, and then use a single
* synchronize_sched() instead. * synchronize_sched() instead.
* *
* Note that it is illegal to call this function while holding any lock
* that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
* to call this function from a CPU-hotplug notifier. Failing to observe
* these restriction will result in deadlock.
*
* This implementation can be thought of as an application of ticket * This implementation can be thought of as an application of ticket
* locking to RCU, with sync_sched_expedited_started and * locking to RCU, with sync_sched_expedited_started and
* sync_sched_expedited_done taking on the roles of the halves * sync_sched_expedited_done taking on the roles of the halves
@ -2979,7 +2989,12 @@ void synchronize_sched_expedited(void)
*/ */
snap = atomic_long_inc_return(&rsp->expedited_start); snap = atomic_long_inc_return(&rsp->expedited_start);
firstsnap = snap; firstsnap = snap;
get_online_cpus(); if (!try_get_online_cpus()) {
/* CPU hotplug operation in flight, fall back to normal GP. */
wait_rcu_gp(call_rcu_sched);
atomic_long_inc(&rsp->expedited_normal);
return;
}
WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id())); WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
/* /*
@ -3026,7 +3041,12 @@ void synchronize_sched_expedited(void)
* and they started after our first try, so their grace * and they started after our first try, so their grace
* period works for us. * period works for us.
*/ */
get_online_cpus(); if (!try_get_online_cpus()) {
/* CPU hotplug operation in flight, use normal GP. */
wait_rcu_gp(call_rcu_sched);
atomic_long_inc(&rsp->expedited_normal);
return;
}
snap = atomic_long_read(&rsp->expedited_start); snap = atomic_long_read(&rsp->expedited_start);
smp_mb(); /* ensure read is before try_stop_cpus(). */ smp_mb(); /* ensure read is before try_stop_cpus(). */
} }
@ -3442,6 +3462,7 @@ static int rcu_cpu_notify(struct notifier_block *self,
case CPU_UP_PREPARE_FROZEN: case CPU_UP_PREPARE_FROZEN:
rcu_prepare_cpu(cpu); rcu_prepare_cpu(cpu);
rcu_prepare_kthreads(cpu); rcu_prepare_kthreads(cpu);
rcu_spawn_all_nocb_kthreads(cpu);
break; break;
case CPU_ONLINE: case CPU_ONLINE:
case CPU_DOWN_FAILED: case CPU_DOWN_FAILED:
@ -3489,7 +3510,7 @@ static int rcu_pm_notify(struct notifier_block *self,
} }
/* /*
* Spawn the kthread that handles this RCU flavor's grace periods. * Spawn the kthreads that handle each RCU flavor's grace periods.
*/ */
static int __init rcu_spawn_gp_kthread(void) static int __init rcu_spawn_gp_kthread(void)
{ {
@ -3498,6 +3519,7 @@ static int __init rcu_spawn_gp_kthread(void)
struct rcu_state *rsp; struct rcu_state *rsp;
struct task_struct *t; struct task_struct *t;
rcu_scheduler_fully_active = 1;
for_each_rcu_flavor(rsp) { for_each_rcu_flavor(rsp) {
t = kthread_run(rcu_gp_kthread, rsp, "%s", rsp->name); t = kthread_run(rcu_gp_kthread, rsp, "%s", rsp->name);
BUG_ON(IS_ERR(t)); BUG_ON(IS_ERR(t));
@ -3505,8 +3527,9 @@ static int __init rcu_spawn_gp_kthread(void)
raw_spin_lock_irqsave(&rnp->lock, flags); raw_spin_lock_irqsave(&rnp->lock, flags);
rsp->gp_kthread = t; rsp->gp_kthread = t;
raw_spin_unlock_irqrestore(&rnp->lock, flags); raw_spin_unlock_irqrestore(&rnp->lock, flags);
rcu_spawn_nocb_kthreads(rsp);
} }
rcu_spawn_nocb_kthreads();
rcu_spawn_boost_kthreads();
return 0; return 0;
} }
early_initcall(rcu_spawn_gp_kthread); early_initcall(rcu_spawn_gp_kthread);
View File: kernel/rcu/tree.h
@ -350,7 +350,7 @@ struct rcu_data {
int nocb_p_count_lazy; /* (approximate). */ int nocb_p_count_lazy; /* (approximate). */
wait_queue_head_t nocb_wq; /* For nocb kthreads to sleep on. */ wait_queue_head_t nocb_wq; /* For nocb kthreads to sleep on. */
struct task_struct *nocb_kthread; struct task_struct *nocb_kthread;
bool nocb_defer_wakeup; /* Defer wakeup of nocb_kthread. */ int nocb_defer_wakeup; /* Defer wakeup of nocb_kthread. */
/* The following fields are used by the leader, hence own cacheline. */ /* The following fields are used by the leader, hence own cacheline. */
struct rcu_head *nocb_gp_head ____cacheline_internodealigned_in_smp; struct rcu_head *nocb_gp_head ____cacheline_internodealigned_in_smp;
@ -383,6 +383,11 @@ struct rcu_data {
#define RCU_FORCE_QS 3 /* Need to force quiescent state. */ #define RCU_FORCE_QS 3 /* Need to force quiescent state. */
#define RCU_SIGNAL_INIT RCU_SAVE_DYNTICK #define RCU_SIGNAL_INIT RCU_SAVE_DYNTICK
/* Values for nocb_defer_wakeup field in struct rcu_data. */
#define RCU_NOGP_WAKE_NOT 0
#define RCU_NOGP_WAKE 1
#define RCU_NOGP_WAKE_FORCE 2
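These three values replace the old boolean nocb_defer_wakeup field (see the rcu_data change above), so the state can now record not just that a deferred wakeup is pending but whether it must be a forced one. A hedged sketch of the kind of test this enables, using a hypothetical helper name:

/* Illustrative only: the deferred-wakeup field is now a tri-state. */
static bool nocb_wakeup_is_deferred(struct rcu_data *rdp)
{
	return ACCESS_ONCE(rdp->nocb_defer_wakeup) != RCU_NOGP_WAKE_NOT;
}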
#define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500)) #define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500))
/* For jiffies_till_first_fqs and */ /* For jiffies_till_first_fqs and */
/* and jiffies_till_next_fqs. */ /* and jiffies_till_next_fqs. */
@ -572,6 +577,7 @@ static void rcu_preempt_do_callbacks(void);
static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp, static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
struct rcu_node *rnp); struct rcu_node *rnp);
#endif /* #ifdef CONFIG_RCU_BOOST */ #endif /* #ifdef CONFIG_RCU_BOOST */
static void __init rcu_spawn_boost_kthreads(void);
static void rcu_prepare_kthreads(int cpu); static void rcu_prepare_kthreads(int cpu);
static void rcu_cleanup_after_idle(int cpu); static void rcu_cleanup_after_idle(int cpu);
static void rcu_prepare_for_idle(int cpu); static void rcu_prepare_for_idle(int cpu);
@ -589,10 +595,14 @@ static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
static bool rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp, static bool rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
struct rcu_data *rdp, struct rcu_data *rdp,
unsigned long flags); unsigned long flags);
static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp); static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp);
static void do_nocb_deferred_wakeup(struct rcu_data *rdp); static void do_nocb_deferred_wakeup(struct rcu_data *rdp);
static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp); static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp); static void rcu_spawn_all_nocb_kthreads(int cpu);
static void __init rcu_spawn_nocb_kthreads(void);
#ifdef CONFIG_RCU_NOCB_CPU
static void __init rcu_organize_nocb_kthreads(struct rcu_state *rsp);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
static void __maybe_unused rcu_kick_nohz_cpu(int cpu); static void __maybe_unused rcu_kick_nohz_cpu(int cpu);
static bool init_nocb_callback_list(struct rcu_data *rdp); static bool init_nocb_callback_list(struct rcu_data *rdp);
static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq); static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
@ -605,6 +615,8 @@ static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
static void rcu_bind_gp_kthread(void); static void rcu_bind_gp_kthread(void);
static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp); static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
static bool rcu_nohz_full_cpu(struct rcu_state *rsp); static bool rcu_nohz_full_cpu(struct rcu_state *rsp);
static void rcu_dynticks_task_enter(void);
static void rcu_dynticks_task_exit(void);
#endif /* #ifndef RCU_TREE_NONCORE */ #endif /* #ifndef RCU_TREE_NONCORE */
View File: kernel/rcu/tree_plugin.h
@ -85,33 +85,6 @@ static void __init rcu_bootup_announce_oddness(void)
pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf); pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf);
if (nr_cpu_ids != NR_CPUS) if (nr_cpu_ids != NR_CPUS)
pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids); pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids);
#ifdef CONFIG_RCU_NOCB_CPU
#ifndef CONFIG_RCU_NOCB_CPU_NONE
if (!have_rcu_nocb_mask) {
zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
have_rcu_nocb_mask = true;
}
#ifdef CONFIG_RCU_NOCB_CPU_ZERO
pr_info("\tOffload RCU callbacks from CPU 0\n");
cpumask_set_cpu(0, rcu_nocb_mask);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU_ZERO */
#ifdef CONFIG_RCU_NOCB_CPU_ALL
pr_info("\tOffload RCU callbacks from all CPUs\n");
cpumask_copy(rcu_nocb_mask, cpu_possible_mask);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU_ALL */
#endif /* #ifndef CONFIG_RCU_NOCB_CPU_NONE */
if (have_rcu_nocb_mask) {
if (!cpumask_subset(rcu_nocb_mask, cpu_possible_mask)) {
pr_info("\tNote: kernel parameter 'rcu_nocbs=' contains nonexistent CPUs.\n");
cpumask_and(rcu_nocb_mask, cpu_possible_mask,
rcu_nocb_mask);
}
cpulist_scnprintf(nocb_buf, sizeof(nocb_buf), rcu_nocb_mask);
pr_info("\tOffload RCU callbacks from CPUs: %s.\n", nocb_buf);
if (rcu_nocb_poll)
pr_info("\tPoll for callbacks from no-CBs CPUs.\n");
}
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
} }
#ifdef CONFIG_TREE_PREEMPT_RCU #ifdef CONFIG_TREE_PREEMPT_RCU
@ -134,7 +107,7 @@ static void __init rcu_bootup_announce(void)
* Return the number of RCU-preempt batches processed thus far * Return the number of RCU-preempt batches processed thus far
* for debug and statistics. * for debug and statistics.
*/ */
long rcu_batches_completed_preempt(void) static long rcu_batches_completed_preempt(void)
{ {
return rcu_preempt_state.completed; return rcu_preempt_state.completed;
} }
@ -155,18 +128,19 @@ EXPORT_SYMBOL_GPL(rcu_batches_completed);
* not in a quiescent state. There might be any number of tasks blocked * not in a quiescent state. There might be any number of tasks blocked
* while in an RCU read-side critical section. * while in an RCU read-side critical section.
* *
* Unlike the other rcu_*_qs() functions, callers to this function * As with the other rcu_*_qs() functions, callers to this function
* must disable irqs in order to protect the assignment to * must disable preemption.
* ->rcu_read_unlock_special.
*/ */
static void rcu_preempt_qs(int cpu) static void rcu_preempt_qs(void)
{ {
struct rcu_data *rdp = &per_cpu(rcu_preempt_data, cpu); if (!__this_cpu_read(rcu_preempt_data.passed_quiesce)) {
trace_rcu_grace_period(TPS("rcu_preempt"),
if (rdp->passed_quiesce == 0) __this_cpu_read(rcu_preempt_data.gpnum),
trace_rcu_grace_period(TPS("rcu_preempt"), rdp->gpnum, TPS("cpuqs")); TPS("cpuqs"));
rdp->passed_quiesce = 1; __this_cpu_write(rcu_preempt_data.passed_quiesce, 1);
current->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS; barrier(); /* Coordinate with rcu_preempt_check_callbacks(). */
current->rcu_read_unlock_special.b.need_qs = false;
}
} }
/* /*
@ -190,14 +164,14 @@ static void rcu_preempt_note_context_switch(int cpu)
struct rcu_node *rnp; struct rcu_node *rnp;
if (t->rcu_read_lock_nesting > 0 && if (t->rcu_read_lock_nesting > 0 &&
(t->rcu_read_unlock_special & RCU_READ_UNLOCK_BLOCKED) == 0) { !t->rcu_read_unlock_special.b.blocked) {
/* Possibly blocking in an RCU read-side critical section. */ /* Possibly blocking in an RCU read-side critical section. */
rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu); rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu);
rnp = rdp->mynode; rnp = rdp->mynode;
raw_spin_lock_irqsave(&rnp->lock, flags); raw_spin_lock_irqsave(&rnp->lock, flags);
smp_mb__after_unlock_lock(); smp_mb__after_unlock_lock();
t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BLOCKED; t->rcu_read_unlock_special.b.blocked = true;
t->rcu_blocked_node = rnp; t->rcu_blocked_node = rnp;
/* /*
@ -239,7 +213,7 @@ static void rcu_preempt_note_context_switch(int cpu)
: rnp->gpnum + 1); : rnp->gpnum + 1);
raw_spin_unlock_irqrestore(&rnp->lock, flags); raw_spin_unlock_irqrestore(&rnp->lock, flags);
} else if (t->rcu_read_lock_nesting < 0 && } else if (t->rcu_read_lock_nesting < 0 &&
t->rcu_read_unlock_special) { t->rcu_read_unlock_special.s) {
/* /*
* Complete exit from RCU read-side critical section on * Complete exit from RCU read-side critical section on
@ -257,9 +231,7 @@ static void rcu_preempt_note_context_switch(int cpu)
* grace period, then the fact that the task has been enqueued * grace period, then the fact that the task has been enqueued
* means that we continue to block the current grace period. * means that we continue to block the current grace period.
*/ */
local_irq_save(flags); rcu_preempt_qs();
rcu_preempt_qs(cpu);
local_irq_restore(flags);
} }
/* /*
@ -340,7 +312,7 @@ void rcu_read_unlock_special(struct task_struct *t)
bool drop_boost_mutex = false; bool drop_boost_mutex = false;
#endif /* #ifdef CONFIG_RCU_BOOST */ #endif /* #ifdef CONFIG_RCU_BOOST */
struct rcu_node *rnp; struct rcu_node *rnp;
int special; union rcu_special special;
/* NMI handlers cannot block and cannot safely manipulate state. */ /* NMI handlers cannot block and cannot safely manipulate state. */
if (in_nmi()) if (in_nmi())
@ -350,12 +322,13 @@ void rcu_read_unlock_special(struct task_struct *t)
/* /*
* If RCU core is waiting for this CPU to exit critical section, * If RCU core is waiting for this CPU to exit critical section,
* let it know that we have done so. * let it know that we have done so. Because irqs are disabled,
* t->rcu_read_unlock_special cannot change.
*/ */
special = t->rcu_read_unlock_special; special = t->rcu_read_unlock_special;
if (special & RCU_READ_UNLOCK_NEED_QS) { if (special.b.need_qs) {
rcu_preempt_qs(smp_processor_id()); rcu_preempt_qs();
if (!t->rcu_read_unlock_special) { if (!t->rcu_read_unlock_special.s) {
local_irq_restore(flags); local_irq_restore(flags);
return; return;
} }
@ -368,8 +341,8 @@ void rcu_read_unlock_special(struct task_struct *t)
} }
/* Clean up if blocked during RCU read-side critical section. */ /* Clean up if blocked during RCU read-side critical section. */
if (special & RCU_READ_UNLOCK_BLOCKED) { if (special.b.blocked) {
t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BLOCKED; t->rcu_read_unlock_special.b.blocked = false;
/* /*
* Remove this task from the list it blocked on. The * Remove this task from the list it blocked on. The
@ -653,12 +626,13 @@ static void rcu_preempt_check_callbacks(int cpu)
struct task_struct *t = current; struct task_struct *t = current;
if (t->rcu_read_lock_nesting == 0) { if (t->rcu_read_lock_nesting == 0) {
rcu_preempt_qs(cpu); rcu_preempt_qs();
return; return;
} }
if (t->rcu_read_lock_nesting > 0 && if (t->rcu_read_lock_nesting > 0 &&
per_cpu(rcu_preempt_data, cpu).qs_pending) per_cpu(rcu_preempt_data, cpu).qs_pending &&
t->rcu_read_unlock_special |= RCU_READ_UNLOCK_NEED_QS; !per_cpu(rcu_preempt_data, cpu).passed_quiesce)
t->rcu_read_unlock_special.b.need_qs = true;
} }
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
@ -819,11 +793,6 @@ sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
* In fact, if you are using synchronize_rcu_expedited() in a loop, * In fact, if you are using synchronize_rcu_expedited() in a loop,
* please restructure your code to batch your updates, and then Use a * please restructure your code to batch your updates, and then Use a
* single synchronize_rcu() instead. * single synchronize_rcu() instead.
*
* Note that it is illegal to call this function while holding any lock
* that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
* to call this function from a CPU-hotplug notifier. Failing to observe
* these restriction will result in deadlock.
*/ */
void synchronize_rcu_expedited(void) void synchronize_rcu_expedited(void)
{ {
@ -845,7 +814,11 @@ void synchronize_rcu_expedited(void)
* being boosted. This simplifies the process of moving tasks * being boosted. This simplifies the process of moving tasks
* from leaf to root rcu_node structures. * from leaf to root rcu_node structures.
*/ */
get_online_cpus(); if (!try_get_online_cpus()) {
/* CPU-hotplug operation in flight, fall back to normal GP. */
wait_rcu_gp(call_rcu);
return;
}
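
try_get_online_cpus(), added elsewhere in this series, is the non-blocking counterpart of get_online_cpus(): it fails instead of waiting when a CPU-hotplug operation is in progress, which is what breaks the deadlock between hotplug and expedited grace periods. The overall caller pattern, sketched with the expedited body elided:

	if (!try_get_online_cpus()) {
		/* Hotplug in flight; fall back to a normal grace period. */
		wait_rcu_gp(call_rcu);		/* Equivalent to synchronize_rcu(). */
		return;
	}
	/* ... run the expedited grace-period machinery ... */
	put_online_cpus();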
/* /*
* Acquire lock, falling back to synchronize_rcu() if too many * Acquire lock, falling back to synchronize_rcu() if too many
@ -897,7 +870,8 @@ void synchronize_rcu_expedited(void)
/* Clean up and exit. */ /* Clean up and exit. */
smp_mb(); /* ensure expedited GP seen before counter increment. */ smp_mb(); /* ensure expedited GP seen before counter increment. */
ACCESS_ONCE(sync_rcu_preempt_exp_count)++; ACCESS_ONCE(sync_rcu_preempt_exp_count) =
sync_rcu_preempt_exp_count + 1;
unlock_mb_ret: unlock_mb_ret:
mutex_unlock(&sync_rcu_preempt_exp_mutex); mutex_unlock(&sync_rcu_preempt_exp_mutex);
mb_ret: mb_ret:
@ -941,7 +915,7 @@ void exit_rcu(void)
return; return;
t->rcu_read_lock_nesting = 1; t->rcu_read_lock_nesting = 1;
barrier(); barrier();
t->rcu_read_unlock_special = RCU_READ_UNLOCK_BLOCKED; t->rcu_read_unlock_special.b.blocked = true;
__rcu_read_unlock(); __rcu_read_unlock();
} }
@ -1462,14 +1436,13 @@ static struct smp_hotplug_thread rcu_cpu_thread_spec = {
}; };
/* /*
* Spawn all kthreads -- called as soon as the scheduler is running. * Spawn boost kthreads -- called as soon as the scheduler is running.
*/ */
static int __init rcu_spawn_kthreads(void) static void __init rcu_spawn_boost_kthreads(void)
{ {
struct rcu_node *rnp; struct rcu_node *rnp;
int cpu; int cpu;
rcu_scheduler_fully_active = 1;
for_each_possible_cpu(cpu) for_each_possible_cpu(cpu)
per_cpu(rcu_cpu_has_work, cpu) = 0; per_cpu(rcu_cpu_has_work, cpu) = 0;
BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec)); BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
@ -1479,9 +1452,7 @@ static int __init rcu_spawn_kthreads(void)
rcu_for_each_leaf_node(rcu_state_p, rnp) rcu_for_each_leaf_node(rcu_state_p, rnp)
(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp); (void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
} }
return 0;
} }
early_initcall(rcu_spawn_kthreads);
static void rcu_prepare_kthreads(int cpu) static void rcu_prepare_kthreads(int cpu)
{ {
@ -1519,12 +1490,9 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
{ {
} }
static int __init rcu_scheduler_really_started(void) static void __init rcu_spawn_boost_kthreads(void)
{ {
rcu_scheduler_fully_active = 1;
return 0;
} }
early_initcall(rcu_scheduler_really_started);
static void rcu_prepare_kthreads(int cpu) static void rcu_prepare_kthreads(int cpu)
{ {
@ -1625,7 +1593,7 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void)
/* Exit early if we advanced recently. */ /* Exit early if we advanced recently. */
if (jiffies == rdtp->last_advance_all) if (jiffies == rdtp->last_advance_all)
return 0; return false;
rdtp->last_advance_all = jiffies; rdtp->last_advance_all = jiffies;
for_each_rcu_flavor(rsp) { for_each_rcu_flavor(rsp) {
@ -1848,7 +1816,7 @@ static int rcu_oom_notify(struct notifier_block *self,
get_online_cpus(); get_online_cpus();
for_each_online_cpu(cpu) { for_each_online_cpu(cpu) {
smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1); smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);
cond_resched(); cond_resched_rcu_qs();
} }
put_online_cpus(); put_online_cpus();
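
cond_resched_rcu_qs() is this series' replacement for bare cond_resched() in long-running kernel loops that RCU must eventually see as quiescent. Its shape is roughly the following (a sketch, not copied verbatim from this patch):

	#define cond_resched_rcu_qs() \
	do { \
		rcu_note_voluntary_context_switch(current); \
		cond_resched(); \
	} while (0)

so each pass both offers to reschedule and explicitly notes a voluntary quiescent state, which is also what lets the RCU-tasks flavor added later in this series make progress past such loops.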
@ -2075,7 +2043,7 @@ static void wake_nocb_leader(struct rcu_data *rdp, bool force)
if (!ACCESS_ONCE(rdp_leader->nocb_kthread)) if (!ACCESS_ONCE(rdp_leader->nocb_kthread))
return; return;
if (ACCESS_ONCE(rdp_leader->nocb_leader_sleep) || force) { if (ACCESS_ONCE(rdp_leader->nocb_leader_sleep) || force) {
/* Prior xchg orders against prior callback enqueue. */ /* Prior smp_mb__after_atomic() orders against prior enqueue. */
ACCESS_ONCE(rdp_leader->nocb_leader_sleep) = false; ACCESS_ONCE(rdp_leader->nocb_leader_sleep) = false;
wake_up(&rdp_leader->nocb_wq); wake_up(&rdp_leader->nocb_wq);
} }
@ -2104,6 +2072,7 @@ static void __call_rcu_nocb_enqueue(struct rcu_data *rdp,
ACCESS_ONCE(*old_rhpp) = rhp; ACCESS_ONCE(*old_rhpp) = rhp;
atomic_long_add(rhcount, &rdp->nocb_q_count); atomic_long_add(rhcount, &rdp->nocb_q_count);
atomic_long_add(rhcount_lazy, &rdp->nocb_q_count_lazy); atomic_long_add(rhcount_lazy, &rdp->nocb_q_count_lazy);
smp_mb__after_atomic(); /* Store *old_rhpp before _wake test. */
/* If we are not being polled and there is a kthread, awaken it ... */ /* If we are not being polled and there is a kthread, awaken it ... */
t = ACCESS_ONCE(rdp->nocb_kthread); t = ACCESS_ONCE(rdp->nocb_kthread);
@ -2120,16 +2089,23 @@ static void __call_rcu_nocb_enqueue(struct rcu_data *rdp,
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu,
TPS("WakeEmpty")); TPS("WakeEmpty"));
} else { } else {
rdp->nocb_defer_wakeup = true; rdp->nocb_defer_wakeup = RCU_NOGP_WAKE;
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu,
TPS("WakeEmptyIsDeferred")); TPS("WakeEmptyIsDeferred"));
} }
rdp->qlen_last_fqs_check = 0; rdp->qlen_last_fqs_check = 0;
} else if (len > rdp->qlen_last_fqs_check + qhimark) { } else if (len > rdp->qlen_last_fqs_check + qhimark) {
/* ... or if many callbacks queued. */ /* ... or if many callbacks queued. */
wake_nocb_leader(rdp, true); if (!irqs_disabled_flags(flags)) {
wake_nocb_leader(rdp, true);
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu,
TPS("WakeOvf"));
} else {
rdp->nocb_defer_wakeup = RCU_NOGP_WAKE_FORCE;
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu,
TPS("WakeOvfIsDeferred"));
}
rdp->qlen_last_fqs_check = LONG_MAX / 2; rdp->qlen_last_fqs_check = LONG_MAX / 2;
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, TPS("WakeOvf"));
} else { } else {
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, TPS("WakeNot")); trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, TPS("WakeNot"));
} }
@ -2150,7 +2126,7 @@ static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
{ {
if (!rcu_is_nocb_cpu(rdp->cpu)) if (!rcu_is_nocb_cpu(rdp->cpu))
return 0; return false;
__call_rcu_nocb_enqueue(rdp, rhp, &rhp->next, 1, lazy, flags); __call_rcu_nocb_enqueue(rdp, rhp, &rhp->next, 1, lazy, flags);
if (__is_kfree_rcu_offset((unsigned long)rhp->func)) if (__is_kfree_rcu_offset((unsigned long)rhp->func))
trace_rcu_kfree_callback(rdp->rsp->name, rhp, trace_rcu_kfree_callback(rdp->rsp->name, rhp,
@ -2161,7 +2137,18 @@ static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
trace_rcu_callback(rdp->rsp->name, rhp, trace_rcu_callback(rdp->rsp->name, rhp,
-atomic_long_read(&rdp->nocb_q_count_lazy), -atomic_long_read(&rdp->nocb_q_count_lazy),
-atomic_long_read(&rdp->nocb_q_count)); -atomic_long_read(&rdp->nocb_q_count));
return 1;
/*
* If called from an extended quiescent state with interrupts
* disabled, invoke the RCU core in order to allow the idle-entry
* deferred-wakeup check to function.
*/
if (irqs_disabled_flags(flags) &&
!rcu_is_watching() &&
cpu_online(smp_processor_id()))
invoke_rcu_core();
return true;
} }
/* /*
@ -2177,7 +2164,7 @@ static bool __maybe_unused rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
/* If this is not a no-CBs CPU, tell the caller to do it the old way. */ /* If this is not a no-CBs CPU, tell the caller to do it the old way. */
if (!rcu_is_nocb_cpu(smp_processor_id())) if (!rcu_is_nocb_cpu(smp_processor_id()))
return 0; return false;
rsp->qlen = 0; rsp->qlen = 0;
rsp->qlen_lazy = 0; rsp->qlen_lazy = 0;
@ -2196,7 +2183,7 @@ static bool __maybe_unused rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
rsp->orphan_nxtlist = NULL; rsp->orphan_nxtlist = NULL;
rsp->orphan_nxttail = &rsp->orphan_nxtlist; rsp->orphan_nxttail = &rsp->orphan_nxtlist;
} }
return 1; return true;
} }
/* /*
@ -2229,7 +2216,7 @@ static void rcu_nocb_wait_gp(struct rcu_data *rdp)
(d = ULONG_CMP_GE(ACCESS_ONCE(rnp->completed), c))); (d = ULONG_CMP_GE(ACCESS_ONCE(rnp->completed), c)));
if (likely(d)) if (likely(d))
break; break;
flush_signals(current); WARN_ON(signal_pending(current));
trace_rcu_future_gp(rnp, rdp, c, TPS("ResumeWait")); trace_rcu_future_gp(rnp, rdp, c, TPS("ResumeWait"));
} }
trace_rcu_future_gp(rnp, rdp, c, TPS("EndWait")); trace_rcu_future_gp(rnp, rdp, c, TPS("EndWait"));
@ -2288,7 +2275,7 @@ wait_again:
if (!rcu_nocb_poll) if (!rcu_nocb_poll)
trace_rcu_nocb_wake(my_rdp->rsp->name, my_rdp->cpu, trace_rcu_nocb_wake(my_rdp->rsp->name, my_rdp->cpu,
"WokeEmpty"); "WokeEmpty");
flush_signals(current); WARN_ON(signal_pending(current));
schedule_timeout_interruptible(1); schedule_timeout_interruptible(1);
/* Rescan in case we were a victim of memory ordering. */ /* Rescan in case we were a victim of memory ordering. */
@ -2327,6 +2314,7 @@ wait_again:
atomic_long_add(rdp->nocb_gp_count, &rdp->nocb_follower_count); atomic_long_add(rdp->nocb_gp_count, &rdp->nocb_follower_count);
atomic_long_add(rdp->nocb_gp_count_lazy, atomic_long_add(rdp->nocb_gp_count_lazy,
&rdp->nocb_follower_count_lazy); &rdp->nocb_follower_count_lazy);
smp_mb__after_atomic(); /* Store *tail before wakeup. */
if (rdp != my_rdp && tail == &rdp->nocb_follower_head) { if (rdp != my_rdp && tail == &rdp->nocb_follower_head) {
/* /*
* List was empty, wake up the follower. * List was empty, wake up the follower.
@ -2367,7 +2355,7 @@ static void nocb_follower_wait(struct rcu_data *rdp)
if (!rcu_nocb_poll) if (!rcu_nocb_poll)
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu,
"WokeEmpty"); "WokeEmpty");
flush_signals(current); WARN_ON(signal_pending(current));
schedule_timeout_interruptible(1); schedule_timeout_interruptible(1);
} }
} }
@ -2428,15 +2416,16 @@ static int rcu_nocb_kthread(void *arg)
list = next; list = next;
} }
trace_rcu_batch_end(rdp->rsp->name, c, !!list, 0, 0, 1); trace_rcu_batch_end(rdp->rsp->name, c, !!list, 0, 0, 1);
ACCESS_ONCE(rdp->nocb_p_count) -= c; ACCESS_ONCE(rdp->nocb_p_count) = rdp->nocb_p_count - c;
ACCESS_ONCE(rdp->nocb_p_count_lazy) -= cl; ACCESS_ONCE(rdp->nocb_p_count_lazy) =
rdp->nocb_p_count_lazy - cl;
rdp->n_nocbs_invoked += c; rdp->n_nocbs_invoked += c;
} }
return 0; return 0;
} }
/* Is a deferred wakeup of rcu_nocb_kthread() required? */ /* Is a deferred wakeup of rcu_nocb_kthread() required? */
static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp) static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp)
{ {
return ACCESS_ONCE(rdp->nocb_defer_wakeup); return ACCESS_ONCE(rdp->nocb_defer_wakeup);
} }
@ -2444,11 +2433,79 @@ static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp)
/* Do a deferred wakeup of rcu_nocb_kthread(). */ /* Do a deferred wakeup of rcu_nocb_kthread(). */
static void do_nocb_deferred_wakeup(struct rcu_data *rdp) static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
{ {
int ndw;
if (!rcu_nocb_need_deferred_wakeup(rdp)) if (!rcu_nocb_need_deferred_wakeup(rdp))
return; return;
ACCESS_ONCE(rdp->nocb_defer_wakeup) = false; ndw = ACCESS_ONCE(rdp->nocb_defer_wakeup);
wake_nocb_leader(rdp, false); ACCESS_ONCE(rdp->nocb_defer_wakeup) = RCU_NOGP_WAKE_NOT;
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, TPS("DeferredWakeEmpty")); wake_nocb_leader(rdp, ndw == RCU_NOGP_WAKE_FORCE);
trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, TPS("DeferredWake"));
}
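
The deferred-wakeup flag is no longer a plain bool: it records how urgent the eventual wakeup is, so that do_nocb_deferred_wakeup() above passes force=true only when the deferral came from the callback-overflow path. The values assumed by this code are roughly as follows (the names are taken from the code above; the numeric values are illustrative, although the "not" state must be zero):

	#define RCU_NOGP_WAKE_NOT	0	/* No deferred wakeup pending. */
	#define RCU_NOGP_WAKE		1	/* Ordinary deferred wakeup of the nocb leader. */
	#define RCU_NOGP_WAKE_FORCE	2	/* Deferred wakeup that must force the leader awake. */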
void __init rcu_init_nohz(void)
{
int cpu;
bool need_rcu_nocb_mask = true;
struct rcu_state *rsp;
#ifdef CONFIG_RCU_NOCB_CPU_NONE
need_rcu_nocb_mask = false;
#endif /* #ifndef CONFIG_RCU_NOCB_CPU_NONE */
#if defined(CONFIG_NO_HZ_FULL)
if (tick_nohz_full_running && cpumask_weight(tick_nohz_full_mask))
need_rcu_nocb_mask = true;
#endif /* #if defined(CONFIG_NO_HZ_FULL) */
if (!have_rcu_nocb_mask && need_rcu_nocb_mask) {
if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
pr_info("rcu_nocb_mask allocation failed, callback offloading disabled.\n");
return;
}
have_rcu_nocb_mask = true;
}
if (!have_rcu_nocb_mask)
return;
#ifdef CONFIG_RCU_NOCB_CPU_ZERO
pr_info("\tOffload RCU callbacks from CPU 0\n");
cpumask_set_cpu(0, rcu_nocb_mask);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU_ZERO */
#ifdef CONFIG_RCU_NOCB_CPU_ALL
pr_info("\tOffload RCU callbacks from all CPUs\n");
cpumask_copy(rcu_nocb_mask, cpu_possible_mask);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU_ALL */
#if defined(CONFIG_NO_HZ_FULL)
if (tick_nohz_full_running)
cpumask_or(rcu_nocb_mask, rcu_nocb_mask, tick_nohz_full_mask);
#endif /* #if defined(CONFIG_NO_HZ_FULL) */
if (!cpumask_subset(rcu_nocb_mask, cpu_possible_mask)) {
pr_info("\tNote: kernel parameter 'rcu_nocbs=' contains nonexistent CPUs.\n");
cpumask_and(rcu_nocb_mask, cpu_possible_mask,
rcu_nocb_mask);
}
cpulist_scnprintf(nocb_buf, sizeof(nocb_buf), rcu_nocb_mask);
pr_info("\tOffload RCU callbacks from CPUs: %s.\n", nocb_buf);
if (rcu_nocb_poll)
pr_info("\tPoll for callbacks from no-CBs CPUs.\n");
for_each_rcu_flavor(rsp) {
for_each_cpu(cpu, rcu_nocb_mask) {
struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
/*
* If there are early callbacks, they will need
* to be moved to the nocb lists.
*/
WARN_ON_ONCE(rdp->nxttail[RCU_NEXT_TAIL] !=
&rdp->nxtlist &&
rdp->nxttail[RCU_NEXT_TAIL] != NULL);
init_nocb_callback_list(rdp);
}
rcu_organize_nocb_kthreads(rsp);
}
} }
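
Which CPUs end up offloaded is thus the union of the CONFIG_RCU_NOCB_CPU_ZERO/ALL Kconfig choices, any rcu_nocbs= boot parameter, and every nohz_full= CPU. For example, booting an eight-CPU system with nohz_full=2-7 and no rcu_nocbs= parameter would leave CPUs 2-7 both tickless and callback-offloaded, with the boot CPU keeping the tick and, typically, the timekeeping duties.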
/* Initialize per-rcu_data variables for no-CBs CPUs. */ /* Initialize per-rcu_data variables for no-CBs CPUs. */
@ -2459,15 +2516,85 @@ static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
rdp->nocb_follower_tail = &rdp->nocb_follower_head; rdp->nocb_follower_tail = &rdp->nocb_follower_head;
} }
/*
* If the specified CPU is a no-CBs CPU that does not already have its
* rcuo kthread for the specified RCU flavor, spawn it. If the CPUs are
* brought online out of order, this can require re-organizing the
* leader-follower relationships.
*/
static void rcu_spawn_one_nocb_kthread(struct rcu_state *rsp, int cpu)
{
struct rcu_data *rdp;
struct rcu_data *rdp_last;
struct rcu_data *rdp_old_leader;
struct rcu_data *rdp_spawn = per_cpu_ptr(rsp->rda, cpu);
struct task_struct *t;
/*
* If this isn't a no-CBs CPU or if it already has an rcuo kthread,
* then nothing to do.
*/
if (!rcu_is_nocb_cpu(cpu) || rdp_spawn->nocb_kthread)
return;
/* If we didn't spawn the leader first, reorganize! */
rdp_old_leader = rdp_spawn->nocb_leader;
if (rdp_old_leader != rdp_spawn && !rdp_old_leader->nocb_kthread) {
rdp_last = NULL;
rdp = rdp_old_leader;
do {
rdp->nocb_leader = rdp_spawn;
if (rdp_last && rdp != rdp_spawn)
rdp_last->nocb_next_follower = rdp;
rdp_last = rdp;
rdp = rdp->nocb_next_follower;
rdp_last->nocb_next_follower = NULL;
} while (rdp);
rdp_spawn->nocb_next_follower = rdp_old_leader;
}
/* Spawn the kthread for this CPU and RCU flavor. */
t = kthread_run(rcu_nocb_kthread, rdp_spawn,
"rcuo%c/%d", rsp->abbr, cpu);
BUG_ON(IS_ERR(t));
ACCESS_ONCE(rdp_spawn->nocb_kthread) = t;
}
/*
* If the specified CPU is a no-CBs CPU that does not already have its
* rcuo kthreads, spawn them.
*/
static void rcu_spawn_all_nocb_kthreads(int cpu)
{
struct rcu_state *rsp;
if (rcu_scheduler_fully_active)
for_each_rcu_flavor(rsp)
rcu_spawn_one_nocb_kthread(rsp, cpu);
}
/*
* Once the scheduler is running, spawn rcuo kthreads for all online
* no-CBs CPUs. This assumes that the early_initcall()s happen before
* non-boot CPUs come online -- if this changes, we will need to add
* some mutual exclusion.
*/
static void __init rcu_spawn_nocb_kthreads(void)
{
int cpu;
for_each_online_cpu(cpu)
rcu_spawn_all_nocb_kthreads(cpu);
}
/* How many follower CPU IDs per leader? Default of -1 for sqrt(nr_cpu_ids). */ /* How many follower CPU IDs per leader? Default of -1 for sqrt(nr_cpu_ids). */
static int rcu_nocb_leader_stride = -1; static int rcu_nocb_leader_stride = -1;
module_param(rcu_nocb_leader_stride, int, 0444); module_param(rcu_nocb_leader_stride, int, 0444);
/* /*
* Create a kthread for each RCU flavor for each no-CBs CPU. * Initialize leader-follower relationships for all no-CBs CPUs.
* Also initialize leader-follower relationships.
*/ */
static void __init rcu_spawn_nocb_kthreads(struct rcu_state *rsp) static void __init rcu_organize_nocb_kthreads(struct rcu_state *rsp)
{ {
int cpu; int cpu;
int ls = rcu_nocb_leader_stride; int ls = rcu_nocb_leader_stride;
@ -2475,14 +2602,9 @@ static void __init rcu_spawn_nocb_kthreads(struct rcu_state *rsp)
struct rcu_data *rdp; struct rcu_data *rdp;
struct rcu_data *rdp_leader = NULL; /* Suppress misguided gcc warn. */ struct rcu_data *rdp_leader = NULL; /* Suppress misguided gcc warn. */
struct rcu_data *rdp_prev = NULL; struct rcu_data *rdp_prev = NULL;
struct task_struct *t;
if (rcu_nocb_mask == NULL) if (!have_rcu_nocb_mask)
return; return;
#if defined(CONFIG_NO_HZ_FULL) && !defined(CONFIG_NO_HZ_FULL_ALL)
if (tick_nohz_full_running)
cpumask_or(rcu_nocb_mask, rcu_nocb_mask, tick_nohz_full_mask);
#endif /* #if defined(CONFIG_NO_HZ_FULL) && !defined(CONFIG_NO_HZ_FULL_ALL) */
if (ls == -1) { if (ls == -1) {
ls = int_sqrt(nr_cpu_ids); ls = int_sqrt(nr_cpu_ids);
rcu_nocb_leader_stride = ls; rcu_nocb_leader_stride = ls;
@ -2505,21 +2627,15 @@ static void __init rcu_spawn_nocb_kthreads(struct rcu_state *rsp)
rdp_prev->nocb_next_follower = rdp; rdp_prev->nocb_next_follower = rdp;
} }
rdp_prev = rdp; rdp_prev = rdp;
/* Spawn the kthread for this CPU. */
t = kthread_run(rcu_nocb_kthread, rdp,
"rcuo%c/%d", rsp->abbr, cpu);
BUG_ON(IS_ERR(t));
ACCESS_ONCE(rdp->nocb_kthread) = t;
} }
} }
/* Prevent __call_rcu() from enqueuing callbacks on no-CBs CPUs */ /* Prevent __call_rcu() from enqueuing callbacks on no-CBs CPUs */
static bool init_nocb_callback_list(struct rcu_data *rdp) static bool init_nocb_callback_list(struct rcu_data *rdp)
{ {
if (rcu_nocb_mask == NULL || if (!rcu_is_nocb_cpu(rdp->cpu))
!cpumask_test_cpu(rdp->cpu, rcu_nocb_mask))
return false; return false;
rdp->nxttail[RCU_NEXT_TAIL] = NULL; rdp->nxttail[RCU_NEXT_TAIL] = NULL;
return true; return true;
} }
@ -2541,21 +2657,21 @@ static void rcu_init_one_nocb(struct rcu_node *rnp)
static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp, static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp,
bool lazy, unsigned long flags) bool lazy, unsigned long flags)
{ {
return 0; return false;
} }
static bool __maybe_unused rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp, static bool __maybe_unused rcu_nocb_adopt_orphan_cbs(struct rcu_state *rsp,
struct rcu_data *rdp, struct rcu_data *rdp,
unsigned long flags) unsigned long flags)
{ {
return 0; return false;
} }
static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp) static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp)
{ {
} }
static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp) static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp)
{ {
return false; return false;
} }
@ -2564,7 +2680,11 @@ static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
{ {
} }
static void __init rcu_spawn_nocb_kthreads(struct rcu_state *rsp) static void rcu_spawn_all_nocb_kthreads(int cpu)
{
}
static void __init rcu_spawn_nocb_kthreads(void)
{ {
} }
@ -2595,16 +2715,6 @@ static void __maybe_unused rcu_kick_nohz_cpu(int cpu)
#ifdef CONFIG_NO_HZ_FULL_SYSIDLE #ifdef CONFIG_NO_HZ_FULL_SYSIDLE
/*
* Define RCU flavor that holds sysidle state. This needs to be the
* most active flavor of RCU.
*/
#ifdef CONFIG_PREEMPT_RCU
static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
#else /* #ifdef CONFIG_PREEMPT_RCU */
static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
static int full_sysidle_state; /* Current system-idle state. */ static int full_sysidle_state; /* Current system-idle state. */
#define RCU_SYSIDLE_NOT 0 /* Some CPU is not idle. */ #define RCU_SYSIDLE_NOT 0 /* Some CPU is not idle. */
#define RCU_SYSIDLE_SHORT 1 /* All CPUs idle for brief period. */ #define RCU_SYSIDLE_SHORT 1 /* All CPUs idle for brief period. */
@ -2622,6 +2732,10 @@ static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
{ {
unsigned long j; unsigned long j;
/* If there are no nohz_full= CPUs, no need to track this. */
if (!tick_nohz_full_enabled())
return;
/* Adjust nesting, check for fully idle. */ /* Adjust nesting, check for fully idle. */
if (irq) { if (irq) {
rdtp->dynticks_idle_nesting--; rdtp->dynticks_idle_nesting--;
@ -2687,6 +2801,10 @@ void rcu_sysidle_force_exit(void)
*/ */
static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq) static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
{ {
/* If there are no nohz_full= CPUs, no need to track this. */
if (!tick_nohz_full_enabled())
return;
/* Adjust nesting, check for already non-idle. */ /* Adjust nesting, check for already non-idle. */
if (irq) { if (irq) {
rdtp->dynticks_idle_nesting++; rdtp->dynticks_idle_nesting++;
@ -2741,12 +2859,16 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
unsigned long j; unsigned long j;
struct rcu_dynticks *rdtp = rdp->dynticks; struct rcu_dynticks *rdtp = rdp->dynticks;
/* If there are no nohz_full= CPUs, don't check system-wide idleness. */
if (!tick_nohz_full_enabled())
return;
/* /*
* If some other CPU has already reported non-idle, if this is * If some other CPU has already reported non-idle, if this is
* not the flavor of RCU that tracks sysidle state, or if this * not the flavor of RCU that tracks sysidle state, or if this
* is an offline or the timekeeping CPU, nothing to do. * is an offline or the timekeeping CPU, nothing to do.
*/ */
if (!*isidle || rdp->rsp != rcu_sysidle_state || if (!*isidle || rdp->rsp != rcu_state_p ||
cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu) cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
return; return;
if (rcu_gp_in_progress(rdp->rsp)) if (rcu_gp_in_progress(rdp->rsp))
@ -2772,7 +2894,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
*/ */
static bool is_sysidle_rcu_state(struct rcu_state *rsp) static bool is_sysidle_rcu_state(struct rcu_state *rsp)
{ {
return rsp == rcu_sysidle_state; return rsp == rcu_state_p;
} }
/* /*
@ -2850,7 +2972,7 @@ static void rcu_sysidle_cancel(void)
static void rcu_sysidle_report(struct rcu_state *rsp, int isidle, static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
unsigned long maxj, bool gpkt) unsigned long maxj, bool gpkt)
{ {
if (rsp != rcu_sysidle_state) if (rsp != rcu_state_p)
return; /* Wrong flavor, ignore. */ return; /* Wrong flavor, ignore. */
if (gpkt && nr_cpu_ids <= CONFIG_NO_HZ_FULL_SYSIDLE_SMALL) if (gpkt && nr_cpu_ids <= CONFIG_NO_HZ_FULL_SYSIDLE_SMALL)
return; /* Running state machine from timekeeping CPU. */ return; /* Running state machine from timekeeping CPU. */
@ -2867,6 +2989,10 @@ static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle, static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
unsigned long maxj) unsigned long maxj)
{ {
/* If there are no nohz_full= CPUs, no need to track this. */
if (!tick_nohz_full_enabled())
return;
rcu_sysidle_report(rsp, isidle, maxj, true); rcu_sysidle_report(rsp, isidle, maxj, true);
} }
@ -2893,7 +3019,8 @@ static void rcu_sysidle_cb(struct rcu_head *rhp)
/* /*
* Check to see if the system is fully idle, other than the timekeeping CPU. * Check to see if the system is fully idle, other than the timekeeping CPU.
* The caller must have disabled interrupts. * The caller must have disabled interrupts. This is not intended to be
* called unless tick_nohz_full_enabled().
*/ */
bool rcu_sys_is_idle(void) bool rcu_sys_is_idle(void)
{ {
@ -2919,13 +3046,12 @@ bool rcu_sys_is_idle(void)
/* Scan all the CPUs looking for nonidle CPUs. */ /* Scan all the CPUs looking for nonidle CPUs. */
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
rdp = per_cpu_ptr(rcu_sysidle_state->rda, cpu); rdp = per_cpu_ptr(rcu_state_p->rda, cpu);
rcu_sysidle_check_cpu(rdp, &isidle, &maxj); rcu_sysidle_check_cpu(rdp, &isidle, &maxj);
if (!isidle) if (!isidle)
break; break;
} }
rcu_sysidle_report(rcu_sysidle_state, rcu_sysidle_report(rcu_state_p, isidle, maxj, false);
isidle, maxj, false);
oldrss = rss; oldrss = rss;
rss = ACCESS_ONCE(full_sysidle_state); rss = ACCESS_ONCE(full_sysidle_state);
} }
@ -2952,7 +3078,7 @@ bool rcu_sys_is_idle(void)
* provided by the memory allocator. * provided by the memory allocator.
*/ */
if (nr_cpu_ids > CONFIG_NO_HZ_FULL_SYSIDLE_SMALL && if (nr_cpu_ids > CONFIG_NO_HZ_FULL_SYSIDLE_SMALL &&
!rcu_gp_in_progress(rcu_sysidle_state) && !rcu_gp_in_progress(rcu_state_p) &&
!rsh.inuse && xchg(&rsh.inuse, 1) == 0) !rsh.inuse && xchg(&rsh.inuse, 1) == 0)
call_rcu(&rsh.rh, rcu_sysidle_cb); call_rcu(&rsh.rh, rcu_sysidle_cb);
return false; return false;
@ -3036,3 +3162,19 @@ static void rcu_bind_gp_kthread(void)
housekeeping_affine(current); housekeeping_affine(current);
#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ #endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
} }
/* Record the current task on dyntick-idle entry. */
static void rcu_dynticks_task_enter(void)
{
#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
ACCESS_ONCE(current->rcu_tasks_idle_cpu) = smp_processor_id();
#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
}
/* Record no current task on dyntick-idle exit. */
static void rcu_dynticks_task_exit(void)
{
#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
ACCESS_ONCE(current->rcu_tasks_idle_cpu) = -1;
#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
}

View File

@ -47,6 +47,8 @@
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/kthread.h>
#include <linux/tick.h>
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
@ -91,7 +93,7 @@ void __rcu_read_unlock(void)
barrier(); /* critical section before exit code. */ barrier(); /* critical section before exit code. */
t->rcu_read_lock_nesting = INT_MIN; t->rcu_read_lock_nesting = INT_MIN;
barrier(); /* assign before ->rcu_read_unlock_special load */ barrier(); /* assign before ->rcu_read_unlock_special load */
if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special))) if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special.s)))
rcu_read_unlock_special(t); rcu_read_unlock_special(t);
barrier(); /* ->rcu_read_unlock_special load before assign */ barrier(); /* ->rcu_read_unlock_special load before assign */
t->rcu_read_lock_nesting = 0; t->rcu_read_lock_nesting = 0;
@ -136,6 +138,38 @@ int notrace debug_lockdep_rcu_enabled(void)
} }
EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled); EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
/**
* rcu_read_lock_held() - might we be in RCU read-side critical section?
*
* If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU
* read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC,
* this assumes we are in an RCU read-side critical section unless it can
* prove otherwise. This is useful for debug checks in functions that
* require that they be called within an RCU read-side critical section.
*
* Checks debug_lockdep_rcu_enabled() to prevent false positives during boot
* and while lockdep is disabled.
*
* Note that rcu_read_lock() and the matching rcu_read_unlock() must
* occur in the same context, for example, it is illegal to invoke
* rcu_read_unlock() in process context if the matching rcu_read_lock()
* was invoked from within an irq handler.
*
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
*/
int rcu_read_lock_held(void)
{
if (!debug_lockdep_rcu_enabled())
return 1;
if (!rcu_is_watching())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
return lock_is_held(&rcu_lock_map);
}
EXPORT_SYMBOL_GPL(rcu_read_lock_held);
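
rcu_read_lock_held() is typically used as a lockdep predicate rather than called directly; a common (hypothetical) pattern accepts either RCU protection or a caller-held lock when dereferencing an RCU-protected pointer:

	/* gp and my_lock are illustrative names, not taken from this patch. */
	p = rcu_dereference_check(gp,
				  rcu_read_lock_held() ||
				  lockdep_is_held(&my_lock));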
/** /**
* rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section? * rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section?
* *
@ -347,3 +381,312 @@ static int __init check_cpu_stall_init(void)
early_initcall(check_cpu_stall_init); early_initcall(check_cpu_stall_init);
#endif /* #ifdef CONFIG_RCU_STALL_COMMON */ #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
#ifdef CONFIG_TASKS_RCU
/*
* Simple variant of RCU whose quiescent states are voluntary context switch,
* user-space execution, and idle. As such, grace periods can take one good
* long time. There are no read-side primitives similar to rcu_read_lock()
* and rcu_read_unlock() because this implementation is intended to get
* the system into a safe state for some of the manipulations involved in
* tracing and the like. Finally, this implementation does not support
* high call_rcu_tasks() rates from multiple CPUs. If this is required,
* per-CPU callback lists will be needed.
*/
/* Global list of callbacks and associated lock. */
static struct rcu_head *rcu_tasks_cbs_head;
static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);
static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
/* Track exiting tasks in order to allow them to be waited for. */
DEFINE_SRCU(tasks_rcu_exit_srcu);
/* Control stall timeouts. Disable with <= 0, otherwise jiffies till stall. */
static int rcu_task_stall_timeout __read_mostly = HZ * 60 * 10;
module_param(rcu_task_stall_timeout, int, 0644);
static void rcu_spawn_tasks_kthread(void);
/*
* Post an RCU-tasks callback. First call must be from process context
* after the scheduler is fully operational.
*/
void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
{
unsigned long flags;
bool needwake;
rhp->next = NULL;
rhp->func = func;
raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
needwake = !rcu_tasks_cbs_head;
*rcu_tasks_cbs_tail = rhp;
rcu_tasks_cbs_tail = &rhp->next;
raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
if (needwake) {
rcu_spawn_tasks_kthread();
wake_up(&rcu_tasks_cbs_wq);
}
}
EXPORT_SYMBOL_GPL(call_rcu_tasks);
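
A hypothetical caller, sketching how a tracing-style user might free a structure only once every task has passed through a voluntary context switch, usermode execution, or the idle loop (all names below are made up for illustration):

	struct trampoline {
		/* ... executable stub and bookkeeping ... */
		struct rcu_head rh;
	};

	static void trampoline_free_cb(struct rcu_head *rhp)
	{
		struct trampoline *tp = container_of(rhp, struct trampoline, rh);

		kfree(tp);
	}

	static void trampoline_retire(struct trampoline *tp)
	{
		/* First unhook tp so that no new tasks can begin executing it. */
		unhook_trampoline(tp);
		/* Free it only after an RCU-tasks grace period has elapsed. */
		call_rcu_tasks(&tp->rh, trampoline_free_cb);
	}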
/**
* synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
*
* Control will return to the caller some time after a full rcu-tasks
* grace period has elapsed, in other words after all currently
* executing rcu-tasks read-side critical sections have elapsed. These
* read-side critical sections are delimited by calls to schedule(),
* cond_resched_rcu_qs(), idle execution, userspace execution, calls
* to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
*
* This is a very specialized primitive, intended only for a few uses in
* tracing and other situations requiring manipulation of function
* preambles and profiling hooks. The synchronize_rcu_tasks() function
* is not (yet) intended for heavy use from multiple CPUs.
*
* Note that this guarantee implies further memory-ordering guarantees.
* On systems with more than one CPU, when synchronize_rcu_tasks() returns,
* each CPU is guaranteed to have executed a full memory barrier since the
* end of its last RCU-tasks read-side critical section whose beginning
* preceded the call to synchronize_rcu_tasks(). In addition, each CPU
* having an RCU-tasks read-side critical section that extends beyond
* the return from synchronize_rcu_tasks() is guaranteed to have executed
* a full memory barrier after the beginning of synchronize_rcu_tasks()
* and before the beginning of that RCU-tasks read-side critical section.
* Note that these guarantees include CPUs that are offline, idle, or
* executing in user mode, as well as CPUs that are executing in the kernel.
*
* Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
* to its caller on CPU B, then both CPU A and CPU B are guaranteed
* to have executed a full memory barrier during the execution of
* synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
* (but again only if the system has more than one CPU).
*/
void synchronize_rcu_tasks(void)
{
/* Complain if the scheduler has not started. */
rcu_lockdep_assert(!rcu_scheduler_active,
"synchronize_rcu_tasks called too soon");
/* Wait for the grace period. */
wait_rcu_gp(call_rcu_tasks);
}
EXPORT_SYMBOL_GPL(synchronize_rcu_tasks);
/**
* rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
*
* Although the current implementation is guaranteed to wait, it is not
* obligated to do so if, for example, there are no pending callbacks.
*/
void rcu_barrier_tasks(void)
{
/* There is only one callback queue, so this is easy. ;-) */
synchronize_rcu_tasks();
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
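
As with the other rcu_barrier*() primitives, the point is to flush callbacks before the code or data they reference goes away, most commonly at module unload. A hypothetical sketch:

	static void __exit my_module_exit(void)
	{
		/* Stop posting new call_rcu_tasks() callbacks first... */
		my_unregister_hooks();
		/* ...then wait for all previously posted ones to be invoked. */
		rcu_barrier_tasks();
	}

Here my_unregister_hooks() stands in for whatever prevents further call_rcu_tasks() invocations from this module.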
/* See if tasks are still holding out, complain if so. */
static void check_holdout_task(struct task_struct *t,
bool needreport, bool *firstreport)
{
int cpu;
if (!ACCESS_ONCE(t->rcu_tasks_holdout) ||
t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw) ||
!ACCESS_ONCE(t->on_rq) ||
(IS_ENABLED(CONFIG_NO_HZ_FULL) &&
!is_idle_task(t) && t->rcu_tasks_idle_cpu >= 0)) {
ACCESS_ONCE(t->rcu_tasks_holdout) = false;
list_del_init(&t->rcu_tasks_holdout_list);
put_task_struct(t);
return;
}
if (!needreport)
return;
if (*firstreport) {
pr_err("INFO: rcu_tasks detected stalls on tasks:\n");
*firstreport = false;
}
cpu = task_cpu(t);
pr_alert("%p: %c%c nvcsw: %lu/%lu holdout: %d idle_cpu: %d/%d\n",
t, ".I"[is_idle_task(t)],
"N."[cpu < 0 || !tick_nohz_full_cpu(cpu)],
t->rcu_tasks_nvcsw, t->nvcsw, t->rcu_tasks_holdout,
t->rcu_tasks_idle_cpu, cpu);
sched_show_task(t);
}
/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
static int __noreturn rcu_tasks_kthread(void *arg)
{
unsigned long flags;
struct task_struct *g, *t;
unsigned long lastreport;
struct rcu_head *list;
struct rcu_head *next;
LIST_HEAD(rcu_tasks_holdouts);
/* FIXME: Add housekeeping affinity. */
/*
* Each pass through the following loop makes one check for
* newly arrived callbacks, and, if there are some, waits for
* one RCU-tasks grace period and then invokes the callbacks.
* This loop is terminated by the system going down. ;-)
*/
for (;;) {
/* Pick up any new callbacks. */
raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
list = rcu_tasks_cbs_head;
rcu_tasks_cbs_head = NULL;
rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
/* If there were none, wait a bit and start over. */
if (!list) {
wait_event_interruptible(rcu_tasks_cbs_wq,
rcu_tasks_cbs_head);
if (!rcu_tasks_cbs_head) {
WARN_ON(signal_pending(current));
schedule_timeout_interruptible(HZ/10);
}
continue;
}
/*
* Wait for all pre-existing t->on_rq and t->nvcsw
* transitions to complete. Invoking synchronize_sched()
* suffices because all these transitions occur with
* interrupts disabled. Without this synchronize_sched(),
* a read-side critical section that started before the
* grace period might be incorrectly seen as having started
* after the grace period.
*
* This synchronize_sched() also dispenses with the
* need for a memory barrier on the first store to
* ->rcu_tasks_holdout, as it forces the store to happen
* after the beginning of the grace period.
*/
synchronize_sched();
/*
* There were callbacks, so we need to wait for an
* RCU-tasks grace period. Start off by scanning
* the task list for tasks that are not already
* voluntarily blocked. Mark these tasks and make
* a list of them in rcu_tasks_holdouts.
*/
rcu_read_lock();
for_each_process_thread(g, t) {
if (t != current && ACCESS_ONCE(t->on_rq) &&
!is_idle_task(t)) {
get_task_struct(t);
t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
ACCESS_ONCE(t->rcu_tasks_holdout) = true;
list_add(&t->rcu_tasks_holdout_list,
&rcu_tasks_holdouts);
}
}
rcu_read_unlock();
/*
* Wait for tasks that are in the process of exiting.
* This does only part of the job, ensuring that all
* tasks that were previously exiting reach the point
* where they have disabled preemption, allowing the
* later synchronize_sched() to finish the job.
*/
synchronize_srcu(&tasks_rcu_exit_srcu);
/*
* Each pass through the following loop scans the list
* of holdout tasks, removing any that are no longer
* holdouts. When the list is empty, we are done.
*/
lastreport = jiffies;
while (!list_empty(&rcu_tasks_holdouts)) {
bool firstreport;
bool needreport;
int rtst;
struct task_struct *t1;
schedule_timeout_interruptible(HZ);
rtst = ACCESS_ONCE(rcu_task_stall_timeout);
needreport = rtst > 0 &&
time_after(jiffies, lastreport + rtst);
if (needreport)
lastreport = jiffies;
firstreport = true;
WARN_ON(signal_pending(current));
list_for_each_entry_safe(t, t1, &rcu_tasks_holdouts,
rcu_tasks_holdout_list) {
check_holdout_task(t, needreport, &firstreport);
cond_resched();
}
}
/*
* Because ->on_rq and ->nvcsw are not guaranteed
* to have full memory barriers prior to them in the
* schedule() path, memory reordering on other CPUs could
* cause their RCU-tasks read-side critical sections to
* extend past the end of the grace period. However,
* because these ->nvcsw updates are carried out with
* interrupts disabled, we can use synchronize_sched()
* to force the needed ordering on all such CPUs.
*
* This synchronize_sched() also confines all
* ->rcu_tasks_holdout accesses to be within the grace
* period, avoiding the need for memory barriers for
* ->rcu_tasks_holdout accesses.
*
* In addition, this synchronize_sched() waits for exiting
* tasks to complete their final preempt_disable() region
* of execution, cleaning up after the synchronize_srcu()
* above.
*/
synchronize_sched();
/* Invoke the callbacks. */
while (list) {
next = list->next;
local_bh_disable();
list->func(list);
local_bh_enable();
list = next;
cond_resched();
}
schedule_timeout_uninterruptible(HZ/10);
}
}
/* Spawn rcu_tasks_kthread() at first call to call_rcu_tasks(). */
static void rcu_spawn_tasks_kthread(void)
{
static DEFINE_MUTEX(rcu_tasks_kthread_mutex);
static struct task_struct *rcu_tasks_kthread_ptr;
struct task_struct *t;
if (ACCESS_ONCE(rcu_tasks_kthread_ptr)) {
smp_mb(); /* Ensure caller sees full kthread. */
return;
}
mutex_lock(&rcu_tasks_kthread_mutex);
if (rcu_tasks_kthread_ptr) {
mutex_unlock(&rcu_tasks_kthread_mutex);
return;
}
t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
BUG_ON(IS_ERR(t));
smp_mb(); /* Ensure others see full kthread. */
ACCESS_ONCE(rcu_tasks_kthread_ptr) = t;
mutex_unlock(&rcu_tasks_kthread_mutex);
}
#endif /* #ifdef CONFIG_TASKS_RCU */

View File

@ -278,7 +278,7 @@ restart:
pending >>= softirq_bit; pending >>= softirq_bit;
} }
rcu_bh_qs(smp_processor_id()); rcu_bh_qs();
local_irq_disable(); local_irq_disable();
pending = local_softirq_pending(); pending = local_softirq_pending();

View File

@ -1055,15 +1055,6 @@ static struct ctl_table kern_table[] = {
.child = key_sysctls, .child = key_sysctls,
}, },
#endif #endif
#ifdef CONFIG_RCU_TORTURE_TEST
{
.procname = "rcutorture_runnable",
.data = &rcutorture_runnable,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec,
},
#endif
#ifdef CONFIG_PERF_EVENTS #ifdef CONFIG_PERF_EVENTS
/* /*
* User-space scripts rely on the existence of this file * User-space scripts rely on the existence of this file

View File

@ -211,18 +211,16 @@ EXPORT_SYMBOL_GPL(torture_onoff_cleanup);
/* /*
* Print online/offline testing statistics. * Print online/offline testing statistics.
*/ */
char *torture_onoff_stats(char *page) void torture_onoff_stats(void)
{ {
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
page += sprintf(page, pr_cont("onoff: %ld/%ld:%ld/%ld %d,%d:%d,%d %lu:%lu (HZ=%d) ",
"onoff: %ld/%ld:%ld/%ld %d,%d:%d,%d %lu:%lu (HZ=%d) ", n_online_successes, n_online_attempts,
n_online_successes, n_online_attempts, n_offline_successes, n_offline_attempts,
n_offline_successes, n_offline_attempts, min_online, max_online,
min_online, max_online, min_offline, max_offline,
min_offline, max_offline, sum_online, sum_offline, HZ);
sum_online, sum_offline, HZ);
#endif /* #ifdef CONFIG_HOTPLUG_CPU */ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
return page;
} }
EXPORT_SYMBOL_GPL(torture_onoff_stats); EXPORT_SYMBOL_GPL(torture_onoff_stats);
@ -635,8 +633,13 @@ EXPORT_SYMBOL_GPL(torture_init_end);
* *
* This must be called before the caller starts shutting down its own * This must be called before the caller starts shutting down its own
* kthreads. * kthreads.
*
* Both torture_cleanup_begin() and torture_cleanup_end() must be paired,
* in order to correctly perform the cleanup. They are separated because
* threads may still need to reference torture_type during cleanup, so it is
* set to NULL only after all other cleanup calls have completed.
*/ */
bool torture_cleanup(void) bool torture_cleanup_begin(void)
{ {
mutex_lock(&fullstop_mutex); mutex_lock(&fullstop_mutex);
if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) {
@ -651,12 +654,17 @@ bool torture_cleanup(void)
torture_shuffle_cleanup(); torture_shuffle_cleanup();
torture_stutter_cleanup(); torture_stutter_cleanup();
torture_onoff_cleanup(); torture_onoff_cleanup();
return false;
}
EXPORT_SYMBOL_GPL(torture_cleanup_begin);
void torture_cleanup_end(void)
{
mutex_lock(&fullstop_mutex); mutex_lock(&fullstop_mutex);
torture_type = NULL; torture_type = NULL;
mutex_unlock(&fullstop_mutex); mutex_unlock(&fullstop_mutex);
return false;
} }
EXPORT_SYMBOL_GPL(torture_cleanup); EXPORT_SYMBOL_GPL(torture_cleanup_end);
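
A sketch of how a torture module is expected to pair the two halves, modeled loosely on the rcutorture/locktorture callers:

	static void foo_torture_cleanup(void)
	{
		if (torture_cleanup_begin())
			return;		/* A shutdown is already being handled elsewhere. */

		/* Stop this module's own kthreads and free its statistics here. */

		torture_cleanup_end();	/* Only now may torture_type be cleared. */
	}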
/* /*
* Is it time for the current torture test to stop? * Is it time for the current torture test to stop?

View File

@ -2043,9 +2043,10 @@ __acquires(&pool->lock)
* kernels, where a requeueing work item waiting for something to * kernels, where a requeueing work item waiting for something to
* happen could deadlock with stop_machine as such work item could * happen could deadlock with stop_machine as such work item could
* indefinitely requeue itself while all other CPUs are trapped in * indefinitely requeue itself while all other CPUs are trapped in
* stop_machine. * stop_machine. At the same time, report a quiescent RCU state so
* the same condition doesn't freeze RCU.
*/ */
cond_resched(); cond_resched_rcu_qs();
spin_lock_irq(&pool->lock); spin_lock_irq(&pool->lock);

View File

@ -789,7 +789,7 @@ static int do_mlockall(int flags)
/* Ignore errors */ /* Ignore errors */
mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags); mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
cond_resched(); cond_resched_rcu_qs();
} }
out: out:
return 0; return 0;

tools/testing/selftests/rcutorture/bin/config2frag.sh Normal file → Executable file
View File

@ -1,5 +1,5 @@
#!/bin/sh #!/bin/bash
# Usage: sh config2frag.sh < .config > configfrag # Usage: config2frag.sh < .config > configfrag
# #
# Converts the "# CONFIG_XXX is not set" to "CONFIG_XXX=n" so that the # Converts the "# CONFIG_XXX is not set" to "CONFIG_XXX=n" so that the
# resulting file becomes a legitimate Kconfig fragment. # resulting file becomes a legitimate Kconfig fragment.

View File

@ -1,5 +1,5 @@
#!/bin/sh #!/bin/bash
# Usage: sh configcheck.sh .config .config-template # Usage: configcheck.sh .config .config-template
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -1,6 +1,6 @@
#!/bin/sh #!/bin/bash
# #
# sh configinit.sh config-spec-file [ build output dir ] # Usage: configinit.sh config-spec-file [ build output dir ]
# #
# Create a .config file from the spec file. Run from the kernel source tree. # Create a .config file from the spec file. Run from the kernel source tree.
# Exits with 0 if all went well, with 1 if all went well but the config # Exits with 0 if all went well, with 1 if all went well but the config

View File

@ -64,6 +64,26 @@ configfrag_boot_params () {
fi fi
} }
# configfrag_boot_cpus bootparam-string config-fragment-file config-cpus
#
# Decreases number of CPUs based on any maxcpus= boot parameters specified.
configfrag_boot_cpus () {
local bootargs="`configfrag_boot_params "$1" "$2"`"
local maxcpus
if echo "${bootargs}" | grep -q 'maxcpus=[0-9]'
then
maxcpus="`echo "${bootargs}" | sed -e 's/^.*maxcpus=\([0-9]*\).*$/\1/'`"
if test "$3" -gt "$maxcpus"
then
echo $maxcpus
else
echo $3
fi
else
echo $3
fi
}
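
For example, if the scenario's boot parameters (from its .boot file or --bootargs) include maxcpus=8 while the config fragment requests CONFIG_NR_CPUS=16, this helper echoes 8, so the caller will not hand qemu more virtual CPUs than the kernel will actually bring online.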
# configfrag_hotplug_cpu config-fragment-file # configfrag_hotplug_cpu config-fragment-file
# #
# Returns 1 if the config fragment specifies hotplug CPU. # Returns 1 if the config fragment specifies hotplug CPU.

View File

@ -2,7 +2,7 @@
# #
# Build a kvm-ready Linux kernel from the tree in the current directory. # Build a kvm-ready Linux kernel from the tree in the current directory.
# #
# Usage: sh kvm-build.sh config-template build-dir more-configs # Usage: kvm-build.sh config-template build-dir more-configs
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -2,7 +2,7 @@
# #
# Analyze a given results directory for locktorture progress. # Analyze a given results directory for locktorture progress.
# #
# Usage: sh kvm-recheck-lock.sh resdir # Usage: kvm-recheck-lock.sh resdir
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -2,7 +2,7 @@
# #
# Analyze a given results directory for rcutorture progress. # Analyze a given results directory for rcutorture progress.
# #
# Usage: sh kvm-recheck-rcu.sh resdir # Usage: kvm-recheck-rcu.sh resdir
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -4,7 +4,7 @@
# check the build and console output for errors. Given a directory # check the build and console output for errors. Given a directory
# containing results directories, this recursively checks them all. # containing results directories, this recursively checks them all.
# #
# Usage: sh kvm-recheck.sh resdir ... # Usage: kvm-recheck.sh resdir ...
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -6,7 +6,7 @@
# Execute this in the source tree. Do not run it as a background task # Execute this in the source tree. Do not run it as a background task
# because qemu does not seem to like that much. # because qemu does not seem to like that much.
# #
# Usage: sh kvm-test-1-run.sh config builddir resdir minutes qemu-args boot_args # Usage: kvm-test-1-run.sh config builddir resdir minutes qemu-args boot_args
# #
# qemu-args defaults to "-nographic", along with arguments specifying the # qemu-args defaults to "-nographic", along with arguments specifying the
# number of CPUs and other options generated from # number of CPUs and other options generated from
@ -140,6 +140,7 @@ fi
# Generate -smp qemu argument. # Generate -smp qemu argument.
qemu_args="-nographic $qemu_args" qemu_args="-nographic $qemu_args"
cpu_count=`configNR_CPUS.sh $config_template` cpu_count=`configNR_CPUS.sh $config_template`
cpu_count=`configfrag_boot_cpus "$boot_args" "$config_template" "$cpu_count"`
vcpus=`identify_qemu_vcpus` vcpus=`identify_qemu_vcpus`
if test $cpu_count -gt $vcpus if test $cpu_count -gt $vcpus
then then
@ -214,7 +215,7 @@ then
fi fi
if test $kruntime -ge $((seconds + grace)) if test $kruntime -ge $((seconds + grace))
then then
echo "!!! Hang at $kruntime vs. $seconds seconds" >> $resdir/Warnings 2>&1 echo "!!! PID $qemu_pid hung at $kruntime vs. $seconds seconds" >> $resdir/Warnings 2>&1
kill -KILL $qemu_pid kill -KILL $qemu_pid
break break
fi fi

tools/testing/selftests/rcutorture/bin/kvm.sh Normal file → Executable file
View File

@ -7,7 +7,7 @@
# Edit the definitions below to set the locations of the various directories, # Edit the definitions below to set the locations of the various directories,
# as well as the test duration. # as well as the test duration.
# #
# Usage: sh kvm.sh [ options ] # Usage: kvm.sh [ options ]
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by
@ -188,7 +188,9 @@ for CF in $configs
do do
if test -f "$CONFIGFRAG/$kversion/$CF" if test -f "$CONFIGFRAG/$kversion/$CF"
then then
echo $CF `configNR_CPUS.sh $CONFIGFRAG/$kversion/$CF` >> $T/cfgcpu cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$kversion/$CF`
cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$kversion/$CF" "$cpu_count"`
echo $CF $cpu_count >> $T/cfgcpu
else else
echo "The --configs file $CF does not exist, terminating." echo "The --configs file $CF does not exist, terminating."
exit 1 exit 1

View File

@ -1,4 +1,4 @@
#!/bin/sh #!/bin/bash
# #
# Check the build output from an rcutorture run for goodness. # Check the build output from an rcutorture run for goodness.
# The "file" is a pathname on the local system, and "title" is # The "file" is a pathname on the local system, and "title" is
@ -6,8 +6,7 @@
# #
# The file must contain kernel build output. # The file must contain kernel build output.
# #
# Usage: # Usage: parse-build.sh file title
# sh parse-build.sh file title
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -1,11 +1,10 @@
#!/bin/sh #!/bin/bash
# #
# Check the console output from an rcutorture run for oopses. # Check the console output from an rcutorture run for oopses.
# The "file" is a pathname on the local system, and "title" is # The "file" is a pathname on the local system, and "title" is
# a text string for error-message purposes. # a text string for error-message purposes.
# #
# Usage: # Usage: parse-console.sh file title
# sh parse-console.sh file title
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by
@ -33,6 +32,10 @@ title="$2"
. functions.sh . functions.sh
if grep -Pq '\x00' < $file
then
print_warning Console output contains nul bytes, old qemu still running?
fi
egrep 'Badness|WARNING:|Warn|BUG|===========|Call Trace:|Oops:' < $file | grep -v 'ODEBUG: ' | grep -v 'Warning: unable to open an initial console' > $T egrep 'Badness|WARNING:|Warn|BUG|===========|Call Trace:|Oops:' < $file | grep -v 'ODEBUG: ' | grep -v 'Warning: unable to open an initial console' > $T
if test -s $T if test -s $T
then then

View File

@ -1,4 +1,4 @@
#!/bin/sh #!/bin/bash
# #
# Check the console output from a torture run for goodness. # Check the console output from a torture run for goodness.
# The "file" is a pathname on the local system, and "title" is # The "file" is a pathname on the local system, and "title" is
@ -7,8 +7,7 @@
# The file must contain torture output, but can be interspersed # The file must contain torture output, but can be interspersed
# with other dmesg text, as in console-log output. # with other dmesg text, as in console-log output.
# #
# Usage: # Usage: parse-torture.sh file title
# sh parse-torture.sh file title
# #
# This program is free software; you can redistribute it and/or modify # This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by

View File

@ -1 +1,4 @@
LOCK01 LOCK01
LOCK02
LOCK03
LOCK04

View File

@ -0,0 +1,6 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y

View File

@ -0,0 +1 @@
locktorture.torture_type=mutex_lock

View File

@ -0,0 +1,6 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y

View File

@ -0,0 +1 @@
locktorture.torture_type=rwsem_lock

View File

@ -0,0 +1,6 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y

View File

@ -0,0 +1 @@
locktorture.torture_type=rw_lock

View File

@ -38,6 +38,6 @@ per_version_boot_params () {
echo $1 `locktorture_param_onoff "$1" "$2"` \ echo $1 `locktorture_param_onoff "$1" "$2"` \
locktorture.stat_interval=15 \ locktorture.stat_interval=15 \
locktorture.shutdown_secs=$3 \ locktorture.shutdown_secs=$3 \
locktorture.locktorture_runnable=1 \ locktorture.torture_runnable=1 \
locktorture.verbose=1 locktorture.verbose=1
} }

View File

@ -11,3 +11,6 @@ SRCU-N
SRCU-P SRCU-P
TINY01 TINY01
TINY02 TINY02
TASKS01
TASKS02
TASKS03

View File

@ -0,0 +1,9 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_RCU=y
CONFIG_TASKS_RCU=y

View File

@ -0,0 +1 @@
rcutorture.torture_type=tasks

View File

@ -0,0 +1,5 @@
CONFIG_SMP=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_TASKS_RCU=y

View File

@ -0,0 +1 @@
rcutorture.torture_type=tasks

View File

@ -0,0 +1,13 @@
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_TASKS_RCU=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y
CONFIG_NO_HZ_FULL_ALL=y

View File

@ -0,0 +1 @@
rcutorture.torture_type=tasks

View File

@ -1,5 +1,4 @@
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_NR_CPUS=8
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
@ -10,8 +9,7 @@ CONFIG_NO_HZ_FULL=n
CONFIG_RCU_FAST_NO_HZ=y CONFIG_RCU_FAST_NO_HZ=y
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y
CONFIG_HOTPLUG_CPU=y CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_FANOUT=8 CONFIG_MAXSMP=y
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=y CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ZERO=y CONFIG_RCU_NOCB_CPU_ZERO=y
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n

View File

@ -1 +1 @@
rcutorture.torture_type=rcu_bh rcutorture.torture_type=rcu_bh maxcpus=8

View File

@ -1,5 +1,6 @@
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_NR_CPUS=16 CONFIG_NR_CPUS=16
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_PREEMPT_NONE=y CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n CONFIG_PREEMPT=n
@ -7,7 +8,7 @@ CONFIG_PREEMPT=n
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y CONFIG_NO_HZ_FULL=y
CONFIG_NO_HZ_FULL_ALL=y CONFIG_NO_HZ_FULL_ALL=n
CONFIG_NO_HZ_FULL_SYSIDLE=y CONFIG_NO_HZ_FULL_SYSIDLE=y
CONFIG_RCU_FAST_NO_HZ=n CONFIG_RCU_FAST_NO_HZ=n
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y

View File

@ -0,0 +1 @@
nohz_full=2-9

View File

@ -51,7 +51,7 @@ per_version_boot_params () {
`rcutorture_param_n_barrier_cbs "$1"` \ `rcutorture_param_n_barrier_cbs "$1"` \
rcutorture.stat_interval=15 \ rcutorture.stat_interval=15 \
rcutorture.shutdown_secs=$3 \ rcutorture.shutdown_secs=$3 \
rcutorture.rcutorture_runnable=1 \ rcutorture.torture_runnable=1 \
rcutorture.test_no_idle_hz=1 \ rcutorture.test_no_idle_hz=1 \
rcutorture.verbose=1 rcutorture.verbose=1
} }

View File

@ -6,6 +6,7 @@ this case. There are probably much better ways of doing this.
That said, here are the commands: That said, here are the commands:
------------------------------------------------------------------------ ------------------------------------------------------------------------
cd tools/testing/selftests/rcutorture
zcat /initrd.img > /tmp/initrd.img.zcat zcat /initrd.img > /tmp/initrd.img.zcat
mkdir initrd mkdir initrd
cd initrd cd initrd