RCU pull request for v6.6

doc.2023.07.14b: Documentation updates.
 
 fixes.2023.08.16a: Miscellaneous fixes, perhaps most notably simplifying
 	SRCU_NOTIFIER_INIT() as suggested.
 
 rcu-tasks.2023.07.24a: RCU Tasks updates, most notably treating
 	Tasks RCU callbacks as lazy while still treating synchronous
 	grace periods as urgent.  Also included are a fix that restores
 	the ability to apply debug-objects to RCU Tasks and a fix for a
 	race condition that could result in false-positive failures of
 	the boot-time self-test code.
 
 rcuscale.2023.07.14b: RCU-scalability performance-test updates,
 	most notably adding the ability to measure the CPU consumption
 	of the RCU Tasks grace-period kthread.  This proved quite
 	useful for the rcu-tasks.2023.07.24a work.
 
 refscale.2023.07.14b: Reference-acquisition/release performance-test
 	updates, including a fix for an uninitialized wait_queue_head_t.
 
 torture.2023.08.14a: Miscellaneous torture-test updates.
 
 torturescripts.2023.07.20a: Torture-test scripting updates, including
 	removal of the no-longer-functional formal-verification scripts,
 	test builds of individual RCU Tasks flavors, better diagnostics
 	for loss of connectivity for distributed rcutorture tests,
 	disabling of reboot loops in qemu/KVM-based rcutorture testing,
 	and passing of init parameters to rcutorture's init program.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEbK7UrM+RBIrCoViJnr8S83LZ+4wFAmTjkssTHHBhdWxtY2tA
 a2VybmVsLm9yZwAKCRCevxLzctn7jITND/9zEqYNbeFrcBs/YaHdoAjsNgOt1IYN
 csfF/KArVgdvmrwlV/nEaQMLaJcw9X7DVU5+7E2JbbDaB/2FSacseNyKk6mfgSVK
 /0rnTOXpqI9/T1HiJObWZvDQFuKL12bfteXWGJg1sMt2JUGZ4nAWhdZ3xRjp2XkO
 89qB5r0fF8gyGwvQ3M29ss8T9Oy0uUNJmDY/QyVxHM6dhkpSAezFffKzD7C4zkSV
 WucRTpYJ7bs6otBGtVmwz3x60UAuLwcVfQyB+CTbnGLsps9yAYU+1DDVdm7olcr3
 ARXMeboeodMvy9jWXhtbWRVAAob4lVUDXQN27kb4sBgroRQBfQXMuByRAU6s0VtX
 frOl6rlbORuAetsC8wFL0IFVn4yTpvXKbYw7h1MXTs7gVVbl33O9FieGvWu0r79/
 VR4Xw+JbmYWtyvFV8Zaq4iIEcOe+PeNH6u0bPx+htsHYd1+DUG2UY0MVmJQ3a4sb
 ygejA6mguCk7KBzWab8wdDpgAfhNwg0T9a+LQYcaskuD5SSWjYqqg6i1ulqqqyiE
 bOfRKDX4mWmAobWKHLssqUrjiLbxfygIaHjCrt7rWJKPIs1bK/WfWa4JbrE0NRwK
 9IDd1lWc9C+zoUpjyZWSG3ahK5lWo2u4sPNoRtMQjowjobIz1cBhaEwmFe72bG7C
 FCKb7Da2oUaLOw==
 =EujZ
 -----END PGP SIGNATURE-----

Merge tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull RCU updates from Paul McKenney:

 - Documentation updates

 - Miscellaneous fixes, perhaps most notably simplifying
   SRCU_NOTIFIER_INIT() as suggested

 - RCU Tasks updates, most notably treating Tasks RCU callbacks as lazy
   while still treating synchronous grace periods as urgent (see the
   sketch after this list). Also included are a fix that restores the
   ability to apply debug-objects to RCU Tasks and a fix for a race
   condition that could result in false-positive failures of the
   boot-time self-test code

 - RCU-scalability performance-test updates, most notably adding the
   ability to measure the CPU consumption of the RCU Tasks grace-period
   kthread. This proved quite useful for the RCU Tasks work

 - Reference-acquisition/release performance-test updates, including a
   fix for an uninitialized wait_queue_head_t

 - Miscellaneous torture-test updates

 - Torture-test scripting updates, including removal of the
   no-longer-functional formal-verification scripts, test builds of
   individual RCU Tasks flavors, better diagnostics for loss of
   connectivity for distributed rcutorture tests, disabling of reboot
   loops in qemu/KVM-based rcutorture testing, and passing of init
   parameters to rcutorture's init program
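
As a rough sketch of the lazy-callback behavior described in the RCU Tasks
item above (illustrative only; the structure, callback, and helper functions
below are hypothetical, not code from this series):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		struct rcu_head rh;
		int data;
	};

	static void foo_free_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rh));
	}

	static void foo_release_async(struct foo *fp)
	{
		/* Lazy: the callback may be batched for up to the */
		/* rcupdate.rcu_tasks_lazy_ms timeout before a grace */
		/* period is forced. */
		call_rcu_tasks(&fp->rh, foo_free_cb);
	}

	static void foo_release_sync(struct foo *fp)
	{
		/* Urgent: batching is always disabled for synchronous */
		/* grace periods. */
		synchronize_rcu_tasks();
		kfree(fp);
	}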

* tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (64 commits)
  rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls
  rcu: Make the rcu_nocb_poll boot parameter usable via boot config
  rcu: Mark __rcu_irq_enter_check_tick() ->rcu_urgent_qs load
  srcu,notifier: Remove #ifdefs in favor of SRCU Tiny srcu_usage
  rcutorture: Stop right-shifting torture_random() return values
  torture: Stop right-shifting torture_random() return values
  torture: Move stutter_wait() timeouts to hrtimers
  torture: Move torture_shuffle() timeouts to hrtimers
  torture: Move torture_onoff() timeouts to hrtimers
  torture: Make torture_hrtimeout_*() use TASK_IDLE
  torture: Add lock_torture writer_fifo module parameter
  torture: Add a kthread-creation callback to _torture_create_kthread()
  rcu-tasks: Fix boot-time RCU tasks debug-only deadlock
  rcu-tasks: Permit use of debug-objects with RCU Tasks flavors
  checkpatch: Complain about unexpected uses of RCU Tasks Trace
  torture: Cause mkinitrd.sh to indicate failure on compile errors
  torture: Make init program dump command-line arguments
  torture: Switch qemu from -nographic to -display none
  torture: Add init-program support for loongarch
  torture: Avoid torture-test reboot loops
  ...
Linus Torvalds, 2023-08-28 13:19:28 -07:00, commit 68cadad11f
78 changed files with 636 additions and 1771 deletions

Documentation/RCU/lockdep-splat.rst

@ -10,7 +10,7 @@ misuses of the RCU API, most notably using one of the rcu_dereference()
family to access an RCU-protected pointer without the proper protection.
When such misuse is detected, a lockdep-RCU splat is emitted.
The usual cause of a lockdep-RCU slat is someone accessing an
The usual cause of a lockdep-RCU splat is someone accessing an
RCU-protected data structure without either (1) being in the right kind of
RCU read-side critical section or (2) holding the right update-side lock.
This problem can therefore be serious: it might result in random memory

Documentation/RCU/rculist_nulls.rst

@ -18,7 +18,16 @@ to solve following problem.
Without 'nulls', a typical RCU linked list managing objects which are
allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can use the following
algorithms:
algorithms. The following examples assume 'obj' is a pointer to such
an object, which has the type shown below.
::
struct object {
struct hlist_node obj_node;
atomic_t refcnt;
unsigned int key;
};
1) Lookup algorithm
-------------------
@ -26,11 +35,13 @@ algorithms:
::
begin:
rcu_read_lock()
rcu_read_lock();
obj = lockless_lookup(key);
if (obj) {
if (!try_get_ref(obj)) // might fail for free objects
if (!try_get_ref(obj)) { // might fail for free objects
rcu_read_unlock();
goto begin;
}
/*
* Because a writer could delete the object, and a writer could
* reuse that object before the end of the RCU grace period, we
@ -54,7 +65,7 @@ but a version with an additional memory barrier (smp_rmb())
struct hlist_node *node, *next;
for (pos = rcu_dereference((head)->first);
pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
pos = rcu_dereference(next))
if (obj->key == key)
return obj;
@ -66,10 +77,10 @@ And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::
struct hlist_node *node;
for (pos = rcu_dereference((head)->first);
pos && ({ prefetch(pos->next); 1; }) &&
({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
pos = rcu_dereference(pos->next))
if (obj->key == key)
return obj;
if (obj->key == key)
return obj;
return NULL;
Quoting Corey Minyard::
@ -86,7 +97,7 @@ Quoting Corey Minyard::
2) Insertion algorithm
----------------------
We need to make sure a reader cannot read the new 'obj->obj_next' value
We need to make sure a reader cannot read the new 'obj->obj_node.next' value
and the previous value of 'obj->key'. Otherwise, an item could be deleted
from one chain and inserted into another chain. If the new chain was empty
before the move, the 'next' pointer is NULL, and a lockless reader cannot
@ -129,8 +140,7 @@ very very fast (before the end of RCU grace period)
Avoiding extra smp_rmb()
========================
With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
and extra _release() in insert function.
With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().
For example, if we choose to store the slot number as the 'nulls'
end-of-list marker for each slot of the hash table, we can detect
@ -142,6 +152,9 @@ the beginning. If the object was moved to the same chain,
then the reader doesn't care: It might occasionally
scan the list again without harm.
Note that using hlist_nulls means the type of 'obj_node' field of
'struct object' becomes 'struct hlist_nulls_node'.
1) lookup algorithm
-------------------
@ -151,7 +164,7 @@ scan the list again without harm.
head = &table[slot];
begin:
rcu_read_lock();
hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
if (obj->key == key) {
if (!try_get_ref(obj)) { // might fail for free objects
rcu_read_unlock();
@ -182,6 +195,9 @@ scan the list again without harm.
2) Insert algorithm
-------------------
Same as the one above, but uses hlist_nulls_add_head_rcu() instead of
hlist_add_head_rcu().
::
/*
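
Pulling the lookup description above together, a compact nulls-based lookup
might look as follows. This is a sketch using the 'struct object' layout and
the illustrative try_get_ref()/put_ref() helpers from this document, and it
assumes each slot's nulls value is the slot number:

	struct object *lookup(struct hlist_nulls_head *table,
			      unsigned int slot, unsigned int key)
	{
		struct object *obj;
		struct hlist_nulls_node *node;
		struct hlist_nulls_head *head = &table[slot];

	begin:
		rcu_read_lock();
		hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
			if (obj->key == key) {
				if (!try_get_ref(obj)) { // might fail for free objects
					rcu_read_unlock();
					goto begin;
				}
				if (obj->key != key) { // not the object we expected
					put_ref(obj);
					rcu_read_unlock();
					goto begin;
				}
				rcu_read_unlock();
				return obj;
			}
		}
		/*
		 * If the nulls value does not match this slot, the object
		 * being examined moved to another chain mid-walk, so the
		 * lookup must be restarted.
		 */
		if (get_nulls_value(node) != slot) {
			rcu_read_unlock();
			goto begin;
		}
		rcu_read_unlock();
		return NULL;
	}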

Documentation/admin-guide/kernel-parameters.txt

@ -2938,6 +2938,10 @@
locktorture.torture_type= [KNL]
Specify the locking implementation to test.
locktorture.writer_fifo= [KNL]
Run the write-side locktorture kthreads at
sched_set_fifo() real-time priority.
locktorture.verbose= [KNL]
Enable additional printk() statements.
@ -4949,6 +4953,15 @@
test until boot completes in order to avoid
interference.
rcuscale.kfree_by_call_rcu= [KNL]
In kernels built with CONFIG_RCU_LAZY=y, test
call_rcu() instead of kfree_rcu().
rcuscale.kfree_mult= [KNL]
Instead of allocating an object of size kfree_obj,
allocate one of kfree_mult * sizeof(kfree_obj).
Defaults to 1.
rcuscale.kfree_rcu_test= [KNL]
Set to measure performance of kfree_rcu() flooding.
@ -4974,6 +4987,12 @@
Number of loops doing rcuscale.kfree_alloc_num number
of allocations and frees.
rcuscale.minruntime= [KNL]
Set the minimum test run time in seconds. This
does not affect the data-collection interval,
but instead allows better measurement of things
like CPU consumption.
rcuscale.nreaders= [KNL]
Set number of RCU readers. The value -1 selects
N, where N is the number of CPUs. A value
@ -4988,7 +5007,7 @@
the same as for rcuscale.nreaders.
N, where N is the number of CPUs
rcuscale.perf_type= [KNL]
rcuscale.scale_type= [KNL]
Specify the RCU implementation to test.
rcuscale.shutdown= [KNL]
@ -5004,6 +5023,11 @@
in microseconds. The default of zero says
no holdoff.
rcuscale.writer_holdoff_jiffies= [KNL]
Additional write-side holdoff between grace
periods, but in jiffies. The default of zero
says no holdoff.
rcutorture.fqs_duration= [KNL]
Set duration of force_quiescent_state bursts
in microseconds.
@ -5285,6 +5309,13 @@
number avoids disturbing real-time workloads,
but lengthens grace periods.
rcupdate.rcu_task_lazy_lim= [KNL]
Number of callbacks on a given CPU that will
cancel laziness on that CPU. Use -1 to disable
cancellation of laziness, but be advised that
doing so increases the danger of OOM due to
callback flooding.
rcupdate.rcu_task_stall_info= [KNL]
Set initial timeout in jiffies for RCU task stall
informational messages, which give some indication
@ -5314,6 +5345,29 @@
A change in value does not take effect until
the beginning of the next grace period.
rcupdate.rcu_tasks_lazy_ms= [KNL]
Set the timeout in milliseconds for RCU Tasks asynchronous
callback batching for call_rcu_tasks().
A negative value will take the default. A value
of zero will disable batching. Batching is
always disabled for synchronize_rcu_tasks().
rcupdate.rcu_tasks_rude_lazy_ms= [KNL]
Set the timeout in milliseconds for RCU Tasks
Rude asynchronous callback batching for
call_rcu_tasks_rude(). A negative value
will take the default. A value of zero will
disable batching. Batching is always disabled
for synchronize_rcu_tasks_rude().
rcupdate.rcu_tasks_trace_lazy_ms= [KNL]
Set the timeout in milliseconds for RCU Tasks
Trace asynchronous callback batching for
call_rcu_tasks_trace(). A negative value
will take the default. A value of zero will
disable batching. Batching is always disabled
for synchronize_rcu_tasks_trace().
rcupdate.rcu_self_test= [KNL]
Run the RCU early boot self tests
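
As a worked example of combining the new Tasks RCU knobs documented above
(values illustrative only), booting with

	rcupdate.rcu_tasks_lazy_ms=1000 rcupdate.rcu_task_lazy_lim=64

would allow asynchronous call_rcu_tasks() callbacks to be batched for up to
one second, while still forcing a grace period as soon as 64 callbacks are
queued on any one CPU.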

include/linux/notifier.h

@ -73,9 +73,7 @@ struct raw_notifier_head {
struct srcu_notifier_head {
struct mutex mutex;
#ifdef CONFIG_TREE_SRCU
struct srcu_usage srcuu;
#endif
struct srcu_struct srcu;
struct notifier_block __rcu *head;
};
@ -106,7 +104,6 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
#define RAW_NOTIFIER_INIT(name) { \
.head = NULL }
#ifdef CONFIG_TREE_SRCU
#define SRCU_NOTIFIER_INIT(name, pcpu) \
{ \
.mutex = __MUTEX_INITIALIZER(name.mutex), \
@ -114,14 +111,6 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
.srcuu = __SRCU_USAGE_INIT(name.srcuu), \
.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
}
#else
#define SRCU_NOTIFIER_INIT(name, pcpu) \
{ \
.mutex = __MUTEX_INITIALIZER(name.mutex), \
.head = NULL, \
.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
}
#endif
#define ATOMIC_NOTIFIER_HEAD(name) \
struct atomic_notifier_head name = \

include/linux/rculist_nulls.h

@ -101,7 +101,7 @@ static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
{
struct hlist_nulls_node *first = h->first;
n->next = first;
WRITE_ONCE(n->next, first);
WRITE_ONCE(n->pprev, &h->first);
rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
if (!is_a_nulls(first))
@ -137,7 +137,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
last = i;
if (last) {
n->next = last->next;
WRITE_ONCE(n->next, last->next);
n->pprev = &last->next;
rcu_assign_pointer(hlist_nulls_next_rcu(last), n);
} else {
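
For context, a sketch of the resulting publication pattern (equivalent to the
patched hlist_nulls_add_head_rcu() above, with a hypothetical function name):
the WRITE_ONCE() stores pair with the marked loads that lockless readers use
to traverse ->next, avoiding KCSAN data-race reports and store tearing, while
rcu_assign_pointer() orders the node's initialization before its publication:

	#include <linux/rculist_nulls.h>

	static void publish_at_head(struct hlist_nulls_node *n,
				    struct hlist_nulls_head *h)
	{
		struct hlist_nulls_node *first = h->first;

		WRITE_ONCE(n->next, first);	 // marked store; readers load ->next locklessly
		WRITE_ONCE(n->pprev, &h->first);
		rcu_assign_pointer(hlist_nulls_first_rcu(h), n); // ordered publication
		if (!is_a_nulls(first))
			WRITE_ONCE(first->pprev, &n->next);
	}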

include/linux/rcupdate_trace.h

@ -87,6 +87,7 @@ static inline void rcu_read_unlock_trace(void)
void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func);
void synchronize_rcu_tasks_trace(void);
void rcu_barrier_tasks_trace(void);
struct task_struct *get_rcu_tasks_trace_gp_kthread(void);
#else
/*
* The BPF JIT forms these addresses even when it doesn't call these

include/linux/rcupdate_wait.h

@ -42,6 +42,11 @@ do { \
* call_srcu() function, with this wrapper supplying the pointer to the
* corresponding srcu_struct.
*
* Note that call_rcu_hurry() should be used instead of call_rcu()
* because in kernels built with CONFIG_RCU_LAZY=y the delay between the
* invocation of call_rcu() and that of the corresponding RCU callback
* can be multiple seconds.
*
* The first argument tells Tiny RCU's _wait_rcu_gp() not to
* bother waiting for RCU. The reason for this is because anywhere
* synchronize_rcu_mult() can be called is automatically already a full

include/linux/srcutiny.h

@ -48,6 +48,10 @@ void srcu_drive_gp(struct work_struct *wp);
#define DEFINE_STATIC_SRCU(name) \
static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
// Dummy structure for srcu_notifier_head.
struct srcu_usage { };
#define __SRCU_USAGE_INIT(name) { }
void synchronize_srcu(struct srcu_struct *ssp);
/*

include/linux/torture.h

@ -108,12 +108,15 @@ bool torture_must_stop(void);
bool torture_must_stop_irq(void);
void torture_kthread_stopping(char *title);
int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
char *f, struct task_struct **tp);
char *f, struct task_struct **tp, void (*cbf)(struct task_struct *tp));
void _torture_stop_kthread(char *m, struct task_struct **tp);
#define torture_create_kthread(n, arg, tp) \
_torture_create_kthread(n, (arg), #n, "Creating " #n " task", \
"Failed to create " #n, &(tp))
"Failed to create " #n, &(tp), NULL)
#define torture_create_kthread_cb(n, arg, tp, cbf) \
_torture_create_kthread(n, (arg), #n, "Creating " #n " task", \
"Failed to create " #n, &(tp), cbf)
#define torture_stop_kthread(n, tp) \
_torture_stop_kthread("Stopping " #n " task", &(tp))

kernel/locking/locktorture.c

@ -45,6 +45,7 @@ torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
torture_param(int, rt_boost, 2,
"Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
torture_param(int, writer_fifo, 0, "Run writers at sched_set_fifo() priority");
torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
torture_param(int, nested_locks, 0, "Number of nested locks (max = 8)");
/* Going much higher trips "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" errors */
@ -809,7 +810,8 @@ static int lock_torture_writer(void *arg)
bool skip_main_lock;
VERBOSE_TOROUT_STRING("lock_torture_writer task started");
set_user_nice(current, MAX_NICE);
if (!rt_task(current))
set_user_nice(current, MAX_NICE);
do {
if ((torture_random(&rand) & 0xfffff) == 0)
@ -1015,8 +1017,7 @@ static void lock_torture_cleanup(void)
if (writer_tasks) {
for (i = 0; i < cxt.nrealwriters_stress; i++)
torture_stop_kthread(lock_torture_writer,
writer_tasks[i]);
torture_stop_kthread(lock_torture_writer, writer_tasks[i]);
kfree(writer_tasks);
writer_tasks = NULL;
}
@ -1244,8 +1245,9 @@ static int __init lock_torture_init(void)
goto create_reader;
/* Create writer. */
firsterr = torture_create_kthread(lock_torture_writer, &cxt.lwsa[i],
writer_tasks[i]);
firsterr = torture_create_kthread_cb(lock_torture_writer, &cxt.lwsa[i],
writer_tasks[i],
writer_fifo ? sched_set_fifo : NULL);
if (torture_init_error(firsterr))
goto unwind;

kernel/rcu/rcu.h

@ -511,6 +511,14 @@ static inline void show_rcu_tasks_gp_kthreads(void) {}
void rcu_request_urgent_qs_task(struct task_struct *t);
#endif /* #else #ifdef CONFIG_TINY_RCU */
#ifdef CONFIG_TASKS_RCU
struct task_struct *get_rcu_tasks_gp_kthread(void);
#endif // # ifdef CONFIG_TASKS_RCU
#ifdef CONFIG_TASKS_RUDE_RCU
struct task_struct *get_rcu_tasks_rude_gp_kthread(void);
#endif // # ifdef CONFIG_TASKS_RUDE_RCU
#define RCU_SCHEDULER_INACTIVE 0
#define RCU_SCHEDULER_INIT 1
#define RCU_SCHEDULER_RUNNING 2

kernel/rcu/rcuscale.c

@ -84,15 +84,17 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
#endif
torture_param(bool, gp_async, false, "Use asynchronous GP wait primitives");
torture_param(int, gp_async_max, 1000, "Max # outstanding waits per reader");
torture_param(int, gp_async_max, 1000, "Max # outstanding waits per writer");
torture_param(bool, gp_exp, false, "Use expedited GP wait primitives");
torture_param(int, holdoff, 10, "Holdoff time before test start (s)");
torture_param(int, minruntime, 0, "Minimum run time (s)");
torture_param(int, nreaders, -1, "Number of RCU reader threads");
torture_param(int, nwriters, -1, "Number of RCU updater threads");
torture_param(bool, shutdown, RCUSCALE_SHUTDOWN,
"Shutdown at end of scalability tests.");
torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
torture_param(int, writer_holdoff_jiffies, 0, "Holdoff (jiffies) between GPs, zero to disable");
torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() scale test?");
torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate.");
torture_param(int, kfree_by_call_rcu, 0, "Use call_rcu() to emulate kfree_rcu()?");
@ -139,6 +141,7 @@ struct rcu_scale_ops {
void (*gp_barrier)(void);
void (*sync)(void);
void (*exp_sync)(void);
struct task_struct *(*rso_gp_kthread)(void);
const char *name;
};
@ -295,6 +298,7 @@ static struct rcu_scale_ops tasks_ops = {
.gp_barrier = rcu_barrier_tasks,
.sync = synchronize_rcu_tasks,
.exp_sync = synchronize_rcu_tasks,
.rso_gp_kthread = get_rcu_tasks_gp_kthread,
.name = "tasks"
};
@ -306,6 +310,44 @@ static struct rcu_scale_ops tasks_ops = {
#endif // #else // #ifdef CONFIG_TASKS_RCU
#ifdef CONFIG_TASKS_RUDE_RCU
/*
* Definitions for RCU-tasks-rude scalability testing.
*/
static int tasks_rude_scale_read_lock(void)
{
return 0;
}
static void tasks_rude_scale_read_unlock(int idx)
{
}
static struct rcu_scale_ops tasks_rude_ops = {
.ptype = RCU_TASKS_RUDE_FLAVOR,
.init = rcu_sync_scale_init,
.readlock = tasks_rude_scale_read_lock,
.readunlock = tasks_rude_scale_read_unlock,
.get_gp_seq = rcu_no_completed,
.gp_diff = rcu_seq_diff,
.async = call_rcu_tasks_rude,
.gp_barrier = rcu_barrier_tasks_rude,
.sync = synchronize_rcu_tasks_rude,
.exp_sync = synchronize_rcu_tasks_rude,
.rso_gp_kthread = get_rcu_tasks_rude_gp_kthread,
.name = "tasks-rude"
};
#define TASKS_RUDE_OPS &tasks_rude_ops,
#else // #ifdef CONFIG_TASKS_RUDE_RCU
#define TASKS_RUDE_OPS
#endif // #else // #ifdef CONFIG_TASKS_RUDE_RCU
#ifdef CONFIG_TASKS_TRACE_RCU
/*
@ -334,6 +376,7 @@ static struct rcu_scale_ops tasks_tracing_ops = {
.gp_barrier = rcu_barrier_tasks_trace,
.sync = synchronize_rcu_tasks_trace,
.exp_sync = synchronize_rcu_tasks_trace,
.rso_gp_kthread = get_rcu_tasks_trace_gp_kthread,
.name = "tasks-tracing"
};
@ -410,10 +453,12 @@ rcu_scale_writer(void *arg)
{
int i = 0;
int i_max;
unsigned long jdone;
long me = (long)arg;
struct rcu_head *rhp = NULL;
bool started = false, done = false, alldone = false;
u64 t;
DEFINE_TORTURE_RANDOM(tr);
u64 *wdp;
u64 *wdpp = writer_durations[me];
@ -424,7 +469,7 @@ rcu_scale_writer(void *arg)
sched_set_fifo_low(current);
if (holdoff)
schedule_timeout_uninterruptible(holdoff * HZ);
schedule_timeout_idle(holdoff * HZ);
/*
* Wait until rcu_end_inkernel_boot() is called for normal GP tests
@ -445,9 +490,12 @@ rcu_scale_writer(void *arg)
}
}
jdone = jiffies + minruntime * HZ;
do {
if (writer_holdoff)
udelay(writer_holdoff);
if (writer_holdoff_jiffies)
schedule_timeout_idle(torture_random(&tr) % writer_holdoff_jiffies + 1);
wdp = &wdpp[i];
*wdp = ktime_get_mono_fast_ns();
if (gp_async) {
@ -475,7 +523,7 @@ retry:
if (!started &&
atomic_read(&n_rcu_scale_writer_started) >= nrealwriters)
started = true;
if (!done && i >= MIN_MEAS) {
if (!done && i >= MIN_MEAS && time_after(jiffies, jdone)) {
done = true;
sched_set_normal(current, 0);
pr_alert("%s%s rcu_scale_writer %ld has %d measurements\n",
@ -518,8 +566,8 @@ static void
rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag)
{
pr_alert("%s" SCALE_FLAG
"--- %s: nreaders=%d nwriters=%d verbose=%d shutdown=%d\n",
scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown);
"--- %s: gp_async=%d gp_async_max=%d gp_exp=%d holdoff=%d minruntime=%d nreaders=%d nwriters=%d writer_holdoff=%d writer_holdoff_jiffies=%d verbose=%d shutdown=%d\n",
scale_type, tag, gp_async, gp_async_max, gp_exp, holdoff, minruntime, nrealreaders, nrealwriters, writer_holdoff, writer_holdoff_jiffies, verbose, shutdown);
}
/*
@ -556,6 +604,8 @@ static struct task_struct **kfree_reader_tasks;
static int kfree_nrealthreads;
static atomic_t n_kfree_scale_thread_started;
static atomic_t n_kfree_scale_thread_ended;
static struct task_struct *kthread_tp;
static u64 kthread_stime;
struct kfree_obj {
char kfree_obj[8];
@ -701,6 +751,10 @@ kfree_scale_init(void)
unsigned long jif_start;
unsigned long orig_jif;
pr_alert("%s" SCALE_FLAG
"--- kfree_rcu_test: kfree_mult=%d kfree_by_call_rcu=%d kfree_nthreads=%d kfree_alloc_num=%d kfree_loops=%d kfree_rcu_test_double=%d kfree_rcu_test_single=%d\n",
scale_type, kfree_mult, kfree_by_call_rcu, kfree_nthreads, kfree_alloc_num, kfree_loops, kfree_rcu_test_double, kfree_rcu_test_single);
// Also, do a quick self-test to ensure laziness is as much as
// expected.
if (kfree_by_call_rcu && !IS_ENABLED(CONFIG_RCU_LAZY)) {
@ -797,6 +851,18 @@ rcu_scale_cleanup(void)
if (gp_exp && gp_async)
SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
// If built-in, just report all of the GP kthread's CPU time.
if (IS_BUILTIN(CONFIG_RCU_SCALE_TEST) && !kthread_tp && cur_ops->rso_gp_kthread)
kthread_tp = cur_ops->rso_gp_kthread();
if (kthread_tp) {
u32 ns;
u64 us;
kthread_stime = kthread_tp->stime - kthread_stime;
us = div_u64_rem(kthread_stime, 1000, &ns);
pr_info("rcu_scale: Grace-period kthread CPU time: %llu.%03u us\n", us, ns);
show_rcu_gp_kthreads();
}
if (kfree_rcu_test) {
kfree_scale_cleanup();
return;
@ -885,7 +951,7 @@ rcu_scale_init(void)
long i;
int firsterr = 0;
static struct rcu_scale_ops *scale_ops[] = {
&rcu_ops, &srcu_ops, &srcud_ops, TASKS_OPS TASKS_TRACING_OPS
&rcu_ops, &srcu_ops, &srcud_ops, TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS
};
if (!torture_init_begin(scale_type, verbose))
@ -910,6 +976,11 @@ rcu_scale_init(void)
if (cur_ops->init)
cur_ops->init();
if (cur_ops->rso_gp_kthread) {
kthread_tp = cur_ops->rso_gp_kthread();
if (kthread_tp)
kthread_stime = kthread_tp->stime;
}
if (kfree_rcu_test)
return kfree_scale_init();

kernel/rcu/rcutorture.c

@ -1581,6 +1581,7 @@ rcu_torture_writer(void *arg)
rcu_access_pointer(rcu_torture_current) !=
&rcu_tortures[i]) {
tracing_off();
show_rcu_gp_kthreads();
WARN(1, "%s: rtort_pipe_count: %d\n", __func__, rcu_tortures[i].rtort_pipe_count);
rcu_ftrace_dump(DUMP_ALL);
}
@ -1876,7 +1877,7 @@ static int
rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
{
int mask = rcutorture_extend_mask_max();
unsigned long randmask1 = torture_random(trsp) >> 8;
unsigned long randmask1 = torture_random(trsp);
unsigned long randmask2 = randmask1 >> 3;
unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
@ -1935,7 +1936,7 @@ rcutorture_loop_extend(int *readstate, struct torture_random_state *trsp,
if (!((mask - 1) & mask))
return rtrsp; /* Current RCU reader not extendable. */
/* Bias towards larger numbers of loops. */
i = (torture_random(trsp) >> 3);
i = torture_random(trsp);
i = ((i | (i >> 3)) & RCUTORTURE_RDR_MAX_LOOPS) + 1;
for (j = 0; j < i; j++) {
mask = rcutorture_extend_mask(*readstate, trsp);
@ -2136,7 +2137,7 @@ static int rcu_nocb_toggle(void *arg)
toggle_fuzz = NSEC_PER_USEC;
do {
r = torture_random(&rand);
cpu = (r >> 4) % (maxcpu + 1);
cpu = (r >> 1) % (maxcpu + 1);
if (r & 0x1) {
rcu_nocb_cpu_offload(cpu);
atomic_long_inc(&n_nocb_offload);

kernel/rcu/refscale.c

@ -528,6 +528,38 @@ static struct ref_scale_ops clock_ops = {
.name = "clock"
};
static void ref_jiffies_section(const int nloops)
{
u64 x = 0;
int i;
preempt_disable();
for (i = nloops; i >= 0; i--)
x += jiffies;
preempt_enable();
stopopts = x;
}
static void ref_jiffies_delay_section(const int nloops, const int udl, const int ndl)
{
u64 x = 0;
int i;
preempt_disable();
for (i = nloops; i >= 0; i--) {
x += jiffies;
un_delay(udl, ndl);
}
preempt_enable();
stopopts = x;
}
static struct ref_scale_ops jiffies_ops = {
.readsection = ref_jiffies_section,
.delaysection = ref_jiffies_delay_section,
.name = "jiffies"
};
////////////////////////////////////////////////////////////////////////
//
// Methods leveraging SLAB_TYPESAFE_BY_RCU.
@ -1047,7 +1079,7 @@ ref_scale_init(void)
int firsterr = 0;
static struct ref_scale_ops *scale_ops[] = {
&rcu_ops, &srcu_ops, RCU_TRACE_OPS RCU_TASKS_OPS &refcnt_ops, &rwlock_ops,
&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops,
&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops, &jiffies_ops,
&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops,
};
@ -1107,12 +1139,11 @@ ref_scale_init(void)
VERBOSE_SCALEOUT("Starting %d reader threads", nreaders);
for (i = 0; i < nreaders; i++) {
init_waitqueue_head(&reader_tasks[i].wq);
firsterr = torture_create_kthread(ref_scale_reader, (void *)i,
reader_tasks[i].task);
if (torture_init_error(firsterr))
goto unwind;
init_waitqueue_head(&(reader_tasks[i].wq));
}
// Main Task

kernel/rcu/tasks.h

@ -25,6 +25,8 @@ typedef void (*postgp_func_t)(struct rcu_tasks *rtp);
* @cblist: Callback list.
* @lock: Lock protecting per-CPU callback list.
* @rtp_jiffies: Jiffies counter value for statistics.
* @lazy_timer: Timer to unlazify callbacks.
* @urgent_gp: Number of additional non-lazy grace periods.
* @rtp_n_lock_retries: Rough lock-contention statistic.
* @rtp_work: Work queue for invoking callbacks.
* @rtp_irq_work: IRQ work queue for deferred wakeups.
@ -38,6 +40,8 @@ struct rcu_tasks_percpu {
raw_spinlock_t __private lock;
unsigned long rtp_jiffies;
unsigned long rtp_n_lock_retries;
struct timer_list lazy_timer;
unsigned int urgent_gp;
struct work_struct rtp_work;
struct irq_work rtp_irq_work;
struct rcu_head barrier_q_head;
@ -51,7 +55,6 @@ struct rcu_tasks_percpu {
* @cbs_wait: RCU wait allowing a new callback to get kthread's attention.
* @cbs_gbl_lock: Lock protecting callback list.
* @tasks_gp_mutex: Mutex protecting grace period, needed during mid-boot dead zone.
* @kthread_ptr: This flavor's grace-period/callback-invocation kthread.
* @gp_func: This flavor's grace-period-wait function.
* @gp_state: Grace period's most recent state transition (debugging).
* @gp_sleep: Per-grace-period sleep to prevent CPU-bound looping.
@ -61,6 +64,8 @@ struct rcu_tasks_percpu {
* @tasks_gp_seq: Number of grace periods completed since boot.
* @n_ipis: Number of IPIs sent to encourage grace periods to end.
* @n_ipis_fails: Number of IPI-send failures.
* @kthread_ptr: This flavor's grace-period/callback-invocation kthread.
* @lazy_jiffies: Number of jiffies to allow callbacks to be lazy.
* @pregp_func: This flavor's pre-grace-period function (optional).
* @pertask_func: This flavor's per-task scan function (optional).
* @postscan_func: This flavor's post-task scan function (optional).
@ -92,6 +97,7 @@ struct rcu_tasks {
unsigned long n_ipis;
unsigned long n_ipis_fails;
struct task_struct *kthread_ptr;
unsigned long lazy_jiffies;
rcu_tasks_gp_func_t gp_func;
pregp_func_t pregp_func;
pertask_func_t pertask_func;
@ -127,6 +133,7 @@ static struct rcu_tasks rt_name = \
.gp_func = gp, \
.call_func = call, \
.rtpcpu = &rt_name ## __percpu, \
.lazy_jiffies = DIV_ROUND_UP(HZ, 4), \
.name = n, \
.percpu_enqueue_shift = order_base_2(CONFIG_NR_CPUS), \
.percpu_enqueue_lim = 1, \
@ -139,9 +146,7 @@ static struct rcu_tasks rt_name = \
#ifdef CONFIG_TASKS_RCU
/* Track exiting tasks in order to allow them to be waited for. */
DEFINE_STATIC_SRCU(tasks_rcu_exit_srcu);
#endif
#ifdef CONFIG_TASKS_RCU
/* Report delay in synchronize_srcu() completion in rcu_tasks_postscan(). */
static void tasks_rcu_exit_srcu_stall(struct timer_list *unused);
static DEFINE_TIMER(tasks_rcu_exit_srcu_stall_timer, tasks_rcu_exit_srcu_stall);
@ -171,6 +176,8 @@ static int rcu_task_contend_lim __read_mostly = 100;
module_param(rcu_task_contend_lim, int, 0444);
static int rcu_task_collapse_lim __read_mostly = 10;
module_param(rcu_task_collapse_lim, int, 0444);
static int rcu_task_lazy_lim __read_mostly = 32;
module_param(rcu_task_lazy_lim, int, 0444);
/* RCU tasks grace-period state for debugging. */
#define RTGS_INIT 0
@ -229,7 +236,7 @@ static const char *tasks_gp_state_getname(struct rcu_tasks *rtp)
#endif /* #ifndef CONFIG_TINY_RCU */
// Initialize per-CPU callback lists for the specified flavor of
// Tasks RCU.
// Tasks RCU. Do not enqueue callbacks before this function is invoked.
static void cblist_init_generic(struct rcu_tasks *rtp)
{
int cpu;
@ -237,7 +244,6 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
int lim;
int shift;
raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
if (rcu_task_enqueue_lim < 0) {
rcu_task_enqueue_lim = 1;
rcu_task_cb_adjust = true;
@ -260,22 +266,48 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
WARN_ON_ONCE(!rtpcp);
if (cpu)
raw_spin_lock_init(&ACCESS_PRIVATE(rtpcp, lock));
raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
local_irq_save(flags); // serialize initialization
if (rcu_segcblist_empty(&rtpcp->cblist))
rcu_segcblist_init(&rtpcp->cblist);
local_irq_restore(flags);
INIT_WORK(&rtpcp->rtp_work, rcu_tasks_invoke_cbs_wq);
rtpcp->cpu = cpu;
rtpcp->rtpp = rtp;
if (!rtpcp->rtp_blkd_tasks.next)
INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks);
raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
}
raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name,
data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust);
}
// Compute wakeup time for lazy callback timer.
static unsigned long rcu_tasks_lazy_time(struct rcu_tasks *rtp)
{
return jiffies + rtp->lazy_jiffies;
}
// Timer handler that unlazifies lazy callbacks.
static void call_rcu_tasks_generic_timer(struct timer_list *tlp)
{
unsigned long flags;
bool needwake = false;
struct rcu_tasks *rtp;
struct rcu_tasks_percpu *rtpcp = from_timer(rtpcp, tlp, lazy_timer);
rtp = rtpcp->rtpp;
raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
if (!rcu_segcblist_empty(&rtpcp->cblist) && rtp->lazy_jiffies) {
if (!rtpcp->urgent_gp)
rtpcp->urgent_gp = 1;
needwake = true;
mod_timer(&rtpcp->lazy_timer, rcu_tasks_lazy_time(rtp));
}
raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
if (needwake)
rcuwait_wake_up(&rtp->cbs_wait);
}
// IRQ-work handler that does deferred wakeup for call_rcu_tasks_generic().
static void call_rcu_tasks_iw_wakeup(struct irq_work *iwp)
{
@ -292,6 +324,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
{
int chosen_cpu;
unsigned long flags;
bool havekthread = smp_load_acquire(&rtp->kthread_ptr);
int ideal_cpu;
unsigned long j;
bool needadjust = false;
@ -316,12 +349,19 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
READ_ONCE(rtp->percpu_enqueue_lim) != nr_cpu_ids)
needadjust = true; // Defer adjustment to avoid deadlock.
}
if (!rcu_segcblist_is_enabled(&rtpcp->cblist)) {
raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
cblist_init_generic(rtp);
raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
// Queuing callbacks before initialization not yet supported.
if (WARN_ON_ONCE(!rcu_segcblist_is_enabled(&rtpcp->cblist)))
rcu_segcblist_init(&rtpcp->cblist);
needwake = (func == wakeme_after_rcu) ||
(rcu_segcblist_n_cbs(&rtpcp->cblist) == rcu_task_lazy_lim);
if (havekthread && !needwake && !timer_pending(&rtpcp->lazy_timer)) {
if (rtp->lazy_jiffies)
mod_timer(&rtpcp->lazy_timer, rcu_tasks_lazy_time(rtp));
else
needwake = rcu_segcblist_empty(&rtpcp->cblist);
}
needwake = rcu_segcblist_empty(&rtpcp->cblist);
if (needwake)
rtpcp->urgent_gp = 3;
rcu_segcblist_enqueue(&rtpcp->cblist, rhp);
raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
if (unlikely(needadjust)) {
@ -415,9 +455,14 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
}
rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
(void)rcu_segcblist_accelerate(&rtpcp->cblist, rcu_seq_snap(&rtp->tasks_gp_seq));
if (rcu_segcblist_pend_cbs(&rtpcp->cblist))
if (rtpcp->urgent_gp > 0 && rcu_segcblist_pend_cbs(&rtpcp->cblist)) {
if (rtp->lazy_jiffies)
rtpcp->urgent_gp--;
needgpcb |= 0x3;
if (!rcu_segcblist_empty(&rtpcp->cblist))
} else if (rcu_segcblist_empty(&rtpcp->cblist)) {
rtpcp->urgent_gp = 0;
}
if (rcu_segcblist_ready_cbs(&rtpcp->cblist))
needgpcb |= 0x1;
raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
}
@ -525,10 +570,12 @@ static void rcu_tasks_one_gp(struct rcu_tasks *rtp, bool midboot)
if (unlikely(midboot)) {
needgpcb = 0x2;
} else {
mutex_unlock(&rtp->tasks_gp_mutex);
set_tasks_gp_state(rtp, RTGS_WAIT_CBS);
rcuwait_wait_event(&rtp->cbs_wait,
(needgpcb = rcu_tasks_need_gpcb(rtp)),
TASK_IDLE);
mutex_lock(&rtp->tasks_gp_mutex);
}
if (needgpcb & 0x2) {
@ -549,11 +596,19 @@ static void rcu_tasks_one_gp(struct rcu_tasks *rtp, bool midboot)
// RCU-tasks kthread that detects grace periods and invokes callbacks.
static int __noreturn rcu_tasks_kthread(void *arg)
{
int cpu;
struct rcu_tasks *rtp = arg;
for_each_possible_cpu(cpu) {
struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
timer_setup(&rtpcp->lazy_timer, call_rcu_tasks_generic_timer, 0);
rtpcp->urgent_gp = 1;
}
/* Run on housekeeping CPUs by default. Sysadm can move if desired. */
housekeeping_affine(current, HK_TYPE_RCU);
WRITE_ONCE(rtp->kthread_ptr, current); // Let GPs start!
smp_store_release(&rtp->kthread_ptr, current); // Let GPs start!
/*
* Each pass through the following loop makes one check for
@ -635,16 +690,22 @@ static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
{
int cpu;
bool havecbs = false;
bool haveurgent = false;
bool haveurgentcbs = false;
for_each_possible_cpu(cpu) {
struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
if (!data_race(rcu_segcblist_empty(&rtpcp->cblist))) {
if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)))
havecbs = true;
if (data_race(rtpcp->urgent_gp))
haveurgent = true;
if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)) && data_race(rtpcp->urgent_gp))
haveurgentcbs = true;
if (havecbs && haveurgent && haveurgentcbs)
break;
}
}
pr_info("%s: %s(%d) since %lu g:%lu i:%lu/%lu %c%c %s\n",
pr_info("%s: %s(%d) since %lu g:%lu i:%lu/%lu %c%c%c%c l:%lu %s\n",
rtp->kname,
tasks_gp_state_getname(rtp), data_race(rtp->gp_state),
jiffies - data_race(rtp->gp_jiffies),
@ -652,6 +713,9 @@ static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
data_race(rtp->n_ipis_fails), data_race(rtp->n_ipis),
".k"[!!data_race(rtp->kthread_ptr)],
".C"[havecbs],
".u"[haveurgent],
".U"[haveurgentcbs],
rtp->lazy_jiffies,
s);
}
#endif // #ifndef CONFIG_TINY_RCU
@ -1020,11 +1084,16 @@ void rcu_barrier_tasks(void)
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
int rcu_tasks_lazy_ms = -1;
module_param(rcu_tasks_lazy_ms, int, 0444);
static int __init rcu_spawn_tasks_kthread(void)
{
cblist_init_generic(&rcu_tasks);
rcu_tasks.gp_sleep = HZ / 10;
rcu_tasks.init_fract = HZ / 10;
if (rcu_tasks_lazy_ms >= 0)
rcu_tasks.lazy_jiffies = msecs_to_jiffies(rcu_tasks_lazy_ms);
rcu_tasks.pregp_func = rcu_tasks_pregp_step;
rcu_tasks.pertask_func = rcu_tasks_pertask;
rcu_tasks.postscan_func = rcu_tasks_postscan;
@ -1042,6 +1111,12 @@ void show_rcu_tasks_classic_gp_kthread(void)
EXPORT_SYMBOL_GPL(show_rcu_tasks_classic_gp_kthread);
#endif // !defined(CONFIG_TINY_RCU)
struct task_struct *get_rcu_tasks_gp_kthread(void)
{
return rcu_tasks.kthread_ptr;
}
EXPORT_SYMBOL_GPL(get_rcu_tasks_gp_kthread);
/*
* Contribute to protect against tasklist scan blind spot while the
* task is exiting and may be removed from the tasklist. See
@ -1173,10 +1248,15 @@ void rcu_barrier_tasks_rude(void)
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks_rude);
int rcu_tasks_rude_lazy_ms = -1;
module_param(rcu_tasks_rude_lazy_ms, int, 0444);
static int __init rcu_spawn_tasks_rude_kthread(void)
{
cblist_init_generic(&rcu_tasks_rude);
rcu_tasks_rude.gp_sleep = HZ / 10;
if (rcu_tasks_rude_lazy_ms >= 0)
rcu_tasks_rude.lazy_jiffies = msecs_to_jiffies(rcu_tasks_rude_lazy_ms);
rcu_spawn_tasks_kthread_generic(&rcu_tasks_rude);
return 0;
}
@ -1188,6 +1268,13 @@ void show_rcu_tasks_rude_gp_kthread(void)
}
EXPORT_SYMBOL_GPL(show_rcu_tasks_rude_gp_kthread);
#endif // !defined(CONFIG_TINY_RCU)
struct task_struct *get_rcu_tasks_rude_gp_kthread(void)
{
return rcu_tasks_rude.kthread_ptr;
}
EXPORT_SYMBOL_GPL(get_rcu_tasks_rude_gp_kthread);
#endif /* #ifdef CONFIG_TASKS_RUDE_RCU */
////////////////////////////////////////////////////////////////////////
@ -1793,6 +1880,9 @@ void rcu_barrier_tasks_trace(void)
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks_trace);
int rcu_tasks_trace_lazy_ms = -1;
module_param(rcu_tasks_trace_lazy_ms, int, 0444);
static int __init rcu_spawn_tasks_trace_kthread(void)
{
cblist_init_generic(&rcu_tasks_trace);
@ -1807,6 +1897,8 @@ static int __init rcu_spawn_tasks_trace_kthread(void)
if (rcu_tasks_trace.init_fract <= 0)
rcu_tasks_trace.init_fract = 1;
}
if (rcu_tasks_trace_lazy_ms >= 0)
rcu_tasks_trace.lazy_jiffies = msecs_to_jiffies(rcu_tasks_trace_lazy_ms);
rcu_tasks_trace.pregp_func = rcu_tasks_trace_pregp_step;
rcu_tasks_trace.postscan_func = rcu_tasks_trace_postscan;
rcu_tasks_trace.holdouts_func = check_all_holdout_tasks_trace;
@ -1830,6 +1922,12 @@ void show_rcu_tasks_trace_gp_kthread(void)
EXPORT_SYMBOL_GPL(show_rcu_tasks_trace_gp_kthread);
#endif // !defined(CONFIG_TINY_RCU)
struct task_struct *get_rcu_tasks_trace_gp_kthread(void)
{
return rcu_tasks_trace.kthread_ptr;
}
EXPORT_SYMBOL_GPL(get_rcu_tasks_trace_gp_kthread);
#else /* #ifdef CONFIG_TASKS_TRACE_RCU */
static void exit_tasks_rcu_finish_trace(struct task_struct *t) { }
#endif /* #else #ifdef CONFIG_TASKS_TRACE_RCU */

kernel/rcu/tree.c

@ -632,7 +632,7 @@ void __rcu_irq_enter_check_tick(void)
// prevents self-deadlock. So we can safely recheck under the lock.
// Note that the nohz_full state currently cannot change.
raw_spin_lock_rcu_node(rdp->mynode);
if (rdp->rcu_urgent_qs && !rdp->rcu_forced_tick) {
if (READ_ONCE(rdp->rcu_urgent_qs) && !rdp->rcu_forced_tick) {
// A nohz_full CPU is in the kernel and RCU needs a
// quiescent state. Turn on the tick!
WRITE_ONCE(rdp->rcu_forced_tick, true);
@ -677,12 +677,16 @@ static void rcu_disable_urgency_upon_qs(struct rcu_data *rdp)
}
/**
* rcu_is_watching - see if RCU thinks that the current CPU is not idle
* rcu_is_watching - RCU read-side critical sections permitted on current CPU?
*
* Return true if RCU is watching the running CPU, which means that this
* CPU can safely enter RCU read-side critical sections. In other words,
* if the current CPU is not in its idle loop or is in an interrupt or
* NMI handler, return true.
* Return @true if RCU is watching the running CPU and @false otherwise.
A @true return means that this CPU can safely enter RCU read-side
* critical sections.
*
* Although calls to rcu_is_watching() from most parts of the kernel
* will return @true, there are important exceptions. For example, if the
* current CPU is deep within its idle loop, in kernel entry/exit code,
* or offline, rcu_is_watching() will return @false.
*
* Make notrace because it can be called by the internal functions of
* ftrace, and making this notrace removes unnecessary recursion calls.

kernel/rcu/tree_nocb.h

@ -77,9 +77,9 @@ __setup("rcu_nocbs", rcu_nocb_setup);
static int __init parse_rcu_nocb_poll(char *arg)
{
rcu_nocb_poll = true;
return 0;
return 1;
}
early_param("rcu_nocb_poll", parse_rcu_nocb_poll);
__setup("rcu_nocb_poll", parse_rcu_nocb_poll);
/*
* Don't bother bypassing ->cblist if the call_rcu() rate is low.
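
The switch above from early_param() to __setup() is what makes rcu_nocb_poll
settable via the boot config; note the differing return-value conventions,
which is why the handler now returns 1. A minimal sketch of the two
conventions (flag and handler names hypothetical):

	#include <linux/init.h>
	#include <linux/types.h>

	static bool my_early_flag;
	static bool my_flag;

	static int __init parse_my_early_flag(char *arg)
	{
		my_early_flag = true;
		return 0;	/* early_param(): 0 means success. */
	}
	early_param("my_early_flag", parse_my_early_flag);

	static int __init parse_my_flag(char *arg)
	{
		my_flag = true;
		return 1;	/* __setup(): 1 means the option was consumed. */
	}
	__setup("my_flag", parse_my_flag);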

kernel/torture.c

@ -37,6 +37,7 @@
#include <linux/ktime.h>
#include <asm/byteorder.h>
#include <linux/torture.h>
#include <linux/sched/rt.h>
#include "rcu/rcu.h"
MODULE_LICENSE("GPL");
@ -54,6 +55,9 @@ module_param(verbose_sleep_frequency, int, 0444);
static int verbose_sleep_duration = 1;
module_param(verbose_sleep_duration, int, 0444);
static int random_shuffle;
module_param(random_shuffle, int, 0444);
static char *torture_type;
static int verbose;
@ -88,8 +92,8 @@ int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, struct torture_random_s
ktime_t hto = baset_ns;
if (trsp)
hto += (torture_random(trsp) >> 3) % fuzzt_ns;
set_current_state(TASK_UNINTERRUPTIBLE);
hto += torture_random(trsp) % fuzzt_ns;
set_current_state(TASK_IDLE);
return schedule_hrtimeout(&hto, HRTIMER_MODE_REL);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_ns);
@ -350,22 +354,22 @@ torture_onoff(void *arg)
if (onoff_holdoff > 0) {
VERBOSE_TOROUT_STRING("torture_onoff begin holdoff");
schedule_timeout_interruptible(onoff_holdoff);
torture_hrtimeout_jiffies(onoff_holdoff, &rand);
VERBOSE_TOROUT_STRING("torture_onoff end holdoff");
}
while (!torture_must_stop()) {
if (disable_onoff_at_boot && !rcu_inkernel_boot_has_ended()) {
schedule_timeout_interruptible(HZ / 10);
torture_hrtimeout_jiffies(HZ / 10, &rand);
continue;
}
cpu = (torture_random(&rand) >> 4) % (maxcpu + 1);
cpu = torture_random(&rand) % (maxcpu + 1);
if (!torture_offline(cpu,
&n_offline_attempts, &n_offline_successes,
&sum_offline, &min_offline, &max_offline))
torture_online(cpu,
&n_online_attempts, &n_online_successes,
&sum_online, &min_online, &max_online);
schedule_timeout_interruptible(onoff_interval);
torture_hrtimeout_jiffies(onoff_interval, &rand);
}
stop:
@ -518,6 +522,7 @@ static void torture_shuffle_task_unregister_all(void)
*/
static void torture_shuffle_tasks(void)
{
DEFINE_TORTURE_RANDOM(rand);
struct shuffle_task *stp;
cpumask_setall(shuffle_tmp_mask);
@ -537,8 +542,10 @@ static void torture_shuffle_tasks(void)
cpumask_clear_cpu(shuffle_idle_cpu, shuffle_tmp_mask);
mutex_lock(&shuffle_task_mutex);
list_for_each_entry(stp, &shuffle_task_list, st_l)
set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask);
list_for_each_entry(stp, &shuffle_task_list, st_l) {
if (!random_shuffle || torture_random(&rand) & 0x1)
set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask);
}
mutex_unlock(&shuffle_task_mutex);
cpus_read_unlock();
@ -550,9 +557,11 @@ static void torture_shuffle_tasks(void)
*/
static int torture_shuffle(void *arg)
{
DEFINE_TORTURE_RANDOM(rand);
VERBOSE_TOROUT_STRING("torture_shuffle task started");
do {
schedule_timeout_interruptible(shuffle_interval);
torture_hrtimeout_jiffies(shuffle_interval, &rand);
torture_shuffle_tasks();
torture_shutdown_absorb("torture_shuffle");
} while (!torture_must_stop());
@ -728,12 +737,12 @@ bool stutter_wait(const char *title)
cond_resched_tasks_rcu_qs();
spt = READ_ONCE(stutter_pause_test);
for (; spt; spt = READ_ONCE(stutter_pause_test)) {
if (!ret) {
if (!ret && !rt_task(current)) {
sched_set_normal(current, MAX_NICE);
ret = true;
}
if (spt == 1) {
schedule_timeout_interruptible(1);
torture_hrtimeout_jiffies(1, NULL);
} else if (spt == 2) {
while (READ_ONCE(stutter_pause_test)) {
if (!(i++ & 0xffff))
@ -741,7 +750,7 @@ bool stutter_wait(const char *title)
cond_resched();
}
} else {
schedule_timeout_interruptible(round_jiffies_relative(HZ));
torture_hrtimeout_jiffies(round_jiffies_relative(HZ), NULL);
}
torture_shutdown_absorb(title);
}
@ -926,7 +935,7 @@ EXPORT_SYMBOL_GPL(torture_kthread_stopping);
* it starts, you will need to open-code your own.
*/
int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
char *f, struct task_struct **tp)
char *f, struct task_struct **tp, void (*cbf)(struct task_struct *tp))
{
int ret = 0;
@ -938,6 +947,10 @@ int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
*tp = NULL;
return ret;
}
if (cbf)
cbf(*tp);
wake_up_process(*tp); // Process is sleeping, so ordering provided.
torture_shuffle_task_register(*tp);
return ret;

scripts/checkpatch.pl

@ -7457,6 +7457,30 @@ sub process {
}
}
# Complain about RCU Tasks Trace used outside of BPF (and of course, RCU).
our $rcu_trace_funcs = qr{(?x:
rcu_read_lock_trace |
rcu_read_lock_trace_held |
rcu_read_unlock_trace |
call_rcu_tasks_trace |
synchronize_rcu_tasks_trace |
rcu_barrier_tasks_trace |
rcu_request_urgent_qs_task
)};
our $rcu_trace_paths = qr{(?x:
kernel/bpf/ |
include/linux/bpf |
net/bpf/ |
kernel/rcu/ |
include/linux/rcu
)};
if ($line =~ /\b($rcu_trace_funcs)\s*\(/) {
if ($realfile !~ m{^$rcu_trace_paths}) {
WARN("RCU_TASKS_TRACE",
"use of RCU tasks trace is incorrect outside BPF or core RCU code\n" . $herecurr);
}
}
# check for lockdep_set_novalidate_class
if ($line =~ /^.\s*lockdep_set_novalidate_class\s*\(/ ||
$line =~ /__lockdep_no_validate__\s*\)/ ) {
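
For illustration, code like the following in a file outside the BPF and RCU
paths listed above would now draw the RCU_TASKS_TRACE warning; this is a
hypothetical driver snippet, not code from this series:

	/* drivers/misc/example.c (hypothetical) */
	#include <linux/rcupdate_trace.h>

	static int example_data;

	static int example_read(void)
	{
		int ret;

		rcu_read_lock_trace();	/* checkpatch: RCU_TASKS_TRACE */
		ret = READ_ONCE(example_data);
		rcu_read_unlock_trace();
		return ret;
	}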

tools/testing/selftests/rcutorture/bin/configcheck.sh

@ -3,6 +3,8 @@
#
# Usage: configcheck.sh .config .config-template
#
# Non-empty output if errors detected.
#
# Copyright (C) IBM Corporation, 2011
#
# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
@ -10,32 +12,35 @@
T="`mktemp -d ${TMPDIR-/tmp}/configcheck.sh.XXXXXX`"
trap 'rm -rf $T' 0
sed -e 's/"//g' < $1 > $T/.config
# function test_kconfig_enabled ( Kconfig-var=val )
function test_kconfig_enabled () {
if ! grep -q "^$1$" $T/.config
then
echo :$1: improperly set
return 1
fi
return 0
}
sed -e 's/"//g' -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' < $2 |
awk '
{
print "if grep -q \"" $0 "\" < '"$T/.config"'";
print "then";
print "\t:";
print "else";
if ($1 == "#") {
print "\tif grep -q \"" $2 "\" < '"$T/.config"'";
print "\tthen";
print "\t\tif test \"$firsttime\" = \"\""
print "\t\tthen"
print "\t\t\tfirsttime=1"
print "\t\tfi"
print "\t\techo \":" $2 ": improperly set\"";
print "\telse";
print "\t\t:";
print "\tfi";
} else {
print "\tif test \"$firsttime\" = \"\""
print "\tthen"
print "\t\tfirsttime=1"
print "\tfi"
print "\techo \":" $0 ": improperly set\"";
}
print "fi";
}' | sh
# function test_kconfig_disabled ( Kconfig-var )
function test_kconfig_disabled () {
if grep -q "^$1=n$" $T/.config
then
return 0
fi
if grep -q "^$1=" $T/.config
then
echo :$1=n: improperly set
return 1
fi
return 0
}
sed -e 's/"//g' < $1 > $T/.config
sed -e 's/^#CHECK#//' < $2 > $T/ConfigFragment
grep '^CONFIG_.*=n$' $T/ConfigFragment |
sed -e 's/^/test_kconfig_disabled /' -e 's/=n$//' > $T/kconfig-n.sh
. $T/kconfig-n.sh
grep -v '^CONFIG_.*=n$' $T/ConfigFragment | grep '^CONFIG_' |
sed -e 's/^/test_kconfig_enabled /' > $T/kconfig-not-n.sh
. $T/kconfig-not-n.sh

tools/testing/selftests/rcutorture/bin/functions.sh

@ -45,7 +45,7 @@ checkarg () {
configfrag_boot_params () {
if test -r "$2.boot"
then
echo $1 `grep -v '^#' "$2.boot" | tr '\012' ' '`
echo `grep -v '^#' "$2.boot" | tr '\012' ' '` $1
else
echo $1
fi

tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh

@ -40,6 +40,10 @@ awk '
sum += $5 / 1000.;
}
/rcu_scale: Grace-period kthread CPU time/ {
cputime = $6;
}
END {
newNR = asort(gptimes);
if (newNR <= 0) {
@ -78,6 +82,8 @@ END {
print "90th percentile grace-period duration: " gptimes[pct90];
print "99th percentile grace-period duration: " gptimes[pct99];
print "Maximum grace-period duration: " gptimes[newNR];
print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches;
if (cputime != "")
cpustr = " CPU: " cputime;
print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches cpustr;
print "Computed from rcuscale printk output.";
}'

tools/testing/selftests/rcutorture/bin/kvm-recheck.sh

@ -16,6 +16,8 @@
T=/tmp/kvm-recheck.sh.$$
trap 'rm -f $T' 0 2
configerrors=0
PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
. functions.sh
for rd in "$@"
@ -32,7 +34,7 @@ do
fi
TORTURE_SUITE="`cat $i/../torture_suite`" ; export TORTURE_SUITE
configfile=`echo $i | sed -e 's,^.*/,,'`
rm -f $i/console.log.*.diags
rm -f $i/console.log.*.diags $i/ConfigFragment.diags
case "${TORTURE_SUITE}" in
X*)
;;
@ -49,8 +51,21 @@ do
then
echo QEMU killed
fi
configcheck.sh $i/.config $i/ConfigFragment > $T 2>&1
cat $T
configcheck.sh $i/.config $i/ConfigFragment > $i/ConfigFragment.diags 2>&1
if grep -q '^CONFIG_KCSAN=y$' $i/ConfigFragment.input
then
# KCSAN forces a number of Kconfig options, so remove
# complaints about those Kconfig options in KCSAN runs.
mv $i/ConfigFragment.diags $i/ConfigFragment.diags.kcsan
grep -v -E 'CONFIG_PROVE_RCU|CONFIG_PREEMPT_COUNT' $i/ConfigFragment.diags.kcsan > $i/ConfigFragment.diags
fi
if test -s $i/ConfigFragment.diags
then
cat $i/ConfigFragment.diags
configerrors=$((configerrors+1))
else
rm $i/ConfigFragment.diags
fi
if test -r $i/Make.oldconfig.err
then
cat $i/Make.oldconfig.err
@ -65,7 +80,14 @@ do
if test -f "$i/buildonly"
then
echo Build-only run, no boot/test
configcheck.sh $i/.config $i/ConfigFragment
configcheck.sh $i/.config $i/ConfigFragment > $i/ConfigFragment.diags 2>&1
if test -s $i/ConfigFragment.diags
then
cat $i/ConfigFragment.diags
configerrors=$((configerrors+1))
else
rm $i/ConfigFragment.diags
fi
parse-build.sh $i/Make.out $configfile
elif test -f "$i/qemu-cmd"
then
@ -79,10 +101,10 @@ do
done
if test -f "$rd/kcsan.sum"
then
if ! test -f $T
if ! test -f $i/ConfigFragment.diags
then
:
elif grep -q CONFIG_KCSAN=y $T
elif grep -q CONFIG_KCSAN=y $i/ConfigFragment.diags
then
echo "Compiler or architecture does not support KCSAN!"
echo Did you forget to switch your compiler with '--kmake-arg CC=<cc-that-supports-kcsan>'?
@ -94,17 +116,23 @@ do
fi
fi
done
if test "$configerrors" -gt 0
then
echo $configerrors runs with .config errors.
ret=1
fi
EDITOR=echo kvm-find-errors.sh "${@: -1}" > $T 2>&1
builderrors="`tr ' ' '\012' < $T | grep -c '/Make.out.diags'`"
if test "$builderrors" -gt 0
then
echo $builderrors runs with build errors.
ret=1
ret=2
fi
runerrors="`tr ' ' '\012' < $T | grep -c '/console.log.diags'`"
if test "$runerrors" -gt 0
then
echo $runerrors runs with runtime errors.
ret=2
ret=3
fi
exit $ret

tools/testing/selftests/rcutorture/bin/kvm-remote.sh

@ -137,14 +137,20 @@ chmod +x $T/bin/kvm-remote-*.sh
# Check first to avoid the need for cleanup for system-name typos
for i in $systems
do
ncpus="`ssh -o BatchMode=yes $i getconf _NPROCESSORS_ONLN 2> /dev/null`"
ssh -o BatchMode=yes $i getconf _NPROCESSORS_ONLN > $T/ssh.stdout 2> $T/ssh.stderr
ret=$?
if test "$ret" -ne 0
then
echo System $i unreachable, giving up. | tee -a "$oldrun/remote-log"
echo "System $i unreachable ($ret), giving up." | tee -a "$oldrun/remote-log"
echo ' --- ssh stdout: vvv' | tee -a "$oldrun/remote-log"
cat $T/ssh.stdout | tee -a "$oldrun/remote-log"
echo ' --- ssh stdout: ^^^' | tee -a "$oldrun/remote-log"
echo ' --- ssh stderr: vvv' | tee -a "$oldrun/remote-log"
cat $T/ssh.stderr | tee -a "$oldrun/remote-log"
echo ' --- ssh stderr: ^^^' | tee -a "$oldrun/remote-log"
exit 4
fi
echo $i: $ncpus CPUs " " `date` | tee -a "$oldrun/remote-log"
echo $i: `cat $T/ssh.stdout` CPUs " " `date` | tee -a "$oldrun/remote-log"
done
# Download and expand the tarball on all systems.

tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh

@ -9,9 +9,10 @@
#
# Usage: kvm-test-1-run.sh config resdir seconds qemu-args boot_args_in
#
# qemu-args defaults to "-enable-kvm -nographic", along with arguments
# specifying the number of CPUs and other options
# generated from the underlying CPU architecture.
# qemu-args defaults to "-enable-kvm -display none -no-reboot", along
# with arguments specifying the number of CPUs
# and other options generated from the underlying
# CPU architecture.
# boot_args_in defaults to value returned by the per_version_boot_params
# shell function.
#
@ -57,7 +58,6 @@ config_override_param () {
cat $T/Kconfig_args >> $resdir/ConfigFragment.input
config_override.sh $T/$2 $T/Kconfig_args > $T/$2.tmp
mv $T/$2.tmp $T/$2
# Note that "#CHECK#" is not permitted on commandline.
fi
}
@ -140,7 +140,7 @@ then
fi
# Generate -smp qemu argument.
qemu_args="-enable-kvm -nographic $qemu_args"
qemu_args="-enable-kvm -display none -no-reboot $qemu_args"
cpu_count=`configNR_CPUS.sh $resdir/ConfigFragment`
cpu_count=`configfrag_boot_cpus "$boot_args_in" "$config_template" "$cpu_count"`
if test "$cpu_count" -gt "$TORTURE_ALLOTED_CPUS"
@ -163,7 +163,7 @@ boot_args="`configfrag_boot_params "$boot_args_in" "$config_template"`"
boot_args="`per_version_boot_params "$boot_args" $resdir/.config $seconds`"
if test -n "$TORTURE_BOOT_GDB_ARG"
then
boot_args="$boot_args $TORTURE_BOOT_GDB_ARG"
boot_args="$TORTURE_BOOT_GDB_ARG $boot_args"
fi
# Give bare-metal advice

tools/testing/selftests/rcutorture/bin/kvm.sh

@ -186,7 +186,7 @@ do
fi
;;
--kconfig|--kconfigs)
checkarg --kconfig "(Kconfig options)" $# "$2" '^CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\( CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\)*$' '^error$'
checkarg --kconfig "(Kconfig options)" $# "$2" '^\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\( \(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\)*$' '^error$'
TORTURE_KCONFIG_ARG="`echo "$TORTURE_KCONFIG_ARG $2" | sed -e 's/^ *//' -e 's/ *$//'`"
shift
;;

tools/testing/selftests/rcutorture/bin/mkinitrd.sh

@ -10,7 +10,6 @@
D=tools/testing/selftests/rcutorture
# Prerequisite checks
[ -z "$D" ] && echo >&2 "No argument supplied" && exit 1
if [ ! -d "$D" ]; then
echo >&2 "$D does not exist: Malformed kernel source tree?"
exit 1
@ -34,12 +33,16 @@ cat > init.c << '___EOF___'
volatile unsigned long delaycount;
int main(int argc, int argv[])
int main(int argc, char *argv[])
{
int i;
struct timeval tv;
struct timeval tvb;
printf("Torture-test rudimentary init program started, command line:\n");
for (i = 0; i < argc; i++)
printf(" %s", argv[i]);
printf("\n");
for (;;) {
sleep(1);
/* Need some userspace time. */
@ -64,15 +67,23 @@ ___EOF___
# build using nolibc on supported archs (smaller executable) and fall
# back to regular glibc on other ones.
if echo -e "#if __x86_64__||__i386__||__i486__||__i586__||__i686__" \
"||__ARM_EABI__||__aarch64__||__s390x__\nyes\n#endif" \
"||__ARM_EABI__||__aarch64__||__s390x__||__loongarch__\nyes\n#endif" \
| ${CROSS_COMPILE}gcc -E -nostdlib -xc - \
| grep -q '^yes'; then
# architecture supported by nolibc
${CROSS_COMPILE}gcc -fno-asynchronous-unwind-tables -fno-ident \
-nostdlib -include ../../../../include/nolibc/nolibc.h \
-s -static -Os -o init init.c -lgcc
ret=$?
else
${CROSS_COMPILE}gcc -s -static -Os -o init init.c
ret=$?
fi
if [ "$ret" -ne 0 ]
then
echo "Failed to create a statically linked C-language initrd"
exit "$ret"
fi
rm init.c

tools/testing/selftests/rcutorture/bin/torture.sh

@ -55,6 +55,8 @@ do_kasan=yes
do_kcsan=no
do_clocksourcewd=yes
do_rt=yes
do_rcutasksflavors=yes
do_srcu_lockdep=yes
# doyesno - Helper function for yes/no arguments
function doyesno () {
@ -73,18 +75,20 @@ usage () {
echo " --configs-locktorture \"config-file list w/ repeat factor (10*LOCK01)\""
echo " --configs-scftorture \"config-file list w/ repeat factor (2*CFLIST)\""
echo " --do-all"
echo " --do-allmodconfig / --do-no-allmodconfig"
echo " --do-clocksourcewd / --do-no-clocksourcewd"
echo " --do-kasan / --do-no-kasan"
echo " --do-kcsan / --do-no-kcsan"
echo " --do-kvfree / --do-no-kvfree"
echo " --do-locktorture / --do-no-locktorture"
echo " --do-allmodconfig / --do-no-allmodconfig / --no-allmodconfig"
echo " --do-clocksourcewd / --do-no-clocksourcewd / --no-clocksourcewd"
echo " --do-kasan / --do-no-kasan / --no-kasan"
echo " --do-kcsan / --do-no-kcsan / --no-kcsan"
echo " --do-kvfree / --do-no-kvfree / --no-kvfree"
echo " --do-locktorture / --do-no-locktorture / --no-locktorture"
echo " --do-none"
echo " --do-rcuscale / --do-no-rcuscale"
echo " --do-rcutorture / --do-no-rcutorture"
echo " --do-refscale / --do-no-refscale"
echo " --do-rt / --do-no-rt"
echo " --do-scftorture / --do-no-scftorture"
echo " --do-rcuscale / --do-no-rcuscale / --no-rcuscale"
echo " --do-rcutasksflavors / --do-no-rcutasksflavors / --no-rcutasksflavors"
echo " --do-rcutorture / --do-no-rcutorture / --no-rcutorture"
echo " --do-refscale / --do-no-refscale / --no-refscale"
echo " --do-rt / --do-no-rt / --no-rt"
echo " --do-scftorture / --do-no-scftorture / --no-scftorture"
echo " --do-srcu-lockdep / --do-no-srcu-lockdep / --no-srcu-lockdep"
echo " --duration [ <minutes> | <hours>h | <days>d ]"
echo " --kcsan-kmake-arg kernel-make-arguments"
exit 1
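
The new --no-* forms are plain aliases for the corresponding --do-no-* arguments, saving some typing. For example, this hypothetical run does everything except KCSAN testing:

  tools/testing/selftests/rcutorture/bin/torture.sh --do-all --no-kcsan
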
@@ -115,6 +119,7 @@ do
;;
--do-all|--doall)
do_allmodconfig=yes
+do_rcutasksflavors=yes
do_rcutorture=yes
do_locktorture=yes
do_scftorture=yes
@@ -125,27 +130,29 @@ do
do_kasan=yes
do_kcsan=yes
do_clocksourcewd=yes
+do_srcu_lockdep=yes
;;
---do-allmodconfig|--do-no-allmodconfig)
+--do-allmodconfig|--do-no-allmodconfig|--no-allmodconfig)
do_allmodconfig=`doyesno "$1" --do-allmodconfig`
;;
---do-clocksourcewd|--do-no-clocksourcewd)
+--do-clocksourcewd|--do-no-clocksourcewd|--no-clocksourcewd)
do_clocksourcewd=`doyesno "$1" --do-clocksourcewd`
;;
---do-kasan|--do-no-kasan)
+--do-kasan|--do-no-kasan|--no-kasan)
do_kasan=`doyesno "$1" --do-kasan`
;;
---do-kcsan|--do-no-kcsan)
+--do-kcsan|--do-no-kcsan|--no-kcsan)
do_kcsan=`doyesno "$1" --do-kcsan`
;;
---do-kvfree|--do-no-kvfree)
+--do-kvfree|--do-no-kvfree|--no-kvfree)
do_kvfree=`doyesno "$1" --do-kvfree`
;;
---do-locktorture|--do-no-locktorture)
+--do-locktorture|--do-no-locktorture|--no-locktorture)
do_locktorture=`doyesno "$1" --do-locktorture`
;;
do_locktorture=`doyesno "$1" --do-locktorture`
;;
--do-none|--donone)
do_allmodconfig=no
+do_rcutasksflavors=no
do_rcutorture=no
do_locktorture=no
do_scftorture=no
@@ -156,22 +163,29 @@ do
do_kasan=no
do_kcsan=no
do_clocksourcewd=no
+do_srcu_lockdep=no
;;
---do-rcuscale|--do-no-rcuscale)
+--do-rcuscale|--do-no-rcuscale|--no-rcuscale)
do_rcuscale=`doyesno "$1" --do-rcuscale`
;;
---do-rcutorture|--do-no-rcutorture)
+--do-rcutasksflavors|--do-no-rcutasksflavors|--no-rcutasksflavors)
+do_rcutasksflavors=`doyesno "$1" --do-rcutasksflavors`
+;;
+--do-rcutorture|--do-no-rcutorture|--no-rcutorture)
do_rcutorture=`doyesno "$1" --do-rcutorture`
;;
---do-refscale|--do-no-refscale)
+--do-refscale|--do-no-refscale|--no-refscale)
do_refscale=`doyesno "$1" --do-refscale`
;;
---do-rt|--do-no-rt)
+--do-rt|--do-no-rt|--no-rt)
do_rt=`doyesno "$1" --do-rt`
;;
---do-scftorture|--do-no-scftorture)
+--do-scftorture|--do-no-scftorture|--no-scftorture)
do_scftorture=`doyesno "$1" --do-scftorture`
;;
+--do-srcu-lockdep|--do-no-srcu-lockdep|--no-srcu-lockdep)
+do_srcu_lockdep=`doyesno "$1" --do-srcu-lockdep`
+;;
--duration)
checkarg --duration "(minutes)" $# "$2" '^[0-9][0-9]*\(m\|h\|d\|\)$' '^error'
mult=1
@@ -361,6 +375,40 @@ then
fi
fi
+# Test building RCU Tasks flavors in isolation, both SMP and !SMP
+if test "$do_rcutasksflavors" = "yes"
+then
+echo " --- rcutasksflavors:" Start `date` | tee -a $T/log
+rtfdir="tools/testing/selftests/rcutorture/res/$ds/results-rcutasksflavors"
+mkdir -p "$rtfdir"
+cat > $T/rcutasksflavors << __EOF__
+#CHECK#CONFIG_TASKS_RCU=n
+#CHECK#CONFIG_TASKS_RUDE_RCU=n
+#CHECK#CONFIG_TASKS_TRACE_RCU=n
+__EOF__
+for flavor in CONFIG_TASKS_RCU CONFIG_TASKS_RUDE_RCU CONFIG_TASKS_TRACE_RCU
+do
+forceflavor="`echo $flavor | sed -e 's/^CONFIG/CONFIG_FORCE/'`"
+deselectedflavors="`grep -v $flavor $T/rcutasksflavors | tr '\012' ' ' | tr -s ' ' | sed -e 's/ *$//'`"
+echo " --- Running RCU Tasks Trace flavor $flavor `date`" >> $rtfdir/log
+tools/testing/selftests/rcutorture/bin/kvm.sh --datestamp "$ds/results-rcutasksflavors/$flavor" --buildonly --configs "TINY01 TREE04" --kconfig "CONFIG_RCU_EXPERT=y CONFIG_RCU_SCALE_TEST=y $forceflavor=y $deselectedflavors" --trust-make > $T/$flavor.out 2>&1
+retcode=$?
+if test "$retcode" -ne 0
+then
+break
+fi
+done
+if test "$retcode" -eq 0
+then
+echo "rcutasksflavors($retcode)" $rtfdir >> $T/successes
+echo Success >> $rtfdir/log
+else
+echo "rcutasksflavors($retcode)" $rtfdir >> $T/failures
+echo " --- rcutasksflavors Test summary:" >> $rtfdir/log
+echo " --- Summary: Exit code $retcode from $flavor, see Make.out" >> $rtfdir/log
+fi
+fi
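
Because --do-none and --do-rcutasksflavors compose, the new flavor-isolation builds can also be run on their own, for example (hypothetical invocation):

  tools/testing/selftests/rcutorture/bin/torture.sh --do-none --do-rcutasksflavors
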
# --torture rcu
if test "$do_rcutorture" = "yes"
then
@@ -391,6 +439,23 @@ then
torture_set "rcurttorture-exp" tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration "$duration_rcutorture" --configs "TREE03" --trust-make
fi
if test "$do_srcu_lockdep" = "yes"
then
echo " --- do-srcu-lockdep:" Start `date` | tee -a $T/log
tools/testing/selftests/rcutorture/bin/srcu_lockdep.sh --datestamp "$ds/results-srcu-lockdep" > $T/srcu_lockdep.sh.out 2>&1
retcode=$?
cp $T/srcu_lockdep.sh.out "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
if test "$retcode" -eq 0
then
echo "srcu_lockdep($retcode)" "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep" >> $T/successes
echo Success >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
else
echo "srcu_lockdep($retcode)" "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep" >> $T/failures
echo " --- srcu_lockdep Test Summary:" >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
echo " --- Summary: Exit code $retcode from srcu_lockdep.sh, see ds/results-srcu-lockdep" >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
fi
fi
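
The underlying script can also be invoked directly, as torture.sh does above; only the --datestamp value below is hypothetical:

  tools/testing/selftests/rcutorture/bin/srcu_lockdep.sh \
      --datestamp "2023.08.01-00.00.00/results-srcu-lockdep"
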
if test "$do_refscale" = yes
then
primlist="`grep '\.name[ ]*=' kernel/rcu/refscale.c | sed -e 's/^[^"]*"//' -e 's/".*$//'`"
@@ -541,11 +606,23 @@ then
fi
echo Started at $startdate, ended at `date`, duration `get_starttime_duration $starttime`. | tee -a $T/log
echo Summary: Successes: $nsuccesses Failures: $nfailures. | tee -a $T/log
tdir="`cat $T/successes $T/failures | head -1 | awk '{ print $NF }' | sed -e 's,/[^/]\+/*$,,'`"
find "$tdir" -name 'ConfigFragment.diags' -print > $T/configerrors
find "$tdir" -name 'Make.out.diags' -print > $T/builderrors
if test -s "$T/configerrors"
then
echo " Scenarios with .config errors: `wc -l "$T/configerrors" | awk '{ print $1 }'`"
nonkcsanbug="yes"
fi
if test -s "$T/builderrors"
then
echo " Scenarios with build errors: `wc -l "$T/builderrors" | awk '{ print $1 }'`"
nonkcsanbug="yes"
fi
if test -z "$nonkcsanbug" && test -s "$T/failuresum"
then
echo " All bugs were KCSAN failures."
fi
tdir="`cat $T/successes $T/failures | head -1 | awk '{ print $NF }' | sed -e 's,/[^/]\+/*$,,'`"
if test -n "$tdir" && test $compress_concurrency -gt 0
then
# KASAN vmlinux files can approach 1GB in size, so compress them.
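
The two find commands above gather per-scenario diagnostics, so a failing run can be triaged by dumping those files directly, for example (hypothetical datestamp):

  find tools/testing/selftests/rcutorture/res/2023.08.01-00.00.00 \
      -name 'ConfigFragment.diags' -o -name 'Make.out.diags' | xargs cat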

@@ -22,8 +22,9 @@ locktorture_param_onoff () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
-echo $1 `locktorture_param_onoff "$1" "$2"` \
+echo `locktorture_param_onoff "$1" "$2"` \
locktorture.stat_interval=15 \
locktorture.shutdown_secs=$3 \
-locktorture.verbose=1
+locktorture.verbose=1 \
+$1
}
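
Moving the caller-supplied "$1" from the front of the echoed list to the back is what makes user-specified settings authoritative: the kernel parses boot parameters left to right and the last setting of a duplicated module parameter wins, so a value supplied on the kvm.sh command line now overrides the script defaults. A sketch with hypothetical values:

  per_version_boot_params "locktorture.verbose=0" $resdir/.config 600
  # old output: locktorture.verbose=0 ... locktorture.verbose=1   (default wins)
  # new output: ... locktorture.verbose=1 locktorture.verbose=0   (user wins)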

@@ -6,6 +6,5 @@ CONFIG_PREEMPT=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y
-#CHECK#CONFIG_RCU_EXPERT=n
-CONFIG_TASKS_RCU=y
+CONFIG_RCU_EXPERT=y

@@ -15,4 +15,3 @@ CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
-CONFIG_BOOTPARAM_HOTPLUG_CPU0=y

@@ -46,10 +46,11 @@ rcutorture_param_stat_interval () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
-echo $1 `rcutorture_param_onoff "$1" "$2"` \
+echo `rcutorture_param_onoff "$1" "$2"` \
`rcutorture_param_n_barrier_cbs "$1"` \
`rcutorture_param_stat_interval "$1"` \
rcutorture.shutdown_secs=$3 \
rcutorture.test_no_idle_hz=1 \
-rcutorture.verbose=1
+rcutorture.verbose=1 \
+$1
}
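
The same reordering applies here, so a hypothetical command-line override such as the following now takes precedence over rcutorture's scripted defaults:

  tools/testing/selftests/rcutorture/bin/kvm.sh --configs "TREE03" \
      --bootargs "rcutorture.stat_interval=60"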

@@ -2,5 +2,7 @@ CONFIG_RCU_SCALE_TEST=y
CONFIG_PRINTK_TIME=y
CONFIG_FORCE_TASKS_RCU=y
#CHECK#CONFIG_TASKS_RCU=y
+CONFIG_FORCE_TASKS_RUDE_RCU=y
+#CHECK#CONFIG_TASKS_RUDE_RCU=y
CONFIG_FORCE_TASKS_TRACE_RCU=y
#CHECK#CONFIG_TASKS_TRACE_RCU=y

@@ -2,6 +2,8 @@ CONFIG_SMP=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
+CONFIG_PREEMPT_DYNAMIC=n
+#CHECK#CONFIG_TREE_RCU=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n

@@ -11,6 +11,7 @@
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
-echo $1 rcuscale.shutdown=1 \
-rcuscale.verbose=0
+echo rcuscale.shutdown=1 \
+rcuscale.verbose=0 \
+$1
}

@@ -2,6 +2,7 @@ CONFIG_SMP=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
+CONFIG_PREEMPT_DYNAMIC=n
#CHECK#CONFIG_PREEMPT_RCU=n
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y

@@ -11,6 +11,7 @@
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
-echo $1 refscale.shutdown=1 \
-refscale.verbose=0
+echo refscale.shutdown=1 \
+refscale.verbose=0 \
+$1
}

@@ -22,8 +22,9 @@ scftorture_param_onoff () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
-echo $1 `scftorture_param_onoff "$1" "$2"` \
+echo `scftorture_param_onoff "$1" "$2"` \
scftorture.stat_interval=15 \
scftorture.shutdown_secs=$3 \
-scftorture.verbose=1
+scftorture.verbose=1 \
+$1
}

@@ -1,2 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
srcu.c

@@ -1,17 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
all: srcu.c store_buffering
LINUX_SOURCE = ../../../../../..
modified_srcu_input = $(LINUX_SOURCE)/include/linux/srcu.h \
$(LINUX_SOURCE)/kernel/rcu/srcu.c
modified_srcu_output = include/linux/srcu.h srcu.c
include/linux/srcu.h: srcu.c
srcu.c: modify_srcu.awk Makefile $(modified_srcu_input)
awk -f modify_srcu.awk $(modified_srcu_input) $(modified_srcu_output)
store_buffering:
@cd tests/store_buffering; make

@@ -1,2 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
srcu.h

@@ -1 +0,0 @@
#include <LINUX_SOURCE/linux/kconfig.h>

@@ -1,152 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This header has been modified to remove definitions of types that
* are defined in standard userspace headers or are problematic for some
* other reason.
*/
#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H
#define __EXPORTED_HEADERS__
#include <uapi/linux/types.h>
#ifndef __ASSEMBLY__
#define DECLARE_BITMAP(name, bits) \
unsigned long name[BITS_TO_LONGS(bits)]
typedef __u32 __kernel_dev_t;
/* bsd */
typedef unsigned char u_char;
typedef unsigned short u_short;
typedef unsigned int u_int;
typedef unsigned long u_long;
/* sysv */
typedef unsigned char unchar;
typedef unsigned short ushort;
typedef unsigned int uint;
typedef unsigned long ulong;
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef __u8 u_int8_t;
typedef __s8 int8_t;
typedef __u16 u_int16_t;
typedef __s16 int16_t;
typedef __u32 u_int32_t;
typedef __s32 int32_t;
#endif /* !(__BIT_TYPES_DEFINED__) */
typedef __u8 uint8_t;
typedef __u16 uint16_t;
typedef __u32 uint32_t;
/* this is a special 64bit data type that is 8-byte aligned */
#define aligned_u64 __u64 __attribute__((aligned(8)))
#define aligned_be64 __be64 __attribute__((aligned(8)))
#define aligned_le64 __le64 __attribute__((aligned(8)))
/**
* The type used for indexing onto a disc or disc partition.
*
* Linux always considers sectors to be 512 bytes long independently
* of the device's real block size.
*
* blkcnt_t is the type of the inode's block count.
*/
typedef u64 sector_t;
/*
* The type of an index into the pagecache.
*/
#define pgoff_t unsigned long
/*
* A dma_addr_t can hold any valid DMA address, i.e., any address returned
* by the DMA API.
*
* If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
* bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
* but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
* so they don't care about the size of the actual bus addresses.
*/
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
typedef u64 dma_addr_t;
#else
typedef u32 dma_addr_t;
#endif
#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif
typedef phys_addr_t resource_size_t;
/*
* This type is the placeholder for a hardware interrupt number. It has to be
* big enough to enclose whatever representation is used by a given platform.
*/
typedef unsigned long irq_hw_number_t;
typedef struct {
int counter;
} atomic_t;
#ifdef CONFIG_64BIT
typedef struct {
long counter;
} atomic64_t;
#endif
struct list_head {
struct list_head *next, *prev;
};
struct hlist_head {
struct hlist_node *first;
};
struct hlist_node {
struct hlist_node *next, **pprev;
};
/**
* struct callback_head - callback structure for use with RCU and task_work
* @next: next update requests in a list
* @func: actual update function to call after the grace period.
*
* The struct is aligned to the size of a pointer. On most architectures it happens
* naturally due to ABI requirements, but some architectures (like CRIS) have
* a weird ABI and we need to ask for it explicitly.
*
* The alignment is required to guarantee that bits 0 and 1 of @next will be
* clear under normal conditions -- as long as we use call_rcu() or
* call_srcu() to queue callback.
*
* This guarantee is important for a few reasons:
* - future call_rcu_lazy() will make use of lower bits in the pointer;
* - the structure shares storage space in struct page with @compound_head,
* which encodes PageTail() in bit 0. The guarantee is needed to avoid
* false-positive PageTail().
*/
struct callback_head {
struct callback_head *next;
void (*func)(struct callback_head *head);
} __attribute__((aligned(sizeof(void *))));
#define rcu_head callback_head
typedef void (*rcu_callback_t)(struct rcu_head *head);
typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func);
/* clocksource cycle base type */
typedef u64 cycle_t;
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_TYPES_H */

@@ -1,376 +0,0 @@
#!/usr/bin/awk -f
# SPDX-License-Identifier: GPL-2.0
# Modify SRCU for formal verification. The first argument should be srcu.h and
# the second should be srcu.c. Outputs modified srcu.h and srcu.c into the
# current directory.
BEGIN {
if (ARGC != 5) {
print "Usange: input.h input.c output.h output.c" > "/dev/stderr";
exit 1;
}
h_output = ARGV[3];
c_output = ARGV[4];
ARGC = 3;
# Tokenize using FS and not RS as FS supports regular expressions. Each
# record is one line of source, except that backslashed lines are
# combined. Comments are treated as field separators, as are quotes.
quote_regexp="\"([^\\\\\"]|\\\\.)*\"";
comment_regexp="\\/\\*([^*]|\\*+[^*/])*\\*\\/|\\/\\/.*(\n|$)";
FS="([ \\\\\t\n\v\f;,.=(){}+*/<>&|^-]|\\[|\\]|" comment_regexp "|" quote_regexp ")+";
inside_srcu_struct = 0;
inside_srcu_init_def = 0;
srcu_init_param_name = "";
in_macro = 0;
brace_nesting = 0;
paren_nesting = 0;
# Allow the manipulation of the last field separator after it has been
# seen.
last_fs = "";
# Whether the last field separator was intended to be output.
last_fs_print = 0;
# rcu_batches stores the initialization for each instance of struct
# rcu_batch
in_comment = 0;
outputfile = "";
}
{
prev_outputfile = outputfile;
if (FILENAME ~ /\.h$/) {
outputfile = h_output;
if (FNR != NR) {
print "Incorrect file order" > "/dev/stderr";
exit 1;
}
}
else
outputfile = c_output;
if (prev_outputfile && outputfile != prev_outputfile) {
new_outputfile = outputfile;
outputfile = prev_outputfile;
update_fieldsep("", 0);
outputfile = new_outputfile;
}
}
# Combine the next line into $0.
function combine_line() {
ret = getline next_line;
if (ret == 0) {
# Don't allow two consecutive getlines at the end of the file
if (eof_found) {
print "Error: expected more input." > "/dev/stderr";
exit 1;
} else {
eof_found = 1;
}
} else if (ret == -1) {
print "Error reading next line of file" FILENAME > "/dev/stderr";
exit 1;
}
$0 = $0 "\n" next_line;
}
# Combine backslashed lines and multiline comments.
function combine_backslashes() {
while (/\\$|\/\*([^*]|\*+[^*\/])*\**$/) {
combine_line();
}
}
function read_line() {
combine_line();
combine_backslashes();
}
# Print out field separators and update variables that depend on them. Only
# print if p is true. Call with sep="" and p=0 to print out the last field
# separator.
function update_fieldsep(sep, p) {
# Count braces
sep_tmp = sep;
gsub(quote_regexp "|" comment_regexp, "", sep_tmp);
while (1)
{
if (sub("[^{}()]*\\{", "", sep_tmp)) {
brace_nesting++;
continue;
}
if (sub("[^{}()]*\\}", "", sep_tmp)) {
brace_nesting--;
if (brace_nesting < 0) {
print "Unbalanced braces!" > "/dev/stderr";
exit 1;
}
continue;
}
if (sub("[^{}()]*\\(", "", sep_tmp)) {
paren_nesting++;
continue;
}
if (sub("[^{}()]*\\)", "", sep_tmp)) {
paren_nesting--;
if (paren_nesting < 0) {
print "Unbalanced parenthesis!" > "/dev/stderr";
exit 1;
}
continue;
}
break;
}
if (last_fs_print)
printf("%s", last_fs) > outputfile;
last_fs = sep;
last_fs_print = p;
}
# Shifts the fields down by n positions. Calls next if there are no more. If p
# is true then print out field separators.
function shift_fields(n, p) {
do {
if (match($0, FS) > 0) {
update_fieldsep(substr($0, RSTART, RLENGTH), p);
if (RSTART + RLENGTH <= length())
$0 = substr($0, RSTART + RLENGTH);
else
$0 = "";
} else {
update_fieldsep("", 0);
print "" > outputfile;
next;
}
} while (--n > 0);
}
# Shifts and prints the first n fields.
function print_fields(n) {
do {
update_fieldsep("", 0);
printf("%s", $1) > outputfile;
shift_fields(1, 1);
} while (--n > 0);
}
{
combine_backslashes();
}
# Print leading FS
{
if (match($0, "^(" FS ")+") > 0) {
update_fieldsep(substr($0, RSTART, RLENGTH), 1);
if (RSTART + RLENGTH <= length())
$0 = substr($0, RSTART + RLENGTH);
else
$0 = "";
}
}
# Parse the line.
{
while (NF > 0) {
if ($1 == "struct" && NF < 3) {
read_line();
continue;
}
if (FILENAME ~ /\.h$/ && !inside_srcu_struct &&
brace_nesting == 0 && paren_nesting == 0 &&
$1 == "struct" && $2 == "srcu_struct" &&
$0 ~ "^struct(" FS ")+srcu_struct(" FS ")+\\{") {
inside_srcu_struct = 1;
print_fields(2);
continue;
}
if (inside_srcu_struct && brace_nesting == 0 &&
paren_nesting == 0) {
inside_srcu_struct = 0;
update_fieldsep("", 0);
for (name in rcu_batches)
print "extern struct rcu_batch " name ";" > outputfile;
}
if (inside_srcu_struct && $1 == "struct" && $2 == "rcu_batch") {
# Move rcu_batches outside of the struct.
rcu_batches[$3] = "";
shift_fields(3, 1);
sub(/;[[:space:]]*$/, "", last_fs);
continue;
}
if (FILENAME ~ /\.h$/ && !inside_srcu_init_def &&
$1 == "#define" && $2 == "__SRCU_STRUCT_INIT") {
inside_srcu_init_def = 1;
srcu_init_param_name = $3;
in_macro = 1;
print_fields(3);
continue;
}
if (inside_srcu_init_def && brace_nesting == 0 &&
paren_nesting == 0) {
inside_srcu_init_def = 0;
in_macro = 0;
continue;
}
if (inside_srcu_init_def && brace_nesting == 1 &&
paren_nesting == 0 && last_fs ~ /\.[[:space:]]*$/ &&
$1 ~ /^[[:alnum:]_]+$/) {
name = $1;
if (name in rcu_batches) {
# Remove the dot.
sub(/\.[[:space:]]*$/, "", last_fs);
old_record = $0;
do
shift_fields(1, 0);
while (last_fs !~ /,/ || paren_nesting > 0);
end_loc = length(old_record) - length($0);
end_loc += index(last_fs, ",") - length(last_fs);
last_fs = substr(last_fs, index(last_fs, ",") + 1);
last_fs_print = 1;
match(old_record, "^"name"("FS")+=");
start_loc = RSTART + RLENGTH;
len = end_loc - start_loc;
initializer = substr(old_record, start_loc, len);
gsub(srcu_init_param_name "\\.", "", initializer);
rcu_batches[name] = initializer;
continue;
}
}
# Don't include a nonexistent file
if (!in_macro && $1 == "#include" && /^#include[[:space:]]+"rcu\.h"/) {
update_fieldsep("", 0);
next;
}
# Ignore most preprocessor stuff.
if (!in_macro && $1 ~ /#/) {
break;
}
if (brace_nesting > 0 && $1 ~ "^[[:alnum:]_]+$" && NF < 2) {
read_line();
continue;
}
if (brace_nesting > 0 &&
$0 ~ "^[[:alnum:]_]+[[:space:]]*(\\.|->)[[:space:]]*[[:alnum:]_]+" &&
$2 in rcu_batches) {
# Make uses of rcu_batches global. Somewhat unreliable.
shift_fields(1, 0);
print_fields(1);
continue;
}
if ($1 == "static" && NF < 3) {
read_line();
continue;
}
if ($1 == "static" && ($2 == "bool" && $3 == "try_check_zero" ||
$2 == "void" && $3 == "srcu_flip")) {
shift_fields(1, 1);
print_fields(2);
continue;
}
# Distinguish between read-side and write-side memory barriers.
if ($1 == "smp_mb" && NF < 2) {
read_line();
continue;
}
if (match($0, /^smp_mb[[:space:]();\/*]*[[:alnum:]]/)) {
barrier_letter = substr($0, RLENGTH, 1);
if (barrier_letter ~ /A|D/)
new_barrier_name = "sync_smp_mb";
else if (barrier_letter ~ /B|C/)
new_barrier_name = "rs_smp_mb";
else {
print "Unrecognized memory barrier." > "/dev/null";
exit 1;
}
shift_fields(1, 1);
printf("%s", new_barrier_name) > outputfile;
continue;
}
# Skip definition of rcu_synchronize, since it is already
# defined in misc.h. Only present in old versions of srcu.
if (brace_nesting == 0 && paren_nesting == 0 &&
$1 == "struct" && $2 == "rcu_synchronize" &&
$0 ~ "^struct(" FS ")+rcu_synchronize(" FS ")+\\{") {
shift_fields(2, 0);
while (brace_nesting) {
if (NF < 2)
read_line();
shift_fields(1, 0);
}
}
# Skip definition of wakeme_after_rcu for the same reason
if (brace_nesting == 0 && $1 == "static" && $2 == "void" &&
$3 == "wakeme_after_rcu") {
while (NF < 5)
read_line();
shift_fields(3, 0);
do {
while (NF < 3)
read_line();
shift_fields(1, 0);
} while (paren_nesting || brace_nesting);
}
if ($1 ~ /^(unsigned|long)$/ && NF < 3) {
read_line();
continue;
}
# Give srcu_batches_completed the correct type for old SRCU.
if (brace_nesting == 0 && $1 == "long" &&
$2 == "srcu_batches_completed") {
update_fieldsep("", 0);
printf("unsigned ") > outputfile;
print_fields(2);
continue;
}
if (brace_nesting == 0 && $1 == "unsigned" && $2 == "long" &&
$3 == "srcu_batches_completed") {
print_fields(3);
continue;
}
# Just print out the input code by default.
print_fields(1);
}
update_fieldsep("", 0);
print > outputfile;
next;
}
END {
update_fieldsep("", 0);
if (brace_nesting != 0) {
print "Unbalanced braces!" > "/dev/stderr";
exit 1;
}
# Define the rcu_batches
for (name in rcu_batches)
print "struct rcu_batch " name " = " rcu_batches[name] ";" > c_output;
}

@@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASSUME_H
#define ASSUME_H
/* Provide an assumption macro that can be disabled for gcc. */
#ifdef RUN
#define assume(x) \
do { \
/* Evaluate x to suppress warnings. */ \
(void) (x); \
} while (0)
#else
#define assume(x) __CPROVER_assume(x)
#endif
#endif

@@ -1,41 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef BARRIERS_H
#define BARRIERS_H
#define barrier() __asm__ __volatile__("" : : : "memory")
#ifdef RUN
#define smp_mb() __sync_synchronize()
#define smp_mb__after_unlock_lock() __sync_synchronize()
#else
/*
* Copied from CBMC's implementation of __sync_synchronize(), which
* seems to be disabled by default.
*/
#define smp_mb() __CPROVER_fence("WWfence", "RRfence", "RWfence", "WRfence", \
"WWcumul", "RRcumul", "RWcumul", "WRcumul")
#define smp_mb__after_unlock_lock() __CPROVER_fence("WWfence", "RRfence", "RWfence", "WRfence", \
"WWcumul", "RRcumul", "RWcumul", "WRcumul")
#endif
/*
* Allow memory barriers to be disabled in either the read or write side
* of SRCU individually.
*/
#ifndef NO_SYNC_SMP_MB
#define sync_smp_mb() smp_mb()
#else
#define sync_smp_mb() do {} while (0)
#endif
#ifndef NO_READ_SIDE_SMP_MB
#define rs_smp_mb() smp_mb()
#else
#define rs_smp_mb() do {} while (0)
#endif
#define READ_ONCE(x) (*(volatile typeof(x) *) &(x))
#define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val))
#endif

@@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef BUG_ON_H
#define BUG_ON_H
#include <assert.h>
#define BUG() assert(0)
#define BUG_ON(x) assert(!(x))
/* Does it make sense to treat warnings as errors? */
#define WARN() BUG()
#define WARN_ON(x) (BUG_ON(x), false)
#endif

@@ -1,14 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
/* Include all source files. */
#include "include_srcu.c"
#include "preempt.c"
#include "misc.c"
/* Used by test.c files */
#include <pthread.h>
#include <stdlib.h>
#include <linux/srcu.h>

@@ -1,28 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* "Cheater" definitions based on restricted Kconfig choices. */
#undef CONFIG_TINY_RCU
#undef __CHECKER__
#undef CONFIG_DEBUG_LOCK_ALLOC
#undef CONFIG_DEBUG_OBJECTS_RCU_HEAD
#undef CONFIG_HOTPLUG_CPU
#undef CONFIG_MODULES
#undef CONFIG_NO_HZ_FULL_SYSIDLE
#undef CONFIG_PREEMPT_COUNT
#undef CONFIG_PREEMPT_RCU
#undef CONFIG_PROVE_RCU
#undef CONFIG_RCU_NOCB_CPU
#undef CONFIG_RCU_NOCB_CPU_ALL
#undef CONFIG_RCU_STALL_COMMON
#undef CONFIG_RCU_TRACE
#undef CONFIG_RCU_USER_QS
#undef CONFIG_TASKS_RCU
#define CONFIG_TREE_RCU
#define CONFIG_GENERIC_ATOMIC64
#if NR_CPUS > 1
#define CONFIG_SMP
#else
#undef CONFIG_SMP
#endif

@@ -1,32 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include <assert.h>
#include <errno.h>
#include <inttypes.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include "int_typedefs.h"
#include "barriers.h"
#include "bug_on.h"
#include "locks.h"
#include "misc.h"
#include "preempt.h"
#include "percpu.h"
#include "workqueues.h"
#ifdef USE_SIMPLE_SYNC_SRCU
#define synchronize_srcu(sp) synchronize_srcu_original(sp)
#endif
#include <srcu.c>
#ifdef USE_SIMPLE_SYNC_SRCU
#undef synchronize_srcu
#include "simple_sync_srcu.c"
#endif

@@ -1,34 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef INT_TYPEDEFS_H
#define INT_TYPEDEFS_H
#include <inttypes.h>
typedef int8_t s8;
typedef uint8_t u8;
typedef int16_t s16;
typedef uint16_t u16;
typedef int32_t s32;
typedef uint32_t u32;
typedef int64_t s64;
typedef uint64_t u64;
typedef int8_t __s8;
typedef uint8_t __u8;
typedef int16_t __s16;
typedef uint16_t __u16;
typedef int32_t __s32;
typedef uint32_t __u32;
typedef int64_t __s64;
typedef uint64_t __u64;
#define S8_C(x) INT8_C(x)
#define U8_C(x) UINT8_C(x)
#define S16_C(x) INT16_C(x)
#define U16_C(x) UINT16_C(x)
#define S32_C(x) INT32_C(x)
#define U32_C(x) UINT32_C(x)
#define S64_C(x) INT64_C(x)
#define U64_C(x) UINT64_C(x)
#endif

@@ -1,221 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LOCKS_H
#define LOCKS_H
#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include "assume.h"
#include "bug_on.h"
#include "preempt.h"
int nondet_int(void);
#define __acquire(x)
#define __acquires(x)
#define __release(x)
#define __releases(x)
/* Only use one lock mechanism. Select which one. */
#ifdef PTHREAD_LOCK
struct lock_impl {
pthread_mutex_t mutex;
};
static inline void lock_impl_lock(struct lock_impl *lock)
{
BUG_ON(pthread_mutex_lock(&lock->mutex));
}
static inline void lock_impl_unlock(struct lock_impl *lock)
{
BUG_ON(pthread_mutex_unlock(&lock->mutex));
}
static inline bool lock_impl_trylock(struct lock_impl *lock)
{
int err = pthread_mutex_trylock(&lock->mutex);
if (!err)
return true;
else if (err == EBUSY)
return false;
BUG();
}
static inline void lock_impl_init(struct lock_impl *lock)
{
pthread_mutex_init(&lock->mutex, NULL);
}
#define LOCK_IMPL_INITIALIZER {.mutex = PTHREAD_MUTEX_INITIALIZER}
#else /* !defined(PTHREAD_LOCK) */
/* Spinlock that assumes that it always gets the lock immediately. */
struct lock_impl {
bool locked;
};
static inline bool lock_impl_trylock(struct lock_impl *lock)
{
#ifdef RUN
/* TODO: Should this be a test and set? */
return __sync_bool_compare_and_swap(&lock->locked, false, true);
#else
__CPROVER_atomic_begin();
bool old_locked = lock->locked;
lock->locked = true;
__CPROVER_atomic_end();
/* Minimal barrier to prevent accesses leaking out of lock. */
__CPROVER_fence("RRfence", "RWfence");
return !old_locked;
#endif
}
static inline void lock_impl_lock(struct lock_impl *lock)
{
/*
* CBMC doesn't support busy waiting, so just assume that the
* lock is available.
*/
assume(lock_impl_trylock(lock));
/*
* If the lock was already held by this thread then the assumption
* is unsatisfiable (deadlock).
*/
}
static inline void lock_impl_unlock(struct lock_impl *lock)
{
#ifdef RUN
BUG_ON(!__sync_bool_compare_and_swap(&lock->locked, true, false));
#else
/* Minimal barrier to prevent accesses leaking out of lock. */
__CPROVER_fence("RWfence", "WWfence");
__CPROVER_atomic_begin();
bool old_locked = lock->locked;
lock->locked = false;
__CPROVER_atomic_end();
BUG_ON(!old_locked);
#endif
}
static inline void lock_impl_init(struct lock_impl *lock)
{
lock->locked = false;
}
#define LOCK_IMPL_INITIALIZER {.locked = false}
#endif /* !defined(PTHREAD_LOCK) */
/*
* Implement spinlocks using the lock mechanism. Wrap the lock to prevent mixing
* locks of different types.
*/
typedef struct {
struct lock_impl internal_lock;
} spinlock_t;
#define SPIN_LOCK_UNLOCKED {.internal_lock = LOCK_IMPL_INITIALIZER}
#define __SPIN_LOCK_UNLOCKED(x) SPIN_LOCK_UNLOCKED
#define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED
static inline void spin_lock_init(spinlock_t *lock)
{
lock_impl_init(&lock->internal_lock);
}
static inline void spin_lock(spinlock_t *lock)
{
/*
* Spin locks also need to be removed in order to eliminate all
* memory barriers. They are only used by the write side anyway.
*/
#ifndef NO_SYNC_SMP_MB
preempt_disable();
lock_impl_lock(&lock->internal_lock);
#endif
}
static inline void spin_unlock(spinlock_t *lock)
{
#ifndef NO_SYNC_SMP_MB
lock_impl_unlock(&lock->internal_lock);
preempt_enable();
#endif
}
/* Don't bother with interrupts */
#define spin_lock_irq(lock) spin_lock(lock)
#define spin_unlock_irq(lock) spin_unlock(lock)
#define spin_lock_irqsave(lock, flags) spin_lock(lock)
#define spin_unlock_irqrestore(lock, flags) spin_unlock(lock)
/*
* This is supposed to return an int, but I think that a bool should work as
* well.
*/
static inline bool spin_trylock(spinlock_t *lock)
{
#ifndef NO_SYNC_SMP_MB
preempt_disable();
return lock_impl_trylock(&lock->internal_lock);
#else
return true;
#endif
}
struct completion {
/* Hopefully this won't overflow. */
unsigned int count;
};
#define COMPLETION_INITIALIZER(x) {.count = 0}
#define DECLARE_COMPLETION(x) struct completion x = COMPLETION_INITIALIZER(x)
#define DECLARE_COMPLETION_ONSTACK(x) DECLARE_COMPLETION(x)
static inline void init_completion(struct completion *c)
{
c->count = 0;
}
static inline void wait_for_completion(struct completion *c)
{
unsigned int prev_count = __sync_fetch_and_sub(&c->count, 1);
assume(prev_count);
}
static inline void complete(struct completion *c)
{
unsigned int prev_count = __sync_fetch_and_add(&c->count, 1);
BUG_ON(prev_count == UINT_MAX);
}
/* This function probably isn't very useful for CBMC. */
static inline bool try_wait_for_completion(struct completion *c)
{
BUG();
}
static inline bool completion_done(struct completion *c)
{
return c->count;
}
/* TODO: Implement complete_all */
static inline void complete_all(struct completion *c)
{
BUG();
}
#endif

@@ -1,12 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include "misc.h"
#include "bug_on.h"
struct rcu_head;
void wakeme_after_rcu(struct rcu_head *head)
{
BUG();
}

@@ -1,58 +0,0 @@
#ifndef MISC_H
#define MISC_H
#include "assume.h"
#include "int_typedefs.h"
#include "locks.h"
#include <linux/types.h>
/* Probably won't need to deal with bottom halves. */
static inline void local_bh_disable(void) {}
static inline void local_bh_enable(void) {}
#define MODULE_ALIAS(X)
#define module_param(...)
#define EXPORT_SYMBOL_GPL(x)
#define container_of(ptr, type, member) ({ \
const typeof(((type *)0)->member) *__mptr = (ptr); \
(type *)((char *)__mptr - offsetof(type, member)); \
})
#ifndef USE_SIMPLE_SYNC_SRCU
/* Abuse udelay to make sure that busy loops terminate. */
#define udelay(x) assume(0)
#else
/* The simple custom synchronize_srcu is ok with try_check_zero failing. */
#define udelay(x) do { } while (0)
#endif
#define trace_rcu_torture_read(rcutorturename, rhp, secs, c_old, c) \
do { } while (0)
#define notrace
/* Avoid including rcupdate.h */
struct rcu_synchronize {
struct rcu_head head;
struct completion completion;
};
void wakeme_after_rcu(struct rcu_head *head);
#define rcu_lock_acquire(a) do { } while (0)
#define rcu_lock_release(a) do { } while (0)
#define rcu_lockdep_assert(c, s) do { } while (0)
#define RCU_LOCKDEP_WARN(c, s) do { } while (0)
/* Let CBMC non-deterministically choose switch between normal and expedited. */
bool rcu_gp_is_normal(void);
bool rcu_gp_is_expedited(void);
/* Do the same for old versions of rcu. */
#define rcu_expedited (rcu_gp_is_expedited())
#endif

@@ -1,93 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PERCPU_H
#define PERCPU_H
#include <stddef.h>
#include "bug_on.h"
#include "preempt.h"
#define __percpu
/* Maximum size of any percpu data. */
#define PERCPU_OFFSET (4 * sizeof(long))
/* Ignore alignment, as CBMC doesn't care about false sharing. */
#define alloc_percpu(type) __alloc_percpu(sizeof(type), 1)
static inline void *__alloc_percpu(size_t size, size_t align)
{
BUG();
return NULL;
}
static inline void free_percpu(void *ptr)
{
BUG();
}
#define per_cpu_ptr(ptr, cpu) \
((typeof(ptr)) ((char *) (ptr) + PERCPU_OFFSET * cpu))
#define __this_cpu_inc(pcp) __this_cpu_add(pcp, 1)
#define __this_cpu_dec(pcp) __this_cpu_sub(pcp, 1)
#define __this_cpu_sub(pcp, n) __this_cpu_add(pcp, -(typeof(pcp)) (n))
#define this_cpu_inc(pcp) this_cpu_add(pcp, 1)
#define this_cpu_dec(pcp) this_cpu_sub(pcp, 1)
#define this_cpu_sub(pcp, n) this_cpu_add(pcp, -(typeof(pcp)) (n))
/* Make CBMC use atomics to work around bug. */
#ifdef RUN
#define THIS_CPU_ADD_HELPER(ptr, x) (*(ptr) += (x))
#else
/*
* Split the atomic into a read and a write so that it has the least
* possible ordering.
*/
#define THIS_CPU_ADD_HELPER(ptr, x) \
do { \
typeof(ptr) this_cpu_add_helper_ptr = (ptr); \
typeof(ptr) this_cpu_add_helper_x = (x); \
typeof(*ptr) this_cpu_add_helper_temp; \
__CPROVER_atomic_begin(); \
this_cpu_add_helper_temp = *(this_cpu_add_helper_ptr); \
__CPROVER_atomic_end(); \
this_cpu_add_helper_temp += this_cpu_add_helper_x; \
__CPROVER_atomic_begin(); \
*(this_cpu_add_helper_ptr) = this_cpu_add_helper_temp; \
__CPROVER_atomic_end(); \
} while (0)
#endif
/*
* For some reason CBMC needs an atomic operation even though this is percpu
* data.
*/
#define __this_cpu_add(pcp, n) \
do { \
BUG_ON(preemptible()); \
THIS_CPU_ADD_HELPER(per_cpu_ptr(&(pcp), thread_cpu_id), \
(typeof(pcp)) (n)); \
} while (0)
#define this_cpu_add(pcp, n) \
do { \
int this_cpu_add_impl_cpu = get_cpu(); \
THIS_CPU_ADD_HELPER(per_cpu_ptr(&(pcp), this_cpu_add_impl_cpu), \
(typeof(pcp)) (n)); \
put_cpu(); \
} while (0)
/*
* This will cause a compiler warning because of the cast from char[][] to
* type*. This will cause a compile time error if type is too big.
*/
#define DEFINE_PER_CPU(type, name) \
char name[NR_CPUS][PERCPU_OFFSET]; \
typedef char percpu_too_big_##name \
[sizeof(type) > PERCPU_OFFSET ? -1 : 1]
#define for_each_possible_cpu(cpu) \
for ((cpu) = 0; (cpu) < NR_CPUS; ++(cpu))
#endif

@@ -1,79 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include "preempt.h"
#include "assume.h"
#include "locks.h"
/* Support NR_CPUS of at most 64 */
#define CPU_PREEMPTION_LOCKS_INIT0 LOCK_IMPL_INITIALIZER
#define CPU_PREEMPTION_LOCKS_INIT1 \
CPU_PREEMPTION_LOCKS_INIT0, CPU_PREEMPTION_LOCKS_INIT0
#define CPU_PREEMPTION_LOCKS_INIT2 \
CPU_PREEMPTION_LOCKS_INIT1, CPU_PREEMPTION_LOCKS_INIT1
#define CPU_PREEMPTION_LOCKS_INIT3 \
CPU_PREEMPTION_LOCKS_INIT2, CPU_PREEMPTION_LOCKS_INIT2
#define CPU_PREEMPTION_LOCKS_INIT4 \
CPU_PREEMPTION_LOCKS_INIT3, CPU_PREEMPTION_LOCKS_INIT3
#define CPU_PREEMPTION_LOCKS_INIT5 \
CPU_PREEMPTION_LOCKS_INIT4, CPU_PREEMPTION_LOCKS_INIT4
/*
* Simulate disabling preemption by locking a particular cpu. NR_CPUS
* should be the actual number of cpus, not just the maximum.
*/
struct lock_impl cpu_preemption_locks[NR_CPUS] = {
CPU_PREEMPTION_LOCKS_INIT0
#if (NR_CPUS - 1) & 1
, CPU_PREEMPTION_LOCKS_INIT0
#endif
#if (NR_CPUS - 1) & 2
, CPU_PREEMPTION_LOCKS_INIT1
#endif
#if (NR_CPUS - 1) & 4
, CPU_PREEMPTION_LOCKS_INIT2
#endif
#if (NR_CPUS - 1) & 8
, CPU_PREEMPTION_LOCKS_INIT3
#endif
#if (NR_CPUS - 1) & 16
, CPU_PREEMPTION_LOCKS_INIT4
#endif
#if (NR_CPUS - 1) & 32
, CPU_PREEMPTION_LOCKS_INIT5
#endif
};
#undef CPU_PREEMPTION_LOCKS_INIT0
#undef CPU_PREEMPTION_LOCKS_INIT1
#undef CPU_PREEMPTION_LOCKS_INIT2
#undef CPU_PREEMPTION_LOCKS_INIT3
#undef CPU_PREEMPTION_LOCKS_INIT4
#undef CPU_PREEMPTION_LOCKS_INIT5
__thread int thread_cpu_id;
__thread int preempt_disable_count;
void preempt_disable(void)
{
BUG_ON(preempt_disable_count < 0 || preempt_disable_count == INT_MAX);
if (preempt_disable_count++)
return;
thread_cpu_id = nondet_int();
assume(thread_cpu_id >= 0);
assume(thread_cpu_id < NR_CPUS);
lock_impl_lock(&cpu_preemption_locks[thread_cpu_id]);
}
void preempt_enable(void)
{
BUG_ON(preempt_disable_count < 1);
if (--preempt_disable_count)
return;
lock_impl_unlock(&cpu_preemption_locks[thread_cpu_id]);
}

@@ -1,59 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PREEMPT_H
#define PREEMPT_H
#include <stdbool.h>
#include "bug_on.h"
/* This flag contains garbage if preempt_disable_count is 0. */
extern __thread int thread_cpu_id;
/* Support recursive preemption disabling. */
extern __thread int preempt_disable_count;
void preempt_disable(void);
void preempt_enable(void);
static inline void preempt_disable_notrace(void)
{
preempt_disable();
}
static inline void preempt_enable_no_resched(void)
{
preempt_enable();
}
static inline void preempt_enable_notrace(void)
{
preempt_enable();
}
static inline int preempt_count(void)
{
return preempt_disable_count;
}
static inline bool preemptible(void)
{
return !preempt_count();
}
static inline int get_cpu(void)
{
preempt_disable();
return thread_cpu_id;
}
static inline void put_cpu(void)
{
preempt_enable();
}
static inline void might_sleep(void)
{
BUG_ON(preempt_disable_count);
}
#endif

@@ -1,51 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include <assert.h>
#include <errno.h>
#include <inttypes.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include "int_typedefs.h"
#include "barriers.h"
#include "bug_on.h"
#include "locks.h"
#include "misc.h"
#include "preempt.h"
#include "percpu.h"
#include "workqueues.h"
#include <linux/srcu.h>
/* Functions needed from modify_srcu.c */
bool try_check_zero(struct srcu_struct *sp, int idx, int trycount);
void srcu_flip(struct srcu_struct *sp);
/* Simpler implementation of synchronize_srcu that ignores batching. */
void synchronize_srcu(struct srcu_struct *sp)
{
int idx;
/*
* This code assumes that try_check_zero will succeed anyway,
* so there is no point in multiple tries.
*/
const int trycount = 1;
might_sleep();
/* Ignore the lock, as multiple writers aren't working yet anyway. */
idx = 1 ^ (sp->completed & 1);
/* For comments see srcu_advance_batches. */
assume(try_check_zero(sp, idx, trycount));
srcu_flip(sp);
assume(try_check_zero(sp, idx^1, trycount));
}

@@ -1,103 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef WORKQUEUES_H
#define WORKQUEUES_H
#include <stdbool.h>
#include "barriers.h"
#include "bug_on.h"
#include "int_typedefs.h"
#include <linux/types.h>
/* Stub workqueue implementation. */
struct work_struct;
typedef void (*work_func_t)(struct work_struct *work);
void delayed_work_timer_fn(unsigned long __data);
struct work_struct {
/* atomic_long_t data; */
unsigned long data;
struct list_head entry;
work_func_t func;
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
};
struct timer_list {
struct hlist_node entry;
unsigned long expires;
void (*function)(unsigned long);
unsigned long data;
u32 flags;
int slack;
};
struct delayed_work {
struct work_struct work;
struct timer_list timer;
/* target workqueue and CPU ->timer uses to queue ->work */
struct workqueue_struct *wq;
int cpu;
};
static inline bool schedule_work(struct work_struct *work)
{
BUG();
return true;
}
static inline bool schedule_work_on(int cpu, struct work_struct *work)
{
BUG();
return true;
}
static inline bool queue_work(struct workqueue_struct *wq,
struct work_struct *work)
{
BUG();
return true;
}
static inline bool queue_delayed_work(struct workqueue_struct *wq,
struct delayed_work *dwork,
unsigned long delay)
{
BUG();
return true;
}
#define INIT_WORK(w, f) \
do { \
(w)->data = 0; \
(w)->func = (f); \
} while (0)
#define INIT_DELAYED_WORK(w, f) INIT_WORK(&(w)->work, (f))
#define __WORK_INITIALIZER(n, f) { \
.data = 0, \
.entry = { &(n).entry, &(n).entry }, \
.func = f \
}
/* Don't bother initializing timer. */
#define __DELAYED_WORK_INITIALIZER(n, f, tflags) { \
.work = __WORK_INITIALIZER((n).work, (f)), \
}
#define DECLARE_WORK(n, f) \
struct work_struct n = __WORK_INITIALIZER(n, f)
#define DECLARE_DELAYED_WORK(n, f) \
struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, 0)
#define system_power_efficient_wq ((struct workqueue_struct *) NULL)
#endif

@@ -1,2 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
*.out

@@ -1,12 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
CBMC_FLAGS = -I../.. -I../../src -I../../include -I../../empty_includes -32 -pointer-check -mm pso
all:
for i in ./*.pass; do \
echo $$i ; \
CBMC_FLAGS="$(CBMC_FLAGS)" sh ../test_script.sh --should-pass $$i > $$i.out 2>&1 ; \
done
for i in ./*.fail; do \
echo $$i ; \
CBMC_FLAGS="$(CBMC_FLAGS)" sh ../test_script.sh --should-fail $$i > $$i.out 2>&1 ; \
done

@@ -1 +0,0 @@
test_cbmc_options="-DASSERT_END"

@@ -1 +0,0 @@
test_cbmc_options="-DFORCE_FAILURE"

@@ -1 +0,0 @@
test_cbmc_options="-DFORCE_FAILURE_2"

@@ -1 +0,0 @@
test_cbmc_options="-DFORCE_FAILURE_3"

@@ -1,73 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <src/combined_source.c>
int x;
int y;
int __unbuffered_tpr_x;
int __unbuffered_tpr_y;
DEFINE_SRCU(ss);
void rcu_reader(void)
{
int idx;
#ifndef FORCE_FAILURE_3
idx = srcu_read_lock(&ss);
#endif
might_sleep();
__unbuffered_tpr_y = READ_ONCE(y);
#ifdef FORCE_FAILURE
srcu_read_unlock(&ss, idx);
idx = srcu_read_lock(&ss);
#endif
WRITE_ONCE(x, 1);
#ifndef FORCE_FAILURE_3
srcu_read_unlock(&ss, idx);
#endif
might_sleep();
}
void *thread_update(void *arg)
{
WRITE_ONCE(y, 1);
#ifndef FORCE_FAILURE_2
synchronize_srcu(&ss);
#endif
might_sleep();
__unbuffered_tpr_x = READ_ONCE(x);
return NULL;
}
void *thread_process_reader(void *arg)
{
rcu_reader();
return NULL;
}
int main(int argc, char *argv[])
{
pthread_t tu;
pthread_t tpr;
if (pthread_create(&tu, NULL, thread_update, NULL))
abort();
if (pthread_create(&tpr, NULL, thread_process_reader, NULL))
abort();
if (pthread_join(tu, NULL))
abort();
if (pthread_join(tpr, NULL))
abort();
assert(__unbuffered_tpr_y != 0 || __unbuffered_tpr_x != 0);
#ifdef ASSERT_END
assert(0);
#endif
return 0;
}

@@ -1,103 +0,0 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# This script expects a mode (either --should-pass or --should-fail) followed by
# an input file. The script uses the following environment variables. The test C
# source file is expected to be named test.c in the directory containing the
# input file.
#
# CBMC: The command to run CBMC. Default: cbmc
# CBMC_FLAGS: Additional flags to pass to CBMC
# NR_CPUS: Number of cpus to run tests with. Default specified by the test
# SYNC_SRCU_MODE: Choose implementation of synchronize_srcu. Defaults to simple.
# kernel: Version included in the linux kernel source.
# simple: Use try_check_zero directly.
#
# The input file is a script that is sourced by this file. It can define any of
# the following variables to configure the test.
#
# test_cbmc_options: Extra options to pass to CBMC.
# min_cpus_fail: Minimum number of CPUs (NR_CPUS) for verification to fail.
# The test is expected to pass if it is run with fewer. (Only
# useful for .fail files)
# default_cpus: Quantity of CPUs to use for the test, if not specified on the
# command line. Default: Larger of 2 and MIN_CPUS_FAIL.
set -e
if test "$#" -ne 2; then
echo "Expected one option followed by an input file" 1>&2
exit 99
fi
if test "x$1" = "x--should-pass"; then
should_pass="yes"
elif test "x$1" = "x--should-fail"; then
should_pass="no"
else
echo "Unrecognized argument '$1'" 1>&2
# Exit code 99 indicates a hard error.
exit 99
fi
CBMC=${CBMC:-cbmc}
SYNC_SRCU_MODE=${SYNC_SRCU_MODE:-simple}
case ${SYNC_SRCU_MODE} in
kernel) sync_srcu_mode_flags="" ;;
simple) sync_srcu_mode_flags="-DUSE_SIMPLE_SYNC_SRCU" ;;
*)
echo "Unrecognized argument '${SYNC_SRCU_MODE}'" 1>&2
exit 99
;;
esac
min_cpus_fail=1
c_file=`dirname "$2"`/test.c
# Source the input file.
. $2
if test ${min_cpus_fail} -gt 2; then
default_default_cpus=${min_cpus_fail}
else
default_default_cpus=2
fi
default_cpus=${default_cpus:-${default_default_cpus}}
cpus=${NR_CPUS:-${default_cpus}}
# Check if there are too few cpus to make the test fail.
if test $cpus -lt ${min_cpus_fail:-0}; then
should_pass="yes"
fi
cbmc_opts="-DNR_CPUS=${cpus} ${sync_srcu_mode_flags} ${test_cbmc_options} ${CBMC_FLAGS}"
echo "Running CBMC: ${CBMC} ${cbmc_opts} ${c_file}"
if ${CBMC} ${cbmc_opts} "${c_file}"; then
# Verification successful. Make sure that it was supposed to verify.
test "x${should_pass}" = xyes
else
cbmc_exit_status=$?
# An exit status of 10 indicates a failed verification.
# (see cbmc_parse_optionst::do_bmc in the CBMC source code)
if test ${cbmc_exit_status} -eq 10 && test "x${should_pass}" = xno; then
:
else
echo "CBMC returned ${cbmc_exit_status} exit status" 1>&2
# Parse errors have exit status 6. Any other type of error
# should be considered a hard error.
if test ${cbmc_exit_status} -ne 6 && \
test ${cbmc_exit_status} -ne 10; then
exit 99
else
exit 1
fi
fi
fi
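
Before their removal, these CBMC tests were driven from each tests/ subdirectory's Makefile, roughly as follows (paths assume the old tools/testing/selftests/rcutorture/formal/srcu-cbmc layout):

  cd tools/testing/selftests/rcutorture/formal/srcu-cbmc/tests/store_buffering
  make   # runs test_script.sh --should-pass/--should-fail over each *.pass/*.fail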