linux/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus
Luc Maranget 15553dcbca tools/memory-model: Add model support for spin_is_locked()
This commit first adds a trivial macro for spin_is_locked() to
linux-kernel.def.
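For reference, that definition boils down to a one-line mapping from the
Linux-kernel API name to herd7's built-in lock-read primitive; the exact
spelling below is a sketch and may differ slightly from the committed
linux-kernel.def:

  spin_is_locked(X) __islocked(X)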

It also adds cat code for enumerating all possible matches of lock
write events (set LKW) with islocked events returning true (set RL,
for Read from Lock), and unlock write events (set UL) with islocked
events returning false (set RU, for Read from Unlock).  Note that this
intentionally does not model uniprocessor kernels (CONFIG_SMP=n) built
with CONFIG_DEBUG_SPINLOCK=n, in which spin_is_locked() unconditionally
returns zero.
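As a loose illustration of the intent only (not the actual lock.cat code
added by this commit), the new read sets relate to the lock writes roughly
as follows: an RL read must be satisfied by some lock write, while an RU
read must be satisfied by an unlock write or by the lock's initial,
unlocked state:

  (* Illustrative sketch only, not the committed lock.cat code. *)
  let rl-sources = [LKW] ; rf ; [RL]      (* islocked()==true reads from a lock write *)
  let ru-sources = [UL | IW] ; rf ; [RU]  (* islocked()==false reads from an unlock or init *)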

It also adds a pair of litmus tests demonstrating the minimal ordering
provided by spin_is_locked() in conjunction with spin_lock().  Will Deacon
noted that this minimal ordering happens on ARMv8:
https://lkml.kernel.org/r/20180226162426.GB17158@arm.com

Notice that herd7 installations strictly older than version 7.49
do not handle the new constructs.
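For example, with herd7 version 7.49 or newer installed, the test shown
below can be run from tools/memory-model along the lines of the README's
examples:

  $ cd tools/memory-model
  $ herd7 -conf linux-kernel.cfg litmus-tests/MP+polockmbonce+poacquiresilsil.litmus

A "Never" verdict on the resulting Observation line indicates that the
cycle named in the exists clause cannot occur.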

Signed-off-by: Luc Maranget <luc.maranget@inria.fr>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luc Maranget <Luc.Maranget@inria.fr>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arch@vger.kernel.org
Link: http://lkml.kernel.org/r/1526340837-12222-10-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-05-15 08:11:17 +02:00


C MP+polockmbonce+poacquiresilsil

(*
 * Result: Never
 *
 * Do spinlocks combined with smp_mb__after_spinlock() provide order
 * to outside observers using spin_is_locked() to sense the lock-held
 * state, ordered by acquire?  Note that when the first spin_is_locked()
 * returns false and the second true, we know that the smp_load_acquire()
 * executed before the lock was acquired (loosely speaking).
 *)

{
}

P0(spinlock_t *lo, int *x)
{
	spin_lock(lo);
	smp_mb__after_spinlock();
	WRITE_ONCE(*x, 1);
	spin_unlock(lo);
}

P1(spinlock_t *lo, int *x)
{
	int r1;
	int r2;
	int r3;

	r1 = smp_load_acquire(x);
	r2 = spin_is_locked(lo);
	r3 = spin_is_locked(lo);
}

exists (1:r1=1 /\ 1:r2=0 /\ 1:r3=1)