linux-next/arch/powerpc/lib
Boqun Feng 6262db7c08 powerpc/spinlock: Fix spin_unlock_wait()
There is an ordering issue with spin_unlock_wait() on powerpc: the
spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the load
part of the operation with memory operations following it. Therefore
the following event sequence can happen:

CPU 1			CPU 2			CPU 3

==================	====================	==============
						spin_unlock(&lock);
			spin_lock(&lock):
			  r1 = *lock; // r1 == 0;
o = object;		o = READ_ONCE(object); // reordered here
object = NULL;
smp_mb();
spin_unlock_wait(&lock);
			  *lock = 1;
smp_mb();
o->dead = true;         < o = READ_ONCE(object); > // reordered upwards
			if (o) // true
				BUG_ON(o->dead); // true!!
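
For illustration, the pattern above can be approximated in user space
with C11 atomics. This is only a sketch: the names (struct obj,
publisher_retire(), consumer()) are made up, and the plain load loop
standing in for spin_unlock_wait() is an assumption, not the kernel
code this patch touches.

	#include <assert.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct obj { bool dead; };

	static atomic_int lock;			/* 0 = unlocked, 1 = locked */
	static struct obj *_Atomic object;	/* shared pointer, as in the log */

	/* CPU 1: unpublish the object, wait out any current lock holder,
	 * then mark it dead. */
	static void publisher_retire(void)
	{
		struct obj *o = atomic_load(&object);		/* o = object */
		atomic_store(&object, NULL);			/* object = NULL */
		atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
		while (atomic_load(&lock))			/* ~ spin_unlock_wait() */
			;
		atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
		if (o)
			o->dead = true;
	}

	/* CPU 2: take the lock and use the object if it is still published. */
	static void consumer(void)
	{
		int expected = 0;

		while (!atomic_compare_exchange_weak(&lock, &expected, 1))
			expected = 0;				/* spin_lock(&lock) */
		struct obj *o = atomic_load(&object);		/* READ_ONCE(object) */
		if (o)
			assert(!o->dead);			/* BUG_ON(o->dead) */
		atomic_store(&lock, 0);				/* spin_unlock(&lock) */
	}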

To fix this, we add a "nop" ll/sc loop in arch_spin_unlock_wait() on
ppc. The "nop" ll/sc loop reads the lock value and writes it back
atomically; in this way it synchronizes the view of the lock on CPU 1
with that on CPU 2. Therefore in the scenario above, either CPU 2 will
fail to get the lock at first or CPU 1 will see the lock acquired by
CPU 2; both cases eliminate this bug. This is a similar idea to what
Will Deacon did for ARM64 in:

  d86b8da04d ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
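
Portable C cannot express a bare lwarx/stwcx. pair, but the effect of
the "nop" ll/sc can be sketched with a compare-and-swap that stores
back the value it has just read. This is a conceptual approximation
only (lock_nop_rmw() is a made-up name), not the powerpc inline
assembly the patch adds.

	#include <stdatomic.h>

	static inline unsigned int lock_nop_rmw(atomic_uint *lock)
	{
		unsigned int val = atomic_load(lock);

		/* Retry until the observed value is written back unchanged;
		 * the store side is what orders us against other lockers. */
		while (!atomic_compare_exchange_weak(lock, &val, val))
			;
		return val;	/* the lock value we atomically observed */
	}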

Furthermore, if the "nop" ll/sc finds that the lock is locked, we don't
need to do the "nop" ll/sc trick again; we can just do a normal
load+check loop waiting for the lock to be released. In that case
spin_unlock_wait() is called while someone is holding the lock, and the
store part of the "nop" ll/sc happens before the lock release of the
current lock holder:

	"nop" ll/sc -> spin_unlock()

and the lock release happens before the next lock acquisition:

	spin_unlock() -> spin_lock() <next holder>

which means the "nop" ll/sc happens before the next lock acquisition:

	"nop" ll/sc -> spin_unlock() -> spin_lock() <next holder>

With an smp_mb() preceding spin_unlock_wait(), the store of object is
guaranteed to be observed by the next lock holder:

	STORE -> smp_mb() -> "nop" ll/sc
	-> spin_unlock() -> spin_lock() <next holder>
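
Putting the pieces together, a rough sketch of the fixed wait loop
could look like the following, reusing the lock_nop_rmw() sketch from
above and assuming a non-zero lock word means "held"; kernel-specific
details of the real arch_spin_unlock_wait() (inline assembly, priority
hints, shared-processor yielding) are omitted.

	#include <stdatomic.h>

	/* "Nop" read-and-write-back, as sketched earlier. */
	static inline unsigned int lock_nop_rmw(atomic_uint *lock)
	{
		unsigned int val = atomic_load(lock);

		while (!atomic_compare_exchange_weak(lock, &val, val))
			;
		return val;
	}

	static inline void spin_unlock_wait_sketch(atomic_uint *lock)
	{
		atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */

		/* One "nop" read-and-write-back synchronizes our view of
		 * the lock with any concurrent locker. */
		if (lock_nop_rmw(lock) != 0) {
			/* Someone holds the lock: its release is ordered
			 * after our store above, so a plain load+check loop
			 * suffices from here on. */
			while (atomic_load(lock) != 0)
				;
		}

		atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
	}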

This patch therefore fixes the issue and also cleans up
arch_spin_unlock_wait() a little by removing superfluous memory
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.

Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
[mpe: Inline the "nop" ll/sc loop and set EH=0, munge change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-06-14 16:05:44 +10:00
alloc.c powerpc: Replace mem_init_done with slab_is_available() 2015-04-10 20:02:48 +10:00
checksum_32.S powerpc: optimise csum_partial() call when len is constant 2016-03-09 10:44:18 -06:00
checksum_64.S powerpc: optimise csum_partial() call when len is constant 2016-03-09 10:44:18 -06:00
checksum_wrappers.c powerpc32: checksum_wrappers_64 becomes checksum_wrappers 2016-03-04 21:47:47 -06:00
code-patching.c powerpc: Move the patch_exception to a common place 2013-12-02 14:06:54 +11:00
copy_32.S powerpc: Make generic_memcpy() private to copy_32.S 2016-04-11 20:30:41 +10:00
copypage_64.S powerpc: Exported functions __clear_user and copy_page use r2 so need _GLOBAL_TOC() 2014-06-05 13:20:41 +10:00
copypage_power7.S powerpc: Change vrX register defines to vX to match gcc and glibc 2015-03-16 18:32:11 +11:00
copyuser_64.S powerpc: Remove power3 from comments 2014-07-28 14:10:26 +10:00
copyuser_power7.S powerpc: Change vrX register defines to vX to match gcc and glibc 2015-03-16 18:32:11 +11:00
crtsavres.S powerpc: Change vrX register defines to vX to match gcc and glibc 2015-03-16 18:32:11 +11:00
div64.S powerpc: Fix a corner case in __div64_32 2005-10-20 09:37:02 +10:00
feature-fixups-test.S powerpc: Ensure the else case of feature sections will fit 2011-01-21 14:08:33 +11:00
feature-fixups.c powerpc: Make a bunch of things static 2014-09-25 23:14:41 +10:00
hweight_64.S powerpc: No need to use dot symbols when branching to a function 2014-04-23 10:05:16 +10:00
ldstfp.S powerpc: Change vsrX register defines to vsX to match gcc and glibc 2015-03-16 18:32:11 +11:00
locks.c powerpc/spinlock: Fix spin_unlock_wait() 2016-06-14 16:05:44 +10:00
Makefile Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux into next 2016-03-14 20:05:14 +11:00
mem_64.S powerpc: use _GLOBAL_TOC for memmove 2014-07-22 15:56:04 +10:00
memcmp_64.S powerpc: Add 64bit optimised memcmp 2015-01-23 14:02:55 +11:00
memcpy_64.S Merge remote-tracking branch 'anton/abiv2' into next 2014-05-05 20:57:12 +10:00
memcpy_power7.S powerpc: Change vrX register defines to vX to match gcc and glibc 2015-03-16 18:32:11 +11:00
ppc_ksyms.c powerpc: Remove assembly versions of strcpy, strcat, strlen and strcmp 2016-06-14 13:58:25 +10:00
rheap.c powerpc: Various typo fixes 2016-06-14 13:58:26 +10:00
sstep.c powerpc/sstep: Fix emulation fall-through 2016-05-11 21:54:08 +10:00
string_64.S powerpc: Exported functions __clear_user and copy_page use r2 so need _GLOBAL_TOC() 2014-06-05 13:20:41 +10:00
string.S powerpc: Align hot loops of some string functions 2016-06-14 13:58:25 +10:00
usercopy_64.c powerpc: Rename files to have consistent _32/_64 suffixes 2005-10-10 21:52:43 +10:00
vmx-helper.c powerpc: Create disable_kernel_{fp,altivec,vsx,spe}() 2015-12-01 13:52:25 +11:00
xor_vmx.c powerpc: rework sparse for lib/xor_vmx.c 2016-04-27 09:33:37 +10:00