linux/kernel/locking
Thomas Gleixner 84d82ec5b9 locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked()
While debugging the unlock vs. dequeue race which resulted in state
corruption of futexes, the lockless nature of rt_mutex_proxy_unlock()
caused some confusion.

Add commentary to explain why it is safe to do this lockless. Add matching
comments to rt_mutex_init_proxy_locked() for completeness' sake.
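
Below is a sketch of the kind of commentary in question, paraphrased from the
rationale above. The bodies mirror the proxy helpers in
kernel/locking/rtmutex.c of this period, and __rt_mutex_init(),
rt_mutex_set_owner() and the debug_rt_mutex_proxy_*() helpers are the
kernel-internal routines from rtmutex_common.h; exact signatures vary by
kernel version, so treat this as illustration rather than the verbatim patch:

/*
 * rt_mutex_init_proxy_locked - initialize and lock an rtmutex on behalf
 * of a proxy owner.
 *
 * No locking: the caller has to serialize. This is safe because the
 * rtmutex is not yet visible to anyone else: the prospective owner's
 * pi_lock is not held and the lock is not queued on the owner's
 * pi_waiters list, so no concurrent operation can observe it.
 */
void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
                                struct task_struct *proxy_owner)
{
        __rt_mutex_init(lock, NULL);
        debug_rt_mutex_proxy_lock(lock, proxy_owner);
        rt_mutex_set_owner(lock, proxy_owner);
}

/*
 * rt_mutex_proxy_unlock - release an rtmutex held on behalf of a proxy
 * owner.
 *
 * No locking: the caller has to serialize. This is safe because the
 * helper is only invoked when the enclosing pi_state is about to be
 * freed and the rtmutex is no longer reachable by other tasks, so no
 * concurrent operation on it is possible.
 */
void rt_mutex_proxy_unlock(struct rt_mutex *lock,
                           struct task_struct *proxy_owner)
{
        debug_rt_mutex_proxy_unlock(lock);
        rt_mutex_set_owner(lock, NULL);
}

In both cases the serialization comes from object lifetime rather than from a
lock: the rtmutex is either not yet published or already unreachable when
these helpers run.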

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20161130210030.591941927@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-12-02 11:13:57 +01:00
lockdep_internals.h lockdep: Limit static allocations if PROVE_LOCKING_SMALL is defined 2016-11-18 11:33:19 -08:00
lockdep_proc.c lockdep: Fix lock_chain::base size 2016-04-23 13:53:03 +02:00
lockdep_states.h locking: Move the lockdep code to kernel/locking/ 2013-11-06 07:55:08 +01:00
lockdep.c locking/lockdep: Remove unused parameter from the add_lock_to_list() function 2016-11-11 08:25:20 +01:00
locktorture.c locking/locktorture: Simplify the torture_runnable computation 2016-04-28 10:57:51 +02:00
Makefile locking/lglock: Remove lglock implementation 2016-09-22 15:25:56 +02:00
mcs_spinlock.h locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
mutex-debug.c locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex-debug.h locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex.c locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted 2016-11-22 12:48:10 +01:00
mutex.h locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
osq_lock.c locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock() 2016-11-22 12:48:10 +01:00
percpu-rwsem.c locking/percpu-rwsem: Optimize readers and reduce global impact 2016-08-10 14:34:01 +02:00
qrwlock.c locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
qspinlock_paravirt.h locking/pv-qspinlock: Use cmpxchg_release() in __pv_queued_spin_unlock() 2016-09-22 15:25:51 +02:00
qspinlock_stat.h locking/pvstat: Separate wait_again and spurious wakeup stats 2016-08-10 14:16:02 +02:00
qspinlock.c locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() 2016-06-27 11:37:41 +02:00
rtmutex_common.h locking/rtmutex: Get rid of RT_MUTEX_OWNER_MASKALL 2016-12-02 11:13:57 +01:00
rtmutex-debug.c rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rtmutex-debug.h rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rtmutex.c locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked() 2016-12-02 11:13:57 +01:00
rtmutex.h rtmutex: Cleanup deadlock detector debug logic 2014-06-21 22:05:30 +02:00
rwsem-spinlock.c locking/rwsem: Introduce basis for down_write_killable() 2016-04-13 10:42:20 +02:00
rwsem-xadd.c locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted 2016-11-22 12:48:10 +01:00
rwsem.c locking/rwsem: Add reader-owned state to the owner field 2016-06-08 15:16:59 +02:00
rwsem.h locking/rwsem: Protect all writes to owner by WRITE_ONCE() 2016-06-08 15:16:59 +02:00
semaphore.c locking/semaphore: Resolve some shadow warnings 2014-09-04 07:17:24 +02:00
spinlock_debug.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00
spinlock.c spinlock: Add spin_lock_bh_nested() 2015-01-03 14:32:57 -05:00