Mirror of https://github.com/edk2-porting/linux-next.git (synced 2024-12-16 01:04:08 +08:00)
Commit 34b133f8e9
The spinning mutex implementation uses cpu_relax() in busy loops as a compiler barrier. Depending on the architecture, cpu_relax() may do more than is needed in these specific mutex spin loops. On System z we also give up the time slice of the virtual cpu in cpu_relax(), which prevents effective spinning on the mutex.

This patch replaces cpu_relax() in the spinning mutex code with arch_mutex_cpu_relax(), which can be defined by each architecture that selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so for now this patch should not affect architectures other than System z.

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
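For context, a minimal sketch of the fallback pattern the commit message describes: architectures that do not select HAVE_ARCH_MUTEX_CPU_RELAX keep the old behavior, because the hook simply aliases cpu_relax(). This is illustrative only; the exact header in which the kernel places this default is an assumption here.

/* Illustrative default: without CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX the
 * hook stays an alias for cpu_relax(), so other architectures are
 * unaffected by the change. */
#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
#define arch_mutex_cpu_relax()	cpu_relax()
#endif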
12 lines · 350 B · C
/*
 * Pull in the generic implementation for the mutex fastpath.
 *
 * TODO: implement optimized primitives instead, or leave the generic
 * implementation in place, or pick the atomic_xchg() based generic
 * implementation. (see asm-generic/mutex-xchg.h for details)
 */

#include <asm-generic/mutex-dec.h>

#define arch_mutex_cpu_relax()	barrier()
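The header above is the System z side of the change: arch_mutex_cpu_relax() is reduced to a plain compiler barrier(), so the spinning code keeps the virtual cpu busy instead of giving up its time slice as the s390 cpu_relax() would. The following is a hedged sketch of how the spin-wait side then uses the hook; the helper name is hypothetical and the real loop in kernel/mutex.c differs in detail.

/* Illustrative busy-wait while the mutex owner is still running on
 * another cpu; the arch hook replaces the former cpu_relax() call.
 * mutex_owner_still_running() is a hypothetical placeholder for the
 * kernel's actual owner check. */
while (mutex_owner_still_running(lock))
	arch_mutex_cpu_relax();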