Mirror of https://github.com/edk2-porting/linux-next.git, synced 2024-12-23 20:53:53 +08:00

Commit 00d5551cc4
Currently x86's arch_cmpxchg64() and arch_cmpxchg64_local() are
instrumented twice, as they call into instrumented atomics rather than
their arch_ equivalents.

A call to cmpxchg64() results in:

  cmpxchg64()
    kasan_check_write()
    arch_cmpxchg64()
      cmpxchg()
        kasan_check_write()
        arch_cmpxchg()

Let's fix this up and call the arch_ equivalents, resulting in:

  cmpxchg64()
    kasan_check_write()
    arch_cmpxchg64()
      arch_cmpxchg()

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: andy.shevchenko@gmail.com
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: catalin.marinas@arm.com
Cc: glider@google.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: parri.andrea@gmail.com
Cc: peter@hurleysoftware.com
Link: http://lkml.kernel.org/r/20180716113017.3909-3-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
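To make the double instrumentation concrete, below is a minimal userspace sketch, not the kernel's actual code: the macro names mirror the kernel's, but kasan_check_write() is replaced by a hypothetical hit counter, and cmpxchg64() takes its arch_ op as an extra macro parameter purely so both wirings can be compared in one program. It compiles with GCC or Clang (statement expressions are used): cc demo.c && ./a.out

#include <stdint.h>
#include <stdio.h>

static int checks;	/* counts instrumentation hits */

static void kasan_check_write(volatile void *p, unsigned long size)
{
	(void)p;
	(void)size;
	checks++;	/* the real helper validates the access for KASAN */
}

/* raw, uninstrumented primitive */
#define arch_cmpxchg(ptr, old, new) \
	__sync_val_compare_and_swap((ptr), (old), (new))

/* instrumented wrapper: check the access once, then do the raw op */
#define cmpxchg(ptr, old, new)				\
({							\
	kasan_check_write((ptr), sizeof(*(ptr)));	\
	arch_cmpxchg((ptr), (old), (new));		\
})

/* before the fix: arch_cmpxchg64() bounced through instrumented cmpxchg() */
#define arch_cmpxchg64_before(ptr, o, n)	cmpxchg((ptr), (o), (n))
/* after the fix: it calls the raw arch_cmpxchg() directly */
#define arch_cmpxchg64_after(ptr, o, n)		arch_cmpxchg((ptr), (o), (n))

/* instrumented 64-bit entry point, identical in both cases */
#define cmpxchg64(arch_op, ptr, o, n)			\
({							\
	kasan_check_write((ptr), sizeof(*(ptr)));	\
	arch_op((ptr), (o), (n));			\
})

int main(void)
{
	uint64_t v = 1;

	checks = 0;
	cmpxchg64(arch_cmpxchg64_before, &v, (uint64_t)1, (uint64_t)2);
	printf("before the fix: %d checks\n", checks);	/* prints 2 */

	checks = 0;
	cmpxchg64(arch_cmpxchg64_after, &v, (uint64_t)2, (uint64_t)3);
	printf("after the fix:  %d checks\n", checks);	/* prints 1 */

	return 0;
}

The single-check path is exactly what the patch restores: instrumentation belongs in the outermost wrapper only, and the arch_ layer should stay raw so each access is checked once.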
25 lines · 561 B · C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_CMPXCHG_64_H
#define _ASM_X86_CMPXCHG_64_H

static inline void set_64bit(volatile u64 *ptr, u64 val)
{
	*ptr = val;
}

#define arch_cmpxchg64(ptr, o, n)					\
({									\
	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
	arch_cmpxchg((ptr), (o), (n));					\
})

#define arch_cmpxchg64_local(ptr, o, n)					\
({									\
	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
	arch_cmpxchg_local((ptr), (o), (n));				\
})

#define system_has_cmpxchg_double()	boot_cpu_has(X86_FEATURE_CX16)

#endif /* _ASM_X86_CMPXCHG_64_H */
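The BUILD_BUG_ON(sizeof(*(ptr)) != 8) in both macros turns misuse, passing anything other than a 64-bit object, into a build failure rather than a silent wrong-width operation. Below is a minimal standalone sketch of the classic idiom; the kernel's real definition (in include/linux/build_bug.h) is more elaborate, but the core trick is the same: a negative array size is a compile-time error.

#include <stdint.h>

/* If cond is true, char[-1] is a constraint violation and the build fails. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

int main(void)
{
	uint64_t wide = 0;
	uint32_t narrow = 0;

	BUILD_BUG_ON(sizeof(wide) != 8);	/* false: compiles fine */
	/* BUILD_BUG_ON(sizeof(narrow) != 8);	   true: would break the build */

	(void)wide;
	(void)narrow;
	return 0;
}

Because the condition is evaluated entirely at compile time, the check costs nothing at runtime, which is why arch_cmpxchg64() can afford it on every expansion.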