linux/arch/x86/include/asm/linkage.h
Brian Gerst 0676b4e0a1 x86/entry/32: Remove asmlinkage_protect()
Now that syscalls are called from C code, which copies the args to
new stack slots instead of overlaying pt_regs, asmlinkage_protect()
is no longer needed.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1462416278-11974-4-git-send-email-brgerst@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 08:37:31 +02:00
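
For context on the commit message above, a rough sketch of the C-based dispatch it refers to (the names pt_regs_sketch, sys_call_ptr_sketch and do_syscall_sketch are made up for illustration; this is not the kernel's actual entry code): the 32-bit entry path now calls a C function that reads the syscall arguments out of pt_regs and passes them on as ordinary C arguments, so the callee only ever sees fresh copies in its own stack slots. Roughly speaking, asmlinkage_protect() was an empty inline-asm barrier that told the compiler the caller-owned argument slots were still live; with the copying dispatch it has nothing left to protect.

/*
 * Simplified stand-ins for struct pt_regs and the syscall table; the
 * point is only that each regs->xx read below produces a copy, so the
 * callee never touches the pt_regs slots themselves.
 */
struct pt_regs_sketch {
	unsigned long bx, cx, dx, si, di, bp, ax;
};

typedef long (*sys_call_ptr_sketch)(unsigned long, unsigned long,
				    unsigned long, unsigned long,
				    unsigned long, unsigned long);

static void do_syscall_sketch(struct pt_regs_sketch *regs,
			      const sys_call_ptr_sketch *table,
			      unsigned int nr)
{
	/* The argument values are copied into new slots for the call. */
	regs->ax = table[nr](regs->bx, regs->cx, regs->dx,
			     regs->si, regs->di, regs->bp);
}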

arch/x86/include/asm/linkage.h (28 lines, 542 B, C):

#ifndef _ASM_X86_LINKAGE_H
#define _ASM_X86_LINKAGE_H

#include <linux/stringify.h>

#undef notrace
#define notrace __attribute__((no_instrument_function))

#ifdef CONFIG_X86_32
#define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
#endif /* CONFIG_X86_32 */

#ifdef __ASSEMBLY__

#define GLOBAL(name)	\
	.globl name;	\
	name:

#if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
#define __ALIGN		.p2align 4, 0x90
#define __ALIGN_STR	__stringify(__ALIGN)
#endif

#endif /* __ASSEMBLY__ */

#endif /* _ASM_X86_LINKAGE_H */
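
As a footnote to the CONFIG_X86_32 branch above, a minimal user-space sketch (hypothetical demo_ names, not kernel code) of what the regparm(0) attribute behind asmlinkage does: when the surrounding code is built with gcc -m32 -mregparm=3, as 32-bit kernels are, most functions receive their first three integer arguments in registers, while regparm(0) pins every argument back onto the stack, which is the convention the hand-written entry assembly expects when calling into C.

/* Hypothetical demo macro mirroring the asmlinkage definition above. */
#define demo_asmlinkage __attribute__((regparm(0)))

/* Build with: gcc -m32 -mregparm=3 -c demo.c */
demo_asmlinkage long demo_sys_call(long a, long b, long c)
{
	/*
	 * With regparm(0), a, b and c are always read from the stack,
	 * even though neighbouring regparm(3) functions would take
	 * them in %eax/%edx/%ecx.
	 */
	return a + b + c;
}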