linux/scripts/module.lds.S


/*
 * Common module linker script, always used when linking a module.
 * Archs are free to supply their own linker scripts.  ld will
 * combine them automatically.
 */
SECTIONS {
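	/*
	 * Input sections assigned to /DISCARD/ are dropped by the linker.
	 * The ".discard" sections carry build-time-only artifacts
	 * (annotations, compile-time assertions and the like) that must
	 * not end up in the loaded module.
	 */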
	/DISCARD/ : {
		*(.discard)
		*(.discard.*)
	}
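
	/*
	 * Exported-symbol and CRC tables.  Each EXPORT_SYMBOL*() entry is
	 * emitted into its own input section (e.g. "___ksymtab+printk"),
	 * so SORT() leaves every table ordered by symbol name and the
	 * module loader can resolve symbols with a binary search.  The
	 * "0" gives each section a zero start address; modules are
	 * relocatable and get their real addresses only at load time.
	 */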
	__ksymtab		0 : { *(SORT(___ksymtab+*)) }
	__ksymtab_gpl		0 : { *(SORT(___ksymtab_gpl+*)) }
	__ksymtab_unused	0 : { *(SORT(___ksymtab_unused+*)) }
	__ksymtab_unused_gpl	0 : { *(SORT(___ksymtab_unused_gpl+*)) }
	__ksymtab_gpl_future	0 : { *(SORT(___ksymtab_gpl_future+*)) }
	__kcrctab		0 : { *(SORT(___kcrctab+*)) }
	__kcrctab_gpl		0 : { *(SORT(___kcrctab_gpl+*)) }
	__kcrctab_unused	0 : { *(SORT(___kcrctab_unused+*)) }
	__kcrctab_unused_gpl	0 : { *(SORT(___kcrctab_unused_gpl+*)) }
	__kcrctab_gpl_future	0 : { *(SORT(___kcrctab_gpl_future+*)) }
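
	/*
	 * Module constructors.  Compilers can emit constructors with a
	 * priority suffix (for instance, KASAN's globals instrumentation
	 * in GCC 4.9.2 produces ".init_array.00099"); merge those
	 * ".init_array.*" sections, sorted by priority, into the plain
	 * ".init_array" section that the module loader already runs.
	 */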
	.init_array		0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
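
	/*
	 * These tables hold 32- and 64-bit values, so force the same
	 * 8-byte alignment that vmlinux.lds.S uses.  Left byte-aligned,
	 * they would be processed with unaligned accesses at run time: a
	 * performance penalty everywhere, and a crash risk where an
	 * architecture's unalignment fixup handler is buggy.
	 */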
	.altinstructions	0 : ALIGN(8) { KEEP(*(.altinstructions)) }
	__bug_table		0 : ALIGN(8) { KEEP(*(__bug_table)) }
	__jump_table		0 : ALIGN(8) { KEEP(*(__jump_table)) }
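
	/*
	 * Entry records emitted by -fpatchable-function-entry: one
	 * pointer per patchable function entry, used for example by
	 * ftrace on arm64.
	 */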
	__patchable_function_entries : { *(__patchable_function_entries) }
#ifdef CONFIG_LTO_CLANG
	/*
	 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
	 * -ffunction-sections, which increases the size of the final module.
	 * Merge the split sections in the final binary.
	 */
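	/*
	 * The "[0-9a-zA-Z_]*" patterns require the first character after
	 * the dot to be alphanumeric or '_', so special double-dot
	 * sections such as ".data..percpu" (which the module loader looks
	 * up by name) are not folded in; the ".L*" local-symbol sections
	 * Clang emits are matched separately via "..L*".
	 */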
	.bss : {
		*(.bss .bss.[0-9a-zA-Z_]*)
		*(.bss..L*)
	}

	.data : {
		*(.data .data.[0-9a-zA-Z_]*)
		*(.data..L*)
	}

	.rodata : {
		*(.rodata .rodata.[0-9a-zA-Z_]*)
		*(.rodata..L*)
	}

	.text : { *(.text .text.[0-9a-zA-Z_]*) }
#endif
}
/* bring in arch-specific sections */
#include <asm/module.lds.h>