linux/scripts/module.lds.S
Helge Deller 333bdb72be modules: Ensure natural alignment for .altinstructions and __bug_table sections
[ Upstream commit 87c482bdfa ]

In the kernel image's vmlinux.lds.S linker scripts, the .altinstructions
and __bug_table sections are 4- or 8-byte aligned because they hold 32-
and/or 64-bit values.
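
For comparison, the core kernel gets that alignment from
include/asm-generic/vmlinux.lds.h; its BUG_TABLE macro looks roughly like
this (exact details vary between kernel versions):

#ifdef CONFIG_GENERIC_BUG
#define BUG_TABLE							\
	. = ALIGN(8);							\
	__bug_table : AT(ADDR(__bug_table) - LOAD_OFFSET) {		\
		__start___bug_table = .;				\
		KEEP(*(__bug_table))					\
		__stop___bug_table = .;					\
	}
#else
#define BUG_TABLE
#endif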

Most architectures use altinstructions and BUG() or WARN() in modules as
well, but in the module linker script (module.lds.S) those section
descriptions are currently missing. As a consequence the linker will
store their contents byte-aligned by default, which can then lead to
unnecessary unaligned memory accesses by the CPU when those tables are
processed at runtime.
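
To see why this matters, consider the entries stored in __bug_table. A
simplified sketch of struct bug_entry from include/asm-generic/bug.h,
assuming a 64-bit architecture with absolute pointers and
CONFIG_DEBUG_BUGVERBOSE (the real definition is heavily #ifdef'ed):

struct bug_entry {
	unsigned long	bug_addr;	/* address of the BUG()/WARN() site */
	const char	*file;		/* source file of the BUG()/WARN() */
	unsigned short	line;		/* line number within that file */
	unsigned short	flags;
};

If the section starts at an arbitrary byte offset, every load of bug_addr
or file while the table is walked can be an unaligned access.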

Usually unaligned memory accesses go unnoticed, because either the
hardware (as on x86 CPUs) or in-kernel exception handlers (e.g. on
parisc or sparc) emulate and fix them up at runtime. Nevertheless, such
unaligned accesses introduce a performance penalty and can even crash
the kernel if there is a bug in the unalignment exception handlers
(which happened once to me on the parisc architecture and which is why I
noticed this issue at all).

This patch fixes a non-critical issue and might be backported at any time.
It's trivial and shouldn't introduce any regression because it simply
tells the linker to use a different default (8-byte) alignment for those
sections.
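
Once a module is rebuilt against the patched script, the effect is easy to
verify from the section headers, e.g. (the module name here is just a
placeholder):

  $ readelf -SW some_module.ko | grep -E 'altinstructions|bug_table'

The Al (alignment) column should now show 8 for those sections instead of
the previous byte alignment.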

Signed-off-by: Helge Deller <deller@gmx.de>
Link: https://lore.kernel.org/all/Yr8%2Fgr8e8I7tVX4d@p100/
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-25 11:38:19 +02:00

/*
 * Common module linker script, always used when linking a module.
 * Archs are free to supply their own linker scripts. ld will
 * combine them automatically.
 */
SECTIONS {
	/DISCARD/ : {
		*(.discard)
		*(.discard.*)
	}

	__ksymtab		0 : { *(SORT(___ksymtab+*)) }
	__ksymtab_gpl		0 : { *(SORT(___ksymtab_gpl+*)) }
	__ksymtab_unused	0 : { *(SORT(___ksymtab_unused+*)) }
	__ksymtab_unused_gpl	0 : { *(SORT(___ksymtab_unused_gpl+*)) }
	__ksymtab_gpl_future	0 : { *(SORT(___ksymtab_gpl_future+*)) }
	__kcrctab		0 : { *(SORT(___kcrctab+*)) }
	__kcrctab_gpl		0 : { *(SORT(___kcrctab_gpl+*)) }
	__kcrctab_unused	0 : { *(SORT(___kcrctab_unused+*)) }
	__kcrctab_unused_gpl	0 : { *(SORT(___kcrctab_unused_gpl+*)) }
	__kcrctab_gpl_future	0 : { *(SORT(___kcrctab_gpl_future+*)) }

	.init_array		0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }

	.altinstructions	0 : ALIGN(8) { KEEP(*(.altinstructions)) }
	__bug_table		0 : ALIGN(8) { KEEP(*(__bug_table)) }
	__jump_table		0 : ALIGN(8) { KEEP(*(__jump_table)) }

	__patchable_function_entries : { *(__patchable_function_entries) }

#ifdef CONFIG_LTO_CLANG
	/*
	 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
	 * -ffunction-sections, which increases the size of the final module.
	 * Merge the split sections in the final binary.
	 */
	.bss : {
		*(.bss .bss.[0-9a-zA-Z_]*)
		*(.bss..L*)
	}

	.data : {
		*(.data .data.[0-9a-zA-Z_]*)
		*(.data..L*)
	}

	.rodata : {
		*(.rodata .rodata.[0-9a-zA-Z_]*)
		*(.rodata..L*)
	}

	.text : { *(.text .text.[0-9a-zA-Z_]*) }
#endif
}
/* bring in arch-specific sections */
#include <asm/module.lds.h>
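
As an example of what such an arch-specific script can contribute: on
arm64, asm/module.lds.h reserves placeholder sections for module PLTs. A
rough sketch for kernels of this era (the exact (NOLOAD)/BYTE(0) details
changed over time):

#ifdef CONFIG_ARM64_MODULE_PLTS
SECTIONS {
	.plt 0 : { BYTE(0) }
	.init.plt 0 : { BYTE(0) }
	.text.ftrace_trampoline 0 : { BYTE(0) }
}
#endif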