linux/arch/arm64/kernel/jump_label.c

// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2013 Huawei Ltd.
 * Author: Jiang Liu <liuj97@gmail.com>
 *
 * Based on arch/arm/kernel/jump_label.c
 */
#include <linux/kernel.h>
#include <linux/jump_label.h>
#include <asm/insn.h>

void arch_jump_label_transform(struct jump_entry *entry,
                               enum jump_label_type type)
{
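        /*
         * The code, target and key fields of struct jump_entry are stored
         * as relative offsets rather than absolute addresses; the
         * jump_entry_code()/jump_entry_target() accessors below resolve
         * them back into absolute addresses.
         */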
        void *addr = (void *)jump_entry_code(entry);
        u32 insn;

        if (type == JUMP_LABEL_JMP) {
                insn = aarch64_insn_gen_branch_imm(jump_entry_code(entry),
                                                   jump_entry_target(entry),
                                                   AARCH64_INSN_BRANCH_NOLINK);
        } else {
                insn = aarch64_insn_gen_nop();
        }
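
        /*
         * Swizzling a single instruction between a NOP and a branch is
         * architecturally safe: a concurrently executing CPU is guaranteed
         * to see either the old encoding or the new one, never a mix of
         * the two. We therefore don't need stop_machine() here, and must
         * not use it: on this path the hotplug lock is already held and
         * IRQs get disabled by stop_machine_cpuslocked(), so the IPI that
         * makes the new instruction visible could never complete,
         * resulting in a deadlock.
         */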
        aarch64_insn_patch_text_nosync(addr, insn);
}

void arch_jump_label_transform_static(struct jump_entry *entry,
                                      enum jump_label_type type)
{
        /*
         * We use the architected A64 NOP in arch_static_branch, so there's no
         * need to patch an identical A64 NOP over the top of it here. The core
         * will call arch_jump_label_transform from a module notifier if the
         * NOP needs to be replaced by a branch.
         */
}
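
/*
 * Illustrative sketch (not part of the file itself): how a static key on
 * the caller side exercises the patching above. The key and function
 * names are hypothetical; the static_branch_*() API is the real one from
 * <linux/jump_label.h>. Enabling the key makes the core call
 * arch_jump_label_transform() with JUMP_LABEL_JMP, patching the NOP at
 * each branch site into a branch to the unlikely path; disabling it
 * patches the branch back into a NOP.
 *
 *      static DEFINE_STATIC_KEY_FALSE(my_feature_key);
 *
 *      void my_hot_path(void)
 *      {
 *              if (static_branch_unlikely(&my_feature_key))
 *                      my_slow_feature();
 *      }
 *
 *      static_branch_enable(&my_feature_key);   // NOP -> branch
 *      static_branch_disable(&my_feature_key);  // branch -> NOP
 */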