mirror of https://mirrors.bfsu.edu.cn/git/linux.git (synced 2024-11-17 01:04:19 +08:00)

commit 94a855111e
Merge tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 core updates from Borislav Petkov:

 - Add the call depth tracking mitigation for Retbleed, which has been
   long in the making. It is a lighter-weight, software-only fix for
   Skylake-based cores, where enabling IBRS is a big hammer that causes
   a significant performance impact.

   What it basically does is align all kernel functions to a 16-byte
   boundary and add 16 bytes of padding before each function. objtool
   collects all function locations and, when the mitigation gets
   applied, patches in a call accounting thunk which is used to track
   the call depth of the stack at any time. When that call depth
   reaches a magical, microarchitecture-specific value for the Return
   Stack Buffer, the code stuffs that RSB and avoids its underflow,
   which could otherwise lead to the Intel variant of Retbleed.
   This software-only solution brings back a lot of the lost
   performance, as the benchmarks suggest:

     https://lore.kernel.org/all/20220915111039.092790446@infradead.org/

   That page also contains a much more detailed explanation of the
   whole mechanism (a toy C model of the accounting scheme follows
   after the shortlog below).

 - Implement a new control flow integrity scheme called FineIBT, which
   is based on the software kCFI implementation and uses hardware IBT
   support, where present, to annotate and track indirect branches
   using a hash to validate them (a second sketch below illustrates the
   hash check).

 - Other misc fixes and cleanups

* tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
  x86/paravirt: Use common macro for creating simple asm paravirt functions
  x86/paravirt: Remove clobber bitmask from .parainstructions
  x86/debug: Include percpu.h in debugreg.h to get DECLARE_PER_CPU() et al
  x86/cpufeatures: Move X86_FEATURE_CALL_DEPTH from bit 18 to bit 19 of word 11, to leave space for WIP X86_FEATURE_SGX_EDECCSSA bit
  x86/Kconfig: Enable kernel IBT by default
  x86,pm: Force out-of-line memcpy()
  objtool: Fix weak hole vs prefix symbol
  objtool: Optimize elf_dirty_reloc_sym()
  x86/cfi: Add boot time hash randomization
  x86/cfi: Boot time selection of CFI scheme
  x86/ibt: Implement FineIBT
  objtool: Add --cfi to generate the .cfi_sites section
  x86: Add prefix symbols for function padding
  objtool: Add option to generate prefix symbols
  objtool: Avoid O(bloody terrible) behaviour -- an ode to libelf
  objtool: Slice up elf_create_section_symbol()
  kallsyms: Revert "Take callthunks into account"
  x86: Unconfuse CONFIG_ and X86_FEATURE_ namespaces
  x86/retpoline: Fix crash printing warning
  x86/paravirt: Fix a !PARAVIRT build warning
  ...
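To make the call depth tracking idea above a bit more concrete, here is a
minimal, purely illustrative C model. It is a sketch under assumptions, not
the kernel's implementation: the real mitigation lives in asm thunks patched
in at boot, the depth counter is per-CPU, and all names here (rsb_depth,
RSB_CAPACITY, stuff_rsb, account_call, account_return) plus the RSB size of
16 are invented or assumed for the illustration.

  /*
   * Toy model of the call depth accounting + RSB stuffing scheme.
   * Illustration only: the real mitigation is asm patched in by the
   * kernel, the counter is per-CPU, and all names here are invented.
   */
  #include <stdio.h>

  #define RSB_CAPACITY 16         /* assumed Return Stack Buffer size */

  static int rsb_depth;           /* how many usable RSB entries remain */

  static void stuff_rsb(void)
  {
          /*
           * The real code refills the RSB with benign entries so a
           * later 'ret' cannot underflow it and fall back to the
           * exploitable indirect branch predictor.
           */
          rsb_depth = RSB_CAPACITY;
          puts("RSB stuffed");
  }

  static void account_call(void)          /* run by the call accounting thunk */
  {
          if (rsb_depth < RSB_CAPACITY)
                  rsb_depth++;            /* one more return is covered */
  }

  static void account_return(void)        /* run in the return thunk */
  {
          if (--rsb_depth <= 0)
                  stuff_rsb();            /* next ret would underflow */
  }

  int main(void)
  {
          int i;

          /* Return far more often than we call: forces RSB stuffing. */
          for (i = 0; i < 4; i++)
                  account_call();
          for (i = 0; i < 24; i++)
                  account_return();
          return 0;
  }

The point the model tries to capture is that the relatively expensive
stuffing only happens when the tracked depth says the RSB is about to run
dry, which is what makes the scheme cheaper than the IBRS big hammer.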
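Similarly, a rough C model of the FineIBT-style check described in the
second bullet: the caller announces a hash of the expected function type
before the indirect call and a small preamble at the callee validates it.
This is only a sketch; the real scheme is compiler- and objtool-generated
asm, and the hash value, the global standing in for the scratch register,
and the helper names below are all made up.

  /*
   * Conceptual model of a FineIBT-style callee-side hash check.
   * Illustration only: the hash value and all names are made up.
   */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define FOO_TYPE_HASH 0x9a3cf1ull       /* hypothetical hash of int (*)(int) */

  static uint64_t caller_hash;            /* stands in for the scratch register */

  /* What the per-function preamble does on the callee side. */
  static void fineibt_check(uint64_t expected)
  {
          if (caller_hash != expected) {
                  fprintf(stderr, "CFI failure: bad indirect call target\n");
                  abort();                /* the real code traps instead */
          }
  }

  static int foo(int x)
  {
          fineibt_check(FOO_TYPE_HASH);   /* prototype hash must match */
          return x + 1;
  }

  int main(void)
  {
          int (*fp)(int) = foo;

          caller_hash = FOO_TYPE_HASH;    /* caller side: announce the type */
          printf("%d\n", fp(2));          /* indirect call passes the check */
          return 0;
  }

Roughly speaking, the distinguishing feature is that the check sits on the
callee side, right behind the hardware IBT landing pad, whereas plain kCFI
performs the hash comparison on the caller side.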
Files in this directory:

.gitignore
aegis128-aesni-asm.S
aegis128-aesni-glue.c
aes_ctrby8_avx-x86_64.S
aesni-intel_asm.S
aesni-intel_avx-x86_64.S
aesni-intel_glue.c
aria_aesni_avx_glue.c
aria-aesni-avx-asm_64.S
aria-avx.h
blake2s-core.S
blake2s-glue.c
blowfish_glue.c
blowfish-x86_64-asm_64.S
camellia_aesni_avx2_glue.c
camellia_aesni_avx_glue.c
camellia_glue.c
camellia-aesni-avx2-asm_64.S
camellia-aesni-avx-asm_64.S
camellia-x86_64-asm_64.S
camellia.h
cast5_avx_glue.c
cast5-avx-x86_64-asm_64.S
cast6_avx_glue.c
cast6-avx-x86_64-asm_64.S
chacha_glue.c
chacha-avx2-x86_64.S
chacha-avx512vl-x86_64.S
chacha-ssse3-x86_64.S
crc32-pclmul_asm.S
crc32-pclmul_glue.c
crc32c-intel_glue.c
crc32c-pcl-intel-asm_64.S
crct10dif-pcl-asm_64.S
crct10dif-pclmul_glue.c
curve25519-x86_64.c
des3_ede_glue.c
des3_ede-asm_64.S
ecb_cbc_helpers.h
ghash-clmulni-intel_asm.S
ghash-clmulni-intel_glue.c
glue_helper-asm-avx2.S
glue_helper-asm-avx.S
Kconfig
Makefile
nh-avx2-x86_64.S
nh-sse2-x86_64.S
nhpoly1305-avx2-glue.c
nhpoly1305-sse2-glue.c
poly1305_glue.c
poly1305-x86_64-cryptogams.pl
polyval-clmulni_asm.S
polyval-clmulni_glue.c
serpent_avx2_glue.c
serpent_avx_glue.c
serpent_sse2_glue.c
serpent-avx2-asm_64.S
serpent-avx-x86_64-asm_64.S
serpent-avx.h
serpent-sse2-i586-asm_32.S
serpent-sse2-x86_64-asm_64.S
serpent-sse2.h
sha1_avx2_x86_64_asm.S
sha1_ni_asm.S
sha1_ssse3_asm.S
sha1_ssse3_glue.c
sha256_ni_asm.S
sha256_ssse3_glue.c
sha256-avx2-asm.S
sha256-avx-asm.S
sha256-ssse3-asm.S
sha512_ssse3_glue.c
sha512-avx2-asm.S
sha512-avx-asm.S
sha512-ssse3-asm.S
sm3_avx_glue.c
sm3-avx-asm_64.S
sm4_aesni_avx2_glue.c
sm4_aesni_avx_glue.c
sm4-aesni-avx2-asm_64.S
sm4-aesni-avx-asm_64.S
sm4-avx.h
twofish_avx_glue.c
twofish_glue_3way.c
twofish_glue.c
twofish-avx-x86_64-asm_64.S
twofish-i586-asm_32.S
twofish-x86_64-asm_64-3way.S
twofish-x86_64-asm_64.S
twofish.h