linux/kernel/bpf
Luis Gerhorst e4f4db4779 bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation
To mitigate Spectre v4, 2039f26f3a ("bpf: Fix leakage due to
insufficient speculative store bypass mitigation") inserts lfence
instructions after 1) initializing a stack slot and 2) spilling a
pointer to the stack.

However, this does not cover cases where a stack slot is first
initialized with a pointer (subject to sanitization) but then
overwritten with a scalar (not subject to sanitization because
the slot was already initialized). In this case, the second write
may be subject to speculative store bypass (SSB), creating a
speculative pointer-as-scalar type confusion. This allows the
program to subsequently leak the numerical pointer value using,
for example, a branch-based cache side channel.

To fix this, also sanitize scalars if they write a stack slot
that previously contained a pointer. Assuming that pointer spills
are only generated by LLVM under register pressure, the performance
impact on most real-world BPF programs should be small.
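
The resulting decision can be modeled by the following self-contained C
sketch (not the kernel code; slot_type_t and needs_nospec() are made up
for illustration): a stack write may skip the Spectre v4 barrier
(BPF_NOSPEC, an lfence on x86) only if a scalar overwrites a slot that
already holds nothing but scalar data.

  #include <stdbool.h>
  #include <stddef.h>

  /* Simplified slot states, mirroring STACK_INVALID/SPILL/MISC/ZERO. */
  typedef enum { SLOT_INVALID, SLOT_SPILL, SLOT_MISC, SLOT_ZERO } slot_type_t;

  static bool needs_nospec(bool writes_spillable_ptr,
                           const slot_type_t *slot, size_t size)
  {
          if (writes_spillable_ptr)        /* pointer spill: always sanitize */
                  return true;

          for (size_t i = 0; i < size; i++)
                  if (slot[i] != SLOT_MISC && slot[i] != SLOT_ZERO)
                          return true;     /* slot was uninit. or held a ptr */

          return false;                    /* scalar over scalar: skip it    */
  }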

The following unprivileged BPF bytecode sketches a minimal exploit and
the points where the mitigation applies:

  [...]
  // r6 = 0 or 1 (scalar, unknown user input)
  // r7 = accessible ptr for side channel
  // r10 = frame pointer (fp), to be leaked
  //
  r9 = r10 // fp alias to encourage ssb
  *(u64 *)(r9 - 8) = r10 // fp[-8] = ptr, to be leaked
  // lfence added here because of pointer spill to stack.
  //
  // Omitted: Dummy bpf_ringbuf_output() here to train alias predictor
  // for no r9-r10 dependency.
  //
  *(u64 *)(r10 - 8) = r6 // fp[-8] = scalar, overwrites ptr
  // 2039f26f3a: no lfence added because the stack slot was not STACK_INVALID,
  // so the store may be subject to SSB
  //
  // fix: also add an lfence when the slot contained a ptr
  //
  r8 = *(u64 *)(r9 - 8)
  // r8 = architecturally a scalar, speculatively a ptr
  //
  // leak ptr using branch-based cache side channel:
  r8 &= 1 // choose bit to leak
  if r8 == 0 goto SLOW // no mispredict
  // architecturally dead code if input r6 is 0,
  // only executes speculatively iff ptr bit is 1
  r8 = *(u64 *)(r7 + 0) // encode bit in cache (0: slow, 1: fast)
SLOW:
  [...]

After running this, the program can time the access to *(r7 + 0) to
determine whether the chosen pointer bit was 0 or 1. Repeat this 64
times to recover the whole address on amd64.
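
For reference, the timing step in user space could look roughly like the
sketch below (probe_was_cached() and the cycle threshold are illustrative
and not part of the patch):

  #include <stdint.h>
  #include <x86intrin.h>

  /* Returns 1 if the probe line was already cached, i.e. the speculative
   * load in the BPF program touched it, encoding a leaked bit of 1. */
  static int probe_was_cached(const volatile uint8_t *probe)
  {
          unsigned int aux;
          uint64_t t0, t1;

          t0 = __rdtscp(&aux);
          (void)*probe;                   /* the access being timed     */
          t1 = __rdtscp(&aux);

          return (t1 - t0) < 100;         /* below threshold: cache hit */
  }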

In summary, sanitization can only be skipped if one scalar is
overwritten with another scalar. Scalar confusion due to speculative
store bypass cannot lead to invalid accesses because the pointer
bounds deduced during verification are enforced using branchless
logic. See 979d63d50c ("bpf: prevent out of bounds speculation on
pointer arithmetic") for details.
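
The branchless enforcement boils down to masking the offset before it is
added to the pointer. The following C routine roughly mirrors the
instruction sequence patched in for pointer ALU operations (see
sanitize_ptr_alu()); the helper name is made up for illustration:

  #include <stdint.h>

  /* Offsets beyond the verifier-derived limit are forced to 0 without a
   * conditional branch, so a misspeculated scalar cannot move the pointer
   * out of the object's bounds. */
  static uint64_t clamp_alu_offset(uint64_t off, uint64_t limit)
  {
          uint64_t ax = limit;

          ax -= off;                          /* MSB set iff off > limit     */
          ax |= off;                          /* also catch off with MSB set */
          ax = -ax;                           /* MSB flips for nonzero ax    */
          ax = (uint64_t)((int64_t)ax >> 63); /* all-ones if in range, or 0  */
          return off & ax;                    /* out-of-range off becomes 0  */
  }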

Do not make the mitigation depend on !env->allow_{uninit_stack,ptr_leaks}
because speculative leaks are likely unexpected even when these are enabled.
For example, leaking the address to a protected log file may be acceptable,
while disabling the mitigation might unintentionally leak the address
into the cached state of a map that is accessible to unprivileged
processes.

Fixes: 2039f26f3a ("bpf: Fix leakage due to insufficient speculative store bypass mitigation")
Signed-off-by: Luis Gerhorst <gerhorst@cs.fau.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Henriette Hofmeier <henriette.hofmeier@rub.de>
Link: https://lore.kernel.org/bpf/edc95bad-aada-9cfc-ffe2-fa9bb206583c@cs.fau.de
Link: https://lore.kernel.org/bpf/20230109150544.41465-1-gerhorst@cs.fau.de
2023-01-13 17:18:35 +01:00
preload bpf: iterators: Build and use lightweight bootstrap version of bpftool 2022-07-15 12:01:30 -07:00
arraymap.c bpf: Do btf_record_free outside map_free callback 2022-11-17 19:11:31 -08:00
bloom_filter.c treewide: use get_random_u32() when possible 2022-10-11 17:42:58 -06:00
bpf_cgrp_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
bpf_inode_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
bpf_iter.c bpf: Initialize the bpf_run_ctx in bpf_iter_run_prog() 2022-08-18 17:06:13 -07:00
bpf_local_storage.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2022-11-29 13:04:52 -08:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h printk: stop including cache.h from printk.h 2022-05-13 07:20:07 -07:00
bpf_lsm.c bpf: Define sock security related BTF IDs under CONFIG_SECURITY_NETWORK 2022-12-19 22:02:17 +01:00
bpf_struct_ops_types.h bpf: Add dummy BPF STRUCT_OPS for test purpose 2021-11-01 14:10:00 -07:00
bpf_struct_ops.c mm: Introduce set_memory_rox() 2022-12-15 10:37:26 -08:00
bpf_task_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
btf.c for-alexei-2022120701 2022-12-07 13:49:21 -08:00
cgroup_iter.c bpf: Pin the start cgroup in cgroup_iter_seq_init() 2022-11-21 17:40:42 +01:00
cgroup.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2022-10-03 17:44:18 -07:00
core.c Including fixes from bpf, netfilter and can. 2022-12-21 08:41:32 -08:00
cpumap.c bpf: Expand map key argument of bpf_redirect_map to u64 2022-11-15 09:00:27 -08:00
devmap.c bpf: Expand map key argument of bpf_redirect_map to u64 2022-11-15 09:00:27 -08:00
disasm.c bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Synchronize dispatcher update with bpf_dispatcher_xdp_func 2022-12-14 12:02:14 -08:00
hashtab.c bpf: hash map, avoid deadlock with suitable hash mask 2023-01-12 18:55:42 -08:00
helpers.c bpf: Use memmove for bpf_dynptr_{read,write} 2022-12-08 18:39:28 -08:00
inode.c bpf: Convert bpf_preload.ko to use light skeleton. 2022-02-10 23:31:51 +01:00
Kconfig rcu: Make the TASKS_RCU Kconfig option be selected 2022-04-20 16:52:58 -07:00
link_iter.c bpf: Add bpf_link iterator 2022-05-10 11:20:45 -07:00
local_storage.c bpf: Consolidate spin_lock, timer management into btf_record 2022-11-03 22:19:40 -07:00
lpm_trie.c bpf: Use bpf_map_area_alloc consistently on bpf map creation 2022-08-10 11:50:43 -07:00
Makefile bpf: Implement cgroup storage available to non-cgroup-attached bpf progs 2022-10-25 23:19:19 -07:00
map_in_map.c bpf: Add comments for map BTF matching requirement for bpf_list_head 2022-11-17 19:22:14 -08:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Introduce MEM_RDONLY flag 2021-12-18 13:27:41 -08:00
memalloc.c bpf: Skip rcu_barrier() if rcu_trace_implies_rcu_gp() is true 2022-12-08 17:50:17 -08:00
mmap_unlock_work.h bpf: Introduce helper bpf_find_vma 2021-11-07 11:54:51 -08:00
net_namespace.c net: Add includes masked by netdevice.h including uapi/bpf.h 2021-12-29 20:03:05 -08:00
offload.c bpf: restore the ebpf program ID for BPF_AUDIT_UNLOAD and PERF_BPF_EVENT_PROG_UNLOAD 2023-01-09 19:47:58 -08:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-11 12:05:14 -08:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Remove unneeded memset in queue_stack_map creation 2022-08-10 11:48:22 -07:00
reuseport_array.c net: Fix suspicious RCU usage in bpf_sk_reuseport_detach() 2022-08-17 16:42:59 -07:00
ringbuf.c bpf: Rename MEM_ALLOC to MEM_RINGBUF 2022-11-14 21:52:45 -08:00
stackmap.c perf/bpf: Always use perf callchains if exist 2022-09-13 15:03:22 +02:00
syscall.c bpf: remove the do_idr_lock parameter from bpf_prog_free_id() 2023-01-09 19:47:59 -08:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: keep a reference to the mm, in case the task is dead. 2022-12-28 14:11:48 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Fix panic due to wrong pageattr of im->image 2022-12-28 13:46:28 -08:00
verifier.c bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation 2023-01-13 17:18:35 +01:00