// SPDX-License-Identifier: GPL-2.0-only
/*
 * Testsuite for eBPF verifier
 *
 * Copyright (c) 2014 PLUMgrid, http://plumgrid.com
 * Copyright (c) 2017 Facebook
 * Copyright (c) 2018 Covalent IO, Inc. http://covalent.io
 */
#include <endian.h>
#include <asm/types.h>
#include <linux/types.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <stddef.h>
#include <stdbool.h>
#include <sched.h>
#include <limits.h>
#include <assert.h>

#include <sys/capability.h>

#include <linux/unistd.h>
#include <linux/filter.h>
#include <linux/bpf_perf_event.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/btf.h>

#include <bpf/btf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

#ifdef HAVE_GENHDR
# include "autoconf.h"
#else
# if defined(__i386) || defined(__x86_64) || defined(__s390x__) || defined(__aarch64__)
#  define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
# endif
#endif
#include "bpf_rand.h"
#include "bpf_util.h"
#include "test_btf.h"
#include "../../../include/linux/filter.h"

#ifndef ENOTSUPP
#define ENOTSUPP 524
#endif

#define MAX_INSNS BPF_MAXINSNS
#define MAX_TEST_INSNS 1000000
#define MAX_FIXUPS 8
#define MAX_NR_MAPS 22
#define MAX_TEST_RUNS 8
#define POINTER_VALUE 0xcafe4all
#define TEST_DATA_LEN 64

#define F_NEEDS_EFFICIENT_UNALIGNED_ACCESS (1 << 0)
#define F_LOAD_WITH_STRICT_ALIGNMENT (1 << 1)

#define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
static bool unpriv_disabled = false;
static int skips;
static bool verbose = false;

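/* Each entry pairs a kernel function name with the index of the call insn
 * whose imm field gets fixed up with that kfunc's BTF id before the program
 * is loaded (see the fixup handling in do_test_fixup()).
 */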
struct kfunc_btf_id_pair {
	const char *kfunc;
	int insn_idx;
};

struct bpf_test {
	const char *descr;
	struct bpf_insn insns[MAX_INSNS];
	struct bpf_insn *fill_insns;
	int fixup_map_hash_8b[MAX_FIXUPS];
	int fixup_map_hash_48b[MAX_FIXUPS];
	int fixup_map_hash_16b[MAX_FIXUPS];
	int fixup_map_array_48b[MAX_FIXUPS];
	int fixup_map_sockmap[MAX_FIXUPS];
	int fixup_map_sockhash[MAX_FIXUPS];
	int fixup_map_xskmap[MAX_FIXUPS];
	int fixup_map_stacktrace[MAX_FIXUPS];
	int fixup_prog1[MAX_FIXUPS];
	int fixup_prog2[MAX_FIXUPS];
	int fixup_map_in_map[MAX_FIXUPS];
	int fixup_cgroup_storage[MAX_FIXUPS];
	int fixup_percpu_cgroup_storage[MAX_FIXUPS];
	int fixup_map_spin_lock[MAX_FIXUPS];
	int fixup_map_array_ro[MAX_FIXUPS];
	int fixup_map_array_wo[MAX_FIXUPS];
	int fixup_map_array_small[MAX_FIXUPS];
	int fixup_sk_storage_map[MAX_FIXUPS];
	int fixup_map_event_output[MAX_FIXUPS];
	int fixup_map_reuseport_array[MAX_FIXUPS];
	int fixup_map_ringbuf[MAX_FIXUPS];
	int fixup_map_timer[MAX_FIXUPS];
	struct kfunc_btf_id_pair fixup_kfunc_btf_id[MAX_FIXUPS];
	/* Expected verifier log output for result REJECT or VERBOSE_ACCEPT.
	 * Can be a tab-separated sequence of expected strings. An empty string
	 * means no log verification.
	 */
	const char *errstr;
	const char *errstr_unpriv;
	uint32_t insn_processed;
	int prog_len;
	enum {
		UNDEF,
		ACCEPT,
		REJECT,
		VERBOSE_ACCEPT,
	} result, result_unpriv;
	enum bpf_prog_type prog_type;
	uint8_t flags;
	void (*fill_helper)(struct bpf_test *self);
	int runs;
#define bpf_testdata_struct_t \
	struct { \
		uint32_t retval, retval_unpriv; \
		union { \
			__u8 data[TEST_DATA_LEN]; \
			__u64 data64[TEST_DATA_LEN / 8]; \
		}; \
	}
	union {
		bpf_testdata_struct_t;
		bpf_testdata_struct_t retvals[MAX_TEST_RUNS];
	};
	enum bpf_attach_type expected_attach_type;
	const char *kfunc;
};
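
/* A hypothetical minimal tests[] entry, for illustration only (this exact
 * test is not part of the suite; the field names and BPF_*() macros are
 * real, the program and error string are made up):
 *
 *	{
 *		"example: reject returning a stack pointer",
 *		.insns = {
 *			BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
 *			BPF_EXIT_INSN(),
 *		},
 *		.errstr = "R0 leaks addr",
 *		.result = REJECT,
 *	},
 */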

/* Note we want this to be 64 bit aligned so that the end of our array is
 * actually the end of the structure.
 */
#define MAX_ENTRIES 11

struct test_val {
	unsigned int index;
	int foo[MAX_ENTRIES];
};

struct other_val {
	long long foo;
	long long bar;
};

static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
{
	/* test: {skb->data[0], vlan_push} x 51 + {skb->data[0], vlan_pop} x 51 */
#define PUSH_CNT 51
	/* jump range is limited to 16 bit. PUSH_CNT of ld_abs needs room */
	unsigned int len = (1 << 15) - PUSH_CNT * 2 * 5 * 6;
	struct bpf_insn *insn = self->fill_insns;
	int i = 0, j, k = 0;

	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
loop:
	for (j = 0; j < PUSH_CNT; j++) {
		insn[i++] = BPF_LD_ABS(BPF_B, 0);
		/* jump to error label */
		insn[i] = BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 3);
		i++;
		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
		insn[i++] = BPF_MOV64_IMM(BPF_REG_2, 1);
		insn[i++] = BPF_MOV64_IMM(BPF_REG_3, 2);
		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
					 BPF_FUNC_skb_vlan_push);
		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
		i++;
	}

	for (j = 0; j < PUSH_CNT; j++) {
		insn[i++] = BPF_LD_ABS(BPF_B, 0);
		insn[i] = BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 3);
		i++;
		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
					 BPF_FUNC_skb_vlan_pop);
		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
		i++;
	}
	if (++k < 5)
		goto loop;

	for (; i < len - 3; i++)
		insn[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0xbef);
	insn[len - 3] = BPF_JMP_A(1);
	/* error label */
	insn[len - 2] = BPF_MOV32_IMM(BPF_REG_0, 0);
	insn[len - 1] = BPF_EXIT_INSN();
	self->prog_len = len;
}
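
/* Layout produced by bpf_fill_ld_abs_vlan_push_pop(): the last three insns
 * are BPF_JMP_A(1) (skips the error label), the error label itself (r0 = 0)
 * and the exit, so a jump of "len - i - 3" from insn i lands on the error
 * label at insn[len - 2].
 */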

static void bpf_fill_jump_around_ld_abs(struct bpf_test *self)
{
	struct bpf_insn *insn = self->fill_insns;
	/* jump range is limited to 16 bit. every ld_abs is replaced by 6 insns,
	 * but on arches like arm, ppc etc, there will be one BPF_ZEXT inserted
	 * to extend the error value of the inlined ld_abs sequence which then
	 * contains 7 insns. so, set the divisor to 7 so the testcase could
	 * work on all arches.
	 */
	unsigned int len = (1 << 15) / 7;
	int i = 0;

	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
	insn[i++] = BPF_LD_ABS(BPF_B, 0);
	insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 10, len - i - 2);
	i++;
	while (i < len - 1)
		insn[i++] = BPF_LD_ABS(BPF_B, 1);
	insn[i] = BPF_EXIT_INSN();
	self->prog_len = i + 1;
}

static void bpf_fill_rand_ld_dw(struct bpf_test *self)
{
	struct bpf_insn *insn = self->fill_insns;
	uint64_t res = 0;
	int i = 0;

	insn[i++] = BPF_MOV32_IMM(BPF_REG_0, 0);
	while (i < self->retval) {
		uint64_t val = bpf_semi_rand_get();
		struct bpf_insn tmp[2] = { BPF_LD_IMM64(BPF_REG_1, val) };

		res ^= val;
		insn[i++] = tmp[0];
		insn[i++] = tmp[1];
		insn[i++] = BPF_ALU64_REG(BPF_XOR, BPF_REG_0, BPF_REG_1);
	}
	insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
	insn[i++] = BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32);
	insn[i++] = BPF_ALU64_REG(BPF_XOR, BPF_REG_0, BPF_REG_1);
	insn[i] = BPF_EXIT_INSN();
	self->prog_len = i + 1;
	res ^= (res >> 32);
	self->retval = (uint32_t)res;
}
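
/* Note: bpf_fill_rand_ld_dw() uses self->retval twice: on entry it bounds
 * the number of insns to emit, and on exit it is overwritten with the
 * 32-bit fold of the XORed random imm64 values, which the program itself
 * recomputes at runtime.
 */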

#define MAX_JMP_SEQ 8192

/* test the sequence of 8k jumps */
static void bpf_fill_scale1(struct bpf_test *self)
{
	struct bpf_insn *insn = self->fill_insns;
	int i = 0, k = 0;

	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
	/* test to check that the long sequence of jumps is acceptable */
	while (k++ < MAX_JMP_SEQ) {
		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
					 BPF_FUNC_get_prandom_u32);
		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
		insn[i++] = BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6,
					-8 * (k % 64 + 1));
	}
	/* is_state_visited() doesn't allocate state for pruning for every jump.
	 * Hence multiply jmps by 4 to accommodate that heuristic
	 */
	while (i < MAX_TEST_INSNS - MAX_JMP_SEQ * 4)
		insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 42);
	insn[i] = BPF_EXIT_INSN();
	self->prog_len = i + 1;
	self->retval = 42;
}

/* test the sequence of 8k jumps in innermost function (function depth 8) */
static void bpf_fill_scale2(struct bpf_test *self)
{
	struct bpf_insn *insn = self->fill_insns;
	int i = 0, k = 0;

#define FUNC_NEST 7
	for (k = 0; k < FUNC_NEST; k++) {
		insn[i++] = BPF_CALL_REL(1);
		insn[i++] = BPF_EXIT_INSN();
	}
	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
	/* test to check that the long sequence of jumps is acceptable */
	k = 0;
	while (k++ < MAX_JMP_SEQ) {
		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
					 BPF_FUNC_get_prandom_u32);
		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
		insn[i++] = BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6,
					-8 * (k % (64 - 4 * FUNC_NEST) + 1));
	}
	while (i < MAX_TEST_INSNS - MAX_JMP_SEQ * 4)
		insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 42);
	insn[i] = BPF_EXIT_INSN();
	self->prog_len = i + 1;
	self->retval = 42;
}

static void bpf_fill_scale(struct bpf_test *self)
{
	switch (self->retval) {
	case 1:
		return bpf_fill_scale1(self);
	case 2:
		return bpf_fill_scale2(self);
	default:
		self->prog_len = 0;
		break;
	}
}
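
/* As with bpf_fill_rand_ld_dw(), self->retval doubles as an input selector
 * here: tests set it to 1 or 2 to pick the variant, and the chosen fill
 * overwrites it with the expected return value (42).
 */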
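
/* The three "torturous jumps" fills below target x64 JIT jump padding:
 * chains of conditional jumps into runs of shrinking unconditional jumps
 * make the JIT's NOP-jump optimization turn imm32 jumps into imm8 ones,
 * forcing NOP padding. self->retval selects the variant; variant 3 runs
 * the first two as subprogs.
 */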
static int bpf_fill_torturous_jumps_insn_1(struct bpf_insn *insn)
{
	unsigned int len = 259, hlen = 128;
	int i;

	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
	for (i = 1; i <= hlen; i++) {
		insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, hlen);
		insn[i + hlen] = BPF_JMP_A(hlen - i);
	}
	insn[len - 2] = BPF_MOV64_IMM(BPF_REG_0, 1);
	insn[len - 1] = BPF_EXIT_INSN();

	return len;
}

static int bpf_fill_torturous_jumps_insn_2(struct bpf_insn *insn)
{
	unsigned int len = 4100, jmp_off = 2048;
	int i, j;

	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
	for (i = 1; i <= jmp_off; i++) {
		insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, jmp_off);
	}
	insn[i++] = BPF_JMP_A(jmp_off);
	for (; i <= jmp_off * 2 + 1; i += 16) {
		for (j = 0; j < 16; j++) {
			insn[i + j] = BPF_JMP_A(16 - j - 1);
		}
	}

	insn[len - 2] = BPF_MOV64_IMM(BPF_REG_0, 2);
	insn[len - 1] = BPF_EXIT_INSN();

	return len;
}

static void bpf_fill_torturous_jumps(struct bpf_test *self)
{
	struct bpf_insn *insn = self->fill_insns;
	int i = 0;

	switch (self->retval) {
	case 1:
		self->prog_len = bpf_fill_torturous_jumps_insn_1(insn);
		return;
	case 2:
		self->prog_len = bpf_fill_torturous_jumps_insn_2(insn);
		return;
	case 3:
		/* main */
		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4);
		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 262);
		insn[i++] = BPF_ST_MEM(BPF_B, BPF_REG_10, -32, 0);
		insn[i++] = BPF_MOV64_IMM(BPF_REG_0, 3);
		insn[i++] = BPF_EXIT_INSN();

		/* subprog 1 */
		i += bpf_fill_torturous_jumps_insn_1(insn + i);

		/* subprog 2 */
		i += bpf_fill_torturous_jumps_insn_2(insn + i);

		self->prog_len = i;
		return;
	default:
		self->prog_len = 0;
		break;
	}
}

/* BPF_SK_LOOKUP contains 13 instructions, if you need to fix up maps */
#define BPF_SK_LOOKUP(func) \
	/* struct bpf_sock_tuple tuple = {} */ \
	BPF_MOV64_IMM(BPF_REG_2, 0), \
	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_2, -8), \
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -16), \
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -24), \
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -32), \
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -40), \
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -48), \
	/* sk = func(ctx, &tuple, sizeof tuple, 0, 0) */ \
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), \
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -48), \
	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct bpf_sock_tuple)), \
	BPF_MOV64_IMM(BPF_REG_4, 0), \
	BPF_MOV64_IMM(BPF_REG_5, 0), \
	BPF_EMIT_CALL(BPF_FUNC_ ## func)
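
/* Usage sketch (hypothetical test body): writing
 *	BPF_SK_LOOKUP(sk_lookup_tcp),
 * in a test's .insns expands to the 13 insns above, zeroing a
 * struct bpf_sock_tuple on the stack and calling BPF_FUNC_sk_lookup_tcp.
 */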

/* BPF_DIRECT_PKT_R2 contains 7 instructions, it initializes the default
 * return value to 0 and does the necessary preparation for direct packet
 * access through r2. The allowed access range is 8 bytes.
 */
#define BPF_DIRECT_PKT_R2 \
	BPF_MOV64_IMM(BPF_REG_0, 0), \
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, \
		    offsetof(struct __sk_buff, data)), \
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, \
		    offsetof(struct __sk_buff, data_end)), \
	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), \
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), \
	BPF_JMP_REG(BPF_JLE, BPF_REG_4, BPF_REG_3, 1), \
	BPF_EXIT_INSN()

/* BPF_RAND_UEXT_R7 contains 4 instructions, it initializes R7 to a random
 * positive u32 and zero-extends it to 64 bit.
 */
#define BPF_RAND_UEXT_R7 \
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, \
		     BPF_FUNC_get_prandom_u32), \
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), \
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 33), \
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_7, 33)

/* BPF_RAND_SEXT_R7 contains 5 instructions, it initializes R7 to a random
 * negative u32 and sign-extends it to 64 bit.
 */
#define BPF_RAND_SEXT_R7 \
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, \
		     BPF_FUNC_get_prandom_u32), \
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), \
	BPF_ALU64_IMM(BPF_OR, BPF_REG_7, 0x80000000), \
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 32), \
	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_7, 32)

static struct bpf_test tests[] = {
#define FILL_ARRAY
#include <verifier/tests.h>
#undef FILL_ARRAY
};
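
/* The individual test definitions live in verifier/*.c; verifier/tests.h is
 * a generated header that, guarded by FILL_ARRAY, textually includes each
 * of those files as initializer entries of the array above.
 */

/* The static insns[] arrays are MAX_INSNS long; scan backward past the
 * all-zero padding to recover the real program length.
 */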
static int probe_filter_length(const struct bpf_insn *fp)
{
	int len;

	for (len = MAX_INSNS - 1; len > 0; --len)
		if (fp[len].code != 0 || fp[len].imm != 0)
			break;
	return len + 1;
}

static bool skip_unsupported_map(enum bpf_map_type map_type)
{
	if (!libbpf_probe_bpf_map_type(map_type, NULL)) {
		printf("SKIP (unsupported map type %d)\n", map_type);
		skips++;
		return true;
	}
	return false;
}

static int __create_map(uint32_t type, uint32_t size_key,
			uint32_t size_value, uint32_t max_elem,
			uint32_t extra_flags)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts);
	int fd;

	opts.map_flags = (type == BPF_MAP_TYPE_HASH ? BPF_F_NO_PREALLOC : 0) | extra_flags;
	fd = bpf_map_create(type, NULL, size_key, size_value, max_elem, &opts);
	if (fd < 0) {
		if (skip_unsupported_map(type))
			return -1;
		printf("Failed to create hash map '%s'!\n", strerror(errno));
	}

	return fd;
}

static int create_map(uint32_t type, uint32_t size_key,
		      uint32_t size_value, uint32_t max_elem)
{
	return __create_map(type, size_key, size_value, max_elem, 0);
}

static void update_map(int fd, int index)
{
	struct test_val value = {
		.index = (6 + 1) * sizeof(int),
		.foo[6] = 0xabcdef12,
	};

	assert(!bpf_map_update_elem(fd, &index, &value, 0));
}
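
/* Presumably .index carries the byte offset of the element initialized
 * above: (6 + 1) * sizeof(int) is the offset of .foo[6] inside
 * struct test_val.
 */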

static int create_prog_dummy_simple(enum bpf_prog_type prog_type, int ret)
{
	struct bpf_insn prog[] = {
		BPF_MOV64_IMM(BPF_REG_0, ret),
		BPF_EXIT_INSN(),
	};

	return bpf_prog_load(prog_type, NULL, "GPL", prog, ARRAY_SIZE(prog), NULL);
}

static int create_prog_dummy_loop(enum bpf_prog_type prog_type, int mfd,
				  int idx, int ret)
{
	struct bpf_insn prog[] = {
		BPF_MOV64_IMM(BPF_REG_3, idx),
		BPF_LD_MAP_FD(BPF_REG_2, mfd),
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
			     BPF_FUNC_tail_call),
		BPF_MOV64_IMM(BPF_REG_0, ret),
		BPF_EXIT_INSN(),
	};

	return bpf_prog_load(prog_type, NULL, "GPL", prog, ARRAY_SIZE(prog), NULL);
}

static int create_prog_array(enum bpf_prog_type prog_type, uint32_t max_elem,
			     int p1key, int p2key, int p3key)
{
	int mfd, p1fd, p2fd, p3fd;

	mfd = bpf_map_create(BPF_MAP_TYPE_PROG_ARRAY, NULL, sizeof(int),
			     sizeof(int), max_elem, NULL);
	if (mfd < 0) {
		if (skip_unsupported_map(BPF_MAP_TYPE_PROG_ARRAY))
			return -1;
		printf("Failed to create prog array '%s'!\n", strerror(errno));
		return -1;
	}

	p1fd = create_prog_dummy_simple(prog_type, 42);
	p2fd = create_prog_dummy_loop(prog_type, mfd, p2key, 41);
	p3fd = create_prog_dummy_simple(prog_type, 24);
	if (p1fd < 0 || p2fd < 0 || p3fd < 0)
		goto err;
	if (bpf_map_update_elem(mfd, &p1key, &p1fd, BPF_ANY) < 0)
		goto err;
	if (bpf_map_update_elem(mfd, &p2key, &p2fd, BPF_ANY) < 0)
		goto err;
	if (bpf_map_update_elem(mfd, &p3key, &p3fd, BPF_ANY) < 0) {
err:
		close(mfd);
		mfd = -1;
	}
	close(p3fd);
	close(p2fd);
	close(p1fd);
	return mfd;
}
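
/* The prog installed at p2key tail-calls back into the same prog array at
 * its own index (hence "loop"); once the tail-call limit is hit, the call
 * falls through and the prog returns 41.
 */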

static int create_map_in_map(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts);
	int inner_map_fd, outer_map_fd;

	inner_map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, sizeof(int),
				      sizeof(int), 1, NULL);
	if (inner_map_fd < 0) {
		if (skip_unsupported_map(BPF_MAP_TYPE_ARRAY))
			return -1;
		printf("Failed to create array '%s'!\n", strerror(errno));
		return inner_map_fd;
	}

	opts.inner_map_fd = inner_map_fd;
	outer_map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY_OF_MAPS, NULL,
				      sizeof(int), sizeof(int), 1, &opts);
	if (outer_map_fd < 0) {
		if (skip_unsupported_map(BPF_MAP_TYPE_ARRAY_OF_MAPS))
			return -1;
		printf("Failed to create array of maps '%s'!\n",
		       strerror(errno));
	}

	close(inner_map_fd);

	return outer_map_fd;
}

static int create_cgroup_storage(bool percpu)
{
	enum bpf_map_type type = percpu ? BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE :
					  BPF_MAP_TYPE_CGROUP_STORAGE;
	int fd;

	fd = bpf_map_create(type, NULL, sizeof(struct bpf_cgroup_storage_key),
			    TEST_DATA_LEN, 0, NULL);
	if (fd < 0) {
		if (skip_unsupported_map(type))
			return -1;
		printf("Failed to create cgroup storage '%s'!\n",
		       strerror(errno));
	}

	return fd;
}

/* struct bpf_spin_lock {
 *   int val;
 * };
 * struct val {
 *   int cnt;
 *   struct bpf_spin_lock l;
 * };
 * struct bpf_timer {
 *   __u64 :64;
 *   __u64 :64;
 * } __attribute__((aligned(8)));
 * struct timer {
 *   struct bpf_timer t;
 * };
 */
static const char btf_str_sec[] = "\0bpf_spin_lock\0val\0cnt\0l\0bpf_timer\0timer\0t";
static __u32 btf_raw_types[] = {
	/* int */
	BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
	/* struct bpf_spin_lock */			/* [2] */
	BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4),
	BTF_MEMBER_ENC(15, 1, 0), /* int val; */
	/* struct val */				/* [3] */
	BTF_TYPE_ENC(15, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 2), 8),
	BTF_MEMBER_ENC(19, 1, 0), /* int cnt; */
	BTF_MEMBER_ENC(23, 2, 32),/* struct bpf_spin_lock l; */
	/* struct bpf_timer */				/* [4] */
	BTF_TYPE_ENC(25, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 0), 16),
	/* struct timer */				/* [5] */
	BTF_TYPE_ENC(35, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 16),
	BTF_MEMBER_ENC(41, 4, 0), /* struct bpf_timer t; */
};
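
/* The name offsets above index into btf_str_sec: 1 "bpf_spin_lock",
 * 15 "val", 19 "cnt", 23 "l", 25 "bpf_timer", 35 "timer", 41 "t".
 */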

static int load_btf(void)
{
	struct btf_header hdr = {
		.magic = BTF_MAGIC,
		.version = BTF_VERSION,
		.hdr_len = sizeof(struct btf_header),
		.type_len = sizeof(btf_raw_types),
		.str_off = sizeof(btf_raw_types),
		.str_len = sizeof(btf_str_sec),
	};
	void *ptr, *raw_btf;
	int btf_fd;

	ptr = raw_btf = malloc(sizeof(hdr) + sizeof(btf_raw_types) +
			       sizeof(btf_str_sec));

	memcpy(ptr, &hdr, sizeof(hdr));
	ptr += sizeof(hdr);
	memcpy(ptr, btf_raw_types, hdr.type_len);
	ptr += hdr.type_len;
	memcpy(ptr, btf_str_sec, hdr.str_len);
	ptr += hdr.str_len;

	btf_fd = bpf_btf_load(raw_btf, ptr - raw_btf, NULL);
	free(raw_btf);
	if (btf_fd < 0)
		return -1;
	return btf_fd;
}
|
|
|
|
|
|
|
|
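/* Array map whose value type is struct val (BTF id 3), which embeds a
 * bpf_spin_lock; the 4/8 key/value sizes match the int key and struct layout.
 */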
static int create_map_spin_lock(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.btf_key_type_id = 1,
		.btf_value_type_id = 3,
	);
	int fd, btf_fd;

	btf_fd = load_btf();
	if (btf_fd < 0)
		return -1;
	opts.btf_fd = btf_fd;
	fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "test_map", 4, 8, 1, &opts);
	if (fd < 0)
		printf("Failed to create map with spin_lock\n");
	return fd;
}

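/* Socket-local storage map; sk_storage requires BPF_F_NO_PREALLOC and a
 * max_entries of 0, and reuses the spin-lock value type (BTF id 3).
 */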
static int create_sk_storage_map(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.map_flags = BPF_F_NO_PREALLOC,
		.btf_key_type_id = 1,
		.btf_value_type_id = 3,
	);
	int fd, btf_fd;

	btf_fd = load_btf();
	if (btf_fd < 0)
		return -1;
	opts.btf_fd = btf_fd;
	fd = bpf_map_create(BPF_MAP_TYPE_SK_STORAGE, "test_map", 4, 8, 0, &opts);
	close(opts.btf_fd);
	if (fd < 0)
		printf("Failed to create sk_storage_map\n");
	return fd;
}

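/* Array map whose value type is struct timer (BTF id 5), wrapping the
 * 16-byte kernel-managed bpf_timer.
 */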
static int create_map_timer(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.btf_key_type_id = 1,
		.btf_value_type_id = 5,
	);
	int fd, btf_fd;

	btf_fd = load_btf();
	if (btf_fd < 0)
		return -1;

	opts.btf_fd = btf_fd;
	fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "test_map", 4, 16, 1, &opts);
	if (fd < 0)
		printf("Failed to create map with timer\n");
	return fd;
}

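/* Shared verifier log buffer, sized UINT_MAX >> 8 (roughly 16 MiB). */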
static char bpf_vlog[UINT_MAX >> 8];

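/* Resolve a test's fixup tables: for each map kind the test references,
 * create the map once and patch its fd into the imm field of every
 * instruction whose index appears in the corresponding fixup array
 * (the arrays are zero-terminated).
 */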
static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
			  struct bpf_insn *prog, int *map_fds)
{
	int *fixup_map_hash_8b = test->fixup_map_hash_8b;
	int *fixup_map_hash_48b = test->fixup_map_hash_48b;
	int *fixup_map_hash_16b = test->fixup_map_hash_16b;
	int *fixup_map_array_48b = test->fixup_map_array_48b;
	int *fixup_map_sockmap = test->fixup_map_sockmap;
	int *fixup_map_sockhash = test->fixup_map_sockhash;
	int *fixup_map_xskmap = test->fixup_map_xskmap;
	int *fixup_map_stacktrace = test->fixup_map_stacktrace;
	int *fixup_prog1 = test->fixup_prog1;
	int *fixup_prog2 = test->fixup_prog2;
	int *fixup_map_in_map = test->fixup_map_in_map;
	int *fixup_cgroup_storage = test->fixup_cgroup_storage;
	int *fixup_percpu_cgroup_storage = test->fixup_percpu_cgroup_storage;
	int *fixup_map_spin_lock = test->fixup_map_spin_lock;
	int *fixup_map_array_ro = test->fixup_map_array_ro;
	int *fixup_map_array_wo = test->fixup_map_array_wo;
	int *fixup_map_array_small = test->fixup_map_array_small;
	int *fixup_sk_storage_map = test->fixup_sk_storage_map;
	int *fixup_map_event_output = test->fixup_map_event_output;
	int *fixup_map_reuseport_array = test->fixup_map_reuseport_array;
	int *fixup_map_ringbuf = test->fixup_map_ringbuf;
	int *fixup_map_timer = test->fixup_map_timer;
	struct kfunc_btf_id_pair *fixup_kfunc_btf_id = test->fixup_kfunc_btf_id;

	if (test->fill_helper) {
		test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct bpf_insn));
		test->fill_helper(test);
	}

	/* Allocating HTs with 1 elem is fine here, since we only exercise
	 * the verifier and never do a runtime lookup, so the only thing
	 * that really matters in this case is the value size.
	 */
	if (*fixup_map_hash_8b) {
		map_fds[0] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
					sizeof(long long), 1);
		do {
			prog[*fixup_map_hash_8b].imm = map_fds[0];
			fixup_map_hash_8b++;
		} while (*fixup_map_hash_8b);
	}

	if (*fixup_map_hash_48b) {
		map_fds[1] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
					sizeof(struct test_val), 1);
		do {
			prog[*fixup_map_hash_48b].imm = map_fds[1];
			fixup_map_hash_48b++;
		} while (*fixup_map_hash_48b);
	}

	if (*fixup_map_hash_16b) {
		map_fds[2] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
					sizeof(struct other_val), 1);
		do {
			prog[*fixup_map_hash_16b].imm = map_fds[2];
			fixup_map_hash_16b++;
		} while (*fixup_map_hash_16b);
	}

	if (*fixup_map_array_48b) {
		map_fds[3] = create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
					sizeof(struct test_val), 1);
		update_map(map_fds[3], 0);
		do {
			prog[*fixup_map_array_48b].imm = map_fds[3];
			fixup_map_array_48b++;
		} while (*fixup_map_array_48b);
	}

	if (*fixup_prog1) {
		map_fds[4] = create_prog_array(prog_type, 4, 0, 1, 2);
		do {
			prog[*fixup_prog1].imm = map_fds[4];
			fixup_prog1++;
		} while (*fixup_prog1);
	}

	if (*fixup_prog2) {
		map_fds[5] = create_prog_array(prog_type, 8, 7, 1, 2);
		do {
			prog[*fixup_prog2].imm = map_fds[5];
			fixup_prog2++;
		} while (*fixup_prog2);
	}

	if (*fixup_map_in_map) {
		map_fds[6] = create_map_in_map();
		do {
			prog[*fixup_map_in_map].imm = map_fds[6];
			fixup_map_in_map++;
		} while (*fixup_map_in_map);
	}

	if (*fixup_cgroup_storage) {
		map_fds[7] = create_cgroup_storage(false);
		do {
			prog[*fixup_cgroup_storage].imm = map_fds[7];
			fixup_cgroup_storage++;
		} while (*fixup_cgroup_storage);
	}

	if (*fixup_percpu_cgroup_storage) {
		map_fds[8] = create_cgroup_storage(true);
		do {
			prog[*fixup_percpu_cgroup_storage].imm = map_fds[8];
			fixup_percpu_cgroup_storage++;
		} while (*fixup_percpu_cgroup_storage);
	}

	if (*fixup_map_sockmap) {
		map_fds[9] = create_map(BPF_MAP_TYPE_SOCKMAP, sizeof(int),
					sizeof(int), 1);
		do {
			prog[*fixup_map_sockmap].imm = map_fds[9];
			fixup_map_sockmap++;
		} while (*fixup_map_sockmap);
	}

	if (*fixup_map_sockhash) {
		map_fds[10] = create_map(BPF_MAP_TYPE_SOCKHASH, sizeof(int),
					 sizeof(int), 1);
		do {
			prog[*fixup_map_sockhash].imm = map_fds[10];
			fixup_map_sockhash++;
		} while (*fixup_map_sockhash);
	}

	if (*fixup_map_xskmap) {
		map_fds[11] = create_map(BPF_MAP_TYPE_XSKMAP, sizeof(int),
					 sizeof(int), 1);
		do {
			prog[*fixup_map_xskmap].imm = map_fds[11];
			fixup_map_xskmap++;
		} while (*fixup_map_xskmap);
	}

	if (*fixup_map_stacktrace) {
		map_fds[12] = create_map(BPF_MAP_TYPE_STACK_TRACE, sizeof(u32),
					 sizeof(u64), 1);
		do {
			prog[*fixup_map_stacktrace].imm = map_fds[12];
			fixup_map_stacktrace++;
		} while (*fixup_map_stacktrace);
	}

	if (*fixup_map_spin_lock) {
		map_fds[13] = create_map_spin_lock();
		do {
			prog[*fixup_map_spin_lock].imm = map_fds[13];
			fixup_map_spin_lock++;
		} while (*fixup_map_spin_lock);
	}

	if (*fixup_map_array_ro) {
		map_fds[14] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
					   sizeof(struct test_val), 1,
					   BPF_F_RDONLY_PROG);
		update_map(map_fds[14], 0);
		do {
			prog[*fixup_map_array_ro].imm = map_fds[14];
			fixup_map_array_ro++;
		} while (*fixup_map_array_ro);
	}

	if (*fixup_map_array_wo) {
		map_fds[15] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
					   sizeof(struct test_val), 1,
					   BPF_F_WRONLY_PROG);
		update_map(map_fds[15], 0);
		do {
			prog[*fixup_map_array_wo].imm = map_fds[15];
			fixup_map_array_wo++;
		} while (*fixup_map_array_wo);
	}

	if (*fixup_map_array_small) {
		map_fds[16] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
					   1, 1, 0);
		update_map(map_fds[16], 0);
		do {
			prog[*fixup_map_array_small].imm = map_fds[16];
			fixup_map_array_small++;
		} while (*fixup_map_array_small);
	}

	if (*fixup_sk_storage_map) {
		map_fds[17] = create_sk_storage_map();
		do {
			prog[*fixup_sk_storage_map].imm = map_fds[17];
			fixup_sk_storage_map++;
		} while (*fixup_sk_storage_map);
	}

	if (*fixup_map_event_output) {
		map_fds[18] = __create_map(BPF_MAP_TYPE_PERF_EVENT_ARRAY,
					   sizeof(int), sizeof(int), 1, 0);
		do {
			prog[*fixup_map_event_output].imm = map_fds[18];
			fixup_map_event_output++;
		} while (*fixup_map_event_output);
	}

	if (*fixup_map_reuseport_array) {
		map_fds[19] = __create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
					   sizeof(u32), sizeof(u64), 1, 0);
		do {
			prog[*fixup_map_reuseport_array].imm = map_fds[19];
			fixup_map_reuseport_array++;
		} while (*fixup_map_reuseport_array);
	}

	if (*fixup_map_ringbuf) {
		map_fds[20] = create_map(BPF_MAP_TYPE_RINGBUF, 0,
					 0, 4096);
		do {
			prog[*fixup_map_ringbuf].imm = map_fds[20];
			fixup_map_ringbuf++;
		} while (*fixup_map_ringbuf);
	}

	if (*fixup_map_timer) {
		map_fds[21] = create_map_timer();
		do {
			prog[*fixup_map_timer].imm = map_fds[21];
			fixup_map_timer++;
		} while (*fixup_map_timer);
	}

	/* Patch in kfunc BTF IDs */
	if (fixup_kfunc_btf_id->kfunc) {
		struct btf *btf;
		int btf_id;

		do {
			btf_id = 0;
			btf = btf__load_vmlinux_btf();
			if (btf) {
				btf_id = btf__find_by_name_kind(btf,
								fixup_kfunc_btf_id->kfunc,
								BTF_KIND_FUNC);
				btf_id = btf_id < 0 ? 0 : btf_id;
			}
			btf__free(btf);
			prog[fixup_kfunc_btf_id->insn_idx].imm = btf_id;
			fixup_kfunc_btf_id++;
		} while (fixup_kfunc_btf_id->kfunc);
	}
}

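/* Overlay matching libcap's internal layout of a capability set, so that
 * capability bits libcap itself may not know about (CAP_BPF, CAP_PERFMON)
 * can be flipped directly in the raw bitmap.
 */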
struct libcap {
	struct __user_cap_header_struct hdr;
	struct __user_cap_data_struct data[2];
};

static int set_admin(bool admin)
{
	cap_t caps;
	/* need CAP_BPF, CAP_NET_ADMIN, CAP_PERFMON to load progs */
	const cap_value_t cap_net_admin = CAP_NET_ADMIN;
	const cap_value_t cap_sys_admin = CAP_SYS_ADMIN;
	struct libcap *cap;
	int ret = -1;

	caps = cap_get_proc();
	if (!caps) {
		perror("cap_get_proc");
		return -1;
	}
	cap = (struct libcap *)caps;
	if (cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap_sys_admin, CAP_CLEAR)) {
		perror("cap_set_flag clear admin");
		goto out;
	}
	if (cap_set_flag(caps, CAP_EFFECTIVE, 1, &cap_net_admin,
				admin ? CAP_SET : CAP_CLEAR)) {
		perror("cap_set_flag set_or_clear net");
		goto out;
	}
	/* libcap is likely old and simply ignores CAP_BPF and CAP_PERFMON,
	 * so update effective bits manually
	 */
	if (admin) {
		cap->data[1].effective |= 1 << (38 /* CAP_PERFMON */ - 32);
		cap->data[1].effective |= 1 << (39 /* CAP_BPF */ - 32);
	} else {
		cap->data[1].effective &= ~(1 << (38 - 32));
		cap->data[1].effective &= ~(1 << (39 - 32));
	}
	if (cap_set_proc(caps)) {
		perror("cap_set_proc");
		goto out;
	}
	ret = 0;
out:
	if (cap_free(caps))
		perror("cap_free");
	return ret;
}

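/* Execute the loaded program once via BPF_PROG_TEST_RUN and compare its
 * return value with the expected one. For unprivileged tests, admin
 * capabilities are temporarily restored around the run, since test-run
 * is a privileged operation (hence the EPERM handling below).
 */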
static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
			    void *data, size_t size_data)
{
	__u8 tmp[TEST_DATA_LEN << 2];
	__u32 size_tmp = sizeof(tmp);
	int err, saved_errno;
	LIBBPF_OPTS(bpf_test_run_opts, topts,
		.data_in = data,
		.data_size_in = size_data,
		.data_out = tmp,
		.data_size_out = size_tmp,
		.repeat = 1,
	);

	if (unpriv)
		set_admin(true);
	err = bpf_prog_test_run_opts(fd_prog, &topts);
	saved_errno = errno;

	if (unpriv)
		set_admin(false);

	if (err) {
		switch (saved_errno) {
		case ENOTSUPP:
			printf("Did not run the program (not supported) ");
			return 0;
		case EPERM:
			if (unpriv) {
				printf("Did not run the program (no permission) ");
				return 0;
			}
			/* fallthrough; */
		default:
			printf("FAIL: Unexpected bpf_prog_test_run error (%s) ",
			       strerror(saved_errno));
			return err;
		}
	}

	if (topts.retval != expected_val && expected_val != POINTER_VALUE) {
		printf("FAIL retval %d != %d ", topts.retval, expected_val);
		return 1;
	}

	return 0;
}

/* Returns true if every part of exp (tab-separated) appears in log, in order.
 *
 * If exp is an empty string, returns true.
 */
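/* For example (illustrative strings, not taken from a real test), an exp of
 * "processed\tinsns" matches any log where "processed" occurs somewhere
 * before "insns".
 */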
static bool cmp_str_seq(const char *log, const char *exp)
{
	char needle[200];
	const char *p, *q;
	int len;

	do {
		if (!strlen(exp))
			break;
		p = strchr(exp, '\t');
		if (!p)
			p = exp + strlen(exp);

		len = p - exp;
		if (len >= sizeof(needle) || !len) {
			printf("FAIL\nTestcase bug\n");
			return false;
		}
		strncpy(needle, exp, len);
		needle[len] = 0;
		q = strstr(log, needle);
		if (!q) {
			printf("FAIL\nUnexpected verifier log!\n"
			       "EXP: %s\nRES:\n", needle);
			return false;
		}
		log = q + len;
		exp = p + 1;
	} while (*p);
	return true;
}

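/* Run one test case in either privileged or unprivileged mode: apply the
 * fixups, load the program, check the verifier verdict and log against the
 * expectations, and optionally execute the program and check its retvals.
 */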
static void do_test_single(struct bpf_test *test, bool unpriv,
			   int *passes, int *errors)
{
	int fd_prog, expected_ret, alignment_prevented_execution;
	int prog_len, prog_type = test->prog_type;
	struct bpf_insn *prog = test->insns;
	LIBBPF_OPTS(bpf_prog_load_opts, opts);
	int run_errs, run_successes;
	int map_fds[MAX_NR_MAPS];
	const char *expected_err;
	int saved_errno;
	int fixup_skips;
	__u32 pflags;
	int i, err;

	for (i = 0; i < MAX_NR_MAPS; i++)
		map_fds[i] = -1;

	if (!prog_type)
		prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	fixup_skips = skips;
	do_test_fixup(test, prog_type, prog, map_fds);
	if (test->fill_insns) {
		prog = test->fill_insns;
		prog_len = test->prog_len;
	} else {
		prog_len = probe_filter_length(prog);
	}
	/* If there were some map skips during fixup due to missing bpf
	 * features, skip this test.
	 */
	if (fixup_skips != skips)
		return;

	pflags = BPF_F_TEST_RND_HI32;
	if (test->flags & F_LOAD_WITH_STRICT_ALIGNMENT)
		pflags |= BPF_F_STRICT_ALIGNMENT;
	if (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS)
		pflags |= BPF_F_ANY_ALIGNMENT;
	if (test->flags & ~3)
		pflags |= test->flags;

	expected_ret = unpriv && test->result_unpriv != UNDEF ?
		       test->result_unpriv : test->result;
	expected_err = unpriv && test->errstr_unpriv ?
		       test->errstr_unpriv : test->errstr;

	opts.expected_attach_type = test->expected_attach_type;
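	/* Default to log_level 4 (stats only) to keep runs cheap.
	 * VERBOSE_ACCEPT tests need log_level 2: at level 1 the verifier
	 * trims the log of successfully verified branches, so the expected
	 * strings might not appear in an accepted program's log.
	 */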
	if (verbose)
		opts.log_level = 1;
	else if (expected_ret == VERBOSE_ACCEPT)
		opts.log_level = 2;
	else
		opts.log_level = 4;
	opts.prog_flags = pflags;

	if (prog_type == BPF_PROG_TYPE_TRACING && test->kfunc) {
		int attach_btf_id;

		attach_btf_id = libbpf_find_vmlinux_btf_id(test->kfunc,
						opts.expected_attach_type);
		if (attach_btf_id < 0) {
			printf("FAIL\nFailed to find BTF ID for '%s'!\n",
			       test->kfunc);
			(*errors)++;
			return;
		}

		opts.attach_btf_id = attach_btf_id;
	}

	opts.log_buf = bpf_vlog;
	opts.log_size = sizeof(bpf_vlog);
	fd_prog = bpf_prog_load(prog_type, NULL, "GPL", prog, prog_len, &opts);
	saved_errno = errno;

	/* BPF_PROG_TYPE_TRACING requires more setup and
	 * bpf_probe_prog_type won't give correct answer
	 */
	if (fd_prog < 0 && prog_type != BPF_PROG_TYPE_TRACING &&
	    !libbpf_probe_bpf_prog_type(prog_type, NULL)) {
		printf("SKIP (unsupported program type %d)\n", prog_type);
		skips++;
		goto close_fds;
	}

	if (fd_prog < 0 && saved_errno == ENOTSUPP) {
		printf("SKIP (program uses an unsupported feature)\n");
		skips++;
		goto close_fds;
	}

	alignment_prevented_execution = 0;

	if (expected_ret == ACCEPT || expected_ret == VERBOSE_ACCEPT) {
		if (fd_prog < 0) {
			printf("FAIL\nFailed to load prog '%s'!\n",
			       strerror(saved_errno));
			goto fail_log;
		}
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
		if (fd_prog >= 0 &&
		    (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS))
			alignment_prevented_execution = 1;
#endif
		if (expected_ret == VERBOSE_ACCEPT && !cmp_str_seq(bpf_vlog, expected_err)) {
			goto fail_log;
		}
	} else {
		if (fd_prog >= 0) {
			printf("FAIL\nUnexpected success to load!\n");
			goto fail_log;
		}
		if (!expected_err || !cmp_str_seq(bpf_vlog, expected_err)) {
			printf("FAIL\nUnexpected error message!\n\tEXP: %s\n\tRES: %s\n",
			       expected_err, bpf_vlog);
			goto fail_log;
		}
	}

	if (!unpriv && test->insn_processed) {
		uint32_t insn_processed;
		char *proc;

		proc = strstr(bpf_vlog, "processed ");
		insn_processed = atoi(proc + 10);
		if (test->insn_processed != insn_processed) {
			printf("FAIL\nUnexpected insn_processed %u vs %u\n",
			       insn_processed, test->insn_processed);
			goto fail_log;
		}
	}

	if (verbose)
		printf(", verifier log:\n%s", bpf_vlog);

	run_errs = 0;
	run_successes = 0;
	if (!alignment_prevented_execution && fd_prog >= 0 && test->runs >= 0) {
		uint32_t expected_val;
		int i;

		if (!test->runs)
			test->runs = 1;

		for (i = 0; i < test->runs; i++) {
			if (unpriv && test->retvals[i].retval_unpriv)
				expected_val = test->retvals[i].retval_unpriv;
			else
				expected_val = test->retvals[i].retval;

			err = do_prog_test_run(fd_prog, unpriv, expected_val,
					       test->retvals[i].data,
					       sizeof(test->retvals[i].data));
			if (err) {
				printf("(run %d/%d) ", i + 1, test->runs);
				run_errs++;
			} else {
				run_successes++;
			}
		}
	}

	if (!run_errs) {
		(*passes)++;
		if (run_successes > 1)
			printf("%d cases ", run_successes);
		printf("OK");
		if (alignment_prevented_execution)
			printf(" (NOTE: not executed due to unknown alignment)");
		printf("\n");
	} else {
		printf("\n");
		goto fail_log;
	}
close_fds:
	if (test->fill_insns)
		free(test->fill_insns);
	close(fd_prog);
	for (i = 0; i < MAX_NR_MAPS; i++)
		close(map_fds[i]);
	sched_yield();
	return;
fail_log:
	(*errors)++;
	printf("%s", bpf_vlog);
	goto close_fds;
}

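/* Check whether the current process holds the capabilities needed for the
 * privileged test pass: CAP_NET_ADMIN via libcap, plus CAP_BPF and
 * CAP_PERFMON read from the raw bitmap (older libcap may not know them).
 */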
static bool is_admin(void)
{
	cap_flag_value_t net_priv = CAP_CLEAR;
	bool perfmon_priv = false;
	bool bpf_priv = false;
	struct libcap *cap;
	cap_t caps;

#ifdef CAP_IS_SUPPORTED
	if (!CAP_IS_SUPPORTED(CAP_SETFCAP)) {
		perror("cap_get_flag");
		return false;
	}
#endif
	caps = cap_get_proc();
	if (!caps) {
		perror("cap_get_proc");
		return false;
	}
	cap = (struct libcap *)caps;
	bpf_priv = cap->data[1].effective & (1 << (39/* CAP_BPF */ - 32));
	perfmon_priv = cap->data[1].effective & (1 << (38/* CAP_PERFMON */ - 32));
	if (cap_get_flag(caps, CAP_NET_ADMIN, CAP_EFFECTIVE, &net_priv))
		perror("cap_get_flag NET");
	if (cap_free(caps))
		perror("cap_free");
	return bpf_priv && perfmon_priv && net_priv == CAP_SET;
}

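/* Record whether unprivileged BPF is disabled on this system; if the
 * sysctl cannot be read at all, conservatively treat it as disabled.
 */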
static void get_unpriv_disabled(void)
{
	char buf[2];
	FILE *fd;

	fd = fopen("/proc/sys/"UNPRIV_SYSCTL, "r");
	if (!fd) {
		perror("fopen /proc/sys/"UNPRIV_SYSCTL);
		unpriv_disabled = true;
		return;
	}
	if (fgets(buf, 2, fd) == buf && atoi(buf))
		unpriv_disabled = true;
	fclose(fd);
}

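/* Decide whether a test should also get an unprivileged (#N/u) run: only
 * program types that unprivileged users may load qualify.
 */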
static bool test_as_unpriv(struct bpf_test *test)
{
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	/* Some architectures have strict alignment requirements. In
	 * that case, the BPF verifier detects if a program has
	 * unaligned accesses and rejects them. A user can pass
	 * BPF_F_ANY_ALIGNMENT to a program to override this
	 * check. That, however, will only work when a privileged user
	 * loads a program. An unprivileged user loading a program
	 * with this flag will be rejected prior to entering the
	 * verifier.
	 */
	if (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS)
		return false;
#endif
	return !test->prog_type ||
	       test->prog_type == BPF_PROG_TYPE_SOCKET_FILTER ||
	       test->prog_type == BPF_PROG_TYPE_CGROUP_SKB;
}

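/* Walk the selected range of tests, giving each an unprivileged (#N/u) run
 * where applicable and a privileged (#N/p) run, then print a summary and
 * return a shell-style exit status.
 */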
2016-10-17 20:28:36 +08:00
|
|
|
static int do_test(bool unpriv, unsigned int from, unsigned int to)
|
|
|
|
{
|
2019-01-29 01:21:16 +08:00
|
|
|
int i, passes = 0, errors = 0;
|
bpf: mini eBPF library, test stubs and verifier testsuite
1.
the library includes a trivial set of BPF syscall wrappers:
int bpf_create_map(int key_size, int value_size, int max_entries);
int bpf_update_elem(int fd, void *key, void *value);
int bpf_lookup_elem(int fd, void *key, void *value);
int bpf_delete_elem(int fd, void *key);
int bpf_get_next_key(int fd, void *key, void *next_key);
int bpf_prog_load(enum bpf_prog_type prog_type,
const struct sock_filter_int *insns, int insn_len,
const char *license);
bpf_prog_load() stores verifier log into global bpf_log_buf[] array
and BPF_*() macros to build instructions
2.
test stubs configure eBPF infra with 'unspec' map and program types.
These are fake types used by user space testsuite only.
3.
verifier tests valid and invalid programs and expects predefined
error log messages from kernel.
40 tests so far.
$ sudo ./test_verifier
#0 add+sub+mul OK
#1 unreachable OK
#2 unreachable2 OK
#3 out of range jump OK
#4 out of range jump2 OK
#5 test1 ld_imm64 OK
...
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-26 15:17:07 +08:00
|
|
|
|
2016-10-17 20:28:36 +08:00
|
|
|
for (i = from; i < to; i++) {
|
|
|
|
struct bpf_test *test = &tests[i];
|
bpf: mini eBPF library, test stubs and verifier testsuite
1.
the library includes a trivial set of BPF syscall wrappers:
int bpf_create_map(int key_size, int value_size, int max_entries);
int bpf_update_elem(int fd, void *key, void *value);
int bpf_lookup_elem(int fd, void *key, void *value);
int bpf_delete_elem(int fd, void *key);
int bpf_get_next_key(int fd, void *key, void *next_key);
int bpf_prog_load(enum bpf_prog_type prog_type,
const struct sock_filter_int *insns, int insn_len,
const char *license);
bpf_prog_load() stores verifier log into global bpf_log_buf[] array
and BPF_*() macros to build instructions
2.
test stubs configure eBPF infra with 'unspec' map and program types.
These are fake types used by user space testsuite only.
3.
verifier tests valid and invalid programs and expects predefined
error log messages from kernel.
40 tests so far.
$ sudo ./test_verifier
#0 add+sub+mul OK
#1 unreachable OK
#2 unreachable2 OK
#3 out of range jump OK
#4 out of range jump2 OK
#5 test1 ld_imm64 OK
...
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-26 15:17:07 +08:00
|
|
|
|
2016-10-17 20:28:36 +08:00

		/* Program types that are not supported by non-root
		 * are skipped right away.
		 */
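		/* Each selected test can run in two passes: "#N/u" as
		 * an unprivileged user and "#N/p" as privileged.
		 */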
		if (test_as_unpriv(test) && unpriv_disabled) {
			printf("#%d/u %s SKIP\n", i, test->descr);
			skips++;
		} else if (test_as_unpriv(test)) {
			if (!unpriv)
				set_admin(false);
			printf("#%d/u %s ", i, test->descr);
			do_test_single(test, true, &passes, &errors);
			if (!unpriv)
				set_admin(true);
		}

		if (unpriv) {
			printf("#%d/p %s SKIP\n", i, test->descr);
			skips++;
		} else {
			printf("#%d/p %s ", i, test->descr);
			do_test_single(test, false, &passes, &errors);
		}
|
bpf: mini eBPF library, test stubs and verifier testsuite
1.
the library includes a trivial set of BPF syscall wrappers:
int bpf_create_map(int key_size, int value_size, int max_entries);
int bpf_update_elem(int fd, void *key, void *value);
int bpf_lookup_elem(int fd, void *key, void *value);
int bpf_delete_elem(int fd, void *key);
int bpf_get_next_key(int fd, void *key, void *next_key);
int bpf_prog_load(enum bpf_prog_type prog_type,
const struct sock_filter_int *insns, int insn_len,
const char *license);
bpf_prog_load() stores verifier log into global bpf_log_buf[] array
and BPF_*() macros to build instructions
2.
test stubs configure eBPF infra with 'unspec' map and program types.
These are fake types used by user space testsuite only.
3.
verifier tests valid and invalid programs and expects predefined
error log messages from kernel.
40 tests so far.
$ sudo ./test_verifier
#0 add+sub+mul OK
#1 unreachable OK
#2 unreachable2 OK
#3 out of range jump OK
#4 out of range jump2 OK
#5 test1 ld_imm64 OK
...
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-26 15:17:07 +08:00
|
|
|
	}

	printf("Summary: %d PASSED, %d SKIPPED, %d FAILED\n", passes,
	       skips, errors);
	return errors ? EXIT_FAILURE : EXIT_SUCCESS;
}
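
/* Usage, as implemented by the argument parsing below:
 *
 *	./test_verifier		run every test
 *	./test_verifier 42	run only test #42
 *	./test_verifier 5 10	run tests #5 through #10 (inclusive)
 *	./test_verifier -v ...	any of the above with verbose output
 */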

int main(int argc, char **argv)
{
	unsigned int from = 0, to = ARRAY_SIZE(tests);
	bool unpriv = !is_admin();
	int arg = 1;

	if (argc > 1 && strcmp(argv[1], "-v") == 0) {
		arg++;
		verbose = true;
		argc--;
	}

	if (argc == 3) {
		unsigned int l = atoi(argv[arg]);
		unsigned int u = atoi(argv[arg + 1]);

		if (l < to && u < to) {
			from = l;
			to = u + 1;
		}
	} else if (argc == 2) {
		unsigned int t = atoi(argv[arg]);

		if (t < to) {
			from = t;
			to = t + 1;
		}
	}
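
	/* unpriv_disabled mirrors the kernel.unprivileged_bpf_disabled
	 * sysctl (read via UNPRIV_SYSCTL); when it is set, the
	 * unprivileged passes cannot run at all.
	 */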
	get_unpriv_disabled();
	if (unpriv && unpriv_disabled) {
		printf("Cannot run as unprivileged user with sysctl %s.\n",
		       UNPRIV_SYSCTL);
		return EXIT_FAILURE;
	}
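
	/* Annotation: LIBBPF_STRICT_ALL opts into every libbpf 1.0
	 * behavior at once; notably, libbpf APIs then return negative
	 * error codes directly instead of -1 with errno.
	 */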
	/* Use libbpf 1.0 API mode */
	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

	bpf_semi_rand_init();
	return do_test(unpriv, from, to);
}