bpf-next-for-netdev
-----BEGIN PGP SIGNATURE-----

iHUEABYIAB0WIQTFp0I1jqZrAX+hPRXbK58LschIgwUCZJX+ygAKCRDbK58LschI
g0/2AQDHg12smf9mPfK9wOFDNRIIX8r2iufB8LUFQMzCwltN6gEAkAdkAyfbof7P
TMaNUiHABijAFtChxoSI35j3OOSRrwE=
=GJgN
-----END PGP SIGNATURE-----

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-06-23

We've added 49 non-merge commits during the last 24 day(s) which contain
a total of 70 files changed, 1935 insertions(+), 442 deletions(-).

The main changes are:

1) Extend bpf_fib_lookup helper to allow passing the route table ID,
   from Louis DeLosSantos.

2) Fix regsafe() in verifier to call check_ids() for scalar registers,
   from Eduard Zingerman.

3) Extend the set of cpumask kfuncs with bpf_cpumask_first_and() and
   a rework of bpf_cpumask_any*() kfuncs. Additionally, add selftests,
   from David Vernet.

4) Fix socket lookup BPF helpers for tc/XDP to respect VRF bindings,
   from Gilad Sever.

5) Change bpf_link_put() to use workqueue unconditionally to fix it
   under PREEMPT_RT, from Sebastian Andrzej Siewior.

6) Follow-ups to address issues in the bpf_refcount shared ownership
   implementation, from Dave Marchevsky.

7) A few general refactorings to BPF map and program creation
   permissions checks which were part of the BPF token series, from
   Andrii Nakryiko.

8) Various fixes for benchmark framework and add a new benchmark for
   BPF memory allocator to BPF selftests, from Hou Tao.

9) Documentation improvements around iterators and trusted pointers,
   from Anton Protopopov.

10) Small cleanup in verifier to improve allocated object check,
    from Daniel T. Lee.

11) Improve performance of bpf_xdp_pointer() by avoiding access to
    shared_info when XDP packet does not have frags, from Jesper
    Dangaard Brouer.

12) Silence a harmless syzbot-reported warning in btf_type_id_size(),
    from Yonghong Song.

13) Remove duplicate bpfilter_umh_cleanup in favor of umd_cleanup_helper,
    from Jarkko Sakkinen.

14) Fix BPF selftests build for resolve_btfids under custom HOSTCFLAGS,
    from Viktor Malik.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (49 commits)
  bpf, docs: Document existing macros instead of deprecated
  bpf, docs: BPF Iterator Document
  selftests/bpf: Fix compilation failure for prog vrf_socket_lookup
  selftests/bpf: Add vrf_socket_lookup tests
  bpf: Fix bpf socket lookup from tc/xdp to respect socket VRF bindings
  bpf: Call __bpf_sk_lookup()/__bpf_skc_lookup() directly via TC hookpoint
  bpf: Factor out socket lookup functions for the TC hookpoint.
  selftests/bpf: Set the default value of consumer_cnt as 0
  selftests/bpf: Ensure that next_cpu() returns a valid CPU number
  selftests/bpf: Output the correct error code for pthread APIs
  selftests/bpf: Use producer_cnt to allocate local counter array
  xsk: Remove unused inline function xsk_buff_discard()
  bpf: Keep BPF_PROG_LOAD permission checks clear of validations
  bpf: Centralize permissions checks for all BPF map types
  bpf: Inline map creation logic in map_create() function
  bpf: Move unprivileged checks into map_create() and bpf_prog_load()
  bpf: Remove in_atomic() from bpf_link_put().
  selftests/bpf: Verify that check_ids() is used for scalars in regsafe()
  bpf: Verify scalar ids mapping in regsafe() using check_ids()
  selftests/bpf: Check if mark_chain_precision() follows scalar ids
  ...
====================

Link: https://lore.kernel.org/r/20230623211256.8409-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit a685d0df75
@@ -238,11 +238,8 @@ The following is the breakdown for each field in struct ``bpf_iter_reg``.
       that the kernel function cond_resched() is called to avoid other kernel
       subsystem (e.g., rcu) misbehaving.
   * - seq_info
     - Specifies certain action requests in the kernel BPF iterator
       infrastructure. Currently, only BPF_ITER_RESCHED is supported. This means
       that the kernel function cond_resched() is called to avoid other kernel
       subsystem (e.g., rcu) misbehaving.

     - Specifies the set of seq operations for the BPF iterator and helpers to
       initialize/free the private data for the corresponding ``seq_file``.

 `Click here
 <https://lore.kernel.org/bpf/20210212183107.50963-2-songliubraving@fb.com/>`_
@@ -351,14 +351,15 @@ In addition to the above kfuncs, there is also a set of read-only kfuncs that
 can be used to query the contents of cpumasks.

 .. kernel-doc:: kernel/bpf/cpumask.c
    :identifiers: bpf_cpumask_first bpf_cpumask_first_zero bpf_cpumask_test_cpu
    :identifiers: bpf_cpumask_first bpf_cpumask_first_zero bpf_cpumask_first_and
                  bpf_cpumask_test_cpu

 .. kernel-doc:: kernel/bpf/cpumask.c
    :identifiers: bpf_cpumask_equal bpf_cpumask_intersects bpf_cpumask_subset
                  bpf_cpumask_empty bpf_cpumask_full

 .. kernel-doc:: kernel/bpf/cpumask.c
    :identifiers: bpf_cpumask_any bpf_cpumask_any_and
    :identifiers: bpf_cpumask_any_distribute bpf_cpumask_any_and_distribute

 ----
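To make the renamed/added query kfuncs above concrete, here is a minimal, hedged sketch of a BPF program exercising bpf_cpumask_first_and() and bpf_cpumask_any_distribute(). The attach point, kfunc prototypes, and printed values are assumptions for illustration only; they are not part of this patch set.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc prototypes as typically declared on the BPF side (assumed). */
struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;
void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
u32 bpf_cpumask_first_and(const struct cpumask *src1, const struct cpumask *src2) __ksym;
u32 bpf_cpumask_any_distribute(const struct cpumask *cpumask) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(pick_cpu, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *mask;
	u32 first, any;

	mask = bpf_cpumask_create();
	if (!mask)
		return 0;

	bpf_cpumask_set_cpu(0, mask);
	bpf_cpumask_set_cpu(2, mask);

	/* First CPU allowed by both our mask and the new task's affinity. */
	first = bpf_cpumask_first_and((const struct cpumask *)mask, task->cpus_ptr);

	/* A load-balanced "any set CPU" pick from the same mask. */
	any = bpf_cpumask_any_distribute((const struct cpumask *)mask);

	bpf_printk("first_and=%u any=%u", first, any);
	bpf_cpumask_release(mask);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";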
@@ -227,23 +227,49 @@ absolutely no ABI stability guarantees.

 As mentioned above, a nested pointer obtained from walking a trusted pointer is
 no longer trusted, with one exception. If a struct type has a field that is
 guaranteed to be valid as long as its parent pointer is trusted, the
 ``BTF_TYPE_SAFE_NESTED`` macro can be used to express that to the verifier as
 follows:
 guaranteed to be valid (trusted or rcu, as in KF_RCU description below) as long
 as its parent pointer is valid, the following macros can be used to express
 that to the verifier:

 * ``BTF_TYPE_SAFE_TRUSTED``
 * ``BTF_TYPE_SAFE_RCU``
 * ``BTF_TYPE_SAFE_RCU_OR_NULL``

 For example,

 .. code-block:: c

	BTF_TYPE_SAFE_NESTED(struct task_struct) {
	BTF_TYPE_SAFE_TRUSTED(struct socket) {
		struct sock *sk;
	};

 or

 .. code-block:: c

	BTF_TYPE_SAFE_RCU(struct task_struct) {
		const cpumask_t *cpus_ptr;
		struct css_set __rcu *cgroups;
		struct task_struct __rcu *real_parent;
		struct task_struct *group_leader;
	};

 In other words, you must:

 1. Wrap the trusted pointer type in the ``BTF_TYPE_SAFE_NESTED`` macro.
 1. Wrap the valid pointer type in a ``BTF_TYPE_SAFE_*`` macro.

 2. Specify the type and name of the trusted nested field. This field must match
 2. Specify the type and name of the valid nested field. This field must match
    the field in the original type definition exactly.

 A new type declared by a ``BTF_TYPE_SAFE_*`` macro also needs to be emitted so
 that it appears in BTF. For example, ``BTF_TYPE_SAFE_TRUSTED(struct socket)``
 is emitted in the ``type_is_trusted()`` function as follows:

 .. code-block:: c

	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct socket));


 2.4.5 KF_SLEEPABLE flag
 -----------------------
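On the BPF-program side, these annotations are what make the nested field usable at all. Below is a hedged sketch (attach point and kfunc declarations assumed, not taken from this patch) of reading task->real_parent, which is listed in BTF_TYPE_SAFE_RCU(struct task_struct), under an RCU read-side section.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* RCU guard kfuncs, declared as usual on the BPF side (assumed). */
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(show_parent, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *parent;
	int ppid;

	/* task is a trusted tracing argument; because real_parent is listed in
	 * BTF_TYPE_SAFE_RCU(struct task_struct), the load below yields an
	 * RCU-protected pointer that may be dereferenced inside the section.
	 */
	bpf_rcu_read_lock();
	parent = task->real_parent;
	ppid = parent->pid;
	bpf_rcu_read_unlock();

	bpf_printk("new task %d, parent %d", task->pid, ppid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";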
@@ -313,11 +313,6 @@ struct bpf_idx_pair {
	u32 idx;
};

struct bpf_id_pair {
	u32 old;
	u32 cur;
};

#define MAX_CALL_FRAMES 8
/* Maximum number of register states that can exist at once */
#define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
@@ -557,6 +552,21 @@ struct backtrack_state {
	u64 stack_masks[MAX_CALL_FRAMES];
};

struct bpf_id_pair {
	u32 old;
	u32 cur;
};

struct bpf_idmap {
	u32 tmp_id_gen;
	struct bpf_id_pair map[BPF_ID_MAP_SIZE];
};

struct bpf_idset {
	u32 count;
	u32 ids[BPF_ID_MAP_SIZE];
};

/* single container for all structs
 * one verifier_env per bpf_check() call
 */
@@ -588,7 +598,10 @@ struct bpf_verifier_env {
	const struct bpf_line_info *prev_linfo;
	struct bpf_verifier_log log;
	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
	union {
		struct bpf_idmap idmap_scratch;
		struct bpf_idset idset_scratch;
	};
	struct {
		int *insn_state;
		int *insn_stack;
@@ -11,7 +11,6 @@ int bpfilter_ip_set_sockopt(struct sock *sk, int optname, sockptr_t optval,
			    unsigned int optlen);
int bpfilter_ip_get_sockopt(struct sock *sk, int optname, char __user *optval,
			    int __user *optlen);
void bpfilter_umh_cleanup(struct umd_info *info);

struct bpfilter_umh_ops {
	struct umd_info info;
@@ -874,7 +874,6 @@ void bpf_prog_free(struct bpf_prog *fp);

bool bpf_opcode_in_insntable(u8 code);

void bpf_prog_free_linfo(struct bpf_prog *prog);
void bpf_prog_fill_jited_linfo(struct bpf_prog *prog,
			       const u32 *insn_to_jit_off);
int bpf_prog_alloc_jited_linfo(struct bpf_prog *prog);
@@ -5073,6 +5073,15 @@ static inline bool netif_is_l3_slave(const struct net_device *dev)
	return dev->priv_flags & IFF_L3MDEV_SLAVE;
}

static inline int dev_sdif(const struct net_device *dev)
{
#ifdef CONFIG_NET_L3_MASTER_DEV
	if (netif_is_l3_slave(dev))
		return dev->ifindex;
#endif
	return 0;
}

static inline bool netif_is_bridge_master(const struct net_device *dev)
{
	return dev->priv_flags & IFF_EBRIDGE;
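As a hedged illustration of how the new dev_sdif() helper is meant to be consumed (mirroring the TC socket-lookup helpers later in this series, but with hypothetical names):

/* Illustrative caller: when dev is enslaved to an L3 master (VRF), dev_sdif()
 * returns its ifindex so the socket lookup can be scoped to that VRF;
 * otherwise it returns 0 and the lookup behaves as before.
 */
static void fill_lookup_key(const struct net_device *dev, int *ifindex, int *sdif)
{
	*ifindex = dev->ifindex;
	*sdif = dev_sdif(dev);
}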
@@ -255,10 +255,6 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
{
}

static inline void xsk_buff_discard(struct xdp_buff *xdp)
{
}

static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
{
}
@@ -3178,6 +3178,10 @@ union bpf_attr {
 *		**BPF_FIB_LOOKUP_DIRECT**
 *			Do a direct table lookup vs full lookup using FIB
 *			rules.
 *		**BPF_FIB_LOOKUP_TBID**
 *			Used with BPF_FIB_LOOKUP_DIRECT.
 *			Use the routing table ID present in *params*->tbid
 *			for the fib lookup.
 *		**BPF_FIB_LOOKUP_OUTPUT**
 *			Perform lookup from an egress perspective (default is
 *			ingress).
@@ -6832,6 +6836,7 @@ enum {
	BPF_FIB_LOOKUP_DIRECT = (1U << 0),
	BPF_FIB_LOOKUP_OUTPUT = (1U << 1),
	BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
	BPF_FIB_LOOKUP_TBID = (1U << 3),
};

enum {
@@ -6892,9 +6897,19 @@ struct bpf_fib_lookup {
		__u32	ipv6_dst[4];	/* in6_addr; network order */
	};

	/* output */
	__be16	h_vlan_proto;
	__be16	h_vlan_TCI;
	union {
		struct {
			/* output */
			__be16	h_vlan_proto;
			__be16	h_vlan_TCI;
		};
		/* input: when accompanied with the
		 * 'BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID` flags, a
		 * specific routing table to use for the fib lookup.
		 */
		__u32	tbid;
	};

	__u8	smac[6];	/* ETH_ALEN */
	__u8	dmac[6];	/* ETH_ALEN */
};
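A hedged XDP sketch of the new flag: the table ID, addresses, and attach details below are placeholders, and the example assumes a vmlinux.h generated from a kernel that already carries this UAPI change.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int fib_lookup_tbid(struct xdp_md *ctx)
{
	struct bpf_fib_lookup params = {};
	long rc;

	params.family	= 2;				/* AF_INET */
	params.ifindex	= ctx->ingress_ifindex;
	params.ipv4_src	= bpf_htonl(0x0a000001);	/* 10.0.0.1, placeholder */
	params.ipv4_dst	= bpf_htonl(0x0a000002);	/* 10.0.0.2, placeholder */
	params.tbid	= 100;				/* input: routing table ID (made up) */

	/* Consult routing table 100 directly instead of the full FIB rules. */
	rc = bpf_fib_lookup(ctx, &params, sizeof(params),
			    BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID);
	if (rc == BPF_FIB_LKUP_RET_SUCCESS)
		bpf_printk("nexthop via ifindex %d", params.ifindex);

	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";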
@@ -86,9 +86,6 @@ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr)
	int numa_node = bpf_map_attr_numa_node(attr);
	struct bpf_bloom_filter *bloom;

	if (!bpf_capable())
		return ERR_PTR(-EPERM);

	if (attr->key_size != 0 || attr->value_size == 0 ||
	    attr->max_entries == 0 ||
	    attr->map_flags & ~BLOOM_CREATE_FLAG_MASK ||
@@ -723,9 +723,6 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
	    !attr->btf_key_type_id || !attr->btf_value_type_id)
		return -EINVAL;

	if (!bpf_capable())
		return -EPERM;

	if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE)
		return -E2BIG;
@@ -655,9 +655,6 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
	const struct btf_type *t, *vt;
	struct bpf_map *map;

	if (!bpf_capable())
		return ERR_PTR(-EPERM);

	st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
	if (!st_ops)
		return ERR_PTR(-ENOTSUPP);
@@ -492,17 +492,6 @@ static bool btf_type_is_fwd(const struct btf_type *t)
	return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
}

static bool btf_type_nosize(const struct btf_type *t)
{
	return btf_type_is_void(t) || btf_type_is_fwd(t) ||
	       btf_type_is_func(t) || btf_type_is_func_proto(t);
}

static bool btf_type_nosize_or_null(const struct btf_type *t)
{
	return !t || btf_type_nosize(t);
}

static bool btf_type_is_datasec(const struct btf_type *t)
{
	return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
@@ -513,6 +502,18 @@ static bool btf_type_is_decl_tag(const struct btf_type *t)
	return BTF_INFO_KIND(t->info) == BTF_KIND_DECL_TAG;
}

static bool btf_type_nosize(const struct btf_type *t)
{
	return btf_type_is_void(t) || btf_type_is_fwd(t) ||
	       btf_type_is_func(t) || btf_type_is_func_proto(t) ||
	       btf_type_is_decl_tag(t);
}

static bool btf_type_nosize_or_null(const struct btf_type *t)
{
	return !t || btf_type_nosize(t);
}

static bool btf_type_is_decl_tag_target(const struct btf_type *t)
{
	return btf_type_is_func(t) || btf_type_is_struct(t) ||
@ -2064,14 +2064,16 @@ EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
|
||||
};
|
||||
#undef PROG_NAME_LIST
|
||||
#define PROG_NAME_LIST(stack_size) PROG_NAME_ARGS(stack_size),
|
||||
static u64 (*interpreters_args[])(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5,
|
||||
const struct bpf_insn *insn) = {
|
||||
static __maybe_unused
|
||||
u64 (*interpreters_args[])(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5,
|
||||
const struct bpf_insn *insn) = {
|
||||
EVAL6(PROG_NAME_LIST, 32, 64, 96, 128, 160, 192)
|
||||
EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384)
|
||||
EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
|
||||
};
|
||||
#undef PROG_NAME_LIST
|
||||
|
||||
#ifdef CONFIG_BPF_SYSCALL
|
||||
void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
|
||||
{
|
||||
stack_depth = max_t(u32, stack_depth, 1);
|
||||
@ -2080,7 +2082,7 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
|
||||
__bpf_call_base_args;
|
||||
insn->code = BPF_JMP | BPF_CALL_ARGS;
|
||||
}
|
||||
|
||||
#endif
|
||||
#else
|
||||
static unsigned int __bpf_prog_ret0_warn(const void *ctx,
|
||||
const struct bpf_insn *insn)
|
||||
|
@ -28,7 +28,6 @@
|
||||
#include <linux/sched.h>
|
||||
#include <linux/workqueue.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/capability.h>
|
||||
#include <trace/events/xdp.h>
|
||||
#include <linux/btf_ids.h>
|
||||
|
||||
@ -89,9 +88,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
|
||||
u32 value_size = attr->value_size;
|
||||
struct bpf_cpu_map *cmap;
|
||||
|
||||
if (!bpf_capable())
|
||||
return ERR_PTR(-EPERM);
|
||||
|
||||
/* check sanity of attributes */
|
||||
if (attr->max_entries == 0 || attr->key_size != 4 ||
|
||||
(value_size != offsetofend(struct bpf_cpumap_val, qsize) &&
|
||||
|
@ -131,6 +131,21 @@ __bpf_kfunc u32 bpf_cpumask_first_zero(const struct cpumask *cpumask)
|
||||
return cpumask_first_zero(cpumask);
|
||||
}
|
||||
|
||||
/**
|
||||
* bpf_cpumask_first_and() - Return the index of the first nonzero bit from the
|
||||
* AND of two cpumasks.
|
||||
* @src1: The first cpumask.
|
||||
* @src2: The second cpumask.
|
||||
*
|
||||
* Find the index of the first nonzero bit of the AND of two cpumasks.
|
||||
* struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
|
||||
*/
|
||||
__bpf_kfunc u32 bpf_cpumask_first_and(const struct cpumask *src1,
|
||||
const struct cpumask *src2)
|
||||
{
|
||||
return cpumask_first_and(src1, src2);
|
||||
}
|
||||
|
||||
/**
|
||||
* bpf_cpumask_set_cpu() - Set a bit for a CPU in a BPF cpumask.
|
||||
* @cpu: The CPU to be set in the cpumask.
|
||||
@ -367,7 +382,7 @@ __bpf_kfunc void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask
|
||||
}
|
||||
|
||||
/**
|
||||
* bpf_cpumask_any() - Return a random set CPU from a cpumask.
|
||||
* bpf_cpumask_any_distribute() - Return a random set CPU from a cpumask.
|
||||
* @cpumask: The cpumask being queried.
|
||||
*
|
||||
* Return:
|
||||
@ -376,26 +391,28 @@ __bpf_kfunc void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask
|
||||
*
|
||||
* A struct bpf_cpumask pointer may be safely passed to @src.
|
||||
*/
|
||||
__bpf_kfunc u32 bpf_cpumask_any(const struct cpumask *cpumask)
|
||||
__bpf_kfunc u32 bpf_cpumask_any_distribute(const struct cpumask *cpumask)
|
||||
{
|
||||
return cpumask_any(cpumask);
|
||||
return cpumask_any_distribute(cpumask);
|
||||
}
|
||||
|
||||
/**
|
||||
* bpf_cpumask_any_and() - Return a random set CPU from the AND of two
|
||||
* cpumasks.
|
||||
* bpf_cpumask_any_and_distribute() - Return a random set CPU from the AND of
|
||||
* two cpumasks.
|
||||
* @src1: The first cpumask.
|
||||
* @src2: The second cpumask.
|
||||
*
|
||||
* Return:
|
||||
* * A random set bit within [0, num_cpus) if at least one bit is set.
|
||||
* * A random set bit within [0, num_cpus) from the AND of two cpumasks, if at
|
||||
* least one bit is set.
|
||||
* * >= num_cpus if no bit is set.
|
||||
*
|
||||
* struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
|
||||
*/
|
||||
__bpf_kfunc u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2)
|
||||
__bpf_kfunc u32 bpf_cpumask_any_and_distribute(const struct cpumask *src1,
|
||||
const struct cpumask *src2)
|
||||
{
|
||||
return cpumask_any_and(src1, src2);
|
||||
return cpumask_any_and_distribute(src1, src2);
|
||||
}
|
||||
|
||||
__diag_pop();
|
||||
@ -406,6 +423,7 @@ BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_first, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_first_and, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_RCU)
|
||||
@ -422,8 +440,8 @@ BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_full, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_any, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_any_distribute, KF_RCU)
|
||||
BTF_ID_FLAGS(func, bpf_cpumask_any_and_distribute, KF_RCU)
|
||||
BTF_SET8_END(cpumask_kfunc_btf_ids)
|
||||
|
||||
static const struct btf_kfunc_id_set cpumask_kfunc_set = {
|
||||
|
@ -160,9 +160,6 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
|
||||
struct bpf_dtab *dtab;
|
||||
int err;
|
||||
|
||||
if (!capable(CAP_NET_ADMIN))
|
||||
return ERR_PTR(-EPERM);
|
||||
|
||||
dtab = bpf_map_area_alloc(sizeof(*dtab), NUMA_NO_NODE);
|
||||
if (!dtab)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
@ -422,12 +422,6 @@ static int htab_map_alloc_check(union bpf_attr *attr)
|
||||
BUILD_BUG_ON(offsetof(struct htab_elem, fnode.next) !=
|
||||
offsetof(struct htab_elem, hash_node.pprev));
|
||||
|
||||
if (lru && !bpf_capable())
|
||||
/* LRU implementation is much complicated than other
|
||||
* maps. Hence, limit to CAP_BPF.
|
||||
*/
|
||||
return -EPERM;
|
||||
|
||||
if (zero_seed && !capable(CAP_SYS_ADMIN))
|
||||
/* Guard against local DoS, and discourage production use. */
|
||||
return -EPERM;
|
||||
|
@ -1933,8 +1933,12 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
|
||||
* bpf_refcount type so that it is emitted in vmlinux BTF
|
||||
*/
|
||||
ref = (struct bpf_refcount *)(p__refcounted_kptr + meta->record->refcount_off);
|
||||
if (!refcount_inc_not_zero((refcount_t *)ref))
|
||||
return NULL;
|
||||
|
||||
refcount_inc((refcount_t *)ref);
|
||||
/* Verifier strips KF_RET_NULL if input is owned ref, see is_kfunc_ret_null
|
||||
* in verifier.c
|
||||
*/
|
||||
return (void *)p__refcounted_kptr;
|
||||
}
|
||||
|
||||
@ -1950,7 +1954,7 @@ static int __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head
|
||||
INIT_LIST_HEAD(h);
|
||||
if (!list_empty(n)) {
|
||||
/* Only called from BPF prog, no need to migrate_disable */
|
||||
__bpf_obj_drop_impl(n - off, rec);
|
||||
__bpf_obj_drop_impl((void *)n - off, rec);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -2032,7 +2036,7 @@ static int __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
|
||||
|
||||
if (!RB_EMPTY_NODE(n)) {
|
||||
/* Only called from BPF prog, no need to migrate_disable */
|
||||
__bpf_obj_drop_impl(n - off, rec);
|
||||
__bpf_obj_drop_impl((void *)n - off, rec);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -2406,7 +2410,7 @@ BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
|
||||
#endif
|
||||
BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
|
||||
BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
|
||||
BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE)
|
||||
BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE | KF_RET_NULL)
|
||||
BTF_ID_FLAGS(func, bpf_list_push_front_impl)
|
||||
BTF_ID_FLAGS(func, bpf_list_push_back_impl)
|
||||
BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
|
||||
|
@ -544,9 +544,6 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
|
||||
{
|
||||
struct lpm_trie *trie;
|
||||
|
||||
if (!bpf_capable())
|
||||
return ERR_PTR(-EPERM);
|
||||
|
||||
/* check sanity of attributes */
|
||||
if (attr->max_entries == 0 ||
|
||||
!(attr->map_flags & BPF_F_NO_PREALLOC) ||
|
||||
|
@ -211,9 +211,9 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
|
||||
mem_cgroup_put(memcg);
|
||||
}
|
||||
|
||||
static void free_one(struct bpf_mem_cache *c, void *obj)
|
||||
static void free_one(void *obj, bool percpu)
|
||||
{
|
||||
if (c->percpu_size) {
|
||||
if (percpu) {
|
||||
free_percpu(((void **)obj)[1]);
|
||||
kfree(obj);
|
||||
return;
|
||||
@ -222,14 +222,19 @@ static void free_one(struct bpf_mem_cache *c, void *obj)
|
||||
kfree(obj);
|
||||
}
|
||||
|
||||
static void __free_rcu(struct rcu_head *head)
|
||||
static void free_all(struct llist_node *llnode, bool percpu)
|
||||
{
|
||||
struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu);
|
||||
struct llist_node *llnode = llist_del_all(&c->waiting_for_gp);
|
||||
struct llist_node *pos, *t;
|
||||
|
||||
llist_for_each_safe(pos, t, llnode)
|
||||
free_one(c, pos);
|
||||
free_one(pos, percpu);
|
||||
}
|
||||
|
||||
static void __free_rcu(struct rcu_head *head)
|
||||
{
|
||||
struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu);
|
||||
|
||||
free_all(llist_del_all(&c->waiting_for_gp), !!c->percpu_size);
|
||||
atomic_set(&c->call_rcu_in_progress, 0);
|
||||
}
|
||||
|
||||
@ -432,7 +437,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
|
||||
|
||||
static void drain_mem_cache(struct bpf_mem_cache *c)
|
||||
{
|
||||
struct llist_node *llnode, *t;
|
||||
bool percpu = !!c->percpu_size;
|
||||
|
||||
/* No progs are using this bpf_mem_cache, but htab_map_free() called
|
||||
* bpf_mem_cache_free() for all remaining elements and they can be in
|
||||
@ -441,14 +446,10 @@ static void drain_mem_cache(struct bpf_mem_cache *c)
|
||||
* Except for waiting_for_gp list, there are no concurrent operations
|
||||
* on these lists, so it is safe to use __llist_del_all().
|
||||
*/
|
||||
llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu))
|
||||
free_one(c, llnode);
|
||||
llist_for_each_safe(llnode, t, llist_del_all(&c->waiting_for_gp))
|
||||
free_one(c, llnode);
|
||||
llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist))
|
||||
free_one(c, llnode);
|
||||
llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist_extra))
|
||||
free_one(c, llnode);
|
||||
free_all(__llist_del_all(&c->free_by_rcu), percpu);
|
||||
free_all(llist_del_all(&c->waiting_for_gp), percpu);
|
||||
free_all(__llist_del_all(&c->free_llist), percpu);
|
||||
free_all(__llist_del_all(&c->free_llist_extra), percpu);
|
||||
}
|
||||
|
||||
static void free_mem_alloc_no_barrier(struct bpf_mem_alloc *ma)
|
||||
|
@ -23,9 +23,9 @@ static void free_links_and_skel(void)
|
||||
|
||||
static int preload(struct bpf_preload_info *obj)
|
||||
{
|
||||
strlcpy(obj[0].link_name, "maps.debug", sizeof(obj[0].link_name));
|
||||
strscpy(obj[0].link_name, "maps.debug", sizeof(obj[0].link_name));
|
||||
obj[0].link = maps_link;
|
||||
strlcpy(obj[1].link_name, "progs.debug", sizeof(obj[1].link_name));
|
||||
strscpy(obj[1].link_name, "progs.debug", sizeof(obj[1].link_name));
|
||||
obj[1].link = progs_link;
|
||||
return 0;
|
||||
}
|
||||
|
@ -7,7 +7,6 @@
|
||||
#include <linux/bpf.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/capability.h>
|
||||
#include <linux/btf_ids.h>
|
||||
#include "percpu_freelist.h"
|
||||
|
||||
@ -46,9 +45,6 @@ static bool queue_stack_map_is_full(struct bpf_queue_stack *qs)
|
||||
/* Called from syscall */
|
||||
static int queue_stack_map_alloc_check(union bpf_attr *attr)
|
||||
{
|
||||
if (!bpf_capable())
|
||||
return -EPERM;
|
||||
|
||||
/* check sanity of attributes */
|
||||
if (attr->max_entries == 0 || attr->key_size != 0 ||
|
||||
attr->value_size == 0 ||
|
||||
|
@ -151,9 +151,6 @@ static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr)
|
||||
int numa_node = bpf_map_attr_numa_node(attr);
|
||||
struct reuseport_array *array;
|
||||
|
||||
if (!bpf_capable())
|
||||
return ERR_PTR(-EPERM);
|
||||
|
||||
/* allocate all map elements and zero-initialize them */
|
||||
array = bpf_map_area_alloc(struct_size(array, ptrs, attr->max_entries), numa_node);
|
||||
if (!array)
|
||||
|
@ -74,9 +74,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
|
||||
u64 cost, n_buckets;
|
||||
int err;
|
||||
|
||||
if (!bpf_capable())
|
||||
return ERR_PTR(-EPERM);
|
||||
|
||||
if (attr->map_flags & ~STACK_CREATE_FLAG_MASK)
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
|
@ -109,37 +109,6 @@ const struct bpf_map_ops bpf_map_offload_ops = {
|
||||
.map_mem_usage = bpf_map_offload_map_mem_usage,
|
||||
};
|
||||
|
||||
static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
|
||||
{
|
||||
const struct bpf_map_ops *ops;
|
||||
u32 type = attr->map_type;
|
||||
struct bpf_map *map;
|
||||
int err;
|
||||
|
||||
if (type >= ARRAY_SIZE(bpf_map_types))
|
||||
return ERR_PTR(-EINVAL);
|
||||
type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types));
|
||||
ops = bpf_map_types[type];
|
||||
if (!ops)
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
if (ops->map_alloc_check) {
|
||||
err = ops->map_alloc_check(attr);
|
||||
if (err)
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
if (attr->map_ifindex)
|
||||
ops = &bpf_map_offload_ops;
|
||||
if (!ops->map_mem_usage)
|
||||
return ERR_PTR(-EINVAL);
|
||||
map = ops->map_alloc(attr);
|
||||
if (IS_ERR(map))
|
||||
return map;
|
||||
map->ops = ops;
|
||||
map->map_type = type;
|
||||
return map;
|
||||
}
|
||||
|
||||
static void bpf_map_write_active_inc(struct bpf_map *map)
|
||||
{
|
||||
atomic64_inc(&map->writecnt);
|
||||
@ -1127,7 +1096,9 @@ free_map_tab:
|
||||
/* called via syscall */
|
||||
static int map_create(union bpf_attr *attr)
|
||||
{
|
||||
const struct bpf_map_ops *ops;
|
||||
int numa_node = bpf_map_attr_numa_node(attr);
|
||||
u32 map_type = attr->map_type;
|
||||
struct bpf_map *map;
|
||||
int f_flags;
|
||||
int err;
|
||||
@ -1158,9 +1129,85 @@ static int map_create(union bpf_attr *attr)
|
||||
return -EINVAL;
|
||||
|
||||
/* find map type and init map: hashtable vs rbtree vs bloom vs ... */
|
||||
map = find_and_alloc_map(attr);
|
||||
map_type = attr->map_type;
|
||||
if (map_type >= ARRAY_SIZE(bpf_map_types))
|
||||
return -EINVAL;
|
||||
map_type = array_index_nospec(map_type, ARRAY_SIZE(bpf_map_types));
|
||||
ops = bpf_map_types[map_type];
|
||||
if (!ops)
|
||||
return -EINVAL;
|
||||
|
||||
if (ops->map_alloc_check) {
|
||||
err = ops->map_alloc_check(attr);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
if (attr->map_ifindex)
|
||||
ops = &bpf_map_offload_ops;
|
||||
if (!ops->map_mem_usage)
|
||||
return -EINVAL;
|
||||
|
||||
/* Intent here is for unprivileged_bpf_disabled to block BPF map
|
||||
* creation for unprivileged users; other actions depend
|
||||
* on fd availability and access to bpffs, so are dependent on
|
||||
* object creation success. Even with unprivileged BPF disabled,
|
||||
* capability checks are still carried out.
|
||||
*/
|
||||
if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
|
||||
return -EPERM;
|
||||
|
||||
/* check privileged map type permissions */
|
||||
switch (map_type) {
|
||||
case BPF_MAP_TYPE_ARRAY:
|
||||
case BPF_MAP_TYPE_PERCPU_ARRAY:
|
||||
case BPF_MAP_TYPE_PROG_ARRAY:
|
||||
case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
|
||||
case BPF_MAP_TYPE_CGROUP_ARRAY:
|
||||
case BPF_MAP_TYPE_ARRAY_OF_MAPS:
|
||||
case BPF_MAP_TYPE_HASH:
|
||||
case BPF_MAP_TYPE_PERCPU_HASH:
|
||||
case BPF_MAP_TYPE_HASH_OF_MAPS:
|
||||
case BPF_MAP_TYPE_RINGBUF:
|
||||
case BPF_MAP_TYPE_USER_RINGBUF:
|
||||
case BPF_MAP_TYPE_CGROUP_STORAGE:
|
||||
case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE:
|
||||
/* unprivileged */
|
||||
break;
|
||||
case BPF_MAP_TYPE_SK_STORAGE:
|
||||
case BPF_MAP_TYPE_INODE_STORAGE:
|
||||
case BPF_MAP_TYPE_TASK_STORAGE:
|
||||
case BPF_MAP_TYPE_CGRP_STORAGE:
|
||||
case BPF_MAP_TYPE_BLOOM_FILTER:
|
||||
case BPF_MAP_TYPE_LPM_TRIE:
|
||||
case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
|
||||
case BPF_MAP_TYPE_STACK_TRACE:
|
||||
case BPF_MAP_TYPE_QUEUE:
|
||||
case BPF_MAP_TYPE_STACK:
|
||||
case BPF_MAP_TYPE_LRU_HASH:
|
||||
case BPF_MAP_TYPE_LRU_PERCPU_HASH:
|
||||
case BPF_MAP_TYPE_STRUCT_OPS:
|
||||
case BPF_MAP_TYPE_CPUMAP:
|
||||
if (!bpf_capable())
|
||||
return -EPERM;
|
||||
break;
|
||||
case BPF_MAP_TYPE_SOCKMAP:
|
||||
case BPF_MAP_TYPE_SOCKHASH:
|
||||
case BPF_MAP_TYPE_DEVMAP:
|
||||
case BPF_MAP_TYPE_DEVMAP_HASH:
|
||||
case BPF_MAP_TYPE_XSKMAP:
|
||||
if (!capable(CAP_NET_ADMIN))
|
||||
return -EPERM;
|
||||
break;
|
||||
default:
|
||||
WARN(1, "unsupported map type %d", map_type);
|
||||
return -EPERM;
|
||||
}
|
||||
|
||||
map = ops->map_alloc(attr);
|
||||
if (IS_ERR(map))
|
||||
return PTR_ERR(map);
|
||||
map->ops = ops;
|
||||
map->map_type = map_type;
|
||||
|
||||
err = bpf_obj_name_cpy(map->name, attr->map_name,
|
||||
sizeof(attr->map_name));
|
||||
@ -2507,7 +2554,6 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
|
||||
struct btf *attach_btf = NULL;
|
||||
int err;
|
||||
char license[128];
|
||||
bool is_gpl;
|
||||
|
||||
if (CHECK_ATTR(BPF_PROG_LOAD))
|
||||
return -EINVAL;
|
||||
@ -2526,15 +2572,15 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
|
||||
!bpf_capable())
|
||||
return -EPERM;
|
||||
|
||||
/* copy eBPF program license from user space */
|
||||
if (strncpy_from_bpfptr(license,
|
||||
make_bpfptr(attr->license, uattr.is_kernel),
|
||||
sizeof(license) - 1) < 0)
|
||||
return -EFAULT;
|
||||
license[sizeof(license) - 1] = 0;
|
||||
|
||||
/* eBPF programs must be GPL compatible to use GPL-ed functions */
|
||||
is_gpl = license_is_gpl_compatible(license);
|
||||
/* Intent here is for unprivileged_bpf_disabled to block BPF program
|
||||
* creation for unprivileged users; other actions depend
|
||||
* on fd availability and access to bpffs, so are dependent on
|
||||
* object creation success. Even with unprivileged BPF disabled,
|
||||
* capability checks are still carried out for these
|
||||
* and other operations.
|
||||
*/
|
||||
if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
|
||||
return -EPERM;
|
||||
|
||||
if (attr->insn_cnt == 0 ||
|
||||
attr->insn_cnt > (bpf_capable() ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS))
|
||||
@ -2618,12 +2664,20 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
|
||||
make_bpfptr(attr->insns, uattr.is_kernel),
|
||||
bpf_prog_insn_size(prog)) != 0)
|
||||
goto free_prog_sec;
|
||||
/* copy eBPF program license from user space */
|
||||
if (strncpy_from_bpfptr(license,
|
||||
make_bpfptr(attr->license, uattr.is_kernel),
|
||||
sizeof(license) - 1) < 0)
|
||||
goto free_prog_sec;
|
||||
license[sizeof(license) - 1] = 0;
|
||||
|
||||
/* eBPF programs must be GPL compatible to use GPL-ed functions */
|
||||
prog->gpl_compatible = license_is_gpl_compatible(license) ? 1 : 0;
|
||||
|
||||
prog->orig_prog = NULL;
|
||||
prog->jited = 0;
|
||||
|
||||
atomic64_set(&prog->aux->refcnt, 1);
|
||||
prog->gpl_compatible = is_gpl ? 1 : 0;
|
||||
|
||||
if (bpf_prog_is_dev_bound(prog->aux)) {
|
||||
err = bpf_prog_dev_bound_init(prog, attr);
|
||||
@ -2797,28 +2851,31 @@ static void bpf_link_put_deferred(struct work_struct *work)
|
||||
bpf_link_free(link);
|
||||
}
|
||||
|
||||
/* bpf_link_put can be called from atomic context, but ensures that resources
|
||||
* are freed from process context
|
||||
/* bpf_link_put might be called from atomic context. It needs to be called
|
||||
* from sleepable context in order to acquire sleeping locks during the process.
|
||||
*/
|
||||
void bpf_link_put(struct bpf_link *link)
|
||||
{
|
||||
if (!atomic64_dec_and_test(&link->refcnt))
|
||||
return;
|
||||
|
||||
if (in_atomic()) {
|
||||
INIT_WORK(&link->work, bpf_link_put_deferred);
|
||||
schedule_work(&link->work);
|
||||
} else {
|
||||
bpf_link_free(link);
|
||||
}
|
||||
INIT_WORK(&link->work, bpf_link_put_deferred);
|
||||
schedule_work(&link->work);
|
||||
}
|
||||
EXPORT_SYMBOL(bpf_link_put);
|
||||
|
||||
static void bpf_link_put_direct(struct bpf_link *link)
|
||||
{
|
||||
if (!atomic64_dec_and_test(&link->refcnt))
|
||||
return;
|
||||
bpf_link_free(link);
|
||||
}
|
||||
|
||||
static int bpf_link_release(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct bpf_link *link = filp->private_data;
|
||||
|
||||
bpf_link_put(link);
|
||||
bpf_link_put_direct(link);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -4801,7 +4858,7 @@ out_put_progs:
|
||||
if (ret)
|
||||
bpf_prog_put(new_prog);
|
||||
out_put_link:
|
||||
bpf_link_put(link);
|
||||
bpf_link_put_direct(link);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -4824,7 +4881,7 @@ static int link_detach(union bpf_attr *attr)
|
||||
else
|
||||
ret = -EOPNOTSUPP;
|
||||
|
||||
bpf_link_put(link);
|
||||
bpf_link_put_direct(link);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -4894,7 +4951,7 @@ static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
|
||||
|
||||
fd = bpf_link_new_fd(link);
|
||||
if (fd < 0)
|
||||
bpf_link_put(link);
|
||||
bpf_link_put_direct(link);
|
||||
|
||||
return fd;
|
||||
}
|
||||
@ -4971,7 +5028,7 @@ static int bpf_iter_create(union bpf_attr *attr)
|
||||
return PTR_ERR(link);
|
||||
|
||||
err = bpf_iter_new_fd(link);
|
||||
bpf_link_put(link);
|
||||
bpf_link_put_direct(link);
|
||||
|
||||
return err;
|
||||
}
|
||||
@ -5041,23 +5098,8 @@ out_prog_put:
|
||||
static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
|
||||
{
|
||||
union bpf_attr attr;
|
||||
bool capable;
|
||||
int err;
|
||||
|
||||
capable = bpf_capable() || !sysctl_unprivileged_bpf_disabled;
|
||||
|
||||
/* Intent here is for unprivileged_bpf_disabled to block key object
|
||||
* creation commands for unprivileged users; other actions depend
|
||||
* of fd availability and access to bpffs, so are dependent on
|
||||
* object creation success. Capabilities are later verified for
|
||||
* operations such as load and map create, so even with unprivileged
|
||||
* BPF disabled, capability checks are still carried out for these
|
||||
* and other operations.
|
||||
*/
|
||||
if (!capable &&
|
||||
(cmd == BPF_MAP_CREATE || cmd == BPF_PROG_LOAD))
|
||||
return -EPERM;
|
||||
|
||||
err = bpf_check_uarg_tail_zero(uattr, sizeof(attr), size);
|
||||
if (err)
|
||||
return err;
|
||||
|
@ -197,6 +197,7 @@ static int ref_set_non_owning(struct bpf_verifier_env *env,
|
||||
struct bpf_reg_state *reg);
|
||||
static void specialize_kfunc(struct bpf_verifier_env *env,
|
||||
u32 func_id, u16 offset, unsigned long *addr);
|
||||
static bool is_trusted_reg(const struct bpf_reg_state *reg);
|
||||
|
||||
static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
|
||||
{
|
||||
@ -298,16 +299,19 @@ struct bpf_kfunc_call_arg_meta {
|
||||
bool found;
|
||||
} arg_constant;
|
||||
|
||||
/* arg_btf and arg_btf_id are used by kfunc-specific handling,
|
||||
/* arg_{btf,btf_id,owning_ref} are used by kfunc-specific handling,
|
||||
* generally to pass info about user-defined local kptr types to later
|
||||
* verification logic
|
||||
* bpf_obj_drop
|
||||
* Record the local kptr type to be drop'd
|
||||
* bpf_refcount_acquire (via KF_ARG_PTR_TO_REFCOUNTED_KPTR arg type)
|
||||
* Record the local kptr type to be refcount_incr'd
|
||||
* Record the local kptr type to be refcount_incr'd and use
|
||||
* arg_owning_ref to determine whether refcount_acquire should be
|
||||
* fallible
|
||||
*/
|
||||
struct btf *arg_btf;
|
||||
u32 arg_btf_id;
|
||||
bool arg_owning_ref;
|
||||
|
||||
struct {
|
||||
struct btf_field *field;
|
||||
@ -439,8 +443,11 @@ static bool type_may_be_null(u32 type)
|
||||
return type & PTR_MAYBE_NULL;
|
||||
}
|
||||
|
||||
static bool reg_type_not_null(enum bpf_reg_type type)
|
||||
static bool reg_not_null(const struct bpf_reg_state *reg)
|
||||
{
|
||||
enum bpf_reg_type type;
|
||||
|
||||
type = reg->type;
|
||||
if (type_may_be_null(type))
|
||||
return false;
|
||||
|
||||
@ -450,6 +457,7 @@ static bool reg_type_not_null(enum bpf_reg_type type)
|
||||
type == PTR_TO_MAP_VALUE ||
|
||||
type == PTR_TO_MAP_KEY ||
|
||||
type == PTR_TO_SOCK_COMMON ||
|
||||
(type == PTR_TO_BTF_ID && is_trusted_reg(reg)) ||
|
||||
type == PTR_TO_MEM;
|
||||
}
|
||||
|
||||
@ -3771,6 +3779,96 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
|
||||
}
|
||||
}
|
||||
|
||||
static bool idset_contains(struct bpf_idset *s, u32 id)
|
||||
{
|
||||
u32 i;
|
||||
|
||||
for (i = 0; i < s->count; ++i)
|
||||
if (s->ids[i] == id)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static int idset_push(struct bpf_idset *s, u32 id)
|
||||
{
|
||||
if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
|
||||
return -EFAULT;
|
||||
s->ids[s->count++] = id;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void idset_reset(struct bpf_idset *s)
|
||||
{
|
||||
s->count = 0;
|
||||
}
|
||||
|
||||
/* Collect a set of IDs for all registers currently marked as precise in env->bt.
|
||||
* Mark all registers with these IDs as precise.
|
||||
*/
|
||||
static int mark_precise_scalar_ids(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
|
||||
{
|
||||
struct bpf_idset *precise_ids = &env->idset_scratch;
|
||||
struct backtrack_state *bt = &env->bt;
|
||||
struct bpf_func_state *func;
|
||||
struct bpf_reg_state *reg;
|
||||
DECLARE_BITMAP(mask, 64);
|
||||
int i, fr;
|
||||
|
||||
idset_reset(precise_ids);
|
||||
|
||||
for (fr = bt->frame; fr >= 0; fr--) {
|
||||
func = st->frame[fr];
|
||||
|
||||
bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr));
|
||||
for_each_set_bit(i, mask, 32) {
|
||||
reg = &func->regs[i];
|
||||
if (!reg->id || reg->type != SCALAR_VALUE)
|
||||
continue;
|
||||
if (idset_push(precise_ids, reg->id))
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
|
||||
for_each_set_bit(i, mask, 64) {
|
||||
if (i >= func->allocated_stack / BPF_REG_SIZE)
|
||||
break;
|
||||
if (!is_spilled_scalar_reg(&func->stack[i]))
|
||||
continue;
|
||||
reg = &func->stack[i].spilled_ptr;
|
||||
if (!reg->id)
|
||||
continue;
|
||||
if (idset_push(precise_ids, reg->id))
|
||||
return -EFAULT;
|
||||
}
|
||||
}
|
||||
|
||||
for (fr = 0; fr <= st->curframe; ++fr) {
|
||||
func = st->frame[fr];
|
||||
|
||||
for (i = BPF_REG_0; i < BPF_REG_10; ++i) {
|
||||
reg = &func->regs[i];
|
||||
if (!reg->id)
|
||||
continue;
|
||||
if (!idset_contains(precise_ids, reg->id))
|
||||
continue;
|
||||
bt_set_frame_reg(bt, fr, i);
|
||||
}
|
||||
for (i = 0; i < func->allocated_stack / BPF_REG_SIZE; ++i) {
|
||||
if (!is_spilled_scalar_reg(&func->stack[i]))
|
||||
continue;
|
||||
reg = &func->stack[i].spilled_ptr;
|
||||
if (!reg->id)
|
||||
continue;
|
||||
if (!idset_contains(precise_ids, reg->id))
|
||||
continue;
|
||||
bt_set_frame_slot(bt, fr, i);
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* __mark_chain_precision() backtracks BPF program instruction sequence and
|
||||
* chain of verifier states making sure that register *regno* (if regno >= 0)
|
||||
@ -3902,6 +4000,31 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
|
||||
bt->frame, last_idx, first_idx, subseq_idx);
|
||||
}
|
||||
|
||||
/* If some register with scalar ID is marked as precise,
|
||||
* make sure that all registers sharing this ID are also precise.
|
||||
* This is needed to estimate effect of find_equal_scalars().
|
||||
* Do this at the last instruction of each state,
|
||||
* bpf_reg_state::id fields are valid for these instructions.
|
||||
*
|
||||
* Allows to track precision in situation like below:
|
||||
*
|
||||
* r2 = unknown value
|
||||
* ...
|
||||
* --- state #0 ---
|
||||
* ...
|
||||
* r1 = r2 // r1 and r2 now share the same ID
|
||||
* ...
|
||||
* --- state #1 {r1.id = A, r2.id = A} ---
|
||||
* ...
|
||||
* if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
|
||||
* ...
|
||||
* --- state #2 {r1.id = A, r2.id = A} ---
|
||||
* r3 = r10
|
||||
* r3 += r1 // need to mark both r1 and r2
|
||||
*/
|
||||
if (mark_precise_scalar_ids(env, st))
|
||||
return -EFAULT;
|
||||
|
||||
if (last_idx < 0) {
|
||||
/* we are at the entry into subprog, which
|
||||
* is expected for global funcs, but only if
|
||||
@ -5894,7 +6017,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
|
||||
* program allocated objects (which always have ref_obj_id > 0),
|
||||
* but not for untrusted PTR_TO_BTF_ID | MEM_ALLOC.
|
||||
*/
|
||||
if (atype != BPF_READ && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
|
||||
if (atype != BPF_READ && !type_is_ptr_alloc_obj(reg->type)) {
|
||||
verbose(env, "only read is supported\n");
|
||||
return -EACCES;
|
||||
}
|
||||
@ -7514,7 +7637,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
|
||||
if (base_type(arg_type) == ARG_PTR_TO_MEM)
|
||||
type &= ~DYNPTR_TYPE_FLAG_MASK;
|
||||
|
||||
if (meta->func_id == BPF_FUNC_kptr_xchg && type & MEM_ALLOC)
|
||||
if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type))
|
||||
type &= ~MEM_ALLOC;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(compatible->types); i++) {
|
||||
@ -9681,11 +9804,6 @@ static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta)
|
||||
return meta->kfunc_flags & KF_ACQUIRE;
|
||||
}
|
||||
|
||||
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
|
||||
{
|
||||
return meta->kfunc_flags & KF_RET_NULL;
|
||||
}
|
||||
|
||||
static bool is_kfunc_release(struct bpf_kfunc_call_arg_meta *meta)
|
||||
{
|
||||
return meta->kfunc_flags & KF_RELEASE;
|
||||
@ -10001,6 +10119,16 @@ BTF_ID(func, bpf_dynptr_slice)
|
||||
BTF_ID(func, bpf_dynptr_slice_rdwr)
|
||||
BTF_ID(func, bpf_dynptr_clone)
|
||||
|
||||
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
|
||||
{
|
||||
if (meta->func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl] &&
|
||||
meta->arg_owning_ref) {
|
||||
return false;
|
||||
}
|
||||
|
||||
return meta->kfunc_flags & KF_RET_NULL;
|
||||
}
|
||||
|
||||
static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta)
|
||||
{
|
||||
return meta->func_id == special_kfunc_list[KF_bpf_rcu_read_lock];
|
||||
@ -10478,6 +10606,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
|
||||
node_off, btf_name_by_offset(reg->btf, t->name_off));
|
||||
return -EINVAL;
|
||||
}
|
||||
meta->arg_btf = reg->btf;
|
||||
meta->arg_btf_id = reg->btf_id;
|
||||
|
||||
if (node_off != field->graph_root.node_offset) {
|
||||
verbose(env, "arg#1 offset=%d, but expected %s at offset=%d in struct %s\n",
|
||||
@ -10881,10 +11011,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
|
||||
meta->subprogno = reg->subprogno;
|
||||
break;
|
||||
case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
|
||||
if (!type_is_ptr_alloc_obj(reg->type) && !type_is_non_owning_ref(reg->type)) {
|
||||
if (!type_is_ptr_alloc_obj(reg->type)) {
|
||||
verbose(env, "arg#%d is neither owning or non-owning ref\n", i);
|
||||
return -EINVAL;
|
||||
}
|
||||
if (!type_is_non_owning_ref(reg->type))
|
||||
meta->arg_owning_ref = true;
|
||||
|
||||
rec = reg_btf_record(reg);
|
||||
if (!rec) {
|
||||
@ -11047,6 +11179,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
|
||||
meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
|
||||
release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
|
||||
insn_aux->insert_off = regs[BPF_REG_2].off;
|
||||
insn_aux->kptr_struct_meta = btf_find_struct_meta(meta.arg_btf, meta.arg_btf_id);
|
||||
err = ref_convert_owning_non_owning(env, release_ref_obj_id);
|
||||
if (err) {
|
||||
verbose(env, "kfunc %s#%d conversion of owning ref to non-owning failed\n",
|
||||
@ -12804,12 +12937,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
|
||||
if (BPF_SRC(insn->code) == BPF_X) {
|
||||
struct bpf_reg_state *src_reg = regs + insn->src_reg;
|
||||
struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
|
||||
bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
|
||||
!tnum_is_const(src_reg->var_off);
|
||||
|
||||
if (BPF_CLASS(insn->code) == BPF_ALU64) {
|
||||
/* case: R1 = R2
|
||||
* copy register state to dest reg
|
||||
*/
|
||||
if (src_reg->type == SCALAR_VALUE && !src_reg->id)
|
||||
if (need_id)
|
||||
/* Assign src and dst registers the same ID
|
||||
* that will be used by find_equal_scalars()
|
||||
* to propagate min/max range.
|
||||
@ -12828,7 +12963,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
|
||||
} else if (src_reg->type == SCALAR_VALUE) {
|
||||
bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
|
||||
|
||||
if (is_src_reg_u32 && !src_reg->id)
|
||||
if (is_src_reg_u32 && need_id)
|
||||
src_reg->id = ++env->id_gen;
|
||||
copy_register_state(dst_reg, src_reg);
|
||||
/* Make sure ID is cleared if src_reg is not in u32 range otherwise
|
||||
@ -13160,7 +13295,7 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode,
|
||||
bool is_jmp32)
|
||||
{
|
||||
if (__is_pointer_value(false, reg)) {
|
||||
if (!reg_type_not_null(reg->type))
|
||||
if (!reg_not_null(reg))
|
||||
return -1;
|
||||
|
||||
/* If pointer is valid tests against zero will fail so we can
|
||||
@ -14984,8 +15119,9 @@ static bool range_within(struct bpf_reg_state *old,
|
||||
* So we look through our idmap to see if this old id has been seen before. If
|
||||
* so, we require the new id to match; otherwise, we add the id pair to the map.
|
||||
*/
|
||||
static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
|
||||
static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
|
||||
{
|
||||
struct bpf_id_pair *map = idmap->map;
|
||||
unsigned int i;
|
||||
|
||||
/* either both IDs should be set or both should be zero */
|
||||
@ -14996,20 +15132,34 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
|
||||
return true;
|
||||
|
||||
for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
|
||||
if (!idmap[i].old) {
|
||||
if (!map[i].old) {
|
||||
/* Reached an empty slot; haven't seen this id before */
|
||||
idmap[i].old = old_id;
|
||||
idmap[i].cur = cur_id;
|
||||
map[i].old = old_id;
|
||||
map[i].cur = cur_id;
|
||||
return true;
|
||||
}
|
||||
if (idmap[i].old == old_id)
|
||||
return idmap[i].cur == cur_id;
|
||||
if (map[i].old == old_id)
|
||||
return map[i].cur == cur_id;
|
||||
if (map[i].cur == cur_id)
|
||||
return false;
|
||||
}
|
||||
/* We ran out of idmap slots, which should be impossible */
|
||||
WARN_ON_ONCE(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
/* Similar to check_ids(), but allocate a unique temporary ID
|
||||
* for 'old_id' or 'cur_id' of zero.
|
||||
* This makes pairs like '0 vs unique ID', 'unique ID vs 0' valid.
|
||||
*/
|
||||
static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
|
||||
{
|
||||
old_id = old_id ? old_id : ++idmap->tmp_id_gen;
|
||||
cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
|
||||
|
||||
return check_ids(old_id, cur_id, idmap);
|
||||
}
|
||||
|
||||
static void clean_func_state(struct bpf_verifier_env *env,
|
||||
struct bpf_func_state *st)
|
||||
{
|
||||
@ -15108,7 +15258,7 @@ next:
|
||||
|
||||
static bool regs_exact(const struct bpf_reg_state *rold,
|
||||
const struct bpf_reg_state *rcur,
|
||||
struct bpf_id_pair *idmap)
|
||||
struct bpf_idmap *idmap)
|
||||
{
|
||||
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
|
||||
check_ids(rold->id, rcur->id, idmap) &&
|
||||
@ -15117,7 +15267,7 @@ static bool regs_exact(const struct bpf_reg_state *rold,
|
||||
|
||||
/* Returns true if (rold safe implies rcur safe) */
|
||||
static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
|
||||
struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
|
||||
struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
|
||||
{
|
||||
if (!(rold->live & REG_LIVE_READ))
|
||||
/* explored state didn't use this */
|
||||
@ -15154,15 +15304,42 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
|
||||
|
||||
switch (base_type(rold->type)) {
|
||||
case SCALAR_VALUE:
|
||||
if (regs_exact(rold, rcur, idmap))
|
||||
return true;
|
||||
if (env->explore_alu_limits)
|
||||
return false;
|
||||
if (env->explore_alu_limits) {
|
||||
/* explore_alu_limits disables tnum_in() and range_within()
|
||||
* logic and requires everything to be strict
|
||||
*/
|
||||
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
|
||||
check_scalar_ids(rold->id, rcur->id, idmap);
|
||||
}
|
||||
if (!rold->precise)
|
||||
return true;
|
||||
/* new val must satisfy old val knowledge */
|
||||
/* Why check_ids() for scalar registers?
|
||||
*
|
||||
* Consider the following BPF code:
|
||||
* 1: r6 = ... unbound scalar, ID=a ...
|
||||
* 2: r7 = ... unbound scalar, ID=b ...
|
||||
* 3: if (r6 > r7) goto +1
|
||||
* 4: r6 = r7
|
||||
* 5: if (r6 > X) goto ...
|
||||
* 6: ... memory operation using r7 ...
|
||||
*
|
||||
* First verification path is [1-6]:
|
||||
* - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
|
||||
* - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
|
||||
* r7 <= X, because r6 and r7 share same id.
|
||||
* Next verification path is [1-4, 6].
|
||||
*
|
||||
* Instruction (6) would be reached in two states:
|
||||
* I. r6{.id=b}, r7{.id=b} via path 1-6;
|
||||
* II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
|
||||
*
|
||||
* Use check_ids() to distinguish these states.
|
||||
* ---
|
||||
* Also verify that new value satisfies old value range knowledge.
|
||||
*/
|
||||
return range_within(rold, rcur) &&
|
||||
tnum_in(rold->var_off, rcur->var_off);
|
||||
tnum_in(rold->var_off, rcur->var_off) &&
|
||||
check_scalar_ids(rold->id, rcur->id, idmap);
|
||||
case PTR_TO_MAP_KEY:
|
||||
case PTR_TO_MAP_VALUE:
|
||||
case PTR_TO_MEM:
|
||||
@ -15208,7 +15385,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
|
||||
}
|
||||
|
||||
static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
|
||||
struct bpf_func_state *cur, struct bpf_id_pair *idmap)
|
||||
struct bpf_func_state *cur, struct bpf_idmap *idmap)
|
||||
{
|
||||
int i, spi;
|
||||
|
||||
@ -15311,7 +15488,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
|
||||
}
|
||||
|
||||
static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
|
||||
struct bpf_id_pair *idmap)
|
||||
struct bpf_idmap *idmap)
|
||||
{
|
||||
int i;
|
||||
|
||||
@ -15359,13 +15536,13 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
|
||||
|
||||
for (i = 0; i < MAX_BPF_REG; i++)
|
||||
if (!regsafe(env, &old->regs[i], &cur->regs[i],
|
||||
env->idmap_scratch))
|
||||
&env->idmap_scratch))
|
||||
return false;
|
||||
|
||||
if (!stacksafe(env, old, cur, env->idmap_scratch))
|
||||
if (!stacksafe(env, old, cur, &env->idmap_scratch))
|
||||
return false;
|
||||
|
||||
if (!refsafe(old, cur, env->idmap_scratch))
|
||||
if (!refsafe(old, cur, &env->idmap_scratch))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
@ -15380,7 +15557,8 @@ static bool states_equal(struct bpf_verifier_env *env,
|
||||
if (old->curframe != cur->curframe)
|
||||
return false;
|
||||
|
||||
memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
|
||||
env->idmap_scratch.tmp_id_gen = env->id_gen;
|
||||
memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
|
||||
|
||||
/* Verification state from speculative execution simulation
|
||||
* must never prune a non-speculative execution one.
|
||||
@ -15398,7 +15576,7 @@ static bool states_equal(struct bpf_verifier_env *env,
|
||||
return false;
|
||||
|
||||
if (old->active_lock.id &&
|
||||
!check_ids(old->active_lock.id, cur->active_lock.id, env->idmap_scratch))
|
||||
!check_ids(old->active_lock.id, cur->active_lock.id, &env->idmap_scratch))
|
||||
return false;
|
||||
|
||||
if (old->active_rcu_lock != cur->active_rcu_lock)
|
||||
|
@ -15056,8 +15056,7 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
|
||||
int which, err;
|
||||
|
||||
/* Allocate the table of programs to be used for tall calls */
|
||||
progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
|
||||
GFP_KERNEL);
|
||||
progs = kzalloc(struct_size(progs, ptrs, ntests + 1), GFP_KERNEL);
|
||||
if (!progs)
|
||||
goto out_nomem;
|
||||
|
||||
|
@@ -21,7 +21,7 @@ static void shutdown_umh(void)
	if (tgid) {
		kill_pid(tgid, SIGKILL, 1);
		wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
		bpfilter_umh_cleanup(info);
		umd_cleanup_helper(info);
	}
}
@@ -3948,20 +3948,21 @@ void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,

void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	u32 size = xdp->data_end - xdp->data;
	struct skb_shared_info *sinfo;
	void *addr = xdp->data;
	int i;

	if (unlikely(offset > 0xffff || len > 0xffff))
		return ERR_PTR(-EFAULT);

	if (offset + len > xdp_get_buff_len(xdp))
	if (unlikely(offset + len > xdp_get_buff_len(xdp)))
		return ERR_PTR(-EINVAL);

	if (offset < size) /* linear area */
	if (likely(offset < size)) /* linear area */
		goto out;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	offset -= size;
	for (i = 0; i < sinfo->nr_frags; i++) { /* paged area */
		u32 frag_size = skb_frag_size(&sinfo->frags[i]);
|
||||
u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN;
|
||||
struct fib_table *tb;
|
||||
|
||||
if (flags & BPF_FIB_LOOKUP_TBID) {
|
||||
tbid = params->tbid;
|
||||
/* zero out for vlan output */
|
||||
params->tbid = 0;
|
||||
}
|
||||
|
||||
tb = fib_get_table(net, tbid);
|
||||
if (unlikely(!tb))
|
||||
return BPF_FIB_LKUP_RET_NOT_FWDED;
|
||||
@ -5936,6 +5943,12 @@ static int bpf_ipv6_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
|
||||
u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN;
|
||||
struct fib6_table *tb;
|
||||
|
||||
if (flags & BPF_FIB_LOOKUP_TBID) {
|
||||
tbid = params->tbid;
|
||||
/* zero out for vlan output */
|
||||
params->tbid = 0;
|
||||
}
|
||||
|
||||
tb = ipv6_stub->fib6_get_table(net, tbid);
|
||||
if (unlikely(!tb))
|
||||
return BPF_FIB_LKUP_RET_NOT_FWDED;
|
||||
@ -6008,7 +6021,7 @@ set_fwd_params:
|
||||
#endif
|
||||
|
||||
#define BPF_FIB_LOOKUP_MASK (BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_OUTPUT | \
|
||||
BPF_FIB_LOOKUP_SKIP_NEIGH)
|
||||
BPF_FIB_LOOKUP_SKIP_NEIGH | BPF_FIB_LOOKUP_TBID)
|
||||
|
||||
BPF_CALL_4(bpf_xdp_fib_lookup, struct xdp_buff *, ctx,
|
||||
struct bpf_fib_lookup *, params, int, plen, u32, flags)
|
||||
@@ -6555,12 +6568,11 @@ static struct sock *sk_lookup(struct net *net, struct bpf_sock_tuple *tuple,
 static struct sock *
 __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
 		 struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
-		 u64 flags)
+		 u64 flags, int sdif)
 {
 	struct sock *sk = NULL;
 	struct net *net;
 	u8 family;
-	int sdif;

 	if (len == sizeof(tuple->ipv4))
 		family = AF_INET;
@@ -6572,10 +6584,12 @@ __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
 	if (unlikely(flags || !((s32)netns_id < 0 || netns_id <= S32_MAX)))
 		goto out;

-	if (family == AF_INET)
-		sdif = inet_sdif(skb);
-	else
-		sdif = inet6_sdif(skb);
+	if (sdif < 0) {
+		if (family == AF_INET)
+			sdif = inet_sdif(skb);
+		else
+			sdif = inet6_sdif(skb);
+	}

 	if ((s32)netns_id < 0) {
 		net = caller_net;
@@ -6595,10 +6609,11 @@ out:
 static struct sock *
 __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
 		struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
-		u64 flags)
+		u64 flags, int sdif)
 {
 	struct sock *sk = __bpf_skc_lookup(skb, tuple, len, caller_net,
-					   ifindex, proto, netns_id, flags);
+					   ifindex, proto, netns_id, flags,
+					   sdif);

 	if (sk) {
 		struct sock *sk2 = sk_to_full_sk(sk);
@@ -6638,7 +6653,7 @@ bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
 	}

 	return __bpf_skc_lookup(skb, tuple, len, caller_net, ifindex, proto,
-				netns_id, flags);
+				netns_id, flags, -1);
 }

 static struct sock *
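The extra sdif argument lets tc callers pass the VRF scope taken from the skb's device instead of deriving it from the IP control block, which is not populated at the tc hook; -1 keeps the old "derive it from the skb" behaviour for existing callers. A simplified sketch of what a dev_sdif()-style helper does (this is an illustration; the exact helper body and config guard in the kernel may differ):

/* If the receiving device is enslaved to an L3 master device (a VRF),
 * scope the socket lookup by that slave's ifindex; otherwise no scope.
 */
static inline int dev_sdif(const struct net_device *dev)
{
#ifdef CONFIG_NET_L3_MASTER_DEV
	if (netif_is_l3_slave(dev))
		return dev->ifindex;
#endif
	return 0;
}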
@@ -6727,6 +6742,78 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };

+BPF_CALL_5(bpf_tc_skc_lookup_tcp, struct sk_buff *, skb,
+	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
+{
+	struct net_device *dev = skb->dev;
+	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
+	struct net *caller_net = dev_net(dev);
+
+	return (unsigned long)__bpf_skc_lookup(skb, tuple, len, caller_net,
+					       ifindex, IPPROTO_TCP, netns_id,
+					       flags, sdif);
+}
+
+static const struct bpf_func_proto bpf_tc_skc_lookup_tcp_proto = {
+	.func		= bpf_tc_skc_lookup_tcp,
+	.gpl_only	= false,
+	.pkt_access	= true,
+	.ret_type	= RET_PTR_TO_SOCK_COMMON_OR_NULL,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+	.arg3_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_ANYTHING,
+	.arg5_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_5(bpf_tc_sk_lookup_tcp, struct sk_buff *, skb,
+	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
+{
+	struct net_device *dev = skb->dev;
+	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
+	struct net *caller_net = dev_net(dev);
+
+	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
+					      ifindex, IPPROTO_TCP, netns_id,
+					      flags, sdif);
+}
+
+static const struct bpf_func_proto bpf_tc_sk_lookup_tcp_proto = {
+	.func		= bpf_tc_sk_lookup_tcp,
+	.gpl_only	= false,
+	.pkt_access	= true,
+	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+	.arg3_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_ANYTHING,
+	.arg5_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_5(bpf_tc_sk_lookup_udp, struct sk_buff *, skb,
+	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
+{
+	struct net_device *dev = skb->dev;
+	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
+	struct net *caller_net = dev_net(dev);
+
+	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
+					      ifindex, IPPROTO_UDP, netns_id,
+					      flags, sdif);
+}
+
+static const struct bpf_func_proto bpf_tc_sk_lookup_udp_proto = {
+	.func		= bpf_tc_sk_lookup_udp,
+	.gpl_only	= false,
+	.pkt_access	= true,
+	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+	.arg3_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_ANYTHING,
+	.arg5_type	= ARG_ANYTHING,
+};
+
 BPF_CALL_1(bpf_sk_release, struct sock *, sk)
 {
 	if (sk && sk_is_refcounted(sk))
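With these tc-specific helper protos wired up (see the func_proto switch further below), an ordinary tc classifier using the existing helper IDs gets VRF-aware lookups transparently. An illustrative tc program (tuple population is elided; names are only for the example):

/* Looks up the TCP socket for a packet arriving on a tc ingress hook.
 * With this series the lookup is scoped by the VRF the ingress device
 * belongs to, so sockets bound to the VRF device are found.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int vrf_aware_lookup(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {};
	struct bpf_sock *sk;

	/* ... fill tuple.ipv4.{saddr,daddr,sport,dport} from the packet ... */

	sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
			       BPF_F_CURRENT_NETNS, 0);
	if (!sk)
		return TC_ACT_OK;

	/* ... use the socket ... */
	bpf_sk_release(sk);
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";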
@ -6744,12 +6831,13 @@ static const struct bpf_func_proto bpf_sk_release_proto = {
|
||||
BPF_CALL_5(bpf_xdp_sk_lookup_udp, struct xdp_buff *, ctx,
|
||||
struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
|
||||
{
|
||||
struct net *caller_net = dev_net(ctx->rxq->dev);
|
||||
int ifindex = ctx->rxq->dev->ifindex;
|
||||
struct net_device *dev = ctx->rxq->dev;
|
||||
int ifindex = dev->ifindex, sdif = dev_sdif(dev);
|
||||
struct net *caller_net = dev_net(dev);
|
||||
|
||||
return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
|
||||
ifindex, IPPROTO_UDP, netns_id,
|
||||
flags);
|
||||
flags, sdif);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
|
||||
@ -6767,12 +6855,13 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
|
||||
BPF_CALL_5(bpf_xdp_skc_lookup_tcp, struct xdp_buff *, ctx,
|
||||
struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
|
||||
{
|
||||
struct net *caller_net = dev_net(ctx->rxq->dev);
|
||||
int ifindex = ctx->rxq->dev->ifindex;
|
||||
struct net_device *dev = ctx->rxq->dev;
|
||||
int ifindex = dev->ifindex, sdif = dev_sdif(dev);
|
||||
struct net *caller_net = dev_net(dev);
|
||||
|
||||
return (unsigned long)__bpf_skc_lookup(NULL, tuple, len, caller_net,
|
||||
ifindex, IPPROTO_TCP, netns_id,
|
||||
flags);
|
||||
flags, sdif);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
|
||||
@ -6790,12 +6879,13 @@ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
|
||||
BPF_CALL_5(bpf_xdp_sk_lookup_tcp, struct xdp_buff *, ctx,
|
||||
struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
|
||||
{
|
||||
struct net *caller_net = dev_net(ctx->rxq->dev);
|
||||
int ifindex = ctx->rxq->dev->ifindex;
|
||||
struct net_device *dev = ctx->rxq->dev;
|
||||
int ifindex = dev->ifindex, sdif = dev_sdif(dev);
|
||||
struct net *caller_net = dev_net(dev);
|
||||
|
||||
return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
|
||||
ifindex, IPPROTO_TCP, netns_id,
|
||||
flags);
|
||||
flags, sdif);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_xdp_sk_lookup_tcp_proto = {
|
||||
@ -6815,7 +6905,8 @@ BPF_CALL_5(bpf_sock_addr_skc_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
|
||||
{
|
||||
return (unsigned long)__bpf_skc_lookup(NULL, tuple, len,
|
||||
sock_net(ctx->sk), 0,
|
||||
IPPROTO_TCP, netns_id, flags);
|
||||
IPPROTO_TCP, netns_id, flags,
|
||||
-1);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_sock_addr_skc_lookup_tcp_proto = {
|
||||
@ -6834,7 +6925,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
|
||||
{
|
||||
return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
|
||||
sock_net(ctx->sk), 0, IPPROTO_TCP,
|
||||
netns_id, flags);
|
||||
netns_id, flags, -1);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_sock_addr_sk_lookup_tcp_proto = {
|
||||
@ -6853,7 +6944,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_udp, struct bpf_sock_addr_kern *, ctx,
|
||||
{
|
||||
return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
|
||||
sock_net(ctx->sk), 0, IPPROTO_UDP,
|
||||
netns_id, flags);
|
||||
netns_id, flags, -1);
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto bpf_sock_addr_sk_lookup_udp_proto = {
|
||||
@@ -7982,9 +8073,9 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 #endif
 #ifdef CONFIG_INET
 	case BPF_FUNC_sk_lookup_tcp:
-		return &bpf_sk_lookup_tcp_proto;
+		return &bpf_tc_sk_lookup_tcp_proto;
 	case BPF_FUNC_sk_lookup_udp:
-		return &bpf_sk_lookup_udp_proto;
+		return &bpf_tc_sk_lookup_udp_proto;
 	case BPF_FUNC_sk_release:
 		return &bpf_sk_release_proto;
 	case BPF_FUNC_tcp_sock:
@@ -7992,7 +8083,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_get_listener_sock:
 		return &bpf_get_listener_sock_proto;
 	case BPF_FUNC_skc_lookup_tcp:
-		return &bpf_skc_lookup_tcp_proto;
+		return &bpf_tc_skc_lookup_tcp_proto;
 	case BPF_FUNC_tcp_check_syncookie:
 		return &bpf_tcp_check_syncookie_proto;
 	case BPF_FUNC_skb_ecn_set_ce:
@@ -32,8 +32,6 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_stab *stab;

-	if (!capable(CAP_NET_ADMIN))
-		return ERR_PTR(-EPERM);
 	if (attr->max_entries == 0 ||
 	    attr->key_size != 4 ||
 	    (attr->value_size != sizeof(u32) &&
@@ -1085,8 +1083,6 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 	struct bpf_shtab *htab;
 	int i, err;

-	if (!capable(CAP_NET_ADMIN))
-		return ERR_PTR(-EPERM);
 	if (attr->max_entries == 0 ||
 	    attr->key_size == 0 ||
 	    (attr->value_size != sizeof(u32) &&
@@ -12,15 +12,6 @@
 struct bpfilter_umh_ops bpfilter_ops;
 EXPORT_SYMBOL_GPL(bpfilter_ops);

-void bpfilter_umh_cleanup(struct umd_info *info)
-{
-	fput(info->pipe_to_umh);
-	fput(info->pipe_from_umh);
-	put_pid(info->tgid);
-	info->tgid = NULL;
-}
-EXPORT_SYMBOL_GPL(bpfilter_umh_cleanup);
-
 static int bpfilter_mbox_request(struct sock *sk, int optname, sockptr_t optval,
 				 unsigned int optlen, bool is_set)
 {
@@ -38,7 +29,7 @@ static int bpfilter_mbox_request(struct sock *sk, int optname, sockptr_t optval,
 	}
 	if (bpfilter_ops.info.tgid &&
 	    thread_group_exited(bpfilter_ops.info.tgid))
-		bpfilter_umh_cleanup(&bpfilter_ops.info);
+		umd_cleanup_helper(&bpfilter_ops.info);

 	if (!bpfilter_ops.info.tgid) {
 		err = bpfilter_ops.start();
@@ -5,7 +5,6 @@

 #include <linux/bpf.h>
 #include <linux/filter.h>
-#include <linux/capability.h>
 #include <net/xdp_sock.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
@@ -68,9 +67,6 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	int numa_node;
 	u64 size;

-	if (!capable(CAP_NET_ADMIN))
-		return ERR_PTR(-EPERM);
-
 	if (attr->max_entries == 0 || attr->key_size != 4 ||
 	    attr->value_size != 4 ||
 	    attr->map_flags & ~(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY))
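These removals go hand in hand with the map_create() centralization mentioned in the cover letter: the capability check now happens once in the generic map creation path instead of in each map type's alloc callback. From userspace nothing changes except where the -EPERM originates; a small libbpf illustration (map parameters chosen to be otherwise valid for a sockmap):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>

int main(void)
{
	/* Sockmaps require CAP_NET_ADMIN (plus CAP_BPF); without it the
	 * kernel now rejects the command centrally in map_create().
	 */
	int fd = bpf_map_create(BPF_MAP_TYPE_SOCKMAP, "sockmap_demo",
				sizeof(int), sizeof(int), 64, NULL);

	if (fd < 0)
		printf("bpf_map_create failed: %s\n", strerror(errno));
	else
		printf("created sockmap, fd=%d\n", fd);
	return 0;
}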
@@ -39,7 +39,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
 	return ip6h->nexthdr;
 }

-#define XDPBUFSIZE	64
+#define XDPBUFSIZE	60
 SEC("xdp.frags")
 int xdp_prog1(struct xdp_md *ctx)
 {
@@ -55,7 +55,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
 	return ip6h->nexthdr;
 }

-#define XDPBUFSIZE	64
+#define XDPBUFSIZE	60
 SEC("xdp.frags")
 int xdp_prog1(struct xdp_md *ctx)
 {
@@ -67,7 +67,7 @@ $(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OU
 LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null)
 LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf)

-HOSTCFLAGS += -g \
+HOSTCFLAGS_resolve_btfids += -g \
	-I$(srctree)/tools/include \
	-I$(srctree)/tools/include/uapi \
	-I$(LIBBPF_INCLUDE) \
@@ -76,7 +76,7 @@ HOSTCFLAGS += -g \
 LIBS = $(LIBELF_LIBS) -lz

-export srctree OUTPUT HOSTCFLAGS Q HOSTCC HOSTLD HOSTAR
+export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR
 include $(srctree)/tools/build/Makefile.include

 $(BINARY_IN): fixdep FORCE prepare | $(OUTPUT)
@@ -3178,6 +3178,10 @@ union bpf_attr {
 *		**BPF_FIB_LOOKUP_DIRECT**
 *			Do a direct table lookup vs full lookup using FIB
 *			rules.
+*		**BPF_FIB_LOOKUP_TBID**
+*			Used with BPF_FIB_LOOKUP_DIRECT.
+*			Use the routing table ID present in *params*->tbid
+*			for the fib lookup.
 *		**BPF_FIB_LOOKUP_OUTPUT**
 *			Perform lookup from an egress perspective (default is
 *			ingress).
@@ -6832,6 +6836,7 @@ enum {
	BPF_FIB_LOOKUP_DIRECT     = (1U << 0),
	BPF_FIB_LOOKUP_OUTPUT     = (1U << 1),
	BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
+	BPF_FIB_LOOKUP_TBID       = (1U << 3),
 };

 enum {
@@ -6892,9 +6897,19 @@ struct bpf_fib_lookup {
		__u32		ipv6_dst[4];  /* in6_addr; network order */
	};

-	/* output */
-	__be16	h_vlan_proto;
-	__be16	h_vlan_TCI;
+	union {
+		struct {
+			/* output */
+			__be16	h_vlan_proto;
+			__be16	h_vlan_TCI;
+		};
+		/* input: when accompanied with the
+		 * 'BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID` flags, a
+		 * specific routing table to use for the fib lookup.
+		 */
+		__u32	tbid;
+	};
+
	__u8	smac[6];	/* ETH_ALEN */
	__u8	dmac[6];	/* ETH_ALEN */
 };
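Putting the UAPI pieces together, a program selects a specific routing table by setting params->tbid and passing BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID. An illustrative tc program (table ID 100 and the field setup are example values; most of the lookup key is elided):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#ifndef AF_INET
#define AF_INET 2
#endif

SEC("tc")
int fib_lookup_tbid(struct __sk_buff *skb)
{
	struct bpf_fib_lookup params = {};
	int ret;

	params.family  = AF_INET;	/* also fill l4_protocol, addresses, ... */
	params.ifindex = skb->ingress_ifindex;
	params.tbid    = 100;		/* input only with the TBID flag set */

	ret = bpf_fib_lookup(skb, &params, sizeof(params),
			     BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID);
	if (ret != BPF_FIB_LKUP_RET_SUCCESS)
		return TC_ACT_SHOT;

	/* params.smac/dmac now hold the resolved MAC addresses */
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";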
@@ -17,7 +17,7 @@ struct env env = {
	.duration_sec = 5,
	.affinity = false,
	.quiet = false,
-	.consumer_cnt = 1,
+	.consumer_cnt = 0,
	.producer_cnt = 1,
 };
@@ -441,12 +441,14 @@ static void setup_timer()
 static void set_thread_affinity(pthread_t thread, int cpu)
 {
	cpu_set_t cpuset;
+	int err;

	CPU_ZERO(&cpuset);
	CPU_SET(cpu, &cpuset);
-	if (pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset)) {
+	err = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+	if (err) {
		fprintf(stderr, "setting affinity to CPU #%d failed: %d\n",
-			cpu, errno);
+			cpu, -err);
		exit(1);
	}
 }
@@ -467,7 +469,7 @@ static int next_cpu(struct cpu_set *cpu_set)
		exit(1);
	}

-	return cpu_set->next_cpu++;
+	return cpu_set->next_cpu++ % env.nr_cpus;
 }

 static struct bench_state {
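The modulo guards against handing out CPU numbers beyond the machine when more producers than CPUs are requested, which previously made pthread_setaffinity_np() fail. A trivial userspace illustration of the wrap-around (values are made up):

#include <stdio.h>

int main(void)
{
	int nr_cpus = 4, next_cpu = 0;

	/* Ten threads on a four-CPU box: affinity cycles 0,1,2,3,0,1,... */
	for (int thread = 0; thread < 10; thread++)
		printf("thread %d -> cpu %d\n", thread, next_cpu++ % nr_cpus);
	return 0;
}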
@@ -605,7 +607,7 @@ static void setup_benchmark(void)
				     bench->consumer_thread, (void *)(long)i);
		if (err) {
			fprintf(stderr, "failed to create consumer thread #%d: %d\n",
-				i, -errno);
+				i, -err);
			exit(1);
		}
		if (env.affinity)
@@ -624,7 +626,7 @@ static void setup_benchmark(void)
				     bench->producer_thread, (void *)(long)i);
		if (err) {
			fprintf(stderr, "failed to create producer thread #%d: %d\n",
-				i, -errno);
+				i, -err);
			exit(1);
		}
		if (env.affinity)
@@ -657,6 +659,7 @@ static void collect_measurements(long delta_ns) {

 int main(int argc, char **argv)
 {
+	env.nr_cpus = get_nprocs();
	parse_cmdline_args_init(argc, argv);

	if (env.list) {
@@ -27,6 +27,7 @@ struct env {
	bool quiet;
	int consumer_cnt;
	int producer_cnt;
+	int nr_cpus;
	struct cpu_set prod_cpus;
	struct cpu_set cons_cpus;
 };
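With consumer_cnt defaulting to 0, benches that have nothing to consume can drop their dummy consumer threads entirely, as the removals in the following hunks show. A sketch of what such a bench definition looks like afterwards (field set inferred from the surrounding diff; the names are placeholders, not an actual benchmark in the tree):

/* No .consumer_thread: the framework simply creates zero consumers
 * because validate() insists on -c 0 for this benchmark.
 */
const struct bench bench_example_noconsumer = {
	.name = "example-noconsumer",
	.validate = validate,		/* rejects any consumer_cnt != 0 */
	.setup = setup,
	.producer_thread = producer,
	.measure = measure,
	.report_progress = hits_drops_report_progress,
	.report_final = hits_drops_report_final,
};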
@ -107,9 +107,9 @@ const struct argp bench_bloom_map_argp = {
|
||||
|
||||
static void validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr,
|
||||
"The bloom filter benchmarks do not support multi-consumer use\n");
|
||||
"The bloom filter benchmarks do not support consumer\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -421,18 +421,12 @@ static void measure(struct bench_res *res)
|
||||
last_false_hits = total_false_hits;
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
const struct bench bench_bloom_lookup = {
|
||||
.name = "bloom-lookup",
|
||||
.argp = &bench_bloom_map_argp,
|
||||
.validate = validate,
|
||||
.setup = bloom_lookup_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -444,7 +438,6 @@ const struct bench bench_bloom_update = {
|
||||
.validate = validate,
|
||||
.setup = bloom_update_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -456,7 +449,6 @@ const struct bench bench_bloom_false_positive = {
|
||||
.validate = validate,
|
||||
.setup = false_positive_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = false_hits_report_progress,
|
||||
.report_final = false_hits_report_final,
|
||||
@ -468,7 +460,6 @@ const struct bench bench_hashmap_without_bloom = {
|
||||
.validate = validate,
|
||||
.setup = hashmap_no_bloom_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -480,7 +471,6 @@ const struct bench bench_hashmap_with_bloom = {
|
||||
.validate = validate,
|
||||
.setup = hashmap_with_bloom_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
|
@ -14,8 +14,8 @@ static struct ctx {
|
||||
|
||||
static void validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -30,11 +30,6 @@ static void *producer(void *input)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void measure(struct bench_res *res)
|
||||
{
|
||||
}
|
||||
@ -88,7 +83,6 @@ const struct bench bench_bpf_hashmap_full_update = {
|
||||
.validate = validate,
|
||||
.setup = setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = NULL,
|
||||
.report_final = hashmap_report_final,
|
||||
|
@ -113,8 +113,8 @@ const struct argp bench_hashmap_lookup_argp = {
|
||||
|
||||
static void validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
@ -134,11 +134,6 @@ static void *producer(void *input)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void measure(struct bench_res *res)
|
||||
{
|
||||
}
|
||||
@ -276,7 +271,6 @@ const struct bench bench_bpf_hashmap_lookup = {
|
||||
.validate = validate,
|
||||
.setup = setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = NULL,
|
||||
.report_final = hashmap_report_final,
|
||||
|
@ -47,8 +47,8 @@ const struct argp bench_bpf_loop_argp = {
|
||||
|
||||
static void validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -62,11 +62,6 @@ static void *producer(void *input)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void measure(struct bench_res *res)
|
||||
{
|
||||
res->hits = atomic_swap(&ctx.skel->bss->hits, 0);
|
||||
@ -99,7 +94,6 @@ const struct bench bench_bpf_loop = {
|
||||
.validate = validate,
|
||||
.setup = setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = ops_report_progress,
|
||||
.report_final = ops_report_final,
|
||||
|
@ -18,11 +18,6 @@ static void *count_global_producer(void *input)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *count_global_consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void count_global_measure(struct bench_res *res)
|
||||
{
|
||||
struct count_global_ctx *ctx = &count_global_ctx;
|
||||
@ -40,7 +35,7 @@ static void count_local_setup(void)
|
||||
{
|
||||
struct count_local_ctx *ctx = &count_local_ctx;
|
||||
|
||||
ctx->hits = calloc(env.consumer_cnt, sizeof(*ctx->hits));
|
||||
ctx->hits = calloc(env.producer_cnt, sizeof(*ctx->hits));
|
||||
if (!ctx->hits)
|
||||
exit(1);
|
||||
}
|
||||
@ -56,11 +51,6 @@ static void *count_local_producer(void *input)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *count_local_consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void count_local_measure(struct bench_res *res)
|
||||
{
|
||||
struct count_local_ctx *ctx = &count_local_ctx;
|
||||
@ -74,7 +64,6 @@ static void count_local_measure(struct bench_res *res)
|
||||
const struct bench bench_count_global = {
|
||||
.name = "count-global",
|
||||
.producer_thread = count_global_producer,
|
||||
.consumer_thread = count_global_consumer,
|
||||
.measure = count_global_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -84,7 +73,6 @@ const struct bench bench_count_local = {
|
||||
.name = "count-local",
|
||||
.setup = count_local_setup,
|
||||
.producer_thread = count_local_producer,
|
||||
.consumer_thread = count_local_consumer,
|
||||
.measure = count_local_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
|
@ -74,8 +74,8 @@ static void validate(void)
|
||||
fprintf(stderr, "benchmark doesn't support multi-producer!\n");
|
||||
exit(1);
|
||||
}
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
@ -230,11 +230,6 @@ static inline void trigger_bpf_program(void)
|
||||
syscall(__NR_getpgid);
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *producer(void *input)
|
||||
{
|
||||
while (true)
|
||||
@ -259,7 +254,6 @@ const struct bench bench_local_storage_cache_seq_get = {
|
||||
.validate = validate,
|
||||
.setup = local_storage_cache_get_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = local_storage_report_progress,
|
||||
.report_final = local_storage_report_final,
|
||||
@ -271,7 +265,6 @@ const struct bench bench_local_storage_cache_interleaved_get = {
|
||||
.validate = validate,
|
||||
.setup = local_storage_cache_get_interleaved_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = local_storage_report_progress,
|
||||
.report_final = local_storage_report_final,
|
||||
@ -283,7 +276,6 @@ const struct bench bench_local_storage_cache_hashmap_control = {
|
||||
.validate = validate,
|
||||
.setup = hashmap_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = local_storage_report_progress,
|
||||
.report_final = local_storage_report_final,
|
||||
|
@ -71,7 +71,7 @@ const struct argp bench_local_storage_create_argp = {
|
||||
|
||||
static void validate(void)
|
||||
{
|
||||
if (env.consumer_cnt > 1) {
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr,
|
||||
"local-storage-create benchmark does not need consumer\n");
|
||||
exit(1);
|
||||
@ -143,11 +143,6 @@ static void measure(struct bench_res *res)
|
||||
res->drops = atomic_swap(&skel->bss->kmalloc_cnts, 0);
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *sk_producer(void *input)
|
||||
{
|
||||
struct thread *t = &threads[(long)(input)];
|
||||
@ -257,7 +252,6 @@ const struct bench bench_local_storage_create = {
|
||||
.validate = validate,
|
||||
.setup = setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = report_progress,
|
||||
.report_final = report_final,
|
||||
|
@ -72,8 +72,8 @@ static void validate(void)
|
||||
fprintf(stderr, "benchmark doesn't support multi-producer!\n");
|
||||
exit(1);
|
||||
}
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
@ -197,11 +197,6 @@ static void measure(struct bench_res *res)
|
||||
ctx.prev_kthread_stime = ticks;
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *producer(void *input)
|
||||
{
|
||||
while (true)
|
||||
@ -262,7 +257,6 @@ const struct bench bench_local_storage_tasks_trace = {
|
||||
.validate = validate,
|
||||
.setup = local_storage_tasks_trace_setup,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = report_progress,
|
||||
.report_final = report_final,
|
||||
|
@ -17,8 +17,8 @@ static void validate(void)
|
||||
fprintf(stderr, "benchmark doesn't support multi-producer!\n");
|
||||
exit(1);
|
||||
}
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -106,17 +106,11 @@ static void setup_fexit(void)
|
||||
attach_bpf(ctx.skel->progs.prog5);
|
||||
}
|
||||
|
||||
static void *consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
const struct bench bench_rename_base = {
|
||||
.name = "rename-base",
|
||||
.validate = validate,
|
||||
.setup = setup_base,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -127,7 +121,6 @@ const struct bench bench_rename_kprobe = {
|
||||
.validate = validate,
|
||||
.setup = setup_kprobe,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -138,7 +131,6 @@ const struct bench bench_rename_kretprobe = {
|
||||
.validate = validate,
|
||||
.setup = setup_kretprobe,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -149,7 +141,6 @@ const struct bench bench_rename_rawtp = {
|
||||
.validate = validate,
|
||||
.setup = setup_rawtp,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -160,7 +151,6 @@ const struct bench bench_rename_fentry = {
|
||||
.validate = validate,
|
||||
.setup = setup_fentry,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -171,7 +161,6 @@ const struct bench bench_rename_fexit = {
|
||||
.validate = validate,
|
||||
.setup = setup_fexit,
|
||||
.producer_thread = producer,
|
||||
.consumer_thread = consumer,
|
||||
.measure = measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
|
@ -96,7 +96,7 @@ static inline void bufs_trigger_batch(void)
|
||||
static void bufs_validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "rb-libbpf benchmark doesn't support multi-consumer!\n");
|
||||
fprintf(stderr, "rb-libbpf benchmark needs one consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
|
@ -50,8 +50,8 @@ const struct argp bench_strncmp_argp = {
|
||||
|
||||
static void strncmp_validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "strncmp benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "strncmp benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -128,11 +128,6 @@ static void *strncmp_producer(void *ctx)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *strncmp_consumer(void *ctx)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void strncmp_measure(struct bench_res *res)
|
||||
{
|
||||
res->hits = atomic_swap(&ctx.skel->bss->hits, 0);
|
||||
@ -144,7 +139,6 @@ const struct bench bench_strncmp_no_helper = {
|
||||
.validate = strncmp_validate,
|
||||
.setup = strncmp_no_helper_setup,
|
||||
.producer_thread = strncmp_producer,
|
||||
.consumer_thread = strncmp_consumer,
|
||||
.measure = strncmp_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -156,7 +150,6 @@ const struct bench bench_strncmp_helper = {
|
||||
.validate = strncmp_validate,
|
||||
.setup = strncmp_helper_setup,
|
||||
.producer_thread = strncmp_producer,
|
||||
.consumer_thread = strncmp_consumer,
|
||||
.measure = strncmp_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
|
@ -13,8 +13,8 @@ static struct counter base_hits;
|
||||
|
||||
static void trigger_validate(void)
|
||||
{
|
||||
if (env.consumer_cnt != 1) {
|
||||
fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
|
||||
if (env.consumer_cnt != 0) {
|
||||
fprintf(stderr, "benchmark doesn't support consumer!\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -103,11 +103,6 @@ static void trigger_fmodret_setup(void)
|
||||
attach_bpf(ctx.skel->progs.bench_trigger_fmodret);
|
||||
}
|
||||
|
||||
static void *trigger_consumer(void *input)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* make sure call is not inlined and not avoided by compiler, so __weak and
|
||||
* inline asm volatile in the body of the function
|
||||
*
|
||||
@ -205,7 +200,6 @@ const struct bench bench_trig_base = {
|
||||
.name = "trig-base",
|
||||
.validate = trigger_validate,
|
||||
.producer_thread = trigger_base_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_base_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -216,7 +210,6 @@ const struct bench bench_trig_tp = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_tp_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -227,7 +220,6 @@ const struct bench bench_trig_rawtp = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_rawtp_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -238,7 +230,6 @@ const struct bench bench_trig_kprobe = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_kprobe_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -249,7 +240,6 @@ const struct bench bench_trig_fentry = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_fentry_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -260,7 +250,6 @@ const struct bench bench_trig_fentry_sleep = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_fentry_sleep_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -271,7 +260,6 @@ const struct bench bench_trig_fmodret = {
|
||||
.validate = trigger_validate,
|
||||
.setup = trigger_fmodret_setup,
|
||||
.producer_thread = trigger_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -281,7 +269,6 @@ const struct bench bench_trig_uprobe_base = {
|
||||
.name = "trig-uprobe-base",
|
||||
.setup = NULL, /* no uprobe/uretprobe is attached */
|
||||
.producer_thread = uprobe_base_producer,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_base_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -291,7 +278,6 @@ const struct bench bench_trig_uprobe_with_nop = {
|
||||
.name = "trig-uprobe-with-nop",
|
||||
.setup = uprobe_setup_with_nop,
|
||||
.producer_thread = uprobe_producer_with_nop,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -301,7 +287,6 @@ const struct bench bench_trig_uretprobe_with_nop = {
|
||||
.name = "trig-uretprobe-with-nop",
|
||||
.setup = uretprobe_setup_with_nop,
|
||||
.producer_thread = uprobe_producer_with_nop,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -311,7 +296,6 @@ const struct bench bench_trig_uprobe_without_nop = {
|
||||
.name = "trig-uprobe-without-nop",
|
||||
.setup = uprobe_setup_without_nop,
|
||||
.producer_thread = uprobe_producer_without_nop,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
@ -321,7 +305,6 @@ const struct bench bench_trig_uretprobe_without_nop = {
|
||||
.name = "trig-uretprobe-without-nop",
|
||||
.setup = uretprobe_setup_without_nop,
|
||||
.producer_thread = uprobe_producer_without_nop,
|
||||
.consumer_thread = trigger_consumer,
|
||||
.measure = trigger_measure,
|
||||
.report_progress = hits_drops_report_progress,
|
||||
.report_final = hits_drops_report_final,
|
||||
|
@ -4,46 +4,48 @@ source ./benchs/run_common.sh
|
||||
|
||||
set -eufo pipefail
|
||||
|
||||
RUN_RB_BENCH="$RUN_BENCH -c1"
|
||||
|
||||
header "Single-producer, parallel producer"
|
||||
for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
|
||||
summarize $b "$($RUN_BENCH $b)"
|
||||
summarize $b "$($RUN_RB_BENCH $b)"
|
||||
done
|
||||
|
||||
header "Single-producer, parallel producer, sampled notification"
|
||||
for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
|
||||
summarize $b "$($RUN_BENCH --rb-sampled $b)"
|
||||
summarize $b "$($RUN_RB_BENCH --rb-sampled $b)"
|
||||
done
|
||||
|
||||
header "Single-producer, back-to-back mode"
|
||||
for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
|
||||
summarize $b "$($RUN_BENCH --rb-b2b $b)"
|
||||
summarize $b-sampled "$($RUN_BENCH --rb-sampled --rb-b2b $b)"
|
||||
summarize $b "$($RUN_RB_BENCH --rb-b2b $b)"
|
||||
summarize $b-sampled "$($RUN_RB_BENCH --rb-sampled --rb-b2b $b)"
|
||||
done
|
||||
|
||||
header "Ringbuf back-to-back, effect of sample rate"
|
||||
for b in 1 5 10 25 50 100 250 500 1000 2000 3000; do
|
||||
summarize "rb-sampled-$b" "$($RUN_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b rb-custom)"
|
||||
summarize "rb-sampled-$b" "$($RUN_RB_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b rb-custom)"
|
||||
done
|
||||
header "Perfbuf back-to-back, effect of sample rate"
|
||||
for b in 1 5 10 25 50 100 250 500 1000 2000 3000; do
|
||||
summarize "pb-sampled-$b" "$($RUN_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b pb-custom)"
|
||||
summarize "pb-sampled-$b" "$($RUN_RB_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b pb-custom)"
|
||||
done
|
||||
|
||||
header "Ringbuf back-to-back, reserve+commit vs output"
|
||||
summarize "reserve" "$($RUN_BENCH --rb-b2b rb-custom)"
|
||||
summarize "output" "$($RUN_BENCH --rb-b2b --rb-use-output rb-custom)"
|
||||
summarize "reserve" "$($RUN_RB_BENCH --rb-b2b rb-custom)"
|
||||
summarize "output" "$($RUN_RB_BENCH --rb-b2b --rb-use-output rb-custom)"
|
||||
|
||||
header "Ringbuf sampled, reserve+commit vs output"
|
||||
summarize "reserve-sampled" "$($RUN_BENCH --rb-sampled rb-custom)"
|
||||
summarize "output-sampled" "$($RUN_BENCH --rb-sampled --rb-use-output rb-custom)"
|
||||
summarize "reserve-sampled" "$($RUN_RB_BENCH --rb-sampled rb-custom)"
|
||||
summarize "output-sampled" "$($RUN_RB_BENCH --rb-sampled --rb-use-output rb-custom)"
|
||||
|
||||
header "Single-producer, consumer/producer competing on the same CPU, low batch count"
|
||||
for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
|
||||
summarize $b "$($RUN_BENCH --rb-batch-cnt 1 --rb-sample-rate 1 --prod-affinity 0 --cons-affinity 0 $b)"
|
||||
summarize $b "$($RUN_RB_BENCH --rb-batch-cnt 1 --rb-sample-rate 1 --prod-affinity 0 --cons-affinity 0 $b)"
|
||||
done
|
||||
|
||||
header "Ringbuf, multi-producer contention"
|
||||
for b in 1 2 3 4 8 12 16 20 24 28 32 36 40 44 48 52; do
|
||||
summarize "rb-libbpf nr_prod $b" "$($RUN_BENCH -p$b --rb-batch-cnt 50 rb-libbpf)"
|
||||
summarize "rb-libbpf nr_prod $b" "$($RUN_RB_BENCH -p$b --rb-batch-cnt 50 rb-libbpf)"
|
||||
done
|
||||
|
||||
|
@ -191,8 +191,6 @@ noinline int bpf_testmod_fentry_test3(char a, int b, u64 c)
|
||||
return a + b + c;
|
||||
}
|
||||
|
||||
__diag_pop();
|
||||
|
||||
int bpf_testmod_fentry_ok;
|
||||
|
||||
noinline ssize_t
|
||||
@ -273,6 +271,14 @@ bpf_testmod_test_write(struct file *file, struct kobject *kobj,
|
||||
EXPORT_SYMBOL(bpf_testmod_test_write);
|
||||
ALLOW_ERROR_INJECTION(bpf_testmod_test_write, ERRNO);
|
||||
|
||||
noinline int bpf_fentry_shadow_test(int a)
|
||||
{
|
||||
return a + 2;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(bpf_fentry_shadow_test);
|
||||
|
||||
__diag_pop();
|
||||
|
||||
static struct bin_attribute bin_attr_bpf_testmod_file __ro_after_init = {
|
||||
.attr = { .name = "bpf_testmod", .mode = 0666, },
|
||||
.read = bpf_testmod_test_read,
|
||||
@ -462,12 +468,6 @@ static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
|
||||
.set = &bpf_testmod_check_kfunc_ids,
|
||||
};
|
||||
|
||||
noinline int bpf_fentry_shadow_test(int a)
|
||||
{
|
||||
return a + 2;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(bpf_fentry_shadow_test);
|
||||
|
||||
extern int bpf_fentry_test1(int a);
|
||||
|
||||
static int bpf_testmod_init(void)
|
||||
|
@ -97,4 +97,11 @@ void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym;
|
||||
|
||||
void bpf_kfunc_call_test_destructive(void) __ksym;
|
||||
|
||||
void bpf_kfunc_call_test_offset(struct prog_test_ref_kfunc *p);
|
||||
struct prog_test_member *bpf_kfunc_call_memb_acquire(void);
|
||||
void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p);
|
||||
void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p);
|
||||
void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p);
|
||||
void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p);
|
||||
void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len);
|
||||
#endif /* _BPF_TESTMOD_KFUNC_H */
|
||||
|
@ -13,6 +13,9 @@ CONFIG_CGROUP_BPF=y
|
||||
CONFIG_CRYPTO_HMAC=y
|
||||
CONFIG_CRYPTO_SHA256=y
|
||||
CONFIG_CRYPTO_USER_API_HASH=y
|
||||
CONFIG_DEBUG_INFO=y
|
||||
CONFIG_DEBUG_INFO_BTF=y
|
||||
CONFIG_DEBUG_INFO_DWARF4=y
|
||||
CONFIG_DYNAMIC_FTRACE=y
|
||||
CONFIG_FPROBE=y
|
||||
CONFIG_FTRACE_SYSCALLS=y
|
||||
@ -60,6 +63,7 @@ CONFIG_NET_SCH_INGRESS=y
|
||||
CONFIG_NET_SCHED=y
|
||||
CONFIG_NETDEVSIM=y
|
||||
CONFIG_NETFILTER=y
|
||||
CONFIG_NETFILTER_ADVANCED=y
|
||||
CONFIG_NETFILTER_SYNPROXY=y
|
||||
CONFIG_NETFILTER_XT_CONNMARK=y
|
||||
CONFIG_NETFILTER_XT_MATCH_STATE=y
|
||||
|
@ -3990,6 +3990,46 @@ static struct btf_raw_test raw_tests[] = {
|
||||
.btf_load_err = true,
|
||||
.err_str = "Invalid arg#1",
|
||||
},
|
||||
{
|
||||
.descr = "decl_tag test #18, decl_tag as the map key type",
|
||||
.raw_types = {
|
||||
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
BTF_STRUCT_ENC(0, 2, 8), /* [2] */
|
||||
BTF_MEMBER_ENC(NAME_TBD, 1, 0),
|
||||
BTF_MEMBER_ENC(NAME_TBD, 1, 32),
|
||||
BTF_DECL_TAG_ENC(NAME_TBD, 2, -1), /* [3] */
|
||||
BTF_END_RAW,
|
||||
},
|
||||
BTF_STR_SEC("\0m1\0m2\0tag"),
|
||||
.map_type = BPF_MAP_TYPE_HASH,
|
||||
.map_name = "tag_type_check_btf",
|
||||
.key_size = 8,
|
||||
.value_size = 4,
|
||||
.key_type_id = 3,
|
||||
.value_type_id = 1,
|
||||
.max_entries = 1,
|
||||
.map_create_err = true,
|
||||
},
|
||||
{
|
||||
.descr = "decl_tag test #19, decl_tag as the map value type",
|
||||
.raw_types = {
|
||||
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
BTF_STRUCT_ENC(0, 2, 8), /* [2] */
|
||||
BTF_MEMBER_ENC(NAME_TBD, 1, 0),
|
||||
BTF_MEMBER_ENC(NAME_TBD, 1, 32),
|
||||
BTF_DECL_TAG_ENC(NAME_TBD, 2, -1), /* [3] */
|
||||
BTF_END_RAW,
|
||||
},
|
||||
BTF_STR_SEC("\0m1\0m2\0tag"),
|
||||
.map_type = BPF_MAP_TYPE_HASH,
|
||||
.map_name = "tag_type_check_btf",
|
||||
.key_size = 4,
|
||||
.value_size = 8,
|
||||
.key_type_id = 1,
|
||||
.value_type_id = 3,
|
||||
.max_entries = 1,
|
||||
.map_create_err = true,
|
||||
},
|
||||
{
|
||||
.descr = "type_tag test #1",
|
||||
.raw_types = {
|
||||
|
@ -183,7 +183,7 @@ cleanup:
|
||||
|
||||
void serial_test_check_mtu(void)
|
||||
{
|
||||
__u32 mtu_lo;
|
||||
int mtu_lo;
|
||||
|
||||
if (test__start_subtest("bpf_check_mtu XDP-attach"))
|
||||
test_check_mtu_xdp_attach();
|
||||
|
@ -10,6 +10,7 @@ static const char * const cpumask_success_testcases[] = {
|
||||
"test_set_clear_cpu",
|
||||
"test_setall_clear_cpu",
|
||||
"test_first_firstzero_cpu",
|
||||
"test_firstand_nocpu",
|
||||
"test_test_and_set_clear",
|
||||
"test_and_or_xor",
|
||||
"test_intersects_subset",
|
||||
@ -70,5 +71,6 @@ void test_cpumask(void)
|
||||
verify_success(cpumask_success_testcases[i]);
|
||||
}
|
||||
|
||||
RUN_TESTS(cpumask_success);
|
||||
RUN_TESTS(cpumask_failure);
|
||||
}
|
||||
|
@ -1,6 +1,7 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
|
||||
|
||||
#include <linux/rtnetlink.h>
|
||||
#include <sys/types.h>
|
||||
#include <net/if.h>
|
||||
|
||||
@ -15,14 +16,23 @@
|
||||
#define IPV4_IFACE_ADDR "10.0.0.254"
|
||||
#define IPV4_NUD_FAILED_ADDR "10.0.0.1"
|
||||
#define IPV4_NUD_STALE_ADDR "10.0.0.2"
|
||||
#define IPV4_TBID_ADDR "172.0.0.254"
|
||||
#define IPV4_TBID_NET "172.0.0.0"
|
||||
#define IPV4_TBID_DST "172.0.0.2"
|
||||
#define IPV6_TBID_ADDR "fd00::FFFF"
|
||||
#define IPV6_TBID_NET "fd00::"
|
||||
#define IPV6_TBID_DST "fd00::2"
|
||||
#define DMAC "11:11:11:11:11:11"
|
||||
#define DMAC_INIT { 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, }
|
||||
#define DMAC2 "01:01:01:01:01:01"
|
||||
#define DMAC_INIT2 { 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, }
|
||||
|
||||
struct fib_lookup_test {
|
||||
const char *desc;
|
||||
const char *daddr;
|
||||
int expected_ret;
|
||||
int lookup_flags;
|
||||
__u32 tbid;
|
||||
__u8 dmac[6];
|
||||
};
|
||||
|
||||
@ -43,6 +53,22 @@ static const struct fib_lookup_test tests[] = {
|
||||
{ .desc = "IPv4 skip neigh",
|
||||
.daddr = IPV4_NUD_FAILED_ADDR, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
|
||||
.lookup_flags = BPF_FIB_LOOKUP_SKIP_NEIGH, },
|
||||
{ .desc = "IPv4 TBID lookup failure",
|
||||
.daddr = IPV4_TBID_DST, .expected_ret = BPF_FIB_LKUP_RET_NOT_FWDED,
|
||||
.lookup_flags = BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID,
|
||||
.tbid = RT_TABLE_MAIN, },
|
||||
{ .desc = "IPv4 TBID lookup success",
|
||||
.daddr = IPV4_TBID_DST, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
|
||||
.lookup_flags = BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID, .tbid = 100,
|
||||
.dmac = DMAC_INIT2, },
|
||||
{ .desc = "IPv6 TBID lookup failure",
|
||||
.daddr = IPV6_TBID_DST, .expected_ret = BPF_FIB_LKUP_RET_NOT_FWDED,
|
||||
.lookup_flags = BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID,
|
||||
.tbid = RT_TABLE_MAIN, },
|
||||
{ .desc = "IPv6 TBID lookup success",
|
||||
.daddr = IPV6_TBID_DST, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
|
||||
.lookup_flags = BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID, .tbid = 100,
|
||||
.dmac = DMAC_INIT2, },
|
||||
};
|
||||
|
||||
static int ifindex;
|
||||
@ -53,6 +79,7 @@ static int setup_netns(void)
|
||||
|
||||
SYS(fail, "ip link add veth1 type veth peer name veth2");
|
||||
SYS(fail, "ip link set dev veth1 up");
|
||||
SYS(fail, "ip link set dev veth2 up");
|
||||
|
||||
err = write_sysctl("/proc/sys/net/ipv4/neigh/veth1/gc_stale_time", "900");
|
||||
if (!ASSERT_OK(err, "write_sysctl(net.ipv4.neigh.veth1.gc_stale_time)"))
|
||||
@ -70,6 +97,17 @@ static int setup_netns(void)
|
||||
SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
|
||||
SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
|
||||
|
||||
/* Setup for tbid lookup tests */
|
||||
SYS(fail, "ip addr add %s/24 dev veth2", IPV4_TBID_ADDR);
|
||||
SYS(fail, "ip route del %s/24 dev veth2", IPV4_TBID_NET);
|
||||
SYS(fail, "ip route add table 100 %s/24 dev veth2", IPV4_TBID_NET);
|
||||
SYS(fail, "ip neigh add %s dev veth2 lladdr %s nud stale", IPV4_TBID_DST, DMAC2);
|
||||
|
||||
SYS(fail, "ip addr add %s/64 dev veth2", IPV6_TBID_ADDR);
|
||||
SYS(fail, "ip -6 route del %s/64 dev veth2", IPV6_TBID_NET);
|
||||
SYS(fail, "ip -6 route add table 100 %s/64 dev veth2", IPV6_TBID_NET);
|
||||
SYS(fail, "ip neigh add %s dev veth2 lladdr %s nud stale", IPV6_TBID_DST, DMAC2);
|
||||
|
||||
err = write_sysctl("/proc/sys/net/ipv4/conf/veth1/forwarding", "1");
|
||||
if (!ASSERT_OK(err, "write_sysctl(net.ipv4.conf.veth1.forwarding)"))
|
||||
goto fail;
|
||||
@ -83,7 +121,7 @@ fail:
|
||||
return -1;
|
||||
}
|
||||
|
||||
static int set_lookup_params(struct bpf_fib_lookup *params, const char *daddr)
|
||||
static int set_lookup_params(struct bpf_fib_lookup *params, const struct fib_lookup_test *test)
|
||||
{
|
||||
int ret;
|
||||
|
||||
@ -91,8 +129,9 @@ static int set_lookup_params(struct bpf_fib_lookup *params, const char *daddr)
|
||||
|
||||
params->l4_protocol = IPPROTO_TCP;
|
||||
params->ifindex = ifindex;
|
||||
params->tbid = test->tbid;
|
||||
|
||||
if (inet_pton(AF_INET6, daddr, params->ipv6_dst) == 1) {
|
||||
if (inet_pton(AF_INET6, test->daddr, params->ipv6_dst) == 1) {
|
||||
params->family = AF_INET6;
|
||||
ret = inet_pton(AF_INET6, IPV6_IFACE_ADDR, params->ipv6_src);
|
||||
if (!ASSERT_EQ(ret, 1, "inet_pton(IPV6_IFACE_ADDR)"))
|
||||
@ -100,7 +139,7 @@ static int set_lookup_params(struct bpf_fib_lookup *params, const char *daddr)
|
||||
return 0;
|
||||
}
|
||||
|
||||
ret = inet_pton(AF_INET, daddr, ¶ms->ipv4_dst);
|
||||
ret = inet_pton(AF_INET, test->daddr, ¶ms->ipv4_dst);
|
||||
if (!ASSERT_EQ(ret, 1, "convert IP[46] address"))
|
||||
return -1;
|
||||
params->family = AF_INET;
|
||||
@ -154,13 +193,12 @@ void test_fib_lookup(void)
|
||||
fib_params = &skel->bss->fib_params;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(tests); i++) {
|
||||
printf("Testing %s\n", tests[i].desc);
|
||||
printf("Testing %s ", tests[i].desc);
|
||||
|
||||
if (set_lookup_params(fib_params, tests[i].daddr))
|
||||
if (set_lookup_params(fib_params, &tests[i]))
|
||||
continue;
|
||||
skel->bss->fib_lookup_ret = -1;
|
||||
skel->bss->lookup_flags = BPF_FIB_LOOKUP_OUTPUT |
|
||||
tests[i].lookup_flags;
|
||||
skel->bss->lookup_flags = tests[i].lookup_flags;
|
||||
|
||||
err = bpf_prog_test_run_opts(prog_fd, &run_opts);
|
||||
if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
|
||||
@ -175,7 +213,14 @@ void test_fib_lookup(void)
|
||||
|
||||
mac_str(expected, tests[i].dmac);
|
||||
mac_str(actual, fib_params->dmac);
|
||||
printf("dmac expected %s actual %s\n", expected, actual);
|
||||
printf("dmac expected %s actual %s ", expected, actual);
|
||||
}
|
||||
|
||||
// ensure tbid is zero'd out after fib lookup.
|
||||
if (tests[i].lookup_flags & BPF_FIB_LOOKUP_DIRECT) {
|
||||
if (!ASSERT_EQ(skel->bss->fib_params.tbid, 0,
|
||||
"expected fib_params.tbid to be zero"))
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -171,7 +171,11 @@ static void test_unpriv_bpf_disabled_negative(struct test_unpriv_bpf_disabled *s
|
||||
prog_insns, prog_insn_cnt, &load_opts),
|
||||
-EPERM, "prog_load_fails");
|
||||
|
||||
for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_BLOOM_FILTER; i++)
|
||||
/* some map types require particular correct parameters which could be
|
||||
* sanity-checked before enforcing -EPERM, so only validate that
|
||||
* the simple ARRAY and HASH maps are failing with -EPERM
|
||||
*/
|
||||
for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_ARRAY; i++)
|
||||
ASSERT_EQ(bpf_map_create(i, NULL, sizeof(int), sizeof(int), 1, NULL),
|
||||
-EPERM, "map_create_fails");
|
||||
|
||||
|
@ -50,6 +50,7 @@
|
||||
#include "verifier_regalloc.skel.h"
|
||||
#include "verifier_ringbuf.skel.h"
|
||||
#include "verifier_runtime_jit.skel.h"
|
||||
#include "verifier_scalar_ids.skel.h"
|
||||
#include "verifier_search_pruning.skel.h"
|
||||
#include "verifier_sock.skel.h"
|
||||
#include "verifier_spill_fill.skel.h"
|
||||
@ -150,6 +151,7 @@ void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); }
|
||||
void test_verifier_regalloc(void) { RUN(verifier_regalloc); }
|
||||
void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); }
|
||||
void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); }
|
||||
void test_verifier_scalar_ids(void) { RUN(verifier_scalar_ids); }
|
||||
void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); }
|
||||
void test_verifier_sock(void) { RUN(verifier_sock); }
|
||||
void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
|
||||
|
new file: tools/testing/selftests/bpf/prog_tests/vrf_socket_lookup.c (312 lines)
@@ -0,0 +1,312 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause

/*
 * Topology:
 * ---------
 *     NS0 namespace         |    NS1 namespace
 *                           |
 *     +--------------+      |    +--------------+
 *     |    veth01    |----------|     veth10    |
 *     | 172.16.1.100 |      |    | 172.16.1.200 |
 *     |     bpf      |      |    +--------------+
 *     +--------------+      |
 *      server(UDP/TCP)      |
 *  +-------------------+    |
 *  |        vrf1       |    |
 *  |  +--------------+ |    |    +--------------+
 *  |  |    veth02    |----------|     veth20    |
 *  |  | 172.16.2.100 | |    |    | 172.16.2.200 |
 *  |  |     bpf      | |    |    +--------------+
 *  |  +--------------+ |    |
 *  |   server(UDP/TCP) |    |
 *  +-------------------+    |
 *
 * Test flow
 * -----------
 * The tests verifies that socket lookup via TC is VRF aware:
 * 1) Creates two veth pairs between NS0 and NS1:
 *    a) veth01 <-> veth10 outside the VRF
 *    b) veth02 <-> veth20 in the VRF
 * 2) Attaches to veth01 and veth02 a program that calls:
 *    a) bpf_skc_lookup_tcp() with TCP and tcp_skc is true
 *    b) bpf_sk_lookup_tcp() with TCP and tcp_skc is false
 *    c) bpf_sk_lookup_udp() with UDP
 *    The program stores the lookup result in bss->lookup_status.
 * 3) Creates a socket TCP/UDP server in/outside the VRF.
 * 4) The test expects lookup_status to be:
 *    a) 0 from device in VRF to server outside VRF
 *    b) 0 from device outside VRF to server in VRF
 *    c) 1 from device in VRF to server in VRF
 *    d) 1 from device outside VRF to server outside VRF
 */

#include <net/if.h>
|
||||
|
||||
#include "test_progs.h"
|
||||
#include "network_helpers.h"
|
||||
#include "vrf_socket_lookup.skel.h"
|
||||
|
||||
#define NS0 "vrf_socket_lookup_0"
|
||||
#define NS1 "vrf_socket_lookup_1"
|
||||
|
||||
#define IP4_ADDR_VETH01 "172.16.1.100"
|
||||
#define IP4_ADDR_VETH10 "172.16.1.200"
|
||||
#define IP4_ADDR_VETH02 "172.16.2.100"
|
||||
#define IP4_ADDR_VETH20 "172.16.2.200"
|
||||
|
||||
#define NON_VRF_PORT 5000
|
||||
#define IN_VRF_PORT 5001
|
||||
|
||||
#define TIMEOUT_MS 3000
|
||||
|
||||
static int make_socket(int sotype, const char *ip, int port,
|
||||
struct sockaddr_storage *addr)
|
||||
{
|
||||
int err, fd;
|
||||
|
||||
err = make_sockaddr(AF_INET, ip, port, addr, NULL);
|
||||
if (!ASSERT_OK(err, "make_address"))
|
||||
return -1;
|
||||
|
||||
fd = socket(AF_INET, sotype, 0);
|
||||
if (!ASSERT_GE(fd, 0, "socket"))
|
||||
return -1;
|
||||
|
||||
if (!ASSERT_OK(settimeo(fd, TIMEOUT_MS), "settimeo"))
|
||||
goto fail;
|
||||
|
||||
return fd;
|
||||
fail:
|
||||
close(fd);
|
||||
return -1;
|
||||
}
|
||||
|
||||
static int make_server(int sotype, const char *ip, int port, const char *ifname)
|
||||
{
|
||||
int err, fd = -1;
|
||||
|
||||
fd = start_server(AF_INET, sotype, ip, port, TIMEOUT_MS);
|
||||
if (!ASSERT_GE(fd, 0, "start_server"))
|
||||
return -1;
|
||||
|
||||
if (ifname) {
|
||||
err = setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
|
||||
ifname, strlen(ifname) + 1);
|
||||
if (!ASSERT_OK(err, "setsockopt(SO_BINDTODEVICE)"))
|
||||
goto fail;
|
||||
}
|
||||
|
||||
return fd;
|
||||
fail:
|
||||
close(fd);
|
||||
return -1;
|
||||
}
|
||||
|
||||
static int attach_progs(char *ifname, int tc_prog_fd, int xdp_prog_fd)
|
||||
{
|
||||
LIBBPF_OPTS(bpf_tc_hook, hook, .attach_point = BPF_TC_INGRESS);
|
||||
LIBBPF_OPTS(bpf_tc_opts, opts, .handle = 1, .priority = 1,
|
||||
.prog_fd = tc_prog_fd);
|
||||
int ret, ifindex;
|
||||
|
||||
ifindex = if_nametoindex(ifname);
|
||||
if (!ASSERT_NEQ(ifindex, 0, "if_nametoindex"))
|
||||
return -1;
|
||||
hook.ifindex = ifindex;
|
||||
|
||||
ret = bpf_tc_hook_create(&hook);
|
||||
if (!ASSERT_OK(ret, "bpf_tc_hook_create"))
|
||||
return ret;
|
||||
|
||||
ret = bpf_tc_attach(&hook, &opts);
|
||||
if (!ASSERT_OK(ret, "bpf_tc_attach")) {
|
||||
bpf_tc_hook_destroy(&hook);
|
||||
return ret;
|
||||
}
|
||||
ret = bpf_xdp_attach(ifindex, xdp_prog_fd, 0, NULL);
|
||||
if (!ASSERT_OK(ret, "bpf_xdp_attach")) {
|
||||
bpf_tc_hook_destroy(&hook);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void cleanup(void)
|
||||
{
|
||||
SYS_NOFAIL("test -f /var/run/netns/" NS0 " && ip netns delete "
|
||||
NS0);
|
||||
SYS_NOFAIL("test -f /var/run/netns/" NS1 " && ip netns delete "
|
||||
NS1);
|
||||
}
|
||||
|
||||
static int setup(struct vrf_socket_lookup *skel)
|
||||
{
|
||||
int tc_prog_fd, xdp_prog_fd, ret = 0;
|
||||
struct nstoken *nstoken = NULL;
|
||||
|
||||
SYS(fail, "ip netns add " NS0);
|
||||
SYS(fail, "ip netns add " NS1);
|
||||
|
||||
/* NS0 <-> NS1 [veth01 <-> veth10] */
|
||||
SYS(fail, "ip link add veth01 netns " NS0 " type veth peer name veth10"
|
||||
" netns " NS1);
|
||||
SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
|
||||
SYS(fail, "ip -net " NS0 " link set dev veth01 up");
|
||||
SYS(fail, "ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
|
||||
SYS(fail, "ip -net " NS1 " link set dev veth10 up");
|
||||
|
||||
/* NS0 <-> NS1 [veth02 <-> veth20] */
|
||||
SYS(fail, "ip link add veth02 netns " NS0 " type veth peer name veth20"
|
||||
" netns " NS1);
|
||||
SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
|
||||
SYS(fail, "ip -net " NS0 " link set dev veth02 up");
|
||||
SYS(fail, "ip -net " NS1 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
|
||||
SYS(fail, "ip -net " NS1 " link set dev veth20 up");
|
||||
|
||||
/* veth02 -> vrf1 */
|
||||
SYS(fail, "ip -net " NS0 " link add vrf1 type vrf table 11");
|
||||
SYS(fail, "ip -net " NS0 " route add vrf vrf1 unreachable default"
|
||||
" metric 4278198272");
|
||||
SYS(fail, "ip -net " NS0 " link set vrf1 alias vrf");
|
||||
SYS(fail, "ip -net " NS0 " link set vrf1 up");
|
||||
SYS(fail, "ip -net " NS0 " link set veth02 master vrf1");
|
||||
|
||||
/* Attach TC and XDP progs to veth devices in NS0 */
|
||||
nstoken = open_netns(NS0);
|
||||
if (!ASSERT_OK_PTR(nstoken, "setns " NS0))
|
||||
goto fail;
|
||||
tc_prog_fd = bpf_program__fd(skel->progs.tc_socket_lookup);
|
||||
if (!ASSERT_GE(tc_prog_fd, 0, "bpf_program__tc_fd"))
|
||||
goto fail;
|
||||
xdp_prog_fd = bpf_program__fd(skel->progs.xdp_socket_lookup);
|
||||
if (!ASSERT_GE(xdp_prog_fd, 0, "bpf_program__xdp_fd"))
|
||||
goto fail;
|
||||
|
||||
if (attach_progs("veth01", tc_prog_fd, xdp_prog_fd))
|
||||
goto fail;
|
||||
|
||||
if (attach_progs("veth02", tc_prog_fd, xdp_prog_fd))
|
||||
goto fail;
|
||||
|
||||
goto close;
|
||||
fail:
|
||||
ret = -1;
|
||||
close:
|
||||
if (nstoken)
|
||||
close_netns(nstoken);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int test_lookup(struct vrf_socket_lookup *skel, int sotype,
|
||||
const char *ip, int port, bool test_xdp, bool tcp_skc,
|
||||
int lookup_status_exp)
|
||||
{
|
||||
static const char msg[] = "Hello Server";
|
||||
struct sockaddr_storage addr = {};
|
||||
int fd, ret = 0;
|
||||
|
||||
fd = make_socket(sotype, ip, port, &addr);
|
||||
if (fd < 0)
|
||||
return -1;
|
||||
|
||||
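/* Both the TC and the XDP program are attached; test_xdp selects which
 * one performs the lookup, and lookup_status is reset so a stale value
 * cannot satisfy the assertion below.
 */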
skel->bss->test_xdp = test_xdp;
|
||||
skel->bss->tcp_skc = tcp_skc;
|
||||
skel->bss->lookup_status = -1;
|
||||
|
||||
if (sotype == SOCK_STREAM)
|
||||
connect(fd, (void *)&addr, sizeof(struct sockaddr_in));
|
||||
else
|
||||
sendto(fd, msg, sizeof(msg), 0, (void *)&addr,
|
||||
sizeof(struct sockaddr_in));
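/* The connect()/sendto() return values are not checked: the packet only
 * has to reach the ingress hook so the program records its result in
 * lookup_status.
 */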
|
||||
|
||||
if (!ASSERT_EQ(skel->bss->lookup_status, lookup_status_exp,
|
||||
"lookup_status"))
|
||||
goto fail;
|
||||
|
||||
goto close;
|
||||
|
||||
fail:
|
||||
ret = -1;
|
||||
close:
|
||||
close(fd);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void _test_vrf_socket_lookup(struct vrf_socket_lookup *skel, int sotype,
|
||||
bool test_xdp, bool tcp_skc)
|
||||
{
|
||||
int in_vrf_server = -1, non_vrf_server = -1;
|
||||
struct nstoken *nstoken = NULL;
|
||||
|
||||
nstoken = open_netns(NS0);
|
||||
if (!ASSERT_OK_PTR(nstoken, "setns " NS0))
|
||||
goto done;
|
||||
|
||||
/* Open sockets in and outside VRF */
|
||||
non_vrf_server = make_server(sotype, "0.0.0.0", NON_VRF_PORT, NULL);
|
||||
if (!ASSERT_GE(non_vrf_server, 0, "make_server__outside_vrf_fd"))
|
||||
goto done;
|
||||
|
||||
in_vrf_server = make_server(sotype, "0.0.0.0", IN_VRF_PORT, "veth02");
|
||||
if (!ASSERT_GE(in_vrf_server, 0, "make_server__in_vrf_fd"))
|
||||
goto done;
|
||||
|
||||
/* Perform test from NS1 */
|
||||
close_netns(nstoken);
|
||||
nstoken = open_netns(NS1);
|
||||
if (!ASSERT_OK_PTR(nstoken, "setns " NS1))
|
||||
goto done;
|
||||
|
||||
if (!ASSERT_OK(test_lookup(skel, sotype, IP4_ADDR_VETH02, NON_VRF_PORT,
|
||||
test_xdp, tcp_skc, 0), "in_to_out"))
|
||||
goto done;
|
||||
if (!ASSERT_OK(test_lookup(skel, sotype, IP4_ADDR_VETH02, IN_VRF_PORT,
|
||||
test_xdp, tcp_skc, 1), "in_to_in"))
|
||||
goto done;
|
||||
if (!ASSERT_OK(test_lookup(skel, sotype, IP4_ADDR_VETH01, NON_VRF_PORT,
|
||||
test_xdp, tcp_skc, 1), "out_to_out"))
|
||||
goto done;
|
||||
if (!ASSERT_OK(test_lookup(skel, sotype, IP4_ADDR_VETH01, IN_VRF_PORT,
|
||||
test_xdp, tcp_skc, 0), "out_to_in"))
|
||||
goto done;
|
||||
|
||||
done:
|
||||
if (non_vrf_server >= 0)
|
||||
close(non_vrf_server);
|
||||
if (in_vrf_server >= 0)
|
||||
close(in_vrf_server);
|
||||
if (nstoken)
|
||||
close_netns(nstoken);
|
||||
}
|
||||
|
||||
void test_vrf_socket_lookup(void)
|
||||
{
|
||||
struct vrf_socket_lookup *skel;
|
||||
|
||||
cleanup();
|
||||
|
||||
skel = vrf_socket_lookup__open_and_load();
|
||||
if (!ASSERT_OK_PTR(skel, "vrf_socket_lookup__open_and_load"))
|
||||
return;
|
||||
|
||||
if (!ASSERT_OK(setup(skel), "setup"))
|
||||
goto done;
|
||||
|
||||
if (test__start_subtest("tc_socket_lookup_tcp"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, false, false);
|
||||
if (test__start_subtest("tc_socket_lookup_tcp_skc"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, false, false);
|
||||
if (test__start_subtest("tc_socket_lookup_udp"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, false, false);
|
||||
if (test__start_subtest("xdp_socket_lookup_tcp"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, true, false);
|
||||
if (test__start_subtest("xdp_socket_lookup_tcp_skc"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, true, false);
|
||||
if (test__start_subtest("xdp_socket_lookup_udp"))
|
||||
_test_vrf_socket_lookup(skel, SOCK_STREAM, true, false);
|
||||
|
||||
done:
|
||||
vrf_socket_lookup__destroy(skel);
|
||||
cleanup();
|
||||
}
|
@@ -28,6 +28,8 @@ void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;
|
||||
struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) __ksym;
|
||||
u32 bpf_cpumask_first(const struct cpumask *cpumask) __ksym;
|
||||
u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) __ksym;
|
||||
u32 bpf_cpumask_first_and(const struct cpumask *src1,
|
||||
const struct cpumask *src2) __ksym;
|
||||
void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
|
||||
void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
|
||||
bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym;
|
||||
@@ -50,8 +52,8 @@ bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2)
|
||||
bool bpf_cpumask_empty(const struct cpumask *cpumask) __ksym;
|
||||
bool bpf_cpumask_full(const struct cpumask *cpumask) __ksym;
|
||||
void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) __ksym;
|
||||
u32 bpf_cpumask_any(const struct cpumask *src) __ksym;
|
||||
u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2) __ksym;
|
||||
u32 bpf_cpumask_any_distribute(const struct cpumask *src) __ksym;
|
||||
u32 bpf_cpumask_any_and_distribute(const struct cpumask *src1, const struct cpumask *src2) __ksym;
|
||||
|
||||
void bpf_rcu_read_lock(void) __ksym;
|
||||
void bpf_rcu_read_unlock(void) __ksym;
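/* Editor's note: a minimal usage sketch for the new bpf_cpumask_first_and()
 * kfunc, not part of this patch. It assumes the caller passes nr_cpus the
 * way the cpumask selftests do (a global filled in from user space).
 */
static inline int pick_shared_cpu(const struct cpumask *a,
				  const struct cpumask *b, u32 nr_cpus)
{
	u32 cpu;

	cpu = bpf_cpumask_first_and(a, b);
	if (cpu >= nr_cpus)
		return -1;	/* the two masks share no CPU */
	return cpu;
}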
|
||||
|
@@ -5,6 +5,7 @@
|
||||
#include <bpf/bpf_tracing.h>
|
||||
#include <bpf/bpf_helpers.h>
|
||||
|
||||
#include "bpf_misc.h"
|
||||
#include "cpumask_common.h"
|
||||
|
||||
char _license[] SEC("license") = "GPL";
|
||||
@@ -174,6 +175,38 @@ release_exit:
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("tp_btf/task_newtask")
|
||||
int BPF_PROG(test_firstand_nocpu, struct task_struct *task, u64 clone_flags)
|
||||
{
|
||||
struct bpf_cpumask *mask1, *mask2;
|
||||
u32 first;
|
||||
|
||||
if (!is_test_task())
|
||||
return 0;
|
||||
|
||||
mask1 = create_cpumask();
|
||||
if (!mask1)
|
||||
return 0;
|
||||
|
||||
mask2 = create_cpumask();
|
||||
if (!mask2)
|
||||
goto release_exit;
|
||||
|
||||
bpf_cpumask_set_cpu(0, mask1);
|
||||
bpf_cpumask_set_cpu(1, mask2);
|
||||
|
||||
first = bpf_cpumask_first_and(cast(mask1), cast(mask2));
|
||||
if (first <= 1)
|
||||
err = 3;
|
||||
|
||||
release_exit:
|
||||
if (mask1)
|
||||
bpf_cpumask_release(mask1);
|
||||
if (mask2)
|
||||
bpf_cpumask_release(mask2);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("tp_btf/task_newtask")
|
||||
int BPF_PROG(test_test_and_set_clear, struct task_struct *task, u64 clone_flags)
|
||||
{
|
||||
@@ -311,13 +344,13 @@ int BPF_PROG(test_copy_any_anyand, struct task_struct *task, u64 clone_flags)
|
||||
bpf_cpumask_set_cpu(1, mask2);
|
||||
bpf_cpumask_or(dst1, cast(mask1), cast(mask2));
|
||||
|
||||
cpu = bpf_cpumask_any(cast(mask1));
|
||||
cpu = bpf_cpumask_any_distribute(cast(mask1));
|
||||
if (cpu != 0) {
|
||||
err = 6;
|
||||
goto release_exit;
|
||||
}
|
||||
|
||||
cpu = bpf_cpumask_any(cast(dst2));
|
||||
cpu = bpf_cpumask_any_distribute(cast(dst2));
|
||||
if (cpu < nr_cpus) {
|
||||
err = 7;
|
||||
goto release_exit;
|
||||
@@ -329,13 +362,13 @@ int BPF_PROG(test_copy_any_anyand, struct task_struct *task, u64 clone_flags)
|
||||
goto release_exit;
|
||||
}
|
||||
|
||||
cpu = bpf_cpumask_any(cast(dst2));
|
||||
cpu = bpf_cpumask_any_distribute(cast(dst2));
|
||||
if (cpu > 1) {
|
||||
err = 9;
|
||||
goto release_exit;
|
||||
}
|
||||
|
||||
cpu = bpf_cpumask_any_and(cast(mask1), cast(mask2));
|
||||
cpu = bpf_cpumask_any_and_distribute(cast(mask1), cast(mask2));
|
||||
if (cpu < nr_cpus) {
|
||||
err = 10;
|
||||
goto release_exit;
|
||||
@@ -426,3 +459,26 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("tp_btf/task_newtask")
|
||||
__success
|
||||
int BPF_PROG(test_refcount_null_tracking, struct task_struct *task, u64 clone_flags)
|
||||
{
|
||||
struct bpf_cpumask *mask1, *mask2;
|
||||
|
||||
mask1 = bpf_cpumask_create();
|
||||
mask2 = bpf_cpumask_create();
|
||||
|
||||
if (!mask1 || !mask2)
|
||||
goto free_masks_return;
|
||||
|
||||
bpf_cpumask_test_cpu(0, (const struct cpumask *)mask1);
|
||||
bpf_cpumask_test_cpu(0, (const struct cpumask *)mask2);
|
||||
|
||||
free_masks_return:
|
||||
if (mask1)
|
||||
bpf_cpumask_release(mask1);
|
||||
if (mask2)
|
||||
bpf_cpumask_release(mask2);
|
||||
return 0;
|
||||
}
|
||||
|
@@ -375,6 +375,8 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
|
||||
bpf_rbtree_add(&aroot, &n->node, less_a);
|
||||
m = bpf_refcount_acquire(n);
|
||||
bpf_spin_unlock(&alock);
|
||||
if (!m)
|
||||
return 2;
|
||||
|
||||
m->key = 2;
|
||||
bpf_obj_drop(m);
|
||||
|
@@ -29,7 +29,7 @@ static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
|
||||
}
|
||||
|
||||
SEC("?tc")
|
||||
__failure __msg("Unreleased reference id=3 alloc_insn=21")
|
||||
__failure __msg("Unreleased reference id=4 alloc_insn=21")
|
||||
long rbtree_refcounted_node_ref_escapes(void *ctx)
|
||||
{
|
||||
struct node_acquire *n, *m;
|
||||
@@ -43,6 +43,8 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
|
||||
/* m becomes an owning ref but is never drop'd or added to a tree */
|
||||
m = bpf_refcount_acquire(n);
|
||||
bpf_spin_unlock(&glock);
|
||||
if (!m)
|
||||
return 2;
|
||||
|
||||
m->key = 2;
|
||||
return 0;
|
||||
|
tools/testing/selftests/bpf/progs/verifier_scalar_ids.c (new file, 659 lines)
@@ -0,0 +1,659 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
#include <linux/bpf.h>
|
||||
#include <bpf/bpf_helpers.h>
|
||||
#include "bpf_misc.h"
|
||||
|
||||
/* Check that precision marks propagate through scalar IDs.
|
||||
* Registers r{0,1,2} have the same scalar ID at the moment when r0 is
|
||||
* marked to be precise, this mark is immediately propagated to r{1,2}.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 4: (bf) r3 = r10")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
|
||||
__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_same_state(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == r2.id */
|
||||
"r1 = r0;"
|
||||
"r2 = r0;"
|
||||
/* force r0 to be precise, this immediately marks r1 and r2 as
|
||||
* precise as well because of shared IDs
|
||||
*/
|
||||
"r3 = r10;"
|
||||
"r3 += r0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Same as precision_same_state, but mark propagates through state /
|
||||
* parent state boundary.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("frame0: last_idx 6 first_idx 5 subseq_idx -1")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 5: (bf) r3 = r10")
|
||||
__msg("frame0: parent state regs=r0,r1,r2 stack=:")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
|
||||
__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__msg("frame0: parent state regs=r0 stack=:")
|
||||
__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_cross_state(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == r2.id */
|
||||
"r1 = r0;"
|
||||
"r2 = r0;"
|
||||
/* force checkpoint */
|
||||
"goto +0;"
|
||||
/* force r0 to be precise, this immediately marks r1 and r2 as
|
||||
* precise as well because of shared IDs
|
||||
*/
|
||||
"r3 = r10;"
|
||||
"r3 += r0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Same as precision_same_state, but break one of the
|
||||
* links, note that r1 is absent from regs=... in __msg below.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("frame0: regs=r0,r2 stack= before 5: (bf) r3 = r10")
|
||||
__msg("frame0: regs=r0,r2 stack= before 4: (b7) r1 = 0")
|
||||
__msg("frame0: regs=r0,r2 stack= before 3: (bf) r2 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_same_state_broken_link(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == r2.id */
|
||||
"r1 = r0;"
|
||||
"r2 = r0;"
|
||||
/* break link for r1, this is the only line that differs
|
||||
* compared to the previous test
|
||||
*/
|
||||
"r1 = 0;"
|
||||
/* force r0 to be precise, this immediately marks r1 and r2 as
|
||||
* precise as well because of shared IDs
|
||||
*/
|
||||
"r3 = r10;"
|
||||
"r3 += r0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Same as precision_same_state_broken_link, but with state /
|
||||
* parent state boundary.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("frame0: regs=r0,r2 stack= before 6: (bf) r3 = r10")
|
||||
__msg("frame0: regs=r0,r2 stack= before 5: (b7) r1 = 0")
|
||||
__msg("frame0: parent state regs=r0,r2 stack=:")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
|
||||
__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
|
||||
__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__msg("frame0: parent state regs=r0 stack=:")
|
||||
__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_cross_state_broken_link(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == r2.id */
|
||||
"r1 = r0;"
|
||||
"r2 = r0;"
|
||||
/* force checkpoint, although link between r1 and r{0,2} is
|
||||
* broken by the next statement, the current precision tracking
|
||||
* algorithm can't react to it and propagates mark for r1 to
|
||||
* the parent state.
|
||||
*/
|
||||
"goto +0;"
|
||||
/* break link for r1, this is the only line that differs
|
||||
* compared to precision_cross_state()
|
||||
*/
|
||||
"r1 = 0;"
|
||||
/* force r0 to be precise, this immediately marks r1 and r2 as
|
||||
* precise as well because of shared IDs
|
||||
*/
|
||||
"r3 = r10;"
|
||||
"r3 += r0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that precision marks propagate through scalar IDs.
|
||||
* Use the same scalar ID in multiple stack frames, check that
|
||||
* precision information is propagated up the call stack.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("11: (0f) r2 += r1")
|
||||
/* Current state */
|
||||
__msg("frame2: last_idx 11 first_idx 10 subseq_idx -1")
|
||||
__msg("frame2: regs=r1 stack= before 10: (bf) r2 = r10")
|
||||
__msg("frame2: parent state regs=r1 stack=")
|
||||
/* frame1.r{6,7} are marked because mark_precise_scalar_ids()
|
||||
* looks for all registers with frame2.r1.id in the current state
|
||||
*/
|
||||
__msg("frame1: parent state regs=r6,r7 stack=")
|
||||
__msg("frame0: parent state regs=r6 stack=")
|
||||
/* Parent state */
|
||||
__msg("frame2: last_idx 8 first_idx 8 subseq_idx 10")
|
||||
__msg("frame2: regs=r1 stack= before 8: (85) call pc+1")
|
||||
/* frame1.r1 is marked because of backtracking of call instruction */
|
||||
__msg("frame1: parent state regs=r1,r6,r7 stack=")
|
||||
__msg("frame0: parent state regs=r6 stack=")
|
||||
/* Parent state */
|
||||
__msg("frame1: last_idx 7 first_idx 6 subseq_idx 8")
|
||||
__msg("frame1: regs=r1,r6,r7 stack= before 7: (bf) r7 = r1")
|
||||
__msg("frame1: regs=r1,r6 stack= before 6: (bf) r6 = r1")
|
||||
__msg("frame1: parent state regs=r1 stack=")
|
||||
__msg("frame0: parent state regs=r6 stack=")
|
||||
/* Parent state */
|
||||
__msg("frame1: last_idx 4 first_idx 4 subseq_idx 6")
|
||||
__msg("frame1: regs=r1 stack= before 4: (85) call pc+1")
|
||||
__msg("frame0: parent state regs=r1,r6 stack=")
|
||||
/* Parent state */
|
||||
__msg("frame0: last_idx 3 first_idx 1 subseq_idx 4")
|
||||
__msg("frame0: regs=r0,r1,r6 stack= before 3: (bf) r6 = r0")
|
||||
__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_many_frames(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == r6.id */
|
||||
"r1 = r0;"
|
||||
"r6 = r0;"
|
||||
"call precision_many_frames__foo;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
static __naked __noinline __used
|
||||
void precision_many_frames__foo(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* conflate one of the register numbers (r6) with outer frame,
|
||||
* to verify that those are tracked independently
|
||||
*/
|
||||
"r6 = r1;"
|
||||
"r7 = r1;"
|
||||
"call precision_many_frames__bar;"
|
||||
"exit"
|
||||
::: __clobber_all);
|
||||
}
|
||||
|
||||
static __naked __noinline __used
|
||||
void precision_many_frames__bar(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* force r1 to be precise, this immediately marks:
|
||||
* - bar frame r1
|
||||
* - foo frame r{1,6,7}
|
||||
* - main frame r{1,6}
|
||||
*/
|
||||
"r2 = r10;"
|
||||
"r2 += r1;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
::: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that scalars with the same IDs are marked precise on stack as
|
||||
* well as in registers.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
/* foo frame */
|
||||
__msg("frame1: regs=r1 stack=-8,-16 before 9: (bf) r2 = r10")
|
||||
__msg("frame1: regs=r1 stack=-8,-16 before 8: (7b) *(u64 *)(r10 -16) = r1")
|
||||
__msg("frame1: regs=r1 stack=-8 before 7: (7b) *(u64 *)(r10 -8) = r1")
|
||||
__msg("frame1: regs=r1 stack= before 4: (85) call pc+2")
|
||||
/* main frame */
|
||||
__msg("frame0: regs=r0,r1 stack=-8 before 3: (7b) *(u64 *)(r10 -8) = r1")
|
||||
__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
|
||||
__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_stack(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r0 = random number up to 0xff */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* tie r0.id == r1.id == fp[-8].id */
|
||||
"r1 = r0;"
|
||||
"*(u64*)(r10 - 8) = r1;"
|
||||
"call precision_stack__foo;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
static __naked __noinline __used
|
||||
void precision_stack__foo(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* conflate one of the stack slots (fp[-8]) with the outer frame,
|
||||
* to verify that those are tracked independently
|
||||
*/
|
||||
"*(u64*)(r10 - 8) = r1;"
|
||||
"*(u64*)(r10 - 16) = r1;"
|
||||
/* force r1 to be precise, this immediately marks:
|
||||
* - foo frame r1,fp{-8,-16}
|
||||
* - main frame r1,fp{-8}
|
||||
*/
|
||||
"r2 = r10;"
|
||||
"r2 += r1;"
|
||||
"exit"
|
||||
::: __clobber_all);
|
||||
}
|
||||
|
||||
/* Use two separate scalar IDs to check that these are propagated
|
||||
* independently.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
/* r{6,7} */
|
||||
__msg("11: (0f) r3 += r7")
|
||||
__msg("frame0: regs=r6,r7 stack= before 10: (bf) r3 = r10")
|
||||
/* ... skip some insns ... */
|
||||
__msg("frame0: regs=r6,r7 stack= before 3: (bf) r7 = r0")
|
||||
__msg("frame0: regs=r0,r6 stack= before 2: (bf) r6 = r0")
|
||||
/* r{8,9} */
|
||||
__msg("12: (0f) r3 += r9")
|
||||
__msg("frame0: regs=r8,r9 stack= before 11: (0f) r3 += r7")
|
||||
/* ... skip some insns ... */
|
||||
__msg("frame0: regs=r8,r9 stack= before 7: (bf) r9 = r0")
|
||||
__msg("frame0: regs=r0,r8 stack= before 6: (bf) r8 = r0")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void precision_two_ids(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* r6 = random number up to 0xff
|
||||
* r6.id == r7.id
|
||||
*/
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
"r6 = r0;"
|
||||
"r7 = r0;"
|
||||
/* same, but for r{8,9} */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
"r8 = r0;"
|
||||
"r9 = r0;"
|
||||
/* clear r0 id */
|
||||
"r0 = 0;"
|
||||
/* force checkpoint */
|
||||
"goto +0;"
|
||||
"r3 = r10;"
|
||||
/* force r7 to be precise, this also marks r6 */
|
||||
"r3 += r7;"
|
||||
/* force r9 to be precise, this also marks r8 */
|
||||
"r3 += r9;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Verify that check_ids() is used by regsafe() for scalars.
|
||||
*
|
||||
* r9 = ... some pointer with range X ...
|
||||
* r6 = ... unbound scalar ID=a ...
|
||||
* r7 = ... unbound scalar ID=b ...
|
||||
* if (r6 > r7) goto +1
|
||||
* r7 = r6
|
||||
* if (r7 > X) goto exit
|
||||
* r9 += r6
|
||||
* ... access memory using r9 ...
|
||||
*
|
||||
* The memory access is safe only if r7 is bounded,
|
||||
* which is true for one branch and not true for another.
|
||||
*/
|
||||
SEC("socket")
|
||||
__failure __msg("register with unbounded min value")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void check_ids_in_regsafe(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* Bump allocated stack */
|
||||
"r1 = 0;"
|
||||
"*(u64*)(r10 - 8) = r1;"
|
||||
/* r9 = pointer to stack */
|
||||
"r9 = r10;"
|
||||
"r9 += -8;"
|
||||
/* r7 = ktime_get_ns() */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r7 = r0;"
|
||||
/* r6 = ktime_get_ns() */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r6 = r0;"
|
||||
/* if r6 > r7 is an unpredictable jump */
|
||||
"if r6 > r7 goto l1_%=;"
|
||||
"r7 = r6;"
|
||||
"l1_%=:"
|
||||
/* if r7 > 4 ...; transfers range to r6 on one execution path
|
||||
* but does not transfer on another
|
||||
*/
|
||||
"if r7 > 4 goto l2_%=;"
|
||||
/* Access memory at r9[r6], r6 is not always bounded */
|
||||
"r9 += r6;"
|
||||
"r0 = *(u8*)(r9 + 0);"
|
||||
"l2_%=:"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Similar to check_ids_in_regsafe.
|
||||
* The l0 could be reached in two states:
|
||||
*
|
||||
* (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
|
||||
* (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
|
||||
*
|
||||
* Where (2) is not safe, as "r7 > 4" check won't propagate range for it.
|
||||
* This example would be considered safe without changes to
|
||||
* mark_chain_precision() to track scalar values with equal IDs.
|
||||
*/
|
||||
SEC("socket")
|
||||
__failure __msg("register with unbounded min value")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void check_ids_in_regsafe_2(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* Bump allocated stack */
|
||||
"r1 = 0;"
|
||||
"*(u64*)(r10 - 8) = r1;"
|
||||
/* r9 = pointer to stack */
|
||||
"r9 = r10;"
|
||||
"r9 += -8;"
|
||||
/* r8 = ktime_get_ns() */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r8 = r0;"
|
||||
/* r7 = ktime_get_ns() */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r7 = r0;"
|
||||
/* r6 = ktime_get_ns() */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r6 = r0;"
|
||||
/* scratch .id from r0 */
|
||||
"r0 = 0;"
|
||||
/* if r6 > r7 is an unpredictable jump */
|
||||
"if r6 > r7 goto l1_%=;"
|
||||
/* tie r6 and r7 .id */
|
||||
"r6 = r7;"
|
||||
"l0_%=:"
|
||||
/* if r7 > 4 exit(0) */
|
||||
"if r7 > 4 goto l2_%=;"
|
||||
/* Access memory at r9[r6] */
|
||||
"r9 += r6;"
|
||||
"r0 = *(u8*)(r9 + 0);"
|
||||
"l2_%=:"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
"l1_%=:"
|
||||
/* tie r6 and r8 .id */
|
||||
"r6 = r8;"
|
||||
"goto l0_%=;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that scalar IDs *are not* generated on register to register
|
||||
* assignments if source register is a constant.
|
||||
*
|
||||
* If such IDs *are* generated the 'l1' below would be reached in
|
||||
* two states:
|
||||
*
|
||||
* (1) r1{.id=A}, r2{.id=A}
|
||||
* (2) r1{.id=C}, r2{.id=C}
|
||||
*
|
||||
* Thus forcing 'if r1 == r2' verification twice.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("11: (1d) if r3 == r4 goto pc+0")
|
||||
__msg("frame 0: propagating r3,r4")
|
||||
__msg("11: safe")
|
||||
__msg("processed 15 insns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void no_scalar_id_for_const(void)
|
||||
{
|
||||
asm volatile (
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
/* unpredictable jump */
|
||||
"if r0 > 7 goto l0_%=;"
|
||||
/* possibly generate same scalar ids for r3 and r4 */
|
||||
"r1 = 0;"
|
||||
"r1 = r1;"
|
||||
"r3 = r1;"
|
||||
"r4 = r1;"
|
||||
"goto l1_%=;"
|
||||
"l0_%=:"
|
||||
/* possibly generate different scalar ids for r3 and r4 */
|
||||
"r1 = 0;"
|
||||
"r2 = 0;"
|
||||
"r3 = r1;"
|
||||
"r4 = r2;"
|
||||
"l1_%=:"
|
||||
/* predictable jump, marks r3 and r4 precise */
|
||||
"if r3 == r4 goto +0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Same as no_scalar_id_for_const() but for 32-bit values */
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("11: (1e) if w3 == w4 goto pc+0")
|
||||
__msg("frame 0: propagating r3,r4")
|
||||
__msg("11: safe")
|
||||
__msg("processed 15 insns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void no_scalar_id_for_const32(void)
|
||||
{
|
||||
asm volatile (
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
/* unpredictable jump */
|
||||
"if r0 > 7 goto l0_%=;"
|
||||
/* possibly generate same scalar ids for r3 and r4 */
|
||||
"w1 = 0;"
|
||||
"w1 = w1;"
|
||||
"w3 = w1;"
|
||||
"w4 = w1;"
|
||||
"goto l1_%=;"
|
||||
"l0_%=:"
|
||||
/* possibly generate different scalar ids for r3 and r4 */
|
||||
"w1 = 0;"
|
||||
"w2 = 0;"
|
||||
"w3 = w1;"
|
||||
"w4 = w2;"
|
||||
"l1_%=:"
|
||||
/* predictable jump, marks r3 and r4 precise */
|
||||
"if w3 == w4 goto +0;"
|
||||
"r0 = 0;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that unique scalar IDs are ignored when new verifier state is
|
||||
* compared to cached verifier state. For this test:
|
||||
* - cached state has no id on r1
|
||||
* - new state has a unique id on r1
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("6: (25) if r6 > 0x7 goto pc+1")
|
||||
__msg("7: (57) r1 &= 255")
|
||||
__msg("8: (bf) r2 = r10")
|
||||
__msg("from 6 to 8: safe")
|
||||
__msg("processed 12 insns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void ignore_unique_scalar_ids_cur(void)
|
||||
{
|
||||
asm volatile (
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r6 = r0;"
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* r1.id == r0.id */
|
||||
"r1 = r0;"
|
||||
/* make r1.id unique */
|
||||
"r0 = 0;"
|
||||
"if r6 > 7 goto l0_%=;"
|
||||
/* clear r1 id, but keep the range compatible */
|
||||
"r1 &= 0xff;"
|
||||
"l0_%=:"
|
||||
/* get here in two states:
|
||||
* - first: r1 has no id (cached state)
|
||||
* - second: r1 has a unique id (should be considered equivalent)
|
||||
*/
|
||||
"r2 = r10;"
|
||||
"r2 += r1;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that unique scalar IDs are ignored when new verifier state is
|
||||
* compared to cached verifier state. For this test:
|
||||
* - cached state has a unique id on r1
|
||||
* - new state has no id on r1
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
__msg("6: (25) if r6 > 0x7 goto pc+1")
|
||||
__msg("7: (05) goto pc+1")
|
||||
__msg("9: (bf) r2 = r10")
|
||||
__msg("9: safe")
|
||||
__msg("processed 13 insns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void ignore_unique_scalar_ids_old(void)
|
||||
{
|
||||
asm volatile (
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r6 = r0;"
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
/* r1.id == r0.id */
|
||||
"r1 = r0;"
|
||||
/* make r1.id unique */
|
||||
"r0 = 0;"
|
||||
"if r6 > 7 goto l1_%=;"
|
||||
"goto l0_%=;"
|
||||
"l1_%=:"
|
||||
/* clear r1 id, but keep the range compatible */
|
||||
"r1 &= 0xff;"
|
||||
"l0_%=:"
|
||||
/* get here in two states:
|
||||
* - first: r1 has a unique id (cached state)
|
||||
* - second: r1 has no id (should be considered equivalent)
|
||||
*/
|
||||
"r2 = r10;"
|
||||
"r2 += r1;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
/* Check that two different scalar IDs in a verified state can't be
|
||||
* mapped to the same scalar ID in current state.
|
||||
*/
|
||||
SEC("socket")
|
||||
__success __log_level(2)
|
||||
/* The exit instruction should be reachable from two states,
|
||||
* use two matches and "processed .. insns" to ensure this.
|
||||
*/
|
||||
__msg("13: (95) exit")
|
||||
__msg("13: (95) exit")
|
||||
__msg("processed 18 insns")
|
||||
__flag(BPF_F_TEST_STATE_FREQ)
|
||||
__naked void two_old_ids_one_cur_id(void)
|
||||
{
|
||||
asm volatile (
|
||||
/* Give unique scalar IDs to r{6,7} */
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
"r6 = r0;"
|
||||
"call %[bpf_ktime_get_ns];"
|
||||
"r0 &= 0xff;"
|
||||
"r7 = r0;"
|
||||
"r0 = 0;"
|
||||
/* Maybe make r{6,7} IDs identical */
|
||||
"if r6 > r7 goto l0_%=;"
|
||||
"goto l1_%=;"
|
||||
"l0_%=:"
|
||||
"r6 = r7;"
|
||||
"l1_%=:"
|
||||
/* Mark r{6,7} precise.
|
||||
* Get here in two states:
|
||||
* - first: r6{.id=A}, r7{.id=B} (cached state)
|
||||
* - second: r6{.id=A}, r7{.id=A}
|
||||
* Currently we don't want to consider such states equivalent.
|
||||
* Thus "exit;" would be verified twice.
|
||||
*/
|
||||
"r2 = r10;"
|
||||
"r2 += r6;"
|
||||
"r2 += r7;"
|
||||
"exit;"
|
||||
:
|
||||
: __imm(bpf_ktime_get_ns)
|
||||
: __clobber_all);
|
||||
}
|
||||
|
||||
char _license[] SEC("license") = "GPL";
|
tools/testing/selftests/bpf/progs/vrf_socket_lookup.c (new file, 89 lines)
@@ -0,0 +1,89 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
#include <linux/bpf.h>
|
||||
#include <bpf/bpf_helpers.h>
|
||||
#include <bpf/bpf_endian.h>
|
||||
|
||||
#include <linux/ip.h>
|
||||
#include <linux/in.h>
|
||||
#include <linux/if_ether.h>
|
||||
#include <linux/pkt_cls.h>
|
||||
#include <stdbool.h>
|
||||
|
||||
int lookup_status;
|
||||
bool test_xdp;
|
||||
bool tcp_skc;
|
||||
|
||||
#define CUR_NS BPF_F_CURRENT_NETNS
|
||||
|
||||
static void socket_lookup(void *ctx, void *data_end, void *data)
|
||||
{
|
||||
struct ethhdr *eth = data;
|
||||
struct bpf_sock_tuple *tp;
|
||||
struct bpf_sock *sk;
|
||||
struct iphdr *iph;
|
||||
int tplen;
|
||||
|
||||
if (eth + 1 > data_end)
|
||||
return;
|
||||
|
||||
if (eth->h_proto != bpf_htons(ETH_P_IP))
|
||||
return;
|
||||
|
||||
iph = (struct iphdr *)(eth + 1);
|
||||
if (iph + 1 > data_end)
|
||||
return;
|
||||
|
||||
tp = (struct bpf_sock_tuple *)&iph->saddr;
|
||||
tplen = sizeof(tp->ipv4);
|
||||
if ((void *)tp + tplen > data_end)
|
||||
return;
|
||||
|
||||
switch (iph->protocol) {
|
||||
case IPPROTO_TCP:
|
||||
if (tcp_skc)
|
||||
sk = bpf_skc_lookup_tcp(ctx, tp, tplen, CUR_NS, 0);
|
||||
else
|
||||
sk = bpf_sk_lookup_tcp(ctx, tp, tplen, CUR_NS, 0);
|
||||
break;
|
||||
case IPPROTO_UDP:
|
||||
sk = bpf_sk_lookup_udp(ctx, tp, tplen, CUR_NS, 0);
|
||||
break;
|
||||
default:
|
||||
return;
|
||||
}
|
||||
|
||||
lookup_status = 0;
|
||||
|
||||
if (sk) {
|
||||
bpf_sk_release(sk);
|
||||
lookup_status = 1;
|
||||
}
|
||||
}
|
||||
|
||||
SEC("tc")
|
||||
int tc_socket_lookup(struct __sk_buff *skb)
|
||||
{
|
||||
void *data_end = (void *)(long)skb->data_end;
|
||||
void *data = (void *)(long)skb->data;
|
||||
|
||||
if (test_xdp)
|
||||
return TC_ACT_UNSPEC;
|
||||
|
||||
socket_lookup(skb, data_end, data);
|
||||
return TC_ACT_UNSPEC;
|
||||
}
|
||||
|
||||
SEC("xdp")
|
||||
int xdp_socket_lookup(struct xdp_md *xdp)
|
||||
{
|
||||
void *data_end = (void *)(long)xdp->data_end;
|
||||
void *data = (void *)(long)xdp->data;
|
||||
|
||||
if (!test_xdp)
|
||||
return XDP_PASS;
|
||||
|
||||
socket_lookup(xdp, data_end, data);
|
||||
return XDP_PASS;
|
||||
}
|
||||
|
||||
char _license[] SEC("license") = "GPL";
|
@@ -1341,45 +1341,46 @@ static bool cmp_str_seq(const char *log, const char *exp)
|
||||
return true;
|
||||
}
|
||||
|
||||
static int get_xlated_program(int fd_prog, struct bpf_insn **buf, int *cnt)
|
||||
static struct bpf_insn *get_xlated_program(int fd_prog, int *cnt)
|
||||
{
|
||||
__u32 buf_element_size = sizeof(struct bpf_insn);
|
||||
struct bpf_prog_info info = {};
|
||||
__u32 info_len = sizeof(info);
|
||||
__u32 xlated_prog_len;
|
||||
__u32 buf_element_size = sizeof(struct bpf_insn);
|
||||
struct bpf_insn *buf;
|
||||
|
||||
if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
|
||||
perror("bpf_prog_get_info_by_fd failed");
|
||||
return -1;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
xlated_prog_len = info.xlated_prog_len;
|
||||
if (xlated_prog_len % buf_element_size) {
|
||||
printf("Program length %d is not multiple of %d\n",
|
||||
xlated_prog_len, buf_element_size);
|
||||
return -1;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
*cnt = xlated_prog_len / buf_element_size;
|
||||
*buf = calloc(*cnt, buf_element_size);
|
||||
buf = calloc(*cnt, buf_element_size);
|
||||
if (!buf) {
|
||||
perror("can't allocate xlated program buffer");
|
||||
return -ENOMEM;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
bzero(&info, sizeof(info));
|
||||
info.xlated_prog_len = xlated_prog_len;
|
||||
info.xlated_prog_insns = (__u64)(unsigned long)*buf;
|
||||
info.xlated_prog_insns = (__u64)(unsigned long)buf;
|
||||
if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
|
||||
perror("second bpf_prog_get_info_by_fd failed");
|
||||
goto out_free_buf;
|
||||
}
|
||||
|
||||
return 0;
|
||||
return buf;
|
||||
|
||||
out_free_buf:
|
||||
free(*buf);
|
||||
return -1;
|
||||
free(buf);
|
||||
return NULL;
|
||||
}
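/* Note: get_xlated_program() now returns the instruction buffer directly
 * (NULL on failure) instead of an int status; callers own the buffer and
 * are expected to free() it when done.
 */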
|
||||
|
||||
static bool is_null_insn(struct bpf_insn *insn)
|
||||
@@ -1512,7 +1513,8 @@ static bool check_xlated_program(struct bpf_test *test, int fd_prog)
|
||||
if (!check_expected && !check_unexpected)
|
||||
goto out;
|
||||
|
||||
if (get_xlated_program(fd_prog, &buf, &cnt)) {
|
||||
buf = get_xlated_program(fd_prog, &cnt);
|
||||
if (!buf) {
|
||||
printf("FAIL: can't get xlated program\n");
|
||||
result = false;
|
||||
goto out;
|
||||
|
@@ -46,7 +46,7 @@
|
||||
mark_precise: frame0: regs=r2 stack= before 20\
|
||||
mark_precise: frame0: parent state regs=r2 stack=:\
|
||||
mark_precise: frame0: last_idx 19 first_idx 10\
|
||||
mark_precise: frame0: regs=r2 stack= before 19\
|
||||
mark_precise: frame0: regs=r2,r9 stack= before 19\
|
||||
mark_precise: frame0: regs=r9 stack= before 18\
|
||||
mark_precise: frame0: regs=r8,r9 stack= before 17\
|
||||
mark_precise: frame0: regs=r0,r9 stack= before 15\
|
||||
@@ -106,10 +106,10 @@
|
||||
mark_precise: frame0: regs=r2 stack= before 22\
|
||||
mark_precise: frame0: parent state regs=r2 stack=:\
|
||||
mark_precise: frame0: last_idx 20 first_idx 20\
|
||||
mark_precise: frame0: regs=r2 stack= before 20\
|
||||
mark_precise: frame0: parent state regs=r2 stack=:\
|
||||
mark_precise: frame0: regs=r2,r9 stack= before 20\
|
||||
mark_precise: frame0: parent state regs=r2,r9 stack=:\
|
||||
mark_precise: frame0: last_idx 19 first_idx 17\
|
||||
mark_precise: frame0: regs=r2 stack= before 19\
|
||||
mark_precise: frame0: regs=r2,r9 stack= before 19\
|
||||
mark_precise: frame0: regs=r9 stack= before 18\
|
||||
mark_precise: frame0: regs=r8,r9 stack= before 17\
|
||||
mark_precise: frame0: parent state regs= stack=:",