// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2017 Facebook
 */
#include <linux/bpf.h>
#include <linux/btf_ids.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/etherdevice.h>
#include <linux/filter.h>
#include <linux/rcupdate_trace.h>
#include <linux/sched/signal.h>
#include <net/bpf_sk_storage.h>
#include <net/sock.h>
#include <net/tcp.h>
#include <net/net_namespace.h>
#include <linux/error-injection.h>
#include <linux/smp.h>
#include <linux/sock_diag.h>
#include <net/xdp.h>

#define CREATE_TRACE_POINTS
#include <trace/events/bpf_test_run.h>

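/* Timing helper for repeated test runs: "mode" picks whether preemption or
 * only migration is disabled around the measured loop, "i" counts completed
 * iterations and "time_spent" accumulates the elapsed nanoseconds.
 */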
struct bpf_test_timer {
	enum { NO_PREEMPT, NO_MIGRATE } mode;
	u32 i;
	u64 time_start, time_spent;
};

static void bpf_test_timer_enter(struct bpf_test_timer *t)
	__acquires(rcu)
{
	rcu_read_lock();
	if (t->mode == NO_PREEMPT)
		preempt_disable();
	else
		migrate_disable();

	t->time_start = ktime_get_ns();
}

static void bpf_test_timer_leave(struct bpf_test_timer *t)
	__releases(rcu)
{
	t->time_start = 0;

	if (t->mode == NO_PREEMPT)
		preempt_enable();
	else
		migrate_enable();
	rcu_read_unlock();
}

static bool bpf_test_timer_continue(struct bpf_test_timer *t, u32 repeat, int *err, u32 *duration)
	__must_hold(rcu)
{
	t->i++;
	if (t->i >= repeat) {
		/* We're done. */
		t->time_spent += ktime_get_ns() - t->time_start;
		do_div(t->time_spent, t->i);
		*duration = t->time_spent > U32_MAX ? U32_MAX : (u32)t->time_spent;
		*err = 0;
		goto reset;
	}

	if (signal_pending(current)) {
		/* During iteration: we've been cancelled, abort. */
		*err = -EINTR;
		goto reset;
	}

	if (need_resched()) {
		/* During iteration: we need to reschedule between runs. */
		t->time_spent += ktime_get_ns() - t->time_start;
		bpf_test_timer_leave(t);
		cond_resched();
		bpf_test_timer_enter(t);
	}

	/* Do another round. */
	return true;

reset:
	t->i = 0;
	return false;
}

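/* Run @prog @repeat times (at least once) against @ctx. The last return
 * value is reported through @retval and the average time per run, in
 * nanoseconds, through @time. Per-type cgroup storage is allocated up
 * front and freed again once the runs complete.
 */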
static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
			u32 *retval, u32 *time, bool xdp)
{
	struct bpf_prog_array_item item = {.prog = prog};
	struct bpf_run_ctx *old_ctx;
	struct bpf_cg_run_ctx run_ctx;
	struct bpf_test_timer t = { NO_MIGRATE };
	enum bpf_cgroup_storage_type stype;
	int ret;

	for_each_cgroup_storage_type(stype) {
		item.cgroup_storage[stype] = bpf_cgroup_storage_alloc(prog, stype);
		if (IS_ERR(item.cgroup_storage[stype])) {
			item.cgroup_storage[stype] = NULL;
			for_each_cgroup_storage_type(stype)
				bpf_cgroup_storage_free(item.cgroup_storage[stype]);
			return -ENOMEM;
		}
	}

	if (!repeat)
		repeat = 1;

	bpf_test_timer_enter(&t);
	old_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
	do {
		run_ctx.prog_item = &item;
		if (xdp)
			*retval = bpf_prog_run_xdp(prog, ctx);
		else
			*retval = bpf_prog_run(prog, ctx);
	} while (bpf_test_timer_continue(&t, repeat, &ret, time));
	bpf_reset_run_ctx(old_ctx);
	bpf_test_timer_leave(&t);

	for_each_cgroup_storage_type(stype)
		bpf_cgroup_storage_free(item.cgroup_storage[stype]);

	return ret;
}

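/* Copy the test results (output data, retval, duration) back to the user's
 * bpf_attr. The data copy is clamped to the caller's size hint, when one was
 * given, and -ENOSPC is returned if the output had to be truncated.
 */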
static int bpf_test_finish(const union bpf_attr *kattr,
			   union bpf_attr __user *uattr, const void *data,
			   u32 size, u32 retval, u32 duration)
{
	void __user *data_out = u64_to_user_ptr(kattr->test.data_out);
	int err = -EFAULT;
	u32 copy_size = size;

	/* Clamp copy if the user has provided a size hint, but copy the full
	 * buffer if not to retain old behaviour.
	 */
	if (kattr->test.data_size_out &&
	    copy_size > kattr->test.data_size_out) {
		copy_size = kattr->test.data_size_out;
		err = -ENOSPC;
	}

	if (data_out && copy_to_user(data_out, data, copy_size))
		goto out;
	if (copy_to_user(&uattr->test.data_size_out, &size, sizeof(size)))
		goto out;
	if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval)))
		goto out;
	if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration)))
		goto out;
	if (err != -ENOSPC)
		err = 0;
out:
	trace_bpf_test_finish(&err);
	return err;
}

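/* The bpf_fentry_test*(), bpf_modify_return_test() and bpf_kfunc_call_test*()
 * functions below serve as attach/call targets for the test-run
 * infrastructure; noinline keeps them out of line so they remain visible in
 * BTF and can be traced.
 */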
/* Integer types of various sizes and pointer combinations cover variety of
 * architecture dependent calling conventions. 7+ can be supported in the
 * future.
 */
__diag_push();
__diag_ignore(GCC, 8, "-Wmissing-prototypes",
	      "Global functions as their definitions will be in vmlinux BTF");
int noinline bpf_fentry_test1(int a)
{
	return a + 1;
}

int noinline bpf_fentry_test2(int a, u64 b)
{
	return a + b;
}

int noinline bpf_fentry_test3(char a, int b, u64 c)
{
	return a + b + c;
}

int noinline bpf_fentry_test4(void *a, char b, int c, u64 d)
{
	return (long)a + b + c + d;
}

int noinline bpf_fentry_test5(u64 a, void *b, short c, int d, u64 e)
{
	return a + (long)b + c + d + e;
}

int noinline bpf_fentry_test6(u64 a, void *b, short c, int d, void *e, u64 f)
{
	return a + (long)b + c + d + (long)e + f;
}

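/* bpf_fentry_test7() and bpf_fentry_test8() take a pointer argument so that
 * PTR_TO_BTF_ID handling can be exercised, including the NULL case (test7 is
 * called with a NULL pointer in bpf_prog_test_run_tracing() below).
 */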
struct bpf_fentry_test_t {
	struct bpf_fentry_test_t *a;
};

int noinline bpf_fentry_test7(struct bpf_fentry_test_t *arg)
{
	return (long)arg;
}

int noinline bpf_fentry_test8(struct bpf_fentry_test_t *arg)
{
	return (long)arg->a;
}

int noinline bpf_modify_return_test(int a, int *b)
{
	*b += 1;
	return a + *b;
}

u64 noinline bpf_kfunc_call_test1(struct sock *sk, u32 a, u64 b, u32 c, u64 d)
{
	return a + b + c + d;
}

int noinline bpf_kfunc_call_test2(struct sock *sk, u32 a, u32 b)
{
	return a + b;
}

struct sock * noinline bpf_kfunc_call_test3(struct sock *sk)
{
	return sk;
}

__diag_pop();

ALLOW_ERROR_INJECTION(bpf_modify_return_test, ERRNO);

BTF_SET_START(test_sk_kfunc_ids)
BTF_ID(func, bpf_kfunc_call_test1)
BTF_ID(func, bpf_kfunc_call_test2)
BTF_ID(func, bpf_kfunc_call_test3)
BTF_SET_END(test_sk_kfunc_ids)

bool bpf_prog_test_check_kfunc_call(u32 kfunc_id)
{
	return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
}

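/* Allocate a zeroed buffer of @size plus the requested headroom and tailroom
 * and copy the user-supplied test data into it. Returns an ERR_PTR() on bad
 * sizes, allocation failure or a faulting copy.
 */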
static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
			   u32 headroom, u32 tailroom)
{
	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
	u32 user_size = kattr->test.data_size_in;
	void *data;

	if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom)
		return ERR_PTR(-EINVAL);

	if (user_size > size)
		return ERR_PTR(-EMSGSIZE);

	data = kzalloc(size + headroom + tailroom, GFP_USER);
	if (!data)
		return ERR_PTR(-ENOMEM);

	if (copy_from_user(data + headroom, data_in, user_size)) {
		kfree(data);
		return ERR_PTR(-EFAULT);
	}

	return data;
}

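/* Test run for tracing programs: call the dummy bpf_fentry_test*() and
 * bpf_modify_return_test() targets so the attached program fires, then
 * report the fmod_ret result and its observed side effect via retval.
 */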
int bpf_prog_test_run_tracing(struct bpf_prog *prog,
			      const union bpf_attr *kattr,
			      union bpf_attr __user *uattr)
{
	struct bpf_fentry_test_t arg = {};
	u16 side_effect = 0, ret = 0;
	int b = 2, err = -EFAULT;
	u32 retval = 0;

	if (kattr->test.flags || kattr->test.cpu)
		return -EINVAL;

	switch (prog->expected_attach_type) {
	case BPF_TRACE_FENTRY:
	case BPF_TRACE_FEXIT:
		if (bpf_fentry_test1(1) != 2 ||
		    bpf_fentry_test2(2, 3) != 5 ||
		    bpf_fentry_test3(4, 5, 6) != 15 ||
		    bpf_fentry_test4((void *)7, 8, 9, 10) != 34 ||
		    bpf_fentry_test5(11, (void *)12, 13, 14, 15) != 65 ||
		    bpf_fentry_test6(16, (void *)17, 18, 19, (void *)20, 21) != 111 ||
		    bpf_fentry_test7((struct bpf_fentry_test_t *)0) != 0 ||
		    bpf_fentry_test8(&arg) != 0)
			goto out;
		break;
	case BPF_MODIFY_RETURN:
		ret = bpf_modify_return_test(1, &b);
		if (b != 2)
			side_effect = 1;
		break;
	default:
		goto out;
	}

	retval = ((u32)side_effect << 16) | ret;
	if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval)))
		goto out;

	err = 0;
out:
	trace_bpf_test_finish(&err);
	return err;
}

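/* Test run for raw tracepoint programs: the user-supplied ctx is copied into
 * the kernel and the program is run once, optionally pinned to a specific
 * CPU via BPF_F_TEST_RUN_ON_CPU.
 */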
struct bpf_raw_tp_test_run_info {
	struct bpf_prog *prog;
	void *ctx;
	u32 retval;
};

static void
__bpf_prog_test_run_raw_tp(void *data)
{
	struct bpf_raw_tp_test_run_info *info = data;

	rcu_read_lock();
	info->retval = bpf_prog_run(info->prog, info->ctx);
	rcu_read_unlock();
}

int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
			     const union bpf_attr *kattr,
			     union bpf_attr __user *uattr)
{
	void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
	__u32 ctx_size_in = kattr->test.ctx_size_in;
	struct bpf_raw_tp_test_run_info info;
	int cpu = kattr->test.cpu, err = 0;
	int current_cpu;

	/* doesn't support data_in/out, ctx_out, duration, or repeat */
	if (kattr->test.data_in || kattr->test.data_out ||
	    kattr->test.ctx_out || kattr->test.duration ||
	    kattr->test.repeat)
		return -EINVAL;

	if (ctx_size_in < prog->aux->max_ctx_offset ||
	    ctx_size_in > MAX_BPF_FUNC_ARGS * sizeof(u64))
		return -EINVAL;

	if ((kattr->test.flags & BPF_F_TEST_RUN_ON_CPU) == 0 && cpu != 0)
		return -EINVAL;

	if (ctx_size_in) {
		info.ctx = kzalloc(ctx_size_in, GFP_USER);
		if (!info.ctx)
			return -ENOMEM;
		if (copy_from_user(info.ctx, ctx_in, ctx_size_in)) {
			err = -EFAULT;
			goto out;
		}
	} else {
		info.ctx = NULL;
	}

	info.prog = prog;

	current_cpu = get_cpu();
	if ((kattr->test.flags & BPF_F_TEST_RUN_ON_CPU) == 0 ||
	    cpu == current_cpu) {
		__bpf_prog_test_run_raw_tp(&info);
	} else if (cpu >= nr_cpu_ids || !cpu_online(cpu)) {
		/* smp_call_function_single() also checks cpu_online()
		 * after csd_lock(). However, since cpu is from user
		 * space, let's do an extra quick check to filter out
		 * invalid value before smp_call_function_single().
		 */
		err = -ENXIO;
	} else {
		err = smp_call_function_single(cpu, __bpf_prog_test_run_raw_tp,
					       &info, 1);
	}
	put_cpu();

	if (!err &&
	    copy_to_user(&uattr->test.retval, &info.retval, sizeof(u32)))
		err = -EFAULT;

out:
	kfree(info.ctx);
	return err;
}

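/* Copy in the optional user-provided context object, rejecting input that
 * has non-zero bytes beyond the kernel's view of the structure.
 */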
static void *bpf_ctx_init(const union bpf_attr *kattr, u32 max_size)
{
	void __user *data_in = u64_to_user_ptr(kattr->test.ctx_in);
	void __user *data_out = u64_to_user_ptr(kattr->test.ctx_out);
	u32 size = kattr->test.ctx_size_in;
	void *data;
	int err;

	if (!data_in && !data_out)
		return NULL;

	data = kzalloc(max_size, GFP_USER);
	if (!data)
		return ERR_PTR(-ENOMEM);

	if (data_in) {
		err = bpf_check_uarg_tail_zero(USER_BPFPTR(data_in), max_size, size);
		if (err) {
			kfree(data);
			return ERR_PTR(err);
		}

		size = min_t(u32, max_size, size);
		if (copy_from_user(data, data_in, size)) {
			kfree(data);
			return ERR_PTR(-EFAULT);
		}
	}
	return data;
}

static int bpf_ctx_finish(const union bpf_attr *kattr,
			  union bpf_attr __user *uattr, const void *data,
			  u32 size)
{
	void __user *data_out = u64_to_user_ptr(kattr->test.ctx_out);
	int err = -EFAULT;
	u32 copy_size = size;

	if (!data || !data_out)
		return 0;

	if (copy_size > kattr->test.ctx_size_out) {
		copy_size = kattr->test.ctx_size_out;
		err = -ENOSPC;
	}

	if (copy_to_user(data_out, data, copy_size))
		goto out;
	if (copy_to_user(&uattr->test.ctx_size_out, &size, sizeof(size)))
		goto out;
	if (err != -ENOSPC)
		err = 0;
out:
	return err;
}

/**
 * range_is_zero - test whether buffer is initialized
 * @buf: buffer to check
 * @from: check from this position
 * @to: check up until (excluding) this position
 *
 * This function returns true if there is no non-zero byte
 * in the buf in the range [from, to).
 */
static inline bool range_is_zero(void *buf, size_t from, size_t to)
{
	return !memchr_inv((u8 *)buf + from, 0, to - from);
}

static int convert___skb_to_skb(struct sk_buff *skb, struct __sk_buff *__skb)
{
	struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;

	if (!__skb)
		return 0;

	/* make sure the fields we don't use are zeroed */
	if (!range_is_zero(__skb, 0, offsetof(struct __sk_buff, mark)))
		return -EINVAL;

	/* mark is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, mark),
			   offsetof(struct __sk_buff, priority)))
		return -EINVAL;

	/* priority is allowed */
	/* ingress_ifindex is allowed */
	/* ifindex is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, ifindex),
			   offsetof(struct __sk_buff, cb)))
		return -EINVAL;

	/* cb is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, cb),
			   offsetof(struct __sk_buff, tstamp)))
		return -EINVAL;

	/* tstamp is allowed */
	/* wire_len is allowed */
	/* gso_segs is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, gso_segs),
			   offsetof(struct __sk_buff, gso_size)))
		return -EINVAL;

	/* gso_size is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, gso_size),
			   offsetof(struct __sk_buff, hwtstamp)))
		return -EINVAL;

	/* hwtstamp is allowed */

	if (!range_is_zero(__skb, offsetofend(struct __sk_buff, hwtstamp),
			   sizeof(struct __sk_buff)))
		return -EINVAL;

	skb->mark = __skb->mark;
	skb->priority = __skb->priority;
	skb->skb_iif = __skb->ingress_ifindex;
	skb->tstamp = __skb->tstamp;
	memcpy(&cb->data, __skb->cb, QDISC_CB_PRIV_LEN);

	if (__skb->wire_len == 0) {
		cb->pkt_len = skb->len;
	} else {
		if (__skb->wire_len < skb->len ||
		    __skb->wire_len > GSO_MAX_SIZE)
			return -EINVAL;
		cb->pkt_len = __skb->wire_len;
	}

	if (__skb->gso_segs > GSO_MAX_SEGS)
		return -EINVAL;
	skb_shinfo(skb)->gso_segs = __skb->gso_segs;
	skb_shinfo(skb)->gso_size = __skb->gso_size;
	skb_shinfo(skb)->hwtstamps.hwtstamp = __skb->hwtstamp;

	return 0;
}

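/* Copy the fields a program may have changed on the real skb back out to the
 * user-visible __sk_buff.
 */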
static void convert_skb_to___skb(struct sk_buff *skb, struct __sk_buff *__skb)
{
	struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;

	if (!__skb)
		return;

	__skb->mark = skb->mark;
	__skb->priority = skb->priority;
	__skb->ingress_ifindex = skb->skb_iif;
	__skb->ifindex = skb->dev->ifindex;
	__skb->tstamp = skb->tstamp;
	memcpy(__skb->cb, &cb->data, QDISC_CB_PRIV_LEN);
	__skb->wire_len = cb->pkt_len;
	__skb->gso_segs = skb_shinfo(skb)->gso_segs;
	__skb->hwtstamp = skb_shinfo(skb)->hwtstamps.hwtstamp;
}

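/* BPF_PROG_RUN for skb-based program types: build a real sk_buff (with a
 * dummy socket) around the user-supplied data, run the program through
 * bpf_test_run() and copy the resulting packet and __sk_buff back out.
 */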
int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
			  union bpf_attr __user *uattr)
{
	bool is_l2 = false, is_direct_pkt_access = false;
	struct net *net = current->nsproxy->net_ns;
	struct net_device *dev = net->loopback_dev;
	u32 size = kattr->test.data_size_in;
	u32 repeat = kattr->test.repeat;
	struct __sk_buff *ctx = NULL;
	u32 retval, duration;
	int hh_len = ETH_HLEN;
	struct sk_buff *skb;
	struct sock *sk;
	void *data;
	int ret;

	if (kattr->test.flags || kattr->test.cpu)
		return -EINVAL;

	data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
	if (IS_ERR(data))
		return PTR_ERR(data);

	ctx = bpf_ctx_init(kattr, sizeof(struct __sk_buff));
	if (IS_ERR(ctx)) {
		kfree(data);
		return PTR_ERR(ctx);
	}

	switch (prog->type) {
	case BPF_PROG_TYPE_SCHED_CLS:
	case BPF_PROG_TYPE_SCHED_ACT:
		is_l2 = true;
		fallthrough;
	case BPF_PROG_TYPE_LWT_IN:
	case BPF_PROG_TYPE_LWT_OUT:
	case BPF_PROG_TYPE_LWT_XMIT:
		is_direct_pkt_access = true;
		break;
	default:
		break;
	}

	sk = kzalloc(sizeof(struct sock), GFP_USER);
	if (!sk) {
		kfree(data);
		kfree(ctx);
		return -ENOMEM;
	}
	sock_net_set(sk, net);
	sock_init_data(NULL, sk);

	skb = build_skb(data, 0);
	if (!skb) {
		kfree(data);
		kfree(ctx);
		kfree(sk);
		return -ENOMEM;
	}
	skb->sk = sk;

	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
	__skb_put(skb, size);
	if (ctx && ctx->ifindex > 1) {
		dev = dev_get_by_index(net, ctx->ifindex);
		if (!dev) {
			ret = -ENODEV;
			goto out;
		}
	}
	skb->protocol = eth_type_trans(skb, dev);
	skb_reset_network_header(skb);

	switch (skb->protocol) {
	case htons(ETH_P_IP):
		sk->sk_family = AF_INET;
		if (sizeof(struct iphdr) <= skb_headlen(skb)) {
			sk->sk_rcv_saddr = ip_hdr(skb)->saddr;
			sk->sk_daddr = ip_hdr(skb)->daddr;
		}
		break;
#if IS_ENABLED(CONFIG_IPV6)
	case htons(ETH_P_IPV6):
		sk->sk_family = AF_INET6;
		if (sizeof(struct ipv6hdr) <= skb_headlen(skb)) {
			sk->sk_v6_rcv_saddr = ipv6_hdr(skb)->saddr;
			sk->sk_v6_daddr = ipv6_hdr(skb)->daddr;
		}
		break;
#endif
	default:
		break;
	}

2017-03-31 12:45:38 +08:00
|
|
|
if (is_l2)
|
2018-07-11 21:30:14 +08:00
|
|
|
__skb_push(skb, hh_len);
|
2017-03-31 12:45:38 +08:00
|
|
|
if (is_direct_pkt_access)
|
2017-09-25 08:25:50 +08:00
|
|
|
bpf_compute_data_pointers(skb);
|
2019-04-10 02:49:09 +08:00
|
|
|
ret = convert___skb_to_skb(skb, ctx);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
2019-12-14 01:51:10 +08:00
|
|
|
ret = bpf_test_run(prog, skb, repeat, &retval, &duration, false);
|
2019-04-10 02:49:09 +08:00
|
|
|
if (ret)
|
|
|
|
goto out;
|
2018-07-11 21:30:14 +08:00
|
|
|
if (!is_l2) {
|
|
|
|
if (skb_headroom(skb) < hh_len) {
|
|
|
|
int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb));
|
|
|
|
|
|
|
|
if (pskb_expand_head(skb, nhead, 0, GFP_USER)) {
|
2019-04-10 02:49:09 +08:00
|
|
|
ret = -ENOMEM;
|
|
|
|
goto out;
|
2018-07-11 21:30:14 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
memset(__skb_push(skb, hh_len), 0, hh_len);
|
|
|
|
}
|
2019-04-10 02:49:09 +08:00
|
|
|
convert_skb_to___skb(skb, ctx);
|
2018-07-11 21:30:14 +08:00
|
|
|
|
2017-03-31 12:45:38 +08:00
|
|
|
size = skb->len;
|
|
|
|
/* bpf program can never convert linear skb to non-linear */
|
|
|
|
if (WARN_ON_ONCE(skb_is_nonlinear(skb)))
|
|
|
|
size = skb_headlen(skb);
|
2017-05-02 23:36:33 +08:00
|
|
|
ret = bpf_test_finish(kattr, uattr, skb->data, size, retval, duration);
|
2019-04-10 02:49:09 +08:00
|
|
|
if (!ret)
|
|
|
|
ret = bpf_ctx_finish(kattr, uattr, ctx,
|
|
|
|
sizeof(struct __sk_buff));
|
|
|
|
out:
|
2020-08-03 17:05:45 +08:00
|
|
|
if (dev && dev != net->loopback_dev)
|
|
|
|
dev_put(dev);
|
2017-03-31 12:45:38 +08:00
|
|
|
kfree_skb(skb);
	bpf_sk_storage_free(sk);
	kfree(sk);
	kfree(ctx);
	return ret;
}
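
/* Apply a user-supplied xdp_md context to the xdp_buff being set up for the
 * test run: validate the fields, optionally resolve the requested ingress
 * device and RX queue (taking a device reference that xdp_convert_buff_to_md()
 * drops again), and place xdp->data behind the requested metadata area.
 */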
static int xdp_convert_md_to_buff(struct xdp_md *xdp_md, struct xdp_buff *xdp)
{
	unsigned int ingress_ifindex, rx_queue_index;
	struct netdev_rx_queue *rxqueue;
	struct net_device *device;

	if (!xdp_md)
		return 0;

	if (xdp_md->egress_ifindex != 0)
		return -EINVAL;

	ingress_ifindex = xdp_md->ingress_ifindex;
	rx_queue_index = xdp_md->rx_queue_index;

	if (!ingress_ifindex && rx_queue_index)
		return -EINVAL;

	if (ingress_ifindex) {
		device = dev_get_by_index(current->nsproxy->net_ns,
					  ingress_ifindex);
		if (!device)
			return -ENODEV;

		if (rx_queue_index >= device->real_num_rx_queues)
			goto free_dev;

		rxqueue = __netif_get_rx_queue(device, rx_queue_index);

		if (!xdp_rxq_info_is_reg(&rxqueue->xdp_rxq))
			goto free_dev;

		xdp->rxq = &rxqueue->xdp_rxq;
		/* The device is now tracked in the xdp->rxq for later
		 * dev_put()
		 */
	}

	xdp->data = xdp->data_meta + xdp_md->data;
	return 0;

free_dev:
	dev_put(device);
	return -EINVAL;
}
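
/* Copy the possibly modified packet offsets from the xdp_buff back into the
 * user-visible xdp_md and release the device reference taken in
 * xdp_convert_md_to_buff().
 */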
static void xdp_convert_buff_to_md(struct xdp_buff *xdp, struct xdp_md *xdp_md)
{
	if (!xdp_md)
		return;

	xdp_md->data = xdp->data - xdp->data_meta;
	xdp_md->data_end = xdp->data_end - xdp->data_meta;

	if (xdp_md->ingress_ifindex)
		dev_put(xdp->rxq->dev);
}
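
/* Entry point for BPF_PROG_TEST_RUN on XDP programs: build an xdp_buff with
 * XDP headroom/tailroom around the user data, apply the optional xdp_md
 * context, run the program, and return the resulting packet, context and
 * return code to user space.
 */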
int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
			  union bpf_attr __user *uattr)
{
	u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	u32 headroom = XDP_PACKET_HEADROOM;
	u32 size = kattr->test.data_size_in;
	u32 repeat = kattr->test.repeat;
	struct netdev_rx_queue *rxqueue;
	struct xdp_buff xdp = {};
	u32 retval, duration;
	struct xdp_md *ctx;
	u32 max_data_sz;
	void *data;
	int ret = -EINVAL;

	if (prog->expected_attach_type == BPF_XDP_DEVMAP ||
	    prog->expected_attach_type == BPF_XDP_CPUMAP)
		return -EINVAL;

	ctx = bpf_ctx_init(kattr, sizeof(struct xdp_md));
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	if (ctx) {
		/* There can't be user provided data before the meta data */
		if (ctx->data_meta || ctx->data_end != size ||
		    ctx->data > ctx->data_end ||
		    unlikely(xdp_metalen_invalid(ctx->data)))
			goto free_ctx;
		/* Meta data is allocated from the headroom */
		headroom -= ctx->data;
	}

	/* XDP has extra tailroom as (most) drivers use a full page */
	max_data_sz = 4096 - headroom - tailroom;

	data = bpf_test_init(kattr, max_data_sz, headroom, tailroom);
	if (IS_ERR(data)) {
		ret = PTR_ERR(data);
		goto free_ctx;
	}

	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
		      &rxqueue->xdp_rxq);
	xdp_prepare_buff(&xdp, data, headroom, size, true);

	ret = xdp_convert_md_to_buff(ctx, &xdp);
	if (ret)
		goto free_data;

	bpf_prog_change_xdp(NULL, prog);
	ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
	/* We convert the xdp_buff back to an xdp_md before checking the return
	 * code so the reference count of any held netdevice will be decremented
	 * even if the test run failed.
	 */
	xdp_convert_buff_to_md(&xdp, ctx);
	if (ret)
		goto out;

	if (xdp.data_meta != data + headroom ||
	    xdp.data_end != xdp.data_meta + size)
		size = xdp.data_end - xdp.data_meta;

	ret = bpf_test_finish(kattr, uattr, xdp.data_meta, size, retval,
			      duration);
	if (!ret)
		ret = bpf_ctx_finish(kattr, uattr, ctx,
				     sizeof(struct xdp_md));

out:
	bpf_prog_change_xdp(prog, NULL);
free_data:
	kfree(data);
free_ctx:
	kfree(ctx);
	return ret;
}
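
/* A user-supplied bpf_flow_keys context may only set the flags field; all
 * bytes before and after it must be zero.
 */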
static int verify_user_bpf_flow_keys(struct bpf_flow_keys *ctx)
{
	/* make sure the fields we don't use are zeroed */
	if (!range_is_zero(ctx, 0, offsetof(struct bpf_flow_keys, flags)))
		return -EINVAL;

	/* flags is allowed */

	if (!range_is_zero(ctx, offsetofend(struct bpf_flow_keys, flags),
			   sizeof(struct bpf_flow_keys)))
		return -EINVAL;

	return 0;
}
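
/* Entry point for BPF_PROG_TEST_RUN on flow-dissector programs: dissect the
 * user-supplied packet, starting at its Ethernet header, repeatedly under
 * the test timer and return the resulting flow keys and return code.
 */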
int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
				     const union bpf_attr *kattr,
				     union bpf_attr __user *uattr)
{
	struct bpf_test_timer t = { NO_PREEMPT };
	u32 size = kattr->test.data_size_in;
	struct bpf_flow_dissector ctx = {};
	u32 repeat = kattr->test.repeat;
	struct bpf_flow_keys *user_ctx;
	struct bpf_flow_keys flow_keys;
	const struct ethhdr *eth;
	unsigned int flags = 0;
	u32 retval, duration;
	void *data;
	int ret;

	if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
		return -EINVAL;

	if (kattr->test.flags || kattr->test.cpu)
		return -EINVAL;

	if (size < ETH_HLEN)
		return -EINVAL;

	data = bpf_test_init(kattr, size, 0, 0);
	if (IS_ERR(data))
		return PTR_ERR(data);

	eth = (struct ethhdr *)data;

	if (!repeat)
		repeat = 1;

	user_ctx = bpf_ctx_init(kattr, sizeof(struct bpf_flow_keys));
	if (IS_ERR(user_ctx)) {
		kfree(data);
		return PTR_ERR(user_ctx);
	}
	if (user_ctx) {
		ret = verify_user_bpf_flow_keys(user_ctx);
		if (ret)
			goto out;
		flags = user_ctx->flags;
	}

	ctx.flow_keys = &flow_keys;
	ctx.data = data;
	ctx.data_end = (__u8 *)data + size;

	bpf_test_timer_enter(&t);
	do {
		retval = bpf_flow_dissect(prog, &ctx, eth->h_proto, ETH_HLEN,
					  size, flags);
	} while (bpf_test_timer_continue(&t, repeat, &ret, &duration));
	bpf_test_timer_leave(&t);

	if (ret < 0)
		goto out;

	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
			      retval, duration);
	if (!ret)
		ret = bpf_ctx_finish(kattr, uattr, user_ctx,
				     sizeof(struct bpf_flow_keys));

out:
	kfree(user_ctx);
	kfree(data);
	return ret;
}
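
/* Entry point for BPF_PROG_TEST_RUN on SK_LOOKUP programs. The lookup tuple
 * comes entirely from the bpf_sk_lookup context (no packet data is accepted);
 * the cookie of the socket selected by the program, if any, is reported back
 * through the same context.
 */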
int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
				union bpf_attr __user *uattr)
{
	struct bpf_test_timer t = { NO_PREEMPT };
	struct bpf_prog_array *progs = NULL;
	struct bpf_sk_lookup_kern ctx = {};
	u32 repeat = kattr->test.repeat;
	struct bpf_sk_lookup *user_ctx;
	u32 retval, duration;
	int ret = -EINVAL;

	if (prog->type != BPF_PROG_TYPE_SK_LOOKUP)
		return -EINVAL;

	if (kattr->test.flags || kattr->test.cpu)
		return -EINVAL;

	if (kattr->test.data_in || kattr->test.data_size_in || kattr->test.data_out ||
	    kattr->test.data_size_out)
		return -EINVAL;

	if (!repeat)
		repeat = 1;

	user_ctx = bpf_ctx_init(kattr, sizeof(*user_ctx));
	if (IS_ERR(user_ctx))
		return PTR_ERR(user_ctx);

	if (!user_ctx)
		return -EINVAL;

	if (user_ctx->sk)
		goto out;

	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
		goto out;

	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
		ret = -ERANGE;
		goto out;
	}

	ctx.family = (u16)user_ctx->family;
	ctx.protocol = (u16)user_ctx->protocol;
	ctx.dport = (u16)user_ctx->local_port;
	ctx.sport = (__force __be16)user_ctx->remote_port;

	switch (ctx.family) {
	case AF_INET:
		ctx.v4.daddr = (__force __be32)user_ctx->local_ip4;
		ctx.v4.saddr = (__force __be32)user_ctx->remote_ip4;
		break;

#if IS_ENABLED(CONFIG_IPV6)
	case AF_INET6:
		ctx.v6.daddr = (struct in6_addr *)user_ctx->local_ip6;
		ctx.v6.saddr = (struct in6_addr *)user_ctx->remote_ip6;
		break;
#endif

	default:
		ret = -EAFNOSUPPORT;
		goto out;
	}

	progs = bpf_prog_array_alloc(1, GFP_KERNEL);
	if (!progs) {
		ret = -ENOMEM;
		goto out;
	}

	progs->items[0].prog = prog;

	bpf_test_timer_enter(&t);
	do {
		ctx.selected_sk = NULL;
		retval = BPF_PROG_SK_LOOKUP_RUN_ARRAY(progs, ctx, bpf_prog_run);
	} while (bpf_test_timer_continue(&t, repeat, &ret, &duration));
	bpf_test_timer_leave(&t);

	if (ret < 0)
		goto out;

	user_ctx->cookie = 0;
	if (ctx.selected_sk) {
		if (ctx.selected_sk->sk_reuseport && !ctx.no_reuseport) {
			ret = -EOPNOTSUPP;
			goto out;
		}

		user_ctx->cookie = sock_gen_cookie(ctx.selected_sk);
	}

	ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration);
	if (!ret)
		ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx));

out:
	bpf_prog_array_free(progs);
	kfree(user_ctx);
	return ret;
}
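
/* Entry point for BPF_PROG_TEST_RUN on syscall programs: the program runs
 * once, directly on the copied-in context buffer, and the possibly modified
 * context is copied back to user space together with the return value.
 */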
int bpf_prog_test_run_syscall(struct bpf_prog *prog,
			      const union bpf_attr *kattr,
			      union bpf_attr __user *uattr)
{
	void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
	__u32 ctx_size_in = kattr->test.ctx_size_in;
	void *ctx = NULL;
	u32 retval;
	int err = 0;

	/* doesn't support data_in/out, ctx_out, duration, repeat or flags */
	if (kattr->test.data_in || kattr->test.data_out ||
	    kattr->test.ctx_out || kattr->test.duration ||
	    kattr->test.repeat || kattr->test.flags)
		return -EINVAL;

	if (ctx_size_in < prog->aux->max_ctx_offset ||
	    ctx_size_in > U16_MAX)
		return -EINVAL;

	if (ctx_size_in) {
		ctx = kzalloc(ctx_size_in, GFP_USER);
		if (!ctx)
			return -ENOMEM;
		if (copy_from_user(ctx, ctx_in, ctx_size_in)) {
			err = -EFAULT;
			goto out;
		}
	}

	rcu_read_lock_trace();
	retval = bpf_prog_run_pin_on_cpu(prog, ctx);
	rcu_read_unlock_trace();

	if (copy_to_user(&uattr->test.retval, &retval, sizeof(u32))) {
		err = -EFAULT;
		goto out;
	}
	if (ctx_size_in)
		if (copy_to_user(ctx_in, ctx, ctx_size_in))
			err = -EFAULT;
out:
	kfree(ctx);
	return err;
}
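
/* Userspace sketch (illustrative only, not part of this file): the handlers
 * above are reached through the BPF_PROG_TEST_RUN command of the bpf()
 * syscall. For a packet-based program the request looks roughly like:
 *
 *	union bpf_attr attr = {};
 *
 *	attr.test.prog_fd = prog_fd;
 *	attr.test.data_in = (__u64)(unsigned long)pkt;
 *	attr.test.data_size_in = pkt_len;
 *	attr.test.repeat = 1;
 *	err = syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr));
 *
 * On success attr.test.retval holds the program's return code and
 * attr.test.duration the average runtime per repetition in nanoseconds.
 */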