Quentin Monnet says:
====================
Libbpf is used at several locations in the repository. Most of the time,
the tools relying on it build the library in its own directory, and include
the headers from there. This works, but it is not the cleanest approach.
It generates objects outside of the directory of the tool which is being
built, and it also increases the risk that developers include a header file
internal to libbpf, which is not supposed to be exposed to user
applications.
This set adjusts all involved Makefiles to make sure that libbpf is built
locally (with respect to the tool's directory or provided build directory),
and by ensuring that "make install_headers" is run from libbpf's Makefile
to export user headers properly.
This comes at a cost: given that libbpf was so far mostly compiled in
its own directory and shared by the different components using it,
compiling it once was enough for all of them. With the new approach, each
component compiles its own version. To mitigate this cost, efforts were
made to reuse the compiled library when possible:
- Make the bpftool version in samples/bpf reuse the library previously
compiled for the samples.
- Make the bpftool version in BPF selftests reuse the library previously
compiled for the selftests.
- Similarly, make resolve_btfids in BPF selftests reuse the same compiled
library.
- Similarly, make runqslower in BPF selftests reuse the same compiled
library; and make it rely on the bpftool version also compiled from the
selftests (instead of compiling its own version).
- runqslower, when compiled independently, needs its own version of
bpftool: make them share the same compiled libbpf.
As a result:
- Compiling samples/bpf should compile libbpf just once.
- Compiling the BPF selftests should compile libbpf just once.
- Compiling the kernel (with BTF support) should now lead to compiling
libbpf twice: once for resolve_btfids, once for kernel/bpf/preload.
- Compiling runqslower individually should compile libbpf just once. Same
thing for bpftool, resolve_btfids, and kernel/bpf/preload/iterators.
(Not accounting for the bootstrap version of libbpf required by bpftool,
which was already placed under a dedicated .../bootstrap/libbpf/ directory,
and for which the count remains unchanged.)
A few commits in the series also contain drive-by clean-up changes for
bpftool includes, samples/bpf/.gitignore, or test_bpftool_build.sh. Please
refer to individual commit logs for details.
v4:
- Make the "libbpf_hdrs" dependency an order-only dependency in
kernel/bpf/preload/Makefile, samples/bpf/Makefile, and
tools/bpf/runqslower/Makefile. This is to avoid to unconditionally
recompile the targets.
- samples/bpf/.gitignore: prefix objects with a "/" to mark that we
ignore them only when they are at the root of the samples/bpf/ directory.
- libbpf: add a commit to make "install_headers" depend on the header
files, to avoid exporting again if the sources are older than the
targets. This ensures that applications relying on those headers are
not rebuilt unnecessarily.
- bpftool: uncouple the copy of nlattr.h from the libbpf target, and have
it depend on the source header itself. By avoiding reinstalling this
header every time, we avoid unnecessary rebuilds of bpftool.
- samples/bpf: Add a new commit to remove the FORCE dependency for
libbpf, and replace it with a "$(wildcard ...)" on the .c/.h files in
libbpf's directory. This is to avoid always recompiling libbpf/bpftool.
- Adjust prefixes in commit subjects.
v3:
- Remove order-only dependencies on $(LIBBPF_INCLUDE) (or equivalent)
directories, given that they are created by libbpf's Makefile.
- Add libbpf as a dependency for bpftool/resolve_btfids/runqslower when
they are supposed to reuse a libbpf compiled previously. This is to
avoid having several libbpf versions being compiled simultaneously in
the same directory with parallel builds. Even if this didn't show up
during tests, let's remain on the safe side.
- kernel/bpf/preload/Makefile: Rename libbpf-hdrs (dash) dependency as
libbpf_hdrs.
- samples/bpf/.gitignore: Add bpftool/
- samples/bpf/Makefile: Change "/bin/rm -rf" to "$(RM) -r".
- samples/bpf/Makefile: Add missing slashes for $(LIBBPF_OUTPUT) and
$(LIBBPF_DESTDIR) when building bpftool.
- samples/bpf/Makefile: Add a dependency to libbpf's headers for
$(TRACE_HELPERS).
- bpftool's Makefile: Use $(LIBBPF) instead of equivalent (but longer)
$(LIBBPF_OUTPUT)libbpf.a
- BPF iterators' Makefile: build bpftool in .output/bpftool (instead of
.output/), add and clean up variables.
- runqslower's Makefile: Add an explicit dependency on libbpf's headers
to several objects. The dependency is not strictly required (libbpf
should have been compiled, and so its headers exported, through other
dependencies for those targets), but it better marks the logical
dependency and should help if the way headers are exported changes in
the future.
- New commit to add an "install-bin" target to bpftool, to avoid
installing bash completion when buildling BPF iterators and selftests.
v2: Declare an additional dependency on libbpf's headers for
iterators/iterators.o in kernel/bpf/preload/Makefile to make sure that
these headers are exported before we compile the object file (and not
just before we link it).
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
With "make install", bpftool installs its binary and its bash completion
file. Usually, this is what we want. But a few components in the kernel
repository (namely, BPF iterators and selftests) also install bpftool
locally before using it. In such a case, bash completion is not
necessary and is just a useless build artifact.
Let's add an "install-bin" target to bpftool, to offer a way to install
the binary only.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-13-quentin@isovalent.com
The script test_bpftool_build.sh attempts to build bpftool in the
various supported ways, to make sure nothing breaks.
One of those ways is to run "make tools/bpf" from the root of the kernel
repository. This command builds bpftool, along with the other tools
under tools/bpf, and runqslower in particular. After running the
command and upon a successful bpftool build, the script attempts to
cleanup the generated objects. However, after building with this target
and in the case of runqslower, the files are not cleaned up as expected.
This is because the "tools/bpf" target sets $(OUTPUT) to
.../tools/bpf/runqslower/ when building the tool, causing the object
files to be placed directly under the runqslower directory. But when
running "cd tools/bpf; make clean", the value for $(OUTPUT) is set to
".output" (relative to the runqslower directory) by runqslower's
Makefile, and this is where the Makefile looks for files to clean up.
We cannot easily fix this in the root Makefile (where "tools/bpf" is defined)
or in tools/scripts/Makefile.include (setting $(OUTPUT)), where changing
the way the output variables are passed would likely have consequences
elsewhere. We could change runqslower's Makefile to build in the
repository instead of in a dedicated ".output/", but doing so just to
accommodate a test script doesn't sound great. Instead, let's just make
sure that we clean up runqslower properly by adding the correct command
to the script.
This will attempt to clean runqslower twice: the first try, with command
"cd tools/bpf; make clean", will search for tools/bpf/runqslower/.output
and fail to clean it (but will still clean the other tools, in
particular bpftool); the second one (added in this commit) sets the
$(OUTPUT) variable as when building with the "tools/bpf" target, and
should succeed.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-12-quentin@isovalent.com
In samples/bpf/Makefile, libbpf has a FORCE dependency that forces it to
be rebuilt. I read this as a way to keep the library up-to-date, given
that we do not have, in samples/bpf, a list of the source files for
libbpf itself. However, a better approach would be to use the
"$(wildcard ...)" function from make, and to have libbpf depend on all
the .c and .h files in its directory. This is what samples/bpf/Makefile
does for bpftool, and also what the BPF selftests' Makefile does for
libbpf.
Let's update the Makefile to avoid rebuilding libbpf all the time (and
bpftool on top of it).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-11-quentin@isovalent.com
API headers from libbpf should not be accessed directly from the source
directory. Instead, they should be exported with "make install_headers".
Make sure that samples/bpf/Makefile installs the headers properly when
building.
The objects compiled for libbpf, and the headers it exports, are now
placed into a subdirectory of samples/bpf/ instead of remaining in
tools/lib/bpf/. We
attempt to remove this directory on "make clean". However, the "clean"
target re-enters the samples/bpf/ directory from the root of the
repository ("$(MAKE) -C ../../ M=$(CURDIR) clean"), in such a way that
$(srctree) and $(src) are not defined, making it impossible to use
$(LIBBPF_OUTPUT) and $(LIBBPF_DESTDIR) in the recipe. So we only attempt
to clean $(CURDIR)/libbpf, which is the default value.
Add a dependency on libbpf's headers for the $(TRACE_HELPERS).
We also change the output directory for bpftool, to place the generated
objects under samples/bpf/bpftool/ instead of building in bpftool's
directory directly. Doing so, we make sure bpftool reuses the libbpf
library previously compiled and installed.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-10-quentin@isovalent.com
Update samples/bpf/.gitignore to ignore files generated when building
the samples. Add:
- vmlinux.h
- the generated skeleton files (*.skel.h)
- the samples/bpf/libbpf/ and .../bpftool/ directories, in preparation
for a future commit which introduces a local output directory for
building libbpf and bpftool.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-9-quentin@isovalent.com
API headers from libbpf should not be accessed directly from the
library's source directory. Instead, they should be exported with "make
install_headers". Let's make sure that bpf/preload/iterators/Makefile
installs the headers properly when building.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-8-quentin@isovalent.com
API headers from libbpf should not be accessed directly from the
library's source directory. Instead, they should be exported with "make
install_headers". Let's make sure that bpf/preload/Makefile installs the
headers properly when building.
Note that we declare an additional dependency for iterators/iterators.o:
having $(LIBBPF_A) as a dependency of "$(obj)/bpf_preload_umd" is not
sufficient, as it makes the library required only at the linking step.
But we need libbpf to be compiled, and in particular its headers to be
exported, before we attempt to compile iterators.o. The issue did not
occur before this commit, because libbpf's headers were not exported and
were always available under tools/lib/bpf.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-7-quentin@isovalent.com
API headers from libbpf should not be accessed directly from the
library's source directory. Instead, they should be exported with "make
install_headers". Let's make sure that runqslower installs the
headers properly when building.
We use a libbpf_hdrs target to mark the logical dependency on libbpf's
headers export for a number of object files, even though the headers
should already have been exported by that time (since bpftool needs them,
and is required to generate the skeleton or vmlinux.h).
When descending from a parent Makefile, the specific output directories
for building the library and exporting the headers are configurable with
BPFOBJ_OUTPUT and BPF_DESTDIR, respectively. This is in addition to
OUTPUT, on top of which those variables are constructed by default.
Also adjust the Makefile for the BPF selftests. We pass a number of
variables to the "make" invocation, because we want to point runqslower
to the (target) libbpf shared with other tools, instead of building its
own version. In addition, runqslower relies on (target) bpftool, and we
also want to pass the proper variables to its Makefile so that bpftool
itself reuses the same libbpf.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-6-quentin@isovalent.com
API headers from libbpf should not be accessed directly from the
library's source directory. Instead, they should be exported with "make
install_headers". Let's make sure that resolve_btfids installs the
headers properly when building.
When descending from a parent Makefile, the specific output directories
for building the library and exporting the headers are configurable with
LIBBPF_OUT and LIBBPF_DESTDIR, respectively. This is in addition to
OUTPUT, on top of which those variables are constructed by default.
Also adjust the Makefile for the BPF selftests in order to point to the
(target) libbpf shared with other tools, instead of building a version
specific to resolve_btfids. Remove libbpf's order-only dependencies on
the include directories (they are created by libbpf and don't need to
exist beforehand).
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-5-quentin@isovalent.com
Bpftool relies on libbpf, therefore it relies on a number of headers
from the library and must be linked against it. The Makefile for bpftool
exposes these headers by adding tools/lib as an include directory
("-I$(srctree)/tools/lib"). This is a working solution, but it is not
the cleanest one. The risk is to involuntarily include headers that are
not intended to be exposed by libbpf.
The headers needed to compile bpftool should in fact be "installed" from
libbpf, with its "install_headers" Makefile target. In addition, there
is one header which is internal to the library and not supposed to be
used by external applications, but that bpftool uses anyway.
Adjust the Makefile in order to install the header files properly before
compiling bpftool. Also copy the additional internal header file
(nlattr.h), but call it out explicitly. Build (and install headers) in a
subdirectory under bpftool/ instead of tools/lib/bpf/. When descending
from a parent Makefile, this is configurable by setting the OUTPUT,
LIBBPF_OUTPUT and LIBBPF_DESTDIR variables.
Also adjust the Makefile for BPF selftests, so as to reuse the (host)
libbpf compiled earlier and to avoid compiling a separate version of the
library just for bpftool.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-4-quentin@isovalent.com
It seems that the header file was never necessary to compile bpftool,
and it is not part of the headers exported from libbpf. Let's remove the
includes from prog.c and gen.c.
Fixes: d510296d33 ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-3-quentin@isovalent.com
The "install_headers" target in libbpf's Makefile would unconditionally
export all API headers to the target directory. When those headers are
installed to compile another application, this means that make always
finds newer dependencies for the source files relying on those headers,
and deduces that the targets should be rebuilt.
Avoid that by making "install_headers" depend on the source header
files, and (re-)install them only when necessary.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-2-quentin@isovalent.com
Since commit 6c4fc209fc ("bpf: remove useless version check for prog
load") these "version" sections, which result in bpf_attr.kern_version
being set, have been unnecessary.
Remove them so that it's obvious to folks using selftests as a guide that
"modern" BPF progs don't need this section.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007231234.2223081-1-davemarchevsky@fb.com
VMs running on upstream 5.12+ kernel support LBR. However,
bpf_get_branch_snapshot couldn't stop the LBR before too many entries
are flushed. Skip the hit/waste test for VMs until we find a proper fix
for LBR in VMs.
Fixes: 025bd7c753 ("selftests/bpf: Add test for bpf_get_branch_snapshot")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007050231.728496-1-songliubraving@fb.com
This patch adds new tests for the two-instruction LD_IMM64. The new tests
verify the operation with immediate values of different byte patterns.
Mainly intended to cover JITs that want to be clever when loading 64-bit
constants.
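As an illustration, here is what such a case boils down to, written in
the style of lib/test_bpf.c (which aliases R0 to BPF_REG_0); this is a
sketch, not one of the added cases verbatim:

  BPF_LD_IMM64(R0, 0xff00ff0000ff00ffULL), /* sparse byte pattern */
  BPF_ALU64_IMM(BPF_RSH, R0, 32),          /* return the high word */
  BPF_EXIT_INSN(),

The test runner compares the returned value (here, 0xff00ff00) against
the expected constant, so a JIT that mangles either half of the
immediate fails the test.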
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211007143006.634308-1-johan.almbladh@anyfinetworks.com
This patch shaves off a few instructions when loading sparse 64-bit
constants into a register. The change is covered by additional tests in
lib/test_bpf.c.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211007142828.634182-1-johan.almbladh@anyfinetworks.com
Introduce a single-register version of maybe_emit_mod() and factor out
common code in more cases.
Signed-off-by: Jie Meng <jmeng@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211006194135.608932-1-jmeng@fb.com
Hengqi Chen says:
====================
bpf_{map,program}__{prev,next} don't follow the libbpf API naming
convention. Deprecate them and replace them with a new set of APIs
named bpf_object__{prev,next}_{program,map}.
v1->v2: [0]
* Addressed Andrii's comments
[0]: https://patchwork.kernel.org/project/netdevbpf/patch/20210906165456.325999-1-hengqi.chen@gmail.com/
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
BPF objects are not reloadable after unload. Users are expected to use
bpf_object__close() to unload and free up resources in one operation.
No need to expose bpf_object__unload() as a public API, deprecate it
([0]). Add bpf_object__unload() as an alias to internal
bpf_object_unload() and replace all bpf_object__unload() uses to avoid
compilation errors.
[0] Closes: https://github.com/libbpf/libbpf/issues/290
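A minimal sketch of the intended lifecycle, with error handling elided
and a hypothetical object file name:

  struct bpf_object *obj;

  obj = bpf_object__open("prog.bpf.o"); /* parse the ELF object */
  bpf_object__load(obj);                /* load programs and maps */
  /* ... attach and use the object ... */
  bpf_object__close(obj);               /* unload and free in one step */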
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211002161000.3854559-1-hengqi.chen@gmail.com
Replace the deprecated bpf_{map,program}__next APIs with the newly added
bpf_object__next_{map,program} APIs, so that no compilation warnings
are emitted.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211003165844.4054931-3-hengqi.chen@gmail.com
This adds a section to the libbpf naming convention documentation
which describes how to document API features in libbpf, and
specifically the format to which API doc comments need to conform.
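A sketch of the expected format, for a hypothetical function, following
the doxygen-style comments already present in libbpf's headers:

  /**
   * @brief **bpf_example_op()** performs one well-defined operation on
   * the given BPF object.
   * @param obj BPF object to operate on
   * @return 0, on success; negative error code, otherwise
   */
  int bpf_example_op(struct bpf_object *obj);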
Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211004215644.497327-1-grantseltzer@gmail.com
Deprecate bpf_{map,program}__{prev,next} APIs. Replace them with
a new set of APIs named bpf_object__{prev,next}_{program,map} which
follow the libbpf API naming convention ([0]). No functionality changes.
[0] Closes: https://github.com/libbpf/libbpf/issues/296
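With the new names, iterating over an object's programs reads as
follows (a sketch; the maps variant is symmetric):

  struct bpf_program *prog = NULL;

  /* passing NULL returns the first program, NULL marks the end */
  while ((prog = bpf_object__next_program(obj, prog)))
          printf("prog: %s\n", bpf_program__name(prog));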
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211003165844.4054931-2-hengqi.chen@gmail.com
Currently the recursion test is hooking the __htab_map_lookup_elem
function, which is invoked both from bpf_prog and the bpf syscall.
But in our kernel build, __htab_map_lookup_elem gets inlined
within htab_map_lookup_elem, so it's not triggered and the
test fails.
Fix this by using htab_map_delete_elem, which is not inlined
for bpf_prog calls (like htab_map_lookup_elem is) and is used
directly as a pointer for map_delete_elem, so it won't disappear
through inlining.
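A sketch of the resulting hook (assuming the selftest keeps its
fentry-based shape; the body only needs to bump a counter observed from
user space):

  SEC("fentry/htab_map_delete_elem")
  int BPF_PROG(on_delete, struct bpf_map *map)
  {
          /* htab_map_delete_elem is reached through the map_delete_elem
           * function pointer, so the compiler cannot inline it away.
           */
          return 0;
  }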
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/YVnfFTL/3T6jOwHI@krava
Recent-ish versions of make no longer consider number signs ("#") as
comment symbols when they are inserted inside of a macro reference or in
a function invocation. In such cases, the symbols should not be escaped.
There are a few occurrences of "\#" in libbpf's and samples' Makefiles.
In the former, the backslash is harmless, because grep associates no
particular meaning to the escaped symbol and reads it as a regular "#".
In samples' Makefile, recent versions of make will pass the backslash
down to the compiler, making the probe fail all the time and resulting
in the display of a warning about "make headers_install" being required,
even after headers have been installed.
A similar issue has been addressed at some other locations by commit
9564a8cf42 ("Kbuild: fix # escaping in .cmd files for future Make").
Let's address it for libbpf's and samples' Makefiles in the same
fashion, by using a "$(pound)" variable (pulled from
tools/scripts/Makefile.include for libbpf, or re-defined for the
samples).
Reference for the change in make:
https://git.savannah.gnu.org/cgit/make.git/commit/?id=c6966b323811c37acedff05b57
Fixes: 2f38304127 ("libbpf: Make libbpf_version.h non-auto-generated")
Fixes: 07c3bbdb1a ("samples: bpf: print a warning about headers_install")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211006111049.20708-1-quentin@isovalent.com
The BPF core defines a __weak bpf_jit_compile() dummy function already
which should only be overridden by JITs if they actually implement a
legacy cBPF JIT. Given arm implements an eBPF JIT, this stub is not
needed.
Now that MIPS cBPF JIT is finally gone, the only JIT left that is still
implementing bpf_jit_compile() is the sparc32 one.
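The dummy in question is defined in the BPF core along these lines, so
an eBPF-only JIT never needs to provide its own:

  /* Weak stub, overridden only by architectures with a cBPF JIT. */
  void __weak bpf_jit_compile(struct bpf_prog *prog)
  {
  }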
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Johan Almbladh says:
====================
This is an implementation of an eBPF JIT for MIPS I-V and MIPS32/64 r1-r6.
The new JIT is written from scratch, but uses the same overall structure
as other eBPF JITs.
Before, the MIPS JIT situation looked like this.
- 32-bit: MIPS32, cBPF-only, tests fail
- 64-bit: MIPS64r2-r6, eBPF, tests fail, incomplete eBPF ISA support
The new JIT implementation raises the bar to the following level.
- 32/64-bit: all MIPS ISA, eBPF, all tests pass, full eBPF ISA support
Overview
--------
The implementation supports all 32-bit and 64-bit eBPF instructions
defined as of this writing, including the recently-added atomics. It is
intended to provide good performance for native word size operations,
while also being complete so the JIT never has to fall back to the
interpreter. The new JIT replaces the current cBPF and eBPF JITs for MIPS.
The implementation is divided into separate files as follows. The source
files contain comments describing internal mechanisms and details on
things like eBPF-to-CPU register mappings, so I won't repeat that here.
- jit_comp.[ch] code shared between 32-bit and 64-bit JITs
- jit_comp32.c 32-bit JIT implementation
- jit_comp64.c 64-bit JIT implementation
Both the 32-bit and 64-bit versions map all eBPF registers to native MIPS
CPU registers. There are also enough unmapped CPU registers available to
allow all eBPF operations implemented natively by the JIT to use only CPU
registers without having to resort to stack scratch space.
Some operations are deemed too complex to implement natively in the JIT.
Those are instead implemented as a function call to a helper that performs
the operation. This is done in the following cases.
- 64-bit div and mod on a 32-bit CPU
- 64-bit atomics on a 32-bit CPU
- 32-bit atomics on a 32-bit CPU that lacks ll/sc instructions
CPU errata workarounds
----------------------
The JIT implements workarounds for R10000, Loongson-2F and Loongson-3 CPU
errata. For the Loongson workarounds, I have used the public information
available on the matter.
Link: https://sourceware.org/legacy-ml/binutils/2009-11/msg00387.html
Testing
-------
During the development of the JIT, I have added a number of new test cases
to the test_bpf.ko test suite to be able to verify correctness of JIT
implementations in a more systematic way. The new additions increase the
test suite roughly three-fold, with many of the new tests being very
extensive and even exhaustive when feasible.
Link: https://lore.kernel.org/bpf/20211001130348.3670534-1-johan.almbladh@anyfinetworks.com/
Link: https://lore.kernel.org/bpf/20210914091842.4186267-1-johan.almbladh@anyfinetworks.com/
Link: https://lore.kernel.org/bpf/20210809091829.810076-1-johan.almbladh@anyfinetworks.com/
The JIT has been tested by running the test_bpf.ko test suite in QEMU with
the following MIPS ISAs, in both big and little endian mode, with and
without JIT hardening enabled.
MIPS32r2, MIPS32r6, MIPS64r2, MIPS64r6
For the QEMU r2 targets, the correctness of pre-r2 code emitted has been
tested by manually overriding each of the following macros with 0.
cpu_has_llsc, cpu_has_mips_2, cpu_has_mips_r1, cpu_has_mips_r2
Similarly, CPU errata workaround code has been tested by enabling
each of the following configurations for the MIPS64r2 targets.
CONFIG_WAR_R10000
CONFIG_CPU_LOONGSON3_WORKAROUNDS
CONFIG_CPU_NOP_WORKAROUNDS
CONFIG_CPU_JUMP_WORKAROUNDS
The JIT passes all tests in all configurations. Below is the summary for
MIPS32r2 in little endian mode.
test_bpf: Summary: 1006 PASSED, 0 FAILED, [994/994 JIT'ed]
test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [8/8 JIT'ed]
test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
According to MIPS ISA reference documentation, the result of a 32-bit ALU
arithmetic operation on a 64-bit CPU is unpredictable if an operand
register value is not properly sign-extended to 64 bits. To verify the
code emitted by the JIT, the code generation engine in QEMU was modified to
flip all low 32 bits if the above condition was not met. With this
trip-wire installed, the kernel booted properly in qemu-system-mips64el
and all test_bpf.ko tests passed.
Remaining features
------------------
While the JIT is complete in terms of eBPF ISA support, this series does
not include support for BPF-to-BPF calls and BPF trampolines. Those
features are planned to be added in another patch series.
The BPF_ST | BPF_NOSPEC instruction currently emits nothing. This is
consistent with the behavior of the MIPS interpreter and the existing
eBPF JIT.
Why not build on the existing eBPF JIT?
---------------------------------------
The existing eBPF JIT was originally written for MIPS64. An effort was
made to add MIPS32 support to it in commit 716850ab10 ("MIPS: eBPF:
Initial eBPF support for MIPS32 architecture."). That turned out to
contain a number of flaws, so eBPF support for MIPS32 was disabled in
commit 36366e367e ("MIPS: BPF: Restore MIPS32 cBPF JIT").
Link: https://lore.kernel.org/bpf/5deaa994.1c69fb81.97561.647e@mx.google.com/
The current eBPF JIT for MIPS64 lacks a lot of functionality regarding
ALU32, JMP32 and atomic operations. It also lacks 32-bit CPU support on a
fundamental level, for example 32-bit CPU register mappings and o32 ABI
calling conventions. For optimization purposes, it tracks register usage
through the program control flow in order to do zero-extension and sign-
extension only when necessary, a static analysis of sorts. In my opinion,
having this kind of complexity in JITs, for which there is no
adequate test coverage, is a problem. Such analysis should be done by the
verifier, if needed at all. Finally, when I run the BPF test suite
test_bpf.ko on the current JIT, there are errors and warnings.
I believe that an eBPF JIT should strive to be correct, complete and
optimized, and in that order. The JIT runs after the verifier has audited
the program and given its approval. If the JIT then emits code that does
something else, it will undermine the eBPF security model. A simple
implementation is easier to get correct than a complex one. Furthermore,
the real performance hit is not an extra CPU instruction here and there,
but when the JIT bails out on an unimplemented eBPF instruction and
causes the whole program to fall back to the interpreter. My reasoning
here boils
down to the following.
* The JIT should not contain a static analyzer that tracks branches.
* It is acceptable to emit possibly superfluous sign-/zero-extensions for
ALU32 and JMP32 operations on a 64-bit MIPS to guarantee correctness.
* The JIT should handle all eBPF instructions on all MIPS CPUs.
I conclude that the current eBPF MIPS JIT is complex, incomplete and
incorrect. For the reasons stated above, I decided to not use the existing
JIT implementation.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
This patch removes the old 32-bit cBPF and 64-bit eBPF JIT implementations.
They are replaced by a new eBPF implementation that supports both 32-bit
and 64-bit MIPS CPUs.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-8-johan.almbladh@anyfinetworks.com
This patch enables the new eBPF JITs for 32-bit and 64-bit MIPS. It also
disables the old cBPF JIT so that cBPF programs are converted to use the
new JIT.
Workarounds for R4000 CPU errata are not implemented by the JIT, so the
JIT is disabled if any of those workarounds are configured.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-7-johan.almbladh@anyfinetworks.com
This patch adds workarounds for the following CPU errata to the MIPS
eBPF JIT, if enabled in the kernel configuration.
- R10000 ll/sc weak ordering
- Loongson-3 ll/sc weak ordering
- Loongson-2F jump hang
The workaround for the Loongson-2F nop erratum is implemented in uasm,
which the JIT uses, so no additional mitigation is needed for that.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-6-johan.almbladh@anyfinetworks.com
This is an implementation of an eBPF JIT for 64-bit MIPS III-V and
MIPS64r1-r6. It uses the same framework introduced by the 32-bit JIT.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-5-johan.almbladh@anyfinetworks.com
This is an implementation of an eBPF JIT for 32-bit MIPS I-V and MIPS32.
The implementation supports all 32-bit and 64-bit ALU and JMP operations,
including the recently-added atomics. 64-bit div/mod and 64-bit atomics
are implemented using function calls to math64 and atomic64 functions,
respectively. All 32-bit operations are implemented natively by the JIT,
except if the CPU lacks ll/sc instructions.
Register mapping
================
All 64-bit eBPF registers are mapped to native 32-bit MIPS register pairs,
and the JIT does not use any stack scratch space for register swapping.
This means that all eBPF register data is kept in CPU registers all the
time, which simplifies the register management a lot. It also reduces the
JIT's pressure on temporary registers since we do not have to move data
around.
Native register pairs are ordered according to CPU endianness, following
the O32 calling convention for passing 64-bit arguments and return values.
The eBPF return value, arguments and callee-saved registers are mapped to
their native MIPS equivalents.
Since the 32 highest bits in the eBPF FP (frame pointer) register are
always zero, only one general-purpose register is actually needed for the
mapping. The MIPS fp register is used for this purpose. The high bits are
mapped to MIPS register r0. This saves us one CPU register, which is much
needed for temporaries, while still allowing us to treat the R10 (FP)
register just like any other eBPF register in the JIT.
The MIPS gp (global pointer) and at (assembler temporary) registers are
used as internal temporary registers for constant blinding. CPU registers
t6-t9 are used internally by the JIT when constructing more complex 64-bit
operations. This is precisely what is needed - two registers to store an
operand value, and two more as scratch registers when performing the
operation.
The register mapping is shown below.
  R0 - $v1, $v0   return value
  R1 - $a1, $a0   argument 1, passed in registers
  R2 - $a3, $a2   argument 2, passed in registers
  R3 - $t1, $t0   argument 3, passed on stack
  R4 - $t3, $t2   argument 4, passed on stack
  R5 - $t5, $t4   argument 5, passed on stack
  R6 - $s1, $s0   callee-saved
  R7 - $s3, $s2   callee-saved
  R8 - $s5, $s4   callee-saved
  R9 - $s7, $s6   callee-saved
  FP - $r0, $fp   32-bit frame pointer
  AX - $gp, $at   constant-blinding
  $t6 - $t9       unallocated, JIT temporaries
Jump offsets
============
The JIT tries to map all conditional JMP operations to MIPS conditional
PC-relative branches. The MIPS branch offset field is 18 bits, in bytes,
which is equivalent to the eBPF 16-bit instruction offset. However, since
the JIT may emit more than one CPU instruction per eBPF instruction, the
field width may overflow. If that happens, the JIT converts the long
conditional jump to a short PC-relative branch with the condition
inverted, jumping over a long unconditional absolute jmp (j).
This conversion will change the instruction offset mapping used for jumps,
and may in turn result in more branch offset overflows. The JIT therefore
dry-runs the translation until no more branches are converted and the
offsets do not change anymore. There is an upper bound on this of course,
and if the JIT hits that limit, the last two iterations are run with all
branches being converted.
Tail call count
===============
The current tail call count is stored in the 16-byte area of the caller's
stack frame that is reserved for the callee in the o32 ABI. The value is
initialized in the prologue, and propagated to the tail-callee by skipping
the initialization instructions when emitting the tail call.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-4-johan.almbladh@anyfinetworks.com
This patch implements a workaround for the Loongson-2F nop in generated
code, if the existing option CONFIG_CPU_NOP_WORKAROUND is set. Before,
the binutils option -mfix-loongson2f-nop was enabled, but no workaround
was done when emitting MIPS code. Now, the nop pseudo instruction is
emitted as "or ax,ax,zero" instead of the default "sll zero,zero,0". This
is consistent with the workaround implemented by binutils.
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Link: https://sourceware.org/legacy-ml/binutils/2009-11/msg00387.html
Link: https://lore.kernel.org/bpf/20211005165408.2305108-3-johan.almbladh@anyfinetworks.com
Enable the 'muhu' instruction, complementing the existing 'mulu', needed
to implement a MIPS32 BPF JIT.
Also fix a typo in the existing definition of 'dmulu'.
Signed-off-by: Tony Ambardar <Tony.Ambardar@gmail.com>
Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211005165408.2305108-2-johan.almbladh@anyfinetworks.com
Add a test that validates that btf__add_btf() API is correctly copying
all the types from the source BTF into destination BTF object and
adjusts type IDs and string offsets properly.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211006051107.17921-4-andrii@kernel.org
The next patch will need to reuse the BTF generation logic, which tests
every supported BTF kind, for testing the btf__add_btf() API. So
restructure the existing selftest and make it a single subtest that uses
the bulk VALIDATE_RAW_BTF() macro for raw BTF dump checking.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211006051107.17921-3-andrii@kernel.org
Add a bulk copying API, btf__add_btf(), that speeds up and simplifies
appending entire contents of one BTF object to another one, taking care
of copying BTF type data, adjusting resulting BTF type IDs according to
their new locations in the destination BTF object, as well as copying
and deduplicating all the referenced strings and updating all the string
offsets in new BTF types as appropriate.
This API is intended to be used from tools that are generating and
otherwise manipulating BTFs generically, such as pahole. In pahole's
case, this API is useful for speeding up parallelized BTF encoding, as
it allows pahole to offload all the intricacies of BTF type copying to
libbpf and handle the parallelization aspects of the process.
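A sketch of the intended usage (error handling elided; src stands for a
previously loaded or constructed struct btf *):

  struct btf *dst = btf__new_empty();
  int start_id;

  /* append every type from src; returns the ID assigned to src's
   * first type in dst, or a negative error code
   */
  start_id = btf__add_btf(dst, src);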
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: https://lore.kernel.org/bpf/20211006051107.17921-2-andrii@kernel.org
Instead of unconditionally performing push/pop on %rax/%rdx in case of
division/modulo, we can save a few bytes in case the destination register
is either BPF r0 (%rax) or r3 (%rdx), since the result is written there
anyway.
Also, we do not need to copy the source to %r11 unless the source is either
%rax, %rdx or an immediate.
For example, before the patch:
22: push %rax
23: push %rdx
24: mov %rsi,%r11
27: xor %edx,%edx
29: div %r11
2c: mov %rax,%r11
2f: pop %rdx
30: pop %rax
31: mov %r11,%rax
After:
22: push %rdx
23: xor %edx,%edx
25: div %rsi
28: pop %rdx
Signed-off-by: Jie Meng <jmeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211002035626.2041910-1-jmeng@fb.com
Kumar Kartikeya Dwivedi says:
====================
This set enables kernel module function calls, and also modifies verifier logic
to permit invalid kernel function calls as long as they are pruned as part of
dead code elimination. This is done to provide better runtime portability for
BPF objects, which can conditionally disable parts of code that are pruned later
by the verifier (e.g. const volatile vars, kconfig options). libbpf
modifications are made along with kernel changes to support module function
calls.
It also converts TCP congestion control objects to use the module kfunc support
instead of relying on IS_BUILTIN ifdef.
Changelog:
----------
v6 -> v7
v6: https://lore.kernel.org/bpf/20210930062948.1843919-1-memxor@gmail.com
* Let __bpf_check_kfunc_call take kfunc_btf_id_list instead of generating
callbacks (Andrii)
* Rename it to bpf_check_mod_kfunc_call to reflect usage
* Remove OOM checks (Alexei)
* Remove resolve_btfids invocation for bpf_testmod (Andrii)
* Move fd_array_cnt initialization near fd_array alloc (Andrii)
* Rename helper to btf_find_by_name_kind and pass start_id (Andrii)
* memset when data is NULL in add_data (Alexei)
* Fix other nits
v5 -> v6
v5: https://lore.kernel.org/bpf/20210927145941.1383001-1-memxor@gmail.com
* Rework gen_loader relocation emits
* Only emit bpf_btf_find_by_name_kind call when required (Alexei)
* Refactor code to emit ksym var and func relo into separate helpers, this
will be easier to add future weak/typeless ksym support to (for my followup)
* Count references for both ksym var and funcs, and avoid calling helpers
unless required for both of them. This also means we share fds between
ksym vars for the module BTFs. Also be careful with this when closing
BTF fd so that we only close one instance of the fd for each ksym
v4 -> v5
v4: https://lore.kernel.org/bpf/20210920141526.3940002-1-memxor@gmail.com
* Address comments from Alexei
* Use reserved fd_array area in loader map instead of creating a new map
* Drop selftest testing the 256 kfunc limit, however selftest testing reuse
of BTF fd for same kfunc in gen_loader and libbpf is kept
* Address comments from Andrii
* Make --no-fail the default for resolve_btfids, i.e. only fail if we find
BTF section and cannot process it
* Use obj->btf_modules array to store index in the fd_array, so that we don't
have to do any searching to reuse the index, instead only set it the first
time a module BTF's fd is used
* Make find_ksym_btf_id to return struct module_btf * in last parameter
* Improve logging when index becomes bigger than INT16_MAX
* Add btf__find_by_name_kind_own internal helper to only start searching for
kfunc ID in module BTF, since find_ksym_btf_id already checks vmlinux BTF
before iterating over module BTFs.
* Fix various other nits
* Fixes for failing selftests on BPF CI
* Rearrange/cleanup selftests
* Avoid testing kfunc limit (Alexei)
* Do test gen_loader and libbpf BTF fd index dedup with 256 calls
* Move invalid kfunc failure test to verifier selftest
* Minimize duplication
* Use consistent bpf_<type>_check_kfunc_call naming for module kfunc callback
* Since we try to add fd using add_data while we can, cherry pick Alexei's
patch from CO-RE RFC series to align gen_loader data.
v3 -> v4
v3: https://lore.kernel.org/bpf/20210915050943.679062-1-memxor@gmail.com
* Address comments from Alexei
* Drop MAX_BPF_STACK change, instead move map_fd and BTF fd to BPF array map
and pass fd_array using BPF_PSEUDO_MAP_IDX_VALUE
* Address comments from Andrii
* Fix selftest to store to variable for observing function call instead of
printk and polluting CI logs
* Drop use of raw_tp for testing, instead reuse classifier based prog_test_run
* Drop index + 1 based insn->off convention for kfunc module calls
* Expand selftests to cover more corner cases
* Misc cleanups
v2 -> v3
v2: https://lore.kernel.org/bpf/20210914123750.460750-1-memxor@gmail.com
* Fix issues pointed out by Kernel Test Robot
* Fix find_kfunc_desc to also take offset into consideration when comparing
RFC v1 -> v2
v1: https://lore.kernel.org/bpf/20210830173424.1385796-1-memxor@gmail.com
* Address comments from Alexei
* Reuse fd_array instead of introducing kfunc_btf_fds array
* Take btf and module reference as needed, instead of preloading
* Add BTF_KIND_FUNC relocation support to gen_loader infrastructure
* Address comments from Andrii
* Drop hashmap in libbpf for finding index of existing BTF in fd_array
* Preserve invalid kfunc calls only when the symbol is weak
* Adjust verifier selftests
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
This adds selftests that test the success and failure paths for module
kfuncs (in the presence of invalid kfunc calls) for both libbpf and
gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
add a module BTF ID set from bpf_testmod.
This also introduces a couple of test cases to verifier selftests for
validating whether we get an error or not depending on if invalid kfunc
call remains after elimination of unreachable instructions.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-10-memxor@gmail.com
This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
relocations, with support for weak kfunc relocations. The general idea
is to move map_fds to the loader map, and also use its data for storing
kfunc BTF fds. Since both reuse the fd_array parameter, they need to be
kept together.
For map_fds, we reserve MAX_USED_MAPS slots in a region, and for kfunc,
we reserve MAX_KFUNC_DESCS. This is done so that insn->off has more
chances of being <= INT16_MAX than if we treated the data map as a
sparse array and added fds as needed.
When the MAX_KFUNC_DESCS limit is reached, we fall back to the sparse
array model, so that as long as it does remain <= INT16_MAX, we pass an
index relative to the start of fd_array.
We store all ksyms in an array where we try to avoid calling the
bpf_btf_find_by_name_kind helper, and also reuse the BTF fd that was
already stored. This also speeds up the loading process compared to
emitting calls in all cases, as observed in later tests.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-9-memxor@gmail.com
Preserve these calls, as it allows the verifier to succeed in loading
the program if they are determined to be unreachable after dead code
elimination during program load. If they remain reachable, the verifier
rejects the program at load time. This is done for ext->is_weak symbols,
similar to the case for variable ksyms.
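From the BPF program side, this enables patterns like the following
sketch, with a hypothetical module kfunc guarded by a constant set by
the loader before load:

  const volatile int use_mod_kfunc; /* set from user space pre-load */

  extern void bpf_mod_kfunc(void) __ksym __weak; /* hypothetical kfunc */

  SEC("tc")
  int prog(struct __sk_buff *skb)
  {
          if (use_mod_kfunc)
                  bpf_mod_kfunc(); /* pruned as dead code when zero */
          return 0;
  }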
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-8-memxor@gmail.com
This patch adds libbpf support for kernel module function calls.
The fd_array parameter is used during BPF program load to pass module
BTFs referenced by the program. insn->off is set to index into this
array, but starts from 1, because insn->off as 0 is reserved for
btf_vmlinux.
We try to use existing insn->off for a module, since the kernel limits
the maximum distinct module BTFs for kfuncs to 256, and also because
index must never exceed the maximum allowed value that can fit in
insn->off (INT16_MAX). In the future, if kernel interprets signed offset
as unsigned for kfunc calls, this limit can be increased to UINT16_MAX.
Also introduce a btf__find_by_name_kind_own helper to start searching
from module BTF's start id when we know that the BTF ID is not present
in vmlinux BTF (in find_ksym_btf_id).
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-7-memxor@gmail.com
This commit moves BTF ID lookup into the newly added registration
helper, in a way that the bbr, cubic, and dctcp implementations set up
their sets in the bpf_tcp_ca kfunc_btf_set list, while the ones not
dependent on modules are looked up from the wrapper function.
This lifts the restriction that they be compiled as built-in objects:
they can now be loaded as modules if required. Also modify Makefile.modfinal
to call resolve_btfids for each module.
Note that since kernel kfunc_ids never overlap with module kfunc_ids, we
only match the owner for module btf id sets.
See following commits for background on use of:
CONFIG_X86 ifdef:
569c484f99 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)
CONFIG_DYNAMIC_FTRACE ifdef:
7aae231ac9 (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-6-memxor@gmail.com
This commit allows specifying the base BTF for resolving btf id
lists/sets during link time in the resolve_btfids tool. The base BTF is
set to NULL if no path is passed. This allows resolving BTF ids for
module kernel objects.
Also, drop the --no-fail option, as it is only used in case the .BTF_ids
section is not present; instead, make no-fail the default mode. The long
option name is the same as that of pahole.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-5-memxor@gmail.com
This adds helpers for registering a btf_id_set from modules, and the
bpf_check_mod_kfunc_call callback that can be used to look them up.
With in-kernel sets, the way this is supposed to work is that the
in-kernel callback first looks up within the in-kernel kfunc whitelist,
and then defers to the dynamic BTF set lookup if it doesn't find the BTF
id. If there is no in-kernel BTF id set, this callback can be used
directly.
Also fix includes for btf.h and bpfptr.h so that they can be included in
isolation. This is in preparation for their usage in the tcp_bbr,
tcp_cubic and tcp_dctcp modules in the next patch.
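A sketch of module-side usage; the identifiers follow the
bpf_testmod/prog_test plumbing mentioned elsewhere in this series, but
treat the exact names as illustrative:

  BTF_SET_START(bpf_testmod_kfunc_ids)
  BTF_ID(func, bpf_testmod_test_mod_kfunc)
  BTF_SET_END(bpf_testmod_kfunc_ids)

  DEFINE_KFUNC_BTF_ID_SET(&bpf_testmod_kfunc_ids, bpf_testmod_kfunc_btf_set);

  /* from the module's init and exit callbacks, respectively */
  register_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);
  unregister_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);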
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-4-memxor@gmail.com
This patch also modifies the BPF verifier to only return error for
invalid kfunc calls specially marked by userspace (with insn->imm == 0,
insn->off == 0) after the verifier has eliminated dead instructions.
This can be handled in the fixup stage, and processing can be skipped
during the add and check stages.
If such an invalid call is dropped, the fixup stage will not encounter
insn->imm as 0; otherwise, it bails out and returns an error.
This will be exposed as weak ksym support in libbpf in later patches.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211002011757.311265-3-memxor@gmail.com